Async work is where the juice is.

Agents that can do work asynchronously will be crucial to make GenAI scalable and viable for enterprise workloads.

Sidu Ponnappa

Sep 24, 2024

"I don’t think large async loops work. The bottleneck is reviewing output and iteration speed. If the agent messes up after 1 hour, now I gotta wait another hour just to give it a tiny nudge. Symbiotic inline tackling smaller problems with rapid iteration works better."

It's a valid concern. The risk of long, asynchronous loops is that if the agent goes off track, it can waste a lot of time before the error is caught and corrected.

However, I would argue that this is more a question of implementation than a fundamental flaw in the async model itself.

If you can break a single, long-running task down into discrete chunks, with programmatic error correction built in at every step, you can delegate long async work. In theory, there's no limit to the number of steps an agent can handle on the way to producing an artefact.
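As a minimal sketch of what that chunking might look like: each step runs, gets checked by ordinary code, and is retried with feedback before the pipeline moves on, so an error surfaces at the step that caused it rather than an hour later. Everything here (`call_llm`, the step definitions, the checks) is a hypothetical illustration, not a real API.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call in this sketch.
    return f"result for: {prompt}"

def run_step(prompt, check, max_retries=3):
    """Run one chunk, retrying with feedback when the check fails."""
    for _ in range(max_retries):
        output = call_llm(prompt)
        ok, feedback = check(output)
        if ok:
            return output
        # Fold the failure back into the prompt immediately,
        # instead of waiting for a human to notice and nudge.
        prompt = f"{prompt}\nPrevious attempt failed: {feedback}"
    raise RuntimeError(f"step failed after {max_retries} attempts")

def run_pipeline(steps):
    """Execute chunks in sequence; a failure is caught at its own step."""
    return [run_step(prompt, check) for prompt, check in steps]

steps = [
    ("draft an outline", lambda out: ("result" in out, "empty output")),
    ("write section 1", lambda out: ("result" in out, "empty output")),
]
results = run_pipeline(steps)
```

The design choice that matters is that the check and the retry loop are deterministic code, so the feedback cycle runs at machine speed regardless of how long the overall task takes.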

As agents become more sophisticated and reliable, the need for tight, iterative feedback diminishes. You codify more of the error detection and resolution in traditional code layered on top of LLM reasoning.
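One concrete way to layer traditional code on top of LLM reasoning is to demand structured output and validate it deterministically before accepting it. The schema and field names below are illustrative assumptions; the point is that plain code, not another model call, decides whether the output passes.

```python
import json

# Hypothetical example: the agent is asked to emit a JSON task record,
# and ordinary validation code gates it.
REQUIRED_FIELDS = {"title": str, "priority": int}

def validate(raw: str):
    """Return (parsed, None) on success, or (None, error message) on failure."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"invalid JSON: {exc}"
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            return None, f"missing field: {field}"
        if not isinstance(data[field], ftype):
            return None, f"{field} must be {ftype.__name__}"
    return data, None

good, err = validate('{"title": "Refactor billing", "priority": 2}')
bad, bad_err = validate('{"title": "Refactor billing"}')
```

An error message like `missing field: priority` can be fed straight back into the agent's next attempt, which is exactly the kind of resolution mechanism that replaces a human check-in.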

Think about it like delegation in a human team.

When you first start working with a new team member, you might need to give them very detailed instructions and check in frequently to make sure they're on the right track. But as they gain experience and prove their reliability, you can give them more autonomy and trust them to handle bigger pieces of work independently.

The same principle applies to agents. As they become more capable of handling edge cases, you can delegate larger, longer-running tasks to them without constant babysitting.

Now, that said, I do agree that there's value in symbiotic, inline collaboration for certain types of tasks. If you're trying to solve a novel problem or explore a creative idea, the rapid back-and-forth of a real-time collaboration can be really valuable.

But I don't think it has to be an either/or proposition.

In practice, I suspect most users will employ a mix of async delegation and inline collaboration, depending on the nature and complexity of the task at hand.

For quick, well-defined tasks, async delegation will be the most efficient model. You tell your agent what you need, and it goes off and gets it done, freeing you up to focus on other things.

Ultimately, I believe the async model is a critical piece of the enterprise viability puzzle. It's what will allow us to scale these systems beyond simple chatbots and scripted workflows, and to leverage the reasoning capability added by LLMs to amplify and augment human capabilities.

Can't expect compounding returns on AI without agents doing async work. And they can do async work. Better than you think.

It all depends on the engineering.