Salesforce recently published a case study on this work. Read the full case study here.
We built two Agentforce agents for TASC Outsourcing that handled 5,100+ sales interactions and improved their email response rates from 0.33% to 1.93%. The agents generated 2,194 actionable leads and now run without constant supervision.
Getting there meant solving problems that don’t show up in demos: maintaining context across non-linear conversations, personalizing thousands of emails the way a human sales rep would, and preventing hallucinations when the agent doesn’t know something.
Here’s the technical detail on what it actually took.
Agents aren’t software
The first thing we had to establish with TASC was that building agents is fundamentally different from building traditional software. With normal SaaS, requirements are clear. You build a button that filters by date, you test it, and it either works or it doesn’t. Testing is deterministic.
With agents, requirements are fuzzy. “Help sales reps find qualified leads conversationally” sounds straightforward until you realize the agent sometimes personalizes well and sometimes misses context. You’re not fixing bugs anymore, you’re optimizing probabilities. This meant both teams had to think differently about discovery. Instead of asking “what features do you need?” we asked “what job are your reps actually trying to get done?”
When TASC said they wanted the agent to book meetings, we had to dig deeper. Which leads actually deserve a meeting? What questions indicate readiness? When shouldn’t we book at all? This led us to build a three-tier response system instead of a simple Q&A agent.
The ZoomInfo agent
The core challenge with the ZoomInfo agent was that conversations don’t follow a fixed path from start to finish. A user might start with company search, find contacts, then create leads. Or they might start with news scoops, filter companies, refine their criteria, find contacts, and then create leads. Sometimes they start with contact search, realize they need to filter companies first, and go back.
The agent had to translate natural language queries into executable workflows while keeping track of past queries for continuity. It needed to allow additional filtering at any step and never lose context when users said things like “from these companies” or “for them.”
We built a workflow orchestrator that generates a directed graph of search steps and maintains conversation context across multi-turn searches. When someone says “biotech companies in Berlin,” the agent finds 25 companies and asks if they want to filter by size, funding, or something else. When they respond with “from these companies, show only those with 500+ employees,” the agent uses stored context to filter down to 8 companies. If they then say “find me CTOs of FooBar,” the agent understands that FooBar is one of the 8 companies from the previous result.
The agent receives three strict inputs: the user query (never modified), conversation context (all previous queries), and the previous workflow created. This structure prevents the context loss that typically breaks multi-turn conversations.
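The flow above can be sketched in a few lines of Python. This is a toy illustration, not the production implementation: an in-memory list stands in for the ZoomInfo API, and a predicate function stands in for the LLM’s query parsing. The point is the context rule: referential phrases scope the new step to the previous result set, and every user query is stored verbatim.

```python
# Toy dataset standing in for the ZoomInfo search API (illustrative only).
COMPANIES = [
    {"name": "BioCell", "city": "Berlin", "industry": "biotech", "employees": 800},
    {"name": "GeneWorks", "city": "Berlin", "industry": "biotech", "employees": 120},
    {"name": "FinCore", "city": "Berlin", "industry": "fintech", "employees": 900},
]

class Orchestrator:
    def __init__(self):
        self.queries = []        # every previous user query, stored verbatim
        self.last_results = []   # result set from the most recent step

    def handle(self, query, predicate):
        # Referential phrases ("from these", "for them") scope this step
        # to the previous results instead of starting a fresh search.
        scoped = any(p in query.lower() for p in ("from these", "for them"))
        pool = self.last_results if scoped else COMPANIES
        results = [c for c in pool if predicate(c)]
        self.queries.append(query)       # the user query is never modified
        self.last_results = results
        return results

orch = Orchestrator()
step1 = orch.handle("biotech companies in Berlin",
                    lambda c: c["industry"] == "biotech" and c["city"] == "Berlin")
step2 = orch.handle("from these companies, show only those with 500+ employees",
                    lambda c: c["employees"] >= 500)
# step2 contains only BioCell: the filter ran against step1's results.
```

In production the predicate comes from the LLM translating the natural-language query into filters, but the scoping logic is the part that keeps “from these companies” from silently restarting the search.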
The SDR agent
The trickiest part of the SDR agent was personalizing 5,100+ emails when data quality varied wildly. Sometimes you have just a first name. Sometimes you have persona plus industry plus service interest. Sometimes you’re missing the company name entirely.
Sending “Dear [FIRSTNAME], I noticed [COMPANY] might benefit from our services” doesn’t work. Placeholders leak through, messaging stays generic, and response rates stay low. What actually worked was using JSON-based structures that force the LLM to reason before generating anything.
Before writing any email, the agent creates a structured analysis. It identifies what lead information is present, maps the persona to knowledge base personas like CFO or HR Manager, maps industry to relevant client examples, retrieves appropriate pain points and service focus, then generates the email using only available, validated information.
When we have rich data (persona, industry, and services), the subject line includes first name plus specific value prop. The opening addresses that persona’s specific pain points. The body includes industry-specific social proof from client examples. Service recommendations are tailored to both persona and industry.
When we have moderate data (persona or industry, but not both), we use first name with general value prop. We address general pain points for that persona and focus on relevant services without industry specifics. We skip the client examples.
When we have minimal data (just a name or less), the subject focuses on value and uses the name if available. We open with general challenges the company solves, use bullet structure covering multiple pain points, and avoid making any persona or industry assumptions.
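The tiering above reduces to a small routing function. This is a minimal sketch under assumed field names (`persona`, `industry`, and so on), not TASC’s actual schema; it shows the structured analysis the LLM fills in before any email text is generated, so only validated fields reach the final prompt.

```python
def personalization_tier(lead: dict) -> str:
    # Tier labels are illustrative; the field names are assumptions.
    has_persona = bool(lead.get("persona"))
    has_industry = bool(lead.get("industry"))
    if has_persona and has_industry:
        return "rich"       # persona pain points + industry social proof
    if has_persona or has_industry:
        return "moderate"   # general value prop, no client examples
    return "minimal"        # name-only: broad opener, bullet structure

def build_analysis(lead: dict) -> dict:
    # The structured reasoning step produced *before* generation: which
    # fields exist, which tier applies, whether client examples are allowed.
    tier = personalization_tier(lead)
    return {
        "tier": tier,
        "available_fields": sorted(k for k, v in lead.items() if v),
        "use_client_examples": tier == "rich",
    }

analysis = build_analysis({"first_name": "Maya", "persona": "CFO", "industry": None})
# → moderate tier: persona present, industry missing, no client examples.
```

Forcing the model to emit this analysis as JSON first, then generate only from its contents, is what keeps placeholders and invented details out of the final email.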
This approach took TASC from 0.33% response rates to 1.93% using the same lead database with different personalization logic.
The Q&A agent
TASC had specific requirements about what the agent should and shouldn’t answer. They never wanted to book meetings for internal operations questions or out-of-scope inquiries. They did want to book meetings for pricing questions, service questions, and implementation questions when the prospect actually requested it. The challenge was ensuring the agent stayed grounded on real information, never hallucinated, and handled handoffs gracefully.
We built a priority-based response system with three tiers. Priority one covers standard questions with pre-defined answers. Group A never books meetings (out-of-scope questions get deflected gracefully). Group B can book meetings if the prospect requests it (pricing, services, implementation).
Priority two handles knowledge synthesis, but only if specificity matches. If someone asks about general service information and our knowledge contains general service description, the specificity check passes and we synthesize an answer. If someone asks about specific guarantee periods or SLAs but our knowledge only has general information, the specificity check fails and we hand off to a sales rep.
The absolute knowledge restriction rule was critical: base answers exclusively on text within provided knowledge sections. No external info, no general knowledge, no assumptions, no inferences. If the info isn’t explicitly present, you must use the handoff response. This prevents hallucinations and maintains trust.
Before responding, the agent generates a three-phase analysis covering intent classification, source mapping, and response planning. It decides if meeting booking logic applies, finalizes the response type (standard, knowledge synthesis, or handoff), and validates the response against grounding rules.
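The routing can be sketched as a decision function. The categories, answers, and the boolean specificity check here are simplified stand-ins (the real system classifies intent with an LLM), but the priority order and the grounded-handoff fallback match the behavior described above.

```python
STANDARD = {
    # Group A: answered, but never triggers a meeting
    "internal_ops": ("That's outside what I can discuss here.", False),
    # Group B: answered, may book a meeting if the prospect asks
    "pricing": ("Pricing depends on scope and headcount.", True),
}

KNOWLEDGE = {"services_general": "TASC provides outsourced staffing services."}

def respond(category: str, needs_specifics: bool, asked_for_meeting: bool) -> dict:
    # Priority 1: pre-defined answers, with per-group booking rules.
    if category in STANDARD:
        answer, bookable = STANDARD[category]
        return {"type": "standard", "answer": answer,
                "book_meeting": bookable and asked_for_meeting}
    # Priority 2: synthesize from knowledge only if specificity matches.
    if category in KNOWLEDGE and not needs_specifics:
        return {"type": "synthesis", "answer": KNOWLEDGE[category],
                "book_meeting": False}
    # Otherwise: hand off to a human rather than guess.
    return {"type": "handoff",
            "answer": "Let me connect you with a sales rep.",
            "book_meeting": False}
```

A question about specific SLAs (`needs_specifics=True`) falls through both tiers and hands off, which is exactly the anti-hallucination behavior the restriction rule enforces.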
Strategic friction that saves money
ZoomInfo enrichment isn’t cheap, and auto-enriching every contact the agent finds burns through credits fast. When the agent surfaces 150 contacts across 50 companies, enriching all of them without thinking adds up quickly.
We built custom UI that lets users review contacts with their associated companies and select which specific contacts to enrich. This makes enrichment a strategic choice instead of an automatic expense. The system checks for duplicates using fuzzy matching where “Acme Corp” matches “Acme Corporation” which matches “Acme Co” across both ZoomInfo records and existing Salesforce leads.
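The “Acme Corp” / “Acme Corporation” / “Acme Co” matching comes down to stripping legal-entity suffixes before comparing. A minimal sketch using Python’s standard library (the suffix list and similarity threshold are illustrative assumptions, not the production values):

```python
import difflib
import re

# Legal-entity suffixes to ignore when comparing company names (illustrative).
SUFFIXES = {"corporation", "corp", "co", "inc", "ltd", "llc"}

def normalize(name: str) -> str:
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    # Drop trailing suffixes so "Acme Corp" and "Acme Corporation" both
    # normalize to "acme".
    while tokens and tokens[-1] in SUFFIXES:
        tokens.pop()
    return " ".join(tokens)

def is_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    # Fuzzy-compare the normalized names; near-identical cores count
    # as duplicates across ZoomInfo records and existing Salesforce leads.
    na, nb = normalize(a), normalize(b)
    return difflib.SequenceMatcher(None, na, nb).ratio() >= threshold
```

Running every candidate through this check before the enrichment step means users never spend credits re-enriching a contact that already exists under a slightly different company spelling.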
The friction is intentional. We want users to make informed decisions about enrichment costs instead of blindly selecting all results. TASC reduced their enrichment costs by 93% while maintaining lead quality.
What actually took time
The real work was getting prompts that worked reliably on the 5,000th interaction, not just the first five. We had to make sure the agent filtered leads correctly every single time, not 95% of the time, because inconsistency compounds into chaos at scale.
Translating TASC’s sales SOPs (which were written for humans who understand context) into structures an AI could follow reliably required constant iteration.
“Building agents is fundamentally different from building traditional software. With traditional SaaS, requirements are clear and testing is deterministic. With agents, requirements are fuzzy and testing is probabilistic. You’re optimizing probabilities, not fixing bugs.”
— Aniket Hendre (Lead Agent Developer)
The technical architecture was straightforward. Making it reliable enough that TASC’s sales team trusted it with 5,100+ real prospect interactions took months of refinement. That’s what separates production systems from impressive demos.
Superhuman quality at superhuman speed
We’ve deployed agents across sales, service, and operations for clients handling everything from lead generation to customer support.
The technical patterns we built for TASC (workflow orchestration, grounded responses, strategic friction in UI) apply across different use cases, but every implementation requires deep collaboration to translate business processes into agent logic.
The Salesforce case study on TASC’s Agentforce deployment highlights how we handled these requirements. Sudheer Noohu, TASC’s Group Head of Technology, noted that “realfast.ai supplied the specialised AI knowledge and expertise needed to bring Agentforce to life.”
TASC is now expanding their Agentforce usage beyond sales, replacing all live chat on their website with AI agents that qualify prospects and screen out job seekers before feeding clean leads to Salesforce.
They’re also building an AI recruiter to autonomously source and shortlist candidates based on uploaded job descriptions. The agents we built are now being refined based on user feedback, with sales users requesting more flexibility to personalize outreach emails.
If you’re evaluating Agentforce for your sales or service operations, we can help you understand what it takes to deploy agents that work reliably at scale. We handle complex Agentforce implementations where the technical details matter.