{"id":114,"date":"2025-08-01T19:07:41","date_gmt":"2025-08-01T19:07:41","guid":{"rendered":"https:\/\/aiinfrahub.com\/about-us\/?p=114"},"modified":"2025-08-01T19:07:41","modified_gmt":"2025-08-01T19:07:41","slug":"building-intelligent-agent-workflows-with-llamaindex-from-basics-to-advanced-patterns","status":"publish","type":"post","link":"https:\/\/aiinfrahub.com\/about-us\/building-intelligent-agent-workflows-with-llamaindex-from-basics-to-advanced-patterns\/","title":{"rendered":"Building Intelligent Agent Workflows with LlamaIndex: From Basics to Advanced Patterns"},"content":{"rendered":"\n<p class=\"has-text-align-left\">In this blog, we&#8217;ll explore how to design powerful and flexible multi-agent workflows using the ll<code>ama_index<\/code> framework. From basic sequential flows to advanced branching, loops, parallelism, and LLM-powered agents, you&#8217;ll learn how to model real-world software development pipelines as executable, traceable workflows.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Lets start with &#8220;Hello World&#8221; workflow<\/h2>\n\n\n\n<p>A very basic workflow class where it will receive the StartEvent and emits the StopEvent.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>class AgentDocWorkflow(Workflow):\n    @step\n    async def my_step(self, ev: StartEvent) -> StopEvent:\n        \"\"\"\n        This is a sample step in the workflow.\n        \"\"\"\n        return StopEvent(result=\"Hello World\")<\/code><\/pre>\n\n\n\n<p>Instantiate and run it. 
Now instantiate the workflow and wait for it to complete using the <code>await<\/code> keyword.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>basic_workflow = AgentDocWorkflow(timeout=10, verbose=False)\nresult = await basic_workflow.run()\nprint(result)<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"707\" height=\"289\" src=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image.png\" alt=\"\" class=\"wp-image-116\" srcset=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image.png 707w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-300x123.png 300w\" sizes=\"auto, (max-width: 707px) 100vw, 707px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Sequential Multi-Step Workflow<\/h2>\n\n\n\n<p>Let\u2019s build a linear sequence: developer \u2192 tester \u2192 deployer.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>class DeveloperWorkflow(Workflow):\n    @step\n    async def developer(self, ev: StartEvent) -> Code:\n        return Code(code_output=\"Code completed successfully\")\n\n    @step\n    async def tester(self, ev: Code) -> Test:\n        return Test(test_output=\"Test completed successfully\")\n\n    @step\n    async def deployer(self, ev: Test) -> StopEvent:\n        return StopEvent(result=\"Deployment completed successfully\")<\/code><\/pre>\n\n\n\n<p>Visualization:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"707\" height=\"289\" src=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-1.png\" alt=\"\" class=\"wp-image-118\" srcset=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-1.png 707w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-1-300x123.png 300w\" sizes=\"auto, (max-width: 707px) 100vw, 707px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Iterative Feedback Loop (e.g., Code 
Review Cycles)<\/h2>\n\n\n\n<p>Create the event classes that the steps accept and emit.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>class AgentLoopWorkflow(Workflow):\n    def __init__(self, timeout=10, verbose=False):\n        super().__init__(timeout=timeout, verbose=verbose)  # pass settings to the parent constructor\n        self.iteration_flag = 0\n\n\n    @step\n    async def developer(self, ev: StartEvent | Review) -> Code:\n             ..........\n\n            return Code(code_output=\"Code changed as per review comments \")\n            \n\n    @step\n    async def reviewer(self, ev: Code) -> Review | Test:\n        \n        if self.iteration_flag == 1:\n            print(ev.code_output)\n            return Review(review_output=\"Address the review comments \")   \n        else:\n            print(ev.code_output)\n            return Test(test_output=\"Review completed successfully\")\n        \n\n    @step\n    async def tester(self, ev: Test) -> StopEvent:\n        ........................\n        return StopEvent(result=\"Test completed successfully\")<\/code><\/pre>\n\n\n\n<p>Don&#8217;t be thrown by the <strong>iteration_flag<\/strong>; it is introduced only to create the review loop.<\/p>\n\n\n\n<p>Depending on the iteration, the loop either continues with feedback or progresses to testing.<\/p>\n\n\n\n<p>Visualization:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"856\" height=\"318\" src=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-2.png\" alt=\"\" class=\"wp-image-119\" srcset=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-2.png 856w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-2-300x111.png 300w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-2-768x285.png 768w\" sizes=\"auto, (max-width: 856px) 100vw, 856px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 
class=\"wp-block-heading\">Branching Workflows: Parallel Architect Paths<\/h2>\n\n\n\n<p>What if your process starts with either a software architect or test architect?<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import random\nclass AgentBranchWorkflow(Workflow):\n\n    @step\n    async def start(self, ev: StartEvent) -> SoftwareArchitect | TestArchitect:\n\n        if condtion:\n            return SoftwareArchitect(design_output=\"Software architecture designed\")\n        else:\n            return TestArchitect(test_plan=\"Test plan created\")\n\n    @step\n    async def software_architect(self, ev: SoftwareArchitect) -> Developer:\n        return Developer(code_output=\"Get the Software architecture\")\n\n    @step\n    async def developer(self, ev: Developer) -> StopEvent:\n        return StopEvent(result=\"Code developed based on architecture\")\n\n    @step\n    async def test_architect(self, ev: TestArchitect) -> Tester:\n        return Tester(test_result=\"Get the test plan\")\n\n    @step\n    async def tester(self, ev: Tester) -> StopEvent:\n        return StopEvent(result=\"Testing completed successfully\")<\/code><\/pre>\n\n\n\n<p>Visualization:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"976\" height=\"492\" src=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-3.png\" alt=\"\" class=\"wp-image-120\" srcset=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-3.png 976w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-3-300x151.png 300w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-3-768x387.png 768w\" sizes=\"auto, (max-width: 976px) 100vw, 976px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Parallel Processing with <code>Context.send_event<\/code><br><\/h2>\n\n\n\n<p>For concurrent thread-like execution:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import asyncio\n\nclass 
Thread(Event):\n    query: str\n\nclass ParallelWorkflow(Workflow):\n    @step\n    async def start(self, ctx: Context, ev: StartEvent) -> Thread:\n        ctx.send_event(Thread(query=\"Query for parallel processing 1\"))\n        ctx.send_event(Thread(query=\"Query for parallel processing 2\"))\n        ctx.send_event(Thread(query=\"Query for parallel processing 3\"))\n<\/code><\/pre>\n\n\n\n<p>Workers process them concurrently:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n    @step(num_workers=4)\n    async def process_thread(self, ctx: Context, ev: Thread) -> StopEvent:\n        await asyncio.sleep(random.randint(1, 5))  # Simulate some processing time\n        return StopEvent(result=f\"Processed thread with query: {ev.query}\")<\/code><\/pre>\n\n\n\n<p>Visualization:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"891\" height=\"393\" src=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-4.png\" alt=\"\" class=\"wp-image-121\" srcset=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-4.png 891w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-4-300x132.png 300w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-4-768x339.png 768w\" sizes=\"auto, (max-width: 891px) 100vw, 891px\" \/><\/figure>\n\n\n\n<p>This can be extended with <code>ctx.collect_events<\/code> to wait for all the events to be received before ending the workflow. 
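<\/p>\n\n\n\n<p>For that, the workers emit a custom <code>CollectorThread<\/code> event instead of a <code>StopEvent<\/code>, so a collector step can gather the results. A minimal sketch of the assumed event and reworked worker (the class name <code>CollectingParallelWorkflow<\/code> is hypothetical):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>class CollectorThread(Event):\n    result: str\n\nclass CollectingParallelWorkflow(Workflow):\n    # start step as in ParallelWorkflow above, emitting three Thread events\n\n    @step(num_workers=4)\n    async def process_thread(self, ctx: Context, ev: Thread) -> CollectorThread:\n        # Emit a CollectorThread instead of ending the workflow directly\n        return CollectorThread(result=f\"Processed thread with query: {ev.query}\")<\/code><\/pre>\n\n\n\n<p>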
<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>    @step\n    async def collect_results(self, ctx: Context, ev: CollectorThread) -> StopEvent:\n        #wait for all events to be collected\n        result = ctx.collect_events(ev, &#91;CollectorThread] * 3)\n        if result is None:\n            print(\"Not all events collected yet\")\n            return None\n        \n        print(result)\n        return StopEvent(result=\"Done\")<\/code><\/pre>\n\n\n\n<p>The collection of parallel output guarantees that the workflow only proceeds once all threads complete.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Concurrent Workflows with Multiple Event Types<\/h2>\n\n\n\n<p>Use case: development, testing, and certification occur independently but must finish before delivery.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">class ConcurrentWorkflow_DiffEventTypes(Workflow):<br>    @step<br>    async def start(self, ctx: Context, ev: StartEvent) -> Development | Testing| Certification:<br>        ctx.send_event(Development(query=\"Query for development\"))<br>        ctx.send_event(Testing(query=\"Query for testing\"))<br>        ctx.send_event(Certification(query=\"Query for certification\"))<br><br>    @step<br>    async def process_development(self, ctx: Context, ev: Development) -> DevelopmentComplete:<br>        return DevelopmentComplete(result=ev.query)<br><br>    @step<br>    async def process_testing(self, ctx: Context, ev: Testing) -> TestingComplete:<br>        return TestingComplete(result=ev.query) <br>    <br>    @step<br>    async def process_certification(self, ctx: Context, ev: Certification) -> CertificationComplete:<br>        return CertificationComplete(result=ev.query)<br>    <br>    <br>    @step<br>    async def Event_Collector(<br>        self,<br>        ctx: Context,<br>        ev: DevelopmentComplete | TestingComplete | CertificationComplete<br>        ) -> StopEvent:<br><br>        events = ctx.collect_events(ev, 
[CertificationComplete, TestingComplete, DevelopmentComplete])<br>        if events is None:<br>            print(\"Not all events collected yet\")<br>            return None <br>        <br>        print(\"All events collected:\", events)<br>        return StopEvent(result=\"Done\")<\/pre>\n\n\n\n<p>This showcases dependency resolution across heterogeneous paths: <code>collect_events<\/code> returns the events in the order listed in its expected-events argument, regardless of the order in which they were emitted.<\/p>\n\n\n\n<p>Visualization:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1013\" height=\"475\" src=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-5.png\" alt=\"\" class=\"wp-image-122\" srcset=\"https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-5.png 1013w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-5-300x141.png 300w, https:\/\/aiinfrahub.com\/wp-content\/uploads\/2025\/08\/image-5-768x360.png 768w\" sizes=\"auto, (max-width: 1013px) 100vw, 1013px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">OpenAI Integration: LLM Inside a Workflow<\/h2>\n\n\n\n<p>The provided code defines a three-step asynchronous workflow using the <code>llama_index<\/code> framework, where each step represents a stage in a process. <\/p>\n\n\n\n<p>It begins by signaling progress with a <code>ProgressEvent<\/code>, then uses OpenAI\u2019s <code>gpt-4o-mini<\/code> model to stream a response about the Taj Mahal, emitting each token as a <code>TextEvent<\/code> in real-time. <\/p>\n\n\n\n<p>Finally, it concludes the workflow with a completion message. 
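<\/p>\n\n\n\n<p>The streaming example relies on a few custom events; their field names are taken directly from the snippet, so a minimal sketch of the assumed definitions is:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>class ProgressEvent(Event):\n    msg: str\n\nclass FirstEvent(Event):\n    first_output: str\n\nclass SecondEvent(Event):\n    second_output: str\n    response: str\n\nclass TextEvent(Event):\n    delta: str<\/code><\/pre>\n\n\n\n<p>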
The workflow supports event streaming, allowing live feedback and progress tracking, making it suitable for interactive or UI-driven applications involving LLMs.<\/p>\n\n\n\n<p>It also shows the user that the LLM is working by streaming intermediate output, thereby improving the user experience.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>class MyWorkflow(Workflow):\n    @step\n    async def step_one(self, ctx: Context, ev: StartEvent) -> FirstEvent:\n        ctx.write_event_to_stream(ProgressEvent(msg=\"Step one is happening\"))\n        return FirstEvent(first_output=\"First step complete.\")\n\n    @step\n    async def step_two(self, ctx: Context, ev: FirstEvent) -> SecondEvent:\n        llm = OpenAI(model=\"gpt-4o-mini\", api_key=api_key) \n        generator = await llm.astream_complete(\n            \"Please give me the first 50 words about Taj Mahal, a monument in India.\"  # Example prompt\n        )\n        async for response in generator:\n            ctx.write_event_to_stream(TextEvent(delta=response.delta))\n        return SecondEvent(\n            second_output=\"Second step complete, full response attached\",\n            response=str(response),\n        )\n\n    @step\n    async def step_three(self, ctx: Context, ev: SecondEvent) -> StopEvent:\n        ctx.write_event_to_stream(ProgressEvent(msg=\"Step three is happening\"))\n        return StopEvent(result=\"Workflow complete.\")<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>workflow = MyWorkflow(timeout=30, verbose=False)\nhandler = workflow.run(first_input=\"Start the workflow.\")\n\nasync for ev in handler.stream_events():\n    if isinstance(ev, ProgressEvent):\n        print(ev.msg)\n    if isinstance(ev, TextEvent):\n        print(ev.delta, end=\"\")\n\nfinal_result = await handler\nprint(\"Final result = \", final_result)<\/code><\/pre>\n\n\n\n<p>Output:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Step one is happening\nThe Taj Mahal, located in Agra, India, is an iconic mausoleum 
built by Mughal Emperor Shah Jahan in memory of his beloved wife, Mumtaz Mahal. Completed in 1653, it showcases exquisite white marble architecture, intricate carvings, and beautiful gardens, symbolizing love and devotion. It is a UNESCO World Heritage Site.Step three is happening\nFinal result =  Workflow complete.<\/code><\/pre>\n\n\n\n<p>This simulates a live LLM-driven response stream during the workflow.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Workflows built with <code>llama_index<\/code> offer modular, async, and declarative structures to model almost any multi-agent process. Whether you&#8217;re creating a dev-test-deploy pipeline or orchestrating LLM interactions, this approach gives you full control and visibility into every step.<\/p>\n\n\n\n<p>Stay tuned for future deep dives on memory integration, agent coordination, and RAG-enhanced workflows!<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Links<\/h2>\n\n\n\n<p>For the full code, see the Jupyter notebook below.<\/p>\n\n\n<a class=\"wp-block-read-more\" href=\"https:\/\/aiinfrahub.com\/about-us\/building-intelligent-agent-workflows-with-llamaindex-from-basics-to-advanced-patterns\/\" target=\"_self\"><code>https:\/\/github.com\/juggarnautss\/Event_Driven_Agent_Doc_Workflow\/blob\/main\/agent_doc_workflow.ipynb<\/code><span class=\"screen-reader-text\">: Building Intelligent Agent Workflows with LlamaIndex: From Basics to Advanced Patterns<\/span><\/a>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this blog, we&#8217;ll explore how to design powerful and flexible multi-agent workflows using the llama_index framework. From basic sequential flows to advanced branching, loops, parallelism, and LLM-powered agents, you&#8217;ll learn how to model real-world software development pipelines as executable, traceable workflows. 
Lets start with &#8220;Hello World&#8221; workflow A very basic workflow class where it &#8230; <a title=\"Building Intelligent Agent Workflows with LlamaIndex: From Basics to Advanced Patterns\" class=\"read-more\" href=\"https:\/\/aiinfrahub.com\/about-us\/building-intelligent-agent-workflows-with-llamaindex-from-basics-to-advanced-patterns\/\" aria-label=\"Read more about Building Intelligent Agent Workflows with LlamaIndex: From Basics to Advanced Patterns\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":117,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[7],"class_list":["post-114","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-agenticai","tag-generativeai-multiagentsystems-llmworkflows-aiengineering-openai-llamaindex"],"_links":{"self":[{"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/posts\/114","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/comments?post=114"}],"version-history":[{"count":9,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/posts\/114\/revisions"}],"predecessor-version":[{"id":130,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/posts\/114\/revisions\/130"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/media\/117"}],"wp:attachment":[{"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/media?parent=114"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/categories?post=114"},{
"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aiinfrahub.com\/about-us\/wp-json\/wp\/v2\/tags?post=114"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}