In this tutorial, we dive into the essence of Agentic AI by uniting LangChain, AutoGen, and Hugging Face into a single, fully functional framework that runs without paid APIs. We begin by setting up a lightweight open-source pipeline and then progress through structured reasoning, multi-step workflows, and collaborative agent interactions. As we move from LangChain chains to simulated multi-agent systems, we experience how reasoning, planning, and execution can seamlessly blend to form autonomous, intelligent behavior, entirely within our control and environment. Check out the FULL CODES here.
import warnings
warnings.filterwarnings('ignore')

from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json

print("Loading models...\n")

# Local FLAN-T5 model served through a Hugging Face pipeline; no API keys required.
pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",
    max_length=200,
    temperature=0.7
)
llm = HuggingFacePipeline(pipeline=pipe)

print("Models loaded!\n")
We start by setting up our environment and bringing in all the necessary libraries. We initialize a Hugging Face FLAN-T5 pipeline as our local language model, ensuring it can generate coherent, contextually rich text. We confirm that everything loads successfully, laying the groundwork for the agentic experiments that follow. Check out the FULL CODES here.
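Before building chains, we can optionally sanity-check the local model by prompting the pipeline directly. The snippet below is a minimal illustrative check, not part of the original script; it assumes only the pipe object defined above.

# Optional sanity check: call the text2text-generation pipeline directly.
# The pipeline returns a list of dicts, each with a 'generated_text' key.
test_output = pipe("Summarize in one sentence: LangChain structures prompts for language models.", max_length=60)
print(test_output[0]["generated_text"])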
def demo_langchain_basics():
    print("="*70)
    print("DEMO 1: LangChain - Intelligent Prompt Chains")
    print("="*70 + "\n")
    prompt = PromptTemplate(
        input_variables=["task"],
        template="Task: {task}\n\nProvide a detailed step-by-step solution:"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    task = "Create a Python function to calculate fibonacci sequence"
    print(f"Task: {task}\n")
    result = chain.run(task=task)
    print(f"LangChain Response:\n{result}\n")
    print("✓ LangChain demo complete\n")


def demo_langchain_multi_step():
    print("="*70)
    print("DEMO 2: LangChain - Multi-Step Reasoning")
    print("="*70 + "\n")
    planner = PromptTemplate(
        input_variables=["goal"],
        template="Break down this goal into 3 steps: {goal}"
    )
    executor = PromptTemplate(
        input_variables=["step"],
        template="Explain how to execute this step: {step}"
    )
    plan_chain = LLMChain(llm=llm, prompt=planner)
    exec_chain = LLMChain(llm=llm, prompt=executor)
    goal = "Build a machine learning model"
    print(f"Goal: {goal}\n")
    plan = plan_chain.run(goal=goal)
    print(f"Plan:\n{plan}\n")
    print("Executing first step...")
    execution = exec_chain.run(step="Collect and prepare data")
    print(f"Execution:\n{execution}\n")
    print("✓ Multi-step reasoning complete\n")
We explore LangChain's capabilities by constructing intelligent prompt templates that allow our model to reason through tasks. We build both a simple one-step chain and a multi-step reasoning flow that breaks complex goals into clear subtasks. We observe how LangChain enables structured thinking and turns plain instructions into detailed, actionable responses. Check out the FULL CODES here.
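If we want the planner's output to feed the executor automatically rather than hard-coding the first step, LangChain's SimpleSequentialChain offers one way to do it. The snippet below is an illustrative sketch that assumes the plan_chain and exec_chain objects from the demo above are in scope (and the same LangChain version that provides LLMChain).

from langchain.chains import SimpleSequentialChain

# Illustrative sketch: chain the planner and executor so that the plan text
# produced by plan_chain is passed straight into exec_chain as its input.
auto_chain = SimpleSequentialChain(chains=[plan_chain, exec_chain], verbose=True)
print(auto_chain.run("Build a machine learning model"))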
class SimpleAgent:
    """A minimal role-based agent that wraps the shared Hugging Face pipeline and keeps a simple memory."""

    def __init__(self, name: str, role: str, llm_pipeline):
        self.name = name
        self.role = role
        self.pipe = llm_pipeline
        self.memory = []

    def process(self, message: str) -> str:
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]['generated_text']
        self.memory.append({"user": message, "agent": response})
        return response

    def __repr__(self):
        return f"Agent({self.name}, role={self.role})"


def demo_simple_agents():
    print("="*70)
    print("DEMO 3: Simple Multi-Agent System")
    print("="*70 + "\n")
    researcher = SimpleAgent("Researcher", "research specialist", pipe)
    coder = SimpleAgent("Coder", "Python developer", pipe)
    reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
    print("Agents created:", researcher, coder, reviewer, "\n")
    task = "Create a function to sort a list"
    print(f"Task: {task}\n")
    print(f"[{researcher.name}] Researching...")
    research = researcher.process(f"What's the best approach to: {task}")
    print(f"Research: {research[:100]}...\n")
    print(f"[{coder.name}] Coding...")
    code = coder.process(f"Write Python code to: {task}")
    print(f"Code: {code[:100]}...\n")
    print(f"[{reviewer.name}] Reviewing...")
    review = reviewer.process(f"Review this approach: {code[:50]}")
    print(f"Review: {review[:100]}...\n")
    print("✓ Multi-agent workflow complete\n")
We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role, such as researcher, coder, or reviewer. We let these agents collaborate on a simple coding task, exchanging information and building upon each other's outputs. We witness how a coordinated multi-agent workflow can emulate teamwork, creativity, and self-organization in an automated setting. Check out the FULL CODES here.
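We can generalize this hand-off into a small orchestration helper that runs any list of agents in sequence, feeding each one the previous agent's output. The helper below is a hypothetical sketch built on the SimpleAgent class above; its name, prompt wording, and truncation limit are illustrative choices rather than part of the original script.

def run_agent_pipeline(agents: List[SimpleAgent], task: str) -> str:
    # Pass the task through each agent in order; every agent sees the
    # previous agent's (truncated) output as its working context.
    context = task
    for agent in agents:
        print(f"[{agent.name}] working...")
        context = agent.process(f"As a {agent.role}, continue this work: {context[:200]}")
    return context

# Example usage (assuming the agents created in demo_simple_agents):
# final_output = run_agent_pipeline([researcher, coder, reviewer], "Create a function to sort a list")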
def demo_autogen_conceptual():
    print("="*70)
    print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
    print("="*70 + "\n")
    agent_config = {
        "agents": [
            {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
            {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
            {"name": "Executor", "type": "executor", "role": "Runs code"}
        ],
        "workflow": [
            "1. UserProxy receives task",
            "2. Assistant generates solution",
            "3. Executor tests solution",
            "4. Feedback loop until complete"
        ]
    }
    print(json.dumps(agent_config, indent=2))
    print("\nAutoGen Key Features:")
    print(" • Automated agent chat conversations")
    print(" • Code execution capabilities")
    print(" • Human-in-the-loop support")
    print(" • Multi-agent collaboration")
    print(" • Tool/function calling\n")
    print("✓ AutoGen concepts explained\n")
class MockLLM:
    def __init__(self):
        self.responses = {
            "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
            "explain": "This is a recursive implementation of the Fibonacci sequence.",
            "review": "The code is correct but could be optimized with memoization.",
            "default": "I understand. Let me help with that task."
        }

    def generate(self, prompt: str) -> str:
        prompt_lower = prompt.lower()
        # Check "review" and "explain" before "code": a prompt like
        # "review this code" contains both keywords and should return the review.
        if "review" in prompt_lower:
            return self.responses["review"]
        elif "explain" in prompt_lower:
            return self.responses["explain"]
        elif "code" in prompt_lower or "function" in prompt_lower:
            return self.responses["code"]
        return self.responses["default"]
def demo_autogen_with_mock():
    print("="*70)
    print("DEMO 5: AutoGen with Custom LLM Backend")
    print("="*70 + "\n")
    mock_llm = MockLLM()
    conversation = [
        ("User", "Create a fibonacci function"),
        ("CodeAgent", mock_llm.generate("write code for fibonacci")),
        ("ReviewAgent", mock_llm.generate("review this code")),
    ]
    print("Simulated AutoGen Multi-Agent Conversation:\n")
    for speaker, message in conversation:
        print(f"[{speaker}]")
        print(f"{message}\n")
    print("✓ AutoGen simulation complete\n")
We illustrate AutoGen's core idea by defining a conceptual configuration of agents and their workflow. We then simulate an AutoGen-style conversation using a custom mock LLM that generates realistic yet controllable responses. We realize how this framework allows multiple agents to reason, test, and refine ideas collaboratively without relying on any external APIs. Check out the FULL CODES here.
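The final workflow step above, the feedback loop, can also be simulated with the same mock backend. The sketch below is illustrative only; the feedback_loop helper and its stopping condition are assumptions, not AutoGen APIs, and in real AutoGen the assistant and user-proxy agents would exchange messages until the task is resolved.

def feedback_loop(task: str, max_rounds: int = 3) -> str:
    # Simulated generate-review loop: draft a solution, then request reviews
    # until the (mock) reviewer raises no blocking issues or rounds run out.
    mock_llm = MockLLM()
    draft = mock_llm.generate(f"write code for: {task}")
    for round_num in range(1, max_rounds + 1):
        review = mock_llm.generate(f"review this code: {draft}")
        print(f"Round {round_num} review: {review}")
        if "correct" in review.lower():
            break
    return draft

# print(feedback_loop("fibonacci function"))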
def demo_hybrid_system():
    print("="*70)
    print("DEMO 6: Hybrid LangChain + Multi-Agent System")
    print("="*70 + "\n")
    reasoning_prompt = PromptTemplate(
        input_variables=["problem"],
        template="Analyze this problem: {problem}\nWhat are the key steps?"
    )
    reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    problem = "Optimize a slow database query"
    print(f"Problem: {problem}\n")
    print("[LangChain] Analyzing problem...")
    analysis = reasoning_chain.run(problem=problem)
    print(f"Analysis: {analysis[:120]}...\n")
    print(f"[{planner.name}] Creating plan...")
    plan = planner.process(f"Plan how to: {problem}")
    print(f"Plan: {plan[:120]}...\n")
    print(f"[{executor.name}] Executing...")
    result = executor.process("Execute: Add database indexes")
    print(f"Result: {result[:120]}...\n")
    print("✓ Hybrid system complete\n")
if __name__ == "__main__":
    print("="*70)
    print("ADVANCED AGENTIC AI TUTORIAL")
    print("AutoGen + LangChain + HuggingFace")
    print("="*70 + "\n")
    demo_langchain_basics()
    demo_langchain_multi_step()
    demo_simple_agents()
    demo_autogen_conceptual()
    demo_autogen_with_mock()
    demo_hybrid_system()
    print("="*70)
    print("TUTORIAL COMPLETE!")
    print("="*70)
    print("\nWhat You Learned:")
    print(" ✓ LangChain prompt engineering and chains")
    print(" ✓ Multi-step reasoning with LangChain")
    print(" ✓ Building custom multi-agent systems")
    print(" ✓ AutoGen architecture and concepts")
    print(" ✓ Combining LangChain + agents")
    print(" ✓ Using HuggingFace models (no API needed!)")
    print("\nKey Takeaway:")
    print(" You can build powerful agentic AI systems without expensive APIs!")
    print(" Combine LangChain's chains with multi-agent architectures for")
    print(" intelligent, autonomous AI systems.")
    print("="*70 + "\n")
We combine LangChain's structured reasoning with our simple agentic system to create a hybrid intelligent framework. We allow LangChain to analyze problems while the agents plan and execute corresponding actions in sequence. We conclude the demonstration by running all modules together, showcasing how open-source tools can integrate seamlessly to build adaptive, autonomous AI systems.
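As an optional extension, we could let the LangChain analysis drive the agents more directly by splitting it into candidate steps and dispatching each one to the Executor agent. The helper below is a hypothetical sketch that assumes the reasoning_chain and executor objects from demo_hybrid_system are in scope; the sentence-splitting heuristic is illustrative only.

def run_hybrid(problem: str, max_steps: int = 3) -> List[str]:
    # Let LangChain analyze the problem, then hand each extracted step
    # to the Executor agent in turn.
    analysis = reasoning_chain.run(problem=problem)
    steps = [s.strip() for s in analysis.split(".") if s.strip()]
    results = []
    for step in steps[:max_steps]:
        results.append(executor.process(f"Execute: {step}"))
    return results

# results = run_hybrid("Optimize a slow database query")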
In conclusion, we witness how Agentic AI transforms from concept to reality through a simple, modular design. We combine the reasoning depth of LangChain with the cooperative power of agents to build adaptable systems that think, plan, and act independently. The result is a clear demonstration that powerful, autonomous AI systems can be built without expensive infrastructure, leveraging open-source tools, creative design, and a bit of experimentation.