Agentic AI

    Unlocking the Power of CrewAI: A Comprehensive Guide to Building AI-Driven Workflows

    A practical guide to building multi-agent workflows with CrewAI—how agents, tasks, crews, and tools fit together, plus six real scenarios like job search automation, lead generation, and trend analysis.

    Originsoft Engineering Team
    February 20, 2025
    16 min read

    In the ever-evolving world of artificial intelligence, CrewAI has emerged as a powerful framework for building AI-driven workflows. Whether you're looking to automate job searches, generate leads, analyze market trends, or even scrape business listings, CrewAI provides a flexible and intuitive way to create AI agents that can handle complex tasks with ease.

    In this article, we'll walk through the code of six CrewAI scripts (plus a bonus multi-agent debate example), breaking down how each one works and how you can leverage CrewAI to build your own AI-powered workflows. By the end, you'll have a solid understanding of how to use CrewAI to automate tasks, analyze data, and generate actionable insights.


    What is CrewAI?

    CrewAI is a framework that allows you to create AI agents that can perform specific tasks. These agents can be chained together to form a crew, where each agent handles a part of the workflow. The framework is highly modular, allowing you to define agents, tasks, and tools that interact with external APIs, scrape data, or even generate reports.

    The key components of CrewAI are (a minimal example follows this list):

    • Agents: These are the AI workers that perform specific roles (e.g., researcher, analyst, writer).
    • Tasks: These are the specific jobs that agents perform (e.g., scraping data, analyzing trends, generating reports).
    • Crews: A group of agents working together to complete a series of tasks.
    • Tools: External utilities or APIs that agents can use to perform their tasks (e.g., Google Search, SerperDev, Selenium).
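
    Here's what those four pieces look like in the smallest possible program. This is a minimal sketch, assuming a default OpenAI key is configured (any LiteLLM-supported model works, as the scripts below show):

    # Minimal CrewAI sketch: one agent, one task, one crew (no tools yet).
    from crewai import Agent, Task, Crew, Process
    
    greeter = Agent(
    	role="Greeter",
    	goal="Write a one-line friendly greeting.",
    	backstory="You are concise and friendly.",
    )
    
    greet_task = Task(
    	description="Write a one-line greeting for a new CrewAI user.",
    	expected_output="A single friendly sentence.",
    	agent=greeter,
    )
    
    crew = Crew(agents=[greeter], tasks=[greet_task], process=Process.sequential)
    print(crew.kickoff())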

    Now, let's explore the six scripts and see how CrewAI can be used in different scenarios.


    1. Automating Job Searches with CrewAI

    The first script, `crewai_swarm_jobs.py`, demonstrates how to automate the process of finding remote job opportunities for game graphics and animation students. Here's how it works:

    # crewai_swarm_jobs.py
    
    import os
    from crewai import Agent, Task, Crew, Process, LLM
    from crewai_tools import SerperDevTool
    from dotenv import load_dotenv
    
    load_dotenv()
    
    llm = LLM(
    	model="gemini/gemini-1.5-pro-latest",
    	temperature=0.7,
    	api_key=os.getenv("GEMINI_API_KEY"),
    	max_rpm=1,
    )
    os.environ["SERPER_API_KEY"] = "********************************"
    os.environ['LITELLM_LOG'] = 'DEBUG'
    
    # Web search tool for job searching
    search_tool = SerperDevTool()
    
    # --- Agents ---
    # Job Researcher Agent
    job_researcher = Agent(
    	role="Job Researcher",
    	goal="Find remote job opportunities for game graphics and animation students.",
    	backstory=(
    		"You are an expert job researcher specializing in remote work. "
    		"You find opportunities on job boards, freelancing sites, and company career pages."
    	),
    	llm=llm,
    	verbose=True,
    	tools=[search_tool]
    )
    
    # Resume Optimizer Agent
    resume_optimizer = Agent(
    	role="Resume Optimizer",
    	goal="Optimize resumes for remote job applications in game graphics and animation.",
    	backstory=(
    		"You are an experienced career coach who helps candidates tailor their resumes "
    		"to match job descriptions effectively."
    	),
    	llm=llm,
    	verbose=True
    )
    
    # Job Application Advisor Agent
    job_application_advisor = Agent(
    	role="Job Application Advisor",
    	goal="Help candidates craft compelling cover letters for job applications.",
    	backstory=(
    		"You are a hiring specialist with years of experience in writing persuasive cover letters "
    		"to help candidates stand out."
    	),
    	llm=llm,
    	verbose=True
    )
    
    # --- Tasks ---
    # Task 1: Research jobs
    research_jobs_task = Task(
    	description=(
    		"Search online job boards and freelancing platforms for remote job opportunities "
    		"in game graphics and animation suitable for students."
    		"Focus on companies hiring remotely and collect details like job title, requirements, and application links."
    	),
    	expected_output="A list of 5-10 relevant remote job openings with descriptions and application links.",
    	tools=[search_tool],
    	agent=job_researcher
    )
    
    # Task 2: Resume Optimization
    optimize_resume_task = Task(
    	description=(
    		"Analyze the job listings and provide suggestions to optimize a resume for these positions."
    		"Highlight key skills and experiences that should be included based on job descriptions."
    	),
    	expected_output="A set of resume improvement suggestions tailored for remote jobs in game graphics and animation.",
    	agent=resume_optimizer
    )
    
    # Task 3: Cover Letter Generation
    generate_cover_letter_task = Task(
    	description=(
    		"Create a personalized cover letter template based on the job descriptions found. "
    		"Ensure it highlights relevant skills in game graphics and animation."
    	),
    	expected_output="A professional, personalized cover letter template that can be used for multiple applications.",
    	agent=job_application_advisor
    )
    
    # --- Crew Setup ---
    job_search_crew = Crew(
    	agents=[job_researcher, resume_optimizer, job_application_advisor],
    	tasks=[research_jobs_task, optimize_resume_task, generate_cover_letter_task],
    	process=Process.sequential  # Tasks run in order
    )
    
    # --- Execute the Workflow ---
    result = job_search_crew.kickoff()
    print(result)

    Agents:

    • Job Researcher: Searches for remote job opportunities using a web search tool.
    • Resume Optimizer: Provides suggestions to optimize resumes based on job descriptions.
    • Job Application Advisor: Helps craft personalized cover letters.

    Tasks:

    • Research Jobs: The Job Researcher searches for remote job openings and collects details like job titles, requirements, and application links.
    • Optimize Resume: The Resume Optimizer analyzes job listings and suggests resume improvements.
    • Generate Cover Letter: The Job Application Advisor creates a personalized cover letter template.

    Crew: The crew is set up to run these tasks sequentially, ensuring that each step is completed before moving on to the next.
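
    In a sequential process each task's output flows into the next automatically; you can also pin a dependency down explicitly with the `context` parameter (script 6 below uses it). A hedged variant of the resume task from the script above:

    # Hedged variant: make the resume task's dependency on the research explicit.
    optimize_resume_task = Task(
    	description="Analyze the job listings and suggest resume improvements for these positions.",
    	expected_output="Tailored resume improvement suggestions.",
    	agent=resume_optimizer,
    	context=[research_jobs_task],  # receives the research task's output directly
    )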

    This script is a great example of how CrewAI can automate repetitive tasks like job searching, resume optimization, and cover letter generation, saving you time and effort.


    2. Generating Leads with CrewAI

    The second script, `crewai_swarm_leads_generator.py`, shows how to automate lead generation by scraping contact information from websites. Here's the breakdown:

    # crewai_swarm_leads_generator.py
    
    import os
    import pandas as pd
    from crewai import Agent, Task, Crew, Process, LLM
    from crewai_tools import SerperDevTool
    from dotenv import load_dotenv
    
    load_dotenv()
    
    llm = LLM(model="ollama/qwen2.5:latest", base_url="http://localhost:11434")
    
    os.environ["SERPER_API_KEY"] = "**********************************"
    os.environ['LITELLM_LOG'] = 'DEBUG'
    
    # Search tool for online research
    search_tool = SerperDevTool()
    
    ### --- AGENTS --- ###
    
    # Search Agent: Uses SERPAPI to get results
    search_agent = Agent(
    	role="Search Specialist",
    	goal="Find relevant pages from {website} containing emails related to {profession}",
    	backstory="An expert in searching the web and retrieving relevant data from Google.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	tools=[search_tool]
    )
    
    # Data Extraction Agent: Extracts name, business, email, and links
    extraction_agent = Agent(
    	role="Data Extraction Specialist",
    	goal="Extract name, business, email, and links from search results",
    	backstory="A meticulous analyst who ensures no relevant information is lost.",
    	verbose=True,
    	llm=llm,
    	memory=True
    )
    
    # Excel Writer Agent: Saves extracted data into an Excel file
    excel_writer_agent = Agent(
    	role="Excel Data Writer",
    	goal="Save extracted data into an organized Excel file",
    	backstory="A detail-oriented data recorder who ensures proper documentation.",
    	verbose=True,
    	llm=llm,
    	memory=True
    )
    
    ### --- TASKS --- ###
    
    # Task 1: Perform Google search using SERPAPI
    search_task = Task(
    	description=(
    		"Use Google search to find pages on {website} containing emails related to {profession}. "
    		"Query should be: site:{website} \"{profession}\" \"@gmail.com\" OR \"@yahoo.com\" OR \"@hotmail.com\" "
    		"OR \"@outlook.com\" OR \"@aol.com\" OR \"@yahoo.com\". "
    		"Retrieve multiple pages (each containing 100 results)."
    	),
    	expected_output="A list of search result pages containing potential leads.",
    	tools=[search_tool],
    	agent=search_agent
    )
    
    # Task 2: Extract relevant data from search results
    extraction_task = Task(
    	description=(
    		"Extract key details from search results: name, business, email, and link. "
    		"Ensure accuracy and completeness in data extraction."
    	),
    	expected_output="A structured list of extracted contacts with name, business, email, and links.",
    	agent=extraction_agent
    )
    
    # Task 3: Save extracted data into an Excel file
    
    def save_to_excel(task_output):
    	"""Task callback. Note: in recent CrewAI releases the callback receives
    	a TaskOutput object rather than a raw list; its text lives on `.raw`."""
    	raw = getattr(task_output, "raw", str(task_output))
    
    	# Parse one "name, business, email, link" row per line, padding short rows.
    	structured_data = []
    	for line in raw.splitlines():
    		parts = [p.strip() for p in line.split(",")]
    		if not any(parts):
    			continue
    		parts = (parts + [None] * 4)[:4]
    		structured_data.append(parts)
    
    	df = pd.DataFrame(structured_data, columns=[
    					  "Name", "Business", "Email", "Link"])
    	output_file = "extracted_contacts.xlsx"
    	df.to_excel(output_file, index=False)
    	return f"Data saved successfully in {output_file}"
    
    excel_writer_task = Task(
    	description=(
    		"Take extracted data and save it into an Excel file named 'extracted_contacts.xlsx'. "
    		"Output one contact per line as: name, business, email, link."
    	),
    	expected_output="An Excel file containing structured contact details.",
    	agent=excel_writer_agent,
    	callback=save_to_excel  # fires after the task completes, receiving its TaskOutput
    )
    
    ### --- CREW FORMATION --- ###
    crew = Crew(
    	agents=[search_agent, extraction_agent, excel_writer_agent],
    	tasks=[search_task, extraction_task, excel_writer_task],
    	process=Process.sequential
    )
    
    # Start execution
    if __name__ == "__main__":
    	profession = input("Enter profession: ")
    	website = input("Enter website (e.g., linkedin.com): ")
    
    	result = crew.kickoff(
    		inputs={"profession": profession, "website": website})
    	print(result)

    Agents:

    • Search Specialist: Uses Google search to find pages containing email addresses related to a specific profession.
    • Data Extraction Specialist: Extracts names, businesses, emails, and links from search results.
    • Excel Data Writer: Saves the extracted data into an Excel file.

    Tasks:

    • Search for Leads: The Search Specialist performs a Google search to find relevant pages.
    • Extract Data: The Data Extraction Specialist extracts key details from the search results.
    • Save to Excel: The Excel Data Writer saves the extracted data into a structured Excel file (a simpler file-output alternative is sketched below).
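
    As a simpler alternative to the Excel callback, CrewAI can write a task's raw output straight to disk via the `output_file` parameter (script 5 below uses it for a Markdown report); binary formats like .xlsx still need a callback or a custom tool. A hedged sketch:

    # Hedged sketch: persist the extraction task's raw output as a CSV-style file.
    extraction_to_file_task = Task(
    	description="Extract name, business, email, and link, one contact per line.",
    	expected_output="One 'name, business, email, link' row per contact.",
    	agent=extraction_agent,
    	output_file="extracted_contacts.csv",  # CrewAI writes the raw output here
    )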

    This script is perfect for businesses looking to automate lead generation and organize contact information efficiently.


    3. Exploring Side Hustles with CrewAI

    The third script, `crewai_swarm_side_hustles.py`, focuses on identifying profitable side hustles using AI. Here's how it works:

    # crewai_swarm_side_hustles.py
    
    import os
    from crewai import Agent, Task, Crew, Process, LLM
    from crewai_tools import SerperDevTool
    from dotenv import load_dotenv
    from fpdf import FPDF
    
    load_dotenv()
    
    llm = LLM(model="ollama/qwen2.5:latest", base_url="http://localhost:11434")
    
    os.environ["SERPER_API_KEY"] = "***********************************"
    os.environ['LITELLM_LOG'] = 'DEBUG'
    
    # Initialize Search Tool
    search_tool = SerperDevTool()
    
    # ---------------------------- AGENTS ----------------------------
    
    # Researcher Agent
    researcher = Agent(
    	role="AI Business Researcher",
    	goal="Identify profitable side hustles using CrewAI.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory=(
    		"A highly skilled business researcher, passionate about AI monetization. "
    		"You uncover new trends, analyze markets, and provide valuable insights on making money with AI."
    	),
    	tools=[search_tool]
    )
    
    # Writer Agent (PDF Generator)
    writer = Agent(
    	role="AI Report Writer",
    	goal="Convert research insights into a structured PDF report.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory=(
    		"An expert technical writer specializing in business reports. "
    		"You take raw research and transform it into clear, actionable insights."
    	)
    )
    
    # ---------------------------- TASKS ----------------------------
    
    # Research Task
    research_task = Task(
    	description=(
    		"Research high-earning video contents (youtube, instagram, youtube, tiktok) side hustles that use CrewAI. "
    		"Analyze how others are making money with CrewAI, focusing on profitability and real-world examples. "
    		"Your final output must be a structured summary listing at least 5-7 profitable side hustles, "
    		"each with a short description and estimated earning potential."
    	),
    	expected_output="A structured summary of CrewAI-based side hustles.",
    	tools=[search_tool],
    	agent=researcher,
    )
    
    # PDF Report Generation Task
    
    def generate_pdf_report(task_output):
    	"""Generate a structured PDF report from the research content.
    	Used as a task callback, so it receives the finished task's TaskOutput."""
    	content = getattr(task_output, "raw", str(task_output))
    	# FPDF's built-in fonts are latin-1 only; replace unsupported characters.
    	content = content.encode("latin-1", "replace").decode("latin-1")
    
    	pdf = FPDF()
    	pdf.set_auto_page_break(auto=True, margin=15)
    	pdf.add_page()
    	pdf.set_font("Arial", "B", 16)
    
    	pdf.cell(200, 10, "Profitable Side Hustles Using CrewAI", ln=True, align="C")
    	pdf.ln(10)  # Line break
    
    	pdf.set_font("Arial", size=12)
    	for line in content.split("\n"):
    		pdf.multi_cell(0, 10, line)
    		pdf.ln(2)
    
    	pdf.output("CrewAI_Side_Hustles_Report.pdf")
    	return "PDF report generated successfully: CrewAI_Side_Hustles_Report.pdf"
    
    write_task = Task(
    	description=(
    		"Take the research summary and create a structured PDF report. "
    		"Ensure proper formatting, section headers, and clear descriptions. "
    		"The final report should be professional and easy to read."
    	),
    	expected_output="A well-formatted PDF report named 'CrewAI_Side_Hustles_Report.pdf'.",
    	agent=writer,
    	callback=generate_pdf_report  # Task has no 'function' parameter; use the callback hook
    )
    
    # ---------------------------- CREW SETUP ----------------------------
    
    crew = Crew(
    	agents=[researcher, writer],
    	tasks=[research_task, write_task],
    	process=Process.sequential  # Ensure research runs before writing
    )
    
    # ---------------------------- EXECUTION ----------------------------
    
    print("🚀 Starting CrewAI Side Hustle Research...")
    result = crew.kickoff()
    print(result)

    Agents:

    • AI Business Researcher: Researches high-earning side hustles that use CrewAI.
    • AI Report Writer: Converts research insights into a structured PDF report.

    Tasks:

    • Research Side Hustles: The AI Business Researcher identifies profitable side hustles and provides a structured summary.
    • Generate PDF Report: The AI Report Writer creates a professional PDF report based on the research.

    This script is ideal for entrepreneurs looking to explore new business opportunities and generate actionable insights.


    4. Scraping Business Listings with CrewAI

    The fourth script, `crewai_swarm_side_research.py`, demonstrates how to scrape business listings from Acquire.com and analyze them for trends and opportunities. Here's the breakdown:

    # crewai_swarm_side_research.py
    
    import os
    import time
    from crewai import Agent, Task, Crew, Process, LLM
    from crewai.tools import tool  # exposes plain functions as agent tools (older releases: from crewai_tools import tool)
    from dotenv import load_dotenv
    from fpdf import FPDF
    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service
    from webdriver_manager.chrome import ChromeDriverManager
    from bs4 import BeautifulSoup
    
    load_dotenv()
    
    llm = LLM(
    	model="gemini/gemini-1.5-pro-latest",
    	temperature=0.7,
    	api_key=os.getenv("GEMINI_API_KEY"),
    	max_rpm=1,
    )
    
    os.environ["SERPER_API_KEY"] = "************************************"
    os.environ['LITELLM_LOG'] = 'DEBUG'
    
    # ---------------------------- SELENIUM SCRAPER TOOL ----------------------------
    # Agent has no 'function' parameter; custom helpers are exposed as tools instead.
    @tool("Acquire Listings Scraper")
    def scrape_acquire_listings() -> list:
    	"""Scrapes Acquire.com listings with dynamic scrolling."""
    	print("🚀 Scraping Acquire.com listings...")
    
    	# Set up Selenium WebDriver
    	options = webdriver.ChromeOptions()
    	options.add_argument("--headless")  # Run in headless mode
    	options.add_argument("--no-sandbox")
    	options.add_argument("--disable-dev-shm-usage")
    
    	driver = webdriver.Chrome(service=Service(
    		ChromeDriverManager().install()), options=options)
    	driver.get("https://app.acquire.com/all-listing")
    
    	# Scroll to load more listings
    	last_height = driver.execute_script("return document.body.scrollHeight")
    	while True:
    		driver.execute_script(
    			"window.scrollTo(0, document.body.scrollHeight);")
    		time.sleep(3)  # Allow time to load
    		new_height = driver.execute_script("return document.body.scrollHeight")
    		if new_height == last_height:
    			break
    		last_height = new_height
    
    	# Get page source and close driver
    	soup = BeautifulSoup(driver.page_source, "html.parser")
    	driver.quit()
    
    	# Extract listings
    	listings = []
    	for listing in soup.select(".listing-item"):  # Adjust selector as needed
    		title = listing.select_one(
    			".listing-title").text.strip() if listing.select_one(".listing-title") else "N/A"
    		price = listing.select_one(
    			".listing-price").text.strip() if listing.select_one(".listing-price") else "N/A"
    		category = listing.select_one(
    			".listing-category").text.strip() if listing.select_one(".listing-category") else "N/A"
    
    		listings.append({"title": title, "price": price, "category": category})
    
    	print(f"✅ Scraped {len(listings)} listings.")
    	return listings
    
    # ---------------------------- AGENTS ----------------------------
    
    # Scraper Agent
    scraper_agent = Agent(
    	role="Web Scraper",
    	goal="Extract all business listings from Acquire.com.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory="An expert in web scraping, skilled at extracting data from complex websites.",
    	tools=[scrape_acquire_listings]  # custom scraper exposed as a tool (no 'function' param on Agent)
    )
    
    # Business Analyst helper, exposed as a tool
    
    @tool("Business Listings Analyzer")
    def analyze_listings(listings: list) -> str:
    	"""Analyzes the scraped business listings to identify trends and opportunities."""
    	print("📊 Analyzing business data...")
    
    	category_counts = {}
    	for listing in listings:
    		category = listing["category"]
    		if category in category_counts:
    			category_counts[category] += 1
    		else:
    			category_counts[category] = 1
    
    	# Identify top categories
    	top_categories = sorted(category_counts.items(),
    							key=lambda x: x[1], reverse=True)[:5]
    
    	insights = "📈 **Business Trends & Opportunities**\n\n"
    	insights += "### **Top Business Categories:**\n"
    	for cat, count in top_categories:
    		insights += f"- {cat}: {count} listings\n"
    
    	insights += "\n### **Opportunities:**\n"
    	insights += "1. Identify high-demand, low-competition categories.\n"
    	insights += "2. Look for successful businesses and find ways to improve them.\n"
    	insights += "3. Explore trends based on pricing and market gaps.\n"
    
    	print("✅ Analysis complete.")
    	return insights
    
    analyst_agent = Agent(
    	role="Business Analyst",
    	goal="Analyze business categories, statistics, and trends from Acquire.com data.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory="A seasoned business analyst with expertise in market research and identifying profitable opportunities.",
    	tools=[analyze_listings]  # analyzer exposed as a tool (no 'function' param on Agent)
    )
    
    # ---------------------------- TASKS ----------------------------
    
    # Scraping Task
    scraping_task = Task(
    	description="Scrape all listings from Acquire.com, including categories, pricing, and trends.",
    	expected_output="A structured list of all business listings with relevant details.",
    	agent=scraper_agent
    )
    
    # Analysis Task
    analysis_task = Task(
    	description="Analyze the scraped business data to determine trends, top categories, and profitable opportunities.",
    	expected_output="A business insights summary with trends and recommended opportunities.",
    	agent=analyst_agent
    )
    
    # ---------------------------- PDF REPORT GENERATION ----------------------------
    
    @tool("PDF Report Generator")
    def generate_pdf_report(analysis_content: str) -> str:
    	"""Generate a structured PDF report with analysis results."""
    	print("📝 Generating PDF Report...")
    	# FPDF's built-in fonts are latin-1 only; replace unsupported characters.
    	analysis_content = analysis_content.encode("latin-1", "replace").decode("latin-1")
    
    	pdf = FPDF()
    	pdf.set_auto_page_break(auto=True, margin=15)
    	pdf.add_page()
    	pdf.set_font("Arial", "B", 16)
    
    	pdf.cell(200, 10, "Business Analysis Report - Acquire.com",
    			 ln=True, align="C")
    	pdf.ln(10)  # Line break
    
    	pdf.set_font("Arial", size=12)
    	for line in analysis_content.split("\n"):
    		pdf.multi_cell(0, 10, line)
    		pdf.ln(2)
    
    	pdf.output("Acquire_Business_Report.pdf")
    	print("✅ PDF report generated: Acquire_Business_Report.pdf")
    	return "PDF report generated successfully: Acquire_Business_Report.pdf"
    
    # Report Writer Agent
    writer_agent = Agent(
    	role="Report Writer",
    	goal="Convert business insights into a structured PDF report.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory="An expert technical writer who creates structured business reports.",
    	tools=[generate_pdf_report]  # the agent writes the PDF by invoking this tool
    )
    
    # PDF Task
    pdf_task = Task(
    	description="Create a PDF report summarizing business trends and opportunities.",
    	expected_output="A well-structured PDF report named 'Acquire_Business_Report.pdf'.",
    	agent=writer_agent
    )
    
    # ---------------------------- CREW SETUP ----------------------------
    
    crew = Crew(
    	agents=[scraper_agent, analyst_agent, writer_agent],
    	tasks=[scraping_task, analysis_task, pdf_task],
    	process=Process.sequential
    )
    
    # ---------------------------- EXECUTION ----------------------------
    
    print("🚀 Starting Acquire.com Business Research...")
    result = crew.kickoff()
    print(result)

    Agents:

    • Web Scraper: Scrapes business listings from Acquire.com.
    • Business Analyst: Analyzes the scraped data to identify trends and opportunities.
    • Report Writer: Generates a PDF report summarizing the findings.

    Tasks:

    • Scrape Listings: The Web Scraper extracts business listings from Acquire.com.
    • Analyze Data: The Business Analyst identifies trends and opportunities based on the scraped data.
    • Generate Report: The Report Writer creates a PDF report summarizing the analysis.

    This script is perfect for market researchers looking to analyze business trends and identify profitable opportunities.


    5. Identifying Business Opportunities with CrewAI

    The fifth script, `crewai_swarm_side_research_business.py`, focuses on identifying promising industries and niches for an AI-agent-as-a-service business. Here's how it works:

    # crewai_swarm_side_research_business.py
    
    import os
    from crewai import Agent, Task, Crew, Process, LLM
    from crewai_tools import SerperDevTool
    from dotenv import load_dotenv
    
    load_dotenv()
    
    llm = LLM(
    	model="gemini/gemini-1.5-pro-latest",
    	temperature=0.7,
    	api_key=os.getenv("GEMINI_API_KEY"),
    	max_rpm=1,
    )
    
    os.environ["SERPER_API_KEY"] = "***************************************"
    os.environ['LITELLM_LOG'] = 'DEBUG'
    
    # Search tool for online research
    search_tool = SerperDevTool()
    
    # 1. Industry Researcher Agent
    industry_researcher = Agent(
    	role="Industry Researcher",
    	goal="Find promising industries and niches for an AI-agent-as-a-service business.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory=(
    		"An experienced market researcher with deep insights into emerging trends, "
    		"you specialize in identifying promising industries for AI-driven solutions."
    	),
    	tools=[search_tool]
    )
    
    # 2. Problem Analyst Agent
    problem_analyst = Agent(
    	role="Problem Analyst",
    	goal="Identify key problems and challenges in each promising industry and niche.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory=(
    		"A business strategist who understands the biggest challenges within industries. "
    		"You analyze pain points and uncover gaps where AI solutions can be implemented."
    	),
    	tools=[search_tool]
    )
    
    # 3. Report Writer Agent
    report_writer = Agent(
    	role="Report Writer",
    	goal="Compile the research into a well-structured report with insights.",
    	verbose=True,
    	memory=True,
    	llm=llm,
    	backstory=(
    		"A skilled business analyst and writer who turns raw data into actionable insights. "
    		"Your reports are clear, insightful, and strategic."
    	)
    )
    
    # Task 1: Research industries and niches
    industry_research_task = Task(
    	description=(
    		"Identify at least 10 industries and niches where an AI-agent-as-a-service business could be viable. "
    		"Consider emerging markets, growth trends, and AI adoption potential."
    	),
    	expected_output="A list of at least 10 industries with descriptions.",
    	tools=[search_tool],
    	agent=industry_researcher,
    )
    
    # Task 2: Analyze problems in each niche
    problem_analysis_task = Task(
    	description=(
    		"For each identified industry, analyze the top 3 key pain points or challenges. "
    		"Focus on problems that AI agents can realistically solve."
    	),
    	expected_output="A list of key pain points for each industry.",
    	tools=[search_tool],
    	agent=problem_analyst,
    )
    
    # Task 3: Generate a final report
    report_writing_task = Task(
    	description=(
    		"Compile a well-structured report with industries, their key challenges, "
    		"and why an AI-agent solution is a viable business opportunity."
    	),
    	expected_output="A comprehensive business opportunity report in Markdown format.",
    	agent=report_writer,
    	output_file="business_opportunity_report.md"
    )
    
    # Form the Crew
    crew = Crew(
    	agents=[industry_researcher, problem_analyst, report_writer],
    	tasks=[industry_research_task, problem_analysis_task, report_writing_task],
    	process=Process.sequential
    )
    
    # Execute
    result = crew.kickoff()
    print(result)

    Agents:

    • Industry Researcher: Identifies promising industries and niches.
    • Problem Analyst: Analyzes key problems and challenges in each industry.
    • Report Writer: Compiles the research into a well-structured report.

    Tasks:

    • Research Industries: The Industry Researcher identifies viable industries and niches.
    • Analyze Problems: The Problem Analyst identifies key pain points in each industry.
    • Generate Report: The Report Writer compiles the findings into a comprehensive report.

    This script is ideal for entrepreneurs looking to identify new business opportunities and understand market challenges.


    6. Analyzing Market Trends with CrewAI

    The sixth script, `crewai_swarm_trend_analysis.py`, demonstrates how to analyze market trends for men's t-shirt prints. Here's the breakdown:

    # crewai_swarm_trend_analysis.py
    
    import os
    from crewai import LLM, Agent, Task, Crew
    from langchain_community.tools.google_trends import GoogleTrendsQueryRun
    from langchain_community.utilities.google_trends import GoogleTrendsAPIWrapper
    from langchain.tools import Tool
    from langchain_community.utilities import SerpAPIWrapper
    
    os.environ["SERPAPI_API_KEY"] = "************************************"
    
    # Initialize LLM
    llm = LLM(model="ollama/qwen2.5:latest", base_url="http://localhost:11434")
    
    # Set up Google Trends API wrapper
    trends_api = GoogleTrendsAPIWrapper()
    
    # Set up SerpAPI for Google Search
    google_search = SerpAPIWrapper()
    
    # --------------- Define Tools ---------------
    
    # Fetch trending men's t-shirt prints from Google Trends
    
    def fetch_tshirt_trends():
    	tool = GoogleTrendsQueryRun(api_wrapper=trends_api)
    	query = "men's t-shirt prints trends"
    	return tool.run(query)
    
    fetch_trends_tool = Tool(
    	name="Google Trends T-Shirt Prints Analyzer",
    	func=fetch_tshirt_trends,
    	description="Fetches the latest t-shirt print trends from Google Trends."
    )
    
    # Search for expert opinions and market trends
    
    # Define a function to fetch Google Search results using SerpAPI
    
    def fetch_market_insights():
    	query = "best-selling t-shirt prints for men in 2024"
    	return google_search.run(query)
    
    # Define the search tool for CrewAI
    fetch_search_tool = Tool(
    	name="Google Search Market Insights",
    	func=fetch_market_insights,
    	description="Searches Google for market trends and consumer preferences in men's t-shirt prints."
    )
    
    # --------------- Define Agents ---------------
    
    # 1. Trends Researcher - Fetches print trends
    researcher = Agent(
    	role="Trends Researcher",
    	goal="Identify the latest men's t-shirt print trends using Google Trends and Google Search.",
    	backstory="A fashion industry expert specializing in trend analysis and consumer behavior.",
    	tools=[fetch_trends_tool, fetch_search_tool],
    	llm=llm,
    	verbose=True
    )
    
    # 2. Consumer Analyst - Understands buyer preferences
    consumer_analyst = Agent(
    	role="Consumer Analyst",
    	goal="Analyze consumer preferences for t-shirt prints, including color, graphics, and typography.",
    	backstory="A consumer behavior specialist with expertise in apparel market research.",
    	llm=llm,
    	verbose=True
    )
    
    # 3. Forecasting Expert - Predicts upcoming trends
    forecaster = Agent(
    	role="Trend Forecaster",
    	goal="Predict the top 5 t-shirt print designs that will dominate the market in the next 6 months.",
    	backstory="A data scientist specializing in fashion forecasting.",
    	llm=llm,
    	verbose=True
    )
    
    # 4. Business Strategist - Creates a final report
    strategist = Agent(
    	role="Business Strategist",
    	goal="Compile insights into a detailed report with actionable recommendations for a t-shirt printing business.",
    	backstory="A business consultant helping startups make data-driven decisions.",
    	llm=llm,
    	verbose=True
    )
    
    # --------------- Define Tasks ---------------
    
    # Task 1: Fetch Print Trends
    fetch_trends_task = Task(
    	description="Retrieve the latest data on trending men's t-shirt prints from Google Trends and Google Search.",
    	agent=researcher,
    	expected_output="A list of the most popular and rising t-shirt prints, patterns, and designs."
    )
    
    # Task 2: Analyze Consumer Preferences
    analyze_preferences_task = Task(
    	description="Examine the consumer demand for various t-shirt prints, including colors, designs, and typography.",
    	agent=consumer_analyst,
    	context=[fetch_trends_task],
    	expected_output="A report detailing consumer preferences for different t-shirt print styles."
    )
    
    # Task 3: Forecast Future Trends
    forecast_trends_task = Task(
    	description="Predict the top 5 t-shirt print designs that will dominate in the next 6 months.",
    	agent=forecaster,
    	context=[fetch_trends_task, analyze_preferences_task],
    	expected_output="A future trend analysis highlighting the top 5 upcoming t-shirt print styles."
    )
    
    # Task 4: Generate Business Report
    generate_report_task = Task(
    	description="Compile the research findings into a structured business strategy report for a new t-shirt printing business.",
    	agent=strategist,
    	context=[forecast_trends_task],
    	expected_output="A comprehensive business report with insights and recommendations on t-shirt print trends."
    )
    
    # --------------- Assemble CrewAI System ---------------
    
    crew = Crew(
    	agents=[researcher, consumer_analyst, forecaster, strategist],
    	tasks=[fetch_trends_task, analyze_preferences_task,
    		   forecast_trends_task, generate_report_task],
    	verbose=True
    )
    
    # --------------- Run the Crew ---------------
    if __name__ == "__main__":
    	result = crew.kickoff()
    	print("\n--- Final T-Shirt Print Trends Business Report ---\n")
    	print(result)

    Agents:

    • Trends Researcher: Fetches the latest t-shirt print trends using Google Trends and Google Search.
    • Consumer Analyst: Analyzes consumer preferences for t-shirt prints.
    • Trend Forecaster: Predicts upcoming t-shirt print trends.
    • Business Strategist: Compiles insights into a detailed business report.

    Tasks:

    • Fetch Trends: The Trends Researcher retrieves the latest data on t-shirt print trends.
    • Analyze Preferences: The Consumer Analyst examines consumer demand for different t-shirt prints.
    • Forecast Trends: The Trend Forecaster predicts the top t-shirt print designs for the next six months.
    • Generate Report: The Business Strategist compiles the findings into a comprehensive business report.

    This script is perfect for businesses looking to stay ahead of market trends and make data-driven decisions.
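
    Note the pattern in this script: any plain Python function becomes usable by an agent once it is wrapped as a tool. The script uses langchain's `Tool` class; CrewAI also ships its own decorator, which would look roughly like this (hedged: the decorator lives in `crewai.tools` in recent releases, `crewai_tools` in older ones):

    # Hedged alternative to langchain's Tool wrapper, reusing trends_api from the script.
    from crewai.tools import tool
    
    @tool("Google Trends T-Shirt Prints Analyzer")
    def fetch_tshirt_trends_native() -> str:
    	"""Fetches the latest t-shirt print trends from Google Trends."""
    	return GoogleTrendsQueryRun(api_wrapper=trends_api).run("men's t-shirt prints trends")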


    Bonus Example: Build a Multi-Agent Debate System with CrewAI

    We’ll build a debate simulation engine with:

    • 4 Debater Agents → each defends a unique perspective.
    • 1 Moderator Agent → orchestrates the debate, ensures fairness, and provides a concluding decision.

    The debate flow:

    1. Moderator kicks off the debate.
    2. Each debater presents and responds to others.
    3. Moderator summarizes and declares a winner.

    Setup

    First, install CrewAI:

    pip install crewai crewai-tools

    We’ll also need an API key for an LLM (OpenAI, Gemini, or any LiteLLM-supported model). For example:

    export GEMINI_API_KEY="your_api_key_here"
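
    If you'd rather keep keys in a `.env` file, as the earlier scripts do, python-dotenv loads them at startup:

    # .env contains: GEMINI_API_KEY=your_api_key_here
    from dotenv import load_dotenv
    
    load_dotenv()  # makes the key available via os.getenv("GEMINI_API_KEY")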

    The Code

    Here’s the full Python program:

    import os
    from crewai import Agent, Task, Crew, Process, LLM
    
    # Set up the LLM (example: Gemini via LiteLLM)
    llm = LLM(
    	model="gemini/gemini-2.0-flash",
    	temperature=0.7,
    	api_key=os.getenv("GEMINI_API_KEY"),
    	max_rpm=2,
    )
    
    os.environ['LITELLM_LOG'] = 'DEBUG'
    
    # ------------------------
    # Define Debate Agents
    # ------------------------
    
    debater1 = Agent(
    	role="Debater 1",
    	goal="Defend your perspective and respond critically to others' arguments.",
    	backstory=(
    		"You strongly advocate for fully remote work. "
    		"You argue that remote-first teams scale faster and hire better talent."
    	),
    	verbose=True,
    	memory=True,
    	llm=llm
    )
    
    debater2 = Agent(
    	role="Debater 2",
    	goal="Defend your perspective and respond critically to others' arguments.",
    	backstory=(
    		"You strongly advocate for in-office work. "
    		"You argue that co-location improves collaboration, speed, and culture."
    	),
    	verbose=True,
    	memory=True,
    	llm=llm
    )
    
    debater3 = Agent(
    	role="Debater 3",
    	goal="Defend your perspective and respond critically to others' arguments.",
    	backstory=(
    		"You advocate for a hybrid model. "
    		"You argue that teams can balance deep work with periodic in-person alignment."
    	),
    	verbose=True,
    	memory=True,
    	llm=llm
    )
    
    debater4 = Agent(
    	role="Debater 4",
    	goal="Defend your perspective and respond critically to others' arguments.",
    	backstory=(
    		"You advocate for a flexible contractor/freelancer model. "
    		"You argue that modular staffing and clear contracts outperform rigid org charts."
    	),
    	verbose=True,
    	memory=True,
    	llm=llm
    )
    
    moderator = Agent(
    	role="Moderator",
    	goal="Keep the debate structured, ask for responses, and finally give a concluding decision.",
    	backstory=(
    		"You are a neutral and fair debate moderator. "
    		"You ensure that each debater has the chance to speak and that the debate remains productive."
    	),
    	verbose=True,
    	memory=True,
    	llm=llm
    )
    
    # ------------------------
    # Define Debate Tasks
    # ------------------------
    
    debate_task = Task(
    	description=(
    		"Start a structured debate between the four debaters. "
    		"Each debater should defend their position, challenge others, and respond to critiques. "
    		"Ensure each agent has at least 2 turns to speak. "
    		"Topic of the debate is: 'Which work model is best for software teams: remote, in-office, hybrid, or contractor-first?'"
    	),
    	expected_output=(
    		"A transcript of the debate with contributions from all 4 debaters."
    	),
    	agent=moderator,  # Moderator orchestrates the discussion
    )
    
    conclusion_task = Task(
    	description=(
    		"After the debate, analyze the discussion carefully and provide a final conclusion. "
    		"The conclusion should summarize key arguments and declare which debater made the strongest case."
    	),
    	expected_output=(
    		"A clear and reasoned final judgment summarizing the debate and identifying the winner."
    	),
    	agent=moderator,
    )
    
    # ------------------------
    # Create Crew
    # ------------------------
    
    crew = Crew(
    	agents=[debater1, debater2, debater3, debater4, moderator],
    	tasks=[debate_task, conclusion_task],
    	process=Process.sequential,
    )
    
    # ------------------------
    # Run Debate
    # ------------------------
    
    result = crew.kickoff(inputs={})
    print(result)

    How It Works

    • Agents are defined with roles, goals, and backstories. This makes them behave consistently during the debate.
    • Tasks give structure:
      - Debate task → orchestrated by the moderator.
      - Conclusion task → moderator declares the winner.
    • Crew ties agents and tasks together, with a sequential process ensuring order.

    When you run `crew.kickoff()`, the agents interact and generate a debate transcript automatically.
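
    If you want more than the final string, recent CrewAI releases return a `CrewOutput` from `kickoff()`, which exposes each task's result. A hedged sketch:

    # Hedged sketch: inspect per-task results on the CrewOutput returned by kickoff().
    result = crew.kickoff()
    print(result.raw)  # the final task's output (the moderator's conclusion)
    for task_output in result.tasks_output:
    	print(task_output.raw[:200])  # debate transcript first, then the conclusion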

    Extensions

    • Add tools (e.g., `SerperDevTool`) so debaters can pull in real data.
    • Switch to a hierarchical process (or mark independent tasks `async_execution=True`) instead of the fixed sequential flow; see the sketch after this list.
    • Use YAML config (`agents.yaml` & `tasks.yaml`) for easier scaling.
    • Apply this framework to product comparisons, startup ideas, or classroom discussions.
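
    For the orchestration point above, here is a hedged sketch of a hierarchical crew: a manager model decides which agent handles what, instead of the fixed sequential order. Note that in hierarchical mode the manager is supplied via `manager_llm` rather than listed among the agents:

    # Hedged sketch: hierarchical process with a manager model coordinating agents.
    crew = Crew(
    	agents=[debater1, debater2, debater3, debater4],
    	tasks=[debate_task, conclusion_task],
    	process=Process.hierarchical,
    	manager_llm=llm,  # required when using Process.hierarchical
    )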

    How to Build Your Own CrewAI Workflow

    Now that we've explored these scripts, let's break down how you can build your own CrewAI workflow (a copy-paste skeleton follows this list):

    • Define Your Agents: Identify the roles you need in your workflow (e.g., researcher, analyst, writer).
    • Create Tasks: Define the specific tasks each agent will perform.
    • Set Up Tools: Integrate external tools or APIs that your agents will use (e.g., Google Search, SerperDev, Selenium).
    • Form a Crew: Combine your agents and tasks into a crew, and define the process (e.g., sequential or hierarchical).
    • Execute the Workflow: Run the crew and analyze the results.
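
    Here's a hedged skeleton that maps those five steps onto code; swap in your own roles, tools, and inputs:

    # Hedged skeleton: the five steps above as a minimal, runnable workflow.
    from crewai import Agent, Task, Crew, Process
    from crewai_tools import SerperDevTool
    
    search_tool = SerperDevTool()                        # 3. set up tools
    
    researcher = Agent(                                   # 1. define your agents
    	role="Researcher",
    	goal="Research {topic} thoroughly.",
    	backstory="A meticulous researcher.",
    	tools=[search_tool],
    )
    
    research_task = Task(                                 # 2. create tasks
    	description="Collect key facts about {topic}.",
    	expected_output="A bullet list of findings.",
    	agent=researcher,
    )
    
    crew = Crew(                                          # 4. form a crew
    	agents=[researcher],
    	tasks=[research_task],
    	process=Process.sequential,
    )
    
    print(crew.kickoff(inputs={"topic": "AI agents"}))    # 5. execute the workflow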

    Conclusion

    CrewAI is a powerful framework that allows you to automate complex workflows, analyze data, and generate actionable insights. Whether you're looking to automate job searches, generate leads, or analyze market trends, CrewAI provides the tools you need to build AI-driven solutions.

    By understanding the code in these scripts, you can start building your own CrewAI workflows and unlock the full potential of AI automation. So, what are you waiting for? Dive into CrewAI and start building your AI-powered future today!

    #CrewAI #AI Agents #Agentic AI #Automation #Multi-Agent Systems #Workflows #Tools
    Originsoft Engineering Team

    The engineering team at Originsoft Consultancy brings together decades of combined experience in software architecture, AI/ML, and cloud-native development. We are passionate about sharing knowledge and helping developers build better software.