Generative AI and Agentic AI Programming
Instructor-Led Online and In-Person Training Program
Course Program
- Prerequisites: None — open to all professionals with an interest in AI and Python programming
- Mode of Delivery: In-Person (Classroom) or Online (Live Virtual)
- Duration:
- 6 weeks (36 hours) of guided instruction and resume preparation,
- 2 additional weeks dedicated to a hands-on Final Project
🔍 Course Overview
This instructor-led professional program offers a practical, end-to-end journey through Generative AI and Agentic AI development using Python.
Across 14 sessions, you’ll gain deep, hands-on experience with modern AI tools and frameworks, including OpenAI APIs, LangChain, LangGraph, LangSmith, vector databases (Milvus), and machine learning models for predictive AI.
The curriculum progresses from Python fundamentals to building, evaluating, and deploying intelligent AI systems, culminating in a portfolio-grade capstone project that showcases your expertise in GPT-powered applications, Retrieval-Augmented Generation (RAG) systems, or agentic workflows.
✅ What You’ll Gain
🚀 Jump into the AI Domain
Learn how to work hands-on with Generative AI tools and technologies — including OpenAI GPT models, LangChain, LangGraph, LangSmith, and vector databases — to design, build, and evaluate intelligent applications.
💡 Master Practical AI Development
Gain experience developing end-to-end solutions that combine prompt engineering, RAG (Retrieval-Augmented Generation), and fine-tuning to solve real business problems.
🧠 Build Intelligent Workflows and Agents
Create multi-step AI workflows using LangChain and orchestrate autonomous agents through LangGraph for planning, decision-making, and task automation.
📊 Evaluate and Optimize Performance
Use LangSmith to trace, debug, and measure the quality of your AI models and pipelines with metrics like accuracy, precision, and faithfulness.
🧰 Develop and Deploy with Confidence
Learn how to integrate and deploy your AI projects using tools like Streamlit and Gradio, and work seamlessly across OpenAI and local LLM environments.
🏆 Showcase Your Expertise
Complete a portfolio-ready capstone project that demonstrates your ability to design, build, and deploy Generative AI applications — supported by resume and interview preparation for AI-focused roles.
👥 Who Should Enroll
Software Developers & Engineers who want to specialize in Generative and Agentic AI development
AI/ML Enthusiasts with basic Python knowledge looking for hands-on training in large language models and RAG systems
Product Managers, Tech Leads, and Innovators exploring AI-powered automation, chatbots, and enterprise copilots
Quality Assurance Engineers who want to learn how Generative AI can enhance testing workflows — or become more effective testers of AI and GenAI-driven applications
Professionals looking to confidently build, evaluate, and deploy GPT-powered applications in real-world business environments
“Growth begins at the edge of your comfort zone—keep pushing, keep evolving.”
What will be covered?
The course is divided into three sections: core instruction (Sessions 1-12), the hands-on capstone project (Sessions 13-14), and additional career-focused topics.
Session 1: Python Intro + OpenAI API Setup
- Python fundamentals: syntax, variables, conditionals, loops, and functions
- Exception handling basics
- OpenAI API setup and environment configuration
- Installing and using the OpenAI Python package
- Lab: Write your first GPT chatbot using the OpenAI API
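To preview this lab, here is a minimal sketch of a single chat call with the OpenAI Python SDK; the model name and prompts are illustrative, and an OPENAI_API_KEY environment variable is assumed rather than the exact setup used in class.

```python
# A first GPT chatbot call with the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model works here
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an API key is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```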
Session 2: Python for GenAI
- Working with Python data types: strings, lists, dictionaries, and sets
- Working with JSON and API requests
- Key libraries: pandas, numpy, requests, re, os, dotenv
- Prompt engineering concepts with f-strings
- OpenAI moderation API
- Lab: Build a mini text-to-text transformer using the OpenAI API, including inbound moderation
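As a flavor of this lab, the sketch below combines an f-string prompt with an inbound moderation check; the model names, helper function, and example text are illustrative assumptions, not the official lab solution.

```python
# Sketch of the Session 2 lab: moderate user input, then transform it with GPT.
# Model names are illustrative; OPENAI_API_KEY is assumed to be set (e.g. via dotenv).
from openai import OpenAI

client = OpenAI()

def transform(text: str, style: str = "formal business English") -> str:
    # Inbound moderation: reject flagged input before it reaches the model
    moderation = client.moderations.create(model="omni-moderation-latest", input=text)
    if moderation.results[0].flagged:
        return "Input rejected by moderation."

    # Prompt built with an f-string, as covered in the session
    prompt = f"Rewrite the following text in {style}:\n\n{text}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(transform("hey can u send me the report asap"))
```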
Session 3: Working with Local LLMs
- Overview of open-source LLMs such as LLaMA, Mistral, Phi, Gemma, GPT-OSS
- Running models locally using Ollama
- Hands-on: Prompting, inference, and comparison with OpenAI models
- Lab: Run a local LLM and generate accounting or HR responses
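A minimal sketch of prompting a local model through Ollama's REST API is shown below; it assumes Ollama is running on its default port and that the (illustrative) model has already been pulled with `ollama pull llama3.2`.

```python
# Sketch: prompt a local model served by Ollama over its REST API.
# Assumes Ollama is running locally and the model (name illustrative) has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Draft a short HR response acknowledging a vacation request.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```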
Session 4: Retrieval-Augmented Generation (RAG)
- Introduction to embeddings and vector search
- Document loaders and chunking strategies
- RAG pipeline setup
- Lab: Build a document Q&A bot using your own PDFs or document chunks
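The sketch below shows the shape of a bare-bones RAG pipeline with OpenAI embeddings and cosine-similarity retrieval; the hard-coded chunks, model names, and prompt are illustrative stand-ins for your own documents.

```python
# Minimal RAG sketch: embed a few chunks, retrieve the closest one, and answer with it.
# In the lab the chunks come from your own PDFs; here they are hard-coded for illustration.
import numpy as np
from openai import OpenAI

client = OpenAI()
chunks = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be submitted within 30 days of purchase.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

chunk_vecs = embed(chunks)
question = "How fast do vacation days accumulate?"
q_vec = embed([question])[0]

# Cosine similarity; these embeddings are unit-length, so a dot product suffices
best = chunks[int(np.argmax(chunk_vecs @ q_vec))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Answer using only this context:\n{best}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```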
Session 5: Vector Databases (Milvus)
- Understanding embeddings and cosine similarity
- Setting up Milvus locally
- Performing insertions, search, and similarity queries
- Lab: Index and retrieve text data using Milvus
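Below is a minimal sketch of inserting and searching vectors with pymilvus, using Milvus Lite as a local file-backed instance; the collection name, dimension, and random vectors are illustrative only (real vectors would come from an embedding model).

```python
# Sketch of the Milvus lab using pymilvus' MilvusClient with Milvus Lite
# (a local file-backed instance); collection name and dimension are illustrative.
import random
from pymilvus import MilvusClient

client = MilvusClient("milvus_demo.db")  # local Milvus Lite database file
if not client.has_collection("docs"):
    client.create_collection(collection_name="docs", dimension=8)

# Insert a few records; in practice the vectors come from an embedding model
data = [
    {"id": i, "vector": [random.random() for _ in range(8)], "text": f"document {i}"}
    for i in range(5)
]
client.insert(collection_name="docs", data=data)

# Similarity search for the vectors closest to a query vector
query = [[random.random() for _ in range(8)]]
hits = client.search(collection_name="docs", data=query, limit=2, output_fields=["text"])
for hit in hits[0]:
    print(hit["entity"]["text"], hit["distance"])
```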
Session 6: Evaluation Frameworks
- Introduction to model evaluation and RAGAS metrics
- Context precision, context recall, faithfulness, and answer correctness
- Measuring evaluation scores with RAGAS
- Lab: Evaluate chatbot accuracy using LangSmith or RAGAS
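The sketch below shows the general shape of a RAGAS evaluation run; the exact interface varies between ragas versions, an OpenAI API key is assumed for the LLM-based metrics, and the sample record is invented for illustration.

```python
# Sketch of a RAGAS evaluation run; treat this as the general shape rather than
# an exact recipe, since the ragas interface changes between versions.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_correctness, context_precision, context_recall

samples = {
    "question": ["How many vacation days accrue per month?"],
    "answer": ["Employees accrue 1.5 vacation days per month."],
    "contexts": [["Employees accrue 1.5 vacation days per month of service."]],
    "ground_truth": ["1.5 vacation days per month."],
}

result = evaluate(
    Dataset.from_dict(samples),
    metrics=[faithfulness, answer_correctness, context_precision, context_recall],
)
print(result)  # per-metric scores between 0 and 1
```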
Session 7: Fine-Tuning LLMs
- When and why to fine-tune a model
- Dataset preparation and JSONL format
- Fine-tuning with OpenAI
- Lab: Fine-tune an OpenAI model for classification or structured output
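As a preview, the sketch below uploads a chat-formatted JSONL file and launches an OpenAI fine-tuning job; the file name, example record, and base model are illustrative assumptions.

```python
# Sketch of launching an OpenAI fine-tuning job. The JSONL file holds chat-formatted
# examples; file name and base model are illustrative.
from openai import OpenAI

client = OpenAI()

# Each line of train.jsonl looks like:
# {"messages": [{"role": "system", "content": "Classify the ticket."},
#               {"role": "user", "content": "My invoice total is wrong."},
#               {"role": "assistant", "content": "billing"}]}
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # a base model that supports fine-tuning
)
print(job.id, job.status)  # poll or check the dashboard until the job completes
```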
Session 8: Agentic AI Concepts
- What agents are and how they differ from chatbots
- Core agent loop: plan → act → observe → refine
- Introduction to LangGraph and multi-agent collaboration
- Lab: Create a two-agent conversation workflow
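The sketch below gives a taste of the two-agent lab: two GPT personas (a planner and a critic) take turns over a shared transcript; the roles, prompts, and model name are illustrative, and a real agent would also plan, call tools, and observe results.

```python
# Sketch of a two-agent conversation: a planner and a critic alternate turns.
from openai import OpenAI

client = OpenAI()

def say(system_prompt: str, transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

planner = "You are a project planner. Propose the next concrete step in one sentence."
critic = "You are a reviewer. Point out one risk in the latest step, in one sentence."

transcript = "Goal: launch an internal FAQ chatbot."
for _ in range(2):  # two rounds of plan -> critique
    step = say(planner, transcript)
    transcript += f"\nPlanner: {step}"
    risk = say(critic, transcript)
    transcript += f"\nCritic: {risk}"

print(transcript)
```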
Session 9: LangChain Foundations
- Understanding the LangChain framework and architecture
- Building modular pipelines with chains, tools, and agents
- Prompt templates, memory management, and document loaders
- Integration with vector databases (Milvus, Chroma)
- Calling external APIs and custom tools inside chains
- Lab: Build a workflow that summarizes uploaded documents and answers user questions
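A minimal LangChain pipeline in the LangChain Expression Language is sketched below (prompt, model, and output parser composed with the | operator); the model name and prompt text are illustrative.

```python
# Sketch of a minimal LangChain pipeline (prompt -> model -> parser) using the
# LangChain Expression Language; package names reflect the current split
# (langchain-core, langchain-openai).
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You summarize documents in three bullet points."),
    ("human", "Summarize this text:\n\n{document}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | StrOutputParser()  # components composed with the | operator

print(chain.invoke({"document": "LangChain lets you compose prompts, models, and tools..."}))
```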
Session 10: LangGraph for Agentic Orchestration
- Introduction to LangGraph concepts: nodes, edges, and state management
- Designing multi-step workflows and branching logic
- Managing memory, retries, and failure handling
- Visualizing graph flow for debugging and optimization
- Combining LangChain tools with LangGraph for complex agents
- Lab: Build and visualize a multi-agent task graph with parallel workflows
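Below is a minimal LangGraph sketch with two nodes sharing typed state; the node logic is deliberately trivial and stands in for the LLM and tool calls you would use in the lab.

```python
# Sketch of a tiny LangGraph graph: two nodes sharing typed state, wired with edges.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    draft: str
    final: str

def draft_node(state: State) -> dict:
    return {"draft": f"Draft answer to: {state['question']}"}

def review_node(state: State) -> dict:
    return {"final": state["draft"] + " (reviewed)"}

builder = StateGraph(State)
builder.add_node("draft", draft_node)
builder.add_node("review", review_node)
builder.add_edge(START, "draft")
builder.add_edge("draft", "review")
builder.add_edge("review", END)

graph = builder.compile()
print(graph.invoke({"question": "What is agentic orchestration?"}))
```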
Session 11: LangSmith for Evaluation and Observability
- Purpose of LangSmith and integration with LangChain/LangGraph
- Tracing model calls, tracking metrics, and analyzing responses
- Comparing different prompt or model versions
- Setting up datasets for structured evaluation
- Best practices for reproducibility and performance monitoring
- Lab: Evaluate chatbot accuracy and trace agent performance using LangSmith
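The sketch below shows the basic LangSmith setup: tracing environment variables plus the @traceable decorator on a plain Python function; the project name, function, and prompt are illustrative.

```python
# Sketch of LangSmith tracing. With the environment variables below set, LangChain
# and LangGraph runs are traced automatically; the @traceable decorator adds tracing
# to plain Python functions as well. Project name is illustrative.
#
#   export LANGSMITH_TRACING=true
#   export LANGSMITH_API_KEY=...
#   export LANGSMITH_PROJECT=genai-course
from langsmith import traceable
from openai import OpenAI

client = OpenAI()

@traceable(name="faq_answer")  # this call shows up as a run in the LangSmith UI
def faq_answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(faq_answer("What does LangSmith trace?"))
```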
Session 12: MCP Protocol and Server Hosting
- Understanding Model Context Protocol (MCP)
- Hosting your own MCP server for custom AI tools
- Integrating MCP with the OpenAI ecosystem
- Lab: Deploy a simple MCP service for external access
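A minimal MCP server sketch using the official Python SDK's FastMCP helper is shown below; the server name and tool are illustrative, and the default stdio transport is assumed.

```python
# Sketch of a minimal MCP server using the official Python SDK's FastMCP helper.
# Run it and connect from an MCP-capable client (for example, a desktop assistant
# or an agent framework that speaks MCP).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("course-demo")

@mcp.tool()
def vacation_days(months_of_service: int) -> float:
    """Return accrued vacation days, assuming 1.5 days per month."""
    return months_of_service * 1.5

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport; HTTP transports are also supported
```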
Session 13: Capstone Project
Work on a final project of your choice with mentor guidance:
- GPT Assistant using LangChain
- RAG app over proprietary/internal content
- Predictive ML model with a user interface
- Project planning, design, and implementation
Session 14: Capstone Presentation + Career Guidance
- Final project presentations and peer feedback
- Code reviews and deployment validation
- GitHub portfolio optimization and presentation tips
- Resume enhancements and positioning for AI/ML roles
- Career paths and next steps in the AI/ML journey
Additional Topics
GitHub Portfolio & Open-Source Contributions
- How to set up a GitHub portfolio for ML projects
- Writing README files and project documentation
- Contributing to open-source ML projects
- Using Git and GitHub for version control
Ready to upskill and lead in the era of AI?
Get hands-on with AI through expert-led, project-based training. Enroll today to transform your skills and accelerate your career in Artificial Intelligence!
If you have questions, please give us a call to talk to an advisor!
