<AI>Devspace

Guide to AI Engineering

📚 A Must-Read (and Build!) list for Aspiring AI Engineers

If you’re serious about becoming an AI engineer, especially one who builds LLM-based applications and agents, then this is your study-and-build roadmap (credit: AIMakerspace). These are the essential topics to master. But don’t just read or watch: build something small for each topic. That’s how you truly internalize complex systems.

  • ✅ Search online for resources (docs, videos, blogs)
  • 🛠️ Build a project or demo
  • 📈 Reflect and evaluate your results
  1. Embeddings and Retrieval-Augmented Generation (RAG)
  • Prompt Engineering best practices
  • Overview of the LLM App Stack
  • Understand embedding models and similarity search
  • Understand Retrieval Augmented Generation = Dense Vector Retrieval + In-Context Learning
  • Build a Python RAG app from scratch
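To internalize similarity search before reaching for a library, it helps to compute it by hand. A minimal sketch with toy 3-d vectors standing in for real embeddings (the vectors and the `top_k` helper are illustrative, not from any particular model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 3-d "embeddings"; a real app would call an embedding model here.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.05, 0.0]
print(top_k(query, docs))  # → [0, 1]
```

This is exactly what a vector database does at scale, with approximate indexes replacing the brute-force sort.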
  2. Industry Use Cases & End-to-End RAG
  • The state of production LLM application use cases in industry
  • Build an end-to-end RAG application
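The end-to-end flow is short enough to sketch without any framework: retrieve, then stuff the results into a prompt. Here keyword overlap stands in for vector search, the final model call is omitted, and the chunk texts are made up for illustration:

```python
import string

def words(text):
    """Lowercased words with punctuation stripped."""
    return set(text.lower().translate(
        str.maketrans("", "", string.punctuation)).split())

def retrieve(query, chunks, k=2):
    """Rank chunks by word overlap with the query (a stand-in for vector search)."""
    q = words(query)
    return sorted(chunks, key=lambda c: len(q & words(c)), reverse=True)[:k]

def build_prompt(query, contexts):
    """Assemble an in-context-learning prompt from retrieved chunks."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context_block}\n\n"
            f"Question: {query}\nAnswer:")

chunks = [
    "Qdrant is a vector database.",
    "LangGraph builds stateful agent graphs.",
    "RAG pairs retrieval with generation.",
]
prompt = build_prompt("What is a vector database?",
                      retrieve("what is a vector database", chunks))
print(prompt)
```

Swap `retrieve` for embedding search and send `prompt` to an LLM, and you have the skeleton of every RAG app in this list.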
  3. Production-Grade RAG with LangGraph
  • Why LangChain, OpenAI, Qdrant, and LangSmith
  • Understand LangChain & LangGraph core constructs
  • Understand (enough) LangGraph and LangSmith
  • Build a RAG system with LangChain and Qdrant
  • How to use LangSmith as an evaluation and monitoring tool for your RAG application
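Before learning Qdrant’s actual client API, it can help to see what a vector store does in miniature. A hypothetical `MiniVectorStore` mimicking the upsert/search shape (this is not Qdrant’s real interface, just the concept behind it):

```python
import math

class MiniVectorStore:
    """Tiny in-memory stand-in for a vector DB like Qdrant (upsert + search)."""

    def __init__(self):
        self.points = {}  # id -> (vector, payload)

    def upsert(self, point_id, vector, payload):
        """Insert or overwrite a point with its vector and metadata payload."""
        self.points[point_id] = (vector, payload)

    def search(self, query, limit=1):
        """Return the `limit` points most similar to the query vector."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) *
                          math.sqrt(sum(y * y for y in b)))
        ranked = sorted(self.points.items(),
                        key=lambda kv: cos(query, kv[1][0]), reverse=True)
        return [(pid, payload) for pid, (vec, payload) in ranked[:limit]]

store = MiniVectorStore()
store.upsert(1, [1.0, 0.0], {"text": "chunk about embeddings"})
store.upsert(2, [0.0, 1.0], {"text": "chunk about agents"})
print(store.search([0.9, 0.1]))  # → [(1, {'text': 'chunk about embeddings'})]
```

Once this mental model clicks, the real Qdrant client (collections, payload filters, HNSW indexes) is mostly this plus production concerns.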
  4. Production-Grade Agents with LangGraph
  • Answer the question: “What is an agent?”
  • Understand how to build production-grade agent applications using LangGraph
  • How to use LangSmith to evaluate more complex agentic RAG applications
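One honest answer to “what is an agent?” is: an LLM in a loop that decides, acts through tools, and observes the results. A sketch with a scripted `mock_llm` standing in for a real model (the tool, the decision format, and all names here are hypothetical):

```python
def calculator(expression):
    """A tool the agent can call; restricted eval for simple arithmetic only."""
    return eval(expression, {"__builtins__": {}})

def mock_llm(question, observations):
    """Scripted stand-in for an LLM deciding the next action."""
    if not observations:
        return {"action": "calculator", "input": "6 * 7"}
    return {"action": "finish", "input": f"The answer is {observations[-1]}"}

def run_agent(question, tools, max_steps=5):
    """The agent loop: decide -> act -> observe, until the model says finish."""
    observations = []
    for _ in range(max_steps):
        decision = mock_llm(question, observations)
        if decision["action"] == "finish":
            return decision["input"]
        result = tools[decision["action"]](decision["input"])
        observations.append(result)
    return "gave up"

print(run_agent("What is 6 times 7?", {"calculator": calculator}))
# → "The answer is 42"
```

LangGraph’s contribution is making this loop an explicit, inspectable graph with state, persistence, and branching, rather than a hidden `while` loop.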
  5. Multi-Agent Applications
  • Understand what multi-agent systems are and how they operate
  • Build production-grade multi-agent applications using LangGraph
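At its simplest, a multi-agent system is a set of specialist agents whose outputs feed one another under a coordinator. A toy supervisor sketch (the agents here are plain functions standing in for LLM-backed LangGraph nodes; names are illustrative):

```python
def research_agent(task):
    """Specialist 1: gather material (a stand-in for an LLM + search tools)."""
    return f"research notes on {task}"

def writer_agent(task, notes):
    """Specialist 2: turn gathered material into a deliverable."""
    return f"report on {task}, based on: {notes}"

def supervisor(task):
    """Coordinator: route work between specialists and return the final result."""
    notes = research_agent(task)
    return writer_agent(task, notes)

print(supervisor("vector databases"))
# → "report on vector databases, based on: research notes on vector databases"
```

Real supervisor patterns add the hard parts: dynamic routing, shared state, and loops back to a specialist when its output fails review.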
  6. Synthetic Data Generation for Evaluation
  • An overview of Synthetic Data Generation (SDG)
  • How to use SDG for Evaluation
  • Generating high-quality synthetic test data sets for RAG applications
  • How to use LangSmith to baseline performance, make improvements, and then compare
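The shape of an SDG pipeline can be sketched with templates: each source chunk yields a question, a reference answer, and the context a RAG system should be graded against. In practice an LLM writes the questions and answers; the template below is a deliberately crude stand-in:

```python
def generate_testset(chunks):
    """Template-based synthetic Q/A pairs. A real pipeline would prompt an
    LLM to write the question and reference answer from each chunk."""
    testset = []
    for chunk in chunks:
        subject = chunk.split(" is ")[0]  # naive subject extraction
        testset.append({
            "question": f"What is {subject}?",
            "reference": chunk,   # ground-truth answer for grading
            "context": chunk,     # the chunk retrieval should surface
        })
    return testset

chunks = ["Qdrant is a vector database.", "LangGraph is an agent framework."]
for row in generate_testset(chunks):
    print(row["question"])
# → What is Qdrant?
# → What is LangGraph?
```

The value of SDG is that this test set exists before any users do, so you can baseline in LangSmith, change the pipeline, and compare runs.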
  7. RAG and Agent Evaluation
  • Build RAG and Agent applications with LangGraph
  • Evaluate RAG and Agent applications quantitatively with the RAG Assessment (RAGAS) framework
  • Use metrics-driven development to improve agentic applications, measurably, with RAGAS
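RAGAS metrics boil down to scoring generated answers and retrieved contexts against references. A simplified analogue of a context-recall-style metric, using word overlap where RAGAS uses LLM judgments (this is the idea, not the actual RAGAS formula):

```python
def context_recall(reference, retrieved_contexts):
    """Fraction of reference sentences supported by at least one retrieved
    context (support = >= 50% word overlap; a crude stand-in for an LLM judge)."""
    sentences = [s.strip() for s in reference.split(".") if s.strip()]

    def supported(sentence):
        words = set(sentence.lower().split())
        return any(len(words & set(c.lower().split())) / len(words) >= 0.5
                   for c in retrieved_contexts)

    return sum(supported(s) for s in sentences) / len(sentences)

reference = "Qdrant stores vectors. It is written in Rust."
contexts = ["qdrant stores vectors efficiently"]
print(context_recall(reference, contexts))  # → 0.5, one of two sentences covered
```

Metrics-driven development means tracking a number like this across pipeline changes, so “better retrieval” is a measurement rather than a feeling.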
  8. Advanced Retrieval Strategies for RAG Apps
  • Understand how advanced retrieval and chunking techniques can enhance RAG
  • Compare the performance of retrieval algorithms for RAG
  • Understand the fine lines between chunking, retrieval, and ranking
  • Learn best practices for retrieval pipelines
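Chunking is the first lever in any retrieval pipeline. A minimal fixed-size chunker with overlap, so text that straddles a boundary still appears whole in some chunk (sizes here are tiny for readability; real pipelines chunk by tokens or sentences):

```python
def chunk_text(text, chunk_size=5, overlap=2):
    """Split text into fixed-size word chunks, each overlapping the previous
    by `overlap` words."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

print(chunk_text("one two three four five six seven eight"))
# → ['one two three four five', 'four five six seven eight']
```

Comparing retrieval algorithms fairly means holding chunking constant first; many “retrieval wins” in practice turn out to be chunking wins.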
  9. Advanced Agentic Reasoning
  • Discuss best-practice use of reasoning models
  • Understand planning and reflection agents
  • Build an Open-Source Deep Research agent application using LangGraph
  • Investigate evaluating complex agent applications with the latest tools
  10. OpenAI Agents SDK
  • Understand the suite of tools for building agents with OpenAI and the evolution of their tooling
  • Core constructs of the Agents SDK and comparison to other agent frameworks
  • How to use monitoring and observability tools on the OpenAI platform
  11. Contextual Retrieval
  • Understand how Contextual Retrieval = Contextual Embeddings + Contextual BM25 works, and how it differs from traditional and hybrid RAG approaches
  • Discuss the pros and cons of using a metadata filtering approach vs. Contextual Retrieval
  • Build a Contextual Retrieval application and test its performance on different data
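The core move in Contextual Retrieval is to prepend chunk-situating context (normally LLM-generated) before indexing, so both the embeddings and the BM25 index can match terms the bare chunk never mentions. A toy illustration, with a keyword count standing in for BM25 and a hand-written summary standing in for the generated context:

```python
def contextualize(chunk, situating_context):
    """Prepend document-level context to a chunk before indexing it.
    In the real technique, an LLM writes `situating_context` per chunk."""
    return f"{situating_context} {chunk}"

def keyword_score(query, text):
    """Crude stand-in for BM25: count query terms present in the text."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split()))

situating_context = "From the ACME Q2 2024 earnings report:"
chunk = "Revenue grew 3% over the previous quarter."
indexed = contextualize(chunk, situating_context)

# The bare chunk never mentions ACME, so a keyword query for the company
# barely matches it; the contextualized version matches better.
query = "acme revenue growth"
print(keyword_score(query, chunk), keyword_score(query, indexed))  # → 1 2
```

Metadata filtering attacks the same problem from the other side: instead of enriching the chunk text, it narrows the search space before scoring, which is cheaper but only works when the filter fields are known up front.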
  12. Code Agents, Coding Agents, and Computer Use Agents
  • Defining code agents, coding agents, and computer use agents
  • Understand the suite of tools for building code agents with Hugging Face’s Smol Agents library
  • Understand the landscape of coding and computer use agents
  13. Production Endpoints
  • Discuss the important production-ready capabilities of LangChain under the hood
  • Understand how to deploy open LLMs and embeddings to scalable endpoints
  • Discuss how to choose an inference server
  • Build an enterprise RAG application with LCEL
  14. Deploying Applications to APIs and LLM Ops
  • Defining LLM Operations (LLM Ops)
  • Learning how to monitor, visualize, debug, and interact with your LLM applications with LangSmith and LangGraph Studio
  • Deploy your applications to APIs directly via LangGraph Platform
  15. On-Prem RAG and Agent Applications
  • Introduction to Building On-Prem
  • Hardware & Compute Considerations
  • Local LLM & Embedding Model Hosting Comparison
  • How to build and present an On-Prem Solution to stakeholders
  16. Caching, Versioning, Guardrails, Protocols
  • How to use Prompt caching
  • How to build/iterate on version-controlled Prompt and Tool Libraries for your engineering team
  • Introduction to Model Context Protocol (MCP) and Agent2Agent (A2A) Protocols
  • The role of MCP servers: inside vs. outside the enterprise, developer vs. consumer products
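Client-side response caching is the simplest way to get a feel for why prompt caching pays off: identical requests should never hit the model twice. A sketch keyed on a hash of the full prompt (provider-side prompt caching instead reuses shared prompt *prefixes* server-side; `fake_model` and all names here are hypothetical):

```python
import hashlib

_cache = {}

def cached_completion(system_prompt, user_message, call_model):
    """Return a cached response when the exact same prompt was seen before;
    otherwise call the model and remember the result."""
    key = hashlib.sha256(
        (system_prompt + "\x00" + user_message).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(system_prompt, user_message)
    return _cache[key]

calls = []
def fake_model(system, user):
    """Stand-in for an LLM API call; records that it was invoked."""
    calls.append(user)
    return f"echo: {user}"

print(cached_completion("You are helpful.", "hi", fake_model))  # model called
print(cached_completion("You are helpful.", "hi", fake_model))  # cache hit
print(len(calls))  # → 1
```

Version-controlled prompt libraries extend the same idea: a prompt’s hash identifies it exactly, so teams can pin, diff, and roll back prompts like any other artifact.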

💡 Final Tip: Treat this list as your personal AI engineering syllabus. Whether you're aiming for OpenAI, LangChain, or your own LLM product, mastering these topics gives you a serious edge.

Posted by chitra.rk.in@gmail.com · 6/23/2025