- Christopher Tao, in Towards AI: "Do Not Use LLM or Generative AI For These Use Cases" (Aug 10, 2024). Choose correct AI techniques for the right use case families.
- Harshit Tyagi: "Roadmap to Become an AI Engineer" (Apr 30, 2024). Skills, learning resources, and project ideas to become an AI Engineer in 2024.
- Aparna Dhinakaran, in TDS Archive: "Choosing Between LLM Agent Frameworks" (Sep 21, 2024). The tradeoffs between building bespoke code-based agents and the major agent frameworks.
- Kris Ograbek, in AI Advances: "If I started learning AI Engineering in 2024, here's what I would do." (Jun 3, 2024). The exact path I would choose.
- Aparna Dhinakaran, in TDS Archive: "Navigating the New Types of LLM Agents and Architectures" (Aug 30, 2024). The failure of ReAct agents gives way to a new generation of agents and possibilities.
- Han HELOIR, Ph.D. ☕️, in TDS Archive: "The Art of Chunking: Boosting AI Performance in RAG Architectures" (Aug 18, 2024). The key to effective AI-driven retrieval.
- Roy Ben Yosef, in CyberArk Engineering: "How to Run LLMs Locally with Ollama" (Mar 27, 2024). An easy, down-to-earth developer's guide to downloading, installing, and running various LLMs on your local machine.
- Vishal Rajput, in AIGuys: "Prompt Engineering Is Dead: DSPy Is New Paradigm For Prompting" (Jun 19, 2024). The DSPy paradigm: let's program, not prompt, LLMs.
- Zain ul Abideen: "Apple MLX vs Llama.cpp vs Hugging Face Candle Rust for Lightning-Fast LLMs Locally" (Jan 31, 2024). Using Mistral-7B and Phi-2 to compare inference/generation speed across libraries.