#llm #rag #langchain #llama #ollama
The Python code, instruction manual, and PDF files are available here: https://ko-fi.com/s/05a82fdd6f
In this tutorial, we explain how to build a prototype Retrieval-Augmented Generation (RAG) application in Python from scratch. The application is built on the Ollama framework, the Llama 3.1 Large Language Model (LLM), and the LangChain Python framework.
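Before turning to the frameworks, it helps to see the core RAG loop in plain Python: split documents into chunks, embed each chunk, and retrieve the chunks most similar to the question. The sketch below uses a toy bag-of-words "embedding" and cosine similarity purely for illustration; in the actual tutorial, embeddings come from an Ollama model through LangChain, and the sample chunks here are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding". In the real application this is
    # replaced by an Ollama embedding model called through LangChain.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the query and keep the top k;
    # these retrieved chunks are what augments the LLM prompt.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The quarterly revenue table lists 120 units sold in March.",
    "Llama 3.1 is served locally through the Ollama framework.",
    "The cat sat on the mat.",
]
print(retrieve("How many units were sold in March?", chunks, k=1))
```

A production pipeline swaps the toy embedding for a learned one and stores the vectors in a database, but the retrieve-then-augment structure stays the same.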
The application builds an embedding database from the provided PDF documents containing text, numbers, data, and tables (future tutorials will also explain how to embed images). This database is used to augment the knowledge of the LLM. For example, the RAG application can perform calculations based on the provided table data. It can also understand custom text documents and draw intelligent conclusions from personal data. The techniques you will learn in this tutorial are important for developing personal assistants and automating daily tasks. Generalized to images or even video, the RAG application has a number of engineering and robotics applications. In the video, we run a demonstration of the application.
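To see how a calculation over table data fits into the RAG flow, the sketch below stitches a retrieved table chunk into a prompt template and computes the ground-truth answer with the standard library. The table contents, the template wording, and the helper names are all invented for illustration; in the real application the chunk is retrieved from the embedded PDF database and the answer is produced by Llama 3.1.

```python
import csv
import io

# A hypothetical table chunk as it might look after retrieval from the
# embedded PDF database (illustrative data, not from the tutorial's PDFs).
TABLE_CHUNK = """\
Month, UnitsSold, PricePerUnit
January, 100, 2.50
February, 80, 3.00
"""

# The retrieved context is injected into the prompt sent to the LLM.
PROMPT_TEMPLATE = (
    "Answer the question using only the context below.\n"
    "Context:\n{context}\n"
    "Question: {question}\n"
)

def build_prompt(context: str, question: str) -> str:
    # Augment the user's question with the retrieved table text.
    return PROMPT_TEMPLATE.format(context=context, question=question)

def total_revenue(table_text: str) -> float:
    # Reference calculation the LLM is expected to reproduce:
    # sum of UnitsSold * PricePerUnit over all rows.
    rows = csv.DictReader(io.StringIO(table_text), skipinitialspace=True)
    return sum(float(r["UnitsSold"]) * float(r["PricePerUnit"]) for r in rows)

prompt = build_prompt(TABLE_CHUNK, "What is the total revenue?")
print(prompt)
print(total_revenue(TABLE_CHUNK))  # 100*2.50 + 80*3.00 = 490.0
```

In the full application, `prompt` is passed to the Ollama-served Llama 3.1 model via LangChain, and the reference calculation lets you verify that the model's table arithmetic is correct.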
The video "Create Retrieval-Augmented Generation RAG application in Python From Scratch Ollama Llama LangChain" was uploaded to the Aleksandar Haber PhD channel on 23 September 2024.