LangChain | Save Tokens with Caching & FakeLLM

Published: 06 August 2023
on channel: Python 360
516 views · 3 likes

LangChain provides a FakeListLLM class that can be used to mock out calls to a real LLM. This is useful for testing pipelines before switching to a real LLM.

"The FakeListLLM class that lets you mock LLM responses. You pass it a list of responses and it will return those sequentially. When are ready to use your code in production simply set the LLM to the one you need."

Here is a quick demo, plus a look at the performance improvement (and money saving!) from caching.
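In LangChain itself, caching is typically switched on globally (e.g. with an in-memory LLM cache; see the docs link below for current import paths). The token-saving mechanism is simple to sketch in plain Python: key responses by prompt, and only hit the real LLM on a cache miss. Everything below is illustrative, not LangChain's implementation:

```python
class EchoLLM:
    """Stand-in for a real, expensive LLM (illustrative only)."""
    def invoke(self, prompt):
        return f"answer to: {prompt}"


class CachingLLM:
    """Wraps an LLM with a simple in-memory cache keyed by prompt."""
    def __init__(self, llm):
        self.llm = llm
        self.cache = {}
        self.real_calls = 0  # how many times we actually paid for tokens

    def invoke(self, prompt):
        if prompt in self.cache:
            return self.cache[prompt]  # cache hit: free and fast
        self.real_calls += 1
        result = self.llm.invoke(prompt)
        self.cache[prompt] = result
        return result


cached = CachingLLM(EchoLLM())
cached.invoke("What is LangChain?")
cached.invoke("What is LangChain?")  # second call served from cache
print(cached.real_calls)  # -> 1
```

Repeated identical prompts cost tokens only once, which is where both the speed-up and the money saving come from.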

Link : https://python.langchain.com/docs/mod...
Playlist :    • OpenAI & LangChain & chatGPT  

Code seen in video : https://github.com/RGGH/LangChain_101...

Become a patron : 🌏   / drpi  
Buy me a coffee (or Tea) ☕ https://www.buymeacoffee.com/DrPi

If you want a fast VPS server with Python installed check out :
https://webdock.io/en?maff=wdaff--170

Thumbs up yeah? (cos Algos..)

#langchain #OpenAI #SaveTokens
