👉 Edited version of this stream: • Local GenAI LLMs with Ollama and Dock...
Learn how to run your own local ChatGPT clone and GitHub Copilot clone by setting up Ollama and Docker's "GenAI Stack" to build apps on top of open source LLMs and closed-source SaaS models (GPT-4, etc.). Matt Williams is our guest to walk us through all the parts of this solution and show us how Ollama makes it easier to set up custom LLM stacks on Mac, Windows, and Linux.
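If you want to follow along with the stream, here's a minimal sketch of the workflow we cover, assuming you've already installed Ollama and Docker; the `llama2` model is just an example, swap in any model from the Ollama library:

```shell
# Pull an open source model and chat with it locally (assumes Ollama is installed)
ollama pull llama2
ollama run llama2 "Explain containers in one sentence."

# Ollama also serves a local HTTP API (default port 11434) that apps can build on:
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Hello"}'

# Docker's GenAI Stack ties Ollama together with a database and sample LangChain apps:
git clone https://github.com/docker/genai-stack
cd genai-stack
docker compose up
```

These commands are illustrative; see the stream for the full walkthrough and configuration details.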
🗞️ Sign up for my weekly newsletter for the latest on upcoming guests and what I'm releasing: https://www.bretfisher.com/newsletter/
Matt Williams
============
/ technovangelist
Nirmal Mehta
============
/ nirmalkmehta
/ normalfaults
https://hachyderm.io/@nirmal
Bret Fisher
=========
/ bretefisher
/ bretfisher
https://www.bretfisher.com
Join my Community 🤜🤛
================
💌 Weekly newsletter on upcoming guests and stuff I'm working on: https://www.bretfisher.com/newsletter/
💬 Join the discussion on our Discord chat server / discord
👨‍🏫 Coupons for my Docker and Kubernetes courses https://www.bretfisher.com/courses/
🎙️ Podcast of this show https://www.bretfisher.com/podcast
Show Music 🎵
==========
waiting music: Jakarta - Bonsaye https://www.epidemicsound.com/track/Y...
intro music: I Need A Remedy (Instrumental Version) - Of Men And Wolves https://www.epidemicsound.com/track/z...
outro music: Electric Ballroom - Quesa https://www.epidemicsound.com/track/K...