Hello all, my name is Krish Naik, and welcome to my YouTube channel. So guys, here is one amazing one-shot video on LangChain, in order to learn generative AI. If you are interested in creating amazing LLM applications or Gen AI powered applications, then this specific video is definitely for you. If you don't know about LangChain, it is a complete framework that will actually help you create Q&A chatbots, RAG applications, and much more.

The most interesting thing is that I will be covering the paid LLM models along with the open-source LLM models, including those hosted on Hugging Face. So we will get to know each and everything about them and how we can specifically use them in LangChain. I hope you enjoy this particular video; please make sure that you watch it till the end. So thank you, let's go ahead and enjoy the series. Before I go ahead, guys, there should be some motivation for me as well, so I will keep the like target at 1000.

Please make sure that you hit like and share with all your friends, and we will also keep a target of 200 comments. 200 comments; I know you will be able to do it. So let's keep that target and understand what all things we are going to learn about LangChain. The second topic we are going to cover in this specific video is the LangChain ecosystem, and how its pieces relate to each other.

Now, if you look at the recent updates in the LangChain documentation, most of the modules revolve around these particular topics. Over here you will see LangSmith, and here you will see LangServe. Talking about LangSmith, I recently made a simple video on it, but I will try to create more videos on it as we go through this specific series.

LangSmith, if I give you some examples, will help you to monitor your application and to debug your application. In short, for whatever MLOps activities are specifically required with respect to monitoring, deploying, debugging, and, as a third point, testing, you can use this amazing module in LangChain called LangSmith.

The best thing is that you will be able to see all the reports and all the analytics very easily within this ecosystem itself, the LangChain ecosystem; there is a dashboard which you will be able to see. How we are going to use this entire technique in some projects, we will see completely end to end. So LangSmith is mostly about, let's say, LLMOps.

This part is required in many, many companies right now, so we will cover that as well. That is the reason I like LangChain: it provides you the entire ecosystem, irrespective of the LLM model. Now coming to the second thing over here, you have LangServe. Let's say you have created your LLM application; you obviously want that entire application in the form of APIs, without writing much code.

Yes, you can write the code from scratch with the help of Flask or some other libraries, but LangServe uses something called FastAPI, and because of FastAPI, creating these APIs becomes very easy. So before deployment, we will understand how, once I have created my own LLM app, I can expose all the services in the form of APIs. That we will see with respect to LangServe.

Now coming to the next thing: there are some amazing and important concepts in LangChain, from data ingestion to data transformation and beyond. Among these, the major topic is chains. We will try to understand chains, and in the next video, once I start the practical implementation, the first thing I am going to cover is chains. I am also going to discuss agents and retrieval. Not only that, you will see there is a concept called LCEL.

The full form of LCEL is LangChain Expression Language. There are a lot of concepts specifically used in LCEL; we will see why it is important while you are building, and what techniques it gives you when you are creating your own generative AI powered application. Along with that, there are three more main topics I will cover while discussing all these things: Model I/O, Retrieval, and Agent Tooling.

These are concepts that you should really know. The main aim of this entire series is not to drill theoretical concepts, but to help you understand how you can create amazing generative AI applications, irrespective of the LLM model. Now see, guys, one of the questions I get from many people: "Hey Krish, I am using this specific LLM model. I don't have OpenAI access, I don't have API access. Tell me, what should I do?" Many people say, "Hey Krish, I don't have a credit card for OpenAI."

"Can you show me some examples with respect to open-source models, like Mistral, or others like Google Gemini? What about open-source models?" See, as I said, LangChain is an amazing framework to build the entire application, and creating the best LLM is already a rat race among the tech giants: Google is competing, Meta is competing, Anthropic is competing, OpenAI is competing.

So you don't even have to worry about this. Whatever models come up in the future, don't worry. The main thing is how you can use LangChain in a generic way to build any kind of application. Whatever model comes, you just need to stay up to date on which model has the best accuracy and so on; the integration part will be completely generic. Now let's go ahead and understand the entire LangChain ecosystem.

So this is the diagram that I have taken from the LangChain documentation. As I already discussed, the first component you see over here, LangSmith, is about observability. With the help of LangSmith, you will be able to do debugging, playground testing, evaluation, annotation, and monitoring. When I say annotation, it is all about creating your own custom dataset, which will be required for fine-tuning while creating your Gen AI powered application.

The next thing is deployment, and LangServe has come out recently. It will expose things in the form of APIs; here it is written, right: "Chains as REST API", for whatever services you are providing. LangChain will soon also come up with a one-click deployment mechanism: once you create the entire API with the help of LangServe, you just need to deploy it. I have already created a couple of videos on LangSmith and LangServe, but don't worry.

In this entire series, I am going to start fresh again, combine new topics, and create projects. The third thing you really need to understand is templates; there are different reference application templates. With respect to LangChain, you need to understand three important things: chains, agents, and retrieval strategies. We will try to understand how these work. And understand, guys, you will be able to get the code in both Python and JavaScript, but my main focus will be Python.

Because in some articles it has already been said that AGI applications are going to be created in Python. Then you also have the integration components; we are going over the ecosystem because all of these things we will discuss in this playlist itself. So here you have Model I/O, here you have Retrieval, here you have Agent Tooling. With Retrieval, you will also have features to read your dataset from different data sources, create vector embeddings, and so on.

In Model I/O, you have various techniques with respect to the Model, Prompt, Example Selector, and Output Parser. Finally, you will also see that we will focus on this amazing thing called the Protocol, which comes under LangChain Core, and here we are going to discuss the LangChain Expression Language. There are some important concepts like parallelization, fallbacks, tracing, batching, streaming, async, and composition; all of these we are going to cover.

Then we will focus on end-to-end projects: how you can use all these concepts together. Understand one thing, guys: when we use LangServe and start creating REST APIs, we will also write our client-side code so that we can access those APIs. All of this, the entire ecosystem, will get completely covered. Trust me.

Again, why am I saying this is important? Because whatever LLM models may come tomorrow, however advanced they may be, LangChain will be a generic framework which will help you build any kind of LLM application. In this video, I will show you how you can create chatbot applications with the help of paid LLM APIs, and we will also see how you can integrate open-source LLMs. You should definitely know both of these ways.

One way to integrate any open-source LLM is through Hugging Face, but as you know, I am focusing more on the LangChain ecosystem, and with respect to Hugging Face I have already uploaded a lot of videos on my YouTube channel on how to call these kinds of open-source LLMs. Since we are working with the LangChain ecosystem, we will try to use the components available in LangChain. As you all know, guys, this is a fresh playlist.

And obviously, my plan is that this month I will focus entirely on LangChain. Many more amazing videos will be coming up, along with end-to-end applications, fine-tuning, and much more. So please make sure we keep a like target for every video; for this video the like target is 1000, with at least 200 comments. And please watch this video till the end, because it is going to be completely practical-oriented. And if you really want to support,

please make sure that you subscribe to the channel and take up a membership plan from my YouTube channel; with those benefits, I will be able to create more videos. So let me quickly go ahead and share my screen. Here is my screen, and in the GitHub repo, which you will find in the description of this video, you will have folders like this. Today is the second tutorial (not the third); in the first one we just understood what all things we are going to learn. This one, though, is where the practice begins.

The real practical implementation starts here. As usual, the first thing we are going to do is create our venv environment. How to create it: conda create -p venv python=3.10. You can take the 3.10 version; I have already shown how to create virtual environments in many videos. Then you will be using a .env file, which will hold my environment variables. In this file I will be putting three important pieces of information. One is the LangChain API key.

The second one is the OpenAI API key, and then there is the LangChain project. You may be thinking that I have left this OpenAI API key exposed. No, it is not; I have changed some of the characters over here, so don't try it out, it will be of no use. The third environment variable I am going to create is my LangChain project name, tutorial1; I have written it over here. The reason I have written this is that whenever I go and look in LangSmith,

I will be able to observe and monitor each and every call from the dashboard itself. How we will use all this, I will discuss as we go. So all of these will be required and used as our environment variables; these are the three parameters, and I have already created my .env file. So let's go ahead and start the coding, and make sure that you code along with me, because this is the future: AI engineering things are coming up. I will just show you initially with a foundation model.

We start with the foundation model; later on, this complexity will keep on increasing. So let's go ahead and start our first code. Now, what is our main aim? What are we trying to do in our first project? Let me discuss it, because these are all things we are going to cover in the future. The first thing we will try to create is a normal ChatGPT-style application. I will not say ChatGPT, but a normal chatbot, and this chatbot will be important: it will help you to create

a chatbot with the help of both paid and open-source LLM models. So this is the chatbot we will be creating. One way is that we will use some paid LLMs; one example I can show with the help of the OpenAI API. The second one that I will try to show, or that you can also use, is the Claude API, which is from a company called Anthropic.

That you can do, and one more I will try with the help of an open-source LLM. See, calling APIs is a very easy task, but since we have so many modules, we are going to use LangChain as suggested. In LangChain we definitely have many modules, and we will see how we can use these modules for different calls. Along with this, whenever we are developing any chatbot application, we will look at what dependencies we specifically have.

Now, if you look at this diagram, you will see there is the Model, the Prompt, and the Output Parser. In this video, I am going to show some of the features with respect to LangSmith, some of the features with respect to chains and agents, and I am also going to use some of the features present in the Model and the Output Parser. All of this combination we are going to use, and that is how I am going to create all the projects that we are doing. The entire set of videos that are going to come up will be much more practical-oriented.

So now, let's start our first chatbot application. Here I will go ahead and write: from langchain_openai import ChatOpenAI, since I am going to use OpenAI. So this is the first thing that we are going to do.

See, these three things will definitely be required. One is ChatOpenAI, or whatever chat model you are going to use; how to call open-source models I will also discuss, but first of all we will start with the OpenAI API itself. So, from langchain_core.prompts I am going to import ChatPromptTemplate. This is the next thing that we are going to use.

ChatPromptTemplate: at any point of time, whenever you create a chatbot, this chat prompt template will be super important. This is where you give the initial prompt template that is required. The third library that I am going to import is: from langchain_core.output_parsers import StrOutputParser.

Now, these three are very important. StrOutputParser is the default output parser for whenever your LLM model gives any kind of response. You can also create a custom output parser; that I will show you in the upcoming videos. With a custom output parser you can do anything with the output that comes: you want to do a split, you want to make it capital letters, anything. You can write your own custom code for this.

But by default, right now, I am going to use just StrOutputParser. Along with this, the next thing I am going to do is import streamlit as st. Then I am going to import os, and since I am also going to use the environment variables, from dotenv import load_dotenv, so that we can load all of them.

So let's see whether everything is working fine or not. Here I am going to run python app.py, just to check that everything works and all our libraries are in place. First I have to go to my chatbot folder: cd chatbot.

Now I will clear my screen and run python app.py again. Sorry, I had written from streamlit as st; I have to write import streamlit as st, and that is the reason it was throwing all these errors. Now let's see if everything is working fine. langchain_core... here you can see that there is a spelling mistake.

But I am just going to keep all the errors like this, so that you will be able to see them. python app.py again, and if everything works fine... output_parsers, with a capital P. I think my suggestion box is not working well, and that is the reason. Now everything is working fine; you can see that I am not getting any error. So let's start our coding and continue. We have imported all these things now.

Now, as I suggested, guys, since we are going to use three environment variables, I will keep these three: the OpenAI API key, LangChain tracing v2 (to capture all the monitoring results), and the LangChain API key. The LangChain API key is what tells LangSmith where the monitoring results need to be stored, so on that dashboard you will be able to see all the monitoring results.

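The environment variables just described can be set like this. In the video they come from the .env file via load_dotenv() rather than being hard-coded; the key values below are placeholders:

```python
import os

# In the video these values come from a .env file via load_dotenv();
# they are hard-coded here only for illustration (keys are placeholders).
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key-here"
os.environ["LANGCHAIN_PROJECT"] = "tutorial1"    # LangSmith project name
os.environ["LANGCHAIN_TRACING_V2"] = "true"      # enable LangSmith tracing
```

With LANGCHAIN_TRACING_V2 set to true, every chain call is traced to the LangSmith project named in LANGCHAIN_PROJECT.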
And tracing we have kept as true, so it is automatically going to do the tracing for any code that I write, and this is not just for paid APIs: with open-source LLMs you will be able to do it too. That is the second step I have done. Now let's go ahead and define my prompt template. Simple. Here I am going to define prompt = ChatPromptTemplate.from_messages,

and here I am going to define my prompt template in the form of a list. The first entry in my prompt template is the system message, and in the system message I say: you are a helpful assistant, please respond to the queries.

"Please respond to the user queries", for whatever queries I am going to ask; a simple prompt, as you can see over here. What comes after this? See, if I am giving a system prompt, I also have to give a user prompt, right? The user prompt will be whatever question I ask. So the next entry will be the user message, and here

I will define something like "Question: {question}". I could also give context if I wanted, but right now I will just give the question; a simple chatbot application, so that you can start your practice of creating all these chatbots. Now I will go ahead and define my Streamlit framework. See, the learning process will be such that I will try to create more projects and use the functionality that is there, and in this way you will be able to work in an amazing way.

So here I am going to write st.title("Langchain Demo With OPENAI API") and st.text_input("Search the topic you want"). Now let us go ahead and call my OpenAI LLM. Here I am going to write llm =, and whenever we use the OpenAI API it will be nothing but ChatOpenAI, and here I give my model name, which will be gpt-3.5-turbo.

I am going to use Turbo because the cost is lower. I have put $5 in my OpenAI account, just to teach you, so please make sure that you support, so that I will be able to explore all these tools and create videos for all of you. And finally, my output parser. See, always remember: LangChain provides you features that you can attach in the form of a chain. So here we have created three main things: one is the chat prompt template, the next is the LLM,

and the next is the output parser. Obviously, the prompt is the first thing we require; after this we integrate it with our LLM, and then finally we get our output. The string output parser is responsible for producing the final output. Finally, chain = prompt | llm | output_parser: we just combine all these things. I will show you going forward how we can customize this entire output parser.

And finally, if input_text: whenever I write any input and press enter, I should get the output. So st.write, and inside it I write chain.invoke, and I give my input as the question, with that input assigned to my input text. So this is what we are going to do.

This is a simple chatbot application, but along with it we have implemented this feature specifically for LangSmith tracking. This will be amazing to use, and these are the recent updates, so whatever code I am writing will be applicable going forward in the various things that are going to come up.

Now let's go ahead and run this. In order to run it, you just need to write streamlit run app.py. Oops, there is an error... app.py, and here I will click allow access. Right now you will see over here "Langchain Series Test LLM", but my project name was tutorial1. So now if I go ahead and type "Hey, hi"

and press enter, you will see that we get this information over here. Here you can see my project; let me reload it: tutorial1. So this is the first request that has been made, and here you will see the RunnableSequence: the chat prompt template with its output message ("You are a helpful assistant. Please respond to the user queries"), along with the ChatOpenAI call, and with respect to this,

you can track what the cost was; everything is visible. So $0.00027 is the cost this call took. And finally, my string output parser: "How can I assist you today?". The output parser just gives me the response cleanly. Later, when I develop my own custom output parser, I will be able to track everything there too. So here you are able to monitor each and every request that comes in. Next query: "Provide me a python code

to swap two numbers." Once I execute this, you will see that I get the output and the answer over here, and for this one the cost is a little higher. Let's see: under tutorial1, the second request I got took 4.80 seconds, yes, a little more time, and the cost was $0.000211.

It is based on the token size; every token carries some cost. Perfect, this was the first part of this particular tutorial. Now let's go to the second part, which is about understanding how you can call open-source LLMs locally and how you can use them. For this, first of all, I will go ahead and download Ollama. Ollama is an amazing tool, because you will be able to run large language models locally.

The best thing about Ollama is that it automatically does the compression, and you will be able to run the models on your local machine. Let's say you have 16 GB of RAM; you will just have to wait some amount of time to get the response. Llama 2 and Code Llama you can specifically use over here, and it supports a lot of open-source LLM models. And yes, in the LangChain ecosystem the integration has also been provided. So what I am going to do over here is show you: first of all, just go ahead and download it. It is available

for Mac, Linux, and Windows; download it wherever you want. After you have downloaded it, you just need to install it: it is a simple exe installer on Windows, with its own installers for macOS and Linux. Just run it and start installing. Once you install it, somewhere in the bottom tray this Ollama will start running. Now, once the Ollama installation is done, what I will do over here is create another file inside my chatbot folder.

So I create another file, locallama.py. In locallama.py, the first thing we are going to do is import some of the libraries. See, the code will be almost the same: there too I was using ChatOpenAI, the ChatPromptTemplate, and the string output parser, so I will copy the same thing and paste it over here.

Now, along with this, I have to import Ollama, because that is how we will be able to use all the downloaded models. So: langchain_community.llms. See, whenever we need to do a third-party integration, it will be available inside langchain_community. Ollama is a third-party configuration; let's say you are using some vector embeddings, that is also third party, so everything of that kind will be available over here. Now this is done: from langchain_community.llms import Ollama.

And then we have this output parser, StrOutputParser, and from langchain_core.prompts the ChatPromptTemplate; everything is there. Now let's go ahead and write import streamlit as st, since I am going to use Streamlit over here, along with import os. Not only that, we will also go ahead and write from dotenv import load_dotenv,

and then we will initialize it: load_dotenv(). Once we initialize all these environment variables, as usual I will be using these three things. Now see, in my previous code, when I was using the OpenAI API, we wrote the prompt template over here. The same prompt template we will also write here; we just need to repeat it.

Because the main thing is that you really need to understand how, with the help of Ollama, I can call any open-source model. So here it is, and then finally you will see where my code to call the LLM goes. This is done. The Streamlit framework I will also call over here; it is mostly copy-paste of the same thing we have already implemented, and then you will also see this is the code we are going to implement.

But here we are calling ChatOpenAI, and I specifically don't want ChatOpenAI; instead, I will be calling Ollama, the library we just imported, and here we are specifically going to call llama2. Now, before calling any model: which models are supported? If you go to the Ollama GitHub, you will see the list of everything it supports, like Llama 2, Mistral,

Dolphin Phi, Phi-2, Neural Chat, Code Llama; almost all are open source. Gemma is also there. But before calling any of these, what you really need to do is go to your command prompt. Let's say I want to use the Gemma model, or the Llama model. In order to do this, I just have to write ollama run followed by the model name, because initially it needs to download the model. It will get downloaded from some open-source location; it can be GitHub, it can be Hugging Face, somewhere

there will be some location, and we have to download that entire model. So let's say I go ahead and write ollama run gemma. What will happen is that it will pull the entire Gemma model from wherever it is, so here you can see the pulling happening. This one is 5.2 GB; for the first run you really need to do this. Since I am writing the code with respect to Llama 2, I have already downloaded that model, which is why I am showing you another example over here: ollama run gemma. Only once this entire download happens

will I be able to use the Gemma model locally with the help of Ollama. So I hope you have got an idea about it. Now, what I am doing here: I have called the Ollama model llama2, then again the output parser is the same, and I am combining the prompt, the LLM, and the output parser. Everything is almost the same, and that is the most amazing thing about LangChain: the code is already generic. You only need to replace the model, OpenAI or another paid one, or open source; it is up to you. Again, I am telling you guys, it also depends on the system you are running on.

has 64 GB of RAM and an NVIDIA Titan RTX, which was gifted by NVIDIA itself. So with this system I should be able to run it quite quickly, at least that is what I feel. Let's go ahead and run it. Since it is a Streamlit app, I am going to write: streamlit run local_llama.py. Once I execute it, here you will be able to see...

Okay, an error: no module named langchain_community. Let's see. I have to make sure that in my requirements.txt I add this langchain_community package, because I need to import that library; that is the reason I am getting the error. So I will go ahead and write: pip install -r requirements.txt. Oops.
First cd .. to the parent folder. Okay, now if I write pip install -r requirements.txt, here you will see my requirements.txt getting installed, and langchain_community will get installed along with it. Once I am done with this, I can go ahead and run my code. This will take some amount of time, so if you are liking this video, please make sure that you hit like; there are many things coming up, and it will be quite amazing when you learn all of them.

Once this is done, you can use any model you like. And notice I don't need the OpenAI key for this one; only these two pieces of information are needed, and I will still be able to track everything. Later on I will also show you how to expose this in the form of APIs. Again, it will take a little time, but let me know what you think of these tutorials. As for LangChain, I see a lot of purpose for this particular library; it is quite amazing
that the company is doing so well in this open source world and developing multiple things over there. So now I will write cd chatbot to go inside my chatbot folder, and then I will run this: python local_llama.py. Once I execute this, I don't think there should be an error. Oops: local_llama.py.
Not python, streamlit run: streamlit run local_llama.py. Now, here I am still showing the OpenAI text in the UI; let me change that as well so it reads correctly for Llama 2. Okay, I have edited it, saved it, and I will rerun. I will type: hey, hi. Once I execute it, you will see it takes some amount of time on my system, even though I have 64 GB of RAM, but I will get the output over here. So the assistant says: Hello,
how can I help you today? Now, if I go over to the LangSmith dashboard... let's see where it is. Under Tutorial One you will see the request count increase; there will be one more entry over here. I have reloaded the page, and you can see the new Llama request: hey, hi; 4.89 seconds; 39 tokens. But there are no charges, because it is an open source model.
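For reference, LangSmith picks up its configuration from environment variables, so requests show up in the dashboard without changing the chain code. A minimal setup looks like the following; the placeholder key is illustrative, and the project name matches the "Tutorial One" project mentioned above:

```python
import os

# LangSmith reads these variables at runtime; once set, every chain
# invocation in this process is traced to the named project.
os.environ["LANGCHAIN_TRACING_V2"] = "true"       # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "ls__your-key"  # placeholder, not a real key
os.environ["LANGCHAIN_PROJECT"] = "Tutorial One"  # dashboard project name

print(os.environ["LANGCHAIN_PROJECT"])  # -> Tutorial One
```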
Right. So here, if I expand this trace, you will see the chat prompt template and then Ollama; this Ollama step is the one specifically calling Llama 2. And for whatever open source model you want to call, it is very simple: you just download the model first by writing ollama run followed by that particular model name, and once it is downloaded you can go ahead and use it. Now I will say:
provide me Python code to swap two numbers. By the way, if you want a more coding-focused chatbot, you can directly use Code Llama instead. So here you can see the example output, and this was quite fast, which is good; if you have the right kind of hardware it really helps. Here you can see it has taken around four seconds. Ollama is shown here, with all the information, prompt and completion
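For that prompt, the kind of answer the model returns is very short; in Python, swapping two numbers needs no temporary variable:

```python
# Swapping two numbers in Python via tuple unpacking.
a, b = 5, 10
a, b = b, a
print(a, b)  # -> 10 5
```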
and all the rest, right over here.

Hello guys, we are going to continue the LangChain series. In our previous video we have already seen how to create chatbots with the help of both the OpenAI API and open source LLM models like Llama 2. We have also seen what Ollama is used for and how you can run all these open source models locally in your system. Along with this, we also created multiple end-to-end projects using both of them. Now what we are going to do in this specific video is one more step,
which is very important for production-grade deployment: creating APIs. For all these kinds of LLM models we will be able to create APIs, and through them you will be able to do the deployment in a very efficient manner. Now, how are we going to create these APIs? There is a very important component in LangChain called LangServe, and we are going to use that. Along with it, we are going to use FastAPI, and not only that, we will also get

a Swagger UI, which is already provided through the LangServe library we are going to use. This specific step is important, guys, because tomorrow, if you are developing any application, you will obviously want to deploy it, and creating the API for that application will be the first task required. So yes, let's continue and discuss this entire thing. First of all,
here is how we are going to go ahead: first I will show you the theoretical intuition of what we are going to develop, and then we will start the coding part. So let me quickly share my screen. Over here you have seen that I have written "APIs for deployment". Consider this diagram; it is the most simplistic diagram I could draw. Now, at the end of the day, in companies there will be many different applications.

These applications are obviously created by software engineers. They can be mobile apps, desktop apps, web apps, and so on. Now, for a particular app, if I want to integrate any foundation model or any fine-tuned foundation model, like LLMs, what I really need to do is integrate it in the form of APIs. At the end of the day, on this side of the diagram are my LLM models themselves; they can be fine-tuned LLM models or foundation models.
I want to use that functionality from my web app or mobile app. So what we do over here is create these specific APIs. Now, these APIs will have routes, and the routes will be responsible for deciding whether we interact with OpenAI or with other LLM models, like Claude 3 or the Llama 2 open source model. So any number of LLM models, whether open source or

paid API models, can definitely be used. That is what we are going to do in this video: we will build each of these pieces separately, and at the end I will also show you how, through routes, you can integrate with multiple LLM models. So I hope you have got an idea about this. However many LLM models come along, you can integrate them, and whichever model is suitable for you, you can use it for different functionality.
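The routing idea in the diagram, one API whose paths fan out to different LLM backends, can be sketched in plain Python. The handler names, paths, and bracketed outputs below are illustrative inventions, not LangServe's actual API:

```python
# Toy router: each path maps to a different model backend, mirroring how
# LangServe attaches one runnable per path on a single API server.

def openai_handler(prompt):
    # A paid-API call (e.g. ChatOpenAI) would go here.
    return f"[openai] {prompt}"

def llama2_handler(prompt):
    # A local open source call (e.g. Ollama with llama2) would go here.
    return f"[llama2] {prompt}"

ROUTES = {
    "/essay": openai_handler,  # paid model behind this path
    "/poem": llama2_handler,   # free local model behind this path
}

def serve(path, prompt):
    """Dispatch a request to whichever backend owns the path."""
    handler = ROUTES.get(path)
    if handler is None:
        raise ValueError(f"no route registered for {path}")
    return handler(prompt)

print(serve("/essay", "write about AI"))  # -> [openai] write about AI
print(serve("/poem", "write about AI"))   # -> [llama2] write about AI
```

The client never knows which model serves which path, which is what lets you pick a different model per task without touching the calling application.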
The reason why I am making this video is this: understand one thing, guys. Among LLMs you also have different performance metrics, and some model is very good at one particular benchmark, like MMLU, while other metrics definitely exist too. So this setup gives us an option to use multiple LLM models, picking the right one per task. Now, quickly, I will open my code. In my previous video you have already seen that I had developed up to this point:

we created the first folder, chatbot, and inside it we created app.py and then local_llama.py. We did that entire thing, and as I said, with every tutorial I will keep creating folders and developing our own project over here. Now let us create the second folder, and this time it will be for the APIs, so I will just name it api. Now, with respect to this API, as I said, whatever we did
in local_llama.py and app.py, where I used the OpenAI API key and an open source model respectively, we will try to integrate both of them in the form of routes, so that we can serve them as one API. So let me quickly create app.py; this file will be responsible for creating all the APIs. The second file will be client.py. In this specific diagram, client.py plays the role of the web app or mobile app,
because we are going to integrate these APIs with a mobile app or web app. So quickly, let's do this. First I will start writing the code in app.py. Before that, we have to make sure we update all the requirements. I have installed almost all the libraries, but I am going to install three more: langserve, fastapi, and uvicorn, since I am going to create the entire Swagger documentation of the API using FastAPI. So all three of these libraries
I will be using. So first of all I will install them; that will happen once we run the command. For now, let's write the app.py code. As usual, let me open my terminal, and in the terminal I will write: pip install -r requirements.txt.

Okay, first I need to write cd .., then I will clear the screen and run pip install -r requirements.txt. Now you will see the entire installation taking place, and the other three packages that I wrote, langserve and the rest, will get installed. While the installation is going on, let me write my code. So I will write: from fastapi
import FastAPI. So this is the first library that I am going to import. Along with this, I have to import my chat prompt template, since we are going to build the entire API around it. So: from langchain.prompts import ChatPromptTemplate.
So this is done. Then: from langchain.chat_models import ChatOpenAI. This is the next one; since I need to create a chat application, that is the reason I am using the chat models.
The next library I will import is LangServe, which will be responsible for creating my entire set of APIs. So: from langserve import add_routes. Through this I will be able to add all the routes; suppose one route needs to interact with the OpenAI API, and another route needs to interact with Llama 2, and so on. Based on that, I go ahead and import this. The next thing is:
import uvicorn, which will be required over here. Next is import os. See, I could enable GitHub Copilot and let it write the code, but I don't think that would be the better way here. I usually use an AI tool called Blackbox, which helps me write my code faster, and
it also explains the code; I will probably create another video about how to use it. Then one more thing that I really want to import over here is Ollama. So: from langchain_community.llms import Ollama. That is done. I have now written all the libraries that my code specifically requires, and these are everything I need to create my OpenAI-backed API.
Now this is done. What I am going to do over here is set os.environ and first initialize my OpenAI API key. So I will write os.environ["OPENAI_API_KEY"], and I will load the value from os.getenv("OPENAI_API_KEY").
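That environment-variable handoff can be shown with plain os calls. The key value below is a placeholder for illustration; in the real app it comes from the .env file via load_dotenv:

```python
import os

# Simulate what load_dotenv would put into the process environment
# (placeholder value, not a real credential).
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

# The pattern from the transcript: read the key back out of the
# environment so downstream clients (ChatOpenAI, etc.) can find it.
key = os.getenv("OPENAI_API_KEY")
os.environ["OPENAI_API_KEY"] = key

print(key)  # -> sk-placeholder
```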
So this is the first thing we really need to do. Before that, I will just go back to my app.py and initialize load_dotenv. Let me quickly copy that import, paste it over here, and call load_dotenv().
This will help me initialize all my environment variables. Perfect; I have also loaded my OpenAI API key. Now let's create the FastAPI app. To create it I instantiate an app; here I have given the title "Langchain Server", version "1.0", and the third piece of information I want is a description. After this, I can use this app and keep adding my routes to it.
So I will write add_routes, which is used to add all the routes. The first time we add a particular route, I have to make sure I give all the information: the app I am attaching it to, the runnable, say this ChatOpenAI instance, and the third piece of information, the path.
So that is one of my routes; you can consider it the OpenAI route, and this is the model it will use. That is the simplest way to add a route. But let me add some more, because at the end of the day, when we created our first application, we combined prompt, LLM, and output parser into a chain. So when we create routes, we also need to add them in such a way that
I also integrate my prompt template with each of them. So here I am going to write model = ChatOpenAI() and initialize that model. Then let me create my other model as well, with Ollama; I will just use Llama 2. I need to create this second model because I want to use multiple models. So here I am going to write llm, and assign Ollama,
and this will take model="llama2". So I am going to use the Llama 2 model: that is one model over here and the other model over there. Now let me quickly create my first prompt. My prompt1 will be a chat prompt template: ChatPromptTemplate.from_template.
And here I am going to give one chat prompt. Let's say for my first interaction I want to use the OpenAI API, and with it I want to create an essay. So I will say: write me an essay about some topic, where the topic is a variable I will be passing in,
with around 100 words. So this is my first prompt template: write me an essay about whatever topic I give, in 100 words. Then I will create my second prompt template; this is important, just hear me out. This second prompt template will be responsible for interacting with my open source model.
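The two templates just described boil down to strings with a {topic} slot; the filling step can be shown with plain Python string formatting, which is what ChatPromptTemplate.from_template does at its core. The exact wording of the poem template is illustrative:

```python
# Two templates with a {topic} variable, one per backend, mirroring the
# essay (OpenAI) and open source (Llama 2) prompts from the transcript.
essay_template = "Write me an essay about {topic} with 100 words"
poem_template = "Write me a poem about {topic} with 100 words"

# Rendering the essay prompt for a concrete topic:
print(essay_template.format(topic="machine learning"))
# -> Write me an essay about machine learning with 100 words
```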