# LangChain with Gemini AI Integration

${\textsf{\color{lightgrey}Work in Progress}}$

Problem Statement: Large language models (LLMs) typically generate output from the static information captured during training (their knowledge cutoff). This is limiting when we want the model to answer questions about our own dataset or to build a bespoke LLM-powered application.

LangChain is a powerful library designed to streamline the development and deployment of applications built on LLMs. It provides a suite of tools and frameworks to manage and utilise LLMs efficiently, enabling developers to create applications that can understand and generate human-like text.

We will explore how it works, and how it can be integrated with Gemini AI for enhanced capabilities.

## Prerequisites

  1. Python Environment - Python 3.9 or later (recent LangChain releases no longer support older versions). A virtual environment (venv) is recommended for managing dependencies.

  2. LangChain and Gemini Setup

  • Install LangChain: `pip install langchain`
  • Install LangChain’s Gemini integration package: `pip install langchain-google-genai`
  • Create an API key in Google AI Studio or the Google Cloud Console and set it as the `GOOGLE_API_KEY` environment variable.
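The setup steps above can be sketched as a short shell session (the package names are from the list above; the API-key value is a placeholder you must replace with your own key):

```shell
# Create and activate a virtual environment (optional but recommended)
python -m venv .venv
source .venv/bin/activate

# Install LangChain and its Gemini integration package
pip install langchain langchain-google-genai

# Make the key from Google AI Studio available to the libraries
export GOOGLE_API_KEY="your-api-key-here"
```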

## Getting Started

  1. Clone the repository:

     ```shell
     git clone https://github.com/aeyage/exp-geminixAI.git
     ```

  2. Execution

  • Download the .ipynb file
  • Run it online on Colab or use Jupyter Notebook to run it locally
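For a local run, the steps above amount to the following (this assumes Jupyter is already installed; `pip install notebook` adds it if not):

```shell
# Clone the repository and start Jupyter from inside it
git clone https://github.com/aeyage/exp-geminixAI.git
cd exp-geminixAI
jupyter notebook   # then open the .ipynb file in the browser UI
```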

## Usage

Modify the notebooks as you see fit to interact with Gemini. They can serve as a starting point for building chatbots, search engines, calculators, and similar tools.
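As an illustration of the kind of interaction the notebooks contain, here is a minimal sketch of calling Gemini through LangChain. The model name `gemini-1.5-flash` and the prompt are examples, not taken from this repository, and the snippet skips the call gracefully if the package or API key is missing:

```python
import os

try:
    # Provided by the `langchain-google-genai` package installed above
    from langchain_google_genai import ChatGoogleGenerativeAI
except ImportError:
    ChatGoogleGenerativeAI = None

if ChatGoogleGenerativeAI is None:
    print("langchain-google-genai is not installed; skipping the demo call")
elif not os.environ.get("GOOGLE_API_KEY"):
    print("GOOGLE_API_KEY is not set; skipping the demo call")
else:
    # Example model name; any Gemini chat model identifier works here
    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
    reply = llm.invoke("In one sentence, what is LangChain?")
    print(reply.content)
```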

## License

This project is licensed under the GPL-3.0 license.
