
Add conversation memory #9

Closed
zaldivards opened this issue Jun 27, 2023 · 0 comments
Assignees
Labels
enhancement New feature or request

Comments

@zaldivards
Owner

No description provided.

@zaldivards zaldivards added the enhancement New feature or request label Jun 27, 2023
@zaldivards zaldivards self-assigned this Jun 27, 2023
zaldivards added a commit that referenced this issue Jun 29, 2023
* Add `memory` module

* Add custom prompt templates

* Update dependencies

* Update `Redis` chat memory

It is now a `ConversationBufferWindowMemory` with a window of k=5 messages
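
A rough sketch of this change, assuming the 2023-era LangChain API; the session id and Redis URL are placeholders:

```python
from langchain.memory import ConversationBufferWindowMemory
from langchain.memory.chat_message_histories import RedisChatMessageHistory

# placeholder session id and Redis URL; the real values come from the app settings
history = RedisChatMessageHistory(session_id="default", url="redis://localhost:6379/0")
memory = ConversationBufferWindowMemory(
    k=5,                       # keep only the last 5 exchanges in the window
    chat_memory=history,       # back the buffer with Redis
    memory_key="chat_history",
    return_messages=True,
)
```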

* Add `CONTEXTQA_PROMPT`

This prompt is based on `langchain.chains.conversational_retrieval.prompts.CONDENSE_QUESTION_PROMPT`.
However, I added an extra description and examples
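
A hypothetical reconstruction of the shape of such a prompt; the actual description and examples live in the repo:

```python
from langchain.prompts import PromptTemplate

# hypothetical template text; only the {chat_history}/{question} structure is
# implied by basing it on CONDENSE_QUESTION_PROMPT
_template = """Given the following conversation and a follow-up question, rephrase
the follow-up question to be a standalone question.

Chat History:
{chat_history}
Follow-up input: {question}
Standalone question:"""

CONTEXTQA_PROMPT = PromptTemplate.from_template(_template)
```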

* Update `LLMContextManager`

Now the main chain is `ConversationalRetrievalChain`. Additionally,
chat memory and a custom prompt were added
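
A minimal sketch of wiring those pieces together; `llm`, `vectorstore`, and `memory` are assumed to be built elsewhere in the manager:

```python
from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,                                  # built elsewhere
    retriever=vectorstore.as_retriever(),     # built elsewhere
    memory=memory,                            # the Redis-backed window memory
    condense_question_prompt=CONTEXTQA_PROMPT,
)
result = chain({"question": "What does the document say about X?"})
```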

* Add Redis to the docker compose files

* Rename `CONTEXTQA_PROMPT` to `CONTEXTQA_RETRIEVAL_PROMPT`

* Update `memory` module

Now `Redis` and `RedisSummaryMemory` expect a session identifier.
This was added to keep chat histories isolated between conversations
with and without context
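
A sketch of how that isolation might look; `get_history` is a hypothetical helper name:

```python
from langchain.memory.chat_message_histories import RedisChatMessageHistory

def get_history(session_id: str) -> RedisChatMessageHistory:
    # hypothetical helper: distinct session ids (e.g. "default" vs "context")
    # map to separate Redis keys, so the two chat histories never mix
    return RedisChatMessageHistory(session_id=session_id, url="redis://redis:6379/0")
```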

* Add memory to conversations with no context

* Remove unused modules

* Update llm query endpoints

Now they are accessed through POST requests
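
A sketch of the POST shape, reusing the `LLMQueryRequestBody` model named in a later entry; the route path and chain are assumptions:

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class LLMQueryRequestBody(BaseModel):
    message: str

@router.post("/qa")  # hypothetical route
async def qa(body: LLMQueryRequestBody):
    answer = chain.run(body.message)  # chain built elsewhere
    return {"answer": answer}
```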

* Add openapi descriptions

* Fix typo

* Update readme

closes #9

* Update menubar labels

* Update readme

* Update `ChatBox` header

* Update readme
zaldivards added a commit that referenced this issue Jun 29, 2023
* Social media and pydantic parser (#1)

* Update `get_people_information` function

Now it handles "LinkedIn" and "Twitter"

* Add `get_user_tweets` function

It gets the tweets for the username found by the agent
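
A plausible sketch, assuming tweepy is the Twitter client; credentials and the function body are placeholders:

```python
import tweepy  # assumption: tweepy is the client used

def get_user_tweets(username: str, count: int = 5) -> list[str]:
    """Fetch the latest tweets for the username found by the agent."""
    auth = tweepy.OAuth1UserHandler(  # placeholder credentials
        "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET"
    )
    api = tweepy.API(auth)
    return [t.text for t in api.user_timeline(screen_name=username, count=count)]
```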

* Add the `main` function

* Add dependencies

* Add `summary_parser`

* Update agent's prompt templates

* Add the `summary_parser` parser to return a pydantic model
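
A minimal sketch of a pydantic output parser; the model name and fields are hypothetical:

```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class PersonIntel(BaseModel):  # hypothetical name and fields
    summary: str = Field(description="a short summary of the person")
    facts: list[str] = Field(description="interesting facts about the person")

summary_parser = PydanticOutputParser(pydantic_object=PersonIntel)
# these instructions get injected into the agent's prompt template
format_instructions = summary_parser.get_format_instructions()
```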

* Update the `main` function to print the final result

* Add docker-related files

* Add main bash script

* Add readme

* Update main bash script

Added the `--build` flag to the `dockerized` command

* Custom context and api (#2)

* Add `query_document` function

* Update dependencies

* Update project structure

* Update dependencies

* Add the `services` package

* Add `LLMQueryRequestBody` model

* Add fastapi api

* Fix bug in `simple_scan`

The splitter expected a Sequence of Documents instead
of a raw string
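
The likely shape of the fix, sketched with placeholder splitter settings:

```python
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter

raw_text = "..."  # contents loaded from the uploaded file
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
# split_documents expects a sequence of Documents, so wrap the raw string first
chunks = splitter.split_documents([Document(page_content=raw_text)])
```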

* Add endpoints' response models

* Update the main bash script

Updated the entrypoint command to execute `uvicorn`

* Update readme

* App settings (#3)

* Add `AppSettings` class

* Remove unused module

* Add `AppSettings` usage
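
A sketch of such a settings class using pydantic v1 (current when this was written); the fields are hypothetical:

```python
from pydantic import BaseSettings  # pydantic v1 style

class AppSettings(BaseSettings):
    # hypothetical fields; the real ones live in the repo
    debug: bool = False
    redis_url: str = "redis://localhost:6379/0"

settings = AppSettings()  # values are read from environment variables
```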

* Document query (#4)

* Add `document_scan` function

This function will pass the best context in the given document
to the LLM
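
One plausible sketch of "best context" selection via similarity search; `vectorstore` and `k` are assumptions:

```python
def document_scan(question: str, vectorstore) -> str:
    """Hypothetical sketch: pick the chunks most similar to the question."""
    docs = vectorstore.similarity_search(question, k=4)
    return "\n\n".join(doc.page_content for doc in docs)
```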

* Update api

Added `query_document` endpoint and an independent
router for the context-based endpoints

* Update dependencies

* Update import order to avoid circular dependencies

* Add `check-envs` command

* Fix bugs

* Update `LLMQueryDocumentRequestBody` model

Replaced the `similarity_processor` type from `Literal` to `Enum`

* Fix bug in `query_document` endpoint

Each property needed to be a `Form` type to work properly
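
A sketch of both changes: an `Enum` type plus `Form` parameters, which FastAPI requires alongside a file upload in a multipart request. Route path, enum members, and the upload field are assumptions:

```python
from enum import Enum
from fastapi import APIRouter, File, Form, UploadFile

router = APIRouter()

class SimilarityProcessor(str, Enum):  # hypothetical members
    local = "local"
    pinecone = "pinecone"

@router.post("/qa/document")  # hypothetical route
async def query_document(
    question: str = Form(...),
    similarity_processor: SimilarityProcessor = Form(...),
    document: UploadFile = File(...),
):
    return {"processor": similarity_processor}
```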

* PDF query support (#5)

* Add `VectorStoreParams` class

* Update `_VECTORSTORE` map

Added `VectorStoreParams` objects as values

* Fix bug

* Add `pdf_scan` function

* Add `RetrievalQA` usage
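
A minimal sketch of `RetrievalQA` usage; `llm` and `vectorstore` are assumed to come from elsewhere:

```python
from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,                               # built elsewhere
    chain_type="stuff",                    # put all retrieved chunks in one prompt
    retriever=vectorstore.as_retriever(),
)
answer = qa_chain.run("What is this PDF about?")
```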

* Update dependencies

* Fix bugs

Updated file creation to binary ("b") mode

* Add `query_pdf` endpoint

* Update docstrings

* Update dependencies

* Update `pdf_scan` function

Added pinecone initialization
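
A sketch using the pre-v3 pinecone-client API current in mid-2023; the setting names are assumptions:

```python
import pinecone  # pinecone-client, pre-v3 API

pinecone.init(
    api_key=settings.pinecone_token,   # hypothetical setting names
    environment=settings.pinecone_env,
)
```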

* Local persistence (#6)

* Update pydantic models

* Add `context` module

* Add `set_context` endpoint

* Update dependencies

* Add `LLMContextManager` class

* Add `context_router` router

Added the `query` endpoint

* Fix bugs

* Fix bugs

There was an error because the LOCAL_STORE_HOME was
not being created
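
The likely shape of the fix, with a placeholder location:

```python
from pathlib import Path

LOCAL_STORE_HOME = Path.home() / ".contextqa"  # hypothetical location
# create the directory up front so writes into it cannot fail
LOCAL_STORE_HOME.mkdir(parents=True, exist_ok=True)
```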

* Contextqa client (#7)

* Add base gui structure

* Add `primevue` dependencies

* Add custom components

Added Chat and ChatCard components

* Add proxy

* Add `ContextManager` view

This view will set the context for a conversation session

* Add `MessageAdder` component

It handles the user questions and emits the corresponding
events so parent components can continue with the flow

* Update the `Chat` view

Now the messages are rendered dynamically

* Update dependencies

Added vue-router

* Add vue-router usage

* Update `ChatCard` component

Added datetime

* Update `ContextManager`

Added `remove` callback to the `FileUpload` component

* Update `ContextManager`

Added InputNumber fields

* Add `client` module and toast service

* Update client and server urls

* Update `ContextManager`

Added toast messages and the context request

* Add vuex dependency

* Add state management

* Add `askLLM` function

This is the main function to query the llm

* Update `Chat` component

Added support to query the llm server

* Update vuex store

* Update and add utilities to properly render the progress bar

* Add global font style

* Fix bug in state

The `lastMessageText` state was being rendered for all the
component instances.

* Update main input text

* Add autoscrolling and custom styles

* Fix bug in state

The identifier status changed just by selecting the file to upload.
Now it is only updated when the POST request is made and succeeds

* Update `MessageAdder`

Shift+Enter no longer triggers the onEnter event

* Fix bug related to the cards sizes

* Update `ContextManager`

Now, if the document has not been set as the context, the chat
cannot be used

* Remove unused code

* Fix bug when the `Chat` component is re-created

* Remove the textarea outline when it's disabled

* Update `ChatCard`

Now it scans the LLM response to properly format code blocks

* Fix z-index bug

* Update the `client` module

Added validations to handle non-successful server responses

* Fix z-index bug

* Add logo and title

* Add `Home` view

* Update readme

* Update some strings

* Update `ContextManager`

Added dropdown to choose the vector store. Additionally, a new
state was added as the chosen vector store needs to be available
in the `Chat` component

* Add the `ConfirmationService` to use dialogs

* Add `ConfirmDialog` usage

For this to work, a setup script block is needed.

* Update the menubar

Added a custom "end" template to show the current context set

* Fix bugs

- `handleResponse` had not been awaited
- invalid payload for the `lastMessageText` state

* Update project name

* Add a tooltip to the logo

* Add docker-related files for the client

* Update project structure

* Update query request

Now the processor is added from the state store

* Add API_BASE_URL for dev and prod

* Update api

- The source documents found are now logged when the
DEBUG flag is true
- Added cors middleware

* Update Dockerfiles

* Add docker compose files for dev and prod

* Update main script

- Removed previous commands
- Added new commands: `start` and `restart`

* Update api

Added `debug` setting

* Add commands starting messages

* Update readme

Added usage examples

* Update MenuBar chat options

* Add `chat` model

Added new endpoint to ask anything to the llm without
setting any document as context

* Add `Chat` and `DocumentQA` views

* Update `ChatBox`

Added `promise` method to request the corresponding
endpoint based on the context requirement

* Fix bugs related to messages state

* Update home message

* Update readme

* Update readme

* Update `Home`

Added contextqa title image

* Add `contextqa.env`

* Update contextqa image

* Update api

Renamed the `retriever` package to `contextqa`

* Update api settings

Added default values for optional settings

* Update chat cards

Now avatars have images instead of icons

* Fix bug

* Add focus to the textarea after a response

* Update menubar

Now the contextqa title is shown in the bar

* Update `ChatCard` sent date

* Update menubar

* Update readme

* Fix bug in the messages sent date

When the ChatBox was re-created, each message card displayed the
same sent date. This issue was resolved by changing the sent date
from being a computed property to a prop

* Renamed contextqa client root directory to **client**

* Add header in the `settings` section

* Fix `FileUpload` layout

* Update menubar

Now the set vector store is also shown

* Add docker restart strategy

* Update main script

Added `shutdown` command

* Update readme

* Add gitattributes

* Update gitattributes

* Fix typo

* Pinecone support (#10)

* Update dev api entrypoint

* Add "health" endpoint

* Add `PineconeManager`

* Add exception handling when using a non-local vector store

* Fix bug

The `similarity_processor` parameter was being sent a hardcoded value

* Fix bug in the exception handling when a request is not successful

* Update `context/set` endpoint

Added an alternative response when a connection with an
external vector store is not successful

* Update the common response message when a request does not succeed on the server

* Fix bug

Removed the double await call on the response

* Fix bug in the formatting of code blocks

* Update `Pinecone` manager

Added index namespace based on the filename
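
A sketch of per-file namespacing with LangChain's Pinecone wrapper; the index name and `embeddings` are assumptions:

```python
from langchain.vectorstores import Pinecone

store = Pinecone.from_existing_index(
    index_name="contextqa",   # hypothetical index name
    embedding=embeddings,     # built elsewhere
    namespace=filename,       # one namespace per ingested file
)
```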

* Redis chat history (#11)

* Add `memory` module

* Add custom prompt templates

* Update dependencies

* Update `Redis` chat memory

It is now a `ConversationBufferWindowMemory` with a window of k=5 messages

* Add `CONTEXTQA_PROMPT`

This prompt is based on `langchain.chains.conversational_retrieval.prompts.CONDENSE_QUESTION_PROMPT`.
However, I added an extra description and examples

* Update `LLMContextManager`

Now the main chain is `ConversationalRetrievalChain`. Additionally,
chat memory and a custom prompt were added

* Add Redis to the docker compose files

* Rename `CONTEXTQA_PROMPT` to `CONTEXTQA_RETRIEVAL_PROMPT`

* Update `memory` module

Now `Redis` and `RedisSummaryMemory` expect a session identifier.
This was added to keep chat histories isolated between conversations
with and without context

* Add memory to conversations with no context

* Remove unused modules

* Update llm query endpoints

Now they are accessed through POST requests

* Add openapi descriptions

* Fix typo

* Update readme

closes #9

* Update menubar labels

* Update readme

* Update `ChatBox` header

* Update readme

* Update github url

* Fix bugs in the regex for code blocks
zaldivards added a commit that referenced this issue Jul 11, 2023

* Add assistant with internet access (#13)

* Update dependencies

* Add the `tools` module

* Add `CONTEXTQA_AGENT_TEMPLATE`

* Add an extra setting to enable internet access for the assistant

* Update `searcher` tool

* Added exception handling
* It searches 5 different sites sequentially and chooses the first one that
contains more than 100 words
* Added logger
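
A hypothetical sketch of the tool described by the list above; `get_search_results` and `fetch_page_text` are invented helper names:

```python
import logging

logger = logging.getLogger(__name__)

def searcher(query: str) -> str:
    """Try up to 5 result pages in order; keep the first with enough text."""
    for url in get_search_results(query)[:5]:  # hypothetical helper
        try:
            text = fetch_page_text(url)        # hypothetical helper
        except Exception as err:
            logger.warning("Skipping %s: %s", url, err)
            continue
        if len(text.split()) > 100:            # enough content to be useful
            return text
    return "No suitable result was found"
```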

* Update memory configuration based on the `enable_internet_access` flag

* Add `CONTEXTQA_AGENT_TEMPLATE` and a custom prefix for the agent

* Add `get_llm_assistant` function

This function returns an assistant with or without internet access
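
A sketch of that branching, assuming the 2023-era LangChain agent API; `search_tool`, `llm`, and `memory` come from elsewhere:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chains import ConversationChain

def get_llm_assistant(internet_access: bool):
    """Return an agent with the search tool, or a plain conversation chain."""
    if internet_access:
        return initialize_agent(
            tools=[search_tool],  # the `searcher` tool wrapped as a LangChain Tool
            llm=llm,
            agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
            memory=memory,
        )
    return ConversationChain(llm=llm, memory=memory)
```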

* Update `ChatCard`

Now URLs in markdown format are rendered properly

* Add switch to enable internet access

* Update memory chat

Now some of the "generated" parameters depend on
the `internet_access` flag

* Update `qa_service` function

* The `qa` endpoint now expects the optional `internet_access` flag
* Added chat memory usage depending on that flag
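
A sketch of the endpoint shape implied by the two points above; the request model is hypothetical:

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class LLMRequestBody(BaseModel):  # hypothetical shape
    message: str
    internet_access: bool = False

@router.post("/qa")
async def qa(body: LLMRequestBody):
    assistant = get_llm_assistant(body.internet_access)  # see sketch above
    return {"answer": assistant.run(body.message)}
```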

* Add `internetEnabled` state

* Update Dialog messages

* Fix bug

Added `v-if` usage so the internet switch is only available
in conversations with no context

* Update home's welcome text

* Update Dialog position

* Add extra exception handler in the `search` tool

* Fix typo