
Next-Level Conversations: LLM + VectorDB with Alibaba Cloud Is Customizable and Cost-Efficient

This article explains the benefits of LLM + VectorDB with Alibaba Cloud (with examples).

We recommend using ECS for the backend and front end. If you use an open-source LLM (Large Language Model), it can be hosted on a GPU-based ECS instance or on Platform for AI (PAI). This tutorial covers the entire infrastructure and the retriever, but not the LLM itself; we assume you already have an LLM and its API key. The data is stored in AnalyticDB for PostgreSQL (ADBPG). The high-level architecture is shown below.

(Figure: high-level architecture)

1. Prepare the AnalyticDB for PostgreSQL Environment

1.  Log in to the Alibaba Cloud console and create an AnalyticDB for PostgreSQL instance.

2.  We chose the following for test purposes:

  • Compute Node Specifications: 8 Cores, 32 GB Memory
  • Compute Nodes: 2
  • Compute Node Storage Capacity: 50 GB
  • Configuration Information Summary: 16 Cores, 64 GB Memory, 100 GB ESSD PL1 Total Physical Storage (Single-Copy)

3.  Create a database account to connect to the instance.

4.  Enable "Vector Engine Optimization" to use the instance as a vector database.
5.  Configure an IP address whitelist. We recommend adding the ECS instance where the UI and retrieval plugin will be installed.

6.  Prepare the ECS instance (covered in the next section).

2. ECS Instance

1.  We assume you are already logged in to the Alibaba Cloud console (see section 1, step 1).

2.  Create an ECS instance. We recommend an ecs.g7.2xlarge instance with the following parameters for testing purposes:

  • Hardware: 8 vCPUs, 32 GiB memory
  • Operating System: Ubuntu 22.04 64-bit

3.  Connect to the ECS instance through SSH.

2.1. Environment and Dependencies

2.1.1. Steps to Create a Virtual Environment on Python

  1. Open the command prompt or terminal and navigate to the directory where the virtual environment should be created.
  2. Install the virtualenv package by typing the following command: pip install virtualenv
  3. Once the installation is complete, create a new virtual environment by running the command virtualenv env_name. Replace env_name with a name of your choice.
  4. Activate the virtual environment by typing the command source env_name/bin/activate.
  5. The name of the virtual environment will be displayed in the command prompt or terminal.
  6. Any packages or libraries can now be installed into the environment via pip.
  7. Exit the virtual environment by typing the following command: deactivate.

2.1.2. Install psycopg2 Dependencies

  1. Ensure the dependencies required for building psycopg2 are installed. On Ubuntu, they can be installed using the following command: sudo apt-get install libpq-dev python3-dev
  2. If you use a different operating system, consult the psycopg2 documentation for the required dependencies.
  3. Install psycopg2 using pip: pip install psycopg2-binary
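
To confirm that psycopg2 works and that the ADBPG instance from section 1 is reachable from the ECS instance, a quick connectivity check can be run. This is a minimal sketch only; the connection parameters are placeholders that should be replaced with your own values:

# check_connection.py: verify that the ADBPG instance is reachable (placeholder credentials)
import psycopg2

conn = psycopg2.connect(
    host="<dbhost>",              # ADBPG endpoint from the console
    port=5432,
    dbname="<db>",
    user="<dbuser>",
    password="<dbuser-password>",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()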

2.2. Install Retrieval Plugin: chatgpt-retrieval-plugin

2.2.1. Setup

This app uses Python 3.10 and Poetry for dependency management.

Install Python 3.10 on the machine if it is not already installed. Depending on the system, Python can be downloaded from the official Python website or installed with a package manager like Brew or apt. Activate the virtual environment prepared in section 2.1.1.

1.  Clone the repository from GitHub: git clone https://github.com/openai/chatgpt-retrieval-plugin.git

2.  Navigate to the cloned repository directory: cd /path/to/chatgpt-retrieval-plugin

3.  Install poetry: pip install poetry

4.  Create a new virtual environment that uses Python 3.10:

poetry env use python3.10
poetry shell

5.  Install app dependencies using poetry: poetry install

Note: If you add dependencies to the pyproject.toml file, make sure to run poetry lock and poetry install.

2.2.1.1. General Environment Variables

The API requires the following environment variables to work:

Name | Required | Description
DATASTORE | Yes | Specifies the vector database provider used to store and query embeddings.
BEARER_TOKEN | Yes | A secret token used to authenticate API requests. It can be generated with any tool or method you prefer (such as jwt.io).
LLM_API_KEY | Yes | The API key of the LLM you are using.
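
For example, if you prefer not to use an online tool, a bearer token can also be generated locally with Python's secrets module. This is just one of many possible methods; any sufficiently random secret works:

# generate a random bearer token (one of many possible methods)
import secrets

print(secrets.token_hex(32))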

2.3. Running the API Locally

To run the API locally, set the required environment variables with the export command:

export DATASTORE=<datastore>
export BEARER_TOKEN=<bearer_token>
export LLM_API_KEY=<llm_api_key>
export PG_HOST=<dbhost>
export PG_PORT=5432
export PG_DATABASE=<db>
export PG_USER=<dbuser>
export PG_PASSWORD=<dbuser-password>

The variables above can be made available globally via the following instructions:

  1. Open the terminal.
  2. Type nano ~/.bashrc to open the .bashrc file in the nano text editor.
  3. Scroll down to the end of the file, or to the section where the variables need to be added.
  4. Add the variables in the following format: export VARIABLE_NAME=value. Replace VARIABLE_NAME with the variable's name and value with the value you want to assign to it.
  5. Save the file by pressing Ctrl + X, then Y, then Enter.
  6. Finally, to apply the changes, open a new terminal or type source ~/.bashrc in the terminal to reload the bash configuration.

Start the API with: poetry run start

Append docs to the URL in the terminal and open it in a browser to access the API documentation and try out the endpoints (e.g., http://0.0.0.0:8000/docs). Make sure to enter the correct bearer token when testing the API endpoints.

Note: If you add new dependencies to the pyproject.toml file, run poetry lock and poetry install to update the lock file and install the new dependencies.
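
Besides the interactive docs page, the endpoints can also be exercised from a short script. The sketch below is an illustration only: it assumes the plugin is running on http://0.0.0.0:8000, that the BEARER_TOKEN environment variable is set, and that the plugin's /query endpoint is used with a placeholder question:

# query_example.py: call the retrieval plugin's /query endpoint (illustrative sketch)
import os
import requests

bearer_token = os.environ["BEARER_TOKEN"]

response = requests.post(
    "http://0.0.0.0:8000/query",
    headers={"Authorization": f"Bearer {bearer_token}"},
    json={"queries": [{"query": "What is AnalyticDB for PostgreSQL?", "top_k": 3}]},
)
response.raise_for_status()
print(response.json())

Before any data is uploaded the results will be empty; after the upload in section 2.4, the same call should return matching chunks.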

2.4. Upload Data to ADBPG

The scripts folder contains scripts built for upserting or processing text documents from various data sources, including JSON files, JSONL files, and zip files. These scripts use the plugin's upsert utility functions, which convert the documents into plain text and divide them into chunks before uploading them to the vector database along with their metadata. Each script folder has a README file outlining how to use the script and the required parameters. You can also use the services.pii_detection module to screen the documents for personally identifiable information (PII) and exclude any documents that contain it, to avoid unintentionally uploading sensitive or private data to the vector database.

Furthermore, the services.extract_metadata module can be used to extract metadata from the document text, which can enrich the document metadata. It is worth noting that if you use incoming webhooks to synchronize data continuously, you should run a backfill after setting them up to ensure no data is missed.

The following scripts are available:

  • process_json: This script processes a file dump of documents in JSON format and stores them in the vector database with metadata. The JSON file must be a list of JSON objects, each representing a document. Each JSON object should have a text field and other fields to populate the metadata.
  • process_jsonl: This script processes a file dump of documents in JSONL format and stores them in the vector database with metadata. The JSONL file should be a newline-delimited JSON file, where each line is a valid JSON object representing a document. Each JSON object should have a text field and other fields to populate the metadata.
  • process_zip: This script processes a file dump of documents in a zip file and stores them in the vector database with metadata. The zip file should contain a flat folder of docx, pdf, txt, md, ppt, or csv files.

All three scripts accept custom metadata as a JSON string, as well as flags to screen for PII and extract metadata. An example of the JSON format expected by process_json is sketched below.
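
As an illustration, the following sketch writes a small JSON dump in that format. The field values are placeholders, and the metadata fields shown are only examples:

# make_sample_dump.py: write a sample JSON dump for process_json (illustrative sketch)
import json

documents = [
    {
        "text": "AnalyticDB for PostgreSQL is used here as the vector database.",  # required text field
        "source": "file",            # optional metadata fields (examples)
        "author": "Alibaba Cloud",
    },
]

with open("sample_dump.json", "w") as f:
    json.dump(documents, f)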

# Upload prepared zipped *.md file to the /llm-retrieval-plugin/scripts/process_zip

source env_name/bin/activate
poetry shell
python scripts/process_zip/process_zip.py --filepath scripts/process_zip/<upload_file_name.zip>

2.4.1. Verify Uploaded Data

  1. Open the terminal or command prompt and type psql -h host -p <server port: default 5432> -U username -d database_name to connect to the PostgreSQL database. Replace host with the host server address, username with the PostgreSQL username, and database_name with the name of the database you want to connect to.
  2. After establishing the connection, you can count the rows of a specific table by typing SELECT COUNT(*) FROM document_chunks;
  3. To show only one column of the table, use the SELECT statement followed by the column's name. For example: SELECT content FROM document_chunks;
  4. If you need to truncate the table (i.e., delete all of its rows), use the TRUNCATE TABLE document_chunks; command. This command deletes all the rows in the table, so use it with caution.
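
The same checks can also be run from Python with psycopg2 (installed in section 2.1.2). A minimal sketch, assuming the PG_* environment variables from section 2.3 are set:

# verify_upload.py: count and sample the uploaded chunks (sketch)
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["PG_HOST"],
    port=os.environ.get("PG_PORT", "5432"),
    dbname=os.environ["PG_DATABASE"],
    user=os.environ["PG_USER"],
    password=os.environ["PG_PASSWORD"],
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM document_chunks;")
    print("chunks:", cur.fetchone()[0])
    cur.execute("SELECT content FROM document_chunks LIMIT 3;")
    for (content,) in cur.fetchall():
        print(content[:80])
conn.close()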

3. Simple WebUI for Testing

We offer a simple WebUI built with Flask. This WebUI is for reference only and should not be used in a production environment.

Follow these steps to run the ready-made Flask web application with Python:

  1. Clone or download the Flask application from its source code repository.
  2. Python and Flask must be installed on the system. They can be downloaded and installed from their official websites.
  3. Open the terminal and navigate to the directory that contains the Flask application.
  4. Install any dependencies required by the Flask application by running the following command: pip install -r requirements.txt
  5. Set the environment variables required by the Flask application. The variables and their values should be listed in a file called .env. Please refer to section 2.2.1.1.
  6. Start the Flask application by running the following command: python app.py. Replace app.py with the name of the Python file that contains the Flask application.
  7. Open the web browser and go to http://localhost:5000/ or the URL specified by the Flask application to see the website in action.
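
The WebUI itself is not reproduced here, but the idea is straightforward: a small Flask app takes a question from the browser, forwards it to the retrieval plugin's /query endpoint to fetch relevant chunks from ADBPG, and then passes the question plus the retrieved context to the LLM. The following is a minimal, hypothetical sketch, not the actual application; the PLUGIN_URL variable and the call_llm helper are assumptions you would replace with your own values and LLM call:

# app.py: hypothetical minimal sketch of a Flask WebUI in front of the retrieval plugin
import os
import requests
from flask import Flask, request, render_template_string

app = Flask(__name__)

PLUGIN_URL = os.environ.get("PLUGIN_URL", "http://0.0.0.0:8000")  # retrieval plugin address (assumption)
BEARER_TOKEN = os.environ["BEARER_TOKEN"]

PAGE = """
<form method="post">
  <input name="question" size="80" placeholder="Ask a question">
  <button type="submit">Ask</button>
</form>
<pre>{{ answer }}</pre>
"""

def retrieve_context(question: str, top_k: int = 3) -> str:
    # Ask the retrieval plugin for the most relevant chunks stored in ADBPG
    resp = requests.post(
        f"{PLUGIN_URL}/query",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        json={"queries": [{"query": question, "top_k": top_k}]},
    )
    resp.raise_for_status()
    chunks = resp.json()["results"][0]["results"]
    return "\n\n".join(chunk["text"] for chunk in chunks)

def call_llm(prompt: str) -> str:
    # Placeholder (assumption): call your own LLM here with its API key
    raise NotImplementedError("Plug in your LLM API call")

@app.route("/", methods=["GET", "POST"])
def index():
    answer = ""
    if request.method == "POST":
        question = request.form["question"]
        context = retrieve_context(question)
        answer = call_llm(f"Answer using this context:\n{context}\n\nQuestion: {question}")
    return render_template_string(PAGE, answer=answer)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)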

Congratulations! You are now successfully running an LLM + ADBPG with your own data on Alibaba Cloud.

ChatGPT by OpenAI, one of the most popular commercial LLMs, has officially added AnalyticDB to its retrieval plugin's list of supported vector databases.

We believe that, in the Generative AI era, ADBPG has the potential to revolutionize the way businesses and organizations analyze and use data. If you're interested in learning more about our software solution and how it can benefit your organization, please don't hesitate to contact us. We're always happy to answer your questions and provide a demo of our software.
