Private GPT is an open-source project that lets you interact with your private documents and data using the power of large language models, in the style of GPT-3/GPT-4, without any of your data leaving your local environment. This gives you the benefits of AI while maintaining privacy and control over your data.
In this guide, we will walk you through the step-by-step procedure to set up Private GPT on your Windows PC. The process is simple and straightforward, and we will explain each step in plain terms, even if you do not come from a technical background.
Before you start the installation, we should warn you of two things. The first is package dependencies: you may run into version or dependency conflicts during the installation, so be prepared to resolve them; we had to work through a few ourselves to get everything set up. The second is response time: it depends on how much computational power your Windows PC has. If you are running on a small office laptop, you can expect delays of 1 to 5 minutes per answer.
If you are ready to work through these issues, let’s get our hands dirty.
Let’s look at the basic requirements for setting up Private GPT on your Windows PC.
Windows 10 or 11 PC
Visual Studio 2022 installed
Python 3.10 or later installed
A large language model file compatible with Private GPT such as GPT4All-J or LlamaCpp
If you don’t already have Visual Studio 2022 or Python installed, don’t worry – we will cover how to install both later in this post.
We have divided the process into several steps. First, make sure your Windows PC has Visual Studio 2022 and Python installed. Then download your desired LLM model and the Private GPT code from GitHub.
Visual Studio 2022 is an integrated development environment (IDE) that we’ll use to run commands and edit code.
Go to visualstudio.microsoft.com and download the free Community version of Visual Studio 2022. Run through the Visual Studio Installer and make sure to select the following components:
1. Universal Windows Platform development
2. Desktop Development with C++
Private GPT requires Python 3.10 or later. If you don’t have Python installed, you can install it in either of two ways:
1. Through the Visual Studio Installer application
2. Download the Python installer from python.org and install it.
Please check out these two posts to learn how to install Python and PyCharm on Windows:
http://thesecmaster.com/step-by-step-procedure-to-install-python-on-windows/
http://thesecmaster.com/step-by-step-procedure-to-install-pycharm-on-windows/
If you are not sure whether your PC has Python installed, run this command:
python --version
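The same check can also be done from inside Python itself, which is handy if several interpreters are installed on the machine. This is a minimal sketch using only the standard library:

```python
import sys

REQUIRED = (3, 10)  # Private GPT needs Python 3.10 or later

# Compare the running interpreter's version against the requirement.
ok = sys.version_info >= REQUIRED
version_str = ".".join(str(part) for part in sys.version_info[:3])
print(f"Python {version_str}", "meets the requirement" if ok else "is too old; install 3.10+")
```

If this prints that your interpreter is too old, install a newer Python before continuing.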
Now, we need to download the source code for Private GPT itself. There are a couple ways to do this:
Option 1 – Clone with Git
If you’re familiar with Git, you can clone the Private GPT repository from the command line:
1. Choose a local path to clone it to, like C:\privateGPT
2. Change to that directory on the CLI and run this command: > git clone https://github.com/imartinez/privateGPT.git
This will download all the code to your chosen folder.
Option 2 – Download as ZIP
If you aren’t familiar with Git, you can download the source as a ZIP file:
1. Go to https://github.com/imartinez/privateGPT in your browser
2. Click on the green “<> Code” button and choose “Download ZIP”
3. Extract the ZIP somewhere on your computer, like C:\privateGPT
Either cloning or downloading the ZIP will work!
We have downloaded the source code, unzipped it into the ‘PrivateGPT’ folder, and kept it in G:\PrivateGPT on our PC.
The next step is to open the unzipped ‘PrivateGPT’ folder in an IDE. We used the PyCharm IDE in this demo, but you can use Visual Studio 2022, or even work directly from the CLI.
If you want to set up the PyCharm on your Windows, follow this guide: http://thesecmaster.com/step-by-step-procedure-to-install-pycharm-on-windows/
To import PrivateGPT as a project in PyCharm, click the ‘four lines’ menu button in the top-left corner, click ‘Open,’ and browse to the PrivateGPT folder.
Now we need to install the Python package requirements so Private GPT can run properly. Run this command in the terminal to install all the packages listed in the ‘requirements.txt’ file:
pip install -r .\requirements.txt
This will install all of the required Python packages using pip. Depending on your internet speed, this may take a few minutes.
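As a quick sanity check that the install succeeded, you can query installed package versions with the standard library’s importlib.metadata. The package names below are just placeholders; substitute names from your actual requirements.txt:

```python
from importlib.metadata import version, PackageNotFoundError

# Placeholder package names -- replace with entries from requirements.txt.
for pkg in ("pip", "setuptools"):
    try:
        print(f"{pkg} {version(pkg)} is installed")
    except PackageNotFoundError:
        print(f"{pkg} is MISSING -- rerun: pip install -r requirements.txt")
```

Any package reported as missing means the requirements step did not complete cleanly.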
If you run into any errors during this step, you may need to install a C++ compiler. See the Private GPT README on GitHub for help troubleshooting compiler issues on Windows.
Private GPT works by using a large language model locally on your machine. So you’ll need to download one of these models.
The Private GPT code is designed to work with models compatible with GPT4All-J or LlamaCpp.
Download whichever model you prefer based on size; larger models generally give better answers but respond more slowly.
Once downloaded, create a folder called models inside the privateGPT folder, and move the .bin file into it.
So your folder structure should look like this:
privateGPT
└───models
└───ggml-gpt4all-j-v1.3-groovy.bin
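If you want to double-check the layout programmatically, a small helper like the following lists any .bin model files it finds under the models subfolder. The C:\privateGPT path is just the example location used in this guide; adjust it to wherever you unzipped the project:

```python
from pathlib import Path

def find_models(project_dir: str) -> list[str]:
    """Return the names of .bin model files under <project_dir>/models."""
    models_dir = Path(project_dir) / "models"
    return sorted(p.name for p in models_dir.glob("*.bin"))

# Example usage -- adjust to your own Private GPT folder.
print(find_models(r"C:\privateGPT"))
```

An empty list means the model file is not where Private GPT expects it.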
Private GPT uses a .env file to specify environment variables like the model path and other settings.
Rename example.env to .env (remove example) and open it in a text editor.
Update the variables to match your setup:
MODEL_PATH: Set this to the path to your language model file, like C:\privateGPT\models\ggml-gpt4all-j-v1.3-groovy.bin
PERSIST_DIRECTORY: Where you want the local vector database stored, like C:\privateGPT\db
The other default settings should work fine for now. Check out the variable details below:
MODEL_TYPE: Supports LlamaCpp or GPT4All
PERSIST_DIRECTORY: is the folder you want your vectorstore in
MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM
MODEL_N_CTX: Maximum token limit for the LLM model
MODEL_N_BATCH: Number of tokens in the prompt that are fed into the model at a time. Optimal value differs a lot depending on the model (8 works well for GPT4All, and 1024 is better for LlamaCpp)
EMBEDDINGS_MODEL_NAME: SentenceTransformers embeddings model name (see https://www.sbert.net/docs/pretrained_models.html)
TARGET_SOURCE_CHUNKS: The number of chunks (sources) that will be used to answer a question
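To make the .env format concrete, here is a minimal sketch of a dotenv-style parser applied to example settings. This is illustrative only: the project loads its .env with a proper library, and the values shown (paths, model name) are the examples from this guide, not required values:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments.
    Illustrative sketch only -- not the project's actual loader."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        settings[key.strip()] = value.strip()
    return settings

example = """\
MODEL_TYPE=GPT4All
MODEL_PATH=C:\\privateGPT\\models\\ggml-gpt4all-j-v1.3-groovy.bin
PERSIST_DIRECTORY=C:\\privateGPT\\db
MODEL_N_CTX=1000
"""
print(parse_env(example)["MODEL_TYPE"])  # → GPT4All
```

Each line is one variable; there is no quoting or nesting to worry about in a basic setup.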
Now, we’re ready to ingest documents into the local vector database. This preprocesses your files so Private GPT can search and query them.
In the PyCharm terminal, run:
python .\ingest.py
This will look for files in the source_documents folder, process them, and add them to the database.
You can add .pdf, .docx, .txt, and other files to the ‘source_documents’ folder. The initial processing may take some time, depending on how large your files are.
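Under the hood, ingestion works by splitting each document into overlapping text chunks before embedding them into the vector database. The sketch below shows that splitting idea in simplified form; the real ingest.py uses a library text splitter, and the chunk sizes here are illustrative defaults, not the project’s settings:

```python
def split_into_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks, as vector-store
    ingestion typically does. Sizes are illustrative, not the project's."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

doc = "word " * 300  # a stand-in for one ingested document (1500 chars)
pieces = split_into_chunks(doc)
print(len(pieces), "chunks")  # → 4 chunks
```

The overlap ensures a sentence falling on a chunk boundary still appears whole in at least one chunk, which helps retrieval quality.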
Once it finishes, your documents are ready to query!
Finally, we can ask questions to the private documents!
In the terminal, run:
python .\privateGPT.py
This will prompt you to enter a query. Type your question and hit enter.
The model will think for 20 to 30 seconds (response time depends on your computing resources and the amount of ingested data), and then return an answer drawn from your ingested documents, along with context snippets.
You can keep entering new questions or type exit to quit.
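Conceptually, privateGPT.py is a simple read-answer loop around the model. The stripped-down sketch below mirrors that loop with a placeholder answer function; the real script instead retrieves matching chunks from the vector store and queries the LLM:

```python
def answer(question: str) -> str:
    """Placeholder -- the real script asks the local LLM with retrieved context."""
    return f"(model answer to: {question})"

def repl(input_fn=input, output_fn=print):
    """Prompt for queries until the user types 'exit', like privateGPT.py."""
    while True:
        query = input_fn("Enter a query: ").strip()
        if query == "exit":
            break
        output_fn(answer(query))
```

Calling repl() starts the prompt loop on the console; passing in custom input/output functions makes the loop easy to test.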
That’s it! You now have Private GPT running locally on your Windows machine.
Here are the key steps we covered to get Private GPT working on Windows:
Install Visual Studio 2022
Install Python
Download the Private GPT source code
Install Python requirements
Download a large language model
Configure environment variables
Ingest documents
Query your documents
It may seem a little complex at first, but once you have the environment set up, querying your private data is easy.
Private GPT is a powerful tool for interacting with your documents while maintaining privacy. We hope this post helped you set up Private GPT on your Windows PC. Let us know if you have any other questions!