Ask your Confluence: Application with Confluence data and LlamaIndex

15. October 2023

In this step-by-step tutorial, I am going to describe how to create a custom ChatGPT that finds information from Confluence spaces.

To build this custom ChatGPT, we are going to use an OpenAI large language model, Django, and React.

The purpose of using Django is to create an API that we will use to ask questions. The Django API loads the information from Confluence and, upon request, queries the LLM.

The purpose of using React is to create a minimalistic UI to ask questions and display the response.
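
Concretely, the two sides will talk through a single JSON endpoint. Here is the shape of the exchange we are aiming for (the question text is just a placeholder):

POST /api/question
{"data": "Where is the deployment guide?"}

200 OK
{"answer": "...the LLM's answer..."}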

Let’s start!

We begin with creating a Django project.

Django provides a command-line tool called django-admin to help us create and manage projects.

pip install django
django-admin startproject mygpt
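
For orientation, startproject generates the standard Django layout:

mygpt/
    manage.py
    mygpt/
        __init__.py
        settings.py
        urls.py
        asgi.py
        wsgi.py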

Next, we go inside the mygpt directory and create an app.

python manage.py startapp api

Perfect!

Create a file named requirements.txt inside the mygpt directory and paste the following content. It includes all the dependencies we need for this simple API.

Django==4.2.6
djangorestframework==3.14.0
langchain==0.0.311
llama-hub==0.0.37
llama-index==0.8.41

Install the dependencies using the command below.

pip install -r requirements.txt

For this simple API we don’t need database integration, since we are not going to save any data. For this reason, we can remove the database configuration from the mygpt/settings.py file.

...
DATABASES = {}
...

In the same mygpt/settings.py file, we need to add two new elements to INSTALLED_APPS: the name of our app, api, and the rest_framework dependency.

...
INSTALLED_APPS = [
    "api",
    "rest_framework",
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]
...

Create a file named forms.py inside the mygpt/api directory; it will include a simple class named Question.

from django import forms


class Question(forms.Form):
    data = forms.CharField()

Inside the Question class, we define a form field named data using forms.CharField().

  • data: This is the name we give to the form field. It will contain the question data sent later from the React application.
  • forms.CharField(): This part specifies the type of form field. In this case, it’s a CharField, which is used for collecting and validating text input (e.g., short text, long text, names, etc.).

We haven’t finished creating files yet. We need another file named serializers.py, which takes care of serializing and deserializing the question data we previously defined.

from rest_framework import serializers
from .forms import Question


class QuestionSerializer(serializers.Serializer):
    data = serializers.CharField()
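
As a quick sanity check, this is roughly how the serializer behaves in a Django shell (python manage.py shell); the question text is made up:

from api.serializers import QuestionSerializer

serializer = QuestionSerializer(data={"data": "What is our release process?"})
print(serializer.is_valid())              # True
print(serializer.validated_data["data"])  # What is our release process?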

Now it’s time to start working with Llama Index.

In models.py, we will create a single class, Service, and implement the Singleton pattern on it. The query engine is also cached on the instance, so the index is built only once.


import os

from llama_index import VectorStoreIndex, ServiceContext, set_global_service_context
from llama_hub.confluence.base import ConfluenceReader


class Service:
    _service = None

    def __init__(self):
        self.base_url = os.environ.get(
            "CONFLUENCE_URL", "https://myserver.com/confluence/"
        )
        self._query_engine = None

    @classmethod
    def get_singleton_instance(cls):
        """
        Creates the Service object if it is not yet created,
        otherwise reuses the already created object.
        """
        if cls._service is None:
            print("creating singleton service")
            cls._service = cls()
        return cls._service

    def load_documents(self):
        """
        Reads the documents of each Confluence space key
        using the ConfluenceReader class from llama_hub.
        """
        space_keys = ["spacekey1", "spacekey2", "spacekey3"]
        reader = ConfluenceReader(base_url=self.base_url)
        all_documents = []
        for space_key in space_keys:
            documents = reader.load_data(
                space_key=space_key,
                include_attachments=False,
                page_status="current",
            )
            all_documents.extend(documents)
        return all_documents

    def create_index(self):
        """
        Creates a vector index from the documents loaded from Confluence.
        """
        service_context = ServiceContext.from_defaults(chunk_size=1024)
        set_global_service_context(service_context)
        index = VectorStoreIndex.from_documents(self.load_documents())
        return index

    def load_query_engine(self):
        """
        Creates the query engine the first time it is called and caches it,
        so the index is not rebuilt on every request.
        """
        if self._query_engine is None:
            self._query_engine = self.create_index().as_query_engine()
        return self._query_engine
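
Because of the singleton, the expensive index construction happens at most once per process. A quick illustration, runnable in a Django shell:

from api.models import Service

a = Service.get_singleton_instance()
b = Service.get_singleton_instance()
assert a is b  # both names refer to the same Service object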

Since loading all the documents takes some time, we want to do it during application startup rather than on the first request. Go to apps.py and trigger the loading from the AppConfig’s ready() hook, which Django calls once the app registry is fully loaded.

from django.apps import AppConfig


class ApiConfig(AppConfig):
    default_auto_field = "django.db.models.BigAutoField"
    name = "api"

    def ready(self):
        # Warm up the singleton and build the index once at startup.
        from .models import Service

        Service.get_singleton_instance().load_query_engine()

Now, all we need to do is create the POST endpoint that accepts a Question object in the request body. In views.py, add the following endpoint.

from rest_framework import status
from rest_framework.response import Response
from rest_framework.decorators import api_view
from .serializers import QuestionSerializer
from .models import Service


@api_view(["POST"])
def ask_view(request):
    serializer = QuestionSerializer(data=request.data)
    if serializer.is_valid():
        question = serializer.validated_data["data"]
        # get the cached query engine from the singleton
        query_engine = Service.get_singleton_instance().load_query_engine()
        # query the LLM and return the answer
        return Response({"answer": query_engine.query(question).response})
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

Inside the project-level mygpt/urls.py, update the URL patterns by adding the new endpoint we created.

from django.urls import path
from api.views import ask_view

urlpatterns = [
    path("api/question", ask_view, name="ask-question-api"),
]

We’re done with the API.

It’s very important to set the environment variables when running locally: ConfluenceReader reads the Confluence credentials from CONFLUENCE_USERNAME and CONFLUENCE_PASSWORD, and llama_index needs OPENAI_API_KEY to call the OpenAI API.

export CONFLUENCE_PASSWORD=password
export CONFLUENCE_USERNAME=username
export OPENAI_API_KEY=sk-...

Now, start your app and test it.

python manage.py runserver
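
Once the server is up, you can exercise the endpoint directly; the question below is a placeholder:

curl -X POST http://127.0.0.1:8000/api/question \
  -H "Content-Type: application/json" \
  -d '{"data": "Where is the onboarding documentation?"}'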

Now, let’s create a simple UI with React.

Start your React project.

npx create-react-app my-gpt-app --template typescript

Install axios, bootstrap and bootstrap-icons, because we will need them.

npm install --save axios bootstrap bootstrap-icons

Let’s create a new file named home.view.tsx and paste the following content, which creates a functional component named Home.

import React, { useState } from "react";
import axios from "axios";

export const Home = (): React.JSX.Element => {
  const [question, setQuestion] = useState("");
  const [answer, setAnswer] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const onEnter = async () => {
    setIsLoading(true);
    // note the colon before the port: the API listens on 127.0.0.1:8000
    const response = await axios.post("http://127.0.0.1:8000/api/question", {
      data: question,
    });
    setIsLoading(false);
    setQuestion("");
    setAnswer(response.data?.answer);
  };

  return (
    <div className="main-container">
      <div className="container-fluid text-center bg-dark text-light min-vh-100 mx-auto px-5">
        <div className="p-2">
          <h2>ConfluenceGPT</h2>
        </div>
        <div className="input-group">
          <input
            type="text"
            className="form-control"
            placeholder="Ask something"
            aria-label="Enter text"
            value={question}
            onKeyDown={(event) => {
              if (event.key === "Enter") {
                onEnter();
              }
            }}
            onChange={(event) => setQuestion(event.target.value)}
            aria-describedby="button-addon"
          />
          <button
            className="btn btn-success"
            type="button"
            id="button-addon"
            onClick={onEnter}
          >
            <i className="bi bi-send"></i>
          </button>
        </div>
        {isLoading && (
          <div className="spinner-grow text-light mt-5" role="status"></div>
        )}
        {answer && !isLoading && (
          <div className="row alert alert-secondary mt-5" role="alert">
            <pre>{answer}</pre>
          </div>
        )}
      </div>
    </div>
  );
};

And this is how your index.tsx file should look. Note that we also import the Bootstrap styles here, since the Home component relies on Bootstrap classes and icons.

import React from 'react';
import ReactDOM from 'react-dom/client';
// Bootstrap styles and icons used by the Home component
import 'bootstrap/dist/css/bootstrap.min.css';
import 'bootstrap-icons/font/bootstrap-icons.css';
import './index.css';
import { Home } from './home.view';

const root = ReactDOM.createRoot(
  document.getElementById('root') as HTMLElement
);
root.render(
  <React.StrictMode>
    <Home />
  </React.StrictMode>
);
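
One practical caveat: the React dev server (usually http://localhost:3000) and the Django API (http://127.0.0.1:8000) are different origins, so the browser will block the axios request unless Django returns CORS headers. A common fix during development is the django-cors-headers package; a minimal sketch of the mygpt/settings.py additions, assuming you pip install django-cors-headers first:

INSTALLED_APPS = [
    "corsheaders",
    # ...the apps listed earlier...
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # must come before CommonMiddleware
    # ...the default middleware...
]

# development only: allow the React dev server origin
CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]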

Now you have a ConfluenceGPT.
