How to Build an AI-Powered Financial Chatbot for FinTech Startups: Step-by-Step Guide Using E2E Cloud

February 7, 2024

Introduction

In today's fast-paced business landscape, AI-powered solutions have become increasingly prominent, especially in the finance sector. The progress of chatbots has been remarkable, thanks to transformer-based AI models that allow them to better understand user behaviour and offer more personalized services. This evolution is not just about language; it is about creating chatbots that can truly connect with users. Fintech startups adopting advanced AI-based chatbots are seeing improved communication with clients and gaining valuable insights into their needs. The combination of rules, natural language processing (NLP), and machine learning (ML) empowers these chatbots to evaluate data efficiently and address a wide range of customer requests.

  1. In the domain of conversational finance, a generative AI chatbot tailored to fintech startups can enhance user engagement by understanding and responding to complex financial queries, providing clients with a seamless and personalized conversational experience.
  2. When it comes to financial analysis, a generative AI chatbot proves invaluable for fintech startups by swiftly processing and interpreting large datasets, offering results in real time, and assisting in informed decision-making for investment strategies and risk management.
  3. For fintech startups looking to generate synthetic data for testing and development purposes, a generative AI chatbot can efficiently create diverse scenarios, helping to simulate various financial situations and ensuring the robustness and adaptability of their systems.

In this blog, we'll explore the journey of building an AI-powered financial chatbot, tailored specifically for fintech startups, and how it can revolutionize client interactions in the finance industry.

Architecture

Our approach involves implementing the Retrieval-Augmented Generation (RAG) model, which combines a traditional language model with an information retrieval step. In simple terms, it enhances language generation by first fetching relevant information from a database and then using that data, together with a large language model from Hugging Face, to create more contextually relevant responses. This two-step process ensures the generated responses are based not only on the model's general knowledge but also on specific information retrieved for the given query.
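
To make the two-step idea concrete before diving into the full stack, here is a toy sketch of the retrieve-then-generate flow. The keyword-matching retriever and the stubbed generator are placeholders invented purely for illustration; the rest of this guide replaces them with pgvector similarity search and the Zephyr 7B Beta model.

# Toy sketch of the RAG flow: retrieve relevant context first, then generate an answer
def retrieve_relevant_docs(query, knowledge_base):
    # Placeholder retriever: naive keyword overlap instead of vector similarity
    return [doc for doc in knowledge_base if any(word in doc.lower() for word in query.lower().split())]

def generate_answer(query, context_docs):
    # Placeholder generator: a real system would call a large language model here
    return f"Based on: {' '.join(context_docs)} -> answer to '{query}'"

knowledge_base = [
    "Our savings account offers 4% annual interest.",
    "International wire transfers are processed within 24 hours.",
]
docs = retrieve_relevant_docs("What interest does the savings account pay?", knowledge_base)
print(generate_answer("What interest does the savings account pay?", docs))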

To execute this, we've chosen the Zephyr 7B Beta language model from the Hugging Face library. Zephyr 7B stands out as a cutting-edge language model with an impressive 7 billion parameters. This extensive capacity enables it to understand and produce text that closely resembles human language, showcasing exceptional accuracy and consistency.
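
Zephyr 7B Beta is an instruction-tuned chat model and ships with a chat template on the Hugging Face Hub. The guide below keeps things simple with plain text prompts, but as a quick illustration (assuming a recent transformers version that provides apply_chat_template), a conversation can be formatted for the model like this:

# Illustration only: format a conversation using Zephyr 7B Beta's chat template
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
messages = [
    {"role": "system", "content": "You are a helpful financial assistant."},
    {"role": "user", "content": "Explain compound interest in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)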

In conjunction with Zephyr 7B Beta, we integrate PgVector, a widely used open-source extension of the PostgreSQL relational database. Specifically designed for managing high-dimensional vector data, such as the embeddings generated by language models like Zephyr 7B Beta, PgVector excels at efficiently storing, indexing, and searching through this type of data. This makes it a crucial tool for projects dealing with large datasets and complex queries.

Lastly, for an enhanced user interface in our AI-powered chatbot, we employ Streamlit. Streamlit simplifies the development process and makes it straightforward to build interactive dashboards and data visualizations, ensuring a more intuitive and engaging experience for users interacting with the chatbot.

Step-by-Step Guide

The partnership between NVIDIA GPU Cloud and E2E Cloud brings a full-fledged solution for those looking for top-notch GPU performance in their cloud computing projects. This collaboration combines advanced GPU technology with a dependable cloud infrastructure, guaranteeing a smooth and effective computing experience across various applications.


You can visit https://myaccount.e2enetworks.com/ and register to get your NVIDIA GPU suite.

Requirements


Let’s get into the code


# Import the required packages
import os
import psycopg2
import torch
import numpy as np
from llama_index import Document, VectorStoreIndex
from transformers import AutoTokenizer, AutoModel, pipeline
import streamlit as st
from langchain.document_loaders import PyPDFLoader

This code snippet imports the necessary Python packages (install them first, for example with pip: psycopg2-binary, torch, numpy, llama-index, transformers, streamlit, langchain, and pypdf). The imports cover interaction with the operating system, PostgreSQL database connectivity, PyTorch functionality, numerical operations, vector indexing, pretrained language models, Streamlit for web application development, and a PDF document loader for text extraction.


# Connect to the PGVector database
conn = psycopg2.connect(dbname="ragdb", user="yourusername", password="yourpassword")

# Enable the pgvector extension and create a table for storing embeddings
cursor = conn.cursor()
cursor.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cursor.execute("CREATE TABLE embeddings (id serial PRIMARY KEY, vector vector(512));")
conn.commit()

The first part of this code establishes the connection to the PostgreSQL database in which pgvector is enabled. The function ‘psycopg2.connect’ is used to open the connection. The parameter ‘dbname’ specifies the name of the database to connect to, while ‘user’ and ‘password’ specify the username and password used for authentication.

The second part of the code enables the pgvector extension and sets up a table for storing embeddings. It sends a SQL query to create a table named ‘embeddings’ with two columns: ‘id' and 'vector'. The 'id' column is of type serial, an auto-incrementing integer, and is set as the primary key. The 'vector' column is of type vector(512), indicating a vector of 512 elements. Adjust this size to match your model’s output; Zephyr 7B Beta’s hidden states are 4096-dimensional, so mean-pooled embeddings from it would need a vector(4096) column unless you project them down to a smaller dimension.
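
If you are unsure of the right dimension, a quick optional check of the model configuration (using the same model identifier as below) reveals the width of the hidden states that the embedding function will mean-pool:

# Check the embedding dimensionality to use in the vector(<dim>) column definition
from transformers import AutoConfig

config = AutoConfig.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
print(config.hidden_size)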


# Function to extract text from PDF files using PyPDFLoader
def extract_text_from_pdf(pdf_path):
    loader = PyPDFLoader(pdf_path)
    text = ""
    for page in loader.load_and_split():
        text += page.page_content
    return text

# Directory containing your PDF files
pdf_directory = "./data/documentation"

# Sample data: Read text from PDF files in the specified directory
data = []
for filename in os.listdir(pdf_directory):
    if filename.endswith("Insurance Doc.pdf"):
        pdf_path = os.path.join(pdf_directory, filename)
        text = extract_text_from_pdf(pdf_path)
        data.append(text)
        

This code snippet defines a function, extract_text_from_pdf, that uses the PyPDFLoader class to extract text from PDF files. It iterates through a specified directory, reads PDF files named ‘Insurance Doc.pdf’, and appends the extracted text to a list named data.
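
A quick, optional sanity check confirms that matching PDFs were found and that text was actually extracted before moving on to embedding generation:

# Sanity check: confirm that documents were loaded and are non-empty
print(f"Loaded {len(data)} document(s)")
for i, text in enumerate(data):
    print(f"Document {i}: {len(text)} characters")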


# Initialize the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

model = AutoModel.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

# Initialize the text generation pipeline
generator_pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

This code initializes a text generation model and tokenizer from Hugging Face's model hub using the identifier ‘HuggingFaceH4/zephyr-7b-beta’. The ‘AutoTokenizer’ and ‘AutoModel’ classes are used for this purpose. Additionally, a text generation pipeline is set up using the ‘pipeline’ function from the ‘transformers’ library. The pipeline uses the same model identifier and is configured to use Torch's bfloat16 data type and automatic device mapping.
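
Before wiring the pipeline into the retrieval flow, a short smoke test (with an illustrative prompt) verifies that the model loads and generates text on the GPU node:

# Smoke test of the generation pipeline; the prompt is illustrative
test_output = generator_pipe("What is a mutual fund?", max_new_tokens=64)[0]["generated_text"]
print(test_output)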


# Function to generate embeddings from text
def generate_embeddings(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).numpy()

# Generate embeddings for each document
embeddings = [generate_embeddings(doc) for doc in data]

# Convert the list of embeddings to a NumPy array
embeddings_array = np.vstack(embeddings)

# Optionally, build an in-memory llama_index index over the raw documents
# (this requires an embedding model to be configured for llama_index and is not
# used by the retrieval logic below, which queries pgvector directly)
documents = [Document(text=text) for text in data]
index = VectorStoreIndex.from_documents(documents)

# Store each embedding in the database using pgvector's '[x1,x2,...]' text format
for i, embedding in enumerate(embeddings_array):
    vector_str = '[' + ','.join(map(str, embedding.tolist())) + ']'
    cursor.execute("INSERT INTO embeddings (id, vector) VALUES (%s, %s)", (i, vector_str))
conn.commit()

This Python code defines a function, ‘generate_embeddings’, which uses the pretrained model and tokenizer to convert text into an embedding by mean-pooling the model's last hidden states. It optionally builds a llama_index vector index over the raw documents, and then inserts each embedding into the ‘embeddings’ table of the PostgreSQL database.
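
To confirm that the insertions succeeded, a simple optional row count can be run against the table:

# Verify that the embeddings were written to the database
cursor.execute("SELECT COUNT(*) FROM embeddings;")
print(cursor.fetchone()[0], "embeddings stored")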


# Function to define the retrieval condition for the SQL query
def your_retrieval_condition(query_embedding, threshold=0.7):
    # Convert the query embedding to pgvector's '[x1,x2,...]' text format
    query_embedding_str = ','.join(map(str, query_embedding.flatten().tolist()))
    # pgvector's <=> operator returns cosine distance, so 1 - distance is cosine similarity
    condition = f"1 - (vector <=> '[{query_embedding_str}]') > {threshold}"
    return condition

This code defines a function, ‘your_retrieval_condition’, which takes a query embedding and an optional similarity threshold as parameters. It converts the query embedding into pgvector's text format and builds a WHERE condition based on cosine similarity: pgvector's ‘<=>’ operator returns the cosine distance between the stored ‘vector’ column and the query embedding, so one minus that distance gives the cosine similarity. The condition keeps only rows whose similarity to the query exceeds the specified threshold (default is 0.7).
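
For illustration, calling the function with a made-up 3-dimensional embedding shows the kind of SQL fragment it produces (real embeddings will of course have the full model dimensionality):

# Example with a hypothetical 3-dimensional embedding
import numpy as np

sample_embedding = np.array([[0.12, -0.53, 0.77]])
print(your_retrieval_condition(sample_embedding, threshold=0.8))
# Output: 1 - (vector <=> '[0.12,-0.53,0.77]') > 0.8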


# Function to perform a RAG-style query with text generation
def rag_query_with_generation(query):
    # Generate an embedding for the user query
    query_embedding = generate_embeddings(query)

    # Retrieve the ids of the most relevant documents from the database
    retrieval_condition = your_retrieval_condition(query_embedding)
    cursor.execute(f"SELECT id FROM embeddings WHERE {retrieval_condition}")
    retrieved_ids = [row[0] for row in cursor.fetchall()]

    # Look up the original document text for the retrieved ids
    # (ids were assigned from the position of each document in the data list)
    retrieved_context = "\n\n".join(data[i] for i in retrieved_ids)

    # Combine the retrieved context and the user query into a single prompt
    prompt = f"Context:\n{retrieved_context}\n\nQuestion: {query}\nAnswer:"

    # Generate the response using the text generation pipeline
    generated_text = generator_pipe(
        prompt,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_k=50,
        top_p=0.95
    )[0]["generated_text"]

    return generated_text
    

This code defines a function, ‘rag_query_with_generation’, which performs a Retrieval-Augmented Generation (RAG)-style query. It first embeds the user query, then uses the retrieval condition to fetch the ids of the most similar embeddings from the PostgreSQL database and looks up the corresponding document text. The retrieved text is combined with the query into a single prompt, which is passed to the text generation pipeline to produce the response. Parameters such as temperature, top-k, and top-p control the sampling behaviour of the generated text.
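
As a quick usage example (the question below is purely illustrative and assumes the insurance documents loaded earlier), the function can be called directly from a script before wrapping it in a UI:

# Illustrative query against the documents loaded earlier
answer = rag_query_with_generation("What does the insurance policy say about water damage claims?")
print(answer)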


# Streamlit app for fintech chatbot
def fintech_chatbot_app():
    # Set page configuration with a background image
    st.set_page_config(
        page_title="Fintech Chatbot",
        page_icon="💸",
        layout="wide",
        initial_sidebar_state="auto",
    )
    # Set a background image (inject your own CSS/HTML for the page background here)
    st.markdown(
        """
        <!-- custom CSS/HTML for the background image goes here -->
        """,
        unsafe_allow_html=True,
    )
    # Main content
    st.title("Welcome to AI-powered Financial Chatbot")
    st.image("background_image.png", width=150)
    
    # Get user input
    user_input = st.text_input("You:", "")

    # Respond to user input
    if st.button("Send"):
        response = rag_query_with_generation(user_input)
        st.text_area("Bot:", value=response, height=100, key="output")  

if __name__ == "__main__":
    fintech_chatbot_app()
    

This Streamlit app code creates a fintech chatbot interface. It configures the app with a title, icon, wide layout, and initial sidebar state. A placeholder ‘st.markdown’ call (with ‘unsafe_allow_html=True’) is included where custom CSS or HTML for a background image can be injected. The main content includes a title, an image, and a text input for user queries. When the ‘Send’ button is clicked, the user's input is processed by the ‘rag_query_with_generation’ function, and the generated response is displayed in a text area labelled ‘Bot’. The app is launched when the script is run, presenting the AI-powered financial chatbot interface to users.

This is how an AI-powered financial chatbot application is built using Streamlit. The design includes a prominent title introducing the AI chatbot and a user-friendly input field for interacting with the bot. Users can type queries and receive responses by clicking the ‘Send’ button. Save the script (for example, as app.py) and launch it with the streamlit run command to start the app locally.

Conclusion

In conclusion, combining technologies like the RAG model, Zephyr 7B Beta, PgVector, and Streamlit creates a strong foundation for building a personalized AI-powered financial chatbot for fintech startups. This collaboration streamlines data retrieval, language generation, and user interfaces, resulting in a sophisticated chatbot adept at understanding and responding to user queries. The integrated approach also optimizes the handling of high-dimensional vector data, ensuring the chatbot delivers personalized and relevant information in the dynamic fintech domain.

References

https://medium.com/@shaikhrayyan123/how-to-build-an-llm-rag-pipeline-with-llama-2-pgvector-and-llamaindex-4494b54eb17d

https://medium.com/google-cloud/question-and-answer-chat-apps-with-our-own-data-pdf-with-vertexai-langchain-strimlit-db63735f5ab4

https://medium.com/@stefnestor/python-streamlit-local-llm-2aaa75961d03

