Over the past decade, AI technology has developed rapidly: the discovery of how to build Large Language Models has made AI applications possible in every organization. Competing in the next decade will require using AI capabilities to enhance the quality of a company's existing services and to reduce operating costs. Applying AI presents many challenges because it is a new technology that is constantly evolving. Organizations therefore need a strategic plan that prepares personnel at all levels, covering both reskilling and upskilling, with management readiness at the organizational level being the most important. A well-crafted AI strategic plan answers the needs of executives: it serves as a roadmap for deploying AI applications in the organization that fit the business and the budget required.
The Bangkok AI consulting team has had experience in information technology since the Expert System era of AI; its work "SIMNETMAN: an expert workstation for designing rule-based network management systems" (IEEE Network Magazine) is one of AT&T's reference documents for obtaining a US patent in network management. AI work by Bangkok AI team members has appeared in PLOS ONE, IEEE, and Elsevier publications.
The Large Language Models (LLMs) that will be used as the core platform for designing AI-driven applications come from world-class AI firms. The software is open source under the Apache 2.0 license.
Design and development of specialized LLM systems
We incubate LLMs through fine-tuning methods: SFT (Supervised Fine-Tuning), DPO (Direct Preference Optimization), and RLHF (Reinforcement Learning from Human Feedback).
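To make the preference-tuning step concrete, the sketch below computes the DPO loss for a single preference pair. It is a minimal illustration of the published DPO objective, not code from any particular training stack; the log-probability inputs and the `beta=0.1` default are illustrative assumptions.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy being trained and under a frozen
    reference model. The loss is low when the policy prefers the
    chosen response (relative to the reference) more strongly than
    the rejected one.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)) written stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# A policy that already favors the chosen answer relative to the
# reference gets a loss below log(2); an indifferent one gets log(2).
print(dpo_loss(-5.0, -9.0, -6.0, -8.0))
```

In real training this loss would be averaged over batches of preference pairs and backpropagated through the policy model only; the reference model stays frozen.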
Creating the LLM Inference Engine
An LLM Inference Engine is a software system that takes questions or messages from users and generates answers intended to meet the questioner's or interlocutor's needs. The LLM Inference Engine consists of an API system, an incoming and outgoing data filtering system, a Vector Database system, a KM system, a chatbot system, a dashboard system, an anti-hallucination system, and an efficiency-enhancement system. We provide services for creating Inference Engines using Llama 2, Llama 3, GPT-3, and Mistral.
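One way to picture the incoming and outgoing filtering layer is as a thin wrapper around any LLM generation call. The sketch below is a hypothetical illustration: the blocked-phrase and PII patterns, the function names, and the stub `generate` callable are all assumptions, not part of any specific product.

```python
import re

# Illustrative patterns only; a production filter would be far richer.
BLOCKED_INPUT = [r"(?i)ignore previous instructions"]
PII_PATTERNS = [r"\b\d{13}\b"]  # e.g. a 13-digit national ID number

def filter_incoming(message: str) -> str:
    """Reject messages that match a blocked-input pattern."""
    for pat in BLOCKED_INPUT:
        if re.search(pat, message):
            raise ValueError("message rejected by input filter")
    return message

def filter_outgoing(answer: str) -> str:
    """Redact sensitive patterns from the model's answer."""
    for pat in PII_PATTERNS:
        answer = re.sub(pat, "[REDACTED]", answer)
    return answer

def answer(message: str, generate) -> str:
    """Wrap any LLM callable `generate(prompt) -> str` with both filters."""
    return filter_outgoing(generate(filter_incoming(message)))

print(answer("What is my balance?", lambda p: "ID 1234567890123 owes 0"))
# prints "ID [REDACTED] owes 0"
```

The same wrapper shape applies regardless of which model (Llama, GPT, Mistral) sits behind the `generate` callable, which is what lets the filtering system stay independent of the model choice.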
Create RAG-based innovations
RAG, or Retrieval Augmented Generation, builds a prompt for the LLM that combines the user's question with relevant information retrieved from a Vector Database, so that the answer from the LLM best meets the questioner's needs. Building a RAG system requires storing the company's operational data in the Vector Database. A user's query first searches the Vector Database for the best-matching information; the query and that retrieved information are then combined into a prompt to the LLM, which generates the response. RAG is used in billing, call centers, HR, organizational regulations, and much more.
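The retrieve-then-prompt flow described above can be sketched end to end in a few lines. This is a toy illustration under stated assumptions: the "embedding" is a bag-of-words vector, the document list stands in for a Vector Database, and the LLM is a stub callable; a real system would use an embedding model, a vector store, and an LLM endpoint.

```python
import math
from collections import Counter

# Stand-in for operational data stored in a Vector Database.
DOCUMENTS = [
    "Invoices are issued on the first business day of each month.",
    "The call center is open from 8:00 to 20:00 on weekdays.",
    "Employees accrue ten days of annual leave per year.",
]

def embed(text: str) -> Counter:
    """Toy embedding: word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the best-matching document for the query."""
    q = embed(query)
    return max(DOCUMENTS, key=lambda d: cosine(q, embed(d)))

def rag_answer(query: str, llm) -> str:
    """Combine query and retrieved context into one prompt for the LLM."""
    context = retrieve(query)
    prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
    return llm(prompt)

print(retrieve("When is the call center open?"))
```

The key design point the sketch shows is that the LLM never sees the whole corpus; it only sees the query plus the retrieved snippet, which is what grounds the answer in the company's own data.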
NVIDIA TensorRT-LLM
NVIDIA TensorRT-LLM is a Python API for building LLM systems, including ones for problems too difficult to solve with GPT-4 alone. TensorRT-LLM also provides optimizations that make LLMs run faster, such as FlashAttention, in-flight batching, and FP8.
Content Architecture for Specialization of LLM
The process of specializing an LLM consists of pre-training, Supervised Fine-Tuning (SFT), Reinforcement Learning with Human Feedback (RLHF), Direct Preference Optimization (DPO), and Retrieval Augmented Generation (RAG).
The technical prerequisites are:
1) LLM Inference Engine.
2) Performance and Benchmarking.
3) Technical Infrastructure.
Create a Pilot Project such as RAG that uses an LLM to respond to a prompt comprising a user's query and relevant factual information.
Develop a website / mobile app to provide AI services and AI e-learning for upskilling & reskilling, and to create a community of AI users in the organization.
Consulting work will focus on building understanding of how to apply the technology to the organization and on creating innovations. New services are provided to reduce expenses and increase revenue while rendering higher-quality services than before.
This study systematically prepares the organization; it becomes concrete mainly after the advisory committee has taken action. Organizations can understand the challenges of adopting AI through the following key points:
01 What is AI? How many types are there? How do they work, and what are the differences?
02 Why use AI, and how can we create value from AI for our organization?
03 What are the limitations and problems of AI? Why is there a need for AI ethics? What regulations do you need to know?
04 What kind of AI should our organization use to meet its needs?
05 How will we measure our organization's readiness for AI? If we are not ready, what preparations will we need to make?
06 How should we plan the organization's use of AI so that it is successful and sustainable?
07 How do we find the right balance between the benefits gained from AI and managing potential risks?
08 How can we change our organization's workforce to adapt to the advent of AI?
09 How should we determine the guidelines for collaboration between people and AI?
10 What approach should we choose to manage our organization's AI projects, and what procurement method will we use?
11 How should data be prepared for AI use to gain maximum benefit?
The adoption of AI in organizations is extremely important: everyone must understand the benefits as well as the ethical and societal impacts. The advisory team from Bangkok AI therefore offers this service. The consultants prepare the study and prepare executives to analyze and create a roadmap for using Large Language Model (LLM) AI in the organization.
Specifically,
we provide the following services.