Step-by-Step Guide: How to Integrate Llama 3 with CrewAI
What is Llama 3?
Llama 3 is a family of LLMs, like GPT-4 and Google Gemini. It’s the successor to Llama 2, Meta’s previous generation of AI models. While there are some technical differences between Llama and other LLMs, you would need to be deep into AI for them to mean much. All of these LLMs were developed, and work, in essentially the same way: they use the same transformer architecture and the same development ideas, such as pretraining and fine-tuning.
When you enter a text prompt, or provide Llama 3 with text input in some other way, it attempts to predict the most plausible follow-on text using its neural network: a cascading algorithm with billions of variables (called “parameters”) that’s modelled after the human brain. By assigning different weights to those parameters, and adding a small amount of randomness, Llama 3 can generate remarkably human-like responses.
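The “weights plus a little randomness” step can be illustrated with a toy next-token sampler. This is a simplified sketch of temperature sampling, not Llama 3’s actual decoding code; the token scores below are made up for illustration.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random.Random(0)):
    """Pick a next token from raw model scores (logits) via temperature sampling."""
    # Scale logits: lower temperature sharpens the distribution, higher flattens it
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax: convert scores into probabilities that sum to 1
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # The randomness: sample a token in proportion to its probability
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Toy scores for the prompt "The capital of France is"
logits = {"Paris": 9.0, "Lyon": 4.0, "banana": 0.5}
print(sample_next_token(logits))
```

Lowering the temperature makes the model more deterministic; raising it makes less likely tokens (like “banana” here) appear more often.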
Meta has released four versions of Llama 3 so far:
- Llama 3 8B
- Llama 3 8B-Instruct
- Llama 3 70B
- Llama 3 70B-Instruct
The 8B models have 8 billion parameters, while the two 70B models have 70 billion. Both Instruct models were fine-tuned to better follow human directions, so they’re better suited to chatbot use than the raw base models.
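Parameter count translates roughly into memory needs, which is why the 8B model is the practical choice for local use. As a back-of-the-envelope sketch (actual usage varies with runtime overhead and context length; the helper function here is purely illustrative), each parameter takes 2 bytes at 16-bit precision and about 0.5 bytes at 4-bit quantization:

```python
def approx_model_size_gb(params_billions, bytes_per_param):
    """Rough weight-storage estimate: parameter count times bytes per parameter."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Llama 3 8B: roughly 16 GB at fp16, roughly 4 GB at 4-bit quantization
print(approx_model_size_gb(8, 2.0))
print(approx_model_size_gb(8, 0.5))
# Llama 3 70B: roughly 140 GB at fp16 -- server-class hardware territory
print(approx_model_size_gb(70, 2.0))
```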
Step 1: How to install Llama 3 and enjoy AI capabilities offline
See the companion blog post for instructions on installing it locally.
How to integrate Llama 3 with a CrewAI project
Step 1: Configure environment variables (.env file). To point CrewAI at Ollama’s OpenAI-compatible endpoint, set the following environment variables:
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='crewai-llama3:8b'
OPENAI_API_KEY='NA'
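If you prefer not to use a .env file, the same configuration can be set in-process before CrewAI runs. This is a minimal sketch; the model name assumes the custom model built in the next step:

```python
import os

# Ollama exposes an OpenAI-compatible API on port 11434 by default
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "crewai-llama3:8b"
# Ollama ignores the key, but the OpenAI client requires a non-empty value
os.environ["OPENAI_API_KEY"] = "NA"
```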
Step 2: With Ollama set up, pull Llama 3 by typing the following into the terminal:
ollama pull llama3
1. Create a ModelFile similar to the one below in your project directory.
FROM llama3:8b
# Set parameters
PARAMETER temperature 0.8
PARAMETER stop Result
# Sets a custom system message to specify the behavior of the chat assistant
# Leaving it blank for now.
SYSTEM """"""
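If you are scripting the setup, the ModelFile above can also be written out from Python. A small sketch; the filename `ModelFile` is an assumption that must match whatever name the creation script references:

```python
from pathlib import Path

# Same contents as the ModelFile shown above
MODELFILE = '''FROM llama3:8b

# Set parameters
PARAMETER temperature 0.8
PARAMETER stop Result

# Sets a custom system message to specify the behavior of the chat assistant
# Leaving it blank for now.
SYSTEM """"""
'''

Path("ModelFile").write_text(MODELFILE)
```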
2. Create a script that pulls the base model (llama3) and builds a custom model on top of it using the ModelFile above. Note: this will be a “.sh” file.
#!/bin/zsh
# variables
model_name="llama3:8b"
custom_model_name="crewai-llama3:8b"
# pull the base model
ollama pull "$model_name"
# create the custom model from the ModelFile
ollama create "$custom_model_name" -f ./ModelFile
Note: run the script from the directory that contains both the script file and the ModelFile.
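The same two-step setup can also be driven from Python via subprocess. This is a sketch mirroring the shell script above, not a CrewAI or Ollama API; the model and file names match the ones used earlier:

```python
import subprocess

def build_custom_model(base="llama3:8b",
                       custom="crewai-llama3:8b",
                       modelfile="./ModelFile"):
    """Return the two ollama CLI invocations from the shell script above."""
    return [
        ["ollama", "pull", base],
        ["ollama", "create", custom, "-f", modelfile],
    ]

# To actually run them (requires the ollama CLI to be installed):
# for cmd in build_custom_model():
#     subprocess.run(cmd, check=True)
```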
Step 3: Use the custom model in a CrewAI script.
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
import os

os.environ["OPENAI_API_KEY"] = "NA"

# Custom Llama 3 8B model built above
llm = ChatOpenAI(
    model="crewai-llama3:8b",
    base_url="http://localhost:11434/v1"
)
# Math Professor Agent
general_agent = Agent(
    role="Math Professor",
    goal="""Provide solutions to students who ask mathematical
    questions and give them the answer.""",
    backstory="""You are an excellent math professor who likes to solve math
    questions in a way that everyone can understand your solution.""",
    allow_delegation=False,
    verbose=True,
    llm=llm
)
# Task
task = Task(description="""What is 3 + 5?""", agent=general_agent)
crew = Crew(
agents=[general_agent],
tasks=[task],
verbose=2
)
result = crew.kickoff()
print(result)
Output:
[DEBUG]: == [Math Professor]
Task output: `### 3 + 5 = 8`
I hope my answer meets your expectations!
Conclusion
Integrating CrewAI with different LLMs expands the framework’s versatility, allowing for customised, efficient AI solutions across various domains and platforms.