
Llama 2: Creating Your Own Text Generation API

Step-by-step guide to creating your own Llama 2 API with ExLlama and RunPod


What is Llama 2?

Llama 2 is an open-source large language model (LLM) released by Mark Zuckerberg's Meta. It is available in three sizes (7B, 13B, and 70B parameters), each with a chat-optimized variant, and is designed to power applications across a range of use cases.


Despite Meta's admission that Llama 2 lags behind GPT-4, the LLM behind OpenAI's ChatGPT, its release represents a significant step forward. As an open-source LLM, Llama 2 is freely available for startups, established businesses, and individual operators to access, use, and adapt to their own purposes.


Creating a Llama 2 Text Generation API using ExLlama

To help you leverage Llama 2's capabilities, we've created a detailed tutorial to guide you through creating a Llama 2-powered text generation API.


This tutorial provides step-by-step guidance, beginning with setting up the serverless computing platform, RunPod, proceeding to the integration of ExLlama, and concluding with testing via Postman.
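The video tests the finished endpoint with Postman, but once it is deployed you can call it from code as well. Below is a minimal Python sketch that POSTs a prompt to RunPod's serverless runsync route. The endpoint ID, API key, and the exact shape of the "input" payload are placeholders and assumptions; they depend on how your ExLlama worker is configured in the video.

# Minimal sketch of calling a deployed RunPod serverless endpoint.
# RUNPOD_API_KEY, ENDPOINT_ID, and the "input" schema below are placeholders
# that depend on your own RunPod account and ExLlama worker.
import requests

RUNPOD_API_KEY = "YOUR_RUNPOD_API_KEY"   # placeholder: your RunPod API key
ENDPOINT_ID = "YOUR_ENDPOINT_ID"         # placeholder: your serverless endpoint ID

def generate(prompt: str) -> dict:
    """Send a prompt to the RunPod runsync route and return the JSON response."""
    url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
    headers = {
        "Authorization": f"Bearer {RUNPOD_API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {"input": {"prompt": prompt}}  # schema depends on your worker's handler
    response = requests.post(url, headers=headers, json=payload, timeout=300)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(generate("Tell me a cat joke."))
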


This video guide demystifies the process, making it accessible and achievable even for those with limited experience.


Check the video here:


⚠️ Follow along with the entire video as-is; the only modifications you'll need to make are:

  • At 2:31, for MODEL_REPO, use this value "TheBloke/Llama-2-13B-chat-GPTQ"

  • At 4:45, for the prompt, use this prompt template syntax:


[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {prompt}[/INST]

Replace "{prompt}" with your text prompt, e.g. "Tell me a cat joke."


Also, to continue a conversation, use this prompt template syntax:


[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {prompt}[/INST]{model_reply}  [INST]{prompt}[/INST]

Again, replace the first "{prompt}" with your original text prompt (e.g. "Tell me a cat joke."), replace {model_reply} with the model's previous answer, and put your follow-up prompt in the second {prompt} so the conversation retains its context.
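
Here is the same idea as a helper for a follow-up turn, reusing the SYSTEM_PROMPT constant from the sketch above. The function name and arguments are illustrative; the layout mirrors the continuation template shown in this post.

# Sketch: continuing a conversation. The previous model reply follows the first
# [/INST], and the follow-up question goes in a new [INST]...[/INST] block.
def build_followup_prompt(first_prompt: str, model_reply: str, next_prompt: str) -> str:
    """Build a two-turn prompt that retains the context of the first exchange."""
    return (
        f"[INST] <<SYS>> {SYSTEM_PROMPT} <</SYS>> {first_prompt}[/INST]"
        f"{model_reply}  [INST]{next_prompt}[/INST]"
    )

print(build_followup_prompt(
    "Tell me a cat joke.",
    "Why did the cat sit on the computer? To keep an eye on the mouse!",
    "Tell me another one.",
))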


For more information about this model, see: https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ


Summary

With our step-by-step tutorial, you'll find it straightforward to create your own text generation API using Llama 2 and ExLlama on RunPod. It's an exciting way to put a large language model to work and begin generating text with your very own AI. 🙌

