Deploying LangChain to Fly.io

Overhead view of a keyboard, monitor, mouse, cup of coffee, and small potted plant.
Image by Annie Ruygt

In this post we deploy a minimal LangChain app to Fly.io using Flask. Check it out: you can be up and running on Fly.io in just minutes.

I hear about Large Language Models (LLM) everywhere these days! Do you? 🤔

LLMs are a type of natural language processing (NLP) technology that uses advanced deep learning techniques to generate human-like language. Even if you haven’t heard the term LLM, you’ve probably heard of one of today’s most notable examples: ChatGPT. ChatGPT is a language model developed by OpenAI; it was trained on a large amount of text data, which allows it to pick up the patterns of language and generate responses to inputs.

LangChain is a Python framework that rapidly gained popularity. It was launched as an open source project in October 2022 - yes, just a few months ago. This framework was designed to simplify the creation of powerful applications by providing ways to interact with LLMs.

I recently created a minimal application using LangChain and deployed it to Fly.io. This article walks through the process of deploying that minimal LangChain app to Fly.io using Flask.

Flask is a Python micro framework for building web applications. That’s perfect for our example since it’s designed to make getting started quick and easy. That’s all we need for now.

Let’s get to it! 😎

LangChain Models 🦜 🔗

LangChain provides an interface to interact with several LLMs.

The template uses the OpenAI LLM wrapper, which, at the time I’m writing this article, uses the text-davinci-003 model by default - this model belongs to the GPT-3.5 family. Keep in mind that there are more capable and less expensive alternatives, like gpt-3.5-turbo, which is the one recommended by OpenAI because of its lower cost. However, we won’t get into that in this article.

Language models take text as input. This text is what we usually refer to as a prompt. LangChain facilitates the construction and reuse of those prompts. To make things a bit more interesting, the template makes use of a PromptTemplate: it asks a question and also receives an input from the user.
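The templating idea itself is easy to see even without LangChain; here’s a plain-Python sketch of what a prompt template does (LangChain’s PromptTemplate adds input-variable validation and more on top of the same idea):

```python
# Toy sketch of a prompt template: a template string with a named
# placeholder, filled in with the user's input at call time.
TEMPLATE = "What are the 3 best places to eat in {place}?"

def format_prompt(place: str) -> str:
    """Fill the placeholder with the user's input."""
    return TEMPLATE.format(place=place)

print(format_prompt("Berlin"))
# What are the 3 best places to eat in Berlin?
```

The formatted string is what actually gets sent to the model.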

Our Application 🍽

Our minimal application receives a place (city, country, etc.) as input and gives us 3 options for where to eat in that place. The default value for place is Berlin.

Our prompt:

What are the 3 best places to eat in <place>?

# hello.py
import os

from flask import Flask, render_template
import openai
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

app = Flask(__name__)

openai.api_key = os.getenv("OPENAI_API_KEY")


@app.route("/")
@app.route("/<place>")
def hello(place="Berlin"):
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=['place'],
        template="What are the 3 best places to eat in {place}?",
    )
    question = prompt.format(place=place)
    return render_template(
        'hello.html',
        place=place,
        answer=llm(question).split("\n\n")
    )
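The split("\n\n") at the end deserves a note: text-davinci-003 tends to return the suggestions as blank-line-separated paragraphs, so splitting on "\n\n" hands the template a list to render. A sketch with a hypothetical completion (real model output varies per request):

```python
# Hypothetical completion imitating the blank-line-separated format
# that text-davinci-003 tends to return (real output varies).
completion = (
    "1. Example Trattoria - a hypothetical Italian place.\n\n"
    "2. Example Kebap - a hypothetical street-food stand.\n\n"
    "3. Example Burger - a hypothetical burger joint."
)

# The same split the view function applies before rendering the template:
answer = completion.split("\n\n")
print(answer)  # a list of three strings, one per suggested place
```

If the completion starts with leading newlines, the first list element will be an empty string, which the template can simply skip when rendering.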

You can define your own input variable by visiting the URL:

http://127.0.0.1:5000/<place>

For example:

  • Country: http://127.0.0.1:5000/norway
  • City: http://127.0.0.1:5000/prague

To illustrate, we use the hello.html template to display the results in the browser.
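The hello.html file ships with the template repository and isn’t reproduced in this article; a minimal version could look something like this sketch (hypothetical markup, not the template’s exact file):

```html
<!-- templates/hello.html - hypothetical minimal sketch -->
<!doctype html>
<title>Where to eat</title>
<h1>3 places to eat in {{ place }}</h1>
<ul>
  {% for option in answer %}
    {% if option %}<li>{{ option }}</li>{% endif %}
  {% endfor %}
</ul>
```

The `{% if option %}` guard skips any empty strings produced when the completion starts with blank lines.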

So, let’s start at the beginning…

Setting up ⚒️

We assume the initial setup is already done and you have Python installed. It’s recommended to use the latest version of Python. We are using Flask 2.2.3, which supports Python 3.8 and newer.

Create and enter your project’s folder:

mkdir my-fly-langchain
cd my-fly-langchain

Go ahead and clone the repository inside your project’s folder using either

HTTPS:

git clone https://github.com/fly-apps/hello-fly-langchain.git .

or SSH:

git clone git@github.com:fly-apps/hello-fly-langchain.git .

Virtual Environment

Choose a virtual environment to manage your dependencies. For simplicity, we’re using venv for this project. Inside your project’s folder, create and activate it:

# Unix/macOS
python3 -m venv venv
source venv/bin/activate
(venv) $

# Windows
py -3 -m venv venv
venv\Scripts\activate
(venv) $

From this point on, the commands won’t be displayed with (venv) $, but we assume you have your Python virtual environment activated.

Install Dependencies from requirements.txt

For this minimal example, we have a few dependencies to be installed:

# requirements.txt
Flask==2.2.3
gunicorn==20.1.0
langchain==0.0.148
openai==0.27.4
python-dotenv==1.0.0

Go ahead and install them by running:

python -m pip install -r requirements.txt

We are using the Flask, langchain and openai packages as the minimal requirements for this example. gunicorn (Green Unicorn) is the pure-Python WSGI server we will use in production instead of the built-in development server - other options can be found here. Finally, python-dotenv lets the flask command automatically load the environment variables set in the .env file - more about that in the next section.

Environment Variables

The template contains a .env.dist file. Go ahead and rename it to .env. Our local environment variables will be stored in this .env file:

# .env (rename .env.dist file)
FLASK_APP=hello
OPENAI_API_KEY=<your-openai-api-secret-key>

The OpenAI API uses API keys for authentication. We will need an API key to be able to use the API in our requests. Log in to your account and check the OpenAI API Key page to create or retrieve your API key, to be set as OPENAI_API_KEY.

Note that OPENAI_API_KEY is required because we are using the OpenAI LLM wrapper - other providers will have different requirements. Here is a list of multiple LLM providers.

You can find other options for setting the environment variables here, like setting them on the command line or creating a .flaskenv file instead.

The .env file is only used for your local development.

Local Development

Now that everything is set up we can run the project:

flask run
 * Serving Flask app 'hello'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5000
Press CTRL+C to quit

Now, we can head over to http://127.0.0.1:5000 🎉

Note that the flask run command works since we set FLASK_APP in the .env file. In this case, it wasn’t necessary to run the command with the --app option. If our FLASK_APP setting were not set, we would need to run: flask --app <app> run

With our LangChain app prepped and running on our local machine, let’s move to the next section and deploy our app to Fly.io!

Deploying to Fly.io 🚀

flyctl is the command-line utility provided by Fly.io.

If not installed yet, follow these instructions, sign up and log in to Fly.io.

New customers’ organizations use V2 of the Fly Apps platform, running on Fly Machines. If you’re already a customer, you can flip the switch to start deploying your new apps to Apps V2 with fly orgs apps-v2 default-on <org-slug>.

Launching Our App

Before deploying our app, we first need to configure and launch it on Fly.io using the flyctl command fly launch. During the process, we will:

  • Choose an app name: this will be your dedicated fly.dev subdomain.
  • Select the organization: you can create a new organization or deploy to your personal account (connected to your Fly account, visible only to you).
  • Choose the region for deployment: Fly.io initially suggests the one closest to you; you can choose another region if you prefer.

This is what it looks like when we run fly launch:

fly launch

Creating app in ../flyio/my-fly-langchain
An existing fly.toml file was found for app hello-fly-langchain
? Would you like to copy its configuration to the new app? Yes
Scanning source code
Detected a Python app
Using the following build configuration:
        Builder: paketobuildpacks/builder:base
? Choose an app name (leaving blank will default to 'hello-fly-langchain') my-fly-langchain
? Select Organization: Fly.io (fly-io)
App will use 'ams' region as primary
Created app 'my-fly-langchain' in organization 'fly-io'
Admin URL: https://fly.io/apps/my-fly-langchain
Hostname: my-fly-langchain.fly.dev
? Overwrite "../flyio/my-fly-langchain/.dockerignore"? No
? Overwrite "../flyio/my-fly-langchain/Procfile"? No
? Would you like to set up a Postgresql database now? No
? Would you like to set up an Upstash Redis database now? No
Wrote config file fly.toml
Validating ../flyio/my-fly-langchain/fly.toml
Platform: machines
✓ Configuration is valid
Run "fly deploy" to deploy your application.

If you cloned the template mentioned in this article, you will see a message similar to the one described above.

The template provides you with an existing fly.toml file; you can copy its configuration to your new app.

An existing fly.toml file was found for app hello-fly-langchain
? Would you like to copy its configuration to the new app? Yes

Go ahead and define your app name and select the organization to deploy your app to.

? Choose an app name (leaving blank will default to 'hello-fly-langchain') my-fly-langchain
? Select Organization: Fly.io (fly-io)

The template also provides you with existing .dockerignore and Procfile files. Those files are generated for you if they don’t exist in your project. If they do exist, make sure you update them to fit your needs.

? Overwrite "../flyio/my-fly-langchain/.dockerignore"? No
? Overwrite "../flyio/my-fly-langchain/Procfile"? No

Note that the built-in Python builder used (paketobuildpacks/builder:base) will automatically copy over the contents of the directory to the deployable image.

# fly.toml
...
[build]
  builder = "paketobuildpacks/builder:base"
...

To keep it simple, a Procfile is used to deploy and run Python applications - the minimal generated Procfile starts the Gunicorn server with our WSGI application.

# Procfile
web: gunicorn hello:app

By now, we are almost ready to deploy our app. Before we do that, we need to set the environment variables to be used in production. Let’s see how that’s done.

Environment Variables

As mentioned before, for our local development we are using the .env file to set our environment variables. In production, we can’t ship such a file with sensitive values.

We can set secret values for our app using the flyctl secrets command:

fly secrets set OPENAI_API_KEY=<your-openai-api-secret-key>

That’s it! We are now ready to deploy our app!

Deploying Our App

Let’s simply run:

fly deploy
==> Verifying app config
Validating ../flyio/my-fly-langchain/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Remote builder fly-builder-ancient-surf-8247 ready
==> Building image with Buildpacks
--> docker host: 20.10.12 linux x86_64
base: Pulling from paketobuildpacks/builder
...
Paketo Buildpack for Procfile 5.6.1
  https://github.com/paketo-buildpacks/procfile
  Process types:
    web: gunicorn hello:app
...
--> Pushing image done
image: registry.fly.io/my-fly-langchain:deployment-01GYZ27HQF3C7MQ9EB8VGJAE9Z
image size: 378 MB
Provisioning ips for my-fly-langchain
  Dedicated ipv6: 2a09:8280:1::37:12bc
  Shared ipv4: 66.241.124.47
  Add a dedicated ipv4 with: fly ips allocate-v4
Process groups have changed. This will:
 * create 1 "app" machine

No machines in group app, launching one new machine
  Machine 4d89696a2ed508 [app] update finished: success
Creating a second machine to increase service availability
  Machine 4d89699ce71578 [app] update finished: success
Finished launching new machines
Updating existing machines in 'my-fly-langchain' with rolling strategy
  Finished deploying
Visit your newly deployed app at https://my-fly-langchain.fly.dev

Our app should be up and running!

fly open

Let’s try it: https://<your-app-name>.fly.dev/<your-city>

YAY! 🎉 We just deployed our LangChain app to production! Cool, right? 😎

Fly.io ❤️ all things Python.

Fly.io makes it easier to deploy your apps and move them closer to your users!

Deploy a Python app today!

What’s Next?

Our app does the job of finding new places to eat! Now that we’ve given it a try, you’re probably wondering: what’s next?

We got some options for where to eat tonight in Berlin, here where I live! That’s a great start for what’s possible with LangChain. But there’s a LOT more!

Let’s say that I’m meeting my best friend in Berlin for dinner tomorrow.

From all the places I could find in Berlin, I want to get the name, address, and working hours of the ones that serve Italian food (because we all love Italian food, right?) and are close to Neukölln - my best friend’s neighbourhood. The places also need to be top-rated, with a rating higher than 4.5 on Google Maps, and be open tomorrow at 7pm.

And we could go on and on here.

That looks a bit more complex and…

It started to look like a chain (aha!) of calls that also depends on the user’s input. That’s when simple applications start to become more powerful.

Note that our chain depends on the user's input. Not only that, but some real-time information, like current working hours and ratings on Google Maps, is not available to us.

AI language models don’t have access to real-time data, nor the ability to browse the internet.

Agents joined the chat ⛓

For these types of chains, we have to interact with the outside world to get some answers!

That’s when agents come into play.

The “agent” has access to a set of tools. Depending on the user’s input, the agent can decide which, if any, of the available tools to call - you can also build your own custom tools and agents.

Those tools are the way we interact with the rest of the world - in our case, using the Google Places API to get real-time information such as working hours and ratings.
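In a real LangChain agent, an LLM reads each tool’s description and decides which one to call. As a toy illustration of that decision loop only - plain keyword matching, hypothetical tool names and canned data, not LangChain’s API and not the Google Places API:

```python
# Toy sketch of agent-style tool selection. A real agent lets the LLM
# choose a tool from its description; a naive keyword match stands in
# here. All names and return values are hypothetical stand-ins.

def lookup_opening_hours(place: str) -> str:
    # Stand-in for a real Places API call.
    return f"{place}: open 12:00-22:00"

def lookup_rating(place: str) -> str:
    # Stand-in for a real Places API call.
    return f"{place}: rated 4.6"

TOOLS = {
    "hours": lookup_opening_hours,
    "rating": lookup_rating,
}

def run_agent(question: str, place: str) -> str:
    """Pick a tool based on the question, or fall back to the model."""
    for keyword, tool in TOOLS.items():
        if keyword in question.lower():
            return tool(place)
    return "No tool needed - answer from the model directly."

print(run_agent("What are the opening hours?", "Trattoria Venezia"))
```

The real thing is far more flexible: the LLM can chain several tool calls and feed each result back into its reasoning before answering.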

That’s so neat, and it doesn’t even scratch the surface. There is so much more out there - and that’s something for future articles! Here you can find a curated list of tools and projects using LangChain.

Happy coding! 🤖

Got Feedback?

For more detailed information on how to deploy a Python App to Fly.io, you can check the Fly.io Docs.

If you have any questions or comments, reach out on the Fly.io Community. That’s a great place to share knowledge, and to help and get help!

📢 Now, tell me… What are the cool ideas you have now using LangChain? 👩🏽‍💻