diff --git a/docs/languages/python/examples/ecr-irsa.md b/docs/languages/python/examples/ecr-irsa.md new file mode 100644 index 00000000..c627e3cf --- /dev/null +++ b/docs/languages/python/examples/ecr-irsa.md @@ -0,0 +1,189 @@ +Call AWS services from a function using ambient credentials instead of static access keys. With [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), the function's pod is automatically assigned temporary credentials via a Kubernetes Service Account mapped to an IAM role. + +Use-cases: + +* Accessing any AWS service (S3, DynamoDB, SQS, ECR, etc.) without static keys +* Meeting security policies that prohibit long-lived credentials +* Simplifying secret rotation by relying on short-lived tokens + +This example creates and queries ECR repositories using `boto3`, but the same approach works for any AWS service. It requires OpenFaaS to be deployed on [AWS EKS](https://aws.amazon.com/eks/) with IRSA enabled. See [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) for setup, or [Manage AWS Resources from OpenFaaS Functions With IRSA](https://www.openfaas.com/blog/irsa-functions/) for an end-to-end walkthrough. 
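With IRSA in place, no credentials appear in the function's code or configuration: EKS injects the `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables into the pod, and boto3's default credential chain uses them to assume the role. A minimal sketch for confirming this from inside a function (the `whoami` helper is illustrative and only works in-cluster):

```python
import os

def irsa_configured() -> bool:
    # EKS's pod identity webhook injects these two variables when the
    # pod's service account is mapped to an IAM role; boto3's default
    # credential chain picks them up automatically.
    return bool(os.getenv("AWS_ROLE_ARN")) and bool(
        os.getenv("AWS_WEB_IDENTITY_TOKEN_FILE")
    )

def whoami():
    # In-cluster only: ask STS which role the pod has assumed.
    import boto3
    return boto3.client("sts").get_caller_identity()["Arn"]
```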
+ +## Overview + +handler.py: + +```python +import os +import json +import boto3 + +ecrClient = None + +def initECR(): + session = boto3.Session( + region_name=os.getenv('AWS_REGION'), + ) + return session.client('ecr') + +def handle(event, context): + global ecrClient + + if ecrClient is None: + ecrClient = initECR() + + if event.method != 'POST': + return { + "statusCode": 405, + "body": "Method not allowed" + } + + body = json.loads(event.body) + name = body.get('name') + + if not name: + return { + "statusCode": 400, + "body": "Missing in body: name" + } + + # Check if the repository already exists + try: + ecrClient.describe_repositories(repositoryNames=[name]) + return { + "statusCode": 200, + "body": json.dumps({"message": "Repository already exists"}) + } + except ecrClient.exceptions.RepositoryNotFoundException: + pass + + # Create the repository + response = ecrClient.create_repository( + repositoryName=name, + imageTagMutability='MUTABLE', + encryptionConfiguration={ + 'encryptionType': 'AES256', + }, + imageScanningConfiguration={ + 'scanOnPush': False, + }, + ) + + return { + "statusCode": 201, + "body": json.dumps({ + "arn": response['repository']['repositoryArn'] + }) + } +``` + +requirements.txt: + +``` +boto3 +``` + +stack.yaml: + +```yaml +functions: + ecr-create-repo: + lang: python3-http-debian + handler: ./ecr-create-repo + image: ttl.sh/openfaas-examples/ecr-create-repo:latest + annotations: + com.openfaas.serviceaccount: openfaas-create-ecr-repo + environment: + AWS_REGION: eu-west-1 +``` + +No secrets are needed. The `com.openfaas.serviceaccount` annotation tells OpenFaaS which Kubernetes Service Account to attach to the function's pod. EKS then mounts a short-lived token for that service account, and the AWS SDK picks up the credentials automatically — no access keys to store or rotate. + +The `AWS_REGION` environment variable is required by the SDK to know which region to connect to. 
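The handler returns a 400 only when `name` is missing; a name that breaks ECR's naming rules would instead surface as an unhandled `InvalidParameterException`. A pre-check could be added before calling ECR, sketched below with a pattern based on the repository name constraints in the ECR documentation (verify it against the current docs before relying on it):

```python
import re

# Lowercase alphanumeric segments separated by ".", "_" or "-",
# with "/" acting as a namespace separator.
ECR_NAME = re.compile(
    r"(?:[a-z0-9]+(?:[._-][a-z0-9]+)*/)*[a-z0-9]+(?:[._-][a-z0-9]+)*"
)

def valid_repo_name(name: str) -> bool:
    # ECR also enforces a length of 2 to 256 characters.
    return 2 <= len(name) <= 256 and ECR_NAME.fullmatch(name) is not None
```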
+ +## Step-by-step walkthrough + +### Create an IAM Policy + +Create a policy that grants the permissions your function needs: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ecr:CreateRepository", + "ecr:DeleteRepository", + "ecr:DescribeRepositories" + ], + "Resource": "*" + } + ] +} +``` + +Save the above to `ecr-policy.json` and create the policy: + +```bash +aws iam create-policy \ + --policy-name ecr-create-query-repository \ + --policy-document file://ecr-policy.json +``` + +Note the ARN from the output, e.g. `arn:aws:iam::ACCOUNT_NUMBER:policy/ecr-create-query-repository`. + +### Create an IAM Role and Kubernetes Service Account + +Use `eksctl` to create a Kubernetes Service Account in the `openfaas-fn` namespace that is linked to an IAM role with the policy attached: + +```bash +export ARN=arn:aws:iam::ACCOUNT_NUMBER:policy/ecr-create-query-repository + +eksctl create iamserviceaccount \ + --name openfaas-create-ecr-repo \ + --namespace openfaas-fn \ + --cluster CLUSTER_NAME \ + --role-name ecr-create-query-repository \ + --attach-policy-arn $ARN \ + --region eu-west-1 \ + --approve +``` + +Replace `CLUSTER_NAME` with the name of your EKS cluster. + +This can also be done manually by creating the IAM Role in AWS, followed by a Kubernetes Service Account annotated with `eks.amazonaws.com/role-arn`. + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-http-debian +faas-cli new --lang python3-http-debian ecr-create-repo \ + --prefix ttl.sh/openfaas-examples +``` + +Update `ecr-create-repo/handler.py` and `ecr-create-repo/requirements.txt` with the code from the overview above. 
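Before deploying, the early-return logic can be exercised locally without AWS access. The stub below is a hypothetical stand-in for the event object that the `python3-http-debian` template passes to `handle`, and `check_request` mirrors the handler's validation steps:

```python
import json

class StubEvent:
    """Minimal stand-in for the template's event object."""
    def __init__(self, method="POST", body=b"", headers=None, query=None):
        self.method = method
        self.body = body
        self.headers = headers or {}
        self.query = query or {}

def check_request(event):
    # Mirror of the handler's early validation, for local testing only.
    if event.method != "POST":
        return {"statusCode": 405, "body": "Method not allowed"}
    if not json.loads(event.body).get("name"):
        return {"statusCode": 400, "body": "Missing in body: name"}
    return None  # valid request; the real handler would call ECR next
```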
+ +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter ecr-create-repo \ + --tag digest +``` + +Create a new ECR repository by invoking the function: + +```bash +curl -X POST http://127.0.0.1:8080/function/ecr-create-repo \ + -H "Content-Type: application/json" \ + -d '{"name":"tenant1/fn1"}' +``` + +The response contains the ARN of the newly created repository: + +```json +{"arn": "arn:aws:ecr:eu-west-1:ACCOUNT_NUMBER:repository/tenant1/fn1"} +``` diff --git a/docs/languages/python/examples/kafka.md b/docs/languages/python/examples/kafka.md new file mode 100644 index 00000000..c6132793 --- /dev/null +++ b/docs/languages/python/examples/kafka.md @@ -0,0 +1,139 @@ +Publish messages to a Kafka topic from a function using the `confluent-kafka` package. This lets you bridge HTTP-triggered functions into event-driven pipelines, using Kafka as a decoupling layer between your API and downstream consumers. + +Use-cases: + +* Publishing events or audit logs to a Kafka topic +* Decoupling workloads by writing to a message bus +* Feeding data pipelines from HTTP endpoints + +This example uses the `confluent-kafka` package with SASL/SSL authentication. Broker credentials are stored as [OpenFaaS secrets](/reference/secrets/). + +If you'd like to trigger functions from Kafka topics instead, see [Trigger functions from Kafka](/openfaas-pro/kafka-events). + +## Overview + +handler.py: + +```python +import os +import socket +from confluent_kafka import Producer + +# Initialise the producer once and reuse it across invocations +# to keep the broker connection alive between requests. 
+kafkaProducer = None + +def initProducer(): + username = read_secret('kafka-broker-username') + password = read_secret('kafka-broker-password') + broker = os.getenv("kafka_broker") + + conf = { + 'bootstrap.servers': broker, + 'security.protocol': 'SASL_SSL', + 'sasl.mechanism': 'PLAIN', + 'sasl.username': username, + 'sasl.password': password, + 'client.id': socket.gethostname() + } + + return Producer(conf) + +def handle(event, context): + global kafkaProducer + + if kafkaProducer is None: + kafkaProducer = initProducer() + + topic = 'faas-request' + + # Produce the request body as a message and wait for delivery + kafkaProducer.produce(topic, value=event.body) + kafkaProducer.flush() + + return { + "statusCode": 200, + "body": "Message produced to {}".format(topic) + } + +def read_secret(name): + with open("/var/openfaas/secrets/" + name, "r") as f: + return f.read().strip() +``` + +requirements.txt: + +``` +confluent-kafka +``` + +stack.yaml: + +```yaml +functions: + kafka-producer: + lang: python3-http-debian + handler: ./kafka-producer + image: ttl.sh/openfaas-examples/kafka-producer:latest + environment: + kafka_broker: ":9092" + secrets: + - kafka-broker-username + - kafka-broker-password +``` + +The Debian variant of the template is required because `confluent-kafka` depends on `librdkafka`, a native C library that will not build on Alpine. + +The Kafka producer is initialised once on first invocation and reused for subsequent requests, keeping the broker connection alive between calls and avoiding the overhead of re-authenticating on every request. + +The `SASL_SSL` security protocol combines SASL authentication with TLS encryption. The `sasl.mechanism` must match your broker's configuration: + +- `PLAIN` — standard for managed services such as Confluent Cloud and Aiven. +- `SCRAM-SHA-256` / `SCRAM-SHA-512` — common for self-hosted brokers. +- `GSSAPI` — Kerberos-based authentication. 
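Rather than hard-coding `PLAIN`, the mechanism could be read from configuration so the same handler works against different brokers. A sketch, where the `sasl_mechanism` environment variable is an assumption rather than part of the template:

```python
import os
import socket

def build_conf(username, password):
    # Assemble the producer config, letting the SASL mechanism be
    # overridden per deployment instead of being hard-coded.
    return {
        "bootstrap.servers": os.getenv("kafka_broker", ""),
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": os.getenv("sasl_mechanism", "PLAIN"),
        "sasl.username": username,
        "sasl.password": password,
        "client.id": socket.gethostname(),
    }
```

The returned dict can be passed straight to `Producer(conf)` in `initProducer`.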
+ +## Step-by-step walkthrough + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-http-debian +faas-cli new --lang python3-http-debian kafka-producer \ + --prefix ttl.sh/openfaas-examples +``` + +The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use. + +Update `kafka-producer/handler.py` and `kafka-producer/requirements.txt` with the code from the overview above. + +### Create secrets for Kafka broker credentials + +Store your Kafka broker username and password as OpenFaaS secrets. This keeps credentials out of environment variables and the function's container image. + +Save your broker username to `kafka-broker-username.txt` and your broker password to `kafka-broker-password.txt`, then run: + +```bash +faas-cli secret create kafka-broker-username --from-file kafka-broker-username.txt +faas-cli secret create kafka-broker-password --from-file kafka-broker-password.txt +``` + +At runtime, the secrets are mounted as files under `/var/openfaas/secrets/` inside the function container. + +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter kafka-producer \ + --tag digest +``` + +Publish a message to the Kafka topic by invoking the function: + +```bash +curl http://127.0.0.1:8080/function/kafka-producer \ + --data "Hello from OpenFaaS" +``` diff --git a/docs/languages/python/examples/openai.md b/docs/languages/python/examples/openai.md new file mode 100644 index 00000000..477c7424 --- /dev/null +++ b/docs/languages/python/examples/openai.md @@ -0,0 +1,227 @@ +Send prompts to the [OpenAI Responses API](https://platform.openai.com/docs/guides/text?lang=python) and return the response. Running this inside an OpenFaaS function lets you trigger AI inference on demand via HTTP, or integrate it into event-driven workflows. 
+ +Use-cases: + +* Chatbots and conversational interfaces +* Content generation and summarisation +* Adding AI features to existing workflows + +This example sends a prompt and returns the full completion. The API key is stored as an [OpenFaaS secret](/reference/secrets/). + +## Overview + +handler.py: + +```python +from openai import OpenAI + +# Initialise the client once and reuse it across invocations +# to avoid reading the secret and creating a new client on every request. +client = None + +def initClient(): + apiKey = read_secret('openai-api-key') + return OpenAI(api_key=apiKey) + +def handle(event, context): + global client + + if client is None: + client = initClient() + + # Send the request body as a user message + response = client.responses.create( + model="gpt-5.4-nano", + input=event.body.decode("utf-8") + ) + + return { + "statusCode": 200, + "body": response.output_text + } + +def read_secret(name): + with open("/var/openfaas/secrets/" + name, "r") as f: + return f.read().strip() +``` + +requirements.txt: + +``` +openai +``` + +stack.yaml: + +```yaml +functions: + openai-chat: + lang: python3-http + handler: ./openai-chat + image: ttl.sh/openfaas-examples/openai-chat:latest + secrets: + - openai-api-key +``` + +The `openai` package is pure Python, so the Alpine-based `python3-http` template works here. + +- The OpenAI client is initialised once on first invocation and reused for subsequent requests, avoiding the overhead of re-reading the secret and re-establishing the HTTP connection on every call. +- The `read_secret` helper reads the API key from `/var/openfaas/secrets/`. OpenFaaS mounts secrets as files at that path at runtime — this is preferred over environment variables as the values are not visible in the process environment or container spec. 
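When developing outside the cluster, `/var/openfaas/secrets/` does not exist. A hedged variation of `read_secret` that falls back to an environment variable (e.g. `OPENAI_API_KEY`) keeps local runs working without changing the deployed behaviour:

```python
import os

def read_secret(name, base="/var/openfaas/secrets/"):
    # Prefer the mounted secret file; fall back to an env var derived
    # from the secret name, e.g. openai-api-key -> OPENAI_API_KEY.
    path = os.path.join(base, name)
    if os.path.exists(path):
        with open(path, "r") as f:
            return f.read().strip()
    return os.environ[name.replace("-", "_").upper()]
```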
+ +## Step-by-step walkthrough + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-http +faas-cli new --lang python3-http openai-chat \ + --prefix ttl.sh/openfaas-examples +``` + +The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use. + +Update `openai-chat/handler.py` and `openai-chat/requirements.txt` with the code from the overview above. + +### Create a secret for the API key + +Store your [OpenAI API key](https://platform.openai.com/api-keys) as an OpenFaaS secret. This keeps the key out of environment variables and the function's container image. + +Save your API key to `openai-api-key.txt`, then run: + +```bash +faas-cli secret create openai-api-key --from-file openai-api-key.txt +``` + +At runtime, the secret is mounted as a file under `/var/openfaas/secrets/` inside the function container. + +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter openai-chat \ + --tag digest +``` + +Send a prompt to the function: + +```bash +curl http://127.0.0.1:8080/function/openai-chat \ + --data "What is the capital of France?" +``` + +## Streaming responses + +To stream tokens back to the client as they are generated, use the `python3-flask` template instead. Flask lets the handler return a `Response` object backed by a generator, which yields each token as a [Server-Sent Event (SSE)](sse.md). + +### Overview + +handler.py: + +```python +from flask import Response +from openai import OpenAI + +# Initialise the client once and reuse it across invocations +# to avoid reading the secret and creating a new client on every request. 
+client = None + +def initClient(): + apiKey = read_secret('openai-api-key') + return OpenAI(api_key=apiKey) + +def handle(req): + global client + + if client is None: + client = initClient() + + def generate(): + # Request a streaming response from OpenAI + with client.responses.stream( + model="gpt-5.4-nano", + input=req, + ) as stream: + # Yield each text delta as an SSE event + for event in stream: + if event.type == "response.output_text.delta": + yield f"data: {event.delta}\n\n" + + yield "data: [DONE]\n\n" + + # Return a streaming Flask response + return Response(generate(), mimetype='text/event-stream') + +def read_secret(name): + with open("/var/openfaas/secrets/" + name, "r") as f: + return f.read().strip() +``` + +requirements.txt: + +``` +openai +``` + +stack.yaml: + +```yaml +functions: + openai-stream: + lang: python3-flask + handler: ./openai-stream + image: ttl.sh/openfaas-examples/openai-stream:latest + secrets: + - openai-api-key +``` + +The `generate()` inner function yields each text delta as an SSE `data:` event, with a final `[DONE]` event to signal the end of the stream. The same API key secret is reused from the non-streaming example. + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-flask +faas-cli new --lang python3-flask openai-stream \ + --prefix ttl.sh/openfaas-examples +``` + +Update `openai-stream/handler.py` and `openai-stream/requirements.txt` with the code from the overview above. + +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter openai-stream \ + --tag digest +``` + +Send a prompt and stream the response. The `-N` flag disables curl's output buffering so tokens appear as they arrive. 
The `Accept: text/event-stream` header tells the OpenFaaS gateway to stream the response instead of buffering it: + +```bash +curl -N http://127.0.0.1:8080/function/openai-stream \ + -H "Accept: text/event-stream" \ + -H "Content-Type: text/plain" \ + -d "Explain what SSE is in two sentences." +``` + +You should see tokens appear incrementally as OpenAI generates them: + +``` +data: Server +data: -Sent +data: Events +data: ( +data: SSE +data: ) +... +data: [DONE] +``` + +See also: [Stream Server-Sent Events (SSE)](sse.md) for the general SSE pattern, or [Stream OpenAI responses from functions using Server Sent Events](https://www.openfaas.com/blog/openai-streaming-responses/) on the OpenFaaS blog. diff --git a/docs/languages/python/examples/openfaas-api.md b/docs/languages/python/examples/openfaas-api.md new file mode 100644 index 00000000..41707e0e --- /dev/null +++ b/docs/languages/python/examples/openfaas-api.md @@ -0,0 +1,235 @@ +Use Python's `requests` library to interact with the [OpenFaaS REST API](/reference/rest-api/) and manage functions and namespaces programmatically — useful for CI/CD pipelines, functions that manage other functions, or building self-service platforms on top of OpenFaaS. + +Use-cases: + +* Custom deployment tooling and CI/CD automation +* Functions that manage other functions and namespaces programmatically +* Building self-service platforms on top of OpenFaaS + +This example creates a namespace if it does not already exist, then deploys a function into it. 
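The deployment request sent to the gateway's `PUT /system/functions` endpoint can be assembled separately from the HTTP call, which makes the logic easy to test without a gateway. A sketch using only the standard library (the helper name is illustrative):

```python
def build_deployment(body, default_ns="openfaas-fn"):
    # Build the JSON payload for PUT /system/functions from a request
    # body, raising a clear error instead of a bare KeyError.
    missing = [k for k in ("name", "image") if k not in body]
    if missing:
        raise ValueError("missing fields: " + ", ".join(missing))
    return {
        "service": body["name"],
        "image": body["image"],
        "namespace": body.get("namespace", default_ns),
    }
```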
+ +## Overview + +handler.py: + +```python +import os +import json +import requests + +def handle(event, context): + gateway = os.getenv("gateway_url", "http://gateway.openfaas:8080") + password = read_secret("openfaas-password") + auth = ("admin", password) + + body = json.loads(event.body) + ns = body.get("namespace", "openfaas-fn") + + namespaces = requests.get( + f"{gateway}/system/namespaces", + auth=auth, + ).json() + + if ns not in namespaces: + r = requests.post( + f"{gateway}/system/namespace/", + json={"name": ns, "annotations": {"openfaas": "1"}}, + auth=auth, + ) + if r.status_code not in (200, 201): + return { + "statusCode": r.status_code, + "body": f"Failed to create namespace: {r.text}", + } + + r = requests.put( + f"{gateway}/system/functions", + json={ + "service": body["name"], + "image": body["image"], + "namespace": ns, + }, + auth=auth, + ) + + return { + "statusCode": r.status_code, + "body": r.text, + } + +def read_secret(name): + with open("/var/openfaas/secrets/" + name, "r") as f: + return f.read().strip() +``` + +requirements.txt: + +``` +requests +``` + +stack.yaml: + +```yaml +functions: + deploy-function: + lang: python3-http + handler: ./deploy-function + image: ttl.sh/openfaas-examples/deploy-function:latest + secrets: + - openfaas-password +``` + +The `requests` package is pure Python, so the Alpine-based `python3-http` template works here. + +- The gateway URL defaults to `http://gateway.openfaas:8080`, the in-cluster address when running on Kubernetes. Override it with the `gateway_url` environment variable if needed. +- The handler authenticates with HTTP Basic Auth using the `admin` username and the password read from the `openfaas-password` secret. +- Namespace creation is idempotent — the handler checks whether the namespace exists before attempting to create it. + +Because this function can manage other functions and namespaces, its own endpoint should be protected. 
See [Add authentication](#add-authentication) for how to do this. + +## Step-by-step walkthrough + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-http +faas-cli new --lang python3-http deploy-function \ + --prefix ttl.sh/openfaas-examples +``` + +The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use. + +Update `deploy-function/handler.py` and `deploy-function/requirements.txt` with the code from the overview above. + +### Create the openfaas-password secret + +The gateway admin password is stored in a Kubernetes secret called `basic-auth` during installation. Retrieve it and create an OpenFaaS function secret named `openfaas-password` so the function can access it at runtime: + +```bash +PASSWORD=$(kubectl get secret -n openfaas basic-auth \ + -o jsonpath="{.data.basic-auth-password}" | base64 --decode) + +faas-cli secret create openfaas-password --from-literal="$PASSWORD" +``` + +At runtime, the secret is mounted as a file under `/var/openfaas/secrets/` inside the function container. + +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter deploy-function \ + --tag digest +``` + +Deploy the `env` function into a `staging` namespace. The namespace is created automatically if it does not exist: + +```bash +curl -X POST http://127.0.0.1:8080/function/deploy-function \ + -H "Content-Type: application/json" \ + --data '{ + "name": "env", + "image": "ghcr.io/openfaas/alpine:latest", + "namespace": "staging" + }' +``` + +### Add authentication + +This function manages other functions and namespaces, so its endpoint should require authentication. You can implement authentication directly in the handler code, or use the [built-in function authentication provided by OpenFaaS IAM](/openfaas-pro/iam/function-authentication/). 
+ +#### Built-in function authentication with OpenFaaS IAM + +OpenFaaS IAM provides built-in function authentication without any code changes. Set the `jwt_auth` environment variable on the function and configure a Role and Policy to control who can invoke it. + +```yaml +functions: + deploy-function: + lang: python3-http + handler: ./deploy-function + image: ttl.sh/openfaas-examples/deploy-function:latest + secrets: + - openfaas-password + environment: + jwt_auth: "true" +``` + +The watchdog enforces authentication automatically — callers must present a valid function access token in the `Authorization: Bearer` header. The request never reaches the function handler if the token is missing or invalid. + +See [Function Authentication](/openfaas-pro/iam/function-authentication/) for how to configure Roles and Policies and obtain function access tokens. + +#### Implement authentication in the handler + +The handler can validate any credential that suits your use case — a pre-shared token, an API key, or any other scheme. The example below uses a pre-shared token stored as an OpenFaaS secret and compared against the `Authorization: Bearer` header — any request without a valid token is rejected before any API calls are made. + +Add the `valid_bearer` helper and a token check at the top of the handler: + +```diff + import os + import json + import requests + + def handle(event, context): ++ token = read_secret("deploy-function-token") ++ if not valid_bearer(token, event.headers): ++ return {"statusCode": 401, "body": "Unauthorized"} ++ + gateway = os.getenv("gateway_url", "http://gateway.openfaas:8080") + password = read_secret("openfaas-password") + auth = ("admin", password) +@@ ... 
+ ++def valid_bearer(token, headers): ++ if "Authorization" not in headers: ++ return False ++ authz = headers["Authorization"] ++ if not authz.startswith("Bearer "): ++ return False ++ return authz.split(" ", 1)[1] == token ++ + def read_secret(name): + with open("/var/openfaas/secrets/" + name, "r") as f: + return f.read().strip() +``` + +Add the new secret to `stack.yaml`: + +```diff + secrets: + - openfaas-password ++ - deploy-function-token +``` + +Generate and create the token secret: + +```bash +faas-cli secret generate -o ./deploy-function-token.txt +faas-cli secret create deploy-function-token \ + --from-file=./deploy-function-token.txt +``` + +Redeploy the function and invoke it with the token: + +```bash +faas-cli up \ + --filter deploy-function \ + --tag digest +``` + +```bash +curl -X POST http://127.0.0.1:8080/function/deploy-function \ + -H "Authorization: Bearer $(cat ./deploy-function-token.txt)" \ + -H "Content-Type: application/json" \ + --data '{ + "name": "env", + "image": "ghcr.io/openfaas/alpine:latest", + "namespace": "staging" + }' +``` + +A request without a valid token returns `401 Unauthorized`. diff --git a/docs/languages/python/examples/playwright.md b/docs/languages/python/examples/playwright.md new file mode 100644 index 00000000..afaa9bd5 --- /dev/null +++ b/docs/languages/python/examples/playwright.md @@ -0,0 +1,247 @@ +[Playwright](https://playwright.dev/python/) is a Python library for controlling a headless browser programmatically. Running it inside an OpenFaaS function lets you trigger browser automation on demand via HTTP — useful for tasks that require a real rendering engine rather than a plain HTTP client. 
+ +Use-cases: + +* End-to-end and compliance tests against your application +* Scraping information from a webpage that doesn't provide an API +* Taking screenshots or generating PDFs of rendered pages + +## Overview + +handler.py: + +```python +import json +from playwright.sync_api import sync_playwright + +def handle(event, context): + body = json.loads(event.body) + uri = body.get("uri", "https://www.openfaas.com") + + with sync_playwright() as p: + browser = p.chromium.launch( + args=["--no-sandbox", "--disable-setuid-sandbox"] + ) + try: + page = browser.new_page() + page.goto(uri, timeout=30000) + title = page.title() + finally: + browser.close() + + return { + "statusCode": 200, + "body": json.dumps({"title": title}) + } +``` + +requirements.txt: + +``` +playwright +``` + +stack.yaml: + +```yaml +functions: + playwright-scrape: + lang: dockerfile + handler: ./playwright-scrape + image: ttl.sh/openfaas-examples/playwright-scrape:latest + environment: + PLAYWRIGHT_BROWSERS_PATH: /home/app/.cache/ms-playwright +``` + +This example uses the `dockerfile` language with a custom Dockerfile based on the `python3-http-debian` template. + +- The custom Dockerfile is needed to install the Chromium browser and its system-level dependencies at build time — something the standard template cannot do. +- `PLAYWRIGHT_BROWSERS_PATH` tells Playwright where the Chromium binary was installed inside the image. + +## Step-by-step walkthrough + +### Create the function + +Scaffold the function using the `dockerfile` language: + +```bash +faas-cli new --lang dockerfile playwright-scrape \ + --prefix ttl.sh/openfaas-examples +``` + +Replace the generated Dockerfile with the one below. It is based on the `python3-http-debian` template with an additional step to install Playwright's Chromium browser and its system dependencies. 
+ +`playwright-scrape/Dockerfile`: + +```dockerfile +ARG PYTHON_VERSION=3.12 +ARG DEBIAN_OS=slim-bookworm + +FROM --platform=${TARGETPLATFORM:-linux/amd64} ghcr.io/openfaas/of-watchdog:0.11.5 AS watchdog +FROM --platform=${TARGETPLATFORM:-linux/amd64} python:${PYTHON_VERSION}-${DEBIAN_OS} AS build + +COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog +RUN chmod +x /usr/bin/fwatchdog + +ARG ADDITIONAL_PACKAGE +ARG UPGRADE_PACKAGES=false + +RUN apt-get update -qy \ + && if [ "${UPGRADE_PACKAGES}" = "true" ] || [ "${UPGRADE_PACKAGES}" = "1" ]; then apt-get upgrade -qy; fi \ + && apt-get install -qy --no-install-recommends gcc make ${ADDITIONAL_PACKAGE} \ + && rm -rf /var/lib/apt/lists/* + +# Add non root user +RUN addgroup --system app \ + && adduser app --system --ingroup app --home /home/app \ + && chown app:app /home/app + +USER app + +ENV PATH=$PATH:/home/app/.local/bin + +WORKDIR /home/app/ + +COPY --chown=app:app index.py . +COPY --chown=app:app requirements.txt . +USER root +RUN pip install --no-cache-dir -r requirements.txt +USER app + +RUN mkdir -p function +RUN touch ./function/__init__.py +WORKDIR /home/app/function/ +COPY --chown=app:app function/requirements.txt . +RUN pip install --no-cache-dir --user -r requirements.txt + +# Install Chromium and system dependencies for Playwright. +# The PYTHONPATH is set because pip packages are in the app user's +# local site-packages, but this step runs as root. +USER root +# Redeclare PYTHON_VERSION: build args defined before the first FROM +# go out of scope inside a build stage. +ARG PYTHON_VERSION +ENV PLAYWRIGHT_BROWSERS_PATH=/home/app/.cache/ms-playwright +RUN PYTHONPATH="/home/app/.local/lib/python${PYTHON_VERSION}/site-packages" \ + python -m playwright install --with-deps chromium \ + && chown -R app:app /home/app/.cache + +COPY --chown=app:app function/ . 
+ +FROM build AS ship +WORKDIR /home/app/ + +USER app + +# Set up of-watchdog for HTTP mode +ENV fprocess="python index.py" +ENV cgi_headers="true" +ENV mode="http" +ENV upstream_url="http://127.0.0.1:5000" + +HEALTHCHECK --interval=5s CMD [ -e /tmp/.lock ] || exit 1 + +CMD ["fwatchdog"] +``` + +The key addition compared to the standard template is the `playwright install --with-deps chromium` step: + +- `--with-deps` installs both the Chromium binary and all required system libraries (`libglib`, `libnss3`, `libgbm`, etc.) via apt. +- The browser is installed to `/home/app/.cache/ms-playwright`, set by the `PLAYWRIGHT_BROWSERS_PATH` env var. +- Ownership of that directory is transferred to the `app` user so the browser is accessible at runtime without elevated privileges. + +You will also need to copy the `index.py` and `requirements.txt` files from the `python3-http-debian` template into your function handler directory. Unlike `faas-cli new`, the `dockerfile` language does not scaffold these files automatically — they need to be present alongside the Dockerfile at build time: + +```bash +faas-cli template store pull python3-http-debian +cp template/python3-http-debian/index.py playwright-scrape/ +cp template/python3-http-debian/requirements.txt playwright-scrape/ +``` + +Then add the handler and function requirements: + +`playwright-scrape/function/handler.py` — use the handler code from the overview above. 
+ +`playwright-scrape/function/requirements.txt` — add the `playwright` package so it is installed into the function's Python environment at build time: + +``` +playwright +``` + +### Deploy and invoke + +Build, push and deploy the function: + +```bash +faas-cli up \ + --filter playwright-scrape \ + --tag digest +``` + +Get the title of a webpage: + +```bash +echo '{"uri": "https://docs.openfaas.com"}' | \ + faas-cli invoke playwright-scrape \ + --header "Content-Type: application/json" +``` + +```json +{"title": "Introduction - OpenFaaS"} +``` + +### Take a screenshot + +To return a screenshot as a PNG, update the handler to capture the page and return the binary data: + +```python +import json +from playwright.sync_api import sync_playwright + +def handle(event, context): + body = json.loads(event.body) + uri = body.get("uri", "https://www.openfaas.com") + + with sync_playwright() as p: + browser = p.chromium.launch( + args=["--no-sandbox", "--disable-setuid-sandbox"] + ) + try: + page = browser.new_page() + page.goto(uri, timeout=30000) + screenshot = page.screenshot(full_page=True) + finally: + browser.close() + + return { + "statusCode": 200, + "headers": {"Content-Type": "image/png"}, + "body": screenshot + } +``` + +Invoke and save the screenshot: + +```bash +echo '{"uri": "https://docs.openfaas.com"}' | \ + faas-cli invoke playwright-scrape \ + --header "Content-Type: application/json" > screenshot.png +``` + +The screenshot could also be uploaded directly to S3 from within the function instead of returning it to the caller. See [Access S3 object storage with boto3](s3-boto3.md) for how to set up an S3 client. + +### Hardening + +Each browser instance is memory-intensive. Set `max_inflight: "1"` to allow only one concurrent request per container replica, and add a memory limit to cap resource usage. 
Chromium requires at least 512Mi at idle — use 1Gi for production workloads, especially when taking full-page screenshots: + +```yaml +functions: + playwright-scrape: + lang: dockerfile + handler: ./playwright-scrape + image: ttl.sh/openfaas-examples/playwright-scrape:latest + environment: + max_inflight: "1" + PLAYWRIGHT_BROWSERS_PATH: /home/app/.cache/ms-playwright + limits: + memory: 1Gi +``` + +See also: [Web scraping that just works with OpenFaaS with Puppeteer](https://www.openfaas.com/blog/puppeteer-scraping/) and [Generate PDFs at scale on Kubernetes](https://www.openfaas.com/blog/pdf-generation-at-scale-on-kubernetes/) for patterns on scaling headless browsers with OpenFaaS. diff --git a/docs/languages/python/examples/s3-boto3.md b/docs/languages/python/examples/s3-boto3.md new file mode 100644 index 00000000..38273d27 --- /dev/null +++ b/docs/languages/python/examples/s3-boto3.md @@ -0,0 +1,152 @@ +Use the `boto3` SDK to interact with [AWS S3](https://aws.amazon.com/s3/) or any S3-compatible object storage from an OpenFaaS function. Credentials are stored as [OpenFaaS secrets](/reference/secrets/). + +Use-cases: + +* Storing uploaded files or generated reports in S3 +* Listing and serving assets from a bucket +* Backing up data or exporting results to object storage + +This example lists and uploads objects in an AWS S3 bucket. The code also works with self-hosted S3-compatible backends such as [SeaweedFS](https://github.com/seaweedfs/seaweedfs), [RustFS](https://github.com/rustfs/rustfs), [Garage](https://garagehq.deuxfleurs.fr/), and [Ceph](https://docs.ceph.com/en/latest/radosgw/). + +## Overview + +handler.py: + +```python +import os +import json +import boto3 + +# Initialise the S3 client once and reuse it across invocations +# to avoid re-reading secrets and creating a new session on every request. 
+s3Client = None + +def initS3(): + with open('/var/openfaas/secrets/s3-key', 'r') as s: + s3Key = s.read().strip() + with open('/var/openfaas/secrets/s3-secret', 'r') as s: + s3Secret = s.read().strip() + + session = boto3.Session( + aws_access_key_id=s3Key, + aws_secret_access_key=s3Secret, + ) + + return session.client('s3') + +def handle(event, context): + global s3Client + + if s3Client is None: + s3Client = initS3() + + bucketName = os.getenv('s3_bucket') + + # GET — list all object keys in the bucket + if event.method == 'GET': + response = s3Client.list_objects_v2(Bucket=bucketName) + keys = [obj['Key'] for obj in response.get('Contents', [])] + + return { + "statusCode": 200, + "body": json.dumps(keys) + } + + # POST — upload the request body as an object + elif event.method == 'POST': + key = event.query.get('key', 'upload.txt') + s3Client.put_object(Bucket=bucketName, Key=key, Body=event.body) + + return { + "statusCode": 201, + "body": "Uploaded to {}".format(key) + } + + return { + "statusCode": 405, + "body": "Method not allowed" + } +``` + +requirements.txt: + +``` +boto3 +``` + +stack.yaml: + +```yaml +functions: + s3-example: + lang: python3-http + handler: ./s3-example + image: ttl.sh/openfaas-examples/s3-example:latest + environment: + s3_bucket: my-bucket + secrets: + - s3-key + - s3-secret +``` + +- The S3 client is initialised once on first invocation and reused for subsequent requests, avoiding the overhead of re-reading secrets and creating a new session on every call. +- A GET request lists all object keys in the bucket. +- A POST request uploads the request body as an object; the key is taken from the `?key=` query parameter. 
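One caveat on the GET path: `list_objects_v2` returns at most 1,000 keys per response, so the single call above only suits small buckets. For larger buckets, follow the continuation token across calls. A sketch of a helper, assuming the client created in `initS3` (the `list_all_keys` name is illustrative):

```python
def list_all_keys(s3_client, bucket):
    # Collect every key in the bucket, following continuation
    # tokens until the listing is complete.
    keys = []
    kwargs = {"Bucket": bucket}
    while True:
        response = s3_client.list_objects_v2(**kwargs)
        keys.extend(obj["Key"] for obj in response.get("Contents", []))
        token = response.get("NextContinuationToken")
        if not token:
            return keys
        kwargs["ContinuationToken"] = token
```

boto3 also ships built-in paginators for this, e.g. `s3Client.get_paginator('list_objects_v2')`, which implement the same loop internally.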
+ +To use a self-hosted S3-compatible backend, pass a custom `endpoint_url` when creating the client: + +```python + return session.client('s3', endpoint_url='https://s3.my-storage.example.com') +``` + +## Step-by-step walkthrough + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-http +faas-cli new --lang python3-http s3-example \ + --prefix ttl.sh/openfaas-examples +``` + +The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use. + +Update `s3-example/handler.py` and `s3-example/requirements.txt` with the code from the overview above. + +### Create secrets for S3 credentials + +Store your S3 access key and secret key as OpenFaaS secrets. This keeps credentials out of environment variables and the function's container image. + +Save your access key ID to `s3-key.txt` and your secret access key to `s3-secret.txt`, then run: + +```bash +faas-cli secret create s3-key --from-file s3-key.txt +faas-cli secret create s3-secret --from-file s3-secret.txt +``` + +At runtime, the secrets are mounted as files under `/var/openfaas/secrets/` inside the function container. + +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter s3-example \ + --tag digest +``` + +Upload a file to the bucket with a POST request. 
The `?key=` query parameter sets the object key — this maps to `event.query.get('key', 'upload.txt')` in the handler: + +```bash +curl -X POST http://127.0.0.1:8080/function/s3-example?key=hello.txt \ + --data "Hello from OpenFaaS" +``` + +List all objects in the bucket with a GET request: + +```bash +curl http://127.0.0.1:8080/function/s3-example +``` diff --git a/docs/languages/python/examples/sse.md b/docs/languages/python/examples/sse.md new file mode 100644 index 00000000..c9b37c26 --- /dev/null +++ b/docs/languages/python/examples/sse.md @@ -0,0 +1,101 @@ +Stream data from a Python function to a client as it becomes available using [Server-Sent Events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events). + +Use-cases: + +* Streaming LLM completions token by token +* Progress updates for long-running tasks +* Real-time log tailing + +This example uses the `python3-flask` template, which lets the handler return a Flask `Response` object with a generator function. + +!!! note "Required client header" + When invoking the function, clients must include `Accept: text/event-stream` so the OpenFaaS gateway streams the response instead of buffering it. + +## Overview + +handler.py: + +```python +import time +from flask import Response + +def handle(req): + def generate(): + for i in range(1, 6): + time.sleep(1) + yield f"data: Message {i} of 5\n\n" + yield "data: [DONE]\n\n" + + return Response(generate(), mimetype='text/event-stream') +``` + +stack.yaml: + +```yaml +functions: + sse-example: + lang: python3-flask + handler: ./sse-example + image: ttl.sh/openfaas-examples/sse-example:latest +``` + +No additional pip dependencies are needed — Flask is included in the `python3-flask` template. + +!!! info "About the python3-flask template" + + The `python3-flask` template exposes a simpler handler interface than the `python3-http` template. 
The handler receives the raw request body as a string, and can return a string, a tuple of `(body, status_code)`, a tuple of `(body, status_code, headers)`, or a Flask `Response` object. + +## Step-by-step walkthrough + +### Create the function + +Pull the template and scaffold a new function: + +```bash +faas-cli template store pull python3-flask +faas-cli new --lang python3-flask sse-example \ + --prefix ttl.sh/openfaas-examples +``` + +The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use. + +Update `sse-example/handler.py` with the code from the overview above. + +### Deploy and invoke + +Build, push and deploy the function with `faas-cli up`: + +```bash +faas-cli up \ + --filter sse-example \ + --tag digest +``` + +Stream events from the function: + +```bash +curl -N http://127.0.0.1:8080/function/sse-example \ + -H "Accept: text/event-stream" +``` + +You should see each message appear one second apart: + +``` +data: Message 1 of 5 + +data: Message 2 of 5 + +data: Message 3 of 5 + +data: Message 4 of 5 + +data: Message 5 of 5 + +data: [DONE] +``` + +!!! note "Timeouts" + + Streaming responses can run for longer than the default function timeout. Make sure your OpenFaaS [timeout values](/tutorials/expanded-timeouts/) are configured appropriately for your streaming workloads. + +See also: [OpenAI Chat API](openai.md#streaming-responses) for a practical example that streams LLM completions token by token using this same pattern. diff --git a/docs/languages/python.md b/docs/languages/python/index.md similarity index 79% rename from docs/languages/python.md rename to docs/languages/python/index.md index 1ac4a932..372aae92 100644 --- a/docs/languages/python.md +++ b/docs/languages/python/index.md @@ -1,19 +1,17 @@ ## Python -There are two recommended templates for [Python 3](https://www.python.org/) users. 
+These are the official [Python 3](https://www.python.org/) templates maintained by OpenFaaS Ltd.

-!!! info "Do you need to customise this template?"
+The `python3-http` template is recommended for most Python functions. Use `python3-http-debian` when a dependency requires native compilation — if a package fails to build on Alpine, switch to the Debian variant.

-    You can customise the official templates, or provide your own. The code for this templates is available on GitHub: [openfaas/python-flask-template](https://github.com/openfaas/python-flask-template/tree/master/template).
+| Template | Base OS | Recommended | Use when |
+|---|---|---|---|
+| [python3-http](https://github.com/openfaas/python-flask-template/tree/master/template/python3-http) | Alpine | Yes | Default choice. Pure Python packages only. |
+| [python3-http-debian](https://github.com/openfaas/python-flask-template/tree/master/template/python3-http-debian) | Debian | | Native C extensions required — SQL drivers, Kafka, Pandas, image manipulation. |
+| [python3-flask](https://github.com/openfaas/python-flask-template/tree/master/template/python3-flask) | Alpine | | Direct access to Flask `Response`, e.g. for SSE streaming. |
+| [python3-flask-debian](https://github.com/openfaas/python-flask-template/tree/master/template/python3-flask-debian) | Debian | | Flask `Response` with native C extensions. |

-* python3-http - based upon Alpine Linux, small image size, for pure Python only.
-* python3-http-debian - based upon Debian Linux, larger image size, required for native C modules as as SQL, Kafka, Pandas, and image manipulation.
-
-[Flask](https://flask.palletsprojects.com/en/3.0.x/) is used internally for handling HTTP requests and responses, however it is not exposed to the user, so is only an implementation detail.
-
-The HTTP server used is currently [Waitress](https://docs.pylonsproject.org/projects/waitress/en/latest/).
-
-> This is an official template maintained by OpenFaaS Ltd.
+All templates use the [of-watchdog](https://github.com/openfaas/of-watchdog), [Flask](https://flask.palletsprojects.com/en/3.0.x/) for HTTP routing, and [Waitress](https://docs.pylonsproject.org/projects/waitress/en/latest/) as the production WSGI server. ## Downloading the templates @@ -249,7 +247,7 @@ functions: + ADDITIONAL_PACKAGE: "libpq-dev gcc python3-dev" ``` -### Example with Postgresql +### Example with PostgreSQL stack.yml @@ -265,22 +263,34 @@ functions: image: pgfn:latest build_options: - libpq + environment: + db_host: "postgresql.default.svc.cluster.local" + secrets: + - db-password ``` -Alternatively you can specify `ADDITIONAL_PACKAGE` in the `build_args` section for the function. +The `build_options: libpq` shorthand installs the packages needed to compile `psycopg2`. If you need more control over which packages are installed, you can use `build_args` instead: ```yaml build_args: ADDITIONAL_PACKAGE: "libpq-dev gcc python3-dev" ``` +The database host is set as an environment variable so it can be changed per deployment without rebuilding the image. The database password is stored as an [OpenFaaS secret](/reference/secrets/) to keep it out of environment variables and the function image. + +Create the secret before deploying the function: + +```bash +faas-cli secret create db-password --from-literal='passwd' +``` + requirements.txt ``` psycopg2==2.9.3 ``` -Create a database and table: +Create a database and table to use with the example: ```sql CREATE DATABASE main; @@ -291,20 +301,33 @@ CREATE TABLE users ( name TEXT, ); --- Insert the original Postgresql author's name into the test table: +-- Insert the original PostgreSQL author's name into the test table: INSERT INTO users (name) VALUES ('Michael Stonebraker'); ``` handler.py: +The handler reads the database password from the mounted secret and the host from the `db_host` environment variable set in `stack.yaml`. It opens a connection, queries the `users` table, and returns the results. 
+ ```python +import os import psycopg2 def handle(event, context): try: - conn = psycopg2.connect("dbname='main' user='postgres' port=5432 host='192.168.1.35' password='passwd'") + password = read_secret('db-password') + + # Connect using the host from the db_host env var + # and the password from the mounted secret. + conn = psycopg2.connect( + dbname='main', + user='postgres', + port=5432, + host=os.getenv('db_host'), + password=password + ) except Exception as e: print("DB error {}".format(e)) return { @@ -320,9 +343,13 @@ def handle(event, context): "statusCode": 200, "body": rows } + +def read_secret(name): + with open("/var/openfaas/secrets/" + name, "r") as f: + return f.read().strip() ``` -Always read the secret from an OpenFaaS secret at `/var/openfaas/secrets/secret-name`. The use of environment variables is an anti-pattern and will be visible via the OpenFaaS API. +Always read secrets from an OpenFaaS secret at `/var/openfaas/secrets/secret-name`. The use of environment variables for sensitive values is an anti-pattern — they are visible via the OpenFaaS API. ### Authenticate a function @@ -519,3 +546,13 @@ functions: - `OTEL_EXPORTER_OTLP_ENDPOINT` sets the endpoint where telemetry is exported to. 
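Other zero-code settings can be supplied the same way, using environment variables defined by the OpenTelemetry specification. A hedged sketch of a `stack.yaml` fragment, in which the function name, collector address, and chosen values are purely illustrative:

```yaml
functions:
  traced-fn:
    lang: python3-http
    handler: ./traced-fn
    image: ttl.sh/openfaas-examples/traced-fn:latest
    environment:
      # Logical name reported with every span
      OTEL_SERVICE_NAME: traced-fn
      # Export traces over OTLP to a collector running in-cluster
      OTEL_TRACES_EXPORTER: otlp
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector.default:4318"
      OTEL_EXPORTER_OTLP_PROTOCOL: http/protobuf
```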
To see the full range of configuration options, see [Agent Configuration](https://opentelemetry.io/docs/zero-code/python/configuration/) + +## Examples + +* [Deploy a function via the OpenFaaS API](examples/openfaas-api.md) +* [Access AWS S3 with boto3](examples/s3-boto3.md) +* [Use AWS IAM Roles for Service Accounts (IRSA)](examples/ecr-irsa.md) +* [Publish messages to Kafka](examples/kafka.md) +* [Call the OpenAI Chat API](examples/openai.md) +* [Stream Server-Sent Events (SSE)](examples/sse.md) +* [Web testing with Playwright](examples/playwright.md) diff --git a/mkdocs.yml b/mkdocs.yml index 4df2e8c1..418fd86d 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -132,8 +132,17 @@ nav: - Shell auto completion: ./cli/completion.md - Languages: - Overview: ./languages/overview.md + - Python: + - Overview: ./languages/python/index.md + - Examples: + - Working with S3 objects: ./languages/python/examples/s3-boto3.md + - AWS IAM Roles (IRSA): ./languages/python/examples/ecr-irsa.md + - Publish to Kafka: ./languages/python/examples/kafka.md + - OpenAI Chat API: ./languages/python/examples/openai.md + - Stream Server-Sent Events (SSE): ./languages/python/examples/sse.md + - Deploy a function via API: ./languages/python/examples/openfaas-api.md + - Web testing with Playwright: ./languages/python/examples/playwright.md - Node: ./languages/node.md - - Python: ./languages/python.md - Go: ./languages/go.md - C#: ./languages/csharp.md - PHP: ./languages/php.md