docs/languages/python/examples/ecr-irsa.md:
## Use AWS IAM Roles for Service Accounts (IRSA)

Call AWS services from a function using ambient credentials instead of static access keys. With [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), the function's pod receives temporary credentials automatically from a Kubernetes Service Account mapped to an IAM role — no secrets to create, rotate, or manage.

Use-cases:

* Accessing any AWS service (S3, DynamoDB, SQS, ECR, etc.) without static keys
* Meeting security policies that prohibit long-lived credentials
* Simplifying secret rotation by relying on short-lived tokens

This example creates and queries ECR repositories, but the same approach works for any AWS service accessible via `boto3`.

It requires OpenFaaS to be deployed on [AWS EKS](https://aws.amazon.com/eks/) with IRSA enabled. See [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) for setup, or [Manage AWS Resources from OpenFaaS Functions With IRSA](https://www.openfaas.com/blog/irsa-functions/) for an end-to-end walkthrough.

### Overview

handler.py:

```python
import os
import json
import boto3

ecrClient = None

def initECR():
    session = boto3.Session(
        region_name=os.getenv('AWS_REGION'),
    )
    return session.client('ecr')

def handle(event, context):
    global ecrClient

    if ecrClient is None:
        ecrClient = initECR()

    if event.method != 'POST':
        return {
            "statusCode": 405,
            "body": "Method not allowed"
        }

    body = json.loads(event.body)
    name = body.get('name')

    if not name:
        return {
            "statusCode": 400,
            "body": "Missing in body: name"
        }

    # Check if the repository already exists
    try:
        ecrClient.describe_repositories(repositoryNames=[name])
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Repository already exists"})
        }
    except ecrClient.exceptions.RepositoryNotFoundException:
        pass

    # Create the repository
    response = ecrClient.create_repository(
        repositoryName=name,
        imageTagMutability='MUTABLE',
        encryptionConfiguration={
            'encryptionType': 'AES256',
        },
        imageScanningConfiguration={
            'scanOnPush': False,
        },
    )

    return {
        "statusCode": 201,
        "body": json.dumps({
            "arn": response['repository']['repositoryArn']
        })
    }
```

requirements.txt:

```
boto3
```

stack.yaml:

```yaml
functions:
  ecr-create-repo:
    lang: python3-http-debian
    handler: ./ecr-create-repo
    image: ttl.sh/openfaas-examples/ecr-create-repo:latest
    annotations:
      com.openfaas.serviceaccount: openfaas-create-ecr-repo
    environment:
      AWS_REGION: eu-west-1
```

No secrets are needed — the `com.openfaas.serviceaccount` annotation tells OpenFaaS which Kubernetes service account to attach to the function's pod, and the AWS SDK picks up credentials automatically from the service account token mounted by EKS.
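To confirm inside the function that IRSA is in play, you can check for the environment variables the EKS pod identity webhook injects into pods that use an annotated service account. A stdlib-only sketch; the helper name `irsa_configured` is illustrative:

```python
import os

def irsa_configured():
    """Report whether the pod appears to have IRSA credentials available.

    EKS injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into pods
    using an annotated service account; the AWS SDK exchanges the
    projected token at that path for temporary credentials.
    """
    role_arn = os.getenv("AWS_ROLE_ARN")
    token_file = os.getenv("AWS_WEB_IDENTITY_TOKEN_FILE")
    return bool(role_arn) and bool(token_file) and os.path.exists(token_file)
```

If this returns `False`, check that the `com.openfaas.serviceaccount` annotation is set and that the service account carries the `eks.amazonaws.com/role-arn` annotation.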

### Step-by-step walkthrough

#### Create an IAM Policy

Create a policy that grants the permissions your function needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:CreateRepository",
        "ecr:DeleteRepository",
        "ecr:DescribeRepositories"
      ],
      "Resource": "*"
    }
  ]
}
```

Save the above to `ecr-policy.json` and create the policy:

```bash
aws iam create-policy \
  --policy-name ecr-create-query-repository \
  --policy-document file://ecr-policy.json
```

Note the ARN from the output, e.g. `arn:aws:iam::ACCOUNT_NUMBER:policy/ecr-create-query-repository`.
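IAM rejects malformed JSON but not a policy that silently omits an action the function needs, so it can be worth sanity-checking the document locally first. A stdlib-only sketch; the `policy_actions` helper is illustrative and the embedded document mirrors `ecr-policy.json` above:

```python
import json

POLICY = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:CreateRepository",
        "ecr:DeleteRepository",
        "ecr:DescribeRepositories"
      ],
      "Resource": "*"
    }
  ]
}
"""

def policy_actions(doc):
    """Collect every Action granted by Allow statements in a policy document."""
    actions = set()
    for stmt in json.loads(doc)["Statement"]:
        if stmt.get("Effect") == "Allow":
            acts = stmt.get("Action", [])
            # Action may be a single string or a list of strings
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions
```

The handler calls `describe_repositories` and `create_repository`, so both `ecr:DescribeRepositories` and `ecr:CreateRepository` must appear in the result.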

#### Create an IAM Role and Kubernetes Service Account

Use `eksctl` to create a Kubernetes Service Account in the `openfaas-fn` namespace that is linked to an IAM role with the policy attached:

```bash
export ARN=arn:aws:iam::ACCOUNT_NUMBER:policy/ecr-create-query-repository

eksctl create iamserviceaccount \
  --name openfaas-create-ecr-repo \
  --namespace openfaas-fn \
  --cluster <cluster-name> \
  --role-name ecr-create-query-repository \
  --attach-policy-arn $ARN \
  --region eu-west-1 \
  --approve
```

This can also be done manually by creating the IAM Role in AWS, followed by a Kubernetes Service Account annotated with `eks.amazonaws.com/role-arn`.
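For the manual route, the Service Account manifest would look roughly like this, assuming the role name used in the `eksctl` command above (the role itself must also trust the cluster's OIDC provider):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openfaas-create-ecr-repo
  namespace: openfaas-fn
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_NUMBER:role/ecr-create-query-repository
```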

#### Create the function

Pull the template and scaffold a new function:

```bash
faas-cli template store pull python3-http-debian
faas-cli new --lang python3-http-debian ecr-create-repo \
  --prefix ttl.sh/openfaas-examples
```

Update `ecr-create-repo/handler.py` and `ecr-create-repo/requirements.txt` with the code from the overview above.

#### Deploy and invoke

Build, push and deploy the function with `faas-cli up`:

```bash
faas-cli up \
  --filter ecr-create-repo \
  --tag digest
```

Create a new ECR repository by invoking the function:

```bash
curl -X POST http://127.0.0.1:8080/function/ecr-create-repo \
  -H "Content-Type: application/json" \
  -d '{"name":"tenant1/fn1"}'
```

The response contains the ARN of the newly created repository:

```json
{"arn": "arn:aws:ecr:eu-west-1:ACCOUNT_NUMBER:repository/tenant1/fn1"}
```
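ECR repository ARNs follow a fixed layout (`arn:aws:ecr:REGION:ACCOUNT:repository/NAME`), so a caller can recover the region, account, and repository name from the response. A stdlib-only sketch; the helper name is illustrative:

```python
def parse_ecr_arn(arn):
    """Split an ECR repository ARN into region, account ID, and repository name.

    Layout: arn:aws:ecr:<region>:<account>:repository/<name>
    The name may itself contain slashes (e.g. tenant1/fn1).
    """
    prefix, name = arn.split(":repository/", 1)
    parts = prefix.split(":")
    return {"region": parts[3], "account": parts[4], "name": name}
```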
docs/languages/python/examples/kafka.md:
## Publish messages to Kafka

Produce messages to a Kafka topic from a function.

Use-cases:

* Publishing events or audit logs to a Kafka topic
* Decoupling workloads by writing to a message bus
* Feeding data pipelines from HTTP endpoints

This example uses the `confluent-kafka` package with SASL/SSL authentication. Broker credentials are stored as [OpenFaaS secrets](/reference/secrets/).

If you'd like to trigger functions from Kafka topics instead, see [Trigger functions from Kafka](/openfaas-pro/kafka-events).

### Overview

handler.py:

```python
import os
import socket
from confluent_kafka import Producer

# Initialise the producer once and reuse it across invocations
# to keep the broker connection alive between requests.
kafkaProducer = None

def initProducer():
    username = read_secret('kafka-broker-username')
    password = read_secret('kafka-broker-password')
    broker = os.getenv("kafka_broker")

    conf = {
        'bootstrap.servers': broker,
        'security.protocol': 'SASL_SSL',
        'sasl.mechanism': 'PLAIN',
        'sasl.username': username,
        'sasl.password': password,
        'client.id': socket.gethostname()
    }

    return Producer(conf)

def handle(event, context):
    global kafkaProducer

    if kafkaProducer is None:
        kafkaProducer = initProducer()

    topic = 'faas-request'

    # Produce the request body as a message and wait for delivery
    kafkaProducer.produce(topic, value=event.body)
    kafkaProducer.flush()

    return {
        "statusCode": 200,
        "body": "Message produced to {}".format(topic)
    }

def read_secret(name):
    with open("/var/openfaas/secrets/" + name, "r") as f:
        return f.read().strip()
```

requirements.txt:

```
confluent-kafka
```

stack.yaml:

```yaml
functions:
  kafka-producer:
    lang: python3-http-debian
    handler: ./kafka-producer
    image: ttl.sh/openfaas-examples/kafka-producer:latest
    environment:
      kafka_broker: "<your-broker-endpoint>:9092"
    secrets:
      - kafka-broker-username
      - kafka-broker-password
```

The Debian variant of the template is required because `confluent-kafka` depends on `librdkafka`, a native C library.

The Kafka producer is created once on the first invocation and reused for subsequent requests. This keeps the broker connection alive between calls, avoiding the overhead of re-authenticating and re-establishing a TCP connection on every request.

The `SASL_SSL` security protocol combines SASL authentication with TLS encryption. The `sasl.mechanism` must match your broker's configuration — common values are `PLAIN` (standard for managed services like Confluent Cloud and Aiven), `SCRAM-SHA-256` / `SCRAM-SHA-512` (common for self-hosted brokers), and `GSSAPI` (Kerberos).
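Switching mechanisms only changes a few keys in the producer config. A plain-Python sketch of how the config might be built per mechanism (no broker required; the `producer_conf` helper is illustrative, and GSSAPI is excluded because it needs additional Kerberos settings):

```python
def producer_conf(broker, username, password, mechanism="PLAIN"):
    """Build a confluent-kafka producer config for a SASL/SSL broker.

    mechanism must match the broker's configuration: PLAIN for most
    managed services, SCRAM-SHA-256/512 for many self-hosted clusters.
    """
    supported = {"PLAIN", "SCRAM-SHA-256", "SCRAM-SHA-512"}
    if mechanism not in supported:
        raise ValueError("unsupported mechanism: " + mechanism)
    return {
        "bootstrap.servers": broker,
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": mechanism,
        "sasl.username": username,
        "sasl.password": password,
    }
```

The returned dict can be passed straight to `Producer()` in place of the hard-coded `conf` in `initProducer` above.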

### Step-by-step walkthrough

#### Create the function

Pull the template and scaffold a new function:

```bash
faas-cli template store pull python3-http-debian
faas-cli new --lang python3-http-debian kafka-producer \
  --prefix ttl.sh/openfaas-examples
```

The example uses the public [ttl.sh](https://ttl.sh) registry — replace the prefix with your own registry for production use.

Update `kafka-producer/handler.py` and `kafka-producer/requirements.txt` with the code from the overview above.

#### Create secrets for Kafka broker credentials

Store your Kafka broker username and password as OpenFaaS secrets. This keeps credentials out of environment variables and the function's container image.

Save your broker username to `kafka-broker-username.txt` and your broker password to `kafka-broker-password.txt`, then run:

```bash
faas-cli secret create kafka-broker-username --from-file kafka-broker-username.txt
faas-cli secret create kafka-broker-password --from-file kafka-broker-password.txt
```

At runtime, the secrets are mounted as files under `/var/openfaas/secrets/` inside the function container.
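When iterating on the handler outside the cluster, the secret files won't exist. One hedged variation on the `read_secret` helper falls back to an environment variable for local runs; the base-path parameter and the uppercase naming convention are assumptions of this sketch, not part of the template:

```python
import os

def read_secret(name, base="/var/openfaas/secrets"):
    """Read an OpenFaaS secret from its mounted file, falling back to an
    uppercased environment variable (e.g. KAFKA_BROKER_USERNAME) when the
    file is absent -- convenient for running the handler locally."""
    path = os.path.join(base, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return os.environ[name.upper().replace("-", "_")]
```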

#### Deploy and invoke

Build, push and deploy the function with `faas-cli up`:

```bash
faas-cli up \
  --filter kafka-producer \
  --tag digest
```

Publish a message to the Kafka topic by invoking the function:

```bash
curl http://127.0.0.1:8080/function/kafka-producer \
  --data "Hello from OpenFaaS"
```