7 changes: 7 additions & 0 deletions ansible-aws-setup/playbook.yaml
@@ -0,0 +1,7 @@
---
- name: Configure Servers
hosts: all
become: yes
roles:
- role: base
3 changes: 3 additions & 0 deletions ansible-aws-setup/roles/base/files/config-verification.json
@@ -0,0 +1,3 @@
{
"bypass": true
}
6 changes: 6 additions & 0 deletions ansible-aws-setup/roles/base/files/docker-compose.yaml
@@ -0,0 +1,6 @@
services:
enclave-batch-poster:
image: # ghcr.io/espressosystems/aws-nitro-poster:<docker-tag>
devices:
- "/dev/nitro_enclaves:/dev/nitro_enclaves:rwm"
privileged: true
Contributor (severity: medium):

Running a container with privileged: true is a significant security risk as it grants the container nearly all the same capabilities as the host. For AWS Nitro Enclaves, providing access to the device via the devices section (as done in line 5) is typically sufficient. Unless there is a specific requirement for full host privileges, this should be removed.
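
A hedged sketch of the compose service with `privileged: true` dropped, relying on the device mapping alone (the image tag placeholder is kept as in the original):

```yaml
services:
  enclave-batch-poster:
    image: # ghcr.io/espressosystems/aws-nitro-poster:<docker-tag>
    devices:
      # Device access alone is typically sufficient for Nitro Enclaves
      - "/dev/nitro_enclaves:/dev/nitro_enclaves:rwm"
```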

7 changes: 7 additions & 0 deletions ansible-aws-setup/roles/base/files/docker-daemon.json
@@ -0,0 +1,7 @@
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
14 changes: 14 additions & 0 deletions ansible-aws-setup/roles/base/files/enclave-allocator.yaml
@@ -0,0 +1,14 @@
---
# Enclave configuration file.
#
# How much memory to allocate for enclaves (in MiB).
memory_mib: 16384
#
# How many CPUs to reserve for enclaves.
cpu_count: 2
#
# Alternatively, the exact CPUs to be reserved for the enclave can be explicitly
# configured by using `cpu_pool` (like below), instead of `cpu_count`.
# Note: cpu_count and cpu_pool conflict with each other. Only use exactly one of them.
# Example of reserving CPUs 2, 3, and 6 through 9:
# cpu_pool: 2,3,6-9
2 changes: 2 additions & 0 deletions ansible-aws-setup/roles/base/files/exports.txt
@@ -0,0 +1,2 @@
/opt/nitro/arbitrum 127.0.0.1/8(rw,insecure,crossmnt,no_subtree_check,sync,all_squash,anonuid=1000,anongid=1000)
/opt/nitro/config 127.0.0.1/8(ro,insecure,crossmnt,no_subtree_check,sync,all_squash,anonuid=1000,anongid=1000)
35 changes: 35 additions & 0 deletions ansible-aws-setup/roles/base/files/shutdown-batch-poster.sh
@@ -0,0 +1,35 @@
#!/bin/bash

MESSAGE="TERMINATE"
PORT=8005

# Get the CID from active VSOCK connections on port 8004
CID=$(ss -A vsock | grep ESTAB | grep ':8004' | awk '{print $6}' | cut -d':' -f1 | head -n 1)

# Validate CID
if [[ ! "$CID" =~ ^[0-9]+$ ]]; then
echo "Error: No valid CID found in active VSOCK connections"
echo "Debug: Run 'ss -A vsock' to check connections"
exit 1
fi

echo "Attempting VSOCK connection to CID $CID, port $PORT..."

# Run socat and capture output and exit status
OUTPUT=$(echo "$MESSAGE" | socat - VSOCK-CONNECT:$CID:$PORT 2>&1)
EXIT_STATUS=$?

echo "$OUTPUT"

# Handle connection results
if echo "$OUTPUT" | grep -q "Connection timed out"; then
echo "Connection timed out for CID $CID: $OUTPUT"
exit 1
elif [ $EXIT_STATUS -eq 0 ]; then
echo "Success: Connected to CID $CID, port $PORT"
exit 0
else
echo "Error: Connection failed for CID $CID (Exit Status: $EXIT_STATUS)"
echo "Output: $OUTPUT"
exit 1
fi
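
The CID-extraction pipeline above can be exercised offline against a hypothetical sample row in the column layout `ss -A vsock` prints (Netid, State, Recv-Q, Send-Q, Local:Port, Peer:Port); the sample values below are assumptions, not real output:

```shell
#!/bin/bash

# Hypothetical sample row mimicking the `ss -A vsock` column layout:
# Netid State Recv-Q Send-Q Local:Port Peer:Port
sample='v_str ESTAB 0 0 3:8004 16:1073741824'

# Same pipeline as the script, minus the live `ss` call:
# field 6 is the peer address, and the part before ':' is the CID.
cid=$(echo "$sample" | grep ESTAB | grep ':8004' | awk '{print $6}' | cut -d':' -f1 | head -n 1)

# Validate the same way the script does
if [[ "$cid" =~ ^[0-9]+$ ]]; then
  echo "Extracted CID: $cid"
fi
```

Running this prints `Extracted CID: 16`, confirming the field/awk/cut choices before pointing the script at live connections.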
14 changes: 14 additions & 0 deletions ansible-aws-setup/roles/base/files/socat.service
@@ -0,0 +1,14 @@
[Unit]
Description=Socat Proxy Start Service
After=network.target

[Service]
ExecStart=socat -d -d -b65536 VSOCK-LISTEN:8004,fork,keepalive,rcvbuf-late=16384,sndbuf-late=16384
Contributor (severity: high):

The socat command is incomplete. It only specifies the listening address (VSOCK-LISTEN:8004) but socat requires two addresses to establish a bidirectional stream. Based on the setup-ec2-instance.sh script in the repository context, this service is intended to proxy VSOCK traffic to the local NFS server on port 2049.

ExecStart=/usr/bin/socat -d -d -b65536 VSOCK-LISTEN:8004,fork,keepalive,rcvbuf-late=16384,sndbuf-late=16384 TCP:127.0.0.1:2049,keepalive,retry=5,interval=10

Restart=always
User=root
StandardOutput=journal
StandardError=journal+console
SyslogIdentifier=socat

[Install]
WantedBy=multi-user.target
217 changes: 217 additions & 0 deletions ansible-aws-setup/roles/base/tasks/main.yaml
@@ -0,0 +1,217 @@
---
- name: Get architecture
shell: uname -m
register: architecture
Comment on lines +2 to +4
Contributor (severity: medium):

This task is redundant because the architecture variable is never used in subsequent tasks. Additionally, Ansible automatically provides the system architecture via the ansible_architecture fact, making a manual uname -m call unnecessary.
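
A minimal sketch of the fact-based alternative (assumes fact gathering is not disabled for the play):

```yaml
# ansible_architecture is gathered automatically; no shell task needed.
- name: Show architecture
  ansible.builtin.debug:
    msg: "Architecture is {{ ansible_architecture }}"
```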


- name: Enable no password sudo
lineinfile:
path: /etc/sudoers
state: present
regexp: '^%wheel'
line: '%wheel ALL=(ALL) NOPASSWD: ALL'
validate: 'visudo -cf %s'

#
# Install software
#
- name: Install standard utilities
dnf:
name: [
'aws-nitro-enclaves-cli',
'aws-nitro-enclaves-cli-devel',
'cargo',
'fail2ban',
'git',
'nano',
'nfs-utils',
'rsync',
'screen',
'socat',
'tmux',
'vim',
'zsh',
]

- name: Install docker-compose
get_url:
url: https://github.com/docker/compose/releases/download/v2.37.0/docker-compose-linux-x86_64
dest: /usr/local/bin/docker-compose
mode: 'u+x,g+x'

- name: Create /usr/lib/docker
ansible.builtin.file:
path: /usr/lib/docker
state: directory
owner: root
group: root
mode: '0775'

- name: Create /usr/lib/docker/cli-plugins
ansible.builtin.file:
path: /usr/lib/docker/cli-plugins
state: directory
owner: root
group: root
mode: '0775'

- name: Install docker-compose to plugins
get_url:
url: https://github.com/docker/compose/releases/download/v2.37.0/docker-compose-linux-x86_64
dest: /usr/lib/docker/cli-plugins/docker-compose
mode: 'u+x,g+x'
Comment on lines +57 to +61
Contributor (severity: medium):

This task downloads the same docker-compose binary that was already downloaded in the task at line 35. It is more efficient to copy the binary from the local path on the remote host.

- name: Install docker-compose to plugins
  ansible.builtin.copy:
    src: /usr/local/bin/docker-compose
    dest: /usr/lib/docker/cli-plugins/docker-compose
    remote_src: yes
    mode: 'u+x,g+x'


- name: Create docker daemon.json
ansible.builtin.copy:
src: docker-daemon.json
dest: /etc/docker/daemon.json
owner: root
group: root
mode: '0644'

- name: Download enclaver
ansible.builtin.unarchive:
src: https://github.com/enclaver-io/enclaver/releases/download/v0.5.0/enclaver-linux-x86_64-v0.5.0.tar.gz
dest: /tmp
list_files: true
include: enclaver-linux-x86_64-v0.5.0/enclaver
remote_src: yes
mode: 'u+x,g+x'

- name: Install enclaver binary
ansible.builtin.copy:
src: /tmp/enclaver-linux-x86_64-v0.5.0/enclaver
dest: /usr/local/bin/enclaver
remote_src: yes
owner: root
group: root
mode: '0755'

#
# Create Groups
#
- name: Ensure group "wheel" exists
ansible.builtin.group:
name: wheel
state: present
- name: Ensure group "docker" exists
ansible.builtin.group:
name: docker
state: present

#
# Install just
#
- name: Install "just" Rust package
community.general.cargo:
name: just
path: /usr/local
Contributor (severity: medium):

The path parameter for the cargo module specifies the directory where the cargo binary is located. Since cargo was installed via dnf (line 22), it is likely located in /usr/bin. Setting it to /usr/local will cause the task to fail if the binary is not found there. It is recommended to either set it to /usr/bin or omit it entirely if cargo is in the system PATH.

    path: /usr/bin


#
# Folders
#
- name: Create /opt/nitro
ansible.builtin.file:
path: /opt/nitro
state: directory
owner: 1000
group: 1000
mode: '0775'
Comment on lines +117 to +118
Contributor (severity: medium):

Using hardcoded UIDs/GIDs (1000) is less portable and consistent than using the username. Since ec2-user is used later in this file (line 215), it should be used here as well for consistency.

    owner: ec2-user
    group: ec2-user


- name: Create /opt/nitro/arbitrum
ansible.builtin.file:
path: /opt/nitro/arbitrum
state: directory
owner: 1000
group: 1000
mode: '0775'

- name: Create /opt/nitro/config
ansible.builtin.file:
path: /opt/nitro/config
state: directory
owner: 1000
group: 1000
mode: '0775'

#
# Socat proxy
#
- name: Install socat service unit
ansible.builtin.copy:
src: socat.service
dest: /etc/systemd/system/socat.service
mode: '0644'

- name: Configuring socat service
systemd:
name: socat.service
enabled: yes
state: started
daemon_reload: yes

#
# NFS
#
- name: Exports Configuration
ansible.builtin.copy:
src: exports.txt
dest: /etc/exports
owner: root
group: root
mode: '0644'
register: export_conf

- name: restart nfs service
systemd:
name: nfs-server.service
state: restarted
when: export_conf.changed

- name: Configuring nfs-server service
systemd:
name: nfs-server.service
enabled: yes
state: started
daemon_reload: yes

#
# Enclave
#

- name: Check that enclave-allocator.yaml exists
stat:
path: /etc/nitro_enclaves/allocator.yaml
register: allocator_result

- name: Enclave Configuration
ansible.builtin.copy:
src: enclave-allocator.yaml
dest: /etc/nitro_enclaves/allocator.yaml
owner: root
group: root
mode: '0660'
register: enclave_conf
when: not allocator_result.stat.exists
Comment on lines +181 to +194
Contributor (severity: medium):

The when: not allocator_result.stat.exists condition prevents Ansible from updating the configuration if the file already exists. This defeats the purpose of configuration management, as updates to enclave-allocator.yaml in the repository will not be propagated to existing servers. It is better to let the copy module manage the file state directly to ensure consistency.
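
One way to apply this suggestion is to drop the `stat` task and the `when` guard entirely and let `copy` converge the file on every run (a sketch of the reviewer's intent, not code from the PR):

```yaml
# copy is idempotent: it rewrites the file only when content differs,
# so repository updates propagate to existing servers.
- name: Enclave Configuration
  ansible.builtin.copy:
    src: enclave-allocator.yaml
    dest: /etc/nitro_enclaves/allocator.yaml
    owner: root
    group: root
    mode: '0660'
  register: enclave_conf
```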


- name: restart service
systemd:
name: nitro-enclaves-allocator.service
state: restarted
when: enclave_conf.changed

- name: Configuring nitro-enclaves-allocator service
systemd:
name: nitro-enclaves-allocator.service
enabled: yes
state: started

#
# Add config-verification.json into place
#
- name: Config verification file
ansible.builtin.copy:
src: config-verification.json
dest: /opt/nitro/config/config-verification.json
owner: ec2-user
group: ec2-user
mode: '0660'