Container API port watcher #4
Subscribe to the Docker API to monitor container creation and deletion events. Upon receiving these events, the event monitor immediately sends them as API events down the aggregated event channel. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
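A minimal sketch of what such a subscription looks like with the official Docker Go client; the function name, the filters chosen, and the print placeholder are illustrative rather than the PR's actual identifiers, and the exact options type varies by client version:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func monitorDockerEvents(ctx context.Context, host string) error {
	cli, err := client.NewClientWithOpts(client.WithHost(host), client.WithAPIVersionNegotiation())
	if err != nil {
		return err
	}
	defer cli.Close()

	// Restrict the stream to container lifecycle events.
	f := filters.NewArgs()
	f.Add("type", "container")
	f.Add("event", "start")
	f.Add("event", "die")

	msgCh, errCh := cli.Events(ctx, types.EventsOptions{Filters: f})
	for {
		select {
		case msg := <-msgCh:
			// In the PR, events like these would be translated into API
			// events and sent down the aggregated event channel.
			fmt.Printf("container %s: %s\n", msg.Actor.ID, msg.Action)
		case err := <-errCh:
			return err
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```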
Subscribe to the containerd API to monitor published ports for containers. Upon receiving these events, the event monitor immediately sends them as API events down the aggregated event channel. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
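A comparable sketch against the containerd client, which exposes a Subscribe call returning event envelopes; the topic filters and the handler body are illustrative, and published-port data would still have to be extracted from the decoded payloads or container labels:

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
)

func monitorContainerdEvents(ctx context.Context, sock string) error {
	cli, err := containerd.New(sock)
	if err != nil {
		return err
	}
	defer cli.Close()

	// Subscribe streams envelopes for events matching the given filters.
	envCh, errCh := cli.Subscribe(ctx, `topic~="/tasks/start"`, `topic~="/tasks/exit"`)
	for {
		select {
		case env := <-envCh:
			// env.Event is a typed payload; decode it (e.g. with typeurl)
			// before forwarding anything down the aggregated channel.
			fmt.Printf("namespace=%s topic=%s\n", env.Namespace, env.Topic)
		case err := <-errCh:
			return err
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```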
Subscribes to the Kubernetes API for published ports from NodePort and LoadBalancer type services. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
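A minimal sketch of such a watcher using client-go informers; kubeconfig handling and the handler body are simplified placeholders, and only the add path is shown:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func watchServices(kubeconfig string, stopCh <-chan struct{}) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Core().V1().Services().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			// Only NodePort and LoadBalancer services publish host-reachable ports.
			if svc.Spec.Type != corev1.ServiceTypeNodePort && svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
				return
			}
			for _, p := range svc.Spec.Ports {
				fmt.Printf("%s/%s exposes node port %d\n", svc.Namespace, svc.Name, p.NodePort)
			}
		},
	})
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	<-stopCh
	return nil
}
```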
The Kubernetes service is not needed since it has been replaced by the event-driven process. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
Also handles the defaults for Docker and Containerd. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
This changes the Docker event monitor to accept sockets and create a per-socket client to monitor events. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
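A sketch of the per-socket construction under that change; `newDockerClients` is a hypothetical helper name, and sockets are assumed to arrive as plain unix socket paths:

```go
package main

import "github.com/docker/docker/client"

// newDockerClients builds one Docker client per configured socket so each
// daemon's event stream can be monitored independently.
func newDockerClients(sockets []string) ([]*client.Client, error) {
	clients := make([]*client.Client, 0, len(sockets))
	for _, sock := range sockets {
		cli, err := client.NewClientWithOpts(
			client.WithHost("unix://"+sock),
			client.WithAPIVersionNegotiation(),
		)
		if err != nil {
			return nil, err
		}
		clients = append(clients, cli)
	}
	return clients, nil
}
```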
This will start a containerd client per socket. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
Changes the Kubernetes service watcher to accept a list of config paths. Signed-off-by: Nino Kodabande <nkodabande@suse.com>
Pull Request Overview
This PR adds container API port monitoring functionality to Lima, enabling automatic detection and forwarding of dynamically exposed container ports. The implementation introduces port monitors for Docker, Containerd, and Kubernetes services that watch for port exposure events and notify the host agent.
Key changes:
- Adds port monitoring configuration to YAML templates for Docker, Containerd, and Kubernetes
- Implements event monitoring systems for container engines and Kubernetes services
- Integrates port monitors into the guest agent architecture with proper configuration handling
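The overall shape described above is a fan-in: every monitor pushes into one aggregated channel that the guest agent drains. A minimal sketch, with the Event type and Monitor interface assumed rather than taken from the PR:

```go
package main

import "context"

// Event is an assumed stand-in for the guest agent's API event type.
type Event struct {
	Protocol string
	IP       string
	Port     int32
	Deleted  bool
}

// Monitor is an assumed interface; each implementation (Docker, containerd,
// Kubernetes) blocks and emits events until the context is cancelled.
type Monitor interface {
	MonitorPorts(ctx context.Context, ch chan<- Event)
}

// aggregate fans all monitors into a single channel.
func aggregate(ctx context.Context, monitors ...Monitor) <-chan Event {
	ch := make(chan Event)
	for _, m := range monitors {
		go m.MonitorPorts(ctx, ch)
	}
	return ch
}
```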
Reviewed Changes
Copilot reviewed 24 out of 25 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| templates/*.yaml | Adds portMonitors configuration sections for k8s, k3s, docker, and docker-rootful templates |
| pkg/limayaml/limayaml.go | Defines PortMonitor, Engine, and Kubernetes structs for configuration (sketched below the table) |
| pkg/limayaml/validate.go | Adds validation logic for Docker and Containerd socket paths |
| pkg/limayaml/defaults.go | Implements default handling and template processing for port monitors |
| pkg/guestagent/guestagent_linux.go | Refactors guest agent to use new Config struct and integrate event monitors |
| pkg/guestagent/events/*.go | Implements Docker, Containerd, and Kubernetes event monitoring systems |
| pkg/cidata/*.go | Updates template arguments and boot scripts to support port monitor configuration |
| cmd/lima-guestagent/*.go | Adds command-line flags for port monitor socket and config paths |
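From the accessor paths visible in the review comments below (y.PortMonitors.Containerd.Sockets, y.PortMonitors.Docker.Sockets), the configuration structs plausibly look like the following; field and tag names are inferred, not copied from the PR:

```go
// PortMonitors groups the per-engine port monitor configuration.
type PortMonitors struct {
	Docker     Engine     `yaml:"docker,omitempty"`
	Containerd Engine     `yaml:"containerd,omitempty"`
	Kubernetes Kubernetes `yaml:"kubernetes,omitempty"`
}

// Engine lists the sockets to watch for a container engine.
type Engine struct {
	Sockets []string `yaml:"sockets,omitempty"`
}

// Kubernetes lists kubeconfig paths for the service watcher.
type Kubernetes struct {
	Configs []string `yaml:"configs,omitempty"`
}
```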
```go
if out, err := executeGuestTemplate(y.PortMonitors.Containerd.Sockets[i], instDir, y.User, y.Param); err == nil {
	y.PortMonitors.Containerd.Sockets[i] = out.String()
} else {
	logrus.WithError(err).Warnf("Couldn't process Containerd socket %q as a template", y.PortMonitors.Docker.Sockets[i])
```
The warning message references Docker sockets but should reference Containerd sockets. This will confuse users when debugging template processing errors.
Suggested change:
```diff
-	logrus.WithError(err).Warnf("Couldn't process Containerd socket %q as a template", y.PortMonitors.Docker.Sockets[i])
+	logrus.WithError(err).Warnf("Couldn't process Containerd socket %q as a template", y.PortMonitors.Containerd.Sockets[i])
```
```go
type DockerEventMonitor struct{}

func NewDockerEventMonitor(_ []string) (*DockerEventMonitor, error) {
	panic("Dockert event monitoring is not implemented on this platform")
```
There's a typo in the panic message: 'Dockert' should be 'Docker'.
Suggested change:
```diff
-	panic("Dockert event monitoring is not implemented on this platform")
+	panic("Docker event monitoring is not implemented on this platform")
```
```go
type KubeServiceWatcher struct{}

func NewKubeServiceWatcher() *KubeServiceWatcher {
```
The function signature doesn't match the Linux implementation, which takes a cfgPaths []string parameter. This inconsistency will break compilation on platforms that use the stub, since callers pass the config paths.
Suggested change:
```diff
-func NewKubeServiceWatcher() *KubeServiceWatcher {
+func NewKubeServiceWatcher(cfgPaths []string) *KubeServiceWatcher {
```
```go
if !errors.As(err, &timeoutError) {
	return timeoutError != nil && timeoutError.Timeout()
}

return false
```
The logic is inverted. When errors.As returns false, timeoutError will be nil, so the function should return errors.As(err, &timeoutError) && timeoutError.Timeout().
Suggested change:
```diff
-if !errors.As(err, &timeoutError) {
-	return timeoutError != nil && timeoutError.Timeout()
-}
-
-return false
+return errors.As(err, &timeoutError) && timeoutError.Timeout()
```
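For reference, the corrected predicate collapses to a single expression; a minimal sketch assuming the target variable is a net.Error:

```go
package main

import (
	"errors"
	"net"
)

// isTimeout reports whether any error in err's chain is a timeout.
func isTimeout(err error) bool {
	var timeoutError net.Error
	return errors.As(err, &timeoutError) && timeoutError.Timeout()
}
```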
```sh
  --docker-sockets \"${LIMA_CIDATA_DOCKER_PORT_MONITOR_SOCKETS}" \
  --containerd-sockets \"{LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
  --kubernetes-configs \"{LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
```
Missing closing quote. Each line opens an escaped quote with `\"` but ends with a bare `" \`, which leaves the escaped quote unclosed.
Suggested change:
```diff
-  --docker-sockets \"${LIMA_CIDATA_DOCKER_PORT_MONITOR_SOCKETS}" \
-  --containerd-sockets \"{LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
-  --kubernetes-configs \"{LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
+  --docker-sockets "${LIMA_CIDATA_DOCKER_PORT_MONITOR_SOCKETS}" \
+  --containerd-sockets "${LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
+  --kubernetes-configs "${LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
```
```sh
pidfile="/run/lima-guestagent.pid"
command_args="daemon --debug=${LIMA_CIDATA_DEBUG} \
  --docker-sockets \"${LIMA_CIDATA_DOCKER_PORT_MONITOR_SOCKETS}" \
  --containerd-sockets \"{LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
```
Missing $ before the environment variable. Should be "${LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}".
Suggested change:
```diff
-  --containerd-sockets \"{LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
+  --containerd-sockets \"${LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
```
```sh
command_args="daemon --debug=${LIMA_CIDATA_DEBUG} \
  --docker-sockets \"${LIMA_CIDATA_DOCKER_PORT_MONITOR_SOCKETS}" \
  --containerd-sockets \"{LIMA_CIDATA_CONTAINERD_PORT_MONITOR_SOCKETS}" \
  --kubernetes-configs \"{LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
```
Missing $ before the environment variable. Should be "${LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}".
Suggested change:
```diff
-  --kubernetes-configs \"{LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
+  --kubernetes-configs \"${LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
```
```sh
  --kubernetes-configs \"{LIMA_CIDATA_KUBERNETES_SERVICE_WATCHER_CONFIGS}" \
  --vsock-port \"${LIMA_CIDATA_VSOCK_PORT}\" \
  --virtio-port \"${LIMA_CIDATA_VIRTIO_PORT}\""
command_background=true pidfile="/run/lima-guestagent.pid"
```
[nitpick] Multiple variable assignments on a single line reduce readability. Consider separating them into individual lines for better maintainability.
Suggested change:
```diff
-command_background=true pidfile="/run/lima-guestagent.pid"
+command_background=true
+pidfile="/run/lima-guestagent.pid"
```
```go
		p.listenersRW.Unlock()
		return
	}
	defer p.Remove(ctx, "tcp", hostAddress, guestAddress)
```
The defer statement is placed before the mutex unlock, which could cause the Remove method to be called while the mutex is still locked, potentially leading to deadlock.
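Deferred calls run only when the surrounding function returns, so the hazard is specifically a Remove that re-acquires listenersRW while it is still held at return. An illustrative reduction, with all names assumed:

```go
package main

import "sync"

// Illustrative only: if the deferred cleanup re-acquires a non-reentrant
// mutex that is still held when the function returns, the goroutine blocks
// forever.
func example(mu *sync.Mutex, remove func()) {
	mu.Lock()
	defer remove() // deadlocks at return if remove() calls mu.Lock()
	// ... work while holding mu, with no Unlock before return ...
}
```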