
Getting Started with Envoy Proxy: A Practical Introduction


What Is Envoy?

Envoy Proxy is a high-performance, open-source edge and service proxy originally built at Lyft. It’s now a graduated CNCF project and forms the data plane of popular service meshes like Istio and AWS App Mesh.

Unlike traditional proxies (e.g., NGINX, HAProxy), Envoy was designed from the ground up for cloud-native environments — dynamic configuration via APIs (xDS), rich observability, and first-class gRPC support.


Core Concepts

Listeners

A listener is a named network location — typically an IP address and port, or a Unix domain socket path — where Envoy accepts downstream connections. Each listener can have filter chains that process the incoming traffic.

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: some_service
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

Clusters

A cluster represents a group of upstream hosts that Envoy connects to on behalf of the listener. Envoy supports multiple load-balancing policies (round robin, least request, random, etc.).

  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 8080
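
The load-balancing policy is selected per cluster via `lb_policy`; round robin is the default. A sketch of the one-line change to switch the cluster above to least-request balancing:

```yaml
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: LEAST_REQUEST   # default is ROUND_ROBIN
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 8080
```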

The xDS API

One of Envoy’s killer features is dynamic configuration via the xDS API. Instead of restarting Envoy when your topology changes, a control plane (like Istio’s Pilot) pushes updates in real time:

  • LDS – Listener Discovery Service
  • CDS – Cluster Discovery Service
  • EDS – Endpoint Discovery Service
  • RDS – Route Discovery Service
  • SDS – Secret Discovery Service (TLS certificates)
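
To consume these, Envoy's bootstrap file declares where the control plane lives. A minimal sketch using aggregated discovery (ADS) — the `xds_cluster` name and the `control-plane:18000` address are placeholders for illustration:

```yaml
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
  cds_config:
    ads: {}          # fetch clusters over the ADS stream
  lds_config:
    ads: {}          # fetch listeners over the ADS stream
static_resources:
  clusters:
  - name: xds_cluster          # the control plane itself must be statically defined
    type: STRICT_DNS
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # xDS is served over gRPC, which needs HTTP/2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: control-plane
                port_value: 18000
```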

Why Use Envoy?

Feature | Envoy | NGINX | HAProxy
Dynamic config (no reload) | ✅ xDS APIs | ❌ | ❌
gRPC support | ✅ Native | ⚠️ Limited | ⚠️ Limited
Distributed tracing | ✅ Built-in | ❌ | ❌
Metrics (Prometheus) | ✅ Native | Via plugin | Via plugin
Service mesh data plane | ✅ | ❌ | ❌

Quick Start with Docker Compose

Here’s a minimal example that proxies HTTP traffic through Envoy:

# docker-compose.yml
version: "3"
services:
  envoy:
    image: envoyproxy/envoy:v1.29-latest
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
    ports:
      - "10000:10000"
  app:
    image: hashicorp/http-echo
    command: ["-text=Hello from upstream!"]
    ports:
      - "8080:5678"
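
One caveat: inside Compose, each container has its own loopback interface, so the `STATIC` cluster pointing at `127.0.0.1:8080` shown earlier won't reach the app container from inside the Envoy container. For this setup, point the cluster at the Compose service name instead — a sketch of the adjusted cluster in `envoy.yaml`:

```yaml
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STRICT_DNS           # resolve the Compose service name via DNS
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: app         # Compose service name
                port_value: 5678     # http-echo's container port
```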

Run it:

docker compose up
curl http://localhost:10000
# Hello from upstream!

Observability Out of the Box

Every Envoy instance exposes a /stats endpoint on its admin interface and integrates with:

  • Prometheus – scrape metrics directly
  • Zipkin / Jaeger / OpenTelemetry – distributed tracing headers
  • Access logs – structured JSON logging

No extra instrumentation needed in your application code.
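
The /stats endpoint lives on Envoy's admin interface, which must be enabled in the bootstrap config. A minimal sketch (port 9901 is the conventional choice, not a requirement):

```yaml
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
```

With that in place, `curl http://localhost:9901/stats` dumps the raw counters and gauges, and `curl http://localhost:9901/stats/prometheus` serves the same data in Prometheus exposition format for scraping.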


Next Steps

In a follow-up post I’ll cover mTLS between services and dynamic configuration with a simple Go-based xDS control plane. Stay tuned!

This post is licensed under CC BY 4.0 by the author.