class: title, in-person Deploy Microservices like a Ninja with Istio Service Mesh
.footnote[ *Presented by Anton Weiss*
*Otomato technical training.*
*http://otomato.link*

**Slides: https://devopstrain.pro/istio**

*Slide-generation engine borrowed from [container.training](https://github.com/jpetazzo/container.training)*

]

.debug[[istio/title.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/title.md)]

---

## Introduction

- This presentation was created by [Ant Weiss](https://twitter.com/antweiss) to support instructor-led workshops.
- We included as much information as possible in these slides
- Most of the information this workshop is based on is public knowledge and can also be accessed through the [official Istio documentation and tutorials](https://istio.io/docs)

.debug[[istio/intro.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/intro.md)]

---

## Training environment

- This is a hands-on training with exercises and examples
- We assume that you have access to a Kubernetes cluster
- The training labs for today's session were generously sponsored by [Strigo](https://strigo.io)
- We will be using [microk8s](https://microk8s.io) to get these clusters
- Haven't tried microk8s yet?! You're in for a treat!

.debug[[istio/intro.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/intro.md)]

---

## Getting Istio

- Get the source code and the slides for this workshop:

.exercise[

- On your Strigo VM:

```bash
git clone https://github.com/otomato-gh/istio.workshop.git
cd istio.workshop
./prepare-vms/setup_microk8s.sh
# enter new shell for kubectl completion
sudo su - ${USER}
```

]

- Choose 'No' when prompted for mutual TLS
- This will install a microk8s single-node cluster with Istio

.debug[[istio/intro.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/intro.md)]

---

### A few words about microk8s

```bash
sudo snap install microk8s --classic
sudo snap install kubectl --classic
microk8s.start
microk8s.enable istio
```

- Single-node Kubernetes done right
- Zero-ops k8s on just about any Linux box
- Many popular k8s add-ons can be enabled:
  - metrics-server
  - kube-dashboard
  - and of course: Istio
- For more: `microk8s.enable --help`

.debug[[istio/intro.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/intro.md)]

---

name: toc-chapter-1

## Chapter 1

- [What is a Service Mesh?](#toc-what-is-a-service-mesh)

.debug[(auto-generated TOC)]

---

name: toc-chapter-2

## Chapter 2

- [Istio Architecture](#toc-istio-architecture)
- [Exploring Istio on K8s](#toc-exploring-istio-on-ks)
- [The Demo Installation](#toc-the-demo-installation)

.debug[(auto-generated TOC)]

---

name: toc-chapter-3

## Chapter 3

- [Deploying the Application](#toc-deploying-the-application)
- [Deploying a self-hosted registry](#toc-deploying-a-self-hosted-registry)
- [Istio Observability Features](#toc-istio-observability-features)
- [Monitoring with Istio](#toc-monitoring-with-istio)
- [Distributed tracing with Jaeger](#toc-distributed-tracing-with-jaeger)

.debug[(auto-generated TOC)]

---

name: toc-chapter-4

## Chapter 4

- [Deploying to K8s with Istio](#toc-deploying-to-ks-with-istio)
- [Progressive Delivery Strategies](#toc-progressive-delivery-strategies)
- [Istio Traffic Management Basics](#toc-istio-traffic-management-basics)
- [Our App with Istio](#toc-our-app-with-istio)
- [Launching Darkly](#toc-launching-darkly)
- [Traffic Mirroring](#toc-traffic-mirroring)
- [Rolling out to Production with Canary](#toc-rolling-out-to-production-with-canary)

.debug[(auto-generated TOC)]

---

name: toc-chapter-5

## Chapter 5

- [Summing It All Up](#toc-summing-it-all-up)
.debug[(auto-generated TOC)]

.debug[[istio/toc.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/toc.md)]

---

class: pic

.interstitial[]

---

name: toc-what-is-a-service-mesh
class: title

What is a Service Mesh?

.nav[ [Previous section](#toc-) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-istio-architecture) ]

.debug[(automatically generated title slide)]

---

class: pic

# What is a Service Mesh?
*Twitter microservices having a little chat*
.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## What is a Service Mesh?

*The less helpful definition*

The term service mesh is used to describe the network of microservices that make up distributed business applications and the interactions between these services.

As such distributed applications grow in size and complexity, these interactions become ever harder to analyze, predict and maintain. Our services need to conform to contracts and protocols, but expect the unexpected to occur.

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## The Reality of Distributed Systems

- RPC instead of local communication
- Network is unreliable
- Latency is unpredictable
- Call stack depth is unknown
- Dependency on other services (and teams)
- Services are ephemeral (i.e. they come and go without prior notice)
- Unpredictable load

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## Types of Failures in Distributed Systems

- improper fallback settings when a service is unavailable
- retry storms from improperly tuned timeouts
- outages when a downstream dependency receives too much traffic
- cascading failures when a SPOF crashes

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## Resilience Patterns

- connection pools
- failure detectors, to identify slow or crashed hosts
- failover strategies:
  - circuit breaking
  - exponential back-offs
- load-balancers
- back-pressure techniques:
  - rate limiting
  - choke packets

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## Additional Concerns

- Service Discovery
- Observability
  - Distributed tracing
  - Log aggregation
- Security
  - Point-to-point mutual TLS
- Continuous Deployments
  - Traffic splitting
  - Rolling updates

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## Progressive Delivery

- Rolling Updates
- Blue-Green
- Canary
- Dark Launch
- Traffic Mirroring (shadowing)

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

## What Is A Service Mesh?

A network of lightweight, centrally configurable proxies taking care of inter-service traffic.

The purpose of these proxies is to solve the application networking challenges. They make application networking:

- reliable
- observable
- manageable

.debug[[istio/servicemesh.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/servicemesh.md)]

---

class: pic

.interstitial[]

---

name: toc-istio-architecture
class: title

Istio Architecture

.nav[ [Previous section](#toc-what-is-a-service-mesh) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-exploring-istio-on-ks) ]

.debug[(automatically generated title slide)]

---

class: pic

# Istio Architecture

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

## Envoy

Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh.
Istio leverages Envoy's many built-in features, for example:

- Dynamic service discovery
- Load balancing
- TLS termination
- HTTP/2 and gRPC proxies
- Circuit breakers
- Health checks
- Staged rollouts with %-based traffic split
- Fault injection
- Rich metrics

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

## The Sidecar Pattern

- The 'sidecar' is an assistant container in the pod
- Think Batman's Robin
- It takes on some responsibility that the main container can't be bothered with:
  - Log shipping
  - Data preparation
  - Or in our case: networking!

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

## Mixer

Mixer is a platform-independent component.

- Enforces access control and usage policies
- Collects telemetry data from the Envoy proxy and other Istio components
- The proxy extracts request-level attributes, and sends them to Mixer for evaluation

Mixer includes a flexible plugin model.

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

## Pilot

- Service discovery for the Envoy proxies
- Traffic management capabilities for intelligent routing (e.g., A/B tests, canary rollouts, etc.)
- Resiliency (timeouts, retries, circuit breakers, etc.)

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

## Citadel

- creates a [SPIFFE](https://spiffe.io/) certificate and key pair for each of the existing and new service accounts
- stores the certificate and key pairs as Kubernetes secrets
- when you create a pod, Kubernetes mounts the certificate and key pair into the pod according to its service account
- Citadel watches the lifetime of each certificate, and automatically rotates the certificates by rewriting the Kubernetes secrets

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

## Galley

- validates configuration
- will abstract Istio from the underlying platform (i.e. Kubernetes)

.debug[[istio/architecture.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/architecture.md)]

---

class: pic

.interstitial[]

---

name: toc-exploring-istio-on-ks
class: title

Exploring Istio on K8s

.nav[ [Previous section](#toc-istio-architecture) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-the-demo-installation) ]

.debug[(automatically generated title slide)]

---

# Exploring Istio on K8s

- Istio on Kubernetes stores all data in ... Kubernetes
- Istio installs 20+ [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)
- The Kubernetes API serves and handles the storage of these custom resources
- That means we communicate with the Istio control plane via the K8s API

.debug[[istio/exploreonk8s.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/exploreonk8s.md)]

---

## Exploring Istio on K8s

.exercise[

- Let us see these CRDs

```bash
kubectl get crd | grep istio
```

- Let us count how many we got

```bash
kubectl get crd | grep istio | wc -l
```

]

--

23 resource definitions

(Used to be 50+, but things are improving)

.debug[[istio/exploreonk8s.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/exploreonk8s.md)]

---

## Exploring Istio on K8s

- Ok, that's where the config is stored.
But where are the processes?

```bash
kubectl get pod
```

- Nothing here... Are they in kube-system?

```bash
kubectl get pod -n kube-system
```

- Not here either!

.debug[[istio/exploreonk8s.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/exploreonk8s.md)]

---

## Exploring Istio on K8s

- Let's look somewhere else

```bash
kubectl get ns
```

- Hey, there's an *istio-system* namespace

```bash
kubectl get pod -n istio-system
```

- Now we're talking!
- But why so many?!

.debug[[istio/exploreonk8s.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/exploreonk8s.md)]

---

class: pic

.interstitial[]

---

name: toc-the-demo-installation
class: title

The Demo Installation

.nav[ [Previous section](#toc-exploring-istio-on-ks) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-deploying-the-application) ]

.debug[(automatically generated title slide)]

---

# The Demo Installation

- microk8s installs the so-called _evaluation_ or _demo_ install of Istio
- It includes additional components:
  - Prometheus - for monitoring
  - Grafana - for dashboards
  - Jaeger - for tracing (see the istio-tracing-.. pod)
  - Kiali - the Istio UI

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

## The mixer pods

- We can see pilot, galley, citadel... But where is the mixer?

.exercise[

```bash
kubectl get pod -n istio-system -l=istio=mixer
```

]

- Mixer has 2 functions: defining traffic policy and exposing traffic telemetry.
- Therefore - 2 pods.

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

## The sidecars

- Now, where are the Envoys?
- Let's look at the Pilot pod:

.exercise[

```bash
kubectl describe pod -n istio-system -l istio=pilot
```

]

--

```
Containers:
  discovery:
    ...
    Image:  docker.io/istio/pilot:1.0.5
    ...
  istio-proxy:
    ...
    Image:  docker.io/istio/proxyv2:1.0.5
```

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

### The sidecars

- But how do the sidecars get into our own pods?
- Let's deploy a service.

.exercise[

```bash
kubectl create deployment httpbin --image=kennethreitz/httpbin
```

]

- And look at the pod:

.exercise[

```bash
kubectl describe pod -l=app=httpbin
```

]

- There's only one container. The sidecar proxy isn't there...

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

## The sidecar injection

- How do we inject the proxy into our pod?
- Do we need to edit our deployment ourselves?!
- There should be some magic somewhere!
- Remember when we looked at Istio pods there was that *sidecar-injector* pod?
- So why didn't it work?

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

## The sidecar injection

From the Istio docs:

"When you deploy your application using kubectl apply, the Istio sidecar injector will automatically inject Envoy containers into your application pods if they are started in namespaces labeled with **istio-injection=enabled**."

- Let's label our namespace and redeploy:

.exercise[

```bash
kubectl label namespace default istio-injection=enabled
kubectl delete pod -l=app=httpbin
```

]

--

- Recreating that pod took a whole lotta time, didn't it?!
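That extra time went into injecting the sidecar. A simplified sketch of what the recreated pod looks like (image names and versions are approximate and will differ in your cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  initContainers:
  - name: istio-init                          # sets up iptables redirection to Envoy
    image: docker.io/istio/proxy_init:1.0.5
  containers:
  - name: httpbin                             # our application container
    image: kennethreitz/httpbin
  - name: istio-proxy                         # the injected Envoy sidecar
    image: docker.io/istio/proxyv2:1.0.5
```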
.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

## The sidecar injection

- Look at our new pod:

.exercise[

```bash
kubectl describe pod -l=app=httpbin
```

]

- Now we have two containers and there was an [init-container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)!
- The istio-init container runs before the other containers are started and is responsible for setting up the iptables rules so that all inbound/outbound traffic will go through Envoy
- For a deep dive into what istio-init does, read this [blog post](https://medium.com/faun/understanding-how-envoy-sidecar-intercept-and-route-traffic-in-istio-service-mesh-20fea2a78833)

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

## Let's clean up the default namespace

- We just learned that automated istio-proxy injection is enabled per namespace.
- We will be using a special namespace for our deployments today.
- We don't want istio injection enabled on our `default` namespace, so let's clean it up:

.exercise[

```bash
kubectl label ns default --overwrite istio-injection=disabled
```

]

.debug[[istio/demoinstall.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoinstall.md)]

---

class: pic

.interstitial[]

---

name: toc-deploying-the-application
class: title

Deploying the Application

.nav[ [Previous section](#toc-the-demo-installation) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-deploying-a-self-hosted-registry) ]

.debug[(automatically generated title slide)]

---

# Deploying the Application

- Our purpose today is to learn how Istio allows us to implement *progressive delivery* techniques
- We'll do that by deploying a demo application - an *alephbeth* system :)
- It's just a frontend service that speaks to 2 backends (aleph and beth)
- All the services are bare-bones Python Flask apps

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

class: pic

## The Sample App

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## What's on the menu?

In this part, we will:

- **build** images for our app,
- **ship** these images with a registry,
- **run** deployments using these images,
- expose these deployments so they can communicate with each other,
- expose the web UI so we can access it from outside.

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## The plan

- Build our images using Docker
- Tag images so that they are named `$REGISTRY/servicename`
- Upload them to a registry
- Create deployments using the images
- Expose (with a ClusterIP) the services that need to communicate
- Expose (with a NodePort) the WebUI

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Which registry do we want to use?

- We could use the Docker Hub
- Or a service offered by our cloud provider (GCR, ECR...)
- Or we could just self-host that registry

*We'll self-host the registry because it's the most generic solution for this workshop.*

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Using the open source registry

- We need to run a `registry:2` container
  (make sure you specify tag `:2` to run the new version!)

- It will store images and layers to the local filesystem
  (but you can add a config file to use S3, Swift, etc.; see the sketch at the end of this slide)

- Docker *requires* TLS when communicating with the registry
  - except for registries on `127.0.0.0/8` (i.e. `localhost`)
  - or when using the Engine flag `--insecure-registry`

- Our strategy: publish the registry container on a NodePort, so that it's available through `127.0.0.1:32000` on our single node

.warning[We're choosing port 32000 because it's the default port for an insecure registry on microk8s]

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]
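A registry config file that switches storage to S3 might look roughly like this (a sketch only; we won't use it in this workshop, and the bucket name and credentials are placeholders). It would be mounted over `/etc/docker/registry/config.yml` in the container:

```yaml
# config.yml for the registry container (sketch)
version: 0.1
storage:
  s3:
    region: us-east-1
    bucket: my-registry-bucket     # placeholder bucket name
    accesskey: "<ACCESS_KEY>"      # placeholder credentials
    secretkey: "<SECRET_KEY>"
http:
  addr: :5000
```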
---

class: pic

.interstitial[]

---

name: toc-deploying-a-self-hosted-registry
class: title

Deploying a self-hosted registry

.nav[ [Previous section](#toc-deploying-the-application) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-istio-observability-features) ]

.debug[(automatically generated title slide)]

---

# Deploying a self-hosted registry

- We will deploy a registry container, and expose it with a NodePort 32000

.exercise[

- Create the registry service:

```bash
kubectl create deployment registry --image=registry:2
```

- Expose it on a NodePort:

```bash
kubectl create service nodeport registry --tcp=5000 --node-port=32000
```

]

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Testing our registry

- A convenient Docker registry API route to remember is `/v2/_catalog`

.exercise[

- View the repositories currently held in our registry:

```bash
REGISTRY=localhost:32000
curl $REGISTRY/v2/_catalog
```

]

--

We should see:

```json
{"repositories":[]}
```

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Testing our local registry

- We can retag a small image, and push it to the registry

.exercise[

- Make sure we have the busybox image, and retag it:

```bash
docker pull busybox
docker tag busybox $REGISTRY/busybox
```

- Push it:

```bash
docker push $REGISTRY/busybox
```

]

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Checking again what's on our local registry

- Let's use the same endpoint as before

.exercise[

- Ensure that our busybox image is now in the local registry:

```bash
curl $REGISTRY/v2/_catalog
```

]

The curl command should now output:

```json
{"repositories":["busybox"]}
```

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Building and pushing our images

- We are going to use a convenient feature of Docker Compose

.exercise[

- Go to the `alephbeth` directory:

```bash
cd ~/istio.workshop/alephbeth
```

- Build and push the images:

```bash
export REGISTRY
docker-compose build
docker-compose push
```

]

Let's have a look at the `docker-compose.yaml` file while this is building and pushing.
.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

```yaml
services:
  front:
    build: front
    image: ${REGISTRY}/front:${TAG-0.3}
  aleph:
    build: aleph
    image: ${REGISTRY}/aleph:${TAG-0.3}
  beth:
    build: beth
    image: ${REGISTRY}/beth:${TAG-0.3}
  mongo:
    image: mongo
```

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Deploying all the things

- We can now deploy our code
- We will create a new namespace 'staging' and enable istio sidecar injection on it

.exercise[

- We have kubernetes yamls ready for the first version of our app in the `deployments` dir:

```bash
cd deployments
kubectl create ns staging
kubectl label ns staging istio-injection=enabled
kubectl apply -f aleph.yaml -f front.yaml -f beth.yaml -n staging
```

]

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Is this working?

- After waiting for the deployment to complete, let's look at the logs!

(Hint: use `kubectl get deploy -w` to watch deployment events)

.exercise[

- Look at some logs:

```bash
kubectl logs -n staging deploy/front
```

- Hmm, that didn't work. We need to specify the container name!

```bash
kubectl logs -n staging deploy/front front
kubectl logs -n staging deploy/front istio-proxy
```

]

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

## Accessing the web UI

- Our `front` service is exposed on a NodePort.
- Let's look at it and see if it works:

.exercise[

- Get the port of the `front` service

```bash
kubectl get svc -n staging front -o=jsonpath='{ .spec.ports[0].nodePort }{"\n"}'
```

- Open the web UI in your browser (http://node-ip-address:3xxxx/)

]

--

*You should see the frontend application showing the versions of both its backends*

.debug[[istio/demoapp.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/demoapp.md)]

---

class: pic

.interstitial[]

---

name: toc-istio-observability-features
class: title

Istio Observability Features

.nav[ [Previous section](#toc-deploying-a-self-hosted-registry) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-monitoring-with-istio) ]

.debug[(automatically generated title slide)]

---

# Istio Observability Features

- Observability (or o11y) is an important concept in the microservices approach
- Observability of our systems is composed of three main components:
  - Logs
  - Metrics
  - Traces
- Istio makes inter-service networking observable by:
  - Collecting request metrics
  - Collecting distributed traces

.debug[[istio/observability.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/observability.md)]

---

## Istio and Observability

.trivia[

*Question*:

- What Istio component is responsible for collecting telemetry?

]

--

*Answer*:

- Mixer is responsible for collecting and shipping telemetry

.debug[[istio/observability.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/observability.md)]

---

class: pic

## Mixer and its Adapters

- Mixer is pluggable. Mixer Adapters allow us to post to multiple backends:

.debug[[istio/observability.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/observability.md)]

---

## Observability Add-Ons in Our Istio Installation

- Let's see what observability services we have in our installation.
.exercise[

```bash
kubectl get svc -n istio-system
```

]

- We have:
  - **Prometheus**: for network telemetry
  - **Grafana**: to visualize Prometheus data
  - **Kiali**: to visualize the connections between our services
  - **Jaeger** (it's the service named `tracing`): to store and visualize distributed traces
  - **Zipkin**: another option to store and visualize distributed traces

.debug[[istio/observability.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/observability.md)]

---

## Explore the Telemetry

.exercise[

- Let's expose Jaeger, Grafana and Kiali on NodePorts

```bash
for service in tracing grafana kiali; do
  kubectl patch svc -n istio-system $service --type='json' \
    -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
done
```

- Get the ports for the exposed services:

```bash
kubectl get svc grafana -n istio-system -o jsonpath='{ .spec.ports[0].nodePort }{"\n"}'
```

- Do the same for the `kiali` and `tracing` services
- Browse to http://your-node-ip:3XXXX (replace with the actual service port)

]

.debug[[istio/observability.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/observability.md)]

---

class: pic

.interstitial[]

---

name: toc-monitoring-with-istio
class: title

Monitoring with Istio

.nav[ [Previous section](#toc-istio-observability-features) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-distributed-tracing-with-jaeger) ]

.debug[(automatically generated title slide)]

---

# Monitoring with Istio

- All request metrics are sent by Mixer to Prometheus and visualized with Grafana dashboards

.exercise[

- Browse to `istio-service-dashboard` in Grafana
- Create some load by reloading your browser a few times
- Check Grafana for the following metrics:
  - Request success rate by source
  - Request duration

]

.debug[[istio/grafana.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/grafana.md)]

---

class: pic

.interstitial[]

---

name: toc-distributed-tracing-with-jaeger
class: title

Distributed tracing with Jaeger

.nav[ [Previous section](#toc-monitoring-with-istio) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-deploying-to-ks-with-istio) ]

.debug[(automatically generated title slide)]

---

# Distributed tracing with Jaeger

.exercise[

- Generate some traffic by reloading `front` in your browser.
- Look at the traces in Jaeger
- Is one of the backends slower than the other one?

]

--

- Looks like `beth` is taking too long to respond...

.debug[[istio/tracing.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/tracing.md)]

---

## What makes distributed tracing possible?

Although Istio proxies are able to automatically send spans, they need some hints to tie together the entire trace. Applications need to propagate the appropriate HTTP headers so that when the proxies send span information, the spans can be correlated correctly into a single trace.

In `front/front.py`, line 17:

```python
incoming_headers = [
    'x-request-id',
    'x-b3-traceid',
    'x-b3-spanid',
    'x-b3-parentspanid',
    'x-b3-sampled',
    'x-b3-flags',
    'x-ot-span-context'
]
```

Each service needs to pass these headers on to its downstream connections.

.debug[[istio/tracing.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/tracing.md)]

---

## Let's fix the problem!

- We've found slowness in `beth` responses
- Let's see what's causing this:

.exercise[

- Look at `beth/api.py`, line 33
- A-ha, looks like someone forgot to remove some testing code...
]

- Let's fix the issue, build a new version and redeploy.

.debug[[istio/tracing.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/tracing.md)]

---

class: pic

.interstitial[]

---

name: toc-deploying-to-ks-with-istio
class: title

Deploying to K8s with Istio

.nav[ [Previous section](#toc-distributed-tracing-with-jaeger) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-progressive-delivery-strategies) ]

.debug[(automatically generated title slide)]

---

# Deploying to K8s with Istio

The plan:

- Fix the slowness by removing the `time.sleep(2)` line in beth/api.py
- Build a new image tagged `localhost:32000/beth:0.2`
- Push the new version to our internal registry
- Update the `beth` deployment to serve the new version

.debug[[istio/deployments.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/deployments.md)]

---

## Deploying by Kill-And-Replace

.exercise[

```bash
cd alephbeth/beth
```

- Fix the code of beth/api.py in your editor of choice

```bash
docker build . -t localhost:32000/beth:0.2
docker push localhost:32000/beth:0.2
kubectl -n staging set image deploy/beth beth=localhost:32000/beth:0.2 --record
```

- Verify the deployment is updated
- Check Jaeger to see if the slowness is resolved

]

.trivia[

Do you know what that `--record` flag in the last command does?

]

.debug[[istio/deployments.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/deployments.md)]

---

## Is This the Right Way to Fix This?

- We just replaced a backend service by killing it

--

- What if it was in the middle of serving a request?

--

- Is the new version even functioning correctly?

--

- Look at the version displayed for `beth`. It's the wrong number!
We have a bug!

--

- How can we do better?

.debug[[istio/deployments.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/deployments.md)]

---

class: pic

.interstitial[]

---

name: toc-progressive-delivery-strategies
class: title

Progressive Delivery Strategies

.nav[ [Previous section](#toc-deploying-to-ks-with-istio) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-istio-traffic-management-basics) ]

.debug[(automatically generated title slide)]

---

# Progressive Delivery Strategies

**Progressive Delivery** is the collective term for a set of deployment techniques that allow for gradual, reliable and low-stress release of new software versions into production environments.

Istio's advanced traffic shaping capabilities make some of these techniques significantly easier.

Techniques we will be looking at today are:

- Dark launch
- Canary deployments
- Traffic mirroring

.debug[[istio/progressive.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/progressive.md)]

---

## Dark Launch

**Dark Launch** refers to the process where the new version is released to production but is only available to internal or *friendly* users - via feature toggles or smart routing.

This way we can battle-test new features and bug fixes in production long before paying customers are affected.

.debug[[istio/progressive.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/progressive.md)]

---

## Canary Deployments

A **Canary Deployment** is the process in which a new version released to production gets only a tiny percentage of the actual production traffic, while the rest continues to be served by the old version. If the new version misbehaves, the disruption stays minimal and tolerable.

If the new version functions fine, we gradually shift more traffic over to it, until all traffic is served by the new version and the old version can be retired.

.debug[[istio/progressive.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/progressive.md)]
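With Istio, this boils down to a weighted route. A sketch of a rule sending 10% of traffic to a new version might look like this (`my-service` is a placeholder; the `v1`/`v2` subsets would be defined by a DestinationRule, which we'll meet in a minute):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1      # the old version keeps 90% of the traffic
      weight: 90
    - destination:
        host: my-service
        subset: v2      # the canary gets 10%
      weight: 10
```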
---

## Traffic Mirroring

*Traffic Mirroring* (or traffic shadowing) is more of a testing technique: we release the new version to production and send it a copy of *all* the production traffic, in parallel with the old version, which keeps serving the real requests. Responses from the new version are never sent back to the callers.

This allows us to test the new version with full production traffic and data without impacting our users.

.debug[[istio/progressive.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/progressive.md)]

---

## How Can Istio Help

- Let's see how we can implement all of the above with Istio's help
- But first let's learn the basics of Istio traffic management

.debug[[istio/progressive.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/progressive.md)]

---

class: pic

.interstitial[]

---

name: toc-istio-traffic-management-basics
class: title

Istio Traffic Management Basics

.nav[ [Previous section](#toc-progressive-delivery-strategies) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-our-app-with-istio) ]

.debug[(automatically generated title slide)]

---

# Istio Traffic Management Basics

- In order to implement Progressive Delivery with Istio we need to use 2 Istio resources:
  - [Virtual Service](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/)
  - [Destination Rule](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/)

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

## Virtual Service

A VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry.

The source of traffic can also be matched in a routing rule. This allows routing to be customized for specific client contexts.

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

## Virtual Service - path rewriting

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/wpcatalog"
    - uri:
        prefix: "/consumercatalog"
    rewrite:
      uri: "/newcatalog"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local
```

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

## Virtual Service - header based routing

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: promotions
spec:
  hosts:
  - promotions.prod.svc.cluster.local
  http:
  - match:
    - headers:
        User-Agent:
          regex: ".*Mobile.*"
      uri:
        prefix: "/promotions/mobile"
    route:
    - destination:
        host: promotions-mobile.prod.svc.cluster.local
  - route:
    - destination:
        host: promotions.prod.svc.cluster.local
```

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

## Virtual Service - versioned destinations

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2
      weight: 25
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v1
      weight: 75
```

- Wait, where do these `subset`s come from?

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

## Destination Rule

**DestinationRule** defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.

*Version specific policies* can be specified by defining a named *subset* and overriding the settings specified at the service level. On Kubernetes these subsets can be defined by referencing pod labels.

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

## Destination Rule

The following rule uses a round robin load balancing policy for all traffic going to a subset named `testversion` that is composed of endpoints (e.g., pods) with the label `version: v3`.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
```

.debug[[istio/trafficmgmt.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/trafficmgmt.md)]

---

class: pic

.interstitial[]

---

name: toc-our-app-with-istio
class: title

Our App with Istio

.nav[ [Previous section](#toc-istio-traffic-management-basics) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-launching-darkly) ]

.debug[(automatically generated title slide)]

---

# Our App with Istio

- Ok, time to start managing our app with Istio
- The first thing to do is create VirtualService entities for each of our services
- I've prepared a definition for the front service in `alephbeth/istio/front-vs.yaml`

.exercise[

```bash
kubectl apply -f alephbeth/istio/front-vs.yaml -n staging
```

]

Note: you won't notice a change. We're only accessing the service from outside of the cluster. Controlling traffic to the `front` service would require defining a [Gateway](https://istio.io/docs/reference/config/networking/v1alpha3/gateway/) object. But that is out of scope for our training today.

.debug[[istio/ourappwithistio.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/ourappwithistio.md)]

---

## VirtualServices for Everyone

- Now create virtual services for `aleph` and `beth`

.exercise[

- Create yaml definitions for both services (a minimal sketch for `aleph` follows this slide)
- Apply them to your cluster

```bash
kubectl apply -f alephbeth/istio/aleph-vs.yaml -n staging
kubectl apply -f alephbeth/istio/beth-vs.yaml -n staging
```

- Verify

```bash
kubectl get virtualservice -n staging
```

]

- Is everything still working?

.debug[[istio/ourappwithistio.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/ourappwithistio.md)]
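If you need a starting point, a minimal VirtualService for `aleph` could look roughly like this (a sketch; the files shipped in the repo may differ, and `beth` is analogous):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: aleph
spec:
  hosts:
  - aleph               # the Kubernetes service name, as resolved inside the mesh
  http:
  - route:
    - destination:
        host: aleph     # for now, all traffic goes to the single existing version
```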
---

## Let's build a New Version

- Remember that `beth` wasn't displaying the version right?
- Let's fix that and deploy a new version.
- But this time we'll launch darkly!

.exercise[

- In `beth/api.py` change the version on line 12:

```python
'version': '0.3',
```

- Build a new docker image and push it to the local registry
- Don't update your existing `beth` deployment. We will launch darkly!

]

.debug[[istio/ourappwithistio.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/ourappwithistio.md)]

---

class: pic

.interstitial[]

---

name: toc-launching-darkly
class: title

Launching Darkly

.nav[ [Previous section](#toc-our-app-with-istio) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-traffic-mirroring) ]

.debug[(automatically generated title slide)]

---

# Launching Darkly

- Our existing deployments already have a version label:

```yaml
labels:
  app: beth
  version: v01
```

.exercise[

- Create a new deployment for `beth` in file `deployments/beth-v03.yaml` labeled as:

```yaml
version: v03
```

- Don't forget to also update the deployment name and the image name
- Deploy

```bash
kubectl apply -f deployments/beth-v03.yaml -n staging
```

]

.debug[[istio/darklaunch.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/darklaunch.md)]

---

## Did This Work As Planned?

- Try reloading the front UI in your browser
- Hmm, we get both versions intermittently. Not what we wanted!
- Let's fix our virtual service.
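Conceptually, the fix is to route based on who is making the request. A sketch of such a rule, assuming the frontend forwards an `end-user` header for signed-in users (the repo's `istio/dark-launch.yaml` is the authoritative version and may differ):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: beth
spec:
  host: beth
  subsets:
  - name: v01
    labels:
      version: v01
  - name: v03
    labels:
      version: v03
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: beth
spec:
  hosts:
  - beth
  http:
  - match:
    - headers:
        end-user:            # assumed header name, for illustration only
          exact: developer
    route:
    - destination:
        host: beth
        subset: v03          # friendly users get the new version
  - route:
    - destination:
        host: beth
        subset: v01          # everyone else stays on the old version
```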
.exercise[

```bash
kubectl apply -f istio/dark-launch.yaml
```

]

- Look at `istio/dark-launch.yaml`

.debug[[istio/darklaunch.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/darklaunch.md)]

---

## Privileged Access

- Back in your browser, sign in as user `developer` (the `Sign in` button is at the top right)
- You should be consistently getting version 0.3
- Sign out now.
- Are you getting the older version again?

.debug[[istio/darklaunch.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/darklaunch.md)]

---

class: pic

.interstitial[]

---

name: toc-traffic-mirroring
class: title

Traffic Mirroring

.nav[ [Previous section](#toc-launching-darkly) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-rolling-out-to-production-with-canary) ]

.debug[(automatically generated title slide)]

---

# Traffic Mirroring

- Rolling out the app to internal users is great.
- It allows us to test features in isolation.
- But this still isn't the real traffic.
- Let's replicate *all* the traffic to the new version and see how it behaves.

.debug[[istio/mirroring.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/mirroring.md)]

---

## Let's mirror all traffic to v03:

.exercise[

```bash
kubectl apply -n staging -f - <<EOF
...
EOF
```

]
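The inline manifest is not reproduced in these notes. A sketch of a mirroring rule for `beth`, assuming the same `v01`/`v03` subsets as before, might look like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: beth
spec:
  hosts:
  - beth
  http:
  - route:
    - destination:
        host: beth
        subset: v01        # real traffic is still served by the old version
    mirror:
      host: beth
      subset: v03          # a copy of every request is shadowed to the new version
```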
we're done. Version v01 can now be deleted.

.debug[[istio/canary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/canary.md)]

---

class: pic

.interstitial[]

---

name: toc-summing-it-all-up
class: title

Summing It All Up

.nav[ [Previous section](#toc-rolling-out-to-production-with-canary) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-) ]

.debug[(automatically generated title slide)]

---

# Summing It All Up

- We've learned what a Service Mesh is
- We've learned how Istio works
- We've seen the following progressive delivery strategies:
  - Dark Launch
  - Traffic Mirroring
  - Canary Deployment

.debug[[istio/summary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/summary.md)]

---

## Wrap-up Exercise

- Check out the `final` branch of istio.workshop

.exercise[

```bash
git checkout final
```

]

- Build and push a new version of `aleph`

.exercise[

```bash
cd aleph
docker build . -t ${REGISTRY}/aleph:0.2
docker push ${REGISTRY}/aleph:0.2
```

]

.debug[[istio/summary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/summary.md)]

---

## Wrap-up Exercise

- Create a DestinationRule for aleph in namespace `staging`:
  - With subset `production` pointing at pods with label `version=v01`
  - With subset `canary` pointing at pods with label `version=v02`
- Create a VirtualService in namespace `staging`:
  - Default route: `aleph` with subset `production`
  - Mirror traffic to subset `canary`
- Create a new deployment `aleph-v02` with labels:
  - version: v02
  - app: aleph

.debug[[istio/summary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/summary.md)]

---

## Check Status of the New Deployment

- Generate load on the aleph service
- (Hint: use the `curler` pod we've created)
- Check Grafana for aleph service stats
- Is the new version healthy?

--

- It's not!
- Remove the deployment for `aleph v02`

.debug[[istio/summary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/summary.md)]

---

## Let's Fix This

- Fix `aleph`. *Hint: the bug is in the `version` method*
- Build version 0.3 of `aleph`
- Deploy the new version
- Expose it as a canary. Increment by 20 percent each time, verifying that all the requests are successful.

.debug[[istio/summary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/summary.md)]

---

## That's It for Today!

- Thanks for attending!
- Any future questions: Slack or `contact@otomato.link`
- For more training: https://devopstrain.pro

.debug[[istio/summary.md](https://github.com/otomato-gh/istio.workshop.git/tree/HEAD/slides/istio/summary.md)]