Configuring Logging in AWS EKS Using Fluent Bit and CloudWatch
Observability is essential for any application, and the Smart-cash project is no exception. Previously, Prometheus was integrated for monitoring.
Previous articles showed how to build the EKS infrastructure in AWS, how to install FluxCD to implement GitOps practices, and how to install the Prometheus Operator (using Helm) and Grafana for monitoring.
In a previous article, I introduced the idea behind this project, which I named SmartCash, and began building the Terraform code for the infrastructure in AWS and the pipeline to deploy it.
The journey to learn a new tool can be a little tricky. Watching videos and reading blogs is an option, but that is not enough for everyone; personally, I need a hands-on approach for effective learning.
AWS CloudWatch Logs can be used to store logs generated by resources created in AWS or by external resources. Once the logs are in CloudWatch, you can run queries to extract specific information and create alerts for specific events.
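To make the query part concrete, here is a minimal sketch using boto3 and CloudWatch Logs Insights. The region, log group name, and query string are placeholders I chose for illustration, not values taken from this project:

```python
import time

import boto3

# Placeholders for illustration: region and log group depend on your own setup.
logs = boto3.client("logs", region_name="us-east-1")
LOG_GROUP = "/aws/containerinsights/my-eks-cluster/application"

# Logs Insights query: count log lines containing "error", in 5-minute buckets.
QUERY = """
fields @timestamp, @message
| filter @message like /error/
| stats count() as error_count by bin(5m)
"""

end_time = int(time.time())
start_time = end_time - 3600  # query the last hour of logs

query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=start_time,
    endTime=end_time,
    queryString=QUERY,
)["queryId"]

# Logs Insights queries run asynchronously, so poll until the query finishes.
while True:
    response = logs.get_query_results(queryId=query_id)
    if response["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(2)

# Each result row is a list of {"field": ..., "value": ...} dictionaries.
for row in response["results"]:
    print({col["field"]: col["value"] for col in row})
```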
In this post, I will share my experience enabling and configuring logging in an EKS cluster and creating alerts that send a notification when a specific event appears in the logs.
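As a preview of the alerting piece, the sketch below (again with boto3) creates a metric filter that matches a pattern in the log events and a CloudWatch alarm that publishes to an SNS topic when the pattern shows up. The log group, filter pattern, metric names, and topic ARN are assumptions for illustration, not the exact values used later in the article:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholder names: the real log group comes from the Fluent Bit configuration
# and the SNS topic would already exist in the account.
LOG_GROUP = "/aws/containerinsights/my-eks-cluster/application"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:logging-alerts"

# 1. Metric filter: every log event matching the pattern increments a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="payment-errors",
    filterPattern='"payment failed"',  # literal phrase match in the log events
    metricTransformations=[
        {
            "metricName": "PaymentErrorCount",
            "metricNamespace": "SmartCash/Logs",
            "metricValue": "1",
            "defaultValue": 0.0,
        }
    ],
)

# 2. Alarm: if the metric reaches 1 or more in a 5-minute period, notify the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="smartcash-payment-errors",
    MetricName="PaymentErrorCount",
    Namespace="SmartCash/Logs",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```

The same filter and alarm could equally be defined in Terraform; the boto3 version is only meant to show which pieces are involved.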