2025 in review
The New Year is nigh, and it's time to look back at the past year. Let's take a look at some numbers and noticeable changes, however big or small they might be. Mostly related to this blog, though some may be personal.
Advent season is here! And that means advent challenges as well!
After a disastrous attempt at Advent of Code last year, this year I was very happy to see that Sad Servers started an Advent challenge of their own -- Advent of Sysadmin! At last, a challenge in which I can (hopefully) make it past task 3. And this means more challenges for us to tackle. The Advent will consist of 12 challenges. To keep things slightly more interesting, I will publish the solution to each task the day after it's released: for example, today, on December 2, I will solve the task from December 1, and so on. Have fun!
All tasks are available!
Several weeks ago, while tinkering with a Wi-Fi router in a coffee shop, a thought occurred to me: "Some networks might be blocking user activity based on MAC addresses. It might be a good idea to automate MAC address changes." So I decided to write a script that would change my laptop's MAC address.
Complex systems require extensive monitoring and observability. Systems as complex as Kubernetes clusters have so many moving parts that sometimes it's a task and a half just to configure their monitoring properly. Today I'm going to talk in depth about cross-account observability for multiple EKS clusters, explore various implementation options, outline the pros and cons of each approach, and explain one of them in close detail. Whether you're an aspiring engineer seeking best-practice advice, a seasoned professional ready to disagree with everything, or a manager looking for ways to optimize costs -- this article might be just right for you.
I love a good challenge. I love the feeling when the brain sparks and screeches while trying to solve another mystery. For several years, I've been tackling all sorts of nut-cracking challenges, and for several months, I've been thinking of creating one myself. Luckily, I have just the right resources for that: a personal website and a blog. And finally, we're here. Welcome to hatedabamboo's ARG:2025!
Enter the game
CTF challenges continue to be one of my interests for their ability to show me even more ways in which my allegedly "secure" and "solid" infrastructure setup can be accessed by a malicious actor. This time we're gonna discuss the second challenge in a series of CTFs made by Wiz: EKS Cluster Games.
Databases are a cornerstone of any meaningful business application. Or not meaningful. Or not even business. They keep things consistent. Yes, that's the one.
For decades, we've been using usernames and passwords to connect to databases inside applications. While consistent and secure enough, sometimes we want a different, more secure way to access sensitive data. And in this article, I'm going to show you the entire process of configuring a database connection using AWS native tools -- IAM roles and policies.
Hello, dear reader! It's been a while since our last one-way communication. Mostly because the last couple of months have been taxing on me. Searching for a new job is not an easy task these days. Also, there's been a new Warhammer box, which I just couldn't resist.
But I'm slowly getting back up to speed, and today we're gonna explore how much of a managed service you can actually manage -- in particular, how to configure custom parameters to spin up instances and storage on AWS EKS to our liking.
CTF (Capture The Flag) challenges are a fun and safe way to stretch a stale brain muscle and learn a trick or two about how robust security is not actually that robust. Today we're going to solve The Big IAM Challenge[1] and reflect on the lessons learned.
Integrating Playwright end-to-end test reporting into a CI/CD pipeline by automatically uploading the generated reports to an AWS S3 bucket, enabling easy access and centralized storage.