Hightower is a well-respected technologist and thinker in the cloud-native space.
He is a philosopher.
Who other than a philosopher would deliberately have a repository without any code?
Nash and Garrison do a great job of giving Hightower space to speak and share his thoughtful reflections.
He sees the higher abstractions and the commonalities between technical mindsets in different fields.
This approach is vital to professional growth in the tech industry.
All too often, engineers look down on people in roles further from the code instead of understanding that the honing
of skills in those roles is just as valuable and significant.
Hightower talks about how his eyes were opened to this fact while advising NFL stars Larry Fitzgerald
and Kevin Beachum on investing in tech. He realized how professional football players and technical people
at the top of their respective games share a mindset and a deep intuition for systems.
This systems-based thinking extends beyond the day-to-day of work in tech.
This episode examines how that thinking applies in the open source ecosystem, the role of authenticity when money is on the line,
and how a leader puts people into positions to succeed.
Developing this mindset takes years.
You build up your relationships, taste, and knowledge in layers.
What’s so exciting about this growth is that you never know where it might lead you.
It has led Hightower everywhere from keynotes at KubeCon to running electrical wiring for bidets.
This gets at why he is always such a joy to learn from.
He takes on whatever challenges are ahead of him and sees how they all tie together.
Nash and Garrison do a fantastic job of probing this perspective and the result is a great episode
that is worth saving and revisiting.
I have followed the hosts of Fork Around and Find Out since their previous podcast, Ship It!,
which they revived early last year. With their parent podcast network, The Changelog,
sunsetting its programming outside of its core show, the pair is striking out on their own.
I am so excited to see what is in store for them.
This post is a companion piece to a talk I gave at GopherCon 2023.
You don’t need a Linux distro in your Docker image to run a
statically linked Go binary. Docker has a special base image called
scratch that is empty.
It allows your binary to run directly against the kernel while remaining
isolated from the host system.
While we still need a distro to build the binary, we can use scratch as the base image for the final stage.
Docker only includes the files that are in the final stage in the image. This means that we can make our
images much smaller, allowing them to be shipped and downloaded faster. It also reduces the attack surface
of the image, because there is no shell available to run commands and exploits.
Let’s take a standard Dockerfile for a Go web app and see how we can use scratch to make it
smaller, easier to ship, and more secure.
FROM golang:1.21-alpine
COPY ./webapp/* .
RUN go build -o /webapp
EXPOSE 8080
CMD ["/webapp"]
The image size of the generated Docker image is 298 MB, even for a simple 16 line Go program.
We can split our Dockerfile into two stages: one for building the binary and one for running it.
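A sketch of what that two-stage Dockerfile might look like, assuming the same simple web app as above; CGO_ENABLED=0 ensures the binary is statically linked so it can run on scratch.

FROM golang:1.21-alpine AS build
COPY ./webapp/* .
RUN CGO_ENABLED=0 go build -o /webapp

FROM scratch
COPY --from=build /webapp /webapp
EXPOSE 8080
CMD ["/webapp"]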
This Dockerfile uses the Alpine distro in the first stage to build the binary. Then the binary
is copied into the second stage which uses scratch as the base image. The resulting image is 6.72 MB.
That is a 98% reduction in size.
Podcasts have played a critical role in my development as a software engineer. When I was first trying to get into the industry, I would constantly listen to podcasts like Software Engineering Daily to learn how people talk about their work. I would hear terms that were completely foreign to me and write them down to dig in on my own time.
Today, several years into my career, my podcast habit continues. Now it’s more focused on specific technology interests as well as how people think about and approach their work.
This feed is a collection of podcasts that make up the bulk of my podcast listening time each week. I pay for the ”++” subscription to support the team behind these podcasts. It’s certainly a worthwhile investment for the quality and amount of insights I’ve received from The Changelog over the years.
They recently brought back my favorite podcast to the feed, Ship It!, which covers deployment and system architecture at scale. I love hearing how others are tackling the same problems I deal with every day at work and getting inspired to try different approaches and workflows.
Wes and Scott, with their team, are incredibly consistent at producing meaningful and entertaining episodes of Syntax multiple times per week. This is where I go to get insight into the frontend space. I like that they also get into practical development-related issues like organization and working with ADHD.
These are short episodes; I like them as meditative reflections on how to self-actualize to be more productive and calmer at work. I do my best when I can focus and identify the right points of leverage to accomplish a task. This podcast helps to highlight where those points of leverage might be found and how to approach them.
This podcast is all about product. In growing as a developer, further progress now seems to lie in better understanding my relationship with the product organization and long-term projects. It’s good, then, that this work fascinates me. I want to get better at identifying product-market fit and aligning myself with other teams’ incentives so we can work together better. This podcast interviews some of the smartest product people out there. It’s fantastic to learn from their experiences and take their learnings to heart.
I’m here for the deep dives into Postgres. It’s my go-to database and I hadn’t realized how many layers there are to understanding it. Over the last two decades, database specialization has somewhat fallen out of fashion with much of the focus being on application engineering.
However, the beauty and complexity of databases have only grown over those same years. This podcast helps me learn how to truly think about and use Postgres, as well as understand its trade-offs.
The only reason I don’t listen to LocalFirst.FM weekly is that it only comes out every two weeks.
If you want to set me off on a passionate rant for anywhere from 30 minutes to 8 hours, bring up local-first development. This is the technical horizon that excites me most about the next generation of development. Local-first is all about having state live on your user’s device, allowing them to update that state, and synchronizing it seamlessly with other users’ changes.
As solutions like Google Docs and Notion have driven the approach of always-connected, synchronous online co-editing, I think the generation of applications that replaces them will focus on an offline-first model with genuinely novel approaches to syncing (see CRDTs).
LocalFirst.FM is a new podcast, but it is a spiritual successor to Metamuse which is complete, but also worth checking out.
I want to be a homelabber. I’m inspired by people running their own compute at home and I want to join them. I’ve toyed with the idea for several years, picking up aspirational Raspberry Pis. Now, I’m really digging in and feeling both overwhelmed with ideas and stymied at what to do next.
My homelab is four Raspberry Pis of varying vintages, a 5 port Gigabit switch, my Google Wifi, and an ancient 1 TB external hard drive. It exists to be a playground for learning and sharing what I’m learning. Rather than a lab, I call this my Homecluster, in part because it’s based around Talos Linux and Kubernetes.
All of the Pis are wired into the switch which is connected to a node of my Google Wifi mesh router. The external hard drive just sits there for now. I have thought about plugging it in to one of the Pis, but I’m not really sure what I’d want to do with it yet. I have a feeling it’s too slow and small to be useful as part of a NAS.
Right now, the lab is more of a platform for me to start building on. I’m not 100% sure what I want to do with it. I have some avenues I want to explore like running an ADS-B Receiver (though I may not be able to when I move to Canada) and hosting my own weather app.
The heart of Homecluster is the Kubernetes cluster running across two Pi 4s. One acts as the control plane; the other is a worker node. I have set up the cluster so that, as I explore this hobby more, I can easily add more compute with additional worker nodes.
I’ve deployed Kubernetes to the Pis using Talos Linux. Talos is a minimal Linux distribution designed to do nothing but run Kubernetes. There is a bit of a learning curve to the install, but overall the experience has been enjoyable. The documentation is well done, especially for such a small team taking on a big project. Talos makes setting up Kubernetes on bare metal far easier than it has ever been.
Outside of the Kubernetes cluster, the two older Pis both run Debian. I call these my “agents”.
The Pi 3 hosts a dedicated Postgres database with 120 GB of storage. I gave it the moniker quadratic-crab. Having a Postgres database on my network is fantastic. It means that as I develop my own apps for the cluster, I always have a database available. I’d like to eventually build some form of backup for it that saves snapshots to cloud storage or a NAS.
The Pi 2B is the most computationally constrained machine I’m running. I call this agent advocate-cardinal. It runs Debian and I shell into it on occasion. It’s powerful enough to run some sensors and report data back to the rest of the cluster. I can see it playing a role in a weather station or the ADS-B receiver. It will definitely be some kind of satellite node.
As for applications, there is a lot of room to grow. In the Kubernetes cluster, I am running four things:
A Debian installation so I have a 64-bit Linux OS available to shell into
Everything on the Homecluster can be found in this repository. Just as I’ve been inspired by others, I want people to take from what I build for themselves. I’m excited to see what I’ll end up building and sharing with everyone.
Postgres is my go-to database for any project I develop. It’s fast, well-documented, and available in so many places. When I’m working on a project locally, I’ll want to run a local instance of Postgres to test the behavior of the project end-to-end. I’ve always done this using Docker and the official Postgres image.
This worked fine, but I always had to develop some external solution for migrating and seeding the database. Migrations add tables to the database to match what the application expects. Seeding inserts example data, sometimes a copy of production data, to make interactions in the local environment closer to what the user experiences.
I used to write scripts to shell into the running container to perform the migration and seeding, but this presented a few challenges. The process can be slow and flaky if you mess up your script. It was hard to know whether the database was already seeded. And seeding, unlike migrations, is not idempotent.
Trying to fix this for my latest project, I wondered if I could build a custom container image on top of Postgres that included my migration and seed by default. That way, I would always know that the database was set up properly by the time it was running. It turns out, this is a well-supported way of working with Postgres in Docker.
All I had to do was place the migration and seed SQL into the /docker-entrypoint-initdb.d/ directory before startup. When Postgres starts, it runs all *.sql, *.sql.gz, and *.sh scripts within this directory. See the docs. I wrote a Dockerfile with a COPY step for the migrations and seed directories. It works great!
FROM postgres:alpine
COPY ./migrations/ /docker-entrypoint-initdb.d/
COPY ./seed/ /docker-entrypoint-initdb.d/
CMD ["postgres"]
Do note that the naming here matters. Postgres runs the files in this directory in alphabetical order, and because both COPY steps place their contents directly into /docker-entrypoint-initdb.d/, it is the individual file names that determine that order. In my case that works out because the migration files sort before the seed files (“m” comes before “s”), so the database is migrated before being seeded. But it’s something to be aware of.
One of the immediate benefits to this approach is how fast it is when compared to shelling into a running container and running migrations. In the gif below, the image is built and run within milliseconds (because I already had the Postgres image locally).
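For reference, building and running it looks something like this; the image tag is just an example, and the official image requires a superuser password to be set:

docker build -t seeded-postgres .
docker run --rm -e POSTGRES_PASSWORD=postgres -p 5432:5432 seeded-postgres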
I highly recommend this approach for making sure your Postgres database is properly set up when you run your project locally. Let me know if you try it and find it useful too!
As soon as I start a project, I get it shipped to production with a Git-push based pipeline. I do this even before the project is anything. It means I can go through the motions of getting changes out to users when I don’t even have any yet. What’s more, there are problems that don’t show up running on localhost. Solving those problems one-at-a-time is much less stressful than fixing them all at once on some pre-defined ship date.
Along with getting the project shipped early, I will instrument it to help me debug the code in production. That paid off with my latest project Devy which I plan on sharing with early users next month.
The API for Devy runs on fly.io which enables Grafana dashboards by default. Looking over these logs the other day, I noticed a 500 error. Oops.
A fact of life when running code open to the internet: when you configure an SSL cert, the public certificate transparency record of that cert attracts bots. I have seen this with every project I’ve shipped that uses Let’s Encrypt, but I’m certain it happens with any certificate authority.
Almost immediately after deploying the API to Fly and getting the automatically generated SSL cert, I see logs of bots hitting the API trying to extract credentials. In the screenshot below, a bot tried to hit the endpoint api.devy.page/blogs/wp-login.php.
The wp-login.php page is a common target of brute force attacks trying to guess passwords to log in to WordPress sites. Devy is not WordPress based, but the cost of probing for a security hole is essentially zero for these botnets.
Reading through the logs, which are in reverse chronological order, the API tries to look up a blog with that slug and, instead of returning a 404 as it should, it returns a 500. Why?
This request passes through two Rust crates: router and db.
First the router code.
/// GET /blogs/:blog_slug
///
/// Get a blog from the database given a blog slug.
async fn get_blog_by_blog_slug(
    State(store): State<Store>,
    Path(blog_slug): Path<String>,
) -> Result<Json<Blog>> {
    Ok(Json(blog::get_by_slug(&store.db, blog_slug).await?))
}
This calls the function get_by_slug in the blog module of the db crate.
The ? at the end of that call is somewhat like throwing an error in other languages. In Rust, it means return the Err variant of the Result. This Result is a custom type for the db crate.
pub type Result<T> = std::result::Result<T, Error>;

/// Errors that can occur while performing an action on an entity.
#[serde_as]
#[derive(Debug, Serialize)]
pub enum Error {
    /// The database configuration is invalid.
    ConfigurationError(String),
    /// The requested entity was not found.
    EntityNotFound,
    /// The request was malformed.
    Malformed(String),
    /// A field was missing from the request.
    MissingField(String),
    /// An error occurred while interacting with the database.
    Sqlx(#[serde_as(as = "DisplayFromStr")] sqlx::Error),
}
This is one of my favorite features in Rust (and the source of this bug): defining a custom Error enum and a Result that encompasses all “fallible” states in a crate.
Similarly, the router crate has a defined Error enum and a way to automatically translate db errors into router errors.
This allows the behavior seen in the get_blog_by_blog_slug function in the router where ? is used on a function in the db package: blog::get_by_slug(&store.db, blog_slug).await?. As the error passes back from the db crate to the router crate, it will get automatically transformed from a db::Error to a router::Error. Neat!
Except the db error in question is a NotFound error… and when it gets returned I am telling the router to transform it into a StatusCode::INTERNAL_SERVER_ERROR. Really, I’m saying no matter what error the db crate returns, transform it into a StatusCode::INTERNAL_SERVER_ERROR. Not great.
The fix is to add a match case where the db error EntityNotFound is transformed into the 404 status code. I added cases for the “malformed” and “missing field” errors as well.
I committed the change and pushed it. Now the bug is gone. I also added “looking up a blog that doesn’t exist returns 404” to my integration test suite. Perfect!
We all write bugs. It’s what we do when we’re not fixing bugs. I overlooked this pretty basic case because I was restructuring my code. Shipping and observing your application before you have any real users has helped me find all sorts of bugs which is why I’ll always insist on it. Hopefully this inspires you to try the same.
In Unix, there is a utility called tee. It’s perfectly named because it forks its input into a T, sending the data both to standard out and to the next process in a command. It’s useful when you want to take a peek at a value in some intermediary stage of a script while allowing it to also be used for future processing.
When I learned this concept, I started seeing value for it everywhere in my development. It was particularly handy when working on the most recent Advent of Code which I did entirely in Python.
The problems in Advent of Code all deal with some form of data transformation. Along the way, I wanted to debug the transformation by seeing the data and allowing it to continue being processed through my solution. For this, I wrote a simple tee function.
def tee(v):
    print(v)
    return v
While I’ve used print debugging countless times to solve these types of puzzles, what was nice about this function was how I could just slip it in line with the existing solution. Take my solution for Day 4 as an example. If I was testing my part 1 answer and needed to ensure I was deserializing the card information correctly, the modification is minimally invasive.
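As a sketch of what that inline use looks like, assuming a deserialize_card helper and a list of input lines as described in my Day 4 write-up (the names here are illustrative):

cards = [tee(deserialize_card(line)) for line in lines]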
This problem asks us to process a multi-line string representing the engine of a gondola.
The input is made up of digits and symbols with .s representing empty space in the engine.
This input is given as an example.
For part 1, we need to sum all of the numbers that have a symbol adjacent to them. For part 2,
we need to sum the product of all numbers adjacent to a "*" if the "*" has exactly two numbers
adjacent to it.
I didn’t need to do much to the puzzle input in order to parse it for the information I needed. I just split
it up into a list by newlines and iterated over the values. When I encountered a digit I began feeding those digits
into a buffer until a non-digit value was encountered.
def part_1(engine_rows: list[str], symbols: set[str]) -> int:
    engine_dimensions = (len(engine_rows), len(engine_rows[0]))
    reading_number = False
    subject_buffer = set()
    number_buffer = ""
    total = 0
    for row, vals in enumerate(engine_rows):
        for col, char in enumerate(vals):
            if char in digits:
                subject_buffer.add((row, col))
                number_buffer += char
                reading_number = True
            else:
                if reading_number:
                    if has_symbol(
                        engine_rows, border(subject_buffer, engine_dimensions), symbols
                    ):
                        total += int(number_buffer)
                    subject_buffer.clear()
                    number_buffer = ""
                    reading_number = False
    return total
This approach is one I’ve used before when parsing data and grabbing chunks of it.
A toggle determines whether the current data should be read into the buffer.
When a piece of data is encountered that ends the chunk, in this case a non-digit,
the toggle is switched off, the chunk is taken out of the buffer, and the buffer is cleared.
In this implementation, when the end of the number is reached, I add the number to the running total
if the bordering characters include symbols.
I do this in two parts. First I find the set of locations in the engine which border the number, then
I check those locations to see if they include a symbol.
# [...]
if has_symbol(
    engine_rows, border(subject_buffer, engine_dimensions), symbols
):
    total += int(number_buffer)
# [...]

def border(
    subject: set[tuple[int, int]], engine_dimensions: tuple[int, int]
) -> set[tuple[int, int]]:
    height, width = engine_dimensions
    borders = set()
    for cell in subject:
        row, col = cell[0], cell[1]
        # Iterate clockwise around the location
        if row > 0 and col > 0:
            borders.add((row - 1, col - 1))  # above left
        if row > 0:
            borders.add((row - 1, col))  # above center
        if row > 0 and col < width - 1:
            borders.add((row - 1, col + 1))  # above right
        if col < width - 1:
            borders.add((row, col + 1))  # center right
        if row < height - 1 and col < width - 1:
            borders.add((row + 1, col + 1))  # below right
        if row < height - 1:
            borders.add((row + 1, col))  # below center
        if row < height - 1 and col > 0:
            borders.add((row + 1, col - 1))  # below left
        if col > 0:
            borders.add((row, col - 1))  # center left
    return borders - subject

def has_symbol(
    engine_rows: list[str], locations: set[tuple[int, int]], symbols: set[str]
) -> bool:
    for loc in locations:
        if engine_rows[loc[0]][loc[1]] in symbols:
            return True
    return False
I was particularly happy with the border function because I took an approach I hadn’t thought of when I did a similar
problem several years ago. I took the 8 cells that surround each digit in the number as a set and then subtracted from that
the values that make up the number. This leads to a fairly clean implementation that would work generally on any shape.
Part 2
I took the wrong approach initially with part 2. I first went looking for all of the gears in the puzzle, then used my border
finding code to grab digits adjacent to the gears. The problem was that I had to add a lot of edge-case logic to handle
whether an adjacent digit belonged to the same number as another adjacent digit or represented a separate number. I got deep into some
globbing of numbers by iterating back to the start of the number and then forward from the center of the number. It was a mess.
I took some time away from the problem and decided to approach it in the opposite manner. I created a class called PartNumber
that stores the full number and the cells that make up that number. A cell here is just a location in the engine. I call this same
concept multiple things in the code depending on when I wrote it.
This class allowed me to store the numerical value and the full location of every part number. This was all I needed to then
map the location of every gear in the engine to its adjacent numbers. This allowed me to reuse the shape of my solution to part 1
and avoided all the nasty number globbing.
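A sketch of that class, matching how it is used in part_2 below:

class PartNumber:
    """A part number's value and the set of (row, col) cells it occupies."""

    def __init__(self, value: int, cells: set[tuple[int, int]]):
        self.value = value
        self.cells = cells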
def part_2(engine_rows: list[str]) -> int:
    engine_dimensions = (len(engine_rows), len(engine_rows[0]))
    reading_number = False
    subject_buffer = set()
    number_buffer = ""
    numbers = []
    for row, vals in enumerate(engine_rows):
        for col, char in enumerate(vals):
            if char in digits:
                subject_buffer.add((row, col))
                number_buffer += char
                reading_number = True
            else:
                if reading_number:
                    numbers.append(
                        PartNumber(int(number_buffer), subject_buffer.copy())
                    )
                    subject_buffer.clear()
                    number_buffer = ""
                    reading_number = False
    gears = {}
    for number in numbers:
        for neighbor in border(number.cells, engine_dimensions):
            if engine_rows[neighbor[0]][neighbor[1]] == "*":
                if neighbor in gears.keys():
                    gears[neighbor].append(number)
                else:
                    gears[neighbor] = [number]
    total = 0
    for _, numbers in gears.items():
        if len(numbers) == 2:
            total += numbers[0].value * numbers[1].value
    return total
To get the answer, I iterated over all of the gears and summed the product of numbers for gears that were adjacent to exactly two numbers.
This problem asks us to process scratch cards that list the numbers revealed on the card and the winning numbers. In part 1, we need
to count how many revealed numbers are in the winning numbers set with the total points accrued on each card being 1 for the first
match and doubling for each subsequent match. In part 2, the number of matches on the card wins the holder copies of subsequent cards
with this pattern holding recursively across the copies of cards. The premise can be hard to explain; I struggled to grok it when first reading
the full problem statement.
This is a problem that cleanly separates into two subproblems: formatting and processing.
The formatting part allows for much easier data manipulation. For this, I wrote a deserialize_card function that
takes in a line as formatted in the problem and returns a named tuple. Named tuples in Python behave like tiny,
immutable classes. They’re great containers for holding data you want to have travel together through your code.
The output from deserialize_card is used to count how many numbers overlap between the revealed and the winning
sets.
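A minimal sketch of what that looks like, assuming the puzzle's "Card 1: <winning numbers> | <revealed numbers>" line format; the function and field names here are my own reconstruction rather than the original code.

from collections import namedtuple

Card = namedtuple("Card", ["id", "winning", "revealed"])

def deserialize_card(line: str) -> Card:
    header, numbers = line.split(":")
    winning, revealed = numbers.split("|")
    return Card(
        id=int(header.split()[1]),
        winning={int(n) for n in winning.split()},
        revealed={int(n) for n in revealed.split()},
    )

def count_matches(card: Card) -> int:
    # Numbers that appear in both the winning set and the revealed set
    return len(card.winning & card.revealed)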
Counting the points has a little trick to it. As described in the problem, the first match is worth 1 point and each subsequent
match doubles this value. This is a pattern exhibited by the powers of 2. So the points earned are 2^(n-1) where n is the number of matches (and 0 points when there are no matches).
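As a tiny sketch of that scoring (the function name is my own):

def points(matches: int) -> int:
    # 1 point for the first match, doubled for each additional match
    return 2 ** (matches - 1) if matches else 0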
Part 2
The rules of how points are calculated for this part are best explained by quoting the puzzle itself:
There’s no such thing as “points”. Instead, scratchcards only cause you to win more scratchcards equal to the number of winning numbers you have.
Specifically, you win copies of the scratchcards below the winning card equal to the number of matches. So, if card 10 were to have 5 matching numbers, you would win one copy each of cards 11, 12, 13, 14, and 15.
Copies of scratchcards are scored like normal scratchcards and have the same card number as the card they copied. So, if you win a copy of card 10 and it has 5 matching numbers, it would then win a copy of the same cards that the original card 10 won: cards 11, 12, 13, 14, and 15. This process repeats until none of the copies cause you to win any more cards.
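A minimal sketch of the copy-counting approach, reusing the Card and count_matches sketches from above (the original solution may be organized differently):

def part_2(cards: list[Card]) -> int:
    # Start with one copy of every card, then let each card's matches
    # win additional copies of the cards that follow it.
    copies = [1] * len(cards)
    for i, card in enumerate(cards):
        for j in range(i + 1, min(i + 1 + count_matches(card), len(cards))):
            copies[j] += copies[i]
    return sum(copies)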
In this problem, an elf is pulling sets of colored cubes from a bag. In part 1, we need to determine how many of those sets of handfuls pulled
from the bag are possible given a predetermined count of each colored cube. In part 2, we need to determine the minimum number of cubes that
are required for the given sets of handfuls to be possible.
Each set of cubes pulled from the bag is referred to as a handful in the problem statement. Multiple handfuls make up a single “game”.
Each game is presented as a string in the input file.
After each handful is presented, the cubes are returned to the bag and may be reused.
In part 1, I needed to determine whether a given game was possible with the following set of cubes being in the bag:
12 Red
13 Green
14 Blue
A possible game is one where the cubes presented are never greater than the cubes provided.
The answer to the puzzle is the sum of game ids for possible games.
This problem breaks down into two parts: string parsing and evaluating which games are possible.
I know all of the inputs, so I can take a quick and fairly careless approach to getting the two pieces of information I need from each game: the game id
and the list of handfuls presented.
def game_id(s: str) -> int:
    return int(s.strip().split(":")[0].split(" ")[1])

def deserialize_handfuls(s: str) -> list[tuple[int, int, int]]:
    return [count_cubes(handful) for handful in s.strip().split(":")[1].split(";")]

def count_cubes(handful: str) -> tuple[int, int, int]:
    r, g, b = 0, 0, 0
    cubes = handful.strip().split(",")
    for cube_color in cubes:
        count = int(cube_color.strip().split(" ")[0])
        if "red" in cube_color:
            r = count
        elif "green" in cube_color:
            g = count
        elif "blue" in cube_color:
            b = count
    return (r, g, b)
I have encoded the handfuls as tuples with 3 integers. They correspond to red, green, and blue respectively.
This produces well-structured data from each game that can be evaluated. Take “Game 1” below as an example, which is now much more readable for the program.
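The game line here is just an illustration in the puzzle's format:

deserialize_handfuls("Game 1: 3 blue, 4 red; 1 red, 2 green, 6 blue; 2 green")
# -> [(4, 0, 3), (1, 2, 6), (0, 2, 0)]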
I wrote a function to check each handful against the set of cubes provided.
def is_allowed(reqs: tuple[int, int, int], handful: tuple[int, int, int]) -> bool:
    for i, color in enumerate(handful):
        if reqs[i] < color:
            return False
    return True
Then I used the parsing and evaluating functions together to sum the ids of games that were possible.
def part_1(games: list[str]) -> int:
    reqs = (12, 13, 14)
    return sum(
        game_id(game)
        * all(is_allowed(reqs, handful) for handful in deserialize_handfuls(game))
        for game in games
    )
Part 2
For part 2, I didn’t need any new parsing code. I did need a way to evaluate the minimum set of cubes that would make a given game possible.
This can be found by iterating over every handful shown and taking the maximum value that we ever observe for each cube color to be the minimum
we need of that color for the game to be possible.
For example, if the maximum value observed for each color across a game’s handfuls is 1 red, 14 green, and 7 blue, then the minimum set of colored cubes that makes the game possible is that same set.
This function takes a list of the deserialized handfuls and makes the same determination.
def min_cubes(handfuls: list[tuple[int, int, int]]) -> tuple[int, int, int]:
    return tuple(max(x) for x in zip(*handfuls))
The answer to the puzzle is the sum of the product of cubes that make each game possible. So we iterate over the games, evaluate the minimum possible set of cubes,
then we multiply those cube counts together and sum it all up. I used the prod function from the math package in the standard library to get the product of the
minimum count of cubes.
from math import prod

def part_2(games: list[str]) -> int:
    return sum(prod(min_cubes(deserialize_handfuls(game))) for game in games)
Today is the first day of Advent of Code for 2023! This annual coding challenge
consists of puzzles increasing in difficulty each day from the start of December through Christmas.
My self-imposed rule this year is to use only the Python standard library. No external dependencies are allowed.
I can’t get around this restriction by copy-pasting some existing A* algorithm either. All code needs to be
written after the start time. I don’t tend to over-index on runtime complexity. I’m keeping my solutions to this year in my
advent-2023 repository.
Before I dive into my solution for Day 1, I thought I would share a few helpful tips and code snippets I developed
to use with Advent of Code.
Downloading input files
Every problem in Advent of Code follows the same pattern: there is an input file and a desired outcome from running
a calculation on that file. The URL to download the input files follows a predictable pattern, so I wrote a Python
script to download that file.
from datetime import datetime
import os
import sys

session = os.environ["AOC_COOKIE"]

if len(sys.argv) > 1:
    day = int(sys.argv[1])  # cast to int so the :02d format below works
else:
    day = datetime.now().day

os.system(
    f'curl --cookie "session={session}" https://adventofcode.com/2023/day/{day}/input > day_{day:02d}.txt'
)
This script depends on the user setting the AOC_COOKIE environment variable, because the puzzle inputs are unique to
each user. The session cookie can be grabbed from the browser’s application storage while authenticated to the website.
This script can be passed a number to download a specific date’s input or will default to the current day if left empty.
Reading input files
This function saves me a bit of time, as I always just want the contents of filename as a string.
def read(filename: str) -> str:
    with open(filename, "r") as f:
        return f.read()
Tee-ing output
I call this little helper tee after the Unix program that inspired it.
This function is about as simple as they come, yet it can be incredibly helpful when debugging a problem using print statements.
def tee(val):
    print(val)
    return val
By printing a value and returning it, this function can be placed inline with function calls, replacing the need for a separate
variable declaration to print a value out.
I was up last night working on Devy, so I started this problem at midnight when it was released for me.
The problem asks you to look at a series of strings, each of which contains single-digit numbers. These numbers show up in the
string in both a numeric (9) and word (nine) form, but the first part of the problem asks you to identify only the numeric
form. The answer to the problem is the sum of values produced by joining the first and last digits in each string to form a two-digit number.
I was able to solve part 1 rather quickly. Because we are looking at the numeric representation and the numbers are single digits,
I wrote a function that would grab the first single-digit numeric value in a string.
def is_number(c: str) -> bool:
    return c in "0123456789"

def first_number(line: str) -> str:
    for c in line:
        if is_number(c):
            return c
    return ""
This function can find us both the first and last digits in the string if we reverse the input. Joining these values and summing them
gives the answer to part 1.
def part_1(lines: list[str]) -> int:
    return sum([int(first_number(line) + first_number(line[::-1])) for line in lines])
Moving on to part 2, I needed to find a way to efficiently grab the first and last word-form numbers. I decided to parse through the
characters in the strings until I found a character that was a candidate first letter for a word-form number. This worked well because
I have such a limited set of words, representing just 1 through 9. In my solution, I did include zero as a possibility, which was a mistake
but didn’t cause my solution to fail.
To efficiently do this number word lookup, I built a dictionary that mapped the first letter of a word to the candidate words and then to the
numeric forms they represent.
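The dictionary looks roughly like this; I have reconstructed it here from how it is used below, including the unnecessary zero entry mentioned above.

number_words = {
    "z": {"zero": "0"},  # including zero was a mistake, but a harmless one
    "o": {"one": "1"},
    "t": {"two": "2", "three": "3"},
    "f": {"four": "4", "five": "5"},
    "s": {"six": "6", "seven": "7"},
    "e": {"eight": "8"},
    "n": {"nine": "9"},
}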
As I iterated through the string, I matched on the keys of this dictionary then iterated over the candidates to test for a match. This avoided an
issue many people ran into where number words could overlap other number words (e.g. eightwo which should resolve to 82).
I wrote two very similar functions for getting the first and last values in the string. I could have instead created a second number_words dictionary
where the words were reversed, but I didn’t.
def first_number_or_word(line: str) -> str:
    for i, c in enumerate(line):
        if is_number(c):
            return c
        if matches := number_words.get(c):
            for number_word in matches:
                word = line[i : i + len(number_word)]
                if word == number_word:
                    return number_words[c][number_word]
    return ""

def last_number_or_word(line: str) -> str:
    for i, c in enumerate(line[::-1]):
        if is_number(c):
            return c
        if matches := number_words.get(c):
            for number_word in matches:
                offset = len(line) - i - 1
                word = line[offset : offset + len(number_word)]
                if word == number_word:
                    return number_words[c][number_word]
    return ""
The use of these functions was not too different from part 1.
def part_2(lines: list[str]) -> int:
    return sum(
        [int(first_number_or_word(line) + last_number_or_word(line)) for line in lines]
    )
Kustomize is an incredibly powerful tool for integrating
existing Kubernetes manifests in your own cluster. You can write a configuration
file in YAML that references the manifest you want to install. That manifest can
include local files as well as remote repositories.
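A kustomization.yaml along these lines does the trick; the Gateway API path and ref shown here are illustrative rather than exact.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - https://github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0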
This configuration will pull in the local deployment.yaml as well as the
CustomResourceDefinitions that are hosted in the Gateway API repository.
When you run kustomize build, the combined YAML needed to configure your cluster
is sent to stdout. This can be piped into a .yaml file if you
would like to examine or save the configuration.
However, Kustomize will only output the config as a single YAML file with
multiple “documents” concatenated together. If you want to split this output
across multiple files, the yq command line
utility provides a handy way of
doing just that.
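The invocation looks something like this (the exact expression can be adjusted to taste):

kustomize build . | yq --split-exp '.metadata.name' --no-doc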
Here, we pipe the output of kustomize build to yq. Passing the --split-exp flag
tells yq to split its input into separate files, one per YAML document. It takes an argument
written in the yq expression language (which is very similar to
jq’s) to name the files, in this case based on the value in the
YAML document’s metadata.name field. The --no-doc flag simply omits the --- used
to separate YAML documents from the output.
In my latest project, I’ve added Tailwind and Vue to a Flask app. This requires
an additional build step to compile each using npm during deployment. Given
that Python is already present on the server by the time the build step occurs,
I decided to write the build script in Python.
Those who have written build scripts before may be familiar with the pattern of
changing into a directory, executing commands, then returning to the original
directory to start the next set of commands. That was exactly what I needed to
do here:
Change directory to ./tailwind
Execute npm install to install dependencies
Execute npm run build to compile the Tailwind CSS
Change directory back to ..
Change directory ./vue
Execute npm install to install dependencies
Execute npm run build to compile the Vue application
This seemed like a perfect fit for a Python context manager. Context managers
allow for the instantiation of a context using the with keyword. The context is
disposed of when the code is dedented. Using a context manager to change
directories here would eliminate the explicit change back to the parent
directory (step 4 above): once the commands inside the first context complete,
the directory is automatically reset to what it was before the context was entered.
By writing the right directory context manager, I could implement the build
script as
from pathlib import Path

with directory(Path("./tailwind")):
    run_npm_install()
    run_npm_build()

with directory(Path("./vue")):
    run_npm_install()
    run_npm_build()
and I thought that was pretty slick!
Context managers can be written as classes or functions. Given the relative simplicity of this context, I opted to use a function. A context manager function must be decorated with @contextmanager which is imported from contextlib. It should have a try block with a yield and a finally block. When the context is instantiated using the with keyword, the try block is run. When the indented code block is left, the finally block is run. As an example,
from contextlib import contextmanager

@contextmanager
def friendly_context():
    try:
        print("Hello! Welcome to the context!")
        yield
    finally:
        print("Bye now. Thank you for visiting the context. Come again soon.")

with friendly_context():
    print("Oh thank you, it is so nice to be in the context.")
when executed will print
Hello! Welcome to the context!
Oh thank you, it is so nice to be in the context.
Bye now. Thank you for visiting the context. Come again soon.
To write my directory changing context manager, I needed to save the original path to a variable, change it in the try block to whatever was passed in to the function, then change to the original path in the finally block.
from contextlib import contextmanager
from pathlib import Path
import os

@contextmanager
def directory(path: Path):
    """Sets the cwd within the context

    Args:
        path (Path): The path to the cwd

    Yields:
        None
    """
    origin = Path().absolute()
    try:
        os.chdir(path)
        yield
    finally:
        os.chdir(origin)
And it works like a charm! Let me know if you found a cool use for context managers or would have solved this problem a different way.
Decorators in Python allow us to run arbitrary code before and after a function
is called or a class is instantiated. One super useful application of this is
logging the time it takes for a function to run. Here is a snippet I often use
for just that:
from time import time

def print_execution_time(function):
    def timed(*args, **kw):
        time_start = time()
        return_value = function(*args, **kw)
        time_end = time()
        execution_time = time_end - time_start
        arguments = ", ".join(
            [str(arg) for arg in args] + [f"{k}={kw[k]}" for k in kw]
        )
        print(
            f"{function.__name__}({arguments}) took {execution_time * 1000:.4f} ms"
        )
        return return_value

    return timed

@print_execution_time
def repeat(number, n_repeats=30000):
    # n_repeats is unused here; it exists to show keyword arguments in the printout
    return [number for number in range(30000)]

repeat(9)
repeat(20, 40000)
repeat(1, n_repeats=4000)
This will print the execution time in milliseconds and the name of the function
run with its arguments:
repeat(9) took 1.0016 ms
repeat(20, 40000) took 1.0009 ms
repeat(1, n_repeats=4000) took 1.0002 ms