We went out to the hills south of town last night to try to get a glimpse of the comet. I was able to detect it with the naked eye, but just barely. Just enough to be able to point the binoculars and camera at it without having to hunt for it.
About Time we got a 3D Printer
Every so often I'd poke around the 3D printer space to see what was on offer and find excuses why I wasn't going to buy anything. Then I found the Bambu Lab A1, which was released last year. The company seems to have a good reputation and the A1 promises an annoyance-free printing experience. Self-leveling, self-calibrating, self-monitoring--just print things without any fuss. It has multiple connectivity options, works with industry-standard formats, and its software is open source (an extension of an existing open source project).
Well, with Jess excited about board-game organizing, Heather into 3D printing at school, and me out of excuses for not buying one, we bought one. It arrived today. Assembly was a bit of work, but the provided instructions were generally very clear and easy to follow.
After dinner I finished setting it up and the girls anxiously waited for it to finish its self-calibration routine so we could try our first print.
Opting to keep things simple, I printed the little boat model that comes pre-installed on the SD-card. I am thoroughly impressed. It's much faster and has better detail than I was expecting.
The cats were very concerned. London's in back keeping a wary eye on it.
Here's the completed boat. Took about 14 minutes.
I thought I'd keep my expectations low for my first design. I made this eclipse souvenir and stuck a craft magnet on the back.
The printer in action:
I'm very pleased with it so far. This is what the future feels like. Now we all just have to develop some modeling skills to realize our ambitions.
So Long Ivy, Hello Confetti
We've had a pair of Eufy robot vacuums for many years now. The first one Heather named Ivy; the second one Corinne named Sprinkles. One of the several motors on Ivy finally failed a couple of weeks ago and Sprinkles needed a new battery. So Ivy donated her battery to Sprinkles and got dropped off at the eWaste facility. She was dumb, but she got the job (mostly) done. She was six years old.
To fill Ivy's, uh, tires, I bought a Roborock Q5 as a bit of an upgrade. I was slightly annoyed that it requires a phone app, but from what I can tell it minimizes how dependent it is on cloud connectivity, and supposedly all the mapping data stays local on the robot. I was also curious how well the LIDAR mapping and navigation work, and the price was right. Corinne promptly named it Confetti.
And I must say, I am thoroughly impressed with the mapping, navigation, and cleaning algorithms. Set it up, tell it to clean, and it wanders around as it builds a map from the LIDAR data. Once it returns to the dock, it processes the data and segments it into rooms (which you can modify).
Once rooms have been segmented you can tell it to clean individual rooms, any combination of rooms, or to clean everything. On every run it continues to collect LIDAR readings and integrates them into its existing map.
Within the map you can define virtual walls and "no go" zones. I hadn't even considered how useful the "no go" zones could be until it ran into the cats' food dishes; I dropped a "no go" zone around them and never have to deal with that again.
When told to clean a room it runs around the perimeter first and then uses an overlapping back-and-forth pattern on the interior. If you tell it to run two cycles on the same room it does the first cycle in one direction and then the second cycle perpendicularly.
Because it navigates intelligently (unlike bump-navigation robot vacuums such as our two old Eufy bots), it takes significantly less time to clean a room, which makes it less annoying and lets you get more floor space cleaned between charges.
We rearranged a bunch of furniture this weekend and it figured out the new room configurations without issue and just got its job done. Really the only challenge left for it is that its LIDAR sits on top and can't "see" small stuff on the ground around it (like cat toys, or shoes). So you still have to pick that stuff up to get it out of the way.
Here you can see the perpendicular cleaning pattern on the carpet:
And the map of the room after it finished cleaning, showing its path:
Partial Solar Eclipse 2023
On Saturday we watched the solar eclipse, which was an annular eclipse along its central path but only partial from our vantage point in Livermore.
I made pancakes and the Spencers came over to hang out with us.
We had some cloud cover throughout the morning and we missed the peak, but we got some good views before and after. The thin, wispy clouds at least made it more interesting to look at.
Celery & Redis countdown/eta oddities
One of my projects at work uses the Python package Celery with Redis to manage executing background tasks. And we ran into some odd behavior that we didn't see explained anywhere else, so I figure I'll capture it here for the next poor soul running into these issues.
First, if you care about this subject, you should read this post over at Instawork, which is a good discussion of the risks involved in using countdown and eta. It helps set the stage.
Setup
We're using Celery with a Redis broker as part of a Django application. We apply one of 3 priorities to each of our tasks: Low, Medium, and High. High-priority tasks represent things that a human user is waiting on and need to be completed as soon as possible. Low-priority tasks are things that need to happen eventually, but we don't really care when. And anything else gets configured as medium priority.
This setup worked in our validation testing. We saw the queues get loaded up in Redis and the workers execute tasks in priority order as expected.
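For reference, the shape of that setup looks roughly like the sketch below; the queue names, module paths, and broker URL are illustrative placeholders, not our actual configuration:

```python
# celery.py -- a minimal sketch of a three-queue priority setup
from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")

# One named queue per priority level. Note that the default queue doubles
# as the high-priority one, which matters later in this story.
app.conf.task_default_queue = "high"
app.conf.task_create_missing_queues = True

# Tasks pick a queue when they're enqueued, e.g.:
#   bulk_reprocess.apply_async(queue="low")
#   generate_user_report.apply_async(queue="high")
# and workers listen on all three:
#   celery -A myproject worker -Q high,medium,low
```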
The Wrong Queue
After a large-scale data-processing task we noticed that high-priority user tasks were not executing.
When I inspected the queues in Redis I found that the high-priority queue was full of low-priority tasks. So the workers were extremely busy (correctly) processing the queue, but the tasks they were running were low priority. And the human's task was stuck behind them all.
How did this happen?
Countdown/ETA Reservations
The first part of the puzzle is how Celery handles countdown/eta tasks. countdown allows you to say "execute the task 5 minutes from now," while eta allows you to say "execute the task no earlier than March 10, 2023 at 10:08 AM."
countdown is purely syntactic sugar for eta so that you don't have to calculate actual times yourself: when you call apply_async with a countdown parameter, Celery converts it to an eta parameter. Since internally Celery only concerns itself with eta values, we'll only talk in terms of eta from this point on.
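To make that concrete, here's a minimal sketch; the app, broker URL, and task are placeholders:

```python
from datetime import datetime, timedelta, timezone
from celery import Celery

app = Celery("demo", broker="redis://localhost:6379/0")

@app.task
def my_task():
    pass

# "Execute the task 5 minutes from now."
my_task.apply_async(countdown=300)

# Celery converts countdown into an eta internally, so the call above is
# equivalent to computing the absolute time yourself:
my_task.apply_async(eta=datetime.now(timezone.utc) + timedelta(minutes=5))
```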
When an eta task is generated it gets put into the appropriate priority queue. But it doesn't stay in the queue until its eta passes. Instead, any worker checking the queue will reserve the task immediately and hold it internally until the eta passes.
During my investigation, while the workers were idle, I scheduled a few hundred tasks with an eta and, as a result of the above behavior, the priority queues in Redis were empty. Workers will continue reserving eta tasks from the queues until they have a task that needs to actually execute now. Once the workers are busy, eta tasks will stay in their appropriate queues until a worker is freed up and comes looking again.
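You can watch this happen from the outside with Celery's inspect API: the held eta tasks show up on the workers (under "scheduled") rather than in the Redis queues. The app import below is a placeholder:

```python
from myproject.celery import app  # hypothetical app module

insp = app.control.inspect()
print(insp.scheduled())  # eta/countdown tasks each worker has pulled off the queue and is holding
print(insp.reserved())   # prefetched tasks that don't have an eta
```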
Processing Reserved Tasks
Alright, so our workers have reserved all of our eta tasks with varying priorities, and now the tasks are starting to pass their etas and need to be executed. At this point the worker completely ignores the priorities on the tasks. It begins executing whichever reserved task it happens upon first in its internal data structure (this is probably an internal queue, but I don't know for sure).
So once a worker reserves a task, its priority is no longer respected. If you schedule a few hundred eta tasks with mixed priorities (as I did) you see them executed in what appears to be an arbitrary order (I suspect they're actually executed in the order they were reserved, but I haven't verified that because it's not relevant to my concerns).
This is not good and reason enough to avoid using eta tasks for anything but high-priority tasks. But it doesn't explain how we ended up with low-priority tasks in our high-priority queue in Redis.
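If you want to reproduce that experiment yourself, it looks roughly like this; the task, queue names, and app module are placeholders from my sketches above, not our real code:

```python
from datetime import datetime, timedelta, timezone
from random import choice

from myproject.celery import app  # hypothetical app module

@app.task
def noop():
    pass

eta = datetime.now(timezone.utc) + timedelta(minutes=5)
for _ in range(300):
    # Spread the tasks across the three priority queues.
    noop.apply_async(eta=eta, queue=choice(["low", "medium", "high"]))

# Idle workers grab all of these immediately; once the eta passes they run
# them in roughly reservation order, ignoring which queue each came from.
```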
Death of a Celery Worker
We have Celery configured to replace each worker after completing 10 tasks. This was an attempt to work around an issue where workers would stop pulling tasks from the queue and everything would stall out. We had a hypothesis that the issue was unclosed connections to Redis and so replacing the workers would force unclosed connections to get cleaned up. We haven't yet verified what was actually happening or if replacing the workers fixed anything though. It's a very intermittent problem and we haven't identified a sure trigger. (Though we did solidly identify that if Redis isn't ready to serve connections when Celery starts then Celery will not reconnect properly and workers will only execute a single task before hanging forever.)
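(For the curious: the knob behind this behavior is, as far as I know, Celery's max-tasks-per-child setting.)

```python
from myproject.celery import app  # hypothetical app module

# Recycle each worker process after it has executed 10 tasks.
app.conf.worker_max_tasks_per_child = 10
# Equivalent command-line flag: celery -A myproject worker --max-tasks-per-child=10
```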
Anyway, the point is that we replace our workers every so often. Well, what happens to all those eta tasks the worker had reserved? They go back in the queue so another worker can get them. But they all go into the default queue instead of going back to their appropriate priority queues. It happens that the default queue is the high-priority queue. So each time a worker was getting cycled, all the eta tasks it held were pushed into the high-priority queue.
The perfect storm
So here's the scenario. Our workers are sitting idle waiting for work to do. Our large-scale data-processing task schedules a bunch of low-priority jobs with an eta. The workers eagerly snap up all these tasks and reserve them for future execution. As soon as the earliest etas pass, each worker begins executing and stops reserving more eta tasks from the low-priority queue.
Each worker completes 10 tasks and gets replaced. As each worker is replaced, it returns the remainder of its eagerly-reserved eta tasks to the high-priority queue. The new workers being spawned now begin processing the high-priority queue since it's full of tasks.
A user comes along and engages in an action backed by a high-priority task. But the user's high-priority task is now stuck behind several thousand low-priority tasks that have been misplaced in the high-priority queue.
Moral of the story
We had run across the countdown parameter to apply_async and thought it would be a good way to avoid some unnecessary work by pairing some high-churn jobs with a flag so they'd only be scheduled again if they weren't already scheduled (this flag was managed outside of the Celery world).
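For context, that pattern looked roughly like the sketch below; the cache key, task, and helper function are illustrative, and our real flag lived outside of Celery entirely:

```python
from django.core.cache import cache

from myproject.tasks import recompute  # hypothetical high-churn task

def schedule_recompute(obj_id: int) -> None:
    flag = f"recompute-scheduled:{obj_id}"
    # cache.add() only sets the key if it doesn't already exist, so only the
    # first caller in the window actually enqueues the delayed task.
    if cache.add(flag, True, timeout=300):
        recompute.apply_async(args=(obj_id,), countdown=300)
```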
We will be rolling back that change so as to avoid this situation in the future.