COVID-19: Part 69

May 11, 2023 1:13 pm

Almost a full year since my last COVID-19 update post.

Today, May 11, 2023, marks the official end of the federal COVID-19 Public Health Emergency declaration: https://www.cdc.gov/coronavirus/2019-ncov/your-health/end-of-phe.html

The CDC reports the total number of deaths in the U.S. from COVID-19 as 1,131,819. Even as the emergency declaration ends, we’re still recording ~1,000 deaths a week from COVID-19. But that’s the lowest weekly figure recorded since March 2020.

According to the CDC, 81.4% of the overall population has received at least one vaccine dose, but only 16.9% have kept up to date with the latest available vaccines.

As far as we know, none of us (myself, Jess, Heather, or Corinne) has ever been infected by the virus. We’ve tested ourselves when we’ve had respiratory symptoms, but never had a positive test. It seems more likely that we’ve had asymptomatic cases than that we were never infected, but who knows–perhaps we were among the lucky group for whom the vaccines were highly effective and any exposure was prevented from taking hold. We’ve kept our doses up to date whenever new boosters have become available.

So what does that mean in our lives? (A reader in the future might ask.)

Life has been basically back to normal–at least for our family (probably not for the families of the 1.1+ million people who died–for whom a pre-pandemic normal will never return).

The girls have had regular school and activities. We’ve had them wear masks when community transmission levels were “high” (according to the CDC criteria), but that hasn’t been true for months now.

When out and about there are people around who still regularly wear masks. Not a lot, but it’s also not particularly unusual to see. I taught Mathcounts in person this past year and I estimate that 1 out of 15-20 students that I saw on campus was still wearing a mask (Mathcounts ended at the end of March, so maybe that number has fallen since then).

I’m still working mainly from my closet. In fact I’m supposed to be losing my office on site any day now because I don’t use it often enough. Just waiting to get the notification.

So I guess that essentially wraps things up. Here’s to hoping we don’t do that again within my lifetime. It wasn’t fun.

A Hike in Morgan Territory

April 30, 2023 4:29 pm

I wanted to get a good hike in before it turns hot and everything dries up. It was hot this week, but it cooled down today, so I went up to Morgan Territory Regional Preserve and did a ~3.5 mile hike. The wind was howling, sending waves through the grass. The trails had dried out, but all the grasses were still green and only just starting to dry out in patches. With a high around 70F it made for a pleasant hike, if a little breezier than ideal.

I got some nice pictures too.

Easter 2023

April 10, 2023 7:04 pm

After a long, wet, and cool winter, spring finally decided to show up. Our foxglove and wisteria are doing quite well at the moment.

I spent Saturday cleaning up the outside of the house. Finally got the backyard cleaned up from all the windstorms we’ve been having. It was quite a mess, but now it actually looks alright.

I planted pumpkins and wildflowers in the planter box. The pumpkins have sprouted; maybe we’ll actually get one to grow this year.

I planted an elderberry tree last year, which then got fried by a heatwave. I figured it was gone, but it’s trying to regrow from the roots now, so it may yet survive.

We dyed Easter eggs on Saturday evening and the Easter Bunny stopped by on Sunday morning. Then we made cake pops, because why not?

This little critter was hanging out on one of our heavenly bamboo plants:

Corinne’s 8th Birthday

March 19, 2023 3:19 pm

Sometime last year Heather had asked if she could help with the next birthday adventure. So as Corinne’s was coming up I asked if she still wanted to help. She was very excited to plan an entire adventure for Corinne.

Heather was sick the final weekend before Corinne’s birthday, so Jess and I ended up assisting with final prep work. But, in general, the entire thing was Heather’s design with some consulting from me to smooth out rough spots and clarify tasks.

It all revolved around a story book which Heather wrote and assembled. The book provided the narrative and character interactions and Corinne was instructed to turn to specific pages during the adventure to continue the story.

The overall theme was a My Little Pony adventure. Corinne helped each of the ponies with a task, and each task provided clues to the encoded message Pinkie Pie found where Corinne’s presents were supposed to be. Once all the clues were collected, it became clear that Queen Chrysalis had ordered the presents taken. With the assembled ponies and the collected Elements of Harmony, she was able to defeat Queen Chrysalis and recover her presents.

Reading the introduction in the story book:

Baking a cake with Pinkie Pie revealed the first 2 clues to the coded message:

Helping Fluttershy rescue stranded animals revealed the number 13, which was the next page in the adventure:

Apple Jack needed her baskets, but couldn’t remember how to disable the anti-pranking alarm system. Corinne needed to retrieve them without touching the streamers and found a note with more clues.

Rarity couldn’t remember the combination to the lock where she kept her notebook, so Corinne needed to sew the pattern Rarity had created as a backup. Once sewn, the pattern marked the numbers 3-1-7-8.

She also needed to clear the clouds that Rainbow Dash forgot to take care of, which recovered another clue, and Twilight Sparkle knew of a book with helpful information in it (not shown).

Queen Chrysalis was confronted at her hive (in the backyard) and the presents were recovered from the bench.

Then it was time to open presents! At the moment she is all about Squishmallows and reading.

We went to dinner at her choice of restaurant, which was Taco Bell. And after dinner, cake!

Jess whipped up the crochet crown for her to wear to school since she didn’t have anything super green to wear and wanted something.

Celery & Redis countdown/eta oddities

March 9, 2023 10:13 am

One of my projects at work uses the Python package Celery with Redis to manage executing background tasks. And we ran into some odd behavior that we didn’t see explained anywhere else, so I figure I’ll capture it here for the next poor soul running into these issues.

First, if you care about this subject, you should read this post over at Instawork which is a good discussion of the risks involved in using countdown and eta. It helps set the stage.

Setup

We’re using Celery with a Redis broker as part of a Django application. We apply one of 3 priorities to each of our tasks: Low, Medium, and High. High-priority tasks represent things that a human user is waiting on and need to be completed as soon as possible. Low-priority tasks are things that need to happen eventually, but we don’t really care when. And anything else gets configured as medium priority.
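For anyone wanting a concrete picture, here is a minimal sketch of what a Celery + Redis priority setup along these lines can look like. The app name, priority values, and task are illustrative assumptions, not our actual configuration.

    # A minimal sketch, not our production config: a Celery app with a Redis
    # broker and three task priority levels emulated by the Redis transport.
    from celery import Celery

    app = Celery("myproject", broker="redis://localhost:6379/0")

    # The Redis transport emulates priorities by splitting each queue into one
    # Redis list per priority step; the base (step 0) list is checked first.
    app.conf.broker_transport_options = {
        "priority_steps": [0, 3, 6],
        "sep": ":",
    }

    HIGH, MEDIUM, LOW = 0, 3, 6  # our own labels for the three levels


    @app.task
    def reprocess_record(record_id):
        """Placeholder task body."""


    # A user-facing action gets high priority; bulk background work gets low.
    reprocess_record.apply_async(args=[1], priority=HIGH)
    reprocess_record.apply_async(args=[2], priority=LOW)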

This setup worked in our validation testing. We saw the queues get loaded up in Redis and the workers execute tasks in priority order as expected.

The Wrong Queue

After a large-scale data-processing task we noticed that high-priority user tasks were not executing.

When I inspected the queues in Redis I found that the high-priority queue was full of low-priority tasks. So the workers were extremely busy (correctly) processing the queue, but the tasks they were running were low priority. And the human’s task was stuck behind them all.

How did this happen?

Countdown/ETA Reservations

The first part of the puzzle is how Celery handles countdown / eta tasks. countdown allows you to say “execute the task 5 minutes from now” while eta allows you to say “execute the task no earlier than March 10, 2023 at 10:08AM.”

countdown is purely syntactic sugar for eta so that you don’t have to calculate actual times yourself: when you call apply_async with a countdown parameter, Celery converts it to an eta. Since internally Celery only concerns itself with eta values, we’ll only talk in terms of eta from this point on.
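For illustration, here are the two calling styles side by side (the task name and import path are made up for the example):

    # countdown and eta express the same delay; countdown becomes an eta.
    from datetime import datetime, timedelta, timezone

    from myproject.tasks import refresh_cache  # hypothetical task

    refresh_cache.apply_async(countdown=300)  # "run about 5 minutes from now"

    # Equivalent, since Celery turns the countdown into an eta internally:
    refresh_cache.apply_async(eta=datetime.now(timezone.utc) + timedelta(minutes=5))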

When an eta task is generated it gets put into the appropriate priority queue. But, it doesn’t stay in the queue until its eta passes. Instead, any worker checking the queue will reserve the task immediately and hold it internally until the eta passes.

During my investigation, while the workers were idle, I scheduled a few hundred tasks with an eta and, as a result of the above behavior, the priority queues in Redis were empty. Workers will continue reserving eta tasks from queues until they have a task that needs to actually execute now. Once the workers are busy, eta tasks will stay in their appropriate queues until a worker is freed up and comes looking again.
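Here is roughly how that shows up when you poke at it; the queue name and import path below are stand-ins for our real ones:

    # Sketch: after scheduling eta tasks while workers are idle, the Redis
    # lists look empty even though the tasks exist; the workers hold them.
    import redis

    from myproject.celery import app  # hypothetical import of the Celery app

    r = redis.Redis()
    print(r.llen("celery"))  # 0: nothing waiting in the broker-side queue

    # inspect().scheduled() lists the eta tasks each worker has reserved
    print(app.control.inspect().scheduled())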

Processing Reserved Tasks

Alright, so our workers have reserved all of our eta tasks with varying priorities and now the tasks are starting to pass their etas and need to be executed. At this point the worker completely ignores the priorities on the tasks. It begins executing whichever reserved task it happens upon first in its internal data structure (this is probably an internal queue, but I don’t know for sure).

So once a worker reserves a task, its priority is no longer respected. If you schedule a few hundred eta tasks with mixed priorities (as I did) you see them executed in what appears to be an arbitrary order (I suspect they’re actually executed in the order they were reserved, but I haven’t verified that because it’s not relevant to my concerns).

This is not good and reason enough to avoid using eta tasks for anything but high-priority tasks. But, it doesn’t explain how we ended up with low-priority tasks in our high-priority queue in Redis.

Death of a Celery Worker

We have Celery configured to replace each worker after completing 10 tasks. This was an attempt to work around an issue where workers would stop pulling tasks from the queue and everything would stall out. We had a hypothesis that the issue was unclosed connections to Redis and that replacing the workers would force those connections to get cleaned up. We haven’t yet verified what was actually happening or whether replacing the workers fixed anything, though. It’s a very intermittent problem and we haven’t identified a sure trigger. (Though we did solidly identify that if Redis isn’t ready to serve connections when Celery starts, then Celery will not reconnect properly and workers will only execute a single task before hanging forever.)
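For reference, the replacement behavior comes from the max-tasks-per-child setting; a sketch of what that looks like (the project name is a placeholder):

    from myproject.celery import app  # hypothetical import of the Celery app

    # Replace each worker process after it has executed 10 tasks.
    app.conf.worker_max_tasks_per_child = 10

    # The equivalent worker command-line option:
    #   celery -A myproject worker --max-tasks-per-child=10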

Anyway, the point is that we replace our workers every so often. Well, what happens to all those eta tasks the worker had reserved? They go back in the queue so another worker can get them. But, they all go into the default queue instead of going back to their appropriate priority queues. It happens that the default queue is the high-priority queue. So each time a worker was getting cycled, all the eta tasks it held were pushed into the high-priority queue.

The perfect storm

So here’s the scenario. Our workers are sitting idle waiting for work to do. Our large-scale data-processing task schedules a bunch of low-priority jobs with an eta. The workers eagerly snap up all these tasks and reserve them for future execution. As soon as the earliest etas pass each worker begins executing and stops reserving more eta tasks from the low-priority queue.

Each worker completes 10 tasks and gets replaced. As each worker is replaced it returns the remainder of its eagerly-reserved eta tasks to the high-priority queue. The new workers being spawned now begin processing the high-priority queue since it’s full of tasks.

A user comes along and engages in an action backed by a high-priority task. But the user’s high-priority task is now stuck behind several thousand low-priority tasks that have been misplaced in the high-priority queue.

Moral of the story

We had run across the countdown parameter to apply_async and thought it would be a good way to avoid some unnecessary work: pair some high-churn jobs with a flag (managed outside of the Celery world) so they’d only be scheduled again if they weren’t already scheduled.
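Roughly, the pattern looked like the sketch below. The task, key name, and use of a Redis key as the flag are illustrative; our real flag lived elsewhere.

    # Hypothetical sketch of the "only schedule if not already scheduled" idea.
    import redis

    from myproject.tasks import refresh_record  # hypothetical high-churn task

    r = redis.Redis()


    def schedule_refresh(record_id):
        # SET ... NX only succeeds if the key doesn't already exist, so the
        # task gets scheduled at most once per flag lifetime.
        if r.set(f"refresh-scheduled:{record_id}", 1, nx=True, ex=600):
            refresh_record.apply_async(args=[record_id], countdown=300)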

We will be rolling back that change so as to avoid this situation in the future.