Most testing frameworks allow you to skip a test with a message if setup failed, which can help
> the test code itself divided by zero
I think this is a bug you do want to know about, and it should be rare once code is merged into master (there's a bug in your code; it just happens to be test code!). However, if for some reason you rely on a non-deterministic value that can make your test code fail, I have used Python decorators in the past to mark a test as such and raise a specific message.
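A minimal sketch of the decorator approach described above, using only the standard library. The `ConnectionError` trigger and the decorator name are hypothetical stand-ins for whatever nondeterministic setup dependency you actually rely on:

```python
# Hypothetical sketch: convert failures caused by a known-flaky setup
# dependency into an explicit skip with a clear message, so "the
# environment broke" is never reported as "the code broke".
import functools
import unittest

def skip_on_flaky_setup(func):
    """Treat ConnectionError as a nondeterministic setup failure:
    skip the test with a message instead of failing it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ConnectionError as exc:
            raise unittest.SkipTest(f"flaky setup unavailable: {exc}")
    return wrapper

class ExampleTests(unittest.TestCase):
    @skip_on_flaky_setup
    def test_reads_from_external_service(self):
        # Simulate the nondeterministic dependency going away.
        raise ConnectionError("upstream fixture timed out")
```

The suite then reports a skip (with the message) rather than a failure, which keeps the signal-to-noise ratio of your CI intact; pytest users can do the same with `pytest.skip(...)` inside the handler.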
> I don't know if business schools are really pushing this idea now or if it's in vogue for some other reason but everybody outside of management hates it.
I think most senior leadership at Lockheed are engineers and not MBA-types; certainly the CEO has a bachelor's degree in engineering from the Air Force Academy.
This is probably an attempt to reduce the risk of big one-payout projects causing financial pain during down markets. The government is going to budget for a subscription way in advance, so you suddenly see steady monthly income for the lifetime of the administration.
One business model a lot of people envy is furniture on payment plans: you sell an item and collect cash every month for the next couple of years. Industries where you have such a clear forecast are much easier to plan around (yes, this was something from business school lol).
Makes me love Singapore the more. Very pragmatic leadership. Can't be characterized as "right-leaning" or "left-leaning" or whatever...it's like the leaders just sat down and thought, "Let us borrow any idea that works and implement in our country"...grew from third-world levels of poverty to enviable first-world development in a few decades.
I'm not Singaporean and have never visited if you're wondering...
I lived in Singapore for 3 years; it's quite a pleasant city. Super efficient, clean, people polite and smiling. As an expat, the censorship and repression only materialized for me in the absence of a decent art scene. Music, theatre, movies, graphic arts, ... are nowhere to be seen, or very mediocre for a city this size. Oh yes, and of course no drugs beyond alcohol.
The expensive car policy is well counterbalanced by good public transportation and cheap fast taxis (no congestion). More convenient than owning a car, imo.
Not just enviable: they have among the world's highest GDP per capita, they're the only Asian country rated AAA by all the major ratings agencies, they have good integration policies (low crime despite a population split roughly 20/20/20/20 between Buddhism, atheism, Christianity, and Islam; I can't remember the exact numbers, but it's close-ish), one of the highest life expectancies in the world, and pretty awesome food!
It's almost like cracking down on crime in a tough way with liberal economics can lead to a great place to live.
Even if Singapore is authoritarian and unfree, if it has high happiness scores, high life expectancy, and a high standard of living, maybe we have to accept that some authoritarian places can be good places to live.
> Can't be characterized as "right-leaning" or "left-leaning" or whatever.
Singapore's story is remarkable, like you said. But it can absolutely be characterized as right-leaning, with a conservative social policy marked by low individual freedom (low tolerance for antisocial behavior, definition of "antisocial" largely in the hands of the government) and high economic freedom (laissez-faire free market capitalism, small government[0]). On the Nolan chart[1], it would fall squarely in the Conservative side.
[0]: At government expenditures of ~15% of GDP, Singapore has by far the smallest government, in economic terms, of all developed countries.
Writing an app in a microservice architecture with services in Rust, Go, and Nim. Database is a serverless DB like Planetscale. Front end in HTMX. Etc…
I think any tech your team is not already familiar with, and isn’t the standard pick, is an innovation token.
So I agree 100% with this, but Postgres is so easy and scales infinitely (for the meaning of infinite in 99% of business use cases) that I suppose I don't understand why I would choose MySQL. Do folks just use it as a user cache? Most systems I've designed or worked on have required a central database, so MySQL never seemed the right fit.
MySQL has better compression and IME can scale further thanks to redo log and alternative engines. Still love Postgres's rich feature set for smaller DBs.
A way for the humanities to get some of the CS STEM funding by writing sci-fi.
Edit: The number of AI safety sessions I've joined where the speakers have no real AI experience and talk about potentially bad futures, based on zero CS experience and little 'evidence' beyond existing sci-fi books and anecdotes, has left me very jaded on the subject as a 'discipline'.
I believe it comes down to three groups:
AI researchers/organisations wanting to make their work sound very important/scary
Humanities researchers wanting STEM funding
AI organisations trying to bring in legislation to slow down competition
Yes, there are a lot of bunk AI safety discussions. But there are legitimate concerns as well. AI is close to human level; logically, it becomes dangerous, especially if given autonomy and bad goals. Many accredited researchers recognize this.
There is some level at which you can discuss AI safety without AI expertise (especially a few years ago, when everything was so uncertain), but I think currently you need a lot of awareness of physical and computational limits. Taking those limits into account, we're clearly very close to human-level intelligences that can scale in unpredictable ways (probably not "grey goo" ways), but potentially dangerous ones under various scenarios, including manipulating our digital lives if humongous AI systems end up controlling everything, as we are in danger of letting happen as a society.
I think there's also a lot of implied elitism toward the humanities here that you should try to get past. The humanities have a lot of insight into human nature, even if not all of it is reliable. See philosophers like Derek Parfit.
(In case you're wondering, I've implemented a few AIs, mostly RL algorithms.)
The thing I always get caught up on, when making comparisons between computers and humans wrt autonomy, is that the computer reaches the output state from the input by a clock cranking the CPU forward, i.e. it's a function that runs when the environment around it forces it to run. To put it in the LLM context, between words and after a stop token, the "intelligence" is dead, frozen, suspended until the next function call.
How can a machine, then, possess anything like self-directed behavior when it never has a sense of self-preservation? Basically this is my axiom: that a sense of self requires fear/awareness of mortality and the good sense to avoid those things that end you.
Perhaps you could concoct a machine that runs in an infinite loop with no off switch. I guess my question for you is: in what way can a machine have autonomy?
And my distinction between living and dead might be: a living system acts out of self-preservation, consuming or modifying its environment to survive/thrive, while a dead system is simply acted upon by the environment it's embedded in, like a crystal growing due to molecular forces and a temperature gradient, or an adding machine being cranked by a higher being.
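The "clocked function" picture above can be sketched in a few lines. `model_step` here is a toy deterministic stand-in, not a real model; the point is that all "activity" happens only when the caller's loop cranks it:

```python
# Minimal sketch of the "clocked function" view of an LLM:
# the model is a pure next-token function, and between calls
# (and after the stop token) it is entirely inert.
def model_step(tokens):
    # Toy stand-in "model": emits a token derived from context
    # length, then signals stop after five tokens total.
    return "<stop>" if len(tokens) >= 5 else f"tok{len(tokens)}"

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    while True:
        nxt = model_step(tokens)   # the only moment the "mind" runs
        if nxt == "<stop>":
            return tokens          # frozen until someone calls again
        tokens.append(nxt)

out = generate(["hello"])
```

Any apparent autonomy lives in the `while` loop, which belongs to the caller, not the model, which is exactly the distinction being drawn above.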
It is very obvious you have never read the book Superintelligence or any literature on the subject, because you're posting shower thoughts here. But here is the thinking of the computer scientist with the highest h-index in the world, to help out:
https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-r...
Perhaps you'll categorize this in the first group, but I've found Robert Miles' arguments/videos to be convincing, and he was talking about it before it became particularly fashionable to do so:
My best friend is a radiologist and I guarantee he's not spending 32 of his 40 hours reading slides; there could definitely be a way to organise things so he works a maximum of 32 hours with the same output.
x4, but I'd put an asterisk on it: tech choice matters insofar as it affects how fast you can go. Something like a batteries-included PHP, Rails, or Phoenix app deployed by hand on a single Linode instance beats a fancy new TypeScript GraphQL Kubernetes stack on time-to-market any time of the day.
Honestly all the "fancy new" stuff only gives you benefits at galactic scale compared to what most solo folks will build. You already start well behind the curve being a solo dev, so you lose out on a lot of benefits of orchestration, rapid microservice deployments, etc. That stuff is built for teams, not one person hacking away in their home office on nights and weekends. You get all the bad parts, because they can't be avoided, and none of the good, because you don't have the throughput to take advantage.
My current job has 80 or 90 devs, and we have on-prem k8s, an on-prem monolith, and plenty of "cloud-native" AWS stuff. Everything TS, GraphQL, messaging queues, exactly what you'd expect from an organization that size.
My side projects are all .NET MVC apps. Full-page reloads, manual deployments out of Visual Studio, etc. The only excuse for me to go the TS etc. route would be if that was the only thing I knew how to do, and honestly with as much as I've heard from folks like Tony and Pieter, if I was green now and only knew TS, I'd probably be learning PHP and Laravel.
I'm not even sure solo is at a disadvantage here - on the contrary, regular companies are bogged down by a massive tech pit, because of which they can't move fast, and only keep adding to that according to Conway's law.
Imho 90% of 90-developer companies out there could be replaced by 1-2 devs working the same hours but more efficiently with a more efficient stack. I'm not even joking!