
Google says SDRAM in 1997 was 7 to 10 dollars per megabyte. So 384MB would be roughly $3,840, not $40,000. Am I missing something here?
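
A quick sanity check of that arithmetic, sketched in Ruby (using the quoted $7 to $10 per megabyte):

    # 384MB of SDRAM at the quoted 1997 prices of $7-10/MB.
    megabytes = 384
    puts megabytes * 7   # => 2688, ~$2,700 at the low end
    puts megabytes * 10  # => 3840, ~$3,840 at the high end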

Buying higher density memory is almost always more expensive. Yes, you could buy 100s of the cheapest modules at that price, but what is the point if you can only stick 8 of them in any given machine?

Possibly inflation adjusted?

That would be around $7,900 USD.

I had a desktop PC that I bought (as a pile of bits!) with 512MB of RAM in 1999, and I sure as hell didn't pay more than a couple of hundred for memory. It might have been EDO rather than SDRAM, but I can't see the price difference being that much!

https://news.ycombinator.com/item?id=47551166

>128MB DIMM: May 1997 $300. July 1998 $150. July 1999 $99. September 1999 Jiji earthquake happens. September-December 1999 $300. May 2000 $89.

>Then overproduction combined with dot-com boom liquidations started flooding the market and Feb 2001 $59, by Aug 2001 _256MB_ module was $49. Feb 2002 256MB $34. Finally April 2003 hit the absolute bottom with $39 _512MB_ DIMMs

In 1999 512MB could cost $400, but it could also cost $1200 :)


My computer had 16MB in 1997, and it was lower-range but not the absolute bottom.

It looks like AnandTech listed 128MB for $300 (not inflation adjusted) in 1997. It fell to $150 in 1998 and by 1999 you could buy it for $100.

So 512MB of RAM by the end of 1999 for ~$400 was plausible.
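
A sketch of that estimate, assuming four 128MB DIMMs to reach 512MB at the ~$99 July 1999 figure quoted above:

    # Four 128MB DIMMs at ~$99 each (the July 1999 price above).
    dimms = 4
    price_per_dimm = 99
    puts dimms * price_per_dimm  # => 396, i.e. ~$400 for 512MB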


I bought a DC on launch week; it's one of my favourite consoles of all time, and I still own one. But what has Bleemcast got to do with what the parent said?

To demonstrate how powerful and far ahead it was.

Part of the Dreamcast's charm is how simple it is: a jellybean CPU. The PowerVR is competent, but it's not outside the norm for 3D accelerators of the period (and there was a mass-produced PCI card version of it). Nothing about the Dreamcast is exotic, though the pack-in modem and VMU are neat (I did say "maybe" for the DC). GD-ROM vs DVD was obviously a dumb move. Perhaps Sega didn't have the war chest to loss-lead a DVD Dreamcast (they didn't have the vision either at that point).

A technical demo like Bleemcast doesn't demonstrate how far ahead something is; it has to be judged relative to hardware of the same generation. Having said that, the PS2, which had some early programming hiccups, would go on to eat the DC's lunch.


...and the PS2 eating the DC's lunch has more to do with Sega's terrible decisions in prior generations, which burnt retailers and consumers alike, than anything else. The things the PS2 had going for it at launch were a cheap DVD player (yes, Sega didn't have the money for this; they were very close to bankruptcy at the time) and Sony's hype.

My main worry is this is just another step towards government controlling discourse online. Once implemented it will become difficult to be anonymous on social media.

Someone in the UK civil service was quoted in the Times stating that the Online Safety Act is not about protecting children; it is about controlling the discourse.


I see money as a wall. Without it, my family and I are defenceless. My goal is keeping us safe.

And yet people risk their lives to get to the USA; they vote with their feet. It isn't perfect, but to anyone declaring "it isn't working", my response is "compared to what?"

I'm from the UK, and it isn't in great shape. The EU isn't either. The West in general has problems, just nowhere near the scale of every other country's.


Raising interest rates right now makes no sense to me. Energy prices and layoffs will kill spending power. I think the central banks will overcompensate because they got inflation so wrong the last time.

Inflation has been persistently above 2% (and arguably much higher, as the current methodology for measuring inflation is quite flawed). There's a definite risk of inflation expectations shifting, which central bankers really want to avoid.

Your point that there's a recessionary risk is real, but lowering rates might lead to stagflation. Both options are pretty bad honestly.


Can you elaborate on what you mean by "central banks got inflation so wrong the last time"? You mean Covid or 2008?

Covid. The ex-head of the Bank of England said as much.

I mean the American recovery after a global disaster like Covid was actually pretty good given the situation. What would you propose they should have done instead?

Are you saying you interviewed meta engineers and found this? Or is this speculation?

I interviewed someone recently who worked at Meta a couple of years ago. He was a software engineer, was paid a bunch of money to mostly pull up dashboards all day, and eventually quit because it was neither interesting nor challenging.

I worked at Meta and they're spot on.

I interviewed a Meta Senior SWE in 2023. The guy couldn't write the most basic Python loop. Attempts were made. I didn't expect a list comprehension. This was just a warmup exercise, fizz-buzz level, so everyone can feel confident and talk. Everyone just smashes it. I could have done it as a teenager. Had to call it off after 15 minutes of trying. It was too much. But he took it on the chin. "Yep, thanks, sorry I didn't get too far. Bad day, maybe" or something like that. Most confident guy I've ever talked to. I was impressed by that - to totally bomb and be cool about it. Good for him.

The 3-year-old anecdote is a bit pointless. It literally could have been a bad day. I've burnt myself out on a problem the night before and absolutely bombed simple interview questions, too. Or he just happened to be the least competent engineer at Meta. It doesn't give much information about their average employee, though.

Oh totally. In general I don’t think you can conclude anything about anyone, really. Yesterday they were someone. Today someone else.

We had the same experience with Meta engineers. One candidate had been with Meta/Facebook for seven years and had nothing to show for it. They had an incredibly hard time articulating what work they actually did. It was something related to storage, but pretty much every answer was "well, actually someone else does that part". We had the same experience with basic coding: no actual skills, yet somehow they manage to have a CS degree.

Someone has to be doing the actual work at Meta, but that might not be the people who are seeking out new jobs. So we get this false impression that their engineers are a bit... not good, because those are the ones actually leaving.


As someone who has worked at big tech (and interviewed fellow big tech workers), I can confirm this is pretty typical.

People from Google, Meta, Microsoft, Apple, etc.: it's all the same. Given the size of these organizations (anywhere from 100K-300K employees if you include contractors), there's a vanishingly small chance the individual you're interviewing had influence or responsibility over anything specifically important. And if they were high enough on the org chart to be responsible for something real, they were never hands-on and just played politics all day in meetings.

Everyone will claim otherwise of course, but it's all layers and layers of diffusion of responsibility.

The pace of work inside these orgs is: meet for months about a narrowly scoped new feature (e.g. "add a 5th confusing toolbar to Gmail to market Google's 7th video call tool"), take months to build it and run it up the organizational gauntlet for approval, launch it, and then chill for 3 months because nobody does anything big in Q4.

For many people at these orgs this is what an entire year of "work" can look like, for which they will be paid roughly $400k.


While at G I was one of three engineers working on a mid-sized iOS app. We shared ownership of the entirety of the codebase. It wasn't dissimilar to some of the other teams I've worked on at orgs of differing sizes.

> The pace of work inside these orgs is, meet for months about a narrowly scoped new feature, take months to build it and run it up the organizational ladder for approval, launch it and then chill for 3 months because nobody does anything big in Q4.

This sounds wonderful, it certainly wasn't the case for us.


I've contracted at several big tech companies and that other commenter is making stuff up. My experience was similar to yours, the engineers were very productive on impactful projects. I'm sure there is some dead weight in every company, but it's the exception not the norm.

It sounds like you have financial incentives motivating your desire to shape opinions on this issue. I already exited big tech so I'm able to be candid. But don't worry, giant companies aren't going to stop your gravy train, they already know you're not highly "productive" and "impactful." That's the point.

If you were actually important to the organization it would be a terrible mismanagement of the company. A well-run big org is designed such that workers are replaceable cogs in generalized salary bands, that's what makes the machine durable.

It's very easy to think you're "productive" and "busy" when your days are filled with meetings and trying to placate various groups of stakeholders. But if you look at your actual work output after a year in big tech, it's fundamentally low impact, and it's that way by design.


> it's fundamentally low impact, and it's that way by design.

I'd like to keep tugging on this thread, I find it interesting.

In my experience, everyone up my chain of command was motivated to derive as much impact from their reports as they possibly could. If anything, it felt as if the system was designed to reward impact above all else - promotions were given to engineers who could demonstrate their work on _____ increased _____ by x% driving revenue by y%.

Nowhere in the system seemed designed to reward low impact, it really felt the opposite.

When you were at a big tech co, your experience was different?


I don't work at the company that enabled that access anymore, so nice try. Frankly this was a poor attempt at a character attack.

The bureaucracy at Google has grown and grown. And then grown some more. But it is nowhere near as bad as the GP makes it sound.

> People from Google, Meta, Microsoft, Apple, etc...it's all the same.

Hmm...it's been a while, but when I was at Apple one of the reasons given internally for why products were so much better than the competition (and they were) was that Apple typically had 1/10th the number of people working on a particular product or feature.

I wonder if that's still the case.


It was less true when I was there more recently.

But Apple is still amazingly efficient compared to others like Meta/Microsoft/etc if you just look at raw headcount vs. product/service/distribution surface area.


Maybe not 1/10, but definitely on the order of 1/4 or 1/6 as many.

Who is more impactful: the startup engineer who singlehandedly ships a feature that increases a startup's revenue by 25% off a base of $5M/yr ($1.25M extra revenue), or a Meta/Google team of 5 engineers who ship a 0.01% revenue improvement off a base of $150B/yr ($15M/5 = $3M per engineer)?
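
Running those numbers (same figures as above, just spelled out):

    # Startup: one engineer, 25% lift on a $5M/yr revenue base.
    startup_per_engineer = (5_000_000 * 0.25) / 1
    puts startup_per_engineer  # => 1250000.0, i.e. $1.25M per engineer

    # Big tech: five engineers, 0.01% lift on a $150B/yr base.
    bigtech_per_engineer = (150_000_000_000 * 0.0001) / 5
    puts bigtech_per_engineer  # => 3000000.0, i.e. $3M per engineer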

As an engineer you are thinking about impact as 'scope' or 'features'. Leadership will be thinking marginally, about what adding one net new engineer provides to the business.

“Marginalism is the economic doctrine that we can best understand value by considering the question of how many units of a good or service an individual has, and using that starting point to ask how much an additional – or marginal – unit would be worth in terms of other goods and services.”


If some engineer optimizes something in the Google search stack that makes it, on average, just 0.01% faster (not 1%, but one-one-hundredth of a percent), then they have paid their salary for the entire year. Almost in perpetuity. No matter what level they are.

Very small gains multiplied out over extremely large amounts of compute over large amounts of time add up big.

And that's why Google can spend so much money on fairly small scoped teams.
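
A back-of-envelope sketch of that claim (the $5B/yr serving-cost figure is a made-up assumption for illustration, not a real Google number):

    # Hypothetical: assume the search fleet costs $5B/yr to run.
    fleet_cost = 5_000_000_000
    speedup = 0.0001            # 0.01% faster, as above
    puts fleet_cost * speedup   # => 500000.0, roughly $500k/yr saved

That's in the ballpark of one engineer's annual compensation, which is the point being made.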


A lot of rationalization for what is fundamentally just market inefficiency: economies of scale and network effects (aka Monopoly).

Remove Google's monopoly level distribution, and then build that feature and tell me how much revenue it generates.

The value is in the monopoly which was formed by the founders and all the early employees by having the right products at the right time decades ago, not in the "upgrade now" button some worker bee added to Gmail in year 25 of the company.

Yes, that "upgrade now" button probably does generate $100M in revenue per year. But the reason why isn't because of some unique engineering talent on behalf of the worker bee.

They just pay that dude so much because activist investors don't scrutinize costs too aggressively on growing monopolies (wait until revenue growth stops) and they value stability. If you don't value stability to the same degree (you aren't a massive 200K employee org), I wouldn't hire the "upgrade now" button guy.


I've also worked (and currently work) at a big tech company and personally this has not been my experience. I'm sure it happens but it's not typical.

My famous interview question, "How do you copy a file to another computer?", is one I was told I need to tone down. It filters out too many entry/mid-level candidates.

Given how inefficient Meta et al are, why do they pay so much more than the nimbler, smaller companies? (Rhetorical question, I already know the answer: monopoly and regulatory capture.)

Of course those engineers would rather have more meaningful work if it came with similar compensation and work life balance.


Hard to motivate people to work on things that destroy society. Money helps.

Want to see how motivated Meta employees are? Watch how fast their offices clear out at 5pm on the dot.


What do you think is an appropriate time for most employees to end their workday?

I am a terrible person to ask. My employers get their money's worth from me: I genuinely like my work and regularly work more than 8 hours a day. I also work in a field with others who, with some exceptions, do the same, so it's strange for me to see "normal people" clock out on the dot.

Have you considered that people can both like their work, and like other things at the same time?

Meta offices are pretty full at 5pm lmao. In fact they are still decently full at 7pm after dinner at 6. Baffles me why people just make up random crap in areas they clearly know less than nothing about.

Because you have to pay people more to do boring or evil work vs meaningful or exciting work.

In my experience the pay difference was never close enough for meaning and ethics to play a role in the decision.

Cool exciting and meaningful science job: 200k

Big Tech surveillance capitalism job: 800k (at the low end)

The calculus has only been about affording housing and providing for the family.


800k at the low end? Big tech pays well, but that sort of comp is reserved for very senior folks.

Where do I get this cool exciting and meaningful science job paying $200k?

This is my experience too. I actually briefly took the cool exciting climate change related science job and then realized that I couldn’t actually support my family’s lifestyle on $160k so I left and went back to surveillance capitalism. I do feel guilt about that decision, but I like to imagine I’ll be able to go back to working on interesting and ethical things after my kids are out of the house.

Seems the pay is very different and thus is absolutely playing a role in the decision?

Yeah. This is part of why I wasn't excited to work at G after my first time there. It was very boring.

For big products with many years of history behind them, yeah, that's true. For v1.0 or skunkworks projects it's still mostly true, but occasionally some crazy-ass stuff can happen. (Cue the "what has been seen cannot be unseen" meme pic.)

You’re painting with a pretty broad brush there.

“…for which they were paid roughly $400k.”

If I had to guess, the main reason you don't hire big tech employees is that you can't afford to. Everything else is extremely subjective, depending on what area said engineer worked in.


Not at all my experience in big tech. Are you maybe extrapolating from working at Microsoft or IBM or something?

Which other developed countries do you mean? The only ones I can think of have westernised on purpose. E.g. Singapore and Japan.

What you call "westernised" just describes the adoption of bourgeois and open-market norms. There's nothing about these norms that's inherent to what we call the West: classical Western culture (Greece and Rome, though the attitude persisted well into the Middle Ages and ultimately fed into multiple streams of modern-era thought), much like other ancient societies, actively despised market participants, broadly equating them with swindlers.

That is sort of my point, I can't think of a developed country that hasn't westernised to some extent.

I was going to say "why on earth are you making them use a line editor? There's probably a VS Code plugin for the assembler with syntax highlighting", but then I got to your point about it being in their head instead. This reminds me of what Zed Shaw said: for some reason code written without an IDE is better, and he's not sure why.

As a sort of adjacent point, I worked through the book used in the course often called "From Nand to Tetris". It is probably the best thing I've done in terms of understanding how computers, assemblers, and compilers work:

https://amzn.eu/d/07pszOEy


> This reminds me of what Zed Shaw said: for some reason code written without an IDE is better, and he's not sure why.

I am not sure whether the statement is correct; I am not sure whether the statement is incorrect either. But I tested many editors and IDEs over the years.

IDEs can be useful, but they also hide abstractions. I noticed this with IntelliJ IDEA in particular. Before I used it, I was using my old, simple editor, with Ruby as the glue for numerous actions. So when I want to compile something, I just do, say:

    run FooBar.java
And this can do many things for me, including generating a binary via GraalVM and taking care of options. "run" is an alias for run.rb, which in turn handles running anything on my computer. In the IDE, I would have to add some config options, and finding them is annoying; often I can't do the things I can do via the command line. So when I went to use the IDE, I felt limited and crippled in what I could do.

My whole computer is actually an IDE already. It's not as convenient as a good GUI, of course, but I have all the options I want or need, and I can change and improve each of them. Ruby acts as generic glue towards everything else on Linux here. It's perhaps not as sophisticated as a good IDE, but I can think in terms of what I want to do, without having to adjust to an IDE. This was also one reason I abandoned vim: I no longer wanted my brain to adjust to vim. I am too used to adjusting the language to how I think, and in Ruby this is easily possible. (In Java not so much, but one kind of has to combine Ruby with a faster language anyway, be it C, C++, Go, Rust ... or Java. Ruby could also be replaced, e.g. with Python, so I feel that discussion is very similar; they are in a similar niche of usage too.)
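
As a rough illustration of the kind of glue script being described, here is a hypothetical sketch (not the actual run.rb; the file-type handling and the GraalVM native-image step are assumptions based on the comment):

    #!/usr/bin/env ruby
    # Hypothetical "run" dispatcher: pick a toolchain by file extension.
    file = ARGV.fetch(0) { abort "usage: run FILE" }

    case File.extname(file)
    when ".java"
      base = File.basename(file, ".java")
      system("javac", file)        or abort "javac failed"
      system("native-image", base) or abort "native-image failed"  # GraalVM binary
    when ".rb"
      system("ruby", file)
    when ".c"
      bin = File.basename(file, ".c")
      system("cc", file, "-o", bin) or abort "cc failed"
      system("./#{bin}")
    else
      abort "don't know how to run #{file}"
    end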

Simpler tools are a forcing function for simplicity. If you don't have code search, you'll need to write code that is legible without searching. If you don't have auto-refactoring utils, you'll have to be stricter about information-hiding. And if you don't have AI, you might hesitate to commit to the first thing you think of. You might go back to the drawing board in search of a deeper, simpler abstraction and end up reducing the size of your codebase instead of increasing it.

Conveniences sometimes make things more complicated in the long run, and I worry that code agents (the ultimate convenience) will lead to a sort of ultimate carelessness that makes our jobs harder.


> Simpler tools are a forcing function for simplicity. If you don't have code search, you'll need to write code that is legible without searching.

i was working in a place that had a real tech debt laden system. it was an absolute horror show. an offshore dev, the “manager” guy and i were sitting in a zoom call and i was ranting about how over complicated and horrific the codebase was, using one component as a specific example.

the offshore dev proceeded to use the JetBrains Ctrl + B keybind (jump to usages/definitions) to try and walk through how it all worked — “it’s really simple!” he said.

after a while i got frustrated, and interrupted him to point out that he’d had to navigate across something like 4 different files, multiple different levels of class inheritance and i don’t know how many different methods on those classes just to explain one component of a system used by maybe 5 people.

i used nano for a lot of that job. it forced me to be smarter by doing things simpler.


I really like this approach. A good reminder that Ruby started out as a scripting language, as evidenced by the many built-in primitives useful for shell programming.

When .NET first came out I started learning it by writing C# code in Notepad and using csc.exe to compile it. I've never really used Visual Studio because it always made me feel that I didn't understand what was happening (that said, I changed jobs and never did any really big .NET project work).

Ha! The only multiplayer game my wife will play with me :)

Classic! Try Magical Drop 2 or 3 and Money Idol Exchanger on the same system. There's a good chance she'll love them. They're a little more frantic though, especially Money Idol!

Thanks :)
