Firefox also has a setting like this, although I think it's even nicer in that it makes everything (current and future) AI default to opt-out, but still lets you opt in to specific use cases if you want.
Firefox took an awfully long time to get that global setting. It was clear that Mozilla Corp hoped they might be able to push AI services as a revenue generator, before the AI pushback.
Yes, indeed it does. I didn't feel this way until I worked for a YC-backed startup, though. I mean, YC is the first to admit that not everything needs to be VC funded and some things just aren't a good fit for that funding model. I think a code editor is one of them.
> I mean, YC is the first to admit that not everything needs to be VC funded and some things just aren't a good fit for that funding model. I think a code editor is one of them.
Fully agree. I also feel like a lot of companies don't need to be on the stock market, especially if they're reasonably profitable. The stock market feels like where you go to give up even more of your company just to get rid of the VCs you owe a lot of money to.
I remember when I was learning about entrepreneurship in college, I was baffled by their insistence on an “exit strategy”. The idea just seemed so foreign to me. See, I naively thought the point of starting a business was to do the business, not to stop doing it and sit next to a pile of money instead. Silly me.
Zed is a durable piece of software, rather than the current trend of cheap disposable software. Regardless of whether humans or agents use a tool like this, durability is a benefit for both.
Voiceless groups do not appear in the training data? How could they? They are voiceless. Do you think voiceless people are represented in today's training data? They can't be, for the same reason.
Nothing tragic about using data from a time period.
Common words used in the 1900s are labeled racist now. I doubt anyone wondered whether those words were filtered against a modern list of acceptable terms.
I'd be more worried if words from that era were fully aligned with present day notions of morality. Wouldn't that indicate a certain stagnation & lack of progress?
Let us hope, 100 years from now, there will be people who look back unkindly on us.
>The voiceless groups or fringe opinions which we take as normative today do not appear.
Times are different. Anybody with an internet connection can "publish" their thoughts and perspective online. LLMs scrape all of this. Modern datasets like CommonCrawl capture a vastly wider spectrum of humanity than a printing press ever could.
The pre-1930 model acts as a time capsule of "gatekept publishing", but modern LLMs are trained on the democratized web.
>Does this encourage us to write in the present such that we influence the models in perpetuity?
I noticed a bunch of LLM-powered Reddit accounts praising products/services in dead threads. Or one bot posting a setup question, then a few other bots replying with praise or questions about a specific product.

I don't know why they're doing this, but I'm beginning to suspect it's something like this: get positive sentiment into the datasets for the next generation of LLMs.
> OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.
Azure is effectively OpenAI's personal compute cluster at this scale.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
I wonder how this figure was settled on. Is it based on consumer pricing? Can't Microsoft and OpenAI just make up a number, aside from a minimum to cover operating costs? At what point is the number just a marketing ploy to make the deal seem huge, important, and inevitable (and too big to fail)?
You're right. We should absolutely only rely on "Ask sales for price" closed-source software from megacorps, that get worse on every release, and get sunset anyway when the funding runs out.
Of all the things to judge this on, you chose the most ridiculous one. Why shouldn’t a project like this exist just because there are “bigger” alternatives out there?
If you're gonna shoot this one down, at the very least do it for the right reasons, such as the fact that this is a web wrapper. Absolutely disgusting: either go native, or don't bother shoving your webpage into a browser container and calling it what it is not (an app).
This feels like an unethical release of a model. They've opened a can of worms without investing in defense first.
Anthropic announced their capabilities in advance, did a private release, then put up $100M in credits to Fortune 500 companies and OSS projects to secure themselves.
OpenAI saw that, made a model equally capable of exploiting vulnerabilities, then released it to the public with no equivalent program [1]
Glad it's an option, be it for regulatory compliance, security, privacy, or any combination of the three.
[1]: https://zed.dev/blog/disable-ai-features