Hacker News — SimianSci's comments

I see drones as more of a side effect of the new era of warfare we are in. The more powerful your economy, the more autonomous weapons you can create and eventually deploy. Manufacturing capacity and economic resiliency are becoming far more important than a nation's ability to equip and train its military.

The alarming part of this to me is that it heavily implies wars will be decided more by who can successfully destroy their adversary's economy than by who can take and hold points of strength. Holding a city with an entrenched military doesn't matter much when there is still a factory deep in enemy territory producing the next wave of attacks. The incentive to target non-combatant civilians is rising at an alarming rate.


> Manufacturing capacity and economic resiliency are becoming far more important than a nation's ability to equip and train its military

This has been the case in wars of attrition since the Civil War. It took between then and WWII for the message to land.


It feels like the drone factories should be targetable and who controls that may control a war.

Ukraine has shown that a drone factory can be set up in any old building; it's not like they need huge machines. Would you carpet bomb every usable building? Cheap drones as a defensive weapon make war far more costly for the aggressor.

Cheap drones have made war more expensive for people who care about where they inflict the damage.

No, they made war expensive for people who are used to overwhelmingly superior but expensive military force. Drones are perfectly capable of surgical strikes (they even excel at them), and if the enemy destroys a $1,000 drone with a $100,000 missile, it's still a win for the drone.

The spend at my organization has passed $200,000 per month on Anthropic's enterprise tier. The number of outages we have had over these past few months is astounding, and coupled with their horrendous support, it has our executive team furious.

It's a lot of money to be spending for a single 9 of reliability.


If you are paying API rates (not using Max subscriptions), there's no reason to use Anthropic's API directly; the same models are hosted by both AWS and Google with better uptime than Anthropic.

How do things like prompt caching etc play into that? Would I theoretically have a more stable harness backing my usage?

I'm seriously over the current Claude experience. After seemingly fixing my 4.6 usage by disabling adaptive thinking and moving to max effort, it seems the release of 4.7 has broken that workflow, and I'm 99% certain that disabling adaptive thinking does nothing even on 4.6 now. I've hit egregious errors in just two days this week after coming back from vacation.


AWS Bedrock supports prompt caching; just note that if you use the Converse API, you need to set the cache points manually.
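As a rough sketch of what that looks like: a `cachePoint` content block marks everything above it as the cacheable prefix. The model ID and prompt text below are placeholders, and the field names follow Bedrock's documented Converse prompt-caching convention; verify against the current docs before relying on them.

```python
def build_converse_request(system_prompt: str, user_message: str) -> dict:
    # Request body for the bedrock-runtime Converse API. The cachePoint
    # block tells Bedrock to cache everything above it (here, the system
    # prompt) so repeated calls reuse the cached prefix.
    return {
        "modelId": "example-model-id",  # placeholder, not a real model ID
        "system": [
            {"text": system_prompt},
            {"cachePoint": {"type": "default"}},  # manual cache point
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_message}]},
        ],
    }

request = build_converse_request("You are a code reviewer.", "Review this diff.")
```

With boto3 you would then pass this as `boto3.client("bedrock-runtime").converse(**request)`.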

> Would I theoretically have a more stable harness backing my usage?

If you don’t mind an opinionated harness that asks for a pretty specific workflow, but one that works well, use OpenCode.

If you want to spread your wings and feel the sweet kiss of freedom, use Pi.


I'm looking at moving to Pi and I like the minimal nature, but I disagree with a handful of decisions they make. So I'd likely need to maintain a fork, which is less than ideal.

What decisions is Mario making that you disagree with? My impression is that Pi is minimal enough that any changes can live on top of it without needing to maintain a fork?

I started developing my own coding agent after using Pi for a couple months, so I’m curious what you don’t like about pi.


When I hear Mario talk about pi and his approach I find myself agreeing with a lot of it. But I also find myself agreeing with a lot of the points from this https://www.thevinter.com/blog/bad-vibes-from-pi

The opinions in question are that bash should be enabled by default with no restrictions, that the agent should have access to every file on your machine from the start, and that npm is the only package manager worth supporting. Bold choices.

To save others a click, though the article is worth reading.

He also mentions that Pi has no subagents by default.


The oh-my-pi harness fixes many of these, like subagents.

It seems to, but then also throws in the kitchen sink and a custom bath.

check out my pi forks.

Ummmmmm, how?

I searched his HackerNews username on Google.

[0] - https://github.com/cartazio/oh-punkin-pi


That (and oh-my-pi) seem like an excessive swing in the other direction. I'm all for the simplicity and minimalism of Pi. There are just a few fundamental things that need updating (mainly subagent context and the open-by-default security model).

Pi for the win. I have my own AI extend it when I want more specific features; vibe coded permission control (Shift+Tab, like Claude Code) in 20 minutes.

I find it so funny that many of these harnesses sound like black magic and are completely mystical to me. I use Claude Code every day and yet I can't imagine the workflow of Pi. I also don't care to pay API rates just to experiment with them.

Largely though I'm happy with Claude Code w/ IDE integration, so I don't feel the need to migrate. Nonetheless I'm curious.


I have enterprise, so it's always usage-based, which makes it possible for me. And then I can toggle between the other subs, which is awesome.

I live in the terminal. Before AI I always preferred it, so it suits me.


You can use Claude Code with these other providers.


Enterprise adds IAM, logging, and analytics, all of which AWS provides for free or for metered usage without needing an enterprise plan.

They'll cut you a private offer for Bedrock tokens, but Bedrock has a 32k output limit.

I use Bedrock with 1M context every day. Not sure this is right.

4.7 is the first Opus model that's had the 1M context window available on Bedrock.

Not true. Opus and Sonnet 4.6 support 1M context on Bedrock.

I've had Opus 4.6 1M and Sonnet 4.6 1M for months now on Bedrock.

Their docs may be lying, but they say 200k for Opus 4.6. And yes, 1M was on Sonnet for Claude enterprise.

Isn't that an input limit from API Gateway?

Obviously there is only so much you can say; but is that $200K due to the raw number of seats you have, or are you burning through a lot on raw API usage? I guess I'm trying to understand, large business, or large usage.

We are in the SMB space; the spend is almost entirely usage for us at this point, rather than seat cost. For context, we are a software firm focused on difficult engineering problems, but I can't divulge much else.

Have you guys considered running your own local models? $200k a month is a ton of money and puts all your eggs in one basket. Or is it easier to just be able to walk away from it all if you are done with it or something changes?

I led the team that did the math and analysis for determining our direction in selecting Anthropic. We initially assumed this was where we would end up, but after some investment exploring our options we found it not worth the trouble.

Local models sound great until you realize you don't get a lot of the features we implicitly expect from hosted models. Many things would require additional investment in operations and setup to reach a comparable system. We ended up wanting things that would require us to roll our own memory system, harnesses for the model, compliance, and security. It was possible for us to invest in this, but it would have required additional hiring or training to get us to a state comparable to the hosted options.

Eventually, I had to recommend against the project, as it was more likely to be an investment in the leading team's resumes than an actual investment in our organization.


To start, I want to be clear that I am trying to understand, not criticizing, and mistakes are how institutional knowledge grows.

Your last paragraph hints at retention struggles which complicates the issue.

But was vendor mitigation not part of the evaluation? I get that most companies view governance and compliance as a pay-to-play issue, but there has always been a problem with rapidly changing areas and single-source suppliers.

I admit to having my own preferences and being almost completely ignorant about what your needs are, but I have seen the value in having a rabbit to pull out of the hat.

If employee retention doesn't allow for the departure of individuals without complete loss of institutional knowledge, I guess my position wouldn't hold.

But during the rise of cloud computing I introduced an OpenStack install in our sandbox, not because I thought we would stay on a private cloud but because it allowed our team to pull back the covers and understand what our cloud vendor was doing.

It was an adoption accelerator that enabled us to choose a vendor that was appropriate and to avoid the long tail of implementation.

It was valuable as a pivot when AMD killed SeaMicro with short notice, and the full cloud migration period was dramatically shortened.

I have a dozen other examples, but it is like stock options, volatility and uncertainty dramatically increase the value of keeping your options open.

We will have vendors fold, and a single-source-only story couples your org to the success of that vendor.

IMHO there is a huge difference between tying your success to an Oracle, which may be 'safe' if expensive as a captive customer, and doing the same in uncertain markets.

Would you be willing (or able) to share more?


It's an SMB; if you need redundancy on every third-party dependency, your business will die anyway.

Better to take the risk for most things. If the worst case happens and you have to migrate, you migrate. Otherwise you risk overengineering upfront and guaranteeing reduced productivity rather than merely risking it.


We are probably closer than you think, and SMBs have zero leverage.

The point is not avoiding vendors or duplicating everything. The point is designing systems so the software/platform never becomes the point of control.

A self-hosted, minimal sandbox instance using simple containers and tools is one way to help avoid that lock-in trap.

It is not zero cost, but it is strategically important to make sure that vendors don't shape your enterprise, but support it.

IMHO systems should be designed to be as replaceable as possible, without adding the extreme complexity that a true 'multi-cloud' solution, for example, would require.

The point is that the vendor and/or platform can be replaced anytime the business changes its goals, markets shift, strategies change...

Keeping the door open and trying to minimize the migration cost is my point, not boiling the ocean.

Repurposing a decommissioned server or desktop with a GPU (a 3090 or RTX PRO 6000 Blackwell, not DC-class) running Linux/Podman and llama.cpp will help a team understand without much cost, but that is an ignorant-of-your-situation claim on my part.

We both very much agree that upfront multi-vendor implementations are a very bad idea. They suffer from the same problem IMHO: trying to plan past the planning horizon with aspects you have no control over.

Probably too much nuance to discuss here, but thanks for responding.


> Local models sound great until you realize you don't get a lot of the features we implicitly expect from hosted models. Many things would require additional investment in operations and setup to reach a comparable system. We ended up wanting things that would require us to roll our own memory system, harnesses for the model, compliance, and security.

That's not local models vs hosted models; that's the enterprise services from Anthropic. Any local LLM inference engine such as vLLM gives you an OpenAI-compatible API with the exact same features as a hosted model.

I'm not sure what your use case is, but I personally found Anthropic's offerings lacking and inferior to open source or custom-built solutions. I have yet to see any "memory" system that's better than markdown files or search, and harnesses for agentic AIs are a dime a dozen.


I don't blame you. I personally would consider revisiting it in the next month or so. A lot of people are saying some of these smaller models like Qwen 3.6 are basically at Claude Sonnet performance, if not better.

That level of hardware, if the performance was enough is a much smaller investment and gamble.

Either way I understand the decision. Your product isn't locally hosted LLMs, so why fuss. That said, when I see $1 million plus in external spend I start wondering about the options. Not saying you did the wrong thing; I think you did the right thing, but things seem to be changing on the local model front, and quite rapidly.


Local models perform objectively worse than SotA SaaS models. Your employees will hate this decision.

Only if you're vibe coding, with ambiguous prompts that require the model to fill in a huge number of gaps and basically write the software for you.

The people who don't really know what they're doing (or don't care) need the full power of the SOTA models; those with experience can provide enough context and instruction to make even small local models work.


Some of the latest batch are even more vibe-code friendly. It's pretty crazy. People are few-shotting small toy games and such with Qwen3.6. I'm personally not into that workflow, but yeah. It won't be long until the efficiency wave hits and small models are really all people need.

Some of the local models are effectively there. It depends on what scale you need or want. Kimi 2.6 is up there with Opus, granted it's huge. On some benches it's actually better. Qwen3.6 is up there with Sonnet, but it's nearly microscopic. A lot has changed in the last month.

A single nine so far. If GitHub is any guide, things will get worse.

Why would GitHub be a guide? It's also terrible, but it's a radically different stack from an unrelated company.

That, and even before AI, MS was having trouble with GH reliability

GitHub, along with MSFT in general, has massive Copilot mandates where workers are being shamed into using slop tools to fix serious ongoing issues. GitHub seems wholly incapable of resolving its issues: money isn't a problem, talent isn't a problem, but business leadership is definitely a major problem.

Look at how other companies are suffering massive outages due to LLMs too, like AWS and Cloudflare: two companies that used to be the best in the industry at uptime but have suddenly faltered quite quickly.

Companies that have even worse standards will quickly realize how problematic these tools are. Hopefully before a recession because this industry seems to be allergic to profitable businesses and leaders that have been around since ZIRP have shown zero intelligence in navigating these times.


None of the three major Cloudflare outages in the past six months had anything to do with LLMs. They were regular old human mistakes.

We did, however, determine that at least one of them (and perhaps all) would have been easily caught by AI code reviewers, had AI code reviewers been in use. So now we mandate that. And honestly, I love it, the AI reviewer spots all sorts of things that humans would probably miss.

(We also fixed a number of problems around configuration that would roll out globally too fast, leaving no time to notice errors and stop a bad rollout, as well as cases where services being down actually made it hard to revert the change... should be in a much better place now. But again, none of that had to do with LLMs.)


> None of the three major Cloudflare outages in the past six months had anything to do with LLMs. They were regular old human mistakes.

Is that true? At least one of them seemed to involve LLM-written code from what I saw. (Not to say that human error wasn't _also_ a contributing factor, but I wouldn't say it had _nothing_ to do with LLMs).

> We did, however, determine that at least one of them (and perhaps all) would have been easily caught by AI code reviewers, had AI code reviewers been in use. So now we mandate that. And honestly, I love it, the AI reviewer spots all sorts of things that humans would probably miss.

The reviewer is decent, but the false positive rate is substantial, and the false negative rate is definitely nonzero. Not that you would know that the way our genius CTO talks about it...


> Not that you would know that the way our genius CTO talks about it...

Honestly I find it bizarre that there are people at Cloudflare who have this attitude. Without Dane, the company wouldn't be half the size it is today.


Something unexpected that LLMs robbed from us is the grace of being assumed to have failed on our own, e.g. good ol' fashioned human/organizational failure.

We are spending the equivalent of 32 monthly software engineer salaries on Claude per month.

Info like this is useless without context: how much revenue does the company earn? How many engineers do they employ? Etc.

Our expense is roughly equivalent to 12.3 software developers when you break it down across all people-related expenses. But we've spent a lot of time and energy prior to this focusing on our ability to measure software development output across multiple teams. The delivery improvements are not evenly distributed across teams, but the increases we have seen suggest a better ROI than if we had hired 12 developers.

I guess if you think about your teammates as purely inputs and outputs and not people that can improve and contribute in the workplace in other ways.

It's genuinely hilarious how the same leadership pushing for RTO because getting people together "creates magic" seems to have no issue trading those same people for LLMs churning at specs.

Haha, nail on the head. So the motive for 'get your ass back in the office' was never the motive we all heard.

Respectfully, after a certain level of compensation, you are indeed judged purely on input and output. Workplace improvement does not justify your salary.

You will also find that many problems in the harder sciences do not get easier by throwing more bodies at them. Comments like these remind me that some project managers think they'd be able to deliver a baby in one month if they simply had nine women.


> Respectfully, after a certain level of compensation, you are indeed judged purely on input and output. Workplace improvement does not justify your salary.

I'd have to disagree. There's a narrow band in the middle where that's true, but once you exceed that, your personal inputs and outputs matter less and less, and the contributions you make to the overall workplace, and how well you enable those around you, make a larger part of why you're compensated.

Even as an IC, the more you're able to mentor and elevate the people around you, the more your compensation will grow (if you're in the right place, and thus already at the right earnings bracket)


> you are indeed judged purely off of input and output

That's not how successful (software, in this case) teams are made.


I would agree if the team I'm on were still growing/scaling. However, we are well past our scaling phase, and at this point our concern is maintaining multi-million-dollar contracts with a tight, well-compensated team.

Is it worth it?

He was fired before answering.

[but as his manager I can tell you:] YES !!!!


No, we can literally buy our own hardware for what we spend in a month and host our own local LLMs for company usage.

> and host our own local LLMs for company usage.

What local alternative could replace your Anthropic use? I have found none. I don't think many have, which is why most of us pay Anthropic, rather than using one of the numerous, far cheaper, cloud services that host "local" class models.

Most of us are paying for access to proprietary SOTA models, rather than hosting.


Speaking of developer tooling spend: IDEs such as JetBrains' are far harder to build, and I don't think any IDE charges this amount to any customer per month.

Not sure how much of a productivity gain $2.5 million per year buys.


Supply and demand - if you think it’s not worth the price, take your dollars elsewhere.

This is the brutal reality; even with the crazy reliability issues, demand is still far outstripping supply at the current price.


Run Facebook on a single Proxmox box and demand would still outstrip the supply.

What remains to be seen is whether that demand sustains in the long run at that price point or flattens out, proving to be highly elastic, given that there are many other providers catching up pretty fast.


IDEs don't need expensive GPUs to create and serve.

> single 9 of reliability

Out of curiosity, do you actually use it 24/7? The world doesn't collapse every time o365 goes down... (which is also pretty often)


In my experience the downtime tends to coincide with peak PT hours. If you're in PT, it's very inconvenient.

Yeah, I feel like all of the bad downtimes happen during American business hours. We use GitHub at work in Europe and I don't remember it ever being down or broken between 0700 and 1700 local time.

That's statistically just luck then; there have been plenty of outages this year during Berlin work hours. I do remember the forced breaks with colleagues for sure.

If it's judged only by the time it is expected to be in use (work hours), reliability is likely even worse than the 24/7 measure suggests.

Five nines? No, nine fives

> has our executive team furious

And yet they will continue to spend wheelbarrows full of money with Anthropic because they want so badly to reach the point where they can fire you.


I think there is a lot of baseless fury behind your words, but my regular interactions with my leadership don't lead me to think they have the end goal of replacing labor. We're blessed to have leadership with technical backgrounds, so the tools are regarded more as significant intelligence enhancers for already exceptionally smart engineers, rather than replacements.

It doesn't seem like wheelbarrows of money to us when you consider the average AWS/Azure bill.


> I think there is a lot of baseless fury behind your words,

Hardly baseless when people have been gloating about how programming as a job is ending any day now for the last year at least.

> It doesn't seem like wheelbarrows of money to us when you consider the average AWS/Azure bill.

You didn’t mention the size of the company so yeah.


Not ever hiring juniors and eventually mids is just replacing labor with extra steps.

Throwing bodies at a problem doesn't always scale. There are many difficult problems that do not get easier by throwing more juniors or mid level engineers at them.

Having just worked my behind off for the last few months to deliver on an impossible deadline, successfully: more bodies definitely would have helped.

Even just to keep the fluff off my back and to allow me to fully concentrate on what's important.

The situation will repeat itself in 6 months and I'm not going to do that again. Hiring now would fix that.


I think the message you responded to already refuted your point of view.

Huh? Your other comment explicitly said you were replacing labor: https://news.ycombinator.com/item?id=47939146

> the increases that we have seen suggest a better ROI than if we had hired 12 developers.

You can’t argue “we were able to get away with not hiring more developers” and also say you aren’t replacing labor.

Morally I trend towards your side of things, but it’s also important to be realistic about what you’re actually doing. Money is going towards Anthropic and not towards new hires. That’s a replacement of labor. It doesn’t matter what the end goal was.


“Baseless fury”

I’m glad your leadership isn’t trying to fire everyone. But in case you live under a rock, tech layoffs are at all time highs. Companies are rewarded by the public markets for laying off workers.

Simultaneously we have AI industry leaders warning of an employment apocalypse once AGI is achieved.

And you think it’s baseless. Have some class bro.


Is the $200k just for development, or do the products being developed require AI?

I wonder if self-hosted models would be a sensible step for your organization.

Seems to be back now (claude code at least)

They must have hired absolutely incompetent leaders on the core software and infrastructure side. Sure, their AI research is great, but it's amateur hour. Or just vibe-coded slop top to bottom. It seems like every single day people are talking about outages, billing issues, or secret changes to how Claude works.

They're getting high on their own supply, and instead really need to hire some senior engineers.

Imagine how much money they would save if they switched to Codex.

Not everyone can (due to corporate compliance requirements, e.g. the ease of ensuring the LLM doesn't train on anything).

Besides, codex wasn't always the answer.


Just give them more money, surely it'll get better.

/s


Your comment on vagueness misses its mark.

> business leader throwing those salutes and backing it up with talk of a "white homeland"

It is not every commenter's duty to cite their sources when you have the ability to easily infer the context and search the internet. These are very well-documented actions they refer to. Your attempts to drive sentiment by casting doubt are noticed.


If this were completely uncharted territory, you might have a leg to stand on here. But you are correct that this is exactly how Facebook started, and we know exactly how that goes, the poster is correct that this just leads to harassment at scale.

The author's response was the main problem, showing a complete lack of character or ethical concern. There is a world of difference between being a hacker with a sense of rebelliousness and a jerk who thinks there should be zero consequences to their actions.


If we're using the Facebook example to call this unacceptable, we should really be fighting a lot harder against Facebook itself. Because it still has a reasonably positive reputation overall and it's affecting billions of people.


> If we're using the Facebook example to call this unacceptable, we should really be fighting a lot harder against Facebook itself.

I don't think many here would disagree with you.

> Because it still has a reasonably positive reputation overall and it's affecting billions of people.

I'm gonna disagree with you. Maybe it's because I live in the Bay Area, so the culture is affected by the proximity of tech companies. But my family in the middle of the country mostly seems to be on the same page, so I don't know how else to explain it. It may be that I'm drawn to people who care about these topics, and some degree of sameness is expected within family dynamics resulting from the values our parents raised us with. Whatever.

I think a good portion of society considers FB a garbage product but doesn't know of an alternative and just accepts it for what it is. I think a smaller portion of society recognizes that they are amoral and terrible for society. How many countries have now discussed legislation to limit kids' access to social media (whether you agree or disagree)? That didn't spring out of nowhere fully formed. Years of criticism got us there.


> Maybe it's because I live in the Bay Area so the culture is affected by the proximity of tech companies. But my family in the middle of the country mostly seem to be on the same page, so I don't know how you explain that.

I can explain that. 100% of Americans add up to roughly 5% of the world's population. As such, there are billions of non-American users with very different viewpoints and opinions.


Yes, we really should be! You’ve hit it on the nose with that point: Facebook has been a stalker with effectively legal immunity in a lot of people’s lives for quite a long time. I’m glad to see others realizing it, too. The more that do, the sooner their formerly-untouchable behavior becomes unacceptable.


> we should really be fighting a lot harder against Facebook itself.

yes. correct.

> Because it still has a reasonably positive reputation overall and it's affecting billions of people.

Does it? It's like the power company: you just kind of have to use it, or else you go without.


Indeed, it should burn in hell, and most of its companion platforms and its competitors should join it.


"There is a world of difference between being a hacker with a sense of rebelliousness and a jerk who thinks there should be zero consequences to their actions."

Given the external consequences of certain actions, for all intents and purposes that "world of difference" may exist only inside their skull.


- Own a monopoly
- Inherit your fortune
- Run a criminal enterprise

Using just these three filters alone, you encompass more than 99% of all billionaires in existence. The number of billionaires who do not fit into these categories could barely fill a family-sized vehicle.

The criteria here suggest that there is a specific sociopathic personality requirement to being a billionaire, as each category can be argued to be harmful to society.


I've been thinking that you can divide businesses on two axes,

                            Scalable - Many customers
                                     |
    Short-term/       Ponzi Scheme   |    Monopoly         Long-term/
    Transactional  --------------------------------------   Relational
                       Contracting / |   Consulting /
                       Retail etc    |   Therapy etc
                                     |
                        Non-scalable - Few customers
And mathematically, only businesses at the top of the graph are capable of generating a billion dollars. Hence, if you are looking to be a billionaire, the path lies either through a Ponzi scheme or through a monopoly. Both of them, in their most pure form, are illegal, and the challenge in the business model is to execute on them while staying just barely on the right side of the law.


Well, your point somewhat stands, but there are many examples of retail, contracting, and consulting companies in excess of a billion dollars.


Which one is Minecraft?


The Republican party is wholly a party for the Rich and wealthy. All other claims to the contrary are attempts to deceive people that this is not the case.


I want to disincentivise people with wealth using it to corrupt systems of power into doing what they want.

> Wealthy people find loopholes, and so you end up taxing the middle class and limiting social mobility with those initiatives.

Sounds like we should get rid of these wealthy people then...


Let me know when you find a way that makes sense.

I’m a socialist, but I have a brain.

Anything you can think of to make wealthy people cease to exist is easily bypassed, so the best way is to find ways to tax behaviour instead.

The point of money is how you use it. If you have a 50,000x tax on superyachts and private aircraft, then the ultra-rich are forced to pay your tax or try to skirt it by using smaller boats or coalescing their private jets into a private airline.

But if you tax stocks, then people will invest in other ways. If you tax individuals owning large properties, then they'll move the ownership into a company; if you tax inheritance, then they'll put the money into a fund with debts that will be written off in time. All kinds of fancy, tricky accounting.

The other solution is to tax everyone on unrealised gains, which makes every homeowner (including pensioners) suddenly liable for huge ongoing bills.

Elon himself, for example, is pretty cash-poor but owns a lot of stock in a "high value" company, meaning his wealth on paper is pretty extreme. He takes on debt (which incurs no income tax) and then pays it off with stock, where it also avoids being taxed as it's never realised.

I think it's a harder problem than you give it credit for.


How would it work if we treated money obtained by borrowing against stock holdings as "realized gains"? That seems like a loophole that could be closed.


What's the difference with a mortgage then? A securities-backed loan.


Well, nothing. I think what is being proposed is to trigger existing capital gains taxes when an asset is borrowed against, the same as if it were sold. Most places already exempt personal homes from capital gains taxes, so it wouldn't affect them. It would affect:

- someone who bought an investment property, which then appreciated, and then they wanted to take out a larger mortgage against the appreciated value to leverage it into buying another property.

- Someone borrowing against stock to avoid realising gains by selling it

That seems… reasonable to me?
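As a toy sketch of the scheme above: treat the borrowed amount, capped at the total unrealized gain, as realized at the moment of the loan. All numbers and the flat 20% rate below are hypothetical illustrations, not actual tax rules.

```python
def tax_on_collateralized_loan(cost_basis: float, market_value: float,
                               loan_amount: float, rate: float = 0.20) -> float:
    # Toy model: a loan collateralized by stock "realizes" gains on the
    # borrowed amount, capped at the total unrealized gain, taxed at a
    # flat hypothetical rate.
    unrealized_gain = max(market_value - cost_basis, 0.0)
    realized = min(loan_amount, unrealized_gain)
    return realized * rate

# Borrow $10M against stock bought for $2M, now worth $50M:
# the $10M draw is treated as realized gain and taxed at 20%.
tax = tax_on_collateralized_loan(2e6, 50e6, 10e6)
```

Under these assumptions the borrower would owe $2M at loan origination, the same as if they had sold $10M of the appreciated stock outright.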


Thank you. Yes, that's precisely what I mean. I've floated the same idea a few times on this forum and others. I've asked, but have yet to see someone point out a systemic downside. (I'm not any kind of financial sophisticate, so I'm well aware that I might be missing something!) In fact, it seems to me that having people finance their lifestyles by borrowing against assets adds a degree of leverage risk to the system, and ought to be discouraged just on that basis.


Republicans just play whatever meta will win them power until they get into office. It's a time-honored tradition at this point, so what they say their beliefs are and what they actually do will often be two separate things.


It's a watering-hole attack. Any time your iPhone sends an HTTP request to a compromised site (via an ad, a link, embedded content, etc.), your device can be exploited. There really isn't a way to permanently defeat this. We are about to see an explosion of novel attack types using this exploit as their basis; you realistically cannot defend yourself against these without either updating or no longer using an iPhone.


> Any time your iPhone sends an HTTP request to a compromised site (via an ad, a link, embedded content, etc.), your device can be exploited.

Would it help to disable Javascript on untrusted sites via Brave?


What are you talking about?

Why are we about to see an explosion?


People have always been fascinated by where they might find monsters. This one so happens to be in our history, so many will find it noteworthy.

