Hacker News | ellisv's comments

I honestly can’t believe how poorly JetBrains has done. I used to love PyCharm but now it’s so far behind. I still use DataGrip but it is absolute dogshit when it comes to agentic coding.

I use JetBrains' all-you-can-eat subscription, which comes with their Junie coding agent and includes some free tokens to cover my coding needs. I then top up tokens on an as-needed basis. Costs me about $100/month in AI tokens (well, I bill my clients for that separately, so I don't really care about the price). It all works like a charm. I mostly use their CLion, WebStorm and PyCharm IDEs for development, sometimes others as well. All in all the dev experience is excellent and far exceeds that of Cursor, which I tried for a while.

Not sure what problems people here have with JetBrains' offerings.


IntelliJ is a bit dated, and its plugins are too. I use IntelliJ all the time, in its various incarnations, but VS Code is really up there now.

I use both (not IntelliJ but other IDEs) and quite frankly I find VS Code and derivatives very much inferior. For C++ development, for example, CLion vs VS Code (with the needed plugins installed) is night and day, and not to the benefit of VS Code.

I know JetBrains products can be sluggish on "normal" computers, but all 4 of my development machines run 16-core AMD CPUs with 128GB RAM. They fly in environments like that.


Unless you do Jakarta EE development, where Cursor with its simple LSP support is far, far behind. Cool for generating a bean, but when you get to debugging deployment descriptors you wish you were in IntelliJ.

Yeah, and it seems to be completely self-inflicted. I created a small personal skillset that explains to the agent how to use the JetBrains MCP tools for refactorings/find-usage/navigation, and it improved its performance by a lot.

Yet JetBrains tried to do everything themselves and failed :(


I was a massive JetBrains fan, and I still believe it's the best IDE even with its massive performance issues.

But I just... barely use an IDE anymore. I'm on the lowest possible "all products" subscription tier you can have (at least as an outsider), and I think I'm going to cancel this year. I've been paying for over a decade.


I am subscribed to their all-you-can-eat plan and use their Junie coding agent, which is included with the subscription along with some free tokens. I then pay for extra tokens on an as-needed basis and it all works like a charm. So far I pay (well, my clients do, as I bill for that separately) about $100 a month to cover my current coding needs. I mostly use their CLion, WebStorm and PyCharm IDEs for development, sometimes others as well. All in all the dev experience is excellent and far exceeds that of Cursor, which I tried for a while.

Not sure what problems people here have with JetBrains' offerings.


Once you work somewhere that gives you unlimited opus 4.6 and learn how to use it properly, your perspective on what you should be doing day to day shifts.

Honestly unlimited codex with 5.4 high has a similar effect.

SOTA models + harnesses used together are very different from what they were 6 months ago. People who have significant software engineering experience can get so much done it's scary.


I have what you call "significant engineering experience", decades of it to be precise, and I have designed and developed many complex products successfully used in various industries.

I do not need to "shift my perspective", since I already use agents to the degree that I need and they help me very much. I am way more productive with them.

Generated code is still not perfect, regardless of the particular model (I have access to all of them). I have to watch and fix, sometimes by supplying more precise specs, sometimes by asking it to rewrite a piece of code in such-and-such a manner using this or that structure.


Wasn't meant to be personal; I was using the proverbial "you".

I keep seeing what I'm referring to happen: folks are using/opening their editors less and less.

What's crazy is that a developer can go on a walk and use tmux/Tailscale and keep working as if they were sitting at their desk.


I keep hearing this, but I have yet to see "so much getting done" anywhere. I'd sure like to, but things seem to pretty much be business as usual.

This was absolutely the case (not actually that much more productive) until only a few months ago.

We hit some sort of tipping point between models, harnesses, and people learning how to use the tools, I don't know.

And direct reports / engineers / friends seem happier.

Simon Willison recently did a podcast where he discussed his experience, and it felt very familiar.

Worth listening to (ignore the clickbait title): https://www.youtube.com/watch?v=wc8FBhQtdsA


It's called the nondelegation doctrine, which forbids one branch of government from authorizing another branch of government to exercise its functions.

Because Article One vests "all legislative powers" to Congress, they cannot delegate legislative powers to the Executive and Judicial branches (because then not all legislative powers would be vested with Congress).


How does that not prohibit delegation of Federal regulations to the Executive Branch?


About 10 years ago I remember seeing a number of posts saying "don't use int for ids!". Typically the reasons were things like "the id exposes the number of things in the database" and "if you have bad security then users can increment/decrement the id to get more data!". What I then observed was a bunch of developers rushing to use UUIDs for everything.

UUIDv7 looks really promising but I'm not likely to redo all of our tables to use it.


You can use the same techniques, except with the smaller int64 space; see e.g. Snowflake ID: https://en.wikipedia.org/wiki/Snowflake_ID


Note that if you’re using UUID v4 now, switching to v7 does not require a schema migration. You’d get the benefits when working with new records, for example reduced insert latency. The uuid data type supports both.


also barely readable with dark mode


Dark mode looks fine to me: light gray on black.

Light mode is terrible: dark gray on black.


You get light grey? The headings are #101828 and body text #364153 on #0a0a0a for me.
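
For what it's worth, those hex values can be checked against the standard WCAG 2 contrast formula; a quick sketch (plain WCAG math, nothing site-specific):

```python
def srgb_to_linear(c8: int) -> float:
    """Linearize one 8-bit sRGB channel (WCAG 2.x formula)."""
    c = c8 / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a '#rrggbb' color."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast(fg: str, bg: str) -> float:
    """WCAG contrast ratio, from 1:1 up to 21:1."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Body text #364153 on background #0a0a0a comes out around 1.9:1,
# far below the 4.5:1 WCAG AA minimum for normal text.
```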


Light mode text: oklch(37.3% .034 259.733)

Dark mode text: oklch(87.2% .01 258.338)

Background in both modes: oklch(14.5% 0 0)

See here: https://jsfiddle.net/kmtwf4g3/

I think the website author's fuckup is that the background is black instead of white in light mode. Otherwise the text colors would be fine as they are. Probably vibe-coded and never tested in light mode.


Not vibe-coded! I just didn't realize that the `prose` mode with Tailwind changes the default text colour set on other pages.

Lesson learned!


Light gray on black is not fine. The background should not be pure black; the text blurs.


I'm in a daylit room on a laptop screen in dark mode. Some of the text was literally invisible. If the site's owner is reading this, you should fix that.


I wish devs would normalize their data rather than shove everything into a JSON(B) column, especially when there is a consistent schema across records.

It's much harder to set up proper indexes and enforce constraints, and it adds overhead every time you actually want to use the data.


JSON columns shine when

* The data does not map well to database tables, e.g. when it's tree structures (of course that could be represented as many table rows too, but it's complicated and may be slower when you always need to operate on the whole tree anyway)

* your programming language has better types and programming facilities than SQL offers; for example in our Haskell+TypeScript code base, we can conveniently serialise large nested data structures with 100s of types into JSON, without having to think about how to represent those trees as tables.


You do need some fancy in-house way to migrate old JSONs to new JSON in case you want to evolve the (implicit) JSON schema.

I find this one of the hardest parts of using JSON, and the main reason why I'd rather put it in proper columns. Once I go JSON, I need a fair bit of code to deal with migrations (either doing them as explicit migrations, or doing them at read/write time).
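
One common pattern for the read/write-time variant (a hypothetical sketch, not any particular library): store a version field in each blob and upgrade it one step at a time on read.

```python
import json

# Hypothetical schema evolution: each blob carries a "_v" version field
# and is upgraded step by step at read time until it is current.
def _v1_to_v2(d: dict) -> dict:
    d = dict(d)
    d["price_cents"] = round(d.pop("price") * 100)  # float dollars -> int cents
    d["_v"] = 2
    return d

def _v2_to_v3(d: dict) -> dict:
    return {**d, "currency": d.get("currency", "USD"), "_v": 3}

MIGRATIONS = {1: _v1_to_v2, 2: _v2_to_v3}
CURRENT_VERSION = 3

def load(raw: str) -> dict:
    """Parse a stored blob and migrate it up to CURRENT_VERSION."""
    doc = json.loads(raw)
    while doc.get("_v", 1) < CURRENT_VERSION:
        doc = MIGRATIONS[doc.get("_v", 1)](doc)
    return doc
```

The chain of single-step functions is what keeps old blobs readable forever without a big-bang rewrite of the table.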


Since OP is using Haskell, the actual code most likely won’t really touch the JSON type, but the actual domain type. This makes migrations super easy to write. Of course they could have written a fancy in-house way to do that, or just use the safe-copy library which solves this problem and it has been around for almost two decades. In particular it solves the “nested version control” problem with data structures containing other data structures but with varying versions.


Yes, that's what we do: Migrations with proper sum types and exhaustiveness checking.


I find that JSON(B) works best when you have a collection of data with different or variant concrete types of data that aren't 1:1 matches. Ex: the actual transaction result if you have different payment processors (paypal, amazon, google, apple-pay, etc)... you don't necessarily want/care about having N different tables for a clean mapping (along with the overhead of a join) to pull the transaction details in the original format(s).

Another example is a classifieds website, where your extra details for a Dress are going to be quite a bit different than the details for a Car or Watch. But, again, you don't necessarily want to inflate the table structure for a fully normalized flow.

If you're using a concretely typed service language it can help. C# does a decent job here. But even then, mixing in Zod with Hono and OpenAPI isn't exactly difficult on the JS/TS front.
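
A hypothetical sketch of that shape in Python: the shared transaction fields stay as regular columns, while the processor-specific response rides along verbatim and is decoded via a discriminator only when needed (all field names here are made up).

```python
from dataclasses import dataclass
import json

# Hypothetical shape: shared transaction fields are regular columns; the
# processor-specific response lives verbatim in a JSON(B) column and is
# decoded via the "processor" discriminator only when needed.
@dataclass
class Transaction:
    id: int
    amount_cents: int
    processor: str  # "paypal", "apple_pay", ...
    raw: str        # the original processor response, stored as JSON

    def details(self) -> dict:
        payload = json.loads(self.raw)
        if self.processor == "paypal":
            # made-up field names, for illustration only
            return {"capture_id": payload.get("id"),
                    "status": payload.get("status")}
        return payload  # unknown processors: hand back the payload as-is
```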


Yeah, document formats (JSONB) are excellent for apps that interface with the messy real world: e-commerce, government systems, anything involving forms, payments, etc.

Try to map everything in a relational way and you're in a world of pain.


For very simple JSON data whose schema never changes, I agree.

But the more complex it is, the more complex the relational representation becomes. JSON responses from some APIs could easily require 8 new tables to store the data, with lots of arbitrary new primary keys and foreign key constraints, and your queries will be full of JOINs that need proper indexing set up...

Oftentimes it's just not worth it, especially if your queries are relatively simple, but you still need to store the full JSON in case you need the data in the future.

Obviously storing JSON in a relational database feels a bit like a Frankenstein monster. But at the end of the day, it's really just about what's simplest to maintain and provides the necessary performance.

And the whole point of the article is how easy it is to set up indexes on JSON.


When a data tree is tightly coupled (like a complex sample of nested data with some arrays from a sensor) and the entire tree is treated like a single thing by writes, the JSON column just keeps things easier. Reads can be accelerated with indexes as demonstrated here.


I fully agree that's wrong (I can't imagine the overhead some larger tables I have would incur if that had happened). That said, people often want weird customizations in medium-sized tables that would set you on a path to annoying 100-column tables if you couldn't express the customizations in a "simple" JSON column (that is more or less polymorphic).

Typical example is a price-setting product I work on.. there's price ranges that are universal (and DB columns reflect that part) but they all have weird custom requests for pricing like rebates on the 3rd weekend after X-mas (but only if the customer is related to Uncle Rudolph who picks his nose).


But if you have to model those custom pricing structures anyway, the question is what you gain by not reflecting them in the database schema.

There's no reason to put all those extra fields in the same table that contains the universal pricing information.


A lot of unnecessary complexity/overhead for a minor, seldom-touched part of a much larger, already complex system?

I'll give a comparison.

JSON

- We have some frontend logic/views (which can be feature-flagged per customer) to manage updating the data, which otherwise mostly tags along as a dumb "blob" (auto-expanded to regular JSON objects/maps/arrays at the API boundary, making frontend work easier: objects on the frontend, "blobs" on the backend/DB)

- Inspecting specific cases (most of the time it's just null data) is just copying out and formatting the special data.

- If push comes to shove, all modern databases support JSON queries, so you can pick out specifics if needed (has happened once or twice with larger customers over the years).

- We read and apply the rules when calculating prices with a "plugin system"

DB Schema (extra tables)

- Now you have to wade through lots of customer-specific tables just to find the tables that take most of the work time (customer specifics are seldom what needs work once set up). We already have some older customer-specific stuff from the early days (I'm happy that it hasn't happened much lately).

- Those _very_ few times you actually need to inspect the specific data by query, you might win here (but as mentioned above, JSON queries have always solved it).

- Loading the universal info now needs to query X extra tables (even when 90%-95% of the data has no special cases).

- Adding new operations on prices (copying, etc.) now needs logic for each customer-specific table to make it tag along properly.

- Modelled "properly", this reaches the API layer as well.

- Frontend specialization is still needed

- Calculating prices still needs its customizations.

I don't really see how my life would have been better managing all the extra side effects of bending the code to suit these weird customer requests (some from companies that aren't even customers anymore) when 90-95% of the time it isn't used, and it's seldom touched once customers mature.

I do believe in the rule of three: if the same thing pops up three times, I consider whether it needs to be graduated into more "systematic" code. Often, when you abstract after seeing something only twice, it never appears again, leaving you with an abstraction to maintain.

JSON columns, like entity-attribute-value tables or goto statements all have real downsides and shouldn't be plonked in without a reason, but hell if I'd have to work with overly complex schemas/models because people start putting special cases into core pieces of code just because they heard that a technique was bad.


Normalisation brings its own overhead though.


These types of books are always interesting to me because they tackle so many different things. They cover a range of topics at a high level (data manipulation, visualization, machine learning) and each could have its own book. They balance teaching programming while introducing concepts (and sometimes theory).

In short I think it's hard to strike an appropriate balance between these but this seems to be a good intro level book.


I never liked the global leaderboard since I was usually asleep when the puzzles were released. I likely never would have had a competitive time anyway.


I never had any hope or interest to compete in the leaderboard, but I found it fun to check it out, see times, time differences ("omg 1 min for part 1 and 6 for part 2"), lookup the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend so it was a good excuse to say hi.


I believe that Everybody Codes has a leaderboard where it starts counting from when you first open the puzzle. So if you're looking for coding puzzles with a leaderboard that one would be fair for you.

https://everybody.codes/events


The IDE has been available for a while.


In beta state.


Precip.ai or go grab the MRMS data yourself


We can't pull/push to our repos:

    git@github.com: Permission denied (publickey).
    fatal: Could not read from remote repository.
    
    Please make sure you have the correct access rights
    and the repository exists.


Same in Canada and for colleagues in the UK and Poland.


Same here (Canada east coast)

Edit: I was just able to pull again


Germany here. Seems to be global then.

