This is a big part of why I'm looking to develop a local LLM capability: having the hardware is a good start, but so is building an understanding of what SoTA local edge models can actually do, so we're not crippled if remote models stop being served. At minimum it's some risk management.
It doesn't solve the problem of general LLM dependency (at the end of the day we gotta keep our brains sharp), but it does mean that if we set up an LLM-based workflow, it isn't all of a sudden put at risk when remote access disappears.