In my personal experience as a music producer for the last 36 years, MIXING hundreds of channels benefits enormously from the available bandwidth. Think of it in terms of graphics anti-aliasing. If you open your canvas at 1920x1080, for example, and draw a diagonal line, your line will be jagged (aliased) to a certain extent. If, on the other hand, you start a canvas at 7680x4320, draw the same diagonal line, and then rescale your output back to Full HD, your line will be perfectly smooth with no visible aliasing whatsoever. It is absolutely the same principle when mixing music: I MIX everything at 192kHz and I PUBLISH at 48kHz. And, yes, my ears can hear the difference perfectly fine. But do people like me, who are forced to run their audio clock at 192kHz most of the time, deserve a DSP processor like this? It could be very useful, yes.
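For anyone who wants to see that analogy in code, here's a tiny numpy sketch; the canvas sizes and the line's slope are arbitrary demo choices:

```python
# Supersampling demo: rasterize a diagonal line at 4x resolution,
# then box-filter down. Pure numpy; sizes/slope are arbitrary.
import numpy as np

def raster_line(size):
    """Naively rasterize a shallow diagonal onto a size x size canvas."""
    img = np.zeros((size, size))
    xs = np.arange(size)
    ys = (xs * 0.37).astype(int)  # slope 0.37 keeps it in-bounds
    img[ys, xs] = 1.0
    return img

lo = raster_line(128)                       # aliased: hard stairsteps
hi = raster_line(512)                       # same line, 4x supersampled
# Downscale 4x by averaging 4x4 blocks (a box filter).
smooth = hi.reshape(128, 4, 128, 4).mean(axis=(1, 3))
# 'smooth' has fractional gray along the edge -- that's the anti-aliasing.
print(np.unique(lo))      # only 0 and 1
print(np.unique(smooth))  # includes intermediate values
```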
We've been using Nyquist's work and anti-aliasing filters for nearly as long as we've been using digital audio at all.
Your DAW (or whatever) may be able to show you the stairsteps of individual samples on a screen, but with a functional playback system it is never that way at all by the time things become analog again. Instead, it's always smoothed out by the reconstruction filter.
It works this way regardless of sampling rate. The stairsteps don't make it outside of number-land. You can run your DAW at 48kHz, 96kHz, or 192kHz, and signals below the lowest Nyquist frequency among them will be identical on an oscilloscope -- and free of stairsteps. (Try it sometime. It's fun.)
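If you'd rather try it in software than on a scope, here's a quick numpy/scipy sketch; the 1kHz tone and the pair of rates are arbitrary picks:

```python
# The same in-band tone, captured at two different rates, matches
# once both are brought to a common rate.
import numpy as np
from scipy.signal import resample_poly

f, dur = 1000.0, 1.0  # a 1 kHz tone, well below both Nyquist limits
t48 = np.arange(int(48_000 * dur)) / 48_000
t192 = np.arange(int(192_000 * dur)) / 192_000
x48 = np.sin(2 * np.pi * f * t48)
x192 = np.sin(2 * np.pi * f * t192)

# Bring the 192 kHz capture down to 48 kHz (the polyphase resampler
# handles the band-limiting) and compare against the native capture.
x192_down = resample_poly(x192, up=1, down=4)
# Compare away from the ends to ignore the filter's edge transients.
err = np.max(np.abs(x48[500:-500] - x192_down[500:-500]))
print(f"max in-band difference: {err:.2e}")  # small: ripple, not signal
```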
Aliasing is a problem that was solved a long time ago. Your analogy about scaling and diagonal lines is actually a decent visual representation of how this stuff works, except it has already been working that way all along, without anyone having to be deliberately clever with overkill sampling rates.
Meanwhile: This Pi Pico DSP stack is geared very heavily towards being the last digital stage of a listening system. As constructed, it's clearly not meant to be anything else. A person can certainly bend it to be other things (yay open source!), but you've probably already got a set of filters well-integrated into your existing toolchain that work superbly.
But if that's what you want, then by all means: Use it. Integer sampling rate conversions are trivial operations to get correct. To get the 96kHz that this project works with from your 192kHz workflow, it's just a matter of low-pass filtering at the new Nyquist frequency and throwing away half of the samples -- which is exactly what any standard decimator does. Done that way, nothing folds back into the audible band, and the reconstruction filter in the digital-to-analog stage takes care of the rest.
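If you don't want to roll your own, scipy's stock decimator does the filter-then-drop step in one call; a minimal sketch (the 440 Hz test tone is just a stand-in for a real mix bus):

```python
# 192 kHz -> 96 kHz conversion using scipy's standard decimator,
# which low-pass filters before discarding samples.
import numpy as np
from scipy.signal import decimate

fs_in = 192_000
t = np.arange(fs_in) / fs_in           # one second of audio
x = np.sin(2 * np.pi * 440.0 * t)      # stand-in for real program material

# decimate(x, 2) filters at the new Nyquist (48 kHz) and keeps every
# other sample; naive x[::2] would skip the filter and fold any
# ultrasonic content down into the audible band instead.
y = decimate(x, 2, ftype="fir", zero_phase=True)
print(len(x), "->", len(y))            # 192000 -> 96000
```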
A sampling rate of 192kHz is overkill. And 192kHz exists as a sample rate in the audio world precisely because it is overkill.
With a Nyquist frequency of ~96kHz, all of the arguments about whether a person can hear up to, e.g., 22.05kHz or 24kHz, or whether there's something meaningful all the way up at 48kHz, become completely moot.
Those arguments were always such tiresome ordeals.
The cost of dissolving those arguments is just some bandwidth and CPU cycles -- which is to say, it costs approximately nothing.
Oh, it's worse than that: for distribution and playback, sampling at more than 48kHz is likely worse in many ways, due to unwanted ultrasonic noise and increased intermodulation distortion. 96/24 makes sense for production, and 96/float64 is common in DSP chains.
When the production chain produces unwanted ultrasonic noise, that's not a sampling rate problem. It is a production problem.
And that's perfectly OK, too: The neat part about having too much data is that other end-users (like you and me) are free to throw it away as expeditiously as we choose to.
To that end: I, for one, welcome our 192kHz overlords. (And then I'll shove it through my hardware DSP that operates at 24-bit 48kHz and fuhgettaboutit.)
I don't LISTEN to music at 192kHz. I listen at 48kHz like everyone else and it sounds perfectly fine. But I do MIX my music at 192kHz before its final export to 48kHz. It is about the anti-aliasing principle I described in my post above. While I'm mixing, my audio clock is at 192kHz, and I can't escape that. Hence, I will be looking at how to run this project on a beefier device that could run at a 192kHz sample rate.
Very nicely done, congrats, but not a word about the content? What is available to show on such a device? A self-refreshing single URL only? A full-blown Home Assistant client? What does the admin panel provide?
Right now it's a structured display rather than just a single URL refresh.
Out of the box it handles things like time, schedules, rotating messages, and weather. The layout is driven by JSON and the admin UI lets you define slides, fields, and timing without touching code. There should be screenshots available on the GitHub.
It's not trying to be a full Home Assistant frontend — more like a lightweight, purpose-built display that boots straight into something usable.
You can extend it pretty easily since it's just PHP + JSON under the hood, but the default goal was: install it, get an immediately working wall display, then customize from there. Change the number of fields per slide. Change whether those fields are static or weekly. Change slide titles. Change font sizes and type settings. Adjust screen timing. Turn slides on or off. Set dimming schedules, reboot schedules, and screen-off schedules. Change the weather location. Set up remote updating via email, and limit email updating to a single email account defined in the settings.
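For a rough idea of what a JSON-driven layout like that can look like, here's a hypothetical slide definition. Every field name below is invented for illustration; the real schema and screenshots are on the GitHub:

```json
{
  "slides": [
    {
      "title": "This Week",
      "enabled": true,
      "duration_seconds": 15,
      "fields": [
        { "type": "static", "label": "Bins out", "value": "Thursday" },
        { "type": "weekly", "label": "Dinner", "source": "meal_plan" }
      ]
    },
    {
      "title": "Weather",
      "enabled": true,
      "duration_seconds": 10,
      "fields": [ { "type": "weather", "location": "Berlin" } ]
    }
  ],
  "display": { "dim_start": "22:00", "dim_end": "06:30" }
}
```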
Interesting. I use HA myself (though not the AI/LLM side yet). I think it's a different layer: Home Memory is the physical reality of the whole house, not just what's wired and smart. How do you picture "running inside"? Chat from the HA UI, or just running on the same hardware? I haven't dug into the feasibility yet. The part I'm fairly sure about: this only shines with a model that reliably uses tools. 23 MCP tools is a lot for a small local model; Claude- or GPT-tier models handle it fine. What conversation agent are you running in HA?
I've been running Home Assistant for 5 years now. No turning back, it's so addictive (in a good sense). I didn't have a chance to start with AI/LLM in HA either, nor did I ever have a chance to use speech with HA. It will all come this year, I hope.

The other day, going through a cluttered drawer, looking for something, it dawned on me that the perfect solution to that problem would be if I could talk to an AI and explain that item X is in drawer Y of cabinet Z in room A. That would be a perfect interface to an inventory management application. And, since I'm using HA more and more for absolutely everything in my life, the perfect place to keep all that data would be HA itself. Later on, all that data will be useful in HA anyway, one way or the other.

So, yes, if I had to build separate talk-to-an-LLM-in-every-room infrastructure, that would make it much harder to include other projects in the pipeline. The HA mobile app already has talk-to-HA functionality, which I can't use for Home Memory. And, frankly, having some home-related data in Home Assistant and some other home-related data in Home Memory feels like a split-brain situation to me. So, ideally, Home Memory should be just an integration/add-on/HACS package for Home Assistant, in my humble opinion.

What you did there with a Windows server is a great start, and I will definitely test it out, but eventually I strongly believe it should be fully integrated into Home Assistant. Let me know if I can be of help. Thanks.
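On the tool-use side, here's a minimal sketch of what an inventory tool for that idea could look like as an MCP server, using the official MCP Python SDK's FastMCP. The tool names, fields, and in-memory storage are all invented for illustration, not taken from Home Memory:

```python
# Hypothetical Home Memory-style inventory tools as an MCP server.
# Only the FastMCP API is real (official MCP Python SDK: `pip install mcp`);
# everything else here is an illustrative sketch.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("home-memory")
inventory: dict[str, str] = {}  # toy in-memory store; a real one would persist

@mcp.tool()
def store_item_location(item: str, container: str, room: str) -> str:
    """Remember that an item is kept in a container in a room."""
    inventory[item.lower()] = f"{container} in the {room}"
    return f"Noted: {item} is in the {container} in the {room}."

@mcp.tool()
def find_item(item: str) -> str:
    """Recall where an item was last stored."""
    where = inventory.get(item.lower())
    return f"{item} is in the {where}." if where else f"No record of {item}."

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```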
You just gave me the idea to 1) photograph a cluttered drawer and have AI identify all the things, 2) have it take that output and structure it (e.g., items A, B, and C are in drawer 1), and 3) maybe even put cheap cameras in the important places that update periodically. I dunno. Also maybe have it connect to this Home Memory system; I'll check it out more. Thank you both!