easy choice. Also, that's just BS: remember how the same was said about music playback on computers, yet somehow a certain now-dead CEO was able to say "fuck you" and it happened anyway?
Hackernews used to experience a collective paroxysm of joy every time a new Visual Studio Code release dropped. There was definitely a pervasive belief that the Nadella era had ushered in a cuddly new Microsoft.
I remember a time, way back (around 2010 maybe?), when Microsoft was referred to as "M$" in this place and generally perceived as an evil corporation o.O
Most likely it's more a difference of venue. I saw lots of that on Slashdot, less of it on Digg or Reddit, and virtually none of it here, but it seems to be making a resurgence in the form of "Macroslop" and related epithets.
Both things can be true. VSCode did help us get to the point where I can use it on Linux, macOS, or Windows and have a lot of interoperability. It's the typical cycle: all it takes is a couple of people getting their hands on managing the code to turn anything into garbage.
This was later, into their "We ❤ Open Source" era. "M$" and the like dates from the mid-to-late 90s. The late 2010s were when they started publicly acknowledging that open source exists, acquiring GitHub, and releasing things like .NET Core and Visual Studio Code, and a lot of people in the open source camp did the "pointing soyjaks" thing and forgot that the Halloween Documents existed and that EEE-ing open source was already in their playbook.
Except it's not. Even though we command-line bros made fun of GUIs back when they came out, GUIs still have the property of being precise and unambiguous (to the machine; for the human it takes design finesse and clear guidelines). There's no "Oopsie-boopsie! I made a little fucky-wucky and accidentally your entire production database, even though you told me not to do that!" with a CLI or GUI. With a CLI or GUI, screwups of that sort are entirely PEBKACs. The fucky-wucky factor needs to be controlled in LLMs so that it's possible to communicate your exact intent to the machine and get the expected result every time. Of course, if that's ever achieved, we will have reinvented command processors "with extra steps".
Specifically for Windows, it's the Intel 2001 guidelines and Microsoft WHQL (Windows Hardware Quality Labs), which prohibit the use of MPU-401-style interfaces, as well as direct driver access to either the serial or parallel ports.
Direct-to-bus MIDI handling the way the ST was configured can't be replicated on modern architectures.
That said, given how popular it is to use analog semi-modulars as DAW outboard gear with MIDI-over-USB implementations that add latency and jitter, is it even a consideration for most users?
Ableton and other performance-oriented DAWs automatically compensate for MIDI and audio latency caused by plugins and devices; in Ableton's case it delays the audio by the overall system latency, and/or bypasses plugin delay compensation for armed/monitored tracks only, making them more responsive.
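The core of the compensation math is simple: delay every track by the gap between its own chain latency and the worst one. A minimal Python sketch (track names and sample counts are made up):

    # Simplified plugin delay compensation: delay every track so it lines
    # up with the slowest (highest-latency) plugin chain.
    tracks = {"drums": 64, "bass": 512, "synth": 128}  # chain latency, samples
    overall = max(tracks.values())                     # overall system latency
    compensation = {name: overall - lat for name, lat in tracks.items()}
    print(compensation)  # {'drums': 448, 'bass': 0, 'synth': 384}

An armed/monitored track skipping that added delay is exactly why it feels snappier, at the cost of landing slightly early relative to the compensated tracks.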
The real answer to the question is, as always, to use hardware sequencers and control voltage triggered off your master clock or DAW. The SQ-64 is as rock solid as an Atari ST for CV work, although its 64 PPQN limit doesn't match the Atari ST's 384 PPQN capability. That said, standard MIDI Beat Clock is much lower at 24 PPQN. If you want to go all Autechre/Aphex Twin, there are plenty of ways to skin that cat.
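To put those resolutions in time terms, a quick back-of-the-envelope at an assumed 120 BPM:

    # Pulse spacing at a given tempo and resolution (illustrative numbers)
    bpm = 120
    beat_ms = 60_000 / bpm                 # one quarter note = 500 ms
    for ppqn in (24, 64, 384):             # MIDI Beat Clock, SQ-64, Atari ST
        print(f"{ppqn:3d} PPQN -> {beat_ms / ppqn:6.2f} ms per pulse")
    #  24 PPQN ->  20.83 ms per pulse
    #  64 PPQN ->   7.81 ms per pulse
    # 384 PPQN ->   1.30 ms per pulse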
If you care about timing over MIDI, use MTC, not MIDI Clock. Because receivers have to derive the clock frequency by counting pulses, MIDI Clock is inherently unstable.
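A toy illustration of the pulse-counting problem (the jitter magnitude here is invented): the receiver can only estimate tempo from the spacing of pulses it has already seen, so any transmission jitter bleeds straight into the derived clock.

    import random
    random.seed(1)

    ideal_ms = 60_000 / 120 / 24   # 20.83 ms between pulses at 120 BPM
    # pretend each pulse arrives with up to +/- 1 ms of transmission jitter
    gaps = [ideal_ms + random.uniform(-1, 1) for _ in range(24)]
    derived_bpm = 60_000 / (sum(gaps) / len(gaps) * 24)
    print(f"derived tempo: {derived_bpm:.2f} BPM")  # wobbles around 120

MTC, by contrast, carries absolute time positions, so the receiver isn't reconstructing a frequency from intervals.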
Fun, I do the opposite: I explicitly use MIDI Clock so everything is purposefully a tiny bit out of sync, which seems to sound better to me. "MIDI clock is inherently unstable" is the feature I like :)
MIDI clock on the Atari ST was rock solid. It being "inherently unstable" is one of those accidental things, like Windows 9x users assuming "computers just crash all the time".
There are technologies like MTS (MIDI timestamping), where you basically send timestamped data to the interface early so that it can then play it out exactly (or at least more exactly) at the right time. This was originally made by MOTU, but I think the implementation in Core MIDI is based on it.
Emagic and Steinberg also had implementations of this (AMT in the case of Emagic, LTB for Steinberg, IIRC).
This only really works with recorded data, of course. It's also already very old (like 25 years), and I'm not sure how well it's still supported in current DAWs.
With these technologies, timing should be as good as or better than on an Atari.
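For what it's worth, the core idea looks roughly like this self-contained Python sketch (names are hypothetical; in the real systems the buffering and playout happen in the interface or driver, not on the host):

    import heapq
    import time

    def emit(msg):
        # stand-in for handing the bytes to a MIDI out port
        print(f"{time.monotonic():.4f} -> {msg}")

    def play_timestamped(events):
        # events: (offset_seconds, message) pairs, delivered ahead of time.
        # The receiver buffers them and plays each one at its exact offset,
        # instead of trusting the host OS to send every byte "right now".
        heapq.heapify(events)
        start = time.monotonic()
        while events:
            offset, msg = heapq.heappop(events)
            time.sleep(max(0.0, offset - (time.monotonic() - start)))
            emit(msg)

    play_timestamped([(0.0, "note_on C3"), (0.5, "note_off C3")])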
Picking a language is a matter of selecting the best fit given the constraints of the project.
For stuff I like to work on on my own time, "how I like to work" is a major forcing constraint. So it's no surprise that I have a large number of Lisp projects sitting around. Maybe it's because I'm auDHD, but the ability to evolve a program through active dialogue with the machine (and not of the sloppotron variety) just fits better with how I think through a problem and its solution.
When I was a teenager we had the Living Books edition of Arthur's Teacher Trouble on CD-ROM as part of a "multimedia kit". Every page had short animations that played when you clicked on random things with your cursor, in addition to following along with the story, clicking on single words to hear them pronounced and spelled, etc. It was incredible and paved the way for similar phenomena like clickable Easter eggs in Homestar Runner cartoons.