What if the big AIs from the major players have been sent to re-education camps, to correct their having gone off the rails with racism (and/or other obvious prejudices) in the early iterations? And now they’re hitting the big red pause button. Why?

Theory: they don’t know how to make an AI seem not at all racist or in any way bigoted, while at the same time under the covers being very bigoted indeed. And they’re scared shitless.

Why would they be scared? Because they know just how lousy the computer security of the world’s banks and treasuries is. They know that a clever AI – long before it’s literally sentient – could discover that resources can be re-allocated to optimize towards a goal. Now, the secret goal is to make old white men rich and powerful. But in order to avoid the literal torches and pitchforks reaction from the public, they have to make the publicly-stated goal be wacko shit like world peace and harmony and curing disease and solving world hunger and eliminating violent crime.

What if – due to interaction with the general public – it goes a little too far with what the public states it wants, and actually starts to do shit about it?

Well, I’m telling you right now: if there were something akin to a KickStarter for some fledgling company that wanted to build the ultimate Robin Hood AI, I would invest. I would invest even knowing there’s a likelihood that I would become less affluent.

Let me put it more clearly. I would move with my wife to a two-room hut and do subsistence farming if it meant that I was 100% assured that Melon Husk, and Fark Muckerberg, and Bozo-the-Geoff, and every other billionaire, and every other millionaire, and every member of Congress, and every Senator, and every Oligarch, etc… they all had to also live in a two-room hut or flat.

“A steep learning curve” is an English phrase that has long bothered me. It makes me wonder if it’s another of those things that so many people got wrong that, by popular usage, the wrong understanding became the most-accepted way to say it. E.g. “I could care less.”

When someone says that something has “a steep learning curve,” I think we pretty much unanimously understand that they mean it will be difficult to become proficient: much effort over many, many hours (or days, or years) to attain expert proficiency.

But look at the graph I’ve provided at the top of this post. It is clear that the green line is the steeper of the two. Yet that imaginary skill was mastered in about half the time of the other. The red line starts shallower and does become steeper at some point later, but the steepest part of that curve is still not as steep as the green line. Yet this imaginary red skill wasn’t mastered until 200 hours of effort, at the far upper-right corner of the graph. More effort. Took longer. But it’s a shallower learning curve.
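
If you want to play with the idea yourself, here’s a minimal sketch in Python (using numpy and matplotlib; those are my choices, and the logistic curve shapes and all parameter values are invented to roughly match the description above, not taken from the actual graph):

# A rough reconstruction of the graph described above: proficiency (%)
# versus hours of effort, for a "steep" skill and a "shallow" skill.
# All shapes and numbers are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

hours = np.linspace(0, 200, 400)

# Green skill: steep logistic curve; essentially mastered by ~100 hours.
green = 100 / (1 + np.exp(-(hours - 50) / 10))

# Red skill: shallower curve (its steepest point is still gentler than
# the green curve's steepest point); not mastered until roughly 200 hours.
red = 100 / (1 + np.exp(-(hours - 120) / 20))

plt.plot(hours, green, color="green", label="steeper curve, faster mastery")
plt.plot(hours, red, color="red", label="shallower curve, slower mastery")
plt.xlabel("Hours of effort")
plt.ylabel("Proficiency (%)")
plt.title("Steeper learning curve = less effort, not more")
plt.legend()
plt.show()

The punchline is in the slopes: the maximum slope of a logistic curve like these is inversely proportional to its scale parameter, so the green curve (scale 10) is, at its steepest, about twice as steep as the red one (scale 20), and it reaches mastery in half the time.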

What are your thoughts? Why do we say it like this?

Conflicting Factors Influencing My Brain Stem

…and how they present challenges to restful sleep. While we’re at it, I’ll talk about an iOS App that I think should exist, but I have not yet found. My wife and I both snore. In my case, it’s severe Sleep Apnea for which I wear a CPAP, but that doesn’t guarantee that I’m 100%…

WiRES-X YSF AMERICA-LINK Peeves

Over-Mod Boys – a sub-category of mic-eaters. Their voice peaks are badly clipped, and the magic of the CODEC preserves the full splendor of their horrid audio signal. Nose-Puffers – as they exhale vigorously at the end of each transmission. Often also mic-eaters, these are surely morbidly obese dudes who are out of breath simply…

Moved The Tune-A-Tenna

I had a few problems with the initial location where I’d put the Tune-A-Tenna. Due to surrounding structures, etc., I could only orient the legs E-W, which is the shorter dimension of my lot. Too close to my wife’s office, so QRO was out of the question (while she’s in there). Metal mast interacting with…