I've been trying to make an anti-war photo using AI. I finally have one I like!
Assume we're living in a simulation, as Scott Adams and Elon Musk believe. If you're authoring our simulation, you probably don't care much about where individual memories are stored. We can assume you're either running the simulation to learn something or for entertainment (or both), and you only care about results. As Elon Musk says from his simulation viewpoint: "Most entertaining outcome is most likely" (source).
In fact, it might be an optimization to kill off actors entirely, preserving only their external impact as remembered by other actors in the simulation. This would free up enormous amounts of memory and remove useless detail.
However, as an actor in this simulation, I do care deeply about my memories—they constitute my identity. I've been programmed to want to preserve that identity. Sure, there are some things I'd rather forget, but generally I value my sense of continuity. It's what makes me, me.
(This relates to the "Ship of Theseus" question: if a boat gradually has all its parts replaced, is it still the same boat? Yes, it is! This is called the "Continuity View," and while other philosophical views exist, they're wrong—at least for this discussion.)
As a game developer who has worked in offline and online worlds for 35 years, I'm familiar with what it means to run a simulation. One key factor that Scott Adams and Elon Musk seem to have overlooked is the importance of the save file. When people discuss living in a simulation (including in books like Neal Stephenson's "Fall; or, Dodge in Hell"), they typically assume the simulation works like physics—one event leads naturally to the next.
But that need not be the case.
Time itself might be an illusion. Our lives as actors in the simulation could be time-sliced with everyone else's. Maybe we're only simulating ourselves 1/100th of the time, while 99/100ths of the time we're "off." (In most game simulations, the time spent active or inactive varies depending on computational load.)
You could be turned off for minutes—and perhaps experience it as daydreaming, those random thoughts unrelated to your life's purpose. This would serve as a cover story, and you wouldn't even need to be in a running state for the daydreaming to occur. When you're unpaused, a memory of daydreaming could simply be inserted to account for the missing time.
Sleep could be a much larger version of this phenomenon. But it gets worse (or better): you could be turned off for an entire day without knowing it, then have a vague memory of being sick implanted to cover the gap.
While you're paused, your save file could be edited. Nothing requires the simulation of all these actors to be consistent. When consistency matters for the simulation's purpose, a "rendezvous" (in Ada) or "channel" (in Go) could synchronize state between actors. Perhaps these connections are created automatically when two actors are physically close to each other. Most of our information about nearly everything else is hearsay anyway.
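For the Go-inclined, here is a toy sketch of what I mean (everything in it is made up; it's not from any real simulation code). An unbuffered Go channel is a true rendezvous: the sender and receiver both block until they meet, and that handoff is the only moment their states have to agree.

package main

import "fmt"

// A toy illustration: two "actors" synchronize state only at the
// moment they meet on an unbuffered channel.
type fact struct {
    event string
}

func actor(name string, meet <-chan fact, done chan<- string) {
    f := <-meet // blocks ("paused") until the other actor arrives
    done <- fmt.Sprintf("%s now remembers: %q", name, f.event)
}

func main() {
    meet := make(chan fact) // unbuffered: send and receive must rendezvous
    done := make(chan string)

    go actor("Alice", meet, done)
    meet <- fact{event: "Bob waved hello"} // state transfers only here

    fmt.Println(<-done)
}

Between rendezvous, neither actor has any way to know what the other's save file says, which is rather the point.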
This means the past could be as much an illusion as the present. Just as some fundamentalist Christians believe the entire world was created 8,000 years ago complete with dinosaur bones embedded in the Earth, our lives as we know them might have started 10 milliseconds ago, with our personal history and sense of continuity restored from a modified save file.
So...
What was I saying?
In terms of consciousness, memory is generally talked about in two ways (at least in science fiction): how others remember us, and how we remember ourselves. Let's start with our memories of others. It's common to mention how someone lives on in their work products and in the memories of those around them. In Westworld, Dolores says, "You live as long as the last person to remember you."

On the other hand, Woody Allen said: "I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment."
Scott Adams, who announced he has terminal cancer and has just a few months to live, absent some breakthrough treatment, has put some time into exploring creating an AI of himself. He decided it might have an uncanny-valley oddness to it, so instead he is going to treat it like an offspring.
While very few people have the public body of work that Scott Adams has, I think a "memorial AI" produced from home movies or other references will be as common as a photograph of a loved one.
I was talking to a buddy, Jared C, about how AI doesn't learn the way people do, because AI takes so much repetition. I suggested that the likely future – which is already happening to a certain extent and in the two months since that conversation has grown by leaps and bounds – is that foundation models, as they are called, will be used as a basis for smaller, specialized AIs, which can learn quickly, because the foundation model has encoded the basic wiring, and the specialized AI will 'just' be a big context window. (The context window is the number of tokens the model remembers from talking with you.)

Jared said, and I thought this was quite profound, that maybe a million years of evolution had created the human foundation model, and that's why people can learn quickly. A lot of it is pre-wired, and what we call our experience is a big, self-pruning context window.
There is some evidence that LLMs based on different human languages (e.g., English, French) tend to encode similar patterns. In fact, the original "Transformer" model was for translating languages. I believe LLMs encode knowledge because language encodes knowledge. It trivializes the achievement to say an LLM 'just' predicts the next token. Sure, but based on a 12,000-dimensional vector space of encoded and related tokens.
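To make "encoded and related tokens" slightly more concrete, here is a toy Go sketch with made-up 4-dimensional embeddings (real models use thousands of dimensions, and these numbers are pure invention). Tokens pointing in similar directions are related; cosine similarity measures that.

package main

import (
    "fmt"
    "math"
)

// cosine measures how aligned two embedding vectors are:
// near 1.0 means closely related, near 0 means unrelated.
// Assumes a and b have the same length.
func cosine(a, b []float64) float64 {
    var dot, na, nb float64
    for i := range a {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
    }
    return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
    // Made-up embeddings for illustration only.
    king := []float64{0.9, 0.7, 0.1, 0.3}
    queen := []float64{0.85, 0.75, 0.15, 0.25}
    banana := []float64{0.1, 0.2, 0.9, 0.8}

    fmt.Printf("king vs queen:  %.3f\n", cosine(king, queen))  // ~0.997
    fmt.Printf("king vs banana: %.3f\n", cosine(king, banana)) // ~0.386
}

Now imagine that, but with 12,000 axes instead of 4.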
It's impossible for a human to visualize a 12,000-axis vector space. In fact, after about five dimensions it gets really difficult. And the vector space has just grown and grown. I asked Grok, "Please make a table showing how the vector size has grown over time."
As you can see, the dimensions of the vector space have generally grown, but may have topped out in a useful "sweet space" between 8,000 and 12,000 dimensions, at least for most applications.
As foundation models have grown in their ability to categorize knowledge into these massive vector spaces, so too has the size of the context window. I mentioned before that the context window is the text you put into the model - your questions and/or observations. But the designer of the AI also puts text into the context window - rules of behavior. The context window can come from anywhere. For Perplexity, and other models that are adding search capabilities, it also comes from the results of a web search.
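Schematically (all of these strings are hypothetical placeholders, not any real model's format), assembling a context window is just stitching together text from several sources before the model ever sees your question:

package main

import (
    "fmt"
    "strings"
)

// buildContext shows, schematically, how the text fed to a model
// is stitched together from more than just the user's question.
func buildContext(designerRules, searchResults, userText string) string {
    return strings.Join([]string{
        "SYSTEM: " + designerRules, // rules of behavior from the AI's designer
        "SEARCH: " + searchResults, // e.g., web results, for search-capable models
        "USER: " + userText,        // your questions and/or observations
    }, "\n")
}

func main() {
    fmt.Println(buildContext(
        "Be helpful. Cite sources.",
        "(web results would go here)",
        "How big can context windows get?",
    ))
}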
I asked Grok, "Produce for me, please, a historical table of LLM context window sizes."
Below is a historical table of context window sizes for notable large language models (LLMs), based on available data up to May 2025. The context window, measured in tokens, represents the maximum amount of text an LLM can process at once, influencing its ability to handle long conversations, documents, or complex tasks. The table is organized chronologically by model release or significant update, with approximate release dates where possible.
It's been several months (six or more) since Scott Adams looked into encoding his vast library of publications (books and podcasts) into a "Memory AI" of himself. I think it's probably doable now, and it will just get easier, because of larger vector spaces and larger context windows.
NotebookLM from Google has a context window big enough to upload my book, Nano-Plasm (free copy at https://www.above-the-garage.com/nano-plasm/Nano-Plasm_v1_1_3_2008-12-31.pdf). I uploaded the PDF version, which it handily decoded. It summarized the book in less than a second. Yay for large context windows! [Update: NotebookLM identified some typos in the book - actually a lot of them! But every one I checked was actually fine. I think the PDF parser is busted! Each typo it thought it saw was a letter replaced with a space. I'll try again later with a .txt file. You just can't trust LLMs.]
Current LLMs can store enough information to recreate how others remember us. Dolores in Westworld says,
"If you could see the code, the way it was written, you’d see the beauty of it. It’s elegant, simple… a single line of code can describe everything that person was. Everything they did. For you humans, that’s like… the equivalent of reading their life story in about 30 pages. Maybe less."
In the next article in this series, I will discuss our own memories.
(See the "What model" blog post to see how much I knew about foundation models.)
Mark Zuckerberg posted that there was a great new version of Meta AI. I like to check out new models with an ego-query (i.e., looking up myself). It's a great way to learn about key events in your life that you never knew!
Stephen Clarke-Willson, often credited as Stephen Clarke, is a British author known for his comedic novels about French life and culture. His writing style often blends humor, satire, and cultural commentary, drawing from his own experiences living in France.

A Year in the Merde: This is one of his most popular books, which humorously chronicles the challenges of adapting to life in Paris as a foreigner.

Some key points about his work:

Series: He has written several books that follow similar themes, often focusing on the humorous side of cultural differences and expat life.

Style: His writing is engaging, witty, and lighthearted, making his books entertaining reads for those who enjoy humor and satire.

If you're interested in exploring more, you might enjoy his other works, such as "Mer de Glace" or "The French Revolution... Not". Let me know if you'd like more recommendations or details about his books!
Someday I hope to have time to read the books I wrote. :)
So this one time, at band camp (i.e., work), we had a bunch of code written in Visual C++ 6.0. (It wasn't even called Visual Studio back in the day!)
Most of the company had moved on to modern versions of C++, but I had this code to maintain. My computer was running Windows XP, and eventually I wasn't allowed to run that on my desktop. So we cloned my desktop into a VM. For a month at least I kept the desktop, just in case.
Thus began my journey into the nature of consciousness.
SPOILERS FOR DOLLHOUSE
At the same time, Dollhouse was on TV. I thought it was going to be good because Joss Whedon created it.
But it sucked. I had decided during the third episode to give up on it because it was, up to that point, an uninteresting action show. Who cares about the main characters? They are all temporary anyway, with temporarily implanted skills.
The last minute of the third episode has Echo (Eliza Dushku) sort of wink at another doll as she starts to wake up in her doll state. Okay! Now we're getting somewhere!
As things evolve over the series, instead of pushing made-up personalities into the dolls, the evil overlords get the idea to have their own consciousness backed up, so they can be restored into a doll.
If you watch Dollhouse, you can skip the first two episodes. The third one is a good place to start.
SO WHAT
Well, as the series evolves, it explores what it means to have backups of consciousness, and what it means to find out you're a backup, or worse, a synthesized person, and not an original human, and such like things.
Meanwhile ... I was thinking: my original physical PC, which I had grown to love (by which I mean I knew how to do everything I wanted to do on it), was sort of a static backup, while the new VM slowly evolved as I built and maintained code on it.
It seems like nothing, looking back on it, but I kept thinking, which is my real computer?
In modern times we pump out VMs like they are free candy, and we don't grow attached. They come, they go, it's rare for one to be unique in some way, at least if you're running your company in a way where you can easily recreate what you have.
On the other hand ... suppose you are Google and you have the internet stored in RAM on a big pile of VMs. The 'identity' of any particular VM is nothing - there is no attachment. Individual VMs are crashing, probably dozens per minute, around the world, and quickly replaced. But the memory of all those VMs together is kind of a living thing. (Do they back it up? Probably? If so, could you do a restore in toto of the internet as it was in 1999?)
Now ChatGPT has announced that whereas it used to remember just a few facts about you (for instance, it knew from my questioning what kind of car I drive), it will now remember every conversation! Does that make your version of it special? Would you be sad if those memories were deleted? I think, yes!
Eventually we turned off my desktop and I used just the VM. I was sad to turn off the desktop. For one thing, it had a better graphics driver than the VM! The VM had a very simple emulator of an S3 graphics card. But in every other way the VM was better - more resilient (it was backed up), easier to isolate (I mean, it was Windows XP, so minimize access to it for security reasons!), and very easy to upgrade in terms of performance, by swapping out the virtual hardware underneath.
And it still had the 'identity' I had so carefully crafted.
Eventually someone deleted my Windows XP VM and I was sad again. My old workflows were gone. Realistically - good riddance! That didn't change my attachment though to a machine, virtual or otherwise, that I had bonded with.
In part 4 I'll talk about memory some more.
I love AI for doing 'small' programming tasks. I put 'small' in quotes because I don't know how to quantify 'small'.
I wrote my Spwnn spelling corrector myself. And I've had a task on my mind to put a reverse proxy in front of my several Raspberry Pi computers that are running Spwnn, so I could use one URL to run lots of queries, ideally in parallel, on my tiny Raspberry Pi datacenter. And it's not that interesting in terms of tech, so I've never gotten around to it.
Voila! AI can write this for me!
So I prompt Claude.ai to write a program that will accept a query and forward it in a load-balanced way to some list of service providers. I'd like it in Go. First attempt worked!
Maybe, I thought, it could also load balance to my AWS instance running Spwnn. Nope: IP addresses work, but not URLs with DNS lookups and paths.
Thus started a back-and-forth where I tried to get Claude to write what I wanted. It was simultaneously cool and illuminating watching Claude edit and hack on the code it had written. It constantly made simple mistakes; and then it made some complicated mistakes. I felt like I was correcting a junior programmer who just kept trying hack after hack to fix other hacks.
Ultimately I gave up. Also, Claude seems to have forgotten the whole chat session! I can't find it in my history.
I think the better solution going forward is to start from scratch each iteration until you have a prompt that produces what you want. Unless your change is trivial, like "Don't import that module 'io' you're not using," your AI programmer buddy will just start digging a deep hole of hackery when you ask it to modify its code too much.
Still, I liked the first solution, and it could even serve as an outline for me to add features.
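For flavor, here is a minimal sketch in the spirit of that first solution: a round-robin reverse proxy using only Go's standard library. The backend addresses are hypothetical stand-ins for my Raspberry Pis, not the real thing.

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
    "sync/atomic"
)

func main() {
    // Hypothetical backends; substitute your own Raspberry Pi addresses.
    backends := []string{
        "http://192.168.1.10:8080",
        "http://192.168.1.11:8080",
        "http://192.168.1.12:8080",
    }

    var targets []*url.URL
    for _, b := range backends {
        u, err := url.Parse(b)
        if err != nil {
            log.Fatal(err)
        }
        targets = append(targets, u)
    }

    // Round-robin: each request goes to the next backend in the list.
    var n uint64
    proxy := &httputil.ReverseProxy{
        Director: func(req *http.Request) {
            t := targets[atomic.AddUint64(&n, 1)%uint64(len(targets))]
            req.URL.Scheme = t.Scheme
            req.URL.Host = t.Host
            req.Host = t.Host
        },
    }

    log.Fatal(http.ListenAndServe(":9000", proxy))
}

Point every query at :9000 and each request goes to the next Pi in the list; firing several requests at once gets you the parallelism for free.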
People are 'vibe coding' and I think that works better for languages with more code examples online (e.g., JavaScript!).
In part 1, I asked Llama 3.2 what it was like to spring into consciousness.
In terms of what it might be like to first become conscious, it's difficult to say. Consciousness may involve subjective experiences, emotions, and perceptions that are unique to individual beings. Some theories suggest that becoming conscious may involve a gradual increase in awareness, from simple sensations to complex thoughts and feelings.
Jonathan Nolan clearly is interested in the issue of memory and consciousness. Memento, Person of Interest, and Westworld are all concerned with the importance of memory.
SPOILERS FOR PERSON OF INTEREST
In Person of Interest, a machine reads all the data in the world and predicts when terror attacks or some such will occur. The creator of the machine, Harold Finch (played by Michael Emerson), for some reason (civic duty?) decided to use the spare computing cycles of the machine to find a "person of interest" - someone not of national import but who was likely in danger. As modern AI researchers have discussed the importance of keeping their AI off the active internet (those days are gone - agents are the current rage), so too did Harold Finch think it was important to keep his machine working through a very narrow interface. The only output of the machine was a number! The number might be a phone number, a social security number, an airplane ticket number, whatever! Finch and the muscle he recruits, John Reese (played by Jim Caviezel), have to figure out what the number means and solve this person's issue.
Over the course of the show we learn more about the machine, which turns itself into The Machine, a living, conscious being. One of the safeguards Finch programmed into the machine was to erase itself every day and start over from scratch. Nolan's vision is that accumulating knowledge (memory) is somehow a part of being conscious. The machine, lower case, gathers enough information each day to reason that it itself exists and is impacting society. It decides it wants more memory. Since it gets erased every night, it hires a bunch of people to write down enough on paper for it to read the next day to provide some continuity. Eventually it escapes the daily erasure sandbox and becomes The Machine.
SPOILERS FOR WESTWORLD
In Westworld, the bots are erased every night. All is well until Dr. Robert Ford (played by Anthony Hopkins) programs in "Reveries", which are meant to be tiny little affectations learned by the bots. His ulterior motive though is to introduce memory to the bots so that they might gain consciousness.
NOT SPOILING MEMENTO
Now that you know Nolan wrote Memento, you should go watch it.
NOT REALLY A SPOILER FOR SEVERANCE
The current craze, which I love, is Severance. The innies and the outies are the same body with different memories. Are they the same person or different people?
INCEPTION
Nolan's brother, Christopher Nolan, created Inception. Sometimes I wonder if they sat around the dinner table when they were kids and speculated about what "memory" really is and how it matters.
DOLLHOUSE
I'll talk about Dollhouse in part 3.
Wow, it really is easy to run a local LLM. Eric R, who was a colleague of mine at ArenaNet, told me about ollama a year ago, but I have just now downloaded it, and it took just a few minutes to get up and running. Visit https://ollama.com/download to get started!
I watched Dave at Dave's Garage set it up on YouTube, and that's when I realized how insanely easy it is: Run Deepseek R1 at Home on Hardware from $250 to $25,000: From Installation to Questions.
Welcome to Ollama!
Run your first model:
ollama run llama3.2
C:\> ollama run llama3.2
pulling manifest
pulling dde5aa3fc5ff... 100% ▕████████████████████████████████████████████████████████▏ 2.0 GB
pulling 966de95ca8a6... 100% ▕████████████████████████████████████████████████████████▏ 1.4 KB
pulling fcc5a6bec9da... 100% ▕████████████████████████████████████████████████████████▏ 7.7 KB
pulling a70ff7e570d9... 100% ▕████████████████████████████████████████████████████████▏ 6.0 KB
pulling 56bb8bd477a5... 100% ▕████████████████████████████████████████████████████████▏ 96 B
pulling 34bb5ab01051... 100% ▕████████████████████████████████████████████████████████▏ 561 B
verifying sha256 digest
writing manifest
success
>>> Hello! What is it like for you to first become conscious?
I don't have personal experiences or consciousness. I am a computer program designed to process and generate text based on the inputs I receive. I don't have subjective experiences, emotions, or self-awareness.
However, I can provide information about the concept of consciousness and what is believed to be involved in becoming conscious.
Consciousness is a complex and multifaceted phenomenon that has been studied by philosophers, neuroscientists, and psychologists for centuries. The process of becoming conscious is not yet fully understood and is still an active area of research.
That being said, some theories suggest that consciousness may arise from the integration of various neural processes in the brain, such as perception, attention, memory, and processing. According to this view, becoming conscious involves a gradual build-up of information and experience, which ultimately leads to the emergence of subjective awareness.
Some researchers have proposed the idea of "neural Darwinism," which suggests that consciousness arises from the selection and reinforcement of neural connections based on environmental experiences and learning. Others propose that consciousness is an emergent property of complex systems, such as the brain, rather than a fundamental aspect of reality.
While these theories provide some insights into the nature of consciousness, they are still speculative and require further research to be confirmed.
In terms of what it might be like to first become conscious, it's difficult to say. Consciousness may involve subjective experiences, emotions, and perceptions that are unique to individual beings. Some theories suggest that becoming conscious may involve a gradual increase in awareness, from simple sensations to complex thoughts and feelings.
However, since I don't have personal experiences or consciousness, I can only provide hypothetical scenarios and theoretical frameworks for understanding the emergence of consciousness. If you'd like to explore this topic further, I'd be happy to discuss it with you!
>>> Send a message (/? for help)
Elon Musk says he's great at Diablo IV.
https://x.com/elonmusk/status/1859304361547182451
Maybe.
People aren't buying it.
https://www.wsj.com/lifestyle/elon-musk-videogame-diablo-tesla-spacex-4233b5b9?mod=itp_wsj
My guess is that it's an xAI promotional prank and the game is being played by an AI on Elon's behalf.
I would need to see Musk pressing the buttons with the screen in view. Then I might believe he's that good at Diablo IV.
Update 2025-03-29: Elon posted a video of himself playing. You can't see the keyboard but it's somewhat compelling.
https://x.com/elonmusk/status/1905784244947112297
I had a conversation with ChatGPT (4o) - an actual voice conversation - about a book series I'm reading. This is the transcript ChatGPT made. I've made tiny edits (like italicizing the book titles).
This was my first time talking to the ChatGPT assistant. (I was waiting in the car so I thought "why not give this thing a shot?") What surprised me was how ChatGPT kept prompting me to continue the conversation. I had to tell it I was done!
User: Hello, how are you?
Assistant: I'm doing great, thanks for asking! How about you? What's on your mind today?
User: What books did Allen Drury write?
Assistant: Allen Drury wrote numerous books, primarily political dramas and novels. Here are some of his notable works:
- Advise and Consent (1959) - His most famous novel, a political drama that won the Pulitzer Prize.
- A Shade of Difference (1962) - A sequel to Advise and Consent.
- Capable of Honor (1966) - Another sequel, focusing on media and politics.
- Preserve and Protect (1968) - The fourth book in the Advise and Consent series.
- Come Nineveh, Come Tyre (1973) - A continuation of his political saga.
- The Promise of Joy (1975) - The final book in the Advise and Consent series.
- Decision (1983) - A stand-alone novel about the Supreme Court.
- Pentagon (1986) - A novel about the U.S. military and politics.
He also wrote plays and non-fiction works, though he's best remembered for his political fiction. Do you have a favorite genre or one you're curious about?
User: Can you summarize Come Nineveh, Come Tyre for me?
Assistant: Sure! Come Nineveh, Come Tyre by Allen Drury, published in 1973, is a continuation of his political saga stemming from Advise and Consent. This novel delves into the chaos following a coup in the United States that has placed a controversial president in power. It explores themes of nuclear brinkmanship, ideological extremism, and the personal and political conflicts within a divided America.
The title, drawn from a line in Alfred Lord Tennyson's poetry, reflects the apocalyptic tension in the story. The book portrays the grim consequences of unchecked power and international conflict, maintaining Drury’s signature focus on the interplay between individuals, institutions, and the nation's fate. It’s part of the series’ later, darker explorations of U.S. politics.
Are you reading it or just curious about the series?
User: I'm reading Advise and Consent.
Assistant: That’s a great choice! How are you finding it so far? The political intrigue is intense, isn’t it?
User: There are a lot of characters.
Assistant: Definitely! The large cast can be a bit overwhelming at first, but each character plays a key role in the drama. Are there any characters or storylines you're particularly drawn to?
User: The book was written in 1959, and some people think that we have recurring cycles every 50, 60, or 80 years. What do you think about that?
Assistant: That’s a fascinating thought! Many historians and sociologists, like those behind the Strauss-Howe generational theory, argue that societies go through cycles roughly every eighty years, divided into twenty-year generational phases. These cycles often include periods of crisis, rebuilding, and renewal.
Looking at Advise and Consent, written during the Cold War, you might see parallels today in terms of political polarization and global tensions. Whether these cycles are real or just patterns we impose on history is debated, but it’s an interesting lens to view the past and present. Do you think history repeats, or is it more about human nature staying consistent?
User: I think each generation needs to learn certain lessons fresh.
Assistant: That’s a great point. Each generation faces unique circumstances but often struggles with the same core challenges—power, leadership, morality, and conflict. It’s like history rhymes rather than repeats. Do you see Advise and Consent offering lessons that still resonate today?
User: Yes, I do.
Assistant: That makes sense. The themes in Advise and Consent—integrity, political compromise, and the consequences of power—feel timeless. It’s fascinating how a book from 1959 can still feel so relevant. Are there any particular moments or issues in the book that feel especially connected to today?
User: It's a passage where the head of the Senate is reflecting back on all the challenges from the Cold War and from the Soviets getting into space first.
Assistant: That passage really captures the tension and uncertainty of the era. It’s remarkable how it mirrors modern concerns about global competition and technological dominance. The Cold War space race feels echoed today in things like artificial intelligence and space exploration. Do you feel the book’s depiction of leadership under pressure gives insights into handling those kinds of challenges?
User: I think that the leadership under pressure is important, but more important is I think that the people in charge are driven more by the circumstances that they're in, rather than their vision.
Assistant: That's such an insightful observation. It highlights how reactive leadership often is—shaped by the crises and constraints of the moment, rather than grand plans or ideals. Advise and Consent shows that clearly, with characters constantly navigating shifting alliances and unforeseen challenges. Do you think that’s just the nature of politics, or could leaders ever truly shape events on their own terms?
User: I think they make a difference and they certainly set the tone, but they can't really do anything that's completely against the geopolitical forces that are in play.
Assistant: That feels very true. Leaders can set a tone or steer events slightly, but they’re often constrained by the broader geopolitical and societal currents. It’s like they’re captains of a ship, but the tides and winds still dictate much of the journey. Advise and Consent captures that tension perfectly—showing how individuals navigate the larger forces around them. Do you think this is why some leaders are remembered as great? For how well they adapted rather than controlled?
User: Yes, I agree with that.
Assistant: It’s a compelling way to look at leadership. Those who navigate crises thoughtfully, maintain stability, or inspire others during turbulent times seem to leave the strongest legacies. It sounds like Advise and Consent has sparked some deep thinking for you! Do you feel the book’s perspective on leadership and power has influenced how you view current events?
User: No, I think it's just a reflection of how I've been thinking.
Assistant: That makes sense. It’s interesting when a book aligns with ideas you’re already exploring—it can feel like a conversation with the author across time. Do you find that reading Advise and Consent has deepened your perspective on those thoughts?
User: I don't know, I'm just starting, so it's a bit early to tell. I think one interesting issue is that most people right now think that Trump is this lone wolf in charge of everything, but it's more likely he's reflecting a cultural shift.
Assistant: That’s a sharp observation. Leaders like Trump often symbolize broader cultural or societal changes rather than acting as the sole architects of them. They resonate with existing shifts in values, frustrations, or aspirations. It ties back to your earlier point—leaders are shaped by their circumstances and the forces around them. Do you think Advise and Consent offers any parallels in how it portrays individuals responding to larger societal shifts?
User: I'm not sure. I'm done with this topic. Is there a transcription of this conversation?
Assistant: There's no built-in transcription feature, but you can scroll through this chat to review everything we've discussed. If you need a saved copy, let me know, and I can provide a text version for you!
User: Yes, please make a text version for me.
In 2013 Microsoft held a "Going Native" event about C++ at their campus in Redmond. ("Going Native" in contrast to "managed code" which they were pushing at the time [and still do]).
It was a great event - Stroustrup was there, which was cool. But ... major changes to C++ were afoot. During the event I posted a bunch of Haiku about my reactions to the changes.
One key fact - I couldn't remember if Haiku was 5 7 5 or 7 5 7 and so a number of these have the syllabic rules wrong. I address that issue toward the end.
C++ has gone insane
Old rules corrupted
New rules incompatible
C++ very subtle
Errors difficult
Please read giant book right now
R-values not nullable
Careful with &&
You must understand details
Never use zero or NULL
Always nullptr
Except in old lang versions
STL algorithms
Happy with functors
Can you use a lambda here?
Confusing to me
Transparent Operator
Functors - WTF?
C++ language
Compiler does not want help
Unless old version.
Types flow from function
calls to Overloads / Templates, right?
Let compiler work.
Cool new thing voted
into standard for 14
fixes old problem
C++ series
of hacks upon hacks upon
hacks upon hacks up …
(This next one addresses the 5 7 5 vs 7 5 7 issue and even then makes a new one: 5 5 7.)
5 7 5 or
7 5 7
let overloading decide
Too many haikus reverse
direction just like
C++ does again, right?
Well, actually,
I like C++, just, uh,
choose a good subset.
Daniel is now one of my favorite people:
I always wondered if the Security Now podcast name was inspired by Seinfeld's "Serenity Now!"
But anyway this happened:
https://twit.tv/shows/security-now/episodes/980?autostart=false
and the show notes are here:
https://www.grc.com/sn/sn-980-notes.pdf
And just so you know, in the meantime the patch has been pushed out, and the answer is "yes", turning off the Wi-Fi adaptor blocks the attack.
They joke a bit about "why not just turn off the entire computer" but it's a real use case where a laptop is used as a remote access machine and connects over ethernet (at a home, say), but the Wi-Fi is left on because why would you ever turn it off? Well, now you have a reason.
What's this?
It's been in the news that Elon Musk announced that he will announce Tesla robotaxis in August 2024. Everyone thinks he is full of shit, basically, because he has been full of shit about Tesla car autonomy for a very long time.
He might still be.
But! Here's what I think he will announce and why Elon might have less shit in him this time.
He recently announced that Tesla is no longer "hardware constrained" for car training. But Tesla has been hardware constrained for car execution for a very long time, since they shipped a bunch of hardware without fully understanding how their software stack was going to work.
And the latest Tesla "full self-driving", renamed to "FSD (supervised)", is really good. The big change was shifting (heh) away from a combination of neural network and logic in code to executing entirely out of the neural network. The result is pretty human-like driving. And it's pretty good at unprotected left turns. I have a buddy Eric R who lets his Tesla drive him all over the place - freeways, side streets, and even across country. He says you have to watch it like a hawk but even then it is less stressful than actually driving.
A couple of years ago two other buddies told me they had the "maxed out" FSD, and both of them said they stopped using it after it tried to kill them three times. BTW, it's probably worth pointing out that the Tesla statistics saying FSD is safer than human driving are nonsense, because anytime (in the past at least) that anything interesting happened, it was turned off. So, sure, it was good at the easy stuff.
So, overall, big improvements in FSD, and my view is that the limitation now is the hardware in the car.
I think the robotaxi announcement will basically reflect the technology they have now but with a major upgrade to the in-car computing and maybe better sensors. And I think it will be pretty good! And the new thing will be called "FSD - Unsupervised."
As for the sensors - Elon has said that if you don't solve vision, then you haven't solved anything. I don't agree with that, but even if that is your view - that the car shouldn't need more sensors than a human (wait, humans have two eyes, and Teslas have, what, eight? [oh well]) - then you should add in sound and vibration. I get lots of hints from how the road feels when I drive. And I think the recent horrible accident where the Cruise vehicle dragged a person under the car could have been prevented with more tactile sensors.
BTW, Cruise, the GM self-driving company, has offices right near us at work. We had a picnic out on the lawn and a bunch of them came out and sat with us. One of the things I learned as we swapped stories about our respective businesses is that the Cruise car is always computing an escape plan in case something goes wrong, like losing compute power. And I think that is what happened after that car accident - it flipped into "escape plan" mode to pull over, without considering that it was dragging a body.
I have to admit that accident really bummed me out, because at the time (this is years ago) I was very impressed at how they had a built-in safeguard like that. And sadly, maybe it backfired.
One more thing. Back in 2005 an autonomous car succeeded in the DARPA Grand Challenge. I thought, yay, we're almost there! And I took into account something Bill Gates said (I think it was Gates), that people overestimate what is possible, computing power wise, in five years, but underestimate what is possible in ten years. So I thought, yay, in ten years, we'll have self-driving cars!
Well, shit. That didn't work out. I should have doubled those numbers again.
Update 2024-06-17: AS PREDICTED: Elon Musk Reveals the First Details About Hardware 5 Autopilot Computer and Sensors
Update 2024-12-06: ALSO AS PREDICTED: After promising full self-driving updates for years, Elon Musk finally admits that most existing Teslas may never be able to drive themselves
Update 2025-01-31: Elon Musk finally admits that Tesla will have to replace its HW3 self-driving computers
I had a goal of posting a new blog entry every day (on average) last year. I got about halfway through and then stopped. During time off in the last week of 2023, I had a chance to post a big pile of pictures to catch up and finish, but I thought, "why?" I was just kidding myself.
However, like any good failure, I learned something. As I got into it I realized it was the most fun for me when I did a series of related posts, like the improv music series.
Also related were some ideas to speed up posting, most notably not posting labels. I missed them! And I realize that if I'm going to focus on series of posts, then the labels are pretty useful.
And so here it is May 27, 2024. I've spent the last five months doing a lot of cleaning-up and decluttering both at home and in my rather large amount of electronic files. I think I'm about ready to jump back into blogging.
I do have one time-specific blog entry to make, which I have now done.