2023-01-31

Too much smoke

At first I thought this air quality measurement device was broken when I took it outside last October (2022) to see how bad the smoke level was.

Nope!  It was working and the air quality was bad - very bad.




 

Lake Bellevue

 Calling it a lake is a bit of a stretch, but that's what it is called.







Secret Garden

 




Frozen Leaves

 


Dripping

I worry about our parking garage at the office.  I'm not so sure that water dripping down the sides of the concrete walls is how it is supposed to work.



It's worse another floor down.





Reflecting

Half of the reason to go out and get coffee is to enjoy getting out of the office. 




"Let's make the car a place of silent reflection from now on."




Fallout

I only played Fallout 4 for an hour or two, but somehow these pictures from Point Reyes, CA, remind me of it ...












 

2023-01-29

Parking

A hallway at the giant parking structure at Disneyland.




Rare Miss

The Millennium Falcon at Galaxy's Edge at Disneyland is a rare miss in presentation.



It's hard to find a good picture of the MF where it doesn't entirely blend in with the background.  Even when you're there, you can be looking at it and not realize what you're looking at.


2023-01-26

Space ... Mounting

Space Mountain ... what a great name.



When I was a wee lad and went to Disneyland every weekend, I watched them build Space Mountain, starting from a big hole in the ground.  Eventually, courtesy of a short haircut, I rode it before it opened.

Programming Retreat

I've mentioned I did some work on spwnn during a programming retreat in 2018.

The place I stayed was in the middle of this satellite view:


The internet was provided by an old-school (not Starlink) satellite.  It was just good enough that I could look up articles about Go and AWS, but not fast enough to distract me with anything else.


Sometimes I would have visitors in the evening.



Each day I would drive to Clifton, AZ, for lunch and coffee.


You can rent it if you'd like (click the picture to go to the Airbnb listing).








2023-01-22

No, really, which model?

In another post I complained that I didn't know what a machine learning model is, so I asked ChatGPT.

Me: explain what a model is in machine learning

ChatGPT: A model in machine learning is a set of mathematical equations or algorithms that are used to make predictions or decisions based on input data. The model is trained using a labeled dataset, which means that it is given a set of input-output pairs and learns to map the inputs to the correct outputs. Once the model is trained, it can be used to make predictions on new, unseen data. The quality of a model is often measured by its ability to make accurate predictions or decisions on unseen data.

Well, that's good!  That's sort of what I thought - it's the algorithm plus the data.  But when an ML platform asks me which model I want to use, I still have no idea.  Is there a list of these things somewhere that describes the relevant tradeoffs?  Are there standards?  Or is every specific implementation a new model?  And I've read that people are making their own ChatGPT (GPT-3) implementations.  Do they use the model without additional training?
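To make "algorithm plus data" concrete for myself, here is a toy sketch (my own, using scikit-learn; the data and names are made up for illustration):

    # A toy "model": an algorithm plus the parameters it learned from labeled data.
    from sklearn.linear_model import LinearRegression

    # Labeled dataset: input-output pairs (hours studied -> test score, invented numbers).
    X = [[1], [2], [3], [4], [5]]
    y = [52, 61, 70, 79, 88]

    algorithm = LinearRegression()        # the algorithm, untrained
    model = algorithm.fit(X, y)           # the model: algorithm + learned weights
    print(model.coef_, model.intercept_)  # the learned parameters
    print(model.predict([[6]]))           # prediction on unseen input

So the thing a platform's "which model?" dropdown is asking about is presumably a pre-trained bundle of that kind, just enormously bigger.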

Maybe ChatGPT has more to say about this.

Me: how do I make my own gpt-3

ChatGPT: Creating a model like GPT-3 is a complex task that requires significant resources and expertise in machine learning. Here are the general steps to create a model like GPT-3:

1. Collect a large dataset of text. GPT-3 was trained on a dataset of over 570GB of text data.

2. Pre-process the data to clean and format it for training. This includes tasks such as tokenization, lowercasing, and removing special characters.

3. Select an architecture for the model. GPT-3 uses the transformer architecture, which is known for its ability to handle sequential data like text.

4. Train the model using the pre-processed data and the selected architecture. This typically involves using a technique called unsupervised learning, where the model learns to predict the next word in a sentence based on the previous words.

5. Fine-tune the model by training it on a smaller dataset of labeled data. This can improve the model's performance on specific tasks, such as language translation or question answering.

6. Continuously evaluate the model's performance on held out test sets and adjust the model as needed

7. Deploy the model and use it to make predictions or decisions.

It's important to note that GPT-3 was developed by OpenAI and they used a lot of resources like computational power, memory and storage to train it. Additionally, it requires extensive expertise in machine learning, natural language processing, and deep learning to create a model like GPT-3.
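That list still leaves step 4 pretty abstract to me.  Here's a toy sketch of the "predict the next word from the previous words" idea (my own illustration, pure Python, nowhere near an actual transformer):

    # Toy next-word prediction: count which word follows which in a tiny corpus,
    # then "predict" the most common follower.  GPT-3 does this with billions of
    # learned parameters instead of a lookup table, but the training objective
    # is the same flavor.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1          # "training": tally follower counts

    def predict_next(word):
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))           # -> "cat"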

So I guess when people say they are making their own ChatGPT, they aren't really?  Or perhaps they are using the transformer architecture on their own data?

Still confused.

Maybe this two-hour video will clear it up for me:


Uh oh

 


The Tidal Basin (Washington DC)










 

The Washington Monument





 

The Eiffel Tower
















 

2023-01-21

spwnn - scaling

The dictionary size has a big impact on the speed of spwnn (spwnning?).

c7g.16xlarge

dict size    time (seconds)    words / sec
23,115       1.47              15,768
49,167       6.13              8,024
370,104      388.47            953



This could be for many reasons.  One is that the number of words traversed per letter pair is much larger.  Another could be that the data exceeds the cache size, which slows down memory access.

Every time I think I'm done measuring things I think up something else.

All right, all right.  Here are the increases in the time to validate the dictionary vs. the increases in the max size of the neurons:

time vs. 23k    time vs. len
1.0             1
4.2             3.257148909
265.0           16.02753654

That doesn't directly correlate, so most likely the big dictionary exceeds the cache size.
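For what it's worth, here's the same arithmetic spelled out (numbers copied from the c7g.16xlarge table above): the time grows far faster than the dictionary size, which is what you'd expect if something like cache misses is kicking in.

    # Scaling check: if the cost per word were constant, time would grow in
    # proportion to the dictionary size.  It grows much faster than that.
    runs = [(23_115, 1.47), (49_167, 6.13), (370_104, 388.47)]  # (dict size, seconds)

    base_size, base_time = runs[0]
    for size, secs in runs:
        print(f"dict x{size / base_size:5.1f}  time x{secs / base_time:6.1f}")

    # dict x  1.0  time x   1.0
    # dict x  2.1  time x   4.2
    # dict x 16.0  time x 264.3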