Have a blurry-to-sharp approach to how a computer works.
In a sentence: everything is a number. Sometimes a gajillion-digit number, but a giant number nevertheless.
- Everything can be seen as a number. (A color? An RGB value. A location? x,y coordinates. A letter? A position in the alphabet.)
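A quick sketch of the idea, with made-up values (the specific color, point, and letter are just illustrations, not from the notes):

```python
# Everything is a number (or a small bundle of numbers):
color = (255, 165, 0)          # a shade of orange as an RGB triple
location = (4, 7)              # a point on screen as x,y coordinates
letter = ord("C") - ord("A")   # "C" as position 2 in the alphabet (0-based)

print(color, location, letter)
```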
- Once everything becomes numbers/data, you can use arithmetic (add, subtract, multiply, etc.) to turn some data into other data. Simple examples like Caesar ciphers, etc. We can literally add numbers like a calculator, or treat the numbers as representing something and change what they represent.
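The Caesar cipher makes this concrete: letters become alphabet positions, addition shifts them, and the result is reinterpreted as letters. A minimal sketch (the `caesar` helper name is my own):

```python
def caesar(text, shift):
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            # letter -> number, add, wrap with mod 26, number -> letter
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

print(caesar("HELLO", 3))  # → KHOOR
```

The same `+ shift` is plain calculator arithmetic; only our interpretation of the numbers makes it "encryption."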
- A computer changes data from one format to another. Various peripherals (monitors, speakers, etc.) turn that data into a displayed image, or sound, etc.
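One way to see "data becomes an image": the plain-text PPM image format is literally a list of RGB numbers that any PPM-aware viewer turns into pixels. A sketch (the `make_ppm` helper is hypothetical):

```python
def make_ppm(width, height, pixel):
    """Build a plain-text PPM image where every pixel is the same RGB triple."""
    r, g, b = pixel
    header = f"P3\n{width} {height}\n255\n"      # magic number, size, max value
    row = " ".join(f"{r} {g} {b}" for _ in range(width))
    return header + "\n".join(row for _ in range(height)) + "\n"

# A 2x2 all-red "image" -- just numbers until a viewer displays it.
print(make_ppm(2, 2, (255, 0, 0)))
```

Save that output as `red.ppm` and an image viewer that understands PPM shows a red square; the monitor is the peripheral doing the final numbers-to-light conversion.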
- Programming is explaining how you want the data to change. At the lowest-level, arithmetic-style operations change the numbers into other numbers.
- Programming languages give higher-level commands, vs. the direct arithmetic commands of assembly.
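You can peek at this gap in Python itself: the standard-library `dis` module shows the lower-level, instruction-by-instruction form a high-level line compiles to (exact instruction names vary by Python version):

```python
import dis

def add(a, b):
    # One high-level command...
    return a + b

# ...disassembled into the step-by-step instructions underneath
# (load a, load b, binary add, return).
dis.dis(add)
```

This isn't real machine assembly, but it makes the same point: one friendly command expands into several small load/compute/store steps.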
- Binary (store numbers in simplest format, vs. as decimal numbers)
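Binary is just another way of writing the same number, using only the digits 0 and 1. A quick illustration with 13:

```python
n = 13
print(bin(n))           # → 0b1101
print(int("1101", 2))   # → 13

# Why 1101 means 13: each bit is a power of two.
print(1*8 + 1*4 + 0*2 + 1*1)  # → 13
```

The value never changes; only the notation does, and the 0/1 notation is what maps cleanly onto on/off hardware states.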
- Transistors, switches, etc. Not a literal “1” or “0”, but a state of charge being held, etc. A single “1” might need a semi-complex circuit to represent it.
- Think about storage of the data, efficiency shortcuts, etc.
- How to actually program.
- Old notes here: http://www.cs.princeton.edu/~kazad/resources/cs/computers.htm