Large Language Models (LLMs) are autoregressive foundation models that process and generate natural language. They seem to have emergent properties of intelligence, though this could just be the observer-expectancy effect.
See also: transformers
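“Autoregressive” just means the model predicts one token at a time and feeds each prediction back in as input for the next step. A minimal sketch of that loop, assuming PyTorch and the Hugging Face transformers library (GPT-2 with greedy decoding, for brevity):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The observer-expectancy effect is", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[:, -1, :]           # scores for the next token only
    next_id = logits.argmax(dim=-1, keepdim=True)  # greedy: pick the most likely token
    ids = torch.cat([ids, next_id], dim=-1)        # feed the prediction back in
print(tok.decode(ids[0]))
```

Everything else (sampling temperature, RLHF, chat formatting) is refinement on top of this loop.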
Teaching
The widespread use of ChatGPT poses a pedagogical question: how do we assess thinking?
I suspect ChatGPT will do to writing what calculators did to math: make it far more accessible to the masses, but in doing so strip the value from the process itself.
We do math by hand to internalize it in our minds, to naturalize the mind to thinking in that manner. Similarly, we write to naturalize the mind to critical and thorough thought.
“The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts.”
AI-generated content
LLMs will produce an influx of AI-generated content, acting as modern-day automated content mills. This is concerning for a variety of reasons.
Don’t shit where you eat! Garbage in, garbage out! When it comes time to train GPT-x, it risks drinking from a dry riverbed.
Maggie Appleton calls this ‘human centipede epistemology’: models are trained on generated content and ideas lose provenance. “Truth” loses contact with reality and degrades even more.
Ted Chiang on ChatGPT: “the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself … Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time”
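Chiang’s compression analogy is easy to reproduce. A toy sketch, assuming Pillow is installed and a hypothetical photo.jpg on disk:

```python
from PIL import Image

img = Image.open("photo.jpg")
for generation in range(50):
    path = f"gen_{generation}.jpg"
    img.save(path, quality=75)  # every JPEG encode discards information
    img = Image.open(path)      # the next generation starts from the degraded copy
```

Swap “JPEG encode” for “model trained on model output” and the worry about GPT-x’s training data is the same compounding loss.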
Programmers won’t be asking many questions on StackOverflow; GPT-4 will have answered them in private. So while GPT-4 was trained on all of the questions asked before 2021, what will GPT-6 train on?
A cautionary tale on using AI to replace human connection: all the better to see you
Good-enough content
AI is helpful in situations where you need ‘good enough’ code/art/writing and the value of the output outweighs the value of the process.
- https://twitter.com/gordonbrander/status/1600469469419036675
- https://twitter.com/jachiam0/status/1598448668537155586
I don’t think it’s ready to replace anything that requires rigorous thought or reasoning quite yet, because it is still very prone to confidently hallucinating wrong answers. LLMs should act as an atlas, not a map (see: plurality).
End-user programming
See also: personal software
I think it’s likely that soon all computer users will have the ability to develop small software tools from scratch, and to describe modifications they’d like made to software they’re already using.
Geoffrey Litt’s talk at Causal Islands
- Showed off a really interesting demo which integrated LLMs into Potluck, allowing a bidirectional binding between a natural language description of a pattern/search and the actual code behind it (see the sketch after this list)
- This also helps with learnability. Using the AI helps you understand the underlying system by seeing how the LLM translates the concepts into code
- Questions to keep asking: how do we recover from a state where the LLM produces a wrong result but is confident in its answer?
- How might we nudge LLMs to produce more correct answers under human feedback in a non-text environment?
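The natural-language-to-code half of that binding is easy to sketch. A rough, hypothetical example using the OpenAI Python client; `pattern_from_description` is an invented name, and Potluck’s actual implementation works differently:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def pattern_from_description(description: str) -> str:
    """Translate a natural-language pattern description into a Python regex."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Translate the user's description into a single Python "
                        "regular expression. Reply with the regex only."},
            {"role": "user", "content": description},
        ],
    )
    return resp.choices[0].message.content.strip()

regex = pattern_from_description("a duration like '10 min' or '1 hour'")
print(regex)  # reading the generated regex is itself how you learn the underlying system
```

Binding the other direction, from the code back to an editable natural-language description, is the harder and more interesting part of the demo.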