Seedling notes and shower thoughts looking for a permanent home. Ideas that get fleshed out enough are moved to more permanent notes.

If you’re interested, I’d love to discuss these ideas on Twitter!

Snippets

I. Community Building as Developmental Deconstraint

There’s an idea of being a “tummler”: someone who warms up the crowd so that a community forms naturally.

This seems like an evolutionary deconstraint, something that allows for incremental development. For instance, suppose A and B are both looking to start a band. For the band to form in the community, both of them need to express interest at the same time. While a central leader C could form it themselves, doing so makes C a single point of failure. By warming up the crowd, C gets A and B to reach out on their own and create an organic community independent of the tummler.

Note-taking could also fit this paradigm of an evolutionary deconstraint. It guards against the catastrophic failure of knowledge never accreting and allows for the serendipitous collection of eclectic views.

II. USNews Ranking Correlations

Is school performance correlated to how rural the surrounding environs are?

Do higher ranked schools have better parity between men/women in STEM fields?

Do lower ranked schools compete on different metrics to attract students? Comparative advantage, but applied to schools. Why would every school try to compete on the same ranking? Has our obsession with single-score “objective functions” for education improved or degraded the institutions themselves?

III. GANs Expanding Distribution

Neural networks struggle with data outside their training distribution. Is it possible to use GANs to train resilient networks that generalize better?
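A minimal sketch of one version of the idea, purely to make it concrete: sample a (here stand-in) pre-trained generator from a widened latent prior, pseudo-label the samples with a (also stand-in) teacher, and mix them into the classifier's training batches. None of this is a claim about what would actually work.

```python
# Hedged sketch: GAN-generated samples as augmentation for robustness.
# The generator, teacher, and data here are all random placeholders.
import torch
import torch.nn as nn

LATENT_DIM, N_FEATURES, N_CLASSES = 16, 32, 10

generator = nn.Sequential(  # stand-in for a trained GAN generator
    nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))
classifier = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))
teacher = nn.Sequential(nn.Linear(N_FEATURES, N_CLASSES))  # stand-in labeler

opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # "real" batch (random placeholders for actual training data)
    x_real = torch.randn(64, N_FEATURES)
    y_real = torch.randint(0, N_CLASSES, (64,))

    # synthetic batch: widen the latent prior to push samples toward
    # the edge of (and slightly past) the training distribution
    z = 2.0 * torch.randn(64, LATENT_DIM)
    x_fake = generator(z).detach()
    y_fake = teacher(x_fake).argmax(dim=1)  # pseudo-labels for synthetic data

    loss = (loss_fn(classifier(x_real), y_real)
            + 0.5 * loss_fn(classifier(x_fake), y_fake))
    opt.zero_grad(); loss.backward(); opt.step()
```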

IV. Uncertain Religion

Idea discussed by Michael Nielsen.

At the crux of religion is faith in the face of uncertainty. I want to explore that idea of faith amid uncertainty further.

V. Libraries as Spaced Repetition

Are personal physical libraries a form of spaced repetition?

Rather than the standard idea of remembering atomic ideas, revisiting these complex works would help tease out new connections and insights.

“No man ever steps in the same river twice. For it’s not the same river and he’s not the same man.”
- Heraclitus

VI. Topology in Neural Networks

Why is our genome so small relative to our neural capacities?

Much of animal behavior isn’t learned but rather innate to our brain architecture. [@zadorCritiquePureLearning2019] How can we create a small subset of rules that are primed for learning? [@finnModelAgnosticMetaLearningFast2017]

Are bottlenecks to complexity, like a genome, essential to higher intelligence? Regularization techniques (L1, L2) are already standard and have been shown to improve performance, so could longitudinal bottlenecks through evolution and a genome do likewise? If so, why?
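For concreteness, here is a minimal sketch of the L1/L2 penalties read as explicit bottlenecks on the weights; the genome analogy is mine and loose.

```python
# Minimal sketch: L1/L2 penalties as "bottlenecks" that shrink the
# effective description length of the network's weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
l1_strength, l2_strength = 1e-4, 1e-4

x, y = torch.randn(32, 100), torch.randint(0, 10, (32,))  # placeholder data
for step in range(100):
    task_loss = nn.functional.cross_entropy(model(x), y)
    l1 = sum(p.abs().sum() for p in model.parameters())   # pushes toward sparsity
    l2 = sum(p.pow(2).sum() for p in model.parameters())  # pushes toward small weights
    loss = task_loss + l1_strength * l1 + l2_strength * l2
    opt.zero_grad(); loss.backward(); opt.step()
```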

Could we harness bottlenecks through time in neural networks? Meta-learning?
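And a hedged sketch of the meta-learning angle, using a Reptile-style loop rather than full MAML from @finnModelAgnosticMetaLearningFast2017: the outer loop learns an initialization that adapts quickly to new tasks, a crude stand-in for a bottleneck acting over evolutionary time.

```python
# Reptile-style meta-learning sketch: learn an initialization that adapts
# to new tasks in a few gradient steps. Toy sine-regression tasks only.
import copy
import torch
import torch.nn as nn

def sample_task():
    """Toy regression task: fit y = a*sin(x + b) for random a, b."""
    a, b = torch.rand(1) * 4 + 1, torch.rand(1) * 3
    x = torch.rand(20, 1) * 10 - 5
    return x, a * torch.sin(x + b)

meta_model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for iteration in range(200):
    x, y = sample_task()
    model = copy.deepcopy(meta_model)  # fast weights for this task
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):       # inner loop: adapt to the task
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    # outer loop: nudge the initialization toward the adapted weights
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), model.parameters()):
            p_meta += meta_lr * (p_task - p_meta)
```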

This topic is partly addressed in @kaznatcheevNothingMakesSense2022, which looks at how humans naturally incorporate inductive biases that improve their performance!

VII. VR to Reality

How far away is VR from being as satisfying as interacting in the real world? What factors are necessary for it to reach that stage?

There’s something deeply unsatisfying, to my personal aesthetic, about digital interactions and their ephemeral nature. Something to revisit in the future when the technology holds more promise and when I have more experience with cutting-edge VR systems.

VIII. AI Alchemy

There have been a lot of papers written about what current approaches to AI are missing for general intelligence (@lakeBuildingMachinesThat2016, @mitchellWhyAIHarder2021), but I haven’t seen any attempts at building in those missing ingredients. This also raises the question of whether Clippy is truly dangerous and whether AI can be intelligent without all the other components so integral to human intelligence.

The current state of AI is such that we’re trying to push SOTA along, which is very different from “science”. How did alchemy turn into chemistry, and how could we make the same transition in ML?

IX. Benchmark for Benchmarks

What if we had a meta-benchmark for AI benchmarks that evaluated how closely they correlate with the goal they purport to represent? Could this help us work towards AI that truly generalizes better? This is basically a question of how to evaluate construct validity: objective measures of arbitrary, subjective goals can lead to a misleading sense of progress.
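As a toy version of what the meta-benchmark might compute, assuming we could score the same set of models on both the benchmark and some operationalization of the underlying goal (the models and numbers below are made up):

```python
# Hypothetical sketch of a "benchmark for benchmarks": correlate models'
# benchmark scores with their scores on the goal the benchmark claims to
# measure. Low correlation suggests weak construct validity.
from statistics import correlation  # Pearson's r, Python 3.10+

# made-up numbers purely for illustration
benchmark_scores = {"model_a": 71.2, "model_b": 78.5, "model_c": 80.1, "model_d": 85.3}
goal_scores      = {"model_a": 0.42, "model_b": 0.55, "model_c": 0.47, "model_d": 0.60}

models = sorted(benchmark_scores)
r = correlation([benchmark_scores[m] for m in models],
                [goal_scores[m] for m in models])
print(f"construct validity proxy (Pearson r): {r:.2f}")
```

A rank correlation (Spearman) might be the better choice here, since leaderboards mostly care about ordering rather than raw score gaps.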

X. Idle Games

Inspiration: @spielItStartedJoke2019

How are these designed? Why are they so fun? Could they be used as a template for designing real environments, ones that encourage exploring and tinkering in the real world? In a way, something like this might be accomplished with Roam Research or Anki repeatedly bringing up old ideas.

XI. Science Education

Why is it that science is “taught” in such a dry manner in school? Even though there is a firm focus on the scientific method, much of my college science education has been rote memorization rather than trying to answer interesting questions.

Science is this.

XII. Human Evolution

If evolution requires diverse genetic material to keep innovating, why is it that humans seek like-minded individuals? Is there any correlation between genetic similarity and behavioral similarity? How does the ability to impact and change the world affect behavior?

XIII. Adaptive Interfaces

I wonder if we could use LLMs to create interfaces that learn and adapt to a user, in the same way that vim grows with us, suggesting new keybinds for common actions. We should lower the bar for customizing software.
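A rough sketch of the non-LLM half of this: log the user’s actions, surface frequently repeated sequences, and hand them to an LLM call that proposes a binding. The `suggest_keybind` function, its prompt, and the action names are all hypothetical placeholders.

```python
# Sketch of an adaptive interface: find repeated action sequences and
# suggest a keybind for them. The LLM call is stubbed out.
from collections import Counter

def frequent_sequences(actions, length=2, min_count=3):
    """Return action sub-sequences the user repeats at least min_count times."""
    counts = Counter(tuple(actions[i:i + length])
                     for i in range(len(actions) - length + 1))
    return [seq for seq, n in counts.items() if n >= min_count]

def suggest_keybind(sequence):
    # placeholder for an LLM prompt like:
    # "The user often runs {sequence}; propose a vim mapping for it."
    return f'nnoremap <leader>x :{" | ".join(sequence)}<CR>  " suggested'

log = ["write", "run_tests", "write", "run_tests", "format", "write", "run_tests"]
for seq in frequent_sequences(log):
    print(suggest_keybind(seq))
```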

XIV. Intelligence Auto Bootstraps

An interesting idea is that intelligence is the ability to create new problems that only more intelligence can solve. In a way, intelligence is necessarily self-propagating, since it makes issues only it can solve. Under this framework, perhaps computers are a form of intelligence, since they’ve certainly created a new class of problems. (Counterpoint: they haven’t done that themselves; humans have made new problems using computers!)

A maybe-productive way to think about AGI is as the point when it creates problems that only it can solve; that is when a truly superhuman AGI has arrived. If it creates problems that both it and humans can solve, then it is some in-between intelligence, and if it creates problems that it can’t solve, I’m not sure it’s any sort of general, recursively improving intelligence.

XV. 2 * lifespan for Nobels

Since ideas that stand the test of time are the most impressive, we should actually wait for the test of time. Nobels should only be awarded for someone’s work $2 \times \text{lifespan}$ years after they have died. The 2 here is arbitrary, but hyping things up every other day like a news cycle is probably not so great.

XVI. 4 billion if statements

4 billion if statements. Not super sure exactly what I think about this, but I do want to revisit it sometime, if only because it’s fun.
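A toy Python reconstruction of the joke, if I remember the post right (it decides even/odd with one hand-rolled comparison per 32-bit integer; this version caps out much earlier):

```python
# Toy version of "4 billion if statements": decide even/odd with one
# generated `if` per number instead of `n % 2`.
LIMIT = 16  # the original stunt effectively uses 2**32

src = ["def is_even(n):"]
for i in range(LIMIT):
    src.append(f"    if n == {i}: return {i % 2 == 0}")
src.append("    raise ValueError('number too big for our if statements')")

exec("\n".join(src))  # defines is_even at module scope
print(is_even(6), is_even(7))  # True False
```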

XVII. Quines, quines, quines

Quines for dummies is a nice little example, but like the above, something I’d like to sit down and think more about in the future.
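For reference, a minimal Python quine (not the one from the linked post): the program’s only data is a template of itself, which it prints after formatting in its own repr. There are no comments inside the block because any comment would break the self-reproduction.

```python
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```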

XVIII. Bootstrap as focus

The idea is to compile a list of things that are self-bootstrapping. In theory, bus usage is one, since more demand = more buses = less wait time = more demand. Intelligence and learning beget more learning. What else follows this pattern? These are the things worth investing in! Novelty is also a big one in this category.

XIX. What even is learning a model?

I’d like to write a piece on just what ChatGPT is. What does it even mean to learn a model? Think of it as some notion of how the world is, some large equation, where the model learns the parameters that make that equation as accurate as possible.

That’s kind of an incredible thing that we’re able to do. Transformers as a sort of general model for anything? Do I think that “blurry image of the internet” is actually a good metaphor? If you make the function large enough, does it become a good approximation of how people produce tokens? Is this true? (This is more of a NeuroAI kind of problem.)
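A toy concretization of the “equation whose parameters we fit” framing: a bigram next-token model over a made-up corpus, trained so its predicted probabilities match the data as closely as possible. Nothing transformer-like here, just the smallest version of “learn parameters to predict the next token”.

```python
# Tiny next-token model: the "equation" is p(next | current) = softmax(W[current]),
# and learning is fitting W to the corpus by maximum likelihood.
import torch

corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

W = torch.zeros(V, V, requires_grad=True)  # the parameters of the equation
opt = torch.optim.Adam([W], lr=0.1)

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]
xs = torch.tensor([a for a, _ in pairs])
ys = torch.tensor([b for _, b in pairs])

for step in range(200):
    loss = torch.nn.functional.cross_entropy(W[xs], ys)  # how wrong the equation is
    opt.zero_grad(); loss.backward(); opt.step()

probs = torch.softmax(W[idx["the"]], dim=0)
print({w: round(probs[idx[w]].item(), 2) for w in vocab})  # what follows "the"?
```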