
Seedling notes and shower thoughts looking for a permanent home. Ideas that are fleshed out enough get moved to more permanent notes.

If you’re interested, I’d love to discuss these ideas on Twitter!

Snippets

I. Community Building as Developmental Deconstraint

There’s an idea of being a “tummler,” someone who warms up the crowd so that a community forms naturally. (Hillman 2014)

This seems like an evolutionary deconstraint: something that allows for incremental development. For instance, suppose A and B are both looking to start a band. For a band to form in the community, both of them need to express interest at the same time. While a central leader C could form it themselves, doing so makes C a single point of failure. By warming up the crowd, the tummler gets A and B to reach out on their own, creating an organic community that is independent of the tummler.

Note taking could also fit this paradigm of an evolutionary deconstraint. It prevents the catastrophic failure mode of knowledge never accreting, and it allows for the serendipitous collection of eclectic views.

II. USNews Ranking Correlations

Is school performance correlated to how rural the surrounding environs are?

Do higher ranked schools have better parity between men/women in STEM fields?

Do lower ranked schools compete on different metrics to attract students? This is comparative advantage, but applied to schools: why would every school try to compete on the same ranking? Has our obsession with single-score “objective functions” for education improved or degraded the institutions themselves?

III. GANs Expanding Distribution

Neural networks struggle with data that falls outside their training distribution. Is it possible to use GANs to train resilient networks that generalize better?
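A toy sketch of the intuition: if generated samples broaden the training distribution, they can cover test inputs that drifted away from it. Here `fake_generator` is a hypothetical stand-in for a trained GAN generator, and the cluster locations, shift, and radius are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D training set: two tight clusters. Test-time inputs drift
# away from the clusters, i.e. fall out of distribution.
train = np.concatenate([
    rng.normal(loc=[1.0, 1.0], scale=0.15, size=(100, 2)),
    rng.normal(loc=[-1.0, -1.0], scale=0.15, size=(100, 2)),
])
ood_test = train[:50] + 0.5  # shifted copies, outside the clusters

def fake_generator(samples, n, spread=0.5):
    """Hypothetical stand-in for a trained GAN generator: emits
    plausible samples around the real data, broadening the training
    distribution. A real GAN would learn this mapping instead."""
    idx = rng.integers(0, len(samples), size=n)
    return samples[idx] + rng.normal(scale=spread, size=(n, 2))

augmented = np.concatenate([train, fake_generator(train, 200)])

def coverage(pool, queries, radius=0.3):
    """Fraction of queries within `radius` of some training sample --
    a crude proxy for how well the training set covers test inputs."""
    d = np.linalg.norm(queries[:, None, :] - pool[None, :, :], axis=-1)
    return float((d.min(axis=1) < radius).mean())

# Augmentation can only widen coverage of out-of-distribution points,
# since the augmented pool is a superset of the original one.
print(coverage(train, ood_test), coverage(augmented, ood_test))
```

Whether this translates into better generalization for a trained network is exactly the open question above; the sketch only shows the distribution-coverage half of the argument.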

IV. Uncertain Religion

Idea discussed by Michael Nielsen.

At the crux of religion is faith in the face of uncertainty. I want to further explore this idea of faith under uncertainty.

V. Libraries as Spaced Repetition

Are personal physical libraries a form of spaced repetition?

Rather than the standard idea of remembering atomic ideas, revisiting these complex works would help tease out new connections and insights.

“No man ever steps in the same river twice. For it’s not the same river and he’s not the same man.”
- Heraclitus
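One way to make the analogy concrete is to schedule re-readings with the same expanding intervals that flashcard systems use. This is a minimal sketch, assuming a simplified doubling rule in the spirit of SM-2-style schedulers; the base interval and growth factor are illustrative guesses, not from any source.

```python
import datetime

def next_revisit(last_read, times_read, base_days=90, factor=2.0):
    """Schedule the next re-reading of a book. Each completed reading
    pushes the next visit further out, so early rereads are frequent
    and later ones are rare -- the spaced-repetition shape, but applied
    to whole works rather than atomic flashcards."""
    interval_days = base_days * factor ** (times_read - 1)
    return last_read + datetime.timedelta(days=interval_days)

first = datetime.date(2022, 1, 1)
second = next_revisit(first, times_read=1)   # ~3 months later
third = next_revisit(second, times_read=2)   # ~6 months after that
```

Unlike flashcards, the "payoff" here isn't recall of a fixed fact but the new connections each revisit surfaces, which is why long intervals may actually help.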

VI. Topology in Neural Networks

Why is our genome so small relative to our neural capacities?

Much of animal behavior isn’t learned but rather innate to brain architecture. (Zador 2019) How can we create a small set of rules that are primed for learning? (Finn, Abbeel, and Levine 2017)

Are bottlenecks to complexity, like a genome, essential to higher intelligence? Regularization techniques (L1, L2) are already standard and have been shown to improve performance, so could longitudinal bottlenecks through evolution and a genome do likewise? If so, why?
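The L2 case can be shown in a few lines: the penalty acts as a capacity bottleneck by shrinking the weights a model is allowed to use. A minimal sketch with synthetic data, using the closed-form ridge-regression solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: only 2 of 10 features actually matter.
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]
y = X @ w_true + rng.normal(scale=0.1, size=50)

def ridge(X, y, lam):
    """L2-regularized least squares in closed form:
    w = (X^T X + lam * I)^{-1} X^T y"""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_free = ridge(X, y, 0.0)   # unregularized fit
w_reg = ridge(X, y, 10.0)   # bottlenecked fit

# The penalty constrains capacity: the weight norm shrinks.
print(np.linalg.norm(w_free), np.linalg.norm(w_reg))
```

The open question above is whether an analogous constraint applied *across generations* (a genome-sized bottleneck on what can be passed forward) would confer the same kind of benefit.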

Could we harness bottlenecks through time in neural networks? Meta-learning?

This topic is partly addressed in Kaznatcheev and Kording (2022), which discusses how humans naturally incorporate inductive biases that improve their performance!

VII. VR to Reality

How far away is VR from being as satisfying as interacting in the real world? What factors are necessary for it to reach that stage?

To my personal aesthetics, there’s something deeply unsatisfying about digital interactions and their ephemeral nature. Something to revisit in the future, when the technology holds more promise and I have more experience with cutting-edge VR systems.

VIII. AI Alchemy

There have been a lot of papers on what current approaches to AI are missing relative to general intelligence (Lake et al. 2016; Mitchell 2021), but I haven’t seen any approaches to building in those missing ingredients. This also raises the question of whether Clippy is truly dangerous, and whether AI can be intelligent without all the other components so integral to human intelligence.

The current state of AI is such that we’re trying to push SOTA along, which is very different from “science.” How did alchemy turn into chemistry, and how could we make the same transition in ML?

IX. Benchmark for Benchmarks

What if we had a meta-benchmark for AI benchmarks that evaluated how closely each one correlates with the goal it purports to represent? Could this help us work towards AI that truly is more generalizable? This is basically a question of how to evaluate construct validity. Objective measures of arbitrary, subjective goals can lead to misleading impressions of progress.
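One crude version of such a meta-benchmark is just a rank correlation: does ordering models by benchmark score match ordering them by performance on the actual goal? A sketch with made-up scores for five hypothetical models (the numbers are purely illustrative):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation, computed with numpy only
    (assumes no ties in either list)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical scores for five models: the benchmark number vs.
# measured performance on the real-world goal the benchmark claims
# to track. A low correlation would flag weak construct validity.
benchmark = [0.62, 0.71, 0.75, 0.80, 0.91]
real_goal = [0.40, 0.55, 0.52, 0.60, 0.58]

validity = spearman(benchmark, real_goal)
print(validity)
```

The hard part, of course, is measuring `real_goal` at all; if we could measure it directly, we wouldn’t need the proxy benchmark.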

X. Idle Games

Inspiration: Spiel et al. (2019)

How are these designed? Why are they so fun? Could they be used as a template for designing real environments, one that encourages exploring and tinkering in real-world design? In a way, something like this might be accomplished by Roam Research or Anki repeatedly resurfacing old ideas.

XI. Science Education

Why is it that science is “taught” in such a dry manner in schooling? Even with the firm focus on the scientific method, much of my college science education has been rote memorization rather than trying to answer interesting questions.

Science is this.

XII. Human Evolution

If evolution requires diverse genetic material to keep innovating, why is it that humans seek like-minded individuals? Is there any correlation between genetic similarity and behavioral similarity? How does the ability to impact and change the world affect behavior?

Bibliography

Finn, Chelsea, Pieter Abbeel, and Sergey Levine. 2017. “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.” arXiv. http://arxiv.org/abs/1703.03400.
Hillman, Alex. 2014. “To Build a Strong Community, Stop ‘Community Managing,’ Be a Tummler Instead.” April 2014. https://dangerouslyawesome.com/2014/04/community-management-tummling-a-tale-of-two-mindsets/.
Kaznatcheev, Artem, and Konrad Paul Kording. 2022. “Nothing Makes Sense in Deep Learning, Except in the Light of Evolution.” arXiv. http://arxiv.org/abs/2205.10320.
Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2016. “Building Machines That Learn and Think Like People.” arXiv. http://arxiv.org/abs/1604.00289.
Mitchell, Melanie. 2021. “Why AI Is Harder Than We Think.” arXiv. http://arxiv.org/abs/2104.12871.
Spiel, Katta, Sultan A. Alharthi, Andrew Jian-lan Cen, Jessica Hammer, Lennart E. Nacke, Z O. Toups, and Theresa Jean Tanenbaum. 2019. “‘It Started as a Joke’: On the Design of Idle Games.” In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, 495–508. Barcelona, Spain: ACM. https://doi.org/10.1145/3311350.3347180.
Zador, Anthony M. 2019. “A Critique of Pure Learning and What Artificial Neural Networks Can Learn from Animal Brains.” Nature Communications 10 (1): 3770. https://doi.org/10.1038/s41467-019-11786-6.