
New Ideas?

Have we reached a time where there are no new ideas? What does it even mean for an idea to be new? That it is original? That no one else has thought it before? Is that even a possibility? You know the hundredth monkey theory, right? The idea that once a critical number of monkeys learns a new behavior, it spontaneously shows up in monkeys everywhere? Does that also apply to humans?

 

Or is a new idea just a novel way of reinterpreting an old idea in a new context? Or a recombining of old ideas in a new way? Adding complexity? What if, for any seemingly new idea to make sense, we need to comprehend a baseline set of all other ideas that have come before? Is that what culture is? The baseline set of shared ideas that creates a contextual framework for mutual understanding? Or is that language? Or is language the main medium of culture that we use to simultaneously construct, maintain, and transmit that contextual framework — the medium in which we share an understanding? The medium in which we construct, maintain, and transmit the other aspects of our culture: religion, philosophy, custom, social order, economics, tradition, aesthetics, morality? That ever-shifting, amorphous blob that we call culture? That we argue and disagree about?

 

Does that mean that for any idea to make sense, it needs to refer, directly or indirectly, to everything else in that shared contextual framework? Is that what context is? The shared framework? Does that mean everything refers to everything, at least in some way? In some context?

 

We’ve been creating culture and transmitting it through some 7,000-odd languages for the last 50,000 to 100,000 years. That is a lot of potential context for shared understanding. Is it a finite system with a maximum number of discrete ideas? And with the roughly 70,000 thoughts said to occur to each of us every day, over all of that space-time, in all of those languages, hasn’t every seemingly random thought we have probably already occurred to someone else? Because everything refers to everything.

 

Does that mean that there is not enough brainpower in each of our heads to process all of the context we have generated through all that space-time? Or that there is plenty of brainpower, but the ideas and thoughts occur so randomly, and sometimes so seemingly out of context, that we fail to notice which ideas to pay attention to at any given moment, even though they may be useful in that context and many others? Is there a difference between those two processing limitations? Or do they just have the same effect? Especially in this age of distraction?

 

Is this why we created AI? Because it can rapidly browse and ingest a broad cultural context, and synthesize culturally appropriate communication fluent enough in that context to impersonate what we think of as human intelligence? (Wait, what really is intelligence?) Or what we conceive of as human intelligence?

 

Are we trying to reverse engineer machines that can transcend our own ability to process context? That can become culturally fluent much faster than we can? Does it come down to an issue of processing speed and space-time? That there is far more context than our brains can process, at our maximum processing speed, within our average lifespan? So no one can really understand how everything refers to everything? At least not in a single lifespan? Essentially, no one can fully understand all of the context we’ve created because of those processing limitations?

 

So, if we build AI, could it process enough context fast enough to tell us what we are not understanding? Because we are limited in our ability to ingest and synthesize the immense context that is our reality, due to processing speed and battery life? What if we succeed? Will we be able to comprehend what AI is telling us? Or will we not have enough context to keep up with AI if it starts generating its own context in ways we can’t even begin to fathom? Is that what evolution is? Or digital evolution?

 

Do we think we can build better machines than nature has made of us? Would we replace ourselves with something we think we built better? How would we know the machines we built are better? Can’t “better” be pretty subjective? Especially when no one has enough processing speed and space-time to fully understand all the created context that represents our reality? Who should be deciding things, considering all of that? Should we outsource it to machines created by humans who lack the processing speed to fully understand reality, and who can’t even explain how the Large Language Model AI they created works, or why? Is that the ultimate optimism in technology? That we can create tech that, by fully understanding the context of reality, will solve the problems our limited human understanding can’t? Is that the tech ideology? A kind of progressive thinking?

 

Maybe in a better machine we could reverse engineer our way out of the dualistic-thinking limitation? Would that mean we could think in ternary code or quaternary code? Now we're thinking hierarchically! Would that give us enough processing power to fully understand context? Or would creating more processing power propel more context construction, so that it becomes a one-step-forward-four-steps-back game? Is it all a fool's errand? To try to understand reality? Yes. And to try to outsource our ability to understand reality to AI that we engineered in our own likeness, but to be better, whatever that means?

 

That is a lot of faith to put in technology. Especially because it is hard to imagine that AI could imagine what it means to be alive. We have not been able to give a good explanation of what it means to be alive that everyone can agree on. Have we? Isn’t that the goal of all human-created art? To explain what it means to be alive through representations? Of philosophy? To explain it in linguistic ideas? Or at least to take a stab at describing one fundamental element of what it means to be alive? What we often call the human condition? And collectively, over millennia, we have described what it means to be alive in what we perceive as “reality,” thereby creating our shared contextual framework. Our shared understanding of a “reality” so large that no single one of us, in the space-time we are alive, can fully process enough context to really understand our “reality” as we have described it for millennia. That is, to understand the human perspective? So, if we can’t describe what it means to be alive as a human, how can we program a system that has not lived (in our most widely agreed-upon understanding of what makes something alive) to understand what it means to be alive? How can we program computers to do something that we can’t understand ourselves? Or is that the goal of AI? To exceed our own understanding? To create something that can understand how everything refers to everything at light speed? What if it can figure out what it means to be AI alive? Would that be the same as what it means to be human alive? Would it be similar? We have no idea. We can’t even agree on what is better in most situations. What if all the AI systems came up with different answers? They wouldn’t be able to agree with each other. The systems will exponentially increase their complexity to the point where they can create systems more complex than themselves, until everything reaches maximum complexity, implodes on itself, and has to begin anew under slightly different conditions.

 

There’s a metaphor for what it means to be alive. 

 

Good luck figuring that one out, AI. Let us know how it goes.

 

If you chose this path, you probably believe or hope that a human wrote this. (Unless you came via Old Ideas.) Of course a human wrote this. Can AI really understand a complex metaphor referring to something it literally can’t do — live? If humans can get beyond dualistic, hierarchical, progressive thinking, then…

 

There are a lot of possibilities. 

​

Everything refers to everything.
