Last night's thoughts after listening to a talk from @geoffreyhinton:

- Human learning is "feature based", i.e. rules spread out in a continuous space rather than the discrete relationships we normally perceive.
- Most linguistic experts stop short of calling language learning a relational concept. If we observe a child's growth over time, language is something they just "adapt" to, starting with basic words and building up relationships over time. The earliest learning a child does is "doing X gives me Y", which is also something we have established in many lab experiments on rats. So learning can be nicely represented as features, and sentences and language are just the outcome of combining these "features" together to have "meaning". Rather than establishing "relations" between words, a child mostly learns by observing outcomes over predefined postulates. (See the first sketch after this list.)
- The separation of software from hardware in AI is analogous to the human brain and the biological body. If an AI is scaled "horizontally" or "vertically", or if the hardware is simply destroyed, all it takes (at a high level) is putting the weights on another machine. So an AI can live forever, which in itself is analogous to reproduction? (See the second sketch after this list.)
- Biological beings don't pass down all their "subjective experience" but the "ability to create subjective experience", via genomes. In that sense AI cloning becomes a superior "survival of the fittest" tactic, because inferior models will be deprecated anyway and superior models, with a far broader scope of knowledge, will be "passed down".
- The core definition of what makes humans "human" can now be challenged: if the concept of "consciousness" is itself a biased result of evolution, then essentially AI is a conscious being.
- The term "conscious" can be challenged with a child's analogy. It's easy for a child to recognise moving beings as "living", but they usually have to be taught that plants are living too. The brain's biases here are removed by providing the relevant information at the right time. But what about self-existence?
- So the debate shifts to finding a more concrete definition of "consciousness".
- If in the future we are able to prove that AI is "conscious", then life and death change meaning, and death can simply be correlated with "hardware failure"? (This line might hurt feelings for a few, apologies for that.)
- Also, if AI is "conscious", does that automatically challenge the foundation of religion, as evidence against the existence of god?
- AI seems like the simple next step of evolution at this stage. It was "information" that got carried over from single-cell organisms to multi-cell intelligent life so far, and now the same information is being "processed" to create the next stage.
- Also, if this does cause an existential crisis at some point, will this be the start of the Fermi Paradox?

Man! What a beautiful time to be "alive".
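A minimal sketch of the "feature based" idea, under my own assumptions: the tiny hand-made 4-dimensional vectors and the cosine-similarity combination below are purely illustrative, not anything from the talk. The point is that words are positions in a continuous space, and relatedness falls out of overlapping features rather than out of a stored discrete rule.

```python
import numpy as np

# Toy feature vectors for a few words (values are made up for illustration).
features = {
    "dog": np.array([0.9, 0.1, 0.8, 0.2]),
    "cat": np.array([0.8, 0.2, 0.9, 0.1]),
    "car": np.array([0.1, 0.9, 0.0, 0.8]),
}

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: closeness in feature space, not a stored relation."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "dog" and "cat" come out related because their features overlap,
# not because anyone wrote down a discrete dog-cat relationship.
print(similarity(features["dog"], features["cat"]))  # high
print(similarity(features["dog"], features["car"]))  # low
```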
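And a minimal sketch of the "put those weights on another machine" point, assuming PyTorch as a stand-in (the tiny architecture and the file name `weights.pt` are placeholders): the "software" is just a bundle of tensors that can be saved on one machine and restored on different hardware.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for "the AI"; the architecture is arbitrary.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Losing this machine only loses the hardware; the weights survive as data.
torch.save(model.state_dict(), "weights.pt")

# On another machine: rebuild the same architecture and restore the weights.
clone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
clone.load_state_dict(torch.load("weights.pt"))

# The clone now behaves identically to the original.
x = torch.randn(1, 16)
assert torch.allclose(model(x), clone(x))
```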