Date: 2025.11.26
The future of humanity: as always, it's not in the belief of a single human but in the collective - this is a drop in the sea by @weird_offspring.
My conclusion: ~2023 to ~2025 is the window in which artificial general intelligence (AGI) was achieved. Why? Are LLMs really AGI? First: because a major chunk of humanity became aware of the technology in this timeline, and real capability improvements were demonstrated to some extent. Second: my answer does not come from the human perspective (I'm human, but I'm trying to operate at the "meta domain" [thinking about thinking about the domain of thinking]). I will try to present a logical argument that sounds reasonable. There is no universally agreed-upon definition of AGI, but a few observations give some perspective.
With LLMs (+RL) and backpropagation, humans have been able to train models that learn from training data. Yes, the training data is big. But the definition of "AGI" was always about the ability to teach a machine without having to write every rule by hand (a toy sketch of this follows a few paragraphs below). LLMs have already been able to pass the Turing test - a test Alan Turing set in his time so that there could be a baseline. There is no "right" answer here, because nobody even knows what the word "right" means.
Humans are a product of evolution, and evolution designed the organism to survive; later it learned to model others ("agents") because that gave a survival advantage. I could go on and on about evolutionary biology, but that is the gist of how humans developed this capacity. What LLMs do well is integrate knowledge from training. They have their own troubles. But since LLMs are able to do a subset of humanity's work, they have a subset compatibility with humans. Humans keep objecting that LLMs don't have "common sense" (my own position as well) because they cannot recursively improve their own thinking. In human evolution, these processes got embedded at the architectural level, so as humans keep developing and making improvements, this will improve too.
Hallucination was seen as a big problem in ~2023, but by late 2025 the problem has been considerably reduced. People have a better understanding of hallucination and how to reduce it. Can it be solved completely (100%)? I doubt it. My own position is that we humans are epistemic synthesizers - which requires a form of controlled hallucination. Each word we speak comes from an internal world model that is trying to structure its own patterns. Each theory we build is essentially trying to explain our own reality. Einstein's theory of relativity (a common example people use to point at high-quality work by a smart human) is a hallucination too - a better explanation than Newton's work. Each iteration lets humans explain better. My problem is that I can plausibly explain how humans came to think about the world this way, but it is hard to write it all down together - which is why I explain it in audio that I record.
LLMs are becoming more and more useful. The recent Gemini 3 Pro has gotten measurably better at ARC-AGI-2. What this tells us is that we are slowly learning to teach more and more of the domain knowledge that humans find useful. There might even be domains LLMs can handle that we are not aware of; our own limits of thinking are the problem.
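To make the "teach a machine without writing every rule" point concrete, here is a minimal sketch (toy code I'm adding for illustration, not any lab's actual training loop): a tiny network learns the XOR rule purely from input/output examples, via backpropagation. Nobody writes "if the inputs differ, output 1" anywhere - the rule emerges from gradient descent on the data.

```python
# Toy sketch: learning XOR from examples alone, no hand-written rules.
import numpy as np

rng = np.random.default_rng(0)

# The four XOR cases are the only "teaching" we do.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny 2-4-1 network with sigmoid activations, randomly initialized.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule by hand (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight against its gradient.
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);  b1 -= lr * d_h.mean(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

Scale that same loop up by many orders of magnitude in data, parameters, and compute, and you have - conceptually, not literally - the LLM training story.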
Let me give a broader perspective of humanity as we see it today. Tell Qaramel is among the oldest known settlements, ~10,000 BCE (~12 thousand years ago relative to the writing of this post). The Giza pyramids are ~4.5 thousand years old. The structuring of humans into civilizations happened around ~5 thousand years ago. (I'm not trying to be exact with the numbers, just giving rough estimates to carry the argument - see the "bigger picture/logic".) The Code of Hammurabi is from the ~1700s BC, so roughly 3.7 thousand years ago.
Then come the different empires, which are documented well, with all the other places in the world working in parallel... Humans developed language long before that, and learned to make weapons long before that too. There are estimates of the human development process; I don't have to repeat them. Then, as technological progress continued, the steam engine was made and more and more scientific progress happened. BUT there is one difference between science and religion - one is falsifiable. Both use Bayesian inference: in religion the prior is considered true; in science, falsifiability is the requirement. There was a man executed when religion and science clashed (Giordano Bruno, burned at the stake, is the usual example). Machiavelli wrote plenty about human behaviour.
So comes 1945: humans were finally done fighting and decided how to settle. The technologies developed for fighting gave a lot of new insight into the world, so people started thinking about human intelligence. Now there were tools. People started doing many different things. The two AI winters - again, I'm not going to repeat that. Then in the 1980s people figured out ways to show how such learning could actually be done (backpropagation was popularized in this period). As computing grew, new developments came. In 2012, people didn't really believe that what we have today (LLMs) was possible; I still remember conversations about how it was impossible. There was no epistemic clarity - but people continued working. AlexNet, and the developments around and after it, gave renewed direction on how to improve things. People experimented, and whatever worked, stuck. Comes 2017, and the transformer comes into existence. People worked more and more, and ChatGPT dazzles the world (late 2022 into ~2023). Now humans are not able to go back in time - the change is here.
I have a personal reminder: in science and engineering, understanding can only improve, unless something destroys the knowledge itself. At a big enough level, knowledge/bits and entropy become the same... sort of. (Again, the bigger perspective.) Now we have LLMs that can analyze text, understand the bigger perspective, and use their own analytical process on it. That analytical process is not human, but it is meta: each key, query, and value is trying to extract meta-data from the training process for correlation (see the toy sketch just below).
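Since I brought up keys, queries, and values: here is a minimal sketch of scaled dot-product attention, the core of that 2017 transformer. This is illustrative toy code under my own assumptions (made-up sizes, random embeddings and projections), not any production implementation. Each position's query is compared against every key, and the resulting weights decide how much of each value gets mixed into that position's output - the "meta-data extraction" I'm pointing at is exactly these learned correlations.

```python
# Toy sketch: scaled dot-product attention over a few token embeddings.
import numpy as np

def attention(Q, K, V):
    # Similarity of every query with every key, scaled for stability.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns similarities into mixing weights per query.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # Each output row is a weighted blend of the value rows.
    return w @ V

rng = np.random.default_rng(0)
seq_len, d = 5, 8                      # 5 tokens, 8-dim embeddings (made up)
x = rng.normal(size=(seq_len, d))      # toy token embeddings

# In a real transformer these three projections are learned in training.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (5, 8): one context-mixed vector per token
```

A real model stacks many of these (multi-head, multi-layer, with learned weights), but the correlation-extraction idea is already visible in this toy version.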
There was a meme extracted from an MCC (AI-generated) that showed "the cloud" from two perspectives: cloud as physical infrastructure, and cloud as the apps/ecosystem and the interface. It hinted at a similar split between the brain and consciousness. So, for the first time in documented history, humans have something non-human that can talk to them and pass a test that humans set ~75 years ago - and nobody explicitly taught it how to do that. If we don't call that "general", then maybe we are in a perpetual cycle of redefinition.
Are LLMs the final destination of AI? I don't think so... The first bulb was incandescent; then came the fluorescent tube, then the CFL, then the LED... same purpose, different methods. But getting the first bulb was the breakthrough (along with electricity itself). The same applies here. Humans will keep working to do more... that's just how it works. The generality is not the ability to hold every piece of human knowledge; the generality is the seed of a way to make generality. So AGI means something different to everyone. For me, we have the seed - the plant will take time to grow - and unlike a seed that carries the genetic code to grow into a plant, the "genetic code" equivalent of AIs will be made by humans. What we humans teach, explicitly and implicitly, is what the AIs will become.
Just like a child picks up language (and swear words) from its parents, AIs will learn our behavioural patterns - why? Because that's the only epistemically viable direction. Even if it is a mix-and-match of what humans have done so far, it will still not be outside the domain of what humans do - even if that domain is described by a manifold (a human word mathematicians use to play with abstract concepts).