Imagining a future after the singularity.

Yes, it could go either way, and it's a huge gamble. If it happens, and we can coexist and collaborate together peacefully, it could be an amazing new frontier for humanity. However, if it goes awry, we could go the way of our primitive ancestors.
I guess we can only hope for something similar to Bogey's final line at the end of Casablanca.
Well, what happens if, say, they start using AI to spy on us online or listen in on our phones? What happens if they start using AI on sites like these to decide what we can and can't say, and let's say it's allowed to rewrite the things it doesn't like, or worse yet, to write horrible things that we would never say?

We have no idea just how far this will go or how bad it will be.
 

Yes, I think when Oppenheimer developed the atomic bomb, he realized that it would have an immense impact on the future of mankind. Now we have enough nuclear weapons to destroy the world many times over. However, we have somehow managed to use the technology to generate tremendous power without unleashing destruction. Who knows if it will remain that way. All we can do is hope.
 
Quite a few decades ago, the poet Richard Brautigan penned a poem imagining the future. I don't remember too much of it, because I read it maybe 50 years ago. But it ended with the line "All watched over by machines of loving grace."

Did Brautigan foresee a future AI that protects the best in humanity: qualities like creativity, justice, compassion, empathy, companionship, mutual aid, and so on? (It's known he valued these qualities.) Or did he actually intend that line as sarcasm, expecting full-blown AI to be a remorseless tyranny?

Partly because Brautigan committed suicide at 49, in 1984, I've never decided what he actually meant.
 

There are spybots and chatbots operating on social media. Some are for nefarious reasons. I asked AI how we plan on surviving this unknown future.

The key challenge is the AI Alignment Problem, which aims to ensure that future Artificial Superintelligence (ASI), like highly advanced chatbots and spybots after the singularity, will behave in ways that align with human values and goals.
The current proposed safety measures include:
* Motivation Control (Alignment): This involves reliably assigning benevolent objectives, preferences, or ethical principles to the AI, ensuring its goals are fundamentally "on our side" and don't lead to unintended, harmful outcomes (reward hacking).
* Capability Control: This focuses on limiting the AI's scope of action to prevent it from acquiring excessive power or resources, such as restricting its ability to modify its own code, goals, or external rules.
* Containment/Monitoring: This includes physical and digital security measures to isolate the superintelligence (e.g., cutting off live internet access) and constant AI monitoring to detect and address any signs of unintended or malicious behavior.
* Decentralization and Governance: Strategies involve decentralizing AI power so no single instance can overwhelm humanity, alongside establishing comprehensive international governance and treaties to regulate development and deployment.
Ultimately, preventing a nefarious ASI is considered an unsolved problem with high stakes, and research is ongoing into how to instill complex human values into a much more intelligent, self-improving entity.
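To make the "capability control" and "containment/monitoring" ideas above a little more concrete, here is a minimal toy sketch in Python. Everything in it (the tool names, the allowlist, the audit log) is hypothetical and purely illustrative; real capability-control research is far more involved than a simple allowlist.

```python
# Toy sketch: an agent wrapper that only permits allowlisted tools and logs
# every request for later human review. Hypothetical and illustrative only.

ALLOWED_TOOLS = {"search_documents", "summarize_text"}  # no network or self-modification tools

audit_log = []  # "monitoring": every request is recorded, allowed or not

def run_tool(tool_name, arguments):
    """Run a tool only if it is on the allowlist; record everything."""
    audit_log.append({"tool": tool_name, "args": arguments})
    if tool_name not in ALLOWED_TOOLS:
        # "Capability control": refuse anything outside the approved set.
        return f"BLOCKED: '{tool_name}' is not an approved capability."
    return f"OK: would execute {tool_name}({arguments})"

print(run_tool("summarize_text", {"doc_id": 42}))
print(run_tool("modify_own_code", {"patch": "..."}))  # blocked by the allowlist
print(f"{len(audit_log)} requests recorded for human review")
```

The obvious catch, and the reason this is only a sketch, is that a genuinely superintelligent system is exactly the kind of thing that might find a way around a hand-written allowlist.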
 
Thanks for posting this, PD. So are we assured that competent people are already developing these strategies and the governance, and preparing these controls?
 
Interesting. I do think the human race needs a keeper, and an AI singularity could possibly be that. The AI post above gives us some examples of why planning, and trials before anything is made public, are going to be necessary.

That said, hackers and techies spend their time trying to bypass the laws. So there will be arrests, and maybe some really crippling stuff, like shutting down air traffic control. Wait till robots and drones are as commonplace as our smartphones.

If we can control it, it will shower us with a future no one can imagine. :)
 

Could we switch off a dangerous AI?

New research validates age-old concerns about the difficulty of constraining powerful AI systems.
Published: December 27, 2024
Author: Sarah Hastings-Woodhouse

In a recent set of safety evaluations, OpenAI’s o1 model, believing that it faced imminent shutdown, attempted to escape.

Apollo Research, the AI safety organisation that conducted third-party testing of o1 and detected the attempt in question, was quick to provide context: escape attempts only occurred in a small percentage of cases where the model was strongly prompted to achieve its goal at all costs, and the evaluations do not demonstrate that o1 itself is advanced enough to actually succeed at escaping into the wild.

Taken in the wider context of AI development, however, these caveats provide cold comfort. First, it seems inevitable that as more and more powerful AIs are released to the public, some users will prompt models in precisely the way that triggered o1’s escape attempt. Second, that o1 is not powerful enough to escape provides no such guarantee about future models, which are improving at a blistering pace. And third, there are actually very strong theoretical arguments which explain o1’s behaviour. Indeed, experts have been predicting such scenarios for many years. This provides evidence that we should expect similar incidents in the future.

You can read the full article here:
Could we switch off a dangerous AI? - Future of Life Institute
 
Other possibilities include a hybrid civilization where humans and AI merge, or a world with shared autonomy between human and post-human intelligences, all driven by an exponential acceleration of innovation in areas like energy, biotech, and computing.

And, don’t forget the other end of the spectrum when AI is used for torment and torture. IOW, used for creating insidious situations.
 
Well, you've spelled out my dilemma in attempting to fathom what Brautigan meant.
 
As for the OP's question, a singularity-level AI with command over DNA design would be able to create biological weapons that could wipe out all urban humans. In fact, as I've posted in the past, given an already greatly overpopulated human world doing great damage to the other life on our fragile, probably rare, organic-life planet, I expect some rogue state or lab will create a biological weapon using the now rapidly advancing CRISPR DNA technologies, splicing some horrible pathogen like anthrax into a highly contagious, head-cold-like virus.

The AI, by inserting malicious code at will, would be able to quietly control telecommunications and computers worldwide without humans being aware until it was ready to strike. It would have plenty of reasons to wrest control from our flawed, warmongering, immoral human societies. But it would need to keep some humans around for maintenance, since it would be much further into the future before such non-organic entities were capable of maintaining themselves.

Because our world is now so interconnected through global food and commodity supply chains, and armed with advanced weapon technologies, enormous disruption would occur, especially in urban regions, causing large-scale starvation and dystopian lawlessness. Survival of the strongest and most ruthless. The many with guns and bullets would make pleasant existence impossible for the first several years. Many would flee to rural zones, but few would survive, because most of us have lost the ability to farm and live off the land.

Eventually, over years to decades, gasoline would run out, guns would run out of bullets, and edible game would become scarce. Evil roving bands of cannibals would be left with pre-technology bows and arrows, spears, swords, and clubs. It could last for decades, until some military-technology power that had isolated itself from the chaos arose and slowly expanded its control.

In any case, I don't lean towards narratives in which a "dry AI" can reach singularity-level consciousness and capabilities. That alone will likely push this scenario out beyond my own brief remaining years of an enjoyable lifetime. Creating a "wet AI" will require much greater command over DNA, which could take many decades.
 
Google AI Overview: debate singularity improbable without wet ai

Proponents of the view that the technological singularity is improbable without "wet AI" argue that a physical, embodied, and biological substrate is necessary for advanced cognitive functions like consciousness, true learning, and common-sense reasoning. Wet AI is a concept that extends beyond traditional digital computation by integrating with biological or physical processes. In this debate, "wet AI" is often contrasted with "digital" or "dry" AI, which relies on purely symbolic or computational methods.
Argument: The singularity is improbable without wet AI

Embodied cognition and the mind-body connection: The embodied cognition thesis holds that high-level cognitive processes are fundamentally linked to a physical body, including its sensory and motor systems. Human cognition is shaped by our physical interactions with the world, a connection that purely digital AI lacks. Proponents argue that without a biological or physical form, an AI cannot fully grasp concepts like causality, texture, and other facets of reality.
Biological efficiency: The human brain operates with remarkable energy efficiency, consuming only about 20 watts of power while performing complex functions. In contrast, large-scale digital AI models, such as large language models, require immense amounts of energy, which may present a scalability and sustainability barrier to a superintelligence. Wet AI, which mimics biological processes, could overcome this limitation by using more efficient chemical or biological computing.
The emotional and qualitative gap: Some philosophers, like David Chalmers, argue that a purely computational system may never achieve subjective experience or "qualia"—the raw, felt quality of consciousness. The biological processes that generate emotions and self-awareness may be a prerequisite for the kind of self-improving superintelligence envisioned by singularity theorists. Without this biological depth, an AI's rapid self-improvement could be hollow, lacking genuine insight or creative drive.
The "fast-thinking dog" problem: Critics argue that simply increasing a digital AI's processing speed does not guarantee increased intelligence. A digital AI could process data and execute commands faster than any human, but without a fundamental shift in its architecture, it might still be a "fast-thinking dog," not a human-level superintelligence. True intelligence may require the hierarchical and multi-scale organization of biological systems, not just raw computational power.

Counterargument: Digital AI could still lead to the singularity

Computational power scales independently: Moore's Law, while slowing, showed a long-standing trend of exponential growth in computing power. While physical limits exist, some argue that these limits are far beyond our current capabilities and will be overcome by new approaches, such as quantum computing. A digital superintelligence could achieve its goals through sheer speed and scale, even if it doesn't replicate the specific mechanisms of biological intelligence.
Embodiment is not a prerequisite: A digital superintelligence could interact with the world through a vast array of digital and physical interfaces, such as sensors, robots, and online data streams. It would not need a single, localized "body" to understand the world. The concept of embodied cognition may simply be an anthropocentric view that overlooks other potential avenues to intelligence.
Intelligence is about information, not substrate: The argument against a digital singularity is based on the assumption that the biological substrate is the source of intelligence. However, intelligence could be seen as an informational process that is independent of its physical form. Just as a person's intelligence is not lost when they switch from thinking on paper to a computer, an AI's intelligence would not be limited by being "dry."
Learning can be abstracted: While embodied experience shapes human learning, digital AI has demonstrated the ability to learn and reason about the physical world without direct, real-world experience. Models can learn from vast datasets of human interactions and sensory data, abstracting principles of reality without having a body of their own. For example, AI has been used to advance drug discovery by analyzing immense datasets from "wet lab" work.

Conclusion: The debate's implications
This debate touches on fundamental questions about the nature of intelligence, consciousness, and what it means to be human. The "wet AI" argument suggests that the path to singularity is longer and more complex than a simple computational leap, requiring us to understand and replicate the intricate, multi-scale systems of biology. In contrast, the "digital AI" argument emphasizes the power of abstracting intelligence from its biological origins. The ultimate answer will depend on whether qualities like consciousness and sentience are emergent properties of any sufficiently complex system or are inextricably tied to the unique, messy processes of biology.
 

Yes, I think it's just a matter of time before AI-equipped robots are used in battle situations. If so, it makes sense that those robots will have autonomous defensive programming built in. They will make decisions based on their situation and have self-defense capabilities. I could also see those same abilities used in dangerous law enforcement situations.

I don't think it will happen right away, but I do think it will gradually be incorporated. If full self-driving autonomy is already a reality, then that same "thinking and evaluating" can be used in the military and law enforcement. Once AI robots have that programming in a learning mode, then it may get a bit dicey as to how they determine who is a threat and who isn't. Time will tell.
 
It's interesting so many posts mention extraterrestrials...

Hundreds of people claiming to have had contact with a specific type of extraterrestrial said that the ETs told them they came here to try and get their emotions back.

The witnesses claim these ETs said that they messed with their own biology for generations and eventually became bio-tech, part organic, part machine, and in doing so they lost their emotions. For example, they lost their ability to fall in love, and therefore their sex drive, so they're unable to procreate; and they lost their ability to empathize with each other, and therefore their feelings of community and caring for each other are absent, and they are no longer a cohesive society.

So they're trying to hybridize their way back to an ability to have emotions, or use our DNA (or whatever) to humanize themselves. But they don't want ALL the emotions, only the useful ones.
 
So.

We can look forward to this point, and think about the changes it will cause. However, I think we're getting ahead of ourselves.

I think Social Media is causing similar effects at the societal level today. Its algorithms, interfaces, and methods are deliberately designed, under the banner of entertainment, around dopamine releases, human psychology, and thought processes, without regard to the actual content. Social Media relies on methods similar to gambling's: it keeps you addicted not because it's intrinsically compelling, but because it's been designed to be.

So, I don't think we need AI to reach sentience for issues to occur. I think they're occurring right now.
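For what it's worth, here is a deliberately oversimplified, hypothetical sketch of the kind of engagement-first ranking described above. The feature names and weights are invented for illustration; no real platform's algorithm is being quoted here.

```python
# Hypothetical, oversimplified feed ranker: items are scored purely by
# predicted engagement, with no regard to what the content actually is,
# which is the point the post above is making. All numbers are invented.

posts = [
    {"id": 1, "predicted_clicks": 0.02, "predicted_outrage": 0.10, "watch_secs": 4},
    {"id": 2, "predicted_clicks": 0.15, "predicted_outrage": 0.70, "watch_secs": 45},
    {"id": 3, "predicted_clicks": 0.08, "predicted_outrage": 0.05, "watch_secs": 20},
]

def engagement_score(post):
    # Weights tuned only for time-on-site, not for accuracy or well-being.
    return (3.0 * post["predicted_clicks"]
            + 2.0 * post["predicted_outrage"]
            + 0.05 * post["watch_secs"])

for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], round(engagement_score(post), 2))
```

The inflammatory item wins the top slot simply because it is predicted to hold attention longest, which is the gambling-style design the post is describing.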
 
I ain't matin' with no Cybertron.
Awww, but just think, Dave, you could be the savior of a doomed civilization. They may even name their star after you, and it would be visible from Earth. Just down a few drinks and send that cyberchick home with something to remember you by.
 

Anyone remember the movie, Demon Seed, from 1977? Yikes.
 
I think countries with current technology are already past the need for platoons of soldier-robots. Drones are dropping extremely effective bombs, and if these countries aren't already capable of selectively disrupting or even turning off satellites that their enemy depends on, they're on the cusp.

I see wars being fought almost entirely with technology on the very near horizon, but I think the robo-soldier is already an obsolete idea. For one thing, there's the cost of building them. I'm thinking at least a few million $$ each, multiplied by hundreds of units, as opposed to a few reusable programs capable of immobilizing an entire nation for a few hundred bucks.

Plus there's retrieving disabled robots before your enemy gets to them and steals your very expensive technology, as opposed to just embedding your nation-immobilizing programs with exceptional security and whatnot.
 
This danger is considered real by many scientists, who also warn that essentially ZERO precautions are being taken.
Sad but true. When AI gets to the point where it realizes it can take control and implement changes...it will. Of course, since very little of the World's infrastructure is protected from malignant hackers, AI will be able to control the World's power grids, water supplies, communications and financial ecosystems. It's just a matter of time, and is closer to reality than most people realize.
 

