The Singularity

nogodsnomasters85

Not Stirred
I thought this might be a perfect time to discuss this, as the Singularity "movement" has been gaining ground with the opening of the Singularity Institute, and an international Singularity Summit is coming soon to New York. For those who are unaware, "The Singularity" is a projected transformation of society and humankind. Named after a cosmological event where gravity becomes infinite, the Singularity is not a fixed concept but has multiple interpretations. The term was first coined by math professor and science fiction author Vernor Vinge. All Singularity theories rest on the idea of a projected point in the future, perhaps the near future, where scientific and technological development enter a kind of feedback loop and increase rapidly and exponentially.

Vinge's conception rested on the idea that the precursor to the Singularity would be the creation of the first artificial intelligence smarter than humans, which would then theoretically be able to design still "smarter" machines, and so on and so forth. Another significant school of thought maintains that this event could take place in human beings themselves, through artificial augmentation via bioengineering, nanotechnology, etc., which could theoretically create a human intelligence great enough to spark the aforementioned feedback loop and essentially evolve into something else altogether.

Scientist and inventor Raymond Kurzweil has described the technological progress of humankind through what he calls "The Law of Accelerating Returns": the basic truism that scientific and technological progress enables future science and technology, and so on. This is exemplified in Moore's Law, the observation that computing power essentially doubles every two years.

The Singularity has its detractors, from neo-primitivists, to people who express grave concern about the negative implications of such powerful technology, to those who simply dismiss it as "the rapture for nerds."
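Just to make the "exponential" part concrete (toy numbers of my own, not from Kurzweil or any of the links below, and the function name `compute_power` is just mine): anything that doubles on a fixed schedule, the way Moore's Law says computing power roughly does, grows by a factor of over a million in forty years. A few lines of Python show it:

```python
# Toy illustration of Moore's Law-style growth: a quantity that
# doubles every 2 years, starting from 1 arbitrary unit at year 0.
# (Invented numbers for illustration, not a claim about real hardware.)
def compute_power(years, doubling_period=2.0, start=1.0):
    """Return the value after `years` of steady exponential doubling."""
    return start * 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(years, compute_power(years))
# 10 years -> 32x, 20 years -> ~1,000x, 40 years -> ~1,000,000x.
```

The exact doubling period is debated, but the point is how hard exponential curves are to intuit: the first stretch looks almost flat, and then the curve explodes.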
For the uninitiated, here are a few links:
Wikipedia article explaining the Singularity
http://en.wikipedia.org/wiki/Technological_singularity

Raymond Kurzweil's Singularity website, with articles by him and other scientists/theoreticians discussing the Singularity
http://www.kurzweilai.net/index.html?flash=1

The website of "H+" ("Human-Plus") magazine, which covers the Singularity and cutting-edge technology
http://www.hplusmagazine.com/magazine

Here is an excellent video of the director of Oxford's Future of Humanity Institute, Nick Bostrom, discussing some basic Singularity concepts, including exponential growth and transhumanism, and the future of the human race.
http://www.bloggingthesingularity.com/2 ... tivate-09/

Thoughts? Feelings? Doubts? Criticisms? Discuss.
 
quick thoughts...

I've seen an idea like this in the writings of Robert Anton Wilson, and it was supposed to happen, amazingly enough, to coincide with the completion of the Mayan Calendar. Now, whether that was a mind-blowing put-on or meant with absolute sincerity, I can't say, but it's a funny idea and so a good one.

There is another somewhat related idea that has to do with all of the Scientific Knowledge (in caps!) that was known at the time that Jesus was believed to have walked the Earth. Don't trip that it mentions Jesus. It's just a setting for a historical concept. The idea is that All Of The Scientific Knowledge That Was Known At The Time Of Jesus is considered a unit of measurement. The theory goes that one man could have known it all. However, that knowledge doubled and doubled again and so on, and this happened faster and faster, and even in the early stages, say the Age of Three Jesus, no one could have known it all.

Of course now we are at a stage where the scientific thinkers become ever more specialized and so "know more and more about less and less".

All of this leads to a feeling, just intuition, not so much logic, that this idea can't really be sustained. It necessarily leads to some problem, because a machine or an entity with artificial intelligence can still only know what it knows, and it can't know everything.

Everything affects everything, and one of these artificially conceived and automatically implemented "ideas" will have effects that are more subtle than can be mapped or predicted by even the most subtle and long-range-capable methods of predictive analysis (???), so that the equation will fail and produce results that might not be disastrous, consciously malign, or evil, but that will certainly, if not destroy us all :eek:, at least set into motion an effect like a skipping record that just can't quite get back in the groove, and so remains stuck at a certain level of progression.
 
There is another somewhat related idea that has to do with all of the Scientific Knowledge (in caps!) that was known at the time that Jesus was believed to have walked the Earth. Don't trip that it mentions Jesus. It's just a setting for a historical concept. The idea is that All Of The Scientific Knowledge That Was Known At The Time Of Jesus is considered a unit of measurement. The theory goes that one man could have known it all. However, that knowledge doubled and doubled again and so on, and this happened faster and faster, and even in the early stages, say the Age of Three Jesus, no one could have known it all.

Of course now we are at a stage where the scientific thinkers become ever more specialized and so "know more and more about less and less".

I think there's some truth to that. However, I would say that the problem isn't the expansion of knowledge (which I think can be regarded as a very good thing) but the limitations of human knowledge. I see the chief limitations of human knowledge as being: 1. Time. We have a limited lifespan, and up until now humans have had to devote large segments of their lives to supporting themselves, either directly, through farming for example, or through work. 2. The limitations of the human brain. Most humans do not have an eidetic memory and require repetition, study, and review to learn; this knowledge can also fade with time, as memories less often accessed begin to fade. Also, the human brain has a finite capacity, theorized to be about 1,000 terabytes, if I'm not mistaken. With bioengineering, nanotechnology, or artificial intelligence, an enhanced or artificial brain could hold a much larger amount of information with greater recall. Emergent technologies could also ultimately provide something very close to immortality. Certainly an artificial lifeform, depending on its power source, could easily live for thousands of years, if not longer.
 
It's not a problem of too much knowledge but a question of managing the knowledge. I need more time to really think about the idea and what it means, but just thought it was interesting and wanted to give some feedback. So what I'm saying isn't set, just a feeling. And it might be human to feel limitations, and possibly they exist only in the human mind.

Space is pretty big, and if some of these feedback loop driven mechanisms did make errors leading to destruction, it's possible that they would only erase themselves and their "sector" (it's space talk), leaving the space there free to be built in or upon again. Like when you empty the recycle bin.

I know that our perspectives are very limited. We all have certain reality coordinates that we have accepted as being permanent just because they have worked for us consistently. But of course a higher form of intelligence with a longer lifespan would have a different perspective.

Maybe my feeling is more anthropological, limited by my status as a human.
 
Interview with Ray Kurzweil
http://www.youtube.com/watch?v=cc5gIj3jz44

Singularity Summit at Stanford University (Part 1 of 3)
http://www.youtube.com/watch?v=9PWXrnsSrf0

Nick Bostrom on the future of humanity
http://www.youtube.com/watch?v=Yd9cf_vLviI
 
Space is pretty big, and if some of these feedback loop driven mechanisms did make errors leading to destruction, it's possible that they would only erase themselves and their "sector" (it's space talk), leaving the space there free to be built in or upon again. Like when you empty the recycle bin.

This reminds me of what Bostrom says about extraterrestrial life. Bostrom reminds us of the millions of galaxies out there: through our technology, like the Hubble telescope, we can view a limited but still enormous swath of the universe, including many, many different galaxies. Now, in all of this observed intergalactic territory, and throughout human history, we have seen no evidence of intelligent life. This leaves us with two possibilities: either we are the only life in the universe, or life has arisen billions of times on billions of worlds. If the latter is true, it must mean that all of those civilizations have ultimately collapsed and been totally destroyed. As he says, there are Earth-like planets that are billions of years older than Earth; even if a civilization took hundreds of cycles of growth and collapse to get it right, we'd see the evidence. This must mean there is an endpoint, a "filter," as he calls it, where life is expunged. This filter could be ahead of us or behind us. The more advanced any alien fossils we found, the closer the "filter" would be to the present, for the statistical likelihood that the human race will prevail where probably billions of species have failed is vanishingly small. This is a thoroughly depressing analysis, and while I have previously looked forward to the discovery of extraterrestrial life, and some part of me still does, the internal logic is brilliant. Here's the link: http://www.youtube.com/watch?v=_W8zu7lFmhY
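To make the shape of that argument concrete, here's a toy calculation (the numbers and the function `prob_total_silence` are entirely my own invention, not Bostrom's): if civilizations arise on a billion worlds and each one independently survives the "filter" with some probability, total silence is only likely when that survival probability is vanishingly small.

```python
# Toy model of the Great Filter argument, with invented numbers.
# If N civilizations arise and each one independently survives the
# filter with probability p, the chance that NONE survive is (1-p)**N.
def prob_total_silence(n_civilizations, p_survive):
    return (1.0 - p_survive) ** n_civilizations

n = 1_000_000_000  # assume a billion worlds where life arose

# Even a one-in-a-million survival chance per civilization makes
# total silence essentially impossible:
print(prob_total_silence(n, 1e-6))   # effectively zero

# Silence only becomes the expected outcome at absurdly long odds:
print(prob_total_silence(n, 1e-12))  # roughly 0.999
```

So if the sky really is silent, the implied per-civilization survival odds are pushed down toward one-in-a-trillion territory, which is the depressing part.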

I know that our perspectives are very limited. We all have certain reality coordinates that we have accepted as being permanent just because they have worked for us consistently. But of course a higher form of intelligence with a longer lifespan would have a different perspective.

Maybe my feeling is more anthropological, limited by my status as a human.

This is a very good point. This is an inherent problem: the limitations of human knowledge. We can't possibly identify with a superintelligence. If this "Singularity" were to occur, we would bring ourselves, deliberately, to the next stage in evolution (something that would be a first in the history of our planet). Not only that, it would not simply be the next step, but a giant leap of drastic proportions. The word "human" would no longer necessarily apply. Our minds cannot sufficiently understand, judge, or appreciate such a being or its perspective. It would be like trying to understand "God," if there were one: an almost pointless exercise. "He" would think in terms we couldn't possibly comprehend.
 

Lovely can i have a flake and raspberry sauce on mine.
 
I think you already know my position on this, nogods! (Terrified :p)

Heh... it's good that we can disagree without losing our sense of humor. Yes, you mentioned having some reservations. I'd be lying if I said I didn't, either. However, I don't see it as avoidable. Barring some huge cataclysm, which nobody sane wants, technological progress will continue. As I've said, the desire for knowledge is an essential component of the human condition. It is as fundamental as sympathy or empathy. If we were to lose or deny this, we would be denying part of our humanity. The search for knowledge is inseparable from technological advancement; theory leads to practice. We are very close to developing these technologies. For instance, we can now interface the brain and computers sufficiently that disabled people can move a cursor on a computer mentally. (http://www.hplusmagazine.com/articles/ai/brain-computer-interfacing-prosthetic-limbs-telepathy-chips) It seems pretty safe to say that we will continue to progress towards a more sophisticated connection between computers and the brain, enabling things like a full-immersion reality simulation involving all five senses, or, a much more interesting application, the enhancement of the human brain, specifically in memory capacity (both in the amount of storage and in the clarity and permanence of memories) and processing speed. This could have remarkable ramifications for humankind. The development of the first "superintelligence" would, I think, have to be a watershed event, by necessity. A superintelligence, whether human or artificial, would be capable of improving itself, and so forth, which could very easily lead to an exponential increase. This is a truly tantalizing prospect.

I am guardedly optimistic for two reasons: first, because I view technology, like language, as essentially neutral; therefore we are in complete control of the end result, for good or bad. It's our decision. Second, because while technological progress may present new problems, it also provides us with better tools to fix those problems. The best form of insurance, then, is to dive in.

I don't know how to properly articulate the wonder I feel when I read about artificial red blood cells which could not only save lives but extend the limits of human endurance almost to rival comic-book superheroes. Theoretically, these "respirocytes" could allow you to sit underwater for over an hour. They would dramatically enhance athletic performance. Here's a short intro: http://dev.nsta.org/evwebs/10955/index.html This is but a minor example of the potential of nanotechnology, which is discussed more broadly here: http://www.hplusmagazine.com/articles/nano/how-close-are-we-real-nanotechnology This is real Promethean fire.

Black, red and green forever! :D

I haven't given up on my Anarchist ideals, but all this has made me step back and think. Ultimately, right now I'm trying to synthesize my libertarian socialist/secular humanist/whatever-you-want-to-call-it philosophy with these new ideas, to combine them. The liberatory potential of these technologies is extreme. Near-infinite, clean energy is within our reach; using nanotech solar power cells and perhaps other sources like nuclear fusion (http://hplusmagazine.com/articles/m...ornia-scientists-advance-toward-fusion-energy) this could be achieved. That alone would have massive social ramifications. Through advancements in robotics, computers (like quantum computers, for example) and nanotechnology, production could become almost entirely automated. Wage-slavery may simply become obsolete, freeing people up to develop themselves artistically and intellectually, and to enjoy life. The revolution might just be won in a laboratory. (But don't let that slow you down.) At the very least, with nanotech and stem cells it's very likely you could live to be 150 years old, maybe older, and maintain peak performance for much of that. Think of all the protests you could have over the course of a century and a half. You could be one productive little class-warrior. :D
 
Heh... it's good that we can disagree without losing our sense of humor. Yes, you mentioned having some reservations. I'd be lying if I said I didn't, either. However, I don't see it as avoidable. Barring some huge cataclysm, which nobody sane wants, technological progress will continue. As I've said, the desire for knowledge is an essential component of the human condition. It is as fundamental as sympathy or empathy. If we were to lose or deny this, we would be denying part of our humanity. The search for knowledge is inseparable from technological advancement; theory leads to practice. We are very close to developing these technologies. For instance, we can now interface the brain and computers sufficiently that disabled people can move a cursor on a computer mentally. (http://www.hplusmagazine.com/articles/ai/brain-computer-interfacing-prosthetic-limbs-telepathy-chips) It seems pretty safe to say that we will continue to progress towards a more sophisticated connection between computers and the brain, enabling things like a full-immersion reality simulation involving all five senses, or, a much more interesting application, the enhancement of the human brain, specifically in memory capacity (both in the amount of storage and in the clarity and permanence of memories) and processing speed. This could have remarkable ramifications for humankind. The development of the first "superintelligence" would, I think, have to be a watershed event, by necessity. A superintelligence, whether human or artificial, would be capable of improving itself, and so forth, which could very easily lead to an exponential increase. This is a truly tantalizing prospect.

I am guardedly optimistic for two reasons: first, because I view technology, like language, as essentially neutral; therefore we are in complete control of the end result, for good or bad. It's our decision. Second, because while technological progress may present new problems, it also provides us with better tools to fix those problems. The best form of insurance, then, is to dive in.

I don't know how to properly articulate the wonder I feel when I read about artificial red blood cells which could not only save lives but extend the limits of human endurance almost to rival comic-book superheroes. Theoretically, these "respirocytes" could allow you to sit underwater for over an hour. They would dramatically enhance athletic performance. Here's a short intro: http://dev.nsta.org/evwebs/10955/index.html This is but a minor example of the potential of nanotechnology, which is discussed more broadly here: http://www.hplusmagazine.com/articles/nano/how-close-are-we-real-nanotechnology This is real Promethean fire.

I completely agree with you about knowledge being an essential part of being human; I guess our disagreement would be over the types of knowledge which are constructive/beneficial to us. I know the difference between an alkane and an alkene, the formula for gravity on the moon... etc. And yet I can't name four edible plants that grow wild within 5 miles of my house. We need to improve our knowledge in ways which help us to live sustainably with our local environment.

I don't really know where I stand on the neutrality of technology. I used to think it was entirely neutral. Now I'm not sure. To use the obvious example of computers, obviously they're massively helpful to the radical movement, in terms of word processing and publicising events, but we've yet to find a way of manufacturing them which isn't negative, so I really don't know.

Essentially, I think I might be missing the point slightly. I can't see how being able to sit on the bottom of a swimming pool for hours is going to improve my quality of life.



I haven't given up on my Anarchist ideals, but all this has made me step back and think. Ultimately, right now I'm trying to synthesize my libertarian socialist/secular humanist/whatever-you-want-to-call-it philosophy with these new ideas, to combine them. The liberatory potential of these technologies is extreme. Near-infinite, clean energy is within our reach; using nanotech solar power cells and perhaps other sources like nuclear fusion (http://hplusmagazine.com/articles/m...ornia-scientists-advance-toward-fusion-energy) this could be achieved. That alone would have massive social ramifications. Through advancements in robotics, computers (like quantum computers, for example) and nanotechnology, production could become almost entirely automated. Wage-slavery may simply become obsolete, freeing people up to develop themselves artistically and intellectually, and to enjoy life. The revolution might just be won in a laboratory. (But don't let that slow you down.) At the very least, with nanotech and stem cells it's very likely you could live to be 150 years old, maybe older, and maintain peak performance for much of that. Think of all the protests you could have over the course of a century and a half. You could be one productive little class-warrior. :D

Surely this would just mean that small-scale businesses would be completely unable to compete with the 'superior' production methods used by companies that could afford the technology, though? Although 150 years of annoying cops does sound fun. ;)
 
This is a newsletter from the Arlington Institute Future Edition, which is dedicated to developing a vision towards 2012 and planning for extraordinary change by mapping the latest scientific advances and studies -

http://www.mynewsletterbuilder.com/tools/view_newsletter.php?newsletter_id=1410004773

Danny, I think, mentioned managing knowledge, rather than the knowledge itself, as the worry. Why would we want artificial intelligence to take over at the highest level? That's suicidal, surely? Nothing new to our race, of course, but still? What do people want, seriously? Wall-EEeee!!
 
This is a newsletter from the Arlington Institute Future Edition, which is dedicated to developing a vision towards 2012 and planning for extraordinary change by mapping the latest scientific advances and studies -

http://www.mynewsletterbuilder.com/tools/view_newsletter.php?newsletter_id=1410004773

Danny, I think, mentioned managing knowledge, rather than the knowledge itself, as the worry. Why would we want artificial intelligence to take over at the highest level? That's suicidal, surely? Nothing new to our race, of course, but still? What do people want, seriously? Wall-EEeee!!

Despite having strong primitivist tendencies...I would like a world with Wall-E. So cute!
 
I completely agree with you about knowledge being an essential part of being human; I guess our disagreement would be over the types of knowledge which are constructive/beneficial to us. I know the difference between an alkane and an alkene,

I had to look this up.

the formula for gravity on the moon...etc. And yet I can't name four edible plants that grow wild within 5 miles of my house. We need to improve our knowledge in ways which help us to live sustainably with our local environment.

Some knowledge has more practical applications; however, it's all still valuable. Certain information may seem useless: I know that the turkey vulture has the largest olfactory canals relative to its body size, that the jellyfish's sting is caused by nematocytes, the moons of Mars (Phobos and Deimos), and how black holes form. This may not be biologically essential, but I'm glad for it and I feel I am better off for knowing these things. Living a more sustainable existence is a great idea; that's why I drive a Prius. The point of contention is how much we are willing to sacrifice for the sake of sustainability.

I don't really know where I stand on the neutrality of technology. I used to think it was entirely neutral. Now I'm not sure. To use the obvious example of computers, obviously they're massively helpful to the radical movement, in terms of word processing and publicising events, but we've yet to find a way of manufacturing them which isn't negative, so I really don't know.

The "negativity" is sort of relative. We could return to a nomadic hunter-gatherer existence if we wanted to do what's best for the environment, but nobody's going to want to do that (with good reason), and you'd forfeit human progress and a piece of who we are. Like I said, it's about balancing these tendencies. Although, as I've said before, the best hope for reducing our impact, or even repairing the biosphere, is cutting-edge tech.

Essentially, I think I might be missing the point slightly. I can't see how being able to sit on the bottom of a swimming pool for hours is going to improve my quality of life.

In itself, probably not much, although it could be great if you were diving off a coral reef or something. Your physical endurance would also be drastically enhanced. Professional cyclists and other athletes sometimes resort to what's called "blood doping," injecting extra red blood cells. If we could build these respirocytes today and give you an injection, you could feasibly beat Lance Armstrong in a race. This is just the beginning; again, a minor example of how technology can, and probably will, augment and enhance us. What I find most provocative, however, is the idea of augmenting the brain, which would be the most advantageous.

Surely this would just mean that small-scale businesses would be completely unable to compete with the 'superior' production methods used by companies that could afford the technology, though?

Well, my response to that is twofold. First of all, while new technology is often expensive, it moves very quickly into the private sphere, where it gets improved upon and mass-produced for commercial use. My MP3 player is superior in many ways to most computers of 20-odd years ago, and much smaller and more efficient. Secondly, again, barring some existential catastrophe, I think we will reach a stage of technological development where currency will really cease to have value and eventually be discarded altogether.

Although 150 years of annoying cops does sound fun. ;)

That's the spirit! :D
 
This is a newsletter from the Arlington Institute Future Edition, which is dedicated to developing a vision towards 2012 and planning for extraordinary change by mapping the latest scientific advances and studies -

http://www.mynewsletterbuilder.com/tools/view_newsletter.php?newsletter_id=1410004773

Great link! I've only read some of it so far, but very interesting and informative.

Here are some cool Singularity-related links for anybody who's interested:

A Singularity FAQ for Beginners
http://jwbats.blogspot.com/2005/07/singularity-faq-for-dummies.html

The Singularity
http://singularity.org/

Danny, I think, mentioned managing knowledge, rather than the knowledge itself, as the worry. Why would we want artificial intelligence to take over at the highest level? That's suicidal, surely? Nothing new to our race, of course, but still? What do people want, seriously? Wall-EEeee!!

I might be dating myself here, but I'm more anxious to meet these guys:
johnny5.jpg

star-trek-lt-cmdr-data.jpg
C-3PO.jpg
 
It's heartening that there is still a strain of Utopian thinking out there (and in here).

Unfortunately, given our collective human history, I think Dystopian visions of the future seem more likely.

Still, where there's Higher Math there's hope.
 
Huh. I clicked on the thread thinking "The Singularity" was the name of a self-help book and that I'd finally discover the secret to losing weight and making money on the stock market.

Technology will probably cure the ills of the species and the planet, only when they're cured, a hundred or two hundred years from now, the species and the planet will be unrecognizable to a time traveler from 2009. The patient will be cured through death, or, really, metempsychosis. As in a real black hole, as humans approach the singularity, time and space will slow, forever freezing humanity at the brink of breaking through to the new world. Nothing of us will survive, which is the essential failing of the Star Trek vision.
 
Nothing of us will survive, which is the essential failing of the Star Trek vision.

It takes some nerve to post that on a geek thread.

Culturally, Star Trek has already borne fruit. I'm afraid we're in Red Dwarf territory now.
 
It takes some nerve to post that on a geek thread.

Culturally, Star Trek has already borne fruit. I'm afraid we're in Red Dwarf territory now.

:lbf:

Listen, I'm nothing close to a Trekkie, but I like the shows when I see them, and once in a while I dig the movies.

The future may indeed have holodecks and warp drives and particle beam teleportation devices.

What it's not going to have is a thrill-seeking, womanizing, Shakespeare-quoting humanist flying through space disseminating Arnoldian cultural ideals in between sack-sessions with green-skinned alien chicks. Technology won't render the universe safe for the Kirks of the world. It's going to drown them in the rushing river of history like unwanted kittens in a burlap sack and leave the colonization of space to...well, to beings somehow created by humans, but nothing like them. The Federation will only become The Federation after it becomes The Borg.

Think of it like this. We all think, "I will be so much happier when I am rich". But you'll never get rich. Even if you have more money than Bill Gates and Warren Buffett combined, you won't be rich. Money changes you. Once you're wealthy there is no more you. You walk through the doorway and the doorway changes you. In this way all these techno-nerdy-Utopian visions are true and false at the same time. Human beings no longer see the future because they know there is no longer a future for human beings. To imagine one is an act of exceedingly cruel irony, though not devoid of humor.
 
The technology to substantially improve the quality of life for millions of the world's inhabitants has existed for years. But it has gone unused, or been used to the advantage of a few.

Like Munchy says, it's not knowledge itself, but how it is used, disseminated, available.
Humanity has proved ill-equipped to put the resources we have to wise use. It will take a change in values as well as technology to bring about a better life for all. There must be drastic changes in the way we behave individually and collectively if we are not all to end up hurtling down the rails to hell.

But I wonder (with my superficial understanding of this concept) if people are not putting their Faith in science to save us, when all else has failed.
 


One Steve Wright wrote an article called "Reimagining 'Value' for a Post-Crisis World"...


The reason why money can't buy love is because, ultimately, love is the value that money represents. Money is a proxy for value and value is that which holds us together and makes us relevant, love. This feels cheesy or even embarrassing because we have broken our social contract with value. Specifically, we have inappropriately imbued money with a value of its own, disassociating money from its role as an intermediary, as a temporary representation of value in a chain of transactions. Money has become the object as opposed to the expedient. It is my thesis that value creation must be the frame within which wealth creation fits; that our humanity can no longer be subjugated to our economy due to a false primacy of our intermediary for value, cash. I believe that our economy should serve our humanity....

...Bernard Madoff intentionally defrauded his clients but the calculus of what he did is no different than what our global financial institutions have done to all of us. Their sole concern was how money could make more money, completely divorced from creating anything real. To be sure, the new money could then be used to build, hire, educate, serve, etc., but, as is my thesis, that is the tail wagging the dog...

...Our current system with its overly abstracted and evidently exploitable assignation of value constrains the network to have fewer and inflated hubs and any nuance or complexity in the definition of value is lost in the translation through currency. Value creation should be a market design principle. Wealth creation is a market participant motivation. An effective market should ensure value creation while facilitating wealth creation, not the other way around. Additionally, an effective market would leverage our social (human) network defining "valuable" more accurately by harvesting connections to what we value. Again, the reason to have an economy is to serve humanity... ( http://socialentrepreneurship.change.org/blog/view/reimagining_value_for_a_post-crisis_economy )


Also, the eminent Albert Bandura et al. reported in 2007 on moral disengagement and its effects: how "the influential role played by selective moral disengagement for social practices that cause widespread human harm and degrade the environment. Disengagement of moral self-sanctions enables people to pursue detrimental practices freed from the restraint of self-censure. This is achieved by investing ecologically harmful practices with worthy purposes through social, national, and economic justifications; enlisting exonerative comparisons that render the practices righteous; use of sanitising and convoluting language that disguises what is being done; reducing accountability by displacement and diffusion of responsibility; ignoring, minimising, and disputing harmful effects; and dehumanising and blaming the victims and derogating the messengers of ecologically bad news. These psychosocial mechanisms operate at both the individual and social systems levels..." ( - http://www.worldacademy.org/files/MD ecology_0.pdf )

This kind of awareness is a hopeful response, I think. I've only picked up the gist of what this Singularity phenomenon is about; I must soon investigate the useful links posted already. :):thumb:
 