Monday, 21 October 2013

Resurrecting dinosaurs will remain a Jurassic Park dream

Contrary to earlier reports, DNA doesn't survive even decades in tree sap!




On the same day that the latest installment of the Jurassic Park film series has been confirmed, a study published in PLOS One has detailed experiments that seem to demonstrate once and for all that dinosaurs will never again walk the Earth.
The 1993 film, based on a book by Michael Crichton, depicts a theme park island filled with dinosaurs, resurrected from ancient DNA extracted from fossilized mosquitoes trapped in amber. For a while, that science didn’t seem to be entirely fiction. In the early 1990s, several scientists announced they had extracted DNA from insects fossilized in amber as long as 130 million years ago. Insects from this time in Earth’s history, the early Cretaceous period, would have flown among dinosaurs (including giant, long-necked sauropods, among the largest creatures ever on land) as well as creatures such as flying pterosaurs, swimming plesiosaurs, feathered birds, and mammals.
Lebanese amber from this period was until recently the oldest in the world, older than the more common Dominican amber, which formed around 16 million years ago, and the 49-million-year-old amber of the Baltic. But last year, tiny mites were found for the first time in amber dating from the Triassic period, 230 million years ago.
While the premise of the film—that dinosaur DNA could be extracted from the guts of a preserved mosquito that had recently dined on one—seems reasonable, the fragile nature of DNA and the huge expanse of time that has passed have led many experts to doubt claims to have extracted any DNA that old—including DNA from the insect itself.
David Penney, a palaeontologist and expert in amber-preserved spiders and insects at the University of Manchester, carried out experiments to try to confirm once and for all whether DNA could be extracted from creatures fossilized in amber. With Terry Brown, an ancient-DNA expert also at the University of Manchester, they used the latest “next generation” DNA extraction and sampling techniques to avoid DNA contamination.
“We used Manchester’s specialized, dedicated laboratories that are only used for analyzing ancient DNA,” Penney said. “Any DNA traces will be tiny pieces of ancient, fragmented material, so it’s important to avoid contamination with modern DNA.” The laboratories are sterilized, the air filtered, and scientists wear full-body decontamination suits.
The specimens from which scientists claimed to have successfully extracted DNA were of stingless bees, and Penney used examples of the same species. One was about 10,600 years old, the second was preserved just after World War II—only about 60 years ago. Both samples were extracted from copal, a hardened form of tree sap that had encased the insects; it’s an intermediate stage that had not fully fossilized into amber. Chemicals were used to dissolve away the copal before samples were taken from the creature held inside.
The results were pretty conclusive: “In the oldest specimen we found no viable DNA,” Penney said. “In the newer sample, we found various bacterial and other DNA, but nothing that was certifiably from the bee.”
Brown explained that an older technique known as PCR was used in the 1990s experiments, and it may have caused problems: “The process, called polymerase chain reaction, will preferentially amplify any modern, undamaged DNA molecules that contaminate an extract of partially degraded ancient ones and give false positive results that might be mistaken for genuine, ancient DNA.”
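To see why even a trace of contamination can defeat PCR on ancient material, here is a minimal toy model. The starting copy numbers and per-cycle amplification efficiencies are illustrative assumptions, not figures from the study; the point is only that a few intact modern molecules, amplified efficiently, quickly swamp a much larger pool of degraded ancient fragments.

```python
# Toy PCR model: why a trace of modern DNA can swamp degraded ancient DNA.
# Efficiencies and starting copy numbers are illustrative assumptions only.
ancient_copies, ancient_eff = 10_000, 0.30   # fragmented templates amplify poorly
modern_copies, modern_eff = 10, 0.95         # a few intact contaminant molecules
CYCLES = 35

for _ in range(CYCLES):
    ancient_copies *= (1 + ancient_eff)
    modern_copies *= (1 + modern_eff)

print(f"ancient: {ancient_copies:.2e}  modern contaminant: {modern_copies:.2e}")
# Under these assumptions the handful of modern molecules ends up
# outnumbering the ancient DNA by more than 1000 to 1.
```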
The process of fossilization, whether in rock or amber, chemically changes the makeup of the organism it preserves. Over time, with heat and pressure, the remains are transformed, and they no longer contain the organic material that might harbor DNA.
If viable DNA cannot be extracted from a bee that is only as old as someone living today, there is no chance that it could be obtained from specimens hundreds of millions of years old. But that doesn’t mean the amber samples are useless.
“The preservation of these creatures in amber is remarkable, and they give us an insight into the past and can shed light on the possible future of the tropical forests of today,” Penney said. “I suppose it’s a bit of a shame that we can’t extract DNA from these creatures—I was even expecting to find some in the younger specimen—but it seems Jurassic Park must remain in the realms of fiction.”
This article was first published at The Conversation.

Friday, 18 October 2013

The Wow! Signal: Intercepted Alien Transmission?






SETI, the search for extraterrestrial intelligence, has seen astronomers scouring the sky for decades in hopes of receiving artificially generated radio signals sent by alien civilizations. There is a chance we have already found just such a signal, and 1977 gave us the most tantalizing glimpse yet.
Nicknamed the “Wow!” signal, this was a brief burst of radio waves detected by astronomer Jerry Ehman who was working on a SETI project at the Big Ear radio telescope, Ohio. The signal was, in fact, so remarkable that Ehman circled it on the computer printout, writing “Wow!” in the margin — and unintentionally giving the received radio signal the name under which it would become famous.


SETI: No Alien Life Found, Yet!

Despite a lot of effort, no source has ever been identified for the signal, and no repeat has ever been detected. It's a complete mystery. The only conclusion that can be drawn is that if the signal truly did originate in deep space, then it was either an astrophysical phenomenon we've never seen before, or it truly was an intercepted alien signal.

To explain scientific observations, the normal method is to construct hypotheses and then test them. If your hypothesis is incorrect, it will fail to explain the observation. You can then continue this way, using different hypotheses, until you find something which can accurately describe what you’ve observed (if you ever watch Mythbusters, you may be familiar with how this works).

But with the Wow! signal, researchers ran into difficulty. After trying and failing to find any repeat of the signal, Ehman was skeptical of its origin, stating that “something suggests it was an Earth-sourced signal that simply got reflected off a piece of space debris.” But when he tried to investigate that explanation, he only found more problems.

Investigations found that it was scarcely possible the signal could have originated on Earth, and reflection off a piece of space junk was equally unlikely. The received signal was very specific, and these explanations required too many assumptions. Occam's razor, the principle that the explanation requiring the fewest assumptions is usually preferable, pointed towards the signal having an astrophysical origin. But that still offered no explanation of what it might be.

The Hydrogen Line

The curious Wow! signal is more or less a perfect match for what would be expected from a received extraterrestrial transmission. It’s been closely analyzed as a result, but to date no one has come up with a satisfactory explanation for where the signal came from.

For a start, the signal's intensity rose and fell over a period of 72 seconds, exactly what you would expect as Earth's rotation carried a single fixed point in the sky through the Big Ear telescope's field of view. This gave the signal the characteristic signature of a genuine astronomical source, one that would be nearly impossible for any Earth-bound transmitter to mimic.
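The 72-second figure is what drift-scan geometry predicts for a fixed transit telescope. As a rough consistency check, the short calculation below works out the antenna beam width implied by a 72-second crossing; the declination (roughly -27 degrees, in the general Wow! signal region) and the simple geometry are assumptions used only for illustration.

```python
# Drift-scan geometry check: how wide a beam would a fixed (transit) telescope
# need for a celestial source to take ~72 s to cross it? Declination is assumed.
import math

SIDEREAL_RATE_ARCSEC_PER_S = 360 * 3600 / 86164.1   # ~15.04 arcsec of RA per second
declination_deg = -27.0                             # approximate Wow! signal region
duration_s = 72.0

drift_arcsec_per_s = SIDEREAL_RATE_ARCSEC_PER_S * math.cos(math.radians(declination_deg))
beam_width_arcmin = duration_s * drift_arcsec_per_s / 60
print(f"implied beam width ~ {beam_width_arcmin:.0f} arcmin of right ascension")
```

The answer comes out at a few tens of arcminutes, exactly the scale of a radio telescope beam and nothing like the behaviour of a local transmitter.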

It also stood out dramatically over the background noise found in deep space, being about 30 times louder than anything else around it. But by far the most interesting thing about this signal was its frequency.

The signal was also very sharp, appearing at only a single frequency. Natural radio sources don't work like that: they emit across a range of frequencies, spreading their energy over a broad band. The Wow! signal, by contrast, showed up at one very specific frequency, approximately 1420 MHz.

1420 MHz, also known as the hydrogen line, is a frequency internationally banned from use by terrestrial radio signals because of its use in radio astronomy. Astronomically, it’s usually emitted by neutral hydrogen atoms in interstellar space. It’s observed roughly evenly in all directions, and has been used before to help map out the galaxy. But in the SETI program, it has another use.
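That frequency corresponds to the famous 21 cm line of neutral hydrogen, which follows directly from the transition frequency:

```python
# The hydrogen-line frequency corresponds to a wavelength of about 21 cm.
C = 299_792_458.0            # speed of light, m/s
f_hz = 1_420.405_751e6       # hyperfine transition frequency of neutral hydrogen
print(f"wavelength = {C / f_hz * 100:.1f} cm")   # ~21.1 cm
```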


Hydrogen is the simplest and most abundant element in the universe, and any intelligent civilization would know of this frequency's presence in space and would probably use it to make astronomical observations. As a result, SETI researchers consider it a logical frequency to check for any alien transmissions intended to be heard. It's just as logical that any astronomers elsewhere in the galaxy might think the same way.

Is Anybody Out There?

The final question in the mystery is where exactly the signal came from. Because of the way in which the Big Ear was designed, the signal’s source can be narrowed down to one of two small regions in the sky. But that’s as precise as it gets.

This puts the source of the signal somewhere in the constellation of Sagittarius. There are a handful of nearby stars, but it’s impossible to tell precisely where the signal originated. At least, not unless we ever hear a repeat signal. And given that no repeat signal has yet been found in any of the searches, it’s probably best not to hold your breath.

To date, most SETI searches have operated by sweeping the sky, observing any spot for only a few minutes at a time. While this allows a lot of coverage, it also means the likelihood of eavesdropping on any signal that happens to be pointed in our direction is minimal. The other approach would be more like the way the Kepler mission worked: staring at one particular patch of sky and waiting. Of course, while we now believe that exoplanets are common across the galaxy, we have much less idea when it comes to alien transmissions; for all we know, we may waste years looking in the wrong direction.
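A toy calculation shows how poor the odds of an overlap are under a sweep strategy. The beacon's burst length and repeat period below are pure assumptions, chosen only to illustrate the argument, not estimates of any real transmitter.

```python
# Toy odds of a sweeping survey overlapping an intermittent beacon.
# The beacon's duty cycle and the dwell time per pointing are pure assumptions.
dwell_min = 3.0          # minutes a sweeping survey spends on one patch of sky
burst_min = 1.2          # assumed length of each transmission aimed our way
period_min = 24 * 60.0   # assumed repeat period of the beacon (once a day)

p_single_visit = min(1.0, (dwell_min + burst_min) / period_min)
print(f"chance of catching the beacon in one pointing: {p_single_visit:.3%}")
# ~0.3% per visit under these assumptions, which is why a stare-and-wait
# strategy looks attractive despite covering far less sky.
```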
Unfortunately, there’s no way for us to know what exactly caused the Wow! signal. As much as some of us would love to use it as proof of extraterrestrial life, that would be a leap of faith, and unscientific at best. Astronomer Robert Gray describes it as “…a tug on the cosmic fishing line. It doesn’t prove that you have a fish on the line, but it does suggest that you keep your line in the water at that spot.”

Logically, the conclusion is that the Wow! signal very likely originated in deep space, and that it was either a completely unknown astronomical phenomenon or an intercepted alien broadcast. With nothing else to go on, there's no way to prove or disprove either idea.

For now, the Wow! signal remains nothing more than a vague but enthralling hint that there may be more lurking out there in this galaxy of ours than we currently know of.

Sunday, 13 October 2013

Use Universal Gesture Control From Any Room In Your House!

What Will It Take To Find Life Elsewhere In The Universe?



Tuesday, 8 October 2013

Government Technology to Read Your Thoughts and Implant New Ones!


Have you ever thought about something you never shared with anyone, and been horror-struck at the mere thought of someone finding out your little secret? If you have, then you probably have all the more reason to be paranoid now, thanks to new and improved security systems being developed around the world to deal with terrorism that inadvertently end up impinging on one's privacy.
Some of the countries involved in such programmes include the USA, the UK, Spain, Germany and France. Recently, the US National Security Agency (NSA) has developed a very efficient method of controlling the human brain. The technology, called Remote Neural Monitoring (RNM), is expected to revolutionise crime detection and investigation.

What is it?

RNM works remotely to control the brain and to read and detect any criminal thought taking place inside the mind of a possible perpetrator. Research studies have shown that the human brain thinks at a rate of about 5,000 bits per second and cannot compete with supercomputers operating via satellites, implants and biotelemetry. Each human brain has a distinctive bioelectric resonance system. RNM relies on supercomputers, which can send messages through an implanted person's nervous system in order to influence their performance in a desired way.
RNM has been developed after about 50 years of neuro-electromagnetic involuntary human experimentation. According to many scientists, DNA microchips are expected to be implanted in the human brain within a few years, which would make it inherently controllable. With RNM, it will be possible to read and control a person's emotional thought processes along with the subconscious and dreams. At present, around the world, supercomputers are monitoring millions of people simultaneously at a speed of 20 billion bits per second, especially in countries like the USA, Japan, Israel and many European countries.
RNM has a set of programs functioning at different levels, such as the signals intelligence system, which uses electromagnetic frequencies (EMF) to stimulate the brain for RNM, and the electronic brain link (EBL). The EMF brain stimulation system has been designed as radiation intelligence, meaning it receives information from inadvertently originated electromagnetic waves in the environment; it is not related to radioactivity or nuclear detonation. The recording machines in the signals intelligence system have electronic equipment that investigates electrical activity in humans from a distance. This computer-generated brain mapping can constantly monitor all electrical activity in the brain, and the recording aid system decodes individual brain maps for security purposes.

What does it do?

For purposes of electronic evaluation, electrical activity in the speech centre of the brain can be translated into the subject's verbal thoughts. RNM can send encoded signals directly to the auditory cortex of the brain, bypassing the ear; this encoding helps in detecting audio communication. It can also perform electrical mapping of the brain's activity from the visual centre, bypassing the eyes and optic nerves and projecting images from the subject's brain onto a video monitor. In this way, both visual and audio memory can be visualised and analysed. The system can, remotely and non-invasively, detect information by digitally decoding the evoked potentials in the 30 to 50 Hz, 5 milliwatt electromagnetic emissions from the brain. The nerves produce a shifting electrical pattern with a shifting magnetic flux, which in turn gives off a constant stream of electromagnetic waves; the spikes and patterns in these emissions are called evoked potentials. The interesting part is that the entire exercise is carried out without any physical contact with the subject.
The EMF emissions from the brain can be decoded into the subject's current thoughts, images and sounds. The system sends complicated codes and electromagnetic pulse signals to activate evoked potentials inside the brain, thus generating sounds and visual images in the neural circuits. With its speech, auditory and visual communication systems, RNM allows for a complete audio-visual brain-to-brain link or a brain-to-computer link.
Of course, the mechanism needs to decode the resonance frequency of each specific site to modulate the insertion of information into that specific location of the brain. RNM can also detect hearing via electromagnetic microwaves, and it features the transmission of specific commands into the subconscious, producing visual disturbances and hallucinations and injecting words and numbers into the brain through electromagnetic radiation waves. It can also manipulate emotions, read thoughts remotely, cause pain in any nerve of the body, manipulate behaviour remotely and control sleep patterns, making control over communication easy. This can be used for crime investigation and security management.

Concerns

With all the stated benefits of RNM for tracking illicit and treacherous activities, there are many concerns and risks being pointed out by human rights activists and other scientists. Human rights agencies around the world have criticised RNM as a violation of basic human rights, because it violates privacy and the dignity of a person's thoughts and activities. Several countries have protested against it and refer to it as an attack on their human and civil rights. The scientists protesting against the use of RNM believe that people who have been implanted involuntarily become biological robots and guinea pigs for RNM activities in the guise of security. This is an important biological concern related to microchip implantation, a hidden technology that uses microwave radiation to control the mind.
Scientists also believe that, like the leukemia and cancer risks attributed to mobile phones (which likewise emit microwaves), RNM could pose similar threats to a subject's overall health, since the heating of tissue is a known effect of high-powered microwave and electromagnetic pulse weapons.
Thus, RNM remains a controversial technology which is being used in many countries for security maintenance and surveillance!

Teleportation, a success!

The Furusawa group at the University of Tokyo has succeeded in demonstrating complete quantum teleportation of photonic quantum bits by a hybrid technique, a world first. In 1997, quantum teleportation of photonic quantum bits was achieved by a research team at Innsbruck University in Austria. However, that form of quantum teleportation couldn't be used for information processing, because measurement was required after transport and the transport efficiency was low, so quantum teleportation was still a long way from practical use in quantum communication and quantum computing. The demonstration by the Furusawa group shows that transport efficiency can be over 100 times higher than before. Also, because no measurement is needed after transport, this result constitutes a major advance toward quantum information processing technology.

"In 1997, quantum bit teleportation was successfully achieved, but as I said just now, it was only achieved in a probabilistic sense. In 1998, we used a slightly different method to succeed at unconditional, complete teleportation. But at that time, the state sent wasn't a quantum bit, but something different. Now, we've used our experimental technology, which was successful in 1998, to achieve teleportation with quantum bits. The title of our paper is "Hybrid Technique," and developing that technique is where we've been successful."
The hybrid technique was developed by combining technology for transporting light waves with a broad frequency range, and technology for reducing the frequency range of photonic quantum bits. This has made it possible to incorporate photonic quantum bit information into light waves without disruption by noise. This research result has been published in Nature, and is attracting attention worldwide, as a step toward quantum information processing technology.
"I think we can definitely say that quantum computers have come closer to reality. Teleportation can be thought of as a quantum gate where input and output are the same. So, it's known that, if we improve this a little, the input and output could be produced in different forms. If changing the form of input and output like that is considered as a program, you have a programmable quantum gate. So, I think a quantum computer could be achieved by combining lots of those."
Looking ahead, Furusawa group aims to increase the transport efficiency and make the device smaller by using photonic chips. In this way, the researchers plan to achieve further advances toward quantum computing.

Forbes: How Close Are We To A Real Iron Man Suit?

Okay. If we are just speaking of an armored suit that augments the strength and weaponry of a person, then we are extremely close. If we are speaking of something with all of the primary abilities of the Iron Man character, such as flight, clean infinite power, and his repulsor beams, that may never become a reality. The current version in the comic books is even crazier than the movies, with a suit made entirely of nanites that can repair or replicate itself and any weaponry on the fly…
First, I will break down the things we cannot do, then I will treat you to some really awesome stuff we can do…
  • Flight
We just cannot have a flying suit like Iron Man’s. Not exactly anyhow. The main reason for this is the tiny rocket engines and the repulsor beams. I go into quite a bit of detail about the flight systems of the suit as they are described in the comics and a bit about the technology as it was laid out generation by generation from Iron Man Mark I to VIII+ here… Ariel Williams’ answer to Iron Man (Marvel Comics character): How do Tony Stark’s Iron Man suits generate lift for horizontal flight?
  • Repulsor Beams
One of the key components of the Iron Man suit is its repulsor beams, and that is a kind of technology we may not have anything even resembling for a very long time. The only logical explanation is that they are some kind of graviton manipulation. That is so far beyond our current technology that we cannot even begin to guess how it would be done, if it can be done at all.
  • Arc Reactor
It is not entirely clear how the suit's reactor works. It is smaller than a soda can, yet it produces more power than the miniature nuclear reactor on a Virginia-class submarine. It has something to do with zero-point energy and a continuous, self-sustaining reaction that would appear to break most rules of thermodynamics as we know them. It could possibly be some type of cold fusion reaction, but again we have no clue how to even begin doing this at such a small size. No reaction is 100% efficient, so the waste heat alone from such a reactor would be enough to cook the person inside the suit.
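A back-of-envelope calculation shows why the waste heat is a deal-breaker. The suit's power draw and the conversion efficiency below are assumptions picked only for illustration, not figures from the comics or from any real reactor.

```python
# Waste-heat back-of-envelope: a reactor converting heat to electricity at
# efficiency eta dumps P_out * (1 - eta) / eta as heat. Numbers are assumptions.
def waste_heat_w(electrical_output_w: float, efficiency: float) -> float:
    return electrical_output_w * (1.0 - efficiency) / efficiency

suit_draw_w = 200e3      # assumed peak electrical demand of a flying suit
eta = 0.35               # optimistic thermal-to-electric conversion efficiency
print(f"waste heat ~ {waste_heat_w(suit_draw_w, eta) / 1e3:.0f} kW")
# Under these assumptions, roughly 370 kW of heat to reject from a
# chest-sized device strapped to a human body.
```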
_____________________________
Okay, so that was a letdown. What can we do? The answer is a lot, actually!
We could almost exactly copy the original Mark 1 prototype armor from the comics today, or at least within the next few years.
The Mark 1 above is not very different from Raytheon's XOS 2 powered armor suit below. It is stronger than a human and carries heavier armor than a person could.
The video below shows how the enhanced strength from the exoskeleton allows a soldier to lift 200 pounds without any effort at all. A soldier can also carry a much heavier pack and armor. Lastly, the suit reduces fatigue even for physical activities like pushups. The suit does all the work for him.

A prototype design concept for future development, which can also be seen in the background of the video above.
Not to be outdone, Lockheed Martin has their own powered suit called HULC.

It has less overall strength, but still reportedly allows a soldier to carry a 200 lb load, and it seems they have working units that are untethered. No word on how much of that 200 lb load is currently devoted to batteries, though.
  • Power
While nothing like an Arc Reactor exists, the developers expect, within 3 to 5 years, a tethered version that would walk alongside a vehicle carrying a generator for the suit. This would mostly be used to lift heavy materials into and out of vehicles for deployed soldiers. It could be very useful in allowing a very small team to rearm a vehicle or helicopter in the field, or to quickly unload bulk supplies and move out of harm's way. In 6 to 10 years, they expect to be able to power the suit with its own internal power supply. The most limiting factor is battery technology, but it is getting better; a rough estimate of the problem follows below.
We would have trouble finding a power source and would need to recharge frequently. This is actually okay, because the Mark 1 had the exact same issue! The Mark 1 used "transistors" (possibly they meant capacitors?) to power the suit and allow it to be charged rapidly from any wall outlet. Unfortunately, it ran out of juice very often. Still, so far, so good. You didn't expect us to start out with a Mark VIII dynamic nanite colony suit, did you?
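To put numbers on why battery technology is the limiting factor, here is a rough estimate. The suit's power draw and mission length are assumptions for illustration, not XOS or HULC specifications.

```python
# Rough battery-mass estimate for an untethered exoskeleton.
# All numbers below are illustrative assumptions, not published suit specs.
power_draw_kw = 3.0          # assumed average electrical draw of the suit
mission_hours = 8.0          # assumed time between recharges
battery_wh_per_kg = 250.0    # typical lithium-ion specific energy

energy_needed_wh = power_draw_kw * 1000 * mission_hours
battery_mass_kg = energy_needed_wh / battery_wh_per_kg
print(f"{energy_needed_wh / 1000:.1f} kWh -> ~{battery_mass_kg:.0f} kg of cells")
# ~24 kWh, or roughly 96 kg of battery, which is why early units are tethered.
```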
  • Flight
As I stated before, we can't have flight like the Iron Man of the movies, but the Mark 1 did not have that either. Supposedly, it was able to make short jumps or bursts of flight using "compressed air." We have actually had designs along these lines for decades. The military developed a series of "jump jets" or "rocket belts" that used a highly concentrated mix of hydrogen peroxide and a catalyzing agent to create jets of high-intensity steam, allowing a person to fly mostly vertically, which is just what the Mark 1 did! Some other models used liquid nitrogen instead. Either one can be deadly if the fuel tank ruptures: being dissolved by 90% hydrogen peroxide or frozen by liquid nitrogen is no way to go.
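A quick estimate shows why such peroxide belts only fly for seconds at a time. The specific impulse, masses and tank size below are assumptions for illustration, not the specs of any particular rocket belt.

```python
# Hover-time estimate for a peroxide rocket belt. Isp and masses are assumptions.
G0 = 9.81                      # m/s^2
isp_s = 130.0                  # assumed specific impulse of an H2O2 monopropellant
total_mass_kg = 120.0          # pilot + hardware + propellant (assumed)
propellant_kg = 20.0           # assumed tank capacity

thrust_n = total_mass_kg * G0              # thrust needed just to hover
mdot = thrust_n / (isp_s * G0)             # propellant flow rate, kg/s
print(f"hover time ~ {propellant_kg / mdot:.0f} s")   # a little over 20 s
```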

Okay, so that is obviously very cool, but with a flight time measured in seconds and a world record of only 150 ft, this will not do for a superhero or a soldier.
Enter the Martin Technology JetPack.

In this video, we see the device climb to 5,000 feet and deploy an emergency parachute at 3,000 ft. It was actually flown by remote control from a chase helicopter, with a dummy on board, but that also shows how a navigation computer or an assistant could help with flight in an emergency. The flight system is bulky, but it could be attached to the XOS and then used to enter hostile territory. After you arrive, you park it somewhere inconspicuous and then move in to deal out some Iron Man justice. Just don't forget where you parked it.
Lastly, if that was not enough for you flight enthusiasts who want to truly soar like a superhero or jet as opposed to a tiny helicopter, there is a last option.

Yves “Jetman” Rossy has pioneered a wearable jet flight pack. It has the drawback that it is not powerful enough to reach flight altitude on its own, but once in the air, it can perform an impressive flight.
As daring innovators like these continue to create amazing devices, it is only a matter of time before some type of human scale personal flight system is possible.
Bonus…
Now for the Navy, we could go a different route for an Iron Man suit’s propulsion.
Yes, this water-propelled option does have a small 10 ft chase boat and does not fly very high, but it could allow for some useful tactical advantages. It even runs on normal fuel. It could propel a Navy Iron Man underwater quietly and then allow him to burst up from the water's surface and dispense justice on some modern pirates.
In conclusion…
Yes, we could expect something very similar to Iron Man in the next 3-15 years, if we are willing to settle for the Iron Man Mark I suit.
The ways in which such a device could change the battlefield, or dangerous police and emergency rescue operations, are almost too many to list. We could also have versions of these suits in use at factories and manufacturing plants around the world. We would no longer need a forklift or dolly for simple yet heavy or tiring jobs; we would become the forklift. The benefits to workers on assembly lines would also be enormous, as workers would tire far less often and injuries from strains would become very uncommon.
Eventually, technology will allow these suits to work without the human at all, but that is still a long way off…
Or is it…



Samsung's Flex-screen phone to be out by end of this year!


After first promising it as early as 2009, Samsung said recently that it will introduce a curved-screen smartphone in the coming months.
Few details are available, but at a recent demo at the Consumer Electronics Show, Samsung showed off a phone whose screen cut out at an angle on one side, displaying notifications in the additional space. But an ergonomically curved design or even a phone that unfolds to become a tablet could be in the works.
The particular model is less important than the fact that the world's leading smartphone manufacturer is investing in AMOLED screens, which are more durable and deliver higher-quality images, and pushing towards using them without the rigid glass in which it currently encases them.
Samsung dropped the news about the new display at its launch of the Gear smartwatch, making clear that it intends to stay relevant as mobile computing shifts away from the phone form factor.
If paired with flexible electronics components and freed of glass casing, AMOLED screens could break open the mobile and wearable device market. Samsung is not the only company to flirt with flexible screens, but most of the exciting demos have failed to materialize in the marketplace.
Flexible screens are currently hamstrung by the challenge of finding flexible electronic components durable enough to withstand repeated bending at a price consumers will pay. A midway point, though, involves putting some of the electronic componentry into the AMOLED surface itself, while leaving the rest packaged in a rigid container.
While Samsung's initial offering will most likely be a tweak to current blocky smartphone designs rather than a radical break from the rectangular screen, it will put the possibilities of flexible screens on the average user's radar for the first time and pressure its suppliers and competitors to push farther and faster past the limits of the rectangular screen.

Create an A.I. on your computer!


Intelligence Realm is seeking to build the first AI using distributed computing.
If many hands make light work, then maybe many computers can make an artificial brain. That’s the basic reasoning behind Intelligence Realm’s Artificial Intelligence project. By reverse engineering the brain through a simulation spread out over many different personal computers, Intelligence Realm hopes to create an AI from the ground-up, one neuron at a time. The first waves of simulation are already proving successful, with over 14,000 computers used and 740 billion neurons modeled. Singularity Hub managed to snag the project’s leader, Ovidiu Anghelidi, for an interview: see the full text at the end of this article.
The ultimate goal of Intelligence Realm is to create an AI or multiple AIs, and use these intelligences in scientific endeavors. By focusing on the human brain as a prototype, they can create an intelligence that solves problems and “thinks” like a human. This is akin to the work done at FACETS that Singularity Hub highlighted some weeks ago. The largest difference between Intelligence Realm and FACETS is that Intelligence Realm is relying on a purely simulated/software approach.
Which sort of makes Intelligence Realm similar to the Blue Brain Project that Singularity Hub also discussed. Both are computer simulations of neurons in the brain, but Blue Brain’s ultimate goal is to better understand neurological functions, while Intelligence Realm is seeking to eventually create an AI. In either case, to successfully simulate the brain in software alone, you need a lot of computing power. Blue Brain runs off a high-tech supercomputer, a resource that’s pretty much exclusive to that project. Even with that impressive commodity, Blue Brain is hitting the limit of what it can simulate. There’s too much to model for just one computer alone, no matter how powerful. Intelligence Realm is using a distributed computing solution. Where one computer cluster alone may fail, many working together may succeed. Which is why Intelligence Realm is looking for help.
The AI system project is actively recruiting, with more than 6,700 volunteers answering the call. Each volunteer runs a small portion of the larger simulation on their computer(s) and then ships the results back to the main server. BOINC, the Berkeley-built distributed computing software that makes it all possible, manages the flow of data back and forth. It's the same software used for SETI@home's distributed processing. Joining the project is pretty simple: you just download BOINC and some other data files, and you're good to go. You can run the simulation as an application, or as part of your screen saver.
Baby Steps
So, 6,700 volunteers, 14,000 or so platforms, 740 billion neurons, but what is the simulated brain actually thinking? Not a lot at the moment. The same is true of the Blue Brain Project and FACETS. Simulating a complex organ like the brain is a slow process, and the first steps are focused on understanding how the thing actually works. Inputs (Intelligence Realm is using text strings) are converted into neuronal signals, those signals are allowed to interact in the simulation, and the end state is converted back into an output. It's a time- and computation-intensive process. Right now, Intelligence Realm is just building towards simple arithmetic.
Which is definitely a baby step, but there are more steps ahead. Intelligence Realm plans on learning how to map numbers to neurons, understanding the kind of patterns of neurons in your brain that represent numbers, and figuring out basic mathematical operators (addition, subtraction, etc). From these humble beginnings, more complex reasoning will emerge. At least, that’s the plan.
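As a purely hypothetical illustration of what "mapping numbers to neurons" could look like, here is a toy population code in which an integer is represented by how many neurons in a small group are firing; this is not the encoding Intelligence Realm actually uses.

```python
# A toy "thermometer" population code: the integer n is represented by the
# first n of N neurons firing. Hypothetical illustration only, not the
# project's real encoding.
import numpy as np

N = 32  # neurons per number population (assumed size)

def encode(n: int) -> np.ndarray:
    """Return a binary firing pattern with the first n neurons active."""
    pattern = np.zeros(N, dtype=int)
    pattern[:n] = 1
    return pattern

def decode(pattern: np.ndarray) -> int:
    """Read a number back out as the count of active neurons."""
    return int(pattern.sum())

def add(a_pattern: np.ndarray, b_pattern: np.ndarray) -> np.ndarray:
    """'Addition' as a downstream population driven by the total input spikes."""
    return encode(decode(a_pattern) + decode(b_pattern))

x, y = encode(3), encode(4)
print(decode(add(x, y)))   # -> 7
```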
Intelligence Realm isn’t just building some sort of biophysical calculator. Their brain is being designed so that it can change and grow, just like a human brain. They’ve focused on simulating all parts of the brain (including the lower reasoning sections) and increasing the plasticity of their model. Right now it’s stumbling towards knowing 1+1 = 2. Even with linear growth they hope that this same stumbling intelligence will evolve into a mental giant. It’s a monumental task, though, and there’s no guarantee it will work. Building artificial intelligence is probably one of the most difficult tasks to undertake, and this early in the game, it’s hard to see if the baby steps will develop into adult strides. The simulation process may not even be the right approach. It’s a valuable experiment for what it can teach us about the brain, but it may never create an AI. A larger question may be, do we want it to?
Knock, Knock…It’s Inevitability
With the newest Terminator movie out, it’s only natural to start worrying about the dangers of artificial intelligence again. Why build these things if they’re just going to hunt down Christian Bale? For many, the threats of artificial intelligence make it seem like an effort of self-destructive curiosity. After all, from Shelley’s Frankenstein Monster to Adam and Eve, Western civilization seems to believe that creations always end up turning on their creators.
AI, however, promises rewards as well as threats. Problems in chemistry, biology, physics, economics, engineering, and astronomy, even questions of philosophy could all be helped by the application of an advanced AI. What’s more, as we seek to upgrade ourselves through cybernetics and genetic engineering, we will become more artificial. In the end, the line between artificial and natural intelligence may be blurred to a point that AIs will seem like our equals, not our eventual oppressors. However, that’s not a path that everyone will necessarily want to walk down.
Will AI and Humans learn to co-exist?
The nature of distributed computing and BOINC allows you to effectively vote on whether or not this project will succeed. Intelligence Realm will eventually need hundreds of thousands, if not millions, of computing platforms to run its simulations. If you believe that AI deserves a chance to exist, give them a hand and recruit others. If you think we're building our own destroyers, then don't run the program. In the end, the success or failure of this project may very well depend on how many volunteers are willing to serve as midwives to a new form of intelligence.
Before you make your decision, though, make sure to read the following interview. As project leader, Ovidiu Anghelidi is one of the driving minds behind reverse engineering the brain and developing the eventual AI that Intelligence Realm hopes to build. He didn't mean for this to be a recruiting speech, but he makes some good points:
SH: Hello. Could you please start by giving yourself and your project a brief introduction?
OA: Hi. My name is Ovidiu Anghelidi and I am working on a distributed computing project involving thousands of computers in the field of artificial intelligence. Our goal is to develop a system that can perform automated research.
What drew you to this project?
During my adolescence I tried to understand the nature of questions, and I used questions extensively as a learning tool. That drove me to search for better methods of understanding. After looking at all kinds of methods, I kinda felt that understanding creativity was a worthier pursuit. Applying various methods of learning and understanding is a fine job, but finding outstanding solutions requires much more than that. For a short while I tried to understand how creativity works and what exactly it is. I found out that there is not much work done on this subject, mainly because it is an overlapping concept. The search for creativity led me to the field of AI. Because one of the past presidents of the American Association for Artificial Intelligence dedicated an entire issue to this subject, I started pursuing that direction. I looked into the field of artificial intelligence for a couple of years, and at some point I was reading more and more papers that touched on the subject of cognition and the brain, so I looked briefly into neuroscience. After I read an introductory book about neuroscience, I realized that understanding brain mechanisms is what I should have been doing all along, for the past 20 years. To this day I am pursuing this direction.
What’s your time table for success? How long till we have a distributed AI running around using your system?
I have been working on this project for about 3 years now, and I estimate that we will need another 7-8 years to finalize it. Nonetheless, we will not need that much time to be able to use some of its features; I expect to have some basic features working within a couple of months. Take, for example, the multiple-simulations feature. If we want to pursue various directions in different fields (i.e. mathematics, biology, physics), we will need to set up a simulation for each field. But we do not need to get to the end of the project to be able to run single simulations.
Do you think that Artificial Intelligence is a necessary step in the evolution of intelligence? If not, why pursue it? If so, does it have to happen at a given time?
I wouldn’t say necessary, because we don’t know what we are evolving towards. As long as we do not have the full picture from beginning to end, or cases from other species to compare our history to, we shouldn’t just assume that it is necessary.
We should pursue it with all our strength and understanding, because soon enough it can give us a lot of answers about ourselves and this Universe. By soon I mean two or three decades. A very short time span, indeed. Artificial intelligence will amplify our research efforts across all disciplines by a couple of orders of magnitude.
In our case it is a natural extension. Any species that reaches a certain level of intelligence would, at some point in time, start replicating and extending its natural capacities in order to control its environment. The human race has done that for the last couple of thousand years; we have tried to replicate and extend our capacity to run, see, smell and touch. Now it has reached thinking. We invented vehicles, television sets and other devices, and we are now close to having artificial intelligence.
What do you think are important short term and long term consequences of this project?
We hope that in short term we will create some awareness in regards to the benefits of artificial intelligence technology. Longer term it is hard to foresee.
How do you see Intelligence Realm interacting with more traditional research institutions? (Universities, peer reviewed Journals, etc)
Well… we will not be able to provide full details about the entire project, because we are pursuing a business model so that we can support the project in the future, so there is little chance of a collaboration with a university or other research institution. Down the road, as we reach a more advanced stage of development, we will probably forge some collaborations. For the time being this doesn't appear feasible. I am open to collaborations, but I can't see how that would happen.
I submitted some papers to a couple of journals in the past, but I usually receive suggestions that I should look at other journals, from other fields. Most of the work in artificial intelligence doesn’t have neuroscience elements and the work in neuroscience contains little or no artificial intelligence elements. Anyway, I need no recognition.
Why should someone join your project? Why is this work important?
If someone is interested in artificial intelligence, it might help them to see a different view on the subject and which components are being developed over time. I cannot tell how important this is for someone else. On a personal level, I can say that because my work is important to me, and because an AI system will let me get answers to many questions, I am working on it. Artificial intelligence will provide exceptional benefits to the entire society.
What should someone do who is interested in joining the simulation? What can someone do if they can’t participate directly? (Is there a “write-your-congressman” sort of task they could help you with?)
If someone is interested in joining the project, they need to download the BOINC client from the http://boinc.berkeley.edu site and then attach to the project using its master URL, http://www.intelligencerealm.com/aisystem. We appreciate the support received from thousands of volunteers all over the world.
If someone can't participate directly, I suggest that they keep an open mind about what AI is and how it can benefit them. They should also try to understand its pitfalls.
There is no write-your-congressman type of task. Mass education is key for AI success. This project doesn’t need to be in the spotlight.
What is the latest news?
We reached 14,000 computers and we simulated over 740 billion neurons. We are working on implementing a basic hippocampal model for learning and memory.
Anything else you want to tell us?
If someone considers the development of artificial intelligence impossible, or too far in the future to care about, I can only tell them, "Embrace the inevitable." Advances in the field of neuroscience are coming rapidly, and scientists are thorough.
Understanding its benefits and pitfalls is all that is needed.
Thank you for your time and we look forward to covering Intelligence Realm as it develops further.
Thank you for having me.
