
Sunday, November 14, 2010

Mind-reading scientists say future crime can be predicted with '100 per cent' accuracy


Reading the minds of terrorists to know where and when the next attack will occur is no longer the stuff of sci-fi films.

A team from Northwestern University in the US claims to have achieved 100 per cent accuracy in reading the minds of make-believe terrorists simply by attaching electrodes to their scalps and examining their brain waves.

For the study, 29 students were given mock terrorist plans and 30 minutes to learn about an attack on a certain US city.

They were asked to work out their own details based on information they were given regarding weapons and methods.

For the first study, the researchers also knew about the mock terrorist plans. Their goal was to monitor whether the students' brain waves gave away details of where and when the attacks were to take place.

In this case, the "terrorists" also had to write a letter outlining their plan in order to encode the plot deeper in their memories.

They were then told to watch monitors which presented a range of stimuli, such as the names of various US cities, including the one detailed in the attack plot.

Given this prior knowledge, researchers in the test were able to link the rise in brain wave activity to guilty knowledge with 100 per cent accuracy across all the students who participated.

What makes the result so impressive is that in a real-life situation the knowledge would be much more deeply entrenched, given the months or years of planning that a genuine plotter would have lived with.

But how does such technology get used to save us from another 9/11?

Such testing would be done on people picked up on the basis of activity or "chatter", psychology professor J. Peter Rosenfeld said.

The investigators would have heard prior chatter detailing specifics such as weapons, time and place, and the P300 testing would then be carried out on suspects to determine their level of culpability and confirm the details of the attack.

Obviously, that means authorities need some prior knowledge of an attack, but a more impressive result from another test hinted at a future whereby non-suspects could be scanned for potential crimes.

In the second range of tests, researchers had no idea what they were looking for.

"Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime- related details," he said.

"The test was 83 per cent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity."

Monday, December 29, 2008

Scientists plan to ignite tiny man-made star



It is science’s star experiment: an attempt to create an artificial sun on earth — and provide an answer to the world’s impending energy shortage.
While it has seemed an impossible goal for nearly 100 years, scientists now believe that they are on the brink of cracking one of the biggest problems in physics by harnessing the power of nuclear fusion, the reaction that burns at the heart of the sun.

In the spring, a team will begin attempts to ignite a tiny man-made star inside a laboratory and trigger a thermonuclear reaction.

Its goal is to generate temperatures of more than 100 million degrees Celsius and pressures billions of times higher than those found anywhere else on earth, from a speck of fuel little bigger than a pinhead. If successful, the experiment will mark the first step towards building a practical nuclear fusion power station and a source of almost limitless energy.

At a time when fossil fuel supplies are dwindling and fears about global warming are forcing governments to seek clean energy sources, fusion could provide the answer. Hydrogen, the fuel needed for fusion reactions, is among the most abundant in the universe. Building work on the £1.2 billion nuclear fusion experiment is due to be completed in spring.

Scientists at the National Ignition Facility (NIF) in Livermore, nestled among the wine-producing vineyards of central California, will use a laser that concentrates 1,000 times the electric generating power of the United States into a billionth of a second.

The result should be an explosion in the 32ft-wide reaction chamber which will produce at least 10 times the amount of energy used to create it.

"We are creating the conditions that exist inside the sun," said Ed Moses, director of the facility. "It is like tapping into the real solar energy as fusion is the source of all energy in the world. It is really exciting physics, but beyond that there are huge social, economic and global problems that it can help to solve."

Inside a structure covering an area the size of three football pitches, a single infrared laser will be sent through almost a mile of lenses, mirrors and amplifiers to create a beam more than 10 billion times more powerful than a household light bulb.

Housed within a hangar-sized room that has to be pumped clear of dust to prevent impurities getting into the beam, the laser will then be split into 192 separate beams, converted into ultraviolet light and focused into a capsule at the centre of an aluminium and concrete-coated target chamber.

When the laser beams hit the inside of the capsule, they should generate high-energy X-rays that, within a few billionths of a second, compress the fuel pellet inside until its outer shell blows off.

This explosion of the fuel pellet shell produces an equal and opposite reaction that compresses the fuel inside until nuclear fusion begins, releasing vast amounts of energy.

Scientists have been attempting to harness nuclear fusion since Albert Einstein’s equation E=mc², which he derived in 1905, raised the possibility that fusing atoms together could release tremendous amounts of energy.

Under Einstein’s theory, the amount of energy locked up in one gram of matter is enough to power 28,500 100-watt lightbulbs for a year.
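
That figure checks out as a back-of-the-envelope calculation. The short sketch below redoes the arithmetic with rounded physical constants (one gram of matter, a 100-watt bulb running for a year) and lands within a per cent of the article's 28,500.

```python
# Back-of-the-envelope check of the one-gram figure using E = m * c**2.
m = 1e-3                  # one gram, in kilograms
c = 2.998e8               # speed of light, in metres per second
energy = m * c**2         # about 9.0e13 joules

bulb_power = 100                           # watts
seconds_per_year = 365.25 * 24 * 3600
bulb_year = bulb_power * seconds_per_year  # about 3.2e9 joules per bulb-year

print(round(energy / bulb_year))           # roughly 28,500 bulbs for a year
```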

Until now, such fusion has only been possible inside nuclear weapons and highly unstable plasmas created in incredibly strong magnetic fields. The work at Livermore could change all this.

The sense of excitement at the facility is clear. In the city itself, people on the street are speaking about the experiment and what it could bring them. Until now Livermore has had only the dubious honour of being home of the US government’s nuclear weapons research laboratories which are on the same site as the NIF.

Inside the facility, the scientists are impatient. After 11 years of development work, they want the last of the lenses and mirrors for the laser to be put in place and the tedious task of adjusting and aiming the laser to be over, a process they fear could take up to a year before they can successfully achieve fusion.

Jeff Wisoff, a former astronaut who is deputy principal associate director of science at the NIF, said: "Everyone is keen to get started, but we have to get the targeting right, otherwise it won’t work.

"We will be firing laser pulses that last just a few billionths of a second but we will be creating conditions that are found in the interior of stars or exploding nuclear weapons.

"I worked on the building of the International Space Station, but this is a far bigger challenge and the implications are huge. When we started the project, a lot of the technology we needed did not exist, so we have had to develop it ourselves.

"The next step is looking at how ignition can be used to deliver something of value to the world. It has the potential to be one of the biggest achievements mankind has made."

Although other experiments have attempted to create the conditions needed for nuclear fusion, lasers are seen as the most likely technique to be able to provide a viable electricity supply.

If all goes well, the NIF will be able to fire its laser and ignite a fusion reaction every five hours, but to create a reliable fusion power plant the laser would need to ignite fusion around 10 times a second.
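
To make that gap concrete: going from one shot every five hours to ten shots a second is an improvement of more than five orders of magnitude in repetition rate, as a quick calculation on the article's own figures shows.

```python
# The repetition-rate gap between the NIF experiment and a power plant.
nif_rate = 1 / (5 * 3600)  # one ignition every five hours, in shots/second
plant_rate = 10            # shots per second quoted for a practical plant

print(f"needed improvement: {plant_rate / nif_rate:,.0f}x")  # 180,000x
```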

The scientists are already working with British counterparts on the next step towards a fusion power station. A project known as the High Powered Laser Research facility aims to create a laser-powered fusion reactor that can fire once every couple of minutes.

Prof Mike Dunne, director of the central laser facility at the Rutherford Appleton Laboratory near Oxford, said: "The National Ignition Facility is going to finally prove fusion can be achieved with a laser. It will start an exciting new period in physics as it will prove that what we are trying to achieve is actually possible."

Sunday, December 14, 2008

Scientists develop software that can map dreams

The secret world of dreams has been unlocked with the invention of technology capable of illustrating images taken directly from the human brain.

A team of Japanese scientists have created a device that enables the processing and imaging of thoughts and dreams as experienced in the brain to appear on a computer screen.

While researchers have so far only created technology that can reproduce simple images from the brain, the discovery paves the way for the ability to unlock people's dreams and other brain processes.

A spokesman at ATR Computational Neuroscience Laboratories said: "It was the first time in the world that it was possible to visualise what people see directly from the brain activity.

"By applying this technology, it may become possible to record and replay subjective images that people perceive like dreams." The scientists, lead by chief researcher Yukiyaso Kamitani, focused on the image recognition procedures in the retina of the human eye.

When a person looks at an object, the retina recognises the image and converts it into electrical signals, which are sent on to the brain's visual cortex.

The research investigated how electrical signals are captured and reconstructed into images, according to the study, which will be published in the US journal Neuron.

As part of the experiment, researchers showed testers the six letters of the word "neuron", before using the technology to measure their brain activity and subsequently reconstruct the letters on a computer screen.
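
The study itself reconstructed small pixel patterns from voxel activity. As a rough illustration of the general approach, one can fit a linear decoder from voxel responses to pixel values and apply it to an unseen pattern; everything below (dimensions, noise levels, data) is synthetic and is not the authors' method or data.

```python
# Toy linear decoder: map simulated voxel activity back to a 10x10
# binary image. Data, dimensions, and noise are all invented here.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_pixels, n_train = 200, 100, 400   # 100 pixels = 10x10 patch

# Unknown-in-practice mapping from pixels to voxel responses.
true_map = rng.normal(size=(n_voxels, n_pixels))
train_imgs = (rng.random((n_train, n_pixels)) > 0.5).astype(float)
train_vox = train_imgs @ true_map.T + rng.normal(0.0, 0.1, (n_train, n_voxels))

# Fit pixel-wise linear decoders by least squares on the training pairs.
weights, *_ = np.linalg.lstsq(train_vox, train_imgs, rcond=None)

test_img = (rng.random(n_pixels) > 0.5).astype(float)
test_vox = test_img @ true_map.T
recon = test_vox @ weights

print(f"pixel accuracy: {np.mean((recon > 0.5) == test_img):.0%}")
```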

Since Sigmund Freud published The Interpretation of Dreams over a century ago, the workings of the sleeping human mind have been the source of extensive analysis by scientists keen to unlock its mysteries.

Dreams were the focus of a scientific survey conducted by the Telegraph last year in which it was concluded that dreams were more likely to be shaped by events of the past week than childhood traumas.

Wednesday, December 10, 2008

Strange Experiments Create Body-Swapping Experiences

Experimental set-up to induce illusory ownership of an artificial body: the participant could see the mannequin's body from the perspective of the mannequin's head. Credit: Valeria Petkova, H. Henrik Ehrsson, PLoS ONE

Scientists have now manipulated people's perceptions to make them think they have swapped bodies with another human or even a "humanoid body," experiencing the sensations that the other would feel and giving the illusion of being inside the other's body.

The bizarre achievement hearkens to body swaps portrayed in numerous TV shows and movies such as "Freaky Friday" and "All of Me."

In real life, the cognitive neuroscientists at the Swedish medical university Karolinska Institutet succeeded in making subjects perceive the bodies of mannequins and other people as their own. The illusion also worked even when the two people differed in appearance or were of different sexes. It also worked whether the subject was immobile or was making voluntary movements. However, it was not possible to fool the subjects into identifying with a non-humanoid object, such as a chair or a large block.

A year ago, scientists achieved the illusion of an out-of-body experience in subjects, using virtual reality. The new research manipulates the brain even further — out of itself and into another body.

In one of the new body-swap experiments, the head of a shop dummy was fitted with two cameras connected to two small screens placed in front of the subjects’ eyes, so that they saw what the dummy "saw." When the dummy's camera eyes and a subject's head were directed downwards, the subject saw the dummy's body where he or she would normally have seen his or her own.

The illusion of body-swapping was created when the scientist touched the stomach of both with two sticks. The subject could then see that the mannequin's stomach was being touched while feeling (but not seeing) a similar sensation on his/her own stomach. As a result, the subject developed a powerful sensation that the mannequin’s body was his/her own.

"This shows how easy it is to change the brain's perception of the physical self," said Henrik Ehrsson, who led the project. "By manipulating sensory impressions, it’s possible to fool the self not only out of its body but into other bodies too."

In another experiment, the camera was mounted onto another person's head. When this person and the subject turned towards each other to shake hands, the subject perceived the camera-wearer's body as his/her own.

"The subjects see themselves shaking hands from the outside, but experience it as another person," said Valeria Petkova, who worked with Ehrsson on the study. "The sensory impression from the hand-shake is perceived as though coming from the new body, rather than the subject's own."

The strength of the illusion was confirmed by the subjects' exhibiting stress reactions when a knife was held to the camera wearer's arm but not when it was held to their own.

The object of the projects was to learn more about how the brain constructs an internal image of the body and how we come to feel like we are located inside our bodies, a concept called embodiment. The new experiments, the first to move beyond experiments on just a single limb, show that matching of our multisensory and motor signals from the first-person perspective is sufficient for producing the experience of owning one’s entire body, Petkova and Ehrsson write in the Dec. 3 issue of the online, open-access journal PLoS ONE. Previously, researchers thought that embodiment was an inductive process of combining signals from muscles, joints and skin.

The knowledge that the sense of corporal identification/self-perception can be manipulated to make people believe that they have a new body is of potential practical use in virtual reality applications and robot technology. It could also be useful in research on body image disorders.
The research was supported by grants from the Swedish Medical Research Council, the Swedish Foundation for Strategic Research, the Human Frontier Science Programme and the European Research Council.

Monday, November 17, 2008

Neuroimaging Of Brain Shows Who Spoke To A Person And What Was Said

ScienceDaily (Nov. 10, 2008) — Scientists from Maastricht University have developed a method to look into the brain of a person and read out who has spoken to him or her and what was said. With the help of neuroimaging and data mining techniques the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article "'Who' is Saying 'What'? Brain-Based Decoding of Human Voice and Speech," the four authors demonstrate that speech sounds and voices can be identified by means of a unique 'neural fingerprint' in the listener's brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging techniques (fMRI). With the help of data mining methods the researchers developed an algorithm to translate this brain activity into unique patterns that determine the identity of a speech sound or a voice. The various acoustic characteristics of the vocal cord vibrations were found to determine distinct patterns of brain activity.
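
The article doesn't detail the decoding algorithm, so the following is only a schematic of the general recipe such studies follow: build a "neural fingerprint" per category from training trials, then assign new activity patterns to the nearest fingerprint. The voxel counts, noise levels, and nearest-centroid rule are all assumptions for illustration.

```python
# Hedged sketch of decoding "which vowel" from fMRI activity patterns,
# using a nearest-centroid rule on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
vowels = ["a", "i", "u"]
n_voxels = 50

# Each vowel gets a characteristic (made-up) activity pattern.
prototypes = {v: rng.normal(size=n_voxels) for v in vowels}

def record_trial(vowel):
    """One noisy fMRI pattern evoked by a heard vowel."""
    return prototypes[vowel] + rng.normal(0, 0.8, n_voxels)

# "Train": average a few trials per vowel into a neural fingerprint.
fingerprints = {v: np.mean([record_trial(v) for _ in range(20)], axis=0)
                for v in vowels}

def decode(pattern):
    return min(vowels, key=lambda v: np.linalg.norm(pattern - fingerprints[v]))

print("decoded vowel:", decode(record_trial("i")))   # expected: i
```

Real analyses cross-validate and select informative voxels, but the fingerprint idea is the same.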

Just like real fingerprints, these neural patterns are both unique and specific: the neural fingerprint of a speech sound does not change if uttered by somebody else and a speaker's fingerprint remains the same, even if this person says something different.

Moreover, this study revealed that part of the complex sound-decoding process takes place in areas of the brain previously associated only with the early stages of sound processing. Existing neurocognitive models assume that processing sounds actively involves different regions of the brain according to a certain hierarchy: after simple processing in the auditory cortex, the more complex analysis (speech sounds into words) takes place in specialised regions of the brain. However, the findings from this study imply a less hierarchical processing of speech that is spread out more across the brain.

The research was partly funded by the Netherlands Organisation for Scientific Research (NWO): two of the four authors, Elia Formisano and Milene Bonte, carried out their research with NWO grants (Vidi and Veni). The data mining methods were developed during the PhD research of Federico De Martino (doctoral thesis defended at Maastricht University on 24 October 2008).

Sunday, November 16, 2008

Packs of robots will hunt down uncooperative humans

The latest request from the Pentagon jars the senses. At least, it did mine. They are looking for contractors to provide a "Multi-Robot Pursuit System" that will let packs of robots "search for and detect a non-cooperative human".

One thing that really bugs defence chiefs is having their troops diverted from other duties to control robots. So having a pack of them controlled by one person makes logistical sense. But I'm concerned about where this technology will end up.

Given that iRobot last year struck a deal with Taser International to mount stun weapons on its military robots, how long before we see packs of droids hunting down pesky demonstrators with paralysing weapons? Or could the packs even be lethally armed? I asked two experts on automated weapons what they thought - click the continue reading link to read what they said. Both were concerned that packs of robots would be entrusted with tasks - and weapons - they were not up to handling without making wrong decisions.

Steve Wright of Leeds Metropolitan University is an expert on police and military technologies, and last year correctly predicted this pack-hunting mode of operation would happen. "The giveaway here is the phrase 'a non-cooperative human subject'," he told me:


"What we have here are the beginnings of something designed to enable robots to hunt down humans like a pack of dogs. Once the software is perfected we can reasonably anticipate that they will become autonomous and become armed.

We can also expect such systems to be equipped with human detection and tracking devices including sensors which detect human breath and the radio waves associated with a human heart beat. These are technologies already developed."

Another commentator often in the news for his views on military robot autonomy is Noel Sharkey, an AI and robotics engineer at the University of Sheffield. He says he can understand why the military want such technology, but also worries it will be used irresponsibly.

"This is a clear step towards one of the main goals of the US Army's Future Combat Systems project, which aims to make a single soldier the nexus for a large scale robot attack. Independently, ground and aerial robots have been tested together and once the bits are joined, there will be a robot force under command of a single soldier with potentially dire consequences for innocents around the corner."
What do you make of this? Are we letting our militaries run technologically amok with our tax dollars? Or can robot soldiers be programmed to be even more ethical than human ones, as some researchers claim?

Wednesday, October 29, 2008

Lines blurring between humans and machines



Inventor and futurist Ray Kurzweil illustrates the exponential evolution of technology, predicting a sharp rise in computing capability, robotics and life expectancy within the next 15 years. He outlines the shocking ways we'll use technology to augment our own capabilities, forever blurring the lines between human and machine. A prolific inventor, Kurzweil developed the first Optical Character Recognition (OCR) system, the first text-to-speech reader for the blind, one of the first speech-recognition systems, and numerous electronic instruments. He's written several books exploring the social impact of technology, including The Age of Spiritual Machines and The Singularity is Near: When Humans Transcend Biology.

Can we create new life?

"Can we create new life out of our digital universe?" asks Craig Venter. And his answer is yes, and pretty soon. He walks the TED2008 audience through his latest research into "fourth-generation fuels" -- biologically created fuels with CO2 as their feedstock. His talk covers the details of creating brand-new chromosomes using digital technology, the reasons why we would want to do this, and the bioethics of synthetic life.

Wednesday, October 22, 2008

Robotic ants building homes on Mars?





Tiny bots smaller than a thumbnail (I-SWARM project)

Recent discoveries of water and Earth-like soil on Mars have set imaginations running wild that human beings may one day colonise the Red Planet. However, the first inhabitants might not be human in form at all, but rather swarms of tiny robots.

“Small robots that are able to work together could explore the planet. We now know there is water and dust so all they would need is some sort of glue to start building structures, such as homes for human scientists,” says Marc Szymanski, a robotics researcher at the University of Karlsruhe in Germany.

Szymanski is part of a team of European researchers developing tiny autonomous robots that can co-operate to perform different tasks, much like termites, ants or bees forage collaboratively for food, build nests and work together for the greater good of the colony.

Working in the EU-funded I-SWARM project, the team created a 100-strong posse of centimetre-scale robots and made considerable progress toward building swarms of ant-sized micro-bots. Several of the researchers have since gone on to work on creating swarms of robots that are able to reconfigure themselves and assemble autonomously into larger robots in order to perform different tasks. Their work is being continued in the Symbrion and Replicator projects that are funded under the EU’s Seventh Framework Programme.

Planet exploration and colonisation are just some of a seemingly endless range of potential applications for robots that can work together, adjusting their duties depending on the obstacles they face, changes in their environment and the swarm’s needs.

“Robot swarms are particularly useful in situations where you need high redundancy. If one robot malfunctions or is damaged it does not cause the mission to fail because another robot simply steps in to fill its place,” Szymanski explains.

That is not only useful in space or in deep-water environments, but also while carrying out repairs inside machinery, cleaning up pollution or even carrying out tests and applying treatments inside the human body – just some of the potential applications envisioned for miniature robotics technology.

Creating collective perception
Putting swarming robots to use in a real-world environment is still, like the vision of colonising Mars, some way off. Nonetheless, the I-SWARM team did forge ahead in building robots that come close to resembling a programmable ant.

Just as ants may observe what other ants nearby are doing, follow a specific individual, or leave behind a chemical trail in order to transmit information to the colony, the I-SWARM team’s robots are able to communicate with each other and sense their environment. The result is a kind of collective perception.

The robots use infrared to communicate, with each signalling another close by until the entire swarm is informed. When one encounters an obstacle, for example, it would signal others to encircle it and help move it out of the way.
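
The relay behaviour described above is essentially a flooding broadcast: each robot passes a message to any neighbour within infrared range until no uninformed robot remains. Below is a minimal sketch of that idea, with positions, swarm size, and range invented for illustration; it is not the I-SWARM project's code.

```python
# Sketch of infrared-style message flooding: each robot relays to
# neighbours within range until the whole swarm is informed.
import random

random.seed(3)
robots = {i: (random.uniform(0, 10), random.uniform(0, 10)) for i in range(30)}
IR_RANGE = 2.5  # assumed infrared communication radius

def neighbours(i):
    xi, yi = robots[i]
    return [j for j, (xj, yj) in robots.items()
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= IR_RANGE ** 2]

def flood(source):
    informed, frontier = {source}, [source]
    while frontier:
        nxt = []
        for i in frontier:
            for j in neighbours(i):
                if j not in informed:
                    informed.add(j)
                    nxt.append(j)
        frontier = nxt
    return informed

print(f"{len(flood(0))} of {len(robots)} robots informed")
```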

A group of robots that the project team called Jasmine, which are a little bigger than a two-euro coin, use wheels to move around, while the smallest I-SWARM robots, measuring just three millimetres in length, move by vibration. The I-SWARM robots draw power from a tiny solar cell, and the Jasmine machines have a battery.

“Power is a big issue. The more complex the task, the more energy is required. A robot that needs to lift something [uses] powerful motors and these need lots of energy,” Szymanski notes, pointing to one of several challenges the team have encountered.

Processing power is another issue. The project had to develop special algorithms to control the millimetre-scale robots, taking into account the limited capabilities of the tiny machine’s onboard processor: just eight kilobytes of program memory and two kilobytes of RAM, around a million times less than most PCs.

Tests proved that the diminutive robots were able to interact, though the project partners were unable to meet their goal of producing a thousand of them in what would have constituted the largest swarm of the smallest autonomous robots ever created anywhere in the world.

Nonetheless, Szymanski is confident that the team is close to being able to mass produce the tiny robots, which can be made much like computer chips out of flexible printed circuit boards and then folded into shape.

Sunday, October 12, 2008

New machine prints sheets of light

Sheets owe luminance to organic light-emitting diodes called OLEDs

OLEDs could be used to make light sources out of everyday objects

Sheets providing broad, diffuse light that bathes rooms in a gentle glow could make floor lamps, bedside lamps, wall sconces and nearly every other household lamp obsolete.



It's a machine that prints lights.

The size of a semitrailer, it coats an 8-inch wide plastic film with chemicals, then seals them with a layer of metal foil. Apply electric current to the resulting sheet, and it lights up with a blue-white glow.

You could tack that sheet to a wall, wrap it around a pillar or even take a translucent version and tape it to your windows. Unlike practically every other source of lighting, you wouldn't need a lamp or conventional fixture for these sheets, though you would need to plug them into an outlet.

The sheets owe their luminance to compounds known as organic light-emitting diodes, or OLEDs. While there are plenty of problems to be worked out with the technology, it's not the dream of a wild-eyed startup.

OLEDs are beginning to be used in TVs and cell-phone displays, and big names like Siemens and Philips are throwing their weight behind the technology to make it a lighting source as well. The OLED printer was made by General Electric Co. on its sprawling research campus in upstate New York, not far from where a GE physicist figured out a practical way to use tungsten metal as the filament in a regular light bulb. That filament design is still used today, nearly a century later.

The invention of the incandescent bulb created the pattern for home lighting: Our light sources are small and bright. Maybe there are a few in the center of the ceiling, and a few in the corners of the room. Because they're too bright to look at, they need to be reflected and diffused with lamp shades and frosted glass.

OLEDs could overturn all that, with broad, diffuse light sources bathing rooms in a gentle glow. Photographers go to great lengths to diffuse the illumination they use when shooting portraits, because they know we look our best in soft light.

The big glowing sheets could also make light sources out of everyday things. GE imagines putting OLEDs on the inside of window blinds -- pull them down, light them up, and you have light streaming from the window, even at night. You could even make OLED wallpaper, since the material is flexible.

"We have a lot of ideas for what we can do with it," said German lighting designer Ingo Maurer.

He and his firm have already created the first commercially available OLED lamp, and are selling it in a limited edition of 25. He expects to deliver the first two this month, at an undisclosed but presumably collector-level price.

The lamp is more of a curiosity than a practical product: the light is dim, and gradually grows dimmer, losing half its brightness after 2,000 hours. Its OLED panels are only a few inches wide, and made of glass rather than plastic. They protrude from a central stem like the leaves of a fern.

The panels in Maurer's lamp are made by Osram Opto Semiconductors, a subsidiary of German industrial conglomerate Siemens AG, which is also the parent of Osram Sylvania, a competitor to GE in the general lighting market.

Osram Opto made them with an expensive, slow process known as vacuum deposition that has dominated OLED development so far. One virtue of this method is that it can be combined with the technologies that produce LCD displays to make full-color OLED TVs. Sony Corp. sells an 11-inch model for $2,500.

OLED TVs have to become much cheaper (and larger) to become mass-market products, and OLED lights have to be cheaper still. That's the issue GE is tackling with its printer, which dispenses with vacuum deposition in favor of a process that's not much more complicated than the printing of a newspaper.

"We're trying to be as low-tech as possible," said Anil Duggal, head of GE's OLED research team.

In the next step, GE plans to build a larger machine that can print panels several feet wide. Its output could be sold commercially as early as 2010, Duggal said, but he acknowledged that's a "very aggressive" goal.

Since the production runs will be small by then, the prices won't be accessible to the average consumer. But the luminous OLEDs could show up in niche, luxury settings, like casinos or fancy restaurants, where the thin and flexible lights could allow the creation of striking architectural or artistic effects.

Looking ahead a few more years, printing could reduce the cost of OLEDs to little more than the cost of the stuff it's printed on, said Janice Mahon, vice president of technology commercialization at Universal Display Corp. in Ewing, New Jersey. The company is a leader in OLED research, and develops some of the organic compounds, which are akin to the dyes used to color clothes. If printed on metal foil, the cost of an OLED light could be less than a dollar per square foot, Mahon said.

This sets OLEDs apart from another promising technology that has been hailed as the future of lighting. Inorganic LEDs, the pinhead-sized lights that adorn electronic gadgets, are beginning to show up in commercial lighting, where their extreme longevity compared to bulbs makes up for their high production cost. Since they're made with semiconductor manufacturing techniques, a cluster of LEDs that produces as much light as a standard bulb costs more than $100.

LEDs and OLEDs both hold the potential for big energy savings over standard incandescent bulbs. Matching fluorescents is tougher. Universal Display this year created OLEDs that exceeded the energy efficiency of fluorescents, but combining that feature with longevity and mass production will be a challenge.

"It's not going to be competitive with fluorescents in 2010," Duggal said.

As point light sources, LEDs are likely to coexist with the big, diffuse OLEDs, said analyst Lawrence Gasman at Nanomarkets LLC, a research firm in Glen Allen, Virginia. "Together, they make for a nice lighting future," Gasman said.

He projects that OLED lighting sales could reach $5.9 billion by 2015.

Bob Sagebiel, technical marketing manager for lighting at distributor Arrow Electronics Inc., is less optimistic. Because OLEDs are so different from current lighting technology, they could have a hard time being accepted by the market, he believes. An OLED panel won't fit in any of the 20 billion light-bulb sockets worldwide, he noted. Commercial buildings will probably need rewiring to take advantage of big panels that don't fit into existing fixtures for fluorescent tubes.

Also, for GE and Osram to reach customers with their panels, they'll need to go through makers of light fixtures, and "that's an industry that is tremendously conservative," Sagebiel said.

On the manufacturing side, there are still challenges to overcome for the technology, particularly in making OLEDs long-lasting in addition to being power-efficient. They're gradually worn out by use. Exposure to atmospheric oxygen, which can seep through plastic, destroys them quickly.

But OLED technology is, at least, in much better shape to take on the lighting market than an older technology that also produces thin and printable light sources. Electroluminescent lights, which glow in Indiglo watches and car dashboards, have been made for decades. Despite early hopes, they've never become competitive when it comes to brightness or energy efficiency.

"In the 1950s, people were talking about electroluminescence the way we talk about OLEDs today," Duggal said. "It's humbling."

Monday, October 6, 2008

TV screen that folds up to fit in your pocket

Traditional flat-screen televisions could soon become a thing of the past: scientists have revealed an ultra-thin, flexible screen that can fold up and fit in your pocket.

The bendy screens - less than a millimetre thick - could be used for televisions, computers and phones, and may pave the way for easy-to-carry digital newspaper displays, on to which readers could upload their news daily.
Some speculate that the technology could even lead to wearable TV jackets, flexible laptop screens, and TV blankets.
Sony worked with researchers at the Max Planck Institute in Germany to create the design. They say it is flexible and transparent, and has an extremely low energy requirement, allowing laptop and phone batteries to last longer.

The screens are made up of organic molecules that emit light in all directions to produce an image, which gives an almost infinite viewing angle.

Stacking up the transparent screens may produce 3D effects, the scientists say.

Other possibilities include moving images on posters, like those seen in films such as Minority Report, and talking pictures on cereal boxes.

The researchers told the Journal of Physics: 'The displays have excellent brightness and are transparent, bendable and flexible.
'There are practically no display size limitations and they could be produced relatively easily and cheaply compared to today's screens.'
An earlier version of the work was demonstrated by Sony in 2006, but technical and design issues stopped it from being mass produced.

Americans Clueless About Plans To Create New Life Forms

If you've never heard of the exciting field of synthetic biology, you're not alone, but you might want to get wise to the field's controversial promise to create life from scratch.



About two-thirds of U.S. residents are clueless as well, having never heard of synthetic biology. Only 2 percent in a new telephone survey of 1,003 adults said they have heard a lot about the work, which crosses biology with technology and promises to create forms of life that Nature never thought of.


Synthetic biologists engineer and build or redesign living organisms, such as bacteria, to carry out specific functions. The field is a scientific playground for the genetic code, where previously nonexistent DNA is formulated in test tubes.


By taking genetic engineering to the extreme, synthetic biologists aim to make life in the lab.



The promise is that the novel organisms will fight disease, create alternative fuels or build living computers.

Already, researchers have transplanted genetic material from one microbe species into the cellular body of another, described last year as the living equivalent of converting a Macintosh computer to a PC by inserting a new piece of software.


We face daunting problems of climate change, energy, health, and water resources, a group of 17 leading scientists in the field stated last year. Synthetic biology offers solutions to these issues: microorganisms that convert plant matter to fuels or that synthesize new drugs or target and destroy rogue cells in the body.


Now you know.



But why should you care?


For one, the field is potentially controversial because it raises issues of ownership, misuse, unintended consequences and accidental release, according to a report earlier this year commissioned by the Biotechnology and Biological Sciences Research Council in England. In a nutshell, some fear microscopic lab freaks might escape and wreak havoc.



With that in mind, scientists are concerned that the United States is falling behind other countries in many areas of science and technology and that the current administration has been downright hostile toward some fields of science. Obtaining federal funding for cutting-edge research can be challenging when the public doesn't even know what the research is about or what its benefits might be.



And as the new poll showed, we tend to fear what we don't know.



Respondents were asked how they viewed the potential risks and rewards of the new technology. Those more familiar with synthetic biology are more inclined to have a positive assessment of the tradeoff, the pollsters found.



Early in the administration of the next president, scientists are expected to take the next major step toward the creation of synthetic forms of life, said David Rejeski, director of the Project on Emerging Nanotechnologies. Yet the results from the first U.S. telephone poll about synthetic biology show that most adults have heard just a little or nothing at all about it.



The poll was conducted in August by Peter D. Hart Research Associates.



Nearly half of the poll respondents said they have heard nothing at all about the broader field of nanotechnology. Again, there is a positive association between awareness of nanotechnology and the belief that the benefits of nanotechnology will outweigh the risks, the analysts found.


Thursday, September 11, 2008

Robot Suit To Enter Mass Production

Japan is a world leader in robotics, and in October 2008 a Japanese company will become the first in the world to begin mass-producing a robot that assists humans in moving their limbs. A research team led by University of Tsukuba Professor Sankai Yoshiyuki has developed the device, which is called Robot Suit HAL (Hybrid Assistive Limb). Sankai is the CEO of Cyberdyne Inc., the company that plans to begin making this robot suit available for rental through sales outlets.

How the Robot Suit Works

Manufacturing robots and realistic humanoid robots are just two of the numerous kinds of robots that are already in use. A robot suit is a wearable device that dramatically increases the strength of the wearer. Robot Suit HAL is worn over the arms and legs and assists body movement through eight electric motors attached to the shoulders, elbows, knees, and waist. As it supports the wearer's own limb movements, the robot suit must detect how the wearer is trying to move his or her arms and legs and quickly respond. Most of the robots developed so far in this field rely on sensors to detect motion and then activate motors.

This method, however, has some drawbacks. First, there is a slight time lag from when the wearer moves a muscle to when the robot responds. Second, people who are unable to move their arms and legs can't use such a robot at all. These issues had been viewed as obstacles to a wide commercialization of robot suits. Robot Suit HAL, however, has overcome these limitations using a unique method that senses bioelectric signals sent from brain, rather than detecting muscle movements.

When you want to move your body, your brain sends out an electric signal that is received by your muscles, which then contract, thus producing motion. This electric signal travels to the muscles via the body's nerves, generating a slight voltage on the surface of the skin. This is known as a bioelectric signal, and Robot Suit HAL detects these signals using sensors placed around the wearer's body. Based on the voltage measured at the surface of the skin, the computer inside Robot Suit HAL analyzes the signal and sets the appropriate motors in motion.
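
Cyberdyne's actual control software is not public; the toy sketch below only illustrates the idea just described: smooth a noisy skin-surface voltage, treat readings above a threshold as intent to move, and command assist torque in proportion. The threshold, gain, and signal values are all invented.

```python
# Toy bioelectric-signal-driven assistance: smooth the raw skin-surface
# voltage and command motor torque proportional to the excess over a
# rest threshold. All constants here are illustrative assumptions.

def smooth(samples, window=5):
    """Moving average to suppress sensor noise."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

THRESHOLD = 0.05   # volts: below this, treat as no intent to move
GAIN = 40.0        # newton-metres of assist per volt above threshold

def motor_torques(raw_signal):
    return [GAIN * (v - THRESHOLD) if v > THRESHOLD else 0.0
            for v in smooth(raw_signal)]

# A burst of muscle intent between two rest periods.
raw = [0.01] * 5 + [0.12, 0.25, 0.30, 0.22, 0.10] + [0.01] * 5
print([round(t, 1) for t in motor_torques(raw)])
```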

A Variety of Potential Uses

This unique method of operation means that a person can control Robot Suit HAL by his or her own will, even if he or she is unable to actually move. And as the suit detects the signal sent from the brain even before it gets to the muscle, it can move an instant before the muscle does. When a person wearing Robot Suit HAL picks up an object that weighs 40 kg, he or she feels as if it weighed only a few kilograms. Robot Suit HAL is therefore expected to have a wide range of applications, such as assisting carers, helping people with physical disabilities to move, and assisting people performing jobs that require a great deal of physical strength. In order to facilitate the commercialization process, Professor Sankai and others formed Cyberdyne Inc. in 2004. In October 2008, the company plans to move into a factory currently under construction that will allow it to manufacture up to 500 suits a year.

Several other types of robot suits are also under development in Japan. Toyama Shigeki, a professor at Tokyo University of Agriculture and Technology, leads a team that is currently developing a power-assist suit to help with agricultural work. Their goal is to place the product on the market within the next few years.

Sunday, September 7, 2008

City of the future: The giant glass pyramid that could house one million people


With its sharp angles and its glass walls shimmering in the sunlight it looks like a piece of modern art.

But this innovative design is actually a blueprint for the city of the future - a giant glass pyramid that could house up to one million people.

The development, named the 'Ziggurat', will be self-sufficient and carbon neutral with power being supplied by wind turbines.

No cars will be allowed inside the 2.3 square kilometre building, with residents being whisked around by a monorail network which operates both horizontally and vertically.


The futuristic pyramid could provide homes for around one million people

Security in the city will be provided by biometrics, with residents relying on facial recognition to enter their homes.

Dubai-based designer Timelinks has already patented the design and technology incorporated into the project.

They have also applied to the European Union for a grant to carry out more work on the project.

Ridas Matonis, managing director of Timelinks, said the city would work by 'harnessing the power of nature.' He said: "Ziggurat communities can be almost totally self-sufficient energy-wise.

"Apart from using steam power in the building we will also employ wind turbine technology to harness natural energy resources.

The incredible building will be environmentally friendly with no cars, and will be powered by wind turbines

"But it is not just about reducing the carbon footprint - the pyramid has many other benefits.

"Whole cities can be accommodated in complexes which take up less than ten per cent of the original land surface.

"Public and private landscaping will be used for leisure pursuits or irrigated as agricultural land.

"If these projects were realised today the world would see communities that are sustainable, environmentally friendly and in tune with their natural surroundings."

Monday, September 1, 2008

Anti-gravity propulsion comes ‘out of the closet’

Boeing, the world’s largest aircraft manufacturer, has admitted it is working on experimental anti-gravity projects that could overturn a century of conventional aerospace propulsion technology if the science underpinning them can be engineered into hardware.

As part of the effort, which is being run out of Boeing’s Phantom Works advanced research and development facility in Seattle, the company is trying to solicit the services of a Russian scientist who claims he has developed anti-gravity devices in Russia and Finland. The approach, however, has been thwarted by Russian officialdom.

The Boeing drive to develop a collaborative relationship with the scientist in question, Dr Evgeny Podkletnov, has its own internal project name: ‘GRASP’ — Gravity Research for Advanced Space Propulsion.

A GRASP briefing document obtained by JDW sets out what Boeing believes to be at stake. "If gravity modification is real," it says, "it will alter the entire aerospace business." GRASP’s objective is to explore propellentless propulsion (the aerospace world’s more formal term for anti-gravity), determine the validity of Podkletnov’s work and "examine possible uses for such a technology". Applications, the company says, could include space launch systems, artificial gravity on spacecraft, aircraft propulsion and ‘fuelless’ electricity generation — so-called ‘free energy’.

But it is also apparent that Podkletnov’s work could be engineered into a radical new weapon. The GRASP paper focuses on Podkletnov’s claims that his high-power experiments, using a device called an ‘impulse gravity generator’, are capable of producing a beam of ‘gravity-like’ energy that can exert an instantaneous force of 1,000g on any object — enough, in principle, to vaporise it, especially if the object is moving at high speed.


Filmed in 1994 at the IFNE Conference in Denver, this hour-long presentation by John Searl describes the inner workings of the infamous Searl-Effect Generator and IGV Propulsion System with photos, schematics, construction details, and a concise summary of 1960s testing. John Searl is one of the most controversial figures in antigravity research, but since beginning his work in the 1940s, he's arguably become "the father of modern antigravity". His claim is simple: that after a childhood dream showing a rotating set of rollers on a metallic ring, he constructed a device called the Searl Effect Generator (SEG) that seems to produce massive antigravitational thrust. Searl is one of the cultural icons in the field of antigravity.

Monday, August 18, 2008

30 percent of U.S. Army may be robotic by 2020

Washington: U.S. technologists have revealed that the country's military plans to have robotic forces make up about 30 per cent of the Army by approximately 2020.

Doug Few and Bill Smart of Washington University in St. Louis say that robots are increasingly taking over more soldier duties in Iraq and Afghanistan, and that the U.S. Army wants to make further additions to its robotic fleet.

They, however, also point out that the machines still need the human touch.

"When the military says 'robot' they mean everything from self-driving trucks up to what you would conventionally think of as a robot. You would more accurately call them autonomous systems rather than robots," says Smart, assistant professor of computer science and engineering.

All of the Army's robots are teleoperated, meaning there is someone operating the robot from a remote location, typically with a joystick and a computer screen.

While this may seem like a limitation in plans to add robots to the military, it is actually very important to keep humans involved in robotic operations.

"It's a chain of command thing. You don't want to give autonomy to a weapons delivery system. You want to have a human hit the button. You don't want the robot to make the wrong decision. You want to have a human to make all of the important decisions," says Smart.

The technologist duo says that researchers are not necessarily looking for intelligent decision-making in their robots. Instead, they are working to develop an improved, "intelligent" functioning of the robot.

"It's oftentimes like the difference between the adverb and noun. You can act intelligently or you can be intelligent. I'm much more interested in the adverb for my robots," says Few, a Ph.D. student who is interested in the delicate relationship between robot and human.

He says that there are many issues that may require "a graceful intervention" by humans, and these need to be thought of from the ground up.

"When I envision the future of robots, I always think of the Jetsons. George Jetson never sat down at a computer to task Rosie to clean the house. Somehow, they had this local exchange of information. So what we've been working on is how we can use the local environment rather than a computer as a tasking medium to the robot," he says.

Few has incorporated a toy into robotic programming, and with the aid of a Wii controller, he capitalizes on natural human movements to communicate with the robot.

According to the researchers, focussing on a joystick and screen rather than carting around a heavy laptop would help soldiers in battle to stay alert, and engage in their surroundings while performing operations with the robot.

"We forget that when we're controlling robots in the lab it's really pretty safe and no one's trying to kill us. But if you are in a war zone and you're hunched over a laptop, that's not a good place to be. You want to be able to use your eyes in one place and use your hand to control the robot without tying up all of your attention," says Smart.

Devices like unmanned aerial vehicles, ground robots for explosives detection, and Packbots have already been inducted in the military.

"When I stood there and looked at that Packbot, I realized that if that robot hadn't been there, it would have been some kid," says Few.

Saturday, August 2, 2008

Matrix-style virtual worlds 'a few years away'

Are supercomputers on the verge of creating Matrix-style simulated realities? Michael McGuigan at Brookhaven National Laboratory in Upton, New York, thinks so. He says that virtual worlds realistic enough to be mistaken for the real thing are just a few years away.

In 1950, Alan Turing, the father of modern computer science, proposed the ultimate test of artificial intelligence – a human judge engaging in a three-way conversation with a machine and another human should be unable to reliably distinguish man from machine.

A variant on this "Turing Test" is the "Graphics Turing Test", the twist being that a human judge viewing and interacting with an artificially generated world should be unable to reliably distinguish it from reality.

"By interaction we mean you could control an object – rotate it, for example – and it would render in real-time," McGuigan says.

Photoreal Animation

Although existing computers can produce artificial scenes and textures detailed enough to fool the human eye, such scenes typically take several hours to render. The key to passing the Graphics Turing Test, says McGuigan, is to marry that photorealism with software that can render images in real-time – defined as a refresh rate of 30 frames per second.
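
To put numbers on that requirement: at 30 frames per second, each frame must be rendered in about 33 milliseconds, and even tracing just one primary ray per pixel of an HD image implies tens of millions of rays per second. The resolution and one-ray-per-pixel figures below are illustrative assumptions, not from McGuigan's work.

```python
# What "real time" implies for a ray tracer: the per-frame budget at
# 30 fps, for an assumed HD image with one primary ray per pixel.
fps = 30
frame_budget_s = 1 / fps          # about 33 ms per frame
pixels = 1920 * 1080
rays_per_second = pixels * fps    # about 62 million primary rays/s

print(f"{frame_budget_s * 1000:.1f} ms per frame, "
      f"{rays_per_second / 1e6:.0f} million rays per second minimum")
```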

McGuigan decided to test the ability of one of the world's most powerful supercomputers – Blue Gene/L at Brookhaven National Laboratory in New York – to generate such an artificial world.

Blue Gene/L possesses 18 racks, each with 2,000 standard PC processors that work in parallel to provide a huge amount of processing power – it has a speed of 103 teraflops, or 103 trillion "floating point operations" per second. By way of comparison, a calculator performs about 10 floating point operations per second.

In particular, McGuigan studied the supercomputer's ability to mimic the interplay of light with objects – an important component of any virtual world with ambitions to mimic reality.

He found that conventional ray-tracing software could run 822 times faster on the Blue Gene/L than on a standard computer, even though the software was not optimised for the parallel processors of a supercomputer. This allowed it to convincingly mimic natural lighting in real time.
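
That 822-times figure is worth a second look: spread across the machine's roughly 36,000 processors it corresponds to a parallel efficiency of only a few per cent, which is what Amdahl's law predicts when even a tiny fraction of the code stays serial. The sketch below makes that inference explicit; it is back-of-the-envelope arithmetic, not the article's analysis.

```python
# Why 822x on ~36,000 processors is plausible for unoptimised code:
# by Amdahl's law, a small serial fraction caps the speedup.
processors = 18 * 2000   # racks x CPUs, as described above
speedup = 822

efficiency = speedup / processors
# Serial fraction s implied by Amdahl: speedup = 1 / (s + (1 - s) / N)
serial_fraction = (processors / speedup - 1) / (processors - 1)

print(f"parallel efficiency: {efficiency:.1%}")           # about 2.3%
print(f"implied serial fraction: {serial_fraction:.2%}")  # about 0.12%
```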

Not There Yet

"The nice thing about this ray tracing is that the human eye can see it as natural," McGuigan says. "There are actually several types of ray-tracing software out there – I chose one that was relatively easy to port to a large number of processors. But others might be faster and even more realistic if they are used in parallel computing."

Although Blue Gene/L can model the path of light in a virtual world both rapidly and realistically, the speed with which it renders high-resolution images still falls short of that required to pass the Graphics Turing Test.

But supercomputers capable of passing the test may be just years away, thinks McGuigan. "You never know for sure until you can actually do it," he says. "But a back-of-the-envelope calculation would suggest it should be possible in the next few years, once supercomputers enter the petaflop range – that's 1000 teraflops."

But others think that passing the Graphics Turing Test requires more than photorealistic graphics moving in real time. Reality is not "skin deep", says Paul Richmond at the University of Sheffield, UK. An artificial object can appear real, but unless it moves in a realistic way the eye won't be fooled. "The real challenge is providing a real-time simulation that includes realistic simulated behaviour," he says.

Fluid Challenge

"I'd like to see a realistic model of the Russian ballet," says Mark Grundland at the University of Cambridge. "That's something a photographer would choose as a subject matter, and that's what we should aim to convey with computers."

Grundland also points out that the Graphics Turing Test does not specify what is conveyed in the virtual world scene. "If all that is there is a diffusely-reflecting sphere sitting on a diffusely-reflecting surface, then we've been able to pass the test for many years now," he says. "But Turing didn't mean for his vision to come true so quickly."

McGuigan agrees that realistic animation poses its own problems. "Modelling that fluidity is difficult," he says. "You have to make sure that when something jumps in the virtual world it appears heavy." But he remains optimistic that animation software will be up to the task. "Physical reality is about animation and lighting," he says. "We've done the lighting now – the animation will follow."




'Multiverse Theory' Holds That the Universe is a Virtual Reality Matrix

Comment: Isn't it amazing that scientists have finally had to admit that the design of the universe is so perfectly crafted as to indicate intelligent design, and yet they still try to avoid any explanation which includes the word God.

The multiverse theory has spawned another - that our universe is a simulation, writes Paul Davies.

If you've ever thought life was actually a dream, take comfort. Some pretty distinguished scientists may agree with you. Philosophers have long questioned whether there is in fact a real world out there, or whether "reality" is just a figment of our imagination.

Then along came the quantum physicists, who unveiled an Alice-in-Wonderland realm of atomic uncertainty, where particles can be waves and solid objects dissolve away into ghostly patterns of quantum energy.

Now cosmologists have got in on the act, suggesting that what we perceive as the universe might in fact be nothing more than a gigantic simulation.

The story behind this bizarre suggestion began with a vexatious question: why is the universe so bio-friendly? Cosmologists have long been perplexed by the fact that the laws of nature seem to be cunningly concocted to enable life to emerge. Take the element carbon, the vital stuff that is the basis of all life. It wasn't made in the big bang that gave birth to the universe. Instead, carbon has been cooked in the innards of giant stars, which then exploded and spewed soot around the universe.

The process that generates carbon is a delicate nuclear reaction. It turns out that the whole chain of events is a damned close run thing, to paraphrase Lord Wellington. If the force that holds atomic nuclei together were just a tiny bit stronger or a tiny bit weaker, the reaction wouldn't work properly and life may never have happened.

The late British astronomer Fred Hoyle was so struck by the coincidence that the nuclear force possessed just the right strength to make beings like Fred Hoyle that he proclaimed the universe to be "a put-up job". Since this sounds a bit too much like divine providence, cosmologists have been scrambling to find a scientific answer to the conundrum of cosmic bio-friendliness.

The one they have come up with is multiple universes, or "the multiverse". This theory says that what we have been calling "the universe" is nothing of the sort. Rather, it is an infinitesimal fragment of a much grander and more elaborate system in which our cosmic region, vast though it is, represents but a single bubble of space amid a countless number of other bubbles, or pocket universes.

Things get interesting when the multiverse theory is combined with ideas from sub-atomic particle physics. Evidence is mounting that what physicists took to be God-given unshakeable laws may be more like local by-laws, valid in our particular cosmic patch, but different in other pocket universes. Travel a trillion light years beyond the Andromeda galaxy, and you might find yourself in a universe where gravity is a bit stronger or electrons a bit heavier.

The vast majority of these other universes will not have the necessary fine-tuned coincidences needed for life to emerge; they are sterile and so go unseen. Only in Goldilocks universes like ours where things have fallen out just right, purely by accident, will sentient beings arise to be amazed at how ingeniously bio-friendly their universe is.

It's a pretty neat idea, and very popular with scientists. But it carries a bizarre implication. Because the total number of pocket universes is unlimited, there are bound to be at least some that are not only inhabited, but populated by advanced civilisations - technological communities with enough computer power to create artificial consciousness. Indeed, some computer scientists think our technology may be on the verge of achieving thinking machines.

It is but a small step from creating artificial minds in a machine, to simulating entire virtual worlds for the simulated beings to inhabit. This scenario has become familiar since it was popularised in The Matrix movies.

Now some scientists are suggesting it should be taken seriously. "We may be a simulation ... creations of some supreme, or super-being," muses Britain's astronomer royal, Sir Martin Rees, a staunch advocate of the multiverse theory. He wonders whether the entire physical universe might be an exercise in virtual reality, so that "we're in the matrix rather than the physics itself".

Is there any justification for believing this wacky idea? You bet, says Nick Bostrom, a philosopher at Oxford University, who even has a website devoted to the topic (http://www.simulation-argument.com). "Because their computers are so powerful, they could run a great many simulations," he writes in The Philosophical Quarterly.

So if there exist civilisations with cosmic simulating ability, then the fake universes they create would rapidly proliferate to outnumber the real ones. After all, virtual reality is a lot cheaper than the real thing. So by simple statistics, a random observer like you or me is most probably a simulated being in a fake world. And viewed from inside the matrix, we could never tell the difference.
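That "simple statistics" can be written down in a few lines. In the toy Python version below (the numbers are illustrative assumptions of mine, not Bostrom's), f is the fraction of civilisations that go on to run ancestor simulations and n is the average number of simulated populations each such civilisation creates; the simulated share of all observers climbs towards 100 per cent very quickly:

    # Toy version of Bostrom's counting argument (numbers purely illustrative).
    # f: fraction of civilisations that go on to run ancestor simulations
    # n: average number of simulated populations each such civilisation creates
    def simulated_fraction(f, n):
        return (f * n) / (f * n + 1)

    for f, n in [(0.01, 100), (0.1, 1000), (0.5, 1_000_000)]:
        print(f"f={f}, n={n}: {simulated_fraction(f, n):.2%} of observers are simulated")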

Or could we? John Barrow, a colleague of Martin Rees at Cambridge University, wonders whether the simulators would go to the trouble and expense of making the virtual reality foolproof. Perhaps if we look closely enough we might catch the scenery wobbling.

He even suggests that a glitch in our simulated cosmic history may have already been discovered, by John Webb at the University of NSW. Webb has analysed the light from distant quasars, and found that something funny happened about 6 billion years ago - a minute shift in the fine-structure constant, which sets the strength of electromagnetism and is often popularly described as a change in the speed of light. Could this be the simulators taking their eye off the ball?

I have to confess to being partly responsible for this mischief. Last year I wrote an item for The New York Times, saying that once the multiverse genie was let out of the bottle, Matrix-like scenarios inexorably follow. My conclusion was that perhaps we should retain a healthy scepticism for the multiverse concept until this was sorted out. But far from being a dampener on the theory, it only served to boost enthusiasm for it.

Where will it all end? Badly, perhaps. Now the simulators know we are on to them, and the game is up, they may lose interest and decide to hit the delete button.

Links Of Interest
Chat With Motbot / Artificial Intelligence

The Day You Discard Your Body

Using mind control to make flies sing

An Oxford scientist has used mind control to make female flies belt out male love songs, revealing they have a hidden capacity for masculine behaviour.


The female fly sang the male fly's song when a laser was flashed at it


The research, which suggests that the sexes are not quite so different as they seem, exploits a remote control method that could provide revolutionary insights into behaviour.

Professor Gero Miesenböck of Oxford University is sometimes nicknamed the "lord of the flies" after remarkable work he pioneered in America to use laser light to control fly brains with the flick of a switch. He has now applied his mind control methods to exploring fly sexuality.

Three years ago, he caused a buzz when he showed he could trigger certain actions in flies from a distance by shining light on them. The flies are genetically engineered so that only the brain cells of interest respond to light.

When the laser flashes, it activates these brain cells and can make the flies jump, walk, fly or, in the present case, produce a 'love song'.
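As a caricature of how this remote control works, the toy Python model below (a minimal sketch; the threshold and current values are entirely invented, and real optogenetics is vastly more involved) treats an engineered neuron as a unit that receives extra input current only when it carries the light-sensitive channel and the laser is on:

    # Toy model of a light-gated neuron (all values invented for illustration).
    # Only genetically targeted cells carry the light-sensitive channel, so
    # only they get the extra input current when the laser flashes.
    THRESHOLD = 1.0

    def fires(targeted, laser_on, background_input=0.2):
        light_current = 1.0 if (targeted and laser_on) else 0.0
        return background_input + light_current >= THRESHOLD

    for targeted in (True, False):
        for laser_on in (True, False):
            print(f"targeted={targeted!s:<5}  laser={laser_on!s:<5}  fires={fires(targeted, laser_on)}")

Only the targeted cell with the laser on crosses the threshold - the point being that the specificity comes from the genetics, while the timing comes from the light.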

Male fruit flies usually 'sing' to attract females, vibrating one wing to produce a distinctive sound; receptive females respond to the song by allowing copulation.

By tweaking one set of nerve cells thought to control this behaviour, the so-called fru neurons, Professor Miesenböck has shown that female fruit flies can be made to 'sing' too.

"You might expect that the brains of the two sexes would be built differently, but that does not seem to be the case," says Prof Miesenböck. "Instead, it appears there is a largely bisexual or 'unisex brain' with a few critical switches that make the difference between male and female behaviour."

"The fact that we could make females vibrate one wing to produce a courtship song - a behaviour never before seen in female flies - shows that the brain circuits for this male behaviour are present in the female brain, even though they are never used for that purpose,' says Miesenböck.

Although the work suggests the circuitry for male behaviour exists in female brains and simply lies dormant, the 'song' was not quite as good as the males'.

"If you look carefully, the females do sound different," he says. "They have a different pitch and rhythm and aren't as well controlled." He thinks those distinctions probably stem from real, if subtle, differences between the male and female brains.

Despite this off-key note, the study poses a profound puzzle. "The mystery at the root of our study is the neuronal basis of differences in male and female behaviour. Anatomically, the differences are subtle. How is it that the neural equipment is so similar, but the sexes behave so differently?"

The new findings suggest that flies must harbour key nodes or "master switches" that set the whole brain to the male or female mode, according to the researchers. Their next goal is to find those controls.

Wednesday, July 30, 2008

An Air Car You Could See in 2009: ZPM’s 106 MPG Compressed-Air Hybrid



Compressed-air powered cars could take you over 800 miles on a single fill-up, at speeds of up to 96 mph. They should refuel in less than 3 minutes and, at speeds over 35 mph, emit about half the CO2 of a Toyota Prius. Best part? You could see them in the US at the end of next year.
Car-tech aficionados may already be familiar with Zero Pollution Motors’ (ZPM) compressed-air powered car. For those who haven’t heard of it yet, read on:

“The compressed air vehicle is a new generation of vehicle that finally solves the motorist’s dilemma: how to drive and not pollute at a cost that is affordable!”

What happens when you replace the explosions in your car’s combustion chamber with clean compressed air? Well, as long as you lighten things up by replacing heavier parts with aluminum, you end up with a clean, efficient way to power a vehicle.

The world’s first commercial compressed-air powered vehicle is currently being produced by India’s largest automaker, Tata Motors, which is licensing the technology from the European company MDI (a company powered by the innovation of ex-Formula One engineer Guy Nègre). They anticipate having about 6000 of these vehicles on city streets in India in 2008.

How does an Air Car Work?
Although potentially revolutionary, it really isn’t that complicated. A compressed-air car uses the force of super-compressed air to move the engine’s pistons up and down, as opposed to the explosions produced by igniting small amounts of injected fuel.

To get things moving on compressed air, weight reduction is a top priority. MDI’s aluminum-based engine weighs half what a normal engine does, and the frame is also built out of lightweight materials (the US version will be aluminum?).

ZPM’s US model will store about 3200 cubic feet of compressed air in carbon fiber tanks at 4500 psi. Carbon fiber tanks are used for safety reasons since they tend to split open (as opposed to explode) when punctured.
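To put those figures in perspective, here is a quick ideal-gas estimate of the physical tank size they imply - my own back-of-envelope, not ZPM’s, and it ignores the fact that air compresses somewhat non-ideally at 4500 psi:

    # Back-of-envelope tank size from the article's own figures.
    # Ideal-gas assumption; real air at 4500 psi deviates from this somewhat.
    air_volume_ft3 = 3200           # air volume at atmospheric pressure
    storage_psi = 4500
    atmospheric_psi = 14.7

    compression_ratio = storage_psi / atmospheric_psi
    tank_volume_ft3 = air_volume_ft3 / compression_ratio
    print(f"compression ratio: {compression_ratio:.0f}x")
    print(f"physical tank volume: {tank_volume_ft3:.1f} cubic feet (~{tank_volume_ft3 * 28.3:.0f} litres)")

That works out to roughly 10.5 cubic feet of tankage - bulky, but not implausible for a purpose-built lightweight car.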


Compressed air from the tanks will run directly to the engine at speeds under 35 miles per hour. That means that under 35 mph the car qualifies as a zero-emissions vehicle. At higher speeds the engine will burn a small amount of fuel to create more compressed air, sort of like how a plug-in hybrid such as the Chevy Volt produces electricity on the fly. The hybrid air-car setup should be able to use any number of fuels, including gasoline, propane, or ethanol.

1 tank of air + 8 gallons of gas = 848 mile range
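That equation is also where the 106 MPG headline figure comes from, as a one-line check with the article’s own numbers confirms:

    # Quick check of the headline figure using the article's own numbers.
    range_miles = 848
    gasoline_gallons = 8
    print(f"{range_miles / gasoline_gallons:.0f} MPG")   # -> 106 MPG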


The car’s compressed air tank can be refilled in about 3 minutes at a service station. To fill it up at home, the car would be plugged in and an onboard compressor would refill the tank in about 4 hours, at an electricity cost of about $2.

If you aren’t sure whether turning electricity into compressed air is really that clean, here are some numbers: at speeds over 35 mph the air car emits less than half the CO2 per mile of a 2007 Toyota Prius (0.141 lbs of CO2 per mile, versus 0.34 lbs for the Prius).
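Taking those two figures at face value, the ratio is actually a bit better than half:

    # Ratio of the two CO2 figures quoted in the article.
    air_car = 0.141                 # lbs of CO2 per mile
    prius = 0.34                    # lbs of CO2 per mile
    print(f"air car emits {air_car / prius:.0%} of the Prius's CO2 per mile")   # -> 41%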

Will we actually see a US-model Air Car in 2009/10?
New York startup ZPM, like Tata Motors, has licensed technology from Luxembourg-based MDI. MDI also has plans to release these cars in Europe in 2-, 4-, and 6-cylinder models, starting under $15,000.

Despite lightweight construction that could be a concern for passing US safety tests, it appears that air car technology could be available in the US in late 2009. ZPM told PopularMechanics.com earlier this year that it expects to produce the first US-model air car at the end of 2009 or early 2010. (Btw, ZPM’s model is also a candidate for the $10 million Automotive X Prize.)

ZPM wants to produce a 6-seater, 75-hp model with a 1000-mile range at 96 mph, all for just $17,800.

The big question I think we all have is: will this car make it through US safety testing? ZPM’s website says that air car models will meet the same safety specifications as all cars driven in the US. As with most of these new hyper-efficient models we’ve seen (like Aptera’s Typ1 or VW’s 1L car), ZPM claims the vehicle’s “tubular body provides increased resistance in the event of a crash.” The car will also come with airbags and ABS brakes.

Brain in a dish flies flight simulator

A Florida scientist has developed a "brain" in a glass dish that is capable of flying a virtual fighter plane and could enhance medical understanding of neural disorders such as epilepsy.

The "living computer" was grown from 25,000 neurons extracted from a rat's brain and arranged over a grid of 60 electrodes in a Petri dish.

The brain cells then started to reconnect themselves, forming microscopic interconnections, said Thomas DeMarse, professor of biomedical engineering at the University of Florida.

"It's essentially a dish with 60 electrodes arranged in a dish at the bottom," explained DeMarse, who designed the study.

"Over that we put the living cortical neurons from rats, which rapidly begin to reconnect themselves, forming a living neural network -- a brain."

Although such living networks could one day be used to fly unmanned aircraft, DeMarse said the study was of more immediate relevance as an experimental aid to understanding how the human brain performs and learns computational tasks at a cellular level.

"We're interested in studying how brains compute," said DeMarse.

"If you think about your brain, and learning and the memory process, I can ask you questions about when you were five-years-old and you can retrieve information. That's a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can't."

Although computers can perform certain tasks extremely quickly, they lack the flexibility and adaptability of the human brain and perform particularly poorly at pattern recognition tasks.

"If we extract the rules of how these neural networks are doing computations like pattern recognition we can apply that to create novel computing systems," said DeMarse.

"There's a lot of data out there that will tell you that the computation that's going on here isn't based on just one neuron. The computational property is actually an emergent property of hundreds of thousands of neurons cooperating to produce the amazing processing power of the brain."

As well as enhancing scientific knowledge of how the brain works, the neurons may provide clues to brain dysfunction. For example, an epileptic seizure is triggered when all the neurons in the brain fire simultaneously -- a pattern commonly replicated by a neural network in a dish.

When linked up to an F-22 jet flight simulator, the brain and the simulator established a two-way connection similar to how neurons receive and interpret signals from each other to control our bodies.

Gradually the brain learnt to control the flight of the plane based on the information it received about flight conditions.
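The shape of that closed loop is easy to caricature, even though the biology is not. The hypothetical Python sketch below is my own illustration of the kind of feedback loop described, not DeMarse's actual protocol; the plant model and the "plasticity" rule are invented for illustration:

    # Hypothetical sketch of the dish-brain/flight-simulator feedback loop.
    # The plant model and "plasticity" rule are invented for illustration;
    # the real experiment delivered feedback through the electrode grid.
    import random

    class DishBrain:
        def __init__(self):
            self.gain = 0.0                     # current strength of the response

        def output(self, pitch_error):
            return self.gain * pitch_error      # corrective control signal

        def feedback(self, pitch_error, correction, rate=0.01):
            # Toy learning rule: strengthen responses while the error
            # is still left uncorrected.
            self.gain += rate * pitch_error * (pitch_error - correction)

    brain = DishBrain()
    pitch = 10.0                                # degrees away from level flight
    for _ in range(200):
        correction = brain.output(pitch)
        brain.feedback(pitch, correction)
        # Simulator: random drift plus the effect of the corrective signal.
        pitch += random.uniform(-0.5, 0.5) - 0.1 * correction
    print(f"final pitch error: {pitch:+.2f} degrees (started at +10.00)")

The toy controller settles near level flight; the real experiment works on the same principle, with the error signal encoded as electrical stimulation rather than a Python variable.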

However, the dish-grown network still falls a long way short of the complexity of the human brain, which has billions of neurons, and Steven Potter, a biomedical engineer at the Georgia Institute of Technology, said a brain in a dish flying a real plane was still a long way off.

"A lot of people have been interested in what changes in the brains of animals and people when they are learning things," said Potter, DeMarse's former supervisor.

"We're interested in getting down into the network and cellular mechanisms, which is hard to do in living animals. And the engineering goal would be to get ideas from this system about how brains compute and process information."