First Step to Virtual Sex: Virtual Lips for Long-Distance Lovers

THE GIST
  • Connect the device to a computer via a USB port.
  • Link up online.
  • Start making out.
[Image: The Kissenger is shaped like a small head with oversized silicone lips. Credit: YouTube screen grab]

Finding it hard to keep up the passion in a long-distance relationship? Help might be on the way.
A robotics professor in Singapore has invented a gadget equipped with motion-sensitive electronic "lips" that allow amorous but absent couples to exchange long-distance smooches via the Internet.
 ------------------------------------------------------------------------------------------------------------

Predictions made by Ray Kurzweil

According to Ray Kurzweil, 89 of the 108 predictions he made were entirely correct by the end of 2009. An additional 13 were what he calls "essentially correct" (meaning that they were likely to be released within a few years of 2009), for a total of 102 out of 108. Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue-in-cheek anyway, was simply wrong.
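Kurzweil's self-reported tally can be checked with simple arithmetic:

```python
# Kurzweil's self-assessed scorecard for his 2009 predictions.
correct = 89                # "entirely correct" by the end of 2009
essentially_correct = 13    # likely to be realized within a few years of 2009
partially_correct = 3
about_ten_years_off = 2
wrong = 1                   # the tongue-in-cheek one

total = (correct + essentially_correct + partially_correct
         + about_ten_years_off + wrong)
print(correct + essentially_correct)  # 102 counted as correct or essentially correct
print(total)                          # 108 predictions in all
```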

2019

  • Devices that deliver sensations to the skin surface of their users (i.e.--tight body suits and gloves) are also sometimes used in virtual reality to complete the experience. "Virtual sex"--in which two people are able to have sex with each other through virtual reality, or in which a human can have sex with a "simulated" partner that only exists on a computer—becomes a reality.
  • Just as visual- and auditory virtual reality have come of age, haptic technology has fully matured and is completely convincing, yet requires the user to enter a V.R. booth. It is commonly used for computer sex and remote medical examinations. It is the preferred sexual medium since it is safe and enhances the experience.
BLOG: Kiss Transmitter Lets You Make Out Over the Internet
Shaped like a small head with oversize silicone lips, the "Kissenger" -- short for Kiss Messenger -- was unveiled in June at a scientific conference in Britain and is still being refined for commercial launch.
"It can be used between humans to improve their communication," its creator Hooman Samani told AFP.
Couples just have to connect the devices to computers via USB cables, link up online and start kissing the silicone material to trigger sensors that move the gadget on the other side.
They can stare at each other on screen while exchanging kisses.
"The main issue is to transmit the force and pressure, and also the shape of the lip," Samani said.
The "special silicone material" chosen for the lips offers "the best sensation and feeling," said the scientist, who has personally tested the device.
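The article gives no implementation details, but the loop it describes (sample pressure on one lip, transmit it over the Internet, drive the actuators on the partner's device) can be sketched roughly as follows; every name here, from the sensor count to the helper functions and wire format, is a hypothetical illustration rather than the Kissenger's actual design:

```python
import socket
import struct

NUM_SENSORS = 8  # hypothetical: pressure sensors embedded in the silicone lip

def read_pressure_sensors():
    """Placeholder for sampling the local lip's pressure sensors (0.0-1.0 each)."""
    return [0.0] * NUM_SENSORS

def drive_actuators(pressures):
    """Placeholder for moving the partner device's lip actuators."""
    pass

def exchange_kiss(sock):
    """One cycle: send local lip pressures, receive and apply the remote ones."""
    local = read_pressure_sensors()
    sock.sendall(struct.pack(f"{NUM_SENSORS}f", *local))
    data = sock.recv(NUM_SENSORS * 4)  # 4 bytes per 32-bit float
    remote = list(struct.unpack(f"{NUM_SENSORS}f", data))
    drive_actuators(remote)
    return remote
```

In practice the device would run this cycle many times per second, so that the force and pressure Samani singles out as the hard part track each other in near real time.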
But the Kissenger is not yet ready for the market despite "a lot of offers" from interested parties because there are "ethical issues" that need to be resolved on top of the technical aspects, he said.
"Kissing is very intimate so in order to have a product in market which is going to deal with this sensitive issue we have to do proper studies and investigation on the social point of view, cultural point of view," he said.
The device is still being refined at a laboratory jointly set up by the National University of Singapore (NUS) and Keio University of Japan.
Samani calls his field of study "lovotics" -- research into the relationship between robots and humans -- and the Kissenger is just one of several devices being developed by his team.
=========================================================================

Kissenger: virtual lips for long-distance lovers

Sapa-AFP | 23 July, 2012 09:21


Scientists decode brain waves to eavesdrop on what we hear

BERKELEY —
Neuroscientists may one day be able to hear the imagined speech of a patient unable to speak due to stroke or paralysis, according to University of California, Berkeley, researchers.
[Figure: Frequency spectrograms of the actual spoken words (top) and the sounds as reconstructed by two separate models based solely on recorded temporal lobe activity in a volunteer subject. The words – Waldo, structure, doubt and property – are more or less recognizable, even though the model had never encountered these specific words before. Credit: Brian Pasley, UC Berkeley]
These scientists have succeeded in decoding electrical activity in the brain’s temporal lobe – the seat of the auditory system – as a person listens to normal conversation. Based on this correlation between sound and brain activity, they then were able to predict the words the person had heard solely from the temporal lobe activity.
“This research is based on sounds a person actually hears, but to use it for reconstructing imagined conversations, these principles would have to apply to someone’s internal verbalizations,” cautioned first author Brian N. Pasley, a post-doctoral researcher in the center. “There is some evidence that hearing the sound and imagining the sound activate similar areas of the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device.”
“This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s disease and can’t speak,” said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience. “If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit.”
In addition to the potential for expanding the communication ability of the severely disabled, he noted, the research also “is telling us a lot about how the brain in normal people represents and processes speech sounds.”
Pasley and his colleagues at UC Berkeley, UC San Francisco, University of Maryland and The Johns Hopkins University report their findings Jan. 31 in the open-access journal PLoS Biology.
Help from epilepsy patients
They enlisted the help of people undergoing brain surgery to determine the location of intractable seizures so that the area can be removed in a second surgery. Neurosurgeons typically cut a hole in the skull and safely place electrodes on the brain surface or cortex – in this case, up to 256 electrodes covering the temporal lobe – to record activity over a period of a week to pinpoint the seizures. For this study, 15 neurosurgical patients volunteered to participate.
[Figure: An X-ray CT scan of the head of one of the volunteers, showing electrodes distributed over the brain’s temporal lobe, where sounds are processed. Credit: Adeen Flinker, UC Berkeley]
Pasley visited each person in the hospital to record the brain activity detected by the electrodes as they heard 5-10 minutes of conversation. Pasley used this data to reconstruct and play back the sounds the patients heard. He was able to do this because there is evidence that the brain breaks down sound into its component acoustic frequencies – for example, from a low of about 1 Hertz (cycles per second) to a high of about 8,000 Hertz – that are important for speech sounds.
Pasley tested two different computational models to match spoken sounds to the pattern of activity in the electrodes. The patients then heard a single word, and Pasley used the models to predict the word based on electrode recordings.
“We are looking at which cortical sites are increasing activity at particular acoustic frequencies, and from that, we map back to the sound,” Pasley said. He compared the technique to a pianist who knows the sounds of the keys so well that she can look at the keys another pianist is playing in a sound-proof room and “hear” the music, much as Ludwig van Beethoven was able to “hear” his compositions despite being deaf.
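Pasley's description of mapping back from cortical activity to sound matches the standard linear stimulus-reconstruction approach: learn which electrodes respond to which acoustic frequencies, then invert that mapping. The sketch below uses synthetic data to show the idea; the actual study fit regularized models to real electrode recordings, so nothing here is the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 time points, 32 spectrogram frequency bands,
# 64 electrodes whose activity is a noisy linear mix of the bands.
T, F, E = 200, 32, 64
spectrogram = rng.random((T, F))      # "heard" sound, time x frequency
mixing = rng.normal(size=(F, E))      # how each band drives each electrode
neural = spectrogram @ mixing + 0.1 * rng.normal(size=(T, E))

# Fit a linear reconstruction filter mapping neural activity back to the
# spectrogram. Plain least squares here; the study used regularized variants.
W, *_ = np.linalg.lstsq(neural, spectrogram, rcond=None)
reconstructed = neural @ W

# Correlation between the actual and reconstructed spectrograms
r = np.corrcoef(spectrogram.ravel(), reconstructed.ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

With little noise the reconstruction is nearly perfect; real electrode data is far noisier, which is why the study's reconstructed words were only "more or less recognizable."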
The better of the two methods was able to reproduce a sound close enough to the original word for Pasley and his fellow researchers to correctly guess the word.
“We think we would be more accurate with an hour of listening and recording and then repeating the word many times,” Pasley said. But because any realistic device would need to accurately identify words heard the first time, he decided to test the models using only a single trial.
“This research is a major step toward understanding what features of speech are represented in the human brain,” Knight said. “Brian’s analysis can reproduce the sound the patient heard, and you can actually recognize the word, although not at a perfect level.”
Knight predicts that this success can be extended to imagined, internal verbalizations, because scientific studies have shown that when people are asked to imagine speaking a word, similar brain regions are activated as when the person actually utters the word.
“With neuroprosthetics, people have shown that it’s possible to control movement with brain activity,” Knight said. “But that work, while not easy, is relatively simple compared to reconstructing language. This experiment takes that earlier work to a whole new level.”
Based on earlier work with ferrets
The current research builds on work by other researchers about how animals encode sounds in the brain’s auditory cortex. In fact, some researchers, including the study’s coauthors at the University of Maryland, have been able to guess which words scientists read to ferrets based on recordings from the ferrets’ brains, even though the ferrets were unable to understand the words.
The ultimate goal of the UC Berkeley study was to explore how the human brain encodes speech and determine which aspects of speech are most important for understanding.
“At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound,” Pasley said. “The big question is, What is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings.”
Coauthors of the study are electrical engineers Stephen V. David, Nima Mesgarani and Shihab A. Shamma of the University of Maryland; Adeen Flinker of UC Berkeley’s Helen Wills Neuroscience Institute; and neurologist Nathan E. Crone of The Johns Hopkins University in Baltimore, Md. The work was done principally in the labs of Robert Knight at UC Berkeley and Edward Chang, a neurosurgeon at UCSF.
Chang and Knight are members of the Center for Neural Engineering and Prostheses, a joint UC Berkeley/UCSF group focused on using brain activity to develop neural prostheses for motor and speech disorders in disabling neurological disorders.
The work is supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health and the Humboldt Foundation.
PLoS Biology Podcast Episode 2: Decoding speech from the human brain (interview with Brian Pasley and Robert Knight) by Public Library of Science

A supercomputer that can unravel secrets of universe


LONDON: Renowned theoretical physicist Stephen Hawking has launched the most powerful shared-memory supercomputer in Europe.

Hawking anticipates that the COSMOS supercomputer, manufactured by SGI and the first system of its kind, will open up new windows on the universe.

During the launch, which is part of the Numerical Cosmology 2012 workshop at the Centre for Mathematical Sciences at the University of Cambridge, Hawking said: "We have made advances in cosmology and particle physics. Cosmology is now a precision science, so we need machines like COSMOS to reach out and touch the real universe, to investigate whether our mathematical models are correct."

Hawking added: "I hope that we will soon find an ultimate theory which, in principle, would enable us to predict everything in the universe. Even if we do find the ultimate theory, we will still need supercomputers to describe how something as big and complex as the universe evolves, let alone why humans behave the way they do."

'Saturn's moon Titan is Earth-like'

Titan, Saturn's largest moon, is "a weirdly Earth-like place" when it comes to geology, astronomers have claimed. Titan boasts landscapes shaped by the flow of rivers, though they are rivers of liquid methane, not of water. And, like Earth, the surface of Titan is surprisingly free of craters, implying that geological activity is constantly reshaping the moon, as also happens here. "It's a weirdly Earth-like place," said Taylor Perron, an assistant professor of geology at MIT, "even with this exotic combination of materials and temperatures".