Thursday, March 18, 2004

FOCUS


Technology – Research

NASA develops system to computerize silent, "subvocal speech"
NASA scientists have begun to computerize silent human reading, using the nerve signals in the throat that control speech.

In preliminary experiments, NASA scientists found that small, button-sized sensors, stuck under the chin and on either side of the "Adam's apple," could gather nerve signals and send them to a processor, and then to a computer program that translates them into words. Eventually, such "subvocal speech" systems could be used in spacesuits, in noisy places like airport towers to capture air-traffic controller commands, or even in traditional voice-recognition programs to increase accuracy, according to NASA scientists.

"What is analyzed is silent, or subauditory, speech, such as when a person silently reads or talks to himself," said Chuck Jorgensen, a scientist whose team is developing silent, subvocal speech recognition at NASA's Ames Research Center, Moffett Field, Calif. "Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement," Jorgensen explained.

"A person using the subvocal system thinks of phrases and talks to himself so quietly, it cannot be heard, but the tongue and vocal chords do receive speech signals from the brain," Jorgensen said.

In their first experiment, scientists "trained" special software to recognize six words and 10 digits that the researchers repeated subvocally. Initial word recognition results were an average of 92 percent accurate. The first sub-vocal words the system "learned" were "stop," "go," "left," "right," "alpha" and "omega," and the digits "zero" through "nine." Silently speaking these words, scientists conducted simple searches on the Internet by using a number chart representing the alphabet to control a Web browser program.
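
How a figure like "an average of 92 percent accurate" would be computed can be sketched in a few lines. The Python fragment below is illustrative only: the recognise() function is a hypothetical stand-in for the trained software (one possible shape of it is sketched further below), and the vocabulary list simply restates the 16 words and digits named above.

    # Minimal sketch: scoring subvocal recognition accuracy over labelled trials.
    # recognise() is a hypothetical stand-in for the trained software described
    # in the article, not NASA's actual implementation.
    VOCABULARY = ["stop", "go", "left", "right", "alpha", "omega",
                  "zero", "one", "two", "three", "four",
                  "five", "six", "seven", "eight", "nine"]  # the 16-item word set

    def accuracy(trials, recognise):
        """trials: list of (signal, true_word) pairs; recognise: signal -> word."""
        correct = sum(1 for signal, word in trials if recognise(signal) == word)
        return correct / len(trials)

    # With, say, 100 labelled test repetitions and 92 correct guesses,
    # accuracy(trials, recognise) returns 0.92.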

"We took the alphabet and put it into a matrix - like a calendar. We numbered the columns and rows, and we could identify each letter with a pair of single-digit numbers," Jorgensen said. "So we silently spelled out 'NASA' and then submitted it to a well-known Web search engine. We electronically numbered the Web pages that came up as search results. We used the numbers again to choose Web pages to examine. This proved we could browse the Web without touching a keyboard," Jorgensen explained.

Scientists are testing new, "noncontact" sensors that can read muscle signals even through a layer of clothing.

A second demonstration will be to control a mechanical device using a simple set of commands, according to Jorgensen. His team is planning tests with a simulated Mars rover. "We can have the model rover go left or right using silently 'spoken' words," Jorgensen said. People in noisy conditions could use the system when privacy is needed, such as during telephone conversations on buses or trains, according to scientists.

"An expanded muscle-control system could help injured astronauts control machines. If an astronaut is suffering from muscle weakness due to a long stint in micro gravity, the astronaut could send signals to software that would assist with landings on Mars or the Earth, for example," Jorgensen explained. "A logical spin-off would be that handicapped persons could use this system for a lot of things." To learn more about what is in the patterns of the nerve signals that
control vocal chords, muscles and tongue position, Ames scientists are studying the complex nerve-signal patterns. "We use an amplifier to strengthen the electrical nerve signals. These are processed to remove noise, and then we process them to see useful parts of the signals to show one word from another," Jorgensen said.

After the signals are amplified, computer software "reads" the signals to recognize each word and sound. "The keys to this system are the sensors, the signal processing and the pattern recognition, and that's where the scientific meat of what we're doing resides," Jorgensen explained. "We will continue to expand the vocabulary with sets of English sounds, usable by a full speech-recognition computer program."
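
Read together, the last two paragraphs describe a fairly standard chain: amplify, remove noise, extract distinguishing features, then match against trained word patterns. The Python sketch below is a loose illustration of that chain under simple assumptions (a moving-average denoiser, per-segment energy features and a nearest-template matcher); it is not NASA's method, which the article does not disclose.

    # Illustrative amplify -> denoise -> feature -> match chain. Every concrete
    # choice here (gain, moving-average filter, energy features, nearest-template
    # matching) is an assumption made for the sketch.
    import numpy as np

    def preprocess(raw_signal, gain=1000.0, window=25):
        """Amplify the raw sensor trace and smooth out high-frequency noise."""
        amplified = np.asarray(raw_signal, dtype=float) * gain
        kernel = np.ones(window) / window          # simple moving-average denoiser
        return np.convolve(amplified, kernel, mode="same")

    def extract_features(signal, n_segments=10):
        """Summarise the trace as per-segment energies (a stand-in feature set)."""
        return np.array([np.sum(seg ** 2) for seg in np.array_split(signal, n_segments)])

    def recognise(signal, templates):
        """Return the vocabulary word whose stored feature template is closest."""
        features = extract_features(preprocess(signal))
        return min(templates, key=lambda w: np.linalg.norm(features - templates[w]))

    # 'templates' would map each vocabulary word to the average feature vector of
    # its training repetitions, built during the "training" step described earlier.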

The Computing, Information and Communications Technology Program, part of NASA's Office of Exploration Systems, funds the subvocal word-recognition research. There is a patent pending for the new technology.
Source: NASA
Article: Publication-size images are available on the World Wide Web at: NASA, March 04


HIGHLIGHTS


Technology – Mobile Phones

Sony launches mobile music service
Sony said today at the CeBIT trade show in Germany that it plans to launch what it claims is the first personalized, streaming music service for mobile phones. The new service will act as a "personalized radio" that allows users to select the types of music they want to hear. Sony said it is in talks with almost all the major carriers in Europe. The company will launch a version of the mobile music service through the Finnish division of TeliaSonera in April. Sony's new service will go up against RealNetworks' mobile media player. RealNetworks has aggressively pushed its media player in the wireless market, signing deals with leading mobile content services, including Vodafone Live!
Source: Reuters, March 04


Telecommunications – Wireless

Public services upgrade wireless
The end of wireless CDPD service in the U.S. is forcing police departments across the country to update their wireless data service. Most police departments have used CDPD service from Verizon Wireless and AT&T Wireless, with rates mostly around $50 to $70 per month for each license. Between 100,000 and 200,000 public safety customers currently use CDPD service. Most of these subscribers will upgrade to either CDMA2000 1xRTT or GSM/GPRS for wide-area data service. Many public service agencies are also eyeing Wi-Fi as a possible way to build cheaper urban wireless systems.
Source: Investor's Business Daily, March 04

TechTrend – Australia

Australians prefer snail mail to SMS
Source: News Limited, March 04
Article: News.com.au