SEMINAR TOPICS AND SEMINAR REPORTS

Saturday, 8 May 2010

AirPower from RCA


While I was at CTIA Wireless 2010, I saw a booth that had something to say about AirPower.
AirPower is a new technology developed by RCA, designed to harvest energy from the Wi-Fi signals in the air. 

I had no idea that those Wi-Fi signals could actually carry any energy, but apparently it only takes five to six hours to juice up one of these AirPower batteries if it is within close proximity of a Wi-Fi hotspot.
These AirPower chargers will be available sometime before the holidays, and they intend to market them as emergency back-up power supplies.


Motorola H17txt Bluetooth headset

Motorola does not only make cellphones and smartphones, in case you were wondering and are new to the scene; the company has its fingers dipped in the wider world of communication devices as well. Its latest device is the H17txt Bluetooth headset, which offers wireless mobility when communicating with your contacts over a Bluetooth-enabled cellphone. 
Thanks to MotoSpeak, this Bluetooth headset was specially designed to keep you connected and responsible while both hands stay on the wheel, and it was built to make the most of text-to-speech technology. When paired with the application, the H17txt will read text messages (SMS) into the headset in real time, helping you keep your eyes on the road. You can also choose to customize an automated response and have text-to-speech enabled automatically whenever the headset is turned on.


Microsoft announces KIN

Microsoft has just announced its brand new Windows-powered smartphone known as KIN, specially designed for folks who are actively navigating their social lives wherever they are. This is made possible through partnerships with Verizon Wireless, Vodafone and Sharp Corporation; KIN is meant to be the ultimate social experience, blending the phone, online services and the PC to deliver breakthrough new experiences known as the Loop, Spot and Studio. 
You can’t just pick up the KIN from anywhere, as it will be exclusively available from Verizon Wireless for folks living in the US from May onwards, while Vodafone will offer it in Germany, Italy, Spain and the United Kingdom later this autumn.


Samsung AMOLED Beam cellphone in South Korea

We all know that our friends over in South Korea have a slew of smartphones which are, by and large, much more advanced than what the rest of the world gets (they are, of course, a close challenger to Japan, which has some pretty far-out handsets as well). For those in the know, the Samsung Haptic Beam (SPH-W7900), which has been out for some time already, has just received its successor in the form of the AMOLED Beam (SPH-W9600), making it the first full-touch beam projector phone in the world capable of delivering an enhanced viewing experience and richer image projection.


Wall Mounted Charger

Sometimes the best solution is the simplest one. This charging station is probably the plainest one you’ll come across, but it hides the cords and attaches to the wall. All around, it will easily simplify charging your smaller gadgets. If you don’t want a charging station that draws attention to itself, this one would be a great solution.


Sony Ericsson Zylo with Walkman cellphone

Sony Ericsson is back with a bang; this time round they have expanded their Walkman line of cellphones (the Cybershot fans will have to take a back seat for now), with the Zylo as the latest model to be introduced. This next-generation Walkman phone will not break the bank if you are interested in picking one up, and it not only offers the best musical experience from a Walkman handset but also merges music with social networking in a single device. 
That’s right, you will be able to catch up with family and friends over Twitter and Facebook, among others, with just a few button presses, while enjoying your favorite tunes in the background. Apart from that, the Zylo also sports the TrackID feature, which lets you look up the name and artist of whatever track you are currently hearing.


Samsung Reality on Verizon

Do you live in the US and are currently looking for a new phone? Granted, there are virtually tons of handsets to choose from, and most folks would probably veer towards the smartphone variety, ultimately whittling their choices down to an iPhone or a BlackBerry. For those who want to try something different, why not give other handsets a chance instead of those two models?
Verizon Wireless and Samsung Telecommunications America (Samsung Mobile) have teamed up to offer the Samsung Reality from April 22nd onwards. This sleek and stylish model comes in Piano Black and City Red, bringing a 3″ touch screen display, a full horizontal slide-out QWERTY keyboard, customizable widgets, and multiple messaging options to the fore.


Virtual Typing

Virtual Keyboard is just another example of today’s computer trend of ‘smaller and faster’. It uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard.

Virtual Keyboard is a small Java application that lets you easily create multilingual text content on almost any existing platform and output it directly to web pages. Being a small, handy, well-designed and easy-to-use application, Virtual Keyboard turns into a perfect solution for cross-platform multilingual text input.
The main features are: platform-independent multilingual support for keyboard text input, built-in language layouts and settings, support for copy/paste and similar operations just as in a regular text editor, preservation of existing system language settings, an easy and user-friendly interface and design, and small file size.

Virtual Keyboard is available as a Java applet and as JavaScript. It uses a special API to interact with a web page: you can invoke its public methods from JavaScript to perform certain tasks, such as launching the Virtual Keyboard or moving its window to exact screen coordinates. The application also uses a bound text control to transfer text to and from the page.
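At the heart of any on-screen multilingual keyboard is a layout table that maps a physical key press to the character of the currently selected language. The short Python sketch below is only a toy illustration of that idea; the layout entries and the function name are invented for the example and are not part of the applet's actual API.

# Hypothetical layout tables; the real applet ships its own built-in layouts.
LAYOUTS = {
    "en": {"a": "a", "s": "s", "d": "d"},
    "ru": {"a": "ф", "s": "ы", "d": "в"},
}

def translate_key(physical_key: str, language: str) -> str:
    """Map a physical key press to the character of the selected layout."""
    return LAYOUTS[language].get(physical_key, physical_key)

print(translate_key("a", "ru"))   # prints the Cyrillic letter mapped to key 'a'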


Thursday, 6 May 2010

lwIP

Over the last few years, interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment and prices are dropping. At the same time, wireless networking technologies such as Bluetooth and IEEE 802.11b WLAN are emerging. This gives rise to many fascinating new scenarios in areas such as health care, safety and security, transportation, and the processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low-speed networks such as the ARPANET, Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow, since a large number of applications using it have already been developed. The large connectivity of the global Internet is also a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to cope with limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.
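To give a flavour of the kind of routine such a minimal stack must provide, here is the standard Internet checksum (RFC 1071) that IP, TCP and UDP all rely on, written as a small Python sketch rather than in the C used by lwIP itself.

def internet_checksum(data: bytes) -> int:
    """One's-complement Internet checksum (RFC 1071) over a byte string."""
    if len(data) % 2:
        data += b"\x00"                             # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]       # sum 16-bit big-endian words
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)    # fold carries back in
    return ~total & 0xFFFF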


HP JAVA

The explosion of Java over the last year has been driven largely by its role in bringing a new generation of interactive web pages to the World Wide Web. Undoubtedly various features of the language (compactness, byte code portability, security, and so on) make it particularly attractive as an implementation language for applets embedded in web pages. But it is clear that the ambitions of the Java development team go well beyond enhancing the functionality of HTML documents.
     “Java is designed to meet the challenges of application development in the context of heterogeneous, network-wide distributed environments. Paramount among these challenges is secure delivery of applications that consume the minimum of system resources, can run on any hardware and software platform, and can be extended dynamically.”

Several of these concerns are mirrored in developments in the High Performance Computing world over a number of years. A decade ago the focus of interest in the parallel computing community was on parallel hardware. A parallel computer was typically built from specialized processors connected through a proprietary high-performance communication switch. If the machine also had to be programmed in a proprietary language, that was an acceptable price for the benefits of using a supercomputer. This attitude was not sustainable as one parallel architecture gave way to another, and the cost of porting software became exorbitant. For several years now, portability across platforms has been a central concern in parallel computing.

HPJava is a programming language extended from Java to support parallel programming, especially (but not exclusively) data parallel programming on message passing and distributed memory systems, from multi-processor systems to workstation clusters.
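The central idea of data-parallel programming in HPJava is that a distributed array is carved into blocks owned by different processes, each of which works only on its local block. The Python sketch below shows just that block-decomposition arithmetic; it is not HPJava syntax, and the function name is made up for the example.

def block_ranges(n_elements: int, n_procs: int):
    """Split indices 0..n_elements-1 into contiguous blocks, one per process."""
    base, extra = divmod(n_elements, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)       # spread the remainder
        ranges.append((start, start + size))
        start += size
    return ranges

# e.g. a 10-element array over 3 processes -> [(0, 4), (4, 7), (7, 10)]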


Digital Theatre System

Digital Theatre System (digital cinema, or d-cinema) is perhaps the most significant challenge to the cinema industry since the introduction of sound on film. As with any new technology, there are those who want to do it fast, and those who want to do it right. Both points of view are useful. This new technology is set to completely replace the conventional theatre system of film projectors, reels of film, lower-quality pictures and analogue sound systems.
                Let's not forget the lesson learned with the introduction of digital audio for film in the '90s. Cinema Digital Sound, a division of Optical Radiation Corporation, was the first to put digital audio on 35mm film. Very, very few remember CDS, who closed their doors long ago. Such are the rewards for being first.


Bio chip

Most of us won’t like the idea of implanting a biochip in our body that identifies us uniquely and can be used to track our location. That would be a major loss of privacy. But there is a flip side to this! Such biochips could help agencies locate lost children, downed soldiers and wandering Alzheimer’s patients.
The human body is the next big target of chipmakers. It won’t be long before biochip implants come to the rescue of the sick or those who are handicapped in some way. A large amount of money and research has already gone into this area of technology.

In any case, such implants have already been experimented with. A few US companies are selling both the chips and their detectors. The chips are about the size of an uncooked grain of rice, small enough to be injected under the skin using a syringe needle. They respond to a signal from the detector, held just a few feet away, by transmitting an identification number. This number is then compared with database listings of registered pets.

Daniel Man, a plastic surgeon in private practice in Florida, holds the patent on a more powerful device: a chip that would enable lost humans to be tracked by satellite.  
BIOCHIP DEFINITION

A biochip is a collection of miniaturized test sites (microarrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher throughput and speed. Typically, a biochip’s surface area is no larger than a fingernail. Like a computer chip that can perform millions of mathematical operations in one second, a biochip can perform thousands of biological operations, such as decoding genes, in a few seconds.

A genetic biochip is designed to “freeze” into place the structures of many short strands of DNA (deoxyribonucleic acid), the basic chemical instruction that determines the characteristics of an organism. Effectively, it is used as a kind of “test tube” for real chemical samples.

A specifically designed microscope can determine where the sample hybridized with DNA strands in the biochip. Biochips helped to dramatically increase the speed of the identification of the estimated 80,000 genes in human DNA, in the world wide research collaboration known as the Human Genome Project. The microchip is described as a sort of “word search” function that can quickly sequence DNA.
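The “word search” analogy can be made concrete in a few lines of code: a probe is a short known sequence, and the chip effectively reports where that probe matches the sample. The Python sketch below illustrates only the matching idea, not the optical read-out of actual hybridization; the sequences are invented for the example.

def find_probe(sample: str, probe: str):
    """Return every position at which the probe sequence matches the sample."""
    hits = []
    for i in range(len(sample) - len(probe) + 1):
        if sample[i:i + len(probe)] == probe:
            hits.append(i)
    return hits

print(find_probe("ATCGGATTACAGATTACA", "GATTACA"))   # -> [4, 11]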
In addition to genetic applications, the biochip is being used in toxicological, protein, and biochemical research. Biochips can also be used to rapidly detect chemical agents used in biological warfare so that defensive measures can be taken.

Motorola, Hitachi, IBM and Texas Instruments have entered the biochip business.


Autonomic computing

"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead

This quote by the preeminent mathematician Alfred Whitehead holds both the lock and the key to the next era of computing. It implies a threshold moment, surpassed only after humans have been able to automate increasingly complex tasks in order to achieve forward momentum.
There is every reason to believe that we are at just such a threshold right now in computing. The millions of businesses, the billions of humans that compose them, and the trillions of devices that they will depend upon all require the services of the I/T industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled I/T workers to manage all of the systems. The high-tech industry has spent decades creating computer systems with ever-mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. It’s a problem that's not going away, but will grow exponentially, just as our dependence on technology has.
But as Mr. Whitehead so eloquently put it nearly a century ago, the solution may lie in automation, or creating a new capacity where important computing operations can run without the need for human intervention. On October 15th, 2001, Paul Horn, senior vice president of IBM Research, addressed the Agenda conference, an annual meeting of preeminent technological minds held in Arizona. In his speech, and in a document he distributed there, he suggested a solution: build computer systems that regulate themselves much in the same way our nervous system regulates and protects our bodies.

    This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete  autonomic systems do not yet exist. This is not a proprietary solution. It's a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.


Animatronics

The first use of Audio-Animatronics was for Walt Disney's Enchanted Tiki Room in Disneyland, which opened in June, 1963. The Tiki birds were operated using digital controls; that is, something that is either on or off. Tones were recorded onto tape, which on playback would cause a metal reed to vibrate. The vibrating reed would close a circuit and thus operate a relay. The relay sent a pulse of energy (electricity) to the figure's mechanism which would cause a pneumatic valve to operate, which resulted in the action, like the opening of a bird's beak. Each action (e.g., opening of the mouth) had a neutral position, otherwise known as the "natural resting position" (e.g., in the case of the Tiki bird it would be for the mouth to be closed). When there was no pulse of energy forthcoming, the action would be in, or return to, the natural resting position.
    This digital/tone-reed system used pneumatic valves exclusively--that is, everything was operated by air pressure. Audio-Animatronics' movements that were operated with this system had two limitations. First, the movement had to be simple--on or off. (e.g., The open and shut beak of a Tiki bird or the blink of an eye, as compared to the many different positions of raising and lowering an arm.) Second, the movements couldn't require much force or power. (e.g., The energy needed to open a Tiki Bird's beak could easily be obtained by using air pressure, but in the case of lifting an arm, the pneumatic system didn't provide enough power to accomplish the lift.) Walt and WED knew that this pneumatic system could not sufficiently handle the more complicated shows of the World's Fair. A new system was devised.
In addition to the digital programming of the Tiki show, the Fair shows required analog programming. This new "analog system" involved the use of voltage regulation. The tone would be on constantly throughout the show, and the voltage would be varied to create the movement of the figure. This "varied voltage" signal was sent to what was referred to as the "black box." The black boxes had the electronic equipment that would receive the signal and then activate the pneumatic and hydraulic valves that moved the performing figures. The use of hydraulics allowed for a substantial increase in power, which was needed for the more unwieldy and demanding movements. (Hydraulics were used exclusively with the analog system, and pneumatics were used only with the tone-reed/digital system.)

    There were two basic ways of programming a figure. The first used two different methods of controlling the voltage regulation. One was a joystick-like device called a transducer, and the other device was a potentiometer (an instrument for measuring an unknown voltage or potential difference by comparison to a standard voltage--like the volume control knob on a radio or television receiver). If this method was used, when a figure was ready to be programmed, each individual action--one at a time-- would be refined, rehearsed, and then recorded. For instance, the programmer, through the use of the potentiometer or transducer, would repeatedly rehearse the gesture of lifting the arm, until it was ready for a "take." This would not include finger movement or any other movements, it was simply the lifting of an arm. The take would then be recorded by laying down audible sound impulses (tones) onto a piece of 35 mm magnetic film stock. The action could then instantly be played back to see if it would work, or if it had to be redone. (The machines used for recording and playback were the 35 mm magnetic units used primarily in the dubbing process for motion pictures. Many additional units that were capable of just playback were also required for this process. Because of their limited function these playback units were called "dummies.")

    Eventually, there would be a number of actions for each figure, resulting in an equal number of reels of 35 mm magnetic film (e.g., ten actions would equal ten reels). All individual actions were then rerecorded onto a single reel--up to ten actions, each activated by a different tone, could be combined onto a single reel. For each action/reel, one dummy was required to play it back. Thus for ten actions, ten playback machines and one recording machine were required to combine the moves onto a new reel of 35 mm magnetic film.
"Sync marks" (synchronization points) were placed at the front end of each individual action reel and all of the dummies were interlocked. This way, during the rerecording, all of the actions would start at the proper time. As soon as it was finished, the new reel could be played back and the combined actions could be studied. Wathel, and often times Marc Davis (who did a lot of the programming and animation design for the Carousel show) would watch the figure go through the motions of the newly recorded multiple actions. If it was decided that the actions didn't work together, or something needed to be changed, the process was started over; either by rerecording the individual action, or by combining the multiple actions again. If the latter needed to be done, say the "arm lift action" came in too early, it would be accomplished by unlocking the dummy that had the "arm-lift reel" on it. The film would then be hand cranked, forward or back, a certain number of frames, which changed the start time of the arm lift in relation to the other actions. The dummies would be interlocked, and the actions, complete with new timing on the arm lift, would be recorded once again.


India Unveils World’s Cheapest $10 Laptop

The ‘world’s cheapest laptop’, developed in India, was unveiled by Union Minister for Human Resources Development Arjun Singh at the Tirupati temple on Tuesday evening.
The laptop, jointly developed by several organisations, such as the University Grants Commission, the Indian Institute of Technology-Madras, and the Indian Institute of Science, Bangalore, will be priced at around $10 to $20 (about Rs 500 to Rs 1,000), officials said. This laptop is expected to reach the market in about six months.
The project has already created a buzz in the laptop industry across the world.
The laptop has 2 GB of onboard memory with wireless Internet connectivity. To make it useful for students, especially in rural areas, the scientists have made it a low-power-consuming gadget.
The $10 laptop is being seen as India’s reply to One Laptop per Child’s XO and Intel’s Classmate. The XO, created by scientist Nicholas Negroponte and the MIT Media Lab, was originally targeted to cost only $100, but by the time it was ready to enter the market its cost had gone up to $188. The Classmate notebook PC from Intel was priced at $300 apiece.
In contrast, the Indian government’s effort to market a laptop at only $10 has caused a flutter in the international laptop market, and many players are curious to know the details of the costing and how the Indians managed to keep the cost so low.


Verizon Wireless Set To Rollout 4G In 2010

Verizon Wireless completed its first successful Long Term Evolution (LTE) Fourth Generation (4G) data call in Boston based on the 3GPP Release 8 standard; the company also announced that it had earlier completed the first LTE 4G data call based on the 3GPP Release 8 standard in Seattle. While Verizon previously disclosed its intentions to test the 4G standard in the two cities, the carrier had not provided details on the trials until now.
The tests involved streaming video, file uploads and downloads, and Web browsing. Interestingly, Verizon also said it placed voice calls using Voice over Internet Protocol (VoIP) technology to enable voice transmissions over the LTE 4G network, though the carrier has said in the past that it plans to keep most voice traffic on its existing CDMA 1x network.


IBM To Build Next Generation Chips Using DNA

In the future, DNA wouldn’t just control human evolution but also computing evolution, if IBM succeeds in using DNA in the development of next-generation microchips.
IBM scientists are using DNA scaffolding to build tiny circuit boards; this image shows high concentrations of triangular DNA origami binding to wide lines on a lithographically patterned surface; the inset shows individual origami structures at high resolution.
Scientists at IBM Research and the California Institute of Technology announced a scientific advancement that could be a major breakthrough in enabling the semiconductor industry to pack more power and speed into tiny computer chips, while making them more energy efficient and less expensive to manufacture.
Today, the semiconductor industry is faced with the challenges of developing lithographic technology for feature sizes smaller than 22 nm and exploring new classes of transistors that employ carbon nanotubes or silicon nanowires. IBM’s approach of using DNA molecules as scaffolding – where millions of carbon nanotubes could be deposited and self-assembled into precise patterns by sticking to the DNA molecules – may provide a way to reach sub-22 nm lithography.


Quantum Computing Closer To Reality

The ability to exploit the extraordinary properties of quantum mechanics in novel applications, such as a new generation of super-fast computers, seems to be coming closer following recent breakthroughs by an international team led by researchers from the University of New South Wales.
A colour-enhanced Scanning Electron Microscope image of a quantum dot
In the two breakthroughs, written up in the international journals Nano Letters and Applied Physics Letters, researchers have for the first time demonstrated two ways to deliberately place an electron in a nano-sized device on a silicon chip.
The achievements set the stage for the next crucial steps of being able to observe and then control the electron’s quantum state or “spin”, to create a quantum bit. Multiple quantum bits coupled together make up the processor of a quantum computer.


Cellonics Technology

For the last 60 years, the way radio receivers are designed and built has undergone amazingly little change. Much of the current approach can be attributed to E. H. Armstrong, the oft-credited father of FM, who invented the superheterodyne method in 1918. He further developed it into a complete FM commercial system in 1933 for use in public radio broadcasting. 

Today, more than 98% of receivers in radios, televisions and mobile phones use this method. The subsystem used in the superhet design consists of radio-frequency (RF) amplifiers, mixers, phase-lock loops, filters, oscillators and other components, which are all complex, noisy and power hungry. Capturing a communications element from the air to retrieve its modulated signal is not easy, and a system often needs to spend thousands of carrier cycles to recover just one bit of information. This process of demodulation is inefficient, and newly emerging schemes result in complex chips that are difficult and expensive to manufacture. So it was necessary to invent a new demodulation circuit which does the job of the conventional superheterodyne receiver but with a far lower component count, faster operation, lower power consumption and greater signal robustness. These requirements were met by designing a circuit which models biological cell behavior. The technology for this, named CELLONICS, was invented by scientists from the CWC (Center for Wireless Communication) and the Computational Science Department in Singapore. The Cellonics technology is a revolutionary and unconventional approach based on the theory of nonlinear dynamical systems and modeled after biological cell behavior. When used in the field of communication, the technology has the ability to encode, transmit and decode digital information powerfully over a variety of physical channels, be they cables or wireless links through the air.
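The biological analogy is that a cell sits quietly until it is stimulated and then emits a short pulse. The Python sketch below expresses only that analogy as a toy leaky integrate-and-fire element; it is not the actual Cellonics nonlinear circuit, and the threshold and leak values are invented for the example.

def integrate_and_fire(samples, threshold=1.0, leak=0.9):
    """Toy cell-like pulse generator: accumulate input and fire on threshold."""
    level, pulses = 0.0, []
    for x in samples:
        level = level * leak + x          # leaky accumulation of the input
        if level >= threshold:
            pulses.append(1)              # emit a pulse
            level = 0.0                   # reset, like a cell recovering
        else:
            pulses.append(0)
    return pulses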


xMax

xMax, developed by xG Technology in Florida, is a wireless communications technology that its developers claim is low power and provides a high data rate over a distance of about 13 miles.
It represents a fundamental paradigm shift in the way radio signals are modulated and demodulated: rather than transmitting many RF cycles for each bit of data to be sent, xMax does it in a single RF cycle.
Power is saved not only in the transmission; because receivers will only recognize single-cycle waveforms, power isn't wasted on unintended RF signals.


Automated Eye-Pattern Recognition Systems

Privacy of personal data is an illusion in today’s complex society. With only passwords or Social Security Numbers as identity or security measures, everyone is vulnerable to invasion of privacy or breaches of security. Traditional means of identification are easily compromised, and anyone can use this information to assume another’s identity. Sensitive personal and corporate information can be accessed and even criminal activities can be performed under another name. An eye-pattern recognition system provides a barrier to, and virtually eliminates, fraudulent authentication, protects identity privacy and safety, and controls privileged access or authorised entry to sensitive sites, data or material. 

In addition to privacy protection there are a myriad of applications where iris recognition technology can provide protection and security. This technology offers the potential to unlock major business opportunities by providing high-confidence customer validation. Unlike other measurable human features in the face, hand, voice or fingerprint, the patterns in the iris do not change over time, and research shows the matching accuracy of iris recognition systems is greater than that of DNA testing. Positive identifications can be made through glasses, contact lenses and most sunglasses. Automated recognition of people by the pattern of their eyes offers major advantages over conventional identification techniques. Iris recognition systems also require very little co-operation from the subject, operate at a comfortable distance and are virtually impossible to deceive. Iris recognition combines research in computer vision, pattern recognition and the man-machine interface. The purpose is real-time, high-confidence recognition of a person’s identity by mathematical analysis of the random patterns that are visible within the iris. Since the iris is a protected internal organ whose random texture is stable throughout life, it can serve as a ‘living password’ that one need not remember but always carries.
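In the approach introduced by John Daugman, that mathematical analysis boils down to encoding the iris texture as a binary "iris code" and comparing two codes by their fractional Hamming distance. The Python sketch below shows only the comparison step, with tiny 8-bit codes and a decision threshold chosen purely for illustration (published systems use codes of roughly 2,048 bits).

def hamming_distance(code_a: int, code_b: int, n_bits: int) -> float:
    """Fraction of bits that differ between two binary iris codes."""
    return bin((code_a ^ code_b) & ((1 << n_bits) - 1)).count("1") / n_bits

# Declare a match when the distance falls below a chosen threshold.
same_eye = hamming_distance(0b10110110, 0b10110010, n_bits=8) < 0.3
print(same_eye)   # -> True (only 1 of 8 bits differs)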


Micromechanical System For System-On-Chip Connectivity

When micromechanical systems are combined with microelectronics, photonics or wireless capabilities, a new generation of microsystems can be developed which offers far-reaching gains in space, accuracy, precision and so forth. Micromechanical systems (MEMS) technology can be used to fabricate both application-specific devices and the associated micro-packaging systems that allow for the integration of devices or circuits, made with non-compatible technologies, into a System-on-Chip environment. MEMS technology can be used for permanent, semi-permanent or temporary interconnection of sub-modules in a System-on-Chip implementation. The interconnection of devices using MEMS technology is described with the help of a hearing-instrument application and the related micropackaging.
MEMS technology offers a wide range of applications in fields like biomedicine, aerodynamics, thermodynamics, telecommunications and so forth. MEMS technology can be used to fabricate both application-specific devices and the associated micropackaging system that will allow for the integration of devices or circuits, made with non-compatible technologies, into a SoC environment.


Virtual Surgery

Rapid change is under way on several fronts in medicine and surgery. Advances in computing power have enabled continued growth in virtual reality, visualization, and simulation technologies. The ideal learning opportunities afforded by simulated and virtual environments have prompted their exploration as learning modalities for surgical education and training. Ongoing improvements in this technology suggest an important future role for virtual reality and simulation in medicine.
Medical virtual reality has come a long way in the past 10 years as a result of advances in computer imaging, software, hardware and display devices. Commercialisation of VR systems will depend on proving that they are cost effective and can improve the quality of care. One of the current limitations of VR implementation is shortcomings in the realism of the simulations. The main impediment to realistic simulators is the cost and processing power of available hardware. Another factor hindering the progress and acceptability of VR applications is the need to improve human-computer interfaces, which can involve the use of heavy head-mounted displays or bulky VR gloves that impede movement. There is also the problem of time delays in the simulator’s response to the user’s movements. Conflicts between sensory information can result in simulator sickness, which includes side effects such as eyestrain, nausea, loss of balance and disorientation. Commercialisation of VR systems must also address certain legal and regulatory issues.


Telemedicine

The increasing awareness about health and the development of advanced telecommunication means and information technology have given rise to telemedicine, which allows healthcare when patient and doctor are long distances away from each other. This procedure allows for maximum utilization of limited resources.
The increasing population of the world and the lack of a sufficient number of doctors and hospitals are the main impediments in providing basic health care to the majority of the world’s population. It is not always possible, nor cost effective, to have medical expertise available when and where it is needed. This is true for complex and critical cases requiring a medical specialist, and also for health education and routine consultations.
A distant emergency department can get immediate assistance from an orthopedic surgeon, a service that is not always easy to provide in remote areas.


Teleportation

Teleportation is the name given by science fiction writers to the feat of making an object or person disintegrate in one place while an exact replica appears somewhere else. How this is accomplished is usually not explained in detail, but the general idea seems to be that the original object is scanned in such a way as to extract all the information from it; this information is then transmitted to the receiving location and used to construct the replica, not necessarily from the actual material of the original, but perhaps from atoms of the same kinds, arranged in exactly the same pattern as the original.
A teleportation machine would be like a fax machine, except that it would work on 3-dimensional objects as well as documents, it would produce an exact copy rather than an approximate facsimile, and it would destroy the original in the process of scanning it.
A few science fiction writers consider teleporters that preserve the original, and the plot gets complicated when the original and teleported versions of the same person meet; but the more common kind of teleporter destroys the original, functioning as a super transportation device, not as a perfect replicator of souls and bodies.


E-Paper Technology

Electronic ink is not intended to diminish or do away with traditional displays. Instead, electronic ink will initially co-exist with traditional paper and other display technologies. In the long run, electronic ink may have a multibillion-dollar impact on the publishing industry.
Ultimately, electronic ink will permit almost any surface to become a display, bringing information out of the confines of traditional devices and into the world around us. Electronic ink is a pioneering invention that combines all the desired features of a modern electronic display with the sheer convenience and physical versatility of a sheet of paper.
E-paper, or electronic paper, is sometimes called radio paper or smart paper, and its many applications include making the next generation of paper. Paper would be perfect except for one obvious thing: printed words can’t change. The effort is to create a dynamic high-resolution electronic display that’s thin and flexible enough to become the next generation of paper.


Embedded Systems In Automobiles

Most of the microprocessors in the world are not in PCs; they are embedded in devices ranging from traffic control for highways, airspace, railway tracks and shipping lanes to manufacturing systems with robots.
An embedded system is any device controlled by instructions stored on a chip. These devices are usually controlled by a microprocessor that executes the instructions stored on a Read Only Memory (ROM) chip.
Embedded systems can be used to implement features ranging from the way pacemakers operate, and mobile phones that can be worn as jewellery, to Adaptive Cruise Control (ACC). In this seminar I will explain one such technology that uses embedded systems: Adaptive Cruise Control.
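In broad strokes, an ACC controller holds the driver's set speed until the gap to the vehicle ahead shrinks below a safe distance, then slows the car to restore the gap. The Python sketch below is a deliberately simplified rule of that form; the gains, units and interface are invented for illustration and bear no resemblance to a production controller.

def acc_command(gap_m, desired_gap_m, set_speed, current_speed,
                k_gap=0.5, k_speed=0.1):
    """Toy ACC rule: brake in proportion to the gap deficit, otherwise cruise."""
    if gap_m < desired_gap_m:
        return -k_gap * (desired_gap_m - gap_m)      # negative = slow down
    return k_speed * (set_speed - current_speed)     # track the set speed

print(acc_command(gap_m=20, desired_gap_m=30, set_speed=100, current_speed=100))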
The easy availability of good design tools (many of them in the freeware domain) and of software engineers have been two key factors fueling the growth of embedded systems. All the application areas mentioned in the seminar are just tiny drops in the big ocean of embedded systems technology. So wait with bated breath for the fireworks to come. They are sure to blow our minds.


Molecular Electronics

The development of molecular electronics, the foundation for nanomedicine, would not have been possible without the availability of advanced computers; however, the availability of advanced computers seems to depend on the development of molecular electronics. Molecular electronics seems to aid in its own growth.
These advances are not limited to computing either. The more scientists learn about nanotechnology, the more they learn about engineering, chemistry, physics, mathematics, and so on. In September 2002, researchers at Hewlett-Packard created the highest-density electronically addressable memory to date. The 64-bit memory uses molecular switches and has an area of less than one square micron.
The device has a bit density more than 10 times that of current silicon memory chips. Extreme optimists have gone on to propose that nanotechnology will bring the end of war, the end of world hunger, the end of disease, and ultimately the end of death. It is hard to discern the difference between science fiction and science fact at the moment. But one thing is sure: molecular electronics is a highly promising new field that will be investigated with vigor for many years to come.


Carbon Nano Tubes

Carbon nanotubes form a promising tool in the emerging field of nanotechnology and its continuing development. The importance lies in the fact that the whole development starts from carbon, which forms the basic unit. The future lies in continuously improving the required properties of materials by artificially structuring them on the nanometer scale and developing processing methods for their manufacture. This topic thus presents a smooth and effective blending of electronics and chemistry. The day is not far off when we could make supercomputers no larger than a human cell or a spacecraft no more expensive than a family car.

The drive towards miniaturization and sophistication of electrical devices has been progressing at a relentless pace. Silicon-based microelectronic devices have played an integral role in steering this revolution during the latter part of the last century. Gordon Moore’s observation in 1965 of doubled computing capacity in every new silicon chip produced within 18-24 months of the previous one is testimony to the ascendancy of silicon devices in the microelectronics race. Nanotechnology relates to the creation of devices, structures and systems whose size ranges from 1 to 100 nanometers (nm). These creations also exhibit novel physical, chemical or biological properties because of their nanoscale size. To place their size in context, 1 nm is tens of thousands of times smaller than the thickness of one strand of human hair.


Tuesday, 4 May 2010

BLUE EYES

We communicate with others using visual, audio and sensory information (touch, smell, etc.). This is possible only because the human brain is highly skilled in integrating and interpreting such data.
What if a computer could do the same? If computers could understand what we feel and act accordingly, the possibilities are endless! Blue Eyes is a technology being developed by the IBM research center at Almaden to make such "smart" computers.

In this paper, the basic concepts of Blue Eyes are discussed. The motivation and the benefits of Blue Eyes are also mentioned in this context. The concepts and design of the software included in Blue Eyes are also covered in detail.
At IBM's lab, researchers are tackling the lofty goal of designing 'Blue Eyes', its aim being to create devices with embedded technology that gathers your personal information. They'll track your pulse, breathing rate, and eye movements, and then react to those physical triggers by performing tasks for you. Following the movements of your eyes, the "gaze-tracking" technology uses MAGIC to control your mouse. With MAGIC, the cursor follows your eyes as you look around the screen.
When your eye stops on an object, you click the mouse to select it. Current versions of gaze-tracking technology only come within an inch or so of their target.
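MAGIC (Manual And Gaze Input Cascaded) pointing works around that inch-or-so inaccuracy by letting the gaze make the coarse jump while the hand does the fine positioning. The Python sketch below captures only that decision rule; the pixel threshold and the function itself are invented for illustration.

def magic_pointer(cursor, gaze, warp_threshold=200):
    """Warp the cursor to the gaze point only after a large eye movement."""
    dx, dy = gaze[0] - cursor[0], gaze[1] - cursor[1]
    if (dx * dx + dy * dy) ** 0.5 > warp_threshold:
        return gaze       # big saccade: jump the cursor near the new target
    return cursor         # small offset: leave fine positioning to the mouse

print(magic_pointer(cursor=(100, 100), gaze=(800, 600)))   # -> (800, 600)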


Light Pen


A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with the computer's CRT monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. A light pen can work with any CRT-based monitor, but not with LCD screens, projectors or other display devices.




A light pen is fairly simple to implement. The light pen works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X,Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen.
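That "note where the scanning has reached" step is just arithmetic on the latched scan position. The Python sketch below assumes, purely for illustration, a single pixel counter and a known number of pixels per scan line; real hardware typically latches separate horizontal and vertical counters, but the idea is the same.

def pen_position(scan_counter: int, pixels_per_line: int):
    """Convert the raster counter latched when the pen sees the beam into (x, y)."""
    y, x = divmod(scan_counter, pixels_per_line)
    return x, y

print(pen_position(scan_counter=641 * 100 + 37, pixels_per_line=641))   # -> (37, 100)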

The light pen became moderately popular during the early 1980s. It was notable for its use in the Fairlight CMI and the BBC Micro. However, because the user was required to hold his or her arm in front of the screen for long periods of time, the light pen fell out of use as a general-purpose input device.


Holographic Versatile Disc

Holographic Versatile Disc (HVD) is an optical disc technology still in the research stage which would greatly increase storage over Blu-ray and HD DVD optical disc systems. It employs a technique known as collinear holography, whereby two lasers, one red and one blue-green, are collimated in a single beam. The blue-green laser reads data encoded as laser interference fringes from a holographic layer near the top of the disc while the red laser is used as the reference beam and to read servo information from a regular CD-style aluminium layer near the bottom. ...


3D-DOCTOR

3D-DOCTOR is an advanced 3D imaging software package for researchers doing medical (MRI, CT, microscopy), engineering, scientific, and industrial 3D imaging applications. 3D-DOCTOR is US FDA-approved for medical imaging applications. It creates 3D surface models and volume renderings from 2D cross-section images (in DICOM, BMP, RAW, TIFF, JPEG, PNG, GIF or other formats) and exports 3D polygon models to DXF, 3DS, STL, IGES, VRML, XYZ, and other formats. 3D-DOCTOR's vector-based tools support easy image data handling, measurement, and quantitative analysis. 
If you are a power user, the 3D BASIC scripting language will let you create your own sophisticated Basic-like programs that use 3D-DOCTOR's advanced imaging and rendering functions. Developed by Able Software Corporation, 3D-DOCTOR performs 3D image segmentation, 3D surface modeling, rendering, volume rendering, 3D image processing, deconvolution, registration, automatic alignment, measurements, and many other functions....


IDMA - Future of Wireless Technology

Direct-sequence code-division multiple access (DS-CDMA) has been adopted in second- and third-generation cellular mobile standards. Users are separated in a CDMA system by the use of a different signature for each user. In a CDMA system, many users share the transmission medium, so signals from different users are superimposed, causing interference. This report outlines a multiple access scheme in which interleaving is the only means of user separation. 


It is a special form of CDMA; it inherits many advantages of CDMA such as dynamic channel sharing, mitigation of cross-cell interference, asynchronous transmission, ease of cell planning and robustness against fading. A low-cost interference cancellation technique is also available for systems with a large number of users in multipath channels. The normalized cost (per user) of this algorithm is independent of the number of users. Furthermore, such low-complexity and high-performance attributes can be maintained in a multipath environment. The detection algorithm for IDMA requires less complexity than that of CDMA. The performance is surprisingly good despite its simplicity.
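The defining trick is that every user scrambles its chip stream with its own interleaving pattern, and the receiver applies the inverse pattern to pick that user back out of the superimposed signal. The Python sketch below shows only this interleave/deinterleave round trip; the seed-per-user scheme is invented for illustration, and real IDMA interleavers are designed more carefully.

import random

def user_interleaver(n_chips: int, user_id: int):
    """Each user gets its own pseudo-random pattern; the pattern is the signature."""
    rng = random.Random(user_id)              # user-specific seed
    pattern = list(range(n_chips))
    rng.shuffle(pattern)
    return pattern

def interleave(chips, pattern):
    out = [0] * len(chips)
    for i, p in enumerate(pattern):
        out[p] = chips[i]                     # scatter chips into new positions
    return out

def deinterleave(chips, pattern):
    return [chips[p] for p in pattern]        # gather them back in order

bits = [1, 0, 1, 1, 0, 0, 1, 0]
pat = user_interleaver(len(bits), user_id=3)
assert deinterleave(interleave(bits, pat), pat) == bits   # round trip is lossless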


Night Vision Technology

Night vision technology was developed by the US defense department mainly for defense purposes, but with the development of technology, night vision devices are now being used in day-to-day life.

Night vision can work in two different ways depending on the technology used.
1. Image enhancement: This works by collecting the tiny amounts of light, including the lower portion of the infrared light spectrum, that are present but may be imperceptible to our eyes, and amplifying them to the point that we can easily observe the image.
2. Thermal imaging: This technology operates by capturing the upper portion of the infrared light spectrum, which is emitted as heat by objects instead of simply reflected as light. Hotter objects, such as warm bodies, emit more of this light than cooler objects like trees or buildings.
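At its very simplest, image enhancement is gain plus a tone curve: multiply the faint pixel values up and lift the dark end so detail becomes visible. The Python sketch below is only that toy version of the idea; real image intensifier tubes amplify photons electronically rather than in software, and the gain and gamma values here are arbitrary.

def amplify(pixels, gain=8.0, gamma=0.5, max_val=255):
    """Toy enhancement step: apply gain, clip at white, then lift dark tones."""
    out = []
    for p in pixels:
        v = min(p * gain, max_val) / max_val        # amplify and normalise
        out.append(int((v ** gamma) * max_val))     # gamma lift for dark areas
    return out

print(amplify([2, 10, 25]))   # faint values become clearly visible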


High Speed Data In Mobile Networks

Currently, almost all network operators worldwide are upgrading their GSM networks in order to provide high speed mobile data to their subscribers. The ever increasing growth rate of data applications such as e-mail and the internet is confronting mobile network operators worldwide with the challenge to upgrade their networks to high bandwidth capable "bit pipes" in order to provide for all kinds of mobile data applications. High speed mobile data will combine two of today's most rapidly growing technologies, mobility and the internet.
GPRS (General Packet Radio Service), EDGE (Enhanced Data rates for Global Evolution) and HSCSD (High Speed Circuit Switched Data) have been designed primarily as upgrades to the well known and widely used GSM standard. In the 1980s and early 1990s, when the GSM system was designed and standardized, data transmission capabilities were of minor importance compared to voice. Besides, at that time, the maximum transmission speed of 9.6 kbit/s that GSM offered, appeared to be sufficient and was comparable with analog wireline modems.

Starting with HSCSD, the first high speed mobile data upgrade to be standardized, higher rates of transmission can be provided to mobile customers. EDGE has a transmission speed of up to 384 kbit/s and GPRS is able to support up to 160 kbit/s.


MPEG 7

As more and more audiovisual information becomes available from many sources around the world, many people would like to use this information for various purposes. This challenging situation led to the need for a solution that quickly and efficiently searches for and/or filters various types of multimedia material that’s interesting to the user.


For example, finding information by rich-spoken queries, hand-drawn images, and humming improves the user-friendliness of computer systems and finally addresses what most people have been expecting from computers. For professionals, a new generation of applications will enable high-quality information search and retrieval. For example, TV program producers can search with “laser-like precision” for occurrences of famous events or references to certain people, stored in thousands of hours of audiovisual records, in order to collect material for a program. This will reduce program production time and increase the quality of its content.
MPEG-7 is a multimedia content description standard that addresses how humans expect to interact with computer systems, since it develops rich descriptions that reflect those expectations.


Future Satellite Communication

The beginning of the new millennium sees an important milestone in military aviation communication with the introduction of the first Super-High-Frequency (SHF) airborne satellite communication (satcom) terminals, which are due to enter service on Nimrod maritime reconnaissance aircraft (MRA4). Satcom terminals using the Ultra-High-Frequency (UHF) band have been fitted to larger aircraft for a number of years. Although relatively simple to install and comparatively inexpensive, UHF satcoms (240-270 & 290-320 MHz bands) suffer from very limited capacity (a few 25 kHz channels per satellite) and are prone to multipath and unintentional interference due to their poor antenna selectivity. SHF satcoms (7.25-7.75 & 7.9-8.4 GHz) offer significantly increased bandwidths (hundreds of MHz) for high data rates or increased use of spread-spectrum techniques, together with localised coverage and adaptive antenna techniques for nulling unwanted signals or interference.



For airborne platforms, the advantages of SHF satcoms come at the expense of a significant additional burden in terms of antenna siting and pointing, particularly for smaller, highly agile aircraft. The antenna should be large enough to support the desired data rate and to provide enough directivity to minimise interference with adjacent satellites and avoid detection by hostile forces. Another feature of satcoms unique to aircraft is the effect of unwanted modulation from moving parts such as helicopter rotor blades, propellers and jet engines.

This paper gives an overview of development of airborne SHF and also Extremely-High-Frequency (EHF) satcom techniques, and terminal demonstrators by DERA (Defence Evaluation and Research Agency). This research is aimed at providing affordable, secure and robust satcoms to a range of military aircraft, supporting ground attack and reconnaissance roles to surveillance, transport and tanker aircraft.


CHAMELEON CHIP

It is a reconfigurable processor which provides a design environment that allows customers to convert their algorithms into hardware configurations on the fly. Chameleon Systems Inc of San Jose, California, is one of the new breed of reconfigurable processor makers.
The Chameleon chip is the industry’s first Reconfigurable Communications Processor (RCP).

Advantages
------------
1. Early and fast designs
2. Enabling Field upgrades
3. Creating product differentiation for suppliers
4. Creating flexible & adaptive products
5. Reducing power
6. Reducing manufacturing costs
7. Increasing bandwidths

Disadvantages
---------------
1. Inertia – engineers are slow to change
2. RCP designs require a comprehensive set of tools
3. “Learning curve” for designers unfamiliar with reconfigurable logic

Applications
--------------
1. Wireless base stations
2. Packetized voice (VoIP)
3. Digital Subscriber Line (DSL)
4. Software Defined Radio (SDR)


Sunday, 2 May 2010

ISI

Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.

                                                                                             

Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time.

If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation. Researchers and software companies have set high hopes on so called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals.

The average person will have many alter egos in effect, digital proxies-- operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries--extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.

Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.
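Those three properties (a goal it strives for, adaptation from experience, and autonomous sensing and acting) can be framed as a sense-decide-act loop. The Python sketch below is a minimal, generic agent skeleton of that kind, not any particular agent system; the policy and the example goal are invented for illustration.

class Agent:
    """Minimal goal-directed agent: senses, decides, acts, and remembers."""
    def __init__(self, goal):
        self.goal = goal
        self.memory = []                       # experience it could adapt from

    def step(self, observation):
        action = self.decide(observation)
        self.memory.append((observation, action))
        return action

    def decide(self, observation):
        # Placeholder policy: act only on observations relevant to the goal.
        return "notify_user" if self.goal in observation else "wait"

agent = Agent(goal="price drop")
print(agent.step("price drop on a monitored item"))   # -> notify_user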


GRID COMPUTING

The term "the Grid" was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering. Considerable progress has since been made on the construction of such an infrastructure, but the term Grid has also been conflated, at least in popular perception, to embrace everything from advanced networking to artificial intelligence. One might wonder whether the term has any real substance and meaning. Is there really a distinct Grid problem, and hence a need for new Grid technologies? If so, what is the nature of these technologies, and what is their domain of applicability? While numerous groups have an interest in Grid concepts and share, to a significant extent, a common vision of Grid architecture, we do not see consensus on the answers to these questions.


The Grid concept is indeed motivated by a real and specific problem, and there is an emerging, well-defined Grid technology base that addresses significant aspects of this problem. In the process, we develop a detailed architecture and roadmap for current and future Grid technologies. Furthermore, while Grid technologies are currently distinct from other major technology trends, such as Internet, enterprise, distributed, and peer-to-peer computing, these other trends can benefit significantly from growing into the problem space addressed by Grid technologies.

The real and specific problem that underlies the Grid concept is coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. The sharing that we are concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources, as is required by a range of collaborative problem-solving and resource-brokering strategies emerging in industry, science, and engineering. This sharing is, necessarily, highly controlled, with resource providers and consumers defining clearly and carefully just what is shared, who is allowed to share, and the conditions under which sharing occurs. A set of individuals and/or institutions defined by such sharing rules forms what we call a virtual organization (VO).

The following are examples of VOs: the application service providers, storage service providers, cycle providers, and consultants engaged by a car manufacturer to perform scenario evaluation during planning for a new factory; members of an industrial consortium bidding on a new aircraft; a crisis management team and the databases and simulation systems that they use to plan a response to an emergency situation; and members of a large, international, multiyear high-energy physics collaboration. Each of these examples represents an approach to computing and problem solving based on collaboration in computation-and data-rich environments.

As these examples show, VOs vary tremendously in their purpose, scope, size, duration, structure, community, and sociology. Nevertheless, careful study of underlying technology requirements leads us to identify a broad set of common concerns and requirements. In particular, we see a need for highly flexible sharing relationships, ranging from client-server to peer-to-peer; for sophisticated and precise levels of control over how shared resources are used, including fine-grained and multi-stakeholder access control, delegation, and application of local and global policies; for sharing of varied resources, ranging from programs, files, and data to computers, sensors, and networks; and for diverse usage modes, ranging from single user to multi-user and from performance sensitive to cost-sensitive and hence embracing issues of quality of service, scheduling, co-allocation, and accounting.

Current distributed computing technologies do not address the concerns and requirements just listed. For example, current Internet technologies address communication and information exchange among computers but do not provide integrated approaches to the coordinated use of resources at multiple sites for computation. The Open Group's Distributed Computing Environment (DCE) supports secure resource sharing across sites, but most VOs would find it too burdensome and inflexible. Storage service providers (SSPs) and application service providers (ASPs) allow organizations to outsource storage and computing requirements to other parties, but only in constrained ways: for example, SSP resources are typically linked to a customer via a virtual private network (VPN). Emerging distributed computing companies seek to harness idle computers on an international scale but, to date, support only highly centralized access to those resources. In summary, current technology either does not accommodate the range of resource types or does not provide the flexibility and control over sharing relationships needed to establish VOs.

It is here that Grid technologies enter the picture. Over the past five years, research and development efforts within the Grid community have produced protocols, services, and tools that address precisely the challenges that arise when we seek to build scalable VOs. These technologies include security solutions that support management of credentials and policies when computations span multiple institutions; resource management protocols and services that support secure remote access to computing and data resources and the co-allocation of multiple resources; information query protocols and services that provide configuration and status information about resources, organizations, and services; and data management services that locate and transport datasets between storage systems and applications. Because of their focus on dynamic, cross-organizational sharing, Grid technologies complement rather than compete with existing distributed computing technologies. For example, enterprise distributed computing systems can use Grid technologies to achieve resource sharing across institutional boundaries; in the ASP/SSP space, Grid technologies can be used to establish dynamic markets for computing and storage resources, hence overcoming the limitations of current static configurations.
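As a rough, hypothetical illustration of the flow these protocols mediate, the Java sketch below shows a client presenting a credential issued by its home institution to a resource broker, which checks both VO membership and a local usage policy before granting an allocation. None of the class names correspond to a real Grid toolkit; they only make the idea of highly controlled, cross-organizational sharing concrete.

```java
// Hypothetical sketch of policy-controlled resource sharing inside a virtual organization.
// None of these classes belong to a real Grid toolkit; they only illustrate the flow.
import java.util.Set;

record Credential(String user, String issuingOrganization) {}
record ResourceRequest(String resourceType, int units, String purpose) {}
record Allocation(String resourceId, int unitsGranted) {}

class ResourceBroker {
    private final Set<String> memberOrganizations;   // who belongs to this VO
    private final int maxUnitsPerRequest;            // local usage policy

    ResourceBroker(Set<String> memberOrganizations, int maxUnitsPerRequest) {
        this.memberOrganizations = memberOrganizations;
        this.maxUnitsPerRequest = maxUnitsPerRequest;
    }

    /** Grant access only if both the VO-level rule and the local policy allow it. */
    Allocation allocate(Credential credential, ResourceRequest request) {
        if (!memberOrganizations.contains(credential.issuingOrganization())) {
            throw new SecurityException("credential issuer is not part of this VO");
        }
        if (request.units() > maxUnitsPerRequest) {
            throw new IllegalArgumentException("request exceeds local usage policy");
        }
        // A real broker would also handle scheduling, co-allocation and accounting here.
        return new Allocation("cluster-42/" + request.resourceType(), request.units());
    }

    public static void main(String[] args) {
        ResourceBroker broker = new ResourceBroker(Set.of("uni-a.example", "lab-b.example"), 64);
        Credential cred = new Credential("alice", "uni-a.example");
        Allocation a = broker.allocate(cred, new ResourceRequest("cpu-hours", 32, "crash simulation"));
        System.out.println("granted " + a.unitsGranted() + " units on " + a.resourceId());
    }
}
```

Real Grid middleware layers credential delegation, scheduling and accounting on top of this basic pattern rather than hard-coding a single policy check.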

It is our belief that VOs have the potential to change dramatically the way we use computers to solve problems, much as the web has changed how we exchange information. As the examples presented here illustrate, the need to engage in collaborative processes is fundamental to many diverse disciplines and activities: it is not limited to science, engineering and business activities. It is because of this broad applicability of VO concepts that Grid technology is important.

Read more »

SMART QUILL

Lyndsay Williams of Microsoft Research's Cambridge, UK lab is the inventor of SmartQuill, a pen that can remember the words it is used to write and then transform them into computer text. The idea that "it would be neat to put all of a handheld-PDA type computer in a pen" came to the inventor in her sleep. "It's the pen for the new millennium," she says. Encouraged by Nigel Ballard, a leading consultant to the mobile computer industry, Williams took her prototype to the British Telecommunications Research Lab, where she was promptly hired and given money and institutional support for her project. The prototype, called SmartQuill, has been developed by world-leading research laboratories run by BT (formerly British Telecom) at Martlesham, eastern England. It is claimed to be the biggest revolution in handwriting since the invention of the pen.


The sleek and stylish prototype pen is different from other electronic pens on the market today in that users don't have to write on a special pad in order to record what they write. Users can write on any surface, such as paper, a tablet, a screen or even in the air. The SmartQuill isn't all space-age, though: it contains an ink cartridge so that users can see what they write down on paper. SmartQuill contains sensors that record movement by sensing the earth's gravity, irrespective of the surface used, and the pen records the information entered by the user. Your words of wisdom can also be uploaded to your PC through the "digital inkwell", while files that you might want to view on the pen are downloaded to SmartQuill as well.
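The published descriptions do not spell out SmartQuill's algorithms, but the underlying idea of recovering pen orientation from gravity can be sketched with a textbook tilt calculation from a 3-axis accelerometer reading. The code below is a generic illustration under that assumption, not SmartQuill's firmware.

```java
// Generic tilt estimation from a 3-axis accelerometer reading (units: g).
// This is a textbook calculation, not SmartQuill's actual algorithm.
final class TiltEstimator {

    /** Pitch angle in degrees: rotation of the pen about its lateral axis. */
    static double pitchDegrees(double ax, double ay, double az) {
        return Math.toDegrees(Math.atan2(-ax, Math.sqrt(ay * ay + az * az)));
    }

    /** Roll angle in degrees: rotation of the pen about its long axis. */
    static double rollDegrees(double ax, double ay, double az) {
        return Math.toDegrees(Math.atan2(ay, az));
    }

    public static void main(String[] args) {
        // Example reading: pen held at roughly 45 degrees to the horizontal.
        double ax = -0.71, ay = 0.0, az = 0.70;
        System.out.printf("pitch = %.1f deg, roll = %.1f deg%n",
                pitchDegrees(ax, ay, az), rollDegrees(ax, ay, az));
    }
}
```

Tracking how these angles change over time yields a stroke trajectory that handwriting-recognition software can then turn into text.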

It is an interesting idea, and it even comes with one attribute that makes the entire history of pens pale by comparison: if someone else picks up your SmartQuill and tries to write with it, it won't work, because the user can train the pen to recognize a particular handwriting. SmartQuill therefore recognizes only the owner's handwriting. SmartQuill is a computer housed within a pen which allows you to do what a normal personal organizer does. It is truly mobile because of its small size and one-handed use. People could use the pen in the office to replace a keyboard, but the main attraction will be for users who usually take notes by hand on the road and type them up when returning to the office; SmartQuill will let them skip the step of typing up their notes.

Read more »

SYMBIAN OS

Just as PCs have an operating system like Windows, Symbian is the OS for mobile phones. Unlike PC design, however, the mobile phone puts constraints on a suitable OS: it has to have a low memory footprint, low dynamic memory usage, an efficient power management framework and real-time support for communication and telephony protocols. Symbian OS is designed for the mobile phone environment. It addresses the constraints of mobile phones by providing a framework to handle low-memory situations, power management, and a rich software layer implementing industry standards for communication, telephony and data rendering.
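Symbian's native C++ APIs handle allocation failure with their own leave/trap mechanism; purely as an illustration of the discipline such a low-memory framework enforces, the hedged Java sketch below checks remaining heap and degrades gracefully instead of crashing when memory is scarce. The class and method names are invented for the example.

```java
// Illustration of the "handle low memory gracefully" discipline a phone OS imposes.
// Generic Java, not a Symbian API; ThumbnailCache and allocateIfSafe are hypothetical.
final class ThumbnailCache {

    /** Allocate an image buffer only if enough heap is likely to remain afterwards. */
    static byte[] allocateIfSafe(int bytesNeeded) {
        Runtime rt = Runtime.getRuntime();
        long available = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
        if (available - bytesNeeded < 256 * 1024) {   // keep a safety margin
            return null;                              // caller falls back to a smaller image
        }
        try {
            return new byte[bytesNeeded];
        } catch (OutOfMemoryError e) {
            return null;                              // degrade rather than crash
        }
    }

    public static void main(String[] args) {
        byte[] fullSize = allocateIfSafe(4 * 1024 * 1024);
        System.out.println(fullSize != null ? "full-size thumbnail" : "falling back to low-res");
    }
}
```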

Symbian OS is designed for the specific requirements of open, advanced, data-enabled 2G and 3G mobile phones. Compact enough to fit in the memory of a mobile phone, Symbian OS was planned from the beginning to be a full operating system in terms of functionality. Symbian OS is already available in the Ericsson R380 smartphone, the Nokia 9200 Communicator series, the Nokia 7650 and the Sony Ericsson P800.

Key features of Symbian OS are:

• Rich suite of application engines – including contacts, schedules, messaging, browsing, office, utility and system control

• Browsing – a fit-for-purpose browsing engine for full web browser support, plus a WAP stack for mobile browsing
• Messaging – multimedia messaging using MMS, picture messaging with EMS and text messaging using SMS

• Multimedia – shared access to the screen, keypad, fonts and bitmaps; audio recording and playback, and image-related functionality (support for all common audio and image formats)

• Communication protocols – wide-area networking stacks including TCP, IPv4 and IPv6, and personal-area networking stacks including Bluetooth and USB

• Software development – three main programming and content development options: C++, Java and WAP.

Read more »

Cellular Neural Network

The Cellular Neural Network (CNN) is a revolutionary concept and an experimentally proven new computing paradigm for analog computers. Looking at the technological advances of the last 50 years, the first revolution led to the PC industry in the 1980s and the second to the Internet industry in the 1990s; the third revolution, bringing cheap sensors and MEMS arrays in the form of artificial eyes, noses, ears and so on, owes much to the CNN. The technology is implemented using the CNN Universal Machine (CNN-UM) and is used in image processing; it can also implement any Boolean function.
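To give a flavour of how a CNN computes, the sketch below iterates the standard Chua-Yang cell equation, in which each cell's state is driven only by its immediate neighbours through a feedback template A, a control template B and a bias z. The template values used here are illustrative rather than taken from a tuned library template; with a Laplacian-style B and negative bias the iteration behaves roughly like an edge extractor on a bipolar input image.

```java
// Minimal sketch of a Cellular Neural Network iteration: forward-Euler on the
// Chua-Yang cell equation  dx/dt = -x + A*y + B*u + z,  y = 0.5*(|x+1| - |x-1|).
// Template values are illustrative, not a tuned library template.
final class CnnSketch {

    static final double[][] A = {{0, 0, 0}, {0, 2, 0}, {0, 0, 0}};           // feedback template
    static final double[][] B = {{-1, -1, -1}, {-1, 8, -1}, {-1, -1, -1}};   // control template
    static final double Z = -1.0;                                            // bias

    static double output(double x) {                 // standard piecewise-linear output
        return 0.5 * (Math.abs(x + 1) - Math.abs(x - 1));
    }

    /** Run the network on input image u (values in [-1, 1]) and return the settled output. */
    static double[][] run(double[][] u, int steps, double dt) {
        int rows = u.length, cols = u[0].length;
        double[][] x = new double[rows][cols];        // cell states, initialised to 0
        for (int s = 0; s < steps; s++) {
            double[][] next = new double[rows][cols];
            for (int i = 0; i < rows; i++) {
                for (int j = 0; j < cols; j++) {
                    double sum = Z;
                    for (int di = -1; di <= 1; di++) {
                        for (int dj = -1; dj <= 1; dj++) {
                            int ni = i + di, nj = j + dj;
                            if (ni < 0 || nj < 0 || ni >= rows || nj >= cols) continue;
                            sum += A[di + 1][dj + 1] * output(x[ni][nj]);
                            sum += B[di + 1][dj + 1] * u[ni][nj];
                        }
                    }
                    next[i][j] = x[i][j] + dt * (-x[i][j] + sum);
                }
            }
            x = next;
        }
        double[][] y = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                y[i][j] = output(x[i][j]);
        return y;
    }

    public static void main(String[] args) {
        double[][] u = {                               // bipolar input: small black patch on white
            {-1, -1, -1, -1, -1},
            {-1,  1,  1, -1, -1},
            {-1,  1,  1, -1, -1},
            {-1, -1, -1, -1, -1},
            {-1, -1, -1, -1, -1}
        };
        double[][] y = run(u, 200, 0.05);
        for (double[] row : y) {                       // print the settled output pattern
            StringBuilder sb = new StringBuilder();
            for (double v : row) sb.append(v > 0 ? '#' : '.');
            System.out.println(sb);
        }
    }
}
```

Because every cell repeats the same local computation in parallel, the same structure maps naturally onto analog VLSI, which is what the CNN-UM exploits.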

Read more »

Wireless Internet Security

In the past few years, there has been an explosive growth in the popularity and availability of small, handheld devices (mobile phones, PDAs, pagers) that can wirelessly connect to the Internet. These devices are predicted to soon outnumber traditional Internet hosts like PCs and workstations [1]. With their convenient form factor and falling prices, these devices hold the promise of ubiquitous ("anytime, anywhere") access to a wide array of interesting services. However, these battery-driven devices are characterized by limited storage (volatile and non-volatile memory), minimal computational capability, and screen sizes that vary from small to very small. These limitations make the task of creating secure, useful applications for these devices especially challenging.


It is easy to imagine a world in which people rely on connected handheld devices not only to store their personal data and check news and weather reports, but also for more security-sensitive applications like on-line banking, stock trading and shopping, all while being mobile. Such transactions invariably require the exchange of private information like passwords, PINs and credit card numbers, so ensuring their secure transport through the network becomes an important concern.

On the wired Internet, Secure Sockets Layer (SSL) [3] is the most widely used security protocol. Between its conception at Netscape in the mid-90s and standardization within the IETF in the late-90s, the protocol and its implementations have been subjected to careful scrutiny by some of the world's foremost security experts [5]. No wonder, then, that SSL (in the form of HTTPS, which is simply HTTP over SSL) is trusted to secure transactions for sensitive applications ranging from web banking to securities trading to e-commerce. One could easily argue that without SSL, there would be no e-commerce on the web today. Almost all web servers on the Internet support some version of SSL [6]. Unfortunately, none of the popular wide-area wireless data services today offer this protocol on a handheld device. Driven by perceived inadequacies of SSL in a resource-constrained environment, the architects of both WAP [7] and Palm.net [8] chose a different (and incompatible) security protocol (e.g., WTLS [9] for WAP) for their mobile clients and inserted a proxy/gateway in their architecture to perform protocol conversions. A WAP gateway, for instance, decrypts encrypted data sent by a WAP phone using WTLS and re-encrypts it using SSL before forwarding it to the eventual destination server. The reverse process is used for traffic flowing in the opposite direction.

Such a proxy-based architecture has some serious drawbacks. The proxy is not only a potential performance bottleneck, but also represents a “man-in-the-middle” which is privy to all “secure” communications. This lack of end-to-end security is a serious deterrent for any organization thinking of extending a security-sensitive Internet-based service to wireless users. Banks and brokerage houses are uncomfortable with the notion that the security of their customers’ wireless transactions depends on the integrity of the proxy under the control of an untrusted third party [10].

We found it interesting that the architects of WAP and Palm.net made tacit assumptions about the unsuitability of standard Internet protocols (especially SSL) for mobile devices without citing any studies that would warrant such a conclusion [11]. This prompted our experiments in evaluating standard security algorithms and protocols (considered too “big” by some) for small devices. We sought answers to some key questions: Is it possible to develop a usable implementation of SSL for a mobile device and thereby provide end-to-end security? How would near-term technology trends impact the conclusions of our investigation?
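To show what end-to-end security looks like when no gateway sits in the middle, here is a small client written against the standard javax.net.ssl API. It is a plain desktop-Java illustration of the kind of direct SSL session that an on-device SSL client such as KSSL establishes, not the KSSL code itself, and the host name is a placeholder.

```java
// End-to-end SSL/TLS connection using the standard Java API: the handshake and
// encryption run directly between client and server, with no trusted gateway in between.
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class HttpsProbe {
    public static void main(String[] args) throws Exception {
        String host = "www.example.com";                       // placeholder host
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();                            // certificate check, key exchange
            System.out.println("cipher suite: " + socket.getSession().getCipherSuite());

            OutputStream out = socket.getOutputStream();
            out.write(("HEAD / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n")
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            System.out.println(in.readLine());                  // e.g. "HTTP/1.1 200 OK"
        }
    }
}
```

Because the handshake terminates at the destination server, no intermediary ever sees the plaintext, which is precisely the property the proxy-based WAP architecture gives up.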

The rest of this report describes our experiments in greater detail. Section 2 reviews the security architecture of current wireless Internet offerings and analyses its shortcomings. Section 3 provides an overview of the SSL protocol; in particular, we highlight aspects that make it easier to implement SSL on weak CPUs than it might appear at first. Section 4 discusses our implementation of an SSL client, called KSSL, on a Palm PDA and evaluates its performance. Section 5 describes an application we've developed, based on KSSL, for secure, mobile access to enterprise resources through sun.net™. Section 6 talks about mobile technology trends relevant to application and protocol developers.

Read more »