SEMINAR TOPICS AND SEMINAR REPORTS

Sunday, 2 May 2010

Digital Watermarking

In recent years, the distribution of works of art, including pictures, music, video, and textual documents, has become easier. With the widespread and increasing use of the Internet, digital forms of these media (still images, audio, video, text) are easily accessible. This is clearly advantageous, in that it makes it easier to market and sell one's works. However, the same property threatens copyright protection: digital documents are easy to copy and distribute, which opens the door to piracy. There are a number of methods for protecting ownership. One of these is known as digital watermarking.

Digital watermarking is the process of inserting a digital signal or pattern (indicative of the owner of the content) into digital content. The signal, known as a watermark, can be used later to identify the owner of the work, to authenticate the content, and to trace illegal copies of the work.

Watermarks of varying degrees of obtrusiveness are added to presentation media as a guarantee of authenticity, quality, ownership, and source.
To be effective, a watermark must satisfy two main requirements: robustness and transparency. Robustness means the watermark survives any alterations or distortions the watermarked content may undergo, including intentional attacks aimed at removing it and the common signal-processing operations used to store and transmit the data more efficiently, so that the owner can still be identified afterwards. Transparency means the watermark is imperceptible, so that it does not degrade the quality of the content and is harder for pirates to detect and therefore remove.

The medium of focus in this paper is the still image. There are a variety of image watermarking techniques, falling into two main categories depending on the domain in which the watermark is constructed: the spatial domain (producing spatial watermarks) and the frequency domain (producing spectral watermarks). The effectiveness of a watermark improves when the technique exploits known properties of the human visual system; such methods are known as perceptually based watermarking techniques. Within this category, the class of image-adaptive watermarks proves most effective.
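
As a concrete illustration of the spatial-domain category, the following sketch embeds a watermark into the least significant bit (LSB) of each pixel of a grayscale image held as a NumPy array. This is not a technique the paper specifically evaluates, just a minimal example; the host image and the owner-ID bit string are made up.

import numpy as np

def embed_lsb(image, watermark_bits):
    """Write watermark bits into the least significant bit of the first pixels."""
    flat = image.flatten()                         # copy of the pixel data
    flat[:len(watermark_bits)] &= 0xFE             # clear the target LSBs
    flat[:len(watermark_bits)] |= watermark_bits   # store the watermark bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits watermark bits from the LSB plane."""
    return image.flatten()[:n_bits] & 1

# Illustrative usage with a random "host image" and a short owner-ID bit string.
host = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(host, mark)
assert (extract_lsb(stego, len(mark)) == mark).all()

A watermark like this is transparent (each marked pixel changes by at most one gray level) but not robust, since compression or even mild filtering destroys it; that trade-off is exactly why the more effective schemes work in the frequency domain and adapt to the image content.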
In conclusion, image watermarking techniques that take advantage of the properties of the human visual system and the characteristics of the image itself create the most robust and transparent watermarks.

PARASITIC COMPUTING

The net is a fertile place where new ideas and products surface quite often. We have already come across many innovative ideas such as peer-to-peer file sharing and distributed computing. Parasitic computing is a new entrant in this category. Reliable communication on the Internet is guaranteed by a standard set of protocols used by all computers. Computer scientists at Notre Dame showed that these protocols can be exploited to compute with the communication infrastructure, transforming the Internet into a distributed computer in which servers unwittingly perform computation on behalf of a remote node.


In this model, known as “parasitic computing”, one machine forces target computers to solve a piece of a complex computational problem merely by engaging them in standard communication. Consequently, the target computers are unaware that they have performed computation for the benefit of a commanding node. As experimental evidence of the principle, the scientists harnessed the power of several web servers across the globe, which, unknown to their operators, worked together to solve an NP-complete problem.

Sending a message through the Internet is a sophisticated process regulated by layers of complex protocols. For example, when a user selects a URL (uniform resource locator), requesting a web page, the browser opens a transmission control protocol (TCP) connection to a web server. It then issues a hypertext transfer protocol (HTTP) request over the TCP connection. The TCP message is carried via the Internet protocol (IP), which may break it into several packets that navigate independently through numerous routers between source and destination. When an HTTP request reaches its target web server, a response is returned via the same TCP connection to the user's browser. The original message is reconstructed through a series of consecutive steps involving IP and TCP; it is finally interpreted at the HTTP level, eliciting the appropriate response (such as sending the requested web page). Thus, even a seemingly simple request for a web page involves a significant amount of computation in the network and at the computers at the end points.
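
The following sketch walks through the same sequence at the application level: it opens a TCP connection to a web server and issues an HTTP request over it, while TCP and IP handle packetization, routing, and reassembly underneath. The host name is purely illustrative.

import socket

HOST = "example.com"    # illustrative web server
PORT = 80

# Open a TCP connection to the web server; IP routing and any splitting of
# the stream into packets happen transparently below this level.
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # Issue an HTTP request over the established TCP connection.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # The response returns over the same TCP connection; TCP/IP reassemble
    # the packets before the HTTP layer interprets them.
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n", 1)[0])    # status line, e.g. b'HTTP/1.1 200 OK'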

In essence, a 'parasitic computer' is a realization of an abstract machine for a distributed computer built upon standard Internet communication protocols. The researchers used a parasitic computer to solve the well-known NP-complete satisfiability problem by engaging various web servers physically located in North America, Europe, and Asia, each of which unknowingly participated in the experiment. Like the SETI@home project, parasitic computing decomposes a complex problem into computations that can be evaluated independently and solved by computers connected to the Internet; unlike the SETI project, however, it does so without the knowledge of the participating servers. And unlike 'cracking' (breaking into a computer) or computer viruses, parasitic computing does not compromise the security of the targeted servers; it accesses only those parts of the servers that have been made explicitly available for Internet communication.
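
The unit of work being distributed is small: given one candidate truth assignment, check whether it satisfies every clause of the formula. The sketch below shows that check and the decomposition into independent candidates. The formula here is made up for illustration, and in the real experiment the check was not run locally like this; each candidate was encoded into an ordinary network message so that the target server's standard protocol processing performed the equivalent test.

from itertools import product

# A toy CNF formula over variables 1..3: each clause is a tuple of literals,
# a negative number meaning the negation of that variable.
CNF = [(1, -2, 3), (-1, 2), (2, -3)]

def satisfies(assignment, cnf):
    """Return True if the candidate assignment makes every clause true.

    This is the independent piece of work that a parasitic computer farms
    out: each target effectively evaluates one candidate.
    """
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in cnf
    )

# The commanding node enumerates candidates; because each check is
# independent, the candidates could be sent to different, unwitting servers.
n_vars = 3
solutions = [
    bits
    for bits in product([False, True], repeat=n_vars)
    if satisfies({i + 1: b for i, b in enumerate(bits)}, CNF)
]
print(solutions)    # every satisfying assignment of the toy formula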

RD-RAM

One of the constants in computer technology is the continuing advancement in operational speed. A few years ago, a 66 MHz PC was considered “lightning fast”. Today’s common desktop machine operates at many times that frequency. All this speed is the foundation of a trend towards visual computing, in which the PC becomes ever more graphical, animated, and three-dimensional.

In this quest for speed, most of the attention is focused on the microprocessor. But a PC’s memory is equally important in supporting the new capabilities of visual computing. And commodity dynamic RAMs (DRAMs), the mainstay of PC memory architecture, have fallen behind the microprocessor in their ability to handle data in the volume necessary to support complex graphics. While device densities have increased by nearly six orders of magnitude, DRAM access times have improved by only a factor of about 10; over the same period, microprocessor performance has jumped by a factor of 100. In other words, while bus frequency has evolved from 33 MHz for EDO to the current standard of 100 MHz for SDRAMs, and up to 133 MHz for the latest PC-133 specification, memory speed has been outpaced by the operating frequency of the microprocessor, which passed 600 MHz by the turn of the century. The memory subsystem thus risked becoming a bottleneck for overall system performance, creating a significant performance gap between computing elements and their associated memory devices.

Traditionally, this gap has been filled by application-specific memories such as SRAM caches and VRAMs. To broaden the usage, however, a high-density, low-cost, high-bandwidth DRAM is needed.

This technology is based on a very high-speed, chip-to-chip interface and has been incorporated into DRAM architectures called Rambus DRAM, or RDRAM. It can be used with conventional processors and controllers to achieve a performance rate 100 times faster than conventional DRAMs. At the heart of the Rambus Channel memory architecture are ordinary DRAM cells that store the information. But the access to those cells, and the physical, electrical, and logical construction of a Rambus memory system, are entirely new and much, much faster than conventional DRAMs. The Rambus channel transfers data on each edge of a 400 MHz differential clock to achieve an 800 MB/s data rate. It uses a very small number of very high-speed signals to carry all the address, data, and control information, greatly reducing the pin count, and hence the cost, while maintaining high performance levels. The data and control lines use 800 mV logic levels, operate in a strictly controlled impedance environment, and meet specific high-speed timing requirements. This memory performance satisfies the requirements of the next generation of processors in PCs, servers, and workstations, as well as communications and consumer applications.
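
The quoted data rate follows from simple arithmetic: peak bandwidth equals the clock rate times the number of edges per cycle times the width of the data path. The snippet below reproduces the 800 MB/s figure, assuming a byte-wide channel (an assumption, since the channel width is not stated above).

# Peak transfer rate = clock frequency x edges per cycle x bus width.
clock_mhz       = 400   # differential clock (MHz)
edges_per_cycle = 2     # data is transferred on both rising and falling edges
bus_width_bytes = 1     # byte-wide data path (assumed, not stated above)

peak_mb_per_s = clock_mhz * edges_per_cycle * bus_width_bytes
print(peak_mb_per_s)    # 800 -> the 800 MB/s figure quoted above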

E-Intelligence

As corporations move rapidly toward deploying e-business systems, the lack of business intelligence facilities in these systems prevents decision-makers from exploiting the full potential of the Internet as a sales, marketing, and support channel. To solve this problem, vendors are rapidly enhancing their business intelligence offerings to capture the data flowing through e-business systems and integrate it with the information that traditional decision-making systems manage and analyze. These enhanced business intelligence, or e-intelligence, systems may provide significant business benefits to traditional brick-and-mortar companies as well as new dot-com ones as they build e-business environments.


Organizations have been successfully using decision-processing products, including data warehouses and business intelligence tools, for the past several years to optimize day-to-day business operations and to leverage enterprise-wide corporate data for competitive advantage. The advent of the Internet and corporate extranets has propelled many of these organizations toward e-business applications to further improve business efficiency, decrease costs, and increase revenues, and to compete with the new dot-com companies appearing in the marketplace.

The explosive growth in the use of e-business has led to the need for decision-processing systems to be enhanced to capture and integrate the business information flowing through e-business systems, and to apply business intelligence techniques to this captured information. These enhanced decision-processing systems, or e-intelligence systems, have the potential to provide significant business benefits to both traditional brick-and-mortar companies and new dot-com companies as they begin to exploit the power of e-business processing.
