September 12, 2008

What is a Computer Printer?



Many printers are primarily used as computer peripherals, permanently attached to a computer which serves as a document source. Other printers, commonly known as network printers, have built-in network interfaces (typically wireless or Ethernet) and can serve as a hardcopy device for any user on the network. In addition, many modern printers can directly interface with electronic media such as memory sticks or memory cards, or with image capture devices such as digital cameras and scanners; some printers are combined with a scanner and/or fax machine in a single unit. A printer combined with a scanner can essentially function as a photocopier.

Printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (10 pages per minute is considered fast, and many consumer printers are far slower than that), and the cost-per-page is relatively high. In contrast, the printing press (which serves much the same function) is designed and optimized for high-volume print jobs such as newspaper print runs: printing presses are capable of hundreds of pages per minute or more, and have an incremental cost-per-page which is a fraction of that of printers. The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing.

The world's first computer printer was a 19th-century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.

Printing technology
Printers are routinely classified by the underlying print technology they employ; numerous such technologies have been developed over the years. The choice of print engine has a substantial effect on what jobs a printer is suitable for, as different technologies offer different levels of image and text quality, print speed, cost, and noise; in addition, some technologies are inappropriate for certain types of physical media (such as carbon paper or transparencies).

Another aspect of printer technology that is often forgotten is resistance to alteration: liquid ink such as from an inkjet head or fabric ribbon becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. According to the website of security expert Frank Abagnale, checks should either be printed with liquid ink or on special "check paper with toner anchorage" [1]. For similar reasons, carbon film ribbons for IBM Selectric typewriters bore labels warning against using them to type negotiable instruments such as checks.

Modern print technology
The following printing technologies are routinely found in modern printers, as of April 2006:

Toner-based printers
Toner-based printers work on the xerographic principle at work in most photocopiers: toner is adhered to a light-sensitive print drum, then transferred by static electricity to the printing medium, to which it is fused with heat and pressure. The most common type of toner-based printer is the laser printer, which uses a precision laser to control where toner adheres. Laser printers are known for high-quality prints, good print speed, and a low cost-per-copy; they are the most common printers for general-purpose office applications. They are far less commonly used as consumer printers due to their high initial cost.

Laser printers are available in both color and monochrome varieties.

Another toner-based printer is the LED printer, which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.

Liquid inkjet printers
Inkjet printers spray very small, precise amounts (usually a few picolitres) of ink onto the media. Inkjet printing (and the related bubble-jet technology) is the most common consumer print technology, as high-quality inkjet printers are inexpensive to produce. Virtually all modern inkjet printers are color devices; some, known as photo printers, include extra pigments to better reproduce the color gamut needed for high-quality photographic prints (and are additionally capable of printing on photographic card stock, as opposed to plain office paper).

Inkjet printers consist of nozzles that produce very small ink bubbles that turn into tiny droplets of ink; the dots formed are the size of tiny pixels. Inkjet printers can print high-quality text and graphics, and are almost silent in operation. They have a much lower initial cost than laser printers, but a much higher cost-per-copy, as the ink needs to be frequently replaced. In addition, consumer printer manufacturers have adopted a business model similar to that employed by manufacturers of razors: the printers themselves are frequently sold below cost, and the ink is then sold at a high markup. Various legal and technological means are employed to try to force users to purchase ink only from the manufacturer (thus leading to vendor lock-in); however, there is a thriving aftermarket for third-party ink cartridges (new or refurbished) and refill kits.
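
The razor-and-blades economics above can be made concrete with a little arithmetic. The purchase prices and cost-per-page figures below are assumptions chosen for illustration, not real market data; the break-even point is where the cheap-to-buy inkjet becomes the more expensive machine overall.

```python
# Hypothetical figures -- real printer and consumable prices vary widely.
def break_even_pages(inkjet_price, inkjet_cpp, laser_price, laser_cpp):
    """Page count at which total inkjet cost overtakes total laser cost.

    Solves: inkjet_price + inkjet_cpp * n == laser_price + laser_cpp * n
    """
    return (laser_price - inkjet_price) / (inkjet_cpp - laser_cpp)

pages = break_even_pages(inkjet_price=60.0, inkjet_cpp=0.10,
                         laser_price=300.0, laser_cpp=0.03)
print(round(pages))  # -> 3429 pages under these assumed prices
```

Past that page count, the laser printer's higher purchase price has paid for itself in cheaper consumables.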

Inkjet printers are also far slower than laser printers. Inkjet printers also have the disadvantage that pages must be allowed to dry before being aggressively handled; premature handling can cause the inks (which are adhered to the page in liquid form) to run.

Solid Ink printers
Solid Ink printers, also known as phase-change printers, are a type of thermal transfer printer. They use solid sticks of CMYK colored ink (similar in consistency to candle wax), which are melted and fed into a piezo crystal operated print-head. The printhead sprays the ink on a rotating, oil coated drum. The paper then passes over the print drum, at which time the image is transferred, or transfixed, to the page.

Solid ink printers are most commonly used as color office printers, and are excellent at printing on transparencies and other non-porous media. They can produce excellent results, and acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high power consumption and long warm-up times from a cold state. Some users also complain that the resulting prints are difficult to write on (the wax tends to repel ink from pens) and are difficult to feed through automatic document feeders, though these traits have been significantly reduced in later models. In addition, this type of printer is available from only one manufacturer, Xerox, as part of its Phaser office printer line. Solid ink printers were previously manufactured by Tektronix, which sold its printing business to Xerox in 2000.

Dye-sublimation printers
A dye-sublimation printer (or dye-sub printer) is a printer which employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.

Thermal printers
Thermal printers work by selectively heating regions of special heat-sensitive paper. These printers are limited to special-purpose applications such as cash registers and the printers in ATMs and gasoline dispensers. They are also used in some older inexpensive fax machines.

Obsolete and special-purpose printing technologies
The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use. Among these types are impact printers and pen-based plotters.

Impact printers rely on a forcible impact to transfer ink to the media, similar to the action of a typewriter. All but the dot matrix printer rely on the use of formed characters: letterforms that represent each of the characters the printer is capable of printing. In addition, most of these printers were limited to monochrome printing in a single typeface at one time, although bolding and underlining of text could be done by overstriking, that is, printing two or more impressions in the same character position. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisy wheel printers, dot matrix printers, and line printers.

Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se), and special purpose pens that are mechanically run over the paper to create text and images.

Only plotters, dot matrix printers, and certain line printers were capable of printing graphics.

Typewriter-derived printers
Several early computer printers were simply computer-controllable versions of existing electric typewriters; the Friden Flexowriter and IBM Selectric typewriter were the most common examples. The Flexowriter printed with a conventional typebar mechanism, while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.
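
To put the 15.5 characters per second figure in perspective, here is a quick back-of-envelope calculation for a full page of monospaced text (the 80x66-character page size is an illustrative assumption, not a Selectric specification):

```python
# Time for a Selectric-based printer to fill one assumed 80x66 text page.
CPS = 15.5
chars_per_page = 80 * 66                  # 5280 characters
minutes = chars_per_page / CPS / 60
print(f"{minutes:.1f} minutes per page")  # -> 5.7 minutes per page
```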

Teletypewriter-derived printers
The common teleprinter could easily be interfaced to the computer and became very popular except on those computers manufactured by IBM. Some models used a "typebox" that was positioned (in the X- and Y-axes) by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in much the same way as the Selectric typewriters used their type ball. In either case, the letter form struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second, although a few achieved 15 CPS.

Daisy wheel printers
Daisy-wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals (the daisy wheel), each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing.

These printers were also referred to as letter-quality printers because, during their heyday, they could produce text which was as clear and crisp as a typewriter (though they were nowhere near the quality of printing presses). The fastest letter-quality printers printed at 30 characters per second.

Dot-matrix printers
In the general sense many printers rely on a matrix of pixels, or dots, that together form the larger image. However, the term dot matrix printer is specifically used for impact printers that use a matrix of small pins to create precise dots. The advantage of dot-matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type).

A Tandy 1000 HX with a Tandy DMP-133 dot-matrix printer.

Dot-matrix printers can be broadly divided into two major classes:

Ballistic wire printers (discussed in the dot matrix printers article)
Stored energy printers
Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
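
A character-based dot matrix head can be modelled as a vertical column of pins past which the page advances one dot column at a time. The sketch below uses a made-up 5x7 glyph for illustration, not any real printer's character ROM:

```python
# Toy model of a 7-pin character-based dot matrix print head.
GLYPH_A = [          # assumed 5x7 bitmap for the letter A
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def fire_pins(glyph, column):
    """Which of the 7 pins strike the ribbon for one dot column."""
    return [row[column] == "1" for row in glyph]

# Render the glyph as it would appear on the page.
for row in GLYPH_A:
    print("".join("#" if dot == "1" else " " for dot in row))
```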

At one time, dot matrix printers were one of the more common types of printer for general use, such as home and small office use. Such printers had either 9 or 24 pins on the print head; 24-pin print heads were able to print at a higher quality. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favor for general use.

Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism) that raises and lowers the ribbon as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long in high-resolution mode.

Dot matrix printers are still commonly used in low-cost, low-quality applications like cash registers, or in demanding, very high volume applications like invoice printing. The fact that they use an impact printing method allows them to be used to print multi-part documents using carbonless copy paper (like sales invoices and credit card receipts), whereas other printing methods are unusable with paper of this type. Dot-matrix printers are now (as of 2005) rapidly being superseded even as receipt printers.

Line printers
Line printers, as the name implies, print an entire line of text at a time. Three principal designs existed. In drum printers, a drum carries the entire character set of the printer repeated in each column that is to be printed. In chain printers (also known as train printers), the character set is arranged multiple times around a chain that travels horizontally past the print line. In either case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon, which then presses against the character form, and the impression of the character form is printed onto the paper.

Comb printers represent the third major design. These printers were a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers printed a portion of a row of pixels at one time (for example, every eighth pixel). By shifting the comb back and forth slightly, the entire pixel row could be printed (continuing the example, in just eight cycles). The paper then advanced and the next pixel row was printed. Because far less motion was involved than in a conventional dot matrix printer, these printers were very fast compared to dot matrix printers and were competitive in speed with formed-character line printers while also being able to print dot-matrix graphics.
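
The comb-shifting scheme can be sketched in a few lines: each hammer covers every eighth pixel position, and eight one-pixel shifts cover the complete row.

```python
# Sketch of comb-printer row coverage, as described above.
ROW_WIDTH = 64
COMB_PITCH = 8                             # one hammer every 8 positions

printed = set()
for shift in range(COMB_PITCH):            # eight cycles per pixel row
    for hammer in range(0, ROW_WIDTH, COMB_PITCH):
        printed.add(hammer + shift)        # each hammer strikes once

assert printed == set(range(ROW_WIDTH))    # the whole row is covered
print(f"row of {ROW_WIDTH} pixels covered in {COMB_PITCH} cycles")
```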

Line printers were the fastest of all impact printers and were used for bulk printing in large computer centres. They were virtually never used with personal computers and have now been replaced by high-speed laser printers.

The legacy of line printers lives on in many computer operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.

Pen-based plotters
A plotter is a vector graphics printing device which operates by moving a pen over the surface of paper. Plotters have been (and still are) used in applications such as computer-aided design, though they are being replaced with wide-format conventional printers (which nowadays have sufficient resolution to render high-quality vector graphics using a rasterized print engine). It is commonplace to refer to such wide-format printers as "plotters", even though such usage is technically incorrect.

Other printers
A number of other sorts of printers are important for historical reasons, or for special purpose uses:

Digital minilab (photographic paper)
Electrolytic printers
Microsphere printer (special paper)
Spark printer (supplied with the Sinclair ZX81)
Barcode printer (uses heat to print barcodes)

Printing mode
The data received by a printer may be:

a string of characters
a bitmapped image
a vector image
Some printers can process all three types of data; others cannot.

Daisy wheel printers can handle only plain text data or rather simple point plots.
Plotters typically process vector images.
Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all three. This is especially true of printers equipped with support for PostScript and/or PCL, which includes the vast majority of printers produced today.
Today it is common to print everything (even plain text) by sending pre-rendered bitmap images to the printer, because this allows better control over formatting. Many printer drivers do not use the text mode at all, even if the printer is capable of it.
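
One consequence of sending everything as a bitmap is the sheer volume of raster data involved. The calculation below assumes US Letter paper and a 1-bit monochrome image; both figures are illustrative.

```python
# Size of an uncompressed full-page bitmap at printer resolutions.
def page_bitmap_bytes(dpi, bits_per_pixel=1, width_in=8.5, height_in=11.0):
    pixels = (dpi * width_in) * (dpi * height_in)
    return pixels * bits_per_pixel / 8

mib = page_bitmap_bytes(600) / (1024 * 1024)
print(f"{mib:.1f} MiB per page at 600 dpi, 1 bit/pixel")  # -> 4.0 MiB
```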

Monochrome, color and photo printers
A monochrome printer can only produce an image consisting of one color, usually black. A monochrome printer may also be able to produce various hues of that color, such as a grey-scale.

A color printer can produce images of multiple colors.

A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of photographic methods of printing.

The printer manufacturing business
Often the razor and blades business model is applied. That is, a company may sell a printer at cost, and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges.

Printing speed
The speed of early printers was measured in units of characters per second. More modern printers are measured in pages per minute. These measures are used primarily as a marketing tool, and are not well standardised. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly.
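
Converting between the two units requires assuming a character count per page; the 2,000 characters used below is a rough figure for a sparse office document, not a standard.

```python
# Rough conversion between characters per second and pages per minute.
def cps_to_ppm(cps, chars_per_page=2000):
    return cps * 60 / chars_per_page

print(f"{cps_to_ppm(30):.1f} ppm")    # a 30 cps daisy wheel: 0.9 ppm
print(f"{cps_to_ppm(15.5):.1f} ppm")  # a 15.5 cps Selectric: ~0.5 ppm
```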

Printer job classes
Printer job classes are collections of printers. Print jobs sent to a class are forwarded to the first available printer in the class.
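
A minimal sketch of that forwarding behaviour (the class and method names below are illustrative, not the actual implementation in a print spooler such as CUPS):

```python
# Toy printer class: jobs go to the first available member printer.
class PrinterClass:
    def __init__(self, printers):
        self.printers = list(printers)  # ordered list of member printers
        self.busy = set()

    def submit(self, job):
        """Forward a job to the first available printer, or None if all are busy."""
        for name in self.printers:
            if name not in self.busy:
                self.busy.add(name)
                return name
        return None                     # every member busy: job must queue

office = PrinterClass(["laser1", "laser2"])
print(office.submit("report.pdf"))  # -> laser1
print(office.submit("memo.pdf"))    # -> laser2
print(office.submit("late.pdf"))    # -> None (both busy)
```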

Forensic identification
Similar to the forensic identification of typewriters, computer printers and copiers can be traced by imperfections in their output. The mechanical tolerances of the toner and paper feed mechanisms cause banding patterns, which carry information about the individual device's mechanical properties. It is sometimes possible to identify the manufacturer and brand, and in some cases the individual printer can be identified from a set of known printers by comparing their outputs. [2] [3]

Some high-quality color printers and copiers steganographically embed their identification code into the printed pages, as fine and almost invisible patterns of yellow dots. The sources identify Xerox and Canon as companies doing this [4] [5]. The Electronic Frontier Foundation has investigated[6] this issue and documented how the Xerox DocuColor printer's serial number, as well as the date and time of the printout, are encoded in a repeating 8×15 dot pattern in the yellow channel. EFF is working to reverse engineer additional printers.

September 11, 2008

What is a Computer display?



A cable connects the monitor to a video adapter (video card) that is installed in an expansion slot on the computer’s motherboard. This system converts signals into text and pictures and displays them on a TV-like screen (the monitor).

The computer sends a signal to the video adapter, telling it what character, image, or graphic to display. The video adapter converts that signal to a set of instructions that tell the display device (monitor) how to draw the image on the screen.

It is important that the monitor have a TCO Certification.

Cathode ray tube
The CRT, or cathode ray tube, is the picture tube of your monitor. Although it is a large vacuum tube, it is shaped more like a bottle. The tube tapers near the back, where there is a negatively charged cathode, or electron gun. The electron gun shoots electrons at the back of the positively charged screen, which is coated with phosphor. This excites the phosphors, causing them to glow as individual dots called pixels (picture elements). The image you see on the monitor's screen is made up of thousands of tiny dots (pixels). If you have ever seen a child's LiteBrite toy, then you have a good idea of the concept. The distance between the pixels has a lot to do with the quality of the image: if the distance between pixels on a monitor screen is too great, the picture will appear fuzzy or grainy; the closer together the pixels are, the sharper the image on screen. The distance between pixels on a computer monitor screen is called its dot pitch and is measured in millimeters. Most modern monitors have a dot pitch of .28 mm or less.
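
The dot-pitch figure translates directly into how many phosphor dots fit across the screen. The sketch below assumes an 18-inch viewable diagonal and a 4:3 aspect ratio (so the width is 4/5 of the diagonal); both figures are illustrative assumptions.

```python
# How many 0.28 mm phosphor triads fit across an assumed 18-inch
# viewable diagonal at a 4:3 aspect ratio.
MM_PER_INCH = 25.4
diag_mm = 18 * MM_PER_INCH
width_mm = diag_mm * 4 / 5          # 4:3 screen: width = 0.8 * diagonal
print(round(width_mm / 0.28))       # -> 1306 triads across the screen
```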

Note: From an environmental point of view, the monitor is the most difficult computer peripheral to dispose of because of the lead it contains.

There are two electromagnets (yokes) around the collar of the tube, which bend the beam of electrons. The beam scans (is bent) across the monitor from left to right and top to bottom to create, or draw, the image line by line. The number of times in one second that the electron gun redraws the entire image is called the refresh rate, measured in Hertz (Hz). If the scanning beam hits each line of pixels in succession on each pass, the monitor is known as a non-interlaced monitor. The electron beam on an interlaced monitor scans the odd-numbered lines on one pass and the even lines on the second pass. Interlaced monitors are typically harder to look at and have been associated with eyestrain and nausea.
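
The refresh rate fixes how long the gun has to redraw the whole image. At an assumed 75 Hz:

```python
# Frame timing implied by the refresh rate discussed above.
refresh_hz = 75
redraw_ms = 1000 / refresh_hz
print(f"{redraw_ms:.1f} ms per full redraw")           # -> 13.3 ms
# An interlaced display needs two passes (odd then even lines) per frame:
print(f"{2 * redraw_ms:.1f} ms per interlaced frame")  # -> 26.7 ms
```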

Imaging technologies

A 19-inch (48 cm) CRT computer monitor.

As with television, several different hardware technologies exist for displaying computer-generated output:

Liquid crystal display (LCD). LCD-based monitors can accept television and computer signals (SVGA, DVI, PAL, SECAM, NTSC). As of this writing (June 2006), LCDs are the most popular display device for new computers in North America.
Cathode ray tube (CRT)
Vector displays, as used on the Vectrex, many scientific and radar applications, and several early arcade machines (notably Asteroids). These were always implemented using CRT displays due to the requirement for a deflection system, though they can be emulated on any raster-based display.
Television receivers were used by most early personal and home computers, connecting composite video to the television set using a modulator. Image quality was reduced by the additional steps of composite video → modulator → TV tuner → composite video, though it reduced costs of adoption because one did not have to buy a specialized monitor.
Plasma display
Surface-conduction electron-emitter display (SED)
Video projector - implemented using LCD, CRT, or other technologies. Recent consumer-level video projectors are almost exclusively LCD based.
Organic light-emitting diode (OLED) display
During the era of early home computers, television sets were almost exclusively CRT-based.

Performance measurements
The relevant performance measurements of a monitor are:

Luminance
Size
Dot pitch. In general, the lower the dot pitch (e.g. 0.24 mm), the sharper the picture.
V-sync rate
Response time
Refresh rate

Display resolutions
A modern CRT display has considerable flexibility: it can usually handle a range of resolutions from 320 by 200 up to 2560 by 2048 pixels.
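
Each resolution step has a direct cost in video memory. A quick calculation of uncompressed framebuffer sizes, assuming 24 bits per pixel:

```python
# Uncompressed framebuffer size for a few common resolutions.
def framebuffer_bytes(width, height, bits_per_pixel=24):
    return width * height * bits_per_pixel // 8

for w, h in [(320, 200), (1024, 768), (1600, 1200)]:
    print(f"{w}x{h}: {framebuffer_bytes(w, h) / 1024:.0f} KiB")
```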

Issues and problems
Screen burn-in has long been an issue with CRT computer monitors and televisions. If an image is displayed on the screen for a long period without changing, it can become permanently burned into the phosphor coating of the screen, leaving a faint ghost image; the effect is commonly seen on older ATMs. Screensavers that change the displayed image at regular intervals were developed largely to prevent this kind of burn-in.

Another issue is that some LCD monitors may develop dead pixels over time; this applied chiefly to older LCD panels from the 1990s.

Both problems have become less common as display technology has improved.

With the exception of DLP, most display technologies (especially LCD) have an inherent misregistration of the color planes; that is, the centres of the red, green, and blue dots do not line up perfectly. Subpixel rendering depends on this misalignment; technologies making use of it include the Apple II from 1976 [1], and more recently Microsoft's ClearType (1998) and XFree86's X Rendering Extension.

Display interfaces

Computer Terminals
Early CRT-based VDUs (Visual Display Units) such as the DEC VT05 without graphics capabilities gained the label glass teletypes, because of the functional similarity to their electromechanical predecessors.

Composite monitors
Early home computers such as the Apple II and the Commodore 64 used composite monitors. Composite monitors are still sometimes used with video game consoles.

Digital monitors
Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support LVDS, or TMDS protocols.

TTL monitors

An IBM PC with a green monochrome display.

Monitors used with the MDA, Hercules, CGA, and EGA graphics adapters in early IBM Personal Computers and clones were controlled via TTL logic. Such monitors can usually be identified by a male DB-9 connector on the video cable. The primary disadvantage of TTL monitors was the extremely limited number of colors available, due to the low number of digital bits used for video signaling.

TTL monochrome monitors made use of only five of the nine pins: one pin was used as a ground, and two pins were used for horizontal and vertical synchronization. The electron gun was controlled by two separate digital signals: a video bit, and an intensity bit to control the brightness of the drawn pixels. Only four unique shades were possible: black, dim, medium, or bright.

CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signalling method known as RGBI, or Red Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey. A CGA monitor is only capable of rendering 16 unique colors. The CGA monitor was not exclusively used by PC based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.

EGA monitors used six digital signals to control the three electron guns in a signalling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright) resulting in 64 possible colors.
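
The color counts for both signalling schemes fall straight out of the bit arithmetic:

```python
# Color counts implied by the CGA and EGA digital signalling schemes.
from itertools import product

# CGA RGBI: one bit each for R, G, B plus a shared intensity bit.
rgbi_signals = list(product([0, 1], repeat=4))
print(len(rgbi_signals))   # -> 16 combinations

# EGA RrGgBb: each gun has its own 2-bit level (off/soft/medium/bright).
rrggbb_colors = list(product(range(4), repeat=3))
print(len(rrggbb_colors))  # -> 64 possible colors
```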

Although not supported in the original IBM specification, many vendors of clone graphics adapters implemented backwards monitor compatibility and auto-detection. For example, EGA cards produced by Paradise could operate as an MDA or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as an MDA or Hercules card if a monochrome monitor was used.

Modern technology

Analog RGB monitors
Most modern computer displays can show thousands or millions of different colors in the RGB color space by varying red, green, and blue signals in continuously variable intensities.

Digital and analog combination
Many monitors have analog signal relay, but some more recent models (mostly LCD screens) support digital input signals. It is a common misconception that all computer monitors are digital. For several years, televisions, composite monitors, and computer displays have been significantly different. However, as TVs have become more versatile, the distinction has blurred.

Configuration and usage

Multi-head

Some users use more than one monitor. The displays can operate in multiple modes. One of the most common spreads the entire desktop over all of the monitors, which thus act as one big desktop. The X Window System refers to this as Xinerama.

A monitor may also clone another monitor.

Dualhead - Using two monitors
Triplehead - using three monitors
Display assembly - multi-head configurations actively managed as a single unit

Virtual displays
The X Window System provides configuration mechanisms for using a single hardware monitor for rendering multiple virtual displays, as controlled (for example) with the Unix DISPLAY environment variable or the -display command-line option.

Major manufacturers
Apple Computer
BenQ
Dell, Inc.
Eizo
Iiyama Corporation
LaCie
LG Electronics
NEC Display Solutions
Philips
Samsung
Sony
ViewSonic

What is a Computer display?

0 comments
What is a Computer display?

Computer display

A cable connects the monitor to a video adapter (video card) that is installed in an expansion slot on the computer’s motherboard. This system converts signals into text and pictures and displays them on a TV-like screen (the monitor).

The computer sends a signal to the video adapter, telling it what character, image, or graphic to display. The video adapter converts that signal to a set of instructions that tell the display device (monitor) how to draw the image on the screen.

It is important that the monitor have a TCO Certification.

Cathode ray tube
The CRT, or cathode ray tube, is the picture tube of your monitor. Although it is a large vacuum tube, it is shaped more like a bottle. The tube tapers near the back where there is a negatively charged cathode, or electron gun. The electron gun shoots electrons at the back of the positively charged screen, which is coated with a phosphorous chemical. This excites the phosphors causing them to glow as individual dots called pixels (picture elements). The image you see on the monitor's screen is made up of thousands of tiny dots (pixels). If you have ever seen a child's LiteBrite toy, then you have a good idea of the concept. The distance between the pixels has a lot to do with the quality of the image. If the distance between pixels on a monitor screen is too great, the picture will appear fuzzy, or grainy. The closer together the pixels are, the sharper the image on screen. The distance between pixels on a computer monitor screen is called its dot pitch and is measured in millimeters. (See sidebar.) Most modern monitors have a monitor with a dot pitch of .28 mm or less.

Note: From an environmental point of view, the monitor is the most difficult computer peripheral to dispose of because of the lead it contains.

There are two electromagnets (yokes) around the collar of the tube, which bend the beam of electrons. The beam scans (is bent) across the monitor from left to right and top to bottom to create, or draw the image, line by line. The number of times in one second that the electron gun redraws the entire image is called the refresh rate and is measured in Hertz (Hz). If the scanning beam hits each line of pixels, in succession, on each pass, then the monitor is known as a non-interlaced monitor. The electron beam on an interlaced monitor scans the odd numbered lines on one pass, and then scans the even lines on the second pass. Interlaced Monitors are typically harder to look at, and have been attributed to eyestrain and nausea.

Imaging technologies

19" inch (48 cm) CRT computer monitorAs with television, several different hardware technologies exist for displaying computer-generated output:

Liquid crystal display (LCD). LCD-based monitors can accept both television and computer signals (SVGA, DVI, PAL, SECAM, NTSC). As of this writing (June 2006), LCDs are the most popular display device for new computers in North America.
Cathode ray tube (CRT)
Vector displays, as used on the Vectrex, many scientific and radar applications, and several early arcade machines (notably Asteroids). Vector graphics were always implemented using CRT displays, due to the requirement for a deflection system, though they can be emulated on any raster-based display.
Television receivers were used by most early personal and home computers, connecting composite video to the television set using a modulator. Image quality was reduced by the additional steps of composite video → modulator → TV tuner → composite video, though it reduced costs of adoption because one did not have to buy a specialized monitor.
Plasma display
Surface-conduction electron-emitter display (SED)
Video projector - implemented using LCD, CRT, or other technologies. Recent consumer-level video projectors are almost exclusively LCD based.
Organic light-emitting diode (OLED) display
During the era of early home computers, television sets were almost exclusively CRT-based.

Performance measurements
The relevant performance measurements of a monitor are:

Luminance
Size
Dot pitch. In general, the lower the dot pitch (e.g. 0.24 mm), the sharper the picture.
V-sync rate
Response time
Refresh rate

Display resolutions
A modern CRT display has considerable flexibility: it can usually handle a range of resolutions from 320 by 200 up to 2048 by 1536 pixels.
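One practical consequence of resolution is video memory: the adapter must hold at least one full frame in RAM. A minimal sketch of that arithmetic, using the resolutions above with illustrative color depths (the depths are assumptions, not part of any resolution standard):

```python
# Video memory needed to hold a single frame at a given resolution and
# color depth -- one reason higher resolutions demanded more video RAM.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes required for one uncompressed frame."""
    return width * height * bits_per_pixel // 8

# 320x200 at 8-bit color vs. 2048x1536 at 32-bit color (illustrative depths)
print(framebuffer_bytes(320, 200, 8))     # 64000 bytes (about 62.5 KiB)
print(framebuffer_bytes(2048, 1536, 32))  # 12582912 bytes (12 MiB)
```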

Issues and problems
Screen burn-in has long been an issue with CRT computer monitors and televisions. If an image is displayed on the screen for a long period without changing, the phosphors in that area wear unevenly and a ghost of the image becomes permanently visible. The phenomenon can often be seen on older ATMs. To prevent burn-in on computer monitors, it is recommended that you use a screensaver that changes the displayed image frequently; this is the main reason screensavers became common.

Another issue is that some LCD monitors may develop dead pixels over time. This was especially common with older LCD panels from the 1990s.

Both problems have become less common as display technology has improved.

With the exception of DLP, most display technologies (especially LCD) have an inherent misregistration of the color planes: the centres of the red, green, and blue dots do not line up perfectly. Subpixel rendering takes advantage of this misalignment; technologies making use of it include the Apple II from 1976 [1], and more recently Microsoft's ClearType (1998) and XFree86's X Rendering Extension.

Display interfaces

Computer Terminals
Early CRT-based VDUs (Visual Display Units) such as the DEC VT05 without graphics capabilities gained the label glass teletypes, because of the functional similarity to their electromechanical predecessors.

Composite monitors
Early home computers such as the Apple II and the Commodore 64 used composite monitors. Composite monitors are still commonly used with video game consoles.

Digital monitors
Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support LVDS, or TMDS protocols.

TTL monitors

IBM PC with green monochrome display

Monitors used with the MDA, Hercules, CGA, and EGA graphics adapters in early IBM Personal Computers and clones were controlled via TTL logic. Such monitors can usually be identified by the male DB-9 connector on the video cable. The primary disadvantage of TTL monitors was the extremely limited number of colors available, due to the low number of digital bits used for video signaling.

TTL monochrome monitors only made use of five of the nine pins. One pin was used as a ground, and two pins were used for horizontal/vertical synchronization. The electron gun was controlled by two separate digital signals: a video bit, and an intensity bit to control the brightness of the drawn pixels. Only four unique shades were possible: black, dim, medium, or bright.

CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signalling method known as RGBI, or Red Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey. A CGA monitor is only capable of rendering 16 unique colors. The CGA monitor was not exclusively used by PC based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.

EGA monitors used six digital signals to control the three electron guns in a signalling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright) resulting in 64 possible colors.
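The color counts for the two signalling schemes above can be checked by enumerating the possible wire states. In this small sketch the tuples stand in for signal levels on the connector, not actual palette values:

```python
# Counting colors under the digital signalling schemes described above.
# RGBI (CGA): three on/off guns plus one shared intensity bit.
# RrGgBb (EGA): each gun gets its own 2-bit level (off/soft/medium/bright).
from itertools import product

# CGA: every combination of R, G, B, I bits -> 2**4 = 16 colors
rgbi_colors = set(product((0, 1), repeat=4))
print(len(rgbi_colors))  # 16

# EGA: every combination of four levels per gun -> 4**3 = 64 colors
ega_colors = set(product(range(4), repeat=3))
print(len(ega_colors))   # 64
```

Note that on real CGA hardware the all-off-plus-intensity combination was remapped to dark grey rather than rendered literally, so the 16 combinations still yield 16 distinct on-screen colors.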

Although not supported by the original IBM specification, many vendors of clone graphics adapters implemented backwards monitor compatibility and auto-detection. For example, EGA cards produced by Paradise could operate as an MDA or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as an MDA or Hercules card if a monochrome monitor was used.

Modern technology

Analog RGB monitors
Most modern computer displays can show thousands or millions of different colors in the RGB color space by varying red, green, and blue signals in continuously variable intensities.

Digital and analog combination
Many monitors accept analog input signals, but some more recent models (mostly LCD screens) support digital input. It is a common misconception that all computer monitors are digital. For several years, televisions, composite monitors, and computer displays were significantly different. However, as TVs have become more versatile, the distinction has blurred.

Configuration and usage

Multi-head

Some users use more than one monitor. The displays can operate in multiple modes. One of the most common spreads the entire desktop over all of the monitors, which thus act as one big desktop. The X Window System refers to this as Xinerama.

A monitor may also clone another monitor.

Dualhead - Using two monitors
Triplehead - using three monitors
Display assembly - multi-head configurations actively managed as a single unit

Virtual displays
The X Window System provides configuration mechanisms for using a single hardware monitor to render multiple virtual displays, as controlled (for example) with the Unix DISPLAY environment variable or with the -display command-line option.

Major manufacturers
Apple Computer
BenQ
Dell, Inc.
Eizo
Iiyama Corporation
LaCie
LG Electronics
NEC Display Solutions
Philips
Samsung
Sony
ViewSonic

What is a Computer case?

0 comments
What is a Computer case?

Computer case

Cases are usually constructed from steel, aluminum, or plastic, although other materials (such as wood and perspex) have also been used in case designs.

Size and shape
Cases come in many different sizes, or form factors. As of 2006, the most popular form factor is ATX, although small form factor cases are becoming popular for a variety of uses.

A case with an ATX motherboard and power supply, for example, may still take on one of several specific shapes, also known as form factors. Common case form factors include towers (such as mini tower, mid-sized tower, and full-sized tower); desktops or pizza boxes (also called flatbed or horizontal); and slim desktops, which integrate the display into the housing. Tower cases are taller and typically have more room while desktop cases are more compact and are more popular in business environments.

Small form factor cases are a variety of cases that are becoming more and more common. Companies like Shuttle Inc. and AOpen have been producing such cases and FlexATX is the most common motherboard designed for them. Apple Computer has its Mac Mini computer, which is around the size of a CD-ROM drive.

Function

A computer case opened up and stripped of its motherboard and power supply unit

Cases usually come with room for a power supply unit, several expansion slots and expansion bays, wires for powering up a computer, and sometimes built-in I/O ports that must be connected to a motherboard.

Motherboards are screwed to the bottom or the side of the case, with their I/O ports exposed at the back. The power supply unit is usually at the top of the case, attached with several screws. The typical case has four 5.25" and three 3.5" expansion bays for devices such as hard drives, floppy disk drives, and CD-ROM drives. A power button, and sometimes a reset button, is usually located on the front. LED status lights for power and hard drive activity are often located near the power button and are powered by wires connected to the motherboard. Some cases come with status monitoring equipment, such as case temperature or processor speed monitors.

A panel on the side covers and protects the inside of the computer; it usually slides on and is held in place with a screw. Most cases require a large number of screws to put together, though recently there has been a move toward "screwless" cases held together by other means. Since the early 2000s, some computers have featured clear side panels so that the user can look into the computer while it is operating.

Appearance

The current iMac G5 contains the entire computer in a two-inch-thick screen—there is no tower.

Traditional designs are beige and rectangular, but case styles have evolved, especially after the introduction of the iMac in 1998. Beige box designs are typically found on budget machines; some people still prefer the traditional design.

Case modding is the artistic styling of computer encasings, often to draw attention to the use of advanced or unusual components. Modded cases may include internal lighting, custom paint, acrylic windows, or liquid cooling systems. Some case modding hobbyists build their own cases from raw materials like aluminum, steel, acrylic, or wood.

Stickers are common on cases. These may include the manufacturer's logo, the computer's specifications (CPU, RAM, hard drive, etc.), the operating system (such as "Designed for Windows XP"), and the processor (such as "Intel Inside").

Brands
Case manufacturers include Antec, Chieftec, Cooler Master, Ever Case, Lian Li, NZXT, SilverStone Technology, Thermaltake and Zalman. Cases may be composed of acrylic glass.

September 08, 2008

What is a Sound Card?

0 comments
What is a Sound Card?

Sound card

Typical uses of sound cards include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation/education, and entertainment (games). Many computers have sound capabilities built in, while others require these expansion cards if audio capability is desired.

General characteristics

Close-up of a sound card PCB, showing electrolytic capacitors (most likely for AC coupling), SMT capacitors and resistors, and a YAC512 two-channel 16-bit DAC.

A typical sound card includes a sound chip, usually featuring a digital-to-analog converter, that converts recorded or generated digital waveforms of sound into an analog format. This signal is led to a (typically 1/8-inch earphone-type) connector where an amplifier, headphones, or similar sound destination can be plugged in. More advanced designs usually include more than one sound chip to separate duties between digital sound production and synthesized sounds (usually for real-time generation of music and sound effects utilizing little data and CPU time).

Digital sound reproduction is usually achieved by multi-channel DACs, able to play multiple digital samples at different pitches and volumes, optionally applying real-time effects like filtering or distortion. Multi-channel digital sound playback can also be used for music synthesis if used with a digitized instrument bank of some sort, typically a small amount of ROM or Flash memory containing samples corresponding to the standard MIDI instruments. (A contrasting way to synthesize sound on a PC uses "audio codecs", which rely heavily on software for music synthesis, MIDI compliance and even multiple-channel emulation. This approach has become common as manufacturers seek to simplify the design and the cost of the sound card itself).

Most sound cards have a line in connector where the sound signal from a cassette tape recorder or similar sound source can be input. The sound card can digitize this signal and store it (controlled by the corresponding computer software) on the computer's hard disk for editing or further reproduction. Another typical external connector is the microphone connector, for connecting to a microphone or other input device that generates a relatively lower voltage than the line in connector. Input through a microphone jack is typically used by speech recognition software or Voice over IP applications.

Connections
Most sound cards since 1999 conform to Microsoft's PC 99 standard for color coding the external connectors as follows:

Pink: analog microphone input.
Light blue: analog line level input.
Lime green: analog line level output for the main stereo signal (front speakers or headphones).
Black: analog line level output for rear speakers.
Silver: analog line level output for side speakers.
Orange: S/PDIF digital output (sometimes used as an analog line output for a center speaker instead).

Voices vs channels
Another important characteristic of any sound card is the number of distinct voices (the number of sounds that can be played back simultaneously and independently) and the number of channels (the number of distinct electrical audio outputs).

For example, many older sound chips had three voices but only one audio channel (mono) into which all the voices were mixed, while the AdLib sound card had nine voices and one mono channel.

For a number of years, most PC sound cards had multiple FM synthesis voices (typically 9 or 18), which were mostly used for MIDI music, but only one (mono) or two (stereo) voices and channels dedicated to playing back digital sound samples; playing back more than one digital sound sample required performing a software downmix at a fixed sampling rate. Modern low-cost integrated soundcards using an audio codec like AC'97 still work that way, although they may have more than two sound output channels (surround sound).

Today, a sound card with hardware support for more than the two standard stereo voices is likely to be described as "providing hardware audio acceleration".

History of sound cards for the IBM PC architecture

A sound card based on a VIA Envy chip
Echo Digital Audio Corporation's Indigo IO, a PCMCIA 24-bit 96 kHz stereo in/out sound card

Sound cards for computers based on the IBM PC were uncommon until 1988, leaving the internal PC speaker as the only way early PC software could produce sound and music. The speaker was limited to square wave production, leading to the common nickname of "beeper" and the resulting sound described as "beeps and boops". Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker; the resulting audio, while functional, suffered from distorted output and low volume, and usually required all other processing to halt while sounds were played. Other home computer models of the 1980s included hardware support for digital sound playback or music synthesis (or both), leaving the IBM PC at a disadvantage when it came to multimedia applications such as music composition or gaming.

It is important to note that the initial design and marketing focuses of sound cards for the IBM PC platform were not based on gaming, but rather on specific audio applications such as music composition (AdLib Personal Music System, Creative Music System, IBM Music Feature Card) or on speech synthesis (Digispeech DS201, Covox Speech Thing, Street Electronics Echo). It took the involvement of Sierra and other game companies in 1988 to switch the focus toward gaming.

Hardware manufacturers
One of the first manufacturers of sound cards for the IBM PC was AdLib, who produced a card based on the Yamaha YM3812 sound chip, aka the OPL2. The AdLib had two modes: A 9-voice mode where each voice could be fully programmed, and a lesser-used "percussion" mode that used 3 regular voices to produce 5 independent percussion-only voices for a total of 11. (The percussion mode was considered inflexible by most developers, so it was used mostly by AdLib's own composition software.)

Creative Labs also marketed a sound card at the same time called the Creative Music System. Although the C/MS had twelve voices to AdLib's nine, and was a stereo card while the AdLib was mono, the basic technology behind it was based on the Philips SAA 1099 which was essentially a square-wave generator. Sounding not unlike twelve simultaneous PC speakers, it never caught on the way the AdLib did, even after Creative marketed it a year later through Radio Shack as the Game Blaster. The Game Blaster retailed for under $100 and included the hit game title Silpheed.

Probably the most significant historical change in the history of sound cards came when Creative Labs produced the Sound Blaster card. The Sound Blaster cloned the AdLib, and also added a sound coprocessor to record and play back digital audio (presumably an Intel microcontroller, which Creative incorrectly called a "DSP" to suggest it was a digital signal processor), a game port for adding a joystick, and the ability to interface to MIDI equipment (using the game port and a special cable). With more features at nearly the same price point, and compatibility with existing AdLib titles, most first-time buyers chose the Sound Blaster. The Sound Blaster eventually outsold the AdLib and set the stage for dominating the market.

The Sound Blaster line of cards, in tandem with the first cheap CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to computer games, or even reproduce motion video (albeit at much lower resolutions and quality). The widespread adoption of Sound Blaster support in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound needed to address Sound Blaster compatibility if they were to compete against it.

Industry adoption
When game company Sierra On-Line opted to support add-on music hardware (instead of built-in hardware such as the PC speaker and the built-in sound capabilities of the IBM PCjr and Tandy 1000), the concept of what sound and music could be on the IBM PC changed dramatically. Two of the companies Sierra partnered with were Roland and AdLib, opting to produce in-game music for King's Quest 4 that supported the Roland MT-32 and AdLib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Since it was the most sophisticated synthesizer they supported, Sierra chose to use most of the MT-32's custom features and unconventional instrument patches to produce background sound effects (birds chirping, horses clopping, etc.) before the Sound Blaster brought playing real audio clips to the PC entertainment world. Many game companies would write for the MT-32 but support the AdLib as an alternative, due to the latter's larger installed base. The adoption of the MT-32 led the way for the creation of the MPU-401/Roland Sound Canvas and General MIDI standards as the most common means of playing in-game music until the mid-1990s.

Feature evolution
Most ISA bus soundcards could not record and play digitized sound simultaneously, mostly due to inferior card DSPs. Later PCI bus cards fixed these limitations and are mostly full-duplex.

For years, soundcards had only one or two channels of digital sound (most notably the Sound Blaster series and their compatibles), with the notable exception of the Gravis Ultrasound family, which had hardware support for up to 32 independent channels of digital audio. Early games and MOD players needing more channels than the card could support had to resort to mixing multiple channels in software. Today, most good quality sound cards have hardware support for at least 16 channels of digital audio, but others, like those that use cheap audio codecs, still rely partially or completely on software (either device drivers or the operating system itself) to mix multiple audio channels.
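The software mixing described above can be sketched as a minimal downmix: several independent voices are summed into one output channel of signed 16-bit samples, with hard clipping to keep the result inside the sample range. This is an illustrative sketch, not the algorithm any particular driver uses:

```python
# Minimal software downmix: sum several voices into one mono channel
# of signed 16-bit samples, hard-clipping to the valid range.

def mix_voices(voices):
    """Mix equal-length lists of 16-bit samples into a single channel."""
    mixed = []
    for samples in zip(*voices):
        total = sum(samples)
        # Hard-clip rather than wrap, as a simple mixer would.
        total = max(-32768, min(32767, total))
        mixed.append(total)
    return mixed

voice_a = [1000, 20000, -30000]
voice_b = [500, 20000, -10000]
print(mix_voices([voice_a, voice_b]))  # [1500, 32767, -32768]
```

Doing this for every output sample is why software mixing costs CPU time, and why hardware channels were a selling point.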

Sound devices other than expansion cards

Integrated sound on the PC
In 1984, the IBM PCjr debuted with a rudimentary 3-voice sound synthesis chip, the SN76489, capable of generating three square-wave tones with variable amplitude, and a pseudo white noise channel that could generate primitive percussion sounds. The Tandy 1000, initially being a clone of the PCjr, duplicated this functionality, with the Tandy TL/SL/RL line adding digital sound recording/playback capabilities.

In the late 1990s, many computer manufacturers began to replace plug-in soundcards with a "codec" (actually a combined audio AD/DA-converter) integrated into the motherboard. Many of these used Intel's AC97 specification. Others used cheap ACR slots.

As of 2005, these "codecs" usually lack the hardware for direct music synthesis or even multi-channel sound, with special drivers and software making up for these deficiencies at the expense of CPU time (for example, MIDI reproduction consumes 10-15% of CPU time on an Athlon XP 1600+).

Nevertheless, some manufacturers offered (and offer, as of 2006) motherboards with integrated "real" (non-codec) soundcards usually in the form of a custom chipset providing e.g. full ISA or PCI Soundblaster compatibility, thus saving an expansion slot while providing the user with a (relatively) high quality soundcard.

Integrated sound on other platforms
Various computers which do not use the IBM PC architecture, such as Apple's Macintosh, and workstations from manufacturers like Sun, have had their own motherboard-integrated sound devices. In some cases these provide very advanced capabilities for their time of manufacture; in most, they are minimal systems. Some of these platforms have also had sound cards designed for their bus architectures, which of course cannot be used in a standard PC.

USB sound cards
While not literally sound cards, since they don't plug into slots inside a computer and usually are not card-shaped, there are devices called USB sound cards. They attach to a computer via a USB cable. The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices on the market.

Other outboard sound devices
USB Sound Cards are far from the first external devices allowing a computer to record or synthesize sound. Virtually any method that was once common for getting an electrical signal in or out of a computer has probably been used to attempt to produce sound.

Driver architecture
To use a sound card, the operating system typically requires a specific device driver. Some operating systems include the drivers for some or all cards available, in other cases the drivers are supplied with the card itself, or are available for download.

DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Sound System etc.) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card. Some card manufacturers provided (sometimes inefficient) middleware TSR-based drivers for their products, and some programs simply had drivers incorporated into the program itself for the sound cards that were supported.
Microsoft Windows uses proprietary drivers generally written by the sound card manufacturers. Many makers supply the drivers to Microsoft for inclusion on Windows distributions. Sometimes drivers are also supplied by the individual vendors for download and installation. Bug fixes and other improvements are likely to be available faster via downloading, since Windows CDs cannot be updated as frequently as a web or FTP site. Vista will use UAA.
A number of versions of UNIX make use of the portable Open Sound System. Drivers are seldom produced by the card manufacturer.
Most Linux-based distributions make use of the Advanced Linux Sound Architecture, but have taken measures to remain compatible with the Open Sound System.

What is a Sound Card?

0 comments
What is a Sound Card?

Sound card

Typical uses of sound cards include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation/education, and entertainment (games). Many computers have sound capabilities built in, while others require these expansion cards if audio capability is desired.

General characteristics

Close-up of a sound card PCB, showing electrolytic capacitors (most likely for AC coupling), SMT capacitors and resistors, and a YAC512 two-channel 16-bit DAC.A typical sound card includes a sound chip, usually featuring a digital-to-analog converter, that converts recorded or generated digital waveforms of sound into an analog format. This signal is led to a (typically 1/8-inch earphone-type) connector where an amplifier, headphones, or similar sound destination can be plugged in. More advanced designs usually include more than one sound chip to separate duties between digital sound production and synthesized sounds (usually for real-time generation of music and sound effects utilizing little data and CPU time).

Digital sound reproduction is usually achieved by multi-channel DACs, able to play multiple digital samples at different pitches and volumes, optionally applying real-time effects like filtering or distortion. Multi-channel digital sound playback can also be used for music synthesis if used with a digitized instrument bank of some sort, typically a small amount of ROM or Flash memory containing samples corresponding to the standard MIDI instruments. (A contrasting way to synthesize sound on a PC uses "audio codecs", which rely heavily on software for music synthesis, MIDI compliance and even multiple-channel emulation. This approach has become common as manufacturers seek to simplify the design and the cost of the sound card itself).

Most sound cards have a line in connector where the sound signal from a cassette tape recorder or similar sound source can be input. The sound card can digitize this signal and store it (controlled by the corresponding computer software) on the computer's hard disk for editing or further reproduction. Another typical external connector is the microphone connector, for connecting to a microphone or other input device that generates a relatively lower voltage than the line in connector. Input through a microphone jack is typically used by speech recognition software or Voice over IP applications.

Connections
Most sound cards since 1999 conform to Microsoft's PC 99 standard for color coding the external connectors as follows:

Color Function
Pink Analog microphone input.
Light blue Analog line level input.
Lime green Analog line level output for the main stereo signal (front speakers or headphones).
Black Analog line level output for rear speakers.
Silver Analog line level output for side speakers.
Orange S/PDIF digital output (sometimes used as an analog line output for a center speaker instead)

Voices vs channels
Another important characteristic of any sound card is the number of distinct voices (intended as the number of sounds that can be played back simultaneously and independently) and the number of channels (intended as the number of distinct electrical audio outputs).

For example, many older sound chips had three voices, but only one audio channel (mono) where all the voices were mixed into, while the AdLib sound card had 9 voice and 1 mono channel.

For a number of years, most PC sound cards had multiple FM synthesis voices (typically 9 or 18) which were mostly used for MIDI music, but only one (mono) or two(stereo) voice(s) and channel(s) dedicated to playing back digital sound samples, and playing back more than one digital sound sample required performing a software downmix at a fixed sampling rate. Modern low-cost integrated soundcards using an audio codec like the AC'97 still work that way, although they may have more than two sound output channels (surround sound).

Today, a sound card having hardware support for more than the two standard stereo voices, is likely to referred at as "providing hardware audio acceleration".

History of sound cards for the IBM PC architecture

A sound card based on VIA Envy chip
Echo Digital Audio Corporation's Indigo IO — PCMCIA card 24-bit 96 kHz stereo in/out sound cardSound cards for computers based on the IBM PC were uncommon until 1988, leaving the internal PC speaker as the only way early PC software could produce sound and music. The speaker was limited to square wave production, leading to the common nickname of "beeper" and the resulting sound described as "beeps and boops". Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker; the resulting audio, while functional, suffered from distorted output and low volume, and usually required all other processing to halt while sounds were played. Other home computer models of the 1980s included hardware support for digital sound playback or music synthesis (or both), leaving the IBM PC at a disadvantage when it came to multimedia applications such as music composition or gaming.

It is important to note that the initial design and marketing focuses of sound cards for the IBM PC platform were not based on gaming, but rather on specific audio applications such as music composition (AdLib Personal Music System, Creative Music System, IBM Music Feature Card) or on speech synthesis (Digispeech DS201, Covox Speech Thing, Street Electronics Echo). It took the involvement of Sierra and other game companies in 1988 to switch the focus toward gaming.

Hardware manufacturers
One of the first manufacturers of sound cards for the IBM PC was AdLib, who produced a card based on the Yamaha YM3812 sound chip, aka the OPL2. The AdLib had two modes: A 9-voice mode where each voice could be fully programmed, and a lesser-used "percussion" mode that used 3 regular voices to produce 5 independent percussion-only voices for a total of 11. (The percussion mode was considered inflexible by most developers, so it was used mostly by AdLib's own composition software.)

Creative Labs also marketed a sound card at the same time called the Creative Music System. Although the C/MS had twelve voices to AdLib's nine, and was a stereo card while the AdLib was mono, the basic technology behind it was based on the Philips SAA 1099 which was essentially a square-wave generator. Sounding not unlike twelve simultaneous PC speakers, it never caught on the way the AdLib did, even after Creative marketed it a year later through Radio Shack as the Game Blaster. The Game Blaster retailed for under $100 and included the hit game title Silpheed.

Probably the most significant historical change in the history of sound cards came when Creative Labs produced the Sound Blaster card. The Sound Blaster cloned the AdLib, and also added a sound coprocessor to record and play back digital audio (presumably an Intel microcontroller, which Creative incorrectly called a "DSP" to suggest it was a digital signal processor), a game port for adding a joystick, and the ability to interface to MIDI equipment (using the game port and a special cable). With more features at nearly the same price point, and compatibility with existing AdLib titles, most first-time buyers chose the Sound Blaster. The Sound Blaster eventually outsold the AdLib and set the stage for dominating the market.

The Sound Blaster line of cards, in tandem with the first cheap CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to computer games, or even reproduce motion video (albeit at much lower resolutions and quality). The widespread adoption of Sound Blaster support in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound needed to address Sound Blaster compatibility if they were to compete against it.

Industry adoption
When game company Sierra On-Line opted to support add-on music hardware (instead of built-in hardware such as the PC speaker and the built-in sound capabilities of the IBM PCjr and Tandy 1000), the concept of what sound and music could be on the IBM PC changed dramatically. Two of the companies Sierra partnered with were Roland and AdLib, producing in-game music for King's Quest 4 that supported the Roland MT-32 and the AdLib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Since the MT-32 was the most sophisticated synthesizer they supported, Sierra used most of its custom features and unconventional instrument patches to produce background sound effects (birds chirping, horses clopping, etc.) before the Sound Blaster brought playback of real audio clips to the PC entertainment world. Many game companies would write for the MT-32 but support the AdLib as an alternative because of the latter's larger installed base. The adoption of the MT-32 led the way for the MPU-401/Roland Sound Canvas and General MIDI standards, which became the most common means of playing in-game music until the mid-1990s.

Feature evolution
Most ISA bus soundcards could not record and play digitized sound simultaneously, mostly due to inferior card DSPs. Later PCI bus cards fixed these limitations and are mostly full-duplex.

For years, soundcards had only one or two channels of digital sound (most notably the Sound Blaster series and their compatibles), with the notable exception of the Gravis Ultrasound family, which had hardware support for up to 32 independent channels of digital audio. Early games and MOD players that needed more channels than the card could support had to resort to mixing multiple channels in software. Today, most good-quality sound cards have hardware support for at least 16 channels of digital audio, but others, such as those built around cheap audio codecs, still rely partly or completely on software mixing, with either the device driver or the operating system itself performing a software downmix of the multiple audio channels.
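The software downmix mentioned above can be sketched in a few lines. The sketch below is a simplified illustration (not any particular driver's code): each channel's samples are scaled by a volume, summed, and clamped to the signed 16-bit sample range.

```python
def software_downmix(channels, volumes):
    """Mix several equal-length streams of 16-bit samples into one,
    as a driver might when the hardware offers fewer channels than
    the application uses."""
    n = len(channels[0])
    mixed = []
    for i in range(n):
        s = sum(int(ch[i] * vol) for ch, vol in zip(channels, volumes))
        # Clamp to the signed 16-bit range to avoid wrap-around distortion.
        mixed.append(max(-32768, min(32767, s)))
    return mixed

# Two channels mixed at full volume:
out = software_downmix([[1000, -2000], [500, 30000]], [1.0, 1.0])
# out == [1500, 28000]
```

Real mixers also resample and dither, but the core idea is exactly this scale-sum-clamp loop, which is why it costs CPU time proportional to the number of active channels.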

Sound devices other than expansion cards

Integrated sound on the PC
In 1984, the IBM PCjr debuted with a rudimentary 3-voice sound synthesis chip, the SN76489, capable of generating three square-wave tones with variable amplitude, and a pseudo white noise channel that could generate primitive percussion sounds. The Tandy 1000, initially being a clone of the PCjr, duplicated this functionality, with the Tandy TL/SL/RL line adding digital sound recording/playback capabilities.

In the late 1990s, many computer manufacturers began to replace plug-in soundcards with a "codec" (actually a combined audio AD/DA-converter) integrated into the motherboard. Many of these used Intel's AC97 specification. Others used cheap ACR slots.

As of 2005, these "codecs" usually lack the hardware for direct music synthesis or even multi-channel sound; special drivers and software make up for these shortcomings at the expense of CPU time (for example, MIDI reproduction consumes 10-15% of CPU time on an Athlon XP 1600+).

Nevertheless, some manufacturers offered (and offer, as of 2006) motherboards with integrated "real" (non-codec) soundcards, usually in the form of a custom chipset providing, for example, full ISA or PCI Sound Blaster compatibility, thus saving an expansion slot while providing the user with a (relatively) high-quality soundcard.

Integrated sound on other platforms
Various computers which do not use the IBM PC architecture, such as Apple's Macintosh, and workstations from manufacturers like Sun, have had their own motherboard-integrated sound devices. In some cases these provide very advanced capabilities (for their time of manufacture); in most, they are minimal systems. Some of these platforms also had sound cards designed for their bus architectures, which of course cannot be used in a standard PC.

USB sound cards
While not literally sound cards (they do not plug into slots inside the computer, and are usually not rectangular card shapes), there are devices called USB sound cards. These attach to a computer via a USB cable. The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices on the market.

Other outboard sound devices
USB Sound Cards are far from the first external devices allowing a computer to record or synthesize sound. Virtually any method that was once common for getting an electrical signal in or out of a computer has probably been used to attempt to produce sound.

Driver architecture
To use a sound card, the operating system typically requires a specific device driver. Some operating systems include drivers for some or all available cards; in other cases the drivers are supplied with the card itself, or are available for download.

DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Sound System etc.) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card. Some card manufacturers provided (sometimes inefficient) middleware TSR-based drivers for their products, and some programs simply had drivers incorporated into the program itself for the sound cards that were supported.
Microsoft Windows uses proprietary drivers generally written by the sound card manufacturers. Many makers supply the drivers to Microsoft for inclusion in Windows distributions. Sometimes drivers are also supplied by the individual vendors for download and installation. Bug fixes and other improvements tend to be available faster via download, since Windows CDs cannot be updated as frequently as a web or FTP site. Windows Vista uses the Universal Audio Architecture (UAA).
A number of versions of UNIX make use of the portable Open Sound System. Drivers are seldom produced by the card manufacturer.
Most Linux-based distributions make use of the Advanced Linux Sound Architecture, but have taken measures to remain compatible with the Open Sound System.

September 07, 2008

What is a Graphics Card?

0 comments
What is a Graphics Card?

Graphics Card

A graphics card (also known as a video card) generates the images a computer sends to its display. The term is usually used to refer to a separate, dedicated expansion card that is plugged into a slot on the computer's motherboard, as opposed to a graphics controller integrated into the motherboard chipset.

Hardware
A video card consists of a printed circuit board on which the components are mounted. These include:

Graphics processing unit (GPU)
The GPU is a microprocessor dedicated to manipulating and rendering graphics according to the instructions received from the computer's operating system and the software being used. At their simplest level, GPUs include functions for manipulating two-dimensional graphics, such as blitting. Modern and more advanced GPUs also include functions for generating and manipulating three-dimensional graphics elements, rendering objects with shading, lighting, texture mapping and other visual effects.

Video memory
Unlike integrated video controllers, which usually share memory with the rest of the computer, most video cards have their own separate onboard memory, referred to as video RAM (VRAM). VRAM is used to store the display image, as well as textures, buffers (the Z-buffer necessary for rendering 3D graphics, for example) and other elements. VRAM typically runs at higher speeds than desktop RAM. For the most part, current graphics cards use GDDR3 or GDDR4, whereas desktop RAM still uses DDR2.

Video BIOS
The video BIOS, or firmware, is a chip that contains the basic program governing the video card's operations and provides the instructions that allow the computer and software to interface with the card.

Connects to:
Motherboard via one of
AGP
PCI Express
PCI

Display via one of
VGA connector
Digital Visual Interface
Composite video
Component Video

Common Manufacturers:
ATI
NVIDIA

September 06, 2008

What is an HDD (Hard Disk)?

0 comments
What is an HDD (Hard Disk)?

Hard disk

Strictly speaking, "drive" refers to an entire unit containing hard disk, read/write head assembly, driver electronics, and motor while "hard disk" (sometimes "platter") refers to the storage medium itself.

Hard disks were originally developed for use with computers. In the 21st century, applications for hard disks have expanded beyond computers to include video recorders, audio players, digital organizers, and digital cameras. In 2005 the first cellular telephones to include hard disks were introduced by Samsung and Nokia. The need for large-scale, reliable storage, independent of a particular device, led to the introduction of configurations such as RAID, hardware such as network attached storage (NAS) devices, and systems such as storage area networks (SANs) for efficient access to large volumes of data.

Hard disks record information by magnetizing a magnetic material in a pattern that represents the data. They read the data back by detecting the magnetization of the material. A typical hard disk design consists of a spindle which holds one or more flat circular disks called platters, onto which the data is recorded. The platters are made from a non-magnetic material, usually glass or aluminum, and are coated with a thin layer of magnetic material. Older disks used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy.

The platters are spun at very high speeds. Information is written to a platter as it rotates past mechanisms called read-and-write heads that fly very close over the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm moves the heads in an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of its platter.

A cross section of the magnetic surface in action; in this case the binary data is encoded using frequency modulation.

The magnetic surface of each platter is divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. In today's hard disks each of these magnetic regions is composed of a few hundred magnetic grains. Each magnetic region forms a magnetic dipole which generates a highly localised magnetic field nearby. The write head magnetizes a magnetic region by generating a strong local magnetic field nearby. Early hard disks used the same inductor that was used to read the data as an electromagnet to create this field. Later, metal-in-gap (MIG) heads were used, and today thin-film heads are common. With these later technologies, the read and write head are separate mechanisms, but are on the same actuator arm.

Hard disks have a mostly sealed enclosure that protects the disk internals from dust, condensation, and other sources of contamination. The hard disk's read-write heads fly on an air bearing which is a cushion of air only nanometers above the disk surface. The disk surface and the disk's internal environment must therefore be kept immaculate to prevent damage from fingerprints, hair, dust, smoke particles and such, given the sub-microscopic gap between the heads and disk.

Using rigid platters and sealing the unit allows much tighter tolerances than in a floppy disk. Consequently, hard disks can store much more data than floppy disks and can access and transmit it faster. In 2006, a typical workstation hard disk might store between 80 GB and 1 TB of data, rotate at 7,200 to 10,000 revolutions per minute (RPM), and have a sequential media transfer rate of over 50 MB/s. The fastest workstation and server hard disks spin at 15,000 RPM and can achieve sequential media transfer speeds up to and beyond 80 MB/s. Laptop hard disks, which are physically smaller than their desktop counterparts, tend to be slower and have less capacity. Most spin at only 4,200 RPM or 5,400 RPM, whereas the newest top models spin at 7,200 RPM.
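The rotational speeds above translate directly into rotational latency: on average, the head must wait half a revolution for the desired data to rotate under it. A quick sketch of that arithmetic, using the RPM figures mentioned in this section:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency in milliseconds: half a revolution."""
    # One revolution takes 60000/rpm milliseconds.
    return (60000.0 / rpm) / 2

# At 7,200 RPM a revolution takes about 8.33 ms, so the average wait
# is about 4.17 ms; at 15,000 RPM it drops to exactly 2 ms.
print(round(avg_rotational_latency_ms(7200), 2))   # 4.17
print(avg_rotational_latency_ms(15000))            # 2.0
```

This is one reason server disks spin faster: rotational latency, unlike transfer rate, cannot be improved by packing bits more densely.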

Capacity
The capacity of hard disks has grown dramatically over time. The first commercial disk, the IBM RAMAC introduced in 1956, stored 5 million characters (about 5 megabytes) on fifty 24-inch diameter disks. (See early IBM disk storage.) With early personal computers in the 1980s, a disk with a 20 megabyte capacity was considered large. In the latter half of the 1990s, hard disks with capacities of 1 gigabyte and greater became available. As of 2006, the "smallest" desktop hard disk still in production has a capacity of 20 gigabytes, while the largest-capacity internal disks hold 3/4 of a terabyte (750 gigabytes), with external disks at or exceeding one terabyte by using multiple internal disks. These new internal disks achieved their increased storage capacities with perpendicular recording.

This has enabled the commercial viability of consumer products that require large storage capacities, such as the Apple iPod digital music player, the TiVo personal video recorder, and web-based email programs.[1] This is also gradually but significantly altering how programmers think; in many programming tasks there is a time-space tradeoff, so as space becomes cheaper and cheaper relative to CPU cycles the appropriate choice about time versus space changes. For instance in database work it is now common practice to store precomputed views, transitive closures, and the like on disk in order to speed up queries; 20 years ago such profligate use of disk space would have been impractical.

A vice president of Seagate projects future growth in disk density of 40% per year.[1] Access times have not kept up with throughput increases, which themselves have not kept up with growth in storage capacity. One way to improve both would be to increase the number of read-write heads per disk, but since the flying heads are among the most expensive components of a hard disk, adding more of them is not economical. Currently, the most promising way to reduce access times and increase throughput is to replace rotating disks with nonvolatile random access memory (NVRAM) or, possibly, holographic technology.

Capacity measurements

Hard disk manufacturers typically specify disk capacity using the SI definition of the prefixes "mega" and "giga." This is largely for historical reasons. Disks with multi-million byte capacity have been used since 1956, long before there were standard binary prefixes. (The IEC only standardized binary prefixes in 1999.) Many practitioners early on in the computer and semiconductor industries used the prefix kilo to describe 2^10 (1024) bits, bytes or words because 1024 is "close enough" to 1000. Similar usage has been applied to the prefixes "mega," "giga," "tera," and even "peta." Often this non-SI conforming usage is noted by a qualifier such as "1 kB = 1,024 bytes" but the qualifier is sometimes omitted, particularly in marketing literature.

Operating systems, such as Microsoft Windows, frequently report capacity using the binary interpretation of the prefixes, which results in a discrepancy between the disk manufacturer's stated capacity and what the system reports. The difference becomes much more noticeable in the multi-gigabyte range. For example, Microsoft's Windows 2000 reports disk capacity both in decimal to 12 or more significant digits and with binary prefixes to 3 significant digits. Thus a disk specified by a disk manufacturer as a 30 GB disk might have its capacity reported by Windows 2000 both as "30,065,098,568 bytes" and "28.0 GB." The disk manufacturer used the SI definition of "giga," 10^9. However, utilities provided by Windows define a gigabyte as 2^30 (1,073,741,824) bytes, so the reported capacity of the disk will be closer to 28.0 GB. For this reason, many utilities that report capacity have begun to use the aforementioned IEC standard binary prefixes (e.g. KiB, MiB, GiB), since their definitions are unambiguous.
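The 30 GB example above is easy to verify. Using the byte count quoted in the paragraph, the same number comes out as roughly 30 decimal gigabytes or roughly 28 binary gigabytes (GiB), depending only on which definition of "giga" is applied:

```python
capacity = 30_065_098_568        # bytes, as reported for the "30 GB" disk

decimal_gb = capacity / 10**9    # manufacturer's "giga" = 10^9
binary_gb = capacity / 2**30     # Windows' "giga" = 2^30

print(round(decimal_gb, 2))      # 30.07
print(round(binary_gb, 2))       # 28.0
```

The roughly 7% gap at the "giga" level grows with each prefix step, since each binary prefix is 1024/1000 times its SI counterpart.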

Some people mistakenly attribute the discrepancy in reported and specified capacities to reserved space used for file system and partition accounting information. However, for large (several GiB) filesystems, this data rarely occupies more than a few MiB, and therefore cannot possibly account for the apparent "loss" of tens of GBs.

The capacity of a hard disk can be calculated by multiplying the number of cylinders by the number of heads by the number of sectors by the number of bytes/sector (most commonly 512).
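As a sketch of that calculation (the geometry below is a classic illustrative example, not any specific disk model):

```python
def chs_capacity_bytes(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    """Capacity from a cylinders/heads/sectors geometry, as described above."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

# The well-known 1024 x 16 x 63 geometry of early PC BIOSes:
cap = chs_capacity_bytes(1024, 16, 63)
print(cap)             # 528482304 bytes
print(cap / 2**20)     # 504.0 (binary megabytes)
```

This particular geometry is why early PC BIOSes famously could not address disks larger than about 504 MiB without translation schemes.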

History

IBM 62PC "Piccolo" HDD, circa 1979 - an early 8" disk.

For many years, hard disks were large, cumbersome devices, more suited to use in the protected environment of a data center or large office than in a harsh industrial environment (due to their delicacy), or a small office or home (due to their size and power consumption). Before the early 1980s, most hard disks had 8-inch (20 cm) or 14-inch (35 cm) platters, required an equipment rack or a large amount of floor space (especially the large removable-media disks, which were often referred to as "washing machines"), and in many cases needed high-current or even three-phase power hookups due to the large motors they used. Because of this, hard disks were not commonly used with microcomputers until after 1980, when Seagate Technology introduced the ST-506, the first 5.25-inch hard disk, with a capacity of 5 megabytes. In fact, in its factory configuration, the original IBM PC (IBM 5150) was not equipped with a hard disk.

Most microcomputer hard disks in the early 1980s were not sold under their manufacturer's names, but by OEMs as part of larger peripherals (such as the Corvus Disk System and the Apple ProFile). The IBM PC/XT had an internal hard disk, however, and this started a trend toward buying "bare" disks (often by mail order) and installing them directly into a system. Hard disk makers started marketing to end users as well as OEMs, and by the mid-1990s, hard disks had become available on retail store shelves.

While internal disks became the system of choice on PCs, external hard disks remained popular for much longer on the Apple Macintosh and other platforms. Every Mac made between 1986 and 1998 has a SCSI port on the back, making external expansion easy. External SCSI disks were also popular with older microcomputers such as the Apple II series, and were also used extensively in servers, a usage which is still popular today. The appearance in the late 1990s of high-speed external interfaces such as USB and FireWire has made external disk systems popular among PC users once again, especially for users who move large amounts of data between two or more locations, and most hard disk makers now make their disks available in external cases.

Hard disk characteristics

5.25" MFM 110 MB hard disk (2.5" IDE 6495 MB hard disk, US & UK pennies for comparison).

Capacity, usually quoted in gigabytes (older hard disks quoted their smaller capacities in megabytes).
Physical size, usually quoted in inches:
Almost all hard disks today are of either the 3.5" or 2.5" varieties, used in desktops and laptops, respectively. 2.5" disks are usually slower and have less capacity but use less power and are more tolerant of movement. An increasingly common size is the 1.8" disks used in portable MP3 players and subnotebooks, which have very low power consumption and are highly shock-resistant. Additionally, there is the 1" form factor designed to fit the dimensions of CF Type II, which is also usually used as storage for portable devices including digital cameras. 1" was a de facto form factor led by IBM's Microdrive, but is now generically called 1" due to other manufacturers producing similar products. There is also a 0.85" form factor produced by Toshiba for use in mobile phones and similar applications. The size designations can be slightly confusing: for example, a 3.5" disk has a case that is 4" wide. Furthermore, server-class hard disks also come in both 3.5" and 2.5" form factors.
Reliability, usually given in terms of Mean Time Between Failures (MTBF):
SATA 1.0 disks support speeds up to 10,000 rpm and MTBF levels up to 1 million hours under an eight-hour, low-duty cycle. Fibre Channel (FC) disks support up to 15,000 rpm and an MTBF of 1.4 million hours under a 24-hour duty cycle.
Number of I/O operations per second:
Modern disks can perform around 50 random-access or 100 sequential-access operations per second.
Power consumption (especially important in battery-powered laptops).
Audible noise in dBA (although many manufacturers still report it in bels, not decibels).
G-shock rating (surprisingly high in modern disks).
Transfer Rate:
Inner Zone: from 44.2 MB/s to 74.5 MB/s.
Outer Zone: from 74.0 MB/s to 111.4 MB/s.
Random access time: from 5 ms to 15 ms.
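The figures above can be combined into rough back-of-the-envelope estimates of how long a single read takes. The sketch below uses illustrative numbers drawn from the ranges listed; it shows why small random reads are dominated by access time while large sequential reads are dominated by the transfer rate:

```python
def read_time_ms(size_mb, access_ms, transfer_mb_s):
    """Rough service time for one read: one access plus the sequential transfer."""
    return access_ms + (size_mb / transfer_mb_s) * 1000

# A small 4 KB read at 10 ms access time and 75 MB/s transfer rate
# is almost entirely access time...
print(round(read_time_ms(0.004, 10, 75), 2))   # 10.05
# ...while a 100 MB sequential read is almost entirely transfer time:
print(round(read_time_ms(100, 10, 75), 1))     # 1343.3
```

This asymmetry is why random access time, not peak transfer rate, usually determines how "fast" a disk feels in everyday use.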

Integrity

Close-up of a hard disk head suspended above the disk platter, together with its mirror image in the smooth surface of the magnetic platter.

The hard disk's spindle system relies on air pressure inside the enclosure to support the heads at their proper flying height while the disk is in motion. A hard disk requires a certain range of air pressures in order to operate properly. The connection to the external environment and pressure occurs through a small hole in the enclosure (about 1/2 mm in diameter), usually with a carbon filter on the inside (the breather filter, see below). If the air pressure is too low, there will not be enough lift for the flying head, the head will not be at the proper height, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation, above about 10,000 feet (3,000 m). This does not apply to pressurized enclosures, like an airplane pressurized cabin. Modern disks include temperature sensors and adjust their operation to the operating environment.

Very high humidity for extended periods can cause accelerated wear of the heads and platters by corrosion. If the disk uses "Contact Start/Stop" (CSS) technology to park its heads on the platters when not operating, increased humidity can also lead to increased stiction (the tendency for the heads to stick to the platter surface). This can cause physical damage to the platter and spindle motor and can also lead to head crash. Breather holes can be seen on all disks — they usually have a warning sticker next to them, informing the user not to cover the holes. The air inside the operating disk is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation (or "recirc") filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation.

Due to the extremely close spacing between the heads and the disk surface, any contamination of the read-write heads or platters can lead to a head crash — a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film. For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) will still result in the head temporarily overheating, due to friction with the disk surface, and can render the data unreadable for a short period until the head temperature stabilizes (so-called "thermal asperity," a problem which can partially be dealt with by proper electronic filtering of the read signal). Head crashes can be caused by electronic failure, a sudden power failure, physical shock, wear and tear, corrosion, or poorly manufactured platters and heads. In most desktop and server disks, when powering down, the heads are moved to a landing zone, an area of the platter usually near its inner diameter (ID), where no data is stored. This area is called the CSS (Contact Start/Stop) zone. However, especially in old models, sudden power interruptions or a power supply failure can sometimes result in the device shutting down with the heads in the data zone, which increases the risk of data loss. In fact, it used to be standard procedure to "park" the hard disk before shutting down the computer. Newer disks are designed such that either a spring (at first) or (more recently) rotational inertia in the platters is used to safely park the heads in the case of unexpected power loss.

The hard disk's electronics control the movement of the actuator and the rotation of the disk, and perform reads and writes on demand from the disk controller. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media which have failed. Also, most major hard disk and motherboard vendors now support self-monitoring, analysis, and reporting technology (S.M.A.R.T.), by which impending failures can be predicted, allowing the user to be alerted to prevent data loss.

Landing zones

Microphotograph of a hard disk head. The size of the front face (which is the "trailing face" of the slider) is about 0.3 mm × 1.0 mm. The (not visible) bottom face of the slider is about 1.0 mm × 1.25 mm (so-called "nano" size) and faces the platter. One functional part of the head is the round, orange structure in the middle - the lithographically defined copper coil of the write transducer. Also note the electric connections by wires bonded to gold-plated pads.

Around 1995 IBM pioneered a technology where the landing zone is made by a precision laser process (Laser Zone Texture, or LZT), producing an array of smooth nanometer-scale "bumps" in the ID landing zone, thus vastly improving stiction and wear performance. This technology is still widely in use today (2006). A few years after LZT, initially for mobile applications (i.e. laptops) and later for the other HDD types, IBM introduced "head unloading" technology, where the heads are lifted off the platters onto plastic "ramps" near the outer disk edge, thus eliminating the risk of stiction altogether and greatly improving non-operating shock performance. All HDD manufacturers use these two technologies to this day. Both have a list of advantages and drawbacks in terms of loss of storage space, relative difficulty of mechanical tolerance control, cost of implementation, etc.

IBM created a technology for their Thinkpad line of laptop computers called the Active Protection System. When a sudden, sharp movement is detected by the built-in motion sensor in the Thinkpad, internal hard disk heads automatically unload themselves into the parking zone to reduce the risk of any potential data loss or scratches made. Apple later also utilized this technology in their Powerbook, iBook, MacBook Pro, and MacBook line, known as the Sudden Motion Sensor.

Spring tension from the head mounting constantly pushes the heads towards the platter. While the disk is spinning, the heads are supported by an air bearing and experience no physical contact or wear. In CSS drives the sliders carrying the head sensors (often also just called heads) are designed to reliably survive a number of landings and takeoffs from the media surface, though wear and tear on these microscopic components eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%. However, the decay rate is not linear—when a disk is younger and has fewer start-stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage disk (as the head literally drags along the disk's surface until the air bearing is established). For example, the Maxtor DiamondMax series of desktop hard disks are rated to 50,000 start-stop cycles. This means that no failures attributed to the head-platter interface were seen before at least 50,000 start-stop cycles during testing.

Access and interfaces
Hard disks are generally accessed over one of a number of bus types, including ATA (IDE, EIDE), Serial ATA (SATA), SCSI, SAS, IEEE 1394, USB, and Fibre Channel.

Back in the days of the ST-506 interface, the data encoding scheme was also important. The first ST-506 disks used Modified Frequency Modulation (MFM) encoding (which is still used on the common "1.44 MB" (1440 KiB) 3.5-inch floppy), and transferred data at a rate of 5 megabits per second. Later on, controllers using 2,7 RLL (or just "RLL") encoding increased the transfer rate by half, to 7.5 megabits per second; it also increased disk capacity by half.

Many ST-506 interface disks were only certified by the manufacturer to run at the lower MFM data rate, while other models (usually more expensive versions of the same basic disk) were certified to run at the higher RLL data rate. In some cases, the disk was overengineered just enough to allow the MFM-certified model to run at the faster data rate; however, this was often unreliable and was not recommended. (An RLL-certified disk could run on an MFM controller, but with 1/3 less data capacity and speed.)

Enhanced Small Disk Interface (ESDI) also supported multiple data rates (ESDI disks always used 2,7 RLL, but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by the disk and controller; most of the time, however, 15 or 20 megabit ESDI disks weren't downward compatible (i.e. a 15 or 20 megabit disk wouldn't run on a 10 megabit controller). ESDI disks typically also had jumpers to set the number of sectors per track and (in some cases) sector size.

SCSI originally had just one speed, 5 MHz (for a maximum data rate of 5 megabytes per second), but later this was increased dramatically. The SCSI bus speed had no bearing on the disk's internal speed because of buffering between the SCSI bus and the disk's internal data bus; however, many early disks had very small buffers, and thus had to be reformatted to a different interleave (just like ST-506 disks) when used on slow computers, such as early IBM PC compatibles and Apple Macintoshes.

ATA disks have typically had no problems with interleave or data rate, due to their controller design, but many early models were incompatible with each other and couldn't run in a master/slave setup (two disks on the same cable). This was mostly remedied by the mid-1990s, when ATA's specification was standardised and the details began to be cleaned up, but still causes problems occasionally (especially with CD-ROM and DVD-ROM disks, and when mixing Ultra DMA and non-UDMA devices).

Serial ATA does away with master/slave setups entirely, placing each disk on its own channel (with its own set of I/O ports) instead.

FireWire/IEEE 1394 and USB (1.0/2.0) hard disks are external units, generally containing ATA or SCSI disks, with ports on the back allowing simple and effective expansion and mobility. Most FireWire/IEEE 1394 models are able to daisy-chain, allowing peripherals to be added without requiring additional ports on the computer itself.

Disk families used in personal computers
Notable disk families include:

MFM (Modified Frequency Modulation) disks required that the controller electronics be compatible with the disk electronics.
RLL (Run Length Limited) disks were named after the modulation technique that made them an improvement on MFM. They required large cables between the controller in the PC and the hard disk; the disk itself did not have a controller, only a modulator/demodulator.
ESDI (Enhanced Small Disk Interface) was an interface developed by Maxtor to allow faster communication between the PC and the disk than MFM or RLL.
Integrated Drive Electronics (IDE) was later renamed ATA, and then PATA. The name reflects the key change from earlier families, which kept the hard disk controller external to the disk: moving the controller from the interface card onto the disk itself helped to standardize interfaces, reducing cost and complexity.

The data cable was originally 40 conductors, but the UDMA modes of later disks require an 80 conductor cable (note that the 80 conductor cable still uses a 40 position connector).

The interface connector also changed from 40 pins to 39 pins. The missing pin acts as a key to prevent incorrect insertion of the connector, a common cause of disk and controller damage.

SCSI (Small Computer System Interface) was an early competitor to ESDI, originally named SASI after Shugart Associates. SCSI disks were standard on servers, workstations, and Apple Macintosh computers through the mid-1990s, by which time most of those machines had transitioned to IDE (and later SATA) family disks. Only in 2005 did the capacity of SCSI disks fall behind IDE disk technology, though the highest-performance disks are still available only in SCSI and Fibre Channel versions. The length limits of the data cable allow for external SCSI devices. SCSI data cables originally used single-ended transmission, but server-class SCSI could use differential transmission; later, the Fibre Channel (FC) interface, and more specifically the Fibre Channel Arbitrated Loop (FC-AL), connected SCSI hard disks using fibre optics. FC-AL is the cornerstone of storage area networks, although other protocols like iSCSI and ATA over Ethernet have been developed as well.
SATA (Serial ATA). The SATA data cable has only one data pair for differential transmission to the device and one pair for receiving from the device, which requires that data be transmitted serially. The same differential transmission scheme is used in RS485, LocalTalk, USB, FireWire, and differential SCSI. In 2005/2006 parlance, the 40 pin IDE/ATA interface is called "PATA" (parallel ATA), meaning that 16 bits of data are transferred in parallel at a time over the data cable.
SAS (Serial Attached SCSI). SAS is a newer-generation serial communication protocol designed to allow much higher data transfer speeds, and it is compatible with SATA. SAS uses serial communication instead of the parallel method found in traditional SCSI devices, but still uses SCSI commands to interact with SAS devices.
EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key improvement being the use of DMA to transfer data between the disk and the computer; this improvement was later adopted by the official ATA standards. DMA transfers data without making the CPU or a program responsible for moving every word, leaving the CPU, program, and operating system free to do other tasks while the transfer occurs.
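The serial-versus-parallel distinction above can be pictured in a few lines. This is only a conceptual sketch (it ignores SATA's real 8b/10b encoding and framing); the function names are invented for illustration:

```python
def parallel_transfer(word):
    """PATA-style: all 16 bits of a word presented at once on 16 data lines."""
    return [(word >> i) & 1 for i in range(16)]

def serial_transfer(word):
    """SATA-style: the same bits clocked out one at a time, LSB first,
    each bit driven as a complementary (D+, D-) differential pair."""
    return [((word >> i) & 1, 1 - ((word >> i) & 1)) for i in range(16)]

lines = parallel_transfer(0xA5A5)   # one bit per physical wire, same instant
pairs = serial_transfer(0xA5A5)     # one (D+, D-) pair, sixteen clock ticks
```

The payoff of the serial scheme is fewer wires and no skew between parallel lines, which is what lets SATA clock the single pair far faster than PATA could clock sixteen.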
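The PIO-versus-DMA difference described above can be sketched conceptually. This is a toy model, not a real driver API; all names here are invented for illustration:

```python
def pio_read(device_words, buffer):
    """Programmed I/O: the CPU copies every word itself and can do
    nothing else until the loop finishes."""
    for i, word in enumerate(device_words):
        buffer[i] = word

def dma_read(device_words, buffer, on_complete):
    """DMA: the CPU programs a (simulated) controller with source,
    destination, and length, then gets a completion notification."""
    buffer[:len(device_words)] = device_words  # done "in the background"
    on_complete()

data = [0xCAFE, 0xBABE, 0x1234]
buf = [0] * 3
dma_read(data, buf, on_complete=lambda: print("transfer complete"))
```

In a real system the `on_complete` callback corresponds to an interrupt raised by the DMA controller when the transfer finishes.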
The acronyms above, in summary:

SASI (Shugart Associates System Interface): predecessor to SCSI.
SCSI (Small Computer System Interface): bus-oriented interface that handles concurrent operations.
ST-412: Seagate interface.
ST-506: Seagate interface (an improvement over ST-412).
ESDI (Enhanced Small Disk Interface): faster and more integrated than ST-412/506, but still backwards compatible.
ATA (Advanced Technology Attachment): successor to ST-412/506/ESDI, integrating the disk controller completely onto the device; incapable of concurrent operations.

As of 2005, over 98% of the world's hard disks are manufactured by just a handful of large firms: Seagate, Maxtor (acquired by Seagate in May 2006), Western Digital, Samsung, and Hitachi, which owns the former disk-manufacturing division of IBM. Fujitsu continues to make mobile- and server-class disks but exited the desktop-class market in 2001. Toshiba is a major manufacturer of 2.5-inch and 1.8-inch notebook disks.

Dozens of former hard disk manufacturers have gone out of business, merged, or closed their hard disk divisions; as capacities and demand for products increased, profits became hard to find, and there were shakeouts in the late 1980s and late 1990s. The first notable casualty of the business in the PC era was Computer Memories Inc. (CMI): after an incident with faulty 20 MB AT disks in 1985,[2] CMI's reputation never recovered, and the company exited the hard disk business in 1987. Another notable failure was MiniScribe, which went bankrupt in 1990 after it was found that the company had "cooked the books" and inflated sales numbers for several years. Many other smaller companies (like Kalok, Microscience, LaPine, Areal, Priam and PrairieTek) also did not survive the shakeout and had disappeared by 1993. Micropolis was able to hold on until 1997, and JTS, a relative latecomer to the scene, lasted only a few years and was gone by 1999, after attempting to manufacture hard disks in India using a second-hand factory. Rodime was also an important manufacturer during the 1980s, but stopped making disks in the early 1990s amid the shakeout and now concentrates on technology licensing; it holds a number of patents related to 3.5-inch form factor hard disks.