Keyboard

A computer keyboard is a peripheral input device modeled after the typewriter keyboard,[1][2] which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Replacing early punched-card and paper-tape technology, interaction via teleprinter-style keyboards has been the main input method for computers since the 1970s, supplemented by the computer mouse since the 1980s.

History

While typewriters are the definitive ancestor of all key-based text entry devices, the computer keyboard as a device for electromechanical data entry and communication derives largely from the utility of two devices: teleprinters (or teletypes) and keypunches. It was through such devices that modern computer keyboards inherited their layouts.

As early as the 1870s, teleprinter-like devices were used to simultaneously type and transmit stock market text data from the keyboard across telegraph lines to stock ticker machines to be immediately copied and displayed onto ticker tape.[5] The teleprinter, in its more contemporary form, was developed from 1907 to 1910 by American mechanical engineer Charles Krum and his son Howard, with early contributions by electrical engineer Frank Pearne. Earlier models were developed separately by individuals such as Royal Earl House and Frederick G. Creed.

Earlier, Herman Hollerith developed the first keypunch devices, which soon evolved to include keys for text and number entry akin to normal typewriters by the 1930s.[6]

The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint communication for most of the 20th century, while the keyboard on the keypunch device played a strong role in data entry and storage for just as long. The development of the earliest computers incorporated electric typewriter keyboards: the development of the ENIAC computer incorporated a keypunch device as both the input and paper-based output device, while the BINAC computer also made use of an electromechanically controlled typewriter for both data entry onto magnetic tape (instead of paper) and data output.[7]

The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing until the introduction of the mouse as a consumer device in 1984. By this time, text-only user interfaces with sparse graphics gave way to comparatively graphics-rich icons on screen.[8] However, keyboards remain central to human-computer interaction to the present, even as mobile personal computing devices such as smartphones and tablets adapt the keyboard as an optional virtual, touchscreen-based means of data entry.

Types and standards

A wired computer keyboard for desktop use manufactured by Lenovo

Different types of keyboards are available and each is designed with a focus on specific features that suit particular needs. Today, most full-size keyboards use one of three different mechanical layouts, usually referred to as simply ISO (ISO/IEC 9995-2), ANSI (ANSI/INCITS 154-1988), and JIS (JIS X 6002-1980), referring roughly to the organizations issuing the relevant worldwide, United States, and Japanese standards, respectively. (In fact, the mechanical layouts referred to as “ISO” and “ANSI” comply with the primary recommendations in the named standards, while each of these standards in fact also allows the other.) ANSI standard alphanumeric keyboards have keys on three-quarter-inch centers (0.75 inches (19 mm)) and a key travel of at least 0.15 inches (3.8 mm).

A size comparison between typical compact, tenkeyless, and full-size keyboard form factors

Modern keyboard models contain a set number of total keys according to their given standard, described as 101, 104, 105, etc., and sold as “full-size” keyboards.[9] Modern keyboards matching US conventions typically have 104 keys, while the 105-key layout is the norm in the rest of the world. This number is not always followed, and individual keys or whole sections are commonly skipped for the sake of compactness or user preference. The most common choice is to not include the numpad, which can usually be fully replaced by the alphanumeric section; such designs are referred to as “tenkeyless”.[10] Laptops and wireless peripherals often lack duplicate keys and keys that are seldom used. Function and arrow keys are nearly always present.

Another factor determining the size of a keyboard is the size and spacing of the keys. The reduction is limited by the practical consideration that the keys must be large enough to be easily pressed by fingers. Alternatively, a tool is used for pressing small keys.

Desktop or full-size

Desktop computer keyboards include alphabetic characters and numerals, typographical symbols and punctuation marks, one or more currency symbols and other special characters, diacritics and a variety of function keys. The repertoire of glyphs engraved on the keys of a keyboard accords with national conventions and language needs. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys, such as the command key or Windows keys.

Laptop-size

Keyboards on laptops usually have a shorter travel distance and a reduced set of keys.

Keyboards on laptops and notebook computers usually have a shorter travel distance for the keystroke, shorter over travel distance, and a reduced set of keys. They may not have a numeric keypad, and the function keys may be placed in locations that differ from their placement on a standard, full-sized keyboard. The switch mechanism for a laptop keyboard is more likely to be a scissor switch than a rubber dome; this is opposite the trend for full-size keyboards.

Flexible keyboards

Flexible keyboards are a hybrid of normal and laptop-style keyboards: normal in their full arrangement of keys, and laptop-like in their short key travel. Additionally, their flexibility allows the user to fold or roll the keyboard for easier storage and transport. However, for typing the keyboard must rest on a hard surface. The vast majority[11] of flexible keyboards on the market are made from silicone; this material makes them water- and dust-proof, which is useful in hospitals,[12] where keyboards are subjected to frequent washing, and in other environments that are dirty or must be kept clean.

Handheld

An AlphaGrip handheld keyboard

Handheld ergonomic keyboards[13][14] are designed to be held like a game controller, and can be used as such, instead of laid out flat on top of a table surface.

Typically, handheld keyboards hold all the alphanumeric keys and symbols that a standard keyboard would have, but some characters can only be reached by pressing two keys at once, one of them acting as a function key, much as the ‘Shift’ key gives access to capital letters on a standard keyboard.[15] Handheld keyboards allow the user to move around a room or lean back in a chair while still being able to type in front of, or away from, the computer.[16] Some variations of handheld ergonomic keyboards also include a trackball mouse, combining mouse movement and typing in one handheld device.[17]

Thumb-sized

Smaller external keyboards have been introduced for devices without a built-in keyboard, such as PDAs, and smartphones. Small keyboards are also useful where there is a limited workspace.[18]

A thumb keyboard (thumb board) is used in some personal digital assistants such as the Palm Treo and BlackBerry, and in some Ultra-Mobile PCs such as the OQO.

Numeric keyboards contain only numbers, mathematical symbols for addition, subtraction, multiplication, and division, a decimal point, and several function keys. They are often used to facilitate data entry with smaller keyboards that do not have a numeric keypad, commonly those of laptop computers.[19] These keys are collectively known as a numeric pad, numeric keys, or a numeric keypad, which can consist of the following types of keys: arithmetic operators, numbers, arrow keys, navigation keys, Num Lock, and the Enter key.

Multifunctional

Multifunction keyboard with LCD function keys

Multifunctional keyboards provide additional function beyond the standard keyboard. Many are programmable, configurable computer keyboards and some control multiple PCs, workstations and other information sources, usually in multi-screen work environments. Users have additional key functions as well as the standard functions and can typically use a single keyboard and mouse to access multiple sources.

Multifunction keyboard with touch

Multifunctional keyboards may feature customised keypads, fully programmable function or soft keys for macros/pre-sets, biometric or smart card readers, trackballs, etc. New generation multifunctional keyboards feature a touchscreen display to stream video, control audio visual media and alarms, execute application inputs, configure individual desktop environments, etc. Multifunctional keyboards may also permit users to share access to PCs and other information sources. Multiple interfaces (serial, USB, audio, Ethernet, etc.) are used to integrate external devices. Some multifunctional keyboards are also used to directly and intuitively control video walls.

Common environments for multifunctional keyboards are complex, high-performance workplaces for financial traders and control room operators (emergency services, security, air traffic management; industry, utilities management, etc.).

Non-standard layout and special-use types

Chorded

While other keyboards generally associate one action with each key, chorded keyboards associate actions with combinations of key presses. Since there are many combinations available, chorded keyboards can effectively produce more actions on a board with fewer keys. Court reporters’ stenotype machines use chorded keyboards to enable them to enter text much faster by typing a syllable with each stroke instead of one letter at a time. The fastest typists (as of 2007) use a stenograph, a kind of chorded keyboard used by most court reporters and closed-caption reporters. Some chorded keyboards are also made for use in situations where fewer keys are preferable, such as on devices that can be used with only one hand, and on small mobile devices that don’t have room for larger keyboards. Chorded keyboards are less desirable in many cases because it usually takes practice and memorization of the combinations to become proficient.
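
As a rough illustration of chorded entry, the hedged Python sketch below maps a set of simultaneously pressed keys to a single output; the chord table and key names are invented for the example and do not reflect any real stenotype layout.

```python
# Minimal chord lookup: a combination of simultaneously pressed keys
# (a "chord") maps to one action or output. The table below is invented
# for illustration and is not a real stenotype layout.

CHORD_TABLE = {
    frozenset({"K", "A", "T"}): "cat",   # one stroke yields a whole word
    frozenset({"T", "H", "E"}): "the",
    frozenset({"SPACE"}): " ",           # single-key chords still work
}

def translate_stroke(pressed_keys):
    """Return the text produced by one simultaneous stroke of keys."""
    return CHORD_TABLE.get(frozenset(pressed_keys), "")  # unknown chords yield nothing

print(translate_stroke({"A", "T", "K"}))  # -> "cat" (press order is irrelevant)
```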

Software

Software keyboards or on-screen keyboards often take the form of computer programs that display an image of a keyboard on the screen. Another input device such as a mouse or a touchscreen can be used to operate each virtual key to enter text. Software keyboards have become very popular in touchscreen-enabled cell phones, due to the additional cost and space requirements of other types of hardware keyboards. Microsoft Windows, Mac OS X, and some varieties of Linux include on-screen keyboards that can be controlled with the mouse. With a software keyboard, the mouse pointer is maneuvered onto the on-screen letters provided by the software; when a letter is clicked, the software enters that letter at the current position.

Projection

Projection keyboards project an image of keys, usually with a laser, onto a flat surface. The device then uses a camera or infrared sensor to “watch” where the user’s fingers move, and will count a key as being pressed when it “sees” the user’s finger touch the projected image. Projection keyboards can simulate a full size keyboard from a very small projector. Because the “keys” are simply projected images, they cannot be felt when pressed. Users of projected keyboards often experience increased discomfort in their fingertips because of the lack of “give” when typing. A flat, non-reflective surface is also required for the keys to be projected. Most projection keyboards are made for use with PDAs and smartphones due to their small form factor.

Optical keyboard technology

Optical keyboard technology is also known as photo-optical keyboard, light-responsive keyboard, photo-electric keyboard, and optical key actuation detection technology.

Optical keyboard technology[20] utilizes LEDs and photo sensors to optically detect actuated keys. Most commonly the emitters and sensors are located in the perimeter, mounted on a small PCB. The light is directed from side to side of the keyboard interior and it can only be blocked by the actuated keys. Most optical keyboards[21] require at least two beams (most commonly a vertical beam and a horizontal beam) to determine the actuated key. Some optical keyboards use a special key structure that blocks the light in a certain pattern, allowing only one beam per row of keys (most commonly a horizontal beam).
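
The beam-intersection idea can be sketched as follows; the 2×3 key grid and function names are hypothetical, and a real controller would also debounce and handle multiple simultaneous presses.

```python
# Sketch of two-beam optical key detection: the keyboard interior is
# crossed by horizontal beams (one per row) and vertical beams (one per
# column); a pressed keycap blocks one of each, and the intersection
# identifies the key. The 2x3 key grid is hypothetical.

KEY_GRID = [
    ["Q", "W", "E"],   # row 0
    ["A", "S", "D"],   # row 1
]

def detect_key(blocked_rows, blocked_cols):
    """Return the key at the intersection of a blocked horizontal and vertical beam."""
    if len(blocked_rows) == 1 and len(blocked_cols) == 1:
        row, col = blocked_rows[0], blocked_cols[0]
        return KEY_GRID[row][col]
    return None  # no key, or several keys pressed (needs extra disambiguation)

print(detect_key(blocked_rows=[1], blocked_cols=[2]))  # -> "D"
```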

Key types

Alphanumeric

A Greek keyboard lets the user type in both Greek and the Latin alphabet (MacBook Pro).

The Control, Windows, and Alt keys are important modifier keys.

Space-cadet keyboard has many modifier keys.

Alphabetical, numeric, and punctuation keys are used in the same fashion as a typewriter keyboard to enter their respective symbol into a word processing program, text editor, data spreadsheet, or other program. Many of these keys will produce different symbols when modifier keys or shift keys are pressed. The alphabetic characters become uppercase when the shift key or Caps Lock key is depressed. The numeric characters become symbols or punctuation marks when the shift key is depressed. The alphabetical, numeric, and punctuation keys can also have other functions when they are pressed at the same time as some modifier keys. The Space bar is a horizontal bar in the lowermost row, which is significantly wider than other keys. Like the alphanumeric characters, it is also descended from the mechanical typewriter. Its main purpose is to enter the space between words during typing. It is large enough so that a thumb from either hand can use it easily. Depending on the operating system, when the space bar is used with a modifier key such as the control key, it may have functions such as resizing or closing the current window, half-spacing, or backspacing. In computer games and other applications the key has myriad uses in addition to its normal purpose in typing, such as jumping and adding marks to check boxes. In certain programs for playback of digital video, the space bar is used for pausing and resuming the playback.

Modifier keys

Modifier keys are special keys that modify the normal action of another key, when the two are pressed in combination. For example, Alt+F4 in Microsoft Windows will close the program in an active window. In contrast, pressing just F4 will probably do nothing, unless assigned a specific function in a particular program. By themselves, modifier keys usually do nothing. The most widely used modifier keys include the Control key, Shift key, and the Alt key. The AltGr key is used to access additional symbols for keys that have three symbols printed on them. On the Macintosh and Apple keyboards, the modifier keys are the Option key and Command key, respectively. On Sun Microsystems and Lisp machine keyboards, the Meta key is used as a modifier, and for Windows keyboards, there is a Windows key. Compact keyboard layouts often use a Fn key. “Dead keys” allow placement of a diacritic mark, such as an accent, on the following letter (e.g., the Compose key). The Enter/Return key typically causes a command line, window form or dialog box to operate its default function, which is typically to finish an “entry” and begin the desired process. In word processing applications, pressing the enter key ends a paragraph and starts a new one.
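
The following minimal sketch illustrates the behaviour described above, with a key press interpreted together with whichever modifiers are currently held; the shortcut table and handler names are illustrative assumptions, not any operating system's actual API.

```python
# Sketch: a key press is interpreted together with the modifiers that are
# currently held. The shortcut table below is illustrative (Alt+F4 closing
# the active window mirrors the Windows convention mentioned above).

SHORTCUTS = {
    (frozenset({"ALT"}), "F4"): "close_active_window",
    (frozenset({"CTRL"}), "C"): "copy_selection",
}

def handle_key(key, held_modifiers):
    """Look up the action bound to this key plus the held modifiers, if any."""
    return SHORTCUTS.get((frozenset(held_modifiers), key))

print(handle_key("F4", {"ALT"}))  # -> "close_active_window"
print(handle_key("F4", set()))    # -> None: F4 alone is unassigned here
```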

Cursor keys

Navigation keys or cursor keys include a variety of keys which move the cursor to different positions on the screen.[22] Arrow keys are programmed to move the cursor in a specified direction; page scroll keys, such as the Page Up and Page Down keys, scroll the page up and down. The Home key is used to return the cursor to the beginning of the line where the cursor is located; the End key puts the cursor at the end of the line. The Tab key advances the cursor to the next tab stop. The Insert key is mainly used to switch between overtype mode, in which the cursor overwrites any text that is present on and after its current location, and insert mode, where the cursor inserts a character at its current position, forcing all characters past it one position further. The Delete key discards the character ahead of the cursor’s position, moving all following characters one position “back” towards the freed place. On many notebook computer keyboards the key labeled Delete (sometimes Delete and Backspace are printed on the same key) serves the same purpose as a Backspace key. The Backspace key deletes the preceding character. Lock keys lock part of a keyboard, depending on the settings selected. The lock keys are scattered around the keyboard. Most styles of keyboards have three LEDs indicating which locks are enabled, in the upper right corner above the numeric pad. The lock keys include Scroll Lock, Num Lock (which allows the use of the numeric keypad), and Caps Lock.[23]
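
The difference between insert and overtype mode can be shown with a small, assumption-laden sketch operating on a plain string buffer (real editors use more elaborate data structures):

```python
# Insert vs. overtype on a simple string buffer. In insert mode the typed
# character pushes later text to the right; in overtype mode it replaces
# the character under the cursor. Simplified for illustration only.

def type_char(text, cursor, ch, overtype=False):
    if overtype:
        return text[:cursor] + ch + text[cursor + 1:], cursor + 1
    return text[:cursor] + ch + text[cursor:], cursor + 1

line, _ = type_char("keybard", 4, "o")                 # insert mode adds the missing "o"
print(line)                                            # -> "keyboard"
line, _ = type_char("XYZ", 0, "A", overtype=True)      # overtype replaces "X"
print(line)                                            # -> "AYZ"
```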

System commands

4800-52 mainframe / dumb terminal keyboard, circa mid 1980s. Note the obscure configuration of modifier and arrow keys, line feed key, break key, blank keys, and repeat key.

The SysRq and Print Screen commands often share the same key. SysRq was used in earlier computers as a “panic” button to recover from crashes (and it is still used in this sense to some extent by the Linux kernel; see Magic SysRq key). The Print Screen command used to capture the entire screen and send it to the printer; today it usually places a screenshot on the clipboard.

Break key

The Break key/Pause key no longer has a well-defined purpose. Its origins go back to teleprinter users, who wanted a key that would temporarily interrupt the communications line. The Break key can be used by software in several different ways, such as to switch between multiple login sessions, to terminate a program, or to interrupt a modem connection. In programming, especially old DOS-style BASIC, Pascal and C, Break is used (in conjunction with Ctrl) to stop program execution. In addition to this, Linux and variants, as well as many DOS programs, treat this combination the same as Ctrl+C. On modern keyboards, the break key is usually labeled Pause/Break. In most Windows environments, the key combination Windows key+Pause brings up the system properties.
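
In practice, the interrupt behaviour is usually delivered to programs as a signal. The sketch below assumes a Unix-like environment where Ctrl+C raises SIGINT (surfaced in Python as KeyboardInterrupt) and shows one common way a long-running program reacts to it; Ctrl+Break handling on Windows differs in detail.

```python
# Sketch: Ctrl+C interrupts a running program by delivering SIGINT, which
# Python surfaces as KeyboardInterrupt. A long-running loop can catch it
# to shut down cleanly instead of being terminated abruptly.

import time

def main():
    try:
        while True:
            time.sleep(1)          # stand-in for real work
    except KeyboardInterrupt:      # raised when the user presses Ctrl+C
        print("Interrupted by the user; cleaning up and exiting.")

if __name__ == "__main__":
    main()
```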

Escape key

The escape key (Esc) has a variety of meanings depending on the operating system, the application, or both. “Nearly all of the time”,[24] it signals Stop,[25] QUIT,[26] or “let me get out of a dialog” (or pop-up window).[24][27] It triggers the Stop function in many web browsers.[28]

The escape key was part of the standard keyboard of the Teletype Model 33 (introduced in 1964 and used with many early minicomputers).[29] The DEC VT50, introduced July 1974, also had an Esc key. The TECO text editor (ca 1963) and its descendant Emacs (ca 1985) use the Esc key extensively.

Historically it also served as a type of shift key, such that one or more following characters were interpreted differently, hence the term escape sequence, which refers to a series of characters, usually preceded by the escape character.[30][31]
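
For example, ANSI terminal control codes are escape sequences: the escape character signals that the following bytes are a command rather than literal text. The snippet below assumes a terminal emulator that honours the common ANSI colour codes.

```python
# Sketch of an escape sequence: the escape character (0x1B) tells the
# terminal that the following characters are a command rather than text.
# The ANSI codes used here (31 = red foreground, 0 = reset) are widely,
# but not universally, supported by terminal emulators.

ESC = "\x1b"            # the escape character itself
RED = ESC + "[31m"      # escape sequence that switches to red text
RESET = ESC + "[0m"     # escape sequence that restores default attributes

print(RED + "This text is rendered in red on most terminals." + RESET)
```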

On machines running Microsoft Windows, prior to the implementation of the Windows key on keyboards, the typical practice for invoking the “start” button was to hold down the control key and press escape. This process still works in Windows 95, 98, Me, NT 4, 2000, XP, Vista, 7, 8, and 10.[32]

Enter key or Return key

The ‘enter key’ ⌅ Enter and ‘return key’ ↵ Return are two closely related keys with overlapping and distinct functions dependent on operating system and application. On full-size keyboards, there are two such keys: one in the alphanumeric section and one on the numeric keypad. The purpose of the enter key is to confirm what has been typed. The return key is based on the original line feed/carriage return function of typewriters: in many word processors, for example, the return key ends a paragraph; in a spreadsheet, it completes the current cell and moves to the next cell.

The shape of the Enter key differs between ISO and ANSI keyboards: in the latter, the Enter key is in a single row (usually the third from the bottom) while in the former it spans over two rows and has an inverse L shape.

Shift key

The purpose of the ⇧ Shift key is to invoke the first alternative function of the key with which it is pressed concurrently. For alphabetic keys, shift+letter gives the upper case version of that letter. For other keys, the key is engraved with symbols for both the unshifted and shifted result. When used in combination with other control keys (such as Ctrl, Alt or AltGr), the effect is system and application dependent.

Menu key

The Menu key or Application key is a key found on Windows-oriented computer keyboards. It is used to launch a context menu with the keyboard rather than with the usual right mouse button. The key’s symbol is usually a small icon depicting a cursor hovering above a menu. On some Samsung keyboards the cursor in the icon is not present, showing the menu only. This key was created at the same time as the Windows key. This key is normally used when the right mouse button is not present on the mouse. Some Windows public terminals do not have a Menu key on their keyboard to prevent users from right-clicking (however, in many Windows applications, a similar functionality can be invoked with the Shift+F10 keyboard shortcut).

Number pad

Many, but not all, computer keyboards have a numeric keypad to the right of the alphabetic keyboard, often separated from the other groups of keys such as the function keys and system command keys, which contains numbers, basic mathematical symbols (e.g., addition, subtraction, etc.), and a few function keys. In addition to the row of number keys above the top alphabetic row, most desktop keyboards have a number pad or accounting pad on the right-hand side of the keyboard. While Num Lock is set, the numbers on these keys duplicate the number row; if not, they have alternative functions as engraved. In addition to numbers, this pad has command symbols concerned with calculations, such as the addition, subtraction, multiplication, and division symbols. The Enter key on this pad serves much the same role as the equals key on a calculator.

Miscellaneous

Multimedia buttons on some keyboards give quick access to the Internet or control the volume of the speakers.

On Japanese/Korean keyboards, there may be Language input keys for changing the language to use. Some keyboards have power management keys (e.g., power key, sleep key and wake key); Internet keys to access a web browser or E-mail; and/or multimedia keys, such as volume controls; or keys that can be programmed by the user to launch a specified application or a command like minimizing all windows.

Multiple layouts

It is possible to install multiple keyboard layouts within an operating system and switch between them, either through features implemented within the OS, or through an external application. Microsoft Windows,[33] Linux,[34] and Mac[35] provide support to add keyboard layouts and choose from them.

Data Server

In computing, a server is a piece of computer hardware or software (computer program) that provides functionality for other programs or devices, called “clients”. This architecture is called the client–server model. Servers can provide various functionalities, often called “services”, such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device.[1] Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.[2]

History

The use of the word server in computing comes from queueing theory,[3] where it dates to the mid 20th century, being notably used in Kendall (1953) (along with “service”), the paper that introduced Kendall’s notation. In earlier papers, such as Erlang (1909), more concrete terms such as “[telephone] operators” are used.

In computing, “server” dates at least to RFC 5 (1969),[4] one of the earliest documents describing ARPANET (the predecessor of the Internet), and is contrasted with “user”, distinguishing two types of host: “server-host” and “user-host”. The use of “serving” also dates to early documents, such as RFC 4,[5] contrasting “serving-host” with “using-host”.

The Jargon File defines “server” in the common sense of a process performing service for requests, usually remote, with the 1981 (1.1.0) version reading:

SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.

Operation

A network based on the client–server model where multiple individual clients request services and resources from centralized servers

Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not.[a] The word service (noun) may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as “servers serve users” (and “users use servers”), in the sense of “obey”, today one often says that “servers serve data”, in the same sense as “give”. For instance, web servers “serve [up] web pages to users” or “service their requests”.

The server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.
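
As a small, hedged illustration of that point, Python's standard library ships a basic HTTP server that turns any laptop or desktop into a (demonstration-only) file-serving web server:

```python
# Minimal sketch: ordinary web server software can run on any capable
# computer. This serves files from the current directory on port 8000.
# Suitable for demonstration only, not for production use.

from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Serving http://localhost:8000/ ... press Ctrl+C to stop")
server.serve_forever()
```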

While request–response is the most common client-server design, there are others, such as the publish–subscribe pattern. In the publish-subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may be done by request-response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request-response.[6]
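
A minimal in-process sketch of the publish–subscribe flow is shown below; the broker class and method names are invented for illustration, and a real pub-sub server would of course operate over a network.

```python
# Sketch of publish-subscribe: clients first register interest in message
# types (the request-response part), after which the broker pushes matching
# messages to them without further requests. In-process stand-in only.

from collections import defaultdict

class PubSubBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Initial registration: the 'request' that sets up future pushes."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Push the message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = PubSubBroker()
broker.subscribe("alerts", lambda msg: print("client A got:", msg))
broker.subscribe("alerts", lambda msg: print("client B got:", msg))
broker.publish("alerts", "disk nearly full")    # both clients receive this push
```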

Hardware

A rack-mountable server with the top cover removed to reveal internal components

Hardware requirements for servers vary widely, depending on the server’s purpose and its software. Servers are, more often than not, more powerful and expensive than the clients that connect to them.

Since servers are usually accessed over a network, many run unattended without a computer monitor or input device, audio hardware and USB interfaces. Many servers do not have a graphical user interface (GUI). They are configured and managed remotely. Remote management can be conducted via various methods including Microsoft Management Console (MMC), PowerShell, SSH and browser-based out-of-band management systems such as Dell’s iDRAC or HP’s iLO.

Large servers

Large traditional single servers would need to be run for long periods without interruption. Availability would have to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers would be very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime. Uninterruptible power supplies might be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies, RAID disk systems, and ECC memory,[10] along with extensive pre-boot memory testing and verification. Critical components might be hot swappable, allowing technicians to replace them on the running server without shutting it down, and to guard against overheating, servers might have more powerful fans or use water cooling. They will often be able to be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually flat and wide, and designed to be rack-mounted, either on 19-inch racks or on Open Racks.

These types of servers are often housed in dedicated data centers. These normally have very stable power and Internet connectivity, as well as increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue. Server rooms are equipped with air conditioning devices.

Clusters

A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers,[11] and there is a collaborative effort, the Open Compute Project, around this concept.

Appliances

A class of small specialist servers called network appliances are generally at the low end of the scale, often being smaller than common desktop computers.

Mobile

A mobile server has a portable form factor, e.g. a laptop.[12] In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time.[13] The main beneficiaries of so-called “server on the go” technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations.[14] To facilitate portability, features such as the keyboard, display, battery (uninterruptible power supply, to provide power redundancy in case of failure), and mouse are all integrated into the chassis.

Component CPUs

A central processing unit (CPU), also called a central processor, main processor or just processor, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program. This contrasts with external components such as main memory and I/O circuitry,[1] and specialized processors such as graphics processing units (GPUs).

Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called “fixed-program computers”.[4] The term “central processing unit” has been in use since as early as 1955.[5][6] Since the term “CPU” is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.

The idea of a stored-program computer had been already present in the design of J. Presper Eckert and John William Mauchly‘s ENIAC, but was initially omitted so that it could be finished sooner.[7] On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949.[8] EDVAC was designed to perform a certain number of instructions (or operations) of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer.[9] This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task.[10] With von Neumann’s design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer: the Manchester Baby, a small-scale experimental stored-program computer, ran its first program on 21 June 1948,[11] and the Manchester Mark 1 ran its first program during the night of 16–17 June 1949.[12]

Early CPUs were custom designs used as part of a larger and sometimes distinctive computer.[13] However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers.[14] Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles[15] to cellphones,[16] and sometimes even in toys.[17][18]

While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas.[19] The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC,[20][21] also used a stored-program design using punched paper tape rather than electronic memory.[22] The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.[23] Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard architecture processors.[24]

Relays and vacuum tubes (thermionic tubes) were commonly used as switching elements;[25][26] a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Vacuum-tube computers such as EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely.[6] In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.[27]

Transistor CPUs

IBM PowerPC 604e processor

The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable and fragile switching elements like vacuum tubes and relays.[28] With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.

In 1964, IBM introduced its IBM System/360 computer architecture that was used in a series of computers capable of running the same programs with different speed and performance.[29] This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of a microprogram (often called “microcode”), which still sees widespread usage in modern CPUs.[30] The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries.[31][32] In 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8.[33]

Fujitsu board with SPARC64 VIIIfx processors

Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay.[34] Thanks to the increased reliability and dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were easily obtained during this period.[35] Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like single instruction, multiple data (SIMD) vector processors began to appear.[36] These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc and Fujitsu Ltd.[36]

Small-scale integration CPUs

CPU, core memory and external bus interface of a DEC PDP-8/I, made of medium-scale integrated circuits

During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or “chip”. At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs.[37] CPUs based on these “building block” ICs are generally referred to as “small-scale integration” (SSI) devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs.[38]

IBM’s System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules.[39][40] DEC’s PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and PDP-10 to SSI ICs,[41] and their extremely popular PDP-11 line was originally built with SSI ICs but was eventually implemented with LSI components once these became practical.

Large-scale integration CPUs

Lee Boysel published influential articles, including a 1967 “manifesto”, which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits (LSI).[42][43] The only way to build LSI chips, which are chips with a hundred or more gates, was to build them using a metal–oxide–semiconductor (MOS) semiconductor manufacturing process (either PMOS logic, NMOS logic, or CMOS logic). However, some companies continued to build processors out of bipolar transistor–transistor logic (TTL) chips because bipolar junction transistors were faster than MOS chips up until the 1970s (a few companies such as Datapoint continued to build processors out of TTL chips until the early 1980s).[43] In the 1960s, MOS ICs were slower and initially considered useful only in applications that required low power.[44][45] Following the development of silicon-gate MOS technology by Federico Faggin at Fairchild Semiconductor in 1968, MOS ICs largely replaced bipolar TTL as the standard chip technology in the early 1970s.[46]

As the microelectronic technology advanced, an increasing number of transistors were placed on ICs, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands. By 1968, the number of ICs required to build a complete CPU had been reduced to 24 ICs of eight different types, with each IC containing roughly 1000 MOSFETs.[47] In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.[48]

Microprocessors

Die of an Intel 80486DX2 microprocessor (actual size: 12 × 6.75 mm) in its packaging
Intel Core i5 CPU on a Vaio E series laptop motherboard (on the right, beneath the heat pipe)

Inside of a laptop, with the CPU removed from socket

Since microprocessors were first introduced they have almost completely overtaken all other central processing unit implementation methods. The first commercially available microprocessor, made in 1971, was the Intel 4004, and the first widely used microprocessor, made in 1974, was the Intel 8080. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively[a] to microprocessors. Several CPUs (denoted cores) can be combined in a single processing chip.[49]

Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards.[50] Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one.[51] The overall smaller CPU size, as a result of being implemented on a single die, means faster switching time because of physical factors like decreased gate parasitic capacitance.[52][53] This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, the ability to construct exceedingly small transistors on an IC has increased the complexity and number of transistors in a single CPU many fold. This widely observed trend is described by Moore’s law, which had proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity until 2016.[54][55]

While the complexity, size, construction and general form of CPUs have changed enormously since 1950,[56] the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines.[57][b] As Moore’s law no longer holds, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant.[59][60] These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.

Operation

The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions that is called a program. The instructions to be executed are kept in some kind of computer memory. Nearly all CPUs follow the fetch, decode and execute steps in their operation, which are collectively known as the instruction cycle.

After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If a jump instruction was executed, the program counter will be modified to contain the address of the instruction that was jumped to and program execution continues normally. In more complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This section describes what is generally referred to as the “classic RISC pipeline“, which is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.

Some instructions manipulate the program counter rather than producing result data directly; such instructions are generally called “jumps” and facilitate program behavior like loops, conditional program execution (through the use of a conditional jump), and existence of functions.[c] In some processors, some other instructions change the state of bits in a “flags” register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, in such processors a “compare” instruction evaluates two values and sets or clears bits in the flags register to indicate which one is greater or whether they are equal; one of these flags could then be used by a later jump instruction to determine program flow.
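
The following toy interpreter sketches the fetch, decode, and execute loop, a conditional jump, and a flags register in a few lines; its four-instruction ISA (LOAD, SUB, JNZ, HALT) is invented for the example and does not correspond to any real CPU.

```python
# Toy illustration of the instruction cycle: fetch the instruction at the
# program counter, decode its opcode, execute it, and repeat. A jump simply
# overwrites the program counter; a flag records a comparison-style result.
# The ISA here is hypothetical.

def run(program):
    pc, acc, zero_flag = 0, 0, False         # program counter, accumulator, flag
    while True:
        op, arg = program[pc]                 # fetch the instruction at the PC
        pc += 1                               # PC now points at the next instruction
        if op == "LOAD":                      # decode + execute
            acc = arg
        elif op == "SUB":
            acc -= arg
            zero_flag = (acc == 0)            # result recorded in the flags register
        elif op == "JNZ":                     # jump if the zero flag is not set
            if not zero_flag:
                pc = arg                      # jumps overwrite the PC
        elif op == "HALT":
            return acc

countdown = [("LOAD", 3), ("SUB", 1), ("JNZ", 1), ("HALT", None)]
print(run(countdown))                         # -> 0, after looping three times
```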

Fetch

Fetch involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The instruction’s location (address) in program memory is determined by the program counter (PC; called the “instruction pointer” in Intel x86 microprocessors), which stores a number that identifies the address of the next instruction to be fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so that it will contain the address of the next instruction in the sequence.[d] Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).

Decode

The instruction that the CPU fetches from memory determines what the CPU will do. In the decode step, performed by binary decoder circuitry known as the instruction decoder, the instruction is converted into signals that control other parts of the CPU.

The way in which the instruction is interpreted is defined by the CPU’s instruction set architecture (ISA).[e] Often, one group of bits (that is, a “field”) within the instruction, called the opcode, indicates which operation is to be performed, while the remaining fields usually provide supplemental information required for the operation, such as the operands. Those operands may be specified as a constant value (called an immediate value), or as the location of a value that may be a processor register or a memory address, as determined by some addressing mode.
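
As a hedged example of field extraction during decode, assume a made-up 16-bit format with a 4-bit opcode, a 4-bit destination register, and an 8-bit immediate; real instruction sets differ, but the shifting and masking are the same idea.

```python
# Sketch of the decode step for a hypothetical 16-bit instruction format:
# bits 15-12 hold the opcode, bits 11-8 a destination register, and
# bits 7-0 an immediate operand.

def decode(instruction):
    opcode    = (instruction >> 12) & 0xF    # top 4 bits
    dest_reg  = (instruction >> 8)  & 0xF    # next 4 bits
    immediate =  instruction        & 0xFF   # low 8 bits
    return opcode, dest_reg, immediate

# 0x1A2B: opcode 0x1, destination register 0xA, immediate value 0x2B
print(decode(0x1A2B))                        # -> (1, 10, 43)
```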

In some CPU designs the instruction decoder is implemented as a hardwired, unchangeable binary decoder circuit. In others, a microprogram is used to translate instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. In some cases the memory that stores the microprogram is rewritable, making it possible to change the way in which the CPU decodes instructions.

Execute

After the fetch and decode steps, the execute step is performed. Depending on the CPU architecture, this may consist of a single action or a sequence of actions. During each action, control signals electrically enable or disable various parts of the CPU so they can perform all or part of the desired operation. The action is then completed, typically in response to a clock pulse. Very often the results are written to an internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but less expensive and higher capacity main memory.

For example, if an addition instruction is to be executed, registers containing operands (numbers to be summed) are activated, as are the parts of the arithmetic logic unit (ALU) that perform addition. When the clock pulse occurs, the operands flow from the source registers into the ALU, and the sum appears at its output. On subsequent clock pulses, other components are enabled (and disabled) to move the output (the sum of the operation) to storage (e.g., a register or memory). If the resulting sum is too large (i.e., it is larger than the ALU’s output word size), an arithmetic overflow flag will be set, influencing the next operation.
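
The addition example can be sketched for a hypothetical 8-bit ALU as follows, using unsigned arithmetic for simplicity (real CPUs distinguish carry and signed-overflow flags):

```python
# Sketch of the addition example for an assumed 8-bit ALU: if the true sum
# does not fit in the output word, the stored result wraps around and an
# overflow flag is set for a later instruction to inspect.

WORD_MASK = 0xFF                      # 8-bit output word

def alu_add(a, b):
    full = a + b                      # what the adder actually produces
    result = full & WORD_MASK         # value written back to a register
    overflow = full > WORD_MASK       # carry out of the top bit
    return result, overflow

print(alu_add(100, 50))               # -> (150, False)
print(alu_add(200, 100))              # -> (44, True): 300 wraps to 44, flag set
```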

Data Centre Products: Firewall

A data center (American English)[1] or data centre (British English)[2][note 1] is a building, a dedicated space within a building, or a group of buildings[3] used to house computer systems and associated components, such as telecommunications and storage systems.[4][5] In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.[1][2] A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet.[3]

History

NASA mission control computer room c. 1962

Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center.[7][note 2] Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes.[7][note 3] Basic design-guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for the network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term “data center”, as applied to specially designed computer rooms, started to gain popular recognition about this time.[7][note 4]

The boom of data centers came during the dot-com bubble of 1997–2000.[8][note 5] Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs),[9] which provide enhanced capabilities, such as crossover backup: “If a Bell Atlantic line is cut, we can transfer them to … to minimize the time of outage.”[9]

The term cloud data centers (CDCs) has been used.[10] Data centers typically cost a lot to build and maintain.[8] Increasingly, the division of these terms has almost disappeared and they are being integrated into the term “data center”.[11]

Requirements for modern data centers

Racks of telecommunications equipment in part of a data center

Modernization and data center transformation enhance performance and energy efficiency.[12]

Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment.

Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old.[12] Gartner, another research company, says data centers older than seven years are obsolete.[13] The growth in data (163 zettabytes by 2025[14]) is one factor driving the need for data centers to modernize.

Focus on modernization is not new: concern about obsolete equipment was decried in 2007,[15] and in 2011 Uptime Institute was concerned about the age of the equipment therein.[note 6] By 2018 concern had shifted once again, this time to the age of the staff: “data center staff are aging faster than the equipment.”[16]

Meeting standards for data centers

The Telecommunications Industry Association‘s Telecommunications Infrastructure Standard for Data Centers[17] specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.[18]

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces,[19] provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:

  • Operate and manage a carrier’s telecommunication network
  • Provide data center based applications directly to the carrier’s customers
  • Provide hosted applications for a third party to provide services to their customers
  • Provide a combination of these and similar data center applications

Data center transformation

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.[20] The typical projects within a data center transformation initiative include standardization/consolidation, virtualizationautomation and security.

  • Standardization/consolidation: Reducing the number of data centers[21][22] and avoiding server sprawl[23] (both physical and virtual)[24] often includes replacing aging data center equipment,[25] and is aided by standardization.[26]
  • Virtualization: Lowers capital and operational expenses,[27] reduces energy consumption.[28] Virtualized desktops can be hosted in data centers and rented out on a subscription basis.[29] Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.[30]
  • Automating: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not just when facing fewer skilled IT workers.[26]
  • Securing: Protection of virtual systems is integrated with the existing security of physical infrastructures.[31]

Raised floor

Perforated cooling floor tile.

A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson.[32]

Although the first raised floor computer room was made by IBM in 1956,[33] and raised floors have “been around since the 1960s”,[34] it was not until the 1970s that it became common for computer centers to use them to allow cool air to circulate more efficiently.[35][36]

The first purpose of the raised floor was to allow access for wiring.[33]

Lights out

The “lights-out”[37] data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.[38][39]
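
A minimal sketch of the kind of unattended monitoring such a facility depends on is shown below. It assumes each device exposes a management endpoint on a known TCP port; the host names and ports are hypothetical examples, not part of any specific product.

    # Minimal sketch of unattended reachability monitoring for remotely
    # managed equipment. Host names and ports are hypothetical examples.
    import socket

    MANAGEMENT_ENDPOINTS = [
        ("pdu-rack-01.example.net", 443),   # power distribution unit web UI
        ("bmc-node-17.example.net", 623),   # IPMI baseboard management controller
    ]

    def is_reachable(host, port, timeout=3.0):
        """Return True if a TCP connection to the management port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in MANAGEMENT_ENDPOINTS:
        status = "up" if is_reachable(host, port) else "DOWN"
        print(f"{host}:{port} {status}")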


History

The term firewall originally referred to a wall intended to confine a fire within a line of adjacent buildings.[4] Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the late 1980s to network technology[5] that emerged when the Internet was fairly new in terms of its global use and connectivity.[6] The predecessors to firewalls for network security were routers used in the late 1980s. Because they already segregated networks, routers could apply filtering to packets crossing them.[7]

Before it was used in real-life computing, the term appeared in the 1983 computer-hacking movie WarGames, and possibly inspired its later use.[8]

Types

Firewalls are categorized as network-based or host-based systems. Network-based firewalls are positioned between two or more networks, typically between the LAN and WAN.[9] They are either a software appliance running on general-purpose hardware, a hardware appliance running on special-purpose hardware, or a virtual appliance running on a virtual host controlled by a hypervisor. Firewall appliances may also offer non-firewall functionality, such as DHCP[10][11] or VPN[12] services. Host-based firewalls are deployed directly on the host itself to control network traffic or other computing resources.[13][14] This can be a daemon or service that is part of the operating system, or an agent application that provides protection.

An illustration of a network-based firewall within a network

Packet filter

The first reported type of network firewall is called a packet filter, which inspects packets transferred between computers. The firewall maintains an access control list that dictates which packets will be looked at and what action, if any, should be applied, with the default action set to silent discard. The three basic actions regarding the packet are a silent discard, discard with an Internet Control Message Protocol or TCP reset response to the sender, and forwarding to the next hop.[15] Packets may be filtered by source and destination IP address, protocol, and source and destination port. The bulk of Internet communication in the 20th and early 21st centuries used either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP) in conjunction with well-known ports, enabling firewalls of that era to distinguish between specific types of traffic such as web browsing, remote printing, email transmission, and file transfers.[16][17]
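
A stateless packet filter of this kind can be pictured as a rule table walked in order, with silent discard as the default action. The sketch below is illustrative only; the rules, addresses, and field names are not taken from any particular firewall product.

    # Minimal sketch of a stateless packet filter: rules are checked in
    # order and the default action is a silent discard ("drop").
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        protocol: str   # e.g. "tcp" or "udp"
        dst_port: int

    # Each rule pairs a match predicate with one of the three basic
    # actions described above: forward, reject (ICMP/TCP reset), drop.
    RULES = [
        (lambda p: p.protocol == "tcp" and p.dst_port == 80, "forward"),
        (lambda p: p.protocol == "tcp" and p.dst_port == 23, "reject"),
    ]

    def filter_packet(packet, rules=RULES, default="drop"):
        """Return the action of the first matching rule, else the default."""
        for matches, action in rules:
            if matches(packet):
                return action
        return default

    print(filter_packet(Packet("203.0.113.5", "198.51.100.7", "tcp", 80)))  # forward
    print(filter_packet(Packet("203.0.113.5", "198.51.100.7", "udp", 53)))  # drop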

The first paper on firewall technology was published in 1987, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture.[18] In 1992, Steven McCanne and Van Jacobson released a paper on the BSD Packet Filter (BPF) while at Lawrence Berkeley Laboratory.[19][20]

Connection tracking

Flow of network packets through Netfilter, a Linux kernel module

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways.[21]

Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port number the two IP addresses are using at layer 4 (transport layer) of the OSI model for their conversation, allowing examination of the overall exchange between the nodes.[22]
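
One way to picture this behaviour is a table keyed by the transport-layer 4-tuple: outbound traffic creates an entry, and inbound packets are accepted only if they answer a remembered conversation. The addresses and ports in the sketch below are illustrative.

    # Minimal sketch of stateful (second-generation) filtering: remember
    # the 4-tuple of each outbound conversation and accept only replies.
    connections = set()

    def track_outbound(src_ip, src_port, dst_ip, dst_port):
        """Record the conversation so that replies can be matched later."""
        connections.add((src_ip, src_port, dst_ip, dst_port))

    def allow_inbound(src_ip, src_port, dst_ip, dst_port):
        """Accept an inbound packet only if it answers a tracked conversation."""
        return (dst_ip, dst_port, src_ip, src_port) in connections

    track_outbound("192.0.2.10", 51514, "198.51.100.7", 443)
    print(allow_inbound("198.51.100.7", 443, "192.0.2.10", 51514))   # True: reply
    print(allow_inbound("203.0.113.99", 443, "192.0.2.10", 51514))   # False: unsolicited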

Application layer

Marcus Ranum, Wei Xu, and Peter Churchyard released an application firewall known as Firewall Toolkit (FWTK) in October 1993.[23] This became the basis for Gauntlet firewall at Trusted Information Systems.[24][25]

The key benefit of application layer filtering is that it can understand certain applications and protocols such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP). This allows it to identify unwanted applications or services using a non-standard port, or to detect whether an allowed protocol is being abused.[26] It can also provide unified security management, including enforced encrypted DNS and virtual private networking.[27][28][29]
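
A rough illustration of this idea is checking whether the payload on a port actually looks like the protocol expected there. The sketch below flags HTTP-like traffic on ports where HTTP is not allowed; the port policy and the prefix heuristic are assumptions made for illustration, not a production protocol parser.

    # Minimal sketch of an application-layer check: flag payloads that
    # look like HTTP but arrive on a port where HTTP is not expected.
    HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")
    ALLOWED_HTTP_PORTS = {80, 8080}   # illustrative policy

    def looks_like_http(payload: bytes) -> bool:
        return payload.startswith(HTTP_METHODS)

    def classify(dst_port: int, payload: bytes) -> str:
        if looks_like_http(payload) and dst_port not in ALLOWED_HTTP_PORTS:
            return "suspicious: HTTP on a non-standard port"
        return "ok"

    print(classify(80, b"GET /index.html HTTP/1.1\r\n"))   # ok
    print(classify(4444, b"GET /shell HTTP/1.1\r\n"))      # suspicious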

As of 2012, next-generation firewalls provide a wider range of inspection at the application layer, extending deep packet inspection functionality to a broader set of applications and services.

Endpoint specific

Endpoint-based application firewalls function by determining whether a process should accept any given connection. Application firewalls filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers. Application firewalls that hook into socket calls are also referred to as socket filters.[citation needed]
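
The per-process view can be approximated in user space with the cross-platform psutil library, whose net_connections() call reports the owning process ID for each socket. The allow-list below is a hypothetical policy, and a real endpoint firewall hooks socket calls in the kernel rather than polling, so this is only a rough sketch.

    # Rough sketch of an endpoint firewall's per-process view, using the
    # psutil library (pip install psutil). The allow-list is hypothetical;
    # real host firewalls hook socket calls instead of polling.
    import psutil

    ALLOWED_PROCESSES = {"firefox", "chrome", "ssh"}   # hypothetical policy

    def audit_connections():
        for conn in psutil.net_connections(kind="inet"):
            if conn.pid is None or not conn.raddr:
                continue   # no owning process reported, or no remote peer
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                continue
            verdict = "allow" if name.lower() in ALLOWED_PROCESSES else "flag"
            print(f"{verdict}: {name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")

    audit_connections()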

]]>
Server RAM and Desktop RAM https://www.pearlinfotech.us/product/ram-memory/ Wed, 19 Oct 2022 10:54:36 +0000 https://webservicesdelhi.com/pearlinfo/?post_type=product&p=2262 There is a significant difference between server and desktop memory systems. Older desktops used SIMMs (Single In-line Memory Modules), whose contacts on the two sides of the module carry the same signals, while servers and modern desktops use DIMMs (Dual In-line Memory Modules), whose contacts are independent on each side and therefore support a wider data path. Server memory additionally tends to use features such as error-correcting code (ECC) and registered (buffered) modules, so that large amounts of memory and many simultaneous connections can be managed with minimal faults.

]]>
Bar Code Scanner https://www.pearlinfotech.us/product/bar-code-scanner/ Wed, 19 Oct 2022 10:53:27 +0000 https://webservicesdelhi.com/pearlinfo/?post_type=product&p=2259 Types of barcode scanners

Technology

A handheld barcode scanner

Barcode readers can be differentiated by technologies as follows:

Pen-type readers

Pen-type readers consist of a light source and photodiode that are placed next to each other in the tip of a pen. To read a barcode, the person holding the pen must move the tip of it across the bars at a relatively uniform speed. The photodiode measures the intensity of the light reflected back from the light source as the tip crosses each bar and space in the printed code. The photodiode generates a waveform that is used to measure the widths of the bars and spaces in the barcode. Dark bars in the barcode absorb light and white spaces reflect light so that the voltage waveform generated by the photodiode is a representation of the bar and space pattern in the barcode. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded.
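
The waveform-to-widths step can be sketched as thresholding the sampled photodiode voltage into dark and light levels and then measuring run lengths. The sample values and threshold below are invented for illustration; a real decoder would also normalize for scan speed.

    # Minimal sketch of turning a sampled photodiode waveform into bar
    # and space widths: threshold the samples, then measure run lengths.
    from itertools import groupby

    samples = [0.9, 0.9, 0.1, 0.1, 0.1, 0.9, 0.1, 0.9, 0.9, 0.9]  # reflected light
    THRESHOLD = 0.5   # below this, the pen tip is over a dark bar

    def widths(samples, threshold=THRESHOLD):
        """Return (element, run_length) pairs: 'bar' for dark, 'space' for light."""
        levels = ["bar" if s < threshold else "space" for s in samples]
        return [(element, len(list(run))) for element, run in groupby(levels)]

    print(widths(samples))
    # [('space', 2), ('bar', 3), ('space', 1), ('bar', 1), ('space', 3)]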

Laser scanners

See also: Laser scanning

Laser scanners direct the laser beam back and forth across the barcode. As with the pen-type reader, a photo-diode is used to measure the intensity of the light reflected back from the barcode. In both pen readers and laser scanners, the light emitted by the reader is rapidly varied in brightness with a data pattern and the photo-diode receive circuitry is designed to detect only signals with the same modulated pattern.

CCD readers (also known as LED scanners)

Charge-coupled device (CCD) readers use an array of hundreds of tiny light sensors lined up in a row in the head of the reader. Each sensor measures the intensity of the light immediately in front of it. Each individual light sensor in the CCD reader is extremely small, and because there are hundreds of sensors lined up in a row, a voltage pattern identical to the pattern in a barcode is generated in the reader by sequentially measuring the voltages across each sensor in the row. The important difference between a CCD reader and a pen or laser scanner is that the CCD reader measures ambient light reflected from the barcode, whereas pen or laser scanners measure reflected light of a specific frequency originating from the scanner itself. LED scanners can also be made using CMOS sensors and are replacing earlier laser-based readers.[1][better source needed]

Camera-based readers

Two-dimensional imaging scanners are a newer type of barcode reader. They use a camera and image processing techniques to decode the barcode.

Video camera readers use small video cameras with the same CCD technology as in a CCD barcode reader except that instead of having a single row of sensors, a video camera has hundreds of rows of sensors arranged in a two dimensional array so that they can generate an image.

Large field-of-view readers use high-resolution industrial cameras to capture multiple barcodes simultaneously. All of the barcodes appearing in the image can be decoded at once, either instantly (as with ImageID's patents and code creation tools) or by means of plugins (for example, Barcodepedia used a Flash application and a webcam to query a database).

Omnidirectional barcode scanners

Omnidirectional scanning uses a “series of straight or curved scanning lines of varying directions in the form of a starburst, a Lissajous curve, or other multiangle arrangement [that] are projected at the symbol and one or more of them will be able to cross all of the symbol’s bars and spaces, no matter what the orientation”.[2] Almost all of these scanners use a laser. Unlike simpler single-line laser scanners, they produce a pattern of beams in varying orientations, allowing them to read barcodes presented at different angles. Most of them use a single rotating polygonal mirror and an arrangement of several fixed mirrors to generate their complex scan patterns.

Omnidirectional scanners are most familiar through the horizontal scanners in supermarkets, where packages are slid over a glass or sapphire window. There are a range of different omnidirectional units available which can be used for differing scanning applications, ranging from retail type applications with the barcodes read only a few centimetres away from the scanner to industrial conveyor scanning where the unit can be a couple of metres away or more from the code. Omnidirectional scanners are also better at reading poorly printed, wrinkled, or even torn barcodes.

Cell phone cameras

While cell phone cameras without auto-focus are not ideal for reading some common barcode formats, there are 2D barcodes optimized for cell phones, such as QR (Quick Response) codes and Data Matrix codes, which can be read quickly and accurately with or without auto-focus.[3]

Cell phone cameras open up a number of applications for consumers. For example:

  • Movies: DVD/VHS movie catalogs.
  • Music: CD catalogs – playing an MP3 when scanned.
  • Books: book catalogs and devices.
  • Groceries: nutrition information, making shopping lists when the last of an item is used, etc.
  • Personal property inventory (for insurance and other purposes): codes scanned into personal finance software when items are entered. Scanned receipt images can then be automatically associated with the appropriate entries, and the barcodes can later be used to rapidly weed out paper copies not required to be retained for tax or asset inventory purposes.
  • If retailers put barcodes on receipts that allowed downloading an electronic copy, or encoded the entire receipt in a 2D barcode, consumers could easily import data into personal finance, property inventory, and grocery management software. Receipts scanned on a scanner could be automatically identified and associated with the appropriate entries in finance and property inventory software.
  • Consumer tracking from the retailer perspective (for example, loyalty card programs that track consumers’ purchases at the point of sale by having them scan a QR code).

A number of enterprise applications using cell phones are appearing:

  • Access control (for example, ticket validation at venues), inventory reporting (for example, tracking deliveries), and asset tracking (for example, anti-counterfeiting).[4]
  • Recent versions of the Android, iOS, and Windows Phone mobile operating systems feature QR or barcode scanners built in, usually accessible from their respective camera applications.

Housing

A large multifunction barcode scanner being used to monitor the transportation of packages of radioactive pharmaceuticals

Barcode readers can be distinguished based on housing design as follows:

Handheld scanner

with a handle and typically a trigger button for switching on the light source. Scanners like this are used in factory and farm automation for quality management and shipping.

PDA scanner (or Auto-ID PDA)

PDA with a built-in barcode reader or attached barcode scanner.

Automatic reader

back-office equipment used to read barcoded documents at high speed (50,000 per hour).

Cordless scanner (or Wireless scanner)

a cordless barcode scanner is powered by a battery fitted inside it rather than by mains electricity, and transfers data wirelessly to a connected device such as a PC.

Barcode library

Main article: Barcode library (or Barcode SDK)

Currently, any camera-equipped device, or any device with a document scanner, can be used as a barcode reader with special software libraries known as barcode libraries. These allow developers to add barcode features to desktop, web, mobile, or embedded applications. In this way, the combination of barcode technology and a barcode library makes it possible to implement, at low cost, automatic document processing (OMR), package tracking applications, or even augmented reality applications.
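
As a concrete illustration, and assuming the open-source Python packages pyzbar and Pillow are installed, decoding the barcodes in an image takes only a few lines; the file name below is a placeholder.

    # Short example using the open-source pyzbar and Pillow packages
    # (pip install pyzbar pillow). "label.png" is a placeholder file name.
    from PIL import Image
    from pyzbar.pyzbar import decode

    for symbol in decode(Image.open("label.png")):
        # Each result carries the symbology (e.g. EAN13, QRCODE) and the payload.
        print(symbol.type, symbol.data.decode("utf-8"))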

Methods of connection

Early serial interfaces

Early barcode scanners, of all formats, almost universally used the then-common RS-232 serial interface. This was an electrically simple means of connection, and the software to access it was also relatively simple, although it needed to be written for specific computers and their serial ports.

Proprietary interfaces

There are a few other less common interfaces. These were used in large EPOS systems with dedicated hardware, rather than attaching to existing commodity computers. In some of these interfaces, the scanning device returned a “raw” signal proportional to the intensities seen while scanning the barcode. This was then decoded by the host device. In some cases the scanning device would convert the symbology of the barcode to one that could be recognized by the host device, such as Code 39.

Keyboard wedge (USB, PS/2, etc.)

PS/2 keyboard and mouse ports

As the PC with its various standard interfaces evolved, it became ever easier to connect physical hardware to it. Also, there were commercial incentives to reduce the complexity of the associated software. The early “keyboard wedge” hardware plugged in between the PS/2 port and the keyboard, with characters from the barcode scanner appearing exactly as if they had been typed at the keyboard. Today the term is used more broadly for any device which can be plugged in and contribute to the stream of data coming “from the keyboard”. Keyboard wedges plugging in via the USB interface are readily available.

The “keyboard wedge” approach makes adding things such as barcode readers to systems simple. The software may well need no changes.

The concurrent presence of two “keyboards” does require some care on the part of the user. Also, barcodes often offer only a subset of the characters offered by a normal keyboard.
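
Because a wedge scanner simply “types” the decoded characters, usually followed by an Enter keystroke (the terminator is configurable on most scanners, so treat that as an assumption), ordinary keyboard-reading code receives the data unchanged, as in this minimal sketch.

    # Minimal sketch of why keyboard-wedge scanners need no special
    # software: scanned characters arrive as ordinary keystrokes, and
    # the assumed trailing Enter terminates each read.
    while True:
        code = input("Scan an item (or press Enter to quit): ").strip()
        if not code:
            break
        print(f"received barcode: {code}")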

USB

Subsequent to the PS/2 era, barcode readers began to use USB ports rather than the keyboard port, this being more convenient. To retain the easy integration with existing programs, it was sometimes necessary to load a device driver called a “software wedge”, which facilitated the keyboard-impersonating behavior of the old “keyboard wedge” hardware.

Today, USB barcode readers are “plug and play”, at least in Windows systems. Any necessary drivers are loaded when the device is plugged in.

In many cases, a choice of USB interface types (HID, CDC) is provided. Some have PoweredUSB.

Wireless networking

Some modern handheld barcode readers can be operated in wireless networks according to IEEE 802.11g (WLAN) or IEEE 802.15.1 (Bluetooth). Some barcode readers also support radio frequencies such as 433 MHz or 910 MHz. Readers without external power sources require their batteries to be recharged occasionally, which may make them unsuitable for some uses.

Resolution

The scanner resolution is determined by the size of the dot of light emitted by the reader. If the dot is wider than any bar or space in the barcode, it will overlap two elements (two spaces or two bars) and may produce incorrect output. Conversely, if the dot is too small, it can misinterpret a stray spot or printing flaw on the barcode as an element, again making the output wrong.

The most commonly used dimension is 13 mil (0.013 in or 0.33 mm), although some scanners can read codes with dimensions as small as 3 mil (0.003 in or 0.075 mm). Smaller bar codes must be printed at high resolution to be read accurately.
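
A quick way to sanity-check a scanner against a given symbol is to compare the scanner's dot size with the barcode's narrowest element (its X dimension), following the rule above that the dot should not be wider than any bar or space. The sketch below also shows the mil-to-millimetre conversion used in this section.

    # Small sketch comparing a scanner's light-dot size with a barcode's
    # narrowest element (X dimension). 1 mil = 0.001 inch = 0.0254 mm.
    MM_PER_MIL = 0.0254

    def can_resolve(dot_size_mil, x_dimension_mil):
        """True if the dot is no wider than the narrowest bar or space."""
        return dot_size_mil <= x_dimension_mil

    print(can_resolve(13, 13))           # True: a standard 13 mil code
    print(can_resolve(13, 3))            # False: a 3 mil code needs a finer dot
    print(f"{13 * MM_PER_MIL:.2f} mm")   # 0.33 mm, matching the figure above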


]]>
Receipt Printer https://www.pearlinfotech.us/product/receipt-printer/ Wed, 19 Oct 2022 10:52:15 +0000 https://webservicesdelhi.com/pearlinfo/?post_type=product&p=2256