Data centers and firewalls

A data center (American English)[1] or data centre (British English)[2][note 1] is a building, a dedicated space within a building, or a group of buildings[3] used to house computer systems and associated components, such as telecommunications and storage systems.[4][5]

In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.[1][2] A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet.[3]


History

NASA mission control computer room c. 1962

Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center.[7][note 2] Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes.[7][note 3] Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term “data center”, as applied to specially designed computer rooms, started to gain popular recognition about this time.[7][note 4]

The boom of data centers came during the dot-com bubble of 1997–2000.[8][note 5] Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs),[9] which provided enhanced capabilities, such as crossover backup: “If a Bell Atlantic line is cut, we can transfer them to … to minimize the time of outage.”[9]

The term cloud data center (CDC) has also been used.[10] Data centers typically cost a great deal to build and maintain.[8] Increasingly, the distinction between these terms has almost disappeared and they are being subsumed under the term “data center”.[11]

Requirements for modern data centers

Racks of telecommunications equipment in part of a data center

Modernization and data center transformation enhance performance and energy efficiency.[12]

Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment.

Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old.[12] Gartner, another research company, says data centers older than seven years are obsolete.[13] The growth in data (163 zettabytes by 2025[14]) is one factor driving the need for data centers to modernize.

Focus on modernization is not new: obsolete equipment was already being decried in 2007,[15] and in 2011 the Uptime Institute expressed concern about the age of the equipment in data centers.[note 6] By 2018 concern had shifted once again, this time to the age of the staff: “data center staff are aging faster than the equipment.”[16]

Meeting standards for data centers

The Telecommunications Industry Association’s Telecommunications Infrastructure Standard for Data Centers[17] specifies the minimum requirements for the telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to data centers of any size.[18]

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces,[19] provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:

  • Operate and manage a carrier’s telecommunication network
  • Provide data center based applications directly to the carrier’s customers
  • Provide hosted applications for a third party to provide services to their customers
  • Provide a combination of these and similar data center applications

Data center transformation

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.[20] The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation, and security.

  • Standardization/consolidation: Reducing the number of data centers[21][22] and avoiding server sprawl[23] (both physical and virtual)[24] often includes replacing aging data center equipment,[25] and is aided by standardization.[26]
  • Virtualization: Lowers capital and operational expenses[27] and reduces energy consumption.[28] Virtualized desktops can be hosted in data centers and rented out on a subscription basis.[29] Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.[30]
  • Automating: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not just when facing a shortage of skilled IT workers.[26] (A minimal sketch of such automation follows this list.)
  • Securing: Protection of virtual systems is integrated with the existing security of physical infrastructures.[31]
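
A minimal sketch of what the automation item above might look like in practice, assuming a hypothetical desired-state list of package versions per server (the hostnames, packages, and apply step are illustrative, not any specific tool's format):

  # Minimal configuration-automation sketch: compare each server's reported
  # package versions against a desired state and report the remediation needed.
  # Server names, package data, and the apply step are illustrative assumptions.
  desired = {"openssl": "3.0.13", "nginx": "1.24.0"}

  inventory = {
      "web-01": {"openssl": "3.0.13", "nginx": "1.22.1"},
      "web-02": {"openssl": "1.1.1w", "nginx": "1.24.0"},
  }

  for host, installed in inventory.items():
      for pkg, want in desired.items():
          have = installed.get(pkg)
          if have != want:
              print(f"{host}: upgrade {pkg} {have} -> {want}")  # apply step would go here

Real deployments typically express this desired state in a configuration-management or provisioning tool rather than ad-hoc scripts; the point is only that the target state is declared once, and drift is detected and corrected automatically.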

Raised floor

Perforated cooling floor tile.

A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson.[32]

Although IBM built the first raised-floor computer room in 1956,[33] and raised floors have “been around since the 1960s”,[34] it was in the 1970s that they became more common in computer centers, allowing cool air to circulate more efficiently.[35][36]

The first purpose of the raised floor was to allow access for wiring.[33]

Lights out

The “lights-out”[37] data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.[38][39]
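
As a loose illustration of such unattended operation, the sketch below polls remote devices and flags failures for automated remediation; the hostnames, the use of ping as a health check, and the remediation step are placeholder assumptions, not a real management API:

  # Illustrative lights-out automation loop: poll remote devices and trigger
  # remediation without on-site staff. Hostnames and the remediation step are
  # placeholders; a real site would use its own out-of-band management interfaces.
  import subprocess, time

  DEVICES = ["server-01.example.internal", "server-02.example.internal"]

  def healthy(host: str) -> bool:
      # A single ICMP ping as a stand-in for a real health check (Linux ping flags).
      return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                             stdout=subprocess.DEVNULL) == 0

  while True:
      for host in DEVICES:
          if not healthy(host):
              print(f"{host} unreachable; triggering automated remediation")
              # e.g. power-cycle via an out-of-band controller (placeholder)
      time.sleep(60)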

 

History

The term firewall originally referred to a wall intended to confine a fire within a line of adjacent buildings.[4] Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the late 1980s to network technology[5] that emerged when the Internet was fairly new in terms of its global use and connectivity.[6] The predecessors to firewalls for network security were routers used in the late 1980s. Because they already segregated networks, routers could apply filtering to packets crossing them.[7]

Before it was used in real-life computing, the term appeared in the 1983 computer-hacking movie WarGames, and possibly inspired its later use.[8]

Types

Firewalls are categorized as network-based or host-based systems. Network-based firewalls are positioned between two or more networks, typically between the LAN and WAN.[9] They are either a software appliance running on general-purpose hardware, a hardware appliance running on special-purpose hardware, or a virtual appliance running on a virtual host controlled by a hypervisor. Firewall appliances may also offer non-firewall functionality, such as DHCP[10][11] or VPN[12] services. Host-based firewalls are deployed directly on the host itself to control network traffic or other computing resources.[13][14] This can be a daemon or service as a part of the operating system or an agent application for protection.

An illustration of a network-based firewall within a network

Packet filter

The first reported type of network firewall is called a packet filter, which inspects packets transferred between computers. The firewall maintains an access control list which dictates what packets will be looked at and what action should be applied, if any, with the default action set to silent discard. Three basic actions regarding the packet consist of a silent discard, discard with an Internet Control Message Protocol or TCP reset response to the sender, and forward to the next hop.[15] Packets may be filtered by source and destination IP address, protocol, and source and destination port. The bulk of Internet communication in the 20th and early 21st century used either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP) in conjunction with well-known ports, enabling firewalls of that era to distinguish between specific types of traffic such as web browsing, remote printing, email transmission, and file transfers.[16][17]
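
A minimal sketch of how such an access control list might be evaluated, assuming hypothetical rule and packet structures rather than any particular firewall's API:

  # Hypothetical packet-filter sketch: evaluate an ordered access control list.
  # Field names and action labels are illustrative, not a real firewall's API.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Rule:
      src: Optional[str] = None      # source IP; None matches any
      dst: Optional[str] = None      # destination IP
      proto: Optional[str] = None    # "tcp" or "udp"
      dport: Optional[int] = None    # destination port
      action: str = "forward"        # "forward", "reject" (ICMP/TCP reset), or "drop"

  def decide(packet: dict, acl: list) -> str:
      for r in acl:
          if ((r.src is None or r.src == packet["src"]) and
              (r.dst is None or r.dst == packet["dst"]) and
              (r.proto is None or r.proto == packet["proto"]) and
              (r.dport is None or r.dport == packet["dport"])):
              return r.action
      return "drop"  # default action: silent discard

  acl = [Rule(proto="tcp", dport=80, action="forward"),  # allow web browsing
         Rule(proto="tcp", dport=25, action="reject")]   # refuse SMTP with an error response
  print(decide({"src": "10.0.0.5", "dst": "192.0.2.1", "proto": "tcp", "dport": 80}, acl))  # forward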

The first paper published on firewall technology appeared in 1987, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture.[18] In 1992, Steven McCanne and Van Jacobson released a paper on the BSD Packet Filter (BPF) while at Lawrence Berkeley Laboratory.[19][20]

Connection tracking

Flow of network packets through Netfilter, a Linux kernel module

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways.[21]

Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port number the two IP addresses are using at layer 4 (transport layer) of the OSI model for their conversation, allowing examination of the overall exchange between the nodes.[22]
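
A rough sketch of the idea, assuming a simplified state table keyed on the two endpoints' addresses and ports (illustrative only; real implementations such as Netfilter track considerably more state):

  # Illustrative connection-tracking sketch: a stateful firewall remembers the
  # (src IP, src port, dst IP, dst port) tuple of conversations it has allowed,
  # so reply packets can be matched to an existing flow instead of being re-filtered.
  established = set()

  def allow_outbound(src, sport, dst, dport):
      established.add((src, sport, dst, dport))

  def is_reply(src, sport, dst, dport):
      # A packet belongs to a tracked flow if it reverses an established tuple.
      return (dst, dport, src, sport) in established

  allow_outbound("10.0.0.5", 51000, "192.0.2.1", 80)     # client opens a web connection
  print(is_reply("192.0.2.1", 80, "10.0.0.5", 51000))    # True: the server's answer is accepted
  print(is_reply("203.0.113.9", 80, "10.0.0.5", 51000))  # False: an unrelated host is not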

Application layer

Marcus Ranum, Wei Xu, and Peter Churchyard released an application firewall known as Firewall Toolkit (FWTK) in October 1993.[23] This became the basis for Gauntlet firewall at Trusted Information Systems.[24][25]

The key benefit of application layer filtering is that it can understand certain applications and protocols such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP). This allows it to identify unwanted applications or services using a non-standard port, or to detect if an allowed protocol is being abused.[26] It can also provide unified security management, including enforced encrypted DNS and virtual private networking.[27][28][29]
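
As a hedged illustration, the toy check below guesses whether traffic on an arbitrary port looks like HTTP by inspecting the first bytes of the payload; the method list and port set are assumptions, and real application-layer firewalls use far more thorough protocol parsers:

  # Toy example: flag HTTP-like traffic running on a non-standard port.
  HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

  def looks_like_http(payload: bytes) -> bool:
      return payload.startswith(HTTP_METHODS)

  def check(dport: int, payload: bytes) -> str:
      if looks_like_http(payload) and dport not in (80, 443, 8080):
          return f"alert: HTTP-like traffic on non-standard port {dport}"
      return "ok"

  print(check(4444, b"GET /index.html HTTP/1.1\r\n"))  # alert
  print(check(80, b"GET /index.html HTTP/1.1\r\n"))    # ok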

As of 2012, next-generation firewalls provide a wider range of inspection at the application layer, extending deep packet inspection functionality toward areas such as web filtering, intrusion prevention, and user identity management.

Endpoint specific

Endpoint-based application firewalls function by determining whether a process should accept any given connection. Application firewalls filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers. Application firewalls that hook into socket calls are also referred to as socket filters.[citation needed]
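
A conceptual sketch of the rule check such a socket filter might perform, assuming a hypothetical per-process rule table (the OS-specific hooking mechanism itself is omitted):

  # Hypothetical endpoint firewall rule check: decide whether a local process
  # may open a given outbound connection. Process names and rules are made up.
  RULES = {
      "browser.exe": {80, 443},  # allowed destination ports per process
      "updater.exe": {443},
  }

  def may_connect(process_name: str, dport: int) -> bool:
      allowed_ports = RULES.get(process_name)
      if allowed_ports is None:
          return False           # unknown processes are blocked by default
      return dport in allowed_ports

  print(may_connect("browser.exe", 443))  # True
  print(may_connect("malware.exe", 80))   # False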