data recovery

Data recovery is the process of restoring data that has been lost, accidentally deleted, corrupted or made inaccessible. In enterprise IT, data recovery typically refers to the restoration of data to a desktop, laptop, server or external storage system from a backup.

Causes of data loss

Most data loss is caused by human error, rather than malicious attacks, according to U.K. statistics released in 2016. In fact, human error accounted for almost two-thirds of the incidents reported to the U.K. Information Commissioner’s Office. The most common type of breach occurred when someone sent data to the wrong person.

Other common causes of data loss include power outages, natural disasters, equipment failures or malfunctions, accidental deletion of data, unintentionally formatting a hard drive, damaged hard drive read/write heads, software crashes, logical errors, firmware corruption, continued use of a computer after signs of failure, physical damage to hard drives, laptop theft, and spilling coffee or water on a computer.

How data recovery works

The data recovery process varies depending on the circumstances of the data loss, the software used to create the backup and the backup target media. For example, many desktop and laptop backup software platforms allow users to restore lost files themselves, while restoration of a corrupted database from a tape backup is a more complicated process that requires IT intervention. Data recovery services can also be used to retrieve files that were not backed up and were accidentally deleted from a computer’s file system, but still remain on the hard disk in fragments.

Data recovery is possible because a file and the information about that file are stored in different places. For example, the Windows operating system uses a file allocation table to track which files are on the hard drive and where they are stored. The allocation table is like a book’s table of contents, while the actual files on the hard drive are like the pages in the book.
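
The relationship between the allocation table and the data it points to can be illustrated with a small sketch. The structures below are hypothetical, in-memory stand-ins for the on-disk formats (real FAT or NTFS metadata is considerably more complex); the point is only that a typical delete removes the table entry, not the underlying data.

```python
# Toy model of a file allocation table: metadata lives apart from the data
# blocks. Hypothetical structures for illustration; real layouts differ.

disk_blocks = {}          # block number -> raw bytes ("the pages of the book")
allocation_table = {}     # file name -> block numbers ("the table of contents")

def write_file(name, data, start_block):
    blocks = []
    for i in range(0, len(data), 512):
        block_no = start_block + i // 512
        disk_blocks[block_no] = data[i:i + 512]
        blocks.append(block_no)
    allocation_table[name] = blocks

def delete_file(name):
    # A typical "delete" only drops the table entry; the blocks are untouched
    # until they are overwritten, which is why recovery tools can find them.
    del allocation_table[name]

write_file("report.docx", b"quarterly figures...", start_block=100)
delete_file("report.docx")

print("report.docx" in allocation_table)   # False: the file looks gone
print(disk_blocks[100][:10])               # b'quarterly ': data still on "disk"
```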

When data needs to be recovered, it’s usually only the file allocation table that’s not working properly. The actual file to be recovered may still be on the hard drive in flawless condition. If the file still exists, and it is not damaged or encrypted, it can be recovered. If the file is damaged, missing or encrypted, there are other ways to recover it. Even a physically damaged file can often be partially reconstructed. Many applications, such as Microsoft Office, put uniform headers at the beginning of their files to designate that they belong to that application, and some utilities can use these headers to reconstruct the file manually, so at least some of the file can be recovered.
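
This header-based reconstruction is often called file carving. The sketch below scans a raw disk image for a few well-known file signatures; the magic numbers shown are widely published, but the image path and the carving logic are simplified assumptions, and real recovery utilities also handle footers, file lengths and fragmentation.

```python
# Minimal file-carving sketch: scan a raw image for known file headers.
# Only reports where a known signature appears; real tools do much more.

SIGNATURES = {
    b"PK\x03\x04": "ZIP/OOXML (e.g., .docx, .xlsx)",
    b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "legacy MS Office (.doc, .xls)",
    b"%PDF": "PDF",
}

def carve_headers(image_path):
    """Yield (offset, file type) for every known header found in the image."""
    with open(image_path, "rb") as f:
        data = f.read()
    for magic, kind in SIGNATURES.items():
        start = 0
        while (pos := data.find(magic, start)) != -1:
            yield pos, kind
            start = pos + 1

# Hypothetical usage against a raw image file named disk.img:
# for offset, kind in carve_headers("disk.img"):
#     print(f"possible {kind} file at byte offset {offset}")
```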

Most data recovery processes combine technologies, so organizations aren’t recovering data solely from tape. Recovering core applications and data from tape takes time, and an organization may need to access its data immediately after a disaster. There are also risks involved with transporting tapes.

In addition, not all production data at a remote location may be needed to resume operations. Therefore, it’s wise to identify what can be left behind and what data must be recovered.

Data recovery techniques

Instant recovery, also known as recovery in place, tries to eliminate the recovery window by redirecting user workloads to the backup server. A snapshot is created so the backup remains in a pristine state and all user write operations are redirected to that snapshot; users then work off the backup virtual machine (VM) and the recovery process begins in the background. Users have no idea the recovery is taking place, and once the recovery is complete, the user workload is redirected back to the original VM.
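
A toy model of the redirect-on-write idea behind instant recovery is sketched below. The dictionaries stand in for block storage and a snapshot overlay; they are purely illustrative assumptions, not any vendor's implementation.

```python
# Conceptual sketch of instant recovery (recovery in place): user I/O is
# served from the backup through a snapshot overlay while the restore of the
# original VM runs in the background. Purely illustrative in-memory model.

backup_image = {0: b"os", 1: b"app", 2: b"data"}   # pristine backup blocks
snapshot_overlay = {}                              # user writes land here
restored_vm = {}                                   # background restore target

def read_block(n):
    # Reads prefer the overlay, so users immediately see their own writes.
    return snapshot_overlay.get(n, backup_image[n])

def write_block(n, data):
    # Writes never touch the backup image, keeping it in a pristine state.
    snapshot_overlay[n] = data

def background_restore():
    # Copy the pristine backup back to the original VM, then apply the
    # overlay so user writes made during recovery are not lost.
    restored_vm.update(backup_image)
    restored_vm.update(snapshot_overlay)

write_block(2, b"new data")           # user keeps working off the backup VM
background_restore()                  # recovery completes in the background
print(read_block(2), restored_vm[2])  # b'new data' b'new data'
```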

One way to avoid the time-consuming and costly process of data recovery is to prevent the data loss from ever taking place. Data loss prevention (DLP) products help companies identify and stop data leaks, and come in two versions: stand-alone and integrated.

  • Stand-alone DLP products can reside on specialized appliances or be sold as software.
  • Integrated DLP products are usually found on perimeter security gateways and are useful for detecting sensitive data at rest and in motion.

Unlike stand-alone data loss prevention products, integrated DLP products usually do not share the same management consoles, policy management engines and data storage.

Integrating data recovery into a DR plan

An organization’s disaster recovery plan should identify the people in the organization responsible for recovering data, provide a strategy for how data will be recovered, and document acceptable recovery point and recovery time objectives. It should also include the steps to take in recovering data.
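
As a rough illustration, recovery objectives and steps might be recorded per system in a form like the following; the field names, addresses and figures are hypothetical, not a formal schema.

```python
# Hypothetical sketch of how recovery objectives and steps might be recorded
# per system in a DR plan; field names and figures are illustrative only.

dr_plan = {
    "recovery_owner": "storage-team@example.com",      # who recovers the data
    "systems": {
        "erp-database": {
            "recovery_point_objective_minutes": 15,    # max tolerable data loss
            "recovery_time_objective_minutes": 60,     # max tolerable downtime
            "recovery_steps": [
                "fail over to the replica at the alternate site",
                "restore the latest transaction logs from backup",
                "validate data integrity before resuming user access",
            ],
        },
    },
}

rto = dr_plan["systems"]["erp-database"]["recovery_time_objective_minutes"]
print(f"ERP database must be recoverable within {rto} minutes")
```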

If critical systems or the facility housing them are rendered inoperable, affected business units must be advised to prepare to relocate to an alternate location. If hardware systems have been damaged or destroyed, processes must be activated to recover damaged hardware. Processes to recover damaged software should also be part of the DR plan.

Some resources worth reviewing are the National Institute of Standards and Technology’s SP 800-34, as well as the ISO 24762 and ISO 27031 standards. A business impact analysis can help an organization understand its data requirements and identify the minimum amount of time needed to recover data to its previous state. One challenge in preventing data loss and recovering data is getting a handle on the unstructured data stored on various devices.

But there are steps that can mitigate the damage. Start by classifying data based on its sensitivity and determine which classifications must be secured. Then, determine how much data would have to be compromised to affect the organization. Undertake a risk assessment to determine what controls are needed to protect sensitive data. Finally, put systems in place to store and protect that content.
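
A simple way to picture the outcome of such a classification exercise is a mapping from sensitivity labels to required controls. The labels and controls below are illustrative assumptions rather than a formal standard.

```python
# Illustrative mapping from sensitivity labels to required controls; the
# labels and controls are assumptions, not a formal classification standard.

classification_policy = {
    "public":       [],
    "internal":     ["access control"],
    "confidential": ["access control", "encryption at rest"],
    "restricted":   ["access control", "encryption at rest", "DLP monitoring"],
}

def required_controls(label):
    """Return the controls a dataset needs based on its classification."""
    return classification_policy[label]

print(required_controls("confidential"))  # ['access control', 'encryption at rest']
```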

data center

A data center (or datacenter) is a facility composed of networked computers and storage that businesses and other organizations use to organize, process, store and disseminate large amounts of data. A business typically relies heavily upon the applications, services and data contained within a data center, making it a focal point and critical asset for everyday operations.

How data centers work

Data centers are not a single thing, but rather, a conglomeration of elements. At a minimum, data centers serve as the principal repositories for all manner of IT equipment, including servers, storage subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks used to organize and interconnect the IT equipment. A data center must also contain an adequate infrastructure, such as power distribution and supplemental power subsystems. This also includes electrical switching; uninterruptible power supplies; backup generators; ventilation and data center cooling systems, such as in-row cooling configurations and computer room air conditioners; and adequate provisioning for network carrier (telco) connectivity. All of this demands a physical facility with physical security and sufficient square footage to house the entire collection of infrastructure and equipment.

What is data center consolidation?

There is no requirement for a single data center, and modern businesses may use two or more data center installations across multiple locations for greater resilience and better application performance, which lowers latency by locating workloads closer to users.

Conversely, a business with multiple data centers may opt to consolidate them, reducing the number of locations in order to minimize the costs of IT operations. Consolidation typically occurs during mergers and acquisitions, when the acquiring business doesn’t need the data centers owned by the acquired business.

What is data center colocation?

Data center operators can also pay a fee to rent server space in a colocation facility. Colocation is an appealing option for organizations that want to avoid the large capital expenditures associated with building and maintaining their own data centers. Today, colocation providers are expanding their offerings to include managed services, such as interconnectivity, allowing customers to connect to the public cloud.

Because many providers today offer managed services along with their colocation facilities, the definition of managed services becomes blurry, as all vendors market the term in a slightly different way. The important distinction to make is this:

  • Colocation — The organization pays a vendor to house their hardware in a facility. The customer is paying for the space alone.
  • Managed services — The organization pays a vendor to actively maintain or monitor the hardware in some way, whether it be through performance reports, interconnectivity, technical support or disaster recovery.

Data center tiers

Data centers are not defined by their physical size or style. Small businesses may operate successfully with several servers and storage arrays networked within a convenient closet or small room, while major computing organizations, such as Facebook, Amazon or Google, may fill an enormous warehouse space with data center equipment and infrastructure. In other cases, data centers can be assembled in mobile installations, such as shipping containers, also known as data centers in a box, which can be moved and deployed as required.

However, data centers can be defined by various levels of reliability or resilience, sometimes referred to as data center tiers. In 2005, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standard ANSI/TIA-942, “Telecommunications Infrastructure Standard for Data Centers,” which defined four tiers of data center design and implementation guidelines. Each subsequent tier is intended to provide more resilience, security and reliability than the previous tier. For example, a tier 1 data center is little more than a server room, while a tier 4 data center offers redundant subsystems and high security.

Data center architecture and design

Although almost any suitable space could conceivably serve as a “data center,” the deliberate design and implementation of a data center requires careful consideration. Beyond the basic issues of cost and taxes, sites are selected based on a multitude of criteria, such as geographic location, seismic and meteorological stability, access to roads and airports, availability of energy and telecommunications, and even the prevailing political environment.

Once a site is secured, the data center architecture can be designed with attention to the mechanical and electrical infrastructure, as well as the composition and layout of the IT equipment. All of these issues are guided by the availability and efficiency goals of the desired data center tier.

Energy consumption and efficiency

Data center designs also recognize the importance of energy efficiency. A simple data center may need only a few kilowatts of energy, but an enterprise-scale data center installation can demand tens of megawatts or more. Today, the green data center, which is designed for minimum environmental impact through the use of low-emission building materials, catalytic converters and alternative energy technologies, is growing in popularity.

Data centers can also maximize efficiency through their physical layout using a method known as hot aisle/cold aisle layout. Server racks are lined up in alternating rows, with cold air intakes facing one way and hot air exhausts facing the other. The result is alternating hot and cold aisles, with the exhausts creating a hot aisle and the intakes creating a cold aisle. The exhausts are pointed toward the air conditioning equipment, which is often placed between the server cabinets in the row or aisle and distributes the cold air back to the cold aisle. This placement of the air conditioning equipment is known as in-row cooling.

Organizations often measure data center energy efficiency through a metric called power usage effectiveness (PUE), which represents the ratio of total power entering the data center divided by the power used by IT equipment. However, the subsequent rise of virtualization has allowed for much more productive use of IT equipment, resulting in much higher efficiency, lower energy use and energy cost mitigation. Metrics such as PUE are no longer central to energy efficiency goals, but organizations may still gauge PUE and employ comprehensive power and cooling analyses to better understand and manage energy efficiency.
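
For example, a facility drawing 1,500 kW in total while its IT equipment consumes 1,000 kW has a PUE of 1.5. The small sketch below makes the arithmetic explicit; the figures are made up, not measurements from any real facility.

```python
# PUE is total facility power divided by the power consumed by IT equipment.
# The figures below are made-up examples, not measurements of a real facility.

def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness; 1.0 would mean all power reaches IT gear."""
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1500, it_equipment_kw=1000))  # 1.5
```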

Data center security and safety

Data center designs must also implement sound safety and security practices. For example, safety is often reflected in the layout of doorways and access corridors, which must accommodate the movement of large, unwieldy IT equipment, as well as permit employees to access and repair the infrastructure.

Fire suppression is another key safety area, and the extensive use of sensitive, high-energy electrical and electronic equipment precludes common sprinklers. Instead, data centers often use environmentally friendly chemical fire suppression systems, which effectively starve a fire of oxygen while mitigating collateral damage to the equipment. Because the data center is also a core business asset, comprehensive security measures, like badge access and video surveillance, help to detect and prevent malfeasance by employees, contractors and intruders.

Data center infrastructure management and monitoring

Modern data centers make extensive use of monitoring and management software. Software such as data center infrastructure management (DCIM) tools allows remote IT administrators to oversee the facility and equipment, measure performance, detect failures and implement a wide array of corrective actions without ever physically entering the data center room.

The growth of virtualization has added another important dimension to data center infrastructure management. Virtualization now supports the abstraction of servers, networks and storage, allowing computing resources to be organized into pools without regard to their physical location. Administrators can then provision workloads, storage instances and even network configurations from those common resource pools. When administrators no longer need those resources, they can return them to the pool for reuse. Because all of these network, storage and server virtualization actions can be implemented through software, the term software-defined data center has gained traction.
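
A minimal sketch of the pooling idea, with hypothetical names and capacities, might look like this:

```python
# Minimal sketch of a pooled, location-agnostic resource in a
# software-defined data center; names and capacities are hypothetical.

class ResourcePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb

    def provision(self, workload, gb):
        """Carve capacity out of the pool for a workload, wherever it runs."""
        if gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.capacity_gb -= gb
        return {"workload": workload, "gb": gb}

    def release(self, allocation):
        """Return capacity to the pool for reuse when it is no longer needed."""
        self.capacity_gb += allocation["gb"]

storage = ResourcePool(capacity_gb=10_000)
alloc = storage.provision("analytics-vm", gb=500)
storage.release(alloc)
print(storage.capacity_gb)  # 10000
```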

Data center vs. cloud

Data centers are increasingly implementing private cloud software, which builds on virtualization to add a level of automation, user self-service and billing/chargeback to data center administration. The goal is to allow individual users to provision workloads and other computing resources on demand without IT administrative intervention.

It is also increasingly possible for data centers to interface with public cloud providers. Platforms such as Microsoft Azure emphasize the hybrid use of local data centers with Azure or other public cloud resources. The result is not an elimination of data centers, but rather, the creation of a dynamic environment that allows organizations to run workloads locally or in the cloud or to move those instances to or from the cloud as desired.

History

The origins of the first data centers can be traced back to the 1940s and early computer systems like the Electronic Numerical Integrator and Computer (ENIAC). These early machines were complex to maintain and operate and had a slew of cables connecting all the necessary components. They were also used by the military, which meant specialized computer rooms with racks, cable trays, cooling mechanisms and access restrictions were necessary both to accommodate all the equipment and to implement the proper security measures.

However, it wasn’t until the 1990s, when IT operations started to gain complexity and inexpensive networking equipment became available, that the term data center first came into use. It became possible to store all of a company’s necessary servers in a room within the company. These specialized computer rooms were dubbed data centers within the companies, and the term gained traction.

Around the time of the dot-com bubble in the late nineties, the need for internet speed and a constant internet presence for companies necessitated larger facilities to house the amount of networking equipment needed. It was at this point that data centers became popular and began to resemble the ones described above.

Over the history of computing, as computers have become smaller and networks larger, the data center has evolved and shifted to accommodate the technology of the day.