(v) To make a copy of data so that the additional copy may be used to restore the original in case of data loss.
(n) A collection of data stored on (usually remote) non-volatile storage media for purposes of recovery in case the original copy of data is lost or becomes inaccessible; also called a backup copy. To be useful for recovery, a backup must be made by copying the source data image when it is in a consistent state.
An interval of time during which a set of data can be backed up without seriously affecting applications that use the data.
Backup and recovery transfer protocol used by the EVault® Agent.
A bare-metal restore (BMR) is a restore in which the backed-up data is available in a form that allows one to restore a computer system from "bare metal"—that is, without any requirements as to previously installed software or operating system.
Formally called buffer-to-buffer credit (BBC) spoofing, and also called buffer-to-buffer credits, this technology effectively removes limitations on data throughput for long-distance transmissions in a Fibre Channel storage area network (SAN). Fibre Channel protocols usually limit the distance between the source and the destination network to within a few kilometers. Using buffer-to-buffer credits makes it possible to use offsite storage hundreds of kilometers away.
The ability of an organization to continue to function even after a disastrous event. Business continuity is accomplished through the deployment of redundant hardware and software, the use of fault tolerant systems, as well as a solid backup and recovery strategy.
Business continuity planning (BCP) covers both disaster recovery planning and business resumption planning. BCP is the preparation and testing of measures that protect business operations and also provide the means for the recovery of technologies in the event of any loss, damage, or failure of facilities.
A group of individuals responsible for maintaining the business recovery procedures and coordinating the recovery of business functions and processes. Also called a disaster recovery team.
The chronological sequence of recovery activities, or critical path, that must be followed to resume an acceptable level of operations following a business interruption or outage. This timeline may range from minutes to weeks, depending upon the recovery requirements and methodology.
An interface standard for the connection of storage devices and hosts in consumer electronic devices such as mobile and handheld devices. One of the primary goals of the standard is to standardize connections for small form factor hard disk drives such as one-inch microdrives. The standard is maintained by CE-ATA Workgroup.
Also called a data center chiller, a chiller is a cooling infrastructure used in data centers and industrial facilities. A chiller cooling system removes heat from one element and deposits it into another element. In large data centers, the chiller is used to cool the water used in heating, ventilation, and air-conditioning units. Due to the amount of heat produced by the many servers and systems in a data center, chiller cooling systems are operational around the clock. As such, a large percentage of the electricity consumed in a data center is used by the chiller.
The delivery over a network of appropriately configured virtual storage and related data services. Typically, cloud storage hides limits to scalability, is either self-provisioned or provisionless, and is billed based on consumption.
Next-generation data protection deployed in seamless combination—on-premise and cloud, licensed software and hosted services—to optimize performance, availability, and affordability.
Acronym for computer output to laser disk. The storage of data on optical disk, such as CD-ROMs. Storing large volumes of data on laser disk, as opposed to microfiche or microfilm, lets the user access and search for this information on a computer, avoid the duplication and security costs of protecting physical documents or film, and more readily distribute information.
A method of redundancy in which the secondary (backup) system is only called upon when the primary system fails. The system on cold standby receives scheduled data backups, but less frequently than a warm standby. Cold standby systems are used for non-critical applications or in cases where data is changed infrequently.
Often referred to as a CompactFlash or CF card, CompactFlash is a very small removable mass storage device that relies on flash memory technology, a storage technology that does not require a battery to retain data indefinitely. CompactFlash cards can support 3.3V and 5V operation and can switch between the two, in contrast to other small-form factor flash memory cards that can only operate at one voltage. CompactFlash applications include PDAs, cellular phones, digital cameras, and photo printers.
The state of being in accordance with a standard, specification, or clearly defined requirements, including legal requirements. In IT, the "compliance market" is centered around storage and systems that support the retention and discovery of data as required by law or regulation.
An electronic document comprising more than one type of file. For example, a text file and image file.
An alternative theory to Nyquist's Law that indicates signals and images can be reconstructed from fewer measurements than what is usually considered necessary. In contrast, Nyquist's Law states that a signal must be sampled at a rate of at least twice its highest analog frequency in order to extract all of the information. Also called compressive sampling.
The process of encoding data to reduce its size. Lossy compression (compression using a technique in which a portion of the original information is lost) is acceptable for some forms of data (for example, digital images) in some applications. However, for most IT applications, lossless compression (compression using a technique that preserves the entire content of the original data, and from which the original data can be reconstructed exactly) is required.
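The lossless case can be demonstrated with Python's standard zlib module: the compressed form is smaller, and decompression reconstructs the original exactly.

```python
import zlib

# Lossless compression: the original data can be reconstructed exactly.
original = b"AAAAABBBBBCCCCC" * 100   # highly repetitive, so it compresses well
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original              # lossless: exact reconstruction
assert len(compressed) < len(original)   # size reduced
```

Lossy formats such as JPEG behave differently: decoding yields an approximation of the original, which is why they are unsuitable for most IT data.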
Amount of space consumed after compression has been applied to the data set. In a backup solution, compressed footprint refers to the amount of space being utilized by the backed-up data.
Content-addressed storage (CAS) is an object-oriented system for storing data that is not intended to be changed once it is stored (for example, medical images, sales invoices, and archived e-mail). CAS assigns a unique identifying logical address to the data record when it is stored. That address is neither duplicated nor changed in order to ensure that the record always contains the exact same data that was originally stored. CAS relies on disk storage instead of removable media, such as tape.
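The core idea—that a record's address is derived from its content—can be sketched in a few lines. This is a minimal in-memory illustration (real CAS systems persist to disk); the function names `put` and `get` are illustrative, not any vendor's API.

```python
import hashlib

# Minimal content-addressed store: the address is a hash of the content,
# so identical content always maps to the same address, and stored content
# cannot change without its address changing.
store = {}

def put(record: bytes) -> str:
    address = hashlib.sha256(record).hexdigest()  # content-derived address
    store[address] = record
    return address

def get(address: str) -> bytes:
    record = store[address]
    # Integrity check: the content must still hash to its own address.
    assert hashlib.sha256(record).hexdigest() == address
    return record

addr = put(b"sales invoice #1042")
assert get(addr) == b"sales invoice #1042"
# Storing the same content again yields the same address (no duplicate copy).
assert put(b"sales invoice #1042") == addr
```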
Also called continuous backup, continuous data protection (CDP) refers to backing up computer data by saving as an automated function a copy every time changes are made to that data. It allows users to restore files that are corrupted or that have been accidentally deleted back to any point in time before they were lost.
The backup of all data files that have been modified since the last backup.
A data archive that cannot be accessed by any user. Access to the data is either limited to a few individuals or completely restricted. The purpose of a dark archive is to function as a repository for information that can be used as a failsafe during disaster recovery.
All data in storage excluding any data that frequently traverses the network or that resides in temporary memory. Data at rest includes, but is not limited to, archived data; data which is not accessed or changed frequently; files stored on hard drives; USB thumb drives; files stored on backup tape and disks; and files stored offsite or on a storage area network (SAN).
Security protection measures such as password protection, data encryption, or a combination of both that protect data at rest from hackers and other malicious threats. The measures prevent this data from being accessed, modified, or stolen.
A system intended to organize, store, and retrieve large amounts of data easily. It consists of an organized collection of data for one or more uses, typically in digital form.
A facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (for example, air conditioning or fire suppression), and security devices.
A four-tier system that provides a simple and effective means for identifying different data center site infrastructure design topologies. The Uptime Institute's tiered classification system is an industry standard approach to site infrastructure functionality that addresses common benchmarking standard needs. The four tiers, as classified by The Uptime Institute, include the following:
- Tier I: Composed of a single path for power and cooling distribution, without redundant components, providing 99.671 percent availability.
- Tier II: Composed of a single path for power and cooling distribution, with redundant components, providing 99.741 percent availability.
- Tier III: Composed of multiple active power and cooling distribution paths, but only one path active, has redundant components, and is concurrently maintainable, providing 99.982 percent availability.
- Tier IV: Composed of multiple active power and cooling distribution paths, has redundant components, and is fault tolerant, providing 99.995 percent availability.
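The availability percentages above translate directly into annual downtime (roughly 28.8, 22.7, 1.6, and 0.4 hours per year for Tiers I through IV); a few lines of Python make the arithmetic concrete:

```python
# Annual downtime implied by each tier's availability percentage,
# assuming a 365-day year (8,760 hours).
HOURS_PER_YEAR = 365 * 24

for tier, availability in [("I", 99.671), ("II", 99.741),
                           ("III", 99.982), ("IV", 99.995)]:
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"Tier {tier}: {downtime_hours:.1f} hours of downtime per year")
```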
Also referred to as data scrubbing, the act of detecting and then removing or correcting a database’s dirty data (data that is incorrect, out-of-date, redundant, incomplete, or formatted incorrectly). The goal of data cleansing is not just to clean up the data in a database, but also to bring consistency to different sets of data that have been merged from separate databases. Sophisticated software applications are available to clean a database’s data using algorithms, rules, and look-up tables. This task was once done manually and was therefore subject to human error.
- In a RAID system, the act of correcting parity bit errors so that drives remain synchronized.
The elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy of the data to be stored. However, indexing of all data is still retained should that data ever be required. Deduplication reduces the required storage capacity since only the unique data is stored.
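The mechanism can be sketched by fingerprinting fixed data blocks: each unique block is stored once, while an index records enough information to reconstruct every original block. A simplified illustration, assuming whole-block (not variable-chunk) deduplication:

```python
import hashlib

def deduplicate(blocks):
    """Store each unique block once; keep an index so the original
    sequence of blocks can still be reconstructed."""
    stored = {}   # fingerprint -> unique block data
    index = []    # per-block fingerprints, preserving original order
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in stored:
            stored[fp] = block   # first occurrence: keep the data
        index.append(fp)         # duplicates add only an index entry
    return stored, index

blocks = [b"alpha", b"beta", b"alpha", b"alpha", b"gamma"]
stored, index = deduplicate(blocks)

assert len(stored) == 3                        # only unique data is kept
assert [stored[fp] for fp in index] == blocks  # fully reconstructable
```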
In database management systems, a file that defines the basic organization of a database. A data dictionary contains a list of all files in the database, the number of records in each file, and the names and types of each field. Most database management systems keep the data dictionary hidden from users to prevent them from accidentally destroying its contents. Data dictionaries do not contain any actual data from the database, only bookkeeping information for managing it. Without a data dictionary, however, a database management system cannot access data from the database.
Practices that promote or preserve the shape of an entire data infrastructure (for example, network, servers, databases, storage, and software). These practices include any activity that reduces the stress of information growth on the data infrastructure and enables the efficient access, movement, and protection of data while reducing overall infrastructure and maintenance costs. Such practices include active archiving of relational databases, e-mail archiving, and document archiving.
The act of copying data from one location to a storage device in real time. Because the data is copied in real time, the information stored from the original location is always an exact copy of the data from the production device. Data mirroring is useful in the speedy recovery of critical data after a disaster. Data mirroring can be implemented locally or offsite at a completely different location.
Assurance that data is not corrupted, is accessible for authorized purposes only, and is in compliance with applicable requirements.
The salvaging of data stored on damaged media, such as magnetic disks and tapes. Many software products help recover data damaged by a disk crash or virus. In addition, many companies specialize in data recovery. Although not all data is recoverable, data recovery specialists can often restore a surprisingly high percentage of the data on damaged media.
The policy of persistent data and records management for meeting legal and business data archival requirements. A data retention policy weighs legal and privacy concerns against economics and need-to-know concerns to determine retention time, archival rules, data formats, and the permissible means of storage, access, and encryption.
Data space transfer protocol (DSTP) is a protocol used to index and categorize data using an XML-based catalogue. Data, no matter how it is stored, has corresponding XML files that contain UCK (universal correlation key) tags that act as identification keys. Data is retrieved when a user connects to DSTP servers with a DSTP client and asks for specific information. Data is found and retrieved based on the labels contained in the UCK tags.
The process of sending data offsite from its primary source to a location where it can be protected from hardware failures, theft, and other threats. Several companies now provide web backup services that compress, encrypt, and periodically transmit a customer's data to a remote vault. In most cases the vaults feature auxiliary power supplies, powerful computers, and manned security.
- Acronym for disk-based data protection, where a disk or RAID system is used as a data backup and archival system in place of tape.
- Acronym for distributed data protection, a managed (or hosted) service that provides customers with online, scheduled, automated computer system data backup and self-serve restoration.
- Acronym for development data platform, a web-based platform for data analysis, presentation, and dissemination.
- Acronym for distributed data processing, a data processing network in which some functions are performed in different places on different computers and are connected by transmission facilities.
The backup of all data files that have been modified since the last incremental backup or archival backup. Also known as differential incremental backup.
Patented EVault technology that performs delta backup and compresses the data before sending it over the wire.
Digital asset management (DAM) is a system that creates a centralized repository for digital files that allows the content to be archived, searched, and retrieved. The digital content is stored in databases called asset repositories. Metadata—such as photo captions, article key words, advertiser names, contact names, file names, or low-resolution thumbnail images—is stored in separate databases called media catalogs and points to the original items. Digital asset management also is known as enterprise digital asset management, media asset management, or digital asset warehousing.
The trail, traces, or "footprints" that people leave online. A digital footprint includes information transmitted online, such as forum registration, e-mails and attachments, uploaded videos or digital images, and any other form of transmission of information. All of this activity leaves traces of personal information about yourself that is available to others online.
Direct access file system (DAFS) is a file-access sharing protocol that uses memory-to-memory interconnect architectures, such as VI and InfiniBand. DAFS is designed for storage area networks (SANs) to provide bulk data transfer directly between the application buffers of two machines without having to packetize the data. With DAFS, an application can transfer data to and from application buffers without using the operating system, which frees up the processor and operating system for other processes and allows files to be accessed by servers using several different operating systems.
Direct-attached storage (DAS) is non-networked storage in which the hardware is connected to an individual server. Although more than one server can be present, storage for each server is managed separately and cannot be shared.
The process, policies, and procedures related to preparing for the recovery or continuation of a business-critical technology infrastructure after a natural or human-induced disaster. Disaster recovery is a subset of business continuity. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions.
A plan for business continuity in the event of a disaster that destroys part or all of a business's resources, including IT equipment, data records, and the physical space of an organization. The goal of a disaster recovery plan is to resume normal computing capabilities in as little time as possible. A typical disaster recovery plan has several stages:
- Understanding an organization's activities and how all of its resources are interconnected
- Assessing an organization's vulnerability in all areas, including operating procedures, physical space and equipment, data integrity, and contingency planning
- Understanding how all levels of the organization would be affected in the event of a disaster
- Developing a short-term recovery plan
- Developing a long-term recovery plan, including how to return to normal business operations and prioritizing the order of functions that are resumed
- Testing and consistently maintaining and updating the plan as the business changes

A key to a successful disaster recovery plan is taking steps to reduce the likelihood of disasters occurring, such as using a hot site or cold site to back up data archives.
A linked group of one or more physically independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into logical units (LUs), which appear as linear block spaces to their clients. Disk arrays are an integral part of high-performance storage systems.
Disk-to-disk (D2D) is a type of data storage backup in which the data is copied from one disk (typically a hard disk) to another disk (such as another hard disk or other disk storage medium). In a D2D system, the disk that the data is being copied from typically is referred to as the primary disk and the disk that the data is copied to typically is called the secondary or backup disk.
Disk-to-disk-to-tape (D2D2T) is a type of data storage backup in which data is first backed up on a disk system, but then is spooled to a tape or an optical storage system. A D2D2T backup system can help eliminate data loss issues due to tape drive or tape failure. In a D2D2T system, a copy of the data is kept onsite for faster retrieval and tape copies are kept offsite for disaster recovery purposes. D2D2T devices may be appliances, virtual tape, or disk libraries.
Disk-to-tape (D2T) is a type of data storage backup in which the data is copied from a disk (typically a hard disk) to a magnetic tape. D2T systems are used widely in enterprises that require the safe storage of vital information in the case of disaster recovery.
A cohesive, robust, interconnected whole. The EVault cloud-connected ecosystem is built on a shared technology platform, leveraged in every deployment—software, appliances, software as a service (SaaS), and managed services—that creates a seamlessly integrated, cloud-connected data protection ecosystem.
A topological paradigm in which applications, data, and computing power (services) are pushed away from centralized points to the logical extremes of a network. Edge computing replicates fragments of information across distributed networks of web servers, which may be vast and include many networks. Edge computing is also referred to as mesh computing, peer-to-peer computing, autonomic (self-healing) computing, grid computing, and other names implying non-centralized, nodeless availability.
A method of erasable optical storage. Information is written, or stored, by a low-power laser tuned to a specific frequency. The laser elevates the energy level of electrons to a trapped state. The data is read by a second laser that returns the elevated electrons to their ground state.
Also referred to as an EPO switch, emergency power off (EPO) is a button or switch that shuts down the power in a room or network of electrical circuits. Typically used in data centers with a large number of computers using large amounts of electricity, the EPO is meant to be activated by a human only in emergency situations when it is necessary to cut the power if human life is in jeopardy or if there is the potential for major damage to the building or equipment (for example, in the case of a fire or electrocution). The sudden loss of power will inevitably lead to the loss of some data, and the EPO is not meant to be used under normal circumstances.
The conversion of plaintext to encrypted text with the intent that it only be accessible to authorized users who have the appropriate decryption key.
In data storage technology, enhanced capacity cartridge system (ECCST) is a double length tape cartridge with a nominal uncompressed capacity of approximately 800 Mbytes.
Enterprise content management (ECM) describes the technologies used by organizations to capture, manage, store, and control enterprise-wide content, including documents, images, e-mail messages, instant messages, video, and more. ECM software is used to assist in content control associated with business processes, and can be used to assure compliance with regulations (such as Sarbanes-Oxley, HIPAA, and others). ECM has emerged from the convergence of many related technologies such as document management, web content management, and collaboration.
A centralized storage system used by a large business or organization to manage data. Enterprise storage also indicates processes for data sharing and connectivity. Enterprise storage is different from consumer or home computer storage in terms of the size of the storage system, the amount of data handled by the system, the number of users accessing the system, and also the technology used to create the storage system. Enterprise storage systems usually focus on providing the networking and management operations for data storage, backup, disaster recovery, and archiving.
The physical area that is occupied only by data center equipment. This area does not include aisles between racks or any space left at end of equipment rows.
After a failover event, the restoration of a failed system component’s share of a load to a replacement component. When a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance, and to restore failure tolerance. Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
The automatic substitution of a functionally equivalent system component for a failed one. Failover most often involves intelligent controllers connected to the same storage devices and host computers. If one of the controllers fails, failover occurs, and the survivor takes over its I/O load.
Fibre Channel ATA (FATA) is a hybrid hard drive first introduced by HP in 2004 that combines both Fibre Channel and ATA technologies. FATA drives use an ATA drive mechanism, offering the same performance and capacity as a standard ATA drive, but also feature a Fibre Channel connector, which enables the FATA drive to be used where conventional Fibre Channel drives are currently connected.
A computer with the primary purpose of serving files to clients. A file server may be a general purpose computer that is capable of hosting additional applications or a special purpose computer capable only of serving files.
Using ghosting software, a method of converting the contents of a hard drive—including its configuration settings and applications—into an image, and then storing the image on a server or burning it onto a CD. When contents of the hard drive are needed again, ghosting software converts the image back to original form. Companies often use ghost imaging when they want to create identical configurations and install the same software on numerous machines.
Giant magnetoresistive (GMR) is a hard disk drive storage technology. The technology is named for the giant magnetoresistive effect, first discovered in the late 1980s. While working with large magnetic fields and thin layers of magnetic materials, researchers noticed very large resistance changes when these materials were subjected to magnetic fields. Disk drives that are based on GMR head technology use these properties to help control a sensor that responds to very small rotations on the disk. The magnetic rotation yields a very large change in sensor resistance, which in turn provides a signal that can be picked up by the electric circuits in the drive.
2 to the 30th power (1,073,741,824) bytes. One gigabyte is equal to 1,024 megabytes. Gigabyte is often abbreviated as G or GB.
An IT environment that includes computers, operating systems, platforms, databases, applications, and other components from different vendors.
Hierarchical storage management (HSM) is a data storage system that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as hard disk drives, are more expensive (per byte stored) than slower devices, such as optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. In effect, HSM turns the fast disk drives into caches for the slower mass storage devices. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the hard disks.
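The cache-like behavior described above can be sketched as a two-tier store. This is a toy model under assumed design choices (LRU demotion, promote-on-access), not any vendor's implementation; the class and method names are illustrative.

```python
from collections import OrderedDict

class TwoTierStore:
    """Toy HSM sketch: a small, fast tier fronts a large, slow tier.
    The least recently used item is demoted when the fast tier fills,
    and items are promoted back on access, so the fast tier acts as a
    cache for the slower mass storage."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # fast tier (e.g., hard disk), LRU-ordered
        self.slow = {}              # slow tier (e.g., tape or optical)
        self.fast_capacity = fast_capacity

    def write(self, key, data):
        self.fast[key] = data
        self.fast.move_to_end(key)  # mark as most recently used
        self._demote_if_full()

    def read(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)  # refresh recency
            return self.fast[key]
        data = self.slow.pop(key)       # promote from the slow tier
        self.write(key, data)
        return data

    def _demote_if_full(self):
        while len(self.fast) > self.fast_capacity:
            old_key, old_data = self.fast.popitem(last=False)
            self.slow[old_key] = old_data  # migrate the LRU item down
```

A real HSM system adds usage monitoring and policy to decide what migrates; the structure above only captures the tiering mechanic.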
The availability of resources in a computer system in the wake of component failures in the system. High availability can be achieved in a variety of ways, from solutions that use custom and redundant hardware to ensure availability to software-based solutions that use off-the-shelf hardware components. The former class of solutions provides a higher degree of availability, but is significantly more expensive than the latter. This high cost has led to the popularity of the latter class, with almost all vendors of computer systems offering various high availability products. Typically, these products survive single points of failure in the system.
(n.) A formatting method that initializes portions of the hard disk and creates the file system structures on the disk, such as the master boot record and the file allocation tables. High-level formatting is typically done to erase the hard disk and reinstall the operating system back onto the disk drive.
(v.) The process of performing high-level formatting.
A mass storage technology that uses three-dimensional holographic images to enable more information to be stored in a much smaller space. In holographic storage, at the point where the reference beam and the data carrying signal beam intersect, the hologram is recorded in the light sensitive storage medium.
A service in which day-to-day related management responsibilities are transferred to the service provider. The person or organization that owns or has direct oversight of the organization or system being managed is referred to as the offerer, client, or customer. The person or organization that accepts and provides the hosted service is regarded as the service provider. Typically, the offerer remains accountable for the functionality and performance of a hosted service and does not relinquish the overall management responsibility of the organization or system.
A technique used in data storage and backup that enables a system to perform a routine backup of data, even if the data is being accessed by a user. Hot backups are a popular backup solution for multi-user systems as no downtime to perform the backup is required. If a user alters the data during the backup process (for example, makes changes at the exact moment the backup system is processing that data) the final version of the backup may not reflect those changes. Hot backup may also be called a dynamic backup or active backup.
A form of routing in which the nodes of a network have no buffer to store packets in before they are moved on to their final predetermined destination. In normal routing situations, when multiple packets contend for a single outgoing channel, packets that are not buffered are dropped to avoid congestion. But in hot potato routing, each packet that is routed is constantly transferred until it reaches its final destination because the individual communication links cannot support more than one packet at a time. The packet is bounced around like a "hot potato," sometimes moving further away from its destination because it has to keep moving through the network. This technique allows multiple packets to reach their destinations without being dropped.
A method of redundancy in which the primary and secondary (backup) systems run simultaneously. The data is mirrored to the secondary server in real time so that both systems contain identical information.
The Internet Fibre Channel Protocol (iFCP) allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
- In computer science an image is an exact replica of the contents of a storage device (a hard disk drive or CD-ROM for example) stored on a second storage device.
- Often used in place of the term digital image, which is an optically formed duplicate or other reproduction of an object formed by a lens or mirror.
Any backup in which only the data objects that have been modified since the time of some previous backup are copied. Incremental backup is a collective term for cumulative incremental backups and differential incremental backups. Contrast with an archival, or full, backup, in which all files are backed up regardless of whether they have been modified since the last backup.
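The selection rule behind an incremental backup can be sketched as a file-system walk that compares modification times against the previous backup's timestamp. A simplified illustration: real backup agents typically track archive bits, change journals, or catalogs rather than raw timestamps, and the function name here is hypothetical.

```python
import os

def files_to_back_up(root, last_backup_time):
    """Return paths under `root` modified since the previous backup --
    the selection rule of an incremental backup. A full (archival)
    backup would return every path, ignoring the timestamp."""
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                selected.append(path)
    return selected
```

Whether `last_backup_time` refers to the last incremental backup or the last full backup is exactly the difference between a differential incremental and a cumulative incremental backup.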
Information classification and management (ICM) is a class of application-independent software that uses advanced indexing, classification, policy, and data access capabilities to automate data management activities above the storage layer.
The combined set of hardware, software, networks, facilities, and other components (including all of the information technology) necessary to develop, test, deliver, monitor, control, or support IT services. Associated people, processes, and documentation are not part of an infrastructure.
Intelligent information management (IIM) is a set of processes and underlying technology solutions that enables organizations to understand, organize, and manage all sorts of data types (for example, general files, databases, and e-mails). Key attributes that define an IIM solution include the following:
The space between two consecutive physical blocks on a data recording medium, such as a hard drive or a magnetic tape. Interrecord gaps are used as markers for the end of data and also as safety margins for data overwrites. An interrecord gap is also referred to as an interblock gap.
Internet small computer systems interface (iSCSI) is a transport protocol that enables the SCSI protocol to be carried over a TCP-based IP network. iSCSI was standardized by the Internet Engineering Task Force and described in RFC 3720.
A technology being standardized under the IP Storage (IPS) IETF Working Group. Same as SoIP.
A special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. Also, a collection of data representing related experiences.
In reference to data storage, an archive that can be accessed by many authorized users. Access to the data is open to all the members of the "community" that have a need for the data.
In digital asset management (DAM) systems, an area within the web site (or web service) or other internal DAM where users can create and store a list of assets they want to reference or use at a later time. Lightboxes are common on stock photo web sites where registered users can store images until they are ready to download them.
Linear tape open (LTO) is a technology that was developed jointly by HP, IBM, and Certance (Seagate) to provide a clear and viable choice in an increasingly complex array of tape storage options. LTO technology is an "open format" technology, which means that users have multiple sources of product and media. The open nature of the technology also provides a means of enabling compatibility between different vendors' offerings.
A local area network (LAN) is a communications infrastructure—typically Ethernet—designed to use dedicated wiring over a limited distance (typically a diameter of less than five kilometers) to connect a large number of intercommunicating nodes.
Also called a lost allocation unit, or a lost file fragment. A data fragment that does not belong to any file, according to the system’s file management system, and, therefore, is not associated with a file name in the file allocation table. Lost clusters can result from files not being closed properly, from shutting down a computer without first closing an application, or from ejecting a storage medium, such as a floppy disk, from the disk drive while the drive is reading or writing.
(n.) A formatting method that creates the tracks and sectors on a hard disk. Low-level formatting creates the physical format that dictates where data is stored on the disk. Modern hard drives are low-level formatted at the factory for the life of the drive. A PC cannot perform an LLF on a modern IDE/ATA or SCSI hard disk, and doing so would destroy the hard disk. A low-level format is also called a physical format.
(v.) The process of performing low-level formatting.
A direct-access, or random-access, storage device. A magnetic drum, also referred to as drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been implemented as auxiliary storage devices.
Magneto-optical (MO) is a type of data storage technology that combines magnetic disk technologies with optical technologies, such as those used in CD-ROMs. Like magnetic disks, MO disks can be read and written to. And like floppy disks, they are removable. However, their storage capacity can be more than 200 megabytes, much greater than magnetic floppies. In terms of data access speed, MO disks are faster than floppies but not as fast as hard disk drives.
The various techniques and devices for storing large amounts of data. Modern mass storage devices include all types of disk drives and tape drives. Mass storage is distinct from memory, which refers to temporary storage areas within the computer. Unlike main memory, mass storage devices retain data even when the computer is turned off.
In storage terminology, a massive array of idle disks (MAID) is a technology that uses a large group of hard disk drives (hundreds or even thousands), with only those drives that are needed actively spinning at any given time. MAID is a storage system solution that reduces both drive wear and power consumption. Because only specific disks spin at a given time, what is not in use is literally a massive array of idle disks, which also means the system produces less heat than other large storage systems.
In data storage, mean time to repair (MTTR) is the average time required to repair a failed component and return it to service.
In data storage, mean time until data loss (MTDL) is the average time until a component failure can be expected to cause data loss.
Plural of medium.
- Objects on which data can be stored. These include hard disks, floppy disks, CD-ROMs, and tapes.
- In computer networks, media refers to the cables linking workstations together. There are many different types of transmission media, the most popular being twisted-pair wire (normal electrical wire), coaxial cable (the type of cable used for cable television), and fiber optic cable (cables made out of glass).
- The form and technology used to communicate information. Multimedia presentations, for example, combine sound, pictures, and videos, all of which are different types of media.
Metadata catalog service (MCS) is a mechanism for storing and accessing descriptive metadata and allows users to query for data items based on desired attributes. MCS may be used for storing and accessing metadata about logical files.
A heterogeneous environment that includes multiple platform types.
In the network file system (NFS), a protocol and set of procedures to specify a remote host and file system or directory to be accessed, and their location in the local directory hierarchy.
When spelled ms, short for millisecond, one thousandth of a second. Access times of mass storage devices are often measured in milliseconds.
When spelled MS, short for Microsoft or mobile subscribers.
See heterogeneous environment.
Geographically dispersed; having more than one location.
Near-line storage is used by corporations, including data warehouses, as an inexpensive, scalable way to store large volumes of data. Near-line storage devices include DAT and DLT tapes (sequential access); optical storage such as CD-ROM, DVD, and Blu-ray; magneto-optical media, which use magnetic heads with an optical reader; and slower P-ATA and SATA hard disk drives. Retrieval of data is slower than from SCSI hard disks, which are usually connected directly to servers or deployed in a SAN environment. Near-line implies that, whatever media the information is stored on, it can be accessed electronically via a tape library or some other method, as opposed to off-line storage, which signifies that human intervention is required, such as retrieving and mounting a tape. Near-line storage can be slower, but the type of data stored there (historical archives, backup data, video, and others) does not require the instant access and high throughput that SAN and SCSI can provide, and it is less expensive per byte.
A network-attached storage (NAS) device is a server that is dedicated to nothing more than file sharing. NAS does not provide any of the activities that a server in a server-centric system typically provides, such as e-mail, authentication, or file management. NAS allows more hard disk storage space to be added to a network that already utilizes servers without shutting them down for maintenance and upgrades. With a NAS device, storage is not an integral part of the server. Instead, in this storage-centric design, the server still handles all of the processing of data but a NAS device delivers the data to the user. A NAS device does not need to be located within the server but can exist anywhere in a LAN and can be made up of multiple networked NAS devices.
Also called Nyquist's Theorem. Before sound as acoustic energy can be manipulated on a computer, it must first be converted to electrical energy (using a transducer such as a microphone) and then transformed through an analog-to-digital converter into a digital representation. This is all accomplished by sampling the continuous input waveform a certain number of times per second. The more often a wave is sampled, the more accurate the digital representation. Nyquist's Law, named in 1933 after scientist Harry Nyquist, states that a sound must be sampled at a rate at least twice its highest analog frequency in order to extract all of the information from the bandwidth and accurately represent the original acoustic energy. Sampling at slightly more than twice the frequency will make up for imprecisions in filters and other components used for the conversion.
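The rule above reduces to a simple calculation. A sketch (the margin factor models the "slightly more than twice" allowance for imperfect filters; the function name is illustrative):

```python
def minimum_sample_rate(max_frequency_hz, margin=1.1):
    """Nyquist's Law: sample at a rate at least twice the highest
    analog frequency; a small margin compensates for imperfect
    filters in the conversion chain."""
    return 2 * max_frequency_hz * margin
```

For example, human hearing tops out near 20 kHz, so `minimum_sample_rate(20000)` gives 44,000 samples per second, close to the 44,100 Hz rate used for audio CDs.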
An object-based storage device (OSD) is a device that implements the standard in which data is organized and accessed as objects, where object means an ordered set of bytes (within the OSD) that is associated with a unique identifier. Objects are allocated and placed on the media by the OSD logical unit. With an OSD interface, metadata is associated directly with each data object and can be carried between layers and across storage device files. Records are no longer abstractions, but actual storage objects that are understood, managed, and secured at the device level.
Any storage medium that must be inserted into a storage drive by a person before it can be accessed by the computer system is considered to be a type of offline storage. Also called removable storage.
Also called Internet storage or hosted storage, online data storage is a data storage management solution that enables individuals or organizations to store their data on the Internet using a service provider, rather than storing the data locally on a physical disk, such as a hard drive or tape backup.
Open document management API (ODMA) is an open industry standard that enables desktop applications to interface with a document management system (DMS). ODMA simplifies cross-platform and cross-application file communication by standardizing access to document management through an API. ODMA allows multiple applications to access the same DMS without the need for a hard-coded link between the application and the DMS.
A type of database that serves as an interim area for a data warehouse in order to store time-sensitive operational data that can be accessed quickly and efficiently. In contrast to a data warehouse, which contains large amounts of static data, an operational data store contains small amounts of information that is updated through the course of business transactions. An operational data store will perform numerous quick and simple queries on small amounts of data, such as acquiring an account balance or finding the status of a customer order, whereas a data warehouse will perform complex queries on large amounts of data. An operational data store contains only current operational data while a data warehouse contains both current and historical data.
(v.) To record or copy new data over existing data, as in when a file or directory is updated. Data that is overwritten cannot be retrieved.
(n.) Refers to a file or directory that has been overwritten.
A physical erase unit is the smallest memory area of the flash memory media that can be erased in a single erase operation. Its size varies between flash devices.
A physical entity that contains nodes. Platforms include all end devices that are attached to a Fabric, for example, hosts and storage subsystems. Platforms communicate with other platforms in the storage area network using the facilities of a Fabric or other topology.
A small, portable storage device used for storing and viewing your digital images. The device is a portable hard drive in an enclosure that resembles a handheld game console. It usually offers USB and memory card readers as options for transferring your images directly to the device, as well as an LCD display for viewing the stored images. The device may also have controls for maneuvering through the images, such as forward, random, skip, and so on.
A somewhat dated term for main memory. Mass storage devices, such as disk drives and tapes, are sometimes called secondary storage.
A set of rules that control an interaction between two or more entities in communication with one another, for example, TCP ports, Fibre Channel FC-4 processes, and polite humans.
See redundant array of independent (or inexpensive) disks.
See redundant array of independent nodes.
A type of flooring supported by a metal grid and typically used in data centers. Raised flooring can be removed in pieces to allow for cabling, wiring, and cooling systems to run under the floor space. When the floor is raised, it usually can accommodate space for walking or crawling in.
The recreation of a past operational state of an entire application or computing environment. Recovery is required after an application or computing environment has been destroyed or otherwise rendered unusable. It may include restoration of application data, if that data has been destroyed as well.
Recovery point objective (RPO) is the maximum acceptable time period prior to a failure or disaster during which changes to data may be lost as a consequence of recovery. Data changes preceding the failure or disaster by at least this time period are preserved by recovery. Zero is a valid value and is equivalent to a "zero data loss" requirement.
Recovery time objective (RTO) is the period of time after an outage in which the systems and data must be restored to the predetermined recovery point.
Red Hat Global File System (GFS) is an open source cluster file system and volume manager that executes on Red Hat Enterprise Linux servers attached to a storage area network (SAN). It enables a cluster of Linux servers to share data in a common pool of storage to provide a consistent file system image across server nodes. Red Hat Global File System works on all major server and storage platforms supported by Red Hat.
The inclusion of extra components of a given type in a system (beyond those required by the system to carry out its function) for the purpose of enabling continued operation in the event of a component failure.
Redundant array of independent (or inexpensive) disks (RAID) is a category of disk drives that employ two or more drives in combination for fault tolerance and performance. RAID disk drives are used frequently on servers but aren't generally necessary for personal computers. RAID allows you to store the same data redundantly (in multiple places) in a balanced way to improve overall performance.
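As an illustration of how a RAID array survives a drive failure, the parity scheme used by RAID level 5 can be sketched with bytewise XOR (a simplified sketch of the principle, not a full RAID implementation):

```python
def xor_parity(blocks):
    """Compute a parity block as the bytewise XOR of equal-length
    data blocks, as in RAID 5."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct_lost_block(surviving_blocks, parity):
    """Rebuild a single missing block: XOR-ing the parity with all
    surviving blocks cancels them out, leaving the lost data."""
    return xor_parity(surviving_blocks + [parity])
```

Because XOR is its own inverse, any one block (including the parity itself) can be lost and rebuilt from the remaining blocks.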
Redundant array of independent nodes (RAIN) is a data storage and protection system architecture. It uses an open architecture that combines standard computing and networking hardware with management software to create a system that is more distributed and scalable. RAIN is based on the idea of linking RAID nodes together into a larger storage mechanism. In a RAIN setup, there are multiple servers, each with disk drive and RAID functionality, all working together as a RAIN, or a parity or mirrored implementation. RAIN may also be called storage grid.
Refers to corporate offices externally connected to a WAN or a LAN. These offices will often have one or more servers to provide branch users with file, print, and the other services required to maintain the daily routine.
(n.) A copy of a collection of data.
(v.) The action of making a replicate as defined above.
To bring a desired data set back from the backup media.
Also called rotational delay, the amount of time it takes for the desired sector of a disk (for example, the sector from which data is to be read or written) to rotate under the read-write heads of the disk drive. The average rotational latency for a disk is half the amount of time it takes for the disk to make one revolution. The term typically is applied to rotating storage devices, such as hard disk drives and floppy drives (and even older magnetic drum systems), but not to tape drives.
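The "half a revolution" rule above translates directly to a small calculation; a sketch (for example, a 7,200 RPM drive averages about 4.17 ms of rotational latency):

```python
def average_rotational_latency_ms(rpm):
    """Average rotational latency is half the time of one full
    revolution: (60 / rpm) / 2 seconds, expressed in milliseconds."""
    return (60.0 / rpm) / 2 * 1000
```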
See recovery point objective.
See recovery time objective.
Run length limited (RLL) is an encoding scheme used to store data on newer PC hard disks. RLL produces fast data access times and increases a disk's storage capacity over the older encoding scheme called MFM (modified frequency modulation).
Software as a Service (SaaS) is a software delivery method that provides access to software and its functions remotely as a web-based service. Software as a Service allows organizations to access business functionality at a cost typically less than paying for licensed applications since SaaS pricing is based on a monthly fee. Also, because the software is hosted remotely, users don't need to invest in additional hardware. Software as a Service removes the need for organizations to handle the installation, setup, and often daily upkeep and maintenance. Software as a Service may also be referred to as simply hosted applications.
Storage as a Service (SaaS) is a storage model in which a business or organization (the client) rents or leases storage space from a third-party provider. Data is transferred from the client to the service provider via the Internet and the client then accesses the stored data using software provided by the storage provider. The software is used to perform common tasks related to storage, such as data backups and data transfers. Storage as a Service is popular with SMBs because there usually are no start-up costs (for example, servers, hard disks, IT staff, and so on) involved. Businesses pay for the service based only on the amount of storage space used. Storage as a Service may also be called hosted storage.
A Storage Area Network (SAN) is a high-speed subnetwork of shared storage devices. A storage device is a machine that contains nothing but a disk or disks for storing data. A SAN's architecture works in a way that makes all storage devices available to all servers on a LAN or WAN. As more storage devices are added to a SAN, they too will be accessible from any server in the larger network. In this case, the server merely acts as a pathway between the end user and the stored data. Because stored data does not reside directly on any of a network's servers, server power is utilized for business applications, and network capacity is released to the end user.
The hardware that connects workstations and servers to storage devices in a SAN. The SAN fabric enables any-server-to-any-storage device connectivity through the use of Fibre Channel switching technology.
A technology used by businesses to obtain greater flexibility in their data storage. A Storage Area Network (SAN) provides raw storage devices across a network, and is typically sold as a service to customers who also purchase other services. SAN services may also be administered over an existing, local fiber network, and administered through a service subscription plan.
Space dedicated on a hard drive for temporary storage of data. Scratch disks are commonly used in graphic design programs, such as Adobe Photoshop. Scratch disk space is only for temporary storage and cannot be used for permanently backing up files. Scratch disks can be set to erase all data at regular intervals so that the disk space is left free for future use. The management of scratch disk space is typically dynamic, occurring when needed.
The first full backup of company data.
A technology for encrypting and hiding data on a hard drive, flash drive, or when transferring files. Secret storage is a portion of encrypted data, hidden in some file or FAT/FAT32/NTFS partitions. To the end user, it looks like a folder to which files and folders may be added and which can be protected with a password.
A type of backup where only the user-specified files and directories are backed up. A selective backup is commonly used for backing up files which change frequently or in situations where the space available to store backups is limited. Also called a partial backup.
Serial storage architecture (SSA) is an open industry-standard interface that provides a high-performance, serial interconnect technology used to connect disk devices and host adapters. SSA serializes the SCSI data set and uses loop architecture that requires only two wires: transmit and receive. The SSA interface also supports full-duplex, so it can transmit and receive data simultaneously at full speed.
The area where a company stores its data center equipment. This area is protected from personnel access.
A service-level agreement (SLA) is an agreement between a service provider, such as an IT department, an Internet services provider, or an intelligent device acting as a server, and a service consumer. A service level agreement defines parameters for measuring the service, and states quantitative values for those parameters.
The unused space in a disk cluster. The DOS and Windows file systems use fixed-size clusters. Even if the actual data being stored requires less storage than the cluster size, an entire cluster is reserved for the file. The unused space is called the slack space. DOS and older Windows systems use a 16-bit file allocation table (FAT), which results in very large cluster sizes for large partitions. For example, if the partition size is 2 GB, each cluster will be 32 K. Even if a file requires only 4 K, the entire 32 K will be allocated, resulting in 28 K of slack space. Windows 95 OSR 2 and Windows 98 resolve this problem by using a 32-bit FAT (FAT32) that supports cluster sizes smaller than 1 K.
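The 2 GB FAT16 example above can be checked with a few lines of arithmetic; a sketch:

```python
def slack_space(file_size, cluster_size=32 * 1024):
    """Bytes wasted in the final, partially filled cluster of a file.

    With a 32 K cluster (FAT16 on a 2 GB partition), a 4 K file
    leaves 28 K of slack, as in the example above.
    """
    if file_size == 0:
        return 0
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder
```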
Companies whose headcount or turnover fall below certain limits. In the United States, a small business is often defined as having fewer than 100 employees. A medium-size business is often defined as having fewer than 500 employees.
Companies whose headcount or turnover fall below certain limits. In the United States, a small business is often defined as having fewer than 100 employees. A mid-size business is often defined as having fewer than 500 employees.
A virtual copy of a device or file system. Snapshots imitate the way a file or device looked at the precise time the snapshot was taken. It is not a copy of the data, only a picture in time of how the data was organized. Snapshots can be taken according to a scheduled time and provide a consistent view of a file system or device for a backup and recovery program to work from.
A solid state disk (SSD) is a high-performance plug-and-play storage device that contains no moving parts. SSD components include either DRAM or EEPROM memory boards, a memory bus board, a CPU, and a battery card. Because SSDs contain their own CPUs to manage data storage, they are much faster (18 MBps for SCSI-II and 35 MBps for UltraWide SCSI interfaces) than conventional rotating hard disks; therefore, they produce the highest possible I/O rates.
Another name for a giant magnetoresistive (GMR) head. The term was coined by IBM.
- The capacity of a device to hold and retain data.
- Short for mass storage.
Storage bridge bay (SBB) is a specification that defines mechanical, electrical, and low-level enclosure management requirements for an enclosure controller slot that will support a variety of storage controllers from a variety of independent hardware vendors and system vendors. Any storage controller design based on the SBB specification will be able to fit, connect, and operate within any storage enclosure controller slot design based on the same specification.
The concept of centralized storage where resources are shared among multiple applications and users. Traditionally, organizations have deployed servers with direct-attached storage (DAS) as file servers. However, many organizations are facilitating server consolidation by deploying network-attached storage (NAS). NAS provides a single-purpose device that can provide CIFS- and NFS-connected storage that can scale from gigabytes to petabytes.
A device capable of storing data. The term usually refers to mass storage devices, such as disk and tape drives.
The amount of energy, physical space, and other equipment necessary to run a data storage management system.
The tools, processes, and policies used to manage storage networks and storage services such as virtualization, replication, mirroring, security, compression, traffic analysis, and other services. The phrase also encompasses other storage technologies, such as process automation, storage management and real-time infrastructure products, and storage provisioning. In some cases, the phrase storage management may be used in direct reference to storage resource management (SRM).
Storage management initiative specification (SMI-S) is an interface standard that enables interoperability in both hardware and software between storage products from different vendors used in a SAN environment. The interface provides common protocols and data models that storage product vendors can use to ensure end user manageability of the SAN environment.
Based on the CIM and Web-Based Enterprise Management (WBEM) standards, SMI-S adds common interoperable and extensible management transport, automated discovery, and resource locking functions. SMI-S was developed by the Storage Networking Industry Association (SNIA) in 2002.
A high-speed network of shared storage devices. The storage network is used by IT departments to connect different types of storage devices with data servers for a larger network of users. As more storage devices are added to the storage network, they too will be accessible from any server in the larger network. Storage networking is a phrase most commonly associated with enterprises and data centers.
The implementation and management of tiered storage solutions to obtain a lower cost per capacity across a corporation or enterprise. Storage optimization is an information lifecycle management (ILM) strategy.
Storage over IP (SoIP) technology refers to the merging of Fibre Channel technologies with IP-based technology to allow for accessing storage devices over TCP/IP networks. SoIP is the framework for storage area networking (SAN) using Internet Protocol (IP) networks to directly connect servers and storage. SoIP products are designed to support transparent interoperability of storage devices based on Fibre Channel, SCSI, and a new class of Gigabit Ethernet storage devices using iSCSI and iFCP. Existing Fibre Channel or SCSI devices, such as servers with host bus adapters (HBAs) or storage subsystems, can be included in an SoIP storage network without modification.
Storage resource management (SRM) refers to software that manages storage from a capacity, utilization, policy, and event management perspective. SRM includes bill-back, monitoring, reporting, and analytic capabilities that allow you to drill down for performance and availability.
Key elements of SRM include asset management, charge back, capacity management, configuration management, data and media migration, event management, performance and availability management, policy management, quota management, and media management.
A storage service provider (SSP) is a company that provides computer storage space and related management services. SSPs also offer periodic backup, archiving, and the ability to consolidate data from multiple company locations so that data can be effectively shared.
Storage virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN (storage area network), a high-speed subnetwork of shared storage devices, and makes tasks such as archiving, backup, and recovery easier and faster. Storage virtualization is usually implemented via software applications.
To copy data from a CPU to memory, or from memory to a mass storage device.
The process of distributing data across several storage devices to improve performance.
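The round-robin layout behind striping (the scheme RAID 0 uses) can be sketched in a few lines; the names and the 4-byte stripe unit are illustrative:

```python
def stripe(data, num_devices, stripe_unit=4):
    """Distribute data round-robin across devices in fixed-size
    stripe units so that sequential reads and writes are spread
    over all devices in parallel."""
    devices = [bytearray() for _ in range(num_devices)]
    for i in range(0, len(data), stripe_unit):
        devices[(i // stripe_unit) % num_devices] += data[i:i + stripe_unit]
    return [bytes(d) for d in devices]
```

For example, `stripe(b"abcdefgh", 2, stripe_unit=2)` places units "ab" and "ef" on the first device and "cd" and "gh" on the second.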
In magnetic disk drive storage technology, the fluctuation of magnetization due to thermal agitation. When the areal density—the number of bits that can be stored on a square inch of disk media—of a disk medium reaches 150 gigabits per square inch, the magnetic energy holding the bits in place on the medium becomes equal to the ambient thermal energy within the disk drive itself. When this happens, the bits are no longer held in a reliable state and can "flip," scrambling the data that was previously recorded. Because of superparamagnetism, hard drive technologies are expected to stop growing once they reach a density of 150 gigabits per square inch.
In Fibre Channel, a receiver's identification of a transmission word boundary.
A synthetic backup is identical to a regular full backup in terms of data, but it is created when data is collected from a previous, older full backup and assembled with subsequent incremental backups. The incremental backup will consist only of changed information. A synthetic backup is used when time or system requirements do not allow for a full complete backup. The end result of combining a recent full backup archive with incremental backup data is two kinds of files which are merged by a backup application to create the synthetic backup. Benefits to using a synthetic backup include a smaller amount of time needed to perform a backup, and lower system restore times and costs. This backup procedure is called "synthetic" because it is not a backup created from original data.
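The merge step described above can be sketched as a dictionary union, treating each backup as a mapping from file path to file data (a simplification; real backup applications merge catalog records and block-level archives):

```python
def synthesize_full_backup(full_backup, incrementals):
    """Merge an older full backup with subsequent incrementals,
    applied oldest first, so later changes override earlier data.
    The result is equivalent to a fresh full backup without
    re-reading the source system."""
    merged = dict(full_backup)
    for incremental in incrementals:
        merged.update(incremental)
    return merged
```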
A storage device that writes data sequentially in the order in which it is delivered, and reads data in the order in which it is stored on the media. Unlike disks, tapes use implicit data addressing.
2 to the 40th power (1,099,511,627,776) bytes. This is approximately 1 trillion bytes. Terabyte is abbreviated as TB.
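The powers-of-two arithmetic can be checked directly; a brief sketch (the helper name is illustrative):

```python
# Storage units as powers of two: a terabyte (TB) is 2**40 bytes,
# slightly more than the decimal "trillion bytes".
TB = 2 ** 40

def bytes_to_terabytes(num_bytes):
    """Convert a byte count to binary terabytes."""
    return num_bytes / TB
```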
The consolidated and automated process of allocating exactly the required amount of storage space at the time it is required. Thin provisioning is most commonly used in centralized large storage systems such as SANs, as well as in storage virtualization environments where administrators plan for both current and future storage requirements and often over-purchase capacity, which can result in wasted storage. Since thin provisioning is designed to allocate exactly what is needed exactly when it is needed, it removes the element of "paid for but wasted" storage capacity. Additionally, as more storage is needed, additional volumes can be attached to the existing consolidated storage system.
An underlying principle of information lifecycle management, tiered storage is a storage networking method where data is stored on various types of media, based on performance, availability, and recovery requirements. For example, data intended for restoration in the event of data loss or corruption can be stored locally for fast recovery while data for regulatory purposes can be archived to lower cost disks.
A measurement of how tightly the concentric tracks on a disk are packed.
A feature of the Data Set management command available in the ATA8-ACS-2 specification. The command is used in the operating system to wipe invalid data blocks on solid state drives (SSD) which are no longer in use—including data blocks which have been left by deleted files. The purpose of Trim is to improve the speed of SSD drives and ultimately boost the drives' overall read and write performance. Both the operating system and SSD firmware must support Trim. Trim may also be referred to as DisableDeleteNotify, or SSD Optimizer.
U3 is a software solution that turns USB Flash drives into smart drives. Flash drives that meet the U3 specification are called U3 smart drives.
See universal media disc.
A single, integrated storage infrastructure that functions as unification engines to simultaneously support Fibre Channel, IP Storage Area Networks (SAN), and Network Attached Storage (NAS) data formats. Also called network unified storage (NUS).
An uninterruptible power source (UPS) is a source of electrical power that is not affected by outages in a building’s external power source. UPSs may generate their own power using generators, or they may consist of large banks of batteries. UPSs are typically installed to prevent service outages due to external power grid failure in computer applications deemed by their owners to be “mission critical.”
See uninterruptible power source.
The process of using a USB device, such as a flash drive, to boot a Windows PC. Windows XP Embedded Service Pack 2 Feature Pack 2007 introduced a new embedded enabling feature called USB Boot. This allows users to build a Windows XPe image that boots from a USB flash drive.
The USB Flash Drive Alliance is a consortium of technology companies launched to educate consumers about the exceptional portability, capacity, and utility of USB flash drives.
A USB flash drive that comes with pre-installed software. The software enables the USB flash drive to run active programs from the drive. Users can purchase USB smart drives, or download software from the Internet that enables existing flash drives to be upgraded to a smart drive.
A system for storing backup data. A vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker that has facilities for backup media storage.
In computing, virtualization is the creation of a virtual version of a device or resource, such as a server, storage device, network, or even an operating system, where the framework divides the resource into one or more execution environments. Even something as simple as partitioning a hard drive is considered virtualization because one drive is partitioned to create two separate hard drives. Devices, applications, and human users are able to interact with the virtual resource as if it were a single logical resource. The term virtualization has become somewhat of a buzzword, and as a result, the term is now associated with a number of computing technologies including the following:
- Storage virtualization: The amalgamation of multiple network storage devices into what appears to be a single storage unit.
- Server virtualization: The partitioning of a physical server into smaller virtual servers.
- Operating system-level virtualization: A type of server virtualization technology which works at the operating system (kernel) layer.
- Network virtualization: Using network resources through a logical segmentation of a single physical network.
- Application virtualization: Running an application in an isolated environment that is decoupled from the underlying operating system.
A virtual tape library (VTL) is an archival backup solution that combines traditional tape backup methodology with low-cost disk technology to create an optimized backup and recovery solution. A VTL is an intelligent disk-based library that emulates traditional tape devices and tape formats. It acts like a tape library but offers the performance of modern disk drives: data is deposited onto disk just as it would be onto a tape library, only faster. Virtual tape backup solutions can be used as a secondary backup stage on the way to tape, or as their own standalone tape library solution. A VTL generally consists of a virtual tape appliance or server and software which emulates traditional tape devices and formats.
- Synonym for virtual disk. Volume is used to denote virtual disks created by volume manager control software. A volume can function as a container for a file system.
- A piece of removable media that has been prepared for use by a backup manager (for example, by the writing of a media ID).
A virtual private network (VPN) is a computer network that uses a public telecommunication infrastructure such as the Internet to provide remote offices or individual users with secure access to their organization's network. The aim of a VPN is to avoid an expensive system of owned or leased lines that can be used by only one organization.
A method of redundancy in which the secondary (backup) system runs in the background of the primary system. Data is mirrored to the secondary server at regular intervals, which means that there are times when the two servers do not contain exactly the same data.
Wide area data services (WADS or WDS) is a superset of smaller market segments that include WAN optimization, WAFS (wide-area file services), and application acceleration. Wide area data services enable the consolidation of file servers from remote sites to the data center without compromising end-user performance, and also enable the acceleration of file system-based distributed applications.
Wide area file services (WAFS) is a storage technology that allows businesses and enterprises to access remote data centers as if they were local. WAFS products combine distributed file systems with caching technology to allow real-time read-write access to shared file storage from any location, and they centralize tasks such as data management and archiving at the data center. WAFS is an alternative to maintaining and backing up separate file servers at each remote site.
A wide area network (WAN) is a communications network that is geographically dispersed and that includes telecommunications links. Techniques for WAN optimization include:
- Quick file scan—Rapidly scans files on every protected system or server to identify new or changed blocks.
- Adaptive compression—Shrinks transmitted data blocks by 50 to 90 percent using the best compression algorithm based on available CPU and network bandwidth.
- Dynamic bandwidth throttling—Controls the bandwidth available for backup jobs. Dynamic bandwidth throttling is useful for frequent backups of critical data or for use in limited-bandwidth environments.
- Enhanced CPU utilization—Automatically divides backup jobs across multiple CPUs, freeing source-system processing power for other tasks.
- Self healing—Automatically recreates delta index files that are corrupted or missing.
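Dynamic bandwidth throttling, as listed above, is commonly implemented with a token-bucket scheme. The sketch below is an illustrative assumption, not any vendor's actual implementation: the backup sender may transmit only as many bytes as tokens have accumulated, so the job cannot exceed the configured bandwidth cap.

```python
# Illustrative token-bucket throttle for backup traffic. The class
# name and numbers are made up for the example.

class BandwidthThrottle:
    def __init__(self, bytes_per_second):
        self.rate = bytes_per_second
        self.tokens = 0.0

    def tick(self, seconds):
        # Replenish tokens as time passes, capped at one second's
        # worth of burst so an idle link cannot bank unlimited credit.
        self.tokens = min(self.rate, self.tokens + self.rate * seconds)

    def send(self, nbytes):
        # Return how many bytes may be sent right now; the rest must
        # wait for the next tick.
        allowed = int(min(nbytes, self.tokens))
        self.tokens -= allowed
        return allowed

throttle = BandwidthThrottle(bytes_per_second=1000)
throttle.tick(0.5)          # half a second elapses -> 500 tokens
print(throttle.send(800))   # only 500 of the 800 bytes are permitted
```

Raising or lowering `bytes_per_second` at run time is what makes the throttling "dynamic": the same mechanism serves both frequent backups of critical data and limited-bandwidth links.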
A caching method in which modifications to data in the cache aren't copied to the cache source until absolutely necessary. Write-back caching is available on many microprocessors, including all Intel processors since the 80486. With these microprocessors, data modifications (for example, write operations) to data stored in the L1 cache aren't copied to main memory until absolutely necessary. In contrast, a write-through cache performs all write operations in parallel—data is written to the main memory and the L1 cache simultaneously.
Write-back caching yields somewhat better performance than write-through caching because it reduces the number of write operations to main memory. With this performance improvement comes a slight risk that data may be lost if the system crashes.
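The contrast between the two policies can be sketched in a few lines. This is a conceptual model only; a dictionary stands in for main memory, and all class and method names are assumptions made for the example.

```python
# Conceptual sketch of write-through vs. write-back caching.
# `backing` stands in for main memory.

class WriteThroughCache:
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}

    def write(self, key, value):
        # Write goes to the cache and the backing store in parallel.
        self.cache[key] = value
        self.backing[key] = value

class WriteBackCache:
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
        self.dirty = set()

    def write(self, key, value):
        # Write stays in the cache; the backing store is updated lazily.
        self.cache[key] = value
        self.dirty.add(key)

    def flush(self):
        # The "absolutely necessary" moment: push dirty entries out.
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()

mem_wt, mem_wb = {}, {}
wt = WriteThroughCache(mem_wt)
wb = WriteBackCache(mem_wb)
wt.write("a", 1)   # mem_wt is updated immediately
wb.write("a", 1)   # mem_wb is still empty -- lost if we crash now
wb.flush()         # now mem_wb is consistent with the cache
```

The write-back version performs fewer backing-store writes (repeated writes to the same key collapse into one flush), which is the performance gain described above; the window between `write` and `flush` is the crash-loss risk.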
To mark a file or disk so that its contents cannot be modified or deleted. When you want to make sure that neither you nor another user can destroy data, you can write-protect it. Many operating systems include a command to write-protect files. You can also write-protect 5¼-inch floppy disks by covering the write-protect notch with tape. 3½-inch floppy diskettes have a small switch that you can set to turn on write-protection. Write-protected files and media can only be read; you cannot write to them, edit them, append data to them, or delete them.
Also called zoned constant linear velocity, a type of CLV data read/write method that separates a CD or DVD into different fixed-speed zones and changes the speed of rotation for each zone instead of across the entire disk.
A method of recording data on a hard disk drive whereby the sectors per track on the drive are not consistent across the platter. In general, tracks closest to the center have fewer sectors than tracks toward the outside of the platter where the tracks are larger and can fit more sectors. Though the platter rotates at a constant angular velocity, the clock speed, or clock rate, changes as the read/write head moves from one zone to another along the platter. Also called zone bit recording.
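The geometry behind zoned-bit recording can be shown with a rough calculation: at a constant linear bit density, a track's capacity grows with its circumference, so outer tracks hold more sectors. The density and radii below are invented for illustration and do not describe any real drive.

```python
# Rough illustration of zoned-bit recording geometry.
import math

SECTOR_BITS = 4096 * 8     # bits per 4 KB sector (assumed)
BITS_PER_MM = 100_000      # assumed constant linear recording density

def sectors_per_track(radius_mm):
    # Capacity of a track is its circumference times the linear
    # density, divided into whole sectors.
    circumference_mm = 2 * math.pi * radius_mm
    return int(circumference_mm * BITS_PER_MM // SECTOR_BITS)

inner = sectors_per_track(20)   # track near the spindle
outer = sectors_per_track(45)   # track near the platter edge
print(inner, outer)             # the outer track fits more sectors
```

Because the platter spins at constant angular velocity, the longer outer tracks pass under the head at a higher linear speed, which is why the clock rate must change as the head crosses zone boundaries.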
The process of allocating resources in a SAN to load balance the devices connected to the network. Zoning allows the network administrator to separate the SAN into units and allocate storage to those units based on need. Zoning protects the SAN system from such threats as viruses, data corruption, and malicious hackers as the devices in their respective zones are not able to communicate outside the zone through their ports unless given permission.