
Watching the SAN rise

DQW Bureau




Until recently, the most significant developments in the storage market have
focused on improving the price/capacity ratio. As hard drive technology
has improved, companies have been able to pack more information into the same
space for a lower price. However, there comes a point at which brains need to
replace brawn. Structural considerations become increasingly important, as
companies realize that they need to make their storage systems work more
efficiently.

Companies have been discovering this efficiency through storage
consolidation, which enables them to bring together corporate data into an
easily manageable, flexible data pool. Over the past few years, Storage Area
Networking (SAN) technology has emerged as the preferred means of achieving this
consolidation.

Until relatively recently, most network storage systems were configured in a
traditional way--a storage device would be attached directly to a server,
serving files to the machine as necessary. The server would connect to client
machines and other servers across a LAN or WAN, but this part of the
infrastructure had little, if anything, to do with the storage device; all storage
would be local.


Network Attached Storage fills a need

When the concept of Network Attached Storage (NAS) emerged a decade ago,
things changed. Suddenly, the storage device became aware of the network, and
was attached to the LAN directly, rather than hanging off a dedicated file/print
or application server. In a sense, NAS devices became servers in their own
right; although they did not have the application processing functionality that
you would find in your average Windows NT server, they were nevertheless able to
serve up files without relying on a separate server for the network connection.
Moreover, multiple clients, using a wide range of operating systems, could
access and share files on these new devices, making them useful for the
consolidation of storage in a heterogeneous environment.
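
To make the idea concrete, here is a purely illustrative sketch, not any vendor's interface: a single network-attached file pool that clients running different operating systems read from directly over the LAN. All class, device and file names below are invented.

```python
# Toy model of the NAS idea: one network-attached file store shared by
# clients running different operating systems. Illustrative only.

class NasAppliance:
    """A self-contained file store reachable directly over the LAN."""
    def __init__(self):
        self._files = {}                      # path -> contents

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        return self._files[path]


class Client:
    """A client machine; the operating system does not change what it sees."""
    def __init__(self, name, os_name):
        self.name, self.os_name = name, os_name

    def fetch(self, nas, path):
        print(f"{self.name} ({self.os_name}) read {path!r}: {nas.read(path)!r}")


nas = NasAppliance()
nas.write("/shared/report.txt", "Q3 figures")

# Windows, Unix and NetWare clients all share the same consolidated file pool.
for client in (Client("pc-01", "Windows NT"),
               Client("ws-07", "Solaris"),
               Client("srv-03", "NetWare")):
    client.fetch(nas, "/shared/report.txt")
```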

One of the best things about NAS is its ease of administration. With IT
networking and storage skills in such short supply, the idea of a self-contained
box that can literally be plugged into the network is very attractive to IT
managers. There are, however, some disadvantages.


Scalability may be an issue, because NAS devices are single products, with
maximum capacity limits. Similarly, data transfer may be an issue, because NAS
devices deliver data over an existing LAN infrastructure, sharing bandwidth with
other traffic. Nevertheless, NAS is very useful in an environment where storage
consolidation is a requirement, but where cost of ownership is more important
than speed and flexibility. There is a wide range of NAS products available,
from single network-attached drives, through products for workgroups and
departments, to high-end solutions offering multiple terabytes of storage with
class-five availability.

The NAS pioneers had to develop their own thin operating systems optimised
for file serving, in order to achieve the required performance and reliability,
and these continue to be deployed in many current products. However, there is
now an alternative. With improvements in file systems, processing power and
robustness of operating systems (OS), it is now possible to derive a high-
performance, reliable NAS OS from the kernel of an OS such as Windows 2000.

From NAS to Storage Area Networks


Storage Area Networks (SANs) also achieve data consolidation, but in a
different way. SANs are designed for corporate data
center environments where high performance, high availability, and space for
growth are important. SANs allow a number of servers to share a pool of storage
arrays. A high-speed network connects the servers and the storage, creating a
more flexible, manageable, centralized data resource. The clients talk to the
servers and access data files through them just as they did before.

The SAN administrator can allocate storage to each server as needed, and can
dynamically reallocate data between different servers for performance and
redundancy purposes as workloads change over time.
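
As a rough sketch of this pooling idea, allocation and dynamic reallocation from a shared pool can be modelled as follows; the class, method and server names are hypothetical and do not correspond to any vendor's management software.

```python
# Illustrative model of a shared SAN capacity pool carved up between servers,
# with reallocation as workloads change. All names and figures are invented.

class StoragePool:
    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.allocations = {}                 # server name -> GB allocated

    @property
    def free_gb(self):
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, server, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("not enough free capacity in the pool")
        self.allocations[server] = self.allocations.get(server, 0) + size_gb

    def reallocate(self, source, target, size_gb):
        if self.allocations.get(source, 0) < size_gb:
            raise ValueError(f"{source} does not hold {size_gb} GB")
        self.allocations[source] -= size_gb
        self.allocations[target] = self.allocations.get(target, 0) + size_gb


pool = StoragePool(total_gb=2000)
pool.allocate("mail-server", 500)
pool.allocate("erp-server", 800)

# Workloads shift over time: move 200 GB from mail to ERP without adding disks.
pool.reallocate("mail-server", "erp-server", 200)
print(pool.allocations, "free:", pool.free_gb, "GB")
```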

Generally, the high-speed links between the SAN devices and servers are fibre
channel connections operating at speeds of up to 1 Gbit/sec, with 2 Gbit/sec just
around the corner. The most common topology for a SAN is a fabric-based system,
where all devices connect to a fibre channel network via switches. The
alternative, fibre channel arbitrated loop (FC-AL), gathers devices into a
ringed configuration. Fibre channel can also be used for point-to-point device
connections. Mainframe installations, such as IBM systems using ESCON, can be
thought of as SANs, although the term itself was introduced only a few years ago
and those solutions are based on higher-cost, proprietary hardware and software.
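
The sketch below, offered only as an illustration, builds each of the three topologies as a list of links between named devices; it models the wiring pattern, not the fibre channel protocol itself, and the device names are invented.

```python
# Hypothetical sketch of the three fibre channel topologies as link lists.
# It captures only which device is cabled to which, nothing more.

def fabric(devices, switch="fc-switch-1"):
    """Switched fabric: every device connects to a central switch."""
    return [(dev, switch) for dev in devices]

def arbitrated_loop(devices):
    """FC-AL: devices are joined into a ring."""
    return [(devices[i], devices[(i + 1) % len(devices)])
            for i in range(len(devices))]

def point_to_point(host, storage):
    """A single dedicated link between one host and one storage device."""
    return [(host, storage)]

devices = ["server-a", "server-b", "array-1", "tape-lib"]
print("fabric:", fabric(devices))
print("loop:  ", arbitrated_loop(devices))
print("p2p:   ", point_to_point("server-a", "array-1"))
```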


Corporate data consolidation has become something of a holy grail for IT
managers around the world. The benefits of storage consolidation via a SAN are
easily quantifiable. First, it makes the management of data much easier. Every
IT manager has had to contend with the headache of distributed 'islands' of
data, stranded on different servers, on far-flung subnets of the corporate LAN
or WAN. Bringing data into a single logical environment means not only that
everyone can get at it, but that network managers can update it and clean it
more easily. This can have enormous ramifications for the business, especially
in areas such as data warehousing and data mining, which can translate into more
customer sales.

SAN-based consolidation, as opposed to NAS, also makes backups possible
without using the existing company network infrastructure. This is important,
because such networks are being placed under considerable strain. In particular,
multinational corporations (MNCs) with operations spanning multiple time zones
are finding it increasingly difficult to conduct backups across their networks,
because they are never "quiet" -- someone is always using them.
Backing up from servers to the SAN infrastructure across high-speed, dedicated
links, or directly from the storage to the tape libraries (serverless backup),
eliminates this problem and frees up the conventional LAN for other traffic.
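
A back-of-the-envelope comparison makes the point; the sizes and link speeds below are invented purely for illustration and use decimal units.

```python
# Rough illustration: the same backup job over a shared LAN versus a dedicated
# SAN link. Figures are invented; 1 GB is treated as 8,000 megabits.

BACKUP_GB = 500

LINKS_MBPS = {
    "shared 100 Mbit/s corporate LAN": 100,
    "dedicated 1 Gbit/s fibre channel SAN link": 1000,
}

for link, mbps in LINKS_MBPS.items():
    hours = BACKUP_GB * 8000 / mbps / 3600        # megabits / (Mbit/s) -> hours
    note = "competes with user traffic" if "LAN" in link else "leaves the LAN free"
    print(f"{link}: roughly {hours:.1f} hours ({note})")
```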

The same high-speed links that provide this backup capability also lead to an
increase in performance and redundancy. Linking to SAN devices across a
dedicated high-speed fibre channel connection enables servers to retrieve data
much more quickly, without having to share a lower-speed link with other
traffic. Furthermore, the ability to dynamically reallocate data between storage
devices gives storage managers the ability to optimise data storage according to
business requirements, putting the data capacity closest to the point where it’s
needed.


Easy Expansion and Maintenance

Because they generally consist of multiple devices connected over a network,
SANs are also easily expandable. It is possible to add a SAN device to the
storage infrastructure without taking the existing storage down. It is also
possible to take individual servers down for maintenance or upgrade purposes
without necessarily taking storage off-line, as would be the case in a more
conventional, direct server-attached storage scenario. These advantages assist
business continuity at a time when it is more important than ever.

All of these benefits lead to a reduced cost of ownership, making it possible
to manage more data on a per-person basis with a SAN than with conventional,
disparate storage methods. It is also possible to reduce costs by adding extra
storage functionality, such as fault-tolerant systems, only where and when it is
needed.


This is not to say that there are not some considerable challenges for
companies wishing to implement a SAN environment. Building a SAN is one of the
more difficult tasks facing network managers, because of interoperability
considerations between different vendors’ devices, and between devices and
different operating systems. The problem with SANs is that, as with many other
emerging technologies, vendors have each taken their own approach to the
solution and do not have the time or resources to test all the possible
combinations of their products working with other vendors' products. This
creates a headache for the customer, who just wants to connect different devices
and get them working with different operating systems. No one wants to go back
to the bad old pre-open computing days, when vendor lock-in was a fact of life.

By Simon Penny, Director, PowerVault Storage Marketing, Dell Asia Pacific

(to be continued)
