A little over a decade ago, when network storage was measured in megabytes rather than gigabytes, I started working for a firm with four dedicated networked CAD computers and a half-dozen standalone desktops. The CAD stations were scheduled in four shifts of four hours each, and the server held less than a hundred megabytes of CAD files (a separate directory for each project). The workstations were connected to the file server with a 2 Mb/second coaxial cable.
We upgraded the server right away and increased the storage capacity, adding a couple more CAD stations. At that time, there was a clear separation in that office between those who wrote letters, those who typed letters, those who designed and those who drafted on CAD.
We then added the secretarial workstations to the network, along with another disk drive to store word-processing documents and spreadsheets. Each project got its own directory, with a separate subdirectory for each document type (word processing, spreadsheets, etc.). The result was one disk drive for CAD files, and another for all other project documents.
Time went on. CAD got simpler (and cheaper) and professionals were less willing to work at odd hours on dedicated CAD machines. We began to realize that such task specialization was unrealistic and impractical. Architects and engineers really do everything: design, drafting, letters, spreadsheets and more. It made less sense to support a staff of typists and a staff of dedicated CAD operators - and less sense to separate project documentation into different drives and directories.
But we were stuck - we already had all our data segregated into separate disk drives (which was easier for the server to handle).
We also found new needs. Our marketing department began using desktop publishing for proposals, and needed to keep large images on hand for quickly assembled documents. These documents were huge, and disk drive sizes were still limited to a few gigabytes - so they ended up on still another distinct drive.
Then the chaos began - we were outgrowing capacity on one drive, and swimming in extra space on another. The ideal arrangement would be to have all project information on the same device. Technically, our Network Operating System (NOS) would allow us to do that, but at a big performance cost. I knew that - even if we bought larger disk drives as time went on and capacities increased - we'd always bump into the same problem.
The problem is that storage on the network consists of discrete pools of space, each accessed and managed separately. Why can't we just use all our storage as we need it, rather than be trapped in one device for Marketing, another for CAD and a third for everything else - or at least be able to adjust and balance the needs with the capacity?
In addition, adding more and bigger hard drives often required beefing up the server - which needed more memory just to manage the files on those drives.
The answer to all this chaos is virtual storage, installed on the network rather than on the server. It is made up of two components:
1) NAS (Network Attached Storage) devices are simply disk space plugged into your network - anywhere. They are an easy way to add storage when you need it, but they are still separate devices - each with its own maximum capacity. These devices usually run UNIX or Linux and do nothing except manage themselves. They require little effort on the part of your network server, so there's no need to upgrade the server to manage the additional drives. Many offices have already plugged one of these devices into their network - a SNAP server (so named because it takes seconds to set up). This solves the server upgrade problem, but they're still separate devices. One big plus to these devices is that they are not licensed per seat - an inherently evil arrangement, in my opinion.
Your first NAS device can be a small standalone Linux server with gobs of hard drive space, or one of the many NAS devices now available on the market.
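If you're curious what's under the hood, a plain Linux box can act as a basic NAS with nothing more than an NFS export. A minimal sketch of the setup (the host name "nas" and the directory paths here are hypothetical, and your distribution's details may differ):

```shell
# On the NAS box: export a directory to the office subnet.
# Add a line like this to /etc/exports (the 192.168.1.0/24 subnet is assumed):
#   /projects  192.168.1.0/24(rw,sync)
exportfs -a                       # publish the current export list

# On each client: mount the shared space so it looks like a local drive.
mkdir -p /mnt/projects
mount -t nfs nas:/projects /mnt/projects
```

Dedicated NAS appliances do essentially the same thing behind a point-and-click setup screen, usually offering SMB shares for Windows clients as well.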
2) Storage Virtualization Software takes all of the storage devices on your network and manages them as a single large storage resource. As new storage is added to your network, you simply add it to the total capacity. To remove or replace a storage device, let the software migrate the information to other drives. End users will still see their files in the same virtual location they stored them in. You can break up your storage resources into virtual devices, but their relative sizes can be adjusted to accommodate your current needs.
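The column doesn't name a specific product, but Linux's Logical Volume Manager (LVM) is a freely available illustration of the same idea on a single server: several physical drives pooled into one resizable resource. A sketch, with hypothetical device and volume names:

```shell
# Pool two physical drives into a single volume group.
pvcreate /dev/sdb /dev/sdc
vgcreate projects /dev/sdb /dev/sdc

# Carve virtual "drives" out of the pool.
lvcreate -L 80G -n cad  projects
lvcreate -L 40G -n docs projects

# When CAD outgrows its share, grow it from the pool's free space
# (the file system on the volume must then be resized to match).
lvextend -L +20G /dev/projects/cad

# To retire a drive, migrate its data elsewhere, then remove it.
pvmove /dev/sdb
vgreduce projects /dev/sdb
```

Users never see any of this shuffling; the file system on each logical volume stays mounted in the same place throughout.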
The software part of the equation is the expensive part, but as more of the components are built into the NOS, its price should come down. Some of the components needed in the server to make this economical (iSCSI and Fibre Channel connectivity, for example) are planned for inclusion in Microsoft's upcoming .NET server.
Perhaps a complete Storage Virtualization solution will someday be included in many NOSs - but not this year. If you need it right away, you'll pay a premium.
For the moment, I'd strongly suggest you consider NAS devices as your needs grow - unless you wanted to buy that new server anyway....
Have you already disengaged your storage from your server? E-mail me at email@example.com.
Michael Hogan, AIA - head chiphead at Ideate - provides custom web solutions and consulting services to the AEC industry in Chicago.
He welcomes comments by e-mail at firstname.lastname@example.org