What does network-centric really mean?
One way of understanding this phrase is to examine what it looks like, how it performs and what it delivers.
Google Inc. offers one of the best examples of this concept.
To earn the designation of network-centric, a system must be able to deliver high-resolution displays to tens of millions of computer and cell phone devices simultaneously, and without any downtime.
The access time, or latency, to an application from any point on the globe must be comparable to what a user would get from a desktop computer, which is no more than a quarter of a second.
The Defense Department and intelligence community can learn a lot from Google by focusing on the design of a network-centric infrastructure that serves the needs of information warfare.
What's needed in DOD now is a Google-like infrastructure that can connect the dots seamlessly, rapidly and reliably. DOD needs an architecture that cleanly separates the networking functions from the data management tasks and, in turn, from computational processing.
DOD also needs a system that can accommodate the dynamics of rapidly changing user requirements, while keeping the legacy applications on the periphery until they can migrate to a shared, interoperable network.
Most important, DOD and the intelligence community should understand the economics of Google. The company's ongoing operating costs are a fraction of the huge costs (by some estimates well over 50 percent of the department's total IT budget) currently consumed by DOD computers just to keep running.
The distinguishing characteristic of a network-centric design is the speed with which individuals can customize what they see or hear. For a standard application, such capability would allow a user to instantly customize his or her display.
Speed also extends to development. New applications would be delivered in less than a month, from statement of requirement to working prototype.
A network-centric design must be secure. There would be no known virus infections, no instances of database corruption and no unauthorized accesses. All of that would be delivered while accessing petabytes (that is, thousands of terabytes) of distributed databases that originate from decentralized sources that remain under the control of local organizations.
We are talking here about operating a massively parallel system of systems that is redundant, geographically distributed, inexpensive to operate, instantly reconfigurable, infinitely scalable and user-friendly.
In short, we are talking about Google.
A good question to ask now is: How did two graduate students, using borrowed equipment, launch a network-centric system that soon spanned the globe and has become one of the most frequently accessed network services? How could that have been accomplished in less than five years?
The secret to the success of Google is its infrastructure. Its engineers built and operate a protected global network that is completely isolated from the external world except at tightly controlled points of entry. This infrastructure consists of dozens of look-alike clusters of servers. Each cluster has an identical architecture and an identical software environment, and is made up of tens of thousands of cheap processors. The individual processors as well as each cluster are configured so that even in case of a massive failure, the system as a whole will continue to function and keep delivering services without a perceptible degradation. The entire network is engineered for reliability through software and logical redundancy and not through dependency on hardware never failing.
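The principle described above — reliability through software and logical redundancy rather than through hardware that never fails — can be sketched in a few lines. This is an illustrative toy, not Google's actual implementation; the cluster names and the `route` function are hypothetical.

```python
import random

class Cluster:
    """A hypothetical look-alike cluster: identical architecture,
    identical software environment, built from cheap processors."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def serve(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request!r}"

def route(clusters, request):
    """Try clusters in random order. Because every cluster can answer
    any request, losing one (or several) degrades nothing visibly:
    the request simply falls through to the next replica."""
    for cluster in random.sample(clusters, len(clusters)):
        try:
            return cluster.serve(request)
        except ConnectionError:
            continue  # logical redundancy: skip the failed cluster
    raise RuntimeError("all clusters down")

clusters = [Cluster("us-east"), Cluster("eu-west"), Cluster("asia")]
clusters[0].healthy = False  # simulate a massive failure in one cluster
print(route(clusters, "search: network-centric"))
```

The design choice this illustrates is that fault tolerance lives in the routing logic, not in any individual machine; hardware is assumed to fail and is simply routed around.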
To understand Google you must examine its software architecture, which depends on real-time re-indexing of the contents of its files. The key here is the separation of data and of data files from all other software. It is here that the contrast between the network-centric designs, as compared with the existing client-server solutions, shows up dramatically. In the Google network the data is continuously re-examined for usage patterns and then moved around the globe, as needed, while making sure that at least two other locations can take over in the case of a failure.
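The placement policy described above — watch usage patterns, move data toward demand, and keep at least two other locations ready to take over — might be sketched as follows. This is a minimal illustration under assumed names (`place`, the site labels, the log format), not a description of Google's real system.

```python
from collections import Counter

REPLICAS = 3  # a primary copy plus at least two failover locations

def place(access_log, sites):
    """Hypothetical placement policy: count accesses per site for each
    data item, put the primary copy where demand is highest, and
    replicate to the next-busiest sites for failover."""
    demand = {}  # item -> Counter of accesses per site
    for item, site in access_log:
        demand.setdefault(item, Counter())[site] += 1
    placement = {}
    for item, counts in demand.items():
        ranked = [s for s, _ in counts.most_common()]
        # pad with remaining sites if fewer than REPLICAS saw traffic
        ranked += [s for s in sites if s not in ranked]
        placement[item] = ranked[:REPLICAS]
    return placement

log = [("doc1", "us"), ("doc1", "us"), ("doc1", "eu"), ("doc2", "asia")]
print(place(log, ["us", "eu", "asia"]))
# → {'doc1': ['us', 'eu', 'asia'], 'doc2': ['asia', 'us', 'eu']}
```

A production system would rerun this continuously as usage shifts; the point is only that data placement is a policy computed from observed demand, not a fixed assignment of files to servers.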
One way to describe this capability is as seamless global system awareness, in contrast with the current DOD practice of trying to deliver interoperability through costly and error-prone improvised splicing of connections between isolated servers.
Indeed, the Department of Defense and national-security organizations have much to learn from Google. There are lessons there for anyone to learn who wishes to lecture about true net-centric futures.
Paul A. Strassmann is the distinguished professor of the Information Sciences School of Information Technology and Engineering at George Mason University. He was director of Defense information during the first Bush administration. He can be reached at email@example.com.