Section 5: Costs

We present a 6-year budget for procuring and operating the main components of our architecture from 1998 through 2003. These costs, based on a just-in-time implementation scheme, are then compared with the May 1994 ECS baseline. This analysis is not a cost proposal. Its purpose is to indicate the categories where our architecture might be less expensive than the current baseline.

We use 1994 hardware as the baseline, recognizing that significant price/performance improvements will occur in computing, networking, and storage systems within the next 3 years. Indeed, entirely new technologies could emerge that would constitute a much better platform on which to build the EOSDIS archival storage system. To model the expected improvement in price/performance, we project our hardware costs to the year of acquisition by applying a uniform annual cost deflator to all technologies.

The cost basis we use for commercial software is a mixture of judgment and experience. For custom code, we have assumed that it will cost $100/line to develop non-COTS software, and line counts have been estimated for the major modules.

A summary of our cost model is presented in Tables 5-1 (Investment Costs) and 5-2 (6-year Operating Costs). Each line represents the cumulative costs of a number of components. The details for each line are found in the referenced sections.

Section 5.1 discusses hardware costs of 2 superDAACs and 150 peerDAACs. We arbitrarily chose a mix of 100 minimal and 50 large peerDAACs because it provides a net storage capacity and processing speed equal to 1.5 superDAACs. Purchased equipment, integration, and subcontracts for hardware development are all included here. Section 5.2 provides software costs, split into COTS software and software to be coded by the ECS development team.

Section 5.3 discusses the 6-year operating and maintenance (O&M) costs from the launch of the first satellite in 1998 through the end of the EOSDIS contract in the year 2003. Salaries dominate these costs, and we assume 1 man-year will cost $125,000 in each year of the project. Staffing is assumed to grow linearly, in synchrony with the hypothesized procurement plan for the superDAACs. The costs of superDAAC operations are based on historical experience at the San Diego Supercomputer Center.

Section 5.4 has several subsections explaining the relationship between our cost model and the ECS baseline, as of May 1994. We have not included all categories of the ECS budget in our study but, on the other hand, have delved into the costs of building and running DAACs and SCFs, costs that are part of the larger EOSDIS system. We have tried to use common sense to relate our numbers to the baseline numbers and explain our reasoning in the several parts of this section.

Table 5-1: Cost Summary -- Investment Costs

Item                            | Description                                                                                                                  | Section | Cost ($M)
--------------------------------|------------------------------------------------------------------------------------------------------------------------------|---------|----------
2 superDAACs                    | Hardware (at deflated prices), integration, and subcontracts. (Lower figure for LAN topology, larger for mainframe topology.) | 5.1.1   | 30-43
150 peerDAACs                   | Mixture of two types. Total capacity ~1.5 superDAACs.                                                                          | 5.1.2   | 9
COTS software                   | O/S, DBMS, system management, applications software                                                                            | 5.2.1   | 7
In-house software development   | Type libraries, etc.                                                                                                           | 5.2.2   | 9
Contracted software development | Middleware, HSM                                                                                                                | 5.2.3   | 20-25
System integration and testing  | In-house cost                                                                                                                  | 5.2.4   | 13
Total                           |                                                                                                                                |         | 88-106

Table 5-2: Cost Summary -- 6-year Operating Costs

Item                 | Description                                               | Table or Section | Cost ($M)
---------------------|-----------------------------------------------------------|------------------|----------
2 superDAACs         | Hardware maintenance (tape silos and CRAY-like platforms) | 5-12             | 10
2 superDAACs         | Operations staff                                          | 5-9, 5-10        | 16-19
2 superDAACs         | Technical staff (e.g., DBA, help desk, documentation)     | 5-11             | 15
WAN                  | Communication tolls                                       | 5-13             | 21
Software maintenance | 10%/year on COTS software                                 | 5.3.3            | 19
Total                |                                                           |                  | 81-84

5.1 Hardware

Hardware costs are split into two categories: the costs of the 2 superDAACs and the costs of the 150 peerDAACs. For both the superDAACs and the peerDAACs, we follow a just-in-time acquisition strategy. This is the most economical plan; it is based on the historical experience that computer costs have diminished steeply with time, and there is no sign this trend will end. The basis for this judgment is given in Appendix 2.

The strategy works as follows. Specific architectures have been priced from today's "catalogs" (see Section 4). We then assume that a year from now it will cost half as much to buy a system with the same functionality. We assume this price deflator, 0.5, is independent of the specific technology and remains constant through the year 2003. Again, Appendix 2 discusses the basis for this assumption.

The second part of the strategy is to synchronize the incremental acquisition of the systems with the storage requirements, in petabytes, of the EOS flight plan. We assume that a superDAAC with a capacity other than 2 PB can be bought at a price scaled by the ratio of its storage to 2 PB. The following schedule meets the needs, according to the figures for the Cumulative Storage Archive provided by HAIS. Buy a 0.5-PB system for each superDAAC in June 1997 and June 1998. This is a minimal system consisting of a CRAY C90 and 2 NTP tape silos. Buy a 1-PB system for each superDAAC in June 1999. In June of the years 2000 through 2002, buy an additional 2-PB increment for each. Assuming that all equipment is retained, the total capacity at the end will be 16 PB. It will be an accident if any of these systems, even the one bought in 1998, strongly resembles the specific architectures we developed.

The biggest uncertainty in price that results from this model is the value taken for the cost deflator. Appendix 2 has our forecast for the evolution of a range of technologies, and we have made the simple approximation that, for fixed performance, all hardware items halve in cost each year. For example, computer chip performance is expected to increase by a factor of 2.5 in the next 2 years (200 MHz to 500 MHz), memory sizes are expected to increase by a factor of 16 in the next 3 years (4-Mb to 64-Mb), and storage densities are expected to increase by a factor of 4 in the next 2 years (20-GB tape to 80-GB tape).

5.1.1 Pair of SuperDAACs

An architecture for the pair of superDAACs was described in Section 4, and detailed configuration information and current component costs are contained in Appendix 6. The bottom line is that a single 2-PB superDAAC today would cost between $130M and $203M, depending on whether it was architected to the workstation cluster design or the CRAY design. Equivalently, since our system needs 2 superDAACs, the price baseline is $130M for a pair of 1-PB superDAACs with workstation architecture or $203M for a pair of 1-PB superDAACs with CRAY architecture. Each 1-PB superDAAC for the CRAY architecture would have 2 CRAY C90s and 4 NTP tape silos.

Assuming the cost deflator and the acquisition schedule described above, the superDAAC acquisition costs are between $23.1M and $36.8M.

Table 5-3: Just-in-time Purchase Cost of 2 SuperDAAC Architectures

Date             | Added PB (both DAACs) | Cum PB (both DAACs) | Deflator | Arch 1 ($M) | Arch 2 ($M)
-----------------|-----------------------|---------------------|----------|-------------|------------
June 1994        |                       |                     | 1        | 130         | 203
June 1997        | 1                     | 1                   | .125     | 8.1         | 12.8
June 1998        | 1                     | 2                   | .0625    | 4.0         | 6.4
June 1999        | 2                     | 4                   | .0313    | 4.0         | 6.4
June 2000        | 4                     | 8                   | .0156    | 4.0         | 6.4
June 2001        | 4                     | 12                  | .0078    | 2.0         | 3.2
June 2002        | 4                     | 16                  | .0039    | 1.0         | 1.6
Total, 1997-2002 |                       |                     |          | 23.1        | 36.8
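For concreteness, the purchase model behind Table 5-3 can be sketched in a few lines of Python. This is our reading of the text: the PB columns are totals across both superDAACs, and the per-PB price comes from the $130M/$203M baselines for 2 PB of 1994-priced capacity. Note the report rounds each year's purchase before summing (giving 23.1 and 36.8); the unrounded totals come out slightly higher.

```python
# Sketch of the just-in-time purchase model behind Table 5-3. Assumptions:
# the $130M (Arch 1) and $203M (Arch 2) baselines buy 2 PB at 1994 prices,
# and prices halve every year (Appendix 2).
BASE_YEAR = 1994
DEFLATOR = 0.5
PRICE_PER_PB = {"Arch 1": 130 / 2, "Arch 2": 203 / 2}   # $M per PB in 1994

# (year, PB added across both superDAACs), per the HAIS storage schedule
SCHEDULE = [(1997, 1), (1998, 1), (1999, 2), (2000, 4), (2001, 4), (2002, 4)]

for arch, per_pb in PRICE_PER_PB.items():
    total = 0.0
    for year, added_pb in SCHEDULE:
        cost = per_pb * added_pb * DEFLATOR ** (year - BASE_YEAR)
        total += cost
        print(f"{arch}, June {year}: +{added_pb} PB for ${cost:.1f}M")
    print(f"{arch} total, 1997-2002: ${total:.1f}M")   # ~23.4 and ~36.5
```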

Building the superDAAC will entail system integration costs. It is notable that, as equipment gets cheaper, the fraction of the hardware budget associated with systems integration will rise. This effect will be offset partly by the fact that as technology improves, there are fewer "parts," so integration becomes easier.

Table 5-4: SuperDAAC Integration Costs (Both SuperDAACs)

Item                                       | Description                 | Basis                           | 6-year cost ($M)
-------------------------------------------|-----------------------------|---------------------------------|-----------------
System integration (either architecture)   | Buy, cable, tune, install   | 5 people/year for 6 years       | 3.75
Workstations                               | Software/system development | 30 machines, $10K every 3 years | 0.6
Facility improvement (either architecture) | Electricity, floors, air    | 20,000 square feet              | 2.0
Total                                      |                             |                                 | 6.4

5.1.2 PeerDAACs

The other part of our architecture is a collection of peerDAACs that we imagine to be located at government and academic laboratories involved with global change research. These will support both single-investigator projects and the large enterprises of the instrument teams.

The cost model used for the peerDAACs is the same as the model used for the superDAACs. A mix of peerDAAC sizes is expected. We assume that 100 minimal and 50 large peerDAACs will represent an effective mix of storage and compute capabilities. The total capacity of the peerDAACs will be 1.5 times that of a superDAAC, and their aggregate cost will be $9.4M, as is shown below.

One-sixteenth of the peerDAACs are bought in June 1997 and again in June 1998, one-eighth in June 1999, and one-quarter in June of each year from 2000 through 2002. Two prices were developed for each size of peerDAAC (Section 4). Table 5-5 shows the cumulative cost for 100 of the lowest-cost minimal peerDAACs and 50 of the more expensive large peerDAACs.

Table 5-5: Just-in-time Purchase Cost of 150 PeerDAACs

Date             | PeerDAACs added | Cumulative peerDAACs | Deflator | Minimal peerDAACs ($M) | Large peerDAACs ($M)
-----------------|-----------------|----------------------|----------|------------------------|---------------------
June 1994        |                 |                      | 1        | 204                    | 211
June 1997        | 9               | 9                    | .125     | 1.59                   | 1.65
June 1998        | 9               | 18                   | .0625    | .8                     | .83
June 1999        | 18              | 36                   | .0313    | .8                     | .83
June 2000        | 38              | 74                   | .0156    | .8                     | .83
June 2001        | 38              | 112                  | .0078    | .4                     | .42
June 2002        | 38              | 150                  | .0039    | .2                     | .21
Total, 1997-2002 |                 |                      |          | 4.59                   | 4.77
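As a cross-check, here is a short Python sketch reproducing Table 5-5 under our reading of it: the June 1994 row gives the aggregate 1994 price of each fleet ($204M for the 100 minimal units, $211M for the 50 large units), which is then spread over the purchase fractions and deflated by 0.5 per year from 1994.

```python
# Sketch reproducing Table 5-5. Assumption: the 1994 row holds aggregate
# fleet prices in $M, spread over the purchase fractions from the text.
FRACTIONS = [(1997, 1 / 16), (1998, 1 / 16), (1999, 1 / 8),
             (2000, 1 / 4), (2001, 1 / 4), (2002, 1 / 4)]
FLEET_PRICE_1994 = {"minimal (100 units)": 204.0, "large (50 units)": 211.0}

for kind, price in FLEET_PRICE_1994.items():
    total = 0.0
    for year, frac in FRACTIONS:
        cost = price * frac * 0.5 ** (year - 1994)   # deflated tranche cost
        total += cost
        print(f"{kind}, June {year}: ${cost:.2f}M")
    print(f"{kind} total: ${total:.2f}M")   # ~4.58 and ~4.74, ~$9.4M combined
```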

5.1.3 Wide-Area Networking

The investment part of the wide-area networking cost includes the routers and interfaces at the superDAACs and peerDAACs, plus mobilization costs, if any, for the phone company to set up the circuits. Our general approach is to have as much interface equipment as possible owned and maintained by the service provider; those costs will be embedded in the operating budget. The DAAC-unique equipment for networking is listed explicitly in the peerDAAC price tables. The corresponding equipment at the superDAACs is implicit in the cost of the request queueing platforms.

5.2 Software

The software costs are summed over 3 categories: commercial off-the-shelf software, in-house developed software, and contracted software.

5.2.1 COTS Software

Table 5-6 shows the COTS software costs. We assume the 2 superDAACs have a COTS OS, SQL-* DBMS, and a system management tool suite as noted in Section 3. The peerDAACs require a DBMS and 3 applications. The peerDAAC operating system is assumed to be bundled with the hardware and is not priced separately. Hence, the total cost of COTS software is $6.6M.

Table 5-6: COTS Software

Item

Quantity

Unit cost

Total ($)

Operating system

2

100K

200K

SuperDAAC DBMS

2

100K

200K

PeerDAAC DBMS

150

10K

1.5M

System management

2

100K

200K

Application 1 (e.g., IDL)

150

10K

1.5M

Application 2 (e.g., AVS)

150

10K

1.5M

Application 3 (e.g., MATLAB)

150

10K

1.5M

Total

6.6M
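The table's total is easy to verify; a quick Python check of the quantities and unit costs above:

```python
# Cross-check of Table 5-6: (quantity, unit cost in $K) for each COTS item.
cots = {
    "Operating system":             (2, 100),
    "SuperDAAC DBMS":               (2, 100),
    "PeerDAAC DBMS":                (150, 10),
    "System management":            (2, 100),
    "Application 1 (e.g., IDL)":    (150, 10),
    "Application 2 (e.g., AVS)":    (150, 10),
    "Application 3 (e.g., MATLAB)": (150, 10),
}
total_k = sum(qty * unit_k for qty, unit_k in cots.values())
print(f"Total: ${total_k / 1000:.1f}M")   # Total: $6.6M
```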

5.2.2 In-house Software Development

Table 5-7 summarizes the in-house software development we discussed in Section 3. In aggregate, these efforts will cost $9.125M.

Table 5-7: In-house Developed Software

Item                           | Section | Cost ($M)
-------------------------------|---------|----------
Cooperation in geo-standards   | 3.2     | 1.25
Schema                         | 3.2     | 2.5
Type library                   | 3.2     | 2.5
Eager pipeline                 | 3.4     | 2.5
Monitoring of object standards | 3.6     | .375
Total                          |         | 9.125

5.2.3 Contracted Software

Table 5-8 summarizes the efforts that need to be contracted to external vendors of COTS software. In aggregate, these will require $20-25M.

Table 5-8: Contracted Software

Item                   | Section | Cost ($M)
-----------------------|---------|----------
2 HSM efforts          | 3.3     | 10
2-3 middleware efforts | 3.4     | 10-15
Total                  |         | 20-25

5.2.4 Integration and Testing

Lastly, the contractor must design the EOSDIS system and integrate its various pieces. Additionally, the contractor must choose external vendors (5-6 in all) to work with and monitor their progress, and must test the integrated system.

We assume that a 10-person group can perform the design-and-integration role, and a 5-person team can perform the testing. Over 6 years, this will result in 90 person-years of effort.

With a 10% safety margin added, we thereby allocate 100 person-years of effort to this task, costing $12.5M.
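The arithmetic behind the $12.5M figure, as a one-off Python sketch (the 10% margin takes 90 person-years to 99, which we round up to 100):

```python
# Integration-and-testing labor: 10 designers/integrators plus 5 testers
# for 6 years, with a 10% safety margin, at $125K per person-year.
person_years = (10 + 5) * 6                  # 90
allocated = round(person_years * 1.1, -1)    # 99 -> allocated as 100
cost_m = allocated * 0.125                   # $M at $125K/person-year
print(person_years, allocated, cost_m)       # 90 100.0 12.5
```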

5.3 Operations and Maintenance

Running costs are dominated by labor, and a concept of operations is needed to justify staffing assumptions. In our architecture, the difference between superDAACs and peerDAACs is one of size more than function. With this design, people, data, and algorithms can be located at the most advantageous or convenient place. Our concept of operations exploits this flexibility.

Many global change scientists have contradictory views of their relationship to data and computers. On the one hand they wish to be "near" the data and have control over its analysis but, on the other hand, they are reluctant to establish and manage the type of computing environment needed to cope with the quantity, complexity, and interdependency of EOS data.

Our architecture is consistent with the following concept of operations:

Both human steering and quality assurance will be performed by data specialists at workstations using graphical displays. Human steering (for example, registering images to a map projection) is defined to be an activity that will need to be done as an integral part of the product generation cycle. The user interface for this task will be tailored for efficiency.

Quality assurance of data will be performed from a workstation by examining samples from the data stream. This task will be performed either at the superDAACs or the peerDAACs. However, we are not confident that data from the high-rate imagers (MODIS, MISR, and ASTER) could be reviewed adequately unless the staff are resident at the superDAAC.

The user environment for this task will be tailored to exploratory data analysis. We anticipate that data problems will be uncovered frequently by researchers, and that QA staff and researchers will be involved jointly in finding and fixing the problems. Ideally, the QA staff would be able to duplicate locally the specific analyses and displays underlying the problem.

Instrument and product specialists will need to establish the requirements for human steering and quality assurance of the various data streams. The staff required to perform the work will then be determined by the degree of automation in the resulting procedures.

Human data assistants figure prominently in the ECS scenarios. This is an important area to automate and thereby reduce staffing costs. For example, it is the recent experience at SDSC that each person on the help desk can only handle approximately 2 questions/hour.

As such, we expect the contractor to construct automated systems for user assistance; we view a call to a help desk as a last resort for a user in need of assistance.

Our idea for the help desk is that inquiries pertaining to level 3 and higher products would always be answered from the superDAACs. We anticipate most inquiries of this type will seek to download and display standard images, not perform scientific manipulation of the data. Specialists needing information about the data that goes beyond the structure and lineage information contained in the database would direct inquiries to the help desk at the DAAC where the appropriate domain experts were situated.

To simplify the following discussion, all labor associated with product generation, data quality assurance, and user help is lumped into the superDAAC budget. As described above, this does not preclude large parts of the work being performed at the peerDAACs.

5.3.1 SuperDAACs

Operations Staff

The job descriptions and head counts required for the operations staff at a superDAAC will be very similar to those needed to run a supercomputer facility, and this section is based on experience at the San Diego Supercomputer Center. Two models are shown: one for an autonomous superDAAC and one for a superDAAC colocated at an existing large-scale computing facility.

The cost summary that appears in Table 5-2 is calculated as follows: the lower bound ($16M) is for 1 stand-alone superDAAC (Table 5-9, $9.7M) plus 1 colocated superDAAC (Table 5-10, $5.9M); the upper bound ($19M) is for 2 stand-alone superDAACs.

Table 5-9 has the job descriptions and head counts for the self-contained superDAAC. In addition to the staff that keeps the equipment running, we include administrative and clerical positions as well as the systems analysts needed to install new versions of the software, debug problems, and respond to emergencies. The system administrators for a local workstation LAN used by software developers and software maintainers are also given.

Table 5-9: SuperDAAC Operations Staff

Position               | Description                                  | Staff level 1997 | Staff level 2000 | Total man-years
-----------------------|----------------------------------------------|------------------|------------------|----------------
Manager                | Coordinates activities, manages staff        | 1                | 1                | 6
Operations -- 3 shifts | Perform backups, monitor systems ($65K/year) | 10               | 10               | 60
System analyst         | System support for the vector supercomputer  | 1                | 1                | 6
System analyst         | System support for the database              | 1                | 1                | 6
System analyst         | System support for the HSM                   | 1                | 1                | 6
System administrator   | System support for the workstation LAN       | 0                | 1                | 5
Clerical               | Maintain accounts ($65K/year)                | 1                | 2                | 11
Technical              | Maintain disks, network                      | 1                | 1                | 6
Facilities             | Maintain building, power                     | 1                | 1                | 6
Total                  |                                              |                  |                  | 112
Total cost, 6 years    | Assuming $125K/year, except as noted         |                  |                  | $9.74M

The staffing levels in the operations row assume the machines will be monitored 24 hours/day with 2 people on each shift. This is to minimize the risk of fire, electrical outages, stuck elevators, and stolen equipment. If a 2-shift operation is deemed appropriate, the operations support level could be decreased from 10 to 7 persons, saving 18 man-years over the duration.

If the superDAAC is colocated at an existing center, the support requirements would be only half as large. Table 5-10 shows the staff profiles under this assumption.

Table 5-10: Colocated SuperDAAC Operations Staff

Position               | Description                                  | Staff level 1997 | Staff level 2000 | Total man-years
-----------------------|----------------------------------------------|------------------|------------------|----------------
Manager                | Coordinates activities, manages staff        | 1                | 1                | 6
Operations -- 3 shifts | Perform backups, monitor systems ($65K/year) | 2                | 2                | 12
System analyst         | System support for the vector supercomputer  | 1                | 1                | 6
System analyst         | System support for the database              | 1                | 1                | 6
System analyst         | System support for the HSM                   | 1                | 1                | 6
System administrator   | System support for the workstation LAN       | 0                | 1                | 5
Clerical               | Maintain accounts ($65K/year)                | 1                | 2                | 11
Technical              | Maintain disks, network                      | 1                | 1                | 6
Facilities             | Maintain building, power                     | 0                | 0                | 0
Total                  |                                              |                  |                  | 58
Total cost, 6 years    | Assuming $125K/year, except as noted         |                  |                  | $5.87M
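The staff-cost totals in Tables 5-9 and 5-10 follow from the stated rates; a minimal Python check, with man-years priced at $125K except operations and clerical staff at $65K:

```python
def staff_cost_m(my_at_125k, my_at_65k):
    """Six-year staff cost in $M at the two salary rates used above."""
    return my_at_125k * 0.125 + my_at_65k * 0.065

# Table 5-9: 112 man-years total, of which 60 (operations) + 11 (clerical)
# are at $65K. Table 5-10: 58 total, with 12 + 11 at $65K.
print(staff_cost_m(112 - 71, 71))   # 9.74  -> $9.74M
print(staff_cost_m(58 - 23, 23))    # 5.87  -> $5.87M
```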

Indeed, many of the tasks can be handled by a small increment to existing supercomputer staffs. In particular, the operations support needed to reboot computers or unjam tape silos could be provided by 2 new FTEs if the superDAAC were colocated with an existing supercomputer center.

Science and Support Staff

The concept of operations, described previously, explains our overall approach to staffing the science and data management positions needed for EOSDIS. As stated previously, our architecture permits many jobs to float between a peerDAAC and a superDAAC but, for simplicity, we discuss a model where all work is performed at one of the superDAACs.

The most important positions will be those associated with the science products of the individual instruments. We have neither the information nor the experience to estimate the number of such positions, but we have a few observations to make.

In the past it has required scores of people to maintain the flow of good data from the larger systems. Blindly scaled up to the number of instruments and data rates expected in the EOS era, that experience leads to extremely large staff counts. Better computing systems with more automated storage devices will inevitably make staff more efficient, so the same number of people will be able to handle larger data sets. Beyond that natural increase in productivity, there are at least 3 additional gains that we think accrue from our architecture.

The first of these is the efficiency resulting from an entirely automatic hierarchical storage manager. To a first approximation, no human will ever mount a tape. The second efficiency arises from the fact that our method of lazy/eager evaluation requires a high degree of automation in the product factory. This will tend to reduce the staff because fewer products will routinely be calculated.

The third efficiency comprises a number of benefits that result from using a DBMS-centric approach with an enterprise-wide schema and data dictionary. Staff will be able to move more easily from instrument to instrument and product to product because of the similar data models. Developers will be able to write application software for data-processing workstations more cheaply for the same reason. It may be possible to design "generic" workstations, using this common data model, that could be customized easily for a particular instrument or science product. Thus, although we believe our architecture is more conducive to high efficiency than the current baseline, we would need to work closely with instrument teams for some time before we could establish this fact.

There are 2 responsibilities where we have specific recommendations: database administration and "help desk" support. A database administrator (DBA) will be responsible for all work related to the COTS DBMS engine. This includes relationships with the vendors and the users. The former involves installing software and reporting bugs. The latter involves establishing user accounts, changing permissions, allocating storage, and installing improvements to the schema, including types, tables, procedures, and triggers. We assume that each superDAAC will require 5 database administrators, but we realize that no one has experience with such large database systems running on the proposed architecture. In addition, database administrators will be required for many of the peerDAACs. These will be resident employees at larger sites, but many sites will be supported over the network from the superDAACs. We assume that there will be 1 DBA for every 25 peerDAACs.

Our 2-level approach to the EOS "help desk" was described previously. With the quantity of data expected, a high degree of automation will be essential if information about the data is to be widely available. We have 3 ideas for doing this. First, EOSDIS should be a paperless system. All documentation should be stored and distributed in electronic form. Second, tutorials should be developed and placed online to guide new users through the data and to serve as a resource for application developers who need to understand the inner workings of the DBMS. Finally, all bug reporting should be done online, with automatic reporting, routing, and (after resolution) publication. Our recommendation for the help desk is that NASA decide the budget they wish to spend and hire accordingly. We show an average of 4 at each DAAC. Creating the electronic documentation recommended earlier will require, we believe, a staff of 3 editors/technical writers.

Our plan for these positions is given in Table 5-11. This table assumes linear growth between the initial staff levels at the end of 1997 and the full complement at the end of the year 2003. This model yields an average staff of 21 in this category during the 6-year operating phase.

Table 5-11: Science and Support Staff (Both SuperDAACs)

Position          | Description                           | Positions, 1997 | Positions, 2003 | Total man-years
------------------|---------------------------------------|-----------------|-----------------|----------------
Product analysis  | Manage creation of standard products  |                 |                 |
Quality assurance | Instrument/science specialists        |                 |                 |
DBA, superDAAC    | 5 per superDAAC                       | 2               | 10              | 36
DBA, peerDAAC     | 1 per 25 peerDAACs                    | 3               | 6               | 27
Editor/writer     | Electronic documentation ($90K/year)  | 2               | 4               | 18
Help desk         | General support ($90K/year)           | 2               | 6               | 24
Help desk         | Specialist support                    | 2               | 6               | 24
Total             |                                       |                 |                 | 129
Total cost        | Assuming $125K/year, except as noted  |                 |                 | $15M
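Because Table 5-11 assumes linear growth, each row's total man-years is simply the average of its 1997 and 2003 levels times 6 years; a short Python check:

```python
def ramp_man_years(start, end, years=6):
    # linear growth => total man-years = average staff level * years
    return (start + end) / 2 * years

rows = {"DBA, superDAAC": (2, 10), "DBA, peerDAAC": (3, 6),
        "Editor/writer": (2, 4), "Help desk (general)": (2, 6),
        "Help desk (specialist)": (2, 6)}
totals = {name: ramp_man_years(s, e) for name, (s, e) in rows.items()}
print(totals)                # 36, 27, 18, 24, 24 man-years
print(sum(totals.values()))  # 129, matching the table
```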

Hardware Maintenance

Supercomputer centers rely on hardware maintenance contracts to safeguard their investment in very costly equipment. Three types of failure modes are seen:

Given the high reliability of most systems, if equipment is discarded after 3 years, it seems a safe risk to pay for maintenance for only the first year; if the equipment comes with a standard first-year warranty, even this may not be necessary. In subsequent years, it will be feasible to spend 1% of the cost for a repair estimate, then fix the unit if the repair cost is less than 50% of its residual value; otherwise, the unit should be replaced. The degree of risk is small for the commodity systems. For the supercomputers, this risk may be unacceptable. Fortunately, the maintenance cost for supercomputers is 2-3% of the purchase price per year, compared to a typical 10%.
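The repair-or-replace rule described above can be stated compactly; a sketch in Python (the function and argument names are ours):

```python
def repair_or_replace(purchase_cost, residual_value, repair_quote):
    """After the maintained first year: pay 1% of purchase cost for an
    estimate, repair if the quote is under 50% of residual value,
    otherwise replace the unit."""
    estimate_fee = 0.01 * purchase_cost
    action = "repair" if repair_quote < 0.5 * residual_value else "replace"
    return action, estimate_fee

# e.g., a $10K workstation worth $4K with a $1.5K repair quote gets repaired
print(repair_or_replace(10_000, 4_000, 1_500))   # ('repair', 100.0)
```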

The maintenance model proposed here is to maintain only the tape silos, supercomputers, and network switches. All other systems will be run without vendor-supplied maintenance. The network switches are included under maintenance to be able to use vendor expertise when obscure networking problems occur. Since the network is a critical component of the system, the network down time must be minimized. An explicit allocation is made to cover expected repair costs for the $29M worth of equipment that is not covered by paid maintenance.

Table 5-12: Hardware Maintenance Costs (Both SuperDAACs)

Item             | Description             | Basis  | 6-year cost ($M)
-----------------|-------------------------|--------|-----------------
Compute platform | CRAY C90 supercomputers | 3%/yr  | 3.03
Storage system   | Tape silos              | 10%/yr | 4.04
Network          | Switch                  | 10%/yr | .04
Maintenance pool | Workstations            | 10%    | 2.9
Total            |                         |        | 10.0

5.3.2 WAN Communication

Future communication costs are extremely uncertain. The Internet structure is changing dramatically, with NSFNET being replaced by commercial systems. The NSF supercomputer centers will connect through regional networks to NAPs (Network Access Points), and between NAPs across commercial long-haul vendor networks. Charges for use of the structure include charges to connect to the regional networks and possible pass-through costs for the long-haul network. The common-carrier price structures will be determined by technology and government regulation and are unknown at this time. Current communication across NSFNET is free. In 1995, researchers will be paying for at least their regional network costs.

We believe that the ECS budget for communications assumes that all communications circuits are either Government Furnished Equipment (GFE) or through the Internet. Since our architecture places special demands on wide-area networking, it is appropriate to include these unusual costs.

We assume one superDAAC is on the east coast and one on the west, and that the eastern superDAAC is close enough to EDOS that the cost of the link is negligible. This leaves 3 cross-country lines, 1 between EDOS and the western superDAAC (T3), and 2 joining the superDAACs (OC-3). Starting in 1997, we assume that a transcontinental T3 circuit will cost $0.5M/year. OC-3 circuits are not yet available, but we assume these will cost $2M/year in 1997. For the first 3 years thereafter, a single line will suffice between the superDAACs. For the last 3 years, 2 will be required.

Table 5-13: Operating Costs of WAN Communication Circuits

Item                   | Description     | Basis | 6-year cost ($M)
-----------------------|-----------------|-------|-----------------
EDOS to superDAAC      | T3 -- 2 links   | 1997  | 3
SuperDAAC to superDAAC | OC-3 -- 2 links | 1997  | 18
Total                  |                 |       | 21
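Table 5-13's totals follow directly from the stated circuit prices; a quick Python check (one tolled T3 for all 6 years; one OC-3 for the first 3 years and two for the last 3):

```python
T3_PER_YEAR, OC3_PER_YEAR = 0.5, 2.0        # $M/year, assumed 1997 prices
t3_cost = T3_PER_YEAR * 6                   # one tolled link, 6 years -> $3M
oc3_cost = OC3_PER_YEAR * (1 * 3 + 2 * 3)   # 1 link then 2 links -> $18M
print(t3_cost + oc3_cost)                   # 21.0 ($M), as in Table 5-2
```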

5.3.3 COTS Software Maintenance

In Section 5.2.1, COTS software is assumed to cost $6.6M, with another $20-25M in contracts to accelerate the schedules discussed in Section 5.2.3. We conservatively estimate COTS maintenance at 10% per year and use the entire $26.6M-$31.6M figure as a base. As such, the 6-year maintenance cost is

(0.1)(6)($26.6M - $31.6M) = $16M - $19M

5.3.4 Contractor Software Maintenance

We assume a cost of $100 per line of contractor software. That number is assumed to be the cost to write and maintain 1 line of code for its lifetime. Hence, maintenance has already been incorporated into our cost numbers in Section 5.2.3.

5.3.5 PeerDAAC Systems Administration

We assume that the costs of administering the peerDAACs are already included in the DAAC and SCF line items in the EOSDIS budget.

5.4 Comparison between Our Costs and ECS Budget

In this section, we focus on 3 areas of system cost: hardware procurement and maintenance, software procurement and maintenance, and operations.

The following three subsections discuss each in turn. Our general methodology is to take the ECS budget indicated in Table 5-14 and compare various aggregated costs with ones we have computed earlier in this section.

5.4.1 Hardware Procurement and Maintenance

The ECS hardware budget appears to be $96M (the computer, disk, robot, and communications line items in Table 5-14). In addition, unknown amounts are also allocated from a separate SCF budget of $312M and a DAAC budget of $158M. Assuming 15% for hardware, these amounts would be $46.8M of the SCF budget and $23.7M of the DAAC budget, leading to a total hardware budget of $166.5M.

In our proposal, superDAACs and peerDAACs accomplish the tasks of both SCFs and DAACs. Our total hardware cost is between $39M and $52M, leading to a savings of between $114M and $127M. This results primarily from just-in-time acquisition and a technology deflator that lowers the cost of future procurements.
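The hardware comparison reduces to a few lines of arithmetic; a Python sketch using the Table 5-14 line items and the assumed 15% hardware share of the SCF and DAAC budgets:

```python
ecs_hardware = 35 + 36 + 18 + 7            # computers, disk, robots, comms ($M)
scf_hw, daac_hw = 0.15 * 312, 0.15 * 158   # assumed 15% hardware shares
ecs_total = ecs_hardware + scf_hw + daac_hw
print(ecs_total)                           # 166.5 ($M)
for ours in (39, 52):                      # our hardware cost range ($M)
    print(f"savings vs ${ours}M: ${ecs_total - ours:.1f}M")   # 127.5, 114.5
```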

Table 5-14: Budget Comparison (All ECS amounts are derived from percentages. Dollar amounts are rounded.)

Category / line item        | Percent of $766M | ECS budget ($M) | Budget detail ($M) | Explanation
----------------------------|------------------|-----------------|--------------------|---------------
COTS                        | 14%              | 108             |                    |
  Computers                 |                  |                 | 35                 | 5.4.1
  Disk                      |                  |                 | 36                 | 5.4.1
  Robots (archive)          |                  |                 | 18                 | 5.4.1
  Software                  |                  |                 | 10                 | 5.4.2
  Communications            |                  |                 | 7                  | 5.4.1
Development labor           | 27%              | 207             |                    |
  Science data processing   |                  |                 | 46                 | 5.4.2
  System engineering        |                  |                 | 39                 | 5.4.2
  Integration and testing   |                  |                 | 29                 | 5.4.2
  System management         |                  |                 | 21                 | 5.4.2
  Science Office            |                  |                 | 17                 | Not considered
  Prototyping               |                  |                 | 14                 | Not considered
  Quality assurance         |                  |                 | 12                 | 5.4.2
  Flight operations         |                  |                 | 29                 | Not considered
Management and operations   | 32%              | 260             |                    |
  All DAAC support          |                  |                 | 192                | 5.4.3
  Flight operations         |                  |                 | 39                 | Not considered
  EDF (Fairmont)            |                  |                 | 13                 | Not considered
  System/network management |                  |                 | 16                 | 5.4.3
All other categories        | 25%              | 193             |                    |
  Engineering studies       |                  |                 | 77                 | Not considered
  Project management/travel |                  |                 | 54                 | Not considered
  Reserve                   |                  |                 | 54                 | Not considered
  University research       |                  |                 | 8                  | Not considered
ECS                         | 100%             | 766             |                    |
SCFs                        |                  | 312             |                    | 5.4.1
DAACs                       |                  | 158             |                    | 5.4.1

5.4.2 Software Procurement and Maintenance

The ECS software budget is $207M for development labor plus $10M for COTS software. However, we have not considered flight operations, the Science Office, or prototyping in our analysis. The remaining budget, therefore, is

$207M - $17M (Science Office) - $14M (prototyping) - $29M (flight operations) + $10M (COTS software) = $157M

Our complete software budget is $48.225M - 53.225M for development, resulting in a savings of around $107M. This results primarily from a much larger dependency on COTS software in our scheme relative to the current EOSDIS proposal.

5.4.3 Operations

In the ECS budget, there is $192M for DAAC support plus $16M for system and network management, yielding a total of $208M. In our system, operations requires $82M plus an unknown amount for product analysis. If product analysis requires 300 person-years of effort, then our total cost would be $120M, a savings of $88M relative to the current budget levels.
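The operations comparison, as a final Python sketch (the 300 person-years of product analysis at $125K/person-year is the hypothesized figure above, and $82M is the midpoint of Table 5-2's $81M-$84M range):

```python
ecs_ops = 192 + 16             # DAAC support + system/network mgmt ($M)
our_ops = 82 + 300 * 0.125     # Table 5-2 total + product analysis ($M)
print(ecs_ops, our_ops)        # 208 119.5 (~$120M)
print(ecs_ops - our_ops)       # 88.5 -> roughly the $88M savings quoted
```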

Section 6: Acknowledgements

This report is a collective effort of the entire Project Sequoia 2000 team. The ideas were refined in the crucible of vigorous debate. These are the colleagues who participated in the background: Zahid Ahmed, Jean Anderson, Tom Anderson, Paul Brown, Mike Bueno, Bill Coppens, Jim Davidson, Jeff Dozier, Jim Frew, Richard Frost, Claire Mosher, Joe Pasquale, Jason Simpson, Richard Somerville, Joseph Spahr, and Len Wanger. Son Dao, from Hughes Research Laboratory, was the contract monitor and also contributed to these discussions.