WORLD METEOROLOGICAL ORGANIZATION

COMMISSION FOR BASIC SYSTEMS

 

AD HOC MEETING ON THE YEAR 2000 PROBLEM

FINAL REPORT

 

READING, UNITED KINGDOM, 12 - 15 JULY 1999


CONTENTS

Agenda

General summary of the work of the session

Annexes

List of participants

Appendix A: WMO Year 2000 International Monitoring and Contingency Plan

 


AGENDA

 

1. ORGANIZATION OF THE MEETING

1.1 Opening remarks
1.2 Adoption of the agenda
1.3 Working arrangements

2. BRIEF OVERVIEW OF PREPAREDNESS OF NMHSs

3. APPROACHES TO ASSESSING THE ACTUAL IMPACT ON 1 JANUARY 2000

4. DEVELOPMENT OF A Y2K CONTINGENCY PLAN

5. POSSIBLE ESTABLISHMENT OF WMO Y2K SITUATION CENTRE(S)

6. CLOSURE OF THE MEETING


1.  ORGANIZATION OF THE MEETING (agenda item 1)

1.1 Opening remarks

1.1.1 The Ad hoc meeting on the Year 2000 (Y2K) Problem opened at 0930 on Monday 12 July 1999 at ECMWF in Reading, UK. Mr D. Marbouty, Head of Operations, welcomed the participants on behalf of ECMWF and wished them a successful meeting. Mr D. McGuirk welcomed the participants on behalf of the Secretary-General of WMO. He noted the high priority assigned to the Year 2000 (Y2K) Problem by Executive Council and the thirteenth WMO Congress and briefly reviewed the main objectives of the meeting.

1.1.2 The participants unanimously agreed that Mr B. Sumner (Australia) should chair the meeting.

1.2 Adoption of the agenda

1.2.1 The meeting adopted the agenda as reproduced at the beginning of this report.

2.  BRIEF OVERVIEW OF PREPAREDNESS OF NMHSs (agenda item 2)

2.1 Mr McGuirk informed the meeting of the status of the Y2K preparedness of WMO Members based on the information available immediately prior to the meeting. He noted that as of 9 July 1999, 12 NMHSs had reported that their critical systems were Y2K compliant, 39 had reported their message switch systems were compliant and 84 had reported their Y2K projects were proceeding according to schedule. The experts were concerned that 38 National Meteorological/Hydrological Services (NMHSs) had not reported their Y2K status to the WMO Secretariat despite repeated requests.

2.2 The experts present informed the meeting of the latest information on the status of their own NMHS's Y2K activities. Mr S. Noyes noted that the UK Message Switch System (MSS) had recently passed its certification tests, 80% of the other projects were finished and the remainder were on schedule to be completed by the end of August. Mr T. Potgieter stated that South Africa was confident their NMHS would be ready on time but that they would not be finished with all of their testing before October. Mr Sumner informed the meeting that the Australian Bureau of Meteorology MSS had been certified as compliant, with the other systems on schedule to be completed by mid-July. Mr A. Gusev described the current situation in the Russian Federation and summarised the results of the Ad hoc coordination meeting on Year 2000 Problem support to World Meteorological Centre (WMC) and Regional Telecommunications Hub (RTH) Moscow (19-20 April 1999, Moscow). He noted that it was likely that the telecommunications system would be compliant in time if the necessary financial support were made available soon. Mr McGuirk noted that commitments for support had already been received from the UK, USA and Japan, which came close to the amount needed.

3.  APPROACHES TO ASSESSING THE ACTUAL IMPACT ON 1 JANUARY 2000 (agenda item 3)

3.1 The experts discussed possible approaches on how the actual impact of outages or systems failures during the change to the new year could be assessed. Mr J. Lincoln noted there are 32 Regional Telecommunications Hubs (RTHs) which are key to the flow of raw and processed data through the Global Telecommunications System (GTS). Based on data submitted by Members as well as published information on the predicted reliability of international telecommunications circuits, he estimated the likelihood that each RTH would function throughout the change to the year 2000 and presented the results of his analysis to the meeting. The analysis is given in the annex to this paragraph.

3.2 Mr H. Diamond made an interesting presentation on a qualitative data analysis model to aid in assessing risks for international weather data receipt at the USA National Weather Service Telecommunication Gateway. He noted that the intent of the data receipt model was primarily to aid the National Centers for Environmental Prediction in assessing what contingencies for data outages would need to be developed to assure continued viability of the numerical weather models. The inputs to that model included information from WMO, the International Telecommunications Union, the International Civil Aviation Organization, as well as data from a Federal Communications Commission report on the Y2K status of international telecommunications. The model uses Microsoft Excel for data analysis and presentation and Mr Diamond offered to provide copies of the spreadsheet to any of the experts present.

3.3 Mr Noyes noted that as a result of its own risk analysis the UK Met Office had determined that the biggest risk for loss of data to its operational systems was the loss of TEMP data, especially over the tropics. As a response the UK had undertaken a project to ingest and utilise ATOVS data over land.

3.4 The experts felt that the Aeronautical Fixed Telecommunication Network (AFTN) is one critical system for which NMHSs have too little information on Y2K readiness. Some participants noted that although the International Civil Aviation Organization (ICAO) was responsible for the AFTN, and they believed the International Air Transport Association (IATA) also had information on this issue, that information had not been forthcoming despite numerous requests.

3.5 The experts discussed possible mechanisms to monitor the actual performance of the World Weather Watch (WWW) in real time. They agreed that any system that is to be implemented before 1 January 2000 has to be simple, standardised and provide information in a manner that can easily be combined and consolidated. They decided that two levels of monitoring need to be carried out to meet all of the critical requirements during the transition to the year 2000.

  1. RTH-level telecommunications monitoring to determine the operability of each of the 32 RTHs
  2. Data monitoring by WMO lead centres to determine if significant outages of critical data occur

3.6 The experts defined the details of the RTH-level monitoring activities and incorporated them into the WMO Year 2000 International Monitoring and Contingency Plan provided as Appendix A of this report.

Data Monitoring

3.7 The experts considered the various types of data and products that would be useful and feasible to monitor and agreed that it was important to monitor the availability of TEMP, TEMP SHIP, SYNOP, SHIP, DRIBU, TAF, METAR, AIREP/AMDAR and satellite data and products.

3.8 While the experts agreed that TAFs and METARs are critical and their production is usually the responsibility of NMHSs, they noted that the international exchange of these data is the responsibility of ICAO. Furthermore, although there is no WMO lead centre designated to monitor these data, the experts were confident that their availability would be monitored by the airlines and other users, who would report any loss of these data to the NMHSs promptly. Nonetheless, the experts were concerned that, as far as they were aware, no official responsibility had been assigned for monitoring these critical data over the change to 2000.

3.9 The experts recommended that the satellite operators monitor the performance of their satellites and processing systems and report any problems to the WMO Y2K Situation Centres described in section 5.

3.10 The meeting discussed the monitoring of TEMP, TEMP SHIP, SYNOP, SHIP, DRIBU, TAF, METAR, AIREP/AMDAR reports and the role and responsibilities of WMO lead centres. They determined that most of the lead centres would not be able to modify their monitoring systems, which have been developed to report monthly, to report on a daily or more frequent basis. The experts were very grateful that the European Centre for Medium-Range Weather Forecasts (ECMWF) had kindly volunteered to report any significant problems with TEMP data to the WMO Y2K Situation Centres twice daily for the few days surrounding 1 January 2000. It will also endeavour to produce problem reports of other data important to Numerical Weather Prediction. Furthermore, ECMWF will consider adding additional monitoring information (such as time series plots of number of reports received) to their public Web site, along with the 6 hourly information (updated once per day) that they presently make available. The Internet address of the ECMWF Web site is http://www.ecmwf.int

4.  DEVELOPMENT OF Y2K CONTINGENCY PLANS (agenda item 4)

National contingency planning

4.1 Experience and discussions held in various fora show, on the one hand, that Members are making progress in securing their mission-critical operations against failures at the millennium change. On the other hand, there is still considerable potential for outages in these systems for a number of reasons that are only partly within the control of the NMHSs. It must be anticipated that some problems will be overlooked, will not be completed in time, or will remain unresolved because they are too complex or too costly to fix. One important way to be sufficiently prepared for possible problems is through the development and application of well-defined and executable contingency plans.

4.2 Many NMHSs already have contingency plans for natural disasters and other possible contingencies. Year 2000 plans are little different, although they must account for the fact that widespread and simultaneous failures during the Y2K transition may render traditional backup strategies ineffective.

4.3 Examples of national or regional plans from the USA, UK and Japan were presented and the experts discussed how useful guidance on contingency planning could be formulated for any NMHSs that had not yet developed their own plans. They noted that there was now little time to prepare for 1 January 2000. Consequently, any guidance must be brief and focused on actions that can be completed before this critical date. They agreed that the guidance should be organised to consider:

  1. actions to be taken before late December 1999
  2. actions to be taken during the few days immediately before and after the new year
  3. longer term follow-up actions.

Now until late December

4.4 NMHSs should evaluate potential hazards, assess the likelihood of failures and plan for likely contingencies. The NMHS should construct a list of systems or facilities that are critical to their operations and determine if alternative means of providing the necessary services can be provided. The experts considered possible failures that could apply to all NMHSs and made the following recommendations:

  1. Telecommunications failures (national and international)

Every NMHS, especially those responsible for operating an RTH, should lay out a matrix of the various telecommunication services that are used to meet its communications requirements and determine which organisation is responsible for each of these services. The NMHS should contact all of these responsible parties to ascertain the risk of an interruption in these services. For high-risk circuits, backup communications suppliers, technologies or alternate routing arrangements should be planned. These could include, for example, conventional telephone or fax, HF radio links, the Internet, satellite phone systems or manual means of communication (e.g. hand-carrying messages by automobile). Each NMHS should consult with its responsible RTH to plan reliable procedures that provide a backup means to transmit and receive data should the circuit to that RTH fail. Any planned backup procedures, national or international, should be tested no later than mid-December 1999.

  2. Loss of electric power

Power stations and grid controls are less likely to be computer controlled in developing countries and hence are less susceptible to Y2K problems. However, interruptions in power should still be considered a possibility. NMHSs should contact the organisation that provides their electrical power to determine the risk of outages. If an outage seems likely, the NMHS should investigate the possibility of backup power supplies, including UPS and backup generators. If generators are used they should be tested, filled with fuel and ready for use by 30 December 1999. Possible interactions between UPS and generator systems should be tested well in advance to ensure there is sufficient time to resolve any problems.

  3. Equipment failure

The preferred response to possible hardware or software failures is to ensure all essential systems have been tested, upgraded or repaired as necessary and certified to be compliant well before 1 January 2000. Otherwise, backup arrangements should be planned. These could include:

  • replacement of systems with other available equipment
  • manual operations (which might require training and testing of staff)
  • changing the date on non-compliant non-PC equipment to an earlier date (such as 1 January 1972)
  • manually resetting dates on non-compliant PCs
  • reallocation of compliant PCs to critical operational roles.
  4. Interruption in flow of data or products

If incoming data or products are not received from national sources the NMHS should follow normal procedures and contact the sources of the missing data (e.g. NMHS observing station, local aviation authorities, hydrological service, etc.). If the NMHS does not receive international data or can not meet its international obligations it should contact its responsible RTH following normal procedures.

4.5 Each NMHS is advised to establish a crisis management response strategy to deal with problems that may arise in the time around the change to the new year. Given the international nature of meteorology, NMHSs must consider the transition to the year 2000 to span the entire period from 12 UTC on 31 December 1999 to 12 UTC on 1 January 2000. The strategy should include a plan to have a decision-maker available during all operational hours during this critical period who can set priorities and authorise responses and remedial actions. Furthermore, the NMHS should plan to have key operational staff available during this time to undertake necessary remedial actions. It is recommended that all but the smallest centres set up crisis management teams with staffing arranged well in advance of 31 December 1999. NMHSs should also identify national and international points of contact and ensure, through testing, that their contact information is accurate and up to date. They should report any significant problems according to the procedures outlined in the International Y2K Monitoring and Contingency Plan described in Appendix A.

4.6 As a measure of general advice for Y2K preparations, the experts recommended that NMHSs ensure their supplies of fuel, gas and other consumables are adequate to carry them for several weeks into 2000 in case suppliers are unable to provide these materials.

During the Y2K transition

4.7 During the critical period of 12 UTC on 31 December 1999 to 12 UTC on 1 January 2000, the NMHS should activate its crisis management strategy and ensure that members of its crisis management team are present or available on call. The NMHS should carefully monitor the situation while applying extra vigilance in the few hours immediately before and after midnight local time on 31 December 1999. It should evaluate problems and decide upon remedial actions according to its national and the international Y2K contingency plans. The NMHS should be prepared to notify users of any interruption or degradation of services.

Beginning a few days or weeks into 2000

4.8 The NMHS should evaluate its response to the Y2K transition. It should then establish priorities for recovering from failures and initiate actions necessary to return to normal operations. Every NMHS should remember to monitor the situation around 29 February 2000 in case the leap day causes problems with any of its systems. If any long-term problems are identified which require international assistance the NMHS should inform the WMO Secretariat.

International contingency planning

4.9 The experts discussed information that should be included in an international plan and agreed it should specify the actions to be taken in the event of various system failures, paying particular attention to telecommunications. Furthermore they agreed that the international monitoring and contingency plans should be combined into a single document and that this document should be distributed to all NMHSs as a printed document in the official WMO language appropriate for that NMHS.

4.10 The experts considered responses to possible outages of GTS circuits and, after careful thought, agreed that it was not practical or feasible to re-route large numbers of messages on short notice. While backup or alternate routing arrangements have already been agreed between some adjacent centres and these arrangements could be activated by bilateral agreement should outages occur, the experts did not believe any additional arrangements could be developed and tested before 1 January 2000. They noted that this is particularly true for circuits between large centres. For example, if the circuit between Tokyo and Washington were to become inoperable then very substantial changes would have to be made to the routing tables of several intermediate RTHs to re-route this traffic. Furthermore, an attempt to route significant volumes of data over alternate circuits would, in many cases, quickly overwhelm any spare capacity available on those circuits.

4.11 The meeting recommended that rather than attempt to re-route data over the GTS, selected centres should post essential data on the Internet and make these data available via FTP. The details of this proposal are provided in the International WMO Y2K Monitoring and Contingency Plan provided in Appendix A.

5 POSSIBLE ESTABLISHMENT OF WMO Y2K SITUATION CENTRES (agenda item 5)

5.1 The thirteenth WMO Congress requested CBS to develop a mechanism to respond to problems that may be detected and directed CBS to consider the possible establishment of one or more Year 2000 Situation Centres. The centres would act as a clearing-house for up-to-date status information and would coordinate response actions. The centres would also consolidate reports from monitoring centres, establish the most likely reasons for outages, disseminate information on outages, and possibly contact centres needed to implement remedial actions. The experts evaluated this proposal and recommended that four WMO Y2K Situation Centres be established. Each of the three WMO World Meteorological Centres and the two World Area Forecast Centres should act as a Y2K Situation Centre and be responsible for the areas described below.

Washington  Region III and Region IV
Bracknell Region I and part of Region VI
Melbourne  Region V, Antarctica and part of Region II
Moscow Parts of Regions II and VI

5.2 Further details concerning the proposed roles and responsibilities of these Y2K Situation Centres are provided in the International WMO Y2K Monitoring and Contingency Plan described in Appendix A.

6 CLOSURE OF THE MEETING

6.1 The meeting closed at 1215 on Thursday, 15 July.


Annex to Paragraph 3.1

Analysis of the Y2K Vulnerability of RTHs

Introduction

The 32 Regional Telecommunications Hubs (RTHs) are key to the flow of raw and processed data through the Global Telecommunications System (GTS). Based on data submitted by Members as well as published information on the predicted reliability of international telecommunications circuits, the likelihood that each RTH will function throughout the transition to year 2000 was estimated. The symbols in the following table are defined as follows:

+++ Almost certainly will function without interruption
++ Probably will function without interruption
+ Possibly will function without interruption
- Possibly will NOT function without interruption
- - Probably will NOT function without interruption
- - - Likely will NOT function without interruption

The third column of the table represents the total number of international connections to each RTH, including other RTHs. The fourth column represents the total number of connections to countries who do NOT operate an RTH.

RTH                          Vulnerability    Countries connected    Non-RTH countries connected

RA-I
Algiers, ALGERIA             ++               11                     4
Brazzaville, CONGO           - -              9                      7
Cairo, EGYPT                 ++               7                      5
Nairobi, KENYA               ++               22                     15
Niamey, NIGER                - -              10                     6
Dakar, SENEGAL               +                20                     16
Pretoria, SOUTH AFRICA       ++               19                     15
Lusaka, ZAMBIA               +                3                      2

RA-II
Beijing, CHINA               ++               10                     6
New Delhi, INDIA             +                14                     7
Tehran, IRAN                 - -              7                      4
Tokyo, JAPAN                 ++               9                      3
Khabarovsk, RUSSIA           +                5                      1
Novosibirsk, RUSSIA          +                4                      1
Jeddah, SAUDI ARABIA         ++               14                     7
Bangkok, THAILAND            +                9                      6
Tashkent, UZBEKISTAN         -                8                      6

RA-III
Buenos Aires, ARGENTINA      ++               9                      6
Brasilia, BRAZIL             ++               5                      2
Maracay, VENEZUELA           +                8                      6

RA-IV
Washington, USA              +++              42                     38

RA-V
Melbourne, AUSTRALIA         +++              24                     14
Wellington, NZ               ++               1                      0

RA-VI
Vienna, AUSTRIA              +++              9                      7
Sofia, BULGARIA              +++              13                     11
Prague, CZECH REPUBLIC       ++               7                      3
Toulouse, FRANCE             +++              12                     6
Offenbach, GERMANY           +++              13                     4
Rome, ITALY                  ++               9                      6
Moscow, RUSSIA               +                24                     12
Norrköping, SWEDEN           ++               6                      4
Bracknell, UK                +++              14                     8

 


LIST OF PARTICIPANTS

AUSTRALIA

Mr Bruce Sumner
Bureau of Meteorology
GPO Box 1289K
MELBOURNE VIC 3001
Australia
Tel: (613) 9669 4349
Fax: (613) 9662 1223
Email: b.sumner@bom.gov.au

BRAZIL

Mr J. Mauro de Rezende
Instituto Nacional de Meteorologia
Eixo Monumental, Via S-1
BRASILIA D.F.
Brazil
Tel: (55) 61 344-4488; 344-0440
Fax: (55) 61 343-2132
Email: jmauro@inmet.gov.br

GERMANY
Representing CBS OPAG-ISS

Mr Geerd Hoffmann
Deutscher Wetterdienst
Zentralamt. Frankfurter Str. 135
D-63067 OFFENBACH
Germany
Tel: (49) 69 8062 2824
Fax: (49) 69 8062 3823
Email: geerd-ruediger.hoffmann@dwd.de

JAPAN

Mr Hiroyuki Ichijo
Japan Meteorological Agency
1-3-4 Otemachi, Chiyoda-ku
TOKYO 100-8122
Japan
Tel: (813) 3218 3825
Fax: (813) 3211 8404
Email: h_ichijo@met.kishou.go.jp

RUSSIAN FEDERATION

Mr Alexander Gusev
Federal Service for Hydrometeorology and Environmental Monitoring
Novovagankovsky Street 12
123 242 MOSCOW
Russian Federation
Tel: (7) 095 205 4813
Fax: (7) 095 255 2414
Email: tuboss@mskw.mecom.ru

SOUTH AFRICA

Mr Thomas Potgieter
South African Weather Bureau
Private Bag X097
PRETORIA, 0001
South Africa
Tel: (27) 12 309 3095
Fax: (27) 12 323 4518
Email: potgiet@cirrus.sawb.gov.za

UNITED KINGDOM

Dr Alan McIlveen
The Met. Office
London Road, Bracknell
Berkshire RG12 2SZ
United Kingdom
Tel: (44 1344) 854 680
Fax: (44 1344) 856 099
Email: wamcilveen@meto.gov.uk

Mr Steven G. Noyes
The Met. Office
London Road, Bracknell
Berkshire RG12 2SZ
United Kingdom
Tel: (44 1344) 856 611
Fax: (44 1344) 856 012
Email: snoyes@meto.gov.uk

Representing CBS OPAG-IOS

Mr John Nash
The Met. Office
London Road, Bracknell
Berkshire RG12 2SZ
United Kingdom
Tel: (44 1344) 855 649
Fax: (44 1344) 855 897
Email: jnash@meto.gov.uk

Representing CAeM

Mr David Underwood
The Met. Office
London Road, Bracknell
Berkshire RG12 2SZ
United Kingdom
Tel: (44 1344) 856 281
Fax: (44 1344) 854 826
Email: dunderwood@meto.gov.uk

UNITED STATES OF AMERICA

Mr Howard Diamond
National Weather Service, W/OSO11x1
NOAA
1325 East West Hwy
Silver Spring, MD 20910
United States of America
Tel: (1 301) 713-0436 Ext. 121
Fax: (1 301) 713-0657
Email: Howard.Diamond@noaa.gov

Mr John Lincoln
300 Treadwell Street
Berryville, VA 2261
USA
Tel: (1 540) 955 1620
Fax: (1 540) 955 0323
Email: jlincoln@shentel.net

ECMWF

Dr Horst Bottger
ECMWF
Shinfield Park
READING, BERKSHIRE RG2 9AX
United Kingdom
Tel: (44) 118 949 9060
Fax: (44) 118 986 9450
Email: horst.bottger@ecmwf.int

Mr Dominique Marbouty
ECMWF
Shinfield Park
READING, BERKSHIRE RG2 9AX
United Kingdom
Tel: (44) 118 949 9003
Fax: (44) 118 986 9450
Email: dominique.marbouty@ecmwf.int

WMO SECRETARIAT

Mr David McGuirk
World Meteorological Organization
7 bis Avenue de la Paix
Case postale No. 2300
CH-1211 GENEVA 2
Switzerland
Tel: (41 22) 730 8241
Fax: (41 22) 730 8021
Email: mcguirk_d@gateway.wmo.ch

 


APPENDIX A

INTERNATIONAL WMO Y2K MONITORING AND CONTINGENCY PLAN

1.  INTRODUCTION

1.1 Experience and discussions held in various fora show that, on the one hand, Members are making progress in securing their mission-critical operations against failures at the millennium change. On the other hand it is clear that there is still considerable potential for outages in these systems for a number of reasons which are only partly controlled by the NMHSs. Considering the likelihood of outages and their possible impact on the operations of NMHSs around the world, the WMO Executive Council at its fifty-first session directed the Commission for Basic Systems (CBS) to consider the development and implementation of international monitoring activities and definition of actions that could be undertaken to minimise the impact of any problems that occur. In response to this request, this International WMO Y2K Monitoring and Contingency Plan has been developed by experts attending the Ad hoc meeting on the Year 2000 Problem (Reading, 12-15 July 1999).

1.2 This plan recommends actions that should be undertaken by all WMO Members in the few days immediately before and after 1 January 2000. Members operating Regional Telecommunication Hubs (RTHs), particularly those on the Main Telecommunication Network (MTN) are expected to play an especially important role.

1.3 The experts discussed possible mechanisms to monitor the actual performance of the World Weather Watch (WWW) in real time and agreed that two levels of monitoring should be carried out to meet all of the critical requirements for information on the status of this system over the transition to the year 2000.

  1. RTH-level telecommunications monitoring to determine the operability of each of the 32 RTHs and connections to each of the NMHSs that they serve
  2. Data monitoring by WMO lead centres to determine if significant outages of critical data occur

Recommended procedures to carry out this monitoring are described in section 2 of this plan.

1.4 Detailed, timely and accurate information on the current operational status of the World Weather Watch is of little use without coordinated actions to respond to problems that may be detected. A contingency plan for dealing with likely problems has therefore been developed and is described in section 3.

1.5 The thirteenth WMO Congress requested CBS to develop a mechanism to respond to problems that may be detected and directed CBS to consider the possible establishment of one or more Year 2000 Situation Centres. The centres would act as a clearing-house for up-to-date status information and would coordinate response actions. The centres would consolidate reports from monitoring centres, establish the most likely reasons for outages, disseminate information on outages, and contact centres needed to implement remedial actions. Each of the three WMO World Meteorological Centres and the two World Area Forecast Centres have agreed to act as a Y2K Situation Centre and their roles and responsibilities are described in section 4.

2.  MONITORING

2.1 RTH Monitoring

2.1.1 Considering the critical role that RTHs play in the operation of the World Weather Watch it is recommended that all RTHs ensure staff are available or can be contacted from no later than 06 UTC 31 December 1999 until at least 00 UTC on 2 January 2000.

2.1.2  The 32 RTHs provide the best resource to monitor the operation of the GTS. Each RTH should monitor the exchange of information with all National Meteorological Centres (NMC) under its area of responsibility (as defined in WMO Publication Number 386, the Manual on the GTS, Volume II - Regional Aspects). Beginning at 06 UTC on 31 December 1999 each RTH should report on the current status of this exchange. The report should contain a line for each of the NMCs under its responsibility according to the following format:

CCCCccccS Text

where

CCCC = the identifier of the sending RTH
cccc = the identifier of the NMC under its responsibility
S = 0 (zero) if the link is not functioning and 1 (one) if it is functioning
Text = remarks briefly describing any other problems reported by the NMC
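
The following illustrative sketch (in Python; not part of the agreed procedures) shows how an RTH could compose such status lines. The four-letter identifiers and link states used in it are placeholders only.

    def status_line(rth, nmc, link_up, remarks=""):
        """Build one 'CCCCccccS Text' line for the RTH monitoring report."""
        s = "1" if link_up else "0"
        return (rth + nmc + s + " " + remarks).rstrip()

    # Placeholder identifiers: "AAAA" is the reporting RTH, "BBBB" and "CCCC"
    # are NMCs in its area of responsibility.
    links = [("AAAA", "BBBB", True, ""),
             ("AAAA", "CCCC", False, "no traffic received since 2330 UTC")]
    report = "\n".join(status_line(r, n, up, txt) for r, n, up, txt in links)
    print(report)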

2.1.3  The reports should be sent in the form of an addressed message over the GTS and as an Internet e-mail message to the Y2K Situation Centre designated as the focal point for that RTH (see Table 4.1). The message sent over the GTS will be carried by the existing message switching mechanism according to its abbreviated heading of "BMAA01 CCCC YYGGgg". The CCCC shows the destination centre.

2.1.4  It is recommended that this report be sent once every six hours but, in any case, at least once every twelve hours. Reports should continue to be sent until the RTH is advised to discontinue the monitoring by its designated WMO Y2K Situation Centre.

2.1.5  To ensure that the actual impact of any outages can be assessed a unique routing path for each addressed message between the RTH and its Y2K Situation Centre should be established beforehand.

2.1.6  A preliminary test of this reporting procedure should be carried out to ensure the system functions as envisioned. This test should be carried out with a test message sent from each RTH at 06 UTC on 2 December 1999 to:

  1. confirm that addressed messages from each RTH do indeed reach the WMO Y2K Situation Centres (check the MSS switching directories)
  2. gain experience in the formatting of these reports
  3. estimate human resources necessary to compose and send the messages

2.1.7  A follow-up test to evaluate any corrections or adjustments deemed to be necessary after the first test will be carried out one week later at 06 UTC on 9 December.

2.2 Other GTS Monitoring

2.2.1 There are a variety of mechanisms used to distribute meteorological data and products, such as MDD, RETIM, Fax-E, HF radio, etc. There are existing contingency plans for maintaining most of these dissemination services which, in most cases, are considered to be adequate for the Y2K transition. If an NMHS should experience an interruption in any of these services it should report the problem to its responsible RTH according to normal procedures. The RTH should then report the problem to its designated Y2K Situation Centre along with other monitoring information as described in section 2.1 above.

2.3  Data Monitoring

2.3.1 The satellite operators should monitor the performance of their satellites and processing systems and report any problems to their designated WMO Y2K Situation Centre (see Table 2.1).

Satellite Operator        WMO Y2K Situation Centre
China                     Melbourne
EUMETSAT                  Bracknell
India                     Bracknell
JMA                       Melbourne
NESDIS                    Washington
Russian Federation        Moscow

Table 2.1

2.3.2  ECMWF will report any significant problems with TEMP data to the WMO Y2K Situation Centres twice daily for the few days surrounding 1 January 2000. It will also endeavour to produce problem reports of other data important to Numerical Weather Prediction (NWP) such as TEMP, TEMP SHIP, SYNOP, DRIBU, TAF, METAR, AIREP/AMDAR and satellite data and products. Furthermore, ECMWF has agreed to consider adding additional monitoring information (such as time series plots of the number of reports received) to its public Web site along with the 6 hourly information (updated once per day) that it presently makes available. The ECMWF Web site can be reached at http://www.ecmwf.int

3.  INTERNATIONAL CONTINGENCY ACTIVITIES

3.1 Response by an NMHS to an interruption in its receipt of data

3.1.1 If an NMHS experiences a loss of data received from an international source the NMHS should contact its responsible RTH following standard operational procedures. These procedures should be tested sometime before 15 December 1999 to ensure contact information is up to date.

3.2 Response to NMHS production failures

3.2.1 If an NMHS can not meet its international obligations it should contact its responsible RTH following normal procedures. The RTH should then notify its designated Y2K Situation Centre of any significant problems. The designation of these Situation Centres and relevant contact information are described in section 4.

3.3 Backup sources for essential data

3.3.1 Given the possibility of interruption of services provided by the GTS it is essential that a mechanism be established that can ensure NMHSs are able to receive critical data even if they can not receive this data from their primary RTH. Backup or alternate routing arrangements have already been agreed between some adjacent centres and these arrangements could be activated by bilateral agreement should outages occur. However, it is unlikely that additional arrangements can be developed and tested before 1 January 2000. This is particularly true for circuits between large centres. For example, if the circuit between Tokyo and Washington were to become inoperable then very substantial changes would have to be made to the routing tables of several intermediate RTHs to re-route this traffic. Furthermore, an attempt to route significant volumes of data over alternate circuits would, in many cases, quickly overwhelm any spare capacity available on those circuits.

3.3.2 Rather than attempt to re-route data over the GTS, it is recommended that selected centres post critical data on the Internet and make it available to all WMO Members via FTP. The National Meteorological Centres in Melbourne, Offenbach, Tokyo and Washington will endeavour to make all SYNOP, SHIP, DRIBU, TEMP, TEMP SHIP, PILOT, AIREP/AMDAR, METAR and TAF reports, as well as Profiler and ACARS BUFR messages, received at their centres available through this mechanism. Beginning 15 November 1999 these centres will make test data available. Operational data will be made available from 15 December 1999, continuing at least until 15 January 2000. It is recommended that data conform to the following format and file conventions. If a centre chooses to use different conventions then the centre should provide details on its implementation to the Secretariat by 1 October 1999. The file naming standard used by the USA is reproduced in Annex A to this paragraph.

Format: As described in the Guide on use of TCP/IP on the GTS and reproduced in Annex B to this paragraph with amendments as indicated in bold type.

File name: CCCCYYGGTTRnnn

where

CCCC = the identifier of the centre which created and posted the file
YY = day of the messages contained in the file
GG = hour of the messages contained in the file (i.e. 00, 06, 12, 18)
TT = identifier of the data contained in the file
R = WMO Region (1 to 6 and 7 for Antarctica)
nnn = file cycle number (a number starting with 001 and incremented whenever the file is replaced by an updated version)

Message types                         TTs included

TEMP, TEMP SHIP, PILOT                US, UK, UL, UE, UP, UG, UH, UQ
SYNOP, SHIP, DRIBU                    SI, SM, SN, SS
AIREP, AMDAR                          UA, UD
METAR, TAF, SIGMET                    SA, FT, WX
Profiler and ACARS BUFR messages      IU

Table 3.1
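
As an illustration only (not part of the plan), the following Python sketch assembles a file name following the CCCCYYGGTTRnnn convention above; the centre identifier, date and cycle number used in the example are hypothetical.

    def backup_file_name(centre, day, hour, tt, region, cycle):
        """Compose a file name of the form CCCCYYGGTTRnnn."""
        return "%s%02d%02d%s%d%03d" % (centre, day, hour, tt, region, cycle)

    # e.g. a hypothetical centre "CCCC" posting TEMP data (TT = US) for
    # Region 1, 00 UTC on the 1st of the month, first file cycle:
    print(backup_file_name("CCCC", 1, 0, "US", 1, 1))   # -> CCCC0100US1001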

3.3.3 The four centres providing this backup FTP service should consult and exchange any information they determine to be necessary for implementation of the format or file name conventions.

3.3.4 It is, of course, the responsibility of each of the participating centres to determine if "Additional data" as defined by WMO Resolution 40 are to be included in the files that they make available via FTP. However, since this mechanism is intended to serve as a backup source of data for NMHSs over the change to the year 2000, if files include any "Additional data" it is recommended that these files be made available only through password protected FTP.

3.3.5 These four FTP centres will provide information on how they can be contacted and their data accessed to the WMO Secretariat by 15 September 1999. The information should include:

Telephone number
Optional backup telephone number
Fax number
E-mail address
Internet address of the FTP server
User-ID and Password to be used (if applicable)
File naming and format conventions (if different from those recommended above)

3.3.6 Although dial-up access to some of these FTP sites might be possible it is considered as a technically complex alternative that would require consultation, agreement and testing well in advance of 1 January 2000. It is judged to be marginally feasible and any NMHS wishing to explore this option should contact one of the FTP centres to discuss the matter on a bilateral basis.

3.4 Backup sources for products

3.4.1  Emergency procedures for backup provision of essential meteorological services are described in the Manual on the Global Data Processing System. In general, these procedures specify that through prior agreement a neighbouring NMHS may assume responsibility for critical forecasts or warnings upon request from the affected NMHS. Similarly backup generation of products from the World Area Forecast Centres has been agreed.

3.4.2  Specific backup arrangements for the dissemination of global products from World Meteorological or World Area Forecast Centres have not been planned and are not considered to be practical. However, NMHSs are reminded that products from the World Area Forecast Centres are already available via the Internet and can be found as follows:

Most if not all of the products that are transmitted on the WAFS channels, i.e. GRIB, T-4 (fax) and alphanumeric products (METAR, TAF and SIGMET), are available from the FTP server at RTH Washington. The METAR, TAF, SIGMET and T-4 products can also be downloaded from the Web pages using either HTTP or FTP. They can be found via http://www.nws.noaa.gov or http://weather.noaa.gov. They can also be retrieved via FTP at ftp://140.90.6.103

METARs, TAFs, SIGMETs, VAAC advisories and SATPIX can be retrieved via the Web at http://www.awc-kc.noaa.gov/

3.4.3  Products from the Emergency Managers Weather Information Network (EMWIN) are also available via FTP to 140.90.6.240 using the username emwin and the password emwin. At that point a number of .ZIP files should be visible (e.g. SAHOURLY.ZIP, which contains the hourly METAR observations). The .ZIP files must be downloaded and then unzipped with an application such as WinZip, a shareware program that can be found on the Web.
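
A minimal sketch (using Python's standard ftplib and zipfile modules, and assuming the 1999-era server, credentials and file name quoted above) of retrieving and unpacking one of these EMWIN files:

    from ftplib import FTP
    import zipfile

    # Host, username, password and file name are those quoted in the text above.
    ftp = FTP("140.90.6.240")
    ftp.login(user="emwin", passwd="emwin")
    with open("SAHOURLY.ZIP", "wb") as f:
        ftp.retrbinary("RETR SAHOURLY.ZIP", f.write)
    ftp.quit()

    # Unpack the hourly METAR observations into a local directory.
    with zipfile.ZipFile("SAHOURLY.ZIP") as zf:
        zf.extractall("emwin_products")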

3.4.4  Any NMHS that finds it necessary to utilise products retrieved from these sources should carefully check the validity times for these products.

4.  WMO Y2K SITUATION CENTRE(S)

4.1 It is recommended that WMO Y2K Situation Centres be established. Each of the three WMO World Meteorological Centres and the two World Area Forecast Centres should act as a Y2K Situation Centre and be responsible for the areas described below.

Washington  Region III and Region IV
Bracknell Region I and part of Region VI
Melbourne  Region V, Antarctica and part of Region II
Moscow Parts of Regions II and VI

Specifically, each Situation Centre would be responsible for the RTHs as listed in Table 4.1.

Bracknell responsible for:    Algiers, Algeria; Brazzaville, Congo; Cairo, Egypt; Nairobi, Kenya;
                              Niamey, Niger; Dakar, Senegal; Pretoria, South Africa; Lusaka, Zambia;
                              Vienna, Austria; Sofia, Bulgaria; Prague, Czech Republic; Toulouse, France;
                              Offenbach, Germany; Rome, Italy; Norrköping, Sweden; Bracknell, UK

Melbourne responsible for:    Beijing, China; New Delhi, India; Tehran, Iran; Tokyo, Japan;
                              Jeddah, Saudi Arabia; Bangkok, Thailand; Wellington, New Zealand;
                              Melbourne, Australia

Moscow responsible for:       Khabarovsk, Russian Fed.; Novosibirsk, Russian Fed.; Tashkent, Uzbekistan;
                              Moscow, Russian Fed.

Washington responsible for:   Buenos Aires, Argentina; Brasilia, Brazil; Maracay, Venezuela;
                              Washington, USA

Table 4.1

4.2 The WMO Y2K Situation Centres would act as a clearing-house for up-to-date status information and would coordinate response actions. The centres would collect, consolidate and collate reports from monitoring centres, establish the most likely reasons for outages, and make information on the current status of World Weather Watch Systems available. The information should be made available via the World Wide Web. It is suggested that, if possible, additional mechanisms such as fax on demand be provided as an alternative to the Internet.

4.3 Information on a global scale should be duplicated at all of the centres and each centre may also choose to provide more detailed information for NMHSs within its area of responsibility. The centres would have responsibilities for actions before, during and after the change to year 2000 as described below.

Before 15 December 1999

  1. Provide detailed information to the WMO Secretariat by 15 September 1999 on how they can be contacted. This should include:

Fax number (incoming)
Optional fax on demand number (outgoing)
Telephone number
Optional backup telephone number
E-mail address
Internet Web address where status information will be made available

  2. Each centre should contact all of the RTHs under its area of responsibility no later than 1 December 1999 to ensure the information they have to contact these RTHs is accurate and up-to-date.

  3. Each centre should coordinate with the other Y2K Situation Centres to agree upon the mechanism and schedule for regular consultation between the centres during the transition to the year 2000. Plans for backup facilities to be used in the event of failure of the primary mechanism should be agreed no later than 1 December 1999.

  4. Each centre should coordinate with the other Y2K Situation Centres to agree upon a standard presentation format to be used to display status information. A draft format to begin discussions is provided in Table 4.2. Text should be kept to a minimum and presented in language(s) chosen by each centre.

{Date and time information is valid}

{General Information}

GTS –
GOS –
GDPS –

RTH            Link with:     1 - Up / 0 - Down / X - Unknown     Remarks

Algiers        Cairo
               Toulouse
               Nairobi
               Rome
               Niamey
               Jeddah
               Dakar
               Madrid
               Tunis
               Tripoli
               Casablanca

Brazzaville    Douala
               Bangui
               Ndjamena
               Libreville
               Niamey
               Dakar
               Etc.

Etc.

Table 4.2
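
To illustrate how the report lines described in section 2.1 could be collated into such a display, the following is a minimal Python sketch (an assumption, not an agreed tool); the identifiers in the example are placeholders.

    def parse_line(line):
        """Split one 'CCCCccccS Text' report line into its parts."""
        rth, nmc, s = line[0:4], line[4:8], line[8]
        status = {"1": "Up", "0": "Down"}.get(s, "Unknown")
        return rth, nmc, status, line[9:].strip()

    def render(report_lines):
        """Print a simple RTH / Link with / Status / Remarks listing."""
        for line in report_lines:
            if len(line) < 9:
                continue   # skip malformed lines
            rth, nmc, status, remarks = parse_line(line)
            print("%-12s%-12s%-10s%s" % (rth, nmc, status, remarks))

    # Placeholder report lines as they might arrive from RTH "AAAA".
    render(["AAAABBBB1", "AAAACCCC0 link lost at 0020 UTC"])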

From 06 UTC on 31 December 1999 until at least 00 UTC on 2 January 2000

  1. Ensure that staff are available on site around the clock.

  2. Collect, collate, and display information gathered from the RTH monitoring described in section 2.1 above.

  3. Consult regularly (at least every 6 hours) with the other Y2K Situation Centres to coordinate activities and exchange information on the status of the WWW systems.

  4. Provide access to status information until at least 00 UTC 6 January.

After the Y2K transition

  1. Provide a report to the Secretariat by 15 March summarising the results of the Y2K transition and describing problems that remain unresolved at that time.

5.  WMO SECRETARIAT

5.1 Although not assigned operational responsibilities, the Secretariat can help to minimise the impact of the Y2K transition. As part of its routine responsibility to coordinate interactions among WMO Members, the Secretariat should:

  1. Consult with ITU to ascertain if the ITU plans to provide information on the status of international telecommunications over the Y2K transition and, if so, inform Members of how this information can be viewed.

  2. Issue a circular letter to all Members no later than 15 October 1999 informing them of the plans to provide meteorological data via FTP servers as an alternate source of data during the transition to year 2000. The letter should describe the format and file naming conventions that are used and provide the Internet addresses and any User-IDs and passwords required to access each of the FTP servers. The letter should also include information on how to contact the Y2K Situation Centres.

  3. Publish a summary of this Y2K Monitoring and Contingency Plan in the WWW Operational Newsletter no later than 15 November 1999, including information on the roles and contact information for the Y2K Situation Centres.

  4. Inform the satellite operators and other international organisations with an interest in the operation of the World Weather Watch, such as EUMETSAT, IAEA, IATA, ICAO, IMO and IOC, of the steps that are being taken to ensure its continued operation and how they could contact the Y2K Situation Centres to view status information.

  5. Produce an interim summary of the results of the Y2K transition by 15 January 2000.

5.2 ECMWF has agreed to analyse significant losses or quality problems of data during the Y2K transition and will endeavour to provide a synopsis of persistent problems within the first several days of January 2000. This will include all data important to NWP. The synopsis might include, for example, evidence that radiosondes from a particular manufacturer have not been available since the transition to the year 2000. Other centres might also discover similar trends. The Secretariat should consolidate this information, make it available on its Web server, and coordinate possible responses with the CBS Y2K expert team as necessary.


Annex A to Paragraph 3.3.2

USA File Name Standard for FTP Servers

1. Introduction

Draft Directory and File Naming Standards for FTP Servers were first adopted at a Restricted Session of the CBS Working Group on Data Management (WGDM) in February 1995. These standards were intended for use by FTP servers participating in trial implementations of the CBS Distributed Databases Concept. Since the standards adopted referred to an emerging concept, evolution in the standards themselves was expected, and it was suggested that related software development be approached with caution. Since that time, evolution has occurred, but not in a coordinated manner.

Standards for directory and file naming for FTP servers used by centers within the US have been under development since 1997 by a small group consisting of representatives from FNMOC, NWSTG, and NCEP. The group reviewed the experiences of the CBS WGDM effort and found some of the concepts in the 1995 draft standards were deficient and should be discarded, while others were useful and should be retained. In the next section, some of the issues that must be addressed in the standards are discussed. A proposal for some basic features the standard should contain is presented in Section 3.

2. Some Issues for the Standards (background)

Type of Information: The first issue is what set of information the revised standards should contain in the directory and file names. There is some commonality in the types of information FTP servers now use, for most contain some of the following kinds of information (referred to as information elements in this paper) in their directory and file names (note that this list is not exhaustive):

server location, cycle of run, generating process, customer designator, reference date, level of data, data format, reference time, layer of data, run of model, data date, grid, data subcategory, data time, parameter, area of data, data category, data date period, type of model, data time period, and filename sequence.

Order of Information: Once the type of information to be contained in the revised standards is established, the order in which that information should appear needs to be decided. For example, should reference date precede generating process or should generating process precede reference date? Should level of data precede grid or should the reverse be true? Different centers maintain their servers for different purposes, and the order in which this information appears has a significant effect on the ease or difficulty of retrieving the information contained. A brief review of some FTP servers suggests that while there is considerable commonality with respect to the information elements they use, there is no commonality regarding how these information elements are assembled into directory and file names. The group felt that consensus would be difficult to reach on this point, and the standards must therefore permit the center maintaining the server to decide the order of the information elements.

Length of Directory and File Names: If we adopt the position of using convenient UNIX constructs to create user-friendly directory and file names, the issue of how many characters are allowed for the directory and file names must be addressed. Long file names can be made more user friendly than short ones. However, different server implementations have different restrictions. A quite small sample revealed permissible file name lengths from 14 characters to over a thousand. Lengths of 14 characters leave precious little opportunity to create user-friendly file names.

Composition of Information Elements: A decision to rely more heavily on alphabetic rather than numeric information elements carries with it the challenge of reaching agreement on what the alphabetic entries should be. This will be particularly challenging when complete common-usage names must be abbreviated in the interests of compactness.

Number of Directory Levels: Different servers may also permit different numbers of directory levels. Once the type of information to be contained in the revised standards is established, the number of directory levels and the number of characters allowed in the directory and file names could be considered. However, as no consensus is likely to exist on this point either, the revised standards must also allow the center maintaining the server to decide the number of directory levels and the lengths of the directory and file names.

3. The Current Evolution of the Directory and File Naming Standards for FTP Servers

In light of the past experience, the group felt revised standards should incorporate several basic features.

First, the individual information elements that comprise the standards must be nationally coordinated with respect to their form and content.

Second, except for insisting the server location be the first directory level, the centers responsible for the servers should be allowed to assemble these information elements into directory and file names as needed.

Third, use should be made of alphabetic or alphanumeric entries whenever possible.

Fourth, the individual information elements should be of limited but fixed length.

Fifth, the allowable character set should be restricted to A-Z, a-z, 0-9, period (.), underscore (_), and hyphen (-).

One method of allowing centers to assemble the information elements as they wish, yet ensure the information elements can always be uniquely identified, is to first consider each information element to consist of an element ID - made up of two letters followed by a period - and element information. This would ensure unique identification of which information element(s) each directory and file name entry consists of yet allow the needed flexibility. Second, each directory and file name should be allowed to consist of several information elements connected by an underscore. Finally, use of the dash should be reserved for indicating that the information element is a spatial or temporal interval. The following description of the individual elements adheres to these basic principles and represents the current evolution of our efforts:

server location ==> {SL|sl}.ccnnnsss
documents ==> {DO|do}.dddd
tables ==> {TB|tb}.tttt
reference date ==> {RD|rd}.yyyymmdd
reference time ==> {RT|rt}.hhnnss
data date ==> {DD|dd}.yyyymmdd
data time ==> {DT|dt}.hhnnss
data date period ==> {DP|dp}.yy1mm1dd1-yy2mm2dd2
data time period ==> {TP|tp}.hh1nn1-hh2nn2
generating process ==> {GP|gp}.ppppp
area of data ==> {AR|ar}.aaaaaaaa
data format ==> {DF|df}.ff
data status ==> {ST|st}.stat
type of model ==> {MT|mt}.mmmmm
run of model ==> {MR|mr}.rrr
cycle of run ==> {CY|cy}.hh
level of data ==> {LV|lv}.sddddd
layer of data ==> {LY|ly}.s1ddddd1-s2ddddd2
grid ==> {GR|gr}.gggggggg
parameter ==> {PA|pa}.pppppppp
data category ==> {DC|dc}.ccccc
data subcategory ==> {DS|ds}.sssss
customer ==> {CU|cu}.lllll
sequence number ==> {CY|cy}.xx(xx)

In the above, the notation {a|b} indicates a choice may be made to use either "a" or "b". The convention suggested is to use upper case when the information element is used in a directory name and lower case when the information element is used in a file name. The specific description of these information elements is given in Annex 1. It should be noted that the above list of information elements is not exhaustive, and more can be anticipated to be needed. Furthermore, several tables described in Annex 1 are incomplete and others will need to be developed and maintained by the center responsible for the server.

As a first example, the US National Centers for Environmental Prediction (NCEP) might choose to assemble the above information elements into the following directory and file naming configuration for observational data:

/(server location)/(reference date)/(reference time)/(generating process)/(data time period)/
(area of data)_(data format)_(data category)_(data subcategory)

Symbolically, this would appear as:

/SL.ccnnnsss/RD.yyyymmdd/RT.hhnnss/GP.ppppp/TP.hh1nn1-hh2nn2/ar.aaaaaaaa_df.ff_dc.ccccc_ds.sssss

A file of radiosonde observations from fixed land sites for the period from 3 hours prior to 2 hours 59 minutes after a reference date/time of 8 December, 1997/1200 UTC stored in BUFR on the NCEP DDBs server would then appear as:

/SL.us007003/RD.19971208/RT.120000/GP.obvns/TP.0300-0259/ar.allglobe_df.bu_dc.vsndn_ds.raobf

Note that other centers would be free to organize their directory and file names for the same observational data with a different combination of information elements, or with the same information elements but in a different order.
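
The following Python sketch (an illustration under the assumptions of this paper, not an agreed tool) assembles the NCEP-style path shown above from its information elements:

    from datetime import datetime

    def ncep_obs_path(server, ref, process, period, area, fmt, category, subcategory):
        """Build /SL.../RD.../RT.../GP.../TP.../ar..._df..._dc..._ds... as in the first example."""
        return ("/SL.%s/RD.%s/RT.%s/GP.%s/TP.%s/ar.%s_df.%s_dc.%s_ds.%s"
                % (server, ref.strftime("%Y%m%d"), ref.strftime("%H%M%S"),
                   process, period, area, fmt, category, subcategory))

    # Reproduces the radiosonde example from the text:
    print(ncep_obs_path("us007003", datetime(1997, 12, 8, 12, 0, 0),
                        "obvns", "0300-0259", "allglobe", "bu", "vsndn", "raobf"))
    # -> /SL.us007003/RD.19971208/RT.120000/GP.obvns/TP.0300-0259/ar.allglobe_df.bu_dc.vsndn_ds.raobf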

As a second example, the US National Weather Service (NWS) might choose to assemble the above elements into the following directory and file naming configuration for a specific observational data type:

/(server location)/(area of data)/(data format)/(data category)/(customer sequence number)

Symbolically, this would appear as:

/SL.ccnnnsss/AR.aaaaaaaa/DF.ff/DC.ccccc/cu.lllll_cy.xx(xx)

A file of surface synoptic data for the area of South America in WMO character code for use by a customer would appear as:

/SL.us008001/AR.wmora03l/DF.an/DC.sflnd/cu.mitre_cy.01

Note this is the first "cu.mitre" file in the "DC.sflnd" data type subdirectory; there will be a number of sequences from cy=01 through cy=36, giving 36 different file names under that subdirectory. This provides a way of generating files of data as they arrive at the communications switching center. As the definition of area has not yet been finalized, here AR = wmora03l decomposes as wmo for World Meteorological Organization, ra for region, 03 for the region number of South America, and l (lower case L) for a continent-specific area.
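
Likewise, a short sketch (under the same assumptions as the previous one) showing how the NWS-style configuration assembles the same kinds of information elements in a different order:

    def nws_obs_path(server, area, fmt, category, customer, sequence):
        """Build /SL.../AR.../DF.../DC.../cu..._cy.xx as in the second example."""
        return ("/SL.%s/AR.%s/DF.%s/DC.%s/cu.%s_cy.%02d"
                % (server, area, fmt, category, customer, sequence))

    # Reproduces the South American surface synoptic example from the text:
    print(nws_obs_path("us008001", "wmora03l", "an", "sflnd", "mitre", 1))
    # -> /SL.us008001/AR.wmora03l/DF.an/DC.sflnd/cu.mitre_cy.01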


Annex 1: Description of Directory and File Name Information Elements with element IDs

MANDATORY FIELD

server location ==> {SL|sl}.ccnnnsss
where
    {SL|sl} ==> Indicator for information element "server location"
    cc ==> country [FIPS standard 10-4]
    nnn ==> center [WMO standard 306 Part II]
    sss ==> sub-center [center defined]

OPTIONAL FIELDS (selected IDs and their order as determined by the centre)

documents ==> {DO|do}.dddd
where
    {DO|do} ==> Indicator for information element "documents"
    dddd = tcom ==> telecommunications documents
    dddd = code ==> data representation (code) form documents
    dddd = prod ==> production documents
    dddd = drft ==> draft documents

tables ==> {TB|tb}.tttt
where
    {TB|tb} ==> Indicator for information element "tables"
    tttt = stns ==> observing station information
    tttt = bufr ==> BUFR tables
    tttt = crex ==> CREX tables
    tttt = grib ==> GRIB tables

reference date ==> {RD|rd}.yyyymmdd
where
    {RD|rd} ==> Indicator for information element "reference date"
    yyyy ==> 4-digit year
    mm ==> month
    dd ==> day

reference time ==> {RT|rt}.hhnnss
where
    {RT|rt} ==> Indicator for information element "reference time"
    hh ==> hour
    nn ==> minute
    ss ==> second

data date ==> {DD|dd}.yyyymmdd
where
    {DD|dd} ==> Indicator for information element "data date"
    yyyy ==> 4-digit year
    mm ==> month
    dd ==> day

data time ==> {DT|dt}.hhnnss
where
    {DT|dt} ==> Indicator for information element "data time"
    hh ==> hour
    nn ==> minute
    ss ==> second

data date period ==> {DP|dp}.yy1mm1dd1-yy2mm2dd2
where
    {DP|dp} ==> Indicator for information element "data date period"
    yy1 ==> number of years (00-99) before the reference date/time that the data date period begins
    mm1 ==> number of months (00-12) before the reference date/time that the data date period begins
    dd1 ==> number of days (00-31) before the reference date/time that the data date period begins
    yy2 ==> number of years (00-99) after the reference date/time that the data date period ends
    mm2 ==> number of months (00-12) after the reference date/time that the data date period ends
    dd2 ==> number of days (00-31) after the reference date/time that the data date period ends

data time period ==> {TP|tp}.hh1nn1-hh2nn2
where
    {TP|tp} ==> Indicator for information element "data time period"
    hh1 ==> number of hours (00-99) before the reference date/time that the data time period begins
    nn1 ==> number of minutes (00-99) before the reference date/time that the data time period begins
    hh2 ==> number of hours (00-99) after the reference date/time that the data time period ends
    nn2 ==> number of minutes (00-99) after the reference date/time that the data time period ends

generating process ==> {GP|gp}.ppppp
where
    {GP|gp} ==> Indicator for information element "generating process"
    ppppp = obsvns ==> observations
    ppppp = agrids ==> analysis grids
    ppppp = agrphs ==> analysis graphics
    ppppp = fgrids ==> forecast grids
    ppppp = fgrphs ==> forecast graphics
    ppppp = warngs ==> warnings
    ppppp = discs ==> discussions

area of data ==> {AR|ar}.aaaaaaaa
where
    {AR|ar} ==> Indicator for information element "area of data"
    aaaaaaaa ==> a string of eight characters. International coordination of a group of frequently-used areas would be useful.

data format ==> {DF|df}.ff
where
    {DF|df} ==> Indicator for information element "data format"
    ff = an ==> WMO character
    ff = bl ==> bulletins of raw observations as exchanged on the GTS
    ff = bu ==> WMO BUFR
    ff = cr ==> WMO CREX
    ff = c5 ==> CCITT International Alphabet #5
    ff = f1 ==> CCITT T4-1D facsimile
    ff = f2 ==> CCITT T4-2D facsimile
    ff = gi ==> GIF
    ff = gr ==> WMO GRIB (binary)
    ff = gt ==> mixed information as exchanged on the GTS
    ff = jp ==> JPEG

type of model ==> {MT|mt}.mmmmm
where
    {MT|mt} ==> Indicator for information element "type of model"
    mmmmm ==> string of five characters indicating the type of model used (table maintained by originating centre)

run of model ==> {MR|mr}.rrr
where
    {MR|mr} ==> Indicator for information element "run of model"
    rrr ==> string of three characters indicating the model run (table maintained by originating centre)

cycle of run ==> {CY|cy}.hh
where
    {CY|cy} ==> Indicator for information element "cycle of run"
    hh ==> cycle time in hours

level of data ==> {LV|lv}.sddddd
where
    {LV|lv} ==> Indicator for information element "level of data"
    s = p ==> pressure
    s = h ==> height
    s = t ==> potential temperature
    s = s ==> sigma
    ddddd ==> value of surface. Multiple levels are indicated by setting ddddd = 99999.

layer of data ==> {LY|ly}.s1ddddd1-s2ddddd2
where
    {LY|ly} ==> Indicator for information element "layer of data"
    s1, s2 = p ==> pressure
    s1, s2 = h ==> height
    s1, s2 = t ==> potential temperature
    s1, s2 = s ==> sigma
    ddddd1 ==> value of lower surface of layer of type s1
    ddddd2 ==> value of upper surface of layer of type s2 (multiple layers are indicated by setting ddddd1 = ddddd2 = 99999)

grid ==> {GR|gr}.gggggggg
where
    {GR|gr} ==> Indicator for information element "grid"
    gggggggg ==> a string of eight characters indicating the grid used (table maintained by originating centre). Multiple grids are indicated by setting gggggggg = allgrids. International coordination of a group of frequently-used grids would be useful.

parameter ==> {PA|pa}.pppppppp
where
    {PA|pa} ==> Indicator for information element "parameter"
    pppppppp ==> a string of eight characters indicating the parameter (table maintained by originating centre). Multiple parameters are indicated by setting pppppppp = allparms. International coordination of a group of frequently-used parameters would be useful.

data category ==> {DC|dc}.ccccc
where
    {DC|dc} ==> Indicator for information element "data category"
    ccccc = sflnd ==> Surface data - land
    ccccc = sfmar ==> Surface data - sea
    ccccc = vsndn ==> Vertical sounding - other than satellite
    ccccc = vsnds ==> Vertical sounding - satellite
    ccccc = sluan ==> Single-level upper-air data - other than satellite
    ccccc = sluas ==> Single-level upper-air data - satellite
    ccccc = sfsat ==> Surface data - satellite
    ccccc = altyp ==> All types of data category

data subcategory ==> {DS|ds}.sssss
where
    {DS|ds} ==> Indicator for information element "data subcategory"

when ccccc = sflnd,
    sssss = synop ==> Synoptic - manual and automatic
    sssss = avnma ==> Aviation - manual
    sssss = amosx ==> Aviation - AMOS
    sssss = ramos ==> Aviation - RAMOS
    sssss = autob ==> Aviation - AUTOB
    sssss = asosx ==> Aviation - ASOS
    sssss = metar ==> Aviation - METAR
    sssss = awosx ==> Aviation - AWOS
    sssss = coavn ==> Synoptic - converted aviation
    sssss = autox ==> Aviation - AUTO(0-9)
    sssss = coops ==> Cooperative - SHEF
    sssss = sclim ==> Aviation - Supplementary Climat Data Report
    sssss = allsc ==> All sub-categories

when ccccc = sfmar,
    sssss = ships ==> Ship - manual and automatic
    sssss = dbuoy ==> Drifting buoy
    sssss = mbuoy ==> Moored buoy
    sssss = lcman ==> Land-based CMAN station
    sssss = oilrg ==> Oil rig or platform
    sssss = slpbg ==> Sea level pressure bogus
    sssss = wavob ==> WAVEOB
    sssss = allsc ==> All sub-categories

when ccccc = vsndn,
    sssss = raobf ==> Rawinsonde - fixed land
    sssss = raobm ==> Rawinsonde - mobile land
    sssss = raobs ==> Rawinsonde - ship
    sssss = dropw ==> Dropwinsonde
    sssss = pibal ==> Pibal
    sssss = prflr ==> Profiler
    sssss = nxrdw ==> NEXRAD winds
    sssss = allsc ==> All sub-categories

when ccccc = vsnds,
    sssss = geost ==> Geostationary
    sssss = mstbg ==> Moisture bogus
    sssss = tovsx ==> Polar orbiting - TOVS
    sssss = synsy ==> Sun synchronous
    sssss = allsc ==> All sub-categories

when ccccc = sluan,
    sssss = airep ==> AIREP
    sssss = pirep ==> PIREP
    sssss = asdar ==> ASDAR
    sssss = acars ==> ACARS
    sssss = recco ==> RECCO - flight level
    sssss = allsc ==> All sub-categories

when ccccc = sluas,
    sssss = infus ==> Winds derived from cloud motion observed in infrared channels by the United States
    sssss = visus ==> Winds derived from cloud motion observed in visible channels by the United States
    sssss = h20us ==> Winds derived from motion observed in water vapour channels by the United States
    sssss = comus ==> Winds derived from motion observed in a combination of spectral channels by the United States
    sssss = infin ==> Winds derived from cloud motion observed in infrared channels by India
    sssss = visin ==> Winds derived from cloud motion observed in visible channels by India
    sssss = h20in ==> Winds derived from motion observed in water vapour channels by India
    sssss = comin ==> Winds derived from motion observed in a combination of spectral channels by India
    sssss = infja ==> Winds derived from cloud motion observed in infrared channels by Japan
    sssss = visja ==> Winds derived from cloud motion observed in visible channels by Japan
    sssss = h2oja ==> Winds derived from motion observed in water vapour channels by Japan
    sssss = comja ==> Winds derived from motion observed in a combination of spectral channels by Japan
    sssss = infeu ==> Winds derived from cloud motion observed in infrared channels by EUMETSAT
    sssss = viseu ==> Winds derived from cloud motion observed in visible channels by EUMETSAT
    sssss = h2oeu ==> Winds derived from motion observed in water vapour channels by EUMETSAT
    sssss = comeu ==> Winds derived from motion observed in a combination of spectral channels by EUMETSAT
    sssss = allsc ==> All sub-categories

when ccccc = sfsat,
    sssss = ssmit ==> SSM/I - Brightness Temperatures
    sssss = ssmip ==> SSM/I - Derived Products
    sssss = ersar ==> ERS/SAR
    sssss = erswn ==> ERS/scatterometer Winds
    sssss = ersal ==> ERS/Radar altimeter Data
    sssss = sstnv ==> DOD/Navy sea surface temperatures
    sssss = sstns ==> DOC/NESDIS sea surface temperatures
    sssss = allsc ==> All sub-categories

when ccccc = altyp,
    sssss = (not used)

sequence number ==> {CY|cy}.xx(xx)
where
    {CY|cy} ==> Indicator for cycle sequence numbers
    xx(xx) = 01(01) thru 99(99) ==> sequence number; its length (two or four digits) is determined by the centre writing the files when it establishes the number of subdirectories or files

customer ==> {CU|cu}.lllll
where
    {CU|cu} ==> Indicator of the customer for which the file is established
    lllll = kwbc ==> RTH Washington
    lllll = fnoc ==> Fleet Numerical Oceanographic Center
    lllll = knhc ==> National Hurricane Center
    lllll = mitre ==> Company name
    lllll = faa ==> Federal Aviation Administration
    lllll = genrl ==> general purpose files (implies file content is not restricted for any intended customer)
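
Because every field carries a two-character element ID, file and directory names built from these elements are self-describing. As an illustration only (not part of the recommendation), a minimal decoder along the following lines could recover the element/value pairs from such a path, assuming the "/" and "_" separators described above:

def decode_path(path):
    """Split a directory/file path built from the information elements above
    into a dictionary of {element ID: value} pairs."""
    elements = {}
    parts = path.strip("/").split("/")
    directory_fields, file_name = parts[:-1], parts[-1]
    for field in directory_fields + file_name.split("_"):
        element_id, _, value = field.partition(".")
        elements[element_id.upper()] = value
    return elements

print(decode_path(
    "/SL.us007003/RD.19971208/RT.120000/GP.obvns/TP.0300-0259/"
    "ar.allglobe_df.bu_dc.vsndn_ds.raobf"))
# {'SL': 'us007003', 'RD': '19971208', 'RT': '120000', 'GP': 'obvns',
#  'TP': '0300-0259', 'AR': 'allglobe', 'DF': 'bu', 'DC': 'vsndn', 'DS': 'raobf'}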

 


Annex B to Paragraph 3.3.2

Format of Data on FTP Servers

(Excerpt from Chapter 4 of the Guide on Use of TCP/IP on the GTS)

Accumulating messages into files

One of the problems with using FTP to send traditional GTS messages is the overhead if each message is sent in a separate file. To overcome this problem, multiple messages in the standard GTS message envelope should be placed in the same file according to the rules set out below. This method of accumulating multiple messages applies only to messages for which AHLs have been assigned.

Centres have the option of including or deleting the Starting Line and End of Message strings; the option being used is indicated via the format identifier (refer to points 2 and 4 below).

  1. Each message should be preceded by an 8-octet message length field (8 ASCII characters). The length includes the Starting Line (if present), AHL, text and End of Message (if present).
  2. Each message should start with either:
     (a) the currently defined Starting Line and AHL, as shown in figure 4.2, option 1; or
     (b) the AHL only, as shown in figure 4.2, option 2.
  3. Messages should be accumulated in files thus:
     (a) length indicator, message 1 (8 characters);
     (b) format identifier (2 characters);
     (c) message 1;
     (d) length indicator, message 2 (8 characters);
     (e) format identifier (2 characters);
     (f) message 2;
     (g) and so on, until the last message; and then
     (h) a 'dummy' message of zero length shall be inserted after the last real message, to assist with end-of-file detection in certain MSS systems.
  4. The format identifier (2 ASCII characters) has the following values:
     (a) 00 if the Starting Line and End of Message strings are present;
     (b) 01 if the Starting Line and End of Message strings are absent.
  5. The sending centre should combine messages in a file for no more than 60 seconds, to minimise transmission delays (30 minutes is acceptable for files used in the Y2K backup procedures).
  6. The sending centre should limit the number of messages in a file to a maximum of 100.
  7. The format applies regardless of the number of messages, i.e. it applies even if there is only one message in the file.

An illustrative sketch of this packing is given after Figure 4.2 below.

Figure 4.2 Structure of a typical message in a file
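
As an illustration only (the function name is hypothetical and this is not part of the Guide text), the following Python sketch packs messages according to the rules above; the terminating 'dummy' entry is written here as a zero length field followed by the same format identifier, an assumption that should be checked against the Guide itself.

def accumulate_messages(messages, with_envelope=True):
    """Pack GTS messages (bytes) into one file body: each message is preceded
    by an 8-character ASCII length field and a 2-character format identifier,
    and a zero-length 'dummy' message terminates the file."""
    if len(messages) > 100:
        raise ValueError("no more than 100 messages per file")
    format_id = b"00" if with_envelope else b"01"   # 00: SL/EOM present, 01: absent
    body = b""
    for msg in messages:
        body += b"%08d" % len(msg) + format_id + msg
    body += b"%08d" % 0 + format_id                 # assumed form of the dummy message
    return body

# Example: two already-enveloped messages accumulated into one file body.
print(accumulate_messages([b"<msg 1 incl. SL/AHL/EOM>", b"<msg 2 incl. SL/AHL/EOM>"]))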