Node Hardware
The 3B2 Phoenix machine is a 50 MHz 1100-R3 RISC processor. The backplane is in the middle of the machine vertically, which is where the system board resides. Memory boards (a combination of 16, 32, and 64 MB boards totaling 96 MB) reside in the slots below the system board. The slots above the system board contain the Host Adapter boards that are used to connect the external SCSI drives. There are two internal hard drives used for the root, usr, and var file systems.
The prtconf command gives the internal system configuration of the boards.
slot | contains |
1 | single-ended SCSI card (S.E. BUS) that is the interface for the internal 300 MB disks and internal tape drive |
2 | bus continuity card (slot eliminator). This is a small card that allows the slot to be bypassed. Normal 3B2 configuration will not allow a slot to be empty when there is a card in the next slot. |
3,5 | (5 is above 3) - a Datakit fiber interface card (DKPE) dual board set. This host interface card uses a fiber optic cable to connect to the Datakit CPM card. |
4 | the Alarm Interface Circuit (AIC) card, used for control of the 3B2 console and for Compulert connections. Compulert is connected through a TY port on the same local Datakit as the host fiber interface. |
6,7,8,9 | Differential SCSI host adapter cards (DIF. BUS, tag 0 or 1). These are used to connect the same disks to multiple machines. Thus, a disk can be accessible by more than one machine, but a disk slice or partition can only be mounted on one machine at a time unless it is mounted as read only. Each host adapter in these slots controls six disk drives that are either 300 MB, 600 MB, 1 GB, or 2 GB (tags 2-7). Each disk is accessible by one other processor. |
10 | also contains a Differential SCSI host adapter card and controls six devices. However, the devices that it controls are the host adapter in slot 10 of the next machine (tag 2), the three external tape drives (tags 3-5), and 2 shared disk drives (tags 6-7). Everything attached to this bus is accessible by all three processors. |
11 | the Starlan 10 Network Access Unit (NAU) board, also called the Network Interface (NI) card. It is used for ethernet connections to run TCP/IP services. The card has a Starlan AUI adapter plugged into the back of it which connects to a twisted pair transceiver. This is where we connect the fiber that runs to GMSNET. |
General rules to follow to determine if a system crash or panic is hardware or software related:
To enter diagnostic mode, you must first release all services and run shutdown -i5 -g0 -y
At the prompt:
Enter name of program to execute [/etc/system]:
Type filledt and select the defaults for the next two questions. After filledt completes successfully, you will get the above prompt again. This time enter dgmon and again select the defaults. Examples of diagnostics that can be run at the DGMON> prompt are:
command | runs |
dgn scsi ph=17 | phase 17 diags on all SCSIs |
dgn scsi=1 ph=1-17 | phases 1-17 on SCSI 1 only |
dgn dkpe ph=1-13 | phases 1-13 on the Datakit fiber board |
dgn sbd ph=* | all phases on the system board |
dgn sbd soak | all non-interactive phases until a key is touched |
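Putting the steps above together, a diagnostic session might look roughly like this (a condensed illustration only; prompts are abbreviated, and you should accept the defaults and answer NO to any interactive questions):

```
Enter name of program to execute [/etc/system]: filledt
    (accept the defaults for the next two questions)
Enter name of program to execute [/etc/system]: dgmon
    (accept the defaults)
DGMON> l scsi
DGMON> dgn scsi ph=17
```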
Always run filledt before running diagnostics, and always answer NO to all interactive questions. Once at the DGMON> prompt, you can list the available diagnostics by typing, for example, l sbd or l scsi. There are three types of phases: NORMAL phases are run at power-up or when typing dgn; DEMAND phases must be specifically requested; INTERACTIVE phases require operator intervention. Requesting diags in soak mode can be useful for finding intermittent problems; they run continuously until you touch a key on the keyboard. Typing show at the DGMON> prompt displays the EDT (equipped device table). This is helpful for determining the SCSI numbers that must be used for diags, as they are not the same as the SCSI slot numbers.
scsi | slot |
0 | 1 |
1 | 6 |
2 | 7 |
3 | 8 |
4 | 9 |
5 | 10 |
SCSI phases 1 through 16 diagnose the host adapter board only. Phases 17 through 24 check the external SCSI bus outside of the board. Always run phase 17 by itself first, as it is normally the only phase that will fail.
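The SCSI-number-to-slot correspondence above can be captured as a small lookup, shown here as an illustrative Python sketch (the function name is ours, not a system utility):

```python
# Mapping from diagnostic SCSI number (as used with "dgn scsi=N")
# to the physical backplane slot holding that host adapter,
# taken from the table above.
SCSI_TO_SLOT = {0: 1, 1: 6, 2: 7, 3: 8, 4: 9, 5: 10}

def slot_for_scsi(scsi_number):
    """Return the backplane slot for a diagnostic SCSI number."""
    return SCSI_TO_SLOT[scsi_number]
```

The point of the table is exactly this indirection: the number you pass to dgn is the diagnostic SCSI number, not the slot number.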
Some basic differences from the Phoenixes to keep in mind:
Hardware slots | |
slot | use |
0, 1 | CPU board partners |
2, 3 | Vacant |
4 - 9 | BIO/SCSI board partners (provide access to disk/tape/ethernet) |
10, 11 | IOP board partners |
Hardware commands | |
command | displays |
hwmaint ls | all boards and their state |
elgpost | contents of errorlogs |
elgpost -s VDL | errors about virtual disks |
elgpost -S CRITICAL | critical errors |
Disk drives:
disk/tape commands | |
command | use |
vdskconf -D -v /dev/rdsk/vdskC06 -d /dev/rdsk/c4a3d1s06 | unmirror disk on a virtual disk pair |
stape | backs up files specified in config file to tape |
srestore | restores files from tape |
The fax nodes are configured for fax customers to receive and deliver faxes through fax feps (front end processors), which perform the conversion of fax compression messages. These feps run UNIX SVR3.3. The feps are connected to their home node and use a naming scheme that identifies the node. For example, the feps attached to the SD node are named sdaaf, sdabf, etc. Inbound feps have extensions beginning with aa, ab, etc.; outbound feps have extensions beginning with au, av, etc. Port testing is done through Teltone, which allows access to the G2 for dial out.
Standard fax uses the TMI's on the IMS network, via MNAU & FWAU, to deliver faxes. SDN fax uses outbound fax feps on the GMS network, via FX, TX, ID, and NM nodes, to deliver faxes.
When the customer logs in via a fax machine, the call is routed to an inbound fax fep, which is connected via a DD4 board, where a Voice Power (VP) card prompts them for their username and password through the use of language files stored on the fep. The inbound fep remotely mounts the customer's home file system using RFS, and a fax user agent is started on the fep. It accepts the message, which is then stored and forwarded through the customer's sent directory on their node. It is then delivered via MT, which routes it to an outbound fep for delivery. prlocate -n phone# will tell you which fep is going to deliver the message.
Procedures for troubleshooting fax fep problems can be found in OPSDOC Vol. 6, Sec. K. Fax problems may include actual problems on the fep that warrant a reboot, RFS problems on the node processor it is homed to, problems with fax spoolers that reside on the processor, and consolidated/daily report problems.
common fep commands | |
command | use |
from fep | |
fepconfig -p port on/off | turn ports on or off |
/etc/dd4/T_monstat | monitor port activity |
/etc/dd4/dd4_recover -w 05 -p port# | reload software on a dd4 board |
from node processor | |
fepcontrol status fep | check the status of a fep. Can specify all |
prstat -f spooler | check status of spooled messages |
fssum | same as prstat but less info |
dkcu mx/contxauf | access fep console |
Fep logs:
RFS is remote file sharing. When a fax customer calls the efax access number, the call is routed based on the area code they are dialing from. The local auth process then verifies the customer's account information and uses RFS to mount the customer's home file system on the fep. The account actually still resides on the home node, but is remotely mounted on the fep. The call is not always routed to a fep that is attached to the home node.
The Primary Name Server can be thought of as the scorekeeper: it keeps track of what file systems are available for remote mounting via RFS. The dktp0 listener assists in directing file system traffic. There can be only one acting (primary) name server on the node. The A processor is always the default primary name server. If it is having problems that prevent RFS from running, the responsibilities are passed to the B processor, and finally to the C processor. When rebooting a fax node, it is imperative that the processors be brought up in this order.
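The A-then-B-then-C takeover order described above can be sketched as follows (an illustration of the rule, not actual RFS code; the health check is an assumed stand-in for whatever prevents RFS from running):

```python
# Failover order for primary name server duties on a fax node:
# the A processor is the default primary; B takes over if A cannot
# run RFS, and C takes over only if both A and B cannot.
FAILOVER_ORDER = ["A", "B", "C"]

def pick_primary(healthy):
    """Return the first healthy processor in failover order, or None."""
    for proc in FAILOVER_ORDER:
        if proc in healthy:
            return proc
    return None
```

This ordering is also why, when rebooting a fax node, the processors must come up A first, then B, then C.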
common RFS commands | |
command | use |
rfadmin -p | Executed from the primary name server, will force the next processor to take over name server duties |
nlsadmin -x | see if listener is active |
sacadm -l | Similar to nlsadmin -x but gives more information |
nsquery | see if everything is advertised |
rfadmin | see who it thinks is the primary name server |
Single Stage Dialing (SSD), or Inbound Fax, is handled by a group of rack-mounted 486 fax feps made by Texas Micro Systems. Each fep contains three 8-channel port cards for inbound calls. You must telnet to the remote computer name for access.
SSD Remote Computer Names | |
Bridgeton | Middletown |
bg01s.sff.els-gms.att.net | mt01s.sff.els-gms.att.net |
bg02s.sff.els-gms.att.net | mt02s.sff.els-gms.att.net |
bg03s.sff.els-gms.att.net | . |
bg04s.sff.els-gms.att.net | . |
Inbound fax provides an 800 phone number that the customer can give to people from whom they wish to receive fax messages, with the faxes placed in their mail account for handling as the customer wishes. It is an alternative to guest submission; the originator has no voice prompts to deal with. It can only be used by efax capable accounts, and the customer must have guest access enabled via their profile. All of the Bridgeton feps are pointed to one fax node in Bridgeton, and all of the Middletown feps are pointed to one fax node in Middletown. Messages are received from the caller by the ssdua process on the SSD fep. The message is transferred in real time across a TCP connection to an ssdpeer process running on the TCP port monitor on the fax node processor that the fep is pointed to. The message is stored in /spooler/guest/ssdwork on the node processor. When transmission is complete, it is moved to /spooler/guest/in, where it is picked up by the efgdemon and sent to the owner's home/in directory via MT.
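The message path above moves through a fixed sequence of spool locations. A minimal sketch of that lifecycle, assuming the directory names given in the text (the function and stage names are illustrative only):

```python
# Sketch of the inbound (SSD) fax message path on the node processor.
# Directory names come from the description above; this is not a
# system utility, just a summary of where a message lives at each stage.
def spool_path(stage):
    """Map a message's lifecycle stage to its spool location."""
    stages = {
        "receiving": "/spooler/guest/ssdwork",  # ssdpeer still writing it
        "received": "/spooler/guest/in",        # complete, awaiting efgdemon
        "delivered": "home/in",                 # owner's home/in, via MT
    }
    return stages[stage]
```

Knowing which directory corresponds to which stage helps when deciding whether a stuck message failed during receipt or during hand-off to MT.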
Important files:
Procedures for troubleshooting SSD problems can be found in OPSDOC Vol. 6, Proc. K
Synchronous communications are handled by sync feps. Each fep is specially configured, depending on the customers that are assigned to it. These feps support the SNA, 3770, 3780 bi-sync, and LU6.2 communications protocols. The users on the feps are also assigned to a node processor on the network so that messages can then be routed via MT on the node.
There are four types of machines used:
Information on sync feps can be found in OPSDOC Vol. 5, Sec. E-J. Section D contains information on the Sun Sync feps.
The Datakit is a point-to-point type circuit switch: there are only two connection points on each circuit or segment. While a Computer Port Module (CPM) is a fiber optic type connection that can carry a high number of connections, it runs only from the Datakit to a single machine. This differs from ethernet, which allows multiple machines to connect to the same segment. So all of the connection points to the Datakit are on the backplane and not on a wire. The Datakit allows you to define how to use the bandwidth of each segment; when a CPM or other trunk module is entered into the Datakit, the number of channels can be specified. TY ports are single serial ports and are used for console connections to Compulert and other single serial port applications. Fiber segments are 8 MHz. CPMs, local trunks, and TY ports are 9600 bps. Trunks between data centers range from 56 Kbps to T1's at 2 Mbps.
We control and monitor the Datakits through Starkeeper. This is a system that both communicates with and monitors the status of all the datakits on the GMS network. A dkserver process must be running on the node processor to allow customers access into the node.
Once you have determined which Datakit a customer's call is routed to, you can duplicate the customer's access from inmsb by typing the following command and logging in with the customer id:
International calls from countries where there is no node usually come in to domestic nodes. There is an account with username gmsg (uid 20291) that lists the countries, dial information, time out information, etc.
The Sun Ultra 2 is a 168 MHz UltraSPARC processor with 256 MB of memory, 2 ethernet network interfaces, a quad ethernet board (4 additional ethernet ports), 2 SCSI controller cards, two 4 GB internal disks, and two 4 GB external disks. It runs SunOS release 5.6 (Solaris) software. TTY port A is connected to the Compulert system for console access. TTY port B is connected to a Datakit TTY port to provide a second port for login.
The Sun Ultra 2 acts as a DNS server for GMS node processors, for customer access via TCP/IP and PTNII dial access, and for delivery of Internet traffic at large. A DNS server responds to requests to translate names, known as domain names, to IP addresses. It also provides a reverse lookup function that allows IP addresses to be translated to domain names. Requests and responses are sent to/from the DNS servers in IP packets. A properly functioning DNS server is critical to the operation of the following:
A Sun DNS server does not have any customer file systems, shared data file systems, or spoolers. There are 4 DNS servers currently used by the network, 2 in Bridgeton (YBRU & ZBRU) and 2 in Middletown (YMTU & ZMTU). The network will continue to operate properly if one of these servers fails; however, there may be some performance degradation in the network. Each server is independent of the others. The forwarders line in /etc/named.boot on a node processor specifies the order in which it will search the DNS servers: if it cannot access the first server, it will go on to the next one, and so on.
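The forwarders behavior described above is a simple ordered fallback, sketched here in illustrative Python (the server names are the four from the text, and the reachability check is an assumed placeholder, not a real resolver call):

```python
# Sketch of how a node works through the "forwarders" list from
# /etc/named.boot: try each DNS server in order until one responds.
def first_reachable(forwarders, is_reachable):
    """Return the first forwarder that responds, or None if all fail."""
    for server in forwarders:
        if is_reachable(server):
            return server
    return None

# Example order, per the servers listed above.
FORWARDERS = ["YBRU", "ZBRU", "YMTU", "ZMTU"]
```

This is why losing one DNS server degrades performance rather than breaking resolution: queries simply fall through to the next server in the list.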
We are known as mail.att.net or attmail.com to the DNS world.
Easylink Services also provides Domain Name Service offerings for customers who require this type of service. We offer customers the ability to access DNS servers on the AT&T network. The DNS related services offered to customers are available to any system that has IP access to the DNS servers, regardless of whether they are AT&T Mail customers or not. The offerings fall into 2 categories:
The NOC is responsible for monitoring the Customer Access Routers. These are Cisco routers that have direct customer connections for TCP/IP access to the network. These are normally serial connections via Frame Relay.