The simplified MPX spectrum is here:
This is the sum of the left and right channels. For a monophonic receiver, or when the stereo component of the MPX signal is of poor quality, this is the signal the receiver reproduces.
Before summing, each channel is subject to pre-emphasis in order to improve the signal-to-noise ratio of the high-frequency (>2 kHz) audio. Common pre-emphasis time constants are 25 µs, 50 µs, 75 µs and 125 µs; 50 µs is common in Europe, 75 µs in the States and 125 µs in South America. The receiver applies an equivalent de-emphasis.
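A minimal sketch of the pre-emphasis/de-emphasis pair, assuming a simple first-order digital approximation of the analogue network H(s) = 1 + sτ (the sample rate and tone frequencies are illustrative choices, not from any standard):

```python
import numpy as np

def pre_emphasis(x, fs, tau=50e-6):
    """First-order pre-emphasis: y[n] = x[n] + tau*fs*(x[n] - x[n-1]).

    Approximates the analogue network H(s) = 1 + s*tau, which boosts
    frequencies above 1/(2*pi*tau) (~3.2 kHz for tau = 50 us).
    """
    c = tau * fs
    y = np.empty_like(x)
    y[0] = (1 + c) * x[0]
    y[1:] = (1 + c) * x[1:] - c * x[:-1]
    return y

def de_emphasis(y, fs, tau=50e-6):
    """Exact recursive inverse of pre_emphasis (a one-pole low-pass)."""
    c = tau * fs
    x = np.empty_like(y)
    x[0] = y[0] / (1 + c)
    for n in range(1, len(y)):
        x[n] = (y[n] + c * x[n - 1]) / (1 + c)
    return x
```

Because the de-emphasis recursion is the algebraic inverse of the pre-emphasis difference equation, the round trip reconstructs the audio exactly, while a high-frequency tone comes out of the pre-emphasis stage noticeably louder than a low-frequency one.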
The difference of the left and right channels is modulated onto a 38 kHz carrier to generate a DSB-SC (double-sideband suppressed-carrier) modulation, that is, a modulation without a carrier. This means greater efficiency, since no power is wasted transmitting a "useless" carrier signal. The disadvantage is that the signal is difficult to demodulate: the receiver needs a mechanism to re-insert a carrier identical to the one "lost" in the transmitter.
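Generating the DSB-SC term is just a multiplication of the difference signal by the 38 kHz carrier. A short sketch (the sample rate and test tones are assumptions chosen so every component fits an integer number of cycles):

```python
import numpy as np

fs = 228_000                     # assumed sample rate, a multiple of 19 kHz
t = np.arange(fs // 100) / fs    # 10 ms of signal

left  = np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone on the left channel
right = np.sin(2 * np.pi * 400 * t)    # 400 Hz tone on the right channel

carrier = np.cos(2 * np.pi * 38_000 * t)
dsb_sc  = (left - right) * carrier     # DSB-SC: pure product, no carrier term
```

The "suppressed carrier" property falls straight out of the product: the spectrum of dsb_sc contains only the sum and difference frequencies (38 kHz ± each audio tone), with nothing at 38 kHz itself, which is exactly why the receiver has to regenerate the carrier locally.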
This is where the pilot signal comes in.
Why introduce a separate tone? Given modern-day DSP processing power it would be very simple to generate a DSB-SC subcarrier at 38 kHz and then introduce a precisely defined (in both phase and amplitude) carrier for the receiver to use to expedite demodulation. I suppose in the olden days this would have been too difficult?
This is a 19 kHz reference signal used by the receiver to reconstruct a 38 kHz carrier in order to demodulate the (L-R) subcarrier. The reference has to be phase coherent with the 38 kHz carrier, that is, the rising zero crossings of the two waveforms coincide. By definition the pilot is set at 10% of the MPX level.
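One classic way to reconstruct the 38 kHz carrier from the pilot is frequency doubling by squaring, since sin²(ωt) = (1 − cos 2ωt)/2. A sketch under the same assumed sample rate as above:

```python
import numpy as np

fs = 228_000
t = np.arange(fs // 100) / fs

pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)   # pilot at 10 % of MPX level

# Squaring doubles the frequency: the squared pilot is a DC term
# plus a -cos component at exactly 2 x 19 kHz = 38 kHz.
doubled = pilot ** 2
carrier38 = doubled - doubled.mean()           # strip the DC term
carrier38 /= np.max(np.abs(carrier38))         # normalise amplitude
```

Note that the squared pilot comes out as −cos(2ωt), i.e. 90° away from a sin(2ωt) subcarrier, so a practical receiver still applies a fixed phase shift, or more commonly locks a PLL to the pilot, to get the phase-coherent carrier the demodulator needs.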
The pilot is inserted equidistant between the (L+R) baseband signal and the (L-R) subcarrier. The presence of this important signal means that the L and R audio channels must be filtered to remove any possible 19 kHz content.
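Putting the pieces described so far together, the stereo part of the MPX baseband can be composed in a few lines. The 45 % / 45 % / 10 % level plan below is an assumption for illustration (exact deviation budgets vary by standard); the subcarrier is written as sin(2·ω_pilot·t) so its rising zero crossings coincide with the pilot's:

```python
import numpy as np

fs = 228_000
t = np.arange(fs // 100) / fs

left  = np.sin(2 * np.pi * 1000 * t)
right = np.sin(2 * np.pi * 400 * t)

pilot     = np.sin(2 * np.pi * 19_000 * t)
carrier38 = np.sin(2 * np.pi * 38_000 * t)   # phase-locked to the pilot

# Illustrative level plan: 45 % sum, 45 % difference, 10 % pilot.
mpx = (0.45 * (left + right)
       + 0.45 * (left - right) * carrier38
       + 0.10 * pilot)
```

A mono receiver simply low-passes mpx and keeps the (L+R) term; a stereo receiver uses the pilot to recover (L-R) and forms L and R by sum and difference.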
Note: pre-emphasis will increase the level of any frequencies around 19 kHz, so the implicit LPF (low-pass filter) must have sufficient rejection to nullify the boost the pre-emphasis introduces.
RDS (Radio Data System) is commonly used to supply additional information about the transmitter: the station name, time information and programme content data. It is a BPSK (binary phase shift keyed) data modulation on a 57 kHz carrier. This is also a multiple of the pilot frequency, facilitating demodulation in the receiver. The carrier is in phase or in quadrature with the pilot in order to minimise possible cross-interference with the audio channels when subjected to a non-linear medium.
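A deliberately simplified BPSK sketch of the RDS channel (real RDS additionally uses differential biphase symbol coding and spectrum shaping; here the bits go straight onto the 57 kHz carrier, and the sample rate and 5 % injection level are assumptions):

```python
import numpy as np

fs = 228_000                          # assumed: 4 x 57 kHz, 12 x 19 kHz
bit_rate = 1187.5                     # RDS bit rate, 57 kHz / 48
samples_per_bit = int(fs / bit_rate)  # 192 samples per bit

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
symbols = np.repeat(2 * bits - 1, samples_per_bit)   # map 0 -> -1, 1 -> +1

t = np.arange(len(symbols)) / fs
carrier57 = np.cos(2 * np.pi * 57_000 * t)   # 3rd harmonic of the 19 kHz pilot
rds = 0.05 * symbols * carrier57             # BPSK at a few % of MPX level

# Coherent demodulation: multiply by the carrier, integrate per bit, slice sign
demod = rds * carrier57
recovered = np.array([np.sign(demod[i*samples_per_bit:(i+1)*samples_per_bit].sum())
                      for i in range(len(bits))])
recovered_bits = (recovered + 1) // 2
```

Because 57 kHz is the third harmonic of the pilot, the receiver can derive the demodulating carrier from the same pilot reference it already uses for the stereo subcarrier.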
Over the course of time additional facilities have been added to the RDS specification. One of these is the incorporation of in-house data. This can be used to enable remote control of transmitters (on a mountain top) from the studio (in a cosy valley). In fact this data channel has been used to send SNMP (Simple Network Management Protocol) commands to the transmitter complex to both control and monitor the transmitters. The return path for the SNMP information was provided by an SCA (subcarrier) channel (95 kHz was used). This approach worked around the limited data volume that RDS can carry. For a low data volume the self-same RDS stream could have been used, by intercepting the RDS and re-inserting new data; however, compared with the SCA approach this was found to be somewhat complex.