Schwartz says that the latest version of the xMax technology no longer uses a narrow-band carrier, and that the test conducted in Florida in November 2005 used this newer version. He confirmed the test parameters I used in my analysis, which found that the link operated at a relatively high signal-to-noise ratio and thus could have been performed with any of several conventional technologies.

I find this redesign to be very interesting, because I originally heard about xMax from Harold Walker when he cited it as independent validation of his VMSK and other so-called "ultra narrowband" (UNB) schemes that supposedly achieve extraordinarily high spectral efficiencies on the order of 100 bits/sec/Hz or more.

A quick application of the
Shannon channel capacity theorem shows that such high efficiencies are
ridiculously impractical. By rearranging the Shannon channel capacity formula
to solve for the minimum
E_{b}/N_{0} as a function of the
spectral efficiency, we get:

E_{b}/N_{0} >= (2^{R} - 1) / R

where R = C/B, the spectral efficiency in bits/sec per hertz, is obtained by dividing the channel capacity in bits/sec by the channel bandwidth in hertz.

Plugging in R = 100 bits/sec / Hz, we get:

E_{b}/N_{0} >= (2^{100} - 1) / 100

            >= 1.27 * 10^{28}

            >= 281 dB

Let's put 281 dB into perspective. If the receiver system noise temperature is 300 kelvins (room temperature, a typical value), then N_{0} = kT is about 4.14 * 10^{-21} joules. A received E_{b} at least 281 dB above that works out to roughly 5 * 10^{7} joules -- tens of megajoules of energy for every single bit.
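The exponential blow-up in this bound is easy to see by evaluating it for a few spectral efficiencies (a quick illustrative script, not part of the original analysis):

```python
import math

def min_ebno_db(r):
    """Shannon-limit minimum E_b/N_0, in dB, for spectral
    efficiency r in bits/sec per Hz: (2^r - 1) / r."""
    return 10 * math.log10((2 ** r - 1) / r)

# The bound grows roughly 3 dB per bit/sec/Hz once r is large.
for r in [1, 2, 10, 100]:
    print(f"R = {r:>3} bits/sec/Hz -> E_b/N_0 >= {min_ebno_db(r):7.2f} dB")
```

At R = 1 the bound is 0 dB, a regime where real modems operate; at R = 100 it is the 281 dB computed above.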

UNB is of course completely unworkable. The fact that supposed UNB radio modems can pass data under carefully controlled conditions using less than huge amounts of power merely proves that they are not really narrowband at all. A straightforward Fourier analysis, which can even be done by inspection, shows that every UNB scheme actually consists of a weak ultra-wideband component that carries the data plus a narrow spectral line that carries nearly all of the transmitted power but no information at all. Only the spectral line is readily visible on a spectrum analyzer, so it is incorrectly alleged to be the only important part of the signal. The flaw is immediately obvious to anyone with a basic understanding of signal theory.
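That power split is easy to verify numerically. The sketch below is my own illustration with arbitrary parameters, not any vendor's actual waveform: it deletes one carrier cycle in a hundred (a "missing cycle" style gating) and projects the result onto the carrier to separate the coherent spectral line from the wideband data-bearing residual:

```python
import math

SAMPLES_PER_CYCLE = 32

# Hypothetical gating pattern: delete 1 cycle out of every 100.
bits = ([1] * 99 + [0]) * 10          # 1000 cycles, 990 kept

# Build the gated carrier: each cycle is either present or deleted.
signal = []
for b in bits:
    for n in range(SAMPLES_PER_CYCLE):
        signal.append(b * math.cos(2 * math.pi * n / SAMPLES_PER_CYCLE))

# Reference carrier, phase-coherent with the signal.
carrier = [math.cos(2 * math.pi * n / SAMPLES_PER_CYCLE)
           for n in range(len(signal))]

total_power = sum(s * s for s in signal) / len(signal)

# Coherent projection onto the carrier gives the spectral line's amplitude.
amp = 2 * sum(s * c for s, c in zip(signal, carrier)) / len(signal)
line_power = amp * amp / 2
residual_power = total_power - line_power   # the wideband data component

print(f"power in carrier line:            {line_power / total_power:.1%}")
print(f"power in wideband data component: {residual_power / total_power:.1%}")
```

With 1% of the cycles deleted, 99% of the transmitted power sits in the information-free spectral line and only 1% is left for the wideband component that actually carries the data.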

When I first heard of xMax, it seemed to be another variation on the UNB theme. But the information then available was extremely sketchy and its claimed properties unclear. Some xG materials described xMax as a narrowband scheme, others more correctly as a strong narrowband carrier used to synchronize a weak wideband signal. The descriptions were so vague that I held off commenting on it.

But xMax's original "UNB" approach is clearly confirmed by xG's recently issued patents, particularly Suppressed Cycle Based Carrier Modulation Using Amplitude Modulation by Bobier and Kahn, and Missing Cycle Based Carrier Modulation by Bobier. Both patents fall into the same trap as Walker: they incorrectly state that deleting individual cycles from an RF carrier waveform can transmit data without the need for sidebands. This scheme is, in fact, so similar to several of Walker's own UNB schemes that Walker claims they infringe his patents, and that xG abandoned the approach for that reason.

That's one possible explanation for the change in the xMax design reported by Schwartz. Another possibility is that Schwartz, or another consultant, convinced Bobier and xG that his original approach was fundamentally flawed and a new one was designed from scratch. In any event, these patents make clear that at the time he wrote them, Bobier lacked a correct understanding of signal theory and made fundamental mistakes, albeit mistakes that others such as Walker have also made.

Schwartz says that the new scheme performs synchronization by "Symbol Sensed Synchronization". The new method may or may not be novel in its specifics, but just about every conventional digital modulation scheme performs some sort of synchronization on the transmitted data symbols.

Schwartz acknowledges that, in terms of spectrum efficiency measured in bits per second per hertz, xMax is not the most efficient system for wireless communications. This is undoubtedly true. He is also completely correct that in a multi-user environment, where one user's signal is interference for all other users on the same channel, we need a more meaningful measure of spectral efficiency.

But then Schwartz invents his own metric: bits per second per Hertz per Watt. This makes little sense to me, or to the other engineers familiar with the problems of cellular radio with whom I have spoken.

First of all, this number depends heavily on test conditions. High attenuation radio paths always require more power than low attenuation paths when all other factors are held constant, so it's hard to see how a single, well-defined metric could exist for each modulation method.

But the bigger problem with Schwartz's metric is that it simply
doesn't fit the real-world requirements of cellular telephony when, as
he says, interference is the limiting factor. Interference -- not transmitter
power. Cellular operators are invariably concerned with three critical
parameters: the total number of bits per second of network capacity
that can be sold to customers; the required amount of RF spectrum that
usually has to be bought at auction; and the number of cell sites that
have to be built. That's why the industry-accepted metric is bits/sec
per Hertz *per cell sector* in a sea of cells reusing the same
channel(s). In most cellular networks, the required RF spectrum and
the required number of cell sites can be traded off against each other
to a considerable extent, so this metric is fairly well defined for
each type of modulation.

This is where CDMA, a form of spread spectrum, really
shines. Although a single spread spectrum signal has an inherently low
spectral efficiency in bits/sec per Hertz, CDMA's considerable
resistance to interference permits many users to share the same
channel at the same time. Furthermore, the same channel can be reused
in adjacent cells. This is not possible with non-spread methods like
GSM where, to protect its neighbors, each cell site can only use a
fraction of the total spectrum available to the system. This "1:1
frequency reuse" property of CDMA more than makes up for the lower
spectral efficiency of the individual CDMA signal, so at the *system*
level, CDMA is more spectrally efficient than GSM.
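The system-level point can be made concrete with a toy calculation. The link efficiencies and reuse factor below are hypothetical round numbers of my choosing, not measured values for any real CDMA or GSM deployment:

```python
def system_efficiency(link_bps_per_hz, reuse_factor):
    """bits/sec per Hz per cell sector: per-link efficiency times the
    fraction (1/reuse_factor) of the system's spectrum each cell may use."""
    return link_bps_per_hz / reuse_factor

# A non-spread system: higher per-link efficiency, but each cell can use
# only 1/7 of the spectrum to protect its neighbors (hypothetical numbers).
narrowband = system_efficiency(1.0, 7)

# A spread system: lower per-link efficiency, but 1:1 frequency reuse.
spread = system_efficiency(0.3, 1)

print(f"narrowband, reuse 7: {narrowband:.3f} bps/Hz/sector")
print(f"spread, reuse 1:     {spread:.3f} bps/Hz/sector")
```

Even with a 3x worse per-link efficiency, the 1:1 reuse system comes out ahead at the system level, which is the tradeoff described above.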

Schwartz says that xMax conveys 17 Mb/s in a bandwidth of 26 MHz,
or 0.6538 bps/Hz. This is clearly not a spread spectrum system. 17
Mb/s is obviously much greater than needed for a VoIP call (perhaps
10-20 Kb/s) so the transmitter would presumably operate at a low duty
cycle, with users taking turns transmitting at different times.
The xMax VoIP system therefore resembles a TDMA system like GSM, and
will probably suffer from the same interference sensitivity and
resulting frequency reuse difficulties, especially if the required
E_{b}/N_{0} is as high as the Florida demonstration
suggests (about 17 dB). But if each transmitter is to occupy the
entire 902-928 MHz ISM band, there would be no other channels to
allocate to cells in a checkerboard pattern. Inter-cell interference
would thus seem to be a serious limitation on the capacity of this
system, but we have seen no analysis of xMax's interference resistance
or even a direct measurement of the required E_{b}/N_{0},
on which interference resistance depends.
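A back-of-the-envelope calculation with the figures above shows just how low that duty cycle would be (the per-call VoIP rate is my assumption, taken from the upper end of the 10-20 Kb/s range):

```python
link_rate = 17e6   # xMax burst rate in bits/sec, per Schwartz
bandwidth = 26e6   # 902-928 MHz ISM band, in Hz
voip_rate = 20e3   # assumed per-call VoIP rate, bits/sec (upper end)

# Spectral efficiency of the raw link.
print(f"spectral efficiency: {link_rate / bandwidth:.4f} bps/Hz")

# Fraction of the time each VoIP transmitter would actually be on the air.
duty_cycle = voip_rate / link_rate
print(f"per-call duty cycle: {duty_cycle:.2%}")

# Ideal TDMA capacity of one such channel, ignoring all overhead.
print(f"ideal call slots per channel: {int(link_rate // voip_rate)}")
```

Under these assumptions each transmitter is on the air barely 0.1% of the time, which is why the scheme resembles TDMA: capacity comes from users taking turns, not from spreading.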

This is why Schwartz should calculate the far more meaningful metric of bits/sec per Hz per cell sector, in a sea of co-channel cells, for xMax so we can compare it with CDMA and GSM.

But transmitter power isn't everything. On page 2, Schwartz
presents a table of his bps/Hz/Watt figures that, for some reason, use
the power transmitted by the *base* station, not the mobile
station, for the CDMA column. In the second paragraph he does talk
about the CDMA mobile transmit power only to cite it incorrectly: the
actual average is about 1 milliwatt, not the 0.1 watts (100
milliwatts) cited. The effect is to make CDMA look bad in comparison
to xMax, but again the metric is meaningless in the first place.

Why does the CDMA base station transmit so much more power than the
mobile, even when the different data rates are taken into account?
The two links use different modulation methods, but the error
control coding and E_{b}/N_{0} requirements are
similar, so one would expect the power levels to be similar. The answer is actually
pretty simple: mobile stations are almost always battery operated, so
power is extremely scarce. Base stations are connected to utility
power, so conservation isn't as important. The CDMA forward link could
work with much less RF power, but higher amounts are used to "swamp"
the thermal noise background in the receivers and to ensure that the
forward link is interference limited. This maximizes forward link
capacity and hence spectral efficiency.

The capacity of a CDMA cellular system depends directly on the
E_{b}/N_{0} required by the modulation and coding.
The lower the required E_{b}/N_{0}, the greater the system capacity.
Because unwanted signals appear to a CDMA receiver as noise,
N_{0} should actually be N_{0} + I_{0}, the
sum of the thermal noise density plus the interference density. That
makes the receiver operating point
E_{b}/(N_{0}+I_{0}). If thermal noise were
negligible, then increasing all the
base station transmit powers by 10 dB would increase both E_{b}
and I_{0} by 10 dB. The receiver operating point,
and hence system capacity, would
remain unchanged. But N_{0} is not negligible at low power levels.
So as the base station transmit powers are increased,
N_{0} becomes less and less significant, and the overall ratio
E_{b}/(N_{0}+I_{0}) increases asymptotically
toward E_{b}/I_{0}. By increasing the base station
transmit powers a reasonable amount, we maximize forward link
capacity. This isn't possible on the reverse link, primarily because of severe battery
energy limitations and secondarily because of RF exposure considerations, so we tolerate the
capacity loss due to thermal noise. CDMA base stations can still compensate for low
mobile transmit powers with higher quality receivers and multiple receive antennas.
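The asymptotic behavior described above is easy to see numerically. The 7 dB E_{b}/I_{0} target and the unit noise levels below are arbitrary illustrative values, not figures from any deployed system:

```python
import math

# Hypothetical fixed ratio set by the air interface: E_b stays 7 dB above
# I_0 no matter how much power the base station transmits.
EB_OVER_I0_DB = 7.0

def operating_point_db(power_boost_db, n0=1.0, i0_base=1.0):
    """E_b/(N_0 + I_0) in dB when every base station raises its power by
    power_boost_db. E_b and I_0 scale together; thermal noise N_0 is fixed."""
    boost = 10 ** (power_boost_db / 10)
    eb = (10 ** (EB_OVER_I0_DB / 10)) * i0_base * boost
    i0 = i0_base * boost
    return 10 * math.log10(eb / (n0 + i0))

for boost in [0, 10, 20, 30]:
    print(f"+{boost:2d} dB power -> E_b/(N_0+I_0) = "
          f"{operating_point_db(boost):5.2f} dB")
```

As the boost grows, the operating point climbs monotonically toward the 7 dB E_{b}/I_{0} ceiling and the thermal-noise penalty vanishes, which is exactly the rationale for running the forward link hot.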

The example of CDMA cellular demonstrates that power is not necessarily a good indication of spectral efficiency even in a spread spectrum system, and in some cases the two are actually at odds.

Phil Karn, 6 June 2007