The comments as filed can be found here:
https://www.fcc.gov/ecfs/document/1009106258899/1
Text of the comments is below.
Comments of
Open Research Institute, Inc.
#1873 3525 Del Mar Heights Road
San Diego, CA, 92130
United States of America
9 October 2023
Open Research Institute (ORI) is a non-profit research and development organization devoted to Open Source digital radio technology.
ORI’s mission is to provide practical open source wireless communications solutions for terrestrial and space applications. ORI provides significant workforce development opportunities to the volunteers engaging in Open Source technical work.
Open Source means that everything required to reproduce a software, hardware, or firmware design is freely available to the general public. An Open Source design must have free redistribution, allow modifications and derived works, and be non-discriminatory against persons, groups, or fields of endeavor. Open Source work cannot be specific to a particular product, cannot restrict other software or hardware, and must be technology-neutral.
Open Source is vital to the United States’ economic competitiveness in telecommunications. The Internet runs on Linux, an Open Source computer operating system, as 96.3% of the top one million web servers run on Linux, and the vast majority of the underlying networking infrastructure uses either Linux or other Open Source operating systems and libraries.
The telecommunications industry has historically been heavily reliant on proprietary software and hardware. An increase in the adoption of Open Source, from OpenRAN to Nephio, has enabled telecommunications companies to more quickly and efficiently meet market demands. There is a broad trend towards the continuing increase in adoption of Open Source designs in order to avoid silos of wheel reinvention and to promote effective interoperability.
The Open Source community can address this inquiry.
Introduction
ORI agrees that spectrum usage information is generally nonpublic and infrequently available. In the spirit of Open Source and Open Access, we believe that publicly available high-quality data about spectrum usage is in the national interest.
ORI agrees that the need for spectrum usage information will only continue to grow. In order to produce useful models, artificial intelligence and machine learning require enough data to complete the training process. Without enough data, a machine learning model can suffer from a condition called overfitting. When this happens, the model becomes a very expensive copy machine, presenting the training data as output, regardless of the input. The lack of sufficiently large high quality radiofrequency data sets is widely acknowledged as an impediment to research and development for radiofrequency machine learning.
ORI agrees that the development of new and innovative spectrum sharing techniques, allowing increased co-existence among users and services, will improve spectrum management. Spectrum usage
information is required in order to develop new spectrum sharing techniques. This is true whether or not machine learning is used either in the process or in the product. In other words, even if only ordinary humans had the task of improving spectrum sharing over and above what we have today, those humans would still need spectrum usage information to achieve their goal.
Without good spectrum usage information, neither machine learning nor human architects will be able to confidently produce quality results. The most common outcome of making a best guess in the absence of spectrum usage information is a highly conservative spectrum sharing arrangement that does not fully utilize spectrum, overly restricts licensees, and prevents innovation in the field.
Central Question
We want more sophisticated knowledge of non-Federal spectrum usage. The central question of this inquiry is: how can we take advantage of modern capabilities for gaining this knowledge in a cost-effective, accurate, scalable, and actionable manner?
In addition to the other spectrum monitoring efforts listed in this inquiry, we can start with the concepts established by the Spectrum Monitoring Pilot Program from NTIA/NIST.
This program measured spectrum occupancy with standardized distributed receivers reporting to Measured Spectrum Occupancy Databases. These databases publish the metadata of their measurements so that measured data can be retrieved over https:// connections. The concepts of federation are used in order to avoid inefficient and expensive replication of measured data.
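For illustration, federation of this kind means a client queries metadata first and fetches measured data only when it is needed. A minimal Python sketch follows; the URL and JSON field names are hypothetical placeholders, not a published Measured Spectrum Occupancy Database interface.

    # Hypothetical sketch of federated metadata retrieval. The endpoint and
    # JSON fields are placeholders, not a published MSOD interface.
    import requests

    METADATA_URL = "https://msod.example.org/api/metadata"  # placeholder

    def list_measurements(start_utc, stop_utc):
        # Ask a federated database which measurements exist in a time window.
        resp = requests.get(METADATA_URL,
                            params={"start": start_utc, "stop": stop_utc})
        resp.raise_for_status()
        return resp.json()  # e.g., entries with "sensor", "band", "data_url"

    def fetch_measurement(entry):
        # Retrieve one measurement only when it is actually needed, avoiding
        # wholesale replication of measured data between databases.
        resp = requests.get(entry["data_url"])
        resp.raise_for_status()
        return resp.content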
The Spectrum Monitoring Pilot Program had two classes of receivers. One was dedicated to radar and the other to communications. The communications receiver approach is an order of magnitude less
expensive than the radar receiver approach.
If we restrict the hardware package to the less expensive category of communications receiver, use modern software-defined radios with Open Source software, incorporate existing online radios (“WebSDRs”) alongside the standardized stations, and are deliberate and innovative about incentives, then a low-resolution receiver network that can produce useful spectrum usage information is achievable.
A Low Resolution Receiver Network – Why?
Why low resolution? Because a broad heatmap of spectrum usage, even at low resolution, is valuable in a different way for spectrum management purposes than a small amount of high resolution data from
one service in one geographical area.
The current situation is one of essentially no real-time spectrum usage information. Even if we simply had noise floor measurements across the bands across the country, and even if those measurements were
gathered from stations of varying quality, we would have an immense improvement in our capacity to intelligently manage our spectrum over having no measurements at all.
Towards a Weather Map of the National Noise Floor
Noise floor is a measure of noise power per unit bandwidth. Getting a snapshot of something like a National Noise Floor, comparable to a national radar weather map, requires a diversity of radio receivers.
We need to be able to measure or estimate power spectral density from these receivers. Services with intermittent use must be measured often enough to produce minimally accurate data, or those services
must be covered with alternate techniques.
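For stations that can provide raw samples, one standard way to estimate power spectral density, and from it the noise floor, is Welch's method. Below is a minimal Python sketch, assuming complex baseband samples and a known sample rate; absolute accuracy depends on station calibration, but uncalibrated stations can still contribute useful relative measurements.

    # Minimal noise-floor sketch using Welch's method, assuming complex
    # baseband samples `iq` captured at sample rate `fs` (samples/second).
    import numpy as np
    from scipy import signal

    def noise_floor_dbm_per_hz(iq, fs):
        # Estimate the power spectral density, then take the median bin as
        # a rough noise-floor estimate (the median resists bias from signals).
        _, psd = signal.welch(iq, fs=fs, nperseg=4096,
                              return_onesided=False, scaling="density")
        floor = np.median(psd)             # watts/Hz, if iq is calibrated
        return 10 * np.log10(floor) + 30   # convert W/Hz to dBm/Hz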
As an example of where alternate techniques can work, FT8 mode contacts on the 17 meter Amateur Radio Service band can be obtained using Grid Tracker (https://gridtracker.org/). These text-based reports can be used to estimate the power spectral density at the reporting radios. Reports from radios can be used to create a map of real-time spectrum usage without directly measuring the spectrum. These are estimates, but they are based on the measured signal-to-noise reports that all the radios give each other in each 15-second exchange.
We can compare this type of observation to eavesdropping on roomfuls of people talking to each other, and writing down how many conversations (and attempted conversations) there were. Multiple
eavesdroppers can spread through the house, combine their notes, and show where the conversational density was highest.
What does this look like on the 17 meter Amateur Radio Service allocation with a typical FT8 station?
A single radio reports a stream of which stations contacted which other stations. The stream consists of all the contacts that the radio has heard from that location. The radio is not directly contacting all of these other stations, but it can hear them all, and it tracks who is trying to talk to whom and when. Any radio on the band can act as the eavesdropper described above.
Open Source software (called WSJT-X) controls the radio and demodulates and decodes all received transmissions across the entire sub-band. WSJT-X does include a spectrum waterfall display, which could be used to obtain the power spectral density of all the simultaneous transmissions, but we do not have to do this. We instead use another commonly installed Open Source software program (called Grid Tracker) which takes the text output of WSJT-X and provides a text list of active stations and their reported signal power. This list can produce a calculated estimate of the power spectral density in the band. It is less computationally intensive to work with text-based signal reports for fixed formal signals like FT8 than it is to use a spectrum analyzer waterfall or to process the IQ recordings of a radiometer.
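A minimal sketch of this calculation follows. It assumes a simplified report format of (audio offset, SNR) pairs rather than the full Grid Tracker output, relies on the WSJT-X convention that FT8 signal reports are referenced to a 2500 Hz noise bandwidth, and takes the local noise density as a given (a real station would measure it).

    # Hedged sketch: estimate total in-band FT8 signal power from decoded
    # signal reports. Each report is assumed to be (audio offset in Hz,
    # SNR in dB); FT8 SNR is referenced to a 2500 Hz noise bandwidth.
    REF_BW_HZ = 2500.0

    def estimated_band_power_watts(reports, n0_watts_per_hz):
        # Per signal: P = N0 * 2500 Hz * 10^(SNR/10), summed over decodes.
        # n0_watts_per_hz is the local noise density (assumed here).
        total = 0.0
        for _offset_hz, snr_db in reports:
            total += n0_watts_per_hz * REF_BW_HZ * 10 ** (snr_db / 10.0)
        return total

    # Example with made-up decodes and an assumed noise density:
    decodes = [(1512, -12), (740, -3), (2210, -18)]
    print(estimated_band_power_watts(decodes, n0_watts_per_hz=1e-20))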
Adding more radios (more eavesdroppers) improves coverage.
Does this exact network exist today? It almost does. Instances of Grid Tracker, very commonly installed alongside WSJT-X and running whenever the station is active, already report all of this information, but as of today they do not have coordinated reporting. However, since the software is Open Source, adding an opt-in function that does some math and donates the data to a server, producing a National Noise Floor snapshot for this particular mode, is achievable.
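What might that opt-in function look like? One possible shape is sketched below, with a placeholder server URL and JSON schema that a real design would standardize and publish.

    # Hypothetical opt-in reporter. The endpoint and schema are placeholders;
    # only an aggregated snapshot is donated, and raw logs stay local.
    import json, time, urllib.request

    REPORT_URL = "https://noisefloor.example.org/api/v1/reports"  # placeholder

    def donate_snapshot(station_id, grid_square, band_hz, psd_dbm_per_hz):
        payload = {
            "station": station_id,
            "grid": grid_square,
            "band_hz": band_hz,
            "psd_dbm_per_hz": psd_dbm_per_hz,
            "timestamp": int(time.time()),
        }
        req = urllib.request.Request(
            REPORT_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status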
This example outlines a proof of concept of a very tiny slice of the HF spectrum at 18.100 MHz, but it shows the advantage of using existing Open Source software, existing radios and existing communities
of licensed operators. Incentives for licensees to participate could range from the simple opportunity to defend their licensed spectrum to recognition awards for donating data consistently over time.
Achieving Aggregated Wireless Sensing
How can we broaden this concept? First, leverage existing online radio receivers, such as broadband WebSDRs. See http://www.websdr.org/ for deployed examples. Power spectral density measurements or estimates can be obtained from receivers already on the air and already connected to the Internet.
An anticipated and understandable objection is that the multitude of WebSDRs across the country are not calibrated stations, nor are they standardized. Their owners could take them down at any time. A
new one might be put on the air tomorrow. The answer to these objections is that the aggregation of these observations, even if the observations are of varying quality, provides immense value in efforts to
improve spectrum management because these receivers can produce real-time spectral usage information with no additional radio hardware investment.
We should, of course, not stop there. We must commit to a both/and and not an either/or approach to answer the central question of this inquiry.
Second, deploy inexpensive, standardized, and calibrated receivers to libraries, schools, post offices, and any other institution or organization that can be either incentivized or mandated.
For a model of an Open Source standardized distributed receiver system producing real-world practical radio results, please refer to the SatNOGS project at https://satnogs.org/
What are some standardized station examples that we could deploy in the United States to achieve the
goals of this inquiry?
An Open Source PLUTO SDR running the Open Source Maia software creates an inexpensive spectrum analyzer with a built-in web server. Adding the federated reporting functions is possible because the Maia source code can be modified to include them. Maia can be found at https://maia-sdr.org/. Documentation for the standard PLUTO firmware (which is largely replaced by
the Maia firmware) can be found at
https://github.com/analogdevicesinc/plutosdr-fw
and documentation for the PLUTO hardware can be found at
https://wiki.analog.com/university/tools/pluto/hackers
A PLUTO/Maia package can cover frequencies from 70 MHz to 6 GHz. It would require one or more antennas (depending on how many bands are to be monitored by that station), a power supply, a weatherproof enclosure, mechanical attachments, and cables. A proof of concept would be expected to cost less than the Spectrum Monitoring Pilot Program communications receiving station proof of concept, which came in at $6000 according to “An Overview of the NTIA/NIST Spectrum Monitoring Pilot Program”.
This can be read at
https://its.ntia.gov/umbraco/surface/download/publication?reportNumber=CottonSpectMonIwssSubmitted.pdf
A second and even less expensive example of a standardized station would be an RTL-SDR
https://www.rtl-sdr.com/about-rtl-sdr/
and a Raspberry Pi
https://www.raspberrypi.com/
running Linux. This kit can use a large number of Open Source software-defined radio packages. It can be tuned to any of the bands in its operating range of 2.4 MHz to 2 GHz. For a sweep of the entire operating range, multiple antennas and combiners would be necessary, along with additional equipment and software.
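As an illustration of how such a kit might scan several bands in turn, here is a short Python sketch using the Open Source pyrtlsdr bindings; the band centers and capture length are arbitrary example values, and the uncalibrated power readings are relative rather than absolute.

    # Illustrative multi-band scan with pyrtlsdr (pip install pyrtlsdr).
    # Band list and capture length are example values only.
    import numpy as np
    from rtlsdr import RtlSdr

    BAND_CENTERS_HZ = [144.2e6, 435.0e6, 915.0e6, 1296.0e6]  # examples

    sdr = RtlSdr()
    sdr.sample_rate = 2.4e6  # samples/second
    sdr.gain = "auto"

    for center in BAND_CENTERS_HZ:
        sdr.center_freq = center
        iq = sdr.read_samples(256 * 1024)
        # Mean power in dB relative to full scale: uncalibrated, but still
        # useful for aggregated, relative occupancy maps.
        p_dbfs = 10 * np.log10(np.mean(np.abs(iq) ** 2))
        print(f"{center / 1e6:8.3f} MHz: {p_dbfs:6.1f} dBFS")

    sdr.close()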
The WebSDRs combined with the standard package stations form a heterogeneous distributed receiver network. In aggregate, with enough stations, the resulting network can provide useful real-time reports
of spectrum usage information. A centralized visualization of actual spectrum usage, even if not real-time, would be very informative. If a picture is worth a thousand words, then a video is worth a thousand pictures. People seeing gaps in the data may be motivated to put up or sponsor a station to fill in the gaps, similar to the effect we see with personal weather stations that measure temperature, wind
speed, and other meteorological aspects.
TAC Working Group as Asset
The Dynamic Spectrum Allocation Working Group of the Technological Advisory Council of the Federal Communications Commission could provide advisory feedback on real-time spectral usage information obtained through opportunistic and inexpensive sensor networks, invite speakers to present about the topic, and give specific recommendations. Leveraging an existing Working Group with familiarity in this particular topic would be an efficient use of already-available expert advice.
Conclusion
A National Noise Floor heatmap, even if low resolution, is achievable and it is valuable. Any reasonable real-time data, whether obtained opportunistically or purposefully, is an enormous step forward compared to no data at all.
There are drawbacks to low resolution data. The limits of the resolution must be acknowledged. The measurements must at least be reasonable, meaning that a noise floor snapshot includes enough information that lower-power signals are not completely overlooked or missed. For each frequency allocation measured, a subject matter expert in that allocation would be expected to compare the real-time spectrum usage information to a model of expected spectrum usage. The difference between this theoretical or calculated model and the real-time spectrum usage information is valuable and informative in many ways. A subject matter expert would be able to explain observed differences, explain whether and why any difference was sufficient reason to make spectrum management adjustments, and provide feedback for improved spectrum sensing. There is no one-size-fits-all solution for either the measurement stations involved or the allocations they are measuring.
The architecture for gaining visibility of spectral usage has been previously piloted in the Spectrum Monitoring Pilot Program. This and other excellent prior work can be adapted, and citizens can be
incentivized to participate in order to scale up the sensor network. Incentives range from the simple fact of being able to individually contribute directly towards the defense of a spectral allocation, to awards or recognition for the technical achievement of constructing and calibrating a station to a published standard, to a scoreboard of who provided the most consistent reports of real-time spectral information
over specific lengths of time.
There is a large amount of Open Source software and hardware that can be used to reduce costs and reward high quality collaborative and cooperative work. A “lower-tech, inexpensive, diverse, start now” approach, rather than a “high-tech, expensive, maybe sometime in the future” approach, is cost-effective, accurate (enough), scalable, and actionable.
Respectfully,
Michelle Thompson
CEO ORI