Inner Circle Newsletter August 2025

The Who What When Where Why

Open Research Institute is a non-profit dedicated to open source digital radio work on the amateur bands. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re an all-volunteer organization, and we work to use and protect the amateur radio bands.

You can get involved in our work by visiting https://openresearch.institute/getting-started 

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Solution to July Puzzle

August Puzzle: The Mysterious Lock Loss

-- SCENARIO: You're debugging a Costas loop implementation that works
-- perfectly in simulation but fails intermittently in hardware.
-- The loop locks quickly to F1 (carrier + 1 kHz), but when the
-- input switches to F2 (carrier + 3 kHz), it sometimes loses lock
-- entirely instead of reacquiring.

-- PUZZLE: What's causing this intermittent lock loss?
-- HINT: Look carefully at the loop filter characteristics and gain scheduling.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
use IEEE.MATH_REAL.ALL;

entity costas_loop_puzzle is
    generic (
        DATA_WIDTH  : integer := 16;
        PHASE_WIDTH : integer := 12;
        F1_OFFSET   : integer := 1000;  -- 1 kHz offset from carrier
        F2_OFFSET   : integer := 3000   -- 3 kHz offset from carrier
    );
    port (
        clk         : in  std_logic;
        reset       : in  std_logic;
        rf_input    : in  signed(DATA_WIDTH-1 downto 0);
        freq_select : in  std_logic;  -- '0' for F1, '1' for F2
        -- Outputs for debugging
        i_data      : out signed(DATA_WIDTH-1 downto 0);
        q_data      : out signed(DATA_WIDTH-1 downto 0);
        phase_error : out signed(DATA_WIDTH-1 downto 0);
        vco_freq    : out signed(PHASE_WIDTH-1 downto 0);
        lock_detect : out std_logic
    );
end entity;

architecture behavioral of costas_loop_puzzle is

    -- VCO signals
    signal vco_phase    : signed(PHASE_WIDTH-1 downto 0) := (others => '0');
    signal vco_i, vco_q : signed(DATA_WIDTH-1 downto 0);
    signal vco_control  : signed(DATA_WIDTH-1 downto 0) := (others => '0');

    -- Mixer outputs
    signal mixer_i, mixer_q : signed(DATA_WIDTH-1 downto 0);

    -- Loop filter components
    signal integrator   : signed(DATA_WIDTH+4-1 downto 0) := (others => '0');
    signal error_signal : signed(DATA_WIDTH-1 downto 0);

    -- Lock detection
    signal error_magnitude : unsigned(DATA_WIDTH-1 downto 0);
    signal lock_counter    : unsigned(15 downto 0) := (others => '0');

    -- Critical parameters (this is where the puzzle lies!)
    constant KP : signed(7 downto 0) := to_signed(32, 8);  -- Proportional gain
    constant KI : signed(7 downto 0) := to_signed(2, 8);   -- Integral gain

    -- Gain scheduling based on frequency (THE TRAP!)
    signal adaptive_kp : signed(7 downto 0);
    signal adaptive_ki : signed(7 downto 0);

begin

    -- Gain scheduling logic: reduces gains at higher frequencies.
    -- This looks reasonable but creates the error starvation!
    process(freq_select)
    begin
        if freq_select = '0' then  -- F1 mode
            adaptive_kp <= KP;
            adaptive_ki <= KI;
        else  -- F2 mode: "optimize" for stability at higher frequency
            adaptive_kp <= shift_right(KP, 2);  -- KP/4
            adaptive_ki <= shift_right(KI, 3);  -- KI/8
        end if;
    end process;

    -- VCO phase accumulator
    process(clk, reset)
    begin
        if reset = '1' then
            vco_phase <= (others => '0');
        elsif rising_edge(clk) then
            vco_phase <= vco_phase + resize(vco_control, PHASE_WIDTH);
        end if;
    end process;

    -- VCO sine/cosine generation (simplified).
    -- In a real implementation this would be a lookup table;
    -- MATH_REAL is not synthesizable.
    vco_i <= to_signed(integer(32767.0 * cos(real(to_integer(vco_phase)) * MATH_PI / 2048.0)), DATA_WIDTH);
    vco_q <= to_signed(integer(32767.0 * sin(real(to_integer(vco_phase)) * MATH_PI / 2048.0)), DATA_WIDTH);

    -- Quadrature mixers
    process(clk)
    begin
        if rising_edge(clk) then
            -- Multiply and low-pass filter (simplified);
            -- resize truncates the full-width product back to DATA_WIDTH
            mixer_i <= resize(shift_right(rf_input * vco_i, 15), DATA_WIDTH);
            mixer_q <= resize(shift_right(rf_input * vco_q, 15), DATA_WIDTH);
        end if;
    end process;

    -- Costas loop error detector (classic I*sign(Q) approach)
    process(clk)
        variable q_sign : signed(DATA_WIDTH-1 downto 0);
    begin
        if rising_edge(clk) then
            if mixer_q >= 0 then
                q_sign := to_signed(1, DATA_WIDTH);
            else
                q_sign := to_signed(-1, DATA_WIDTH);
            end if;
            error_signal <= resize(shift_right(mixer_i * q_sign, 8), DATA_WIDTH);
        end if;
    end process;

    -- Loop filter with adaptive gains
    process(clk, reset)
        variable scaled_error : signed(DATA_WIDTH+4-1 downto 0);
        variable prop_term    : signed(DATA_WIDTH+4-1 downto 0);
    begin
        if reset = '1' then
            integrator  <= (others => '0');
            vco_control <= (others => '0');
        elsif rising_edge(clk) then
            -- Scale error by adaptive gains
            scaled_error := resize(error_signal * adaptive_ki, DATA_WIDTH+4);
            prop_term    := resize(error_signal * adaptive_kp, DATA_WIDTH+4);
            -- Integrate with adaptive gain
            integrator <= integrator + scaled_error;
            -- PI controller output
            vco_control <= resize(shift_right(integrator + prop_term, 4), DATA_WIDTH);
        end if;
    end process;

    -- Lock detector: measures error magnitude
    process(clk, reset)
    begin
        if reset = '1' then
            lock_counter <= (others => '0');
            lock_detect  <= '0';
        elsif rising_edge(clk) then
            error_magnitude <= unsigned(abs(error_signal));
            if error_magnitude < 100 then  -- Low-error threshold
                if lock_counter < 65535 then
                    lock_counter <= lock_counter + 1;
                end if;
            else
                lock_counter <= (others => '0');
            end if;
            -- Declare lock after 1000 consecutive low-error samples
            if lock_counter > 1000 then
                lock_detect <= '1';
            else
                lock_detect <= '0';
            end if;
        end if;
    end process;

    -- Output assignments
    i_data      <= mixer_i;
    q_data      <= mixer_q;
    phase_error <= error_signal;
    vco_freq    <= resize(vco_control, PHASE_WIDTH);

end behavioral;
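As a nudge toward the solution, the gain-scheduling arithmetic can be checked outside the simulator. Below is a minimal Python model of the two shift_right operations, using the constants declared in the VHDL above; for these small positive constants an arithmetic right shift behaves exactly like Python's >> operator.

```python
# Python model of the VHDL gain-scheduling process.
KP, KI = 32, 2  # constants from the VHDL entity

def scheduled_gains(freq_select: int):
    """Return (kp, ki) as the gain-scheduling process computes them."""
    if freq_select == 0:          # F1 mode: full gains
        return KP, KI
    return KP >> 2, KI >> 3       # F2 mode: KP/4 and KI/8 by shifting

assert scheduled_gains(0) == (32, 2)
assert scheduled_gains(1) == (8, 0)  # note what happened to the integral gain
```

Checking quantized gain arithmetic like this before synthesis is cheap insurance: a gain that truncates to zero quietly changes the loop's behavior and its ability to track a frequency offset.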

Defending Amateur Radio Spectrum: The AST SpaceMobile Battle Continues

Partial Victory in 430-440 MHz Band Defense

The amateur radio community has achieved a significant but limited victory in protecting the 430-440 MHz band from commercial satellite encroachment. AST SpaceMobile’s request for broad commercial use of amateur spectrum has been restricted to emergency-only operations for a maximum of 24 hours and only 20 satellites—but the fight isn’t over.

AST SpaceMobile (AST & Science LLC) operates a constellation of large commercial satellites designed to provide cellular service directly to mobile phones. Think of it as trying to turn satellites into massive cell towers in space. The problem? They wanted to use the 430-440 MHz amateur radio band for their Telemetry, Tracking, and Command (TT&C) operations across a planned 243-satellite constellation.

This isn’t just about frequency coordination—it’s about fundamental spectrum philosophy. The amateur bands exist for experimentation, emergency communications, and education. Commercial operations fundamentally change the character of these allocations, much like turning a public research laboratory into a private factory floor.

The Technical Challenge

AST SpaceMobile’s satellites are massive. These are some of the largest commercial satellites ever deployed, with solar arrays spanning over 700 square meters. These aren’t small CubeSats doing modest experiments. They are industrial-scale infrastructure requiring robust command and control systems.

The company initially deployed five Bluebird satellites in September 2024, operating on amateur frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth. These were launched and operated without proper authorization. Each planned satellite would require TT&C channels with bandwidths between 64 and 256 kHz, creating a significant interference footprint across the entire 10 MHz amateur allocation.

The Open Research Institute, along with numerous international amateur radio organizations, filed strong opposition to AST SpaceMobile’s request. Our argument was both technical and philosophical:

Summarized from our filed comment, the technical objections included the following. 

1) Multiple established commercial satellite bands exist (S-band: 2025-2110 MHz, X-band: 8025-8400 MHz, Ka-band: 27.5-30.0 GHz)

2) ITU studies specifically excluded the 430-440 MHz amateur allocation from commercial TT&C considerations

3) Modern satellite technology readily supports operations in higher frequency bands with better propagation characteristics. 430-440 MHz is not the best choice.

We raised the following philosophical and cultural concerns.

1) Amateur radio bands serve critical emergency communications when commercial infrastructure fails

2) These frequencies support STEM education and technological innovation. Where do you think many RF engineers get their start?

3) Commercial encroachment sets a dangerous precedent that could completely destroy the experimental character of amateur allocations

The FCC’s Decision: A Limited Victory

On August 29, 2025, the FCC issued a modified grant that significantly restricts AST SpaceMobile’s operations:

1) 24-hour limit. TT&C operations in the 430-440 MHz band are permitted only for periods not exceeding 24 hours.

2) Emergency only. Operations are restricted to Launch and Early Orbit Phase (LEOP) and emergency situations when no other band is available.

3) 20-satellite cap. Authorization covers only the next 20 satellites, including the FM1 prototype.

FM1 stands for “Flight Model 1” and is AST SpaceMobile’s first “Block 2” BlueBird satellite. It’s a much bigger, more powerful version of their current satellites. According to AST, it is about three times larger than their first-generation BlueBird satellites with 10 times the capacity. Launch dates have been delayed over the past year and the satellite might go up in early 2026. 

This represents a major step back from AST SpaceMobile’s original request for blanket commercial access across their entire constellation.

What Does This Mean for the Amateur Community?

The decision validates several key principles that we and many others have been patiently asserting to regulators. 

1) Amateur spectrum is different. The FCC acknowledged that amateur allocations can’t simply be treated as general-purpose commercial spectrum. The severe restrictions imposed recognize the unique character and public service value of amateur radio.

2) Technical alternatives exist. By limiting operations to emergencies “when no other band is available,” the FCC effectively endorsed our argument that commercial TT&C bands are technically viable for these operations.

3) Precedent matters. While it shouldn’t have to be repeatedly argued, precedent really does matter and vigilance is required in order to keep a solid regulatory foundation for amateur radio. Rather than opening the floodgates to commercial use of amateur spectrum, the FCC imposed strict limits that discourage similar requests from other operators.

Industry Response and Next Steps

AMSAT-DL described this as a “greater (partial) success” for amateur radio and AMSAT satellite operators. The 24-hour emergency-only restriction and 20-satellite cap should give AST SpaceMobile sufficient time to redesign their constellation for proper commercial frequency usage.

However, this isn’t a complete victory. AST SpaceMobile still has temporary access to amateur spectrum, and the company may seek to extend or modify these restrictions as their constellation develops.

Lessons for Open Source and Amateur Communities

This case illustrates several important principles for defending community resources. Documentation matters. Technical arguments backed by ITU studies, engineering analysis, and regulatory precedent carried significant weight in the FCC’s decision. Without this, things would have worked out very differently. 

Community coordination works. International amateur radio organizations presenting unified opposition demonstrated the global impact of spectrum decisions. 

Vigilance must continue. Protecting community resources, whether spectrum, software licenses, IP addresses, or technical standards, requires continuous engagement with regulatory and governance processes. 

The amateur radio community must remain vigilant as commercial space operations continue expanding. AST SpaceMobile’s modified authorization creates a framework for emergency use that other operators will definitely seek to exploit.

We encourage continued support for organizations like AMSAT-DL, ARRL, and the Open Research Institute that actively defend amateur spectrum rights.

You can participate in FCC comment periods on spectrum issues. Yes! You! Reach out to your local amateur radio organization and be part of the process. Support technical education that demonstrates amateur radio’s ongoing innovation. Engage with emergency communications activities that highlight amateur radio’s public service value.

The 430-440 MHz band remains primarily protected for amateur use, but this victory required sustained technical and legal advocacy. Our spectrum allocations, just like our open source projects, exist because communities actively defend and develop them. The technical part of a project is never the hardest part. The hardest part of any project is the people part. Negotiating, collaborating, compromising, defending, and communicating in a complex world are all “people work”. 

Technical Details and References

For those interested in the regulatory details, please refer to the following documents. 

FCC Proceeding 25-201 (application/licensing proceeding)

ICFS File Number SAT-MOD-20250612-00145 (FCC’s “case number” or filing reference for AST SpaceMobile’s request to modify their satellite authorization.)

Open Research Institute Comment to 25-201 (Filed July 21, 2025 by Michelle Thompson W5NYV)

FCC Decision (DA-24-756A1.pdf from August 29, 2025)

The full technical analysis includes frequency coordination studies, interference modeling, and alternative band analysis available through the FCC’s Electronic Comment Filing System.

ORI’s Tiny Payload Delivered to AmbaSat

Most recent mission update from AmbaSat is below.

Dear AmbaSat Launch Partner,

A big thank you to everyone who has already returned their AmbaSat-1 ChipSats – your support and timely action are helping to keep us on track for mission integration.

If you haven’t yet returned your ChipSat, we kindly ask that you do so as soon as possible to ensure its inclusion in the upcoming 3U CubeSat assembly. If there are any issues or delays, please don’t hesitate to contact us directly at support@ambasat.com — we’re happy to assist.

Mission Update

We’re pleased to share that the UK Civil Aviation Authority (CAA) engineering team recently visited AmbaSat HQ, where they carried out a detailed review of our processes, concept of operations, risk management strategy, and supporting documentation. Twice-monthly CAA meetings are ongoing.

Following the visit, we’ve developed a focused CubeSat action plan to:

Reduce the risk of fragmentation from stored energy

Strengthen documentation of our engineering methodology and V&V campaign

Document and minimise the chance of accidental debris release

Finalise details of the ground segment and operational architecture, including responsibilities and procedures

Document acceptable re-entry and collision risks

Produce additional virtual modelling of the combined AmbaSat, Flight Tray, and CubeSat assembly

In summary, we’re making solid progress towards both integration and licensing milestones, and we’ll continue to keep you updated as the mission advances.

Thank you once again for being part of this exciting step into Low Earth Orbit.

Ad Astra,

Martin & the AmbaSat Team

AmbaSat Ltd

Office: +44 (0)1609 600884

Email: martin@ambasat.com

Web: https://ambasat.com

Opulent Voice Progress Report: Issue #22

Paul Williamson, KB5MU

Here’s a summary of Dialogus debugging progress leading up to September 2, 2025 and resulting in improved code committed to Git at https://github.com/OpenResearchInstitute/dialogus/commit/741137a17cadbf68086392bce4805cf1d037a029 . The problems we were attempting to address, and some of the key observations made along the way, are captured in this Git issue: https://github.com/OpenResearchInstitute/dialogus/issues/22

Recall that the problems we were seeing had to do with the flow of encapsulated Opulent Voice frame data to be transmitted from the user interface program Interlocutor to the transmit output of the Pluto, where we can observe it with the Rigol RSA5065N spectrum analyzer in the Remote Lab. Let’s start with a rundown of the components of that flow, as implemented here in the lab.

Components of the Frame Data Flow for Transmit

Interlocutor (https://github.com/openresearchinstitute/interlocutor) is a Python program running on a Raspberry Pi 5. It accepts voice input from an attached microphone and/or text message input from a keyboard. Interlocutor understands the logical structure of an over-the-air Opulent Voice frame. It composes these frames to contain the data to be transmitted. Each frame contains 12 bytes of Opulent Voice frame header and 122 bytes of COBS-encoded payload data, for a total of 134 bytes.
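The 12 + 122 = 134-byte layout can be sketched in Python. The COBS encoder below is the standard algorithm; the opaque header bytes and the zero-padding of short payloads are illustrative assumptions, since the frame's internal field definitions aren't spelled out here.

```python
def cobs_encode(data: bytes) -> bytes:
    """Consistent Overhead Byte Stuffing: removes zero bytes by replacing
    them with one-byte block-length codes."""
    out = bytearray([0])        # placeholder for the first code byte
    code_idx, code = 0, 1
    for b in data:
        if b == 0:
            out[code_idx] = code
            code_idx, code = len(out), 1
            out.append(0)       # new placeholder code byte
        else:
            out.append(b)
            code += 1
            if code == 0xFF:    # maximum run of 254 non-zero bytes
                out[code_idx] = code
                code_idx, code = len(out), 1
                out.append(0)
    out[code_idx] = code
    return bytes(out)

HEADER_LEN, PAYLOAD_LEN = 12, 122     # 134 bytes total, per the text

def compose_frame(header: bytes, payload: bytes) -> bytes:
    """Hypothetical framing sketch: 12 opaque header bytes plus the
    COBS-encoded payload, zero-padded to the fixed 122-byte field."""
    assert len(header) == HEADER_LEN
    encoded = cobs_encode(payload)
    assert len(encoded) <= PAYLOAD_LEN, "payload too large for one frame"
    return header + encoded + bytes(PAYLOAD_LEN - len(encoded))
```

For example, `compose_frame(b"\x00" * 12, b"hello")` yields exactly 134 bytes.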

Interlocutor encapsulates each packet into a UDP message with port number 57372 addressed to the IP address of the Pluto. The Linux networking stack routes this by IP address to the appropriate network interface port, eth1 in this case.
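A sender sketch, assuming only what the text states (one frame per UDP datagram, destination port 57372); the function name is made up for illustration:

```python
import socket

OPV_UDP_PORT = 57372  # destination port used by Interlocutor

def send_opv_frame(frame: bytes, host: str, port: int = OPV_UDP_PORT) -> None:
    """Send one encapsulated Opulent Voice frame as a single UDP datagram.
    Routing to eth1 is done by the kernel, based on the destination address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(frame, (host, port))
```

Note that nothing in the sender names an interface: the Linux routing table maps the Pluto's IP address to eth1.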

But eth1 is not physically an Ethernet port at all. The Pluto doesn’t have an Ethernet port. It does have a USB port, and one of the standard ways to connect a Pluto to a computer is to use Ethernet over USB. There are several such protocols, any of which can make the USB connection act like a network connection. The supported protocols on the Pluto are RNDIS, CDC-NCM, and CDC-ECM. Linux supports all three of these protocols, but Windows and macOS each support only one, and of course it’s not the same one. Because we are using more Macs than Windows computers here, we chose CDC-NCM, which is the one supported by macOS. This is configured in the Pluto as explained in https://wiki.analog.com/university/tools/pluto/users/customizing

So our encapsulated packets flow over this simulated Ethernet and arrive in the Pluto. Recall that the Pluto is based on a Xilinx device that contains an ARM computer core and an FPGA fabric (Zynq XC7Z010-1CLG225C), plus an Analog Devices AD9363 radio transceiver. The ARM core runs a build of Linux provided by Analog Devices at https://github.com/analogdevicesinc/plutosdr-fw , which we have lightly customized at https://github.com/openresearchinstitute/pluto_msk/firmware . The encapsulated packets arrive over the USB port (which is part of the Zynq) and are handled by the Linux network stack running on the ARM.

Dialogus (https://github.com/openresearchinstitute/dialogus) is a C program running under Linux on the Pluto’s ARM. Built with the command line flag -DOVP_FRAME_MODE, it listens for the encapsulated Opulent Voice frames arriving on UDP port 57372. Its job is to arrange for the frames to be transmitted in the specified way using the MSK modulator.

The MSK modulator is a part of Locutus (https://github.com/openresearchinstitute/pluto_msk), the FPGA-based implementation of an Opulent Voice modem. Overall, Locutus is a modification of the standard reference design for the Pluto FPGA, provided by Analog Devices within https://github.com/analogdevicesinc/hdl . The reference design is what ships installed with the Pluto to create the PlutoSDR product. PlutoSDR is what’s usually described as a Software Defined Radio (SDR) device. That is, it is a translator between I/Q samples and RF signals, capable of any type of radio transmission and/or reception within its limits of frequency coverage, bandwidth, and sample rate, but only in conjunction with a computer that can supply and/or interpret the stream of I/Q samples. The device itself doesn’t know anything about any modulation or waveform details. That’s up to the connected computer.

Since the goal was to implement the modem inside the FPGA, we added modem blocks inside the mostly-unchanged FPGA reference design. Instead of exchanging I/Q samples with the computer, the modified design exchanges data bits.

Inside the reference design, data flow between blocks is mostly done using a scheme called AXI-S (Advanced eXtensible Interface – Stream). This involves multiple handshaking signals between any two communicating blocks, arranged such that a small quantum of data is transferred when both sides are ready for the transfer, no sooner and no later. Each block has to be designed so that it can pause its operation seamlessly and await the readiness of the other block. This scheme allows for various blocks within the design to process at different speeds, even varying their processing rate in time, without any block falling behind.
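The handshake can be illustrated with a toy Python model (not the real signal-level protocol): a word moves only on cycles where the source asserts valid and the sink asserts ready, so a slow consumer stretches time without losing data.

```python
def simulate_axis(items, ready_pattern):
    """Toy AXI-S model: one item transfers per cycle in which the source
    has data (TVALID) and the sink accepts it (TREADY). Returns the
    received items and the number of cycles consumed."""
    received = []
    i = cycle = 0
    while i < len(items):
        tvalid = True                  # source always has data in this sketch
        tready = ready_pattern(cycle)  # sink readiness varies over time
        if tvalid and tready:
            received.append(items[i])
            i += 1
        cycle += 1
    return received, cycle
```

With a sink that is ready only on even cycles, `simulate_axis([1, 2, 3], lambda c: c % 2 == 0)` still delivers all three items in order; it just takes five cycles instead of three.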

Specifically, in the transmit pipeline, the first major block that handles incoming data from the computer is a specialized DMA controller, with access to the ARM’s memory on the computer side and an AXI-S interface on the other side. In the reference design, this DMA controller feeds into a series of scaling and filtering facilities, which in turn feed into the final processing stages before the DACs. We replaced the unneeded scaling and filtering blocks with the MSK modulator block. Data bits come in from the computer’s memory through the DMA controller and are delivered to the input of the Modulator block. The Modulator block outputs I/Q samples, which pass on into the rest of the reference design’s pipeline, eventually to be delivered to the DACs to create the analog signals.

The framework that controls how a computer talks to and controls PlutoSDR is called IIO, for Industrial Input/Output. It’s an extensive system meant to be general purpose for all sorts of devices that stream data to and/or from a computer, fast or slow. Besides the sample streams themselves, IIO provides a variety of control and status functions. With these, the computer can control the radio as well as the flow of samples. The computer side of IIO can be run on a local machine such as the ARM inside the Pluto, or over a USB interface, or over a network interface. So, PlutoSDR with IIO is __almost__ exactly what we needed. We use IIO essentially as in the reference design, except that we abuse the IIO interface with the computer by using it to carry data bits instead of I/Q samples.

One other thing that will turn out to be important about IIO: because it’s designed to work with both fast hardware and relatively slow software, it handles samples in relatively large batches. The user software creating transmit samples fills up a buffer full of samples using IIO utility functions, and then “pushes” that buffer into the Linux kernel where the actual transfer takes place. The kernel manages a finite pool of buffers, four by default, though this can be adjusted. If a kernel buffer is available, the push operation is quick. If the kernel buffers are all in use, presumably three are full and waiting their turn and one is actively being transferred to the hardware. In that case, an attempt to push a new buffer will result in the user program blocking until a kernel buffer is freed. This creates a similar effect to the AXI-S handshaking: the user program is slowed down so it never gets too far ahead of the hardware processing.
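A toy discrete-time model of that pool behavior (four buffers and a 40 ms drain per buffer, as described above; the scheduling details are simplified guesses):

```python
from collections import deque

def push_wait_times(n_pushes, n_buffers=4, drain_ms=40.0):
    """Model iio_buffer_push() blocking. The producer pushes as fast as it
    can; the hardware drains queued buffers back to back, one per drain_ms.
    Returns how long each push blocked, in simulated milliseconds."""
    in_flight = deque()   # completion times of buffers queued in the kernel
    t = 0.0               # producer's clock
    waits = []
    for _ in range(n_pushes):
        while in_flight and in_flight[0] <= t:
            in_flight.popleft()              # already drained by hardware
        start = t
        if len(in_flight) == n_buffers:      # pool exhausted: block
            t = in_flight.popleft()
        begin = in_flight[-1] if in_flight else t
        in_flight.append(begin + drain_ms)   # this buffer's completion time
        waits.append(t - start)
    return waits

assert push_wait_times(6) == [0.0, 0.0, 0.0, 0.0, 40.0, 40.0]
```

The first four pushes return immediately; after that, every push blocks for one buffer time, which is the same throttling effect the AXI-S handshaking provides in the fabric.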

The 8X Problem

The problem initially detailed in Issue #22 referenced above was that the transmission observed on the spectrum analyzer was taking approximately eight times as long as it should have been taking. The shortest transmission is a single 40ms frame, but we also send a 40ms frame of preamble before the data frame, and follow it with 25 dummy frames (one second) of “hang time” in case another transmission comes along right away, and follow that with a 40ms frame of postamble. So that short transmission occupies 28 frames, which is 1120ms at 40ms per frame. The duration actually observed was estimated at 8500ms using a stopwatch. Here’s what it looked like on the waterfall on the spectrum analyzer.

Image: Short transmission extended by 8x

The Zynq contains a hardware clock that’s useful for precise timestamping from software. I made use of this facility to timestamp each frame, and found most of them to be spaced 302ms to 303ms apart, instead of the nominal 40ms. The spacing was suspiciously consistent. What’s more, the first few frames were transferred more quickly. This was consistent with the spectrum analyzer waterfall display shown in the original issue #22 comment.

After some confusion and fiddling around, the cause became evident. The IIO buffer size in use had not been adjusted to match the size of the transfer. It was set to 1024 samples. Since the Opulent Voice system is intended to be used with realtime voice conversations, we don’t buffer up multiple frames. Instead, there’s an IIO transfer for each individual frame. Since we are sending the logical data from the UDP-encapsulated frame, that is only 134 bytes. 1024 / 134 is about 7.64, and that ratio times 1120ms is about 8559ms, neatly accounting for the observed duration. The software was packing 134 bytes into each buffer, and then pushing the whole 1024-sample buffer into the kernel, and the Pluto had no way to know that only the first 134 bytes were significant.
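The arithmetic, spelled out (all numbers are from the text):

```python
BUFFER_SAMPLES = 1024       # IIO buffer size actually in use
FRAME_BYTES = 134           # significant bytes pushed per frame
FRAME_MS = 40               # nominal frame duration
NOMINAL_MS = 28 * FRAME_MS  # preamble + data + 25 hang frames + postamble

stretch = BUFFER_SAMPLES / FRAME_BYTES  # each frame padded to ~7.64x its length
assert round(stretch, 2) == 7.64
assert round(stretch * FRAME_MS, 1) == 305.7   # near the 302-303 ms spacing seen
assert round(stretch * NOMINAL_MS) == 8559     # matches the ~8.5 s stopwatch estimate
```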

There are two ways to solve that. The simplest way is to change the buffer size to match the frame size, 134. With no other changes, that eliminated almost all of the excess delay. The other way is to leave the buffers alone, and instead replace the calls to iio_buffer_push() with calls to iio_buffer_push_partial() and pass a length of 134 to the latter function. We were suspicious of iio_buffer_push_partial(), because we remembered having difficulty with it in a previous encounter, so I tried both methods and compared the results. Both apparently worked the same. I decided to stick with changing the buffer size to 134.

Looking again at the waterfall display on the spectrum analyzer, we could see that the duration was very close to correct, even though there’s no way to get a precise time measurement from the waterfall display. But now we had precise timestamps in the code, and we could see from the timestamps that the duration was still a little bit long, 1214ms instead of 1120ms. That was an extra 94ms unaccounted for. I re-learned how to use the spectrum analyzer in zero-span mode, and was able to see that the 1214ms duration was real, and not an error in how the timestamps were handled.

Image: 4ms transmission seen in zero-span mode

Putting the spectrum analyzer back into waterfall mode, we observed that the signal was not entirely clean during that 1214ms period. There appeared to be small gaps in the modulation. Something was still wrong.

Possibly Leaking Kernel Buffers

The timestamp trace (posted in a comment to Issue #22) showed that each call to iio_buffer_push() was taking about 39ms to return. That was nice and consistent, but it should not have been. With four kernel buffers allocated, at least the first three calls to iio_buffer_push() ought to return very quickly. But we weren’t seeing that, even after long idle periods.

A reboot of the Pluto cleared up this issue. We found that all 28 of the iio_buffer_push() calls in a short transmission were returning in under 2ms, as expected. The overall duration of the transmission had fallen to 1135ms, just 15ms longer than nominal.

This was still a little too long to attribute to measurement uncertainty. We concluded that there were probably brief underruns adding delays between frames, and attributed this to a timeline with no slack for timing uncertainty introduced by the USB Ethernet interface. We resolved to get control over the timeline and redesign it with wider windows, but in the meantime we moved on to longer transmissions. Voice transmissions.

Voice Transmissions Still Getting Stretched

We made voice transmissions for a count of 10 or 20 mississippis using the PTT button in the Interlocutor GUI. During the transmission, we observed the waterfall display on the spectrum analyzer. There appeared to be tiny gaps, medium sized gaps, and a few pretty large gaps in the modulation (never in the overall power). That’s not right.

We were also collecting timestamp information, of course. It showed that there were 469 encapsulated frames processed, which would add up to 18.76 seconds at 40ms each. At this point, we happened to be using iio_buffer_push_partial(), and the debug log showed 645 calls to iio_buffer_push_partial(). The 645 – 469 = 176 extra pushes could only have been frames inserted by the Dialogus code. It always inserts a preamble frame and a postamble frame, and the log showed that it inserted 174 dummy frames, so that adds up. Only the 25 dummy frames at the end of the transmission are expected, leaving 149 dummy frames that must have been inserted due to underruns. That is, when a 40ms boundary passes and no new encapsulated frame data is available, Dialogus sees this as a possible end of the transmission and starts to count off a one-second hang time filled with dummy frames. This event was also visible in the log, 61 times. That works out to about 2.4 dummy frames emitted per declared hang time. That’s too much to be due to narrow timing windows; each failure to hit a window by a millisecond or several would only generate a single dummy frame. There were still excess dummy frames being sent.
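The frame accounting above can be replayed as arithmetic (numbers from the debug log):

```python
pushes = 645          # calls to iio_buffer_push_partial()
frames = 469          # encapsulated frames received from Interlocutor
hang_events = 61      # times Dialogus started a hang-time countdown

inserted = pushes - frames    # frames Dialogus added on its own
dummies = inserted - 2        # minus one preamble and one postamble
excess = dummies - 25         # minus the expected end-of-transmission hang time

assert inserted == 176
assert dummies == 174         # matches the dummy-frame count in the log
assert excess == 149
assert round(excess / hang_events, 2) == 2.44  # dummy frames per hang event
```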

I made a table of the function call durations for iio_buffer_push_partial(), which we were still using at that time. It tells a confusing story. A total of 18 calls were 7ms or less, indicating that the kernel buffer was freed around the same time the next buffer was pushed. A total of 501 calls were clustered around 40ms, indicating that all the kernel buffers were full around the time the next buffer was pushed. The remaining 126 calls had durations ranging all the way up to 1000ms, which caused an IIO timeout error. How is that possible, with only four kernel buffers that only take 40ms each to clear out? Seemingly at least one of these assumptions is wrong.

A Period of Confusion

We went back to trying to characterize the behavior during short transmissions, mainly because they seemed a lot closer to being correct and they were easier to experiment with. I captured a test with three short transmissions on the waterfall, with matching log files. The three transmissions contained exactly the same data, but they did not look alike on the waterfall. The visible gaps looked different.

We mused about the limitations of the waterfall display. It works by capturing some number of samples, doing an FFT on them, and drawing a row of pixels on the display, and that happens approximately 30 times per second. I don’t recall seeing any documentation on how many samples are used, but it’s probably much less than 100% duty cycle. Worse, 30 per second is barely faster than our frame rate of 25 per second, so we are unlikely to clearly see any details at the level of individual frames, much less fractions of a frame. A faster waterfall display would be useful to have.

For these three short transmissions, I measured the time between successive calls to iio_buffer_push(). Except for several well-understood special cases, they were all clustered tightly around 40ms, as expected.

Image: Three short transmissions on the waterfall


I also measured the duration of each iio_buffer_push() call. They were all 1ms or shorter. That’s good.

And the overall transmission session durations were 1132ms, 1134ms, and 1133ms, still a tiny bit long. Maybe that could be explained as overhead?

The only worrisome indication of a problem was the appearance of the waterfall, and we’re not sure how meaningful that really is for these short transmissions. But we hadn’t forgotten the voice transmissions, which were clearly terrible on the waterfall.

USB Ethernet Suspected

I had spent a lot of energy trying to blame the USB Ethernet connection between the Raspberry Pi and the Pluto, probably because it was an external interface that I didn’t fully understand and could not trace. I went so far as to shop for USB sniffer devices so we could see what was actually happening on the bus. They are expensive, and none of the devices I found advertised a feature for tracing USB Ethernet.

To answer this question, I made a special Dialogus build that didn’t process any of the encapsulated frames beyond noting their length and some bytes of data from each one. This removed any interaction with the FPGA from the test scenario. I ran a lengthy transmission through it. Every single encapsulated frame arrived like clockwork, within several milliseconds of 40ms. The USB Ethernet was working fine when isolated from IIO.

Doing the Arithmetic

During Open Research Institute’s regular Tuesday morning Zoom call for FPGA development projects and such, we had a chance to discuss this with the designer of Locutus, Matthew Wishek, NB0X (see https://www.openresearch.institute/2025/08/04/matthew-wishek-wins-2025-arrl-technical-innovation-award/ ). I had made an attempt to read the relevant VHDL code in the pluto_msk repository to clarify my understanding of how the Modulator interacted with the surrounding AXI-S interfaces. Matthew confirmed my shaky understanding that the Modulator would consume data bits at an absolutely constant rate, and that it was not capable of slowing down for an underrun or of exerting extra “backpressure” on a data source that was trying to go too fast. That constant rate was familiar to me from working on the numerology for the older 4FSK version of Opulent Voice: 54200 bits per second.

A bit of light began to dawn.

That number is based on a full implementation of the Opulent Voice waveform. It starts from our choice of one of the recommended bit rates for the Opus audio codec, which is the key to excellent voice quality. We long ago decided to allow the voice codec to use 16000 bits per second. We also chose one of the frame durations recommended by Opus, 40ms. 16000 * 0.040 = 640 bits = 80 bytes. Wrap it up in IP/UDP/RTP as is usually done for streaming Opus on networks, and you’re up to 80 + 12 + 8 + 20 = 120 bytes. Add 2 for packet framing using COBS, 122. Add 12 bytes of frame header consisting of 6 bytes of station ID, 3 bytes of authentication tag, and 3 bytes reserved for protocol use, and that’s where the 134 byte 40ms frame comes from.

But that’s not the end of the transmitting process. The header is Golay encoded for FEC, which doubles its size to 24 bytes. The COBS data (including IP/UDP/RTP/Opus) is convolutionally encoded for FEC, which doubles its size as well. Now we’re up to 268 bytes. We also prepend an uncoded frame synchronization word before each frame, so the receiving modem can unambiguously find the frame boundaries. The frame sync is 3 bytes long, so now it’s 271 bytes, 2168 bits, and THAT is what gets fed to the MSK Modulator.

2168 bits * 25 frames/second = 54200 bits per second.
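The whole numerology can be reproduced in a few lines (a Python sketch using the field sizes given above):

```python
OPUS_BITRATE = 16000                     # bits per second
FRAME_MS = 40                            # milliseconds per frame

opus_payload = OPUS_BITRATE * FRAME_MS // 1000 // 8   # 80 bytes of Opus
ip_udp_rtp = opus_payload + 12 + 8 + 20               # +RTP, UDP, IP = 120
cobs = ip_udp_rtp + 2                                 # COBS framing = 122
header = 6 + 3 + 3                       # station ID, auth tag, reserved = 12
raw_frame = cobs + header                # the 134-byte 40ms frame

coded = header * 2 + cobs * 2 + 3        # Golay doubles the header, the
                                         # convolutional code doubles the COBS
                                         # data, plus a 3-byte sync word = 271
bits_per_frame = coded * 8               # 2168 bits
bit_rate = bits_per_frame * 1000 // FRAME_MS   # 54200 bits per second
print(raw_frame, coded, bit_rate)
```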

We had been sending the raw data, 134 bytes per frame, to Locutus. That would be great if Locutus implemented the FEC codes and prepended the frame sync word. However, at the current state of development, Locutus is just a bare Modulator. It takes a stream of bits, which have to be at 54200 bits per second, and modulates them using MSK, and that’s all it does. The software doesn’t implement those features, either. We haven’t even reached a firm decision about where those features *should* be implemented, hardware or software. So it was never going to work like that, and we knew that from the start.

I got to work modifying the Dialogus code to send frames of the right length. I added calls in the frame building logic to encode the header and the payload for FEC, and wrote dummy routines that fake the FEC codes by simply including the contents twice. I changed the buffer size to 271, and added checks to make sure the built frames were 271 bytes long. This would at least come close to keeping the Modulator well-fed.
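A minimal sketch of that change (in Python rather than the C of Dialogus; fake_golay, fake_conv, and the placeholder sync word are hypothetical stand-ins, with the fakes simply including the contents twice as described):

```python
SYNC_WORD = b'\x00\x00\x00'   # hypothetical placeholder, not the real sync word

def fake_golay(header: bytes) -> bytes:
    # Stand-in for the Golay encoder: just includes the contents twice.
    return header + header

def fake_conv(payload: bytes) -> bytes:
    # Stand-in for the convolutional encoder: just includes the contents twice.
    return payload + payload

def build_coded_frame(header: bytes, cobs_payload: bytes) -> bytes:
    assert len(header) == 12 and len(cobs_payload) == 122
    frame = SYNC_WORD + fake_golay(header) + fake_conv(cobs_payload)
    assert len(frame) == 271   # the length check added to the frame builder
    return frame

print(len(build_coded_frame(bytes(12), bytes(122))))
```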

Close But No Cigar

Actually, when tested, it was worse. A lot worse. Nearly every call to iio_buffer_push() was taking a long time. I now understood this to mean that the Modulator was consuming data more slowly than we were trying to send it. Not only were the kernel buffers all full, but they were taking a lot more than 40ms each to empty out.

Nonetheless, I was baffled. I re-examined the code that filled up the buffers, for probably the seventeenth time. This is some of the oldest code in the program, having been taken originally from the Analog Devices example code for streaming samples to a PlutoSDR. It’s full of cryptic calls to IIO utility routines, but I knew what those routines did, and it was nothing very complicated in this case where we had only one I/Q channel open for transmit. Really they were just setting up start index, end index, and stride for a loop that would visit each sample in the buffer. One sample for every channel (total of one channel) would constitute an AXI-S transfer when it got into the FPGA, and that same increment was the unit for allocating buffer sizes. Each sample had room for 16 bits of I and 16 bits of Q. Four bytes. Of course, I knew I had to put one byte of frame data in there instead of I/Q samples, and that’s what the code was doing.

There was one weird thing about the code that I did not know the story behind. The code put that one byte of data in two places within the I/Q sample. It carefully copied the data byte into the high byte of the I and then copied it again into the low byte of the Q. There were lovely comments on each line describing accurately what each line of code was doing at the byte level. Whoever wrote that code must have known the reason for it. Maybe they figured that by filling in the first byte and the last byte, they’d have a good chance of having the byte in the right place. Clearly they were aware that only one data byte needed to go into the sample. Unless they were an LLM.

Funny story. I knew for sure that those lines of code had been changed from the Analog Devices reference code. I remembered that the transmit buffers in that reference code had been filled with all zeroes. Which is just about the stupidest thing you could put into the I/Q samples of a demonstration intended to show how it worked. It would generate no modulation for most normal modulation types, and no power output at all for amplitude-modulation methods. Which we learned the hard way, and wasted some time tracking down.

Anyway, despite several good reasons I should have known better, I assumed that code was probably right and went looking for other things to check.

What About the Register Initializations?

There are a bunch of reads and writes to registers in Locutus as part of the initialization code in the main() function of Dialogus. These were inherited from older programs that did their jobs successfully. Some had been modified as new registers were added, mostly to help with debugging. I undertook to read and check them all against the register definitions. I wasn’t really expecting to find anything.

Until I got to the TX_DATA_WIDTH register. Short description, “Modem Tx Input Data Width”. Long description, “Set the parallel data width of the parallel-to-serial converter”. Default value at reset: 8. I knew what this was. The Modulator block is designed to be flexible about its input format, to make it easier to connect to a variety of hosts. When an AXI-S transfer arrives at the Modulator, this register tells it how many bits of that transfer contain meaningful data bits for modulation. I knew it was 8. It had to be 8, because we needed to send an odd number of bytes in each transfer.

But it wasn’t set to 8 in the initialization code. It was set to 32. That meant the Modulator was trying to send out four bytes for every one byte that we intended. I changed it to 8.

I also wanted to know where in the 32-bit “sample” the 8-bit data was supposed to go. I ran a sequence of tests at TX_DATA_WIDTHs of 8, 16, and 24, checking for modulation with data in each of the four locations within the sample. It turns out the buffer-filling code was wrong in both of the positions where it placed the data byte. It should have been placed in the lower byte of I. This is now corrected and documented in the code.
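A sketch of the corrected packing (Python; the little-endian int16 layout of I and Q within the 32-bit sample is an assumption):

```python
import struct

def pack_sample(data_byte: int) -> bytes:
    # One 32-bit IIO "sample": 16 bits of I, then 16 bits of Q.
    # The corrected code puts the data byte in the low byte of I;
    # little-endian int16 layout is assumed here.
    return struct.pack('<hh', data_byte, 0)

buf = b''.join(pack_sample(b) for b in b'\x01\x02')
print(buf.hex())  # each data byte lands in the low byte of I, Q stays zero
```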

Characterizing the Results

I made a long voice transmission with all the usual logging. The waterfall looked good, no dropouts visible. The higher tone was much stronger than the lower tone, but that could be just because the data wasn’t scrambled for whitening or really encoded.

I extracted some statistics from the log file and did what felt like a systematic and thorough evaluation. Everything looked good and I got excited and committed the fixed code. I didn’t say so online, but I thought it was fixed. I did promise a detailed report “later on”, and that turned into this document.

On further study (as a result of writing this document) I can see that there are definitely still some things wrong, and clues about where to look next.

The elapsed time between calls to iio_buffer_push() was a mix of around 40ms (2736 occurrences) and around 0ms (662 occurrences). There were no outliers or long waits between push calls. I’m not sure I understand the mix of 40ms and 0ms elapsed times, but I suspect it can be explained by timing jitter around a too-narrow window. Some more debugging may be needed on this one.

I measured the duration of every iio_buffer_push() call. They were all quick. Most values were 0ms; none were longer than 2ms, and there were only a few examples of 2ms in the run. This is what I’d hope to see.

I also looked at the axis_xfer_count register. This is one I haven’t mentioned before. It is supposed to measure how many AXI-S transfers have taken place. I logged it as a cumulative value and also as a differential between pushes. The difference between start of session and end of session was 458597 transfers, which is almost exactly 135 (not 134) transfers per push, but 169.4 transfers per frame as measured by the overall duration of the session, which was 2707 x 40ms. Neither of those numbers makes sense. It ought to be 271 per frame now. The distribution of the delta values was interesting, too.
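The arithmetic behind those two per-frame figures, using the counts from the log (neither result matches the expected 271 transfers per frame):

```python
total_transfers = 458597                 # axis_xfer_count delta over the session
push_calls = 2736 + 662                  # buffer pushes observed in the log
frames_by_duration = 2707                # session duration divided by 40ms

per_push = total_transfers / push_calls             # about 135
per_frame = total_transfers / frames_by_duration    # about 169.4
print(round(per_push, 1), round(per_frame, 1))
```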

I logged starts and cancels of hang times. There were 664 hang times declared and canceled. I didn’t log dummy frames, but I captured periodic statistics reports and they reveal that 689 dummy frames were sent. Discarding the 25 dummy frames at the natural end of transmission, that’s 664 dummy frames, exactly one for each hang time declared. That’s what we would expect from short underruns. Looking for patterns in the hang time events, I see cases where an encapsulated frame is logged as arriving after a hang time is declared, but the hang time is not canceled. This suggests race conditions in the hang time checking.

Image: Delta of axis_xfer_count between buffer pushes

Conclusion

The debugging detailed here was (probably) good progress, but there’s more work to do before we can consider the problems solved.

The final test will of course be to receive the transmissions created this way, and check that the data error rate is appropriate at a variety of signal strengths. Getting the receiver to work that well may be the harder part of the problem, so it’s essential to keep looking at Dialogus until every weird behavior is understood or corrected.

IIO Timeline Management in Dialogus — Transmit

Paul Williamson, KB5MU 

The next step in debugging/optimizing the design of the transmit pipeline in Dialogus (https://github.com/OpenResearchInstitute/dialogus) is to get control of the timeline. Frames are coming in via the network from Interlocutor (https://github.com/OpenResearchInstitute/interlocutor), being processed by Dialogus, and going out toward the modulator via iio_buffer_push() calls that transfer data into Linux kernel buffers on the Pluto’s ARM. The kernel’s IIO driver then uses the special DMA controller in the Pluto’s FPGA reference design to turn this into a stream of 32-bit AXI-S transfers into the Modulator block. The Modulator block has hard realtime requirements and is not capable of waiting for an AXI-S transfer that is delayed. Dialogus’s job is to make sure the kernel never runs out of DMA data.

Goals

1. Minimize latency

2. Maximize robustness to timing errors

Assumptions

Data frames arrive, UDP-encapsulated, on the network interface from Interlocutor.

Data frames arrive without warning.

Data frames stop without any special indication.

Data frames may then resume at any time, without necessarily preserving the frame rhythm.

Data frames arrive in order within a transmission.

Requirements

A preamble frame must be transmitted whenever the transmitter turns on.

The preamble frame duration needs to be settable up to one entire 40ms frame.

There must be no gap between the preamble and the first frame of data.

A postamble frame must be transmitted whenever the transmitter turns off.

There must be no gap between the last frame of data and the postamble frame.

Dialogus may insert dummy frames when needed to prevent any gaps.

Dialogus must limit the number of consecutive dummy frames to a settable “hang time” duration.

Dialogus should not do any inspection of the frame contents.

Dialogus should not allow frame delays to accumulate. Latency should be bounded.

Dialogus is allowed and encouraged to combine transmissions whenever a transmission begins shortly after another transmission ends. That is, the new transmission begins within the hang time.

Derived Requirements

When not transmitting, Dialogus need not keep track of time.

When a first frame arrives, Dialogus should plan a timeline.

The timeline should be designed to create a wide time window during which a new encapsulated frame may correctly arrive.

The window should accommodate frame jitter in either direction from the base timing derived from the arrival time of the first frame.

The timeline will necessarily include a hard deadline for the arrival of a new encapsulated frame.

When no new frame has arrived by the hard deadline, Dialogus has no choice but to generate a dummy frame.

If an encapsulated frame arrives after the deadline but before the next window opens, Dialogus must assume that the frame was simply late in arriving.

Dialogus should adhere to the planned timeline through the end of the transmission, so that the receiver sees consistent frame timing throughout. The timeline should not drift or track incoming frame timing.

Observations

When the first frame arrives, Dialogus can only assume that the arrival time of that frame is representative of frame timing for the entire transmission. The window for future frame arrivals must include arrival times that are late or early as compared to exact 40ms spacing from the first frame, with enough margin to tolerate the maximum expected delay jitter.

The window does not track with varying arrival times, because that would imply that the output frame timing would track as well, and that’s not what the receiver is expecting. Once a timeline is established, the transmitter is stuck with that timeline until the transmission ends. After that, when the next transmission occurs, a completely new timeline will be established, and the preamble will be transmitted again to give the receiver time to re-acquire the signal and synchronize on the new first frame’s sync word.

When the first frame arrives and Dialogus is planning the time line, the minimum possible latency is achieved by immediately starting the preamble transmission. The hardware must receive the first frame before the end of the preamble transmission, and every frame thereafter with deadlines spaced 40ms apart. That implies that the window must close slightly before that time, early enough that Dialogus has time to decide whether to send the data frame or a dummy frame and to send the chosen frame to the hardware. If the preamble duration is set to a short value, this may not be possible. In that case, Dialogus can either extend the preamble duration or delay the start of the preamble, or a combination of both, sufficient to establish an acceptable window. Essentially, this just places a lower limit on the duration of the preamble.

Let’s think about the current case, where the preamble duration is fixed at 40ms. We’ll ignore the duration of fast processing steps. From idle, a first frame arrives. Call that time T=0. We expect every future frame to arrive at time T = N * 40ms ± J ms of timing jitter. We immediately push a preamble frame and then push the data frame. The kernel now holds 80ms worth of bits (counting down), and Dialogus can’t do anything more until the next frame arrives. If the next frame arrives exactly on time, 40ms after the first frame, then the preamble buffer will have been emptied and DMA will be just starting on the first frame’s buffer. When we push the arrived frame, the kernel holds 80ms of bytes again, counting down. If the next frame arrives a little early, DMA will still be working on the preamble buffer, and the whole first frame’s buffer will still be sitting there, and if we push the arrived frame, it will be sitting there as well, for a total of more than 80ms worth of data. If the next frame arrives a little late, DMA will have finished emptying out and freeing the preamble’s buffer, and will have started on the first frame’s buffer. If we push the arrived frame, there will be less than 80ms worth of data in the kernel. All of these cases are fine; we have provided a new full buffer long before the previous buffer was emptied.
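That walk-through can be checked with a toy occupancy model (a sketch; times in milliseconds, and DMA is assumed to drain data in exact real time with no underrun before the moment we examine):

```python
def kernel_ms(t, pushes):
    """Milliseconds of buffered data in the kernel at time t, given
    (push_time, duration) pairs, with DMA draining 1 ms of data per ms
    from the first push onward."""
    queued = sum(d for pt, d in pushes if pt <= t)
    return queued - (t - min(pt for pt, d in pushes))

base = [(0, 40), (0, 40)]                   # preamble and first data frame at T=0
on_time = kernel_ms(40, base + [(40, 40)])  # exactly 80 ms buffered
early   = kernel_ms(35, base + [(35, 40)])  # more than 80 ms
late    = kernel_ms(45, base + [(45, 40)])  # less than 80 ms
print(on_time, early, late)
```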

What if the frame arrives much later? If it arrives 40ms late, or later, or even a little sooner, DMA will have finished emptying the previous frame’s buffer and will have nothing left to provide to the modulator before we can do anything about it. That’s bad. We need to have taken some action before this is allowed to occur.

Let’s say the frame arrives sooner than that. There are multiple cases to consider. One possibility is that the frame we were expecting was just delayed inside Interlocutor or while traversing the network between Interlocutor and Dialogus. In this case, we’d like to get that frame pushed if at all possible, and hope that subsequent frames aren’t delayed even more (exceeding our margin) or a lot less (possibly even arriving out of order, unbeknownst to Dialogus).

A second possibility is that the frame we were expecting was lost in transit of the network, and is never going to arrive, and the arrival is actually the frame after the one we were expecting, arriving a little early. In this case, we could push a dummy frame to take the place of the lost frame, and also push the newly arrived frame. That would get us back to a situation similar to the initial conditions, after we pushed a preamble and the first frame. That’d certainly be fine. We could also choose to push just the newly arrived frame, which might be better in some circumstances, but would leave us with less margin. Dialogus might need to inspect the frames to know which is best, and I said above that we wouldn’t be doing that.

A third possibility is that the frame we were expecting never existed, but a new transmission with its own timing was originated by Interlocutor, coincidentally very soon after the end of the transmission we were making. In that case, we need to conform Interlocutor’s new frame timing to our existing timeline and resume sending data frames. This may require inserting a dummy frame that’s not immediately necessary, just to re-establish sufficient timing margin. By combining transmissions in this way we are saving the cost of a postamble and new preamble, and saving the receiver the cost of re-acquiring our signal.

In real time, Dialogus can’t really distinguish these cases, so it has to have simple rules that work out well enough in all cases.

Analyzing a Simple Rule

The simplest rule is probably to make a complete decision once per frame, at Td = N * 40ms + 20ms, which is halfway between the nominal arrival time of the expected frame N and the nominal arrival time of the frame after that, N+1. If a frame arrives before Td, we assume it was the expected frame N and push it. If, on the other hand, no frame has arrived before Td, we push a dummy frame, which may end up being just a fill-in for a late or lost frame, or it may end up being the first dummy frame of a hang time. We don’t care which at this moment. If, on the gripping hand, more than one frame has arrived before Td, things have gotten messed up. The best we can do is probably to count the event as an error and push the most recently arrived frame. Note that we never push a frame immediately under this rule. We only ever push at decision time, Td. We never lose anything by waiting until Td, though. In all cases the kernel buffers are not going to be empty before we push.
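The rule can be sketched as bookkeeping around the once-per-frame decision (a hypothetical Python sketch; the actual push calls and timing mechanics are elided):

```python
FRAME_MS = 40  # decisions happen at Td = N * FRAME_MS + 20

def decide(arrived, pushed, errors):
    """One decision at Td: push the data frame if exactly one arrived,
    a dummy if none arrived, and the most recent one (counting an
    error) if more than one arrived."""
    if not arrived:
        pushed.append('dummy')
    elif len(arrived) == 1:
        pushed.append(arrived[0])
    else:
        errors.append(len(arrived) - 1)
        pushed.append(arrived[-1])
    arrived.clear()

# Example: frame B is late and lands in the next window alongside C.
pushed, errors = [], []
decide(['A'], pushed, errors)       # Td = 20ms: A arrived in time
decide([], pushed, errors)          # Td = 60ms: B missing, push a dummy
decide(['B', 'C'], pushed, errors)  # Td = 100ms: two arrivals, push C
print(pushed, errors)
```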

It’s easy to see that this rule has us pushing some sort of frame every 40ms, about 20ms before it is needed to avoid underrunning the IIO stream. That would work as long as the jitter isn’t too bad, and coping with lots of frame arrival jitter would require extra mechanisms in the encapsulation protocol and would impose extra latency.

Pushing the frame 20ms before needed doesn’t sound very optimal. We need some margin to account for delays arising in servicing events under Linux, but probably not that much. With some attention to details, we could probably guarantee a 5ms response time. So, can we trim up to 15 milliseconds off that figure, and would it actually help?

We can trim the excess margin by reducing the duration of the preamble. Every millisecond removed from the preamble is a millisecond off the 15ms of unused margin. This would in fact translate to less latency as well, since we’d get that first data frame rolling through DMA that much sooner, and that locks in the latency for the duration of the transmission. This option would have to be evaluated against the needs of the receiver, which may need lots of preamble to meet system acquisition requirements.

We could also reduce the apparent excess margin by just choosing to move Td later in the frame. That effectively makes our tolerance to jitter asymmetrical. A later Td means we can better tolerate late frames, but our tolerance of early frames is reduced. Recall that “late” and “early” are defined relative to the arrival time of that very first frame. If that singular arrival time is likely to be biased with respect to other arrival times, that effectively biases all the arrival times. Perhaps first frames are likely to be later, because of extra overhead in trying to route the network packets to an uncached destination. Perhaps other effects might be bigger. We can’t really know the statistics of arrival time jitter without knowing the source of the jitter, so biasing the window seems ill-advised. Worse, biasing the window doesn’t reduce latency.

Conclusion

I’d argue that the simple rule described above is probably the best choice.

ORI Open Source Digital Radio at DEFCON

by KK6OOZ

At DEFCON in RF Village, we had a place to demonstrate work from ORI. We showed off open source synthetic aperture radar with coffee cans and GNU Radio and a PLUTO, and had space to show our “FT8 performance for keyboard chat” RFBitBanger QRP HF kit. We had room for the regulatory work for ITAR/EAR/219 MHz. And, very importantly – we had enough space to show major components of our UHF and up comms Opulent Voice system for amateur terrestrial and satellite fully up and running. At DEFCON, we had the human-radio interface and the modem as separate fully functional demonstrations.

Today, these two components have been combined and are working end-to-end. It’s coughing and sputtering, but it’s a solid first light. This means that microphone/keyboard/data processing from user input to waveforms over the air is happening.

The design goals for the Opulent Voice project are to deliver very good voice quality in a modern way. AMBE/CODEC2 honestly sound terrible. Amateur radio deserves better audio quality. Therefore, we baseline Opus at 16 kbps. It sounds great. Want more? There’s a path to 32 kbps.

We were very tired of a separate broken packet mode for data in ham digital voice product after ham digital voice product. Opulent Voice has keyboard chat and data in a single prioritized stream. No separate clunky packet mode. No 1980s architecture. It just works. In your browser. Or at a command line interface. Chat only, with transcriptions of all the audio received? With a microphone and audio in your ears, never having to look at a screen? Your choice.

There are transcriptions for received audio (if you want that – it’s configurable), and text to speech for text messages is literally the next issue to be addressed. Accessibility is designed in from the start.

For terrestrial use, we have a demonstration conference server running on ORI’s Linode instance. This was up and running for DEFCON. It’s an internet-only implementation of the repeater for terrestrial or space use, so that folks can see what the UI/UX looks like.

Everything is open source. Upcoming events? Opulent Voice will be presented to the ESA, AMSAT-DL, JAMSAT, and others in the next few months.

To everyone here that’s been supportive and wonderful – thank you so much.

RF Village has been instrumental and irreplaceable for this work to get a wider audience. This has been of enormous help. The benefits last year-round.

We’ve exhibited, presented, and contributed towards Ham Radio Village as well.

Here’s where we’ve been publishing and documenting the work. Under active development. Approach with the usual caution.

https://github.com/OpenResearchInstitute/interlocutor human-radio interface

https://github.com/OpenResearchInstitute/dialogus frames-to-modem application code for PLUTO SDR. Accepts Interlocutor connections and delivers them to the PLUTO Opulent Voice modem.

https://github.com/OpenResearchInstitute/pluto_msk Locutus, the award-winning modem firmware for the PLUTO SDR that makes the Opulent Voice waveforms.

Want to set up your own repeater for Opulent Voice?

https://github.com/OpenResearchInstitute/locus

Want to be more involved or informed about our work?

Opulent Voice in Space

Here’s Opulent Voice on a sounding rocket (RockSat-X project). Thank you to University of Puerto Rico for being such a great educational partner! Nice clean signal the entire time.

“Antenna Pattern”, colored pencil on paper, 2006

“Take This Job” For August 2025

Interested in Open Source software and hardware? Not sure how to get started? Here are some places to begin at Open Research Institute. If you would like to take on one of these tasks, please write hello@openresearch.institute and let us know which one. We will onboard you onto the team and get you started.

Opulent Voice:

Add a carrier sync lock detector in VHDL. After the receiver has successfully synchronized to the carrier, a signal needs to be presented to the application layer that indicates success. Work output is tested VHDL code. 

Bit Error Rate (BER) waterfall curves for Additive White Gaussian Noise (AWGN) channel.

Bit Error Rate (BER) waterfall curves for Doppler shift.

Bit Error Rate (BER) waterfall curves for other channels and impairments.

Review Proportional-Integral Gain design document and provide feedback for improvement. 

Generate and write a pull request to include a Numerically Controlled Oscillator (NCO) design document for the repository located at https://github.com/OpenResearchInstitute/nco. 

Generate and write a pull request to include a Pseudo Random Binary Sequence (PRBS) design document for the repository located at https://github.com/OpenResearchInstitute/prbs.

Generate and write a pull request to include a Minimum Shift Keying (MSK) Demodulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_demodulator 

Generate and write a pull request to include a Minimum Shift Keying (MSK) Modulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_modulator

Evaluate loop stability with unscrambled data sequences of zeros or ones.

Determine and implement Eb/N0/SNR/EVM measurement. Work product is tested VHDL code.

Review implementation of Tx I/Q outputs to support mirror image cancellation at RF. 

Haifuraiya:

HTML5 radio interface requirements, specifications, and prototype. This is the primary user interface for the satellite downlink, which is DVB-S2/X and contains all of the uplink Opulent Voice channel data. Using HTML5 allows any device with a browser and enough processor to provide a useful user interface. What should that interface look like? What functions should be prioritized and provided? A paper and/or slide presentation would be the work product of this project. 

Default digital downlink requirements and specifications. This specifies what is transmitted on the downlink when no user data is present. Think of this as a modern test pattern, to help operators set up their stations quickly and efficiently. The data might rotate through all the modulation and coding, transmitting a short loop of known data. This would allow a receiver to calibrate their receiver performance against the modulation and coding signal to noise ratio (SNR) slope. A paper and/or slide presentation would be the work product of this project.

Inner Circle Sphere of Activity

If you know of an event that would welcome ORI, please let your favorite board member know at our hello at openresearch dot institute email address. 

1 September 2025 Our Complex Modulation Math article will be published in ARRL’s QEX magazine in the September/October issue. 

5 September 2025 – Charter for the current Technological Advisory Council of the US Federal Communications Commission concludes. 

19-21 September 2025 – ESA and AMSAT-DL workshop in Bochum, Germany.

3 October 2025 – Deadline for submission for FCC TAC membership.

10-12 October 2025 – See us at Pacificon, San Ramon Marriott, CA, USA.

11-12 October 2025 – Presentation (recorded) to the AMSAT-UK Symposium.

25-26 October 2025 – Open Source Cubesat Workshop, Athens, Greece.


Thank you to all who support our work! We certainly couldn’t do it without you. 

Anshul Makkar, Director ORI
Keith Wheeler, Secretary ORI
Steve Conklin, CFO ORI
Michelle Thompson, CEO ORI
Matthew Wishek, Director ORI

DEFCON33 Open Source Digital Radio Exhibit

At DEFCON in RF Village, we had a place to demonstrate work from ORI. We showed off open source synthetic aperture radar with coffee cans and GNU Radio and a PLUTO, and had space to show our “FT8 performance for keyboard chat” RFBitBanger QRP HF kit. We had room for the regulatory work for ITAR/EAR/219 MHz. And, very importantly – we had enough space to show major components of our UHF and up comms Opulent Voice system for amateur terrestrial and satellite fully up and running. At DEFCON, we had the human-radio interface and the modem as separate fully functional demonstrations.

Today, these two components have been combined and are working end-to-end. It’s coughing and sputtering, but it’s a solid first light. This means that microphone/keyboard/data processing from user input to waveforms over the air are happening.

The design goal of the Opulent Voice project is to deliver very good voice quality in a modern way. AMBE/CODEC2 honestly sound terrible. Amateur radio deserves better audio quality. Therefore, we baseline Opus at 16 kbps. It sounds great. Want more? There’s a path to 32 kbps.

We were very tired of a separate broken packet mode for data in ham digital voice product after ham digital voice product. Opulent Voice has keyboard chat and data in a single prioritized stream. No separate clunky packet mode. No 1980s architecture. It just works. In your browser. Or, at a command line interface. Chat only, with transcriptions of all the audio received? Audio in your ears so you never have to look at a screen? Your choice.

There are transcriptions for received audio (if you want that – it’s configurable), and text to speech for text messages is literally the next issue to be addressed. Accessibility is designed in from the start.

For terrestrial use, we have a demonstration conference server running on ORI’s Linode instance. This was up and running for DEFCON. It’s an internet-only implementation of the repeater for terrestrial or space use, so that folks can see what the UI/UX looks like.

Everything is open source. Upcoming events? Opulent Voice will be presented to the ESA, AMSAT-DL, JAMSAT, and others in the next few months.

To everyone here that’s been supportive and wonderful – thank you so much.

RF Village has been instrumental and irreplaceable in getting this work in front of a wider audience, and the benefits last year-round.

We’ve exhibited, presented, and contributed towards Ham Radio Village as well.

Here’s where we’ve been publishing and documenting the work. Under active development. Approach with the usual caution.

https://github.com/OpenResearchInstitute/interlocutor human-radio interface

https://github.com/OpenResearchInstitute/dialogus frames-to-modem application code for PLUTO SDR. Accepts Interlocutor connections and delivers them to the PLUTO Opulent Voice modem.

https://github.com/OpenResearchInstitute/pluto_msk Locutus, the award-winning modem firmware for the PLUTO SDR that makes the Opulent Voice waveforms.

Want to set up your own repeater for Opulent Voice?
https://github.com/OpenResearchInstitute/locus

Want to be more involved or informed about our work?
https://www.openresearch.institute/getting-started/

Defending Amateur Radio Spectrum: The AST SpaceMobile Battle Continues

Partial Victory in 430-440 MHz Band Defense

The amateur radio community has achieved a significant but limited victory in protecting the 430-440 MHz band from commercial satellite encroachment. AST SpaceMobile’s request for broad commercial use of amateur spectrum has been restricted to emergency-only operations for a maximum of 24 hours and only 20 satellites—but the fight isn’t over.

AST SpaceMobile (AST & Science LLC) operates a constellation of large commercial satellites designed to provide cellular service directly to mobile phones. Think of it as trying to turn satellites into massive cell towers in space. The problem? They wanted to use the 430-440 MHz amateur radio band for their Telemetry, Tracking, and Command (TT&C) operations across a planned 243-satellite constellation.

This isn’t just about frequency coordination—it’s about fundamental spectrum philosophy. The amateur bands exist for experimentation, emergency communications, and education. Commercial operations fundamentally change the character of these allocations, much like turning a public research laboratory into a private factory floor.

The Technical Challenge

AST SpaceMobile’s satellites are massive. These are some of the largest commercial satellites ever deployed, with solar arrays spanning over 700 square meters. These aren’t small CubeSats doing modest experiments. They are industrial-scale infrastructure requiring robust command and control systems.

The company initially deployed five Bluebird satellites in September 2024, operating on amateur frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth. These were launched and operated without proper authorization. Each planned satellite would require TT&C channels with bandwidths between 64 and 256 kHz, creating a significant interference footprint across the entire 10 MHz amateur allocation.

The Open Research Institute, along with numerous international amateur radio organizations, filed strong opposition to AST SpaceMobile’s request. Our argument was both technical and philosophical:

Summarized from our filed comment, the technical objections included the following. 

1) Multiple established commercial satellite bands exist (S-band: 2025-2110 MHz, X-band: 8025-8400 MHz, Ka-band: 27.5-30.0 GHz)

2) ITU studies specifically excluded the 430-440 MHz amateur allocation from commercial TT&C considerations

3) Modern satellite technology readily supports operations in higher frequency bands with better propagation characteristics. 430-440 MHz is not the best choice.

We raised the following philosophical and cultural concerns.


1) Amateur radio bands serve critical emergency communications when commercial infrastructure fails

2) These frequencies support STEM education and technological innovation. Where do you think many RF engineers get their start?

3) Commercial encroachment sets a dangerous precedent that could completely destroy the experimental character of amateur allocations

The FCC’s Decision: A Limited Victory

On August 29, 2025, the FCC issued a modified grant that significantly restricts AST SpaceMobile’s operations:

1) 24-hour limit. TT&C operations in the 430-440 MHz band are permitted only for periods not exceeding 24 hours.

2) Emergency only. Operations are restricted to Launch and Early Orbit Phase (LEOP) and emergency situations when no other band is available.

3) 20-satellite cap. Authorization covers only the next 20 satellites, including the FM1 prototype.

FM1 stands for “Flight Model 1” and is AST SpaceMobile’s first “Block 2” BlueBird satellite. It’s a much bigger, more powerful version of their current satellites. According to AST, it is about three times larger than their first-generation BlueBird satellites with 10 times the capacity. Launch dates have been delayed over the past year and the satellite might go up in early 2026. 

This represents a major step back from AST SpaceMobile’s original request for blanket commercial access across their entire constellation.

What Does This Mean for the Amateur Community?

The decision validates several key principles that we and many others have been patiently asserting to regulators. 

1) Amateur spectrum is different. The FCC acknowledged that amateur allocations can’t simply be treated as general-purpose commercial spectrum. The severe restrictions imposed recognize the unique character and public service value of amateur radio.

2) Technical alternatives exist. By limiting operations to emergencies “when no other band is available,” the FCC effectively endorsed our argument that commercial TT&C bands are technically viable for these operations.

3) Precedent matters. While it shouldn’t have to be repeatedly argued, precedent really does matter and vigilance is required in order to keep a solid regulatory foundation for amateur radio. Rather than opening the floodgates to commercial use of amateur spectrum, the FCC imposed strict limits that discourage similar requests from other operators.

Industry Response and Next Steps

AMSAT-DL described this as a “greater (partial) success” for amateur radio and AMSAT satellite operators. The 24-hour emergency-only restriction and 20-satellite cap should give AST SpaceMobile sufficient time to redesign their constellation for proper commercial frequency usage.

However, this isn’t a complete victory. AST SpaceMobile still has temporary access to amateur spectrum, and the company may seek to extend or modify these restrictions as their constellation develops.

Lessons for Open Source and Amateur Communities

This case illustrates several important principles for defending community resources. Documentation matters. Technical arguments backed by ITU studies, engineering analysis, and regulatory precedent carried significant weight in the FCC’s decision. Without this, things would have worked out very differently. 

Community coordination works. International amateur radio organizations presenting unified opposition demonstrated the global impact of spectrum decisions. 

Vigilance must continue. Protecting community resources, whether spectrum, software licenses, IP addresses, or technical standards, requires continuous engagement with regulatory and governance processes. 

The amateur radio community must remain vigilant as commercial space operations continue expanding. AST SpaceMobile’s modified authorization creates a framework for emergency use that other operators may seek to exploit.

We encourage continued support for organizations like AMSAT-DL, ARRL, and the Open Research Institute that actively defend amateur spectrum rights.

You can participate in FCC comment periods on spectrum issues. Yes! You! Reach out to your local amateur radio organization and be part of the process. Support technical education that demonstrates amateur radio’s ongoing innovation. Engage with emergency communications activities that highlight amateur radio’s public service value.

The 430-440 MHz band remains primarily protected for amateur use, but this victory required sustained technical and legal advocacy. Our spectrum allocations, just like our open source projects, exist because communities actively defend and develop them. The technical part of a project is never the hardest part. The hardest part of any project is the people part. Negotiating, collaborating, compromising, defending, and communicating in a complex world are all “people work”. 

Technical Details and References

For those interested in the regulatory details, please refer to the following documents.

FCC Proceeding 25-201 (application/licensing proceeding)

ICFS File Number SAT-MOD-20250612-00145 (FCC’s “case number” or filing reference for AST SpaceMobile’s request to modify their satellite authorization.)

Open Research Institute Comment to 25-201 (Filed July 21, 2025 by Michelle Thompson W5NYV)

FCC Decision (DA-24-756A1.pdf from August 29, 2025)

The full technical analysis includes frequency coordination studies, interference modeling, and alternative band analysis available through the FCC’s Electronic Comment Filing System.

Inner Circle Newsletter July 2025


Want more Inner Circle Newsletters? Sign up for email delivery at http://eepurl.com/h_hYzL 

The Microwave Link Mystery Puzzle (Solution Below)

Four amateur radio operators (Alice, Bob, Carol, and Dave) are setting up 10 GHz microwave links. Each has a different antenna polarization: Horizontal, Vertical, Right-Hand Circular (RHC), and Left-Hand Circular (LHC).

Polarization Loss Rules

Same polarization: 0 dB loss

Cross-polarized linear (H vs V): 20+ dB loss

Circular to linear: 3 dB loss (either direction)

Opposite circular (RHC vs LHC): 20+ dB loss

The Clues

1. Alice can communicate with Bob with perfect signal strength (0 dB loss).

2. Alice gets terrible reception from Carol (20+ dB loss).

3. Alice receives Dave’s signal at reduced power (3 dB loss).

4. Bob can barely hear Carol (20+ dB loss).

5. Bob gets a good but reduced signal from Dave (3 dB loss).

6. Carol receives Dave’s signal at reduced power (3 dB loss).

7. One operator forgot to rotate their new IC-905 dish from its factory vertical polarization setting.

8. Bob notices that a 10 degree rotation resulted in a lot of signal loss.

Who has which antenna polarization?

Solution:

Alice: Horizontal

Bob: Horizontal

Carol: Vertical (IC-905 factory setting – forgot to rotate!)

Dave: Right-Hand Circular (RHCP)

From clue 1, Alice and Bob have 0 dB loss, therefore they have identical polarization.

From clues 2 & 4, both Alice and Bob get 20+ dB loss from Carol, so Carol has the orthogonal polarization to Alice/Bob. This 20+ dB loss could happen in two scenarios:

1. Alice/Bob are one linear polarization (for example, Horizontal), Carol is the orthogonal linear (for example, Vertical).

2. Alice/Bob are one circular polarization (for example, RHCP), Carol is the opposite circular (for example, LHCP).

From clue 8, Bob has a linear (vertical or horizontal) polarization, since rotating his antenna results in noticeable loss. Rotating a circularly polarized antenna doesn’t result in much loss.

From clues 3, 5, & 6, Dave gets 3 dB loss from Alice, Bob, and Carol. Since 3 dB loss occurs between circular and linear polarizations, and we suspect Bob is linear, then Dave must be circular. 

The only way Dave can get 3 dB loss from all three is if Alice, Bob, and Carol all have linear polarizations and Dave is circular. 

Then clue 7 (IC-905 vertical) helps us determine which linear polarizations they have.

From clue 7, someone has an IC-905 in vertical polarization. That someone must be Carol (since she’s orthogonal to Alice/Bob). Only one operator forgot to rotate, so Alice and Bob can’t both be vertical. 

Therefore: Carol = Vertical, and Alice/Bob = Horizontal.

Since we have Horizontal (Alice/Bob), Vertical (Carol), and one circular (Dave), Dave must be either RHCP or LHCP. We are going to say RHCP, but it is arbitrary. LHCP is correct too. 
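The deduction above can also be checked by brute force. Here is a minimal sketch that encodes the polarization loss rules and clues from the puzzle and searches every assignment (the code is illustrative only, not part of any ORI repository):

```python
from itertools import product

# Pairwise loss (dB) from the polarization loss rules above.
def loss(a, b):
    linear = {"H", "V"}
    if a == b:
        return 0            # same polarization
    if (a in linear) == (b in linear):
        return 20           # cross-polarized linear, or opposite circular
    return 3                # circular to linear, either direction

# Clues 1 through 6 as (operator, operator, loss in dB).
clues = [("Alice", "Bob", 0), ("Alice", "Carol", 20), ("Alice", "Dave", 3),
         ("Bob", "Carol", 20), ("Bob", "Dave", 3), ("Carol", "Dave", 3)]

ops = ["Alice", "Bob", "Carol", "Dave"]
solutions = []
for assign in product(["H", "V", "RHC", "LHC"], repeat=4):
    p = dict(zip(ops, assign))
    if all(loss(p[a], p[b]) == db for a, b, db in clues) and p["Carol"] == "V":
        solutions.append(p)  # clue 7: Carol's IC-905 stayed vertical
```

Exactly two assignments survive, identical except that Dave is RHC in one and LHC in the other, which confirms that his handedness is arbitrary.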

Amateur Radio Band Sudoku

“Take This Job”

30 July 2025

Interested in Open Source software and hardware? Not sure how to get started? Here’s some places to begin at Open Research Institute. If you would like to take on one of these tasks, please write hello@openresearch.institute and let us know which one. We will onboard you onto the team and get you started.

Opulent Voice:

Add a carrier sync lock detector in VHDL. After the receiver has successfully synchronized to the carrier, a signal needs to be presented to the application layer that indicates success. Work output is tested VHDL code. 

Bit Error Rate (BER) waterfall curves for Additive White Gaussian Noise (AWGN) channel.

Bit Error Rate (BER) waterfall curves for Doppler shift.

Bit Error Rate (BER) waterfall curves for other channels and impairments.

Review Proportional-Integral Gain design document and provide feedback for improvement. 

Generate and write a pull request to include a Numerically Controlled Oscillator (NCO) design document for the repository located at https://github.com/OpenResearchInstitute/nco. 

Generate and write a pull request to include a Pseudo Random Binary Sequence (PRBS) design document for the repository located at https://github.com/OpenResearchInstitute/prbs.

Generate and write a pull request to include a Minimum Shift Keying (MSK) Demodulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_demodulator 

Generate and write a pull request to include a Minimum Shift Keying (MSK) Modulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_modulator

Evaluate loop stability with unscrambled data sequences of zeros or ones.

Determine and implement Eb/N0/SNR/EVM measurement. Work product is tested VHDL code.

Review implementation of Tx I/Q outputs to support mirror image cancellation at RF. 
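For the BER waterfall tasks above, here is a minimal sketch of the kind of measurement involved, using BPSK over an AWGN channel. The actual work items target the MSK VHDL implementation; the modulation, bit counts, and Eb/N0 range here are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(7)

def ber_bpsk_awgn(ebn0_db, nbits=200_000):
    """Estimate bit error rate for BPSK over an AWGN channel."""
    bits = rng.integers(0, 2, nbits)
    tx = 2.0 * bits - 1.0                          # map {0,1} -> {-1,+1}
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), nbits)
    rx = (tx + noise) > 0                          # hard-decision slicer
    return np.mean(rx != bits)

# One point per Eb/N0 value traces out the waterfall curve.
waterfall = {db: ber_bpsk_awgn(db) for db in range(0, 9, 2)}
```

Plotting BER against Eb/N0 on a log scale produces the characteristic waterfall shape; the simulated points should track the theoretical Q(sqrt(2*Eb/N0)) curve for BPSK.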

Haifuraiya:

HTML5 radio interface requirements, specifications, and prototype. This is the primary user interface for the satellite downlink, which is DVB-S2/X and contains all of the uplink Opulent Voice channel data. Using HTML5 allows any device with a browser and enough processor to provide a useful user interface. What should that interface look like? What functions should be prioritized and provided? A paper and/or slide presentation would be the work product of this project. 

Default digital downlink requirements and specifications. This specifies what is transmitted on the downlink when no user data is present. Think of this as a modern test pattern, to help operators set up their stations quickly and efficiently. The data might rotate through all the modulation and coding, transmitting a short loop of known data. This would allow a receiver to calibrate their receiver performance against the modulation and coding signal to noise ratio (SNR) slope. A paper and/or slide presentation would be the work product of this project.

ORI’s “Real and Complex Signal Basics” Article to be Published in QEX

The September/October 2025 issue of ARRL’s QEX magazine features “Real and Complex Signal Basics” by Michelle Thompson W5NYV. The article provides a step-by-step mathematical explanation of how complex modulation works in digital radio systems.

The piece starts with simple single-carrier real signals and builds up to explain quadrature amplitude modulation (QAM). Using clear mathematical derivations, it shows how two real signals can be transmitted simultaneously using sine and cosine carriers that are 90 degrees out of phase, then separated at the receiver using trigonometric identities and integration.
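The separation step described above can be sketched numerically. Two arbitrary amplitudes ride on cosine and sine carriers and are recovered by mixing and averaging over whole carrier cycles (the carrier frequency and symbol values below are illustrative choices, not from the article):

```python
import numpy as np

fs, fc = 100_000, 1_000             # sample rate and carrier frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)      # exactly 10 full carrier cycles

i_sym, q_sym = 0.7, -0.3            # two real values sent simultaneously
s = i_sym * np.cos(2 * np.pi * fc * t) + q_sym * np.sin(2 * np.pi * fc * t)

# Receiver: multiply by each carrier and integrate (average).
# Over whole cycles, cos*cos and sin*sin average to 1/2 while cos*sin
# averages to 0 -- the trigonometric-identity separation the article derives.
i_hat = 2 * np.mean(s * np.cos(2 * np.pi * fc * t))
q_hat = 2 * np.mean(s * np.sin(2 * np.pi * fc * t))
```

Both transmitted values come back intact even though they shared one channel, because the two carriers are orthogonal over the integration interval.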

Subjects covered include:


How real signals create symmetrical frequency domain images

The transition from 4-level amplitude modulation to 16QAM using I and Q coordinates

The mathematical basis for quadrature mixing at both transmitter and receiver

Why complex modulation eliminates unwanted frequency images

How this approach enables higher data rates without requiring finer amplitude resolution

The article emphasizes the practical advantages of complex modulation. You get increased spectral efficiency, easier filtering due to single-sided transmission, and the flexibility to implement any modulation scheme through software-defined radio techniques.

This mathematical foundation underlies much of ORI’s digital radio development work, including the Opulent Voice protocol and other broadband digital communications projects.


The full article is available to ARRL members through QEX magazine. Want to publish it in your club newsletter? The article is available on request from ARRL at qst@arrl.org

Looking to Learn more about IQ Modulation?

Basics of IQ Signals and IQ modulation & demodulation – A tutorial by W2AEW

https://www.youtube.com/watch?v=h_7d-m1ehoY

Software Defined Radio For Engineers (free PDF from Analog Devices)

https://www.analog.com/en/resources/technical-books/software-defined-radio-for-engineers.html

These resources will get you well on your way!

ORI’s FCC Comment on Proceeding 25-201

Opposition to AST & Science LLC (AST SpaceMobile) Request for Amateur Radio Band Usage

July 21, 2025

Executive Summary

We respectfully submit this comment in strong opposition to AST & Science LLC’s (AST SpaceMobile) request to utilize the 430-440 MHz amateur radio band for Telemetry, Tracking, and Command (TT&C) operations for their planned 243-satellite constellation. We urge the Commission to deny this application and direct AST SpaceMobile toward established commercial satellite frequency allocations that are much more appropriate for their commercial operations.

Background and Technical Concerns

First, unauthorized operations are already underway. AST SpaceMobile currently operates five Bluebird commercial satellites launched on September 12, 2024, using amateur radio frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth for telemetry links. This existing operation has already demonstrated the potential for interference with legitimate amateur radio operations. 

The scope of the proposed expansion is a problem. AST SpaceMobile seeks to expand this usage to a 243-satellite constellation, with each TT&C beam supporting command and telemetry channels with bandwidths between 64-256 kHz. This massive expansion would fundamentally transform the character of the amateur radio band from experimental and emergency communications to commercial satellite operations.

Amateur Radio uses this band and is important. The 430-440 MHz band serves a variety of critical Amateur Radio applications, including amateur space communications, weak-signal SSB, digital television, data communications, and repeaters. The amateur radio service in this band supports:

Emergency Communications: Amateur radio operators provide vital public service during disasters when commercial communications infrastructure fails.

Space Communication: Educational and experimental satellite communications that advance the radio arts.

Technical Innovation: Experimentation and development of new communication technologies. Where do we think new engineers come from? Many of them come from amateur radio. 

International Coordination: The proposed constellation will cause interference to amateurs world-wide. This is opposed by a wide variety of international amateur radio organizations. 

Regulatory and Precedential Concerns

This is an inappropriate band for commercial allocation. The 430-440 MHz band is allocated to the Amateur Radio Service, not commercial satellite operations. ITU study groups investigated potential TT&C frequency allocations in the frequency ranges 150.05–174 MHz and 400.15–420 MHz, specifically excluding the amateur allocation at 430-440 MHz. Permitting a commercial satellite constellation to operate in amateur radio spectrum sets a dangerous precedent that could lead to further commercial encroachment on bands reserved for experimental, educational, and emergency communications.

Frequency coordination frameworks exist. Satellite frequency coordination, particularly in these frequency bands, relies on a global regulatory and technical framework maintained by the International Telecommunication Union (ITU). AST SpaceMobile should utilize this established framework rather than seeking unauthorized access to amateur spectrum. ITU study results are clear. ITU study groups conducted sharing studies in various bands which found that no new allocations are suitable for small satellite TT&C on a co-channel sharing basis. Proper commercial allocations exist that would not interfere with amateur operations.

Proposed Alternative Solutions

We recommend the Commission direct AST SpaceMobile to utilize appropriate commercial satellite frequency allocations:

1. S-Band Operations: Migrate TT&C operations to established S-band satellite allocations (2025-2110 MHz and 2200-2290 MHz)

2. X-Band Implementation: Utilize X-band frequencies (8025-8400 MHz) which offer excellent propagation characteristics for satellite communications

3. Ka-Band Adoption: Consider Ka-band frequencies for high-capacity operations

4. Proper ITU Coordination: Work through established international coordination procedures for legitimate commercial satellite spectrum

Technical feasibility is not an issue. Modern satellite technology readily supports operations in these higher frequency bands. The primary frequency bands of S, X, and Ka are more advantageous than using the UHF band, which has a higher probability of local interference.

Economic and Public Interest Considerations

Protecting Public Service is important. Amateur radio operators provide critical emergency communications during disasters. Interference from commercial satellite operations could compromise this vital public service capability. The amateur radio service serves as a proving ground for new technologies and provides STEM education opportunities. Commercial encroachment limits these important societal benefits and harms our national competitiveness. 

Precedential impact is negative. Approving commercial use of amateur spectrum without compelling technical justification would invite similar requests from other commercial operators, potentially destroying the character of amateur radio allocations.

Conclusion and Recommendations: We respectfully urge the Commission to:

1. DENY AST SpaceMobile’s request to operate in the 430-440 MHz amateur radio band

2. DIRECT AST SpaceMobile to utilize appropriate commercial satellite frequency allocations in S, X, or Ka bands

3. REQUIRE proper ITU coordination for international satellite operations

4. REAFFIRM the Commission’s commitment to protecting amateur radio spectrum for its intended non-commercial, experimental, and emergency communications purposes

The amateur radio bands serve critical public interest functions that would be compromised by large-scale commercial satellite operations. Abundant alternative spectrum exists that is specifically allocated for commercial satellite TT&C operations. We urge the Commission to preserve the amateur radio bands for their intended purposes and direct AST SpaceMobile toward appropriate commercial spectrum.


References: FCC DA 25-532 (June 20, 2025), AMSAT-UK Technical Analysis, ITU Radio Regulations and Study Reports, and NASA Small Satellite Guidelines

ORI’s Contribution to FCC Technological Advisory Council


Open Research Institute contributed to the US Federal Communications Commission Technological Advisory Council final report for the 2024-2025 term. A summary of ORI’s final draft contribution to the report is presented here.

We describe how spectrum sharing models must evolve to meet growing demand, particularly focusing on terrestrial-satellite integration. The core thesis suggests we’re experiencing a crisis in current spectrum management that requires transitioning to a new “Era 4” model incorporating AI/ML automation and market-based mechanisms.


Historical Evolution of Spectrum Management

We identify three distinct eras of spectrum management.


Era 1 (1890-1912): Unregulated Model – A “loudest-served” system with no regulatory oversight, which collapsed following the Titanic disaster due to communication congestion.


Era 2 (1927-1981): Command-and-Control Model – Centralized FCC authority making static allocations based on “public interest.” This system struggled with emerging technologies like FM radio and cellular networks.


Era 3 (1993-present): Market-Based/Flexible Use Model – Introduced spectrum auctions and flexible licensing, but now showing signs of regulatory overload and crisis.


Evidence of Current Crisis


Several indicators suggest Era 3 regulatory models are failing.

219 MHz Band Limbo: Years of regulatory deadlock between amateur radio, commercial, and federal interests with zero amateur activity despite allocated rights


C-Band Aviation Disputes: $81 billion auction created interference concerns with radar altimeters, requiring presidential intervention


Inter-agency Conflicts: NTIA and FCC reaching opposite conclusions on identical technical evidence (Ligado case)


Reallocation Resistance: Broadcasting industry claiming all “low hanging fruit” has been picked from spectrum repacking


Technical Challenges in Terrestrial-Satellite Sharing


We highlight complex coordination requirements across multiple services in bands like 7.125-8.4 GHz, including Fixed Satellite Service, Mobile Satellite Service, and various terrestrial services. The SiriusXM situation exemplifies ongoing interference challenges between satellite and terrestrial broadband services.

AI/ML Enhanced Spectrum Management

The report positions AI/ML as essential for Era 4, comparing it to sophisticated air traffic control for the electromagnetic domain. Key capabilities include real-time spectrum sensing and occupancy analysis, dynamic allocation based on interference patterns, pattern recognition for optimization, and automated coordination at scale beyond human regulatory capacity.


However, the report recommends against mandating specific AI/ML technologies, favoring technology-neutral approaches.


Proposed Era 4 Solutions


Band Managers and Spectrum Bucks: Government exits direct allocation, appointing non-governmental band managers who negotiate usage using a “Spectrum Bucks” currency system. This would enable both commercial and non-commercial users to coexist through market mechanisms.

Amateur Radio Model: Highlighted as a successful example of dynamic spectrum sharing through self-regulation, technical excellence requirements, and community governance. Amateur satellites demonstrate effective secondary service operations and have pioneered numerous technologies later adopted commercially.


Regulatory Sandboxes: Supplemental Coverage from Space (SCS) is the first terrestrial-to-satellite spectrum leasing framework, creating economic incentives for cooperation rather than just technical coordination. This hybrid model enables spectrum reuse in the same geographic areas.


Key Recommendations


Improve Spectrum Sensing: Establish independent measurement networks through citizen science projects, public-private partnerships, and dedicated monitoring systems to provide transparent occupancy data.


Create More Regulatory Sandboxes: Use controlled environments to test new sharing models before broad deployment, building on SCS and amateur radio examples.


Optimize Satellite Uplink Sharing: Prioritize sharing arrangements for uplink services while providing separate allocations for downlink services, recognizing the different interference characteristics.


Develop HetNet Principles: Create coordination algorithms that leverage satellite orbital mechanics and optimize handoffs between terrestrial and non-terrestrial networks.


The report concludes that the complexity and scale of modern spectrum management demands a paradigm shift toward automated, AI/ML-enhanced systems that can handle what human regulators cannot, while maintaining proven principles from successful sharing models like amateur radio.

[We’ll share full versions of all the charter items when the final report is approved by the TAC. This should be in early August 2025. -Michelle Thompson ]

What is the ESA FutureGEO Project?

Here is ORI’s response to the call for participation from AMSAT-DL concerning the FutureGEO project, sponsored by the European Space Agency. We are looking forward to participating in the FutureGEO workshop in September 2025.

Matthew Wishek Wins 2025 ARRL Technical Innovation Award

We are thrilled to announce that Matthew Wishek NB0X has been awarded the prestigious 2025 ARRL Technical Innovation Award by the American Radio Relay League (ARRL) Board of Directors. This distinguished honor recognizes licensed radio amateurs who develop and apply new technical ideas or techniques that advance the state of amateur radio technology.

Matthew received this recognition for his innovative contributions to amateur radio digital communications, specifically his development of an open source minimum shift keying (MSK) implementation for software-defined radio.

Matthew’s primary achievement centers on his work with the pluto_msk project, a sophisticated MSK modem implementation designed for the Analog Devices ADALM-Pluto SDR platform. This project represents a significant advancement in efficient digital communications for amateur radio, particularly for the Opulent Voice (OPV) digital voice and data protocol.

MSK modulation is a type of continuous phase modulation that eliminates phase discontinuities, resulting in superior spectral efficiency compared to traditional FSK and other binary modulations. Matthew’s custom hardware description language (HDL) implementation, targeting the AMD Zynq 7010 system on chip, maximizes performance and resource utilization. The design has multiple sophisticated components including numerically controlled oscillators (NCOs), proportional-integral (PI) controllers, power detectors, and exponential moving average filters.

Equally significant is Matthew’s pioneering modem module approach to HDL design. This architectural approach makes a large positive difference in how digital signal processing systems are designed and implemented in FPGAs. The modem module approach is the systematic creation of reusable, well-defined building blocks that can be combined for a variety of communication protocols. While many projects and people pay lip service to modularity, Matthew’s execution and leadership in this area have supercharged ORI’s work.


All components are freely available, fostering collaboration and continued innovation in the amateur radio community.

The ARRL Technical Innovation Award recognizes not just technical achievement, but also contributions that benefit the broader amateur radio community. Matthew’s work exemplifies both criteria.

Matthew’s innovations in modular HDL design and MSK implementation provide a solid foundation for future developments in amateur radio digital communications. His work demonstrates how modern software-defined radio platforms can be leveraged to implement sophisticated communication techniques that were previously the domain of commercial and military systems.

The amateur radio community benefits enormously from contributions like Matthew’s, which not only advance the technical state of the art but also provide practical, implementable solutions that enhance our communication capabilities.

Congratulations to Matthew Wishek on this well-deserved recognition of his outstanding technical contributions to amateur radio!

The American Radio Relay League (ARRL) is the national association for amateur radio, connecting hams around the U.S. with news, information and resources. The ARRL Technical Innovation Award is presented annually to recognize exceptional technical innovation that advances amateur radio.

Highlights from the New Interlocutor Installation and Operator Manual

A Human Radio Interface for Opulent Voice is ready for you to try out at https://github.com/OpenResearchInstitute/interlocutor

Overview

Interlocutor is the human-radio interface component of the Open Research Institute’s Opulent Voice digital communication system. Think of it as the “radio console” that transforms your computing device (such as a Raspberry Pi or a laptop) into a sophisticated digital voice and data terminal. While traditional amateur radio digital modes often sacrifice audio quality for bandwidth efficiency, Interlocutor enables very high-quality voice communications with seamless integration of keyboard chat, file transfer, and system control messages.

What Does Interlocutor Do?

Interlocutor provides high-quality digital voice communication using the high-bitrate open source Opus voice encoder. It enables keyboard chat that integrates seamlessly with voice, handles file transfer and system control messages, and offers both command-line and web-based interfaces. Interlocutor manages audio devices with sophisticated conflict resolution and implements priority-based message queuing (voice always wins).

It acts as the bridge between human operators and radio equipment. It processes voice, text, and data into properly formatted frames that can be sent to any Opulent Voice-compatible modem via Ethernet, enabling remote operation and modular system design.

On first run, you’ll be prompted to:

1. Select audio input device: choose your microphone.

2. Test audio input: speak to verify that your microphone works.

3. Select audio output device: choose your speakers or headphones.

4. Test audio output: listen for a test tone.

Where to Send Your Frames?

After Interlocutor does all the work required to gather your voice and text input and create Opulent Voice Protocol frames, those frames are sent to a modem or other program that can make use of them. How does this work? If frames are sent to a modem, the modem turns them into a modulated signal. This signal is then sent out over the air. The currently implemented target modem is the PLUTO SDR from Analog Devices.

Or, the frames can go to a computer or conference server over the Internet. In other words, frames can be sent to another computer, a modem for radio transmission, a conference server (repeater) receiver, and more. If it has an IP address, and if it understands what to do with the frames, then you are ready to communicate.
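Because delivery is ordinary UDP, “sending frames somewhere” really is just a datagram to an IP address and port. Here is a minimal sketch of that idea in Python. The frame bytes and the loopback address are placeholders, not real Opulent Voice content; 57372 is the default transmitter port described later in this manual.

```python
import socket

# Placeholder bytes standing in for a real Opulent Voice frame.
frame = b"\x00" * 64

# Example target: loopback, using the manual's default transmitter port.
target = ("127.0.0.1", 57372)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(frame, target)   # fire-and-forget datagram, no connection
sock.close()
print(sent)  # 64
```

Swapping the target for a modem, another computer, or a conference server changes nothing about this step, which is exactly why Interlocutor can treat all of them interchangeably.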

The Basics of Running Interlocutor

See the online manual for detailed installation instructions.

# Launch with web interface

python3 interlocutor.py YOUR_CALLSIGN --web-interface

# Launch with a specific config file

python3 interlocutor.py YOUR_CALLSIGN -c myconfig.yaml --web-interface

You’ll need to configure network targets to tell Interlocutor where your Opulent Voice data frames need to be sent. Modify the network target configuration through the web interface in the Target IP Address box or use the command-line argument like this:
-i <IP address> 
as needed.

The web interface is available at http://localhost:8000 on the device running interlocutor.py. 

Interlocutor features a modern glassmorphism-styled web interface for operation and complete system configuration. All configuration items are available at the command line or in the web interface.

The purpose of the Configuration System is to create, validate, and save one or more configuration files so that an operator can start the radio in a fully defined state. 

Operational Modes

First, let’s talk about the command line interface (CLI) mode. This offers traditional terminal-based operation with full keyboard chat capabilities. 

The simplest way to invoke this mode is by typing:

python3 interlocutor.py YOUR_CALLSIGN

In CLI mode, real-time voice transmission is done with a hardware PTT button. There is a keyboard chat interface. Voice has priority, with control messages second highest priority, and keyboard chat third. Debug output and system status are shown as text on the screen. 

Second, let’s explore the web interface mode. The web interface is a modern browser-based interface with visual controls. 

It is invoked by adding the --web-interface argument to program start.
python3 interlocutor.py YOUR_CALLSIGN --web-interface

The web interface provides detailed configuration management in the Configuration tab, live status indicators, and real-time voice transmission with PTT control available in three different ways. First, as a hardware switch on designated GPIOs. Second, as a clickable button in the web interface. Third, as the space bar when the message entry box does not have focus. The web interface also has keyboard chat and shows the system log, notifications for important events, debug output, and system status. Sent and received audio can be replayed from the message history window.

Dual-Mode Operation

Both interfaces can run simultaneously, providing flexibility for different operational scenarios or preferences. There are instant updates between command line and web interfaces via WebSocket communication.

Protocol and Network Configuration

Interlocutor implements the Opulent Voice protocol with sophisticated frame management. Here are the frame types and priorities.

1. VOICE (Priority 1): Opus-encoded audio, immediate transmission

2. CONTROL (Priority 2): PTT state changes, high priority queue, A5 messages

3. TEXT (Priority 3): Keyboard chat, normal priority queue

4. DATA (Priority 4): File transfers, low priority queue
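As a sketch of how “voice always wins” can fall out of a priority queue, here is a minimal Python illustration. This is not Interlocutor’s actual code; the class and names are ours, and only the four priority values come from the list above.

```python
import heapq
import itertools

# Frame priorities from the list above (lower number = higher priority).
PRIORITY = {"VOICE": 1, "CONTROL": 2, "TEXT": 3, "DATA": 4}

class FrameQueue:
    """Priority queue where voice preempts control, text, and data."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order per type

    def put(self, frame_type, payload):
        heapq.heappush(
            self._heap,
            (PRIORITY[frame_type], next(self._counter), frame_type, payload),
        )

    def get(self):
        _, _, frame_type, payload = heapq.heappop(self._heap)
        return frame_type, payload

q = FrameQueue()
q.put("TEXT", "hello")
q.put("VOICE", b"opus-frame")
q.put("CONTROL", "PTT_DOWN")
print(q.get()[0])  # VOICE is dequeued first
```

Anything queued while voice frames are pending simply waits its turn, which matches the buffering behavior described in the Chat Integration section.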

External Network Ports:

57372: Network Transmitter port (configurable, connects to radio hardware, computer, or repeater). This is the only port you have to configure.

Internal Protocol Ports:

57373: Audio frames

57374: Text frames  

57375: Control frames

These ports tell the receiver what kind of information it’s receiving. The port numbers are carried in the UDP header of each Opulent Voice frame. The protocol is extensible: future data types get the next available port.

All frames follow the Opulent Voice protocol format.
Opulent Voice Header: 12 bytes (sync word + station ID + type + sequence + length + reserved)

Payload: 122 bytes of data loaded up in 40 ms frames

Encoding: COBS (Consistent Overhead Byte Stuffing) framing

Transport: UDP over IP with RTP headers for audio, UDP over IP for control, text, and data
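To make the COBS step concrete, here is a minimal generic COBS encoder in Python. This is an illustration of the technique, not code from the Interlocutor repository, and it is simplified: it assumes runs between zero bytes are shorter than 254 bytes, which covers frames of this size.

```python
def cobs_encode(data: bytes) -> bytes:
    """Minimal COBS encoder (sketch: assumes zero-free runs < 254 bytes)."""
    out = bytearray()
    for block in data.split(b"\x00"):
        assert len(block) < 254, "long-run splitting omitted in this sketch"
        out.append(len(block) + 1)  # code byte: distance to the next zero
        out += block                # the zero itself is implied, never sent
    return bytes(out)

# A zero byte inside the data becomes a chained length code:
print(cobs_encode(b"\x11\x22\x00\x33").hex())  # 0311220233
```

The payoff is that the encoded stream contains no zero bytes at all, so a zero can be used unambiguously as a frame delimiter on the wire.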

Network Integration

# Basic operation (connects to default target with default values)

python3 interlocutor.py YOUR_CALLSIGN

# Specify target IP and port

python3 interlocutor.py YOUR_CALLSIGN --target-ip 192.168.1.100 --target-port 57372

# Load specific configuration file

python3 interlocutor.py YOUR_CALLSIGN -c mystation.yaml

Audio System Operation

Push-to-Talk (PTT) Operation:

GPIO Button: Physical button connected to Raspberry Pi GPIO

Web PTT: Additional click/touch controls in web interface. There’s a PTT button and the space bar activates PTT when the message entry box is not highlighted (does not have focus). 

Audio Processing Pipeline:

1. Microphone input to PyAudio capture

2. Audio validation

3. Opus encoding (40ms frames, 16,000 bps bitrate)

4. Hardware audio callback

5. RTP header addition

6. UDP header addition

7. IP header addition

8. COBS encoding

9. Opulent Voice header addition

10. Network transmission
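A quick back-of-the-envelope check shows how the documented numbers fit together: 40 ms frames at the 16,000 bps Opus bitrate, wrapped in the headers from steps 5 through 7, land inside the 122-byte payload. The RTP/UDP/IPv4 sizes below are the standard minimum header sizes, not values taken from this manual.

```python
# Documented values: 40 ms frames, 16,000 bps Opus bitrate, 122-byte payload.
FRAME_MS = 40
OPUS_BPS = 16_000

opus_bytes = OPUS_BPS * FRAME_MS // 1000 // 8   # bits per frame -> bytes
frames_per_second = 1000 // FRAME_MS

RTP, UDP, IPV4 = 12, 8, 20                      # standard minimum header sizes
wrapped = opus_bytes + RTP + UDP + IPV4

print(opus_bytes, frames_per_second, wrapped)   # 80 25 120
assert wrapped <= 122                           # fits the documented payload
```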

Chat Integration

Voice transmission has absolute priority. Text messages typed during PTT are buffered. Buffered messages transmit immediately when PTT releases. Control messages maintain high priority for system functions.

Chat Modes:

Voice + Chat: Normal operation with seamless integration. Operators can choose voice or text as appropriate. This is the default mode.

Chat Only Mode: Keyboard-to-keyboard communication (similar to RTTY). This is set up with the command line argument:
--chat-only

Automatic Reconnection System

Interlocutor implements intelligent reconnection logic for the web interface.

Reconnection Timing is as follows.

1. First retry: 1 second delay

2. Subsequent retries: Exponential backoff (1.5x increase)

3. Maximum delay: 30 seconds

4. Maximum attempts: 10 attempts

5. Total auto-retry time: 2-3 minutes

A manual retry button appears after auto-retry exhaustion. 
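The schedule above can be sketched in a few lines of Python. The function name is ours; the parameter values are the ones listed in the manual. Note that the delays alone sum to roughly 105 seconds, so the quoted 2-3 minute window presumably also counts the time spent on the connection attempts themselves (our assumption, not stated above).

```python
def retry_delays(first=1.0, factor=1.5, cap=30.0, attempts=10):
    """Reconnection schedule: 1 s first retry, 1.5x backoff, 30 s cap, 10 tries."""
    delay = first
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

delays = list(retry_delays())
print([round(d, 2) for d in delays])  # only the final delay hits the 30 s cap
print(round(sum(delays), 1))
```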

Documentation Resources

Project repository: https://github.com/OpenResearchInstitute/interlocutor

Open Research Institute: https://www.openresearch.institute/getting-started

GitHub Issues for bug reports

Code contributions are welcome via GitHub pull requests and filing issues. Documentation improvements are welcome and encouraged. Testing and feedback are valuable for development. Hardware testing on different platforms is welcome and encouraged!

The system is actively developed open-source software, and features will evolve. Check the project repository for the latest updates and documentation. 

But Wait, There’s More

Opulent Voice Demonstration Conference Server Beta Test Now Open

This project is called Locus, and the repository link, with manual and installation instructions, can be found here:

https://github.com/OpenResearchInstitute/locus

Key components of a fully implemented conference server (Opulent Voice repeater) are outlined below.

1. FDMA Uplink Channels Received at Spacecraft or Terrestrial Repeater

Multiple receivers monitoring different frequency slots simultaneously, each capable of demodulating and decoding the Opulent Voice protocol.

2. Conference Management Hardware and Software

This manages how stations can connect with other stations by maintaining lists of rooms or conferences. Conferences are logical groupings of stations.

3. DVB-S2 Downlink Multiplexer

This component takes all the conference data and creates a single high-bandwidth downlink stream.

Software modifications to Interlocutor, the human-radio interface for Opulent Voice stations, have been made in order to realize a simple repeater system. The Interlocutor repository can be found here:

https://github.com/OpenResearchInstitute/interlocutor

By targeting the IP address of opulent.openresearch.institute, anyone running Interlocutor can participate on ORI’s demonstration conference repeater. This repeater is internet-only at the moment, but will have RF hardware in the next phase of work.

To configure Interlocutor for the conference server, here is an example invocation.

python3 interlocutor.py QUARTER --web-interface -i 172.236.237.16

This sets up a station with ID QUARTER, runs a web interface at 127.0.0.1:8000, and sends frames to opulent.openresearch.institute.

The next phase of work is to set up named conference rooms so that people can join based on event, subject matter, scheduled meetups, and other searchable categories. There will be a Conferences tab in the Interlocutor web interface that will populate whenever conference metadata is received.

Key Features Unlocked in the Next Phase

Multiple simultaneous conversations are possible. Instead of traditional “one-at-a-time” repeater operation, users will have multiple conferences running simultaneously. A local weather net, an emergency coordinator channel, and a tech Q&A room are all active at once. Voice transmissions and chat messages appear in the message history window of Interlocutor, filtered by the receiving station based on the conferences from which they originated.

Operators will browse active conferences, see details such as participant counts and last activity, and freely monitor any conference.

Conference participation is straightforward. Operators join a conference by transmitting in it. At that point, the station ID appears in the conference membership list. Before transmitting to a conference, the station ID does not appear in the list.

Conference lists are kept clean. Over time, a station ID that has joined a conference expires and is dropped off the list of stations in the conference. This is done after some configurable period of inactivity. The default in Locus is one hour. Conferences themselves, without active transmissions, can expire after some amount of time as well. This “time out” process emphasizes active conferences, reducing the amount of “empty rooms” that an operator would have to search through to find someone to communicate with.
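The inactivity “time out” described above amounts to pruning a map of last-heard timestamps. Here is a minimal sketch in Python; the function and the W1ABC station ID are illustrative, while the one-hour default comes from Locus.

```python
INACTIVITY_LIMIT = 60 * 60   # Locus default: one hour of inactivity

def prune_members(last_heard, now, limit=INACTIVITY_LIMIT):
    """Keep only stations heard within the inactivity limit.

    last_heard maps station ID -> timestamp (seconds) of last transmission.
    """
    return {sid: t for sid, t in last_heard.items() if now - t <= limit}

# QUARTER transmitted 10 minutes ago; W1ABC (a made-up ID) two hours ago.
members = {"QUARTER": 9_400.0, "W1ABC": 2_800.0}
print(sorted(prune_members(members, now=10_000.0)))  # ['QUARTER']
```

The same rule applied one level up, to conferences instead of stations, is what lets idle conferences expire as well.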

Technical Implementation Strategy

Within Interlocutor, current conference server operation is essentially the same as if it were connecting directly to another station. If the target IP address is a conference server, then voice and text appear in the message history from all the stations that are transmitting. The Locus demonstration station at ORI simply forwards the frames it receives. It has essentially one conference. This will change in the next phase. Current work is to continuously improve performance and user experience with the basic functions. Adding additional conference functions will follow shortly.

For this next phase, Interlocutor will have a third Conference Tab in the web interface. This will allow conference discovery and enable full participation. The banner on the Communications Tab will change to show when a server is connected, as opposed to a single directly connected station. This will be unobtrusive but clear.

We need to implement the frequency division multiple access (FDMA) receiver that manages multiple frequency slots on the uplink, and build the DVB-S2 Multiplexer for the downlink aggregation. Hardware for our open source HEO/GEO satellite project Haifuraiya has been demonstrated in the past, and these designs will be used for this project as well.

Discoverability Improves Engagement

In order to take advantage of the conference functions, individual stations use the Interlocutor program in essentially the same way as they would in a point-to-point contact. A fully implemented conference server will broadcast messages to be detected by the Interlocutor receiver. These broadcasts list the conferences and the participants. Then, the web interface populates with the available conferences and exposes the controls for starting a new conference, joining an existing conference, or monitoring any conference. Operators join conferences by participating with a transmission. The repeater handles the complexity. The broadcast is sent at the lowest priority level (data), so as not to interfere with voice, text, or control frames.

The concept of conferences is central to the repeater experience. People playing a D&D game over the air would be in a game conference. That conference might be called “The Elven Empire”. People coming and going during drive time might join “I-15 Rollout”. An individual calling another individual for a one-on-one QSO would be in a conference too, which might be called “Ken’s Coffee Break” or “Everyone Welcome”.

Conferences become the fundamental organizing principle. This organizing principle shifts amateur radio from “everyone talks to everyone on one frequency” to “purposeful gatherings with context and discoverability.”

Traditional repeaters are like having one conference room where everyone shows up, takes turns to talk, and if you miss a transmission then that is just too bad. You missed it.

A conference-based system creates purposeful spaces. The D&D group doesn’t interfere with the emergency coordinators, and both can run simultaneously. If you missed a message, you can scroll up in the message history window. You can replay past audio transmissions from the UI Audio Bubbles or read the past chat message history. Both are in the message history window. Empty conferences time out and are removed, so that it doesn’t look like a ghost town or give a false sense of activity. 

Instead of “I hope someone interesting is on the repeater,” users can browse conferences in the past, present, and future. A list of past conferences, out to some point in the past, can be maintained to show the frequency and type of activity. Any conference can be monitored. Current conferences are joined by transmitting in the conference. Future conferences can be listed in a schedule. This is like the difference between wandering into a random tavern hoping for good company versus checking a bulletin board that says “Adventuring Party Seeking Ranger Meet by the Old Oak” or “Storytelling by the Fire in the Great Hall.” This type of radio experience has natural social dynamics.

An operator can make a casual discovery and meet new friends.

“Oh, there’s a Python programming discussion with 4 people – that sounds interesting!”

Operators can make and advertise planned events.

“Tech Book Club session every Tuesday at 7 PM – join us!”

Emergency coordination can be greatly improved. Operators can quickly and easily create dedicated emergency conferences during disasters. With authentication and authorization functions, these conferences can be limited to people that have, for example, completed specific training. A system like this can provide realistic emergency training opportunities that do not interfere with normal operation. Mentoring conferences can be set up to provide friendly and welcoming alternatives for education and learning.

The conference server manages conference data. A goal is to keep this to the minimal required state. The conference server doesn’t decide who can listen to what, unless there is an authentication and authorization policy in place from the repeater owner.

Use Cases

Opulent Voice naturally scales up. Here are some anticipated use cases. This is not intended to be a comprehensive list.

Simple Point-to-Point: Two or more people communicating point-to-point create the equivalent of a single conference. Whether the station is operating point-to-point or in a conference at a repeater, the program behaves almost identically. The only difference on the Communications tab is the heading color and title.

Local Repeater: Multiple conferences on one repeater system that is running a conference server.

Regional Network: Conferences can span multiple repeaters.

Satellite Integration: HEO satellites carry conference multiplexes to different continents.

Internet Bridging: Conferences can include remote participants via internet gateways.

The conference concept transforms amateur radio from “broadcast to everyone in range” to “purposeful communities of interest.” Conference discovery/joining is a different mental mode than active communication. Therefore, it gets a separate tab in the Interlocutor web interface.

Three-Tab Architecture Defined

Tab 1: Communications “I’m having a conversation”
We keep the current design
We use current message history and audio playback
Active conference indicator at the top: “Currently in: Morning Coffee Chat (4 people)”
Quick leave/switch button – minimal, non-intrusive
Focus stays on the actual conversations

Tab 2: Conferences “I’m choosing who to talk to”
New design to be implemented in the very near future
Conference browser and discovery
Create new conferences
Join/schedule conferences
Personal conference history and favorites
Block lists

Tab 3: Configuration “I’m setting up my radio”
We keep the current design
Configuration Tab moves from position 2 to position 3
New conference configuration items
Radio and system settings

Operator Mental Models

This creates a clear progression in the operator’s mind:

1) “I want to join a conversation”: check conference server status.
2) If connected, browse and join conferences.
3) If no conferences appeal, “I can still talk directly to someone.”

The central idea behind the conference tab is the transition experience: a user browses conferences, finds “Python Coding Discussion (3 people)”, clicks join, and gets smoothly transitioned back to their familiar communications interface, but now they’re talking with Python enthusiasts instead of whoever happened to be on the repeater.

Conference Servers feel like choosing your conversation rather than hoping for a good one.

Expect conference functionality to increase at the internet demonstration station over the next few months. 

Open Research Institute Sphere of Activity

August, September, October 2025

5 August 2025 – Final Technological Advisory Council meeting at the US Federal Communications Commission (FCC) in Washington, DC. The current charter concludes 5 September 2025. 

7-10 August 2025 – DEFCON 33 in Las Vegas, Nevada, USA. ORI plans an Open Source Digital Radio exhibit in RF Village, which is hosted by Radio Frequency Hackers Sanctuary. 

10 August 2025 – Submission deadline for Open Source Cubesat Workshop, to be held 25–26 October 2025. Location is Serafio of the Municipality of Athens, Greece.

1 September 2025 – Our Complex Modulation Math article will be published in ARRL’s QEX magazine in the September/October issue.

5 September 2025 – Charter for the current Technological Advisory Council of the US Federal Communications Commission concludes. 

19-21 September 2025 – ESA and AMSAT-DL workshop in Bochum, Germany.

25-26 October 2025 – Open Source Cubesat Workshop, Athens, Greece.


Thank you to all who support our work! We certainly couldn’t do it without you. 

Anshul Makkar, Director ORI

Keith Wheeler, Secretary ORI

Steve Conklin, CFO ORI

Michelle Thompson, CEO ORI

Matthew Wishek, Director ORI


ORI’s FCC Comment on Proceeding 25-201

Opposition to AST & Science LLC (AST SpaceMobile) Request for Amateur Radio Band Usage

July 21, 2025

Executive Summary

We respectfully submit this comment in strong opposition to AST & Science LLC’s (AST SpaceMobile) request to utilize the 430-440 MHz amateur radio band for Telemetry, Tracking, and Command (TT&C) operations for their planned 243-satellite constellation. We urge the Commission to deny this application and direct AST SpaceMobile toward established commercial satellite frequency allocations that are much more appropriate for their commercial operations.

Background and Technical Concerns

First, there are currently unauthorized operations taking place. AST SpaceMobile currently operates five Bluebird commercial satellites, launched on September 12, 2024, using amateur radio frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth for telemetry links. This existing operation has already demonstrated the potential for interference with legitimate amateur radio operations.

The scope of the proposed expansion is a problem. AST SpaceMobile seeks to expand this usage to a 243-satellite constellation, with each TT&C beam supporting command and telemetry channels with bandwidths between 64 and 256 kHz. This massive expansion would fundamentally transform the character of the amateur radio band from experimental and emergency communications to commercial satellite operations.

Amateur radio makes important use of this band. The 430-440 MHz band serves a variety of critical amateur radio applications including amateur space communications, weak-signal SSB, digital television, data communications, repeaters, and other applications. The amateur radio service in this band supports:

Emergency Communications: Amateur radio operators provide vital public service during disasters when commercial communications infrastructure fails.

Space Communication: Educational and experimental satellite communications that advance the radio arts.

Technical Innovation: Experimentation and development of new communication technologies. Where do we think new engineers come from? Many of them come from amateur radio. 

International Coordination: The proposed constellation will cause interference to amateurs world-wide. This is opposed by a wide variety of international amateur radio organizations. 

Regulatory and Precedential Concerns

This is a very inappropriate band allocation. The 430-440 MHz band is allocated to the Amateur Radio Service, not commercial satellite operations. ITU study groups investigated potential TT&C frequency allocations in the frequency ranges 150.05–174 MHz and 400.15–420 MHz, specifically excluding the amateur allocation at 430-440 MHz. Permitting a commercial satellite constellation to operate in amateur radio spectrum sets a dangerous precedent that could lead to further commercial encroachment on bands reserved for experimental, educational, and emergency communications.

Frequency coordination frameworks exist. Satellite frequency coordination, particularly in these frequency bands, relies on a global regulatory and technical framework maintained by the International Telecommunication Union (ITU). AST SpaceMobile should utilize this established framework rather than seeking unauthorized access to amateur spectrum. ITU study results are clear: ITU study groups conducted sharing studies in various bands and concluded that no new allocations are suitable for small satellite TT&C on a co-channel sharing basis. Proper commercial allocations exist that would not interfere with amateur operations.

Proposed Alternative Solutions

We recommend the Commission direct AST SpaceMobile to utilize appropriate commercial satellite frequency allocations:

1. S-Band Operations: Migrate TT&C operations to established S-band satellite allocations (2025-2110 MHz and 2200-2290 MHz)

2. X-Band Implementation: Utilize X-band frequencies (8025-8400 MHz) which offer excellent propagation characteristics for satellite communications

3. Ka-Band Adoption: Consider Ka-band frequencies for high-capacity operations

4. Proper ITU Coordination: Work through established international coordination procedures for legitimate commercial satellite spectrum

Technical feasibility is not an issue. Modern satellite technology readily supports operations in these higher frequency bands. The primary S, X, and Ka bands are more advantageous than UHF, which has a higher probability of local interference.

Economic and Public Interest Considerations

Protecting public service is important. Amateur radio operators provide critical emergency communications during disasters, and interference from commercial satellite operations could compromise this vital capability. The amateur radio service also serves as a proving ground for new technologies and provides STEM education opportunities. Commercial encroachment limits these important societal benefits and harms our national competitiveness.

Precedential impact is negative. Approving commercial use of amateur spectrum without compelling technical justification would invite similar requests from other commercial operators, potentially destroying the character of amateur radio allocations.

Conclusion and Recommendations: We respectfully urge the Commission to:

1. DENY AST SpaceMobile’s request to operate in the 430-440 MHz amateur radio band

2. DIRECT AST SpaceMobile to utilize appropriate commercial satellite frequency allocations in S, X, or Ka bands

3. REQUIRE proper ITU coordination for international satellite operations

4. REAFFIRM the Commission’s commitment to protecting amateur radio spectrum for its intended non-commercial, experimental, and emergency communications purposes

The amateur radio bands serve critical public interest functions that would be compromised by large-scale commercial satellite operations. Abundant alternative spectrum exists that is specifically allocated for commercial satellite TT&C operations. We urge the Commission to preserve the amateur radio bands for their intended purposes and direct AST SpaceMobile toward appropriate commercial spectrum.


References: FCC DA 25-532 (June 20, 2025), AMSAT-UK Technical Analysis, ITU Radio Regulations and Study Reports, and NASA Small Satellite Guidelines

Are Multi-Carrier Data Modes Illegal in Amateur Radio?

Mike McGinty W2FBI, organizer of the Amateur Cellular project (https://xcl.is) has a concern about Ham LTE. 

He writes, 

“The more I read part 97, the more I convinced myself that actually, Ham LTE and indeed all ham multi-carrier data is illegal. Further, all identification over almost all digital modes is illegal.

The important part is under Authorized Emissions. 

See https://xcl.is/news/standardized_ham_radio_lte_v1_rc/

Does Part 97 need rephrasing?”

The Key Issues with Emission Designators in Part 97

Mike McGinty raises a legitimate question about the interpretation of Part 97 rules regarding emission types and digital codes, particularly for multi-carrier data modes like LTE. The core of the issue appears to be in two parts of 47 CFR Part 97:

  1. Definition of Emission Types (§97.3(c)): This section defines authorized emission types for amateur radio, including RTTY and data emissions with specific designators where only “a digital code of a type specifically authorized in this part may be transmitted.” (https://www.ecfr.gov/current/title-47/chapter-I/subchapter-D/part-97)
  2. RTTY and Data Emission Codes (§97.309): This section lists both specified digital codes that are authorized, and conditions under which unspecified digital codes may be used depending on frequency band and other requirements. (https://www.law.cornell.edu/cfr/text/47/97.309)

The potential contradiction arises because §97.307(f)(8) authorizes “A RTTY or data emission having designators with A, B, C, D, E, F, G, H, J or R as the first symbol; 1, 2, 7, 9 or X as the second symbol; and D or W as the third symbol” (https://www.law.cornell.edu/cfr/text/47/97.307), while the definition in §97.3 appears to be more restrictive for some emission types.

The LTE Question

LTE (Long-Term Evolution) uses OFDMA (Orthogonal Frequency-Division Multiple Access), which is a multi-carrier modulation technique. The emission designator for such transmissions could be something like G7W or D7W based on emission designator standards. There is ambiguity about whether this precisely fits within the authorized emission types in Part 97.
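To make the ambiguity concrete, the symbol sets quoted from §97.307(f)(8) above can be checked mechanically. The sketch below is illustrative only, not legal advice, and implements just the three-symbol lists from that one paragraph:

```python
# Illustrative check of a 3-symbol ITU emission designator against the
# symbol sets quoted from 47 CFR 97.307(f)(8). Not legal advice: the whole
# question in this article is whether these lists are the final word.

FIRST = set("ABCDEFGHJR")   # modulation-type symbols listed in 97.307(f)(8)
SECOND = set("1279X")       # nature-of-modulating-signal symbols
THIRD = set("DW")           # type-of-information symbols

def authorized_designator(designator: str) -> bool:
    """Return True if the designator matches 97.307(f)(8)'s symbol lists."""
    if len(designator) != 3:
        return False
    first, second, third = designator.upper()
    return first in FIRST and second in SECOND and third in THIRD

print(authorized_designator("G7W"))  # True: G, 7, and W are all listed
print(authorized_designator("W7D"))  # False: "W" is absent as a first symbol
```

Note that G7W and D7W pass this check while W7D does not, which is exactly why Option 2 below proposes adding “W” as an authorized first symbol for combined and multi-carrier emissions.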

Possible Interpretations

There are several ways to interpret these rules:

  1. Strict Interpretation: Only specifically listed emission designators are allowed, making some modern digital modes (including amateur LTE implementations) potentially unauthorized.
  2. Liberal Interpretation: The rule allowing “unspecified digital codes” (§97.309(b)) encompasses modern digital modes as long as they meet bandwidth and other technical requirements. Under this reading, a mode includes the way it is emitted, and there is no real problem.
  3. Documentation-Based Interpretation: As discussion about this question widened, Paul Williamson KB5MU suggested that there may be an “established understanding with FCC” that emission modes can be unspecified in the same way that digital codes can be unspecified, as long as the full details of the mode are published.

Is There any Existing FCC Guidance or Documentation?

After Paul suggested that there might be a letter or document that addresses this, I searched and asked around. While I was unable to find specific FCC guidance documents or formal letters directly addressing this question for amateur radio LTE or similar modern multi-carrier digital modes, I can say that there is precedent for the FCC taking a more permissive approach to new digital modes:

  1. In 2016, the FCC proposed rule changes to remove the symbol rate limitations on amateur radio data transmissions while maintaining bandwidth limitations (https://www.federalregister.gov/documents/2016/08/12/2016-19085/amateur-radio-service-rules-to-permit-greater-flexibility-in-data-communications). I assert that this shows a general trend toward technical flexibility.
  2. The provision in §97.309(a)(4) indicates that “An amateur station transmitting a RTTY or data emission using a digital code specified in this paragraph may use any technique whose technical characteristics have been documented publicly” (https://www.law.cornell.edu/cfr/text/47/97.309) which strongly suggests openness to innovation within specified categories and is a rule widely used by experimenters. There are no reports of restrictions, repercussions, or negative outcomes when amateur operators have relied upon public documentation for digital codes not otherwise specified. 

This raises the question: Can we assume that “An amateur station transmitting a RTTY or data emission may use any emissions technique whose technical characteristics have been documented publicly”?

Interpretation

The interpretation of amateur radio service rules regarding multi-carrier modes like LTE falls into a gray area that hasn’t been explicitly addressed by the FCC. The concern that Mike raises is therefore valid – there is tension between the specific emission designator definitions and the broader allowances for unspecified digital codes.

Paul’s suggestion that “there is an established understanding with FCC that emission modes can be unspecified in the same way that digital codes can be unspecified, as long as the full details of the mode are published” aligns with the spirit of amateur radio’s experimental nature, but I couldn’t find formal documentation supporting this understanding.

Given the FCC’s trend toward embracing technical innovation while maintaining good spectrum practices, this is indeed potentially a “bear best left unpoked” as Paul suggested, unless the amateur radio community seeks formal clarification through a petition for declaratory ruling from the FCC.

We could take the opportunity to suggest clarifying language that would explicitly encompass modern multi-carrier digital modes while maintaining appropriate technical standards and interference protection.

Option 1: Adding Specific Commercial LTE Emission Designators

Commercial LTE typically uses emission designators like G7W, D7W, or W7D, depending on the specific implementation. Simply adding these to the list of authorized designators would be a straightforward approach, but it has some serious limitations:

  1. It would address only the current implementation of LTE without future-proofing for evolving technologies.
  2. It might not address other multi-carrier modes beyond LTE.
  3. It would require updating the rules again for each new technology.
  4. It doesn’t fit well into the Delete, Delete, Delete 25-133 proceeding, because it adds rules rather than deleting them.

Option 2: Comprehensive Modernization Approach

A more forward-looking approach would be to amend 47 CFR Part 97 with language that explicitly embraces modern digital techniques while maintaining appropriate technical standards. 

Draft:

I. INTRODUCTION

1. In this Notice of Proposed Rulemaking (or comment, or reply-to), we share proposed amendments to Part 97 of the Commission’s Rules governing the Amateur Radio Service. These proposed rules clarify and update the authorized emission types for digital communications. These proposed changes aim to explicitly accommodate modern multi-carrier digital modes, such as those based on orthogonal frequency-division multiplexing (OFDM) technologies, while maintaining appropriate technical standards and interference protection.

II. BACKGROUND

2. The Commission’s rules in Part 97 define various emission types authorized for amateur radio operators in § 97.3(c) and specify standards for RTTY and data emissions in § 97.309. These rules were established before many modern digital communication techniques were developed and in some cases do not clearly accommodate innovative technologies now commonly used in other radio services, including highly popular and efficient multi-carrier modulation methods.

3. Multi-carrier modulation techniques, such as those used in LTE (Long-Term Evolution) and other OFDM-based systems, offer significant advantages for efficient spectrum use and robustness against interference. Amateur radio operators have adapted these technologies for experimentation and improved communications, but questions have arisen about whether such modes are fully compliant with existing Part 97 emission designator requirements.

III. DISCUSSION

4. We propose to amend Part 97 to clarify that modern multi-carrier digital modes are authorized for amateur radio communications, provided they comply with other technical parameters specified for the relevant frequency bands. For example, a multi-carrier mode that exceeds the entire allocated bandwidth of an amateur band would obviously not be an authorized transmission. Specifically, we propose the following changes:

A. Proposed Rule Changes

5. We propose to amend § 97.3(c)(2) as follows:

(2) Data. Telemetry, telecommand and computer communications emissions having

   (i) designators with A, B, C, D, E, F, G, H, J, R or W as the first symbol, 1, 2, 7, 9 or X as the second symbol, and D or W as the third symbol;

   (ii) emission J2D; and

   (iii) emissions A1C, F1C, F2C, J2C, and J3C having an occupied bandwidth of 500 Hz or less when transmitted on an amateur service frequency below 30 MHz.

6. We further propose to amend § 97.309 by adding a new paragraph (c) as follows:

(c) An amateur station may transmit any digital emission using a multi-carrier modulation technique, including but not limited to orthogonal frequency-division multiplexing (OFDM), provided that:

   (1) The emission complies with all bandwidth limitations specified in § 97.307 for the frequency band in use;

   (2) The technical characteristics of the emission have been documented publicly;

B. Effect of Proposed Changes

7. These proposed changes would explicitly authorize the use of multi-carrier digital modes like those based on OFDM technology, including amateur adaptations of LTE and similar systems. Adding “W” as an authorized first symbol in emission designators would encompass cases where the carrier is modulated using combinations of amplitude and angle modulation or where multiple carriers are employed. The additional provisions in § 97.309(c) would ensure that such operations maintain the experimental and open nature of amateur radio while preventing misuse.

8. We believe these changes would eliminate ambiguity while promoting technical innovation in the Amateur Radio Service. The requirement for public documentation of technical characteristics would maintain the educational and experimental purposes of the service.

IV. CONCLUSION

9. The proposed rule changes would modernize Part 97 to explicitly accommodate contemporary digital communication technologies while preserving the fundamental purposes and technical disciplines of the Amateur Radio Service.

Option 3: Harmonization with International Approach

Another approach would be to harmonize with the ITU Radio Regulations’ approach to emission classification, which might provide more flexibility. This would involve referring directly to the ITU emission classification system rather than listing specific designators, focusing on bandwidth limitations and interference protection rather than specific emission types, and (like above) establishing that any emission type that meets technical parameters is permitted unless specifically prohibited.

Recommendation

Option 2 provides the most comprehensive solution as it explicitly addresses multi-carrier modes including OFDM-based systems like LTE, adds the emission designator first symbol “W” which is used for combined or multi-carrier modulation, and creates a framework that can accommodate future innovations without requiring constant rule updates. We want to maintain the core principles of amateur radio regarding openness and documentation while also preserving technical standards and doing all we can to prevent interference. Option 2 is the one most likely to resolve Mike’s concern that “all ham multi-carrier data is illegal”. 

Comments and critique welcome and encouraged!

-Michelle Thompson, for Mike McGinty, with Paul Williamson

Connecting the Dots: Protecting and Improving Amateur Radio Access to 219-220 MHz

Introduction

The 219-220 MHz band represents both a challenge and an opportunity for the amateur radio community. Recent events highlighted in the Convair/220 Amateur Radio Club’s newsletter “Contrails” and the DR0NK Radio Network’s report demonstrate how vigilance is essential for protecting our spectrum rights. While our ongoing work at Open Research Institute (ORI) focuses on removing unnecessary regulatory barriers to make this valuable spectrum more accessible, it took boots on the ground to defend the 220 band from unexpected and harmful interference.

The Twin Challenges: Interference and Regulatory Burden

The amateur radio community currently faces two related challenges in the 220 MHz vicinity. First, external threats. As documented by the DR0NK Radio Network (https://k1ct.freeshell.org/c220.html), experimental licenses granted without proper consideration of existing amateur services have caused significant interference to operations in the 222-225 MHz band (see the April 2025 newsletter – we have a copy of it here). This experience serves as a warning about potential threats to adjacent frequencies. In addition, regulatory barriers still exist. Our work at ORI has identified that the 219-220 MHz band is hampered by outdated notification requirements that create unnecessary administrative burdens without providing meaningful interference protection.

Our Regulatory Approach

ORI has been actively participating in the FCC regulatory process with proposals that would streamline operations while maintaining necessary technical safeguards. We have spoken up about removing the outdated and burdensome notification and permissions requirements for amateur radio stations operating on 219 MHz. See https://www.openresearch.institute/2025/04/01/fcc-filing-for-219-mhz-rules-changes-from-ori/

Our most recent comments go further than this, and support removing regulations requiring interference studies to protect television channels 10 and 13 (this is also supported by AAR and WISPA) along with reiterating the elimination of all of the outdated notification requirements in Part 80 and Part 97.

REPLY COMMENTS OF OPEN RESEARCH INSTITUTE, INC.

Open Research Institute, Inc. replies in support of certain comments filed by the Association of American Railroads (AAR) [1], WISPA [2], and Steven K. Stroh [3]. Specifically, we support section II of the AAR comments requesting the removal of regulations requiring interference studies to protect television stations using Channel 10 and Channel 13. We also support WISPA’s comments regarding Part 80 of the Commission’s regulations requesting the same removal of interference studies. These regulations are outdated and based on a coast station use of AMTS that was never widely deployed. Furthermore, we support Steven K. Stroh’s recommendation to delete 97.307(f)(13), which restricts amateur station emissions in 219-220 MHz (hereafter “219”) from using more than 100 kHz of bandwidth. We take no position on other comments made by the AAR, WISPA, or Steven K. Stroh.

We believe that our comments [4] serve to encourage wider use of AMTS spectrum by removing other burdensome notification regulations in Part 80 that require AMTS licensees to notify the American Radio Relay League and Interactive Systems of station locations. Our proposed removal of notification requirements for 219 in Part 97 serves the same purpose.

The AAR observes that most Positive Train Control deployments are “[…] primarily located within railroad rights-of-way and generally away from households […]”. This necessarily limits the potential for harmful interference, given that we do not advocate for mobile use of 219 by amateur stations. Beyond the Amateur Radio Service’s documented history of coexisting with primary users, we believe our proposed retention of many of the restrictions on amateur use (secondary status, reduced power levels, fixed stations, and digital emissions) will sufficiently reduce the possibility of harmful interference with AMTS licensees. In sum, we believe that our proposals, combined with those of AAR, WISPA, and Steven K. Stroh, will lead to more efficient commercial and amateur use of 219-220 MHz. In keeping with the Commission’s goal of removing regulations, all of these proposals can be implemented solely as rule deletions.
[1] https://www.fcc.gov/ecfs/search/search-filings/filing/10412030480052
[2] https://www.fcc.gov/ecfs/search/search-filings/filing/104111661931041
[3] https://www.fcc.gov/ecfs/search/search-filings/filing/10412237159703
[4] https://www.fcc.gov/ecfs/search/search-filings/filing/10329271641887

In the past, we have supported the removal of the 100 kHz bandwidth restriction for amateur stations in the 219-220 MHz band, and continue to do so now. 

The Connection to Recent Interference Cases

The DR0NK Radio Network’s experience with interference from improperly granted experimental licenses offers valuable lessons for our regulatory advocacy. Documentation is critical. Success came through “coordinating efforts” and “documenting and communicating real issues.”

Technical arguments prevailed. Successful spectrum defense relied on demonstrating actual harmful interference through technical measurements, not administrative formalities.

Community vigilance works. As DR0NK notes, “the price of freedom, and of spectrum, is eternal vigilance.” This vigilance allowed them to detect and address interference that administrative processes failed to prevent.

Conclusion

ORI’s approach balances regulatory streamlining with responsible spectrum sharing. By eliminating outdated administrative barriers while maintaining secondary status and technical safeguards, we can make the 219-220 MHz band more accessible to amateurs while protecting primary users. The recent experiences documented by the DR0NK Radio Network demonstrate that our community can effectively protect spectrum through technical vigilance rather than bureaucratic procedures. Removing unnecessary and outdated bureaucratic procedures clears the way for operators and enthusiasts to effectively and efficiently police the bands. 

FCC Filing for 219 MHz Rules Changes from ORI

Thank you to the many people who have helped with this effort. Open Research Institute (ORI) has filed the first of what might be several comments and proposed rulemaking efforts with the FCC about reforming amateur radio use of the 219 MHz band.

https://www.fcc.gov/ecfs/search/search-filings/filing/10329271641887

The list of folks that have contributed and supported this effort to renovate 219 MHz for actual amateur radio use is quite long. This filing and any that follow are the result of over a year of work. Thank you especially to Mike McGinty, ARRL advisors, and Justin Overfelt. 

Would you like to help?

1) Please use this comment to make your own similar request under this particular proceeding. This is a “what regulations do you want to delete?” type of call. As with many FCC calls for comment, it will be dominated by commercial interests. Anything from amateur radio will stand out. The deadline for comments is 11 April 2025. Speak simply and directly. We’d like to use this band without unnecessary and burdensome requirements. 

2) Please be ready to file a “reply” comment after the 11 April 2025 deadline. This is a chance for you to say “I agree with this and support this.”

We are not asking to change the fundamental nature of the band. Fixed digital message forwarding is super exciting these days because of SDRs, mesh networking, and all sorts of amazing protocol work available to us. We decided to simply ask for removal of the notification and permissions requirements. These requirements have resulted in zero use of this band for over two decades.

The primary service back in the late 1990s when these rules came out was maritime (AMTS). Those licenses were never fully deployed and have now been leased out by railroads. This means, to us, that the permissions requirements now make no sense at all for secondary licensees. 

ORI is tired of this and is working to make this situation better. This is a great band with huge, innovative, digital promise. We deserve to have a seat at this table and that means the chair has to actually exist and the door to the room the table is located within has to actually be something we can open.

Inner Circle Newsletter: Autumn 2024

A Harvest of Regulatory and Technical Updates

Open Research Institute is a non-profit dedicated to open source digital radio work. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer. You can get involved by visiting https://openresearch.institute/getting-started

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Subscribe to this newsletter at http://eepurl.com/h_hYzL

Ribbit Radio Update

It isn’t often that a completely new way of using existing technology in a truly innovative way on the amateur bands excites the community, but Ribbit consistently delivers.

What is Ribbit?

Ribbit is a novel digital text messaging mode for VHF/UHF communications for recreational and emergency use. It radically increases the density of information transmitted per unit of spectrum. Ribbit turns text into encoded audio, which can be played from your smartphone into the microphone of any radio, on any band, under even very noisy conditions. A smartphone with the Ribbit application on the other side can recover this message. Your information will get through.

Ribbit leverages the computing power of the modern smartphone to increase the capabilities of any Handy Talkie without requiring any additional hardware or cable.
Its redundant distributed nature allows it to function even when internet connectivity is lost during emergencies or just not available due to poor coverage.
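The acoustic-coupling idea behind Ribbit can be sketched in a few lines. Ribbit's real modem (derived from Rattlegram) uses a far more robust multi-carrier waveform; the toy below only illustrates the text-to-audio-tones concept, and every parameter in it (sample rate, tone frequencies, bit duration) is invented for the example:

```python
import math

# Toy illustration of the text -> bits -> audio-tones concept.
# These parameters are invented for illustration; they are NOT Ribbit's.
SAMPLE_RATE = 8000          # audio samples per second (assumed)
BIT_DURATION = 0.01         # seconds per bit (assumed)
FREQ0, FREQ1 = 1000, 2000   # Hz tones representing bit 0 and bit 1 (assumed)

def text_to_tones(message: str) -> list[float]:
    """Encode each bit of the UTF-8 message as a short sine burst."""
    samples = []
    for byte in message.encode("utf-8"):
        for bit_index in range(8):
            bit = (byte >> (7 - bit_index)) & 1
            freq = FREQ1 if bit else FREQ0
            n = int(SAMPLE_RATE * BIT_DURATION)  # 80 samples per bit
            for i in range(n):
                samples.append(math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
    return samples

audio = text_to_tones("CQ")
print(len(audio))  # 2 bytes x 8 bits x 80 samples/bit = 1280 samples
```

Played through a speaker into any radio's microphone, a waveform like this rides the voice channel end to end, which is why no cable or extra hardware is needed.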

Developer Updates

Project Lead Pierre W4CKX and Rattlegram Developer Ahmet Inan supported a transition of project development to Alex Okita KO6BVA in late 2023. Since then, Alex has made several big strides forward in both development and demonstration.

Alex’s most recent TO DO list included the following tasks.

1) Building a scheme to compact the header for Ribbit. The header is a bit long and could be shortened to save some bits using a raw bit stream of a simplified character map.

2) Add a keyEncoder to Code Witches as the keyboard input gets letters added. When the “Enter” key is pressed begin encoding/consuming the characters in the input area.

3) May also want to add in a sort of message queue above the input area.

4) Build a useful CW-Font for code witches and anything else anyone wants to use it for.

All of this work enabled a highly successful demonstration of Ribbit at DEFCON 32 in Las Vegas, Nevada, in mid-August 2024. A special thank you to RF Village for hosting ORI’s Open Source Showcase.

Visitors to the exhibit were able to see Ribbit work in a busy and noisy environment. DEFCON, like most other very large conferences, is acoustically noisy, but also very electrically noisy. Ribbit was demonstrated all weekend long in a very harsh environment, as the WiFi Capture the Flag competition was happening literally steps away from the Ribbit demonstration. The venue, Las Vegas Convention Center West, was full of conference attendees. Despite the background noise and substantial RF interference, Ribbit worked from Smartphone app speaker output, to Family Radio Service microphone input, over the air on a busy FRS channel, to the receiving FRS radio speaker output, to a Ribbit application microphone input on the monitoring smartphone. Being able to transmit and reliably receive accurate text data is a notable achievement.

Alex has integrated the Ribbit functionality into a web application. This makes Ribbit even easier to use than traditional native applications. The live web app demonstration can be found at https://badkangaroo.github.io/

Recordings of presentations about Ribbit, along with posters and diagrams, can be found at https://www.ribbitradio.org

The source code can be found at https://github.com/OpenResearchInstitute/ribbit_webapp

Practical Real World Use

The technical demonstration for Ribbit, called Rattlegram, got very positive reviews for utility in the field from mountain rescue groups in Chile and Argentina. The challenge is supporting hikers and skiers in places where cellular coverage is limited or unavailable. Ribbit functionality provided clear benefits with no downsides.

Carlos, our contact in Argentina who supports mountain rescue operations, described his experiences using Ribbit technology.

“Hi everyone, how are you? I’m just back from the mountains in Chile. We did some testing with Rattlegram, in very windy conditions (without any cables) and worked just great !
Send you a picture & video of one of the test we made. We were at this summit:”

In Argentina, Carlos reported:

“These past few months, we have been going out to the mountains, and been using Rattlegram with lot of success. In one of the ‘refugio’ we left an old cell phone with Rattlegram loaded besides the base radio, and we have been sending messages in all the backcountry ski outings and all the messages arrived without a problem.”

If Ribbit can work on the top of the mountains in Chile and Argentina, and if it also performs really well in noisy urban environments, then what can’t it do?

Ribbit Can’t Flourish Without Your Support

Ribbit is innovative, useful, and fully supported by Open Research Institute. But, it won’t realize its full potential without continuing development and community support. If you are excited about Ribbit and want to help, please join ORI at https://openresearch.institute/getting-started and let us know you want to get involved with Ribbit. We’ll make sure you get on the mailing lists and Slack channels.

ORI gratefully accepts donations for Ribbit development. Ribbit has a dedicated bank account, follows non-profit best practices, has an enforced code of conduct, and regularly reports to ORI leadership.

Thank you to everyone helping make Ribbit successful, useful, exciting, and fun to use!

Opulent Voice Comes to the PLUTO SDR
An Open Source Minimum Shift Keying Implementation

Opulent Voice is the native digital voice and data protocol for ORI’s open source satellite (or terrestrial) system called Haifuraiya. With high fidelity voice quality, seamless integration of data, keyboard chat, and authentication and control messages, amateur operators experience something truly new and enjoyable. As described by ORI Director Steve Conklin AI4QR, “Leave scratchy and harsh proprietary vocoders with clunky or incomplete data mode integration behind for good.” All source code at https://github.com/OpenResearchInstitute/pluto_msk

Things worth doing are rarely easy. The Minimum Shift Keying (MSK) implementation of Opulent Voice (OPV) on the PLUTO SDR is no exception. Challenges to the documentation, design, implementation, and test cycles have come up throughout the process. The response from the open source digital radio community has been solid, inspiring, and productive, with patient problem-solving winning the day every time.

Michelle Thompson, ORI’s CEO, described her perspective on OPV’s progress recently at an IEEE event, stating:

“I would like to thank everyone helping ORI achieve ambitious goals for our uplink protocol. It’s a real pleasure to serve this community and meet the needs of this project. It’s a privilege to be part of something so innovative and relevant.”

This feeling is reflected by the OPV volunteer team, who are actively involved around the world in promoting open source work and helping ORI projects like OPV succeed.

Successes Over the Past Quarter and Current Status

OPV was the communications downlink for a NASA RockSat-X mission executed by the University of Puerto Rico, one of the most consistent and successful teams participating in this competitive program. The mid-August 2024 launch from Wallops went flawlessly. The performance of OPV met all expectations and data recovered from the launch is being used to further develop and improve the communications protocol.

OPV presentations at HamCon:Zion and DEFCON during the summer of 2024 were extremely well received, giving positive educational value to the community. Amateur Radio deserves quality digital communications. OPV delivers excellent results and is well-documented.

There is still a long way to go. The physical layer work is currently focused on tuning the tracking loops in the demodulator. Once the physical layer is reliable, then the functions that integrate voice, data, keyboard, and channel control messages will be integrated into the PLUTO SDR build and tested. Application layer functions in HTML 5 have a head start, with some tests already completed. Parallel integration efforts from F5OEO and OpenCPI, both of which will expand the number of hardware platforms that OPV can use, are well underway.

Below, an example of bit-accurate simulation results for OPV, with signal names on the left and waveform values on the right.

Below, the spectrum of OPV, showing main lobe null-to-null span of 81 kHz, and signal sidelobes.
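The 81 kHz figure is consistent with the MSK spectrum shape: the main lobe of MSK spans 1.5 times the channel bit rate, with nulls at the carrier ± 0.75 Rb. A minimal sketch of that relationship (the 54 kbps rate below is inferred from the 81 kHz span for illustration, not quoted from the OPV specification):

```python
def msk_null_to_null_hz(bit_rate_hz: float) -> float:
    """Null-to-null width of the MSK main lobe.

    The MSK power spectral density has its first nulls at
    fc +/- 0.75 * Rb, so the main lobe spans 1.5 * Rb.
    """
    return 1.5 * bit_rate_hz

# An 81 kHz main lobe implies a channel bit rate near 54 kbps.
assert msk_null_to_null_hz(54_000) == 81_000
```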

Haifuraiya and Modem Module Update

The work on Haifuraiya has appeal beyond the HEO/GEO satellite design itself. The “modem module” approach, developed by Matthew NB0X, is summarized at left.

At left is the most recent high level block diagram.

Haifuraiya – Open Source HEO/GEO Amateur Satellite Update

The multirate channelizer has been implemented in Remote Labs West on the zc706/ADRV9009 station. At left is a summary of the channel plan for the satellite receiver.

Regulatory Work at ORI
Making Open Source Easier for Everyone

Past regulatory work at ORI can be found at
https://github.com/OpenResearchInstitute/documents/Regulatory

A special Halloween season workshop about exorcising the Haunted Band of 219-220 MHz was held by IEEE and ORI on 29 October 2024.

The Zoom meetup was well attended, with the majority of RSVPs participating on the call. The meeting was recorded for those who couldn’t make it. The video recording of the background session, followed immediately by the workshop, can be found at https://youtu.be/wGX__qcFrT4

219 MHz is a secondary allocation for amateur radio. It has excellent channel characteristics, is open to fixed point-to-point digital links only, allows 100 kHz bandwidths, requires a minimum data rate of 56 kbps, and has seen no activity for decades. It’s time to take this band back!

Why is there no activity? We believe this band is inactive because of the notification requirements in FCC part 97 and part 80. We assert that these notification requirements, which pull ARRL into the loop from two different directions, are impossible for both amateurs and commercial licensees to comply with.

What are these notification requirements? If you as an amateur want to operate on 219 MHz, you have to determine whether there is an Automated Maritime Telecommunications System (AMTS) station within 80 km of your amateur station. If there is, you have to request permission from the AMTS station(s), with at least 30 days’ notice, in order to legally operate. If your station is between 80 km and 640 km from an AMTS station, you have to notify that station at least 30 days in advance of activating the band.

A publicly accessible database of amateur and AMTS stations on 219 MHz is supposed to be maintained by ARRL. This is a rule in Part 97 (for the amateur radio stations) and in Part 80 (for AMTS stations). This database does not appear to exist.

AMTS stations did not report their information to ARRL. Requests to ARRL for a list of AMTS stations have not been answered. Requests for a list of amateur radio stations that have notified ARRL of intent to operate have also gone unanswered.

These requests were made by multiple amateur radio operators between October and December 2023. This was before the 2024 cyberattack that sidelined many of ARRL’s services and activities.

The 219 MHz band regulations need a serious overhaul. We believe a proposal for new rules for this band is a necessary step towards restoring amateur use of this band. We believe that rules can be proposed that will both promote modern and innovative digital activity, and at the same time, not endanger the primary licensees in any way. While the AMTS service no longer exists as originally envisioned, the AMTS licenses have been leased out to fundamentally different services, such as railroads. These primary licensees can share the band with amateur radio in, at the very worst, a mutually inoffensive way and, at the very best, a mutually beneficial manner. It’s up to us to make this better future happen.

In order to make an effective proposal to the US FCC for updated rules for this band, there are things that we need to sort out. We need to know what we want and we need to know what the primary users want. We need to anticipate and answer objections to dropping the onerous notification requirements, and we need to clearly state the value that an amateur radio allocation on this band brings to education, public service, citizenship, and technical advancement of the radio arts.

The workshop on 29 October was the first strong step towards sorting out a draft of proposed rulemaking for restoring amateur radio service use of the 219 MHz band.

Discussion during the workshop resulted in an outline for a draft rulemaking proposal. ORI sincerely thanks everyone that participated during the event and through email correspondence.

Draft Outline

1) Demonstrate that the current rules are not working. State our case that the notification requirements are impossible to comply with and are the reason that there is no activity on this band.

2) Cite 60m as an example of successful similar activity.

3) Assert, providing evidence, that amateur radio operators already know how to share spectrum with other services. We already know how to not cause problems for the services that we share spectrum allocations with. We do not need additional notification requirements or to sign declarations to the ARRL in order to be able to successfully operate on an amateur radio band as a secondary user.

4) Show, from a ULS database perspective, the current utilization of 219-220 MHz. Amateur activity does not threaten primary licensees.

5) Show, to the best of our ability, the actual use of 219 – 220 MHz. This will require a call for participation for sensing stations, such as WebSDRs, amateur operators, receiver enthusiasts, and distributed receiver networks such as SatNOGS, to monitor 219-220 MHz for some period of time, in order to measure primary licensee activity on the band.

6) Assert that AMTS no longer exists. The current primary licensees are not coastal stations. The rules assuming coastal activity do not reflect the reality of this allocation as it exists today. Old rules are hampering efficient use of the spectrum.

7) Propose specific deletions of rules. What exactly should and could be struck down?

8) Anticipate the effects and repercussions, both positive and negative, of deleting the notification requirements.

9) Is there anything else that could or should be changed? We discussed the 100 kHz bandwidth requirement and its origins in specific equipment available to amateurs at the time. Current FCC thinking is to avoid rules that hardcode equipment requirements. Deleting the 100 kHz bandwidth limit in favor of no bandwidth or symbol rate limit had broad support at the workshop.

Thank you to all of you that support our regulatory work. It will take a lot of time and effort to draft proposed rules and to go through the proposal process. The FCC may or may not choose to entertain new rules for 219 MHz. If they do, then a public comment period will happen. If they do not, then the draft rules will still be of benefit, as they will record the point of view and consensus of digital mode amateur radio experimenters and enthusiasts.

Please join this effort at ORI by visiting https://openresearch.institute/getting-started

Sphere of Activity

July 12-13 2024 – ORI exhibited at HamCon:Zion in St. George, Utah, USA. We invited Deep Space Exploration Society (DSES) and QRZ.com columnist Dave Jensen to join our booth. We showcased their work alongside our technical and regulatory achievements. Total staff was three for this weekend event at a very nice conference center. Some of the travel expenses were reimbursed by ORI. This collaboration was entirely successful. Our CEO gave a talk about AI/ML in Amateur Radio at HamCon’s forum, which can be seen at https://www.youtube.com/watch?v=ZgHUavf5Ddo

August 8-11 2024 – Our Open Source Digital Radio exhibit at DEFCON 32 in Las Vegas, Nevada USA was a big success. ORI hosted demonstrations from RFBitBanger, Ribbit, Opulent Voice, and our Regulatory team in the RF Village. A staff of four set up, staffed, and tore down the exhibit. Travel and lodging were funded by ORI. Our CEO gave an RF Village talk about Synthetic Aperture Radar and Amateur Radio, which can be seen at https://www.youtube.com/watch?v=fiR4woe4O5Q

August 29 2024 – ORI participated in the quarterly Federal Communications Commission Technology Advisory Council meeting and presentation in Washington, DC. ORI is represented on the FCC TAC by Michelle Thompson and Matthew Wishek. Open Source and Amateur Radio interests have been defended at the federal level. ORI contributions to the Dynamic Spectrum Sharing and 6G Working Groups can be seen in the video at https://www.fcc.gov/news-events/events/2024/08/technological-advisory-council-meeting-august-29-2024. If you are interested in supporting regulatory participation and defense, please join ORI, support ORI’s work, and let us know how you want to help.

October 2-9 2024 – ORI organized a week-long event called Digital Update for Microwave (DUM) in Vancouver, British Columbia, Canada. There were three participants on site and nine virtual. Housing was reimbursed by ORI. The focus of the workshop was Opulent Voice and Iterative Receiver Theory technical work. Due to time limitations, we did not address the 10 GHz antenna questions posed in our outreach prior to the event. Better 10 GHz mobile radio gear is what people are looking for, and this will get attention from ORI over the next quarter. DUM provides opportunities for people interested in digital communications on the amateur microwave frequencies. The contributions to ORI’s GitHub repositories showed that DUM was well worth the effort. Should ORI have a DUM II?

Thank you to all who support our work! We certainly couldn’t do it without you.

Anshul Makkar, Director ORI
Frank Brickle, Director ORI
Keith Wheeler, Secretary ORI
Steve Conklin, CFO ORI
Michelle Thompson, CEO ORI

219 MHz FCC Rules Revision Workshop Announcement

All of us at ORI are looking forward to the “let’s take back 219 MHz for innovative and awesome digital operations” workshop coming up in a few days. (This is a US-centric spectrum management issue.)

The response has been very strong. Thank you!

Yes, the workshop will be recorded. No, it will not be successful without participation. If you care about completely unused VHF/UHF spectrum in the US, and want to see educational, modern, non-commercial use of this band happen without unnecessary impediments and silly rules, then we need your input now.

Here’s the link to RSVP (no you do not have to be an IEEE member to use vTools and sign up and attend):

https://events.vtools.ieee.org/m/439290

Even if you simply want a recording of the workshop, please register. You will then be notified when the recording is available.

If you would like to participate in “The Haunted Band” or any of the other 9 open source amateur radio projects at ORI, then it’s easy to join.

Please visit https://openresearch.institute/getting-started to get involved.

Inner Circle Newsletter: Summer 2024

A Sizzling Summer Summary prepared just for you!

Read on for highlights from all our technical and regulatory open source digital radio work. ORI’s work directly benefits amateur radio, provides educational and professional development opportunities for people all over the world, and puts ethics and good governance first.

Opulent Voice Flying High

Opulent Voice is an open source high bitrate digital voice (and data) protocol. It’s what we are using for our native digital uplink protocol for ORI’s broadband microwave digital satellite transponder project. Opulent Voice has excellent voice quality, putting it in a completely different category than low bitrate digital communications products such as D-Star, Yaesu System Fusion, and DMR.

Opulent Voice switches between high resolution voice and data without requiring the operator to switch to a separate packet mode. Opulent Voice also handles keyboard chat and digital file transmission. Seamless integration of different data types, using modern digital communication techniques, differentiates Opulent Voice from any other amateur radio protocol.

Opulent Voice will fly on the University of Puerto Rico’s RockSat-X launch on 13 August 2024. It’s been a very positive experience working with the students and faculty at the University.

An implementation on FPGA for the PLUTO SDR is well underway, with an active international team delivering quality results. This implementation will not only turn your PLUTO SDR into an Opulent Voice transceiver, but will also provide remote operation functionality.

Hear what Opulent Voice sounds like by following the links in an earlier update at https://www.openresearch.institute/2022/07/30/opulent-voice-digital-voice-and-data-protocol-update/

We’ve come quite a long way in less than two years! The FPGA implementation upgrades the modulation from 4-ary frequency shift keying to minimum shift keying, and increases forward error correction performance and flexibility.

HAMCON:ZION 2024 is This Week!

Please visit us at HAMCON:ZION 2024 this weekend, 12-13 July 2024 in St. George, Utah, USA.

The event website is https://www.hamconzion.com/

ORI will have a club booth at the event. We opened our space to QRZ.com (https://www.qrz.com/) and Deep Space Exploration Society (https://dses.science/). This combined exhibit is a one-stop shop for the best in community, technical, and citizen science amateur radio activity.

We have a volunteer presenting on Artificial Intelligence and Machine Learning in Amateur Radio. The talk opens with a brief summary of the history of our relationship with created intelligence and then explores case studies of the use of artificial intelligence and machine learning in amateur radio. The talk is at 1 p.m. on Friday in Entrada B.

Open Research Institute at DEFCON32

We will present an Open Source Showcase at DEFCON in the Radio Frequency Village 12-13 August 2024, with accessible exhibits and demonstrations. Here is the list of scheduled project demonstrations.

Regulatory Efforts: ORI works hard for open source digital radio work and moves technology from proprietary and controlled to open and free in intelligent and mutually beneficial ways. Our work on ITAR, EAR, Debris Mitigation, AI/ML, and Synthetic Aperture Radar will be presented and explained. Find out more at https://github.com/OpenResearchInstitute/documents/tree/master/Regulatory

Ribbit: this open source communications protocol uses the highest performance error correction and modern techniques to turn any analog radio into a digital text terminal. No wires, no extra equipment. Learn how to use this communications system and get involved in building a truly innovative open source tactical radio service. Find out more at https://www.ribbitradio.org

Satellite: ORI has the world’s first and only open source HEO/GEO communications satellite project. All working parts of the transponder project will be demonstrated, from Opulent Voice to antenna designs.

The Dumbbell antenna: We have an HF antenna design based on a highly effective inductive loading technique first written about in 1958. Learn about this antenna and find out how to make your own. Repository can be found at https://github.com/OpenResearchInstitute/dumbbell

RFBitBanger: an HF QRP system and novel digital protocol called SCAMP. Kit information and updates will be available. Get your Batch 2 kit today at https://www.ebay.com/itm/364783754396

Radar: Our regulatory and technical work in synthetic aperture radar will be demonstrated. One of our volunteers will be giving a talk about open source synthetic aperture radar in the RF Village speakers track. Here is the abstract.

Synthetic Aperture Radar (SAR) is one of the most useful and interesting techniques in radar, providing high resolution radar satellite images from relatively small satellites. SAR is not limited by the time of day or by atmospheric conditions. It complements satellite photography and other remote sensing techniques, revealing activity on the Earth that would otherwise be hidden. How does the magic happen? This talk will explain the basics of SAR in an accessible and friendly way. That’s the good news.

The bad news? SAR is controlled by ITAR, the International Traffic in Arms Regulations, and is listed in the USML, the United States Munitions List. ITAR regulates the export of defense articles and services and is administered by the US State Department. This includes products and services as well as technical data, such as catalogs of high resolution radar imagery. The articles and services regulated by ITAR are identified in the USML. If ITAR doesn’t get you, then EAR just might. The Export Administration Regulations (EAR) are administered by the US Commerce Department, and items are listed on a Commerce Control List (CCL). Commercial products and services and dual-use items that are not subject to ITAR could be regulated by EAR. Even if you are free of ITAR and EAR, you may still be regulated by yet another agency, such as the FCC.

Regulation of SAR chills commercial activity, creating costly and time-consuming burdens. But why does any of this matter to signals hackers? Because technology has overtaken the rules, and devices used by enthusiasts, researchers, students, and hackers are increasingly likely to have enough capability to fall into export-controlled categories. The penalties are harsh. Fear of running afoul of ITAR is enough to stop a promising open source project in its tracks.

Is there a way forward? Yes. ITAR has a public domain carve out. Information that is published and that is generally accessible or available to the public is excluded from control as ITAR technical data. That’s great in theory, but how can we increase our confidence that we are interpreting these rules correctly? How can we use and build upon these rules, so that our community can learn and practice modern radio techniques with reduced fear and risk? Can we contribute towards regulatory relief when it comes to SAR? We will describe the process, report on the progress, and enumerate the challenges and roadblocks.

RFBitBanger Batch 2 Kits Available

Kits are available at our eBay store at this link https://www.ebay.com/itm/364783754396

Be a part of the future with a prototype Batch 2 kit build of the RFBitBanger, a low-power high-frequency digital radio by Dr. Daniel Marks KW4TI. Presented by Open Research Institute, this kit is designed to produce 4 watts of power and opens up a new digital protocol called SCAMP.

SCAMP is now available in FLDigi!

Source code and extensive instructions can be found at https://github.com/profdc9/fldigi

Your donation in exchange for this kit directly enables the further development of an innovative Class E amplifier based radio design. It has a display, button menu navigation, and keyboard connection for keyboard modes and keyboard-enabled navigation. This radio can be taken portable or used in a case. If you have a 3D printer, Dr. Marks has a design ready for you to print in the repository linked below.

  • Built-in digital modes: CW, RTTY, SCAMP (FSK and OOK, multiple speeds)
  • Key jack supports straight keys and iambic paddles
  • Open Source hardware and firmware, Arduino UNO compatible https://github.com/profdc9/RFBitBanger
  • External sound-card FSK digital modes supported (including FT4/FT8)
  • Experimental SSB support
  • Serial port support (2400 baud) for send and receive in keyboard modes

SCAMP is a new protocol that allows keyboard-to-keyboard contacts with a digital protocol that has excellent connection performance. See Dr. Marks’s presentation about RFBitBanger at QSO Today Academy in September 2023 to learn more about SCAMP and the RFBitBanger project. Link below:

All surface mount parts on the main board are pre-installed at the factory. All the through-hole parts you need to complete the radio are provided for you to solder yourself.

Builder’s notes and photos of all the components to help you identify and install them can be found here:

https://github.com/OpenResearchInstitute/RFBitBanger-kit/tree/main/batch2

If you don’t know how to wind toroids or solder surface mount capacitors, this is an excellent kit to learn on. There are just six toroids on the main board, and two on each band pass filter board. You can build just one band pass filter board and operate on a single band, or you can build an assortment. We provide 12 filter boards, enough toroids to build any 9 filters, and a supply of capacitors that will let you build those 9 filters for 9 different HF ham bands. These capacitors are size 1206, which is the largest common size for SMT capacitors and the easiest to solder manually. All you’ll need is a pair of tweezers and your regular soldering iron and solder. We provide detailed instructions on winding the toroids and soldering the capacitors. You get spare filter boards to experiment with.

Friendly support is provided through a dedicated Open Research Institute Slack channel.

Instructions on how to join this community are here:

https://www.facebook.com/openresearchinstitute https://www.instagram.com/open_research_institute/
https://x.com/OpenResearchIns

Standards Groups and Advisory Work

We have been hard at work defending #opensource and #hamradio in multiple ways by actively supporting standards committees and advisory groups.

We are a member of the United States Federal Communications Commission Technological Advisory Council, serving on the Advanced Spectrum Sharing and 6G Working Groups.

We serve on the IEEE P1954 Standard for Self-Organizing Spectrum-Agile Unmanned Aerial Vehicles Communications working group.

We participated in several notices for proposed rulemaking at multiple government agencies over the past year, and expect this work to continue.

We have assisted in several successful experimental license applications for amateur radio satellite communications projects.

ORI is doing the work that helps the future of open source digital radio.

Thank you to all who support our work.

Comments from ORI to the FCC on Advancing Understanding of Non-Federal Spectrum Usage

The comments as filed can be found here:

https://www.fcc.gov/ecfs/document/1009106258899/1

Text of the comments is below.

Comments of
Open Research Institute, Inc.

#1873 3525 Del Mar Heights Road
San Diego, CA, 92130
United States of America
9 October 2023

Open Research Institute (ORI) is a non-profit research and development organization devoted to Open Source digital radio technology.

ORI’s mission is to provide practical open source wireless communications solutions for terrestrial and space applications. ORI provides significant workforce development opportunities to the volunteers engaging in Open Source technical work.

Open Source means that everything required to reproduce a software, hardware, or firmware design is freely available to the general public. An Open Source design must have free redistribution, allow modifications and derived works, and be non-discriminatory against persons, groups, or fields of endeavor. Open Source work cannot be specific to a particular product, cannot restrict other software or hardware, and must be technology-neutral.

Open Source is vital to the United States’ economic competitiveness in telecommunications. The Internet runs on Linux, an Open Source computer operating system, as 96.3% of the top one million web servers run on Linux, and the vast majority of the underlying networking infrastructure uses either Linux or other Open Source operating systems and libraries.

The telecommunications industry has historically been heavily reliant on proprietary software and hardware. An increase in the adoption of Open Source, from OpenRAN to Nephio, has enabled telecommunications companies to more quickly and efficiently meet market demands. There is a broad trend towards the continuing increase in adoption of Open Source designs in order to avoid silos of wheel reinvention and to promote effective interoperability.

The Open Source community can address this inquiry.

Introduction

ORI agrees that spectrum usage information is generally nonpublic and infrequently available. In the spirit of Open Source and Open Access, we believe that publicly available high-quality data about spectrum usage is in the national interest.

ORI agrees that the need for spectrum usage information will only continue to grow. In order to produce useful models, artificial intelligence and machine learning require enough data to complete the training process. Without enough data, a machine learning model can suffer from a condition called overfitting. When this happens, the model becomes a very expensive copy machine, presenting the training data as output, regardless of the input. The lack of sufficiently large high quality radiofrequency data sets is widely acknowledged as an impediment to research and development for radiofrequency machine learning.
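The “expensive copy machine” failure mode is easy to demonstrate with synthetic numbers (this sketch has nothing to do with any particular RF data set): a model with as many parameters as training points reproduces its training data exactly, yet extrapolates far worse than a simpler model:

```python
import numpy as np

# Five roughly linear training points and one held-out test point.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.0, 1.1, 1.9, 3.2, 3.9])
x_test, y_test = 6.0, 6.0  # the underlying trend is roughly y = x

# Degree-4 polynomial: five parameters for five points -> exact interpolation.
overfit = np.poly1d(np.polyfit(x_train, y_train, 4))
# Degree-1 polynomial: two parameters -> an honest summary of the trend.
simple = np.poly1d(np.polyfit(x_train, y_train, 1))

train_error = np.max(np.abs(overfit(x_train) - y_train))  # ~0: a "copy machine"
overfit_error = abs(overfit(x_test) - y_test)
simple_error = abs(simple(x_test) - y_test)
# The overfit model memorized the training data but extrapolates badly;
# the simple model generalizes far better to the unseen point.
```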

ORI agrees that the development of new and innovative spectrum sharing techniques, allowing increased co-existence among users and services, will improve spectrum management. Spectrum usage information is required in order to develop new spectrum sharing techniques. This is true whether or not machine learning is used either in the process or in the product. In other words, even if only ordinary humans had the task of improving spectrum sharing over and above what we have today, those humans would still need spectrum usage information to achieve their goal.

Without good spectrum usage information, neither machine learning nor human architects will be able to confidently produce quality results. The most common outcome of guessing in the absence of spectrum usage information is a highly conservative spectrum sharing arrangement that does not fully utilize spectrum, overly restricts licensees, and prevents innovation in the field.

Central Question

We want a more sophisticated knowledge of non-Federal spectrum usage. The central question of this inquiry is: how can we take advantage of modern capabilities for gaining this knowledge in a cost-effective, accurate, scalable, and actionable manner?

In addition to the other spectrum monitoring efforts listed in this inquiry, we can start with the concepts established by the Spectrum Monitoring Pilot Program from NTIA/NIST.

This program measured spectrum occupancy with standardized distributed receivers reporting to Measured Spectrum Occupancy Databases. These databases publish the metadata of their measurements so that the measured data can be retrieved over HTTPS connections. The concepts of federation are used in order to avoid inefficient and expensive replication of measured data.

The Spectrum Monitoring Pilot Program had two classes of receivers. One was dedicated to radar and the other to communications. The communications receiver approach is an order of magnitude less expensive than the radar receiver approach.

If we restrict the hardware package to the less expensive category of communications receiver, use modern software-defined radios with Open Source software, incorporate existing online radios (“WebSDRs”) alongside the standardized stations, and are deliberate and innovative about incentives, then a low-resolution receiver network that can produce useful spectrum usage information is achievable.

A Low Resolution Receiver Network – Why?

Why low resolution? Because a broad heatmap of spectrum usage, even at low resolution, is valuable in a different way for spectrum management purposes than a small amount of high resolution data from one service in one geographical area.

The current situation is one of essentially no real-time spectrum usage information. Even if we simply had noise floor measurements across the bands across the country, and even if those measurements were gathered from stations of varying quality, we would have an immense improvement in our capacity to intelligently manage our spectrum over having no measurements at all.

Towards a Weather Map of the National Noise Floor

Noise floor is the measure of noise power per unit bandwidth. Getting a snapshot of something like a National Noise Floor, comparable to a national radar weather map, needs a diversity of radio receivers. We need to be able to measure or estimate power spectral density from these receivers. Services with intermittent use must be measured often enough to produce minimally accurate data, or those services must be covered with alternate techniques.
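For context, the ideal thermal noise floor for a given measurement bandwidth is standard kTB arithmetic. A minimal sketch (generic physics, not tied to any particular receiver in this proposal):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def thermal_noise_floor_dbm(bandwidth_hz: float, temp_k: float = 290.0) -> float:
    """Ideal thermal noise power (kTB) in dBm for a given bandwidth.

    At 290 K this works out to roughly -174 dBm/Hz, so a 2.5 kHz
    channel has an ideal floor near -140 dBm. Real stations sit above
    this by their receiver noise figure plus man-made noise.
    """
    watts = BOLTZMANN * temp_k * bandwidth_hz
    return 10.0 * math.log10(watts * 1000.0)  # W -> mW -> dBm

floor = thermal_noise_floor_dbm(2500.0)  # close to -140 dBm
```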

As an example of where alternate techniques can work, FT8 mode contacts on the 17 meter Amateur Radio Service band can be obtained from https://gridtracker.org/. These text-based reports can be used to estimate the power spectral density at the radios running Grid Tracker. Reports from radios can be used to create a map of real-time spectrum usage without directly measuring the spectrum. These are estimates, but they are based on the measured signal-to-noise reports that the radios give each other in each 15-second exchange.

We can compare this type of observation to eavesdropping on roomfuls of people talking to each other, and writing down how many conversations (and attempted conversations) there were. Multiple eavesdroppers can spread through the house, combine their notes, and show where the conversational density was highest.

What does this look like on the 17 meter Amateur Radio Service allocation with a typical FT8 station?

A stream of which stations contacted which other stations is reported from a single radio. The stream consists of all the contacts that the radio has heard from that location. The radio is not directly contacting all of these other stations, but it can hear them all and tracks who is trying to talk to whom, and when. Any radio on the band can act like the eavesdropper above.

Open Source software (called WSJT-X) controls the radio and demodulates and decodes all received transmissions across the entire sub-band. WSJT-X does include a spectrum waterfall display, which could be used to obtain the power spectral density of all the simultaneous transmissions, but we do not have to do this. We instead use another commonly installed Open Source program (called Grid Tracker) which takes the text output of WSJT-X and provides a text list of active stations and their reported signal power. This list can produce a calculated estimate of the power spectral density in the band. It is less math intensive to work with text-based signal reports for fixed formal signals like FT8 than it is to use a spectrum analyzer waterfall or deal with the IQ recordings of a radiometer.
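The estimation step described above can be sketched in a few lines. This is a hypothetical illustration: the callsigns and SNR values are invented, and the simple power sum stands in for whatever weighting a real implementation would use (FT8 SNR reports are conventionally referenced to a 2.5 kHz noise bandwidth):

```python
import math

# Hypothetical decoded-station list, as a Grid Tracker text export might look:
# (callsign, grid square, reported SNR in dB re 2.5 kHz noise bandwidth)
reports = [
    ("K1ABC", "FN42", -12),
    ("W7XYZ", "CN87", -3),
    ("N0DEF", "EN34", -18),
]

def aggregate_signal_to_noise_db(snrs_db):
    """Total decoded signal power across all reported stations,
    in dB relative to the noise in the reference bandwidth.

    Each SNR is converted to a linear power ratio and summed; the
    sum is a crude proxy for occupied power in the FT8 sub-band.
    """
    total = sum(10.0 ** (snr / 10.0) for snr in snrs_db)
    return 10.0 * math.log10(total)

estimate = aggregate_signal_to_noise_db([snr for _, _, snr in reports])
```

More radios (more eavesdroppers) simply contribute more rows to the list, and the aggregate estimate improves with coverage.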

Adding more radios (more eavesdroppers) improves coverage.

Does this exact network exist today? It almost does. Instances of Grid Tracker, commonly installed alongside WSJT-X and running whenever the station is active, already report all of this information, but as of today there is no coordinated reporting. However, since the software is Open Source, adding an opt-in function to do some math and donate the data to a server, producing a National Noise Floor snapshot for this particular mode, is achievable.

This example outlines a proof of concept of a very tiny slice of the HF spectrum at 18.100 MHz, but it shows the advantage of using existing Open Source software, existing radios and existing communities
of licensed operators. Incentives to licensees to participate could be as simple as getting an opportunity to defend their licensed spectrum up to recognition awards for donating data consistently over time.

Achieving Aggregated Wireless Sensing

How can we broaden this concept? First, leverage existing online radio receivers, such as broadband WebSDRs. See http://www.websdr.org/ for deployed examples. Power spectral density measurements or estimates can be obtained from receivers already on the air and already connected to the Internet.

An anticipated and understandable objection is that the multitude of WebSDRs across the country are not calibrated stations and they are not standardized. An owner could take one down at any time. A
new one might be put on the air tomorrow. The answer to these objections is that the aggregation of these observations, even if the observations are of varying quality, provides immense value to
spectrum management, because these receivers can produce real-time spectral usage information with no additional radio hardware investment.

We should, of course, not stop there. We must commit to a both/and approach, not an either/or approach, to answer the central question of this inquiry.
Second, deploy inexpensive, standardized, and calibrated receivers to libraries, schools, post offices, and any other institution or organization that can be either incentivized or mandated to host one.

For a model of an Open Source standardized distributed receiver system producing real-world practical radio results, please refer to the SatNOGS project at https://satnogs.org/

What are some standardized station examples that we could deploy in the United States to achieve the
goals of this inquiry?

An Open Source PLUTO SDR running Open Source Maia software creates an inexpensive spectrum analyzer with a built-in webserver. The addition of the federated reporting functions is possible because
the source code for Maia can be modified to include them. Maia can be found at https://maia-sdr.org/. Documentation for the standard PLUTO firmware (which is largely replaced by
the Maia firmware) can be found at

https://github.com/analogdevicesinc/plutosdr-fw

and documentation for the PLUTO hardware can be found at

https://wiki.analog.com/university/tools/pluto/hackers

A PLUTO/Maia package can cover frequencies from 70 MHz to 6 GHz. It would require one or more antennas (depending on how many bands are to be monitored by that station), a power supply, a weatherproof enclosure, mechanical attachments, and cables. A proof of concept would be expected to cost less than the Spectrum Monitoring Pilot Program communications receiving station proof of concept, which came in at $6000 according to “An Overview of the NTIA/NIST Spectrum Monitoring Pilot Program”.

This can be read at

https://its.ntia.gov/umbraco/surface/download/publication?reportNumber=CottonSpectMonIwssSubmitted.pdf

A second and even less expensive example of a standardized station would be an RTL-SDR

https://www.rtl-sdr.com/about-rtl-sdr/

and a Raspberry Pi

https://www.raspberrypi.com/

running Linux. This kit can use a large number of Open Source software-defined radio packages. It can be tuned to any of the bands in its operating range of 2.4 MHz to 2 GHz. For a sweep of the entire operating range, multiple antennas and combiners would be necessary, along with some additional equipment and software.
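To give a feel for the scale of such a sweep, here is a back-of-the-envelope sketch in Python. Treating the RTL-SDR's roughly 2.4 MHz instantaneous bandwidth as one tuning step, with no overlap between steps, is an illustrative assumption:

```python
# Sketch: how many retunes does an RTL-SDR need to sweep its whole
# operating range? The 2.4 MHz usable bandwidth per step and the
# 2.4 MHz to 2 GHz range come from the text; zero overlap between
# adjacent steps is an illustrative simplification.

def sweep_steps(start_hz, stop_hz, step_bw_hz, overlap_hz=0):
    """Number of tuning steps to cover [start_hz, stop_hz] when each
    step covers step_bw_hz and adjacent steps overlap by overlap_hz."""
    effective = step_bw_hz - overlap_hz
    span = stop_hz - start_hz
    return -(-span // effective)   # ceiling division on integers

steps = sweep_steps(2_400_000, 2_000_000_000, 2_400_000)
```

With a 100 ms dwell per step, a full sweep would take steps * 0.1 seconds, which illustrates why a station would more likely monitor a few chosen allocations than sweep everything continuously.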

The WebSDRs combined with the standard package stations form a heterogeneous distributed receiver network. In aggregate, with enough stations, the resulting network can provide useful real-time reports
of spectrum usage information. A centralized visualization of actual spectrum usage, even if not real-time, would be very informative. If a picture is worth a thousand words, then a video is worth a thousand pictures. People seeing gaps in the data may be motivated to put up or sponsor a station to fill them in, similar to the effect we see with personal weather stations that measure temperature, wind
speed, and other meteorological aspects.
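One hedged sketch of how such an aggregation could work: pool per-station reports by Maidenhead grid square and take a robust statistic per square. The report format below is our invention for illustration, not a published reporting standard:

```python
# Sketch: aggregate heterogeneous station reports into a coarse
# heatmap. Each report is a (grid_square, noise_floor_dbm) pair;
# this tuple format is an illustrative assumption.
from collections import defaultdict

def heatmap(reports):
    """Median noise floor per Maidenhead grid square. The median
    tolerates outliers from uncalibrated stations better than a mean
    would."""
    by_square = defaultdict(list)
    for square, dbm in reports:
        by_square[square].append(dbm)
    result = {}
    for square, values in by_square.items():
        values.sort()
        n = len(values)
        mid = n // 2
        result[square] = values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2
    return result
```

Using the median is a deliberate design choice here: a single badly miscalibrated WebSDR in a grid square shifts the mean but barely moves the median.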

TAC Working Group as Asset

The Dynamic Spectrum Allocation Working Group of the Technological Advisory Council of the Federal Communications Commission could provide advisory feedback on real-time spectral usage information obtained through opportunistic and inexpensive sensor networks, invite speakers to present about the topic, and give specific recommendations. Leveraging an existing Working Group with familiarity in this particular topic would be an efficient use of already-available expert advice.

Conclusion

A National Noise Floor heatmap, even if low resolution, is achievable and it is valuable. Any reasonable real-time data, whether obtained opportunistically or purposefully, is an enormous step forward compared to no data at all.

There are drawbacks to low-resolution data. The limits of the resolution must be acknowledged. The measurements have to be at least reasonable, meaning that a snapshot of the noise floor includes enough
information that lower-power signals aren't completely overlooked or missed. For each frequency allocation measured, a subject matter expert in that allocation would be expected to compare the real-time spectrum usage information to a model of expected spectrum usage. The difference between this theoretical or calculated model and the real-time spectrum usage information is valuable and informative in many ways. A subject matter expert would be able to explain observed differences, explain whether and why any difference was sufficient reason to make spectrum management adjustments, and provide feedback for improved spectrum sensing. There is no one-size-fits-all solution for either the measurement stations involved or the allocations they are measuring.
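A toy sketch of that expert-comparison step, with made-up band names and occupancy fractions (the expected model would come from the subject matter expert for each allocation):

```python
# Sketch: compare real-time occupancy against a model. The band
# names and occupancy fractions are illustrative assumptions.

def occupancy_deltas(expected, observed):
    """Per-band difference (observed minus expected occupancy
    fraction), for bands present in both dictionaries."""
    return {band: round(observed[band] - expected[band], 3)
            for band in expected if band in observed}

deltas = occupancy_deltas({"20m": 0.40, "70cm": 0.10},
                          {"20m": 0.55, "70cm": 0.08})
# A large positive delta flags unexpectedly heavy use worth review;
# a large negative delta might indicate a sensing gap instead.
```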

The architecture for gaining visibility into spectral usage has been previously piloted in the Spectrum Monitoring Pilot Program. This and other excellent prior work can be adapted, and citizens can be
incentivized to participate in order to scale up the sensor network. Incentives range from being able to contribute directly towards the defense of a spectral allocation, to awards or recognition for the technical achievement of constructing and calibrating a station to a published standard, to a scoreboard of who provided the most consistent reports of real-time spectral information
over specific lengths of time.

There is a large amount of Open Source software and hardware that can be used to reduce costs and reward high-quality collaborative and cooperative work. A “lower-tech, inexpensive, diverse, and start now” approach instead of a “high-tech, expensive, maybe sometime in the future” one is cost-effective, accurate (enough), scalable, and actionable.

Respectfully,
Michelle Thompson
CEO ORI

AI vs. Human: ChatGPT vs. ORI Open Source Designers

Greetings all,

You know that ORI has been active in a variety of regulatory work for several years now. One of those areas has been in AI/ML and the effect on communications. We served on the US FCC TAC in 2022, have presented and written on the subject, and are working towards an AI/ML Handbook for Amateur Radio.

You may have seen news about OpenAI GPT, or ChatGPT.

https://openai.com/api/

A lot of people are testing this chatbot out for reverse engineering and for writing software. There have been some successes in cases where the model would have had access to code or documentation about protocols, observations, or technical behavior during training, and where the requests match up well with the model.

For example, a friend asked the chatbot to create a Python script to access the webcam on their computer, and they shared a still image from the resulting feed. Another friend got better reverse engineering results from the chatbot than they had gotten with either IDA or Ghidra over many weeks.

So, how does ChatGPT handle VHDL?

We have something specific to compare against, since we just completed a COBS decoder in VHDL for our open source HEO/GEO transponder work for #Haifuraiya.

Let’s start off simple. Here’s the first dialogue with the chatbot, and the results.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder in VHDL

Sure, here’s an example of a COBS decoder in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity COBS_decoder is
  Port (
    input_data : in std_logic_vector;
    decoded_data : out std_logic_vector
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in input_data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if input_data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      decoded_data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;
end Behavioral;

This code decodes the input data using the COBS (Consistent Overhead Byte Stuffing) algorithm, which involves replacing runs of zeros in the input data with a single byte that indicates the length of the zero run. For example, the input '101001000' would be decoded as '103101'.

This decoder iterates over the input data and maintains a variable zero_run_length that is incremented for each zero in the input data. When a non-zero value is encountered, the length of the zero run is written to the output data and the zero_run_length variable is reset.

I hope this helps! Let me know if you have any other questions.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knew what COBS meant. It produced code, with comments, and a brief “design document”, with examples.

The code produced is not licensed or cited.
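For reference, here is what COBS decoding actually does, as a short Python sketch of our own (the byte values in the example are ours, not ChatGPT's): each code byte says how many bytes belong to the current block, and stands in for a stuffed zero.

```python
# Minimal sketch of actual COBS decoding, for contrast with the
# zero-run-counting behavior in the generated code above. Each code
# byte n means "the next n-1 bytes are literal data", and (when
# n < 255 and more data follows) it replaces a zero byte that sat
# at the end of that block in the original data.

def cobs_decode(encoded):
    out = []
    i = 0
    while i < len(encoded):
        code = encoded[i]
        assert code != 0, "zero bytes only delimit frames"
        out.extend(encoded[i + 1:i + code])   # code-1 literal bytes
        i += code
        if code < 255 and i < len(encoded):
            out.append(0)                     # the stuffed zero
    return out
```

For example, the encoded frame [0x03, 0x11, 0x22, 0x02, 0x33] decodes to [0x11, 0x22, 0x00, 0x33]: nonzero data passes through, and the zero is reconstructed from the code bytes.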

Our open source HEO transponder project uses AXI-based blocks in the FPGA. So, let’s add AXI to the request.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder with AXI interface in VHDL

Sure, here’s an example of a COBS decoder with an AXI interface in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- AXI interface definitions
library unisim;
use unisim.vcomponents.all;

entity COBS_decoder is
  Port (
    AXI_IN  : in  AXI4Stream;
    AXI_OUT : out AXI4Stream
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in AXI_IN.data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if AXI_IN.data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      AXI_OUT.data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;

  -- Set the valid and last flags in the output AXI stream
  AXI_OUT.valid <= '1';
  AXI_OUT.last  <= '1';
end Behavioral;

This code is similar to the previous example, but it uses an AXI (Advanced eXtensible Interface) interface for the input and output data. The AXI interface is a standard interface used in many FPGA designs, and it provides a way to stream data into and out of the decoder.

In this code, the input data is read from the AXI_IN.data port, and the decoded output is written to the AXI_OUT.data port. The valid and last flags are also set in the output AXI stream to indicate that the data is valid and that it is the last data in the stream.

I hope this helps! Let me know if you have any other questions.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knows about the previous request and updated the design. No license.

Here’s a review of the COBS implementation from Paul Williamson KB5MU.

“I don’t know what this is, but it isn’t a COBS decoder with AXI interfaces.

It does seem at first glance to be some sort of thing with AXI interfaces, but it does not implement an AXI interface at all. It just relies on a (possibly imaginary) library type called AXI4Stream that somehow magically does AXI interfaces, in a way that’s both implausibly smart and inexplicably dumb. I think the classic phrase “not even wrong” applies here.

The AXI4Stream object apparently has a member .data that is an array of bytes, which might make sense if the entire AXI transfer is being treated as a single entity here. The code iterates through the input .data using the ’range attribute, as if the AXI4Stream object has a defined fixed transfer length.

Which it can’t have, if it’s a general purpose AXI stream implementation.

So, if this was ever anything real, it must have been a special purpose AXI implementation with a fixed transfer length. Assuming that the AXI4Stream .data actually works that way, the entity does
do some sort of transformation on the input data to create the output data.

We asked for that to be a COBS decoder, but the function that it actually implements bears only the slightest resemblance to a COBS decoder. What it actually does is output a stream of words that are the lengths of the runs of zeroes in the input data. Nonzero data does not get passed through at all.

Instead, a run of nonzero bytes gets translated into a run of zeroes, as if two adjacent nonzero bytes meant nothing but a zero-length run of zeroes.

Here’s an example of what this code seems to do:

  Input  .data:  1 2 3 4 0 0 0 0 1 2 3 4
  Output .data:  0 0 0 0         4 0 0 0

That’s certainly not a COBS decoder, or even anything useful at all.

Note that the output is variable length, but never longer than the input.
We had to assume that the AXI4Stream object had a fixed .data size to get this far, but now it has to have a variable data size for the output.

Supposing we had successfully suspended disbelief up to here, we now examine the final two lines of code in the Behavioral architecture. It sets the .valid and .last members of the AXI_OUT port to ‘1’.

That makes absolutely no sense if it’s supposed to encapsulate an entire AXI stream transfer. Those signals only make sense if we’re treating one word at a time, as you’d expect to see in a normal, general-purpose AXI stream object.

The extremely verbose commenting style in this code suggests that the original source material was from an introductory tutorial. Maybe the AI has been tuned to favor heavily commented code, and ends up picking up partially completed junk from the middle of tutorials?

Anyhow, this is not an example of automated programming in response to a high-level query. This is gibberish that succeeds only in having a superficial resemblance to the code requested.”

So, does this work?

Let’s try it out in the lab!

The first thing we notice is that there is an obvious syntax error with a missing semicolon. Aside from that, there is no AXI4Stream port type. It’s not recognized. Vivado appears to be disappointed in this file, and wants little to do with this code. This is where we should stop, but I spent some time looking to see if the ports were at least close to being declared correctly.

The model appears to have taken some custom or maybe enumerated type from somewhere, and come up with a custom AXI4Stream type. This is not recognized by the unisim library right off the bat, and that’s the sort of thing that automated code writing needs to get right.

Let’s look at the code written by humans from ORI. The decoder can be found here:

https://github.com/Abraxas3d/fpga_cores/blob/master/src/decoder.vhd

-=-=-=-=-=-=-=-=-=-=-=-=-=-

-- Company: Open Research Institute, Inc.
-- Engineer: Skunkwrx, Abraxas3d
-- Design Name: COBS protocol decoder
-- Module Name: decoder - Behavioral
-- Project Name: Phase 4 "Haifuraiya"
-- Target Devices: 7000 Zynq
-- Tool Versions: 2021.1
-- Description: COBS protocol decoder.

-- Additional Comments: This work is Open Source and licensed using CERN OHL v2.0

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

use IEEE.NUMERIC_STD.ALL;

-- Entity Declaration
entity decoder is
    Port ( rst      : in STD_LOGIC;
           clk      : in STD_LOGIC;
           s_tdata  : in STD_LOGIC_VECTOR (7 downto 0);
           s_tlast  : in STD_LOGIC;
           s_tvalid : in STD_LOGIC;
           s_tready : out STD_LOGIC;
           m_tdata  : out STD_LOGIC_VECTOR (7 downto 0);
           m_tlast  : out STD_LOGIC;
           m_tvalid : out STD_LOGIC;
           m_tready : in STD_LOGIC);
end decoder;

-- Architecture
architecture Behavioral of decoder is

    -- internal copy of s_tdata
    signal input_data          : STD_LOGIC_VECTOR (7 downto 0);

    -- s_tdata delayed by one clk where s_tvalid is high
    signal input_data_d        : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of s_tlast
    -- not used for anything
    -- We create our own m_tlast from the COBS framing instead
    signal s_tlast_i           : STD_LOGIC;

    -- internal version of input handshaking signal s_tvalid
    signal s_tvalid_i          : STD_LOGIC;

    -- s_tvalid delayed by one clk
    -- thus, a version of s_tvalid aligned with input_data_d
    signal s_tvalid_i_d        : STD_LOGIC;

    -- s_tvalid delayed by two clks
    -- thus, a version of s_tvalid aligned with output_data
    signal s_tvalid_i_d_d      : STD_LOGIC;

    -- countdown of bytes in this sequence
    -- loads from input_data_d when it represents the frame count
    --   as determined by signal counter_load
    -- contains the count while m_tdata contains the implied 0 at
    --   the end of a sequence, or the next count that replaced it
    -- counts down thereafter, to contain 1 when the last non-zero
    --   byte of the sequence is on m_tdata
    -- allowed to count down to 0, but not beyond
    signal count               : STD_LOGIC_VECTOR (7 downto 0);

    -- enable to load count from input_data_d on this clk edge
    -- two cases detected:
    --    * first valid non-zero byte after a frame separator
    --    * first valid byte after count is exhausted
    -- allowed to be high for multiple cycles
    signal counter_load        : STD_LOGIC;

    -- counter_load delayed by one clk where s_tvalid is high
    -- used to identify the first valid data byte of any sequence,
    --   for purposes of computing m_tvalid (via pre_tvalid)
    signal counter_load_d      : STD_LOGIC;

    -- detection of a valid frame separator (zero) byte in input_data_d
    signal frame_sep           : STD_LOGIC;

    -- frame_sep delayed by one clk where s_tvalid is high
    -- used to compute counter_load
    -- used to compute rising edge of pre_tvalid
    signal frame_sep_d         : STD_LOGIC;

    -- frame_sep_d delayed by an additional clk (not depending on s_tvalid)
    -- used to find the first non-zero byte of the new frame
    signal frame_sep_d_d       : STD_LOGIC;

    -- move the frame_sep signal that occurred during m_tready low
    -- out to the first cycle when m_tready is high again
    signal use_saved_frame_sep : STD_LOGIC;

    -- flag to remember that the frame count for this sequence was 255,
    --   to handle the special case that such a sequence does not have
    --   an implied zero byte at the end.
    -- set when loading count with 255
    -- cleared when the counter is reloaded with anything else
    signal case_255            : STD_LOGIC;

    -- internal version of m_tdata output
    signal output_data         : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of m_tlast output
    -- high when the last byte of a frame is valid on m_tdata
    signal m_tlast_i           : STD_LOGIC;

    -- delayed versions of m_tlast
    signal m_tlast_i_d         : STD_LOGIC;
    signal m_tlast_i_d_d       : STD_LOGIC;

    -- intermediate result for m_tvalid.
    -- high across all data bytes of each sequence on m_tdata
    -- does not go low for bytes on m_tdata corresponding to
    --   bytes invalidated by s_tvalid.
    signal pre_tvalid          : STD_LOGIC;

    -- internal version of m_tvalid output.
    -- pre_tvalid with periods of low s_tvalid_d_d punched out
    signal m_tvalid_i          : STD_LOGIC;

    -- internal version of m_tready input
    -- also the internal version of s_tready output
    -- passes through m_tready to s_tready with no clk delays
    signal m_tready_i          : STD_LOGIC;

    -- constant byte value 0xFF, for comparison purposes
    signal all_ones            : STD_LOGIC_VECTOR(input_data'range) := (others => '1');

    -- constant byte value 0x00, for comparison purposes
    signal all_zeros           : STD_LOGIC_VECTOR(input_data'range) := (others => '0');

begin

    -- asynchronous assignments

    frame_sep <= '1' when input_data_d = all_zeros and s_tvalid_i_d = '1'
                else '0';

    m_tlast_i <= '1' when ((frame_sep = '1' and m_tvalid_i = '1' and m_tready = '1'))
                else '0';

    counter_load <= '1' when (input_data_d /= all_zeros and frame_sep_d = '1' and s_tvalid_i_d = '1')   -- start of frame
                          or (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '1')   -- start of next sequence in frame
                else '0';

    m_tvalid_i <= '1' when ((pre_tvalid = '1' and s_tvalid_i_d_d = '1'    -- usual case, if input_data was valid
                        and not (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '0')) -- defer last byte; might be m_tlast
                      or (pre_tvalid = '1' and to_integer(unsigned(count)) = 1
                        and s_tvalid_i_d = '1' and s_tvalid_i_d_d = '0')) -- pick up that deferred last byte
                else '0';

    s_tready <= m_tready_i;
    m_tdata <= output_data;
    input_data <= s_tdata;
    s_tvalid_i <= s_tvalid;
    m_tready_i <= m_tready;
    m_tvalid <= m_tvalid_i;
    m_tlast <= m_tlast_i;
   
-- processes

    set_case_255 : process (rst, clk)
    begin
        if rst = '1' then
            case_255 <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1' and input_data_d = all_ones then
                case_255 <= '1';
            elsif counter_load = '1' and input_data_d /= all_ones then
                case_255 <= '0';
            end if;
        end if;
    end process set_case_255;

   
    delay_s_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            s_tvalid_i_d <= '0';
            s_tvalid_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            s_tvalid_i_d <= s_tvalid_i;
            s_tvalid_i_d_d <= s_tvalid_i_d;
        end if;
    end process delay_s_tvalid;
   
   
   
    create_pre_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            counter_load_d <= '0';
            pre_tvalid <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i_d = '1' then
                counter_load_d <= counter_load;
                if (frame_sep_d_d = '1' and frame_sep_d = '0')            -- normal last byte of frame
                or (counter_load_d = '1' and frame_sep_d = '0')           -- normal first byte of a sequence
                then
                    pre_tvalid <= '1';
                end if;
            end if;
            if frame_sep = '1' then
                pre_tvalid <= '0';
            end if;
            if counter_load = '1' and case_255 = '1' then
                pre_tvalid <= '0';
            end if;
        end if;
    end process create_pre_tvalid;
     
     

    delay_m_tlast_i : process (rst, clk)
    begin
        if rst = '1' then
            m_tlast_i_d <= '0';
            m_tlast_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            m_tlast_i_d <= m_tlast_i;
            m_tlast_i_d_d <= m_tlast_i_d;
        end if;
    end process delay_m_tlast_i;

    set_counter : process (rst, clk)
    begin
        if rst = '1' then
            count <= (others => '0');
            frame_sep_d <= '0';
            frame_sep_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            frame_sep_d_d <= frame_sep_d;
            if s_tvalid_i_d = '1' then
                frame_sep_d <= frame_sep;
                if counter_load = '1' then
                    count <= input_data_d;
                elsif count /= all_zeros then
                    count <= STD_LOGIC_VECTOR(unsigned(count) - 1);
                end if;
            end if;
        end if;
    end process set_counter;

    create_output : process (rst, clk)
    begin
        if rst = '1' then
            output_data <= (others => '0');
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1' then
                output_data <= all_zeros;
            elsif s_tvalid_i_d = '1' then
                output_data <= input_data_d;
            end if;
        end if;
    end process create_output;
   
 
   
    selective_delay_of_input_data : process (rst, clk)
    begin
        if rst = '1' then
            input_data_d <= all_zeros;
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i = '1' then
                input_data_d <= input_data;
            end if;
        end if;
    end process selective_delay_of_input_data;
   

end Behavioral;

-=-=-=-=-=-=-=-=-=-=-=-=-=-

This codebase doesn’t include just the source code, but also a testbench file, a design document, timing diagrams, a Python notebook that creates test vectors, and Python code that compares results produced by the testbench that uses the test vectors.
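In the spirit of that Python notebook, here is a brief sketch of how COBS test vectors can be generated (an illustrative re-implementation of standard COBS encoding, not the project's actual notebook code):

```python
# Sketch: generate COBS-encoded test vectors for a decoder testbench.
# COBS removes zero bytes and prefixes each block with a code byte
# giving its length plus one. A run of 254 non-zero bytes gets code
# 0xFF with no implied zero, which is the special case the case_255
# signal in the decoder above handles.

def cobs_encode(data):
    out = []
    block = []
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)   # code byte stands in for the zero
            out.extend(block)
            block = []
        else:
            block.append(byte)
            if len(block) == 254:        # full block: code 0xFF, no implied zero
                out.append(255)
                out.extend(block)
                block = []
    out.append(len(block) + 1)           # final block
    out.extend(block)
    return out

# Candidate test vectors: mixed data, a lone zero, and an empty frame.
vectors = [cobs_encode(frame) for frame in ([0x11, 0x22, 0x00, 0x33], [0x00], [])]
```

Feeding each vector through the decoder and comparing against the original frame is exactly the kind of check the testbench performs.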

None of this sort of thing is produced by ChatGPT.

There are some functions, in some languages, that AI can write correctly at this point.

This is a situation similar to algorithmic musical composition. Algorithms can come up with useful ideas, motifs, sketches, or traverse a search space more quickly than a human. They can help a composer, but do not replace one. Algorithms can produce some sparkling gems of ideas. These are fresh cuts and views. This is mixed in with a lot of music that is just plain wrong.

If your programming job happens to be in a well-defined area that is easily automated, then you do have something to worry about today.

All jobs are a target for eventual automation with AI/ML. Yes, all of them. As AI/ML becomes more capable, it will automate even things like testbench writing (HDL Coder already does something like this), support code (like a Python notebook that creates a solid set of test vectors for a testbench), and design documents. All of us should be aware that what we do may be replaced by an algorithm some number of tomorrows from now.

That day is not yet “today” with ChatGPT, when it comes to writing #VHDL.

What do you think? We’re interested in your experience and opinions with AI/ML writing code.