Inner Circle Newsletter January 2025

The Who What When Where Why

Open Research Institute is a non-profit dedicated to open source digital radio work. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer. 

You can get involved by visiting https://openresearch.institute/getting-started

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Want more Inner Circle Newsletters? Go to http://eepurl.com/h_hYzL and sign up.

Exponential moving what? Read on to find out!

What’s all this Fixed-Point Math Stuff, Anyhow?

With apologies to Bob Pease

Person 1: Hey, whatcha doing? Looks like something cool.

Person 2: Working on a Simulink model for an MSK modem.

Person 1: Oh, that’s fun, how about I code up the modem in VHDL this weekend?

Person 2: Sounds great!

Famous last words, amirite? It has been many months – ahem 10 – since that pseudo conversation occurred. In that time there have been missteps, mistakes, and misery. We have come a long way, with internal digital and external analog loopback now working consistently, although unit to unit transmission isn’t quite there yet.

This is one way to do hardware development. Write some HDL code, simulate, and iterate. Once the simulation looks good, put it in an FPGA, test it, and iterate. You’ll get there eventually, but there will be some hair pulling and teeth gnashing along the way. But, it is worth the effort when you finally see it working, regardless (and because) of all the dumb mistakes made along the way.

There are many ways to approach a design problem, some better than others, and as with all things engineering, it’s a trade-off based on the overall design context. At opposing ends of the spectrum we have empirical and theoretical approaches. Empirical: build based on experience, try it, fix it, rinse and repeat. And, theoretical: build based on theory, try it, fix it, etc. Ultimately we must meet in the middle (and there is never getting away from the testing and iterating).

Starting from first principles always serves us well. One of the niggling points in the Minimum-Shift Keying (MSK) development has been selection of bit-widths for the signal processing chains.

Person 1 (yeah, that’s me): I’ll code it up this weekend. Let’s see, for the modulator we need data in, that’s 1-bit. The data gets encoded, still 1-bit. The data modulates a sine wave, hmmm, how many bits should that be? Well the DAC is 12-bits, so we should use a 2’s complement 12-bit number. That seems right.

Person 1: Now for the demodulator. The ADC outputs 12-bits, so a 2’s complement 12-bit number. That gets multiplied by a sine wave, let’s use a 2’s complement 12-bit number since we did that in the modulator. The multiply output is 24-bits, ok. Now we integrate that 24-bit number over a bit-period, hmmm, how many bits after the integration? No worries, let’s just make it 32-bits and keep on. But that is a lot of bits, let’s just scale the 32-bits down to 16-bits and keep going. Ah, the empirical approach, I’m getting so much done!

Person 1: Why isn’t this working? Oh, the 16-bit number is overflowing. I wish I could just make it 32-bits again, but that’s too many bits for the multiplier.

There is a better way! We can analyze the design from some starting point, such as, the number of input bits. From there we operate on those bits, adding, subtracting, multiplying, shifting, etc. And we know how each of those operations affects the bit widths, which allows us to choose appropriate bit-widths through the signal chain. Before we get there we need to take a quick look at fixed point math and related notation.

In our everyday base 10 math we have, essentially, infinite precision. For example, rational numbers can be expressed as fractions, and we can operate on those fractions with the infinite precision maintained as long as we can keep the fractional representation. Often, though, we will need to use an approximation of the fraction to get an actual answer to a problem. One example is 1/3=0.33333… and we have to choose how many digits of 3 we need for the particular problem at hand. And we know there is an error term when we use such an approximation (1/3=0.333+).

Base 2 math is largely the same, but hardware can’t keep a fractional representation around symbolically, so every number ends up stored with a finite number of bits and values like 1/3 (or any irrational number) must be approximated. We may also be constrained in how many bits we can use to represent numbers. We need a way to notate these numbers.

Texas Instruments created Q-notation as a way to specify fixed-point numbers. The notation Qm.n is used to represent a signed 2’s complement number where m is the number of whole bits and n is the number of fractional bits. TI specifies m to not include the sign-bit, while ARM specifies m to include the sign bit. The table below shows examples of Q-notation using both TI and ARM variants.

Table 1. Q-Notation Examples

Total Bits | ARM Q Format | TI Q Format | Range                     | Resolution
8          | Q3.5         | Q2.5        | -4.0 to +3.96875          | 2^-5
16         | Q4.12        | Q3.12       | -8.0 to +7.9997558594     | 2^-12
24         | Q8.16        | Q7.16       | -128.0 to +127.9999847412 | 2^-16
32         | Q8.24        | Q7.24       | -128.0 to +127.9999999404 | 2^-24
11         | UQ8.3        | UQ8.3       |  0.0 to +255.875          | 2^-3
11         | Q1.10        | Q10         | -1.0 to +0.9990234375     | 2^-10
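
To make the notation concrete, here is a short illustrative sketch in Python (not code from any ORI repository; the helper names are made up) that quantizes a real number into a TI-style Qm.n value and back, so the range and resolution limits in Table 1 can be seen directly.

    # Illustrative sketch: TI-style Qm.n quantization (m integer bits, n fractional bits,
    # plus an implicit sign bit). Hypothetical helper names, for explanation only.
    def float_to_q(value, m, n):
        scale = 1 << n                     # 2^n
        raw = int(round(value * scale))    # quantize to the nearest step of 2^-n
        lo = -(1 << (m + n))               # most negative code, e.g. -4.0 for Q2.5
        hi = (1 << (m + n)) - 1            # most positive code, e.g. +3.96875 for Q2.5
        return max(lo, min(hi, raw))       # saturate instead of wrapping

    def q_to_float(raw, n):
        return raw / (1 << n)

    # Q2.5 (8 bits total): range -4.0 to +3.96875, resolution 2^-5 = 0.03125
    code = float_to_q(1.0 / 3.0, 2, 5)
    print(code, q_to_float(code, 5))       # 11  0.34375

Running it for Q2.5 shows 1/3 landing on the nearest representable step, 0.34375, with saturation protecting the -4.0 to +3.96875 range.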

The table below shows how bit widths are affected by various operations.

Table 2. Effect of math operators on bit-widths

Operation | Input Numbers | Output
+/-       | Qm.n +/- Qx.y | Qj.k, where j = max(m,x)+1 and k = max(n,y)
*         | Qm.n * Qx.y   | Q(m+x).(n+y)
>>        | Qm.n >> k     | Qm.(n-k)
<<        | Qm.n << j     | Qm.(n+j)
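
As a worked example of Table 2, here is a small illustrative Python sketch (bookkeeping only, not ORI code, with made-up helper names) that tracks a TI-style (integer bits, fractional bits) pair through the demodulator arithmetic described above: a 12-bit ADC sample multiplied by a 12-bit sine sample, then accumulated over a 256-sample bit period.

    # Illustrative sketch of Table 2: track (m, n) through operations.
    # TI convention: Qm.n has m integer bits, n fractional bits, plus a sign bit.
    def q_mul(a, b):
        return (a[0] + b[0], a[1] + b[1])              # Qm.n * Qx.y -> Q(m+x).(n+y)

    def q_add(a, b):
        return (max(a[0], b[0]) + 1, max(a[1], b[1]))  # one growth bit per addition

    adc     = (0, 11)           # 12-bit ADC sample treated as Q0.11
    sine    = (0, 11)           # 12-bit sine lookup value, also Q0.11
    product = q_mul(adc, sine)  # Q0.22 -> 23 bits including the sign bit

    # Accumulating over a 256-sample bit period adds log2(256) = 8 growth bits.
    accum = product
    for _ in range(8):
        accum = q_add(accum, accum)
    print(product, accum)       # (0, 22) then (8, 22): 31 bits covers the integrator

This is exactly the accounting that avoids the "let's just make it 32 bits" guesswork in the pseudo-conversation above.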

The diagram below shows an exponential moving average circuit with signal bit-widths notated as Qm.d. You can see the bit widths adjust as we shift, multiply, add, etc. Since the output is an average of the input, it should have the same representation as the input (Qm.d). The lower representation (e.g., Q22.0 for the input) is an actual bit-width selection based on targeting the Zynq-7010 and its 25×18 hardware multipliers. Some particular notes:

  1. The Q22.0 input is specified by the surrounding system. It is left-shifted by 2-bits (Q22.2) to fully utilize the 25-bit multiplier input thus increasing the resolution.
  2. The alpha input is specifically chosen to be Q17 to fully utilize the 18-bit multiplier input.
  3. The bottom multiplier is in a feedback path; its output must match the upper multiplier output so that the binary points are aligned into the adder. To this end the adder output is truncated and rounded by 18-bits.

Figure 1. Exponential Moving Average Block Diagram

The adder output is truncated and rounded by 20-bits for the final output.
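
As a rough numerical companion to Figure 1, here is an illustrative Python sketch (not the VHDL implementation, and the widths are simplified) of a fixed-point exponential moving average y[n] = alpha*x[n] + (1 - alpha)*y[n-1], with alpha held as a 17-bit fraction and the result truncated back down so the binary point stays aligned, in the spirit of the notes above.

    # Illustrative fixed-point EMA sketch (integer arithmetic only).
    # Assumes a Q17-style alpha (17 fractional bits); these are demonstration
    # widths, not the exact Figure 1 signal widths.
    ALPHA_BITS = 17
    ALPHA = int(0.05 * (1 << ALPHA_BITS))       # alpha = 0.05 as a 17-bit fraction

    def ema_fixed(samples):
        y = 0                                   # state, held at the input scale
        for x in samples:
            # alpha*x + (1 - alpha)*y, computed at (input scale) * 2^17,
            # then truncated back down by 17 bits to realign the binary point.
            acc = ALPHA * x + ((1 << ALPHA_BITS) - ALPHA) * y
            y = acc >> ALPHA_BITS
        return y

    print(ema_fixed([1000] * 200))              # converges toward 1000

The truncation step is the software analog of the "truncate and round" blocks in the diagram; in the FPGA those widths are chosen so the feedback path lines up with the forward path.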

In summary, Q-notation is a useful tool for understanding and specifying system bit-widths throughout the processing chain. It is especially useful to add Q-notation to the block diagram to help visualize the bit-widths. With this approach the optimal bit-widths should become apparent when taking into account system requirements. Doing this system analysis step before writing any code will save time and effort by reducing errors. The other benefit is that device resources are not over-utilized, which may make the difference between fitting in an FPGA or not.

Additional reading and resources are available at these URLs:

As the saying goes, mind your Ps and Qs.

Matthew Wishek, NB0X

Opulent Voice moves from real to complex modulation. Read on to find out more!

Real and Complex Signal Basics

The magic of radio is rooted in mathematics. Some of that math can be complicated or scary looking. We are going to break things down bit by bit, so that we can better understand what it means when we say that we are going to transmit a complex baseband signal. 

Everything that we are going to talk about today is based on a single carrier real signal, even when we get to complex transmission. A single carrier real signal is where we take our data, a single-dimensional value that we want to communicate, and we multiply it by a carrier wave (a cosine wave) at a carrier frequency (fc). Let’s call the value we want to communicate “alpha”.

Because we are dealing with digital signals, the value that we are transmitting is held for a period of time, called T. The next period of time we send another constant value. And, so on. We are sending discrete values for a period of time T, one after another until we are all done sending data, and not continuous values over time. 

Let’s say we are sending four different amplitudes to represent four different values. During each time period T we select one of these four amplitude values. We hold that value for the entire time period. These values can be thought of as single dimensional values. One value uniquely identifies the value we want to send. In this case, amplitude. 

Sending one of four values at a time means we are sending two bits of data at a time. 

alpha | bits
0     | 00
1     | 01
2     | 10
3     | 11

In order to send our value out over the air from transmitter to receiver, we multiply our alpha by our carrier wave. The result is alpha*cos(2*pi*fc*t). Cosine is a function of time t. The 2*pi term converts fc, which we express in cycles per second (something most of us find easier to deal with), into radians per second. When we multiply in the time domain, we cause a different mathematical thing to happen in the frequency domain. Multiplication in the time domain corresponds to convolution in the frequency domain. This mathematical process creates images, or copies, of our baseband signal in the frequency domain. One image will be located at fc and the other will be located at -fc.

Our real signal has a special characteristic. It’s symmetric. At the receiver, we multiply what we receive by that same cosine wave, cos(2*pi*fc*t). We multiply in the time domain, and we convolve in the frequency domain. This results in images at 2*fc and -2*fc, and, most useful to us, two images at 0 Hz. We use a low-pass filter to get rid of the unwanted images at 2*fc and -2*fc, and by integrating over the time period T, we get a scaled version of the original value (alpha) that was sent. Amazing! We reversed the process and got our original sent value.
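
Here is a short numerical illustration of that recovery step, written as a Python sketch (ours, for illustration only): multiply the received alpha*cos(2*pi*fc*t) by the same cosine and integrate over T. The double-frequency term integrates away because we pick T to hold a whole number of carrier cycles, and what remains is alpha*T/2.

    import numpy as np

    fc    = 1000.0                  # carrier frequency in Hz (arbitrary for the demo)
    T     = 0.01                    # symbol period: exactly 10 carrier cycles
    alpha = 3.0                     # one of our four amplitude values
    t     = np.linspace(0.0, T, 10000, endpoint=False)
    dt    = t[1] - t[0]

    tx = alpha * np.cos(2 * np.pi * fc * t)   # transmitted real signal
    rx = tx * np.cos(2 * np.pi * fc * t)      # receiver multiplies by the same cosine
    integral = np.sum(rx) * dt                # integrate over one symbol period

    print(integral, alpha * T / 2)            # both print approximately 0.015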

So what’s all this complex signal stuff all about? Why mess with success? We have our single carrier signal and our four values. What more could we want? 

Well, we want to be able to send more than a shave and a haircut number of bits!

If we want to send more bits in the same time period (and who doesn’t?) then we must use a bigger alphabet. Let’s double our throughput. We now pick from sixteen different amplitudes, sending the value we picked out for a period of time T as a single-carrier real signal. Now, each alpha value stands for four bits.

We have a minor problem. Sending out sixteen different voltage levels on a single carrier means that our receiver has to distinguish between finer and finer amplitude differences. Before, we only had to distinguish between four different levels. Now we have sixteen. This means we better have a really clear channel and a lot of transmit power. But we don’t always have that. It’s expensive and a bit unreasonable. There is a better way.

We know we now want to send out (at least) one of sixteen values, not just one of four. If we turn our one-dimensional problem into a two-dimensional problem, and assign a real single carrier signal to, say, the vertical dimension, and then a second real single carrier signal to the horizontal dimension, then we are now enjoying the outer limits of digital signal processing. The vertical handles four levels. The horizontal handles four levels.

We still have the same time period T. We just have a two-dimensional coordinate system instead of a one-dimensional coordinate system.

alpha | bits | alpha | bits
0     | 0000 | 8     | 1000
1     | 0001 | 9     | 1001
2     | 0010 | 10    | 1010
3     | 0011 | 11    | 1011
4     | 0100 | 12    | 1100
5     | 0101 | 13    | 1101
6     | 0110 | 14    | 1110
7     | 0111 | 15    | 1111

But how can we send two real signals over the air, at the same time? We can’t just add them together, can we? They will step on each other and we’ll get a noisy mess at the receiver. Math saves us! We can actually add these two signals together, send them as a sum, and then extract each dimension back out. But, only if we prepare them properly. And here is how that is done. 

Look at the two-dimensional diagram of 16QAM. The vertical axis is labeled Q, and the horizontal axis is labeled I. When we want to indicate the vertical dimension of our value (pick any one of them), then we take that vertical dimension (say, -1 for 1111) and we multiply it by sin(2*pi*fc*t). We now have our Q signal. Now we need the horizontal location of 1111. That would be +1 on the I axis. We multiply this value, giving the horizontal dimension, by cos(2*pi*fc*t). We now have our I signal. Q axis value was multiplied by sine. I axis value was multiplied by cosine. These signals are played for the duration of the sample period. Both of them happen at the same time to give a coordinate pair for a particular alpha. 

We add the I and Q signals together and transmit them. We are sending (I axis value) * cos(2*pi*fc*t) + (Q axis value) * sin(2*pi*fc*t). 

At the receiver, we take what we get and we split the signal. We now have two copies of what we received. We multiply one copy by cos(2*pi*fc*t). We multiply the other by sin(2*pi*fc*t). We integrate over our time period T. This is important because it lets us take advantage of several trig identities. 

First, let’s multiply and distribute our cos(2*pi*fc*t) across the summed signals we received. We multiply:

[(I axis value) * cos(2*pi*fc*t) + (Q axis value) * sin(2*pi*fc*t)] * cos(2*pi*fc*t) 

And rewrite it to distribute our cos(2*pi*fc*t). 

(I axis value) * cos^2(2*pi*fc*t) + (Q axis value) * sin(2*pi*fc*t)*cos(2*pi*fc*t)

Aha! We can convert that cos^2() term to something we can use. Use the half-angle identity, square each side, and double all the angle measurements (easy, right?). After this cleverness, this is what we have.

cos^2(2*pi*fc*t) = 1/2*[1 + cos(2*pi*2fc*t)]

So now we have

(I axis value) * 1/2*[1 + cos(2*pi*2fc*t)]

See that 2fc term in there? Check out the notebook drawing for our signal in the frequency domain. It’s at 2fc. Q signal is on the right-hand half of the drawing.


Let’s rearrange things.

(I axis value/2) + [(I axis value/2) * cos(2*pi*2fc*t)]

Remember we are integrating over time at the receiver. We have one of the two terms rewritten in a useful way. What happens when we integrate a cosine signal from 0 to T? That value happens to be zero whenever T covers a whole number of cycles of that cosine (and very nearly zero otherwise, since 2fc is much higher than 1/T)! This leaves just the integration of (I axis value/2)!

The result at the receiver for the multiplication and integration of the first copy of the received signal is (I axis value)*(T/2). We know T, we know what the number 2 is, so we know the I axis dimension value. 

But wait! We forgot something. We only did the first part. 

Remember we had 

(I axis value) * cos^2(2*pi*fc*t) + (Q axis value) * sin(2*pi*fc*t)*cos(2*pi*fc*t)

We recovered I axis value from the term before the plus sign. But what about the term after the plus sign?

(Q axis value) * sin(2*pi*fc*t)*cos(2*pi*fc*t)

Uh oh we didn’t get away from summing the I and Q together after all…

Trig saves us here too. When we integrate sin(2*pi*fc*t)*cos(2*pi*fc*t) from 0 to period T, it happens to be zero. The entire Q axis value term drops out. Does the same technique work for the copy of the received signal that we multiply by sin(2*pi*fc*t)? 

You bet it does! First, let’s multiply and distribute our sin(2*pi*fc*t) across the second copy of the summed I and Q signals we received. We multiply:

[(I axis value) * cos(2*pi*fc*t) + (Q axis value) * sin(2*pi*fc*t)] * sin(2*pi*fc*t) 

And rewrite it to distribute our sin(2*pi*fc*t).

(I axis value) * cos(2*pi*fc*t) * sin(2*pi*fc*t) + (Q axis value) * sin(2*pi*fc*t)*sin(2*pi*fc*t)

Now that we know that integrating cos(2*pi*fc*t)*sin(2*pi*fc*t) from 0 to T is zero, we can drop out the I axis value term. That’s good because we already have it from multiplying our received summed signal by cos(2*pi*fc*t) and doing trigonometry tricks.

We are left with

(Q axis value) * sin(2*pi*fc*t)*sin(2*pi*fc*t)

And we rewrite it

(Q axis value) * sin^2(2*pi*fc*t)

And use the half angle trig identity, square each side, and then double all angle measurements. 

We can replace sin^2(2*pi*fc*t) with

1/2*[1 – cos(2*pi*2fc*t)]

which gives us

(Q axis value) * 1/2*[1 – cos(2*pi*2fc*t)]

And we rewrite it as

(Q axis value/2) – [(Q axis value/2)*cos(2*pi*2fc*t)]

Hey, guess who goes to zero again? That’s right, cosine integrated from 0 to T is zero. We are left with a constant term that integrates out to (Q axis value) * (T/2)

So when we multiply the summed signal that we received by cosine, we get I axis value. When we multiply the summed signal that we received by sine, we get Q axis value. 
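
Here is a compact numerical check of the whole quadrature mixing argument, written as an illustrative Python sketch (variable names are ours, not from any ORI code): build I*cos + Q*sin, multiply the received sum by cosine and by sine separately, integrate over T, and watch the I and Q values come back out scaled by T/2.

    import numpy as np

    fc   = 1000.0                              # carrier frequency (arbitrary for the demo)
    T    = 0.01                                # symbol period: a whole number of carrier cycles
    t    = np.linspace(0.0, T, 10000, endpoint=False)
    dt   = t[1] - t[0]

    I, Q = 1.0, -1.0                           # the 16QAM point used in the text (1111)
    carrier_cos = np.cos(2 * np.pi * fc * t)
    carrier_sin = np.sin(2 * np.pi * fc * t)

    tx = I * carrier_cos + Q * carrier_sin     # the summed signal we transmit

    i_branch = np.sum(tx * carrier_cos) * dt   # multiply by cosine, integrate over T
    q_branch = np.sum(tx * carrier_sin) * dt   # multiply by sine, integrate over T

    print(i_branch / (T / 2), q_branch / (T / 2))   # approximately 1.0 and -1.0

The cross terms integrate away, just as the trig identities promised, and each branch hands back its own axis value.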

I and Q give us the coordinates on the 16 QAM chart. As long as we are in sync with our transmitter (a whole other story) and as long as our map of which point stands for which label (read your documentation!) is the same as at the transmitter, then we have successfully received what was sent using a technique called quadrature mixing. 

Moving from a single carrier real signal to a “complex” signal, where two real signals are sent at the same time using math to separate them at the receiver, gives us advantages with respect to sending more bits without having to send more levels. Our two signals are each handling four levels, but using the results in a two-dimensional grid gives us more bits per unit time without having to change our performance expectations. Sending sixteen different levels is harder than sending four. So, we send four twice and use some mathematical cleverness. 

However, using this complex modulation scheme gives us yet another advantage. Because of the math we just did, we eliminate an entire image when compared to a single carrier real signal. We have a less difficult time with filters because we no longer create a second image. Below (next page) are some diagrams of how this happens.

A third advantage of I and Q modulation is that it doesn’t just do things like 16QAM. Using an I and a Q, and a fast enough sample period T, means you can send any type of modulation or waveform. Now that’s some power!

This technique does require some signal processing at the receiver. But, this type of signal handling is at the heart of every software defined radio. And, now you know how it’s done, and the reasons why Opulent Voice is now using complex modulation in the PLUTO SDR implementation.

-Michelle Thompson W5NYV

(Below are two more pages from our lab notebook, showing a few more visual representations. Don’t let the exponential functions worry you – “e” is a more compact way of representing the sine and cosine functions. In our notebook we show how sine and cosine can add to a single image. We can do this because sine and cosine are independent in a special way. This quality of “orthogonality” is used in all digital radios.)

Adding a Preamble to Opulent Voice

Looking at the Opulent Voice protocol overview diagram below, we can see that each transmission begins with a preamble. This section of the transmission contains no data, but is extremely helpful in receiving our digital signal. 

The preamble is like a lighthouse for the receiver, revealing a shoreline through the fog and darkness of interference and noise. While we may not need the entire 40 milliseconds of preamble signal to acquire phase and frequency, so that we are “on board” for the rest of the transmission, keeping the preamble at the length of a frame simplifies the protocol. 

There is a similar end of transmission (EOT) frame, so that the receiver knows for sure that the transmitted signal has ended, and has not simply been lost. This will reduce the uncertainty at the receiver, and allow it to return to searching for new signals faster and more efficiently. 

For minimum shift keying, the modulation of Opulent Voice, a recommended preamble data stream in binary is 1100 repeating. In other words, we’d get a frame’s worth of 11001100110011001100… at 54.2 kilobits per second for 40 milliseconds.
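
For a sense of scale, a frame’s worth of that preamble is 54.2 kilobits per second times 40 milliseconds, or 2168 bits. Here is a tiny illustrative Python sketch (not the Simulink or HDL source) that builds the repeating 1100 pattern.

    # Illustrative sketch only: generate one frame of the MSK preamble pattern.
    BIT_RATE   = 54200                               # bits per second
    FRAME_TIME = 0.040                               # seconds
    frame_bits = round(BIT_RATE * FRAME_TIME)        # 2168 bits per frame

    pattern  = [1, 1, 0, 0]
    preamble = [pattern[i % 4] for i in range(frame_bits)]

    print(frame_bits, preamble[:12])   # 2168 [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]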

After the preamble is sent, data frames are sent. Note that there is a synchronization segment at the beginning of each frame. This keeps the receiver from drifting and improves reliability.

Constructing the Preamble in Simulink and in HDL

Below is the Simulink model output viewer showing the 1100 repeating pattern mathematical construction, followed by the planned hardware description language (HDL) code updates. The target for HDL firmware is the PLUTO SDR. -Opulent Voice Team

“Take This Job”


Interested in Open Source software and hardware? Not sure how to get started? Here’s some places to begin at Open Research Institute. If you would like to take on one of these tasks, please write hello@openresearch.institute and let us know which one. We will onboard you onto the team and get you started.

Opulent Voice:

  • Add a carrier sync lock detector in VHDL. After the receiver has successfully synchronized to the carrier, a signal needs to be presented to the application layer that indicates success. Work output is tested VHDL code. 
  • Bit Error Rate (BER) waterfall curves for Additive White Gaussian Noise (AWGN) channel.
  • Bit Error Rate (BER) waterfall curves for Doppler shift.
  • Bit Error Rate (BER) waterfall curves for other channels and impairments.
  • Review Proportional-Integral Gain design document and provide feedback for improvement. 
  • Generate and write a pull request to include a Numerically Controlled Oscillator (NCO) design document for the repository located at https://github.com/OpenResearchInstitute/nco. 
  • Generate and write a pull request to include a Pseudo Random Binary Sequence (PRBS) design document for the repository located at https://github.com/OpenResearchInstitute/prbs.
  • Generate and write a pull request to include a Minimum Shift Keying (MSK) Demodulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_demodulator 
  • Generate and write a pull request to include a Minimum Shift Keying (MSK) Modulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_modulator
  • Evaluate loop stability with unscrambled data sequences of zeros or ones.
  • Determine and implement Eb/N0/SNR/EVM measurement. Work product is tested VHDL code.
  • Review implementation of Tx I/Q outputs to support mirror image cancellation at RF. 

Haifuraiya:

  • HTML5 radio interface requirements, specifications, and prototype. This is the user interface for the satellite downlink, which is DVB-S2/X and contains all of the uplink Opulent Voice channel data. Using HTML5 allows any device with a browser and enough processor to provide a useful user interface. What should that interface look like? What functions should be prioritized and provided? A paper and/or slide presentation would be the work product of this project. 
  • Default digital downlink requirements and specifications. This specifies what is transmitted on the downlink when no user data is present. Think of this as a modern test pattern, to help operators set up their stations quickly and efficiently. The data might rotate through all the modulation and coding, transmitting a short loop of known data. This would allow operators to calibrate their receiver performance against the modulation and coding signal to noise ratio (SNR) slope. A paper and/or slide presentation would be the work product of this project.

The Inner Circle Sphere of Activity

January 6, 2025 – All labs re-opened. Happy New Year!

January 13, 2025 – ORI presented to Deep Space Exploration Society about our history and projects line-up. 

January 18, 2025 – San Diego Section of IEEE Annual Awards Banquet. ORI volunteers supported this event as a media and program sponsor. ORI was represented by five members. 

January 23-26, 2025 – IEEE Annual Meeting for Region 6 and Region 4. ORI was represented by three members. 

January 28, 2025 – Co-hosted the IEEE Talk “AI/ML Role in RTL Design Generation” with the Information Theory Society and the Open Source Digital Radio San Diego Section Local Group. 

February 18, 2025 – San Diego County Engineering Council Annual Awards Banquet. ORI will be part of the IEEE Table display in the organizational fair held on site before dinner. ORI will be represented by at least one member. 



Thank you to all who support our work! We certainly couldn’t do it without you. 

Anshul Makkar, Director ORI

Frank Brickle, Director ORI (SK)

Keith Wheeler, Secretary ORI

Steve Conklin, CFO ORI

Michelle Thompson, CEO ORI

Matthew Wishek, Director ORI

Inner Circle Newsletter: Autumn 2024

A Harvest of Regulatory and Technical Updates

Open Research Institute is a non-profit dedicated to open source digital radio work. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer. You can get involved by visiting https://openresearch.institute/getting-started

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Subscribe to this newsletter at http://eepurl.com/h_hYzL

Ribbit Radio Update

It isn’t often that a completely new way of using existing technology in a truly innovative way on the amateur bands excites the community, but Ribbit consistently delivers.

What is Ribbit?

Ribbit is a novel digital text messaging mode for VHF/UHF communications for recreational and emergency use. It radically increases the density of information transmitted per unit of spectrum used. Ribbit turns text into encoded audio, which can be played from your smartphone into the microphone of any radio, on any band, under even very noisy conditions. A smartphone with the Ribbit application on the other side can recover this message. Your information will get through.

Ribbit leverages the computing power of the modern smartphone to increase the capabilities of any Handy Talkie without requiring any additional hardware or cable.
Its redundant distributed nature allows it to function even when internet connectivity is lost during emergencies or just not available due to poor coverage.

Developer Updates

Project Lead Pierre W4CKX and Rattlegram Developer Ahmet Inan supported a transition of project development to Alex Okita KO6BVA in late 2023. Since then, Alex has made several big strides forward in both development and demonstration.

Alex’s most recent TO DO list included the following tasks.

1) Building a scheme to compact the header for Ribbit. The header is a bit long and could be shortened to save some bits using a raw bit stream of a simplified character map.

2) Add a keyEncoder to Code Witches as the keyboard input gets letters added. When the “Enter” key is pressed begin encoding/consuming the characters in the input area.

3) May also want to add in a sort of message queue above the input area.

4) Build a useful CW-Font for code witches and anything else anyone wants to use it for.

All of this work enabled a highly successful demonstration of Ribbit at DEFCON 32 in Las Vegas, Nevada, in mid-August 2024. A special thank you to RF Village for hosting ORI’s Open Source Showcase.

Visitors to the exhibit were able to see Ribbit work in a busy and noisy environment. DEFCON, like most other very large conferences, is acoustically noisy, but also very electrically noisy. Ribbit was demonstrated all weekend long in a very harsh environment, as the WiFi Capture the Flag competition was happening literally steps away from the Ribbit demonstration. The venue, Las Vegas Convention Center West, was full of conference attendees. Despite the background noise and substantial RF interference, Ribbit worked from Smartphone app speaker output, to Family Radio Service microphone input, over the air on a busy FRS channel, to the receiving FRS radio speaker output, to a Ribbit application microphone input on the monitoring smartphone. Being able to transmit and reliably receive accurate text data is a notable achievement.

Alex has integrated the Ribbit functionality into a web application. This makes Ribbit even easier to use than traditional native applications. The live web app demonstration can be found at https://badkangaroo.github.io/

Recordings of presentations about Ribbit, along with posters and diagrams, can be found at https://www.ribbitradio.org

The source code can be found at https://github.com/OpenResearchInstitute/ribbit_webapp

Practical Real World Use

The technical demonstration for Ribbit, called Rattlegram, got very positive reviews for utility in the field from Chile and Argentina mountain rescue groups. The challenge is supporting hikers and skiers in places where cellular coverage is limited or not available. Ribbit functionality provided clear benefits with no downsides.

Carlos, our contact in Argentina who supports mountain rescue operations, described his experiences using Ribbit technology.

“Hi everyone, how are you? I’m just back from the mountains in Chile. We did some testing with Rattlegram, in very windy conditions (without any cables) and worked just great !
Send you a picture & video of one of the test we made. We were at this summit:”

In Argentina, Carlos reported:

“These past few months, we have been going out to the mountains, and been using Rattlegram with lot of success. In one of the ‘refugio’ we left an old cell phone with Rattlegram loaded besides the base radio, and we have been sending messages in all the backcountry ski outings and all the messages arrived without a problem.”

If Ribbit can work on the top of the mountains in Chile and Argentina, and if it also performs really well in noisy urban environments, then what can’t it do?

Ribbit Can’t Flourish Without Your Support

Ribbit is innovative, useful, and fully supported by Open Research Institute. But, it won’t realize its full potential without continuing development and community support. If you are excited about Ribbit and want to help, please join ORI at https://openresearch.institute/getting-started and let us know you want to get involved with Ribbit. We’ll make sure you get on the mailing lists and Slack channels.

ORI graciously accepts donations for Ribbit development. Ribbit has a dedicated bank account, follows non-profit best practices, has an enforced code of conduct, and regularly reports to ORI leadership.

Thank you to everyone helping make Ribbit successful, useful, exciting, and fun to use!

Opulent Voice Comes to the PLUTO SDR
An Open Source Minimum Shift Keying Implementation

Opulent Voice is the native digital voice and data protocol for ORI’s open source satellite (or terrestrial) system called Haifuraiya. With high fidelity voice quality, seamless integration of data, keyboard chat, and authentication and control messages, amateur operators experience something truly new and enjoyable. As described by ORI Director Steve Conklin AI4QR, “Leave scratchy and harsh proprietary vocoders with clunky or incomplete data mode integration behind for good.” All source code at https://github.com/OpenResearchInstitute/pluto_msk

Things worth doing are rarely easy. The Minimum Shift Keying (MSK) implementation of Opulent Voice (OPV) on the PLUTO SDR is no exception. Challenges to the documentation, design, implementation, and test cycles have come up throughout the process. The response from the open source digital radio community has been solid, inspiring, and productive, with patient problem-solving winning the day every time.

Michelle Thompson, ORI’s CEO, described her perspective on OPV’s progress recently at an IEEE event, stating:

“I would like to thank everyone helping ORI achieve ambitious goals for our uplink protocol. It’s a real pleasure to serve this community and meet the needs of this project. It’s a privilege to be part of something so innovative and relevant.”

This feeling is reflected by the OPV volunteer team, who are actively involved around the world in promoting open source work and helping ORI projects like OPV succeed.

Successes Over the Past Quarter and Current Status

OPV was the communications downlink for a NASA RockSat-X mission executed by the University of Puerto Rico, one of the most consistent and successful teams participating in this competitive program. The mid-August 2024 launch from Wallops went flawlessly. The performance of OPV met all expectations and data recovered from the launch is being used to further develop and improve the communications protocol.

OPV presentations at HamCon:Zion and DEFCON during the summer of 2024 were extremely well received, giving positive educational value to the community. Amateur Radio deserves quality digital communications. OPV delivers excellent results and is well-documented.

There is still a long way to go. The physical layer work is currently focused on tuning the tracking loops in the demodulator. Once the physical layer is reliable, then the functions that integrate voice, data, keyboard, and channel control messages will be integrated into the PLUTO SDR build and tested. Application layer functions in HTML 5 have a head start, with some tests already completed. Parallel integration efforts from F5OEO and OpenCPI, both of which will expand the number of hardware platforms that OPV can use, are well underway.

Below, an example of bit-accurate simulation results for OPV, with signal names on the left and waveform values on the right.

Below, the spectrum of OPV, showing main lobe null-to-null span of 81 kHz, and signal sidelobes.

Haifuraiya and Modem Module Update

The work on Haifuraiya has broader appeal than just as a HEO/GEO satellite design. The “modem module” approach, developed by Matthew NB0X, is summarized at left.

At left is the most recent high level block diagram.

Haifuraiya – Open Source HEO/GEO Amateur Satellite Update

The multirate channelizer has been implemented in Remote Labs West on the zc706/ADRV9009 station. At left is a summary of the channel plan for the satellite receiver.

Regulatory Work at ORI
Making Open Source Easier for Everyone

Past regulatory work at ORI can be found at
https://github.com/OpenResearchInstitute/documents/Regulatory

A special Halloween season workshop about exorcising the Haunted Band of 219-220 MHz was held by IEEE and ORI on 29 October 2024.

The Zoom meetup was well attended with the majority of RSVPs participating on the call. The meeting was recorded for those that couldn’t make it. The video recording of the background session, followed immediately by the workshop, can be found at https://youtu.be/wGX__qcFrT4

219 MHz is a secondary allocation for amateur radio. It has excellent channel characteristics, is open to fixed point-to-point digital links only, allows 100 kHz bandwidths, has a minimum of 56 kbps data rates, and has had no activity for decades. It’s time to take this band back!

Why is there no activity? We believe this band is inactive because of the notification requirements in FCC part 97 and part 80. We assert that these notification requirements, which pull ARRL into the loop from two different directions, are impossible for both amateurs and commercial licensees to comply with.

What are these notification requirements? If you as an amateur want to operate on 219 MHz, you have to determine if there’s an Automated Maritime Telecommunications System (AMTS) station within 80 km of your amateur station. You then have to ask permission from the AMTS station(s) in order to legally operate with at least 30 days notice. If you are between 80 km and 640 km, then you have to notify the AMTS stations at least 30 days in advance of activating the band.

A publicly accessible database of amateur and AMTS stations on 219 MHz is supposed to be maintained by ARRL. This is a rule in Part 97 (for the amateur radio stations) and in Part 80 (for AMTS stations). This database does not appear to exist.

AMTS stations did not report their information to ARRL. Requests to ARRL for a copy of AMTS stations have not been answered. Requests for a copy of amateur radio stations that have notified the ARRL of intent to operate have also not been answered.

These requests were made by multiple amateur radio operators in the October – December 2023 time frame. This was before the cyberattack in 2024, which did sideline a lot of ARRL’s services and activities.

The 219 MHz band regulations need a serious overhaul. We believe a proposal for new rules for this band is a necessary step towards restoring amateur use of this band. We believe that rules can be proposed that will both promote modern and innovative digital activity, and at the same time, not endanger the primary licensees in any way. While the AMTS service no longer exists as originally envisioned, the AMTS licenses have been leased out to fundamentally different services, such as railroads. These primary licensees can share the band with amateur radio in, at the very worst, a mutually inoffensive way and, at the very best, a mutually beneficial manner. It’s up to us to make this better future happen.

In order to make an effective proposal to the US FCC for updated rules for this band, there are things that we need to sort out. We need to know what we want and we need to know what the primary users want. We need to anticipate and answer objections to dropping the onerous notification requirements, and we need to clearly state the value that an amateur radio allocation on this band brings to education, public service, citizenship, and technical advancement of the radio arts.

The workshop on 29 October was the first strong step towards sorting out a draft of proposed rulemaking for restoring amateur radio service use of the 219 MHz band.

Discussion during the workshop resulted in an outline for a draft rulemaking proposal. ORI sincerely thanks everyone that participated during the event and through email correspondence.

Draft Outline

1) Demonstrate that the current rules are not working. State our case that the notification requirements are impossible to comply with and are the reason that there is no activity on this band.

2) Cite 60m as an example of successful similar activity.

3) Assert, providing evidence, that amateur radio operators already know how to share spectrum with other services. We already know how to not cause problems for the services that we share spectrum allocations with. We do not need additional notification requirements or to sign declarations to the ARRL in order to be able to successfully operate on an amateur radio band as a secondary user.

4) Show, from a ULS database perspective, the current utilization of 219-220 MHz. Amateur activity does not threaten primary licensees.

5) Show, to the best of our ability, the actual use of 219 – 220 MHz. This will require a call for participation for sensing stations, such as WebSDRs, amateur operators, receiver enthusiasts, and distributed receiver networks such as SatNOGS, to monitor 219-220 MHz for some period of time, in order to measure primary licensee activity on the band.

6) We assert that AMTS no longer exists. The current primary licensees are not coastal stations. The rules assuming coastal activity do not reflect the reality of this allocation as it exists today. Old rules are hampering efficient use of the spectrum.

7) Propose specific deletions of rules. What exactly should and could be struck down?

8) What are the anticipated effects and repercussions, both positive and negative, of deleting the notification requirements?

9) Is there anything else that could or should be changed? We discussed the 100 kHz bandwidth requirement and its origins in specific equipment available to amateurs at the time. Current FCC thinking is to not have rules that hardcode equipment requirements. Deleting the 100 kHz bandwidth limit in favor of no bandwidth or symbol rate limit had broad support at the workshop.

Thank you to all of you that support our regulatory work. It will take a lot of time and effort to draft proposed rules and to go through the proposal process. The FCC may or may not choose to entertain new rules for 219 MHz. If they do, then a public comment period will happen. If they do not, then the draft rules will still be of benefit, as they will record the point of view and consensus of digital mode amateur radio experimenters and enthusiasts.

Please join this effort at ORI by visiting https://openresearch.institute/getting-started

Sphere of Activity

July 12-13 2024 – ORI exhibited at HamCon:Zion in St. George, Utah, USA. We invited Deep Space Exploration Society (DSES) and QRZ.com columnist Dave Jensen to join our booth. We showcased their work alongside our technical and regulatory achievements. Total staff was three for this weekend event at a very nice conference center. Some of the travel expenses were reimbursed by ORI. This collaboration was entirely successful. Our CEO gave a talk about AI/ML in Amateur Radio at HamCon’s forum, which can be seen at https://www.youtube.com/watch?v=ZgHUavf5Ddo

August 8-11 2024 – Our Open Source Digital Radio exhibit at DEFCON 32 in Las Vegas, Nevada USA was a big success. ORI hosted demonstrations from RFBitBanger, Ribbit, Opulent Voice, and our Regulatory team in the RF Village. A staff of four set up, staffed, and tore down the exhibit. Travel and lodging was funded by ORI. Our CEO gave an RF Village talk about Synthetic Aperture Radar and Amateur Radio, which can be seen at https://www.youtube.com/watch?v=fiR4woe4O5Q

August 29 2024 – ORI participated in the quarterly Federal Communications Commission Technology Advisory Council meeting and presentation in Washington, DC. ORI is represented on the FCC TAC by Michelle Thompson and Matthew Wishek. Open Source and Amateur Radio interests have been defended at the federal level. ORI contributions to the Dynamic Spectrum Sharing and 6G Working Groups can be seen in the video at https://www.fcc.gov/news-events/events/2024/08/technological-advisory-council-meeting-august-29-2024 If you are interested in supporting regulatory participation and defense, please join ORI, support ORI’s work, and let us know how you want to help.

October 2-9 2024 – ORI organized a week-long event called Digital Update for Microwave (DUM) in Vancouver, British Columbia, Canada. There were 3 on site for this event and 9 participants virtually. Housing was reimbursed by ORI. The focus of the workshop was Opulent Voice and Iterative Receiver Theory technical work. We did not address the 10 GHz antenna questions posed from our outreach prior to the event due to time limitations. Better 10 GHz mobile radio gear is what people are looking for, and this will get attention from ORI over the next quarter. DUM is an event that provides opportunities for people interested in digital communications on the amateur microwave frequencies. The contributions to ORI’s GitHub repositories showed that DUM was well worth the effort. Should ORI have a DUMII?

Thank you to all who support our work! We certainly couldn’t do it without you.

Anshul Makkar, Director ORI
Frank Brickle, Director ORI
Keith Wheeler, Secretary ORI
Steve Conklin, CFO ORI
Michelle Thompson, CEO ORI

Inner Circle Newsletter: Summer 2024

A Sizzling Summer Summary prepared just for you!

Read on for highlights from all our technical and regulatory open source digital radio work. ORI’s work directly benefits amateur radio, provides educational and professional development opportunities for people all over the world, and puts ethics and good governance first.

Opulent Voice Flying High

Opulent Voice is an open source high bitrate digital voice (and data) protocol. It’s what we are using for our native digital uplink protocol for ORI’s broadband microwave digital satellite transponder project. Opulent Voice has excellent voice quality, putting it in a completely different category than low bitrate digital communications products such as D-Star, Yaesu System Fusion, and DMR.

Opulent Voice switches between high resolution voice and data without requiring the operator to switch to a separate packet mode. Opulent Voice also handles keyboard chat and digital file transmission. Seamless integration of different data types, using modern digital communication techniques, differentiates Opulent Voice from any other amateur radio protocol.

Opulent Voice will fly on the University of Puerto Rico’s RockSat-X launch on 13 August 2024. It’s been a very positive experience working with the students and faculty at the University.

An implementation on FPGA for the PLUTO SDR is well underway, with an active international team delivering quality results. This implementation will not only turn your PLUTO SDR into an Opulent Voice transceiver, but it will have remote operation functionality.

Hear what Opulent Voice sounds like by following the links in an earlier update at https://www.openresearch.institute/2022/07/30/opulent-voice-digital-voice-and-data-protocol-update/

We’ve come quite a long way in less than two years! The FPGA implementation upgrades the modulation from 4-ary frequency shift keying to minimum shift keying, and increases forward error correction performance and flexibility.

HAMCON:ZION 2024 is This Week!

Please visit us at HAMCON:ZION 2024 this weekend, 12-13 July 2024 in St. George, Utah, USA.

The event website is https://www.hamconzion.com/

ORI will have a club booth at the event. We opened our space to QRZ.com (https://www.qrz.com/) and Deep Space Exploration Society (https://dses.science/). This combined exhibit is a one-stop shop for the best in community, technical, and citizen science amateur radio activity.

We have a volunteer presenting on Artificial Intelligence and Machine Learning in Amateur Radio. The talk opens with a brief summary of the history of our relationship with created intelligence and then explores case studies of the use of artificial intelligence and machine learning in amateur radio. Talk is 1pm on Friday in Entrada B.

Open Research Institute at DEFCON32

We will present an Open Source Showcase at DEFCON in the Radio Frequency Village 12-13 August 2024, with accessible exhibits and demonstrations. Here is the list of scheduled project demonstrations.

Regulatory Efforts: ORI works hard for open source digital radio work and moves technology from proprietary and controlled to open and free in intelligent and mutually beneficial ways. Our work on ITAR, EAR, Debris Mitigation, AI/ML, and Synthetic Aperture Radar will be presented and explained. Find out more at https://github.com/OpenResearchInstitute/documents/tree/master/Regulatory

Ribbit: this open source communications protocol uses the highest performance error correction and modern techniques to turn any analog radio into a digital text terminal. No wires, no extra equipment. Learn how to use this communications system and get involved in building a truly innovative open source tactical radio service. Find out more at https://www.ribbitradio.org

Satellite: ORI has the world’s first and only open source HEO/GEO communications satellite project. All working parts of the transponder project will be demonstrated, from Opulent Voice to antenna designs.

The Dumbbell antenna: We have an HF antenna design based on a highly effective inductive loading technique first written about in 1958. Learn about this antenna and find out how to make your own. Repository can be found at https://github.com/OpenResearchInstitute/dumbbell

RFBitBanger: an HF QRP system and novel digital protocol called SCAMP. Kit information and updates will be available. Get your Batch 2 kit today at https://www.ebay.com/itm/364783754396

Radar: Our regulatory and technical work in synthetic aperture radar will be demonstrated. One of our volunteers will be giving a talk about open source synthetic aperture radar in the RF Village speakers track. Here is the abstract.

Synthetic Aperture Radar (SAR) is one of the most useful and interesting techniques in radar, providing high resolution radar satellite images from relatively small satellites. SAR is not limited by the time of day or by atmospheric conditions. It complements satellite photography and other remote sensing techniques, revealing activity on the Earth that would otherwise be hidden. How does the magic happen? This talk will explain the basics of SAR in an accessible and friendly way. That’s the good news.

The bad news? SAR is controlled by ITAR, the International Traffic in Arms Regulations, and is listed in the USML, the United States Munitions List. ITAR regulates the export of defense articles and services and is administered by the US State Department. This includes both products and services as well as technical data, such as catalogs of high resolution radar imagery. The articles and services regulated by ITAR are identified in the USML. If ITAR doesn’t get you, then EAR just might. The Export Administration Regulations (EAR) are administered by the US Commerce Department, and items are listed on a Commerce Control List (CCL). Commercial products and services and dual-use items that are not subject to ITAR could be regulated by EAR. Even if you are free of ITAR and EAR, you may still be regulated by yet another agency, such as the FCC.

Regulation of SAR chills commercial activity, creating costly and time-consuming burdens. But why does any of this matter to signals hackers? Because technology has overtaken the rules, and devices used by enthusiasts, researchers, students, and hackers are increasingly likely to have enough capability to fall into export-controlled categories. The penalties are harsh. Fear of running afoul of ITAR is enough to stop a promising open source project in its tracks.

Is there a way forward? Yes. ITAR has a public domain carve out. Information that is published and that is generally accessible or available to the public is excluded from control as ITAR technical data. That’s great in theory, but how can we increase our confidence that we are interpreting these rules correctly? How can we use and build upon these rules, so that our community can learn and practice modern radio techniques with reduced fear and risk? Can we contribute towards regulatory relief when it comes to SAR? We will describe the process, report on the progress, and enumerate the challenges and roadblocks.

RFBitBanger Batch 2 Kits Available

Kits are available at our eBay store at this link https://www.ebay.com/itm/364783754396

Be a part of the future with a prototype Batch 2 kit build of the RFBitBanger, a low-power high-frequency digital radio by Dr. Daniel Marks KW4TI. Presented by Open Research Institute, this kit is designed to produce 4 watts of power and opens up a new digital protocol called SCAMP.

SCAMP is now available in FLDigi!

Source code and extensive instructions can be found at https://github.com/profdc9/fldigi

Your donation in exchange for this kit directly enables the further development of an innovative Class E amplifier based radio design. It has a display, button menu navigation, and keyboard connection for keyboard modes and keyboard-enabled navigation. This radio can be taken portable or used in a case. If you have a 3d printer, then Dr. Marks has a design ready for you to print in the repository linked below.

  • Built-in digital modes: CW, RTTY, SCAMP (FSK and OOK, multiple speeds)
  • Key jack supports straight keys and iambic paddles
  • Open Source hardware and firmware, Arduino UNO compatible https://github.com/profdc9/RFBitBanger
  • External sound-card FSK digital modes supported (including FT4/FT8)
  • Experimental SSB support
  • Serial port support (2400 baud) for send and receive in keyboard modes

SCAMP is a new protocol that allows keyboard-to-keyboard contacts with a digital protocol that has excellent connection performance. See Dr. Marks’ presentation about RFBitBanger at QSO Today Academy in September 2023 to learn more about SCAMP and the RFBitBanger project. Link below:

All surface mount parts on the main board are pre-installed at the factory. All the through-hole parts you need to complete the radio are provided for you to solder yourself.

Builder’s notes and photos of all the components to help you identify and install them can be found here:

https://github.com/OpenResearchInstitute/RFBitBanger-kit/tree/main/batch2

If you don’t know how to wind toroids or solder surface mount capacitors, this is an excellent kit to learn on. There are just six toroids on the main board, and two on each band pass filter board. You can build just one band pass filter board and operate on a single band, or you can build an assortment. We provide 12 filter boards, enough toroids to build any 9 filters, and a supply of capacitors that will let you build those 9 filters for 9 different HF ham bands. These capacitors are size 1206, which is the largest common size for SMT capacitors and the easiest to solder manually. All you’ll need is a pair of tweezers and your regular soldering iron and solder. We provide detailed instructions on winding the toroids and soldering the capacitors. You get spare filter boards to experiment with.

Friendly Support is provided through a dedicated Open Research Institute Slack channel.

Instructions on how to join this community are here:

https://www.facebook.com/openresearchinstitute https://www.instagram.com/open_research_institute/
https://x.com/OpenResearchIns

400 Subscriber Milestone on YouTube

Thank you to everyone reading this that has supported ORI and how we publish our work on YouTube.

I know YouTube is not for everyone, but it is an effective way to communicate what we do and what challenges we face, and it lets people know there’s a community out there 1) doing things that they might find wonderful and 2) worth hearing more about.

We have 400 subscribers, which is a bit of a milestone. This is a lot for a very technical all-volunteer organization that devotes its time supporting and promoting project work, while staying firmly in the background.

Our proudest moments are when projects succeed and are recognized on their own merits, under their own name, and under their own branding. Ribbit, RFBitBanger, Haifuraiya, a variety of published Open Source FPGA work, FPGA training, Opulent Voice, Versatuner, Dumbbell, actively participating in IEEE, FCC TAC membership, Remote Labs, our many regulatory successes, and our active and successful mentoring in professional and academic settings – these are all clear indications that we’re on the right track and doing a great job.

Not explicitly mentioned are the many places we’ve helped projects succeed behind the scenes, around the world.

We are committed to an altruistic approach that delivers clear value to project work. This approach has been abused only once, by one organization.

Being accountable, open, and successful is the cost of doing our type of business. This is a price happily paid.

Thank you for being part of it!

https://www.youtube.com/c/OpenResearchInstituteInc

Opportunities This Week at ORI

Greetings all!

What do you need help with this week?

Here’s some opportunities at ORI.

1) Pierre and Ahmet are looking for people to help with mobile app design on Ribbit

The Ribbit Radio app is in both Android and Apple testing. The updates to Rattlegram are incorporated and the app is functional on both platforms. We have had excellent response for test teams and things are moving forward.

To make the app as great as it can be, we could use some additional human resources for UX/UI/code development. If this sounds like something you are interested in, please join #ribbit on our Slack or write to me directly and I’ll get you in touch with the team leads. 

2) DEFCON volunteers for the booth/exhibit. We’ve got just enough people to cover it. It’s a great event. We have solid support from RF Village and we advertise to Ham Radio Village. If you have been sitting on the sidelines waiting for a chance to do something for ORI, this is the best event of the year. 

https://defcon.org/ for details about DEFCON 10-13 August 2023
https://wirelessvillage.ninja/ for details about our Village, RF Hackers Sanctuary.

3) FPGA designs for Haifuraiya and Neptune. Want to use MATLAB/Simulink, VHDL, and Verilog to make open source digital communications designs for aerospace, terrestrial, and drone applications? These designs run on updated FPGA stations in ORI Remote Labs, and everything is on the microwave amateur radio bands. Whenever microwave frequencies are mentioned, it’s good to also say that “we use these bands or lose them”. We’ve got plenty to do. Get in touch on #haifuraiya or #neptune on Slack or with any ORI director. 

4) Meander Dipole construction phase. Project Dumbbell explores an overlooked HF antenna design. There has been strong interest in these designs from multiple people (some of whom are on this list), clubs, and organizations. We need to build the designs that MATLAB says look pretty good. It’s time to make them work over the air and write up some construction and measured-performance articles. 
As always, there’s plenty more going on, but these projects have some specific needs today. 

Thank you to everyone that supports our work. I’d like to especially thank the IEEE and ARRL for being excellent partners to ORI. 

-Michelle Thompson

Updating the Opulent Voice Interleaver

The interleaver for Opulent Voice needs to be updated because the frame size has increased. We are incorporating RTP, UDP, and IP layers into the existing OPUS and 4-ary MFSK layers and now have what we think may be the final frame size.

Since convolutional encoding is used for Opulent Voice payload, an interleaver is necessary to get the best bit error rate performance out of the convolutional encoder. The interleaver is used over both the physical layer header (Golay encoded) and the data payload (a 1/2 rate Convolutional code). Opulent Voice is an open protocol that we use for our HEO/GEO uplink. It can also be used terrestrially on the #hamradio bands at 70cm and above. Find out more at https://www.openresearch.institute/2022/07/30/opulent-voice-digital-voice-and-data-protocol-update/

The distance that an interleaver spreads out bits in a frame is the most familiar performance measurement. It’s commonly called “spread” or “minimum interleaved distance”. However, we learned about another metric that is important in Turbo codes. Several papers refer to the measure of randomness of the mixture of bit position reassignments as “dispersion” (for example, see https://cgi.tu-harburg.de/~c00e8lit/cgi-bin/ivpub/download.php?file=kb-wpmc2008.pdf). That particular paper cited another paper (reference [6]) as defining dispersion.

Following that citation led to a paper, but that paper didn’t mention dispersion or explain the equation. Going back to the original paper, we started working with the definition of dispersion that we had, which used the cardinality of the set of indices of original bit positions vs. permuted bit positions. This seemed straightforward enough, but after trying it in MATLAB we always got the minimum dispersion value, so there must be something wrong with our interpretation.

Volunteers then spent time trying to figure out whether dispersion is an important enough metric for a single convolutional code, like the one we have in #OpulentVoice. In other words, should we not simply choose the polynomials that result in the largest minimum interleaved distance? Selecting an interleaver based on a balance between how far apart it spreads the bits and how randomly the bits are distributed is a useful methodology for Turbo codes, but it may not be strictly necessary for a single convolutional code used with 40 ms frames.
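To make these two metrics concrete, here is a minimal sketch in Python of how spread and dispersion can be computed for a candidate interleaver permutation. The formulas follow common definitions from the Turbo code interleaver literature (spread as the smallest combined input/output distance over all pairs of positions, and dispersion as the normalized count of distinct displacement pairs); the exact definitions in the papers cited above may differ, so treat this as illustrative rather than as our actual selection tool.

from itertools import combinations

def spread(perm):
    # Minimum interleaved distance: the smallest |i - j| + |perm[i] - perm[j]|
    # over all pairs of distinct input positions i, j.
    return min(abs(i - j) + abs(perm[i] - perm[j])
               for i, j in combinations(range(len(perm)), 2))

def dispersion(perm):
    # Number of distinct displacement pairs (j - i, perm[j] - perm[i]) for i < j,
    # normalized by the total number of pairs N*(N-1)/2. Regular (block)
    # interleavers score low; random interleavers score close to 0.8.
    n = len(perm)
    pairs = {(j - i, perm[j] - perm[i]) for i, j in combinations(range(n), 2)}
    return len(pairs) / (n * (n - 1) / 2)

# Example: a 4x4 block interleaver over 16 bit positions.
block = [4 * (i % 4) + (i // 4) for i in range(16)]
print(spread(block), dispersion(block))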

Everyone is welcome to join in the discussion and work to create quality #OpenSource work for #digital communications at ORI. Please see https://openresearch.institute/getting-started to be welcomed to our community.

Inner Circle Newsletter March 2023

March 2023 Inner Circle
Welcome to our newsletter for March 2023!

Inner Circle is a non-technical update on everything that is happening at ORI. Sign up at this link http://eepurl.com/h_hYzL

Contents:
FPGA Workshop Cruise with ORI?
ORI’s Birthday 6 March – Celebrate With Pins!
RFBitBanger Prototypes
Announcing the ORI App Stores
QSO Today Ham Expo Spotlight
Jay Francis in QEX
Pierre W4CKX Declares Candidacy for ORI Board of Directors

FPGA Workshop Cruise with ORI?
Want to learn more about open source FPGA development from experts in the field? Want to get away? How about something that can give you both? We are looking at organizing an FPGA Workshop Adventure Cruise. Be part of the planning and write fpga@openresearch.institute

ORI’s Birthday – Celebrate With Pins!
We celebrate our 4th birthday on 6 March 2023. Thank you to everyone who has helped ORI grow and succeed in so many different ways. To commemorate our anniversary, we have a limited-edition acrylic logo pin. They will be available for a small donation at all upcoming in-person events. Where will we be? We’ll be at DEFCON 31 and IEEE IWRC in Little Rock, AR, USA 13-14 September 2023. Want to include us at your event before then? Let us know at hello@openresearch.institute

RFBitBanger Prototypes
Interested in high frequency amateur (HF) bands? Want to learn about Class E amplification? Excited about open HF digital protocols that aren’t just signal reports? Well, we have a kit for you. Here’s a walk-through by Paul KB5MU of all RFBitBanger modes. This project is led by Dr. Daniel Marks, is enthusiastically supported by ORI, and will be demonstrated at DEFCON in August 2023. We are doing all we can to have kits available for sale by DEFCON, or sooner.

Announcing the ORI App Stores
Open Research Institute can be found in the Google Play Store and the Apple App Store. That’s right – we are in both app stores delivering open source mobile apps. Thank you to everyone that has helped make this possible. The Ribbit app will be available on both platforms as our initial release. Do you know of an open source application that needs a home? Get in touch at hello@openresearch.institute and let’s talk. We want to share our platform and support applications that help open source and amateur radio.

QSO Today Ham Expo Spotlight
We hope to see you again at QSO Today Ham Expo, 25-26 March 2023. If you haven’t gotten a ticket yet, please visit https://www.qsotodayhamexpo.com/
This is a wonderful event that showcases the best parts of amateur radio. The theme for this Ham Expo is “New License – Now What?” Recordings will be available on the Ham Expo platform for a month, and then will be available on YouTube for anyone to view. ORI will volunteer at the March 2023 QSO Today Ham Expo and will have technical presentations, a booth, and poster sessions at the Autumn 2023 QSO Today Ham Expo.

Jay Francis in QEX
Please see page 20 of the March/April 2023 issue of QEX magazine for an article co-authored by Jay Francis, our AmbaSat Re-Spin team lead. Excellent job, Jay!

Pierre W4CKX has declared his candidacy for ORI Board of Directors
We welcome Pierre’s interest in being a member of the board. Pierre is the Ribbit project lead. He brings broad industry knowledge, experience in Agile project management, a commitment to ethical leadership, and innovative energy. Learn about all our directors at https://www.openresearch.institute/board-of-directors/

Are you interested in supporting work at ORI? Consider being part of the board. We’d like to expand from 5 to 7 members in order to better serve our projects and community.

We’ve got lots going on with Opulent Voice, Haifuraiya, AmbaSat Respin, and regulatory work. We support IEEE in many ways, one of which is logistics support with technical presentations such as “Advances in AI for Web Integrity, Ethics, and Well Being” by Srijan Kumar PhD. Video recording of his talk can be found here.

Thank you from everyone at ORI for your continued support and interest!

Whatever will we do for our April 1st newsletter?

Want to be a part of the fun? Get in touch at ori@openresearch.institute

Inner Circle Newsletter February 2023

Greetings all! Welcome to the February 2023 issue of the Inner Circle Newsletter from Open Research Institute.

Join the Inner Circle

Sign up for this newsletter at http://eepurl.com/h_hYzL

Thank you so much for your time, attention, and support. We appreciate you, we welcome your feedback, and we are dedicated to serving the community to the best of our abilities. You can get in touch with the ORI board of directors directly at hello@openresearch.institute.

A Puzzle Just For Fun

Here’s a puzzle. Chicken Nuggets have been on the menu at the international fast food chain McDonald’s since 1983.

If Chicken McNuggets are sold in packs of 6, 9, or 20, then what is the largest number of nuggets that cannot be ordered?

Answer is at the end of this newsletter!

Projects

Our volunteer teams have been busy and successful, and our project lineup has grown.

Regulatory Efforts: ORI works hard to promote and defend open source digital radio work. We do all we can to help move technology from proprietary and controlled to open and free. Our work on ITAR, EAR, Debris Mitigation, and AI/ML are where we have spent most of our time over the past two years. We were a member of the Technological Advisory Committee for the US Federal Communications Commission in 2022, and co-chaired the Safe Uses of AI/ML Subworking Group. We have received consistently positive reviews for all of our work, and there has been increasing use of the results.

Ribbit: this open source communications protocol uses the highest performance error correction and modern techniques available to turn any analog radio into an efficient and useful digital text terminal. No wires, no extra equipment. The only thing you’ll need to use it is the free open source Android or iOS app on your phone. Learn how to use this communications system and get involved in building a truly innovative open source tactical radio service by visiting https://ribbitradio.org

Join Ribbit mailing lists at: https://www.openresearch.institute/mailing-lists/

Amateur Satellite: ORI has the world’s first and only open source HEO/GEO communications satellite program, called Haifuraiya. We will demonstrate all working parts of the transponder project at DEFCON 31, where broadband digital communications and open source electric propulsion will be featured. Find out how to support or join this and other teams at https://openresearch.institute/getting-started

AmbaSat for 70 cm: We’ve redesigned the AmbaSat board to move it from 915 MHz to 70 cm and it will be flown on a sounding rocket this year. With increasing interest in LoRa for both space and terrestrial use, this has proven to be a popular and useful project. The design has been adapted for applications in India and Japan.

Opulent Voice: a digital protocol that seamlessly combines high fidelity voice and data, using modern forward error correction, authentication and authorization, and efficient minimum frequency shift keying modulation. Opulent Voice will be flown on a sounding rocket this year and it is the native digital uplink protocol for Haifuraiya. Completely open with the high quality voice we deserve to hear. Due to the bandwidth requirements of the 16kHz OPUS codec, Opulent Voice can be used on 70cm and above ham bands, or anywhere else where the modest bandwidth requirements can be met.

Remote Labs: We have two remotely accessible workbenches for FPGA development, with Xilinx 7000 and Xilinx Ultrascale+ development boards as the focus. We also have several SDRs and radio utility devices available through virtual machine access. The 7000 series development board has an Analog Devices ADRV9371 radio system attached, and that has enabled a number of open source FPGA products to be published. This is a unique resource that has produced a lot of good work and is constantly being improved and updated. In addition to the development boards, the laboratory has a network accessible spectrum analyzer, an oscilloscope with logic analyzer extension, power supplies, frequency and power counters, and dedicated human resources available to help students, volunteers, or professionals contribute to open source work. Help it be more useful by spreading the word about ORI Remote Labs.

Equipment available: https://github.com/phase4ground/documents/tree/master/Remote_Labs/Test_Equipment
How to get an account: https://github.com/phase4ground/documents/blob/master/Remote_Labs/ORI-New-User-Setup.md
Using FPGA Development Stations: https://github.com/phase4ground/documents/blob/master/Remote_Labs/Working-With-FPGAs.md

Versatune: a next-generation hardware and software product for amateur digital television. It is open source and affordable. We have committed engineering resources to support Versatune and are very excited about how things are going. Some of the Versatune team will be at Hamvention 2023 in Xenia, OH, USA, and it will be represented at DEFCON in August 2023.

HF antennas: We have a novel foldable antenna design for space and terrestrial use. The hardware prototype will be demonstrated at DEFCON. This design manipulates radiation resistance to produce best-of-class results. Think you can’t do 160m without an enormous antenna? Think again.

HF QRP: Coming soon, an exciting HF QRP digital radio board and protocol. The hardware prototypes will be demonstrated at DEFCON. What might happen when we combine the HF digital radio with the novel foldable antenna? We think you’ll be delighted.

Battery Matching Curves: are you available to mentor a college student interested in learning how to match up charge and discharge curves from NiCd cells in order to create battery packs? These packs would then be tested and/or deployed in the field. Our student volunteer has collected the data and is looking to learn how to use Jupyter Notebooks to select the cells to create battery packs.
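For anyone curious what that Jupyter work might look like, here is a minimal sketch of the cell-matching step. It assumes a hypothetical measurements.csv with cell_id and capacity_mah columns; the student’s actual data format, pack size, and matching criteria may well differ.

import csv

def build_packs(rows, cells_per_pack=4):
    # Sort cells by measured capacity so that cells grouped into the same pack
    # are as closely matched as possible.
    cells = sorted(rows, key=lambda r: float(r["capacity_mah"]))
    return [cells[i:i + cells_per_pack]
            for i in range(0, len(cells) - cells_per_pack + 1, cells_per_pack)]

with open("measurements.csv", newline="") as f:
    packs = build_packs(list(csv.DictReader(f)))

for n, pack in enumerate(packs, start=1):
    caps = [float(c["capacity_mah"]) for c in pack]
    print(f"Pack {n}: capacity spread {max(caps) - min(caps):.1f} mAh")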

Logistics

We’re growing and adapting!

We will be changing our GitHub project name from Phase4Ground to Open Research Institute very soon. Phase4Space GitHub project will change to Haifuraiya, which is the program name for our HEO/GEO design. These changes better reflect the content and purpose of the 64 repositories that span everything from important historical archives to open source music to the most modern open source encoders available.

We have a very well-qualified applicant for our open board of directors position. We would like to invite interested community members to consider applying, so that we can expand the board beyond this filled position and go from our current five members to seven. Given our continuing growth, a larger leadership team would ensure continued smooth operations. These positions are unpaid, engaging, and can be demanding. The most important skill set is a strong sense of ethics and service.

Fundraising and Grants

We’ve applied for the GitHub Accelerator Program (Remote Labs) and the IEEE Innovation Fund (Polar Codes in Ribbit). If you have a recommendation for ORI in terms of partnerships or collaboration, please let us know at hello@openresearch.institute

Support ORI financially directly through the website https://openresearch.institute. There is a PayPal donation widget at the bottom of almost every page. Donations can be directed to any project, or to general operations. ORI has a very low overhead, with most projects coming in under 5%.

Support our open source propulsion work and get a cool desk toy at https://us.commitchange.com/ca/san-diego/open-research-institute/campaigns/where-will-we-go-next

We’ve raised enough money to cover materials for machining the engine parts. The next step is to raise enough money to pay for the electronics. Please help spread the word!

Thanks to our wonderful community, we have employee matching in place at Microsoft and Qualcomm. If you have an employee matching program at your work, and you think ORI would fit in, please consider nominating us. Our EIN is 82-3945232.

Events

Where can you meet up with ORI people?

QSO Today Ham Expo

We support and attend QSO Today Ham Expo, held online 25-26 March 2023. The theme of this event is “New License, Now What?” and focuses on people new to amateur radio.

Our page for QSO Today Ham Expo content is https://www.openresearch.institute/qso-today-ham-expo-technical-demonstrations/

IMS2023

Join us at the amateur radio social at the International Microwave Symposium (IMS2023) on Tuesday 13 June 2023 in San Diego, CA, USA at 6pm. It will be held in a beautiful outdoor venue with food and drink provided. The easiest way to register for this event is to purchase an exhibition badge and then sign up for the social. https://ims-ieee.org/ is the event website.

DEFCON

We are getting ready for our biggest event of the year. We have proposed an in-person Open Source Showcase to RF Village for DEFCON 31 in Las Vegas, Nevada, USA from 10 – 13 August 2023.

Our page for the event, with all the latest and greatest details, can be found at https://www.openresearch.institute/defcon/

Want to help at DEFCON? Please visit https://openresearch.institute/getting-started and let us know!

IWRC 2023

IEEE wants to bring together all participants to take full advantage of CHIPS Act funding. IEEE will hold an Innovative Workforce Resources Conference in Little Rock, AR, 13-14 September. There will be a reception at the Clinton Presidential Library, and attendees will enjoy the best BBQ in the country. The National Science Foundation requires that a certain percentage of funding be spent in states that don’t receive their fair share of research money. The goal of this conference is to pull together researchers from small organizations like ORI and get research done, with Arkansas as a focus.

We couldn’t agree more. After all, we are putting a lot of time and energy into Remote Labs South, located just outside Little Rock, AR. Bringing innovative open source digital radio work to students, workers, and volunteers that need it the most simply makes sense. If you can attend IWRC 2023 and help represent ORI please get in touch. We will be reaching out to IEEE chapters in Arkansas as well.

Read about the CHIPS and Science Act here: https://en.wikipedia.org/wiki/CHIPS_and_Science_Act

Puzzle Solution

43 is the largest number of nuggets that cannot be ordered.

What is the largest number of McNuggets that you can’t buy with packs of 6, 9 and 20? After putting in their blood, sweat, and tears, the mathematicians found that the answer is 43. You cannot buy 43 nuggets with packs of 6, 9 and 20, but you can buy any amount larger than 43.

Please see Mike Beneschan’s excellent blog about this type of problem at https://mikebeneschan.medium.com/the-chicken-mcnugget-theorem-explained-2daca6fbbe1e

The other Non-McNugget numbers are 1,2,3,4,5,7,8,10,11,13,14,16,17,19,22,23,25,28,31,34, and 37.
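If you’d like to check this yourself, a brute-force search is only a few lines of Python. Here is a quick sketch, not the elegant number-theory route in the blog post above:

def can_order(n, packs=(6, 9, 20)):
    # True if exactly n nuggets can be bought with the given pack sizes.
    if n == 0:
        return True
    return any(can_order(n - p, packs) for p in packs if p <= n)

non_orderable = [n for n in range(1, 61) if not can_order(n)]
print(non_orderable)       # ends ..., 34, 37, 43
print(max(non_orderable))  # 43

# Once six consecutive amounts (44 through 49) are orderable, every larger
# amount is too: just add more packs of 6.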

Trivia: You can get 47 in two ways: 3×6 + 1×9 + 1×20, or 0×6 + 3×9 + 1×20.

We’ve used the McDonald’s version of the chicken nugget to present and frame this mathematical puzzle. Here’s a link about the history of this menu item: https://www.thrillist.com/news/nation/history-of-chicken-mcnuggets

Robert C. Baker invented the chicken nugget, among many other things. He was a true innovator of what can be fairly called “modern foods”. A brief Wikipedia article about him can be found here: https://en.wikipedia.org/wiki/Robert_C._Baker

A song written about this remarkable inventor can be enjoyed at this link: https://youtu.be/OEa8wqv4QM0

Do you have an idea for an interdisciplinary puzzle for our next newsletter? We’d love to hear about it. Write ori@openresearch.institute

Until Next Time

Thank you so much for being part of our Inner Circle! You are the motivation for all of this work, provided to the general public for free. We believe it makes the world a better place.

HamCation 2023: Call for Open Research Institute Volunteers (booth/housing provided)

Greetings all!

If you are not familiar with the US amateur radio vendor fair HamCation, then please visit https://www.hamcation.com/

ORI has a booth in a very nice location right in front of the prize drawing area.

We have shown at HamCation for many years. In the past, we’ve collaborated with and supported exhibits and presentations from Society of Amateur Radio Astronomy (SARA), Tucson Amateur Packet Radio (TAPR), M17, DEFCON, GNU Radio, SBMS Beacon Project, and more.

We’ve organized four all-day forum sessions, recruited talks from all over the world, and welcomed people and projects that needed space to present their efforts to the large crowd in Orlando, FL. We’ve done a lot of demos, given out thousands of dollars of prizes and swag, and hosted several contests. The organizers have been deeply appreciative and a delight to work with.

Aside from ARRL, ORI has fielded the most diverse and successful set of exhibits at HamCation in recent history.

It’s not a bad hamfest, as ham events go. Out of all of the ones in the US, this has been the best one for us.

Our plans were to show the Versatune project, which is a fantastic upgrade to a beloved ham DVB product, show Ribbit and Rattlegram, demonstrate Haifuraiya satellite work, introduce our sounding rocket work, and more.

Ribbit project team members cannot attend due to work and time conflicts. We’ll show through IEEE and potentially at Ham Expo.

Versatune has run into insurmountable delays and supply chain issues with Raspberry Pis. We looked at every possible alternative to get Versatune done faster with alternate parts, including rewriting the codebase for some very different off-the-reel parts that were available. We decided this was not a reasonable or achievable course of action. The original design is excellent and making progress – just not fast enough to show at HamCation in February 2023.

Haifuraiya (Open source HEO/GEO satellite) will show at QSO Today Ham Expo in March and through IEEE.

Sounding Rocket students cannot afford to come to HamCation, and we have similar supply chain problems there with Opulent Voice.

So – What do we have at HamCation?

We have reserved a four-bedroom Airbnb very close to the fairgrounds. Each room has a king-sized bed, and the house has a large kitchen and a very nice living room.

We have a booth at HamCation that is now completely open for anyone from ORI to use. I will do anything I can to help you showcase our work at HamCation. Do a shift at the booth, enjoy the very nice house, and promote our work.

More about this will be on Slack in the HamCation2023 channel.

Get in touch with me (cell phone number below) to take advantage. Deadline is 1 February.

If we have no staff, then we cancel and attempt to get the money back or donate the housing and booth space to a worthy group.

Please forward to anyone you think might be interested. What sort of work do we support? It’s outlined here:

https://www.openresearch.institute/your-project-is-welcome/

-Michelle Thompson
(858) 229 3399

AI vs. Human: ChatGPT vs. ORI Open Source Designers

Greetings all,

You know that ORI has been active in a variety of regulatory work for several years now. One of those areas has been in AI/ML and the effect on communications. We served on the US FCC TAC in 2022, have presented and written on the subject, and are working towards an AI/ML Handbook for Amateur Radio.

You may have seen news about OpenAI GPT, or ChatGPT.

https://openai.com/api/

A lot of people are testing this chatbot out for reverse engineering and for writing software. There have been some successes in cases where the model would have had access to code or documentation about protocols, observations, or technical behavior during training, and where the requests match up well with that training. 

For example, a friend asked the chatbot to create a Python script to access the webcam on their computer, and they shared a still image from the resulting feed. Another example from another friend resulted in better reverse engineering results than they had gotten with either IDA or Ghidra over many weeks.
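For context, the webcam request is the kind of small, well-documented task these models handle well. A still-capture script along those lines is only a few lines of Python. Here is a minimal sketch using OpenCV, not the script the chatbot actually produced:

import cv2

cap = cv2.VideoCapture(0)   # open the default camera
ok, frame = cap.read()      # grab a single frame
if ok:
    cv2.imwrite("webcam_still.jpg", frame)
cap.release()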

So, how does ChatGPT handle VHDL?

We have something specific to compare against, since we just completed a COBS decoder in VHDL for our open source HEO/GEO transponder work for #Haifuraiya.

Let’s start off simple. Here’s the first dialogue with the chatbot, and the results.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder in VHDL

Sure, here’s an example of a COBS decoder in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity COBS_decoder is
  Port (
    input_data : in std_logic_vector;
    decoded_data : out std_logic_vector
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  — Declare a variable to keep track of the current index
  — in the input and output data streams
  variable current_index : integer := 1;
 
  — Declare a variable to store the length of the
  — run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  — Iterate over the input data
  for i in input_data’range loop
    — If the current input value is a zero, increment
    — the zero run length variable
    if input_data(i) = ‘0’ then
      zero_run_length := zero_run_length + 1;
    — Otherwise, the current input value is not a zero,
    — so we’ve reached the end of the zero run
    else
      — Set the decoded value at the current index to
      — the length of the zero run
      decoded_data(current_index) := conv_std_logic_vector(zero_run_length, 8);
     
      — Increment the current index and reset the zero
      — run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;
end Behavioral;

This code decodes the input data using the COBS (Consistent Overhead Byte Stuffing) algorithm, which involves replacing runs of zeros in the input data with a single byte that indicates the length of the zero run. For example, the input ‘101001000’ would be decoded as ‘103101’.

This decoder iterates over the input data and maintains a variable zero_run_length that is incremented for each zero in the input data. When a non-zero value is encountered, the length of the zero run is written to the output data and the zero_run_length variable is reset.

I hope this helps! Let me know if you have any other questions.”

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knew what COBS meant. It produced code, with comments, and a brief “design document”, with examples.

The code produced is not licensed or cited.

Our open source HEO transponder project uses AXI-based blocks in the FPGA. So, let’s add AXI to the request.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder with AXI interface in VHDL

Sure, here’s an example of a COBS decoder with an AXI interface in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

— AXI interface definitions
library unisim;
use unisim.vcomponents.all;

entity COBS_decoder is
  Port (
    AXI_IN  : in  AXI4Stream;
    AXI_OUT : out AXI4Stream
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  — Declare a variable to keep track of the current index
  — in the input and output data streams
  variable current_index : integer := 1;
 
  — Declare a variable to store the length of the
  — run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  — Iterate over the input data
  for i in AXI_IN.data’range loop
    — If the current input value is a zero, increment
    — the zero run length variable
    if AXI_IN.data(i) = ‘0’ then
      zero_run_length := zero_run_length + 1;
    — Otherwise, the current input value is not a zero,
    — so we’ve reached the end of the zero run
    else
      — Set the decoded value at the current index to
      — the length of the zero run
      AXI_OUT.data(current_index) := conv_std_logic_vector(zero_run_length, 8);
     
      — Increment the current index and reset the zero
      — run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;
 
  — Set the valid and last flags in the output AXI stream
  AXI_OUT.valid <= ‘1’;
  AXI_OUT.last  <= ‘1’;
end Behavioral;

This code is similar to the previous example, but it uses an AXI (Advanced eXtensible Interface) interface for the input and output data. The AXI interface is a standard interface used in many FPGA designs, and it provides a way to stream data into and out of the decoder.

In this code, the input data is read from the AXI_IN.data port, and the decoded output is written to the AXI_OUT.data port. The valid and last flags are also set in the output AXI stream to indicate that the data is valid and that it is the last data in the stream.

I hope this helps! Let me know if you have any other questions.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knows about the previous request and updated the design. No license.

Here’s a review of the COBS implementation from Paul Williamson KB5MU.

“I don’t know what this is, but it isn’t a COBS decoder with AXI interfaces.

It does seem at first glance to be some sort of thing with AXI interfaces, but it does not implement an AXI interface at all. It just relies on a (possibly imaginary) library type called AXI4Stream that somehow magically does AXI interfaces, in a way that’s both implausibly smart and inexplicably dumb. I think the classic phrase “not even wrong” applies here.

The AXI4Stream object apparently has a member .data that is an array of bytes, which might make sense if the entire AXI transfer is being treated as a single entity here. The code iterates through the input .data using the ’range attribute, as if the AXI4Stream object has a defined fixed transfer length.

Which it can’t have, if it’s a general purpose AXI stream implementation.

So, if this was ever anything real, it must have been a special purpose AXI implementation with a fixed transfer length. Assuming that the AXI4Stream .data actually works that way, the entity does
do some sort of transformation on the input data to create the output data.

We asked for that to be a COBS decoder, but the function that it actually implements bears only the slightest resemblance to a COBS decoder. What it actually does is output a stream of words that are the lengths of the runs of zeroes in the input data. Nonzero data does not get passed through at all.

Instead, a run of nonzero bytes gets translated into a run of zeroes, as if two adjacent nonzero bytes meant nothing but a zero-length run of zeroes.

Here’s an example of what this code seems to do:

  Input  .data:  1 2 3 4 0 0 0 0 1 2 3 4
  Output .data:  0 0 0 0         4 0 0 0

That’s certainly not a COBS decoder, or even anything useful at all.

Note that the output is variable length, but never longer than the input.
We had to assume that the AXI4Stream object had a fixed .data size to get this far, but now it has to have a variable data size for the output.

Supposing we had successfully suspended disbelief up to here, we now examine the final two lines of code in the Behavioral architecture. It sets the .valid and .last members of the AXI_OUT port to ‘1’.

That makes absolutely no sense if it’s supposed to encapsulate an entire AXI stream transfer. Those signals only make sense if we’re treating one word at a time, as you’d expect to see in a normal, general-purpose AXI stream object.

The extremely verbose commenting style in this code suggests that the original source material was from an introductory tutorial. Maybe the AI has been tuned to favor heavily commented code, and ends up picking up partially completed junk from the middle of tutorials?
Anyhow, this is not an example of automated programming in response to a high-level query. This is gibberish that succeeds only in having a superficial resemblance to the code requested.”

So, does this work?

Let’s try it out in the lab!

The first thing we notice is that there is an obvious syntax error with a missing semicolon. Aside from that, there is no AXI4Stream port type. It’s not recognized. Vivado appears to be disappointed in this file, and wants little to do with this code. This is where we should stop, but I spent some time looking to see if the ports were at least close to being declared correctly.

The model appears to have taken some custom or maybe enumerated type from somewhere, and come up with a custom AXI4Stream type. This is not recognized by the unisim library right off the bat, and that’s the sort of thing that automated code writing needs to get right.
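For reference, here is what a COBS decode actually does, sketched in Python at the byte level. This is illustrative only; it ignores framing and error handling, and it is separate from the clocked AXI-stream implementation shown below. Each code byte tells the decoder how far away the next zero was in the original data: copy code minus one literal bytes, then insert a zero, except after a code of 0xFF.

def cobs_decode(encoded: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(encoded):
        code = encoded[i]                  # distance to the next original zero
        if code == 0:
            raise ValueError("zero byte inside a COBS-encoded frame")
        out += encoded[i + 1 : i + code]   # code - 1 literal data bytes
        i += code
        if code < 0xFF and i < len(encoded):
            out.append(0)                  # the zero that the code byte replaced
    return bytes(out)

# 0x11 0x22 0x00 0x33 encodes to 0x03 0x11 0x22 0x02 0x33
assert cobs_decode(b"\x03\x11\x22\x02\x33") == b"\x11\x22\x00\x33"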

Let’s look at the code written by humans from ORI. The decoder can be found here:

https://github.com/Abraxas3d/fpga_cores/blob/master/src/decoder.vhd

-=-=-=-=-=-=-=-=-=-=-=-=-=-

— Company: Open Research Institute, Inc.
— Engineer: Skunkwrx, Abraxas3d
— Design Name: COBS protocol decoder
— Module Name: decoder – Behavioral
— Project Name: Phase 4 “Haifuraiya”
— Target Devices: 7000 Zynq
— Tool Versions: 2021.1
— Description: COBS protocol decoder.

— Additional Comments: This work is Open Source and licensed using CERN OHL v2.0

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

use IEEE.NUMERIC_STD.ALL;

–Entity Declaration
entity decoder is
    Port ( rst      : in STD_LOGIC;
           clk      : in STD_LOGIC;
           s_tdata  : in STD_LOGIC_VECTOR (7 downto 0);
           s_tlast  : in STD_LOGIC;
           s_tvalid : in STD_LOGIC;
           s_tready : out STD_LOGIC;
           m_tdata  : out STD_LOGIC_VECTOR (7 downto 0);
           m_tlast  : out STD_LOGIC;
           m_tvalid : out STD_LOGIC;
           m_tready : in STD_LOGIC);
end decoder;

–Architecture
architecture Behavioral of decoder is

— internal copy of s_tdata
    signal input_data          : STD_LOGIC_VECTOR (7 downto 0);

    — s_tdata delayed by one clk where s_tvalid is high
    signal input_data_d        : STD_LOGIC_VECTOR (7 downto 0);

    — internal version of s_tlast
    — not used for anything
    — We create our own m_tlast from the COBS framing instead
    signal s_tlast_i           : STD_LOGIC;

    — internal version of input handshaking signal s_tvalid
    signal s_tvalid_i          : STD_LOGIC;

    — s_tvalid delayed by one clk
    — thus, a version of s_tvalid aligned with input_data_d
    signal s_tvalid_i_d        : STD_LOGIC;

    — s_tvalid delayed by two clks
    — thus, a version of s_tvalid aligned with output_data
    signal s_tvalid_i_d_d      : STD_LOGIC;
   
    — countdown of bytes in this sequence
    — loads from input_data_d when it represents the frame count
    —   as determined by signal counter_load
    — contains the count while m_tdata contains the implied 0 at
    —   the end of a sequence, or the next count that replaced it
    — counts down thereafter, to contain 1 when the last non-zero
    —   byte of the sequence is on m_tdata
    — allowed to count down to 0, but not beyond
    signal count               : STD_LOGIC_VECTOR (7 downto 0);

— enable to load count from input_data_d on this clk edge
— two cases detected:
—    * first valid non-zero byte after a frame separator
—    * first valid byte after count is exhausted
— allowed to be high for multiple cycles
    signal counter_load        : STD_LOGIC;

    — counter_load delayed by one clk where s_tvalid is high
    — used to identify the first valid data byte of any sequence,
    —   for purposes of computing m_tvalid (via pre_tvalid)
    signal counter_load_d      : STD_LOGIC;

    — detection of a valid frame separator (zero) byte in input_data_d
    signal frame_sep           : STD_LOGIC;

    — frame_sep delayed by one clk where s_tvalid is high
    — used to compute counter_load
    — used to compute rising edge of pre_tvalid
    signal frame_sep_d         : STD_LOGIC;
   
    — frame_sep_d delayed by an additional clk (not depending on s_tvalid)
    — used to find the first non-zero byte of the new frame
    signal frame_sep_d_d       : STD_LOGIC;
       
    — move the frame_sep signal that occurred during m_tready low
    — out to the first cycle when m_tready is high again
    signal use_saved_frame_sep : STD_LOGIC;

    — flag to remember that the frame count for this sequence was 255,
    —   to handle the special case that such a sequence does not have
    —   an implied zero byte at the end.
    — set when loading count with 255
    — cleared when the counter is reloaded with anything else
    signal case_255            : STD_LOGIC;

    — internal version of m_tdata output
    signal output_data         : STD_LOGIC_VECTOR (7 downto 0);

    — internal version of m_tlast output
    — high when the last byte of a frame is valid on m_tdata
    signal m_tlast_i           : STD_LOGIC;

    — delayed versions of m_tlast
    signal m_tlast_i_d         : STD_LOGIC;
    signal m_tlast_i_d_d       : STD_LOGIC;

    — intermediate result for m_tvalid.
    — high across all data bytes of each sequence on m_tdata
    — does not go low for bytes on m_tdata corresponding to
    —   bytes invalidated by s_tvalid.
    signal pre_tvalid          : STD_LOGIC;

    — internal version of m_tvalid output.
    — pre_tvalid with periods of low s_tvalid_d_d punched out
    signal m_tvalid_i          : STD_LOGIC;

    — internal version of m_tready input
    — also the internal version of s_tready output
    — passes through m_tready to s_tready with no clk delays
    signal m_tready_i          : STD_LOGIC;

    — constant byte value 0xFF, for comparison purposes
    signal all_ones            : STD_LOGIC_VECTOR(input_data’range) := (others => ‘1’);

    — constant byte value 0x00, for comparison purposes
    signal all_zeros           : STD_LOGIC_VECTOR(input_data’range) := (others => ‘0’);

begin

    — asynchronous assignments
   
    frame_sep <= ‘1’ when input_data_d = all_zeros and s_tvalid_i_d = ‘1’
                else ‘0’;
   
    m_tlast_i <= ‘1’ when ((frame_sep = ‘1’ and m_tvalid_i = ‘1’ and m_tready = ‘1’))
                else ‘0’;
   
    counter_load <= ‘1’ when (input_data_d /= all_zeros and frame_sep_d = ‘1’ and s_tvalid_i_d = ‘1’)   — start of frame
                          or (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = ‘1’)   — start of next sequence in frame
                else ‘0’;
   
    m_tvalid_i <= ‘1’ when ((pre_tvalid = ‘1’ and s_tvalid_i_d_d = ‘1’    — usual case, if input_data was valid
                        and not (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = ‘0’)) — defer last byte; might be m_tlast
                      or (pre_tvalid = ‘1’ and to_integer(unsigned(count)) = 1
                        and s_tvalid_i_d = ‘1’ and s_tvalid_i_d_d = ‘0’)) — pick up that deferred last byte
                else ‘0’;
               
   
    s_tready <= m_tready_i;
    m_tdata <= output_data;
    input_data <= s_tdata;
    s_tvalid_i <= s_tvalid;
    m_tready_i <= m_tready;
    m_tvalid <= m_tvalid_i;
    m_tlast <= m_tlast_i;
   
— processes

    set_case_255 : process (rst, clk)
    begin
        if rst = ‘1’ then
            case_255 <= ‘0’;
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            if counter_load = ‘1’ and input_data_d = all_ones then
                case_255 <= ‘1’;
            elsif counter_load = ‘1’ and input_data_d /= all_ones then
                case_255 <= ‘0’;
            end if;
        end if;
    end process set_case_255;

   
    delay_s_tvalid : process (rst, clk)
    begin
        if rst = ‘1’ then
            s_tvalid_i_d <= ‘0’;
            s_tvalid_i_d_d <= ‘0’;
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            s_tvalid_i_d <= s_tvalid_i;            
            s_tvalid_i_d_d <= s_tvalid_i_d;
        end if;
    end process delay_s_tvalid;
   
   
   
    create_pre_tvalid : process (rst, clk)
    begin
        if rst = ‘1’ then
            counter_load_d <= ‘0’;
            pre_tvalid <= ‘0’;
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            if s_tvalid_i_d = ‘1’ then
                counter_load_d <= counter_load;
                if (frame_sep_d_d = ‘1’ and frame_sep_d = ‘0’)            — normal last byte of frame
                or (counter_load_d = ‘1’ and frame_sep_d = ‘0’)           — normal first byte of a sequence
                then      
                    pre_tvalid <= ‘1’;
                end if;
            end if;
            if frame_sep = ‘1’
            then
                pre_tvalid <= ‘0’;
            end if;
            if counter_load = ‘1’ and case_255 = ‘1’ then
                pre_tvalid <= ‘0’;
            end if;
        end if;
    end process create_pre_tvalid;
     
     

    delay_m_tlast_i : process (rst, clk)
    begin
        if rst = ‘1’ then
            m_tlast_i_d <= ‘0’;
            m_tlast_i_d_d <= ‘0’;
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            m_tlast_i_d <= m_tlast_i;
            m_tlast_i_d_d <= m_tlast_i_d;
        end if;
    end process delay_m_tlast_i;

    set_counter : process (rst,clk)
    begin
        if rst = ‘1’ then
            count <= (others => ‘0’);
            frame_sep_d <= ‘0’;
            frame_sep_d_d <= ‘0’;
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            frame_sep_d_d <= frame_sep_d;
            if s_tvalid_i_d = ‘1’ then
                frame_sep_d <= frame_sep;
                if counter_load = ‘1’ then
                    count <= input_data_d;
                elsif count /= all_zeros
                then
                    count <= STD_LOGIC_VECTOR(unsigned(count) – 1);
                end if;
            end if;
        end if;
    end process set_counter;


    create_output : process (rst, clk)
    begin
        if rst = ‘1’ then
            output_data <= (others => ‘0’);
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            if counter_load = ‘1’
            then
                output_data <= all_zeros;
            elsif s_tvalid_i_d = ‘1’ then
                output_data <= input_data_d;                
            end if;
        end if;
    end process create_output;
   
 
   
    selective_delay_of_input_data : process (rst,clk)
    begin
        if rst = ‘1’ then
            input_data_d <= all_zeros;
        elsif rising_edge(clk) and m_tready_i = ‘1’ then
            if s_tvalid_i = ‘1’ then
                input_data_d <= input_data;
            end if;    
        end if;
    end process selective_delay_of_input_data;
   

end Behavioral;

-=-=-=-=-=-=-=-=-=-=-=-=-=-

This codebase doesn’t include just the source code, but also a testbench file, a design document, timing diagrams, a Python notebook that creates test vectors, and Python code that compares results produced by the testbench that uses the test vectors.

None of this sort of thing is produced by ChatGPT.

There is code in some languages for some functions that can be correctly written by AI at this point.

This is a situation similar to algorithmic musical composition. Algorithms can come up with useful ideas, motifs, sketches, or traverse a search space more quickly than a human. They can help a composer, but do not replace one. Algorithms can produce some sparkling gems of ideas. These are fresh cuts and views. This is mixed in with a lot of music that is just plain wrong.

If your programming job happens to be in a well-defined area that is easily automated, then you do have something to worry about today.

All jobs are a target for eventual automation with AI/ML. Yes, all of them. As AI/ML becomes more capable, it will automate even things like testbench writing (HDL Coder already does something like this) and support code (like a Python notebook that creates a solid set of test vectors for a testbench) and design documents. All of us should be aware of what we do being replaced by an algorithm some number of tomorrows in the future.

That day is not yet “today” with ChatGPT, when it comes to writing #VHDL.

What do you think? We’re interested in your experience and opinions with AI/ML writing code.

Inner Circle – September 2022

Greetings from Open Research Institute!

We hope to see you again at QSO Today Ham Expo this weekend, 17-18 September 2022. We have a booth, five talks, three project exhibits, and a lounge space for meet and greet.

To find out more about Ham Expo, visit https://www.qsotodayhamexpo.com/

Since the last Ham Expo, we’ve integrated the DVB-S2/X encoder into the downlink reference design for our open source broadband microwave transponder. We have started on the uplink receiver. We have published a specification for our high bitrate digital voice and data uplink protocol. It’s called Opulent Voice and it will be introduced and described at the Expo. Find the source code for a C++ implementation at https://github.com/phase4ground/opv-cxx-demod

We have two sounding rocket projects, an open source propulsion project, successful regulatory work, and we represent open source and amateur radio interests on the US FCC Technological Advisory Committee. We co-chair the “Safe Uses of AI/ML” subworking group.

Our open source HEO proposal Haifuraiya will be presented at the Expo this weekend and details will be in an upcoming JAMSAT Journal.

We do terrestrial communications as well! Ribbit is a digital emergency communications mode for VHF/UHF. No extra equipment or cables required. We have a poster about the project in the exhibit hall and a presentation. Get the free Android application at https://play.google.com/store/apps/details?id=com.aicodix.rattlegram

All video presentations will be available at our YouTube channel after the Ham Expo platform has closed in 30 days.

We have a mailing list for updates and discussion, a Slack account for engineering work, and all work is published as it is created to our GitHub account.

To join any of these resources at ORI, please visit https://www.openresearch.institute/getting-started/

If you’d like to get monthly newsletters like this one, then do nothing. You’re already part of the inner circle!

Our volunteers could not accomplish all of this wonderful work without your interest and support.

Thank you from everyone at ORI. We value your comments, critiques, and feedback, and look forward to hearing from you. If you use social media, then a lot of what we do is published through the channels linked below.

Thank you from all of us at ORI!

Sign up for the newsletter at http://eepurl.com/h_hYzL

How Can Open Research Institute Help *You*?

Thank you so much for visiting with us at QSO Today Ham Expo! We hope you found it as enjoyable, vibrant, and inspiring as we did.

We presented five talks, had three project exhibits, and a booth. You can find a summary below.

Your interest in our work is appreciated. We want you to enjoy open source non-profit digital radio and amateur radio to the fullest. How can we help?

Here is how to get involved with what we do:

https://www.openresearch.institute/getting-started/

Here’s a roundup of what we brought to Ham Expo. All of what we do is published as it is created. Our work is free to the general public at no cost.

-=-=-=-=-=-=-=-=-=-

Ribbit, a new digital text messaging mode for UHF/VHF emergency communications

Presenters: Pierre Deliou W4CKX, Ahmet Inan

Level: General Audience
Category: Emergency Communications
Time: September 18, 2022, 10:00 am PDT / 5:00 pm UTC

Ribbit is a novel digital text messaging mode for VHF/UHF communications for recreational and emergency use which radically increases the density of information transmitted by spectrum used. It leverages the computing power of the modern smartphone to increase the capabilities of any Handy Talkie without requiring any additional hardware. Its redundant distributed nature allows it to function even when connectivity to the internet is lost during emergencies. Ribbit is open source and currently in its early stages of development. Get the free Android app here: https://play.google.com/store/apps/details?id=com.aicodix.rattlegram

Artificial Intelligence and Machine Learning and the Future of Amateur Radio

Level: Intermediate
Category: DSP and Software Design, Future of Amateur Radio, SDR Software Defined Radio, Software and Services
Time: September 17, 2022, 1:00 pm PDT / 8:00 pm UTC

Artificial Intelligence and Machine Learning are all over the news, taking over academia, getting billions of dollars of commercial investment, and will change both computer networking and wireless communications in fundamental ways. What does the future of amateur radio look like when radios use machine learning to operate? How does our relationship to the bands change with cognitive radio? Where are we in this transformation? What will happen next? A working group to produce an AI/ML Handbook for Amateur Radio is forming now. Register your interest here: https://forms.gle/4dZsLkmyxCkp8JiF6

Truly Excellent Digital Voice Quality: Opulent Voice

Presenters: Paul Williamson KB5MU

Level: General Audience
Category: Digital Voice Mode, DSP and Software Design, Software and Satellites, Space and Satellites
Time: September 17, 2022, 9:00 am PDT / 4:00 pm UTC

Digital voice on amateur radio to date has generally had terrible voice quality, due to both a perception of limited available bandwidth and the convenience (to manufacturers) of repurposing commercial protocols with their older, proprietary, low-bitrate voice codecs. We present and demonstrate a modern digital voice design that can be used on 222 MHz and above with truly excellent voice quality. Source code here: https://github.com/phase4ground/opv-cxx-demod

Haifuraiya – Open Source HEO Satellite Project Proposal

Presenters: Michelle Thompson W5NYV

Level: General Audience
Category: Space and Satellites
Time: September 18, 2022, 9:00 am PDT / 4:00 pm UTC

Haifuraiya is an open source highly elliptical orbit amateur satellite project proposal. This presentation will walk through the high-level project proposal. The project features a digital microwave multiplexing transceiver with a failover transponder mode, has an international volunteer team open to community participants, and complies with debris mitigation guidance from the United States Federal Communications Commission. Repository here: https://github.com/phase4ground/documents/blob/master/Papers_Articles_Presentations/Slide_Presentations/haifuraiya/README.md

User Authentication for Amateur Satellites

Presenters: Paul Williamson KB5MU

Level: General Audience
Category: DSP and Software Design, Software and Satellites, Space and Satellites
Time: September 17, 2022, 8:00 am PDT / 3:00 pm UTC

After a brief discussion of the reasons why user authentication is needed on an amateur satellite service system, we’ll cover a proposed system to implement reasonably secure user authentication on a P4XT digital multiplexing satellite transponder, within the bounds of the (United States) rules.

We had three project showcases in the Project Gallery

Each Gallery had a Q&A tab, a poster or document, links to any papers, and a link to any associated video presentations.

1) Ribbit, a distributed digital messaging platform

2) Opulent Voice – High Bitrate Digital Voice and Data

3) Open Source Satellite Work is Free of ITAR/EAR

Kumospace Lounge

Thank you to Ham Expo for providing fun and effective Lounges for interaction at the conference. We truly enjoyed visiting with everyone that dropped by! We hope you enjoyed our themed events and the extended Q&A in the Lounges.

-=-=-=-=-=-=-=-=-=-