Thank you for Helping ORI Celebrate 6 Years of Open Innovation
6 March 2025
Dear Friends, Supporters, and Fellow Innovators,
Six years ago, Open Research Institute (ORI) embarked on a mission to transform how we develop technology for citizen science and amateur space and terrestrial communications through open source principles.
Today, we celebrate not just our anniversary, but the extraordinary community that has turned this vision into reality.
ORI volunteers have spent these six years navigating a complex terrain between regulatory frameworks and technological innovation. We’ve built bridges between amateur radio enthusiasts, aerospace professionals, and open source developers—creating pathways where previously there were walls.
Over the past year, we’ve reached several significant milestones:
We completed prototype development on Opulent Voice, enabling innovative open source communications for amateur radio operators worldwide.
We successfully advocated for open source approaches at the federal and international levels, from the Federal Communications Commission Technological Advisory Council (USA) to the Open Source Initiative (international) and Open Source Satellite (UK).
We expanded our contributor base and board of directors, bringing diverse and talented expertise to our technical challenges.
We published multiple peer-reviewed papers advancing open source digital radio research in ARRL QEX, from space to drones.
We conducted a workshop in Vancouver (DUM2024) that produced significant progress in Opulent Voice, and we were included in the University of Puerto Rico RockSat-X NASA sounding rocket launch at Wallops. The mission was successful.
Our approach to research and development continues to be grounded in careful domain modeling. This means understanding the fundamental structures and relationships in the communications designs before building solutions. This methodical approach has allowed us to:
1. Create reusable components that serve multiple missions, as seen in the modem module architecture for Opulent Voice.
2. Build technologies that truly serve our community’s needs by actively soliciting comment and critique.
3. Establish and use open standards that promote interoperability, flexibility, and re-use.
In Memoriam
One of our milestones was quite difficult.
Frank Brickle AB2KT passed away in early February 2025 after a valiant battle with pancreatic cancer.
He was surrounded and supported by friends and loved ones, and continued to work and create and contribute until the end. He chose to leave us on his own terms.
Frank very generously agreed to be a Director of ORI in August of 2023. His advice on technical, regulatory, and organizational matters was excellent, tactful, clear, and deeply appreciated. All of us have benefited from his patient counsel.
Many of you know him from amateur radio, where his contributions ranged from designing DttSP (leveraged by HPSDR among many other projects) to Digital Spark Gap (as yet unpublished), a way of exciting all the HF bands in order to efficiently transmit data in an innovative way. And, plenty in between!
Frank explained polynomial spline modulation, synthetic aperture radar techniques, double-checked everything on the dumbbell antenna design, and made numerous suggestions for areas of investigation. He is responsible for our DUM2024 workshop being a success, which meant turning lemons into lemonade. That was just his style.
Frank was also an internationally renowned composer and music mentor.
We often hear “Together everyone achieves more”. Frank lived this. If he had a fault it was wanting to help everyone, all the time, at the expense of a more selfish focus.
The only thing he would want to leave behind is inspiration and encouragement.
Looking Forward:
Our Next Orbit Around The Sun
As we launch into our seventh year, we’re focusing on:
End-to-end communications demonstrations for Haifuraiya. This is a groundbreaking initiative to design and build a fully open source HEO/GEO amateur communications satellite. Opulent Voice is a very large fraction of this work, along with the polyphase channelizer and the scheduling state machine that handles multiplexing between uplink and downlink.
Expanded International Collaboration. We have a goal to submit more regularly to JAMSAT and AMSAT-DL publications.
Regulatory work in advancing solutions to revive the 219 MHz band.
If you are an IEEE member and you qualify for senior membership in IEEE, then please let one of our Directors know. The ORI board can and will happily provide references for your application.
For more information on this please read:
https://www.ieee.org/membership/senior
Join Our Mission
The beauty of open research is that it grows stronger with every contributor. Whether you’re a seasoned RF engineer, a software developer, a regulatory expert, or simply passionate about open technology for space and terrestrial communications, there’s a place for you in ORI.
Visit us at https://openresearch.institute/getting-started to learn how you can participate in our upcoming projects and events.
Thank You
ORI’s strength comes from the interconnection of many individual contributors. To everyone who has contributed code, documentation, expertise, funding, or moral support: thank you for being part of this journey.
Here’s to six years of achievement and many more orbits to come!
It Doesn’t Work Until it Works Over the Air
18 March 2025
“Moving Towards a Minimum Viable Product” for Opulent Voice
The goal for Opulent Voice is to provide an open source communications mode useful for both digital uplink to amateur satellite as well as for terrestrial links. Opulent Voice can transmit voice, keyboard chat, and file transfer (data) without having to switch to a secondary clunky packet mode. It also can handle authentication and authorization traffic in the background. All of this with a voice codec vastly superior to anything available in amateur digital voice products today.
As reported in ORI’s weekly FPGA video conferences
(see https://www.youtube.com/playlist?list=PLSfJ4B57S8DkZry2dr5tS0YVff1opWZjA)
there’s very good news about Opulent Voice.
The progress is hard won. Here’s a summary.
Paul KB5MU reports “Here’s the first 40+ hours of a long tracking test (i.e., the application code doesn’t limit the length of successful runs). You can see the accumulators and the NCO adjusts wandering around. I presume this is primarily driven by temperature changes. The Plutos are only a couple of inches apart on the bench, but they are not identically situated with respect to warm equipment and insulation. The room the Remote Lab is in gets some afternoon sun load. It’s always warmer than the rest of the house, due to the computers and test equipment. Right now it’s 82.4F in there. If the two Plutos were in different environments, the delta temperature would vary more and presumably so would the delta frequency. For the moment, we are within tracking limits.”
This is a long-term over the air test of Opulent Voice between two different PLUTO SDRs running the current firmware build, available at https://github.com/OpenResearchInstitute/pluto_msk and application code available at https://github.com/Abraxas3d/pluto-msk-application
The graphs, from top to bottom, show bit count and error rates. The bit counts are in black and the errors are in red. We see that our error count stays almost completely at zero. The second graph from the top is the bit count and error count zoomed in. The third graph down is bit error rate, where we divide errors by the bit count. The next graph down is a view of what the accumulator in the Costas Loop is doing. The next (fifth) graph down is a zoomed-in view of the accumulator. Note the strong move in the positive direction before a time count of 2*10^4. Numerically controlled oscillator (NCO) adjustments are the sixth graph down. These adjustments are made in direct response to the conditions that the receiver is seeing, in an effort to track the two frequencies that compose our minimum shift keying signal. The final (seventh) graph is a zoomed-in view of the NCO adjustments.

What does all of this mean? It means Opulent Voice works over the air, reliably, and did so for more than five days, even when exposed to large temperature shifts, on relatively cheap hardware not known for the highest performance clocks.
The next step after the long-term stability test was to configure preambles. This functionality is in the hardware design. A four-bit data value is repeated for a length of time set by the application code through a register write. This register takes a value from 0x00_0000 to 0xFF_FFFF. This value represents the number of bit-times the synchronization signal should be sent after the “transmitter on” signal is asserted. For our implementation, with a bit rate of 54.2 kHz, the maximum length of preamble is a little over 300 seconds. A full 40 ms frame is 0x878 bits (2168 in decimal).
Another register lets us enable or disable the preamble with a single bit: 0 disables the preamble and 1 enables it. If the preamble is enabled, the special pattern is transmitted at the beginning of every transmission; if it is disabled, it is not. Finally, another bit allows us to “force” a preamble from application code. In other words, we can make the transmitter send the preamble pattern at any time we like by setting a particular register bit to 1. Setting this bit to 0 returns to normal operation.
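Here’s a minimal sketch of the arithmetic involved. The names below are placeholders for illustration, not the actual pluto_msk register map.

    # Sketch of the preamble-length arithmetic described above. The constant
    # and function names are illustrative, not the actual register interface.
    BIT_RATE_HZ = 54_200          # Opulent Voice bit rate
    MAX_PREAMBLE_REG = 0xFF_FFFF  # 24-bit preamble length field

    def preamble_register_value(duration_s: float) -> int:
        """Number of bit-times of preamble for a requested duration."""
        bit_times = round(duration_s * BIT_RATE_HZ)
        if bit_times > MAX_PREAMBLE_REG:
            raise ValueError("duration exceeds the 24-bit register range")
        return bit_times

    print(hex(preamble_register_value(0.040)))         # 0x878, one full 40 ms frame
    print(round(MAX_PREAMBLE_REG / BIT_RATE_HZ), "s")  # about 310 s, the longest possible preamble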

The purpose of a preamble is to help the Costas loops in the design synchronize as quickly as possible. A very typical way to do this is to construct a signal that has a lot of energy at the two frequencies used in the minimum shift keying modulation.
The two frequencies are one quarter of the bitrate above and one quarter of the bitrate below our intermediate frequency, or +/- 13,550 Hz. So, what we want to create is a spectrum that has a lot of energy at +/- 13,550.
Here is what the simulated spectrum looks like during normal binary data transmissions.

Next is a view of the time series data for this transmission. Top pane is after the differential encoder. Middle pane is the even and odd bitstreams plotted on the same time axis. Bottom is the half-sine representations of the data. These half-sine representations are multiplied by the sine and cosine versions of the carrier frequency, and then those two signals are added together to make the transmitted minimum shift key waveform.

Here is the spectrum of the desired preamble.

By sending a periodic digital pattern, we get spikes in the spectrum in the right places for our Costas loops to lock onto. For this simulation, we input a repeating 1100 data pattern to the differential encoder, which outputs a repeating 1000 data pattern. This is sent to the modulator. The even bits are sent to the in-phase (I) arm of the quadrature modulator. This becomes a repeating 10 pattern. The odd bits are sent to the quadrature (Q) arm of the quadrature modulator. This becomes a repeating 00 pattern. Here is a time series capture.

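Here’s a minimal sketch of that pattern generation, assuming the differential encoder XORs each input bit with the previous encoder output (the convention that reproduces the 1100 to 1000 example above).

    # Sketch of the preamble bit pattern through the transmit chain described
    # above. Assumes the differential encoder XORs each input bit with the
    # previous encoder output bit, starting from 0.
    def differential_encode(bits):
        out, prev = [], 0
        for b in bits:
            prev = b ^ prev
            out.append(prev)
        return out

    pattern = [1, 1, 0, 0] * 4              # repeating 1100 input
    encoded = differential_encode(pattern)  # repeating 1000 output
    even_bits = encoded[0::2]               # to the I arm: 10 repeating
    odd_bits = encoded[1::2]                # to the Q arm: 00 repeating
    print(encoded, even_bits, odd_bits)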
An Earth-Venus-Earth Link Budget from Open Research Institute
20 March 2025
This article presents a detailed link budget analysis for Earth-Venus-Earth (EVE) amateur communications. This open source link budget is a Jupyter Lab Notebook. It begins with a Python dataclass for each fixed earth station. Dataclasses are a type of Python object where only variables are declared; no methods (functions) are included. The site-specific dataclasses have parameters that are true for the site regardless of the target. Following the site-specific dataclasses are the link budget classes, which contain target-specific values and methods that return various gains and losses. The separation between site-specific and target-specific elements modularizes the design and makes it easy to use for different sites and different targets.
The most important output of a link budget object is a carrier to noise ratio at a particular receive bandwidth. A link budget class is assigned a particular site dataclass. The site-specific dataclasses and the link budgets can be mixed and matched with a single line of code. This gives flexibility, as a link budget for a particular target, such as EVE, can be calculated for different sites by having that link budget inherit different site-specific dataclasses.
Once we have the results of this link budget, we then explain the recent results from the March 2025 EVE event. The article closes with a description of plans for the next opportunity for EVE communications experiments, which will happen in October 2026, and how individual amateur experimenters can get involved.
What is a Link Budget?
If you have ever planned out income and expenses with respect to pay, and handled monthly bills, then you have used a budget. A budget is a balance between an amount of something coming into an account or collection, and an amount of something going out. A link budget uses the same concepts as a financial budget, with income (gain) and expenses (losses) counted up over a period of time and space. The link in link budget is the physical path or link between transmitter and receiver. Like financial budgeting, link budgets can range from a very simple picture of financial activity to a very complex accounting of a wide variety of types of income and expenses.
Link budgets are models of the real world. There are many ways to represent the physical environment and almost any model of that environment can be improved. A simple link budget can be very useful. A complex one can be very wrong.
A high quality link budget is accurate, accessible, and flexible. Link budgets can be used in a variety of ways. The most familiar role is predictive, where a link budget is used to design a communications experiment or system implementation. Link budgets can also be used as documentation. These are accurate only after the fact of a communications experiment or system implementation. Finally, link budgets are an excellent educational resource, providing a system-level view of radio communications. The goal for this link budget is to be useful in all three of these roles: predictive, documentary, and educational.
The link budget in this article is written in an object oriented coding style. An object oriented style means that we construct our calculations so that they can be controlled and manipulated in a modular manner. The gains and losses are described using classes. Classes are structures that contain members and methods. Members, which are things like variables and constants, are the “nouns” of our model. The methods, or functions, are the “verbs”. When we think about our link budget, we group members and methods that are related to particular concepts. For example, one of the first classes in the link budget has to do with noise temperature, which is a very important part of our radio environment. The amount of noise impacts signal reception and performance. We need to know where sources of noise come from, and account for them.
Once we have decided on a topic or subject, we define a class. We list the members and the methods that belong to that class. Then we use our class definition to create a calculator. Think of a calculator as if you had a physical custom-designed calculator that let you enter in everything having to do with, say, noise temperature. Each member could be given a value through a keypad. Each method might have its own button. You press the button and, as long as you’ve defined all the members needed by that method, the answer to that particular calculation pops out. We use several different calculators along the way as we build up all the gains and losses of our link budget.

Deciding what work goes into what class is part of the art of computer programming. Two different people, when given the same problem, and each deciding to use object oriented techniques, may come up with very different class structures for their code. The first person might have approached the problem in a highly modular way, with many classes and lots of intermediate results. The second person might have solved the problem with one class, doing things in fewer steps but with more complexity per step. As long as the right answer comes out and the program doesn’t use more resources than are available, both approaches are valid.
EVE compared to EME
The communications mode most similar to EVE is Earth-Moon-Earth (EME). EVE is more challenging than EME communications due to several factors.
1. Much greater distances involved
2. Greater variability in the distances involved
3. Doppler rate of change due to the orbital positions of Earth and Venus changing with respect to each other
4. Different signal reflection characteristics from Venus compared to the Moon
5. Doppler spreading, which is a spreading of the signal in frequency caused by reflection off a rotating object.
Contributors
The team contributing to the link budget includes but is not limited to Michelle Thompson, Matthew Wishek, Paul Williamson, Rose Easton, Thomas Telkamp, Pete Wyckoff, Gary K6MG, and Lee Blanton. Questions or comments about this document can be directed to the issues and pull request functions in the repository below, or one can write an email to hello at openresearch dot institute.
Follow along in the Python source code for the Jupyter Lab Notebook at the above GitHub link.
Dataclass Definitions
Dataclasses in Python are classes that contain only variables and constants. In other words, they are only “nouns”, containing no methods. Our dataclasses contain information about specific sites. Each station configuration at a site gets its own dataclass. We set up dataclasses using the naming format SiteNameLinkParameters. Values for the members of the class have been given to us from sites such as Deep Space Exploration Society (DSES), who first asked for assistance with an EVE link budget. Our dataclasses set the values that are true for these sites regardless of the celestial target. For example, 1296 MHz (23cm band) is a common frequency for all sorts of communications, not just EVE. The location and elevation of a site remains the same no matter what it’s pointing at. Values like this belong to the SiteNameLinkParameters class.
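Here’s a minimal sketch of what such a dataclass can look like. The field names and values are illustrative only; they are not copied from the notebook.

    from dataclasses import dataclass

    @dataclass
    class DSESLinkParameters:
        """Illustrative site-specific parameters (placeholder values)."""
        frequency_hz: float = 1296e6      # 23 cm band
        dish_diameter_m: float = 18.29    # 60-foot dish
        tx_power_w: float = 1500.0        # assumed transmit power
        latitude_deg: float = 38.3        # assumed site location
        longitude_deg: float = -103.2
        elevation_m: float = 1300.0       # assumed site elevation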
Select the Site
We select the site that we want to use for the link budget by setting the variable SiteLinkParameters to the name of the desired dataclass.
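For example, using the illustrative dataclass name from the sketch above:

    # Point the rest of the notebook at the desired station configuration.
    SiteLinkParameters = DSESLinkParameters   # or DwingelooLinkParameters, and so on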
What Sites Might be Involved?
Let’s take some time to talk about what sites are involved in this sort of amateur radio and amateur radio astronomy work.
Dwingeloo Radio Observatory (Netherlands)
Completed in 1956, the Dwingeloo Radio Telescope features a 25-meter dish that was briefly the world’s largest fully steerable radio telescope. After ending scientific operations in 2000, it was designated as a national heritage site in 2009. Since 2007, the C.A. Muller Radio Astronomy Station (CAMRAS) foundation has restored and operates the telescope for amateur radio astronomy and Earth-Moon-Earth (EME) communication. Today, Dwingeloo stands as the world’s largest radio telescope for amateur use, with volunteers conducting radio astronomy observations, supporting educational outreach, and participating in special projects like visual moonbounce. This information is from Wikipedia and CAMRAS.
Stockert Radio Telescope (Germany)
Inaugurated on September 17, 1956, the 25-meter Stockert dish was Germany’s first telescope for radio astronomy. After serving the University of Bonn and Max Planck Institute until 1995, it gained historical monument status in 1999. Since 2005, the telescope has been owned by the Nordrhein-Westfalen-Stiftung and is maintained by Astropeiler Stockert e.V., a volunteer association that proudly calls it “the largest and most capable radio telescope in the world operated by amateurs.” The facility is now equipped with modern technology for scientific observations, student education, and public outreach. This information is from Wikipedia and Stockert.
Deep Space Exploration Society (DSES) (Colorado, USA)
The DSES is a Colorado-based nonprofit organization dedicated to practical astronomy and space science education. Its primary facility is a restored 60-foot dish antenna located in Kiowa County, Colorado. Known as the Paul Plishner Radio Astronomy and Space Sciences Center, the site features a former National Bureau of Standards antenna originally built for tropospheric propagation research between 1957-1974. In addition to radio astronomy research, DSES uses its 60-foot dish for amateur radio EME (moonbounce) activities on the 432 and 1296 MHz bands. This information is from Wikipedia and DSES.
Bochum Observatory (Germany)
Founded in 1946 by Professor Heinz Kaminski as a popular observatory, Bochum Observatory gained international recognition after detecting signals from Sputnik 1 in 1957. Its 20-meter parabolic antenna, inaugurated in 1965, became famous during the 1957-1975 period for receiving transmissions from Russian and American space vehicles, including Apollo missions. Today, the facility participates in research projects with NASA and DLR, receiving solar data from space probes while also serving as an educational center focusing on sustainability, climate change, and sky observation. This information is from Wikipedia and Bochum.
These historic sites represent the essential bridge between professional radio astronomy and amateur radio enthusiasts, making significant contributions to astronomical research, space communication, and public education while preserving important scientific heritage. When we say Dwingeloo, Stockert, DSES, or Bochum, we are talking about specific station configurations that are located at these sites. Each site has multiple configurations for a wide variety of scientific and communications targets.
System Noise Temperature Worksheet
The system noise temperature is an important factor in determining the sensitivity of the receiver of a radio telescope or communication system. It represents the total noise from all sources that affects the system’s ability to detect weak signals. It is measured in degrees Kelvin (K).
Unlike signal strength, which scales with dish diameter, system noise temperature is largely independent of the physical size of the antenna. Instead, it depends on factors related to the quality of the antenna construction, receiver electronics, and environmental conditions.
This system noise temperature calculation has the following components.
Sky Noise (T_sky): Background radiation from the whole universe and atmospheric contributions. This varies with elevation angle (more atmosphere at lower angles) and weather conditions (clear, cloudy, rainy) and with what is in the sky in that direction.
Spillover Noise (T_spillover): Noise caused by the antenna feed pattern transmitting energy beyond the dish edges. This means it also picks up thermal radiation from the ground. This is determined by the feed design and positioning.
Scatter Noise (T_scatter): Noise resulting from dish surface imperfections that scatter incoming signals. Calculated using the Ruze equation, which relates surface RMS errors to performance degradation at a given wavelength.
Receiver Noise (T_receiver): Noise generated by the receiver’s electronic components, often specified as noise figure or noise temperature.
Total System Temperature
The total system noise temperature in degrees Kelvin is calculated as:
T_sys = T_ant + T_receiver
Where T_ant (the antenna temperature) is the combination of sky noise, spillover noise, and scatter noise:
T_ant = (main_beam_efficiency * T_sky) + T_spillover + T_scatter
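Here’s a minimal sketch of those two formulas, using a main beam efficiency of 0.7 as an assumption for illustration (the worksheet’s actual efficiency value may differ).

    def antenna_temperature(t_sky_k, t_spillover_k, t_scatter_k,
                            main_beam_efficiency=0.7):
        """T_ant = (main_beam_efficiency * T_sky) + T_spillover + T_scatter."""
        return main_beam_efficiency * t_sky_k + t_spillover_k + t_scatter_k

    def system_temperature(t_ant_k, t_receiver_k):
        """T_sys = T_ant + T_receiver."""
        return t_ant_k + t_receiver_k

    # Values similar to the clear-sky, 45 degree elevation case shown below:
    t_ant = antenna_temperature(t_sky_k=7.6, t_spillover_k=14.5, t_scatter_k=0.1)
    print(round(system_temperature(t_ant, t_receiver_k=28.0), 1), "K")  # about 48 K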
Impact on Performance
A lower system noise temperature directly translates to better receiver sensitivity. For deep space communications, minimizing each noise component is essential. The best way to do this is to use high-quality low-noise amplifiers (LNAs). The noise from the LNA is the most significant factor at the receiver.
Precise dish surfaces minimize scatter noise. If they’re out of round or warped, then there’s a degradation. Well-designed feeds reduce spillover noise. The feed needs to be matched as well as it can be to the dish dimension. Operating at higher elevation angles when possible reduces sky noise because we’re going through less of the atmosphere.
This worksheet implements some calculations for each of these noise components. The goal is to produce a realistic system performance assessment, and to provide a solid system noise temperature to the Link Budget calculation. If a measured T_sys number is available then that should be used instead.
Here are the results from this section:
System Noise Analysis at 1296.0 MHz:
Elevation: 45.0°
Conditions: clear
Noise Temperatures:
T_sys: 47.8 K
T_ant: 19.8 K
T_sky: 7.6 K
T_spillover: 14.5 K
T_scatter: 0.1 K
T_receiver: 28.0 K
System Noise Analysis at 1296.0 MHz:
Elevation: 5.0°
Conditions: cloudy
Noise Temperatures:
T_sys: 83.0 K
T_ant: 55.1 K
T_sky: 58.7 K
T_spillover: 14.5 K
T_scatter: 0.1 K
T_receiver: 28.0 K
EVELinkBudget Class
This link budget class for EVE is called EVELinkBudget. It targets Venus as a reflective surface. We inherit the site-specific link parameters (SiteNameLinkParameters) and call them “params”, and we reference them inside the link budget class. We then set up all of our Venus-specific values in this class and define the functions that we need in order to calculate this specific link budget.
What does this section do? This is where the budgeting happens. We add up all the gains and subtract the losses. This gives power at the receiver. We calculate the noise in our receive bandwidth, and subtract it from our power at the receiver. This gives a carrier to noise ratio (CNR) in dB.
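Here’s a minimal sketch of that final step, assuming the received power has already been accumulated in dBW. The noise and CNR calculations follow the standard kTB relationship, which matches the printed results below.

    import math

    BOLTZMANN = 1.380649e-23  # J/K

    def noise_power_dbw(t_sys_k, bandwidth_hz):
        """Thermal noise power N = k * T_sys * B, expressed in dBW."""
        return 10 * math.log10(BOLTZMANN * t_sys_k * bandwidth_hz)

    def cnr_db(rx_power_dbw, t_sys_k, bandwidth_hz):
        """Carrier-to-noise ratio in the receive bandwidth."""
        return rx_power_dbw - noise_power_dbw(t_sys_k, bandwidth_hz)

    # Minimum-distance example, using values from the results printed below:
    bw = 0.1e6  # 0.1 MHz receiver bandwidth
    c_over_n = cnr_db(rx_power_dbw=-220.46, t_sys_k=47.8, bandwidth_hz=bw)
    print(round(c_over_n, 1), "dB")                           # close to the -58.65 dB below
    print(round(c_over_n + 10 * math.log10(bw), 1), "dB-Hz")  # close to the -8.65 dB-Hz below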
Our communications mode must be able to operate at this CNR or lower in order to close the link. Too low, and the signal cannot be heard at the receiver. We use the CNR result as an input to calculations about candidate communications modes.
Later on, when we use our CNR to evaluate different modes, we’ll include an additional margin above this CNR. Sometimes we use 10 dB over the calculated CNR. Sometimes we use 3 dB over the calculated CNR. It depends on the communications goals of a particular mode. Demodulating and decoding digital communications signals requires more “adverse margin” than simply detecting an analog carrier. Both cases are presented later on in this link budget.
We run this section for the minimum distance from Earth to Venus, and then we run it again to get the maximum. Look at the difference!
Link Budget at Minimum Distance (38 million km) at 0.1 MHz receiver bandwidth
tx_power_dbw: 31.76
radius_venus_km: 6051.80
venus_radar_albedo: 0.15
tx_gain_db: 46.29
rx_gain_db: 46.29
free_space_loss_db: 492.59
pointing_loss_db: -0.00
venus_reflection_loss_db: -8.18
venus_reflection_gain_db: 140.61
system_noise_temperature: 47.80
rx_power_dbw: -220.46
noise_dbw: -161.80
cnr_db: -58.65
cnr_db_1hz: -8.65
Link Budget at Maximum Distance (261 million km) at 0.1 MHz receiver bandwidth
tx_power_dbw: 31.76
radius_venus_km: 6051.80
venus_radar_albedo: 0.15
tx_gain_db: 46.29
rx_gain_db: 46.29
free_space_loss_db: 526.07
pointing_loss_db: -0.00
venus_reflection_loss_db: -8.18
venus_reflection_gain_db: 140.61
system_noise_temperature: 47.80
rx_power_dbw: -253.93
noise_dbw: -161.80
cnr_db: -92.12
cnr_db_1hz: -42.12
Effect of Distance on Received Power and Carrier-to-Noise Ratio
The large variation in distance from Earth to Venus results in a large variation in the received power at the site and in the carrier to noise ratio at the site. This section creates a visualization that shows the differences between the closest path and the furthest path for radio work.
Pointing Error Analysis
Dish antennas have a narrow beamwidth, and pointing errors can significantly impact our link budget. This section models pointing errors to determine their impact on signal strength and to help us include appropriate loss values in our link budget calculations.
We provide two visualizations in the link budget.
The first is a normalized visualization (error/beamwidth ratio). It provides a universal reference applicable to any dish size or frequency. It illustrates the fundamental relationship between pointing error and beamwidth. It demonstrates key principles: 1dB loss occurs at error = beamwidth/3.46, 3dB loss at error = beamwidth/2. It facilitates comparisons across different systems.
The second is an absolute visualization (error in degrees). It shows practical values specific to (for example) the 18.29 m DSES dish at 1296 MHz. It provides exact specifications for pointing requirements. It displays precisely how many degrees of error are acceptable for given loss levels. It is directly applicable to a specific system’s operational planning.
The notebook user can toggle between these visualizations using the show_normalized parameter (True/False).
The calculations are based on established antenna theory. First, there is a beamwidth calculation. It uses the formula beamwidth = 1.22 * lambda/D (in radians) for circular apertures, where lambda is wavelength in meters and D is dish diameter in meters. The result is the 3dB beamwidth (where power drops to half).
Pointing Loss Formula:
pointing_loss = -12 * (error_angle / beamwidth)^2
This quadratic relationship means losses increase rapidly as pointing error grows.
Critical Error Thresholds:
Working backwards from the formula above we get the following. For 1dB loss: error = beamwidth/√12 ≈ beamwidth/3.46. For 3dB loss: error = beamwidth/2.
For example, with a beamwidth of 0.6°:
Maximum error for 1dB loss: 0.17° (0.6°/3.46)
Maximum error for 3dB loss: 0.3° (0.6°/2)
The current system’s pointing error (for example, 0.01°) is displayed on the graph as a purple dot, showing its position relative to these critical thresholds.
For optimal performance, pointing errors should ideally be kept within 1/10th of the half-power beamwidth.
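Here’s a minimal sketch of those two formulas. The function names are placeholders, not the notebook’s.

    import math

    C = 299_792_458.0  # speed of light, m/s

    def beamwidth_deg(frequency_hz, dish_diameter_m):
        """Approximate 3 dB beamwidth of a circular aperture: 1.22 * lambda / D."""
        wavelength_m = C / frequency_hz
        return math.degrees(1.22 * wavelength_m / dish_diameter_m)

    def pointing_loss_db(error_deg, beamwidth):
        """Pointing loss magnitude in dB: 12 * (error / beamwidth)^2."""
        return 12.0 * (error_deg / beamwidth) ** 2

    bw = 0.6  # degrees, the example beamwidth used above
    for err in (bw / math.sqrt(12), bw / 2, 0.01):
        print(f"error {err:.3f} deg -> loss {pointing_loss_db(err, bw):.2f} dB")
    # 0.173 deg -> 1.00 dB, 0.300 deg -> 3.00 dB, 0.010 deg -> 0.00 dB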
How much Doppler do we Have?
How much Doppler do we have to deal with? This section calculates Doppler shift and the rate of change of the Doppler shift, and visualizes the results.
We will have to anticipate and “track out” Doppler shift in order to find our signal in the frequency domain. We need to understand and account for the rate of change of this Doppler shift as well. A third factor is Doppler spread, which is what happens when a signal bounces off a rotating reflector. The DopplerCalculator class calculates Doppler shifts at particular times from particular positions. It calculates worst case Doppler shifts and worst case rate of change of Doppler.
Doppler shift is lowest at inferior conjunction with Earth. This is when Venus is closest to the Earth. How low does it go? Doppler shift briefly passes through zero Hz. At this point in the orbit, however, the rate of change of Doppler shift is the highest. The visualizations in this section of the link budget show what this looks like. In the example usage, we first calculate Doppler values as if we are at the center of the Earth. Why do this? Because we can show what is going on as if we were able to hold still from a non-rotating point of view, as if you were observing the Earth and Venus from a distance. This makes the fundamental trends and truths easier to see before we add in complexities like being on the surface of a more rapidly rotating planet.
After we show what’s going on this way, we use our location specific methods to further refine the model. We are now on the surface of the Earth. We have our particular radio site from the SiteLinkParameters. We set up two more sites for comparison and to show how to change location. Adding in the rotation of the earth changes our Doppler situation and the visibility of Venus. The graphs are more complex because they are more accurate.
Doppler spread and Doppler spread penalty calculations have their own calculator and methods.
Doppler Spread for Venus
Doppler spread happens when our signal reflects off a rotating structure, like Venus. Part of Venus is turning towards us and part of it is turning away from us. The signal reflected off the center of the planet is reflected back with little to no frequency change. The parts closer to the edges or limbs of Venus have the most frequency changes. This is a very important factor in coherently integrated modes of communication.
In the section below, we calculate the Doppler spread. As Gary K6MG puts it “A rough calculation of venus limb-to-limb doppler spreading @ 1296MHz: 4 * venus rotation velocity 1.8 m/s / 3e8 m/s * 1.296e9 c/s = 31 c/s. This calculation is the same as K1JT uses for EME in Frequency-Dependent Characteristics of the EME Path and is motivated from first principles. One edge of venus is approaching the earth at a 1.8m/s velocity relative to the center of venus, the other edge is receding at the same velocity giving one factor of 2. The other factor of 2 is due to the reflection, the wave is shortened or lengthened on both the approach and the retreat.”
This calculation gives us the full extent of the Doppler Spread from Venus rotation, which for Venus is 31 Hz. This is the “footprint” of the Doppler Spread on the x axis of any graph that wants to show the effect of Doppler Spread on a signal. Since Venus is a sphere, the effect is not a uniform distribution over the 31 Hz. The majority of the reflected signal, coming from the parts of the sphere directly facing us, has little shift. The parts of the signal that are greatly shifted are coming from signals bouncing off the edges of Venus. The good news is that this is a small part of the energy. How small a part of the energy, and what numbers to use for Doppler spread are a big factor in our link budget work.
So, when we think of and model Doppler Spread, we do not think of it as a uniform distribution. We think of it as something that has more of a Gaussian curve. Most of the energy is reflected back close to the midline or at 0 Hz spread. It tapers off from there. A Gaussian distribution for this type of signal spread is a good start, but can be further refined based on radar surveys of Venus and results in papers such as Backscattering from an Undulating Surface with Applications to Radar Returns from the Moon, by T. Hagfors.
Another refinement of this model of Doppler spread comes from data from AMSAT-DL’s Bochum site EVE attempt in 2009, where the Doppler spread was less than expected on Venus approach, and from the data from the March 2025 EVE event attempts.
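Here’s a minimal sketch of Gary’s limb-to-limb number together with the Gaussian energy model described above. Treating the limb-to-limb extent as plus or minus three standard deviations is an assumption made here for illustration only.

    import math

    def limb_to_limb_spread_hz(limb_velocity_ms, frequency_hz, c=3e8):
        """Full Doppler spread from a rotating reflector: 4 * v * f / c."""
        return 4.0 * limb_velocity_ms * frequency_hz / c

    spread = limb_to_limb_spread_hz(1.8, 1.296e9)
    print(round(spread, 1), "Hz")  # about 31 Hz for Venus at 1296 MHz

    # Gaussian model of the reflected energy: most power sits near 0 Hz offset.
    sigma = spread / 6.0  # assumption: limb-to-limb extent spans +/- 3 sigma

    def energy_fraction_within(bandwidth_hz):
        """Fraction of reflected energy within +/- bandwidth/2 of the carrier."""
        return math.erf((bandwidth_hz / 2.0) / (sigma * math.sqrt(2.0)))

    print(round(energy_fraction_within(10.0), 2))  # roughly two-thirds in a 10 Hz window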
Mode Analysis
This class does an analysis of potential amateur digital communications modes and their suitability for the link.
The relationship between the CNR from the link budget object (calculated at the operational receiver bandwidth) and the SNR of particular signals (given at different bandwidths) is not always straightforward. The SNRs quoted for many amateur modes are listed alongside an occupied bandwidth, but the quoted SNR is not calculated at that bandwidth; it is calculated at another “normalized” bandwidth. The most common “normalized” bandwidth is 2500 Hz. When we know this is the case, we can list this number (2500 Hz) as the “real” noise bandwidth in the noise_bandwidth_hz value.
We’ve updated our Doppler spread model to use a Gaussian representation as discussed in the Doppler Spread section above. This better reflects the physical reality of how Doppler affects signals. Instead of assuming a flat distribution of energy across frequencies (a linear model), we model the Doppler-shifted energy as following a bell curve centered at the carrier frequency.
The Doppler spread penalty for non-coherently integrated modes is calculated by determining what percentage of the signal’s energy falls outside the mode’s occupied bandwidth. For modes with bandwidth much wider than the Doppler spread, almost all energy is contained within the bandwidth and there’s minimal penalty. As the bandwidth narrows relative to the spread, more energy falls outside the usable bandwidth, increasing the penalty. The modes most affected by the Doppler spread penalty are narrow-band modes. The calculation is doppler_penalty_db = 10 * log10(doppler_spread_hz / mode[“bandwidth_hz”]). The larger the mode bandwidth, the less the Doppler spread penalty.
This Gaussian approach is more accurate because it accounts for the concentration of energy near the carrier frequency with decreasing energy at the edges, which corresponds to the probability distribution of relative velocities in the signal path.
The Doppler spread penalty for coherently integrated modes is calculated not by dividing Doppler spread in Hz by the mode bandwidth like we did for traditional modes, but by multiplying Doppler spread in Hz by the integration time. The calculation is doppler_penalty_db = 10 * log10(doppler_spread_hz * symbol_duration). The longer the integration time, the greater the Doppler spread penalty.
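Here’s a minimal sketch of both penalty calculations as given above. The example modes are hypothetical.

    import math

    def doppler_penalty_noncoherent_db(doppler_spread_hz, mode_bandwidth_hz):
        """Penalty for non-coherently integrated modes: spread relative to the
        mode's occupied bandwidth."""
        return 10 * math.log10(doppler_spread_hz / mode_bandwidth_hz)

    def doppler_penalty_coherent_db(doppler_spread_hz, symbol_duration_s):
        """Penalty for coherently integrated modes: spread times integration time."""
        return 10 * math.log10(doppler_spread_hz * symbol_duration_s)

    spread = 31.13  # Hz, the Venus limb-to-limb spread at 1296 MHz
    print(round(doppler_penalty_noncoherent_db(spread, 4.0), 1), "dB")  # hypothetical 4 Hz wide mode
    print(round(doppler_penalty_coherent_db(spread, 30.0), 1), "dB")    # hypothetical 30 s coherent symbol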


Do any modes close the link? What happens when we include Doppler spread penalties? This penalty in dB makes it harder to receive our signal.
It does look like we have a problem! What can we do about this?
Zadoff-Chu Transmission Proposal
This section defines a class to evaluate Zadoff-Chu sequences as a proposed transmission. It calculates whether or not this transmission type can close the link.
What is a Zadoff-Chu Sequence? This is a digital signal that has a constant envelope and provides a very sharp and clear indication when it is received. These sequences are very useful for synchronization and detection.
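Here’s a minimal sketch of generating one, using the standard Zadoff-Chu definition. The notebook’s own generator may be parameterized differently.

    import numpy as np

    def zadoff_chu(root: int, length: int) -> np.ndarray:
        """Zadoff-Chu sequence x_u[n] = exp(-j * pi * u * n * (n + 1) / N),
        for odd length N and root u coprime with N."""
        n = np.arange(length)
        return np.exp(-1j * np.pi * root * n * (n + 1) / length)

    zc = zadoff_chu(root=25, length=139)
    print(np.allclose(np.abs(zc), 1.0))  # constant envelope
    # Circular autocorrelation has a single sharp peak at zero lag:
    autocorr = np.abs(np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc))))
    print(np.argmax(autocorr), round(autocorr.max(), 1))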
Outline of the Proposal
1. Do coherent integration in Zadoff-Chu segments however long we can given the worst case Doppler rate of change. We assume that we’re going to have to do batch processing with overlapping segments to ensure no signal is missed.
2. Do a Doppler shift compensation between segments.
3. Apply a sliding Doppler compensation during correlation processing.
4. Combine resulting segments non-coherently until we know we can close the link.
5. Return detection result.
What is the minimum integration time given our Doppler rate of change?
We calculate the maximum Doppler rate of change with the get_integration_time() method to find this number, and then use this integration time as a fixed value for our Zadoff-Chu sequences.
Coherent integration time is calculated by how long it takes to exceed a 45 degree phase shift. This number is equal to square root of (0.25/Doppler rate of change). This equation is from “Fundamentals of Radar Signal Processing” by Mark Andrew Richards, 2005.
Next, we get the Number of Chips with get_number_chips(). The Number of Chips in our coherent integration is (chip rate * coherent integration time).
We find Processing Gain with get_processing_gain(). Processing gain is 10 * log10(number of chips in our coherent integration).
Processing Gain ends up being a large number. How can this be so high? By coherently integrating, for example, 5 MHz of bandwidth over 1.34 seconds, we’re concentrating the energy of 6.7 million independent measurements into a single detection decision. That’s a lot of measurements! We get some real gain from doing this.
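Here’s a minimal sketch of that chain of numbers, using the worst-case Doppler rate of change quoted below (-0.14 Hz/second) and the 5 MHz example chip rate.

    import math

    def coherent_integration_time_s(doppler_rate_hz_per_s):
        """Time before the quadratic phase error exceeds 45 degrees:
        sqrt(0.25 / |Doppler rate of change|)."""
        return math.sqrt(0.25 / abs(doppler_rate_hz_per_s))

    def processing_gain_db(chip_rate_hz, integration_time_s):
        """10 * log10(number of chips coherently integrated)."""
        return 10 * math.log10(chip_rate_hz * integration_time_s)

    t_int = coherent_integration_time_s(-0.14)
    gain = processing_gain_db(5e6, t_int)
    print(f"{t_int:.2f} s, {gain:.2f} dB")  # close to the 1.34 s and +68.26 dB quoted in this article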
Zadoff-Chu sequences have ideal auto-correlation properties. This is a mathematical quality that means they achieve the theoretical maximum processing gain, unlike other modulation schemes, which suffer various losses due to inefficiencies that Zadoff-Chu sequences simply do not have. This approach is not without precedent. NASA’s Deep Space Network uses comparable processing gain to communicate with distant spacecraft at extremely low bit rates. Radio astronomers routinely detect signals far below the noise floor using long integration times and correlation techniques like this. Dwingeloo, DSES, and other amateur sites are already familiar with this technique and communications protocol structure.
This proposal is designed around a calculated worst case Doppler rate limitation for Venus of -0.14 Hz/second, ensuring optimal coherent processing during the worst case channel condition, which happens to occur at inferior conjunction.
Is our Processing Gain Enough? We compare to Bandwidth Expansion with get_bandwidth_expansion() to find out. Let’s see where we are with our CNR, assuming DSES is our site.
For example:
CNR in 1 Hz bandwidth = -8.65 dB
CNR in 5 MHz bandwidth = -8.65 dB - 10 * log10(5×10^6) = -8.65 dB - 67 dB = -75.65 dB
Processing gain for 5 MHz bandwidth = +68.26 dB
We have a shortfall of ~7.35 dB.
We do a bandwidth expansion for our chip rate bandwidth with get_bandwidth_expansion() and compare to our previously calculated processing gain.
And, our processing gain, calculated with the limitation of the Doppler rate shift on the sequence length, doesn’t quite get us there.
It seemed to be going so well! What can we do about this? There are three things we can look at. First, we can do multiple non-coherent integrations (e.g., 10 non-coherent integrations would add ~5 dB). Second, we can try a slightly longer coherent integration time if Doppler allows; we designed for the worst case, at inferior conjunction, so maybe we can get away with more. Third, we can use some error correction coding for actual data transmission.
Using JPL’s work as a model, let’s try doing multiple non-coherent integrations. This is under our control, doesn’t add as much complexity as adding error correction, and doesn’t run the risk of blowing past a physical Doppler limit when we have the lowest path loss.
We use a dual-stage processing strategy. First, we perform coherent integration within the Doppler-limited window of (for example) 1.34 seconds across a 5 MHz bandwidth. Then, we combine multiple such integrations non-coherently to achieve positive SNR. This part is flexible, and can be used as Venus approaches or recedes. We just combine more sequences non-coherently.

For example, starting with a 1 Hz CNR of -8.65 dB, the full-bandwidth 5 MHz CNR is -75.65 dB. Coherent processing provides 68.3 dB gain, yielding -7.35 dB.
So, we need to non-coherently integrate some number of segments at inferior conjunction. Coherent processing maximizes gains within a Doppler rate of change limitation, and non-coherent integration extends processing beyond those constraints in a flexible way, depending on what our shortfall really is. Our calculations have a parameter for margin. The default is 0 dB. Changing this margin parameter changes the threshold of detection.
Next, we calculate the number of non-coherent integrations needed with the get_number_noncoherent_sequences_required() method.
At the receiver, we do a matched filtering using the known Zadoff-Chu sequence. The matched filter correlates the received signal with the expected sequence. The correlation output is examined for peaks that exceed a detection threshold. The time offset of the peak indicates the precise round-trip delay. The peak amplitude provides information about signal strength.
Non-coherent combination works with the power (magnitude squared) of the correlation outputs rather than the complex values. For each of the non-coherently integrated sequence correlations, we calculate the magnitude squared. This destroys the phase information but preserves signal energy, which is all we need at this point. We align the correlation outputs based on the expected delay progression and sum up all the magnitude squared results. This summation increases SNR by approximately 5 * log10(L), where L is the number of these sequences.
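Here’s a minimal sketch of that combining step, with hypothetical array shapes. The real processing would also perform the delay alignment described above.

    import numpy as np

    def noncoherent_combine(correlations: np.ndarray) -> np.ndarray:
        """Sum the magnitude-squared matched-filter outputs from L coherent
        segments. correlations has shape (L, num_lags), already delay-aligned."""
        return np.sum(np.abs(correlations) ** 2, axis=0)

    def noncoherent_gain_db(num_segments: int) -> float:
        """Approximate SNR improvement from non-coherent combining."""
        return 5 * np.log10(num_segments)

    print(round(noncoherent_gain_db(10), 1), "dB")  # about 5 dB for 10 segments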

The distinction between coherent versus non-coherent combining is important when calculating processing gain from multiple observations.
from the Link Budget calculator, our min cnr_db_1hz is -8.650 dB
Our chip rate is 1000.0 Hz
We calculated a Doppler spread of: 31.13 Hz
Full bandwidth SNR for our Zadoff-Chu sequence goes from the 1 Hz bandwidth value of -8.65 dB to the chip rate bandwidth expansion of -38.65 dB
Our maximum integration time based on the maximum Doppler rate of change is 0.03 seconds
The number of chips in our integration time is 31.25
Basic processing gain for our coherent integration time is 14.95 dB for 31.25 chips
Basic processing gain: 14.95 dB
Doppler spread penalty: 0.00 dB
Basic processing gain – Doppler Penalty: 14.95 dB
Processing gain with Doppler penalty for our coherent integration time is 14.95 dB for 31.25 chips
Basic processing gain: 14.95 dB
Doppler spread penalty: 0.00 dB
Basic processing gain – Doppler Penalty: 14.95 dB
The difference between Zadoff-Chu bandwidth expansion and the effective processing gain is -23.70 dB
We need this number to be at least 10.0 dB.
We didn’t close the link.
We need to non-coherently integrate 5499985 sequences to produce 33.70 dB more gain.
If we do that, we should now have at least a 10.0 dB margin.
This is 171874.5 seconds of sequences.
Recommended number of coherently integrated sequences recommended for further non-coherent integration is: 5499985, for a gain of 33.70 dB
Coherent combining at 10 * log10(N) is used when you can preserve both amplitude and phase information. Signals add linearly before detection. This is a “voltage addition”. Power grows as N^2 where N is the number of samples, which results in 10 * log10(N) dB gain. Non-coherent combining is 5 * log10(N). This is used when only signal power or amplitude can be preserved. In other words, when phase information is lost. Signals add after detection. This is a “power addition”. Power grows as N (and not N^2) where N is the number of samples. This results in 5 * log10(N) dB gain.
We apply detection threshold to the combined result. Setting the threshold is as much art as science, but we usually have an idea about what we want for a false-alarm detection rate and we will have either measurements or assumptions about the noise. These things are one of the reasons for having a margin parameter in the code for this section. The detected peaks in the non-coherent sum indicate successful reception. A detectable peak is what we are looking for.
We can extend out the non-coherent combination until we close the link, but does this have a limit? Non-coherent is (allegedly) more resilient to phase and Doppler than coherent integration. Can we assume it’s immune, or do we need a factor that scales with number of non-coherent integrations?
Each coherent segment can be processed independently, allowing for parallel implementation. This reduces the burden on the hardware compared to a huge sequence. We think that the combination of coherent processing (optimized to Doppler rate of change constraint) and non-coherent integration (for extending beyond the shortfall we still have) provides a practical approach to close the link.
What does this look like?
Above is a visualization of an example of this proposal. You can see the bandwidth expansion, followed by a processing gain for the coherent integration. This is followed by gain from non-coherent integrations of the coherent integrations.
Doppler Spread
However, this is not the entire story. We have not yet calculated and included the Doppler Spread penalty. Doppler spread has a large effect on coherent integration, and reduces our processing gain. What is the effect of the Doppler Spread from Venus on our Zadoff-Chu signal? Let’s show a summary visualization and then follow with the calculations including Doppler spread. This penalty puts communications back out of range. We would need 171874 seconds of sequences, which is nearly 48 hours. This is unreasonably long and won’t work. Venus isn’t even visible for this long.
So what does this mean?
It means that the link is very hard to close, that no current amateur digital low SNR modes are expected to work with the dishes currently making these attempts, and that even techniques like spreading codes and Zadoff-Chu sequences face a huge setback, primarily from the Doppler spreading penalty.
Has Anything Like This Ever Been Done Before?
Yes, it has. A carrier wave signal has been bounced off Venus and received. This was accomplished by AMSAT-DL in 2009 at Bochum, and the files are on the internet at https://github.com/amsat-dl/EVE. Here’s a summary of what it took to succeed.
Transmission System
Frequency: 2.45 GHz
Transmitter Power: 5 kW (using a specially modified magnetron)
Antenna: 20-meter dish at Bochum observatory
Antenna Gain: 51.5 dBic
System Noise Temperature: 85K
Signal Characteristics
Signal Bandwidth: ~10 Hz (due to Venus rotation spreading)
Echo Frequency: Consistently detected at 516.6 Hz in the baseband
Signal-to-Noise Ratio
Measured: 1.1 dB in 1 Hz bandwidth
Expected/Calculated: 3.8 dB in 1 Hz bandwidth
Venus Reflectivity (radar albedo): 0.10
Receiving System
Detection Method: FFT processing with 1024-point FFT (bin bandwidth = 7.95 Hz)
Integration Time: 5 minutes (with signals visible after 2 minutes)
Sampling Rate: 8138 samples/second, 16-bit complex I/Q data
Antenna Effective Area to System Noise Temperature (A/T): 2 m^2/K
Distance Parameters
Range to Venus: 42.1 million km during the experiment
Round Trip Light Time (RTLT): ~280 seconds (~5 minutes)
Signal Processing Techniques
Noise Reduction: Simple limiter for WLAN interference (limiting signals above ±3000 units)
Detection: Power spectrum integration over multiple FFT frames
Doppler Tracking: Precise software by G3RUH accounting for Earth rotation, Earth orbital motion, Venus orbital motion, Venus rotation effects
Findings
1. Weather Impact: The experiment clearly showed degradation in SNR as weather worsened (from 1.08 dB to -4.26 dB), demonstrating the importance of weather conditions for such weak signal detection. This is why weather conditions are in our link budget.
2. Practical vs. Theoretical Performance: The measured SNR (1.1 dB) was lower than the calculated expectation (3.8 dB), showing realistic margins should be included in planning.
3. Signal Processing Requirements: The Bochum results demonstrate the importance of precise frequency control (within 1 Hz), proper interference mitigation (WLAN signals were problematic), long integration times (minimum 2-5 minutes), and accurate tracking/Doppler compensation.
4. Hardware Innovations: The “magnetron taming” circuit by Karl Meinzer DJ4ZC allowed a magnetron to be used for narrowband communications, potentially a cost-effective approach for any EVE project.
5. Data Format: The raw signal was recorded as complex I/Q samples, as 16-bit PCM, 8138 samples/second. This can be improved by including metadata with SigMF. Information about this open source communications data storage format can be found at https://github.com/sigmf/SigMF.
Can we do Amateur Communications with EVE?
Can we move from “simple” carrier wave detection to real communications? Yes, we think we can. There are four things we can do from here.
First, we duplicate (and build upon) the Bochum 2009 EVE experiment. We need to master techniques that work. Is our Doppler Spread penalty assumption too high? We need to find out.
During the March 2025 EVE event, successful reception of signals from Dwingeloo was recorded at both the Dwingeloo Radio Telescope in the Netherlands and the Stockert Radio Telescope in Germany. The US-based Deep Space Exploration Society transmission was unfortunately not heard, but they, like all of the other sites, are already preparing for the October 2026 opportunity. Involving more sites will improve international amateur radio capability, as the skills required to achieve an EVE contact are modern and innovative.
Second, we can get sites with larger dishes involved. Know of one? Let us know!
Third, we can try for higher transmit power. In some cases, this means getting special regulatory permission for the event. It also means testing the equipment to be used for a time critical event far enough in advance to have confidence in using it and relying upon it.
Fourth, testing digital signal processing techniques with the Moon is highly recommended. We don’t have to wait for Venus to get close again. This work is ongoing – join our Slack account to see it in person and coordinate with other like-minded folks.
How can you help? Support your local site! Join Dwingeloo, Stockert, DSES, and any other citizen science and amateur radio site. And, support ORI’s open source modeling work. Join ORI at https://openresearch.institute/getting-started.
FCC Filing for 219 MHz Rules Changes from ORI
31 March 2025
Thank you to the many people who have helped with this effort. Open Research Institute (ORI) has filed the first of what might be several comments and proposed rulemaking efforts with the FCC about reforming amateur radio use of the 219 MHz band.
https://www.fcc.gov/ecfs/search/search-filings/filing/10329271641887
The list of folks that have contributed and supported this effort to renovate 219 MHz for actual amateur radio use is quite long. This filing and any that follow are the result of over a year of work. Thank you especially to Mike McGinty, ARRL advisors, and Justin Overfelt.
If you would like to help, here is how:
1) Please use this comment to make your own similar request under this particular proceeding. This is a “what regulations do you want to delete?” type of call. As with many FCC calls for comment, it will be dominated by commercial interests. Anything from amateur radio will stand out. The deadline for comments is 11 April 2025. Speak simply and directly. We’d like to use this band without unnecessary and burdensome requirements.
2) Please be ready to file a “reply” comment after the 11 April 2025 deadline. This is a chance for you to say “I agree with this and support this.”
We are not asking to change the fundamental nature of the band. Fixed digital messaging forwarding is super exciting these days because of SDRs, mesh networking, and all sorts of amazing protocol work available to us. We decided to simply ask for removal of the notification and permissions requirements. These requirements have resulted in zero use of this band for over two decades.
The primary service back in the late 1990s when these rules came out was maritime (AMTS). Those licenses were never fully deployed and have now been leased out by railroads. This means, to us, that the permissions requirements now make no sense at all for secondary licensees.
ORI is tired of this and is working to make this situation better. This is a great band with huge, innovative, digital promise. We deserve to have a seat at this table and that means the chair has to actually exist and the door to the room the table is located within has to actually be something we can open.
“Take This Job”
1 April 2025
Interested in Open Source software and hardware? Not sure how to get started? Here’s some places to begin at Open Research Institute. If you would like to take on one of these tasks, please write hello@openresearch.institute and let us know which one. We will onboard you onto the team and get you started.
Opulent Voice:
Add a carrier sync lock detector in VHDL. After the receiver has successfully synchronized to the carrier, a signal needs to be presented to the application layer that indicates success. Work output is tested VHDL code.
Bit Error Rate (BER) waterfall curves for Additive White Gaussian Noise (AWGN) channel.
Bit Error Rate (BER) waterfall curves for Doppler shift.
Bit Error Rate (BER) waterfall curves for other channels and impairments.
Review Proportional-Integral Gain design document and provide feedback for improvement.
Generate and write a pull request to include a Numerically Controlled Oscillator (NCO) design document for the repository located at https://github.com/OpenResearchInstitute/nco.
Generate and write a pull request to include a Pseudo Random Binary Sequence (PRBS) design document for the repository located at https://github.com/OpenResearchInstitute/prbs.
Generate and write a pull request to include a Minimum Shift Keying (MSK) Demodulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_demodulator
Generate and write a pull request to include a Minimum Shift Keying (MSK) Modulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_modulator
Evaluate loop stability with unscrambled data sequences of zeros or ones.
Determine and implement Eb/N0/SNR/EVM measurement. Work product is tested VHDL code.
Review implementation of Tx I/Q outputs to support mirror image cancellation at RF.
Haifuraiya:
HTML5 radio interface requirements, specifications, and prototype. This is the primary user interface for the satellite downlink, which is DVB-S2/X and contains all of the uplink Opulent Voice channel data. Using HTML5 allows any device with a browser and enough processor to provide a useful user interface. What should that interface look like? What functions should be prioritized and provided? A paper and/or slide presentation would be the work product of this project.
Default digital downlink requirements and specifications. This specifies what is transmitted on the downlink when no user data is present. Think of this as a modern test pattern, to help operators set up their stations quickly and efficiently. The data might rotate through all the modulation and coding, transmitting a short loop of known data. This would allow operators to calibrate their receiver performance against the modulation and coding signal to noise ratio (SNR) slope. A paper and/or slide presentation would be the work product of this project.