Inner Circle Newsletter: Autumn 2024

A Harvest of Regulatory and Technical Updates

Open Research Institute is a non-profit dedicated to open source digital radio work. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We are an all-volunteer organization. You can get involved by visiting https://openresearch.institute/getting-started

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Subscribe to this newsletter at http://eepurl.com/h_hYzL

Ribbit Radio Update

It isn't often that a truly new way of using existing technology on the amateur bands excites the community, but Ribbit consistently delivers.

What is Ribbit?

Ribbit is a novel digital text messaging mode for VHF/UHF communications for recreational and emergency use. It radically increases the density of information transmitted per unit of spectrum used. Ribbit turns text into encoded audio, which can be played from your smartphone into the microphone of any radio, on any band, under even very noisy conditions. A smartphone running the Ribbit application on the receiving side can recover the message. Your information will get through.

Ribbit leverages the computing power of the modern smartphone to increase the capabilities of any Handy Talkie without requiring any additional hardware or cable.
Its redundant distributed nature allows it to function even when internet connectivity is lost during emergencies or just not available due to poor coverage.

Developer Updates

Project Lead Pierre W4CKX and Rattlegram Developer Ahmet Inan supported a transition of project development to Alex Okita KO6BVA in late 2023. Since then, Alex has made several big strides forward in both development and demonstration.

Alex’s most recent TO DO list included the following tasks.

1) Build a scheme to compact the Ribbit header. The header is a bit long and could be shortened to save some bits by using a raw bit stream of a simplified character map (see the sketch after this list).

2) Add a keyEncoder to Code Witches that accepts letters as they are typed. When the "Enter" key is pressed, begin encoding and consuming the characters in the input area.

3) Possibly add a message queue above the input area.

4) Build a useful CW font for Code Witches and any other project that wants to use it.
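One way to explore item 1 is to pack the header text into a reduced character set. The Python sketch below is purely illustrative and is not the actual Ribbit header format: it maps each character to a 6-bit index in a simplified character map and packs the result into bytes. The callsigns in the example are drawn from this article and the message string is only a placeholder.

# Hypothetical sketch: pack text into 6-bit codes drawn from a simplified
# character map to shorten a header. Not the actual Ribbit header format.

CHARMAP = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/-."   # 40 symbols fit in 6 bits

def pack6(text):
    """Pack each character as a 6-bit index into CHARMAP, MSB first."""
    bits = []
    for ch in text.upper():
        index = CHARMAP.index(ch)            # raises ValueError for unsupported characters
        bits.extend((index >> shift) & 1 for shift in range(5, -1, -1))
    while len(bits) % 8:                     # pad to a whole number of bytes
        bits.append(0)
    packed = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)
    return bytes(packed)

print(len("W4CKX DE KO6BVA".encode()))       # 15 bytes as plain ASCII
print(len(pack6("W4CKX DE KO6BVA")))         # 12 bytes with the 6-bit map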

All of this work enabled a highly successful demonstration of Ribbit at DEFCON 32 in Las Vegas, Nevada, in mid-August 2024. A special thank you to RF Village for hosting ORI’s Open Source Showcase.

Visitors to the exhibit were able to see Ribbit work in a busy and noisy environment. DEFCON, like most very large conferences, is acoustically noisy; it is also very electrically noisy. Ribbit was demonstrated all weekend long in a very harsh environment, as the WiFi Capture the Flag competition was happening literally steps away from the Ribbit demonstration. The venue, Las Vegas Convention Center West, was full of conference attendees. Despite the background noise and substantial RF interference, Ribbit worked from smartphone app speaker output, to Family Radio Service microphone input, over the air on a busy FRS channel, to the receiving FRS radio speaker output, to a Ribbit application microphone input on the monitoring smartphone. Being able to transmit and reliably receive accurate text data in these conditions is a notable achievement.

Alex has integrated the Ribbit functionality into a web application. This makes Ribbit even easier to use than traditional native applications. The live web app demonstration can be found at https://badkangaroo.github.io/

Recordings of presentations about Ribbit, along with posters and diagrams, can be found at https://www.ribbitradio.org

The source code can be found at https://github.com/OpenResearchInstitute/ribbit_webapp

Practical Real World Use

The technical demonstration for Ribbit, called Rattlegram, got very positive reviews for utility in the field from mountain rescue groups in Chile and Argentina. The challenge is supporting hikers and skiers in places where cellular coverage is limited or not available. Ribbit functionality provided clear benefits with no downsides.

Carlos, our contact in Argentina who supports mountain rescue operations, described his experiences using Ribbit technology.

“Hi everyone, how are you? I’m just back from the mountains in Chile. We did some testing with Rattlegram, in very windy conditions (without any cables) and worked just great !
Send you a picture & video of one of the test we made. We were at this summit:”

In Argentina, Carlos reported:

“These past few months, we have been going out to the mountains, and been using Rattlegram with lot of success. In one of the ‘refugio’ we left an old cell phone with Rattlegram loaded besides the base radio, and we have been sending messages in all the backcountry ski outings and all the messages arrived without a problem.”

If Ribbit can work on the top of the mountains in Chile and Argentina, and if it also performs really well in noisy urban environments, then what can’t it do?

Ribbit Can’t Flourish Without Your Support

Ribbit is innovative, useful, and fully supported by Open Research Institute. But, it won’t realize its full potential without continuing development and community support. If you are excited about Ribbit and want to help, please join ORI at https://openresearch.institute/getting-started and let us know you want to get involved with Ribbit. We’ll make sure you get on the mailing lists and Slack channels.

ORI gratefully accepts donations for Ribbit development. Ribbit has a dedicated bank account, follows non-profit best practices, has an enforced code of conduct, and regularly reports to ORI leadership.

Thank you to everyone helping make Ribbit successful, useful, exciting, and fun to use!

Opulent Voice Comes to the PLUTO SDR
An Open Source Minimum Shift Keying Implementation

Opulent Voice is the native digital voice and data protocol for ORI's open source satellite (or terrestrial) system called Haifuraiya. With high fidelity voice quality, seamless integration of data, keyboard chat, and authentication and control messages, amateur operators experience something truly new and enjoyable. As described by ORI Director Steve Conklin AI4QR, "Leave scratchy and harsh proprietary vocoders with clunky or incomplete data mode integration behind for good." All source code is available at https://github.com/OpenResearchInstitute/pluto_msk

Things worth doing are rarely easy. The Minimum Shift Keying (MSK) implementation of Opulent Voice (OPV) on the PLUTO SDR is no exception. Challenges to the documentation, design, implementation, and test cycles have come up throughout the process. The response from the open source digital radio community has been solid, inspiring, and productive, with patient problem-solving winning the day every time.

Michelle Thompson, ORI’s CEO, described her perspective on OPV’s progress recently at an IEEE event, stating:

“I would like to thank everyone helping ORI achieve ambitious goals for our uplink protocol. It’s a real pleasure to serve this community and meet the needs of this project. It’s a privilege to be part of something so innovative and relevant.”

This feeling is reflected by the OPV volunteer team, who are actively involved around the world in promoting open source work and helping ORI projects like OPV succeed.

Successes Over the Past Quarter and Current Status

OPV was the communications downlink for a NASA RockSat-X mission executed by the University of Puerto Rico, one of the most consistent and successful teams participating in this competitive program. The mid-August 2024 launch from Wallops went flawlessly. The performance of OPV met all expectations and data recovered from the launch is being used to further develop and improve the communications protocol.

OPV presentations at HamCon:Zion and DEFCON during the summer of 2024 were extremely well received, giving positive educational value to the community. Amateur Radio deserves quality digital communications. OPV delivers excellent results and is well-documented.

There is still a long way to go. The physical layer work is currently focused on tuning the tracking loops in the demodulator. Once the physical layer is reliable, then the functions that integrate voice, data, keyboard, and channel control messages will be integrated into the PLUTO SDR build and tested. Application layer functions in HTML 5 have a head start, with some tests already completed. Parallel integration efforts from F5OEO and OpenCPI, both of which will expand the number of hardware platforms that OPV can use, are well underway.

Below, an example of bit-accurate simulation results for OPV, with signal names on the left and waveform values on the right.

Below, the spectrum of OPV, showing main lobe null-to-null span of 81 kHz, and signal sidelobes.
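For readers who want a feel for that spectrum, the Python sketch below generates an ideal MSK baseband signal and estimates its power spectral density with a Welch periodogram. It is an illustrative model rather than the PLUTO implementation; the bit rate is an assumption, chosen so that the theoretical MSK main lobe, which spans 1.5 times the bit rate null to null, comes out near the 81 kHz figure above.

# Illustrative MSK baseband model (not the PLUTO HDL implementation).
# For MSK the main lobe of the PSD spans 1.5 x bit rate null to null,
# so an assumed 54 kbps stream gives roughly 81 kHz.
import numpy as np
from scipy import signal

bit_rate = 54_000            # assumed bit rate, chosen to match an ~81 kHz main lobe
fs = 1_000_000               # simulation sample rate
sps = fs // bit_rate         # samples per bit (integer approximation)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 20_000)

# MSK as continuous-phase FSK with modulation index 0.5:
# each bit shifts the phase by +/- pi/2, spread linearly over the bit period.
phase_steps = np.repeat(2 * bits - 1, sps) * (np.pi / 2) / sps
phase = np.cumsum(phase_steps)
iq = np.exp(1j * phase)

f, psd = signal.welch(iq, fs=fs, nperseg=4096, return_onesided=False)
f, psd = np.fft.fftshift(f), np.fft.fftshift(psd)
print("Approximate null-to-null main lobe: 1.5 * bit_rate =", 1.5 * bit_rate, "Hz")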

Haifuraiya and Modem Module Update

The work on Haifuraiya has broader appeal than just as a HEO/GEO satellite design. The “modem module” approach, developed by Matthew NB0X, is summarized at left.

At left is the most recent high level block diagram.

Haifuraiya – Open Source HEO/GEO Amateur Satellite Update

The multirate channelizer has been implemented in Remote Labs West on the zc706/ADRV9009 station. At left is a summary of the channel plan for the satellite receiver.

Regulatory Work at ORI
Making Open Source Easier for Everyone

Past regulatory work at ORI can be found at
https://github.com/OpenResearchInstitute/documents/Regulatory

A special Halloween season workshop about exorcising the Haunted Band of 219-220 MHz was held by IEEE and ORI on 29 October 2024.

The Zoom meetup was well attended with the majority of RSVPs participating on the call. The meeting was recorded for those that couldn’t make it. The video recording of the background session, followed immediately by the workshop, can be found at https://youtu.be/wGX__qcFrT4

219 MHz is a secondary allocation for amateur radio. It has excellent channel characteristics, is open to fixed point-to-point digital links only, allows 100 kHz bandwidths, requires a minimum data rate of 56 kbps, and has had no activity for decades. It's time to take this band back!

Why is there no activity? We believe this band is inactive because of the notification requirements in FCC part 97 and part 80. We assert that these notification requirements, which pull ARRL into the loop from two different directions, are impossible for both amateurs and commercial licensees to comply with.

What are these notification requirements? If you as an amateur want to operate on 219 MHz, you have to determine whether there is an Automated Maritime Telecommunications System (AMTS) station within 80 km of your amateur station. If there is, you have to request permission from the AMTS station(s), at least 30 days in advance, in order to legally operate. If you are between 80 km and 640 km away, you have to notify the AMTS stations at least 30 days before activating the band.
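To make those distance thresholds concrete, here is a small Python sketch that classifies a proposed amateur station against a single AMTS station location using a great-circle distance. The coordinates are made up and the helper is hypothetical; it illustrates the 80 km and 640 km thresholds described above and is not a substitute for the rule text.

# Hypothetical helper illustrating the 80 km / 640 km thresholds described above.
# Coordinates are made up; this is not legal advice or the FCC rule text.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def requirement_219mhz(distance_km):
    if distance_km < 80:
        return "permission from the AMTS licensee required"
    if distance_km <= 640:
        return "notify the AMTS licensee at least 30 days before operating"
    return "no AMTS notification required"

d = km_between(32.7157, -117.1611, 34.0522, -118.2437)   # example station pair
print(f"{d:.0f} km: {requirement_219mhz(d)}")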

A publicly accessible database of amateur and AMTS stations on 219 MHz is supposed to be maintained by ARRL. This is a rule in Part 97 (for the amateur radio stations) and in Part 80 (for AMTS stations). This database does not appear to exist.

AMTS stations did not report their information to ARRL. Requests to ARRL for a copy of AMTS stations have not been answered. Requests for a copy of amateur radio stations that have notified the ARRL of intent to operate have also not been answered.

These requests were made by multiple amateur radio operators in the October – December 2023 time frame. This was before the cyberattack in 2024, which did sideline a lot of ARRL’s services and activities.

The 219 MHz band regulations need a serious overhaul. We believe a proposal for new rules for this band is a necessary step towards restoring amateur use of this band. We believe that rules can be proposed that will both promote modern and innovative digital activity and, at the same time, not endanger the primary licensees in any way. While the AMTS service no longer exists as originally envisioned, the AMTS licenses have been leased out to fundamentally different services, such as railroads. These primary licensees can share the band with amateur radio in, at the very worst, a mutually inoffensive way and, at the very best, a mutually beneficial manner. It's up to us to make this better future happen.

In order to make an effective proposal to the US FCC for updated rules for this band, there are things that we need to sort out. We need to know what we want and we need to know what the primary users want. We need to anticipate and answer objections to dropping the onerous notification requirements, and we need to clearly state the value that an amateur radio allocation on this band brings to education, public service, citizenship, and technical advancement of the radio arts.

The workshop on 29 October was the first strong step towards sorting out a draft of proposed rulemaking for restoring amateur radio service use of the 219 MHz band.

Discussion during the workshop resulted in an outline for a draft rulemaking proposal. ORI sincerely thanks everyone that participated during the event and through email correspondence.

Draft Outline

1) Demonstrate that the current rules are not working. State our case that the notification requirements are impossible to comply with and are the reason that there is no activity on this band.

2) Cite 60m as an example of successful similar activity.

3) Assert, providing evidence, that amateur radio operators already know how to share spectrum with other services. We already know how to not cause problems for the services that we share spectrum allocations with. We do not need additional notification requirements or to sign declarations to the ARRL in order to be able to successfully operate on an amateur radio band as a secondary user.

4) Show, from a ULS database perspective, the current utilization of 219-220 MHz. Amateur activity does not threaten primary licensees.

5) Show, to the best of our ability, the actual use of 219-220 MHz. This will require a call for participation for sensing stations, such as WebSDRs, amateur operators, receiver enthusiasts, and distributed receiver networks such as SatNOGS, to monitor 219-220 MHz for some period of time, in order to measure primary licensee activity on the band (see the sketch after this outline).

6) We assert that AMTS no longer exists. The current primary licensees are not coastal stations. The rules assuming coastal activity do not reflect the reality of this allocation as it exists today. Old rules are hampering efficient use of the spectrum.

7) Propose specific deletions of rules. What exactly should and could be struck down?

8) What are the anticipated effects and repercussions, both positive and negative, of deleting the notification requirements?

9) Is there anything else that could or should be changed? We discussed the 100 kHz bandwidth requirement and its origins in specific equipment available to amateurs at the time. Current FCC thinking is to avoid rules that hardcode equipment requirements. Deleting the 100 kHz bandwidth limit in favor of no bandwidth or symbol rate limit had broad support at the workshop.
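Item 5 of this outline calls for distributed monitoring of 219-220 MHz. The Python sketch below shows one very simple way a sensing station could summarize what it hears. The threshold, input format, and noise-floor estimate are all assumptions; a real campaign would need calibrated stations and an agreed reporting format.

# Hypothetical occupancy summary for a 219-220 MHz monitoring station.
# Input: a list of (timestamp_seconds, measured_power_dBm) samples from a
# receiver sweeping the band. Threshold and format are illustrative only.
from statistics import median

def occupancy_report(samples, threshold_db_above_noise=6.0):
    """Return the fraction of samples judged 'occupied' relative to the noise floor."""
    powers = [p for _, p in samples]
    noise_floor = median(powers)              # crude noise-floor estimate
    occupied = [p for p in powers if p > noise_floor + threshold_db_above_noise]
    return {
        "samples": len(powers),
        "noise_floor_dBm": round(noise_floor, 1),
        "occupancy_fraction": len(occupied) / len(powers) if powers else 0.0,
    }

# Example: a mostly quiet band with a few strong bursts
example = [(t, -120.0) for t in range(95)] + [(t, -80.0) for t in range(95, 100)]
print(occupancy_report(example))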

Thank you to all of you that support our regulatory work. It will take a lot of time and effort to draft proposed rules and to go through the proposal process. The FCC may or may not choose to entertain new rules for 219 MHz. If they do, then a public comment period will happen. If they do not, then the draft rules will still be of benefit, as they will record the point of view and consensus of digital mode amateur radio experimenters and enthusiasts.

Please join this effort at ORI by visiting https://openresearch.institute/getting-started

Sphere of Activity

July 12-13 2024 – ORI exhibited at HamCon:Zion in St. George, Utah, USA. We invited Deep Space Exploration Society (DSES) and QRZ.com columnist Dave Jensen to join our booth. We showcased their work alongside our technical and regulatory achievements. Total staff was three for this weekend event at a very nice conference center. Some of the travel expenses were reimbursed by ORI. This collaboration was entirely successful. Our CEO gave a talk about AI/ML in Amateur Radio at HamCon's forum, which can be seen at https://www.youtube.com/watch?v=ZgHUavf5Ddo

August 8-11 2024 – Our Open Source Digital Radio exhibit at DEFCON 32 in Las Vegas, Nevada USA was a big success. ORI hosted demonstrations from RFBitBanger, Ribbit, Opulent Voice, and our Regulatory team in the RF Village. A staff of four set up, staffed, and tore down the exhibit. Travel and lodging were funded by ORI. Our CEO gave an RF Village talk about Synthetic Aperture Radar and Amateur Radio, which can be seen at https://www.youtube.com/watch?v=fiR4woe4O5Q

August 29 2024 – ORI participated in the quarterly Federal Communications Commission Technological Advisory Council meeting and presentation in Washington, DC. ORI is represented on the FCC TAC by Michelle Thompson and Matthew Wishek. Open Source and Amateur Radio interests have been defended at the federal level. ORI contributions to the Dynamic Spectrum Sharing and 6G Working Groups can be seen in the video at https://www.fcc.gov/news-events/events/2024/08/technological-advisory-council-meeting-august-29-2024 If you are interested in supporting regulatory participation and defense, please join ORI, support ORI's work, and let us know how you want to help.

October 2-9 2024 – ORI organized a week-long event called Digital Update for Microwave (DUM) in Vancouver, British Columbia, Canada. There were three on-site participants and nine virtual participants. Housing was reimbursed by ORI. The focus of the workshop was Opulent Voice and Iterative Receiver Theory technical work. Due to time limitations, we did not address the 10 GHz antenna questions raised in our outreach before the event. Better 10 GHz mobile radio gear is what people are looking for, and this will get attention from ORI over the next quarter. DUM is an event that provides opportunities for people interested in digital communications on the amateur microwave frequencies. The contributions to ORI's GitHub repositories showed that DUM was well worth the effort. Should ORI have a DUM II?

Thank you to all who support our work! We certainly couldn’t do it without you.

Anshul Makkar, Director ORI
Frank Brickle, Director ORI
Keith Wheeler, Secretary ORI
Steve Conklin, CFO ORI
Michelle Thompson, CEO ORI

219 MHz FCC Rules Revision Workshop Announcement

All of us at ORI are looking forward to the “let’s take back 219 MHz for innovative and awesome digital operations” workshop coming up in a few days. (This is a US centric spectrum management issue.)

The response has been very strong. Thank you!

Yes, the workshop will be recorded. No, it will not be successful without participation. If you care about completely unused VHF/UHF spectrum in the US, and want to see educational, modern, non-commercial use of this band happen without unnecessary impediments and silly rules, then we need your input now.

Here's the link to RSVP (no, you do not have to be an IEEE member to use vTools, sign up, and attend):

https://events.vtools.ieee.org/m/439290

Even if you simply want a recording of the workshop, please register. Then you can get a notification.

If you would like to participate in “The Haunted Band” or any of the other 9 open source amateur radio projects at ORI, then it’s easy to join.

Please visit https://openresearch.institute/getting-started to get involved.

Inner Circle Newsletter: Summer 2024

A Sizzling Summer Summary prepared just for you!

Read on for highlights from all our technical and regulatory open source digital radio work. ORI’s work directly benefits amateur radio, provides educational and professional development opportunities for people all over the world, and puts ethics and good governance first.

Opulent Voice Flying High

Opulent Voice is an open source high bitrate digital voice (and data) protocol. It’s what we are using for our native digital uplink protocol for ORI’s broadband microwave digital satellite transponder project. Opulent Voice has excellent voice quality, putting it in a completely different category than low bitrate digital communications products such as D-Star, Yaesu System Fusion, and DMR.

Opulent Voice switches between high resolution voice and data without requiring the operator to switch to a separate packet mode. Opulent Voice also handles keyboard chat and digital file transmission. Seamless integration of different data types, using modern digital communication techniques, differentiates Opulent Voice from any other amateur radio protocol.

Opulent Voice will fly on the University of Puerto Rico’s RockSat-X launch on 13 August 2024. It’s been a very positive experience working with the students and faculty at the University.

An implementation on FPGA for the PLUTO SDR is well underway, with an active international team delivering quality results. This implementation will not only turn your PLUTO SDR into an Opulent Voice transceiver, but will also provide remote operation functionality.

Hear what Opulent Voice sounds like by following the links in an earlier update at https://www.openresearch.institute/2022/07/30/opulent-voice-digital-voice-and-data-protocol-update/

We’ve come quite a long way in less than two years! The FPGA implementation upgrades the modulation from 4-ary frequency shift keying to minimum shift keying, and increases forward error correction performance and flexibility.

HAMCON:ZION 2024 is This Week!

Please visit us at HAMCON:ZION 2024 this weekend, 12-13 July 2024 in St. George, Utah, USA.

The event website is https://www.hamconzion.com/

ORI will have a club booth at the event. We opened our space to QRZ.com (https://www.qrz.com/) and Deep Space Exploration Society (https://dses.science/). This combined exhibit is a one-stop shop for the best in community, technical, and citizen science amateur radio activity.

We have a volunteer presenting on Artificial Intelligence and Machine Learning in Amateur Radio. The talk opens with a brief summary of the history of our relationship with created intelligence and then explores case studies of the use of artificial intelligence and machine learning in amateur radio. The talk is at 1 p.m. on Friday in Entrada B.

Open Research Institute at DEFCON32

We will present an Open Source Showcase at DEFCON in the Radio Frequency Village 12-13 August 2024, with accessible exhibits and demonstrations. Here is the list of scheduled project demonstrations.

Regulatory Efforts: ORI works hard on behalf of open source digital radio and moves technology from proprietary and controlled to open and free in intelligent and mutually beneficial ways. Our work on ITAR, EAR, Debris Mitigation, AI/ML, and Synthetic Aperture Radar will be presented and explained. Find out more at https://github.com/OpenResearchInstitute/documents/tree/master/Regulatory

Ribbit: this open source communications protocol uses the highest performance error correction and modern techniques to turn any analog radio into a digital text terminal. No wires, no extra equipment. Learn how to use this communications system and get involved in building a truly innovative open source tactical radio service. Find out more at https://www.ribbitradio.org

Satellite: ORI has the world’s first and only open source HEO/GEO communications satellite project. All working parts of the transponder project will be demonstrated, from Opulent Voice to antenna designs.

The Dumbbell antenna: We have an HF antenna design based on a highly effective inductive loading technique first written about in 1958. Learn about this antenna and find out how to make your own. Repository can be found at https://github.com/OpenResearchInstitute/dumbbell

RFBitBanger: an HF QRP system and novel digital protocol called SCAMP. Kit information and updates will be available. Get your Batch 2 kit today at https://www.ebay.com/itm/364783754396

Radar: Our regulatory and technical work in synthetic aperture radar will be demonstrated. One of our volunteers will be giving a talk about open source synthetic aperture radar in the RF Village speakers track. Here is the abstract.

Synthetic Aperture Radar (SAR) is one of the most useful and interesting techniques in radar, providing high resolution radar satellite images from relatively small satellites. SAR is not limited by the time of day or by atmospheric conditions. It complements satellite photography and other remote sensing techniques, revealing activity on the Earth that would otherwise be hidden. How does the magic happen? This talk will explain the basics of SAR in an accessible and friendly way. That’s the good news.

The bad news? SAR is controlled by ITAR, the International Traffic in Arms Regulations, and is listed in the USML, the United States Munitions List. ITAR regulates the export of defense articles and services and is administered by the US State Department. This includes both products and services as well as technical data, such as catalogs of high resolution radar imagery. The articles and services regulated by ITAR are identified in the USML. If ITAR doesn't get you, then EAR just might. The Export Administration Regulations (EAR) are administered by the US Commerce Department, and items are listed on a Commerce Control List (CCL). Commercial products and services and dual-use items that are not subject to ITAR could be regulated by EAR. Even if you are free of ITAR and EAR, you may still be regulated by yet another agency, such as the FCC.

Regulation of SAR chills commercial activity, creating costly and time-consuming burdens. But why does any of this matter to signals hackers? Because technology has overtaken the rules, and devices used by enthusiasts, researchers, students, and hackers are increasingly likely to have enough capability to fall into export-controlled categories. The penalties are harsh. Fear of running afoul of ITAR is enough to stop a promising open source project in its tracks.

Is there a way forward? Yes. ITAR has a public domain carve out. Information that is published and that is generally accessible or available to the public is excluded from control as ITAR technical data. That’s great in theory, but how can we increase our confidence that we are interpreting these rules correctly? How can we use and build upon these rules, so that our community can learn and practice modern radio techniques with reduced fear and risk? Can we contribute towards regulatory relief when it comes to SAR? We will describe the process, report on the progress, and enumerate the challenges and roadblocks.

RFBitBanger Batch 2 Kits Available

Kits are available at our eBay store at this link https://www.ebay.com/itm/364783754396

Be a part of the future with a prototype Batch 2 kit build of the RFBitBanger, a low-power high-frequency digital radio by Dr. Daniel Marks KW4TI. Presented by Open Research Institute, this kit is designed to produce 4 watts of power and opens up a new digital protocol called SCAMP.

SCAMP is now available in FLDigi!

Source code and extensive instructions can be found at https://github.com/profdc9/fldigi

Your donation in exchange for this kit directly enables the further development of an innovative Class E amplifier based radio design. It has a display, button menu navigation, and keyboard connection for keyboard modes and keyboard-enabled navigation. This radio can be taken portable or used in a case. If you have a 3d printer, then Dr. Marks has a design ready for you to print in the repository linked below.

  • Built-in digital modes: CW, RTTY, SCAMP (FSK and OOK, multiple speeds)
  • Key jack supports straight keys and iambic paddles
  • Open Source hardware and firmware, Arduino UNO compatible https://github.com/profdc9/RFBitBanger
  • External sound-card FSK digital modes supported (including FT4/FT8)
  • Experimental SSB support
  • Serial port support (2400 baud) for send and receive in keyboard modes

SCAMP is a new digital protocol that enables keyboard-to-keyboard contacts with excellent connection performance. See Dr. Marks' presentation about RFBitBanger at QSO Today Academy in September 2023 to learn more about SCAMP and the RFBitBanger project. Link below:

All surface mount parts on the main board are pre-installed at the factory. All the through-hole parts you need to complete the radio are provided for you to solder yourself.

Builder’s notes and photos of all the components to help you identify and install them can be found here:

https://github.com/OpenResearchInstitute/RFBitBanger-kit/tree/main/batch2

If you don’t know how to wind toroids or solder surface mount capacitors, this is an excellent kit to learn on. There are just six toroids on the main board, and two on each band pass filter board. You can build just one band pass filter board and operate on a single band, or you can build an assortment. We provide 12 filter boards, enough toroids to build any 9 filters, and a supply of capacitors that will let you build those 9 filters for 9 different HF ham bands. These capacitors are size 1206, which is the largest common size for SMT capacitors and the easiest to solder manually. All you’ll need is a pair of tweezers and your regular soldering iron and solder. We provide detailed instructions on winding the toroids and soldering the capacitors. You get spare filter boards to experiment with.

Friendly Support is provided through a dedicated Open Research Institute Slack channel.

Instructions on how to join this community are here:

https://www.facebook.com/openresearchinstitute https://www.instagram.com/open_research_institute/
https://x.com/OpenResearchIns

Standards Groups and Advisory Work

We have been hard at work defending #opensource and #hamradio in multiple ways by actively supporting standards committees and advisory groups.

We are a member of the United States Federal Communications Commission Technological Advisory Council, serving on the Advanced Spectrum Sharing and 6G Working Groups.

We serve on the IEEE P1954 working group (Standard for Self-Organizing Spectrum-Agile Unmanned Aerial Vehicles Communications).

We participated in several notices for proposed rulemaking at multiple government agencies over the past year, and expect this work to continue.

We have assisted in several successful experimental license applications for amateur radio satellite communications projects.

ORI is doing the work that helps the future of open source digital radio.

Thank you to all who support our work.

Comments from ORI to the FCC on Advancing Understanding of Non-Federal Spectrum Usage

The comments as filed can be found here:

https://www.fcc.gov/ecfs/document/1009106258899/1

Text of the comments is below.

Comments of
Open Research Institute, Inc.

#1873 3525 Del Mar Heights Road
San Diego, CA, 92130
United States of America
9 October 2023

Open Research Institute (ORI) is a non-profit research and development organization devoted to Open Source digital radio technology.

ORI’s mission is to provide practical open source wireless communications solutions for terrestrial and space applications. ORI provides significant workforce development opportunities to the volunteers engaging in Open Source technical work.

Open Source means that everything required to reproduce a software, hardware, or firmware design is freely available to the general public. An Open Source design must have free redistribution, allow modifications and derived works, and be non-discriminatory against persons, groups, or fields of endeavor. Open Source work cannot be specific to a particular product, cannot restrict other software or hardware, and must be technology-neutral.

Open Source is vital to the United States’ economic competitiveness in telecommunications. The Internet runs on Linux, an Open Source computer operating system, as 96.3% of the top one million web servers run on Linux, and the vast majority of the underlying networking infrastructure uses either Linux or other Open Source operating systems and libraries.

The telecommunications industry has historically been heavily reliant on proprietary software and hardware. An increase in the adoption of Open Source, from OpenRAN to Nephio, has enabled telecommunications companies to more quickly and efficiently meet market demands. There is a broad trend towards the continuing increase in adoption of Open Source designs in order to avoid silos of wheel reinvention and to promote effective interoperability.

The Open Source community can address this inquiry.

Introduction

ORI agrees that spectrum usage information is generally nonpublic and infrequently available. In the spirit of Open Source and Open Access, we believe that publicly available high-quality data about spectrum usage is in the national interest.

ORI agrees that the need for spectrum usage information will only continue to grow. In order to produce useful models, artificial intelligence and machine learning require enough data to complete the training process. Without enough data, a machine learning model can suffer from a condition called overfitting. When this happens, the model becomes a very expensive copy machine, presenting the training data as output, regardless of the input. The lack of sufficiently large high quality radiofrequency data sets is widely acknowledged as an impediment to research and development for radiofrequency machine learning.
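As a toy illustration of that "expensive copy machine" failure mode (illustrative only, not a radiofrequency model), the short Python example below fits a high-capacity model to a handful of points: it reproduces the training data almost perfectly while doing much worse on new inputs.

# Toy overfitting demo: a degree-9 polynomial fit to 10 noisy samples of a sine
# memorizes the training points but generalizes poorly to unseen points.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=9)        # as many parameters as points

x_test = np.linspace(0.05, 0.95, 50)                # new, unseen inputs
y_test = np.sin(2 * np.pi * x_test)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"training error: {train_err:.2e}   test error: {test_err:.2e}")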

ORI agrees that the development of new and innovative spectrum sharing techniques, allowing increased co-existence among users and services, will improve spectrum management. Spectrum usage information is required in order to develop new spectrum sharing techniques. This is true whether or not machine learning is used either in the process or in the product. In other words, even if only ordinary humans had the task of improving spectrum sharing over and above what we have today, those humans would still need spectrum usage information to achieve their goal.

Without good spectrum usage information, neither machine learning nor human architects will be able to confidently produce quality results. The most common outcome to making the best guess in the absence of spectrum usage information is highly conservative spectrum sharing arrangements that do not fully utilize spectrum, overly restrict licensees, and prevent innovation in the field.

Central Question

We want a more sophisticated knowledge of non-Federal spectrum usage. The central question of this inquiry is: how can we take advantage of modern capabilities for gaining this knowledge in a cost-effective, accurate, scalable, and actionable manner?

In addition to the other spectrum monitoring efforts listed in this inquiry, we can start with the concepts established by the Spectrum Monitoring Pilot Program from NTIA/NIST.

This program measured spectrum occupancy with standardized distributed receivers reporting to Measured Spectrum Occupancy Databases. These databases publish the metadata of their measurements so that measured data can be retrieved over https:// connections. The concepts of federation are used in order to avoid inefficient and expensive replication of measured data.

The Spectrum Monitoring Pilot Program had two classes of receivers. One was dedicated to radar and the other to communications. The communications receiver approach is an order of magnitude less expensive than the radar receiver approach.

By restricting the hardware package to the less expensive category of communications receiver, using modern software-defined radios with Open Source software, incorporating existing online radios ("WebSDRs") alongside the standardized stations, and being deliberate and innovative about incentives, we assert that a low-resolution receiver network that can produce useful spectrum usage information is achievable.

A Low Resolution Receiver Network – Why?

Why low resolution? Because a broad heatmap of spectrum usage, even at low resolution, is valuable in a different way for spectrum management purposes than a small amount of high resolution data from one service in one geographical area.

The current situation is one of essentially no real-time spectrum usage information. Even if we simply had noise floor measurements across the bands across the country, and even if those measurements were gathered from stations of varying quality, we would have an immense improvement in our capacity to intelligently manage our spectrum over having no measurements at all.
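As a point of reference for what such noise floor measurements mean, the thermal noise floor for a given measurement bandwidth can be computed directly. The short Python example below uses the standard 290 K reference temperature; it is a sanity check, not a description of any particular monitoring station.

# Thermal noise floor N = k * T * B, expressed in dBm, at a 290 K reference
# temperature. Useful as a sanity check for reported noise measurements.
import math

k = 1.380649e-23      # Boltzmann constant, J/K
T = 290.0             # reference temperature, K

def noise_floor_dbm(bandwidth_hz):
    return 10 * math.log10(k * T * bandwidth_hz * 1000)   # * 1000 converts W to mW

for bw in (1, 2500, 100_000):
    print(f"{bw:>7} Hz: {noise_floor_dbm(bw):6.1f} dBm")
# 1 Hz is about -174 dBm; 2.5 kHz (an FT8-style reference bandwidth) is about -140 dBm.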

Towards a Weather Map of the National Noise Floor

Noise floor is the measure of the noise power per bandwidth. Getting a snapshot of something like a National Noise Floor, comparable to a national radar weather map, needs a diversity of radio receivers. We need to be able to measure or estimate power spectral density from these receivers. Services with intermittent use must be measured often enough to produce minimally accurate data, or those services must be covered with alternate techniques.

As an example of where alternate techniques can work, FT8 mode contacts on the 17 meter Amateur Radio Service band can be obtained from https://gridtracker.org/. These reports, which are text based, can be used to estimate the power spectral density at the radios using Grid Tracker. Reports from radios can be used to create a map of real-time spectrum usage without directly measuring the spectrum. These are estimates, but they are based on measured signal-to-noise reports that all the radios are giving each other, in each 15-second long exchange.

We can compare this type of observation to eavesdropping on roomfuls of people talking to each other, and writing down how many conversations (and attempted conversations) there were. Multiple eavesdroppers can spread through the house, combine their notes, and show where the conversational density was highest.

What does this look like on the 17 meter Amateur Radio Service allocation with a typical FT8 station?

A stream of which stations contacted other stations is reported from a single radio. The stream consists of all the contacts that the radio has heard from that location. The radio is not directly contacting all of these other stations, but it can hear them all and tracks who is trying to talk to whom and when. Any radio on the band can act like the eavesdropper above.

Open Source software (called WSJT-X) is controlling the radio and demodulating and decoding all received transmissions across the entire sub-band. WSJT-X does include a spectrum waterfall display, which could be used to obtain power spectral density of all the simultaneous transmissions, but we do not have to do this. We instead use another commonly installed Open Source software program (called Grid Tracker) which takes the text output of WSJT-X and provides a text list of active stations and their reported signal power. This list can produce a calculated estimate of the power spectral density in the band. It's less math intensive to work with text based signal reports for fixed formal signals like FT8 than it is to use a spectrum analyzer waterfall, or deal with the IQ recordings of a radiometer.
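A rough sketch of that calculation is below. The input records are hypothetical (real data would come from WSJT-X and Grid Tracker output in their own formats), but the idea is the same: FT8 signal reports are signal-to-noise ratios referenced to a 2.5 kHz bandwidth, so with an assumed local noise density they can be converted into coarse estimates of received signal power.

# Rough, illustrative conversion of FT8-style SNR reports into estimated
# received power. Input records are hypothetical (callsign, grid, snr_db);
# real data would come from WSJT-X / Grid Tracker output.
import math

FT8_REF_BW_HZ = 2500                      # FT8 SNR reports are referenced to 2.5 kHz
NOISE_DENSITY_DBM_PER_HZ = -174 + 15      # assumed site noise density, 15 dB above thermal

def estimated_power_dbm(snr_db):
    noise_in_ref_bw = NOISE_DENSITY_DBM_PER_HZ + 10 * math.log10(FT8_REF_BW_HZ)
    return noise_in_ref_bw + snr_db

reports = [("K1ABC", "FN42", -12), ("W7XYZ", "CN87", 3), ("N0CAL", "DM79", -20)]
for call, grid, snr in reports:
    print(f"{call} ({grid}): SNR {snr:+d} dB -> roughly {estimated_power_dbm(snr):.0f} dBm")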

Adding more radios (more eavesdroppers) improves coverage.

Does this exact network exist today? It almost does. Instances of Grid Tracker, very commonly installed alongside WSJT-X and running whenever the station is active, already report all of this information, but as of today they do not have coordinated reporting. However, since the software is Open Source, adding a function to opt in, do some math, and donate the data to a server to produce a National Noise Floor snapshot, for this particular mode, is achievable.

This example outlines a proof of concept for a very tiny slice of the HF spectrum at 18.100 MHz, but it shows the advantage of using existing Open Source software, existing radios, and existing communities of licensed operators. Incentives for licensees to participate could range from the simple opportunity to defend their licensed spectrum to recognition awards for donating data consistently over time.

Achieving Aggregated Wireless Sensing

How can we broaden this concept? First, leverage existing online radio receivers, such as broadband WebSDRs. See http://www.websdr.org/ for deployed examples. Power spectral density measurements or estimates can be obtained from receivers already on the air and already connected to the Internet.

An anticipated and understandable objection is that the multitude of WebSDRs across the country are not calibrated stations and they are not standardized. The owner could take them down at any time. A new one might be put on the air tomorrow. The answer to these objections is that the aggregation of these observations, even if the observations are of varying quality, provides immense value in efforts to improve spectrum management because these receivers can produce real-time spectral usage information with no additional radio hardware investment.

We should, of course, not stop there. We must commit to a both/and and not an either/or approach to answer the central question of this inquiry.

Second, deploy inexpensive, standardized, and calibrated receivers to libraries, schools, post offices, and any other institution or organization that can be either incentivized or mandated.

For a model of an Open Source standardized distributed receiver system producing real-world practical radio results, please refer to the SatNOGS project at https://satnogs.org/

What are some standardized station examples that we could deploy in the United States to achieve the goals of this inquiry?

An Open Source PLUTO SDR running Open Source Maia software creates an inexpensive spectrum analyzer with a built-in webserver. The addition of the federated reporting functions is possible because the source code for Maia can be modified to include these additional functions. Maia can be found at https://maia-sdr.org/. Documentation for the standard PLUTO firmware (which is largely replaced by the Maia firmware) can be found at

https://github.com/analogdevicesinc/plutosdr-fw

and documentation for the PLUTO hardware can be found at

https://wiki.analog.com/university/tools/pluto/hackers

A PLUTO/Maia package can cover frequencies from 70 MHz to 6 GHz. It would require one or more antennas (depending on how many bands are to be monitored by that station), a power supply, a weatherproof enclosure, mechanical attachments, and cables. A proof of concept would be expected to cost less than the Spectrum Monitoring Pilot Program communications receiving station proof of concept, which came in at $6000 according to "An Overview of the NTIA/NIST Spectrum Monitoring Pilot Program".

This can be read at

https://its.ntia.gov/umbraco/surface/download/publication?reportNumber=CottonSpectMonIwssSubmitted.pdf

A second and even less expensive example of a standardized station would be an RTL-SDR

https://www.rtl-sdr.com/about-rtl-sdr/

and a Raspberry Pi

https://www.raspberrypi.com/

running Linux. This kit can use a large number of Open Source software-defined radio packages. It can be tuned to any of the bands in its operating range of 2.4 MHz to 2 GHz. For a sweep of the entire operating range, multiple antennas and combiners would be necessary, along with some additional equipment and software.
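As one example of how such a kit could report a noise-floor estimate, the sketch below uses the pyrtlsdr and SciPy packages. The center frequency, sample rate, and capture length are placeholders, and the result is uncalibrated; calibration against a known source would still be needed for a standardized station.

# Sketch of a noise-floor estimate from an RTL-SDR on a Raspberry Pi.
# Requires the pyrtlsdr and scipy packages; values below are placeholders
# and the result is relative (uncalibrated dB, not absolute dBm).
import numpy as np
from scipy import signal
from rtlsdr import RtlSdr

fs = 2.4e6                       # sample rate, Hz
sdr = RtlSdr()
sdr.sample_rate = fs
sdr.center_freq = 219.5e6        # middle of the 219-220 MHz band, as an example
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)
sdr.close()

f, psd = signal.welch(samples, fs=fs, nperseg=4096, return_onesided=False)
noise_floor = np.median(psd)     # median is robust to a few strong signals
print("Relative noise floor estimate:", 10 * np.log10(noise_floor), "dB (uncalibrated)")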

The WebSDRs combined with the standard package stations form a heterogeneous distributed receiver network. In aggregate, with enough stations, the resulting network can provide useful real-time reports of spectrum usage information. A centralized visualization of actual spectrum usage, even if not real-time, would be very informative. If a picture is worth a thousand words, then a video is worth a thousand pictures. People seeing gaps in the data may be motivated to put up or sponsor a station to fill in the gaps, similar to the effect we see with personal weather stations that measure temperature, wind speed, and other meteorological aspects.

TAC Working Group as Asset

The Dynamic Spectrum Allocation Working Group of the Technological Advisory Council of the Federal Communications Commission could provide advisory feedback on real-time spectral usage information obtained through opportunistic and inexpensive sensor networks, invite speakers to present about the topic, and give specific recommendations. Leveraging an existing Working Group with familiarity in this particular topic would be an efficient use of already-available expert advice.

Conclusion

A National Noise Floor heatmap, even if low resolution, is achievable and it is valuable. Any reasonable real-time data, whether obtained opportunistically or purposefully, is an enormous step forward compared to no data at all.

There are drawbacks to low resolution data. The limits of the resolution must be acknowledged. The measurements have to at least be reasonable, meaning that a snapshot of noise floor includes enough information so that lower power signals are not completely overlooked or missed. For each frequency allocation measured, a subject matter expert in that allocation would be expected to compare the real-time spectrum usage information to a model of expected spectrum usage. The difference between this theoretical or calculated model and the real-time spectrum usage information is valuable and informative in many ways. A subject matter expert would be able to explain observed differences, explain whether and why any difference was sufficient reason to make spectrum management adjustments, and provide feedback for improved spectrum sensing. There is no one-size-fits-all solution for either the measurement stations involved or the allocations they are measuring.

The architecture for gaining visibility of spectral usage has been previously piloted in the Spectrum Monitoring Pilot Program. This and other excellent prior work can be adapted, and citizens can be incentivized to participate in order to scale up the sensor network. Incentives range from the simple fact of being able to individually contribute directly towards the defense of a spectral allocation, to awards or recognition for the technical achievement of constructing and calibrating a station to a published standard, to a scoreboard of who provided the most consistent reports of real-time spectral information over specific lengths of time.

There is a large amount of Open Source software and hardware that can be used to reduce costs and reward high quality collaborative and cooperative work. A “lower-tech, inexpensive, diverse, and start now” instead of a “high-tech, expensive, maybe sometime in the future” approach is cost-effective, accurate (enough), scalable, and actionable.

Respectfully,
Michelle Thompson
CEO ORI

AI vs. Human: ChatGPT vs. ORI Open Source Designers

Greetings all,

You know that ORI has been active in a variety of regulatory work for several years now. One of those areas has been in AI/ML and the effect on communications. We served on the US FCC TAC in 2022, have presented and written on the subject, and are working towards an AI/ML Handbook for Amateur Radio.

You may have seen news about OpenAI GPT, or ChatGPT.

https://openai.com/api/

There are a lot of people testing this chatbot for reverse engineering and for writing software. There have been some successes in cases where the model would have had access to code or documentation about protocols, observations, or technical behavior during training, and where the requests match up well with the model.

For example, a friend asked the chatbot to create a Python script to access the webcam on their computer, and they shared a still image from the resulting feed. Another example from another friend resulted in better reverse engineering results than they had gotten with either IDA or Ghidra over many weeks.

So, how does ChatGPT handle VHDL?

We have something specific to compare against, since we just completed a COBS decoder in VHDL for our open source HEO/GEO transponder work for #Haifuraiya.

Let’s start off simple. Here’s the first dialogue with the chatbot, and the results.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder in VHDL

Sure, here’s an example of a COBS decoder in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity COBS_decoder is
  Port (
    input_data : in std_logic_vector;
    decoded_data : out std_logic_vector
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in input_data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if input_data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      decoded_data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;
end Behavioral;

This code decodes the input data using the COBS (Consistent Overhead Byte Stuffing) algorithm, which involves replacing runs of zeros in the input data with a single byte that indicates the length of the zero run. For example, the input ‘101001000’ would be decoded as ‘103101’.

This decoder iterates over the input data and maintains a variable zero_run_length that is incremented for each zero in the input data. When a non-zero value is encountered, the length of the zero run is written to the output data and the zero_run_length variable is reset.

I hope this helps! Let me know if you have any other questions.”

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knew what COBS meant. It produced code, with comments, and a brief “design document”, with examples.

The code produced is not licensed or cited.

Our open source HEO transponder project uses AXI-based blocks in the FPGA. So, let’s add AXI to the request.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder with AXI interface in VHDL

Sure, here’s an example of a COBS decoder with an AXI interface in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- AXI interface definitions
library unisim;
use unisim.vcomponents.all;

entity COBS_decoder is
  Port (
    AXI_IN  : in  AXI4Stream;
    AXI_OUT : out AXI4Stream
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in AXI_IN.data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if AXI_IN.data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      AXI_OUT.data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;

  -- Set the valid and last flags in the output AXI stream
  AXI_OUT.valid <= '1';
  AXI_OUT.last  <= '1';
end Behavioral;

This code is similar to the previous example, but it uses an AXI (Advanced eXtensible Interface) interface for the input and output data. The AXI interface is a standard interface used in many FPGA designs, and it provides a way to stream data into and out of the decoder.

In this code, the input data is read from the AXI_IN.data port, and the decoded output is written to the AXI_OUT.data port. The valid and last flags are also set in the output AXI stream to indicate that the data is valid and that it is the last data in the stream.

I hope this helps! Let me know if you have any other questions.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knows about the previous request and updated the design. No license.

Here’s a review of the COBS implementation from Paul Williamson KB5MU.

“I don’t know what this is, but it isn’t a COBS decoder with AXI interfaces.

It does seem at first glance to be some sort of thing with AXI interfaces, but it does not implement an AXI interface at all. It just relies on a (possibly imaginary) library type called AXI4Stream that somehow magically does AXI interfaces, in a way that’s both implausibly smart and inexplicably dumb. I think the classic phrase “not even wrong” applies here.

The AXI4Stream object apparently has a member .data that is an array of bytes, which might make sense if the entire AXI transfer is being treated as a single entity here. The code iterates through the input .data using the ’range attribute, as if the AXI4Stream object has a defined fixed transfer length.

Which it can’t have, if it’s a general purpose AXI stream implementation.

So, if this was ever anything real, it must have been a special purpose AXI implementation with a fixed transfer length. Assuming that the AXI4Stream .data actually works that way, the entity does
do some sort of transformation on the input data to create the output data.

We asked for that to be a COBS decoder, but the function that it actually implements bears only the slightest resemblance to a COBS decoder. What it actually does is output a stream of words that are the lengths of the runs of zeroes in the input data. Nonzero data does not get passed through at all.

Instead, a run of nonzero bytes gets translated into a run of zeroes, as if two adjacent nonzero bytes meant nothing but a zero-length run of zeroes.

Here’s an example of what this code seems to do:

  Input  .data:  1 2 3 4 0 0 0 0 1 2 3 4
  Output .data:  0 0 0 0         4 0 0 0

That’s certainly not a COBS decoder, or even anything useful at all.

Note that the output is variable length, but never longer than the input.
We had to assume that the AXI4Stream object had a fixed .data size to get this far, but now it has to have a variable data size for the output.

Supposing we had successfully suspended disbelief up to here, we now examine the final two lines of code in the Behavioral architecture. It sets the .valid and .last members of the AXI_OUT port to ‘1’.

That makes absolutely no sense if it’s supposed to encapsulate an entire AXI stream transfer. Those signals only make sense if we’re treating one word at a time, as you’d expect to see in a normal, general-purpose AXI stream object.

The extremely verbose commenting style in this code suggests that the original source material was from an introductory tutorial. Maybe the AI has been tuned to favor heavily commented code, and ends up picking up partially completed junk from the middle of tutorials?
Anyhow, this is not an example of automated programming in response to a high-level query. This is gibberish that succeeds only in having a superficial resemblance to the code requested.”

So, does this work?

Let’s try it out in the lab!

The first thing we notice is that there is an obvious syntax error with a missing semicolon. Aside from that, there is no AXI4Stream port type. It’s not recognized. Vivado appears to be disappointed in this file, and wants little to do with this code. This is where we should stop, but I spent some time looking to see if the ports were at least close to being declared correctly.

The model appears to have taken some custom or maybe enumerated type from somewhere, and come up with a custom AXI4Stream type. This is not recognized by the unisim library right off the bat, and that’s the sort of thing that automated code writing needs to get right.
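For comparison, here is what a correct COBS decode looks like, sketched in Python rather than VHDL. In COBS, an encoded frame contains no zero bytes; each block begins with a code byte that says how many bytes follow before the next (removed) zero. This reference sketch is only for illustration and is unrelated to the ORI implementation discussed next.

# Minimal reference COBS decoder (framing bytes only, no trailing delimiter).
# Each code byte C (1..255) is followed by C-1 data bytes; a zero is implied
# after the block unless C == 255 or the frame ends.
def cobs_decode(encoded):
    out = bytearray()
    i = 0
    while i < len(encoded):
        code = encoded[i]
        if code == 0:
            raise ValueError("zero byte is not allowed inside a COBS frame")
        block = encoded[i + 1 : i + code]
        if len(block) != code - 1:
            raise ValueError("truncated COBS frame")
        out += block
        i += code
        if code < 0xFF and i < len(encoded):
            out.append(0)
    return bytes(out)

# 0x11 0x22 0x00 0x33 encodes to 0x03 0x11 0x22 0x02 0x33
assert cobs_decode(bytes([0x03, 0x11, 0x22, 0x02, 0x33])) == bytes([0x11, 0x22, 0x00, 0x33])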

Let’s look at the code written by humans from ORI. The decoder can be found here:

https://github.com/Abraxas3d/fpga_cores/blob/master/src/decoder.vhd

-=-=-=-=-=-=-=-=-=-=-=-=-=-

-- Company: Open Research Institute, Inc.
-- Engineer: Skunkwrx, Abraxas3d
-- Design Name: COBS protocol decoder
-- Module Name: decoder - Behavioral
-- Project Name: Phase 4 "Haifuraiya"
-- Target Devices: 7000 Zynq
-- Tool Versions: 2021.1
-- Description: COBS protocol decoder.

-- Additional Comments: This work is Open Source and licensed using CERN OHL v2.0

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

use IEEE.NUMERIC_STD.ALL;

--Entity Declaration
entity decoder is
    Port ( rst      : in STD_LOGIC;
           clk      : in STD_LOGIC;
           s_tdata  : in STD_LOGIC_VECTOR (7 downto 0);
           s_tlast  : in STD_LOGIC;
           s_tvalid : in STD_LOGIC;
           s_tready : out STD_LOGIC;
           m_tdata  : out STD_LOGIC_VECTOR (7 downto 0);
           m_tlast  : out STD_LOGIC;
           m_tvalid : out STD_LOGIC;
           m_tready : in STD_LOGIC);
end decoder;

--Architecture
architecture Behavioral of decoder is

-- internal copy of s_tdata
    signal input_data          : STD_LOGIC_VECTOR (7 downto 0);

    -- s_tdata delayed by one clk where s_tvalid is high
    signal input_data_d        : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of s_tlast
    -- not used for anything
    -- We create our own m_tlast from the COBS framing instead
    signal s_tlast_i           : STD_LOGIC;

    -- internal version of input handshaking signal s_tvalid
    signal s_tvalid_i          : STD_LOGIC;

    -- s_tvalid delayed by one clk
    -- thus, a version of s_tvalid aligned with input_data_d
    signal s_tvalid_i_d        : STD_LOGIC;

    -- s_tvalid delayed by two clks
    -- thus, a version of s_tvalid aligned with output_data
    signal s_tvalid_i_d_d      : STD_LOGIC;

    -- countdown of bytes in this sequence
    -- loads from input_data_d when it represents the frame count
    --   as determined by signal counter_load
    -- contains the count while m_tdata contains the implied 0 at
    --   the end of a sequence, or the next count that replaced it
    -- counts down thereafter, to contain 1 when the last non-zero
    --   byte of the sequence is on m_tdata
    -- allowed to count down to 0, but not beyond
    signal count               : STD_LOGIC_VECTOR (7 downto 0);

-- enable to load count from input_data_d on this clk edge
-- two cases detected:
--    * first valid non-zero byte after a frame separator
--    * first valid byte after count is exhausted
-- allowed to be high for multiple cycles
    signal counter_load        : STD_LOGIC;

    -- counter_load delayed by one clk where s_tvalid is high
    -- used to identify the first valid data byte of any sequence,
    --   for purposes of computing m_tvalid (via pre_tvalid)
    signal counter_load_d      : STD_LOGIC;

    -- detection of a valid frame separator (zero) byte in input_data_d
    signal frame_sep           : STD_LOGIC;

    -- frame_sep delayed by one clk where s_tvalid is high
    -- used to compute counter_load
    -- used to compute rising edge of pre_tvalid
    signal frame_sep_d         : STD_LOGIC;

    -- frame_sep_d delayed by an additional clk (not depending on s_tvalid)
    -- used to find the first non-zero byte of the new frame
    signal frame_sep_d_d       : STD_LOGIC;

    -- move the frame_sep signal that occurred during m_tready low
    -- out to the first cycle when m_tready is high again
    signal use_saved_frame_sep : STD_LOGIC;

    -- flag to remember that the frame count for this sequence was 255,
    --   to handle the special case that such a sequence does not have
    --   an implied zero byte at the end.
    -- set when loading count with 255
    -- cleared when the counter is reloaded with anything else
    signal case_255            : STD_LOGIC;

    -- internal version of m_tdata output
    signal output_data         : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of m_tlast output
    -- high when the last byte of a frame is valid on m_tdata
    signal m_tlast_i           : STD_LOGIC;

    -- delayed versions of m_tlast
    signal m_tlast_i_d         : STD_LOGIC;
    signal m_tlast_i_d_d       : STD_LOGIC;

    -- intermediate result for m_tvalid.
    -- high across all data bytes of each sequence on m_tdata
    -- does not go low for bytes on m_tdata corresponding to
    --   bytes invalidated by s_tvalid.
    signal pre_tvalid          : STD_LOGIC;

    -- internal version of m_tvalid output.
    -- pre_tvalid with periods of low s_tvalid_d_d punched out
    signal m_tvalid_i          : STD_LOGIC;

    -- internal version of m_tready input
    -- also the internal version of s_tready output
    -- passes through m_tready to s_tready with no clk delays
    signal m_tready_i          : STD_LOGIC;

    -- constant byte value 0xFF, for comparison purposes
    signal all_ones            : STD_LOGIC_VECTOR(input_data'range) := (others => '1');

    -- constant byte value 0x00, for comparison purposes
    signal all_zeros           : STD_LOGIC_VECTOR(input_data'range) := (others => '0');

begin

    -- asynchronous assignments

    frame_sep <= '1' when input_data_d = all_zeros and s_tvalid_i_d = '1'
                else '0';

    m_tlast_i <= '1' when ((frame_sep = '1' and m_tvalid_i = '1' and m_tready = '1'))
                else '0';

    counter_load <= '1' when (input_data_d /= all_zeros and frame_sep_d = '1' and s_tvalid_i_d = '1')   -- start of frame
                          or (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '1')   -- start of next sequence in frame
                else '0';

    m_tvalid_i <= '1' when ((pre_tvalid = '1' and s_tvalid_i_d_d = '1'    -- usual case, if input_data was valid
                        and not (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '0')) -- defer last byte; might be m_tlast
                      or (pre_tvalid = '1' and to_integer(unsigned(count)) = 1
                        and s_tvalid_i_d = '1' and s_tvalid_i_d_d = '0')) -- pick up that deferred last byte
                else '0';
               
   
    s_tready <= m_tready_i;
    m_tdata <= output_data;
    input_data <= s_tdata;
    s_tvalid_i <= s_tvalid;
    m_tready_i <= m_tready;
    m_tvalid <= m_tvalid_i;
    m_tlast <= m_tlast_i;
   
-- processes

    set_case_255 : process (rst, clk)
    begin
        if rst = '1' then
            case_255 <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1' and input_data_d = all_ones then
                case_255 <= '1';
            elsif counter_load = '1' and input_data_d /= all_ones then
                case_255 <= '0';
            end if;
        end if;
    end process set_case_255;

   
    delay_s_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            s_tvalid_i_d <= '0';
            s_tvalid_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            s_tvalid_i_d <= s_tvalid_i;
            s_tvalid_i_d_d <= s_tvalid_i_d;
        end if;
    end process delay_s_tvalid;
   
   
   
    create_pre_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            counter_load_d <= '0';
            pre_tvalid <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i_d = '1' then
                counter_load_d <= counter_load;
                if (frame_sep_d_d = '1' and frame_sep_d = '0')            -- normal last byte of frame
                or (counter_load_d = '1' and frame_sep_d = '0')           -- normal first byte of a sequence
                then
                    pre_tvalid <= '1';
                end if;
            end if;
            if frame_sep = '1'
            then
                pre_tvalid <= '0';
            end if;
            if counter_load = '1' and case_255 = '1' then
                pre_tvalid <= '0';
            end if;
        end if;
    end process create_pre_tvalid;
     
     

    delay_m_tlast_i : process (rst, clk)
    begin
        if rst = '1' then
            m_tlast_i_d <= '0';
            m_tlast_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            m_tlast_i_d <= m_tlast_i;
            m_tlast_i_d_d <= m_tlast_i_d;
        end if;
    end process delay_m_tlast_i;

    set_counter : process (rst,clk)
    begin
        if rst = '1' then
            count <= (others => '0');
            frame_sep_d <= '0';
            frame_sep_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            frame_sep_d_d <= frame_sep_d;
            if s_tvalid_i_d = '1' then
                frame_sep_d <= frame_sep;
                if counter_load = '1' then
                    count <= input_data_d;
                elsif count /= all_zeros
                then
                    count <= STD_LOGIC_VECTOR(unsigned(count) - 1);
                end if;
            end if;
        end if;
    end process set_counter;

    create_output : process (rst, clk)
    begin
        if rst = '1' then
            output_data <= (others => '0');
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1'
            then
                output_data <= all_zeros;
            elsif s_tvalid_i_d = '1' then
                output_data <= input_data_d;
            end if;
        end if;
    end process create_output;
   
 
   
    selective_delay_of_input_data : process (rst,clk)
    begin
        if rst = '1' then
            input_data_d <= all_zeros;
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i = '1' then
                input_data_d <= input_data;
            end if;
        end if;
    end process selective_delay_of_input_data;
   

end Behavioral;

-=-=-=-=-=-=-=-=-=-=-=-=-=-
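To tie the listing above to the byte level, here is a worked example of our own (not taken from the design document): if the encoded stream 03 01 02 03 03 04 00 arrives on s_tdata, the decoder loads count from each count byte, passes the data bytes 01 02 through on m_tdata, substitutes the implied zero when a count below 255 expires, passes 03 04 through, and raises m_tlast on the final data byte when the trailing 00 frame separator shows up in input_data_d. The decoded frame on m_tdata is 01 02 00 03 04.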

This codebase includes not just the source code, but also a testbench file, a design document, timing diagrams, a Python notebook that creates test vectors, and Python code that compares the testbench's results against those vectors.
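To give a flavor of what the test-vector side involves, here is a rough sketch of a reference COBS encoder in Python. This is our illustration only, assuming standard COBS framing; the actual notebook in the repository is more thorough:

  import random

  def cobs_encode(data: bytes) -> bytes:
      """Reference COBS encoder: split on zeroes, prefix each block with a count byte."""
      out = bytearray()
      block = bytearray()
      for byte in data:
          if byte == 0:
              out.append(len(block) + 1)      # count byte covers itself plus the data bytes
              out += block
              block.clear()
          else:
              block.append(byte)
              if len(block) == 254:           # a run of 254 non-zero bytes is emitted
                  out.append(255)             # with count 255 and no implied zero
                  out += block
                  block.clear()
      out.append(len(block) + 1)              # final block, even if empty
      out += block
      out.append(0)                           # frame separator; drives frame_sep and m_tlast
      return bytes(out)

  # One possible test vector: a random frame with embedded zeroes
  frame = bytes(random.randrange(256) for _ in range(32))
  encoded = cobs_encode(frame)

Feeding vectors like these through the testbench and comparing the decoder's output stream against the original frame is the kind of closed-loop check that a source file alone, AI-generated or not, does not give you.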

None of this sort of thing is produced by ChatGPT.

There are some languages, and some functions, for which AI can already write correct code.

The situation is similar to algorithmic musical composition. Algorithms can come up with useful ideas, motifs, and sketches, or traverse a search space more quickly than a human. They can help a composer, but they do not replace one. Algorithms can produce some sparkling gems of ideas, fresh cuts and views, mixed in with a lot of music that is just plain wrong.

If your programming job happens to be in a well-defined area that is easily automated, then you do have something to worry about today.

All jobs are a target for eventual automation with AI/ML. Yes, all of them. As AI/ML becomes more capable, it will automate even things like testbench writing (HDL Coder already does something like this), support code (like a Python notebook that creates a solid set of test vectors for a testbench), and design documents. All of us should be aware that what we do may be replaced by an algorithm some number of tomorrows from now.

That day is not yet “today” for ChatGPT when it comes to writing VHDL.

What do you think? We’re interested in your experience and opinions with AI/ML writing code.