Podcasts by VK6FLAB

Starting in the wonderful hobby of Amateur or HAM Radio can be daunting and challenging but can be very rewarding. Every week I look at a different aspect of the hobby, how you might fit in and get the very best from the 1000 hobbies that Amateur Radio represents. Note that this podcast started in 2011 as "What use is an F-call?".

Between decibels and milliwatts ...

Foundations of Amateur Radio

As you might recall, I've been working towards using a cheap $20 RTL-SDR dongle to measure the second and third harmonic of a handheld radio in an attempt to discover how realistic that is as a solution when compared to using professional equipment like a Hewlett Packard 8920A RF Communications Test Set.

I spent quite some time discussing how to protect the receiver against the transmitter output and described a methodology to calculate just how much attenuation might be needed and what level of power handling would be required. With that information in hand, for reference, I used two 30 dB attenuators, one capable of handling 10 Watts and one capable of handling 2 Watts. In case you're wondering, it's not the dummy load with variable attenuation that I was discussing recently.

I ended up using a simple command-line tool, rtl_power, something which I've discussed before. You can use it to measure received power across a range of frequencies. In my case I measured for 5 seconds each, at the base frequency on the 2m band, at the second harmonic and at the third, and to be precise, I measured 100 kHz around each of the frequencies we're looking at.

This generated a chunk of data, specifically I created just over a thousand power readings every second for 15 seconds. I then put those numbers into a spreadsheet, averaged these and then charted the result. The outcome was a chart with three lines, one for each test frequency range. As you'd expect, the line for the 2m frequency range showed a lovely peak at the centre frequency, similarly, there was a peak for the other two related frequencies.
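If you'd rather script the averaging than reach for a spreadsheet, here's a sketch in Python. It assumes rtl_power's CSV layout of date, time, start frequency, stop frequency, bin size and sample count, followed by one dB reading per frequency bin; the example row is made up. Note that dB readings should be averaged in the linear power domain, because averaging the dB numbers directly understates the strong bins.

```python
import math

def average_bins_db(csv_line):
    """Average the dB bins from one rtl_power-style CSV row.

    Assumed layout: date, time, start Hz, stop Hz, bin size,
    sample count, then one dB reading per frequency bin.
    The average is taken in the linear power domain and then
    converted back to dB.
    """
    fields = [f.strip() for f in csv_line.split(",")]
    bins_db = [float(f) for f in fields[6:]]
    linear = [10 ** (db / 10) for db in bins_db]
    return 10 * math.log10(sum(linear) / len(linear))

# A made-up row: one strong bin between two weak ones.
row = "2023-09-01, 12:00:00, 146450000, 146550000, 97.66, 50, -40.0, -20.0, -40.0"
print(round(average_bins_db(row), 2))
```

The strong -20 dB bin dominates the result, which is exactly what you want when you're hunting for a carrier peak.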

The measurement data showed that the power measurement for 146.5 MHz was nearly 7 dB, for 293 MHz it was -44 dB and for 439.5 MHz it was -31 dB. If you've been paying attention, you'll notice that I used dB, not dBm or dBW in those numbers, more on that shortly.

From a measurement perspective we learnt that the second harmonic is 51 dB below the primary power output and the third harmonic was about 38 dB below the primary power output.

The first observation to make is that these numbers are lower than those shown on the HP Test Set, where they were 60 dB and 62 dB respectively.

The second observation, potentially more significant, is that pesky dB thing I skipped over earlier.

If you recall, when someone says dB, they're referring to a ratio of something. When they refer to dBm, they're referring to a ratio in relation to 1 milliwatt. This means that when I say that the power reading was 7 dB, I'm saying that it's a ratio in relation to something, but I haven't specified the relationship. As I said, that's on purpose.

Let me explain.

When you use an RTL-SDR dongle to read power levels, you're essentially reading numbers from a chip that is converting voltages to numbers. In this case the chip is an Analog to Digital Converter, or ADC. At no point has anyone defined what the number 128 means. It could mean 1 Volt, or it could mean 1 mV, or 14.532 mV, or something completely different. In other words, we don't actually know the absolute value that we're measuring. We can only compare values.

In this case we can say that when we're measuring on the 2m band we get a range of numbers that represent the voltage measured across those frequencies. When we then measure around the second harmonic, we're doing the same thing, possibly even using the same scale. So, if we get 128 back both times, we might assume the voltage is the same in both cases; we just don't actually know how much voltage that is. We could say that there's no difference between the two, or 0 dB, but we cannot say how high or low the voltage is.
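To put that comparison into code, treating the raw ADC counts as a stand-in for voltage, the difference between two readings is a ratio, so 20 times the log of the voltage ratio. This is a sketch to illustrate the point; the function name is mine, not part of any SDR library.

```python
import math

def relative_db(adc_ref, adc_reading):
    """Level of one ADC reading relative to another, in dB.

    ADC counts stand in for voltage, so the ratio uses
    20 * log10 rather than the 10 * log10 used for power.
    """
    return 20 * math.log10(adc_reading / adc_ref)

print(relative_db(128, 128))           # 0.0 -- same reading, no difference
print(round(relative_db(128, 64), 1))  # halving the voltage is about -6 dB
```

Notice that neither result tells you what 128 means in Volts; that's the calibration problem in a nutshell.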

This is another way of describing something I've discussed before, calibration.

So, if I had a tool that could output a specific, known RF power level, and fed that into the receiver and measured, I could determine the relationship between my particular receiver and that particular power level. I could then measure at all three frequencies and determine if the numbers were actually the same for these three frequencies, which is what I've been assuming, but we don't actually know for sure right now.

So, at this point we need a known RF signal generator. The list of tools is growing. I've already used a NanoVNA to calibrate my attenuators and I've used an HP RF Communications Test Set to compare notes with.

At this point you might realise that we're not yet able to make any specific observations about using a dongle to make harmonic measurements, but we can make pretty pictures...

There's a good chance that you're becoming frustrated with this process, but I'd like to point out that at the beginning of this journey I had no idea what the outcome might be and obviously, that's the nature of experimentation.

If you have some ideas on how to explore further, feel free to get in touch.

I'm Onno VK6FLAB

Wet and Blue adventures with coax ...

Sat, 09/16/2023 - 12:00
Foundations of Amateur Radio

Over the weekend a friend of mine convinced me to help plant some trees. Mind you, I was told that this was going to be a blue tree painting day. The Blue Tree Project is now a global awareness campaign that paints dead trees blue to spread the message that "it's OK to not be OK", and help break down the stigma that's still largely attached to mental health.

In the process, I learnt that my physical stamina is not what it once was and my current appetite for bending over and shovelling dirt is, let's call it, muted.

After the digging and the sausage sizzle under the branches of an actual blue tree, there was some opportunity for playing radio, something I haven't done in much too long. I wasn't sure when I last got into the fresh air to actually listen, but I must confess, the coax cable that I picked up out of my shed had been hanging there for several years.

The location where we planned to play was in a rural setting, right next to a dam, which surprisingly actually had water in it. The idea was to set up a vertical antenna with a couple of ground radials, plug in a radio and have a listen. I have to say, after the digging I was really looking forward to this.

My piece of coax, about 20 meters long, was used to connect the antenna to the radio so we could sit in the shade whilst the antenna stood out in the sun near the dam.

The antenna, a telescopic one, came with a ground spike and about eight radials and needed to be tuned to some extent, as in, near enough is close enough, since we had an antenna tuner with our radio. To achieve the tuning we wanted to connect a NanoVNA to the coax, which was the first challenge. The BNC connectors on my coax were pretty dull, likely a combination of poor quality, accumulated dust, humidity and lack of use.

As an added bonus the centre pin on one end seemed a little bent.

After working out how to get an SMA adaptor into the connectors we were in business. Connected up between the antenna and the NanoVNA we set out to get things lined up. The SWR on the display, hard to read in the full sun at the best of times, seemed to be a little odd. Not something I could put my finger on, but if you've seen enough SWR plots you know what it's supposed to look like and for some reason it didn't.

We bravely carried on, connected the radio to the coax and started tuning around. There didn't seem to be a lot of activity on the 20m band. We couldn't hear the local NCDXF beacon, which was odd. Also no FT8 activity, also odd.

If anything, it seemed like there was nothing happening at all.

Before we continue, I'll point out that this can happen with a big enough burp from the Sun. I hadn't seen any alerts, so I wasn't buying it. We removed my coax, plugged in something much shorter and the bands came alive with all the activity we'd been expecting.

And then it started to rain.

Seriously. Finally got out into the world, got radio activity going, had actual signals to tune to and it starts raining. Glynn VK6PAW and I took one look at each other, shook our heads and dashed for the radio to bring it under shelter. I put on my raincoat, and together we disassembled the antenna and the station and went home.

Clearly, my coax was faulty. Lesson learnt. Test your coax before you go out and you'll have a better outcome.

About that.

Today, a week later, I'm sitting on the floor of my shack with the offending coax between my legs, surrounded by adaptors, a NanoVNA, a RigExpert, a dummy load, a short and an open terminator. No matter how I test, no matter what I test, everything is as it should be. I can tell you that the Time Domain Reflectometry shows me that the coax is 25.8m long, useful information, but not really any surprise.

There's also no significant return loss, unless you head for 1 GHz, but even then it's perfectly respectable, if anything, better than I expected.

There are no loose connections, nothing rattling, nothing amiss.

The only thing that I can even begin to think might be the case is that one of the centre pins on one end of the coax is slightly shorter. That, combined with "close enough is good enough" when I attached the SMA adaptor in the field, might account for a connection that never got made, since the adaptor wasn't seated deep enough.

So, I'm not quite ready to cut off the connectors and re-terminate this coax. I'll be taking it into the field again, but I'll make sure that I bring an alternative, just in case. I'm also leaving the SMA adaptors connected to the coax. Future me will thank me.

Oh, yes, in case you're wondering, I'm slowly working out how to improve my stamina. That was not fun. If you want to know more about Blue Trees and its message, check out the BlueTreeProject.com.au website and if you ever just want to talk, get in touch.

I'm Onno VK6FLAB

Checking attenuation numbers ...

Sat, 09/09/2023 - 12:00
Foundations of Amateur Radio

Before we start I should give you fair warning. There are many moving parts in what I'm about to discuss and there's lots of numbers coming. Don't stress too much about the exact numbers. In essence, what I'm attempting is to explore how we can reduce the power output from a transmitter in such a way that it doesn't blow up a receiver whilst making sure that the signal is strong enough that we can actually measure it.

With that in mind, recently I discussed the idea of adding a series of attenuators to a transmitter to reduce the power output by a known amount so you could connect it to a receiver and use that to measure output power at various frequencies. One hurdle to overcome is the need to handle enough power in order to stop magic smoke from escaping.

None of my attenuators are capable of handling more than 1 or 2 Watts of power, so I cannot use any of them as the first in line. As it happens, a good friend of mine, Glynn VK6PAW, dropped off a device that allows you to divert most of the power into a dummy load and a small amount into an external connector, in effect creating an inline attenuator capable of handling 50 Watts.

The label doesn't specify what the attenuation is, so I measured it using a NanoVNA. To make our job a little more interesting, it isn't constant. Between 10 kHz and 1 GHz, the attenuation decreases from 70 dB to 10 dB. We want to measure at a base frequency on the 2m band and its second and third harmonic. The attenuation at those frequencies varies by 11 dB, which means we'll need to take that into account.

So, let's subject our currently imaginary test set-up to some sanity checking. Our receiver is capable of reading sensible numbers between a signal strength of -127 dBm and -67 dBm and we'll need to adjust accordingly.

If we transmit an actual 20 Watt carrier, that's 43 dBm. With 110 dB of attenuation, we end up at -67 dBm, which is right at the top end of what we think the receiver will handle. If we're using something like 5 Watts, or 37 dBm, we end up at -73 dBm, which is well above the minimum detectable signal. Our best harmonic measurement was around -30 dBm, which means that with 110 dB of attenuation, we end up at -140 dBm, which is 13 dB below what we think we can detect.
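Arithmetic like this is easy to script. Here's a sketch in Python using the receiver window described above, -127 dBm to -67 dBm; the function names are mine.

```python
import math

RX_MIN_DBM = -127  # weakest detectable signal
RX_MAX_DBM = -67   # strongest signal before distortion

def watts_to_dbm(watts):
    """Convert power in Watts to dBm (dB relative to 1 milliwatt)."""
    return 10 * math.log10(watts * 1000)

def reading_dbm(tx_watts, attenuation_db):
    """Expected receiver reading after a known amount of attenuation."""
    return watts_to_dbm(tx_watts) - attenuation_db

for watts in (20, 5):
    print(f"{watts} W is {watts_to_dbm(watts):.0f} dBm; "
          f"after 110 dB: {reading_dbm(watts, 110):.0f} dBm")

# A -30 dBm harmonic through the same 110 dB of attenuation.
harmonic = -30 - 110
print(f"harmonic lands at {harmonic} dBm, "
      f"{RX_MIN_DBM - harmonic} dB below the receiver floor")
```

Running the same numbers as the text: 20 Watts comes out at -67 dBm, 5 Watts at -73 dBm, and the harmonic 13 dB below what the receiver can see.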

So, at this point you might wonder if this is still worth our while, given that we're playing at the edges and to that I say: "Remind me again why you're here?"

First we need to attenuate our 20 Watts down to something useful so we don't blow stuff up. Starting with 110 dB attenuation, we can measure our base carrier frequency and its harmonics and learn just how much actual power is coming out of the transmitter. Once we know that, we can adjust our attenuation to ensure that we end up at the maximum level for the receiver and see what we are left with.

So, let's look at some actual numbers, mind you, we're just looking at calculated numbers, these aren't coming from an actual dongle, yet. Using Glynn's dummy load as the front-end, at 146.5 MHz, the attenuation is about 30 dB. If we look at a previously measured handheld and rounding the numbers, it produced 37 dBm. That's the maximum power coming into our set-up. With 30 dB of attenuation from Glynn's dummy load, that comes down to 7 dBm. We'll need an additional 74 dB of attenuation to bring that down to -67 dBm, in all we'll need 104 dB of attenuation.

The third harmonic for that radio was measured at -26 dBm. So, with a 104 dB of attenuation that comes out at -130 dBm, which is below the minimum detectable signal supported by our receiver. However, remember that I told you that our dummy load had different attenuation for different frequencies? In our case, the attenuation at 439.5 MHz is only 19 dB, not 30, so in actual fact, we'd expect to see a reading of -119 dBm, which is above the minimum detectable signal level.
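The correction for the dummy load's frequency-dependent attenuation can be folded into the expected reading like this. The attenuation values are the ones quoted above; the structure and names are my own sketch.

```python
# Measured attenuation of the dummy-load tap at the frequencies of
# interest, from the NanoVNA sweep described above (dB).
LOAD_ATTENUATION_DB = {
    146.5: 30,  # base frequency on the 2m band
    439.5: 19,  # third harmonic
}

EXTRA_ATTENUATION_DB = 74  # the additional inline attenuators

def expected_reading_dbm(tx_level_dbm, freq_mhz):
    """Expected receiver reading for a transmitter level at a frequency."""
    return tx_level_dbm - LOAD_ATTENUATION_DB[freq_mhz] - EXTRA_ATTENUATION_DB

print(expected_reading_dbm(37, 146.5))   # carrier: 37 - 30 - 74 = -67 dBm
print(expected_reading_dbm(-26, 439.5))  # third harmonic: -26 - 19 - 74 = -119 dBm
```

Forgetting the 11 dB difference between the two frequencies is exactly how you'd wrongly conclude the harmonic is unmeasurable.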

I realise that's a lot of numbers to digest, and they're specific to this particular radio and dummy load, but they tell us that this is possible and that we're potentially going to be able to measure something meaningful using our receiver. I'll also point out that if you're going to do this, it would be a good idea to take notes and prepare what numbers you might expect to see because letting the magic smoke escape might not be one of your desired outcomes.

Speaking of smoke, what happens if you consider changing the attenuation when you're measuring at another frequency, say the second or third harmonic, and you see a reading close to, or perhaps even below, the detectable signal level, as we've just discussed? You might be tempted to reduce the attenuation to increase the reading, but you need to remember that the transmitter is still actually transmitting at full power into your set-up, even if you're measuring elsewhere. This is why for some radios you'll see a measurement that states that the harmonics are below a certain value, because the equipment used doesn't have enough range to provide an actual number.

To simplify my life, using a NanoVNA, I created a spreadsheet with 101 data points for the attenuation levels of Glynn's dummy load between 10 kHz and 1 GHz. I charted it and with the help of the in-built trend-line function determined a formula that matched the data.

I've also skipped over one aspect that needs mentioning and that's determining if the receiver you're using to do this is actually responding in the same way for every frequency. One way you might determine if that's the case is to look at what happens to the signal strength across multiple frequencies using a dummy load as the antenna. One tool, rtl_power, might help in that regard.

Is this going to give you the same quality readings as a professional piece of equipment? Well, do the test and tell me what you learn.

I'm Onno VK6FLAB

How much attenuation is enough?

Sat, 09/02/2023 - 12:00
Foundations of Amateur Radio

Recently I had the opportunity to use a piece of professional equipment to measure the so-called unwanted or spurious emissions that a transceiver might produce. In describing this I finished off with the idea that you could use a $20 RTL-SDR dongle to do these measurements in your own shack. I did point out that you should use enough attenuation to prevent the white smoke from escaping from your dongle, but it left a question, how much attenuation is enough?

An RTL-SDR dongle is a USB powered device originally designed to act as a Digital TV and FM radio receiver. It's normally fitted with an antenna plugged into a socket on the side. I'll refer to it more generically as a receiver because much of what we're about to explore is applicable for other devices too.

Using your transceiver, or transmitter, as a signal source isn't the same as tuning to a broadcast station, unless you move it some distance away, as in metres or even kilometres away, depending on how much power you're using at the time. Ideally we want to connect the transmitter output directly to the receiver input so, at least theoretically, the RF coming from the transmitter stays within the measuring set-up between the two devices.

Assuming you have a way to physically connect your transmitter to your receiver we need to work out what power levels are supported by your receiver.

For an RTL-SDR dongle, this is tricky to discover. I came across several documents that stated that the maximum power level was 10 dBm or 0.01 Watt, but that seemed a little high, since an S9 signal is -73 dBm, so I kept digging and discovered a thoughtful report published in August 2013 by Walter, HB9AJG. It's called "Some Measurements on DVB-T Dongles with E4000 and R820T Tuners".

There's plenty to learn from that report, but for our purposes today, we're interested in essentially two things, the weakest and strongest signals that the receiver can accommodate. We're obviously interested in the maximum signal, because out of the box our transmitter is likely to be much too strong for the receiver. We're going to need to reduce the power by a known amount using one or more connected RF attenuators.

At the other end of the scale, the minimum signal is important because if we add too much attenuation, we might end up below the minimum detectable signal level of the receiver.

Over the entire frequency range of the receivers tested in the report the minimum varies by about 14 dB, so let's pick the highest minimum from the report to get started. That's -127 dBm. What that means is that any signal that's stronger than -127 dBm is probably going to be detectable by the receiver and for some receivers on some frequencies, you might be able to go as low as -141 dBm.

At the other end of the scale the report shows that the receiver range is about 60 dB, which means that the strongest signal that we can use is -67 dBm before various types of distortion start occurring. For comparison, that's four times the strength of an S9 signal.

So, if we have a 10 Watt transmitter, or 40 dBm, we need to bring that signal down to a maximum of -67 dBm. In other words we need at least 107 dB of attenuation and if we have a safety margin of two, we'll need 110 dB of attenuation, remember, double power means adding 3 dB.

So, find 110 dB of attenuation. As it happens, if I connect most of my attenuators together, I could achieve that level of attenuation, but there's one further issue that we'll need to handle and that's power.

As you might recall, an attenuator has several attributes. The most obvious one is how much attenuation it brings to the party, specified in dB. My collection of attenuators ranges from 1 dB to 30 dB. Another attribute is the connector it comes with; I have both N-type and SMA connectors in my collection, so I'll need some adaptors to connect them together. One less obvious, and at the cheap end of the scale often undocumented, aspect of an attenuator is its ability to handle power. Essentially we're turning an RF signal into heat, so an attenuator needs to be able to dissipate that heat to handle what your transmitter is throwing at it.

I said that from a safety perspective I'd like to be able to handle 20 Watts of power. Fortunately we don't need all our attenuators to be able to handle 20 Watts, just the first one directly connected to the transmitter. If we were to use a 20 Watt, 30 dB attenuator, the signal through the attenuator is reduced to 0.02 Watts and the next attenuator in line only needs to be able to handle that power level and so-on.
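You can check the power arriving at each stage of a chain with a few lines of Python. This is a sketch; the 30 + 30 + 30 + 20 dB chain is a hypothetical one that adds up to the 110 dB we were after.

```python
def power_along_chain(watts_in, attenuators_db):
    """Power in Watts entering each attenuator, plus the final output.

    Each attenuator divides the power by 10^(dB/10), so only the
    first one needs to handle the full transmitter output.
    """
    levels = [watts_in]
    for db in attenuators_db:
        levels.append(levels[-1] / 10 ** (db / 10))
    return levels

# 20 Watts into a hypothetical 30 + 30 + 30 + 20 dB chain (110 dB total).
labels = ["in", "after 30 dB", "after 60 dB", "after 90 dB", "after 110 dB"]
for label, watts in zip(labels, power_along_chain(20, [30, 30, 30, 20])):
    print(f"{label}: {watts:g} W")
```

After the first 30 dB attenuator only 0.02 Watts remains, which is why the rest of the chain can be built from cheap low-power parts.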

To get started, find about 110 dB of attenuation, make sure it can handle 20 Watts and you can start playing.

Before you start keying up your transmitter, how might you handle a range of different transmitters and power levels and can you remove an attenuator when you test on a different frequency?

On that last point, let me say "No", you cannot remove the attenuator when you're measuring a different frequency.

I'm Onno VK6FLAB

Starting to measure spurious emissions ...

Sat, 08/26/2023 - 12:00
Foundations of Amateur Radio

At a recent local HAMfest we set up a table to measure second and third harmonic emissions from any handheld radio that came our way. The process was fun and we learnt lots and in due course we plan to publish a report on our findings.

When we received a handheld, we would disconnect the antenna, replace it with a short length of coax and connect it to a spectrum analyser. We would then trigger the Push To Talk, or PTT, button and measure several things. We'd record the actual frequency and how many Watts the transmitter was producing and then record the power level in dBm for the base frequency, double that frequency and triple that frequency. In other words, we'd record the base, second and third harmonics.

This resulted in a list of numbers. Frequency and power in Watts are obvious, but the three dBm numbers caused confusion for many visitors. The most perplexing appeared to be that we were producing negative dBm numbers, and truth be told, some positive ones as well, we'll get to those in our report.

How can you have negative power you ask?

As I've discussed before, a negative dBm number isn't a negative value of power, it's a fraction. So, -30 dBm represents 0.000001 Watts and you'd have to admit that -30 dBm rolls off the tongue just a little easier.
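If you want to convince yourself, converting dBm back to Watts is one line of Python; this is a quick sketch, not part of any tool.

```python
def dbm_to_watts(dbm):
    """Convert a dBm level to power in Watts: P = 10^(dBm/10) milliwatts."""
    return 10 ** (dbm / 10) / 1000

print(dbm_to_watts(-30))  # 0.000001 Watts, a millionth of a Watt
print(dbm_to_watts(0))    # 0 dBm is exactly 1 milliwatt, 0.001 Watts
```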

What we measured and logged was the overall transmitter output and at specific frequencies. As I've discussed previously, if you transmit using any transceiver, you'll produce power at the intended frequency, but there will also be unintended or unwanted transmissions, known as spurious emissions.

The International Telecommunication Union, or ITU, has standards for such emissions. In Australia the regulator, the ACMA, uses the ITU standard for radio amateurs, but I should point out that this might not be the case where you are. It's entirely possible, and given human diversity, probable even, that there are places where there are more stringent requirements, so bear that in mind.

I'll state the standard and then explain.

For frequencies greater than 30 MHz, spurious emissions must be attenuated below the mean transmitter power by the lesser of 43 + 10 * log (power in Watts), or 70 dB.

That might sound like gobbledegook, so let's explore.

First thing to notice is that this is for transmissions where the transmitter is tuned to a frequency greater than 30 MHz, there's a separate rule for frequencies less than 30 MHz and the ITU also specifies a range of different limits for special purpose transmitters like broadcast radio and television, space services, and others.

Second thing is that the spurious emissions are calculated based on total mean output power. This means that your spurious emissions are considered in relation to how much power you're using to transmit and it implies that for some transmitters you can be in compliance at one power level, but not at another, so keep that in mind.

The phrase "the lesser of", means that from a compliance perspective, there's a point at which power levels no longer determine how much attenuation of spurious emissions is required. You can calculate that point. It's where our formula hits 70 dB, and that is at 500 Watts. In other words, to meet the ITU standard, if you're transmitting with less than 500 Watts, you're subject to the formula and if you're transmitting with more than 500 Watts, you're required to meet the 70 dB standard.

It means that, at least in Australia, spurious emissions for amateurs are dependent on transmitter power because the maximum permitted power is currently 400 Watts for an amateur holding a so-called Advanced License.

Now I'll also point out explicitly that the emission standards that the ITU specifies are for generic "radio equipment", which includes amateur radio, but also includes anything else with a transmitter.

One thing to mention is that spurious emissions aren't limited to the second and third harmonics that we measured, in fact they're not even limited to harmonics. If you're using a particular mode then anything that's transmitted outside the bandwidth of that mode is considered a spurious emission and there are standards for that as well.

As an aside, it was interesting to me that in many cases amateur radio is treated separately from other radio services, but the ITU considers our community just one of several spectrum users and it's good to remember that the entire universe is playing in the same sandbox, even if only some of it is regulated by the ITU and your local regulator.

So, let's imagine that you have a handheld radio that has a total mean power output of 5 Watts. When you calculate using the formula, you end up at 50 dB attenuation. In other words, the spurious emissions may not exceed -13 dBm. So, if your radio measures -20 dBm on the second harmonic, it's compliant for that harmonic, but if it measures -10 dBm, it's not. I should also point out that this is for each spurious emission. About half the radios we tested had a second harmonic that was worse than the third harmonic.
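The worked example above can be captured in a few lines of Python; the function name is mine and the formula is the over-30 MHz limit quoted earlier.

```python
import math

def spurious_limit_dbm(mean_power_watts):
    """Maximum permitted spurious emission level in dBm, above 30 MHz:
    mean power minus the lesser of 43 + 10*log10(P) or 70 dB."""
    attenuation_db = min(43 + 10 * math.log10(mean_power_watts), 70)
    mean_power_dbm = 10 * math.log10(mean_power_watts * 1000)
    return mean_power_dbm - attenuation_db

limit = spurious_limit_dbm(5)
print(round(limit))  # -13 dBm for a 5 Watt handheld
print(-20 <= limit)  # True: a -20 dBm harmonic is compliant
print(-10 <= limit)  # False: a -10 dBm harmonic is not
```

Each measured harmonic is compared against the limit separately, which matches the radio-by-radio results described above.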

So, what does this mean for your radio? I'd recommend that you start reading and measuring. You'll need to measure the total mean power, and the signal strength at the base frequency and the second and third harmonic. I will mention that surprises might happen. For example, the Yaesu FT-857d radio I use every week to host a net appears to be transmitting with a power level that doesn't match its setting. At 5 Watts, it's only transmitting just over 2 Watts into the antenna, but at the 10 Watt setting, it's pretty much 10 Watts.

You also don't need a fancy tool like we were using. All these measurements are relative to each other and you could even use a $20 RTL-SDR USB dongle, but before you start transmitting into its antenna port, make sure you have enough attenuation connected between the transmitter and your dongle, otherwise you'll quickly discover the escape velocity of the magic smoke inside.

I'm Onno VK6FLAB

Gathering Data rather than Opinions ...

Sat, 08/19/2023 - 12:00
Foundations of Amateur Radio

There's nothing quite as satisfying as the click of a well designed piece of equipment. It's something that tickles the brain and done well it makes the hairs stand up on the back of your neck.

If time was on my side and I wasn't going somewhere else with this, I'd now regale you with research on the phenomenon. I'd explore the community of people building mechanical keyboards and those who restore equipment to their former glory. Instead I'm encouraging you to dig whilst I talk about the second and third harmonics. This is about amateur radio after all.

Over the years there has been a steady stream of commentary around the quality of handheld radios. Some suggest that the cheaper the radio, the worse it is. Given that these kinds of radios are often the very first purchase for an aspiring amateur it would be useful to have a go at exploring this.

When a radio is designed the aim is for it to transmit exactly where it's intended to and only there. Any transmission that's not where you plan is considered a spurious emission. By carefully designing a circuit, by adding shielding, by filtering and other techniques these spurious emissions can be reduced or eliminated, but this costs money, either in the design stage, or in the cost of materials and manufacturing. It's logical to think that the cheaper the radio, the worse it is, but is it really true that a cheap radio has more spurious emissions than an expensive one?

To give you an example of a spurious emission, consider an FM transmitter tuned to the 2m amateur band, let's say 146.5 MHz. If you key the radio and all is well, the radio will only transmit at that frequency, but that's not always the case. It turns out that if you were to listen on 293 MHz, you might discover that your radio is also transmitting there. If you're familiar with the amateur radio band plan, you'll know that 293 MHz is not allocated as an amateur frequency, so we're not allowed to transmit there, in fact, in Australia that frequency is reserved for the Australian Department of Defence, and there's an additional exclusion for the Murchison Radio-astronomy Observatory.

293 MHz isn't a random frequency. It's twice 146.5 MHz and it's called the second harmonic.

There's more. If you multiply the base frequency by three, you end up at 439.5 MHz, the third harmonic. In Australia, that frequency falls into the amateur allocation as a second use, its primary use is again the Department of Defence.

These two transmissions are examples of spurious emissions. To be clear, the transmitter is tuned to 146.5 MHz and these unintended extra signals come out of the radio at the same time.

This is bad for several reasons, legal and otherwise. The first, obvious one, is that you're transmitting out of band, which as an amateur you already have no excuse for, since getting your license requires you to understand that this is strictly not allowed.

The International Telecommunication Union, or ITU, has specific requirements for what's permitted in the way of spurious emissions from an amateur station.

Spurious emissions also mean that there is energy being wasted. Instead of the signal only coming out at the intended frequency, some of it is appearing elsewhere, making the 5 Watts you paid for less effective than you hoped for.

So, what's this got to do with the click I started with?

Well, thanks to Randall, VK6WR, I have on loan a heavy box with a Cathode Ray Tube or Green CRT screen, lots of buttons and knobs and the ability to measure such spurious emissions. It's marked "HP 8920A RF Communications Test Set". Using this equipment is very satisfying. You switch it on and a fan starts whirring. After a moment you hear a beep, then the screen announces itself, almost as if there's a PC in there somewhere - turns out that there is and the beep is the Power On Self Test, or POST beep. Originally released in 1992, this magic box can replace 22 instruments for transceiver testing. I started downloading user manuals, oh boy, there's lots to learn. Bringing back lots of memories, it even has a programming language, Instrument BASIC, to control it. Where have you been all my life? Turns out that in 1992 this piece of kit cost as much as my car. Anything for the hobby right?

At the next HAMfest I'll be using it to measure as many handhelds as I can get my hands on and taking notes. I have no idea how many I'll be able to test, but I'm looking forward to putting some numbers against the repeated claims of quality and price. I can tell you that a couple of weeks ago I got together with Randall and Glynn VK6PAW and spent an enjoyable afternoon testing several radios and there are some surprising results already.

Perhaps this is something you might attempt at your next community event, gather data, rather than opinions...

I'm Onno VK6FLAB

Jumping into the unknown ...

Sat, 08/12/2023 - 12:00
Foundations of Amateur Radio

If you walk into your radio shack and switch on a light, the result is instantaneous: one moment it's dark, the next it's not. What if I told you that, as immediate as it appears, there is actually a small delay between you closing the circuit and the light coming on? The distance between your switch and your light is probably less than, say, 10 metres, so the delay is under 33 nanoseconds, not something you'd notice unless you set out to measure it.

What if your light switch is 3,200 km away? That's the length of the first transatlantic telegraph cable in 1858.

Let's start with the notion that it takes time between closing a switch, or applying a voltage, at one end of the cable and it being seen at the other end. If we ignore the wire for a moment, pretending that both ends are separated by vacuum, then the delay between the two ends is just over 10 milliseconds, because that's how long light takes to travel 3,200 km. One of the effects of using a cable is that it slows things down. In case you're curious, the so-called Velocity Factor describes by how much. A common Velocity Factor of 0.66 means the signal travels at only 66% of the speed of light, stretching that delay to roughly 16 milliseconds.
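The arithmetic above can be sketched in a few lines of Python. The 3,200 km length and 0.66 velocity factor are the illustrative values from the text, nothing more.

```python
# Propagation delay over a line: length divided by signal speed,
# where the velocity factor scales the speed of light in vacuum.
C = 299_792_458  # speed of light in vacuum, metres per second

def propagation_delay(length_m: float, velocity_factor: float = 1.0) -> float:
    """One-way signal delay in seconds."""
    return length_m / (C * velocity_factor)

vacuum_ms = propagation_delay(3_200_000) * 1000        # just over 10 ms
cable_ms = propagation_delay(3_200_000, 0.66) * 1000   # roughly 16 ms
print(f"vacuum: {vacuum_ms:.1f} ms, cable at VF 0.66: {cable_ms:.1f} ms")
```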

This means that there is a time when there is voltage at one end and no voltage at the other.

There are a few other significant and frequency dependent things going on, we'll get to them, but before we go any further, it's important to consider a couple of related issues.

Ohm's Law, which describes the relationship between voltage, current and resistance in an electrical circuit was first introduced in 1827 by Georg Ohm in his book: "The Galvanic Chain, Mathematically Worked Out". Initially, his work was not well received and his rival, Professor of Physics Georg Friedrich Pohl went so far as to describe it as "an unmistakable failure", convincing the German Minister for Education that "a physicist who professed such heresies was unworthy to teach science."

Although today Ohm's Law is part and parcel of being an amateur, it wasn't until 1841 that the Royal Society in London recognised the significance of his discovery, awarding him the Society's oldest and most prestigious award, the Copley Medal, in recognition of his "researches into the laws of electric currents".

I'll point out that Ohm only received recognition because his work was changing the way people were starting to build electrical engines and word of mouth eventually pressured the Royal Society into the formal recognition he deserved.

I also mentioned the speed of light in relation to the delay between applying a voltage and it being seen at the other end, but it wasn't until 1862, when James Clerk Maxwell published a series of papers called "On Physical Lines of Force", that this speed was actually derived. By combining electricity and magnetism, Maxwell showed that light was an electromagnetic wave, and that there were other "invisible" waves, which Heinrich Rudolf Hertz discovered as radio waves in 1888.

How we understand transmission lines today went through a similar discovery process. Your radio is typically connected to an antenna using a length of coaxial cable. "Coaxial" describes the shape of the cable, but the nature of the cable, what it does, is what's known as a transmission line.

If you looked at the submarine telegraph cable of 1858, you'd recognise it as coaxial cable, but at the time there wasn't much knowledge about conductance, capacitance, resistance and inductance, let alone frequency dependencies. James Clerk Maxwell's equations weren't fully formed until 1865, seven years after the first transatlantic telegraph cable was commissioned and the telegraph equations didn't exist until 1876, 18 years after the first telegram between the UK and the USA.

In 1854, physicist William Thomson was asked for his opinion on some experiments by Michael Faraday, who had demonstrated that the construction of the transatlantic telegraph cable would limit the rate, or bandwidth, at which messages could be sent. Today we know William Thomson as the first Baron Kelvin, yes, the one we named the temperature scale after. Thomson was a prolific scientist from a very young age.

Over a month, using an analogy with Joseph Fourier's theory of heat transfer, Thomson proposed "The Law of Squares", an initial explanation for why signals sent across undersea cables appeared to be smeared across time, also known as dispersion of the signal. The smearing was so severe that dits and dahs started to overlap, requiring the operator to slow down in order for their message to be readable at the other end. As a result, message speed for the first cable was measured in minutes per word, rather than words per minute.

Today we know this phenomenon as intersymbol interference.

Oliver Heaviside discovered how to counter this phenomenon using loading coils, based on his description of what we now call the Heaviside condition, where you can, at least mathematically, create a telegraph cable without dispersion. It was Heaviside's transmission line model, first published in 1876, that demonstrated frequency dependencies, and this model can be applied to anything from low frequency power lines to audio frequency telephone lines and radio frequency transmission lines.

Thomson worked out that, against the general consensus of the day, doubling the line would actually quadruple the delay needed. It turns out that the length of the line was so significant that the second cable laid in 1865, 560 km shorter, outperformed the original cable by almost ten times, even though it was almost identical in construction, providing physical proof of Thomson's work.

It has been said that the 1858 transatlantic telegraph cable was the scientific equivalent of landing a man on the Moon. I'm not sure that adequately explains just how far into the unknown we jumped. Perhaps if we blindfolded Neil Armstrong whilst he was landing the Eagle...

I'm Onno VK6FLAB

How fast is Morse code?

Sat, 08/05/2023 - 12:00
Foundations of Amateur Radio

The first official telegram to pass between two continents was a letter of congratulations from Queen Victoria of the United Kingdom to President of the United States James Buchanan on 16 August 1858. The text is captured in the collection of the US Library of Congress as a low resolution image of a photo of a wood engraving. By my count, the text from the Queen to the President is about 650 characters. IEEE reports it as 98 words, whereas my count gives 103 or 95 words, depending on how you count the address.

Due to a misunderstanding between the operators at either end of the 3,200 km long cable, the message took 16 hours to transmit and 67 minutes to repeat back. If you use the shortest duration, the effective speed is about one and a half Words Per Minute or WPM. That's not fast in comparison with speeds we use today. Until 2003, the ITU expected that emergency and meteorological messages should not exceed 16 WPM, that a second class operator could achieve 20 WPM and a first class operator could achieve 25 WPM.

To put the message speed in context of the era, in 1856, RMS Persia, an iron paddle wheel steamship and at the time, the largest ship in the world, won the so-called "Blue Riband" for the fastest westbound transatlantic voyage between Liverpool and Sandy Hook. The journey took nine days, 16 hours and 16 minutes. Similarly, it wasn't until 1861 that a transcontinental telegraph was established across the United States. In 1841 it took 110 days for the news of the death in office of President William Henry Harrison to reach Los Angeles. Today that distance is covered by a 39 hour drive, a 5 hour flight, and about 12 milliseconds on HF radio.

So, while the speed of the message might not be anything to write home about today, at the time it was world changing.

Speed in Morse code is measured in a specific way. Based on International Morse code, which is what I'm using throughout this discussion, if you send the word "PARIS" a dozen times in a minute and the next time starts right on the next minute, you officially sent Morse at 12 WPM.

Looking inside the message of the word "PARIS", it's made up of a collection of dits and dahs. If a dit is one unit of time, a dah is three units, and the gap between elements within a character is one unit, then the letter "a", represented by dit-dah, is five units long. Add a three unit gap between letters and a seven unit gap after a word, and the word "PARIS", including the space after it, is exactly 50 units long. When you send at 12 WPM, you're effectively sending 600 dit units per minute, or ten units or bits per second, each lasting a tenth of a second.
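That timing arithmetic is easy to check with a short Python sketch, using the standard unit lengths: dit 1, dah 3, intra-character gap 1, letter gap 3, word gap 7.

```python
# Count the timing units in a Morse word, the same way "PARIS" is
# traditionally counted for WPM measurements.
CODE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def word_units(word: str) -> int:
    """Units for a word, including the 7-unit word space after it."""
    letters = []
    for ch in word:
        elements = [1 if e == "." else 3 for e in CODE[ch]]
        # one-unit gaps between elements within a character
        letters.append(sum(elements) + len(elements) - 1)
    # three-unit gaps between letters, seven-unit word space after
    return sum(letters) + 3 * (len(letters) - 1) + 7

units = word_units("PARIS")      # 50 units
dit_seconds = 60 / (units * 12)  # dit length at 12 WPM: 0.1 s
print(units, dit_seconds)
```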

Unfortunately, there is not a one-to-one relationship between Morse speed and ASCII, the American Standard Code for Information Interchange, for a number of reasons. Firstly, Morse is made from symbols with varying lengths, whereas ASCII, the encoding that we really want to compare speeds with, has symbols with a fixed length. You cannot simply count symbols in both and compare their speeds, since communication speed is about what you send, how fast you send it, and how readable it is at the other end.

Thanks to Aiden, AD8GM, who, inspired by my initial investigation, shared the idea and Python code to encode Morse dits, dahs and spacing using a one for a dit, one-one-one for a dah, and zeros for spacing. This means that the letter "e" can be represented by "10" and the letter "t" by "1110".

You can do this for the standard Morse word "PARIS" and end up with a combination of 50 zeros and ones, or exactly 50 bits. I've been extending the code that Aiden wrote to include other encoding systems. When I have something to show it will be on my GitHub page.

However, using Aiden's idea, we gain the ability to directly compare sending Morse bits with ASCII bits, since they share the same zero and one encoding. If you use standard binary encoded ASCII, each letter takes up eight bits and the six characters for the word "PARIS", including the space, will take up 48 bits. Given that I just told you that the Morse version of the same message takes up 50 bits, you could now smile and say, see, ASCII is faster - wait, what?
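Here is a minimal sketch of that bit-encoding idea. Note this is my reconstruction of the approach, not Aiden's actual code: one-unit gaps inside a character, three-unit gaps between letters, and a seven-unit word space.

```python
# Encode a Morse word as a string of ones and zeros: dit = "1",
# dah = "111", zeros for the gaps.
CODE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def morse_bits(word: str) -> str:
    letters = ["0".join("1" if e == "." else "111" for e in CODE[ch])
               for ch in word]
    return "000".join(letters) + "0000000"  # 7-unit word space

bits = morse_bits("PARIS")
print(len(bits))           # 50 bits for the Morse version
print(len("PARIS ") * 8)   # 48 bits in 8-bit binary coded ASCII
```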

Yes, if you send the word "PARIS " using 8-bit binary coded ASCII it's two bits shorter than if you use Morse. Job done, roll the press, headline reads: "Morse is four percent slower than binary coded ASCII".

Not so fast grasshopper.

If you recall, American Morse code, the one that has Samuel Morse's name written all over it, was replaced by a different code, made by Friedrich Gerke which in turn was modified to become what we now know as International Morse code.

Ask yourself, why did Gerke change the code? It turns out that one of the biggest issues with getting a message across an undersea cable was decoding the message at the other end. Let me give you an example using American Morse: consider the encoding of "e", dit, and "o", dit-extra-space-dit, and now try sending the word "seed" across a noisy line. Did you convey "seed", or was it "sod"? In other words, there is room for ambiguity in the message, and when you're talking about commerce, that's never a good basis for coming to a mutually binding agreement.

It turns out that encoding needs to be more subtle than just creating a sequence of bits.

Something else to consider: 10 bits per second is another way of saying 10 Hz. In other words, this is not just switching, we're dealing with frequencies, and because we're not sending lovely sinusoidal waves but, from a signal processing perspective, a very horrible square wave, we're also dealing with harmonics, lots of harmonics, and more of them as we speed things up.

So, if you send binary coded ASCII and compare it to Morse code, will your message actually arrive?

I'm Onno VK6FLAB

Will the real inventor of Morse code please stand?

Sat, 07/29/2023 - 12:00
Foundations of Amateur Radio

Morse code is a way for people to send information across long distances. The code we use today, made from dit and dah elements is nothing like the code demonstrated and attributed to Samuel Morse in 1837.

Over the years, with assistance from Professor of Chemistry Leonard Gale and physicist Joseph Henry, the then Professor of Literature Samuel Morse and the mechanically minded Alfred Vail developed an electrical telegraph system that automatically moved a paper tape and used an electromagnet to pull a stylus into the paper and a spring to retract it, marking the paper with lines. The original system was only intended to transmit numbers, and combined with a dictionary, the operator could decode the message. The telegraph was able to send zig-zag and straight lines, transmitting the message "Successful experiment with telegraph September 4 1837". The system was enhanced to include letters, making it much more versatile. On the 6th of January 1838, across 4.8 km of wire strung across a barn, the new design with letters and numbers was demonstrated.

To optimise the enhanced version of the code, Alfred Vail went to his local newspaper in Morristown, New Jersey, to count the movable type he found in the compositor's type-cases, and assigned shorter sequences to the most common letters. You might think that this explains the distribution of the codes we see today, but you'd be wrong.

The 1838 system used four different element lengths and varied the spacing inside a character. For example, the letter "o" was signified by two dits with a two unit space between them, where today it's represented by three dahs. The letter "p" was signified by five dits, today this represents the number "5", and the code didn't distinguish between "i" and "y", between "g" and "j", and between "s" and "z".

A decade later and an ocean away in Germany, writer, journalist, and musician Friedrich Gerke created the Hamburg alphabet. Based on the work by Vail and Morse, it standardised the length of the elements and spacing into what we use today, the dit and the dah. He changed about half of the characters and also incorporated four special German characters, the umlaut versions of A, O and U and the CH sound - pronounced like the sound for the composer "Bach" or the Dutch name "Benschop" - not to be confused with the CH in child, or the CK in clock, or the SH sound in shop. It was different in other ways. For example, the letters "i" and "j" had the same code. The code was optimised to be more robust across undersea telegraph cables. I'll be coming back to that before we're done exploring, but not today. If you want to skip ahead, the term you're looking for is dispersion. Gerke's code was adopted in 1851 across Germany and Austria and is known as Continental Morse code.

By the time most of Gerke's code was adopted as the European Standard in 1865 as one of many agreements that mark the founding of the International Telegraph Union in Paris, only four sequences of the original 1838 code remained and only two of those, "e" and "h" were identical. Which means that although the idea that Morse code is based around English is often repeated, at this stage it's nothing more than a myth, which my previous word list and subsequent dictionary letter counts across over fifty languages confirm.

I'll mention that given Gerke's German heritage, I also made a letter count from a modern German dictionary and one from 1901 and found that the letter distribution in those two are very similar with only the letter "s" and "t" swapped between position four and five in the popularity contest stakes. The German letter Top-5 is "enrts" and the "o" is the 16th most popular letter.

Speaking of "o", one observation to make is that the new International Morse code contained the letter "o" as dah-dah-dah; it also contained the letter "p" as dit-dah-dah-dit. These two codes come from an 1849 telegraph code designed by physicist, inventor, engineer and astronomer Carl August von Steinheil. There is evidence suggesting that he invented a print telegraph and matching dot script in 1836, based around positive and negative pulses, rather than pulse duration. I'm purposely skipping over earlier telegraph systems built and used by Carl Friedrich Gauss, Wilhelm Eduard Weber, and Steinheil, only because we're talking about Morse code, not the telegraph.

The 1865 ITU standard for International Morse code includes several accented letters, symbols for semi-colon, exclamation mark, chevrons and several control codes and both normal and short forms for numbers which merge all the dahs in any digit into a single dah. Many of these codes are not part of the official standard today.

I'll point out that over time, experienced telegraph operators learnt to decode dits and dahs based on sound alone, negating the need for paper. This translates directly into how we experience Morse in our hobby today, by tone only.

There is a much more detailed explanation of how the telegraph evolved in a book by Russell W. Burns called "Communications: An International History of the Formative Years". Fair warning, there are many claims and counterclaims, including the possibility that someone else entirely, Harrison Gray Dyar, a chemist, invented an electrochemical telegraph, using chemically treated paper to make marks, dits and dahs, and demonstrated it between 1826 and 1828 near a race track on Long Island.

I'm mentioning this because Samuel Morse is often attributed as the source of all things telegraphy, but the reality appears to be much more nuanced and, unsurprisingly, there are conflicting accounts depending on the source, including acceptance and repudiation that Alfred Vail was the inventor of what we now call Morse code.

I'm Onno VK6FLAB

Is Morse really built around the most popular letters in English?

Sat, 07/22/2023 - 12:00
Foundations of Amateur Radio

Thanks to several high profile races we already know that sending Morse is faster than SMS. Recently I started digging into the underpinnings of Morse code to answer the question, "Can you send Morse faster than binary encoded ASCII?" Both ASCII, the American Standard Code for Information Interchange and Morse are techniques to encode information for electronic transmission. One is built for humans, the other for computers.

To answer the question, which is faster, I set out to investigate. I'm using the 2009 ITU, or International Telecommunication Union, standard for Morse throughout.

Morse is said to be optimised for sending messages in English. In Morse the letter "e", represented by "dit" is the quickest to send, the next is the letter "t", "dah", followed by "i", dit-dit, "a", dit-dah, "n", dah-dit, and "m", dah-dah.

The underlying idea is that communication speed is increased by making the most common letter the fastest to send and so-on. Using a computer this is simple to test. I counted the letters of almost 400,000 words of my podcast and discovered that "e" is indeed the most common letter, the letter "t" is next, then "a", "o", and "i". Note that I said "letter". The most common character in my podcast is the "space", which in Morse takes seven dits to send.
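If you'd like to run the same experiment on your own text, a few lines of Python with collections.Counter will do it. The sample string here is just a placeholder; substitute your own transcripts.

```python
# Rank the letters of a text by how often they appear.
from collections import Counter

def letter_ranking(text: str) -> str:
    """Letters of `text` ordered by frequency, most common first."""
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

sample = "the quick brown fox jumps over the lazy dog " * 100
print(letter_ranking(sample)[:5])  # top-5 letters of the sample
```

Counting spaces and punctuation as well only needs the `isalpha` filter removed.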

Also note that the Morse top-5 is "etian", the letter "o" is 14th on the list in terms of speed. In my podcast it's the fourth most popular letter, mind you, my name is "Onno", so you might think that is skewing the data.

Not so much.

The combined works of Shakespeare represent an older and less technical use of language, and they don't feature my name, so I figured they might give a different result. The top-5 in his words is "etoai", the letter "o" is the third most popular, and "space" still leads the charge, by nearly three times.

I also had access to a listing of 850 job advertisements, yes, still looking, and the character distribution top-5 is "eotin", the letter "o" is the second most popular letter.

Because I can, and I'm well, me, I converted the ITU Morse Code standard to text and counted the characters there too. The top-5 letters are "etion", but the full stop is a third more popular than the letter "e", mind you that might be because the people at the ITU still need to learn how to use a computer, seriously, storing documents inside the "Program Files" directory under the ITU_Admin user, what were you thinking? I digress. The "space" is still on top, nearly six times as common as the letter "e".

As an aside, it's interesting to note that you cannot actually transmit the ITU Morse standard using standard Morse, since the document contains square brackets, a multiplication symbol, asterisks, a copyright symbol, percent signs, em-dashes, and both opening and closing quotation marks, none of which exist as valid symbols.

Back to Morse. The definition has other peculiarities. For example, the open parenthesis takes less time to send than the closing one, yet you would think that they are equally common, given that they come in pairs. If you look at numbers, "5" takes the least time to send, "0" the longest. In my podcast text, "0" is a third more common than "1" and "9" is the least common. In Shakespeare, "9" is the most common, "8" the least, and in job listings, "0" and "2" go head-to-head, and both are four times as common as the number "7", which is the least common.

All this to say that character distribution is clearly not consistent across different texts and Morse is built around more than the popularity of letters of the alphabet. For example, the difference between the left and right parenthesis is a dah at the end. If you know one of the characters, you know the other. The numerical digits follow a logical progression, from the all-dit "5" to the all-dah "0". In other words, the code appears to be designed with humans in mind.

There are other idiosyncrasies. Most of the code builds in sequences, but there are gaps. If you visualise Morse as a tree, the letter "e" has two children, both starting with a dit, one followed by another dit, or dit-dit, the letter "i", and the other, followed by a dah, dit-dah, the letter "a". Similarly, the letter "t", a dah, has two children dah-dit, "n" and dah-dah, "m". This sequence can be built for many definitions, but not all. The letter "o", dah-dah-dah, has no direct children. There's no dah-dah-dah-dit or dah-dah-dah-dah sequence in Morse. The letter "u", dit-dit-dah has one child "f", dit-dit-dah-dit, but the combination dit-dit-dah-dah is not valid Morse.

It's those missing combinations that led me to believe that Morse isn't as efficient as it could be and what originally led me to investigate the underpinnings of this language.

I think it's fair to conclude at this point that Morse isn't strictly optimised for English, or if it is, a very small subset of the language. It has several eccentricities, not unlike the most popular computer keyboard layout, QWERTY, which wasn't laid out for humans or speed typing; rather, it was arranged to keep commonly paired letters apart, so that the mechanical arms punching letters into the page wouldn't jam against each other.

In other words, Morse code has a history.

Now I'm off to start throwing some CPU cycles at the real question. Is Morse code faster than binary encoded ASCII?

I'm Onno VK6FLAB

Adventures with Morse Code

Sat, 07/15/2023 - 12:00
Foundations of Amateur Radio

If you've ever looked at Morse Code, you might be forgiven for concluding that it appears to be a less than ideal way of getting information from point A to point B. The idea is simple: based on a set of rules, you translate characters, one at a time, into a series of dits and dahs, each spaced apart according to the separation between each element, each character and each word.

The other day I came across a statement that asserted that you could send Morse faster than binary encoded ASCII letters. If you're not sure what that means, there are many different ways to encode information. In Morse, the letter "e" is the first character, represented by "dit", the letter "t" is the second character, represented by "dah". In ASCII, the American Standard Code for Information Interchange, the capital letter "E" is character number 69, represented in binary by 100 0101. The letter "T" is number 84 on the list, represented by 101 0100.

A couple of things to observe. The order of the characters between Morse and ASCII is not the same. That doesn't really matter, as long as both the sender and receiver agree that they're using the same list. Another thing to notice is that in Morse, letters are encoded using dits and dahs and appropriate spacing. In ASCII, or technically, binary coded ASCII, the letters are encoded using zeros and ones.
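Python's built-in ord and format make it easy to see those binary coded ASCII values for yourself:

```python
# Show the decimal ASCII code and its 7-bit binary form.
for letter in "ET":
    print(letter, ord(letter), format(ord(letter), "07b"))
```

Running this prints 69 and 1000101 for "E", and 84 and 1010100 for "T", matching the codes above.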

I'll also mention that there are plenty of other ways to encode information, EBCDIC or Extended Binary Coded Decimal Interchange Code was defined by IBM for its mainframe and mid-range computers. It's still in use today. In EBCDIC, the letter "e" is 133 and the letter "t" is 163. It was based around punched cards to ensure that hole punches were not too close together. It was designed for global use and can, for example, support Chinese, Japanese, Korean and Greek. Another encoding you might have heard of is UTF-16, which supports over a million different characters including all the emojis in use today.

Before I continue, I must make a detour past the ITU, the International Telecommunication Union. The ITU has a standard, called "Recommendation M.1677-1", approved on the 3rd of October 2009, which defines International Morse code. I'm making that point because I'm going to dig deeper into Morse and it helps if we're talking about the same version. I have talked about many versions of Morse before, so I'll leave that alone, but I will point out a couple of things.

The ITU defines 56 unique Morse sequences or characters. The obvious ones are the letters of the alphabet, the digits and several other characters like parentheses, quotes, question mark, full-stop, and comma, including the symbol in the middle of an email address, which it calls the "commercial at symbol". A footnote tells us that the French General Committee on Terminology approved the term "arobase" in December 2002, but it seems that seven years wasn't enough time to convince the ITU to update its own standard. Mind you, the rest of the world, well, the English speaking part, calls it "at", the letter "a" with a circle around it, as in my email address, cq@vk6flab.com.

Another thing to note is that this standard is only available in English, Arabic, Chinese, French and Russian, so I'm not sure what the Spanish, Hindi, Portuguese, Bengali and Japanese communities, who represent a similar population size do for their Morse definitions. It's interesting to note that as part of its commitment to multilingualism, the ITU actually defines six official languages. Specifically, the "Spanish" version of the standard appears to be missing.

There are other curious things. For example, the standard defines a special character called "accented e", though it doesn't describe which accent. Given that there are four variants in French alone, I found at least seven versions, and it completely ignores accents on the i, the c and the o, and special character combinations like "sz" in German and "ij" in Dutch. This isn't to throw shade on Morse, it's to point out that it's an approximation of a language with odd variations. I'm also going to ignore capitalisation. In Morse there's none and in ASCII there are definitions for both, capitalised and not.

In addition to things you write in a message, there are also control codes. The ITU defines six specific Morse control codes, things like "Understood", "Wait", and "Error". ASCII has those too. The first 32 codes in ASCII, 0 through 31, are reserved for controls like "line feed", "carriage return", and "escape".

There are other oddities. The ITU specifies that the control code "Invitation to transmit" is symbolised by dah-dit-dah. If you're familiar with Morse, you'll know that this is the same as the letter "k". The specification says that multiplication is dah-dit-dit-dah, which is the same as "x". There's also rules on how to signify percentages and fractions using dah-dit-dit-dit-dit-dah, the hyphen, as a separator.

At this point I haven't even gotten close to exploring efficiency, but my curiosity is in overdrive. Is Morse really optimised for English, or are there other forces at work? I'm already digging.

I'm Onno VK6FLAB

The nature and ownership of information

Sat, 07/08/2023 - 12:00
Foundations of Amateur Radio

Have you ever made an international contact using amateur radio and used that towards tracking an award like for example the DXCC? If you're not familiar, it's an award for amateurs who make contact with at least 100 "distinct geographic and political entities".

In 1935 the American Radio Relay League, or ARRL published an article by Clinton B. DeSoto, W1CBD, titled: "How to Count Countries Worked: A New DX Scoring System". In the article he asks: "Are Tasmania and Australia separate countries?"

In case you're wondering, Tasmania has, at least in legal terms, been part of Australia since Federation in 1901. Not to be confused with New Zealand, a separate country over 4,000 kilometres to the east of Australia, Tasmania is the island at the south eastern tip of Australia. It was previously called the Colony of Tasmania, between 1856 and 1901, and before that it was called Van Diemen's Land, between 1642 and 1856. Before then it was inhabited by the palawa people who lived there for about 42,000 years. They became isolated after being cut off from the mainland by the Bass Strait about 10,000 years ago, when sea levels rose as the ice age came to an end. In the last remaining local Aboriginal language 'palawa kani' the island appears to have been called 'lutruwita' (/lu-tru-wee-ta/), but no living speakers of any of the original Tasmanian languages exist. As audio evidence, we have a few barely audible sounds spoken by Fanny Cochrane Smith on a wax record from 1899 on which she sang traditional songs.

I'm mentioning this to illustrate that DeSoto asking the question: "Are Tasmania and Australia separate countries?" is, in my opinion, fundamentally misguided. More so because of an island, well, rock, Boundary Islet, that's split by a border, one half belonging to Victoria, the other half to Tasmania. Specifically, since 1825, the state of Victoria and the state of Tasmania share a land border thanks to a survey error made in 1801. If you're into Islands on the Air, or IOTA, it's part of the Hogan Island Group which for activation purposes is part of the Furneaux Group, which has IOTA designation OC-195.

One point to make is that today the DXCC does not mention Tasmania, either as a separate entity, or as a deleted entity. It was removed from the DXCC in 1947.

The DXCC list is pretty famous in amateur radio circles. It's not the only such list. I already mentioned the IOTA list which contains islands and island groups and their IOTA designations. There's also a list of 40 groups of callsign prefixes called CQ zones, published in CQ magazine, and a list of IARU regions maintained by the International Amateur Radio Union. There's also an ITU zone list, maintained by the International Telecommunication Union.

Each of these lists is essentially a grouped collection with an attached label.

The list of DXCC entities is copyrighted by the ARRL. If you want to use it for anything other than personal use you need to ask permission. In other words, if you write software that for example tracks amateur radio contacts and you make that software available for others to use, you officially need permission from the ARRL to use it to track a DXCC. If you're an amateur outside of the United States your peak body will need permission from the ARRL to issue any DXCC award.

The ITU, the International Telecommunication Union, is a United Nations specialised agency, part of our global community, owned by all humans. It peppers its content with copyright notices. The same is true for the International Amateur Radio Union, the IARU, the global representative body of all radio amateurs. It too peppers its content with copyright notices, even going so far as to add requirements that "(a)ny copy or portion must include a copyright notice" and that "(i)t is used for informational, non-commercial purposes only".

Let me ask you a question.

Can you achieve a DXCC without international cooperation?

Of course not. If you are an American amateur and want to get an award for contacting 100 distinct geographic and political entities, you can only do so by making contacts outside the United States of America.

As an Australian however, I have, according to the February 2022 version of the DXCC list, 340 countries to choose from, only one of which is the United States of America, and Alaska isn't part of the United States, apparently.

It might appear that I'm singling out the ARRL, but that's not true. CQ Communications, Inc. owns the list of CQ Zones, the ITU owns the list of ITU zones, the IARU owns the list of IARU Regions, and Islands On The Air Ltd. and the Radio Society of Great Britain own the IOTA list. Clinton B. DeSoto W1CBD became a silent key in 1949 and his copyright expired in 1999.

So, is grouping and labelling things sufficient to actually claim copyright? Can I claim copyright for all countries starting with the letter 'A' and call it the 'Alpha Amateur Award'? My preliminary list for the 'Alpha Amateur Award' includes Afghanistan, Albania, Algeria, Andorra, Antigua, Argentina, Armenia, Australia, Austria and Azerbaijan and because it's not part of the United States, Alaska. Which reminds me, to encourage amateur radio activity in continents that need more, I'll add Africa and Antarctica. Consider that the 2023 edition of the triple A.
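To labour the point that such a list is nothing more than a label attached to a grouping rule, here's a sketch in Python. The entity names are the ones from my list above, plus two non-'A' examples of my own for contrast; the function name is my invention:

```python
# The 'Alpha Amateur Award' is just a label attached to a grouping rule:
# keep every entity whose name starts with 'A'. The names below are
# illustrative, taken from the list in the text, plus two that the rule
# filters out.
entities = [
    "Afghanistan", "Albania", "Algeria", "Andorra", "Antigua",
    "Argentina", "Armenia", "Australia", "Austria", "Azerbaijan",
    "Alaska", "Africa", "Antarctica",  # the 2023 additions
    "Belgium", "Canada",               # excluded by the rule
]

def alpha_amateur_award(names):
    """Group and label: select the entities starting with 'A'."""
    return sorted(n for n in names if n.startswith("A"))

print(alpha_amateur_award(entities))
```

That's the entire intellectual effort behind the list, which is rather the point of the question that follows.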

A bigger question to ask is: "Why should I need permission to use any of these lists?"

Can I create a public repository on GitHub that has all these lists in a single place, so others could use them without needing to hunt? What if I wanted to reformat and reuse these lists to create an online service to show the relationship between each of these lists for use by all radio amateurs? What if I wanted to charge a subscription fee to pay for the service? What if I wanted to roll out a whole company behind it and pay people to maintain it?

I'm all for people creating things and receiving credit, but at some point we start to take away from the community instead of giving back to it. Are these lists really owned by the various organisations claiming copyright and requiring written permission for their use, or do they belong to all radio amateurs?

Oh, the 'Alpha Amateur Award' list is copyleft. Look it up.

I'm Onno VK6FLAB

Asking a professional in the community...

Sat, 07/01/2023 - 12:00
Foundations of Amateur Radio

In the earlier days of my career I worked in a computing centre at a university surrounded by people with different interests and experiences in computing. There were programmers, hardware engineers, technicians, sales people, administrators, educators, support staff, statisticians and even a librarian.

There wasn't a lot of socialising or foosball, but every now and then we'd bump into each other in the lunchroom and talk about things that were not work related. During such conversations I learnt that people had all manner of interests outside their work, they were volunteer firefighters, or building their house, or active in the girl guides and any number of other unrelated pursuits and skills.

The same is true for the people inside the hobby of amateur radio. I've met people who were submariners, tow-truck drivers, accountants, paramedics, radio astronomers, telco and broadcast engineers, doctors, IT people, lots of IT people, and plenty of other professions.

As you might know, I'm self-employed. I am now acutely aware of mixing business with pleasure because not that long ago, every single time I met another person outside my field I'd get asked about some computer problem or other. Similarly I've witnessed medical professionals being asked about specific and personal medical issues and every time I experienced it or noticed it, a little part of me shied away from either telling people what I did or asking others for professional advice.

Now before you think that I'm telling you not to talk about computers within earshot of me, that's not at all what this is about. It's about building an awareness that there are people in your community from all kinds of different backgrounds with different experiences, something which I've talked about many times before, but, and here's a new thing, some of those people do not want to give free professional advice, or be dragged kicking and screaming back into their day-job when they're out having fun.

There's a difference between talking about what a virus is and asking about which computer to buy, a difference between talking about the neurological aspects of mushrooms and asking if someone can help you with deciding which medication to use. There's a difference between talking about radio telescopes and asking to access laboratory measuring equipment.

If you're unsure where the line is, think of it in this way. If your mate is a plumber, it's one thing asking them what sand in your sink means and another thing entirely to ask them to dig up your backyard.

I'm not telling you how to live your life, I'm asking you to be considerate of those around you who might have a skill set that you lack and need, whom you've met through the amateur community.

An example of how you might navigate this process is to ask the person if it's appropriate to ask a specific question and to be prepared to hear "No". Or you might be surprised and find that they're happy to help, to a point. I'd encourage you to be mindful of that point.

In case you're wondering, nobody has been stepping on my toes and if you recently asked me a question, you haven't overstepped any lines.

At this point you might be wondering what this has to do with amateur radio and why I'm talking about it now. The answer lies in the nature and evolution of our community. If you look at us as we were a century ago, like I did extensively when I discussed the evolving nature of the so-called "Amateur's Code", apparently written in 1923 by Lieut.-Commander Paul M. Segal, you'll know that the community from last century is nothing like the community today.

I'm sure that you agree that today we're not Gentlemanly, we're not beholden to the ARRL, and we're not all male, to name a few obvious changes and as a result the Amateur's Code was updated, many times, to reflect our evolution.

Those changes came about because people had ideas, had discussions, wrote things down and shared them. That's what this is. A mark on the page saying that I'd like our community to be mindful of the expectations made of the members of the community around us.

Where are your boundaries and what did you do when someone stepped on them?

I'm Onno VK6FLAB

Planning and making lemonade

Sat, 06/24/2023 - 12:00
Foundations of Amateur Radio

The other weekend there was an amateur radio contest on. Not surprising if you realise that's true for most weekends. For a change, I knew about this contest before it started, because I missed out a year ago, so I did the smart thing and added it to my diary with an alert a month out.

In this particular contest there are points to be made by being a so-called roving station, that is, one that moves around during the contest, and in the past that's how I've participated and had lots of fun. So the die was cast and a plan was concocted.

Being a rover meant that I would be outfitting my car with my radio. It's been out of the car for several years, taken out when we had the transmission replaced, and never actually returned. I started making lists of everything I'd need, including learning that you can use a bench top power supply to charge a 12V battery if your trusty charger has let the smoke out. I went hunting for the cable that connects the front of the radio to the back and realised that it was still in the car, so I could cross that off my checklist.

I decided for the first time that realistically I could log using paper and save myself the heartache of finding a computer with a suitable battery and matching software, especially since I'd be operating with low power so making a gazillion contacts wasn't going to be a problem.

I went to the shops to get some road food. In my case I like to bring water and oatmeal bars, which keep me going through the night. One change was that the contest only ran for 24 hours, leaving less time for sleep.

I found my portable antenna tuner, plugged everything in, configured the radio for remote tuning, and tested it all on the bench in my shack.

In further preparation I packed my food, got a headlamp out, spare batteries, a pen and a spare, a ring binder for logging and my wristwatch to keep track of logging times.

The day before the contest I parked the car in the sun, extracted all the cables from behind the backseat, installed the radio, the battery, the head, the suction mount, the microphone, the speaker, the antenna tuner and antenna mount, and got everything where I wanted it.

In between rain showers I located the ropes I use to keep the antenna from breaking off the car when I'm driving, set it all up to length after hunting through the garage to find my multi-tap antenna to suit. Strapped that all together to the handhold in the cabin with a Velcro strap and called it a day.

The next morning I drove to my first activation location, installed the antenna on the 40m band, turned on the radio, tuned it, and called CQ Contest. Made my first contact about six minutes after I started. I was excited. Drove to the next location, made the next contact six minutes later. On a roll I drove to my third spot, where things came unstuck.

I spent the next two hours getting nothing. I changed both location and band, setting the antenna to 15m and after initially tuning once I couldn't get it to tune again. I spent an hour trying. Given that I wasn't far from home, I went back for a break and to pick up one piece of equipment that I should have packed when I started, my antenna analyser.

I tested the antenna and for reasons I still don't understand, it was only resonant on 19 MHz, not much good if you're trying to tune somewhere on 21 MHz.

I moved back to my first spot and changed to the 10m band. Three hours to the minute after my second contact, I made another one, this one outside the state.

By this time it had been raining steadily for four hours, despite a forecast of little or no rain. The car was stuffy, no way to open the window and stay dry, no contacts, no fun. I asked myself why I was doing this and decided that I'd learnt a valuable lesson and packed up and went home.

I did go out later in the afternoon to provide some moral support to a friend who had made six contacts, double my three, but by dusk we had both had enough.

My lesson for this week? Test the antenna before you go out and bring your analyser. I must add that I've been contesting for years and I've always packed the analyser but never ever needed it. This time I didn't and Murphy let me know that anything that can happen, will.

It might sound like a dejected wet cat story, but I learnt a valuable lesson and now I've got another challenge, to discover just why my trusty antenna stopped working. If I do find out I'll let you know.

What unexpected lessons have you learnt of late?

I'm Onno VK6FLAB

Where is your community and how resilient is it?

Sat, 06/17/2023 - 12:00
Foundations of Amateur Radio

During the week, prompted by a protest on popular social media site Reddit, I rediscovered that there are other places to spend time. It sounds absurd now, but until then much of my social interaction with the world was via a single online presence. This didn't happen overnight. Over the years more and more of my time was spent on Reddit engaging with other humans around topics of my interest, amateur radio being one of them.

As you might know, I'm the host of a weekly net, F-troop. It's an on-air radio discussion for new and returning amateurs that's been running since 2011 and you can join in every Saturday for an hour at midnight UTC.

In addition to the net, there's an online component. It captures items of interest shared during the on-air conversation. It's intended to stop the need to read out web addresses on-air, create a historic record of the things we talk about and allow people who are not yet amateurs to explore the kinds of things that capture our interest.

From 2014, F-troop online was a website that I maintained. After the announced demise of the service in 2020 I explored dozens of alternatives and landed on the idea to move to Reddit, which happened in March of 2021.

At the time of selecting Reddit as the successor to the website, I wanted to create a space where anyone could add content and discuss it, rather than rely on a single individual, me, to update the website every time something was mentioned. During the net these days you'll often hear me ask a person to post that on Reddit.

This illustrates, at a small scale, how the F-troop community shares its knowledge internally and with the wider community.

With the realisation that there are other places to spend time, comes an uneasy feeling about how we build our online communities, and how resilient they really are.

Before the Internet our amateur radio community talked on-air, or in person at club meetings, or shared their interests in a magazine, or wrote letters. Today we congregate online in many different communities. If one of those fails or loses favour, finding those people elsewhere can be challenging, especially if those communities prefer anonymity.

For quite some time now I have been thinking about how to build a radio amateur specific online community. The issues to surface, address and overcome are wide and varied. I created a list ... hands up if you're surprised ... I will point out that I'm sure it's incomplete, your additions and comments are welcome.

Funding is the first item to consider. All of this costs time and money. Amateurs are notorious for their deep pockets and short arms, but they're no different from much of humanity. If this community needs to endure, it needs to be financially sustainable from the outset.

Authentication and Identity is the next priority. If it's for amateurs, how do you verify and enforce that? What happens if an amateur decides not to renew their callsign, do they stop being an amateur? Should this community be anonymous or not?

Moderation and Content is next on the list. What types of content are "permitted"? What is the process to regulate and enforce it?

Is this forum public and accessible via a search engine, or private? Can people who are not yet amateurs benefit from the community and use it to learn?

How do you set rules of conduct and how do you update them? How do you deal with rule infractions and how do you scale that?

Who is this for? Is it decentralised across each callsign prefix, across a DXCC entity, or based on some other selection criteria?

Can you have more than one account, or only one per person, or one per callsign?

What about machine accounts, like a local beacon, repeater, solar battery, radio link, propagation skimmer or other equipment?

What about bots and APIs? If those terms don't mean anything to you, a bot, short for robot, is a piece of software that can do things, like mark content as being Not Safe For Work, or NSFW. It could enforce rules, look up callsigns, share the latest propagation forecast, check for duplicates, scale an image, convert Morse code, check for malicious links, or do anything else you might want in an online community. The way a program like a bot, a mobile client, a screen reader, or a desktop application talks to the community is through an API, an Application Programming Interface.
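To make that concrete, here's a minimal sketch in Python of the kind of rule a moderation bot might apply. The callsign pattern is a deliberate over-simplification, and the function names and post structure are my invention, not any real service's API:

```python
import re

# A hypothetical bot rule: flag posts whose author field doesn't look
# like an amateur radio callsign. The pattern is a rough sketch only
# (1-2 letter prefix, a digit, a 1-4 letter suffix); real callsign
# formats are more varied than this.
CALLSIGN = re.compile(r"^[A-Z]{1,2}[0-9][A-Z]{1,4}$")

def looks_like_callsign(author: str) -> bool:
    """Return True if the author string fits the sketched pattern."""
    return bool(CALLSIGN.match(author.upper()))

def moderate(post: dict) -> dict:
    """Attach a 'flagged' field, the way a bot might via an API call."""
    post["flagged"] = not looks_like_callsign(post.get("author", ""))
    return post

print(moderate({"author": "VK6FLAB", "body": "Propagation is up!"}))
print(moderate({"author": "anonymous", "body": "Buy my stuff"}))
```

In a real community the `moderate` step would sit behind the platform's API, which is exactly the access that's being priced.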

Incidentally, the protest at Reddit is about starting to charge for access to the API, something which will immediately affect software developers and eventually the entire Reddit community, even if many don't yet realise this.

What about system backups and availability? How seriously are we taking this community? Is there going to be a Service Level Agreement, or are we going to run it on a best-effort basis? How long is it acceptable for your community to be inaccessible?

What about content archiving and ageing? Do we keep everything forever, do we have an archive policy? What happens if a topic that's permitted one year isn't permitted a year later?

And those are just to start the discussion.

There are plenty of options for places to start building another community, but will they last more than a couple of years, or be subject to the same effects that a Coronal Mass Ejection causes on HF propagation, being wildly random and immensely disruptive?

At the moment I'm exploring an email list as a place to store our F-troop data and I intend to discuss archiving it in the Digital Library of Amateur Radio and Communications.

Where is your online community and how resilient is it really?

I'm Onno VK6FLAB

What is our legacy?

Sat, 06/10/2023 - 12:00
Foundations of Amateur Radio

Our hobby has been around for over a century. The Wireless Institute of Australia, or WIA, is the oldest amateur association on the globe, having just marked 113 years since formation. The American Radio Relay League, or ARRL, is four years younger, founded in 1914.

I'm mentioning these two associations because they documented their journey through many of the years since foundation. The ARRL has published QST magazine since 1915 and the WIA has published Amateur Radio Magazine since 1933.

Before the Internet and the Digital Library of Amateur Radio and Communications, magazines like QST and AR Magazine were among the ways of documenting and archiving achievements across our community.

If you find my professional biography online, you'll read: Experienced polyglot IT professional, software developer, troubleshooter, researcher, public speaker, educator, writer and publisher, founder and small business owner, podcaster, and licensed radio amateur.

It's fair to say that I've done a great many things across the technology arena. I have been writing software since before I was a teenager. At the time we used words like freeware and shareware, we copied lines of BASIC from the pages of the latest computer magazine, or recorded the TV teletext signal to access a programme. I recall typing pages of hexadecimal codes and running the result. Very satisfying to make sprites run across your screen.

In the decades since, technology has moved on. I've had a front-row seat to see that evolution happen. I've also witnessed one of the victims of the 1980s computer craze, the fundamental obliteration of its history. Much has been lost, either physically by destruction or disposal of boxes of magazines or the deterioration of audio cassette tapes once used to store software. I hold a Guinness World Record of Endurance Computing, set in 1989 during the Hobby Computer Club days, but you'll not find it anywhere other than a copy of the Dutch World Records that might be somewhere in my garage, or not. The twice-daily magazine we published over the three days of the event, Elephant News, was lost to time.

I'm mentioning this because this loss is not limited to the 1980s, it's happening here, today. As our hobby evolves into the software realm, we need to consider just how that legacy continues beyond our own lifetime. For example, we have lost access to the fundamentals of how exactly HAM DRM works, we've lost the source for VK Contest Logger to name another, and the collected designs by so-called antenna guru L.B. Cebik W4RNL (SK) are scattered around the Internet, but as far as I know, none of it is complete.

Fortunately we have tools at our disposal to keep our history. As I mentioned, the Digital Library of Amateur Radio and Communications or DLARC is an Internet Archive project to catalogue and store current and historic amateur media. In the 30 weeks since starting in October 2022 it has grown to 75,000 items and continues to grow under the expert stewardship of Program Manager, Special Collections, Kay, K6KJN.

The DLARC is not the only tool at our disposal and documentation isn't the only way we share technology in our hobby. More and more of what we do is based around software. We use programs to process signals, to generate and receive different modes, to create logs, to model antennas, to log propagation, and that list grows daily.

One of the most significant changes in software since my childhood is that of the introduction of Open Source Software in 1998. I've spoken about this several times before and I recently pointed at Not1MM as an example of an Open Source contest logger, but that is not the only project available.

If you visit GitHub.com and search for "amateur radio", you'll discover over a thousand projects showing a healthy ecosystem of activity from people like Daniel EA4GPZ who shared gr-satellites, a collection of telemetry decoders that support many different amateur satellites.

You'll find APRSdroid by Georg DO1GL, which allows radio amateurs to view and report locations using the APRS network.

There's an Arduino based rotator interface by Anthony K3NG, an advanced ham radio logger called CQRLOG by Petr OK2CQR, a radio modem by Dan KF7IJB, remoteAudio by Tobias DH1TW, and the list goes on.

I must also point out that I'm only naming the person behind each repository because, as is the whole point of Open Source software, anyone can contribute in different ways. You can make a copy of the source-code and write your own version, a so-called fork, or you can create trouble-tickets to explain a bug or problem. There are ways of contributing fixes and ideas, and all of that can be done by anyone, anywhere. Many of the projects I've just shared are the combination of years of effort by many different people.

And that is the point of this conversation. Amateur Radio is a collaborative affair. We learn and share from the experience of others. We document how we built a schematic, or an antenna, or managed to achieve some feat and share that with the rest of the community.

It's not limited to hobbyist projects either. I purchased my Analog Devices ADALM-Pluto SDR hardware specifically because it was Open Source and came with all manner of tools and code that I could tease apart, improve on and change to my own requirements.

As we make more and more use of technology in our hobby, we run the risk of repeating the mistakes of the 1980s if we don't start making our efforts public and accessible to the community at large.

Imagine what our hobby would look like if we stopped sharing our successes and failures.

So, next time you look at some software to use, a calculator to build, or a thing you've learnt, consider what a technology-appropriate way to share that might be.

What tools do you use today and how many of them are Open Source? How much of what you do is accessible to others and what happens if you stop paying for the hosting fees on your website?

I'm Onno VK6FLAB