Saturday, November 5, 2016

That Phat Tube Sound

I worked as a chief engineer for several radio stations in the late '70s and early '80s. At that time, there was still a lot of tube electronics in service, given that industrial-grade electronics can be expected to last 30 or 40 years if well maintained. And solid-state technology had only recently made it into high-powered transmitters. Given that a 50 kW transmitter could cost $250,000, there's a strong incentive to make it last.

I had occasion to work on much more powerful tube amps than most hi-fi hobbyists ever could. Our 1 kW blowtorch, a Gates BC-1T, had a 1,200-watt push-pull audio power amplifier modulating the final RF stage of the transmitter. Frequency response, noise and distortion were all very good. Replace the modulation transformer with an 8-ohm output transformer, and it could have driven a loudspeaker to ear-shattering hi-fi loudness.

The amplifier consisted of four 807 drivers (pictured lower rear) and two 833A finals operating in class AB push-pull (V42 & V43, rear). The two knobs at the front adjust the idle bias current.

Anyway, during my tenure as a broadcast engineer, I had occasion to build numerous audio amplifiers. Line amps, voltage controlled amps, headphone amps, studio monitor amps. They were all solid state amps, because I wanted them to be, well, modern. And being an audiophile, they all measured and sounded great, if I do say so myself. I was especially proud of my studio monitor power amps.

So, fast forward to 2016. I thought it would be fun to build a little hi-fi amp from two 50C5s and two 12AV6s, having spent most of my childhood listening to All-American Five radios made from those tubes. As you know, the AA5 radios were the culmination of reductio ad absurdum, in terms of building a radio from the fewest possible parts.

AA5 radios had no power transformer; the high voltage was rectified directly from the AC power line. When the filter caps were new, they got maybe 150 volts, so the tubes had barely enough voltage to operate adequately. The filaments were connected in series, so their voltages added up to 121 volts. And because of all the rampant hum, the output transformer and speaker were specifically designed for poor low-frequency response. The 50C5s were operated in full pentode mode, single-ended, with cathode bias, at 10% distortion. They put out maybe 1 watt (while using about 50).

Well, you get the idea. What about building a nice ultra-linear power amplifier using two 50C5s and two 12AV6s? Two 12AV6s are essentially a 12AX7, if you ignore the extra diodes in the 12AV6. I could put those four tubes' filaments in series and power them right off the power line (at 124 volts -- close enough). And I could use a power transformer for the B+, with a true ground reference and full-wave rectification, i.e., a safe and decent power supply. Yes, I would use silicon rectifiers for power.
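The filament arithmetic is just the series-string sum. Here's a trivial sketch using the nominal heater voltages implied by the tube type numbers (the actual ratings, like the 12AV6's 12.6 V heater, land within a volt or two of these):

```python
# Series heater strings must add up to roughly the AC line voltage,
# since the string hangs directly across the line with no transformer.
# Voltages here are the nominal values from the tube type numbers.

aa5_radio = {"12BE6": 12, "12BA6": 12, "12AV6": 12, "50C5": 50, "35W4": 35}
proposed_amp = {"12AV6 (x2)": 2 * 12, "50C5 (x2)": 2 * 50}

for name, string in (("All-American Five", aa5_radio),
                     ("Proposed amp", proposed_amp)):
    print(f"{name}: {sum(string.values())} V heater string")
# All-American Five: 121 V heater string
# Proposed amp: 124 V heater string
```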

But after studying this out, I came to realize why no sane person designs with tubes anymore. There are so many constraints and extra considerations in constructing a high-quality tube amplifier that it isn't worth it. First of all, my series-connected heaters would have caused terrible hum problems, and there's no way I could have fixed that without resorting to a DC filament supply. Now things are starting to get weird. Not only that, but getting an output transformer with the correct taps for 50C5 ultra-linear operation would be difficult. I can't even find out what percentage tap is optimal for a 50C5. I guess nobody has ever attempted it.

It would have been a fun project, but I could design a really great transistor amplifier for less cost and less effort. And if I wanted "tube sound", I could build an amplifier with field-effect transistors. They're basically just like tubes, but without the filaments, and they can run at safer voltages.

Wednesday, October 19, 2016

WxService Update Available

WxService ow4j161019

  • Fixed a problem with reporting to CWOP
My weather server spontaneously started having problems reporting to cwop.aprs.net:14580, after about 4 years of continuous service. Evidently the CWOP servers have been made more "strict", and sloppy formatting is no longer tolerated. I implemented the CWOP protocol from this documentation, and I am pretty sure I followed it to the letter. Nevertheless, either this has changed quietly in the meantime, or I never had it right. Anyway, it's working now. 
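For the curious, here's a rough sketch of the kind of APRS "complete weather report" packet CWOP expects over APRS-IS. This is not WxService's actual code; the callsign, readings, and version string are made up for illustration. The point is that every numeric field is fixed width and zero padded, which is exactly where sloppy formatting gets you rejected:

```python
import socket

CALL = "EW0001"                       # hypothetical CWOP station ID
SERVER = ("cwop.aprs.net", 14580)

def wx_packet(day_hour_min, lat, lon, wdir, wspd, gust, temp_f, humidity, baro):
    # APRS "complete weather report": timestamp, position, then the
    # weather symbol '_' followed by fixed-width, zero-padded fields.
    return (f"{CALL}>APRS,TCPIP*:@{day_hour_min}z{lat}/{lon}"
            f"_{wdir:03d}/{wspd:03d}g{gust:03d}t{temp_f:03d}"
            f"h{humidity:02d}b{baro:05d}")

with socket.create_connection(SERVER) as s:
    # CWOP accepts a passcode of -1 for weather stations.
    s.sendall(f"user {CALL} pass -1 vers WxDemo 0.1\r\n".encode("ascii"))
    pkt = wx_packet("051830", "4903.50N", "07201.75W",
                    wdir=220, wspd=4, gust=5, temp_f=77, humidity=50,
                    baro=10132)       # barometer in tenths of millibars
    s.sendall((pkt + "\r\n").encode("ascii"))
```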

(Download...)

Saturday, August 6, 2016

Vinyl Stereo Encoding -- Not That Anybody Cares

Audio on vinyl is having a resurgence among audiophiles and millennials, despite the fact that CD audio, and higher-resolution audio on DVD and Blu-ray, is orders of magnitude better in every measurable way. It is also more convenient and generally skip-proof (when played from memory or solid-state storage); and if you don't like compressed MP3, there's always lossless.

So why is vinyl still popular? Well, it's fun! I enjoy listening to my old records. I bought them in the 1960s, '70s and '80s, and never replaced them with CDs. Many were never even available in digital formats, except the ones I transcribe myself. And frankly, most of them still sound as good as the day I bought them. I always played them on decent equipment. But there's more to it than that.

By the time digital audio hit the scene, the state of the art in vinyl (analog) recording had reached a very high level of sophistication. Nothing like the 96 dB dynamic range, wider bandwidth, ruler-flat frequency response, and vanishingly low distortion of digital, but dynamic range on vinyl is in the high 70s of dB, save for the intrusive clicks and pops from damage or defects in the vinyl itself. The human ear is quite tolerant of distortion and frequency response errors, so we have that going for us.
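Incidentally, that 96 dB figure is nothing mysterious; it's just the ratio between full scale and one quantization step of a 16-bit sample, expressed in decibels:

```python
import math

# Dynamic range of 16-bit linear PCM: full scale vs. one quantization step.
print(20 * math.log10(2 ** 16))   # ~96.3 dB
```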

While I never bought into the lie that vinyl sounds better than digital (even moderately compressed MP3s), vinyl does sound pretty darned good. Some records sound almost as good as a CD. The trouble is, some CDs sound really bad, and some vinyl sounds really bad, making comparison hopelessly subjective. It depends on how much care went into the engineering and manufacturing.

Which brings me to today's topic: how is two-channel stereo encoded into one groove of a vinyl record? Well, after some really horrible ideas involving dual tonearms (which would have been completely incompatible with mono, taken up twice as much space, and had intolerable phasing problems at high frequencies), the industry finally settled on the +/-45 system.

There are several misconceptions about the +/-45 degree encoding scheme, and I intend to clear those up today. To begin with, we need to understand that the cross section of a record groove is a 'v' shape, with the walls separated by 90 degrees. In other words, the outer groove wall is +45 degrees from a vertical line bisecting the groove, and the inner wall is -45 degrees, for a total of 90 degrees.

+/-45 Degree cutting head (upside down)
The stereo encoding is sometimes described this way: the outer wall is modulated with the right channel, and the inner wall with the left channel. Well, sort of. But that is more a side effect than a specification. Another way of stating it is that the right channel is modulated on a plane -45 degrees from vertical, and the left channel on a plane +45 degrees from vertical. It is possible to separate the two channels on playback because vectors that are separated by 90 degrees are orthogonal -- independent.
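Here's a little sketch of that orthogonality, with the stylus motion treated as a 2-D vector in the plane perpendicular to the groove (the axes and scaling are my own convention, not any cutter's spec):

```python
import math

# Stylus motion as a 2-D vector: x is lateral, y is vertical.
# Decoding projects that vector onto two orthogonal axes at
# +45 and -45 degrees from vertical.

S = 1 / math.sqrt(2)

def decode(x, y):
    left  = S * (x + y)   # projection onto the +45 degree (left) wall
    right = S * (x - y)   # projection onto the -45 degree (right) wall
    return left, right

print(decode(1.0, 0.0))   # pure lateral (mono): equal L and R
print(decode(0.0, 1.0))   # pure vertical: L and R out of phase
```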

Neumann VMS 80 Record Cutting Lathe
(If you look closely, you can see the cutting head just above the right side of the turntable.)
So it seems this orientation will modulate the groove walls independently with each channel, but what about mono? A mono cut means that both left and right channels are identical, and a +/-45 degree cutting stylus will move laterally, in the same plane as the record. Both groove walls move together in the same direction. With a real stereo program, the cutting stylus moves in all directions. If you were to look at an image traced out over time, it would look like a Brillo pad. So the groove walls are not really independent.

In fact, a better way to visualize this is to imagine L+R being recorded laterally and L-R being recorded vertically. If both channels are equal, L-R goes to zero, and you have a mono record. I am not aware of any record cutters that work this way, but it might actually make the stylus motion and groove excursions easier to control. Incidentally, L+R/L-R (also called mid-side, or M/S) is totally compatible with +/-45: the vector sum of the stylus motion is identical with either matrix.
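To make that compatibility claim concrete, here's a sketch (same made-up axis convention as above) that cuts a groove with an M/S matrix and plays it back with an ordinary +/-45 pickup; left and right come out exactly:

```python
import math

S = 1 / math.sqrt(2)

def ms_cut(left, right):
    """Cut M = L+R laterally (x) and S = L-R vertically (y)."""
    return S * (left + right), S * (left - right)

def pickup_45(x, y):
    """Ordinary +/-45 pickup: project stylus motion onto the two walls."""
    return S * (x + y), S * (x - y)   # (left, right)

for L, R in [(1.0, 0.0), (0.3, -0.7), (0.5, 0.5)]:
    Lp, Rp = pickup_45(*ms_cut(L, R))
    assert abs(Lp - L) < 1e-12 and abs(Rp - R) < 1e-12
print("An M/S cut decodes perfectly on a +/-45 pickup.")
```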

Stereo phono pickup (cartridge)
The main reason that phono pickups (cartridges) are all +/-45 designs is that they can decode the left and right channels directly, whereas an M/S cartridge would require additional decoding. Not that it's difficult; stereo phono preamps could have had this capability since day one, if anybody had thought of it. But it would have added a few cents to the cost of each unit sold. So +/-45 it is. But any record cutter could switch to M/S at any time (even on the same record!) and still be totally compatible with all +/-45 pickups (decoders). As I say, the vector sum of the stylus motion is identical.

The fact is, there is good reason to want to record lateral and vertical components separately. Lateral recording is more resistant to distortion, and can be recorded "hotter" than vertical can. So limiters could handle each plane separately, possibly having less audible impact on the sound quality of the end product. It might also be advantageous to engineer a cutting head optimized for separate lateral and vertical excursions, vs. two identical left and right coils, which could be a compromise for vertical and lateral cuts.

Not that anybody cares about this anymore. I don't think there are many new breakthroughs in record cutter designs, these days. The payback would be marginal, and maybe what we have is good enough for the market share that vinyl has, in the big picture.

Thursday, January 14, 2016

Charging Your Smart Phone

There seems to be a lot of misinformation about "how to charge your smart phone". Such as not to let your battery get too low. Not to charge it overnight. Not to leave it charging after it has fully charged. Not to run your phone while charging. Not to use a "different" charger from the one that came with the phone. And probably more that I haven't seen yet.

Here's the deal: The only thing the charger does is provide power to the phone. It has no idea about the state of your phone's battery. Only the phone knows that. The phone has hardware inside that controls how much charging current to provide to the battery. It knows the temperature of the battery. Some even have humidity sensors. If the phone is on (either on standby or in active use*), the charger, or the battery, or both, will power the phone. It doesn't matter!

Here's what I do: When I have power available, I plug it in. When I don't have power available, I use the battery. If the battery gets too low, the phone will go into battery saver mode, and eventually shut down when the battery gets down to 1% or so. And a smart phone knows how best to discharge and to charge the battery, based on the state of charge, temperature and other things. You may even see the battery management strategy improve with software updates. That's why they call them smart phones. They do these things, so you don't have to think about it. Just use your phone, and don't worry about the battery.

*How often do you use the GPS on a long trip, with the phone plugged into the car's USB the whole time? Yeah, the phone gets hot, and the battery charging hardware, along with the phone's software drivers, calculates the proper charging profile to use.

Monday, December 14, 2015

Expansion Storage Support: Lame, Lame, Lame!

I have three mobile devices that accept expansion storage via a MicroSD card. Or so they say. And yes, you can plug in a MicroSD, and it gets "recognized". You can even explicitly tell the device to put data there. But that is about the extent of it; usability is almost nil.

Example one: After a recent map update, my Garmin Nuvi GPS told me that my internal storage was almost full, and that I should consider adding more. So I added a MicroSD to double the storage. What happened? I can now see two "drives" on the device, but the GPS OS has no idea how to use the additional storage. There is no way to span volumes that I can see (and I have Googled this). I can put photos on the expansion storage, I suppose. Yeah, the GPS can display photos. Whoopee. But if the maps get much bigger, I'm boned.

Example two: I purchased a Motorola DROID RAZR M smartphone. It had a reasonable amount of storage, but I added a MicroSD to double the storage. What happened? Again, Android seems clueless about "just using" this extra storage. I can explicitly store my files there, but I can't say "all user files go on the expansion SD", nor is there any way to span volumes that I can see (and I have Googled this).

Example three: I received a Dell Venue 8 Pro with Windows 8 as a gift. The primary storage is about 60 GB. So I added a MicroSD to double the storage. What happened? Well, Windows recognized the "external drive". It would even allow me to move all of my user libraries over to it - by changing the "Location" property - something you have to do for each library. I wish Windows would support simply moving the "Users" directory over to another volume, or better yet, span volumes. But noooo!

Even worse, moving to the latest Windows 10 update failed because I had some files on an "external" drive. What's more, OneDrive cannot sync to an "external" drive. Jeez-louise! Shouldn't I get to decide if I want to treat a drive as "external" or not? A microSD that I tuck inside a covered slot that isn't even accessible when the case is installed, doesn't seem very "external" to me! Windows could just ask.

Bottom line, I have three microSD cards that have successfully "expanded" storage in three devices, and I have no usable way to make any practical use of it. I'm a technology geek; if I were really motivated, and I had the time, I could probably hack a work-around. But casual users? Fuggedaboudit! That's lame! Come on, OS vendors! Can't you get creative about making expansion storage a plug-and-play proposition?

Sunday, August 2, 2015

A Good Swift Backup

What this world needs is a good, convenient backup system. I realize that different people have different needs, but following a recent SSD failure, I can tell you what I need.

I had been doing regular backups to a 0.5 TB HDD, using Windows 7 scheduled backups. I "let Windows decide what to back up". That was probably a mistake. Had I chosen the custom backup, I could have selected "include a system image" with the scheduled backup. Then, I could have restored my system to a new drive from the system image. Bam! Just like that.

Instead, I had to re-install Windows and locate OEM drivers for the devices that Windows setup seemed to have no clue about, drivers I could not get via online updates, since the NIC was one of the things Windows didn't recognize. I don't think I ever had an OEM driver DVD. I don't remember having to deal with that on the initial setup, but I digress. Anyway, the incremental backups that I did have were incomplete, and the entire feature seems aimed at capabilities I never cared about (the ability to revert to an earlier version of a file that I created and then edited). So it was a major headache that I thought I had taken precautions to avoid.

Okay, so Windows 7 has the backup capability I need: scheduled system image backups. So what's the problem? The problem is that this capability has been removed from Windows 8.1 and, evidently, Windows 10, in favor of "file history", which, as I already pointed out, isn't what I need. Automated insurance against hardware disasters is what I really need.

Something that would be even better is an automatic system image backup that would start when I plug in a designated backup drive on a USB port, replacing the existing image, if any. A removable USB drive makes sense, because I want to keep backup drives offsite, and not connected full-time to a computer. (The reason for not keeping it connected full time is to help avoid attacks such as CryptoLocker.)
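Just to make the idea concrete, here's a rough sketch of such a watcher, leaning on Windows' built-in wbadmin tool to do the actual system image. The marker-file convention, candidate drive letters, and polling interval are all my own inventions for illustration:

```python
import os
import subprocess
import time

MARKER = "auto-backup-here.txt"    # hypothetical marker file on the backup drive
CANDIDATES = [f"{c}:\\" for c in "DEFGH"]

def find_backup_drive():
    """Return the root of the first plugged-in drive carrying the marker."""
    for root in CANDIDATES:
        if os.path.exists(os.path.join(root, MARKER)):
            return root
    return None

while True:
    drive = find_backup_drive()
    if drive:
        # -allCritical includes everything needed to restore the OS;
        # the image lands in \WindowsImageBackup on the target drive.
        subprocess.run(["wbadmin", "start", "backup",
                        f"-backupTarget:{drive[:2]}",
                        "-include:C:", "-allCritical", "-quiet"],
                       check=False)
        break              # one shot per plug-in; a real tool would wait for removal
    time.sleep(30)         # poll for the drive every 30 seconds
```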

Maybe there's a market for what I'm selling here.

Tuesday, September 30, 2014

Microsoft Windows 10 Was Unveiled Today

I have always been highly critical of the way Microsoft callously abandoned Windows 7 users by making the Windows 8 workflow completely schizophrenic and foreign-feeling to desktop users, focusing on the Metro/Modern look and feel, aimed primarily at touch screens and tablet users.

The desktop interface that we were all familiar with was almost an afterthought, rather like the DOS window in Windows: it's there if you really think you need it, but we think you're going to like Metro so bloody well that you'll never go back. But we went back. Worse yet, Microsoft ripped out the "Start" button and menu. Experienced Windows users no longer had their "anchor" to use as the focal point for navigating around the desktop.

A fundamental rule in software development is that you don't remove features; you deprecate them, meaning you can hide them, or make them configurable but turned off by default. But no: you're going to use Windows the way we say, and you're going to like it.

Some things improved with Windows 8.1 after the blow-back and slow uptake, which restored some of the features whose removal had been most inconvenient, but it still didn't cut it. Of course, we have gotten used to Windows 8 over time, but many of us still miss some of the old features, and we still marvel at its screwy split personality.

So Windows 10 appears to have added back better desktop support, a seamless transition between touch and desktop environments, and the ability to merge the two experiences without the schizophrenia that plagues Windows 8. I'm waiting to get my hands on an advance copy of Windows 10.