Fixing an HP 8012B Pulse generator

Introduction

I bought an HP 8012B pulse generator, mainly to have a small device I can use for generating clock signals. However, the device I received needs some work. The buttons are sticky and very intermittent, and I also noticed that the transformer got really hot while the device was operating. The device does produce some signals, but since the buttons are intermittent, it’s hard to get anything useful out of it.

Cleaning the buttons

If you’re like me and like to mess around with old HP gear, you probably know that HP gear is well built and engineered in such a way that taking a device apart is often easy, giving good access to the internals for maintenance or repair. Well, with the HP 8012B this is only partly true, maybe because this device is from the mid/late 70s. Access to the internals is easy enough: both the left and right panels are held by a few screws, and once these screws are taken out the panels can be removed.

Getting the front panel off is quite a different story. It’s a fiddly job, so take your time. Make sure to take pictures before taking the device apart. Some cables must be de-soldered because the boards need to come out. Well, actually the boards can be left in, but then it’s quite tricky to get to the screws.

Once the front panel was removed I gave all the potentiometers and switches a treatment with DeoxIT. After operating them for a while they feel much more reliable.

Replacing the capacitors

I suspected that the transformer was getting hot due to some capacitors that had gone bad. Before removing the electrolytic capacitors, take pictures, because there are no plus or minus markings on the boards. I tested a few capacitors with my trusty HP 4261A LCR meter and found quite a few bad ones, so I decided to replace every electrolytic capacitor. I also checked a few resistors, but they were in spec, so no need to change them.

After replacing all the caps I started to reassemble the device. This is also not as easy as it sounds. A few wires from the transformer need to be soldered onto the board, and since the transformer is heavy and bolted onto the back plate, I had to keep the back plate somewhat in place while soldering on the wires, without burning anything with my hot soldering iron.

 

After a lot of sweating I managed to get everything back together, and then I realized I had forgotten to replace the power-on light bulb... Doh...

Replacing the power on light

The power-on light was burned out, so I needed to replace this as well. Ideally you take the front panel off, or replace the bulb while the device is already apart. Since I forgot to do that, I would have to take the whole thing apart again, which I really didn’t want to do. So I decided to try to fit the new light bulb with the front cover attached. That’s not easy to do, but I managed it.

The end result is one working HP 8012B. This device is going to see plenty of use in my lab: it’s a very small device, ideal for generating clock signals, and it also allows me to test ICs.

Replace the hard disk with a CF card in a HP 1660ES Logic Analyzer

Introduction

The benchtop logic analyzers from HP, namely the HP 1660E/ES/EP series, have a built-in IDE hard drive. While this hard drive is not used during the boot process of the logic analyzer, it can be necessary to initialize some of the modules. Without the hard drive you may see errors like: not enough room to initialize modules. Since these drives are old and can fail, it might be smart to swap them out for a CF card.

 

The internal drive

The internal drive is known as HP part number 0950-2801 or F1385-69100, which turns out to be an IBM 2.5″ 2.1 GB laptop hard drive, model DYKA-22160.

The plan is to replace this hard drive with an IDE-to-CF adapter and a 2 GB CF card.

Creating a Hard drive image

This should be the easy part. The process should go along the lines of:

    • Open the Logic Analyzer
    • Take out the hard drive
    • Hook the hard drive up to a computer with a USB external hard drive docking station
    • Create an image from the hard drive
    • Restore the created image to a CF card

Well, it turns out not to be that easy, at least if you’re using macOS. In the past I tried several USB devices which should be able to present an external hard drive as a usable device under macOS. However, most of those external USB hard drive docking thingies don’t work properly: either the USB device itself is not recognized by my iMac, or the device is recognized but doesn’t show any disk in macOS.

“Why don’t you use Windows for that task?” I can almost hear you ask. Well, I don’t have any hardware lying around to run a physical Windows system on. Well, I have a very old laptop which runs Windows XP, but it only has very slow USB ports, which brings its own limitations to the table.

I do use a Windows 10 virtual machine, but for that to work the USB device must first be recognized by macOS.

So step 1 is to find a USB device which can read at least laptop IDE drives and CF cards, and which works under macOS.

Let me introduce the Tccmebius TCC-S862

After a lot of searching I found the Tccmebius TCC-S862-DE USB 2.0 hard drive docking station. Reading about this device, it looks like it is supported by macOS. While not cheap, it’s not too expensive either, so at 28.00 euros on Amazon it’s worth the gamble.

Quick review of the Tccmebius TCC-S862

There are a couple of different versions of the Tccmebius TCC-S862. The TCC-S862 can read XD, TF, MS (Duo/Pro), CF and SD cards. It is also able to read IDE disks (3.5″ and 2.5″) and SATA I/II/III, and it even claims to support SSDs. That is a lot of functionality in one USB device.

Other models have USB 3.0 support, for instance. However, I read that it causes problems; some people had to use a USB hub to get it working. Since I wanted a device which could read CF, SD, IDE and SATA, I picked this model.

Using the Tccmebius TCC-S862

While this device works under macOS without problems, not all is good. The device feels like very cheap plastic. Inserting a CF card is scary: it’s easy to bend the pins (already done that), the CF card goes in upside down, and it’s difficult to seat.

Placing a 2.5″ disk is also tricky. The manual states to remove any metal backplate, and I now understand why. The disk sits flat against the back side of the slot, which makes it impossible to see where the pins and the connector mate, again risking bent pins.

Making the drive image with the Tccmebius TCC-S862

Once the hard drive and CF card are inserted without causing any damage, the disks show up as devices under macOS.

To get a list of hard drive devices use:

sudo diskutil list

That’s the good news. The next step is to use the ‘dd’ command to create an image from the hard drive.

This is done by using for example the command:

sudo dd if=/dev/rdisk3 of=hp1660es_disk.img bs=4096

The bs parameter (block size) is normally chosen for whatever gives the best performance. However, I discovered that when I set bs to a high value, the error:

disk /dev/disk3 is unconfigured

pops up. I also tested this under Windows 10 in a VM, and I get errors with the device there as well when I choose the value too high.

And yes, you can use the dd command under Windows as well. I’ve installed Git under Windows, and that comes with some Unix tools, like ‘dd’.

So after leaving the bs parameter alone, I could create a disk image, which is great. Writing the created hard drive image back onto the 2 GB CF card was easy to do with the dd command as well. I used a command like:

sudo dd if=hp1660es_disk.img of=/dev/rdisk3

(I removed the IDE HD at this point, and the CF card reported itself as disk3.)
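To be a bit more confident that the copy is good, the image and the CF card can be compared by hashing them. A minimal Python sketch, assuming the CF card is still /dev/rdisk3 and is at least as large as the image (reading the raw device needs sudo):

import hashlib
import os

def sha256_of(path, length, chunk_size=1024 * 1024):
    # Hash the first `length` bytes of a file or raw device.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        remaining = length
        while remaining > 0:
            chunk = f.read(min(chunk_size, remaining))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest()

image = 'hp1660es_disk.img'
size = os.path.getsize(image)

print('image  :', sha256_of(image, size))
print('CF card:', sha256_of('/dev/rdisk3', size))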

So while the Tccmebius TCC-S862 is not very user friendly, it kind of works. Maybe there are better solutions out there, I don’t know, but having a device which actually works under macOS is a big plus. Still, think twice before you buy this device.

Installing the CF card into the HP 1660 Logic Analyzer

After the CF card was prepared by writing the disk image onto it, installing the CF adapter and CF card into the logic analyzer was easy. For now I used Kapton tape to make sure the CF adapter cannot short out, and also taped the adapter to the chassis. This doesn’t look nice, but it’s a temporary fix until I know for sure it works reliably. And we all know how permanent a temporary fix is... right?

 

Finally I upgraded my soldering station

Introduction

This is my daily-use soldering station. As can be seen, I have used this station quite intensively. It’s an HQ/Solder 30. It can deliver 48 watts and it is relatively cheap: when I bought it a couple of years ago it was around 70 euros. The only problem I encountered was that the soldering iron itself stopped working after a couple of years. With a 30 euro replacement I could continue to use the station, and it worked very well; I soldered a lot with it. However, I came across more and more PCBs with big, heavy ground planes, and this station simply can’t deliver the power (and thus heat). So after a lot of thinking and postponing the decision, I finally got around to upgrading this station.

Requirements for the new solder station

As said, I ran across big ground planes more and more, so the next soldering station must be able to handle this. I made a small list of requirements, partially based on the experience I gained with the HQ/30. The short list:

    • More power than 48 watts
    • Active tips
    • Broad selection of tips (for example for “drag” soldering)
    • Light soldering handle
    • Small footprint
    • Easy operation of the station itself
    • It must be easy to swap tips (on the HQ I have to wait for it to cool down, which is a nightmare)

Why active soldering tips?

One of the requirements is to have active soldering tips. The reason is that the heating element is built into the tip itself, as is the temperature sensor. This has the benefit of good thermal coupling, and thus a better transfer of heat. Also, if the tip cools down due to a large ground plane (for example), it will recover faster.

However, there are some downsides to an active tip system:

    • The tips are more costly than traditional tips
    • The soldering station itself may be more expensive
    • The active tip is more prone to temperature drops

The last point can be addressed by making sure the soldering station has enough power. Also, there are now clones on the market which are cheaper than the established brands and also provide cheaper active tips. However, take into account that the materials used in those tips are less durable than in the more expensive tips.

A big upgrade

I looked around for a good replacement, taking the requirements into account, and finally came to the conclusion that it’s better to invest more money than to try to save some by going for a cheaper solution such as the clones. So I bit the bullet and decided to go for a JBC station. These stations are not cheap, but in every test I have seen, the JBC just leaves all the others behind in terms of performance. I also looked at Hakko stations, for example, but they don’t have a small footprint. While they do have a large range of different tips, the JBC has even more choice, and the operating experience is better on the JBC.

Comparing the two stations

This isn’t a review of both soldering stations, but the question of how the two compare might arise. Well, comparing the two isn’t really possible. I tried to solder some components on a big ground plane with the JBC, and it has no problems with it, while the HQ/30 isn’t able to melt the solder. The same goes for heat-up time: the JBC reaches its working temperature in a few seconds, while the HQ/30 takes its time. Comparing a 70 euro soldering station to one which costs around 400 euros is not a fair comparison, and comparing a traditional soldering tip to an active tip doesn’t make much sense either.

That being said, a JBC tip comes at a price which is almost half the price of the HQ/30 station: a standard JBC tip costs around 25 euros, and some tips are above 30 euros. That’s something to consider. The HQ/30, at least for me, is a fine soldering station for general soldering jobs. The temperature may not be accurate, but for most of the things I solder this is not a big deal.

Learning to solder with a cheap station like the HQ/30 is a good option, and for me it was a great learning experience. However, now that the time has come to upgrade, I decided to invest the money and leave the cheap soldering stations behind. With this JBC I should have a soldering station I can trust to perform and to work when I need it. The HQ/30 is going to be my second soldering station.

What soldering station to buy when you start?

When someone starts with electronics, the question of which soldering station to buy comes up. Often the suggestion is to start with a cheap soldering iron or station. Obviously, if you don’t know whether you’re going to solder a lot, there is no point in looking at expensive soldering irons or stations. But if you’re starting out and you have the money to spend, it might be smart to invest in a station like this one. In the end you get a soldering station which will last a long time, plus the luxury of active soldering tips, which make the heat transfer much more direct and consistent. If you don’t have the budget, there are some clones which seem to be very good as well.

Nanovna black-and-gold unstable after firmware upgrade. How to fix?

Introduction

After getting a NanoVNA I tried to upgrade the firmware, and that is where I found out about the enormous number of different hardware versions, which makes it almost impossible to find out which firmware is the right one. The question is: where to start?

Where to start?

The NanoVNA is a complex device, and the different hardware types make updating the firmware even more complex. So when getting a NanoVNA the learning curve can be quite steep. And when you have finally figured out which firmware to get, it’s not funny to discover that the firmware you just installed makes the NanoVNA unstable.

When getting a NanoVNA there are a couple of resources. For instance, subscribe to the different NanoVNA groups.io lists:

      • nanovna-users (https://groups.io/g/nanovna-users)
      • nanoVNA V2 Users Group (https://groups.io/g/NanoVNAV2)

There are a lot of postings and wikis to go through, which also help to understand more about the hardware versions of the NanoVNA.

Unstable firmware

The NanoVNA I got is a so-called “black-and-gold” version. This is a clone based on the NanoVNA version 2 made by HCXQS in collaboration with OwOComm. According to their site, https://nanorfe.com/nanovna-v2.html, the latest official firmware version is ‘20201013’. After installing this version I noticed that the NanoVNA crashes when moving the markers around. The only way to get it going again is to power it off and on again.

There is also “alternative firmware” for these devices. Figuring out which firmware is for which hardware version is almost impossible. I discovered that it’s relatively safe to try out firmware: if the device doesn’t boot or the firmware misbehaves, just install another one. The device can be booted into “DFU” mode, and from there new firmware can be installed. In my case I use the vna_qt application. Once the device is in DFU mode and I connect it to the vna_qt application, it asks me if I want to install new firmware.

So after searching through the NanoVNA users group I found the following firmware: https://groups.io/g/NanoVNAV2/topic/firmware_for_v2_it_contain/81847314?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,20,81847314. This seems to be older firmware, but it works. After uploading it the NanoVNA works great :-).

I also tried to compile the firmware myself and tried a lot of things, but unfortunately I always end up with a white screen.

Running Kubernetes on a 10 node Raspberry Pi cluster

Introduction

A while ago I designed and built a 10 node Pi cluster. The specs of this cluster are not bad. The cluster consists of:

  • 5x Raspberry Pi 3B+, 4 cores @ 1.2 GHz (Broadcom BCM2837, Cortex-A53)
  • 5x Raspberry Pi 4 4GB, 4 cores @ 1.5 GHz (Broadcom BCM2711, Cortex-A72)

The specs of the cluster:

Total Storage (GB)    : 320
Total RAM (GB)        : 25
Total CPU Cores       : 40
Total CPU GHz         : 13.5
Max Power Consumption : 130 Watt

In this article I’m going to describe my plans for this cluster. And while the title is a dead giveaway, I’m not going into technical depth. If this article gets too long I’ll split it up into multiple articles.

What to do with all this power?

With the cluster up and running, it’s time to do something with it. One of the things I want to learn is CI/CD pipelines. I have a small personal GitLab server running, and the goal is to use CI/CD pipelines with Kubernetes. Now that I have a 10 node cluster, this should be possible.

Running Kubernetes on Raspberry PI

While it’s possible to run Kubernetes on a Raspberry Pi, there are a couple of things to consider. Most of the development is done for major platforms like AMD and Intel (x86_64), which means that on armhf and arm64 a lot of things won’t work out of the box.

Another thing to consider is that Kubernetes itself is not designed to run on a low-powered platform like the Raspberry Pi. However, there are some Kubernetes flavors which focus on being lightweight, for example by reducing the memory footprint. More on this later on. The bottom line is: yes, it is possible to run Kubernetes on a Raspberry Pi, but there are some caveats.

What is this Kubernetes anyway?

To get an overview of what Kubernetes is and what it does, take a look at: What is Kubernetes – an overview. The gist of it is that Kubernetes can be seen as a framework which allows the deployment of applications in containers, manages these containers, and provides scaling and fail-over for the applications being deployed. Also note that Kubernetes is often referred to as K8s, formed by replacing the 8 letters between the K and the s of Kubernetes with an 8. So don’t pronounce it as “K eight s”; pronounce it as Kubernetes.

Starting with Kubernetes

Wanting to run Kubernetes on my Pi cluster is one thing; actually getting it to run is quite a different story. So how to get Kubernetes running? There are a couple of challenges hidden in this question. The first one is which flavor of Kubernetes to run, and the second is how to learn Kubernetes.

Which flavor of Kubernetes to run?

There are a couple of lightweight Kubernetes distributions to choose from. Some of these allow you to run Kubernetes inside Docker on your local machine, for example K3d and Minikube. For the Raspberry Pi I investigated the following two:

    • Microk8s
    • K3s

Before installing Kubernetes onto a Raspberry Pi, note that you should choose a 64-bit OS. The reason for this is that most of the (Docker) containers are built for 64-bit OSes. Therefore I used Ubuntu Server (LTS) 64-bit for Raspberry Pi. Normally I would use Raspbian, or Raspberry Pi OS as it is called now, but since their 64-bit version was still in beta at the time of writing, I switched to Ubuntu Server.

Let’s get started with MicroK8s

As the first candidate I installed MicroK8s. The reason for this is that the installation instructions sound really simple: install MicroK8s, and off you go. As it turns out, it was not that simple, and eventually I had to give up on MicroK8s, simply because I could not get it stable.

After I installed MicroK8s I noticed that after a couple of hours the whole cluster stopped responding to kubectl commands. All I got was timeout errors, and rebooting nodes did not help. I did a lot of searching and browsing through the MicroK8s issue tracker, and I found that others were experiencing the same problems on Raspberry Pis, but no solution.

So after two weeks of fighting I gave up on MicroK8s and moved on to the next option.

Installing K3S on Raspberry PI cluster

According to the documentation the name of K3s comes from:

We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.

Installing K3s can be done completely with k3s-ansible, which is a perfect fit since I do a lot with Ansible. After cloning the repository I followed the instructions (which basically tell you to change the hosts.ini file and the host variables file, and then run the Ansible playbook).
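As a rough sketch of what that inventory looks like (the group names follow the sample hosts.ini of the k3s-ansible repository at the time of writing, the layout may differ between versions, and the IP addresses below are just placeholders for my nodes):

[master]
192.168.1.10

[node]
192.168.1.[11:19]

[k3s_cluster:children]
master
node

After that, running something like ansible-playbook site.yml -i inventory/sample/hosts.ini brings the cluster up.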

And right from the start I could tell that K3s was much more stable. I could deploy and remove applications, and even after a couple of days the cluster was still stable. Since the default version installed was v1.17.5+k3s1, I decided to upgrade the cluster. Due to my inexperience with Kubernetes, K3s, and how to upgrade, I managed to completely destroy my beautiful working cluster. So I started over: I flashed all my SD cards with Ubuntu Server, ran my own Ansible playbook to install the basics, and installed the latest version of Rancher’s K3s. To my relief everything works perfectly, and the cluster runs stably.

So in conclusion: MicroK8s sounds great, and I really wanted to use it, but I couldn’t get it to run stably. Keep in mind that at this point I had no experience with Kubernetes, so your mileage may vary. Thanks to the k3s-ansible repository I could get K3s up and running quickly. At this point I’m not interested in the details of how to install and configure Kubernetes (in the case of K3s it’s one binary anyway); I’m mainly interested in getting Kubernetes up and running and starting to learn how to use it.

Learning Kubernetes

Now that Kubernetes is up and running, where to start learning? I came across the Kubernetes 101 series by Jeff Geerling. I highly recommend watching his Kubernetes 101 YouTube videos. I learned a lot watching them, and they gave me the basics to get started.

In the next article I’m going to describe how I got GitLab CI/CD pipelines working with my Kubernetes cluster.

An easily extensible Raspberry Pi Cluster

Introduction

For quite some time I have wanted to play around with a Pi cluster. Of course “cluster” can mean many things. A cluster can combine the CPU power of all the nodes to get more processing power, or it can mean having a couple of nodes to build a scalable platform like OpenStack. In this case the purpose of the cluster isn’t that important. What I’m mainly interested in is building a frame which can hold a couple of Pis, and the frame should be easy to extend, so that when more Pi nodes are needed the frame can grow with them. In the end I came up with a working frame. However, this implementation might not be suitable for everyone… more on that later on.

How this all started

A friend of mine and I started to build a 20 node Pi cluster. The cluster was split in two: 10 nodes lived in my home and the other 10 nodes lived at my friend’s house, with a VPN connecting the cluster nodes together. This worked great. However, once we used Pi 3 nodes we needed cooling, and the current frame didn’t provide that. After an attempt to alter the frame to add cooling fans, I realized that it might be better to start from scratch. The reason was that while the cluster was operational, we had discovered the current frame could be improved by adding some features.

Designing the ultimate cluster

Well, “ultimate” is maybe a bit strong... but since I wanted to redesign the cluster from scratch, I decided that at least the following features must be implemented:

    • The nodes must be easy to remove from and insert into the cluster
    • The frame itself should be 19 inch so it can fit in a standard network rack
    • The nodes should be powered from their own power supply and should NOT rely on PoE
    • The frame must be easy to extend
    • Each node should have proper cooling
    • It would be nice if each node had some LEDs to display status (preferably RGB LEDs)

Tackling the power requirements

Powering each node from a single power supply is the hardest part to implement. After thinking about it I came to the conclusion that I could do this by developing a back-plane, which is then used to distribute the power from one power supply (PSU) to all the nodes. This sounds like a great idea, however it introduces a new problem:

how can I make an easily extensible frame when there is a back-plane? The answer was quite easy actually: by splitting the back-plane up into smaller back-planes. This is how I came up with a back-plane design which can hold two Pi nodes and can be extended by chaining more back-planes together. As it turns out, splitting the back-plane up into smaller back-planes also makes manufacturing the PCB easier.

With a back-plane design it’s easy to come up with a PCB for each node which provides the interconnect between the Pi node and the back-plane, making it possible to remove or insert a Pi node.

Tackling the cooling

The next problem to tackle was how to keep the Pi node cool, so that the CPU won’t overheat and throttle. I soon came up with the idea to develop a Pi Cluster HAT. This Cluster HAT solves a couple of other problems as well:

  • The HAT can be used to hold a fan to cool the CPU (PWM controlled; see the sketch after this list)
  • The HAT can be used to distribute the 5V to the back-plane
  • The HAT can also be used for other features:
    • Hold the connections for 3 RGB LEDs so they can be controlled by GPIO pins
    • Break out I2C
    • Break out serial RX and TX
    • Break out 3V
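To give an idea of how the PWM-controlled fan can be driven from the Pi itself, here is a minimal Python sketch using RPi.GPIO. The GPIO pin, PWM frequency and temperature thresholds are assumptions for illustration; they are not the exact values used on my HAT.

import time
import RPi.GPIO as GPIO

FAN_PIN = 18    # assumed BCM pin driving the fan transistor
PWM_FREQ = 25   # assumed (software) PWM frequency in Hz

def cpu_temp_celsius():
    # The kernel exposes the SoC temperature in millidegrees Celsius.
    with open('/sys/class/thermal/thermal_zone0/temp') as f:
        return int(f.read().strip()) / 1000.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_PIN, GPIO.OUT)
fan = GPIO.PWM(FAN_PIN, PWM_FREQ)
fan.start(0)

try:
    while True:
        temp = cpu_temp_celsius()
        # Crude mapping: fan off below 45 °C, full speed above 70 °C.
        duty = min(100, max(0, (temp - 45) * 4))
        fan.ChangeDutyCycle(duty)
        time.sleep(5)
finally:
    fan.stop()
    GPIO.cleanup()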

Developing the cluster frame

Within three months I had the first version of a 10 node cluster working. It would take me another six months to get it to a truly workable version. The final design uses the following PCBs:

  • A back-plane PCB which can power 2 Pi nodes and which is extensible
  • A Pi Cluster HAT with a lot of features (EEPROM, PWM-controlled fan, 3 RGB LEDs, I2C, 3V and 5V power lines)
  • A power board which connects the power of the Pi node to the back-plane
  • An LED board which holds the 3 RGB LEDs and connects to the Pi Cluster HAT

The cluster frame has two main parts: the frame which holds the Pi trays, and the Pi tray itself which holds the Pi, the Pi HAT with fan, the power board and the LED board. The whole design is modular.

The downside of this is that a lot of parts are needed to build this cluster frame, and the PCBs must be soldered. That’s why this might not be for everyone. However…

OpenSource is the way to go

Did I mention this whole design is open source? No?? Well, it is, and it’s on GitHub for everyone to download. All the 3D models, Gerber files, schematics and a full hardware assembly guide are available. You can find it here.

Working on an older project – making a frequency counter

Introduction

Since the coronavirus pandemic I have some more free time on my hands in the weekends, so I decided to blow the dust off a project I was working on. This project involves designing and making a frequency counter, and it is mostly about learning how to make a modular design.

The frequency counter modules

The frequency counter has the following modules:

      • A main board which holds all the modules, the counter itself and the display buffer
      • A divider module. This module controls the gate time (which can be set in three steps).
      • A microcontroller (uC) module. This module controls the multiplexing of the display as well as other features, such as handling the button pushes on the front panel (there are two buttons).
      • A display module. This module holds the 7-segment displays as well as a MAX7219 which takes care of the multiplexing.
      • A power module which provides power to the frequency counter. This module is yet to be designed.

The things I wanted to improve

In the first versions the modules used wires to connect to the main board. In this version I wanted the modules to connect to the main board directly, without using wires.

Secondly, I wanted to reroute the PCBs. Since the start of the project I had used the autorouter to route the traces, as it was one of the first PCBs I created. Having learned a lot more about PCB design since then, I decided to route all the traces manually.

I also wanted to learn more about SMD soldering, so I decided to change some ICs from a DIP package to an SMD package. This means I had to move connections to other pins, since the pin numbers are not always the same.

And lastly, I wanted the ability to mount the display onto the main board.

 

Testing the changes made

Once the new PCBs were in, I started to solder all the ICs and other parts. Once completed, I realized that I hadn’t thought about adding an ISP header to the uC board.

So I had to improvise:

First I soldered wires to the microcontroller.

And finally, after connecting all the wires together, I could hook up an Arduino UNO as an ISP programmer and flash the bootloader. After that I could use the programming header to upload the firmware.

The end result

And this is what the end result looks like. I haven’t cleaned the board yet, since I’m still testing, but to my surprise the counter worked the very first time. The result looks like this:

Fun with eBay purchases

Introduction

A lot of the stuff in my lab comes from eBay, simply because this kind of equipment is hard or impossible to find in Europe, and if you do find it, the seller is asking big money for it. Buying overseas is not cheap either, due to import taxes and shipping costs.

Buying from eBay has some risk to it. Of course, if you pay with PayPal there is the “money back guarantee”. Most of the time this means shipping back the item, and depending on the seller the return shipment must be paid for. If you buy stuff from eBay you know all of this.

Most of the time, however, it’s okay and there are no problems. But sometimes you find yourself in a situation which is just baffling.

What about a 54845A Infiniium Oscilloscope?

For some time I had been looking for one of these. I found an eBay listing and made a best offer, which was accepted: for $1350.00 the scope was mine. The scope was advertised as a working scope. All in all it’s still on the high side; these scopes aren’t the best, but for what I have in mind it’s more than good enough.

And then the scope arrives

After a week and a couple of days of waiting, the scope was delivered. Well, the packaging wasn’t that great: the scope could move around inside the box, which is bad. I wish that people sending equipment would learn to pack it so it can’t move around, with enough protection on all sides. In this case I feared the worst. Luckily the scope itself was wrapped in a good amount of bubble wrap and it hadn’t destroyed the box it was in.

After unpacking it and turning it on, the scope refused to boot. Due to a bad CMOS battery it wanted me to press F1 on the keyboard. Luckily a keyboard was included, so after hooking up the keyboard and pressing F1, the scope booted into Windows and started the scope application. The first impression was not that good, but if this was all... I’m not complaining...

Self test time!

While the scope was booted I noticed some strange flickering and weird behavior while channel 1 was enabled. I disabled channel 1 and it looked good… for a while, then came some other glitches I could not explain. Hmm, let’s do a self test. The self test failed on Video SRAM, and the second time I ran it, it failed on “Tri State trig”. This points in the direction of a board called “A6”, which is a scope interface board.

It’s dirty… real dirty

Maybe the card has bad contacts, so I decided to remove the case and take a look inside. That is when I noticed a lot of dirt. I have seen dirt in machines, but this was really bad. I cleaned most of it, cleaned the A6 board, sprayed some DeoxIT on the contacts, and reinstalled the board. Unfortunately, the problem stayed.

Time to contact the seller

Now knowing that there was a problem with the scope, I contacted the seller, who replied with: “I’ll see what I can do, and else send it back and you get a refund”.

After a good night’s sleep I decided that sending the scope back in the original packaging would be the end of the machine. The packaging wouldn’t hold a second time, and I don’t have other packing materials. It would also mean shipping costs, plus the exchange rate between the dollar and the euro... so in the end it would cost me money, even if I got a refund. So I started to look online for an A6 board and found one for 50 dollars, which I bought. I let the seller know what I did to get the scope going. I also told him about the state the scope was in, and that it wasn’t good, but that I went ahead and invested 50 dollars in an A6 card. The response I got was... well, I didn’t know if I should cry or laugh. The exact response was:

“All I can say is wow! That is amazing. Can’t thank you enough for doing that. I’m hoping it works out!”

Errr.. I don’t know how to respond to that.

Self calibration

I went ahead and tried to calibrate the scope using the self-calibration process. This is a straightforward process; it only takes some time to complete.

When I started the self-calibration tool, it looked like the previous calibration had failed on channel 2. Not a good sign. This is just the kind of problem I’m afraid of: these scopes have hybrid ADCs, and if they fail, or there is some problem in the front-end... it can be very hard to fix, and very costly.

And of course, when I tried to calibrate the scope, it failed. A lot of effort later, after cleaning the hybrids, the calibration process finally ended successfully. Now I only have to wait for the “A6” board, which is hopefully a working one, and then swap the board, which is not very complicated.

In conclusion

When buying stuff on eBay, there is always a risk that the item bought isn’t exactly what was advertised. And yes, I could have shipped it back and gotten a refund. However, due to the state of the packaging I know for sure the scope wouldn’t have made it back in one piece. Spending 50 dollars seemed the cheapest option, and the last thing I wanted was to end up in a discussion about a scope that was damaged during shipping.

To summarize: This scope was advertised as working, however:

      • Didn’t boot due to a dead CMOS battery (easy fix)
      • A6 board is defective (bad SRAM)
      • The machine wasn’t cleaned, and if I hadn’t had to open it I would not have known about the dirt inside, which could easily have destroyed the machine due to lack of cooling.
      • Calibration failed on channel 2. Luckily I could fix this. But this could have been a major problem.

This machine should have been listed as “untested, for repair or parts”. If it had been advertised as such I wouldn’t even have considered buying it. It took a lot of time to get it to the point of being a good working scope, and I would rather have spent that time on the project I need the scope for in the first place.

Installing PyVISA on macOS 10.14.6 (Mojave)

In a previous article I had some trouble installing the NI-VISA library for PyVISA, so this article is a quick update on that. It describes what I did to test the NI-VISA library. And honestly, I don’t know why it was not working before.

First of all, when testing the installation of pyvisa with:

>>> import pyvisa
>>> rm = pyvisa.ResourceManager()
>>> rm.list_resources()

Make sure the equipment connected to the USB GPIB adapter is powered on. If the connected equipment is not on, you get an empty list of resources back.
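Once a resource does show up, a quick way to confirm the whole chain works is to open it and ask for an identification string. A minimal sketch, assuming the GPIB address found above and an instrument that understands *IDN? (older gear may use a different identification command):

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource('GPIB0::9::INSTR')   # address as reported by list_resources()
print(inst.query('*IDN?'))                   # ask the instrument to identify itself
inst.close()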

Testing the NI-VISA library

The first thing I wanted to know was: When the NI-VISA library is not working, is that due to some configuration?

Testing can be a little annoying, since when you reinstall the library, or uninstall and reinstall it, you have to reboot your machine. And I didn’t want to mess around too much, with the risk of wrecking some black-magic library configuration which might be almost impossible to fix.

So I figured: why not unpack the installation package and try the driver inside the package directly?

Unpacking a .pkg file under macOS is really simple. First mount the downloaded .dmg package, in my case: NI-VISA_20.0.0.dmg.

Once it’s mounted, I changed to my home-dir, and created a test directory.

cd ~
mkdir test-nivisa
cd test-nivisa

Next I copied the installation package (NI-VISA_Full_20.0.0.pkg) to this test dir:

cp /Volumes/NI-VISA\ 20.0.0/NI-VISA_Full_20.0.0.pkg ./test-nivisa

Unpacking (or expanding) the install package is really easy:

pkgutil --expand NI-VISA_Full_20.0.0.pkg ./unpack

Note that the unpack dir is created while expanding the package, so don’t create the dir upfront! If you do, the command fails with:

pkgutil --expand NI-VISA_Full_20.0.0.pkg ./unpack
Error encountered while creating ./unpack. Error 17: File exists

In the test dir where the package is unpacked, a lot of other packages can be found. One of these contains the library I’m after. All the packages contain a file called “Payload”, which is a gzipped tar file.

To unpack this file for each package, the find command is our friend:

cd unpack
find ./ -name 'Payload' -exec tar xzvf {} \;

This will unpack every Payload file into your current directory. Since the “v” flag (verbose) is enabled, this outputs a lot of text (the files being untarred). There is a chance this will overwrite files, but that is not something I’m worried about, as long as I can use the NI-VISA library.

The library I’m looking for is called “VISA”, so a second find command is needed:

find ./ -name 'VISA'

Which gives the result:

.//VISA.framework/Versions/A/VISA
.//VISA.framework/VISA

Once I had the library I tested it with PyVISA. This can easily be done in a virtual environment (note: since I had already tested this before, the pyvisa package was already installed):

python3 -m venv env
pip install pyvisa
Requirement already satisfied: pyvisa in /Users/edwin/.pyenv/versions/3.7.3/lib/python3.7/

python3
Python 3.7.3 (default, Dec 4 2019, 15:11:28)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyvisa
>>> rm = pyvisa.ResourceManager('./VISA.framework/VISA')
>>> rm.list_resources()
('GPIB0::9::INSTR',)
>>>

As can be seen on the last line:

('GPIB0::9::INSTR',)

The NI-VISA library works just fine. The actual library lives in:

/Library/Frameworks/VISA.framework/VISA

So I created a file called .pyvisarc in my home dir (notice the dot (.) in front of the file name!).

This file contains:

cat ~/.pyvisarc
[Paths]
VISA library: /Library/Frameworks/VISA.framework/VISA
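With this file in place, PyVISA finds the library without an explicit path. A quick check, assuming the same instrument as before is still connected:

>>> import pyvisa
>>> rm = pyvisa.ResourceManager()
>>> rm.list_resources()
('GPIB0::9::INSTR',)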

So now when I use pyvisa-info (and pyvisa-shell) it works as well. pyvisa-info gives:

pyvisa-info
Machine Details:
Platform ID: Darwin-18.7.0-x86_64-i386-64bit
Processor: i386

Python:
Implementation: CPython
Executable: /Users/edwin/.pyenv/versions/3.7.3/bin/python3.7
Version: 3.7.3
Compiler: Clang 10.0.1 (clang-1001.0.46.4)
Bits: 64bit
Build: Dec 4 2019 15:11:28 (#default)
Unicode: UCS4

PyVISA Version: 1.11.3

Backends:
ivi:
Version: 1.11.3 (bundled with PyVISA)
#1: /Library/Frameworks/VISA.framework/VISA:
found by: auto
bitness: 64
Vendor: National Instruments
Impl. Version: National Instruments
Spec. Version: National Instruments
py:
Version: 0.5.1
ASRL INSTR: Available via PySerial (3.4)
USB INSTR: Available via PyUSB (1.1.1). Backend: libusb1
USB RAW: Available via PyUSB (1.1.1). Backend: libusb1
TCPIP INSTR: Available
TCPIP SOCKET: Available
GPIB INSTR:
Please install linux-gpib (Linux) or gpib-ctypes (Windows, Linux) to use this resource type. Note that installing gpib-ctypes will give you access to a broader range of funcionality.
No module named 'gpib'

So I really don’t know why it was not working the first time, and why it took almost a day of pulling my hair out. There are two things I can think of:

I switch the USB adapter back and forth with a Windows 10 VM; maybe I didn’t release the adapter properly from Windows 10?

Or maybe the adapter was not plugged in correctly?

I tried switching from macOS to my Windows 10 VM multiple times, noticing it worked perfectly in Windows 10, but not under macOS.

Anyway, it works now, and hopefully the steps above are useful to someone.

Comparison between Prologix and National Instruments USB GPIB controller

Introduction

Almost every piece of electronic lab equipment can be controlled remotely. This is almost always done using IEEE-488, also known as “HP-IB”, as HP called it when it developed this 8-bit parallel bus. It’s also known as “GPIB” (General Purpose Interface Bus).

In the old days a dedicated computer card was used as a controller, to perform remote operations on the lab equipment.

Nowadays we have LXI, for example, which makes it possible to remotely control devices over an Ethernet network using TCP/IP. This doesn’t mean GPIB isn’t used anymore; even modern equipment can have an IEEE-488 interface. For example, my Rigol DM3608 has an IEEE-488 interface and can be configured to understand a specific command set.

Using a USB GPIB adapter

Nowadays it’s more common to use USB GPIB controllers to remotely control (old) lab equipment. There are a couple of choices:

  • Use a Prologix GPIB-USB controller
  • Use an IEEE-488 (GPIB) communication adapter (Keithley or National Instruments, for example)
  • These adapters may also be available as Ethernet controllers which plug into a LAN network.

There are two main differences between the “brand name” adapters and the Prologix: how they present themselves to the operating system, and the price.

For example, the National Instruments (NI) USB controller presents itself as a GPIB device, whereas the Prologix presents itself as a serial device.

Which one to choose?

If you look at the known brand name adapters, the ones presenting themselves as a GPIB device, you notice that these devices are not cheap. A new adapter can cost you as much as $1300.00, and no, this is not a typo, while the Prologix adapter costs around $150.00. So what’s the catch? (There is always a catch.)

And as always: it depends. Say for instance that you want to use an application from a vendor which only works with a GPIB device. The Prologix in this case won’t work, at least not out of the box.

On the other hand, if you’re about to write your own data logging / measurement applications, the Prologix might be a perfect solution, since it’s a serial device and you don’t have the overkill of the NI-VISA drivers (for example).

Then there is of course the price. Luckily, GPIB USB adapters can be found relatively cheaply second hand. I found a genuine NI GPIB-USB-HS, new in box, for around $150.00, which brings these adapters into the price range of the Prologix.

Comparing a Prologix and a NI GPIB USB-HS adapter

Since I have both types, let’s compare them in practice. To compare the adapters I’m going to use them in the following scenarios:

  • Using the adapters with an existing vendor application (Rohde & Schwarz WinIQSIM), which is a Windows application
  • Using both adapters to write my own application, testing under Windows 10 and macOS (10.14.6)

 

Prologix adapter

The Prologix USB adapter can be programmed by sending commands through a serial terminal. The GPIB address of the device can be set by sending:

++addr #

So for example to set the GPIB address to ‘8’:

++addr 8

There are several commands which allow you to configure the adapter to your needs, and it’s also possible to update the firmware. To talk to the adapter, FTDI device drivers are needed, which are available for Windows, Linux and macOS.
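Because the Prologix presents itself as a plain serial port, it can also be driven directly from Python with pySerial. A minimal sketch, assuming the macOS device name used later in this article, an instrument at GPIB address 8, and an instrument that answers *IDN? (not all older gear does):

import serial

# The Prologix adapter shows up as an FTDI serial port.
ser = serial.Serial('/dev/cu.usbserial-PX4UALP2', timeout=1)

ser.write(b'++addr 8\n')   # talk to the instrument at GPIB address 8
ser.write(b'++auto 1\n')   # let the adapter read back the reply automatically
ser.write(b'*IDN?\n')      # identification query (SCPI instruments)
print(ser.readline().decode().strip())

ser.close()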

 

The National Instruments (NI) GPIB-USB-HS adapter

The NI adapter needs the NI-VISA drivers to be installed. These drivers are available for Windows, Linux and macOS. As far as I know there are no firmware updates available for these adapters. There is a lot of information about how to install the NI-VISA drivers; the only thing I needed to do was install the NI-VISA software and plug in the adapter.

Using an existing application: WinIQSIM

The WinIQSIM application works perfectly with the NI GPIB-USB-HS. The application has the option of using either a GPIB device or a serial connection. However, I could not get a serial connection to work with my Rohde & Schwarz AMIQ; I either received a timeout or a communication error. In the end I even tried a null modem cable, but that gave me problems as well.

The main problem when using the Prologix in this case is the speed setting. The application “sweeps” the baud rate setting, but the Prologix doesn’t care about the serial baud rate, so WinIQSIM gets confused when trying to determine the baud settings. I tried several options, even disabling the “sweep”, but the application kept trying to find the highest speed it could communicate at.

It might be possible to implement a driver for it in NI-VISA, but I didn’t test this.

In this case: The NI USB adapter wins.

Writing my own application

In this test I’m going to use the Python script which I wrote to remotely control my HP 8175A. I’m using Python since it makes testing under macOS and Windows 10 very easy.

I’m going to use two modules:

    • PyVisa
    • PySerial

On both systems I’m using virtual environments.

Test under macOS

And this test ended very quickly... I tried to use PyVISA under macOS and couldn’t get it to work. The library wasn’t listing my GPIB device. I tried installing several versions of the NI-VISA library, and I even tried different versions of the NI-488.2 drivers.

>>> import pyvisa
>>> rm = pyvisa.ResourceManager()
>>> rm.list_resources()
('ASRL/dev/cu.Bluetooth-Incoming-Port::INSTR', 'ASRL/dev/cu.EEsiPhone-WirelessiAPv2::INSTR')
>>>

Update: I finally got the NI-VISA driver working under macOS. I just reinstalled the drivers, and when I pass the path to the library (I tried that before, which didn’t work then) it works:

>>> import pyvisa
>>> rm = pyvisa.ResourceManager('/Library/Frameworks/VISA.framework/VISA')
>>> rm.list_resources()
('GPIB0::8::INSTR', 'GPIB0::9::INSTR')
>>>

When I use the NI-VISA tools, the adapter is recognized without problems.

Test under Windows 10

So I moved to Windows, installed the NI-VISA library and PyVISA, and it worked instantly. No problems whatsoever.

Next I tried pySerial with the Prologix on both platforms, and both worked just fine. Of course I needed to adapt the device name. Under macOS this is:

ser = serial.Serial('/dev/cu.usbserial-PX4UALP2')

Under Windows this is:

ser = serial.Serial('COM4')

The whole Python script:

import serial

# Command sequence for the HP 8175A.
cmd = ['RST','DM0;DUR0,1s;IFM(CLOCK),,,1111','DM1;CFM(CLOCK);TSA0;CHD0,(CLOCK),0000,0001,0010,0011,0100,0101,0110,0111,1000,1001;TSA9;CHD0,(END)','PM0;CD;(PROG1);CR7;CE;(END)','OM;POD 1','CM 0;CYM 1','UP;SA','LO']

# Serial device name for the adapter under macOS (use 'COM4' under Windows).
ser = serial.Serial('/dev/cu.usbserial-PX4UALP2')

# Send each command, terminated by a newline.
for c in cmd:
    send_cmd = c + '\n'
    ser.write(send_cmd.encode())

ser.close()
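Instead of hard-coding the device name per platform, pySerial can also list the available ports, which makes it easier to find the right name on each OS. A small sketch:

from serial.tools import list_ports

# Print every serial port the OS knows about, so the right device name can be picked.
for port in list_ports.comports():
    print(port.device, '-', port.description)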

Conclusion

When using software which requires a GPIB device, the easiest option is to choose a USB controller which presents itself as such a device. With some patience these devices can be bought relatively cheaply. It might be possible to develop your own NI-VISA (or similar) driver for a Prologix USB adapter, but since I’m no expert in this, I didn’t research it.

A NI USB GPIB controller (or similar) will work under Windows. Under other operating systems it might be problematic, although a lot of searching and trying might result in a working solution. I couldn’t get PyVISA to work under macOS the first time; after a lot of trying and finally a reinstall, I got it to work.

In my case both the Prologix and the NI GPIB-USB-HS work on both platforms.

So the big question is: which one wins? Well, if I only had Windows as my operating system, I would definitely go for the NI GPIB-USB-HS adapter, despite the overkill of the whole NI-VISA environment.

And now that the NI adapter also works under macOS, I prefer the NI adapter over the Prologix. Once the NI-VISA library works, it’s very easy to interact with the device. If, however, the NI-VISA library doesn’t work, or you simply don’t want the overhead, the Prologix adapter might be the way to go, keeping in mind that vendor-supplied software which relies on a GPIB controller might not work with the Prologix.

Another thing to consider when using an adapter which relies on drivers like NI-VISA is transferring software to other systems. For example, when you write an awesome Python script for a specific device and upload it to GitHub for others to use, they need to install the (external) library as well, which may be undesirable. In my case this is not really a concern.

However, since I have both adapters... I have the best of both worlds 🙂