
Shack-Hartmann wavefront sensor: Thorlabs WFS-150 review

The Shack-Hartmann wavefront sensor is a brainchild of the Cold War, born in the late 1960s out of the US Air Force's need "to improve the images of satellites taken from earth" - guess whose satellites attracted such intense interest from Uncle Sam.
The working principle is simple and elegant - the wavefront is sampled by an array of micro-lenses, and its local slope is converted into displacements of focal spots:
(Illustration: https://en.wikipedia.org/wiki/File:Shack_Hartmann_WFS_lensletarray.svg)
To make an SH sensor today, one needs an array of microlenses and a digital camera. The main difficulty is accurate calibration and the software that converts camera images into wavefront readings.
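
To see what "local slope" means in numbers, here is a minimal Python sketch of the conversion from spot displacement to wavefront slope. All the numbers (lenslet focal length, pixel size, centroid positions) are made up for illustration; for the WFS-150 they should be taken from the datasheet.

    # Sketch: converting Shack-Hartmann spot displacements into local wavefront
    # slopes. All numbers are illustrative assumptions, not WFS-150 datasheet values.
    import numpy as np

    f_lenslet = 3.7e-3       # lenslet focal length [m] (assumed)
    pixel_size = 4.65e-6     # camera pixel size [m] (assumed)

    # Reference (flat-wavefront) and measured spot centroids, in pixels
    ref_spots = np.array([[100.0, 100.0], [140.0, 100.0], [180.0, 100.0]])
    meas_spots = np.array([[100.4, 100.1], [140.2, 99.8], [179.9, 100.0]])

    # Displacement of each focal spot [m]
    dxy = (meas_spots - ref_spots) * pixel_size

    # Local wavefront slope behind each lenslet: slope = displacement / focal length
    slopes = dxy / f_lenslet   # dW/dx, dW/dy per lenslet [rad]
    print(slopes)

Integrating (or fitting) these slopes over the lenslet grid is what reconstructs the wavefront.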

Thorlabs sells reasonably priced SH sensors and I purchased the WFS-150-7AR for my project.

The good

It is well built and comes with a plate adapter and a C-mount ring adapter. The software runs smoothly and produces the expected result (a flat wavefront) when tested on a spatially filtered and collimated HeNe laser.
The manual is very detailed, and the API comes as C, C#, and LabView libraries with basic examples. A very pleasing experience, especially after a recently purchased deformable mirror from Imagine Optic, which came nearly bare in terms of API.

The sensor comes calibrated and tested, but you can do your own calibration if desired. The software allows full control of the sensor, which is critical for research applications.

The wavefront can be measured directly and/or fitted with Zernike polynomials up to a user-specified order.
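
As a rough illustration of what such a fit does (this is a generic least-squares sketch with a few low-order terms, not the Thorlabs implementation):

    # Generic sketch: least-squares fit of a wavefront with a few low-order
    # Zernike-style terms (piston, tip, tilt, defocus). Not the Thorlabs algorithm;
    # the "measured" wavefront below is synthetic.
    import numpy as np

    n = 64
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    r2 = x**2 + y**2
    pupil = r2 <= 1.0                      # unit pupil mask

    # Toy wavefront: some tilt + defocus + noise
    wavefront = 0.3 * x + 0.5 * (2 * r2 - 1) + 0.01 * np.random.randn(n, n)

    # Basis terms evaluated inside the pupil
    modes = [np.ones_like(x), x, y, 2 * r2 - 1]
    A = np.column_stack([m[pupil] for m in modes])
    coeffs, *_ = np.linalg.lstsq(A, wavefront[pupil], rcond=None)
    print("piston, tip, tilt, defocus:", coeffs)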

The bad
Upon a closer look at the LabView code that came with the SH sensor, things got worse. The front panel of the main example VI is somewhat messy and hard to read, and when you press Measure the program spawns notification windows at such a rate that you struggle fruitlessly to close them for a minute before killing LabView via the Task Manager. This is poor exception handling: non-critical messages pop up in separate windows and paralyze the user interface.

The wiring diagram is even messier. If you write in LabView, you will know what I mean.

I am still figuring out how to extract meaningful status messages from the API in a non-paralyzing manner; there seems to be a bug which makes the error codes nonsensical. I contacted Thorlabs with questions and will follow up.

I will post my LabView control VI for this wavefront sensor when ready.

Update: Thorlabs replied promptly and explained the error codes. The device status is returned as a single integer, and its individual bits encode several status flags. For example, when the device returns the integer 1796, that is 0x00000704 in hex, which is a sum of four different messages:

    0x00000004  WFS_STATBIT_PTL  Power Too Low (low cam digits)
  + 0x00000100  WFS_STATBIT_CFG  Camera is ConFiGured
  + 0x00000200  WFS_STATBIT_PUD  PUpil is Defined
  + 0x00000400  WFS_STATBIT_SPC  No. of Spots or Pupil Changed
  ------------------------------------------------------------
  = 0x00000704

This is a very unorthodox approach to status codes! 8-)
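
In practice this just means the status is a bit field, and decoding it is a matter of masking individual bits. A small Python sketch using the four bit values from the example above (the complete list of WFS_STATBIT_* constants is in the driver documentation):

    # Decode the WFS device status integer as a bit field.
    # Only the four bits from the example above are listed here; see the driver
    # documentation for the full set of WFS_STATBIT_* constants.
    STATUS_BITS = {
        0x00000004: "WFS_STATBIT_PTL  Power Too Low (low cam digits)",
        0x00000100: "WFS_STATBIT_CFG  Camera is ConFiGured",
        0x00000200: "WFS_STATBIT_PUD  PUpil is Defined",
        0x00000400: "WFS_STATBIT_SPC  No. of Spots or Pupil Changed",
    }

    def decode_status(status):
        return [text for bit, text in STATUS_BITS.items() if status & bit]

    for line in decode_status(1796):   # 1796 == 0x704
        print(line)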

Update 21.10.2017
I wrote a suite of LabView virtual instruments for controlling the sensor, available on my github.
If you have a sensor, install the Thorlabs drivers, connect the sensor via USB, and run WFS_Thorlabs_testPanel.vi.
I tried to keep the code clean and simple, with comments in the wiring diagram.
Tested on LabView 2016 32-bit.

IPython notebooks for controlling this sensor can also be found on my github.
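
For reference, talking to the sensor from Python comes down to loading the WFS driver DLL with ctypes. The sketch below is only an outline under my assumptions: the DLL name, the resource string, and the function signatures (WFS_init, WFS_GetStatus, WFS_close) must be checked against the WFS.h header installed with the Thorlabs drivers.

    # Outline of calling the Thorlabs WFS driver from Python via ctypes.
    # ASSUMPTIONS: the DLL name, resource string, and signatures below are
    # illustrative; verify them against the WFS.h header from the Thorlabs drivers.
    import ctypes

    wfs = ctypes.windll.LoadLibrary("WFS_32.dll")   # assumed 32-bit driver DLL

    handle = ctypes.c_ulong()                       # ViSession (assumed type)
    resource = b"USB::0x1313::0x0000::1"            # placeholder resource name
    err = wfs.WFS_init(resource, 1, 1, ctypes.byref(handle))
    if err:
        raise RuntimeError(f"WFS_init failed with code {err}")

    status = ctypes.c_int32()
    wfs.WFS_GetStatus(handle, ctypes.byref(status))
    print("device status bit field:", hex(status.value))

    wfs.WFS_close(handle)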

LabView VI screenshots:

Update 18.03.2018
After some real-life testing, I found that it is hard to get a good signal-to-noise ratio on this sensor with fluorescent beads. The reason is the relatively low photon flux passing through an individual lenslet: the incident light from a bead is divided among a total of 39x31 = 1209 lenslets. So large beads (I use 4 micron diameter) and high laser power come in handy. The camera sensor is a CCD, which allows cranking up the sensor gain (up to x5). Also, the exposure time can be set up to 65 ms, which is not terrifically long. I wish it could be 10x higher.

Lesson learned: if you want high wavefront resolution, you must sacrifice signal-to-noise ratio, or make your beads (or other guide stars) really bright!
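
A back-of-the-envelope photon budget shows why. Here is a quick sketch; only the 39x31 lenslet count comes from the sensor, the total photon number is an arbitrary assumption:

    # Rough photon budget: light from one bead is split over all lenslets, and the
    # shot-noise-limited SNR per spot scales as sqrt(photons per lenslet).
    # The total photon count is an arbitrary assumption for illustration.
    import math

    n_lenslets = 39 * 31          # 1209 lenslets across the aperture
    photons_total = 1e7           # assumed photons reaching the sensor per exposure
    photons_per_spot = photons_total / n_lenslets
    snr = math.sqrt(photons_per_spot)

    print(f"photons per lenslet: {photons_per_spot:.0f}")
    print(f"shot-noise-limited SNR per spot: {snr:.1f}")

Splitting the same light over fewer, larger lenslets (lower spatial resolution) or making the guide star brighter both push this number up.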

Comments

  1. Can you tell me if there is any alternative sensor? I want one for my project; please let me know if you know any other wavefront sensor at a low price. Thank you.

  2. Hello, thank you for sharing your work. I have a DIY wavefront sensor with a microlens array and a Thorlabs camera. May I know if this can work with a camera instead of the commercial sensor?

    Thank you.

    TC


