
NetNotes

Published online by Cambridge University Press:  08 July 2019

Copyright © Microscopy Society of America 2019

Edited by Bob Price

University of South Carolina School of Medicine

Selected postings are from discussion threads included in the Microscopy (http://www.microscopy.com) and Confocal Microscopy (https://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy) listservers from February 15, 2019 to April 30, 2019. Postings may have been edited to conserve space or for clarity. Complete listings and subscription information can be found at the above websites.

Software and Image Analysis

Confocal Microscopy Listserver

General Question: Hardware vs. Software (Thread Started March 17, 2019)

There are software solutions that are able to create image data from wide-field and even simpler microscope systems with seemingly similar quality to that obtained from confocal systems. The comparison is essentially one of image acquisition vs. image processing. A software solution is far cheaper than a hardware solution, so if it can produce image data of equal quality, why would anyone still choose to invest in a confocal? I'm looking forward to a vivid discussion! Mika Ruonala

Are you talking about deconvolving widefield fluorescence stacks? This is a very old debate. I prefer confocal - fewer artifacts. Mike Model

I'll point out that you can, of course, deconvolve confocal images too. So, while you can indeed get near-confocal quality with well-acquired widefield data after deconvolution, you can also get near super-resolution (SR) quality when deconvolving a well-acquired stack from a confocal. And then you can also deconvolve an SR stack and get… well, you get the idea! It's like an arms race. I have Hyvolution and had access to Lightning for a couple of weeks, and now confocal images look blurry to me. Avi Jacob

To throw my ha'penny's worth in, I am on the side of confocal for a lot of the same reasons: no artefacts in the first place, plus deconvolution can be applied to confocal images for a minor improvement (and I believe is recommended in Pawley's excellent book). All of the things that would make a widefield system comparable (multiple cameras, fast GPUs, pixel-shift-free filter cubes, etc.) just bring the price of the widefield close to a confocal anyway. Admittedly the running costs are cheaper, though. So to summarize: confocal + deconvolution is my preference. Glyn Nelson

As Mike said, this is indeed a very old debate, but there are well-characterized, objective costs and benefits to each approach (scientific, not just monetary). Deconvolution is not just “computational confocal at the expense of artifacts”; that is a false comparison. By rejecting out-of-focus fluorescence, confocal microscopes reduce the *shot noise* contributed to the image by background. Deconvolution, by contrast, attempts to “reassign” that out-of-focus information (provided you have a very accurate representation of the actual Point Spread Function in your sample), but there will come a point with thicker samples at which the shot noise contributed by out-of-focus fluorescence overwhelms the SNR in the image, and deconvolution will fail (figure 4 in the first paper below). However, for thin samples with minimal out-of-focus fluorescence, the increased collection efficiency and minimized illumination/detector noise of widefield + decon has benefits for the detection of weak signals (figure 2 in the paper below).

This tradeoff was well-characterized by Swedlow and Murray (https://www.ncbi.nlm.nih.gov/pubmed/11830634) and followed up with a treatment on the photon-efficiency of different optical sectioning techniques (https://www.ncbi.nlm.nih.gov/pubmed/18045334). As usual, there is no one technique that is universally “better” or preferable. It will depend on the samples you are imaging and the relative levels of in-focus and out-of-focus information. Talley Lambert
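
As a rough illustration of the shot-noise argument above, here is a minimal sketch (Python; the photon numbers are hypothetical and not taken from the cited papers) of how out-of-focus background degrades the shot-noise-limited SNR of a widefield image even if deconvolution later reassigns the background intensity:

```python
import numpy as np

def snr_shot_limited(signal, background):
    """Shot-noise-limited SNR for in-focus signal sitting on a background of
    out-of-focus photons (both in detected photons per pixel)."""
    return signal / np.sqrt(signal + background)

signal = 100  # in-focus photons per pixel (hypothetical)
for background in (0, 100, 1000, 10000):  # out-of-focus photons per pixel
    print(f"background {background:>5d}: SNR = {snr_shot_limited(signal, background):.1f}")

# Deconvolution can reassign the out-of-focus *intensity*, but the Poisson noise
# that background contributed stays in the image; a confocal pinhole rejects the
# background before detection, so its shot noise never enters the data.
```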

Avi, I really agree with your point. I feel that people deconvolve any time spatial information is critical, whether they're using widefield, CLSM, spinning disc, or light sheet. It's true that deconvolving adds time and data volume and especially cost, but in trade you get an image that is substantially sharper, with reduced noise and background, and more quantitatively accurate*.

Regarding whether to just go with a point scanning confocal, I don't see it as a simple question of better or worse**. A nuclear-cytoplasmic translocation assay with monolayer cells works just as well on a widefield, and (in my experience!) many types of biosensor assay work better with a properly set up widefield. The 16-bit depth of widefield images is nice for quantitation, and modern sCMOS cameras have by far the best acquisition speeds. I don't know whether widefield systems still have a more linear relationship between sample brightness and detected signal, but the last time I checked that was still true.

(*) Deconvolution is quantitatively useful as long as people make sure to tell the software to preserve the original intensity values. One of my complaints about Hyvolution was that you could not do that, so I just used the Huygens package that came with it. I don't know whether Lightning gives you that option…if not then caveat emptor.

(**) My advice mostly applies to turnkey stuff that any lab can implement, not exotic techniques available to folks with specialists or engineers on hand. Timothy N. Feinstein

I agree with most that has been said. I am firmly of the belief that you should try to acquire the best possible images and then you can always improve them further by using deconvolution or other methods. The more imaging artifacts and erroneous light you can reject during the imaging process, the less you will have to deal with later. Also, many journals have started asking about providing the raw data for figures as well - for which having superior data right off the bat is a huge benefit. One large risk of using software is that you can incur all sorts of artifacts, especially if you do not know what you are doing or if you push the limits of the deconvolution too far. I have had to gently let down some scientists who were excited about seeing this or that in their deconvolved data, with the raw data simply not supporting it. Furthermore, as Timothy pointed out, some deconvolution software automatically applies certain procedures or maybe does not make very clear exactly what has been done - reiterating the statement: you should know what you are doing when using software to improve your images. (Or at least consult with someone who does). So in conclusion: get the best images you can, and then improve them even further. The results speak for themselves. Nicolai Urban

Dear Mika, it would be really helpful if you could be more specific. What exactly do you want to compare? There are so many options out there these days that it is difficult to guess what you mean. Having said that, as Avi pointed out, it is generally best to first get a good image by physical means and then do the computational improvement like deconvolution. That is true for widefield, confocal and also STED super-resolution. So, confocal and decon is better than confocal alone or widefield and decon. Although for some applications widefield and decon might be good enough. Steffen Dietzel

Many applications overlap but here are three examples of when laser scanning confocal is indispensable: 1) You cannot tell by widefield whether you are seeing a fluorescently labeled structure or reflection from light emitted by another bright area; 2) Some thick tissues; and 3) Some of the weird chambers brought in by engineers, transwell chambers, and other oddities. Perhaps this does not really address the original question, but we have found that where deconvolution requires sitting at another computer with another software package and paying a core facility a fee for use, it just isn't going to happen. Michael Cammer

Not sure if this has been said, but this is basically like asking “screwdriver vs. wrench”. A laser scanning confocal microscope is not superior to a compound microscope, and the converse is also true; they are different tools for different tasks. This is one of the key points I try to get across to students when I teach them about microscopy: we have macro, spinning disk, 2P, light sheet, stereo, confocal, STED, STORM, TEM, SEM, AFM, FIB-SEM, etc., etc., for a reason. They all excel at tasks that other systems struggle with. Along these lines, here are two scenarios: Scenario 1) You want to get a kHz sample rate of a voltage dye in a cultured neuron. In this case, a compound microscope with deconvolution (or likely even just a simple high-pass filter) is the clear winner, as all the pixels in the frame are temporally correlated (as long as you have a CCD or global-shutter CMOS), and the frame rate will be much higher than with confocal, even with the most cutting-edge technologies.
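
For readers curious what the “simple high-pass filter” alternative to deconvolution might look like in practice, here is a minimal sketch (Python/SciPy; the array names are hypothetical) that subtracts a heavily blurred copy of each frame to suppress the slowly varying out-of-focus background while leaving the kHz temporal signal untouched:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_highpass(frame, sigma=10.0):
    """Suppress slowly varying out-of-focus background by subtracting a
    heavily blurred copy of the frame (a crude spatial high-pass)."""
    frame = frame.astype(np.float64)
    return frame - gaussian_filter(frame, sigma)

# movie: hypothetical (time, y, x) widefield stack from a fast camera
# filtered = np.stack([spatial_highpass(f) for f in movie])
```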

Scenario 2) You want to measure the volume of densely packed nuclei using DAPI in a whole-mount sample. Deconvolution will quickly fall apart on this task simply because, the deeper you go, the more the total signal is dominated by out-of-focus light (much like trying to image a faint star right next to the sun). This means that the amount of information you have about the sample plane itself becomes nearly non-existent. Conversely, since a confocal microscope rejects that out-of-focus light optically, before it ever reaches the detector, you more or less eliminate this bottleneck caused by the dynamic range of the detector.

Also, one quick point about deconvolution. Unless you measure the Point Spread Function in the sample (such as using TetraSpeck beads) at a higher resolution than you acquire your image, you are not adding any information about the sample. Rather, you are whittling away information you wish to discard (i.e. it is a lossy process, much like JPEG compression). Along these lines, iterative blind deconvolution allows a computer to guess what information should be removed. Thus, just because the image looks better does not necessarily mean it is correct, otherwise STED, STORM, AFM, and cryo-EM would be obsolete. Benjamin Smith
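
As a concrete illustration of the empirical PSF measurement discussed here (a minimal sketch only; the file name and crop size are hypothetical, and a single well-separated bead away from the stack edges is assumed), one might prepare a unit-sum PSF from a sub-diffraction bead z-stack like this:

```python
import numpy as np

def psf_from_bead_stack(bead_stack, crop=(32, 64, 64)):
    """Crop around the brightest voxel of a single-bead z-stack, subtract a
    crude background estimate, and normalize the PSF to unit sum.
    Assumes one well-separated bead that is not at the edge of the stack."""
    stack = bead_stack.astype(np.float64)
    zc, yc, xc = np.unravel_index(np.argmax(stack), stack.shape)
    dz, dy, dx = (c // 2 for c in crop)
    psf = stack[zc - dz:zc + dz, yc - dy:yc + dy, xc - dx:xc + dx].copy()
    psf -= np.median(stack)      # background estimate from the whole stack
    psf[psf < 0] = 0
    return psf / psf.sum()       # unit sum, so deconvolution conserves flux

# bead_stack = tifffile.imread("bead_100nm_stack.tif")  # hypothetical file
# psf = psf_from_bead_stack(bead_stack)
```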

I agree that one needs to define the problem more carefully, and it is like the screwdriver vs. wrench question. There are also other issues to consider, one of them being photon economy. I'm a bit biased and removed from the bench, originally coming from a lab that relied heavily on widefield deconvolution data (the late Fred Fay's lab at UMass Medical School) and having experience with earlier generations of confocal microscopes. One of the common refrains was about the availability of fluorescence photons. Much like the work of Agard and Sedat, the work of Carrington, Fogarty, Fay et al. demonstrated the efficacy of a robust, iterative deconvolution approach, using a minimization function with a non-negativity constraint to resolve structures to 100–200 nm. Arriving at a best fit required providing certain inputs regarding anticipated feature characteristics that an informed imaging scientist would define and could also vary. Different variables would yield slightly different results, which could be used to help determine the best fit with other data. I think this level of engagement with and understanding of one's data is important. When one simply trusts either the computational technology or the imaging technology, poor choices are made with little understanding.

Regarding the Point Spread Function needing to be at a higher resolution, this makes no sense to me. The point of the PSF is to empirically model how light spreads in your particular system, under the conditions you're using. Use a sub-diffraction sized bead and image with the same parameters used to acquire the data. A restorative deconvolution doesn't subtract anything. It reassigns light to its purported origin. There should be constraints that the total integrated optical density be the same before/after deconvolution, else it's not really deconvolution but merely some sort of filter.
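
The reassignment-without-subtraction behavior described above can be seen in a generic Richardson-Lucy implementation (a minimal sketch, not any vendor's algorithm): with a unit-sum PSF, the iteration approximately conserves the total integrated intensity, up to edge effects.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=25, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a 2D image; psf must sum to 1."""
    image = image.astype(np.float64)
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        estimate *= fftconvolve(image / (blurred + eps), psf_mirror, mode="same")
    return estimate

# deconvolved = richardson_lucy(widefield_plane, psf_plane)  # hypothetical arrays
# print(widefield_plane.sum(), deconvolved.sum())  # totals should agree closely
```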

Confocal by its very nature rejects something like 90–98% of available fluorescence photons. That data is lost and irretrievable. This problem is compounded by sample photobleaching. The relatively poor photon economy means that, in comparison to widefield, many more photons are emitted per photon detected, and fluorescence can be exhausted before the data is even acquired.

Deconvolution – if done properly – not only can quantitatively reassign fluorescence to its point of origin, but in the widefield case it does so while collecting all available photons. This is why it's attractive as an alternative: it doesn't throw away data, and instead uses all available fluorescence data throughout a volume to restore light back to its point of origin. As others have pointed out, one can also deconvolve confocal and super-resolution images. This advantage is probably limited to those applications where relatively large volumes are imaged and might otherwise be photobleached by confocal laser excitation before being acquired. And yes, there are forms of structures that don't work well with deconvolution; for those, confocal is preferable. Time and expertise factor into whether this is practical, and for most, confocal is the most practical. Jeff Carmichael

I disagree with Jeff's statement: “Confocal by its very nature rejects something like 90–98% of available fluorescence photons. That data is lost and irretrievable.”

1. Trivially, if it is not digitized, it is not data. If it is from a far out-of-focus plane, deconvolution is not going to reassign it very well.

2. Reducing to the simplest case: one vs. two sub-resolution features; for simplicity, one or two 40 nm beads with some gap (or DNA origami), at the coverglass, in refractive-index-matched media (i.e., 1.4 NA objective lens, R.I. 1.518 immersion oil and mounting medium). So: no out-of-focus photons to reject, just at-focus-plane photons to collect, or not. With confocal, you have a choice of pinhole size (see my earlier email), and depending on the gap size (and wavelength), the pair may be resolvable. Sure, a GaAsP or hybrid GaAsP detector has lower QE in the visible (~40%) than a front-illuminated sCMOS (~82%) or a back-illuminated sCMOS, EMCCD, or CCD (~95%), but a confocal can have APD(s) with 80+% QE, so QE is a wash.

3. Field of view and scanning:

   a. Camera-based: you are at the mercy of whatever objective lens magnification and additional magnification is in the instrument, and readout is (typically) some number of entire rows, i.e., 25 × 2048 pixels for sCMOS (sure, some CCDs and EMCCDs have the acquisition area in the corner near the readout, so?).

   b. Point scanning confocal: just scan the area of interest (and maybe a few more pixels to give the GPU deconvolver a little more work), for example 25 × 25 pixels. Tweak the zoom as desired.

4. If we “change the game” a little and use reflectance (i.e., nanogold or nanodiamond in reflectance), a point scanning confocal makes it trivial to get just the in-focus light, with an effectively infinite number of photons available, so shrink the pinhole and shorten the wavelength as much as desired. For widefield, good luck finding anyone's research epi-illumination microscope that is clean enough and glare-free enough for this to work well (maybe an absolutely pristine light path with a darkfield condenser behind the specimen might work … good luck with that). George McNamara

True, if it's not acquired it's not data, but that's semantics because those same photons would be acquired as data in widefield. The point is that only a small fraction of the light that would be acquired with widefield illumination is acquired in a typical confocal configuration using the same objective lens, so I guess I should have said that more data from the same sample is acquired with widefield… which is simply stating the obvious when you look at a convolved, widefield, blurry image. To clarify the 90–98% rejection estimate: this is relative to widefield collection. Even widefield fails to collect the large majority of the emission because it is only collecting a cone out of a 3D sphere of emitted fluorescence, influenced somewhat by the polarity of the fluorophores, which generally are randomly distributed. Partly because of the geometry, most of the out-of-focus light from any particular object acquired in widefield is within a couple of microns of the focal plane, not at a large distance. Much of the light from a large distance from the focal plane is dispersed outside the collection angles. Good deconvolution algorithms take this into account.

Regarding the bead example [*one or two 40 nm beads with some gap (or DNA origami), at the coverglass*], this doesn't strike me as very representative of most real-world biological specimens, which tend towards many structures in or on a cell (and not in an ideally perpendicular plane), surrounded by many cells, with lots of fluorescence from different focal planes. However, even in this case, deconvolution could well provide the same resolution. Even with no additional out-of-focus fluorescent objects to muddle the situation, the widefield collection will collect far more fluorescence. You simply image a cube (image stack) even though the beads are on one plane, acquiring z planes above and below, just like you do when acquiring a PSF with a bead. Now, voila: lots of out-of-focus fluorescence (real data)… all of which can be used to fuel the deconvolution to describe the size, shape, and separation of the objects with ever-increasing accuracy.

Regarding your earlier email George, I am always impressed by your encyclopedic knowledge and deep understanding of imaging, and I can't compete, nor do I wish to :) Jeff Carmichael

Deconvolution needs Nyquist sampling, and this often means lots of z-slices, causing bleaching and, for live samples, potential phototoxicity. The latest implementation of lattice Structured Illumination Microscopy from Zeiss in the Elyra 7 (no commercial interest) has a “leap mode” which basically skips some z-slices. If I remember correctly, they claim that the missing information is recovered from the out-of-focus part of the signal. I think this would work only for samples where the signal is sufficiently sparse, so that the in-focus signal is not swamped by the out-of-focus signal. When it works, it speeds up image acquisition and reduces bleaching. Otherwise, the ApoTome comes to mind, using grid-pattern illumination without super-resolution. The Elyra 7 also has an ApoTome mode. In my experience, samples which are very inhomogeneous, such as cells in hydrogels, cells on silicon or other weird substrates, plants, and dense tissue slices, are better imaged with a confocal. Adaptive optics with a guide-star approach to set the parameters, as in Eric Betzig's lattice light sheet, might help to image some of these samples. Andreas Bruckbauer

Microscopy Listserver

8 bit vs 16 bit Images in All Microscopy (Thread Started May 1, 2019)

I am curious as to how people package digital data for themselves and other users. A few months ago on the confocal listserver we had a discussion regarding the sanctity of light microscopy data. One of the issues discussed was how to represent 12 to 16 bit data in an eight bit space.

Question: how do people routinely compress data into RGB for display and archiving, and when is it permissible to not preserve the original 12 to 16 bit raw data?

I have a similar question for TEM data. The new cameras on TEMs result in 16 bit images. However, for stained material, there cannot be real intensity information needing more than a few bits (or am I wrong about this?). When reducing bit depth, what is the best algorithm? Most reductions to 8 bits that I've seen involve putting the bottom x% at 0; for instance, the bottom 0.3% of pixel values are assigned to black. Does this risk losing the ability to see fine structure? Would it be preferable not to clip the darks at all? Is it ok to save only the 8 bit data and not bother with the 16 bit data? For instance, most people may not be prepared to deal with 16 bit data or consider it inconvenient.

I would very much like to know what is common practice and considered acceptable for both optical and electron microscopy images, both biological and material sciences. Michael Cammer
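
One way to make the “bottom x% to black” reduction explicit and reproducible is sketched below (NumPy; the percentile choices are hypothetical). It also records the mapping and reports how many pixels were clipped, so the conversion is documented rather than hidden:

```python
import numpy as np

def to_8bit(img16, low_pct=0.3, high_pct=99.7):
    """Percentile-clipped linear rescale of a 16-bit image to 8 bits.
    Returns the 8-bit image plus the mapping so the conversion is documented."""
    lo, hi = np.percentile(img16, [low_pct, high_pct])
    clipped = int(np.count_nonzero((img16 < lo) | (img16 > hi)))
    scaled = np.clip((img16.astype(np.float64) - lo) / (hi - lo), 0.0, 1.0)
    img8 = (scaled * 255 + 0.5).astype(np.uint8)
    return img8, {"black": float(lo), "white": float(hi), "clipped_pixels": clipped}

# img8, mapping = to_8bit(img16)  # keep `mapping` with the figure; keep img16 as the raw data
```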

This is an excellent discussion topic, and I can offer a practical perspective based on some time spent working with a variety of users in a shared instrument facility.

1. The ease of dealing with 16 bit images depends strongly on what camera and software packages you are using. Gatan hardware and Digital Micrograph work fairly seamlessly with 16 bit images (Ultrascan and above lines), and it's easy to down-convert to 8 bit TIFFs. So if your users spend most of their time within the Digital Micrograph environment, then there's no real reason to use anything but the highest bit depth offered, which is the default saving option within DM.

2. For other camera manufacturers, I've noticed that saving data in 16 bit TIFF format causes some issues with Windows (and Mac?) not being able to interpret the bit depth correctly, resulting in images being displayed as nothing but a flat gray frame in the file explorer view. This can obviously cause some frustration and confusion with users, especially corporate customers. For these cameras/software packages, I usually direct users to save their data in 8 bit format, which Windows (and Mac?) are able to display correctly.

3. It also depends on what the users are planning to do with the data. If a user is doing quantitative analysis of their image, for example correlating intensity at an atom site with the number of atoms in that column, or measuring sample roughness by looking at raw intensity values at neighboring atomic columns, then they would obviously need the maximum bit depth offered by the camera. Likewise, diffraction experiments benefit from as much dynamic range (and bit depth) as possible, so 16 bit is the way to go. On the other hand, if a user is simply using the microscope to image morphology or measure particle sizes, for example, then the bit depth of the images doesn't matter much. Yes, information is forever lost when saving in 8 bit, as it must be when going from 65,536 gray-scale values to 256, but for most applications I would bet that a typical user could not pick which image is at which bit depth.

Having said all that, I know that most common image handlers, like ImageJ, will handle most anything you throw at them. I'm a bit of a data hoarder, and with most universities offering unlimited google drive storage, I find that saving in both 16 bit and 8 bit gives me my cake and lets me eat it too (not at the microscope console, though!).

If you don't have unlimited Google Drive storage, you can find 10 TB hard drives on sale. That's a lot of storage!

Thanks for starting this interesting discussion. Looking forward to reading other thoughts on this matter. Chris

> I have a similar question for TEM data. The new cameras on TEMs result in 16 bit images.<

This is not new at all. First, after analog recording on negatives and, later, digitization, I cannot remember having stored only 8 bit data, going back to my early times in the 1980s. I have ALWAYS stored all data in 16 bit mode. The first camera we bought in the 1990s (it was a used one), a TVIPS 1k × 1k camera, already recorded data with 12 bit depth and stored them as 16 bit files. These files could easily be handled in the 1990s by ImageJ (which already existed at that time), or could be converted and saved as an extra file in 8 bit mode using the commercial TVIPS software EM-MENU3 (today, v.4). And I think the same holds true for quite a number of the early EM cameras (maybe I am wrong?).

Doing all this was part of my training of people using the EM.

> When reducing bit depth, what is the best algorithm? < I would not trust any algorithm. This is part of the early steps in image processing, and people have to learn this as one of the first steps (in any lab, my argument). If you do not want to use commercial software I would recommend ImageJ/Fiji - a software package which is “free.”

> Is it ok to save only the 8 bit data and not bother with the 16 bit data? < simply, NO. Saving 8bit only is not ok, IMO.

> Most people may not be prepared to deal with 16 bit data or consider it inconvenient.< this is loss of scientific data … just make people aware of this.

>to Chris “microwink” <

1. Gatan hardware and software is widely distributed, and Gatan's software can handle most of these tasks easily. There are image converters for Gatan's unique file format everywhere, and the rest, people have to learn. I assume (although I do not know) that other camera manufacturers offer similar software. At least TVIPS does.

2. 16 bit vs 8 bit data on Windows / Mac: this is not so much a problem of the operating system but of the 8 bit displays we have. This is a longer story to explain. But in the end, people have to learn to use proper SOFTWARE for analyzing EM data and histograms, and to save data in the proper format, for (a) scientific purposes and for (b) generating slides for presentations and publications. Again, there is a learning curve …

3. It all depends on what the user wants to do … but people have to be aware of data compression and become familiar with the software. We are not taking pictures; we are generating scientific data.

4. Sorry, Chris - NO: I do not tell people to store primary scientific data on a Google server. NEVER. Data are stored on University servers (tape robots, etc.). Reinhard Rachel

Excellent points. For universities using the Google Suite services integration, it's actually the university IT department who is in charge of data management, retention, and oversight. Of course, Google could peer into your data if they wanted, or were compelled to by a government agency, but there are easy ways to encrypt data uploaded to the cloud automatically using software like rsync or rclone. For sensitive data, like ITAR or corporate R&D, storing data on the cloud is obviously prohibited. I'm not sure that images of gold nanoparticles, for example, warrant such careful scrutiny, but ultimately each user is responsible for their own data. Chris

From the perspective of a materials science TEMmer, I believe it is never wise to discard the original, full bit-depth images acquired from the detector. For much of my work, the absolute value of the detected intensity (the number of high energy electrons per pixel) is important. The connection between the digital counts and the data we want is best known and most consistent for full bit depth images. A fixed, linear mapping from 16 to 8 bits could be OK if the number of detected electrons is small. However, conversions to 8 bit designed for desktop publishing are dangerous because the look-up table used for the conversion both changes for every image and (in general) is not preserved with the file. Once an unknown, undocumented transformation of the data has been made, the original intensity values are lost. We routinely convert to 8 bit images to make figures or to share images conveniently with colleagues, but we always keep the original data as close to the detector as we can. Disk space is cheap - keep your images! Paul Voyles
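
A fixed, documented linear mapping of the kind Paul describes might look like the following minimal sketch (NumPy; the counts-per-level value is hypothetical). Because the same scale is applied to every image in the series and is recorded with the data, the (quantized) counts remain recoverable, unlike a per-image autoscale whose look-up table is discarded:

```python
import numpy as np

def fixed_linear_to_8bit(counts, counts_per_level=16):
    """Apply the same documented linear mapping to every image in a series:
    8-bit value = detected counts // counts_per_level, saturating at 255."""
    return np.clip(counts // counts_per_level, 0, 255).astype(np.uint8)

# Record `counts_per_level` with the dataset: the (quantized) counts can then be
# recovered, whereas a per-image autoscale cannot be inverted once its LUT is gone.
```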

I just wanted to add a comment on reducing things to 8 bit from the original image. An old mentor of mine would always run around reminding people that 1) data is data, you don't change it, and 2) microscopy images are data. You wouldn't run an assay, then only save an interpretation of the data and not the original data. It should be the same here: you wouldn't take an image, then reduce the bit depth, requiring a loss of “scientific data” as Dr. Rachel mentioned, and only save that. You should always be able to get back to the original image as taken from the microscope. The only argument against this may be for more data-intensive techniques like light sheet, but even then there is some debate about what would be considered the original image that needs to be saved for archival reasons.

The only real problem with 12/16 bit may be displaying it on Windows, but if you are using the photo viewer you shouldn't be doing real analysis anyway. So for my users on a Leica confocal, I'll teach them how to manipulate the .lif file in ImageJ and guide them towards using that for their analysis (at the very least, how to open, mark, and save TIFF files from it), and then also save the final image from there, so they have the original image with all metadata and whatever bit depth it was taken at, plus reduced images for easy presentation.

I would avoid any automated reduction if possible. Photoshop used to show you clipped pixels when changing curves; I'm not sure whether there is an easy option to do so in ImageJ, but it is certainly possible. Then it becomes a question of whether what is being clipped, either from the dark or bright regions, is of value, and how much loss is acceptable for the downstream analysis (if only presenting the image you generally need to over-saturate it considerably; if measuring mean intensity you want to avoid any over-saturation in ROIs). Jason Saredy
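
For those who prefer to script the clipping check rather than rely on Photoshop or ImageJ, here is a minimal sketch (NumPy; the ROI mask and bit depth are hypothetical) that flags saturated pixels before a mean-intensity measurement:

```python
import numpy as np

def saturation_fraction(raw, roi_mask, bit_depth=12):
    """Fraction of ROI pixels sitting at the detector maximum (clipped highlights)."""
    max_val = 2 ** bit_depth - 1
    return np.count_nonzero(raw[roi_mask] == max_val) / np.count_nonzero(roi_mask)

# if saturation_fraction(raw, roi_mask) > 0:   # hypothetical image and ROI mask
#     print("ROI contains clipped pixels - exclude it or re-acquire")
```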

This topic has a very long history. I will agree with Reinhard and others. Don't go to 8 bits. Upgrade your data visualization/analysis programs! I also will refer everyone reading this post to the Microscopy Society of America Scientific Data Resource page which also addresses this issue: https://www.microscopy.org/resources/scientific_data

The MSA policy, which has been in place since 2003 (and I helped establish this policy), is below. It clearly elucidates that all data need to be saved in the original “raw” data format. Compressing/altering in any form must be documented in all cases. The argument that Windows and the users cannot display the data holds no water. Windows is not a scientist; you and your users are, and you must control your data, not a poorly written operating system. The users also need to be properly trained to understand why.

The simple message is: if you recorded data at 8 bits that is fine, but don't downsample (which changes the scientific content) and then call it real data. I might add, just to finish the thought, don't save data as JPEG or any other lossy “compressed” format. That also changes the scientific content. Use only non-lossy data formats (RAW, TIFF, PNG, HMSA…). Nestor Your Friendly Neighborhood SysOp

I generally agree with what's been said, but I think the “it's scientific data and therefore should never be deleted” argument has its limits. Anybody who uses an annular dark field or bright field detector in STEM is physically compressing “scientific data”; as a lot of people have been showing lately, there is more information in the full spatially-resolved diffraction pattern for many samples. However, recording a spatially-resolved diffraction pattern at every probe position is obviously inconvenient if you don't have a fast camera, and since the compression that a STEM detector performs is generally well-characterized, we all accept the physical compression that happens when you only use a STEM detector and not a pixelated detector as a necessary loss of “scientific data.”

As Chris pointed out, this argument applies to TEM images as well: if you just want to measure particle size, you can probably reduce bit depth and still produce a fully quantitative analysis. If there's a good reason to compress your image and you know exactly what you did, it's hard to argue that this is worse scientific practice than using a STEM detector. Tyler Harvey

I agree with Reinhard Rachel and Paul Voyles. Always save the 16 bit image. Be careful and deliberate with image processing. The problem we face is that human eyes have a limited grayscale capacity and cannot distinguish all the gray levels in a 16 bit image. The microscopist's job is to faithfully represent the image information in the report. There are color maps that one can use to display an extended range of values. If you choose to do that, it is important to include a color bar that shows the mapping. Some color mappings, such as the “jet” mapping that was the default in matplotlib, have been widely criticized for being misleading. The Viridis color map was developed to be a much better representation.

I am a firm believer that no image should be distributed without a description of how the image was recorded and processed. Reports with images with short captions and links to details are most helpful to your readers - and yourself. The statistician Karl Broman wrote: “Your closest collaborator is you six months from now, and you do not respond to email.” I did a lot of microscopy, image processing, and image analysis during my career. Having such information in my archived reports helped when a client came to my lab months or even years later and wanted me to extend an analysis with new samples. John Minter

The reason 8 bit images are so ubiquitous is that the human eye can only distinguish ~50 gray levels at one time. Rounding up to 64 levels gives 6 bits of dynamic range; then add one bit above and below to avoid clipping, and voila: 8 bits! Any more for images displayed to humans is a waste. However, do what Nestor says: whatever raw data comes from the camera, save it. Storage costs today are infinitesimal. A. John Mardinly

If I could redirect the question in a slightly different direction: how much of that data is useful, and where does noise overwhelm the worthwhile data? I admit that once the data has been collected, it should be saved. Even if there is a lot of noise, it may be possible to tease some worthwhile information out of it. Understand that I ask my question from the SEM side of the world and probably need a tutorial from someone on the TEM side.

Under normal circumstances on the SEM, even with good strong signals, noise dominates bits 7 and 8, and maybe even bit 6 or 5. I do a little exercise with new users to demonstrate this and try to persuade them that 16-bit images are a waste of space - at least in the SEM. I adjust contrast for the sample. I then try to find a nice homogeneous area, but to be sure, I take that area and raise the magnification to a hundred thousand times. The entire image should then be at the same gray level. I collect an image under normal recording conditions (or at least set the dwell time to the same value of 30 µs) and then look at the gray level histogram. If the peak is more than one gray level wide for an 8-bit image, and it always is, then the last bits are nothing but random noise. If the peak is 8 channels wide, then the last three bits are basically noise. And if bits 6–8 are noise, then why would I want to record bits 9–16, where they are nothing but noise?

Now this is where I need some instruction on the TEM side. On the SEM, we are adjusting the brightness and contrast to nicely fill the working range of the ADC. We get as much as we can out of those 8 bits. I would suppose that STEM in some imaging modes (e.g., HAADF) would be similar - that the output of the detector is scaled and then digitized. In other modes, where a CCD is used to record the image, I suppose the count per pixel is collected as the raw value without scaling. What range of counts is usual? I certainly would expect it to routinely exceed 256 (8 bits). Maybe it is less than 4096 (12 bits) and certainly less than 65536 (16 bits). The count would determine the necessary depth for each pixel.

I thus infer that TEM software must scale the raw data before display. The bits would have been shifted by several positions to get a decent image. That might explain why a 16-bit TEM image fails to display properly in some applications. I suppose those apps simply cut off the lower 8 bits and display the upper 8 bits rather than scaling to the maximum value. If the most significant couple bits were zero, then the image would be confined to the lower quarter of the grayscale, which would not be very satisfactory. If I have misunderstood, someone please instruct me - preferably gently. Warren Straszheim
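
Warren's homogeneous-area exercise translates into a rough “how many bits are real” estimate: measure the gray-level spread in a region that should be uniform, and take its base-2 logarithm as the number of noise-dominated bits. A minimal sketch (NumPy; the image crop is hypothetical):

```python
import numpy as np

def noisy_bits(flat_region, bit_depth=8):
    """Rough estimate of how many least-significant bits are noise, from an image
    of an area that should be a single uniform gray level."""
    width = np.ptp(flat_region)              # histogram width (peak-to-peak), in gray levels
    n_noise = float(np.log2(width)) if width > 1 else 0.0
    return n_noise, bit_depth - n_noise      # (noise-dominated bits, meaningful bits)

# flat = sem_image[512:640, 512:640]    # hypothetical homogeneous area at high magnification
# print(noisy_bits(flat, bit_depth=8))  # a width of 8 gray levels -> ~3 noisy bits
```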

Light and Confocal Microscopy

Confocal Listserver

Fastest Spectral Acquisition/Unmixing Speed (Thread Started April 23, 2019)

Does anyone have a sense for how different point-scanning confocals compare in terms of speed of spectral acquisition for unmixing? I know that Zeiss and Nikon both have 34 parallel spectral channels and this seems like the fastest solution over a broad range (with Zeiss using GaAsP detectors so probably faster), just wondering what others have experienced. G. Esteban Fernandez

The Zeiss and Nikon systems are fastest because they acquire in parallel. Our Nikon A1 can capture spectral data at 1 µs per pixel. Just keep in mind you are splitting your signal up into a large number of channels, so you need fairly bright signal. I suggest demoing both systems to see what works best for you. If you are really signal starved, the sequential spectral systems like Leica and Olympus are more sensitive, but they take longer to acquire and may have photobleaching issues depending on the photostability of your fluorophore. Craig Brideau

We routinely use the Zeiss 880 for spectral detection. The 34 channel detector tops out at 690 or 700 nm, but we can use an additional GaAsP detector to catch the light from 700 to 750 nm in one additional bin. We also demoed the Nikon A1 and were happy with the spectral detector, but I forget the specific wavelengths. Michael Cammer

Thank you Craig and Michael for your responses. The point about light splitting and sensitivity is well taken. Our current Zeiss 710 is in heavy demand so we're favoring parallel over sequential spectral for speed, with GaAsP in the spectral detector for sensitivity. I think Zeiss and Nikon have about the same range (400–700 or 750 nm). G. Esteban Fernandez

We use the Leica; it can collect the images for spectral unmixing either in parallel (5 channels) or serially (adjustable range and bin width). Depending on the sample and the dyes, I often find that the five channels are sufficient. Happy unmixing. Richard Cole

That's 5 colors. I was wondering what you need the other 29 channels for on a 34 channel system? We have done six colors on the same slide (on a 5 channel system without unmixing, by sequential scanning; pretty much your channels plus Alexa 680). If you manage more than 10 colors with a reasonable separation, I am impressed. With 10 or fewer, two sweeps on a 5 channel system should be sufficient, no? You would have to go over it twice, but you also collect noise only 10 times, not 34 times. I don't have much experience with computational unmixing, but it does not seem a sure thing to me with which way of scanning you would reach the same image quality faster. Maybe I am missing the point. Steffen Dietzel

> that's 5 colors. I was wondering what you need the other 29 channels for on a 34 channel system?<

To get the right intensities of the different fluorophores, you need to solve a system of linear equations. When the spectra are too close together, this gets more difficult and the error in the result increases. Having more data points can help, especially if you have good enough signal to noise. But splitting the signal into more channels decreases the SNR. It would be interesting to see if there is an optimum number of channels. As I remember, the Zeiss software can do the spectral unmixing on the fly and display the unmixed virtual channels during imaging; I am not sure about the others.

If your features do not overlap (e.g., separate cells and high enough resolution to resolve them), you actually can get away with just 2-3 channels and distinguish many colors, similar to how an RGB computer monitor can display them, e.g., CFP, GFP, YFP, and RFP with 3 fluorescence channels. By combining membrane and nuclear staining, 10 different cell types can be distinguished using these 3 fluorescence channels. www.sciencemag.org/cgi/content/full/science.aad3439/DC1 Andreas Bruckbauer
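
The linear system Andreas describes can be made concrete with a small non-negative least-squares sketch (NumPy/SciPy; the 4-channel reference spectra and pixel values below are hypothetical, not measured spectra):

```python
import numpy as np
from scipy.optimize import nnls

# Reference emission spectra: one column per fluorophore, one row per detection
# channel (hypothetical values, each column normalized to sum to 1).
M = np.array([[0.05, 0.40],
              [0.35, 0.35],
              [0.40, 0.20],
              [0.20, 0.05]])

def unmix_pixel(channel_counts):
    """Non-negative least-squares estimate of per-fluorophore intensities."""
    abundances, _residual = nnls(M, np.asarray(channel_counts, dtype=np.float64))
    return abundances

mixed = M @ np.array([120.0, 40.0]) + np.random.poisson(2, size=4)  # one noisy pixel
print(unmix_pixel(mixed))  # should be close to [120, 40]
```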

There is some literature by Neher and Neher discussing ways to optimize channel settings for separating a given set of labels (https://doi.org/10.1111/j.1365-2818.2004.01262.x), and there is an accompanying ImageJ plugin. I am not sure how well it can be applied to the reality of your instrument - this will probably depend on how well you know the actual additive (read-out) noise and QE of your detector, the expected relative brightness of your dye species in the sample, and how well you can define the goal of your experiment. It applies to a situation where all dye species can be present within the region represented by each individual pixel and you are interested in relative intensity. In other words, it assumes you wish to follow up with spectral unmixing, which always determines an estimate for the ratio of photons received from the respective dye species. When using APDs, as we do in our microscopes, which have negligible additive noise, it is safe to assume that using all your channels and then optimizing the detection bands (roughly speaking, the goal here is to have the inversion of the mixing matrix not subtract large multiples of the detection channels from each other, which would amplify noise) will give optimum SNR in the result.

As Andreas has mentioned, assuming perfect spatial separation of the species on scales accessible by the resolution of the instrument (i.e. when we can assume that the light in each pixel stems from only one dye species), the task shifts from unmixing to classification of pixels and two detection channels are theoretically enough when the detection bands are carefully selected. Super-resolution methods like STED can obviously help with this in the spatial domain, while multi-color single-molecule based methods, where bursts of photons are classified, rely on temporal separation (vs. spatial in the above case). In both cases the same considerations apply to the classification process itself: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Fluorescence+nanoscopy+goes+multicolor&btnG=). Andreas Schönle

Confocal Listserver

Strange Shape Appears on Dichroic After Blowing with Compressed Air (Thread Started March 28, 2019)

I saw that the main dichroic we are using for a multiphoton microscope was getting dusty and decided to clean it with compressed air. The air duster we use is from Newport ( https://www.newport.com/f/canned-air-duster ). Strangely, on one of our blows a white area appeared on our filter, forming an elliptical shape which quickly turned into a white/green-ish halo (see image here: https://imgur.com/rlrP0yB ) that does not seem to be going away. This is on the side opposite the AR coating. I was under the impression that using compressed air was a safe way to clean optics. Has anyone else encountered a similar problem? Has the compressed air damaged the coating in some way? Thanks.

Steven Hou

My guess is that as the air from the can expanded, its temperature dropped, cooling the mirror and allowing water from the air to condense on the surface. If true, I would want to check with the manufacturer before assuming that the mirror is unchanged. Good luck. Martin Wessendorf

Oh, Steven, that's not good. The canned air contains propellant that can end up contaminating your optics. Use a bulb blower instead, like this one: https://www.bhphotovideo.com/c/product/1317992-REG/visibledust_19112366_zee_pro_sensor_cleaning.html (no commercial interest, but I use them in our lab). If the artifact you are seeing is not going away then it is probably contaminant from the can. You will need to remove the filter and clean it.

Craig Brideau

My experience from years ago when I was a field service rep indicates to me a mist of compressor oil from when the can was charged. This is not unusual to see with canned air especially on front surface mirrors and dichroics. You are at the mercy of the bottling company when you use those cans. Sometimes you get one that does that. You might want to try the “Optic Bulb Blower” on the same page as the link you provided and use a micron filter on the intake side so you don't get dirt into the bulb. If you do go that route, replace the bulb every couple of years because eventually the rubber will break down and contribute its own particles. As long as they are new and internally clean they work well. You should check with the manufacturer of the dichroic to find out what they recommend to clean it. Some dichroics go to pieces when wet so be careful. Dan Focht

It could be both of the above, so think back on how exactly you were cleaning your optics. Was it only for very short bursts, or was it longer? A longer blast could have caused a drop in temperature and the problems Martin suggested. Did you always hold the can upright and not rotated (i.e., as it would stand on a shelf)? If not, this greatly increases the chance of things other than air spitting out of the can (such as propellant, as Craig suggested), which would then need to be chemically removed (risky when applied to a dichroic, unfortunately). Both of these problems can be avoided by using a manual air puffer/blower instead. Good luck in restoring your optics! Nicolai Urban

As mentioned, it could be condensation dissolving some dirt on the glass and then evaporating. Also, these cans are full of liquid, and if you're not careful (shaking the can or not holding it upright, especially when full), the liquid is expelled and can leave a residue. Some cans also contain a bittering agent! (I always try to pick the ones that don't; I don't know about the Newport product, so try to find an MSDS.) I would just clean it with an ethanol swab or tissue. Zdenek Svindrych

Stains from compressed air are common, especially if you have the can upside down when you try to dust. Usually it is some oil left over from the manufacturing/packaging process that was dissolved in the r134a while under pressure. I've never had trouble cleaning it off of lenses using methanol/lens paper. Remove the filter, get some forceps and lens paper, and wipe it off. Michael Giacomelli

The main vendors of dichroic mirrors both recommend using alcohol + lens paper for cleaning:

https://www.chroma.com/support/technical-support/cleaning-handling-and-orientation

https://www.semrock.com/cleaning-optical-filters.aspx. Following their recommendations is probably a good idea here. If you want to try more advanced cleaning methods, I would reserve them for when alcohol (or acetone) do not work.

Michael Giacomelli

Thank you everyone for your great insights and advice. I will contact the manufacturer to confirm how I should proceed with cleaning the mirror. One thing we have in our lab is the “First Contact” polymer cleaning kit from Newport (https://www.newport.com/f/polymer-optic-cleaning-kits). Would you think that this would be safe to use for cleaning dichroic mirrors in general, or might this be riskier than directly using solvents like acetone or methanol? Thanks. Steven Hou

First Contact works well if you know how to use it. It takes some playing around with the layer thickness to get it right. If you don't know what you are doing you end up with bits of First Contact stuck to your optic. I would start with isopropyl and/or methanol first, along with forceps and a proper optics cleaning tissue. Craig Brideau

Electron Microscopy

Microscopy Listserver

Heat Polymerizing LR White Popoffs (Thread Started April 10, 2019)

I would like to use LR White resin to pop off a de-paraffinized tissue section from a slide and then perform immunolabeling for EM. The problem is that I have not found a good way to heat polymerize the LR White without introducing air into the BEEM capsule that is upside down on the slide. I have used gelatin capsules with LR White for embedding pieces of tissue and it has worked well, but gelatin capsules will not work upside down on the slide; the resin flows out. Has anyone attempted this and had success? Gayle Schneider

Two thoughts occur:

First, leave the gelatin capsule right-side up, overfill it enough to get a convex surface, put a thin layer of LR White on the section, just enough to infiltrate the section and exclude air from the tissue, then put the slide on the capsule upside-down (section down). Given the sizes of gelatin capsules and paraffin sections, you should be able to use 3 or 4 capsules and so balance the slide. Better, collect the sections on 22 mm square coverslips.

Second, use BEEM capsules but first coat the outside with an oxygen-impermeable coating: dip the BEEM capsule in an epoxy resin and polymerize it, or perhaps use (clear) fingernail polish. Overfill the BEEM capsule and mount the sections on the BEEM capsule as above, but after getting the section mounted on top of the capsule, invert the capsule so it is upside-down on the section, as is usually done. Phil Oshel

I had to do this once and almost lost my mind… people, don't laugh. Try it the other way around. Line up the capsules in a holder and fill with LR White to the top until it makes a dome shape above the end of the capsule. Now lay the slide with the tissue downward on top of the capsule. Some LR White will run down, but that is OK. This worked for us. After polymerization, carefully cut away the area around the capsule. Lita Duraine

For me, BEEM capsules deform when LR White is heat polymerized. Surrounding the tissue with a gasket and then applying a sheet of Aclar film over the top might work; afterwards, the hardened resin/tissue could be glued onto a resin stub.

See this paper: “A Novel Technique for Flat-Embedding Cryofixed Plant Specimens in LR White Resin.” Joseph Mowery

Microscopy Listserver

LaB6 or CeB6? (Thread Started February 26, 2019)

I would like to hear the pros and cons of a CeB6 cathode in comparison to LaB6. Erico Freitas

CeB6 is a little bit more expensive than LaB6; this is the only disadvantage I can think of… It is less sensitive to contamination than LaB6, yet you will still need a clean, ultra-high vacuum in the gun (although not as low a pressure as a FEG requires). CeB6 has the advantages of a lower evaporation rate at operating temperature and a lower work function (2.7 eV for LaB6 vs. 2.4 eV for CeB6). Therefore, you will get a longer lifetime with CeB6 than with LaB6, and slightly better brightness. If you can afford a CeB6, go for it! Julien Allaz
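
To put the quoted work functions in perspective, the Richardson-Dushman equation, J = A T² exp(−φ/kT), predicts the relative thermionic emission; here is a minimal sketch (Python; the operating temperature and equal Richardson constants are assumptions) comparing the two emitters at the same temperature:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def richardson_current_density(work_function_ev, temperature_k):
    """Thermionic emission per Richardson-Dushman, in arbitrary units
    (the same Richardson constant is assumed for both materials)."""
    return temperature_k**2 * np.exp(-work_function_ev / (K_B * temperature_k))

T = 1800.0  # assumed operating temperature in kelvin
ratio = richardson_current_density(2.4, T) / richardson_current_density(2.7, T)
print(f"CeB6 / LaB6 emission ratio at {T:.0f} K: {ratio:.1f}")  # roughly 7x at equal temperature
```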

I have not used one, but I think it would be quite comparable. I am used to LaB6 filaments in higher end research microscopes. I have only ever heard of CeB6 in desktop microscopes. CeB6 should be better than tungsten and maybe just a little worse than LaB6. If it is in a desktop microscope, I would be more concerned about the other design details. How is the rest of the column? How does it compare to a regular research microscope? For our service lab, we run a field emitter on an FEI Quanta that is now rated at 1 nm resolution. We routinely push 100kx (based on a 5-inch Polaroid reference). If you based it on the image enlarged to the computer monitor, we would be pushing 300kx. How does that compare to your spec? Warren Straszheim

Hi Warren, Thanks for your reply. We have been using a LaB6 emitter in our TEM Tecnai ST20. We have also used Denka and Kimball. We had a look at the LaB6 and CeB6 specs and found out that they are pretty much the same, but CeB6 has a lower vapor pressure, perhaps a lower work function, and it might have a longer life. But what puzzled us is that neither Kimball nor Denka produce CeB6 filaments, so we were wondering about its quality, though we would like to give it a try. Erico Freitas