
NetNotes

Published online by Cambridge University Press:  16 January 2020

Bob Price, University of South Carolina School of Medicine

Copyright © Microscopy Society of America 2020

Selected postings are from discussion threads included in the Microscopy (http://www.microscopy.com) and Confocal Microscopy (https://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy) listservers from September 1, 2019 to October 31, 2019. Postings may have been edited to conserve space or for clarity. Complete listings and subscription information can be found at the above websites.

Techniques and Problem Solving

Confocal Microscopy Listserver

Problems with old samples and immunohistochemistry

I need some help troubleshooting an immunohistochemistry (IHC) protocol for a lab. Ninety-nine percent of what reaches my facility is fresh, or relatively fresh, material from tissue culture or small animal experiments. The samples we are having issues with are from human brains that have been in formalin for a number of years (some for over a decade or two) and, according to records, the tissues were not prepared immediately; some were processed 48 hours post-mortem. The IHC has seen little success: we are getting mostly background with little specificity. A couple of samples do seem to have worked, but we cannot rely on one or two slides (out of many dozens). In some samples, even the DAPI is not working well. Here are some things the lab tried in order to improve the IHC protocol:

  • 24 h incubation in paraffin (as part of the paraffin block protocol).

  • Antigen retrieval with citraconic anhydride (supposed to reverse the formalin crosslinking).

  • Increased primary antibody concentration to 1:100.

  • Tyramide Signal Amplification (TSA) for secondary staining in order to strengthen the signal.

  • Two blocking protocols were tried, both of which are supposed to work in this type of IHC: CAS-Block™ and goat serum.

  • A MaxBlock™ kit was used to lower the high autofluorescence inherent to these samples. Can this be blocking the antibodies? We have never worked with this before and it gives a black tint to all the samples. Supposedly without this, the samples are impossible to image.

Does anyone have experience working with such old samples, and has anyone successfully done IHC on them with multiple colors? Any tips or ideas are very much welcome. Avi Jacob

Human brain and postmortem: This is definitely a tough one. I would first try to reduce the autofluorescence without staining the sample. For example, you can try bleaching the sample (e.g., exposing the sections overnight in a hood with a UV lamp). NAD and flavins usually bleach very nicely. You can also try hydrogen peroxide. Once you find a way to reduce the autofluorescence you can optimize the staining. I would guess that paraffin embedding would make things worse, but if you need it, from what we have seen Sudan Black seems to reduce the fluorescence due to paraffin embedding. However, it sounds like MaxBlock™ is similar. You could also try to use linear unmixing on the microscope. Sometimes you cannot see the signal, but if you manage to exclude the autofluorescence spectrum you find that the staining has worked. Then it is easier to troubleshoot. You can acquire the ‘pure’ fluorophore spectrum by immunostaining cells in culture. Then you use this spectrum to unmix your real sample. Good luck! Sylvie Le Guyader
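Linear unmixing treats each pixel's measured spectrum as a weighted sum of reference spectra, here a ‘pure’ fluorophore spectrum plus an autofluorescence spectrum, and solves for the weights. A minimal per-pixel sketch in Python, with made-up Gaussian reference spectra standing in for measured ones:

    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical reference spectra over 32 detection channels:
    # a "pure" fluorophore spectrum (from immunostained cultured cells)
    # and an autofluorescence spectrum (from an unstained section).
    ch = np.arange(32)
    fluor_ref = np.exp(-0.5 * ((ch - 12) / 3.0) ** 2)
    autofl_ref = np.exp(-0.5 * ((ch - 18) / 8.0) ** 2)
    A = np.column_stack([fluor_ref, autofl_ref])

    # A measured pixel spectrum: strong autofluorescence hiding a weak signal.
    rng = np.random.default_rng(0)
    pixel = 0.2 * fluor_ref + 1.0 * autofl_ref + rng.normal(0, 0.01, 32)

    # Non-negative least squares recovers the abundance of each component.
    abundances, _ = nnls(A, pixel)
    print(f"fluorophore: {abundances[0]:.2f}, autofluorescence: {abundances[1]:.2f}")

Commercial systems perform this fit for every pixel, which is why a weak but spectrally distinct stain can emerge once the autofluorescence component is excluded.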

With brain samples prepared under much more favorable conditions, I have had success spectrally unmixing the autofluorescence using a 32-PMT Nikon A1 (no commercial interest). Leica and Zeiss scopes should have comparable functionality, though Leica uses a different mechanism. Separately, if you can find a FLIM system, you may be able to isolate the antibody signal in the frequency domain, especially if you use probes with really distinct lifetimes. Lanthanides would be ideal, but I've never seen those used for imaging, just plate-reader FRET-FLIM. Long-shift probes with built-in FRET might also do. It's a technical challenge, but your particular needs may justify the trip and effort. Timothy Feinstein

Microscopy Listserver

HM20 resin issues – HELP!

Has anyone had issues with infiltration and embedding of liver (less than 1 mm²) with HM20 resin for immunoelectron microscopy? I've been having issues with chatter and with areas of the tissue not infiltrating well. I first tried an old batch of HM20 resin, and then a new batch. Both ended up with white, crusty tissue embedded in the resin (mostly in the top and middle parts of the tissue). After milling until all of the crusty part was gone, the tissue seemed okay to section, but there was too much chatter. There was no improvement after further curing at -20°C with UV light for 24 hours or more. Rudy Alvarado

I think the infiltration is incomplete due to incomplete dehydration. I would suggest increasing the time for dehydration in alcohol to enhance infiltration. If you share your embedding protocol I can suggest further steps. Regan M.

To answer your questions, this is what I have done:

Trial 1 - room temperature immunoelectron microscopy processing protocol: Liver pieces were fixed in 4% paraformaldehyde and 0.5% glutaraldehyde in PBS (pH 7.2). They were washed in PBS and water, and dehydrated in a short graded ethanol series (25, 50, 75, 95, and 100%). Tissues were washed in anhydrous ethanol and infiltrated in 50% HM20 resin with Z6040 embedding primer, followed by 100% resin exchanges with Z6040 (left overnight, with 4 exchanges every hour or so). I then cured the tissue at -20°C with UV light in fresh HM20 resin with Z6040 for 24 hours. The HM20 was mixed for 2 hours with nitrogen gas bubbling. I also used a microwave processor during the processing.

Trial 2 - I did a long graded ethanol dehydration series (25, 50, 55, 65, 75, 80, 85, 90, 95, and 100%) followed by anhydrous ethanol washes. I also increased the HM20 resin exchanges to three days (2 overnights with a total of 6 resin exchanges). Everything else was the same as trial 1.

Trial 3 (in process) - Using high-pressure frozen (HPF) tissue with 1-hexadecene as the cryoprotectant. HPF samples were placed in an immunoelectron microscopy cocktail (0.5% glutaraldehyde, 0.1% UA in acetone) and left in a liquid nitrogen dewar until freeze substitution. Rudy Alvarado

Thanks for the detailed protocol. The second protocol looks good; I can see problems with the first protocol. If the third protocol does not work, I suggest starting infiltration with 20% HM20 resin and gradually moving to 100% over 2 days at 4°C to preserve immunogenicity. Also run a control of pure resin without any tissue sample to check the performance of the resin. Sometimes the resin absorbs moisture if the bottle is left open, and this makes polymerization difficult. Regan M.

We've discussed this on several occasions, and it is said (by Heinz Schwartz among others) that the HM20 monostep can be difficult due to partial polymerization prior to use, and that some brand-new bottles are in fact not OK. Randi Olsen

I think bad polymerization may be the issue. Electron Microscopy Sciences' HM20 technical data sheet says that initiation of polymerization is largely independent of temperature, so blocks may be polymerized at the same temperatures used for infiltration. I wonder if you infiltrated your samples at -20°C for the last few steps before curing them at -20°C. Another factor may be the embedding molds. I usually use BEEM and sometimes gelatin capsules. I have experienced bad polymerization with flat embedding molds. Gang (Greg) Ning

We purchase the Lowicryl HM20 kit from EMS. For infiltration I do exchanges with a microwave (3 min at 120 W with 20 mm Hg vacuum pressure) followed by 30–60 minutes on an orbital rocker at 4°C. On the last infiltration exchange I let it sit at -20°C for 60–120 minutes before UV curing at -20°C in the freeze substitution unit. I think it might be the flat BEEM capsules we are using. On the HPF/FS samples I will try both flat BEEM capsules and gelatin capsules. Z-6040 embedding primer (from EMS) is supposed to help with sample and resin adhesion, and to prevent the sample from pulling away from the resin when sectioning. I don't know the chemistry behind this. Lastly, I have never had bad effects from using it. Rudy Alvarado

I'm a little astonished by how much voodoo comes up in this HM20 thread. HM20 is really not complicated to use. 1. Make sure you don't bubble air through the resin. Mixing the components carefully by pipetting without bubbling is enough. It's stable in the fridge/freezer for at least a month. 2. Polymerize the resin under nitrogen gas (= in the AFS) if you have open/leaky vials. 3. If you can't polymerize under nitrogen, make sure your vials are filled to the top with resin and seal them as well as you can. 4. Polymerize 24–48 h under UV; if it doesn't polymerize by then, it will never polymerize (the only thing that will happen is that the unpolymerized resin will slowly evaporate). Let the sample sit open overnight in the hood for any unpolymerized resin to evaporate. This should give you dry, hard blocks. They can be soft at the top if they were in contact with air.

If you have infiltration problems (which is most likely the case based on your description), do more and longer dehydration/infiltration steps. There is a slight viscosity difference between the solvent and HM20, which in dense tissues can cause poor infiltration. If you start infiltration with 25% HM20 in solvent that problem should go away.

Also keep in mind that 1-hexadecene has a melting point of 4°C, so it will be solid during freeze substitution. If your sample is merely surrounded by it, the excess solvent in the FS will dissolve it. If it's deep in your tissue (= you threw the tissue into hexadecene and let it sit), it's not going to come out at -20°C.

Z-6040 is a silane (see the MSDS: [3-(2,3-epoxypropoxy)propyl]trimethoxysilane). I struggle to understand why you would want to use it, particularly for HM20. The common use is for materials science samples where you have a flat surface of some solid material that doesn't bond with the resin, such as metal foils. The silane covers the inert surface and has an epoxy group that can then crosslink with an epoxy resin mixture. I'm also not a fan of applying a vacuum to resins, because you're not entirely sure which resin components evaporate and which stay behind. Chris Buser

Folds in sections of cells grown on Transwell filters

I am doing TEM of embryonic cells grown on polycarbonate membranes in Transwell dishes. The cells grow in layers of two or three cells. After fixation (aldehydes, osmium) I remove the filters with the cells on top, cut them into little pieces, and proceed to dehydration and embedding in Eponate 12, placing the pieces in flat molds. I cut ultra-thin (60 nm) sections perpendicular to the membrane. I get lots of folds and wrinkles that radiate from the membrane to the cells. A pity, because the cells look really nice, but it is hard to image them because of the many folds. Has anyone done TEM of cells on Transwell or other types of filters and have any suggestions on how to avoid the folds? Thank you!! Amalia Pasolli

The cause of the folds may be the thinness of your sections. Try cutting at 80 nm, or if you need your sections to be so thin, you can try grids coated with formvar or carbon support films. Juan Carlos Leon

Recently I've experienced the same problem with chemically fixed cells on Transwell (PES) filters and on track-etched polycarbonate membranes, dehydrated in ethanol and embedded in EPON (Mollenhauer-based mixture, medium-hard). In my case sectioning thicker doesn't help that much; the wrinkles are already visible when the sections come off the diamond knife. Only sections less than 40 nm thick resulted in less wrinkling. Additional stretching with chloroform vapor doesn't help. I haven't tried stretching with a heat pen or adding a drop of 100% ethanol to the water bath (I avoid the latter since it makes it almost impossible to section ribbons). The whole behavior of the material during sectioning appears to be due to a difference in sectioning properties between the resin and the membrane (even though the membrane is also infiltrated with resin). While the resin compresses somewhat during sectioning and stretches again on the water bath, the filter doesn't follow this stretching, and thus wrinkles are formed. The interesting thing is that Transwell filters have been successfully used for high-pressure freezing/freeze substitution and resin embedding (Morphew et al., Journal of Microscopy, vol. 212, pt. 1, October 2003) without this wrinkling being a problem. It may be that for these filters dehydration in acetone allows better resin infiltration of the filter, or that a harder resin mixture (less compression during sectioning) may prevent the wrinkles from forming. Rob Mesman

In response to Amalia Pasolli's questions about working with cells on Transwell filters: I published a short paper in Microscopy Today a few years ago about embedding cell monolayers, both in flasks and on Transwells (doi:10.1017/S1551929513000485, Microscopy Today, May 2013). There is a typo in the resin formula: NMA should be 6.0 g. The resin is different in the filters than in the cells, but this method has worked well for me for many years. It's the only reason I still keep any propylene oxide in the lab. Acetonitrile, which we've switched to for most purposes, doesn't make the filters curl. Lee Cohen-Gould

Confocal Microscopy Concerns

Confocal Microscopy Listserver

Confocal imaging scan speed

I'd like to put a finer point on a question about scanning speed. Like some others, I encourage users to just set the fastest speed and leave it there. Leica SP8s constrain the field of view at faster speeds, so I suggest SP8 users choose 700 Hz as a good balance between speed and field of view. My reasoning comes strictly from my own testing. I found that you pay a lot more in bleaching when you try to get a brighter signal by slowing the scan speed than if you just turn up the laser power. Since the slowest speeds drastically increase bleaching, I use them for bleaching/photoactivation of a region of interest (ROI) and that's about it. As I understand it, the strong relationship between scan speed and bleaching has to do with longer laser dwell time increasing the chance of exciting fluorophores to the triplet state. Hence, this is why resonant scanning plus line averaging is gentler on a live sample than a galvo scan of comparable signal-to-noise. Is that right? Thanks! Timothy Feinstein

I have always thought that the longer the pixel dwell time, the more accurately the image represents the sample. However, there must be an optimal scan speed that relates to Nyquist sampling (Nyquist indeed first theorized about temporal sampling). Perhaps modern electronics perform analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC) so quickly that sampling speed just isn't a factor in modern confocal imaging as far as the electronics are concerned. That being said, there must be a “sweet spot” for temporal sampling, although it isn't really discussed much. I default to a scan speed of 1.02 µs/pixel. I would be happy to hear how and why I am wrong. Brian Armstrong
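For concreteness, pixel dwell time follows directly from the line rate and pixels per line. A rough back-of-the-envelope sketch, assuming a unidirectional galvo scan and a nominal duty cycle (the fraction of each line period spent acquiring rather than turning around; both numbers are illustrative, not from any specific instrument):

    # Approximate pixel dwell time for a point scanner.
    line_rate_hz = 700      # e.g., the SP8 line frequency mentioned above
    pixels_per_line = 512
    duty_cycle = 0.7        # assumed usable fraction of each line period

    dwell_s = duty_cycle / (line_rate_hz * pixels_per_line)
    print(f"dwell ~ {dwell_s * 1e6:.2f} us/pixel")  # ~1.95 us/pixel

At 700 Hz and 512 pixels per line this lands right around the 1–2 µs/pixel defaults discussed in this thread; doubling the line rate halves the dwell and, all else being equal, the photons collected per pixel.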

I teach our users what scan speed means and why you sometimes use a fast speed and sometimes slow. I don't have a hard and fast rule, but do have recommendations depending on the system for settings that work well for most fixed samples. For live cell/tissue imaging, there are often compromises to be made, i.e., scan speed usually higher/faster and no averaging and low laser power particularly if we need high temporal resolution. We will also sometimes open up the pinhole as I mentioned in an earlier post. I think it's important for the users to understand what scan speed means so that they can make their own decisions. The level of noise in the image is the main thing that determines what scan speed and averaging we use. If the voltage/master gain is low, then we can go fast and don't have to use averaging. However, if the voltage/master gain is high and we have fixed samples with good quality, then we have choices to make, i.e., do we increase laser power, slow scan speed down, use averaging or most often a combination of all three? If users are acquiring z stacks, then scan speed is very important as the time for acquisition is then multiplied. At this time investigators might decide to skimp on sampling in z (acquire fewer slices) so understanding what they can do to enable a faster scan speed for good quality images is helpful as long as they are also mindful of photobleaching. I stress to the users that they should always check the quality of their image by looking at 100% of the pixels. If they don't do this and have the image window set to display the entire image in a smaller space, they will not see the noise and will be disappointed later. For labelling of punctate structures, it's even more important not to have excess noise present. Jacqueline Ross

The spinning-disk people say that the critical point for bleaching is maximum laser power, and I tend to believe it. I think what you describe could be true for continuous-wave lasers, but maybe not for pulsed lasers, where the fluors have more time to drop back to the ground state before the next excitation hits. In any case the laser power is important. But I admit I have not thoroughly tested this, and should it really make a difference whether the pixel dwell time is 0.5 or 1 or 2 μs? In both cases there is ample time to go to T-states. Resonant scanning with times < 50 ns may be different. Has anyone published a paper on this? Fast scan speed and averaging or higher laser power versus slow scan? I encourage our users to test for bleaching by recording the same image (stack) twice and checking for an intensity decrease in the second stack. If there is none, time can be saved by using higher laser power (up to the point of saturation). Fortunately not many of our users use FITC or APC, and modern fluors are pretty stable. Steffen Dietzel

That is the nut of my question. Has anyone ever published a paper on this? Fast scan speed and averaging or higher laser power versus slow scan? I have tested this with gas and solid-state lasers and, as far as I can tell, strictly from my own experience, you get more sample loss by slowing the scan than by raising laser power to get a comparable increase in signal. However, I'd gladly submit to any properly done testing out there. In principle, if a 100 MHz laser has a 10 ns pulse interval, that's short enough to doubly excite most common fluors. It seems reasonable, then, to think that a solid-state laser could also endanger the sample more by scanning slowly than by scanning fast with higher pulse amplitude. Timothy Feinstein

Point scanning systems are (hopefully) shot-noise limited, so what (hopefully) matters is the number of photons per pixel. If you scan fast at high power, or slow at lower power, or fast at low power but average, and in each case get (in expectation) N photons per pixel, you have a signal-to-noise ratio (SNR) of N^0.5 in all instances. If you want a better SNR, more photons are required, and it won't matter how you get them. Scanning speed becomes a factor in shot-noise-limited confocal systems, however, because detectors have a non-zero dark count rate and a maximum detection rate before they saturate. Since dark counts and photon counts both contribute shot noise, if you try to image with a dwell time that isn't much less than the inverse of the dark count rate, the darker pixels in the image will start to be dominated by dark shot noise rather than photon shot noise. In this case one should either image faster or get a detector with fewer dark counts. The maximum detection rate similarly puts an upper bound on how fast one can image for a given SNR by limiting the number of photons any pixel can contain. For example, with a hypothetical detector that has 1 dark count per microsecond on average and a maximum detection rate of 100 photons per microsecond, you would want to image with a dwell time of less than 1 microsecond and would be limited to a maximum SNR of less than 10. This would be a very bad detector, since its dark count rate and maximum detection rate are very close. For real photomultiplier tubes (PMTs), the dark count rate is usually extremely low (kHz) while the maximum count rate is somewhere around a few billion, so you can image very slowly without being limited by dark counts; but if you try to image with dwell times of much less than a microsecond, the SNR pretty quickly drops into the 20s or below, and you either have to slow down or start averaging. This is why the default dwell time is usually around a few microseconds when using a PMT-based confocal or 2P. Michael Giacomelli
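A quick sanity check of those numbers, assuming Poisson statistics for both signal and dark counts (so SNR = S/√(S + D) for S signal photons and D dark counts per dwell):

    import math

    def snr(signal_rate, dark_rate, dwell_us):
        # Shot-noise-limited SNR for one pixel; rates in counts per microsecond.
        s = signal_rate * dwell_us
        d = dark_rate * dwell_us
        return s / math.sqrt(s + d)

    # The hypothetical "very bad detector" above: 1 dark count/us,
    # saturating at 100 photons/us, imaged at its maximum 1-us dwell.
    print(snr(100, 1, 1.0))      # ~9.95: SNR capped just below 10

    # A more PMT-like detector: ~kHz dark rate (0.001 counts/us).
    print(snr(100, 0.001, 1.0))  # ~10.0: effectively photon-limited

The rates and dwell are taken from the example in the post; the PMT dark rate is an assumption consistent with the quoted "kHz" figure.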

The concern of one of the early posts in this thread was fluorophore saturation and (superlinear) bleaching.

Point scanning confocal is able to saturate the fluorophores in the focus of the laser beam, so 1) the fluorescence intensity is lower than it should be and 2) bleaching may be accelerated (beyond the linear case analyzed in detail).

With this in mind it makes a difference if a sample is scanned slowly or quickly (+averaging) and the laser power used.

Since fluorescence itself is a fast process, taking only a few nanoseconds, it cannot be ‘outcompeted’ by fast scanning (i.e., a dwell time of a few ns per diffraction spot). This may not even be desirable if you want to collect the fluorescence before moving to the next spot. It seems to me that fluorophore saturation and the (earlier mentioned) excited-state absorption and resulting bleaching can only be reduced by reducing the peak intensity. But there are other transient states of the molecules, with microsecond-to-millisecond lifetimes, that can lead to increased bleaching (when hit by yet another photon), and this is where faster scanning (+averaging) may help. I say *may* as I haven't performed in-depth experimental testing of these effects, and they will be heavily sample-dependent. Zdenek Svindrych

My understanding has always been that fast scanning would help, but we can't go fast enough with current commercial equipment. This was shown by Stefan Hell's lab when they used an electro-optic deflector (EOD) to obtain a pixel dwell time of ~6 ns. This means a single excitation event/fluorophore/scan. Here are some relevant quotes from the paper (Schneider et al., Nature Methods 2015): “In confocal microscopes with approximately microsecond pixel dwell times, fluorophores typically face 10–1,000 excitation events until the illumination spot is moved, usually after a certain number of photons are collected on average. Thus, the excitation, detection and bleaching events appear continuous despite the stochastic nature of these processes. Only if the dwell time of the moving illumination spot on a fluorophore is shorter than the average pause between two excitation events will the molecule not be subjected to multiple events. In this case, the molecule will emit at most one photon per illumination cycle, preserving the stochastic nature of the emission from the interrogated pixel. In the simple but common situation of excitation with relatively bright pulses that are shorter than the fluorescent state lifetime, the light exposure of a molecule typically must be shorter than the interval between two pulses.”

“Therefore, our fast scanning approach reduces bleaching and blinking. In fact, we compared the fluorescence yield of new ultrafast scanning with that of conventional (slow) scanning by imaging equally sized and dense areas for equal durations. For many fluorophores and laser configurations, we observed that the total signal increased by 1.5- to 4.5-fold when ultrafast scanning was used (Supplementary Fig. 1).” Douglas Richardson
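The arithmetic behind the quoted comparison is straightforward. A sketch with assumed, but typical, numbers (an 80 MHz pulsed laser; actual repetition rates vary by system):

    # Excitation pulses a fluorophore sees during one pixel dwell,
    # assuming 80 MHz pulsed excitation (illustrative value).
    rep_rate_hz = 80e6

    for dwell_s, label in [(1e-6, "conventional ~1 us dwell"),
                           (6e-9, "ultrafast ~6 ns EOD dwell")]:
        print(f"{label}: ~{dwell_s * rep_rate_hz:.1f} pulses")

That gives roughly 80 pulses per dwell for a conventional scan versus about 0.5 for the ultrafast scan, which is why only the latter keeps a molecule at or below one excitation event per illumination cycle, as described in the quote.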

Light and Electron Microscopy Training

Microscopy Listserver

Minimizing user error in a SEM laboratory

My university is installing a new field emission SEM that will be used by a wide range of faculty in biology, chemistry, physics, geology, archeology, and ceramics. Undergraduate students will use the instrument under the supervision of their faculty advisors. An engineering physics professor and I (a geology professor) will be the primary managers of the facility. We are both reasonably competent, knowledgeable individuals, but we will have no dedicated technician and will never have a budget for one. I would like to minimize people accidentally screwing things up for the rest of us. I have protocols in mind to achieve this, but I know that people in this community are far, far more experienced than I am. Experience can be a good filter for identifying effective ideas vs. folly. QUESTION: If you have experience managing an SEM facility used by diverse users, would you please share some of your successful (and unsuccessful) ideas that you've tried to minimize user harm to the instrument? Thank you. Kurt Friehauf

We are a primarily undergraduate institution with no recharge for in-house use. Our microscope (W-gun) is fully automated and is turbo-pumped, so our primary goal is to protect the backscattered detector from damage or from someone popping the X-ray window by slamming the specimen drawer shut. If a PI wants his/her lab to use the SEM, I train the PI first. We have a basic Standard Operating Procedure (SOP) that requires use of a standard stub and holder, and requires that the specimen not extend more than 3 mm above the stub surface and not beyond the edge of the stub.

Working distance is set at 20 mm nominal (about 17 mm in practice) and not decreased (increasing it is okay, but the WD is set back to 20 mm at the end of the session). Once trained, the SOP is signed by the PI (whose department thereby assumes financial liability for damage), and the PI is allowed to use the microscope independently. The PI is allowed to train his/her students (again with the same SOP); before the students are allowed to use the scope independently, the PI has them sign off on the SOP and countersigns (again, the PI's department is responsible for any damage). I develop custom SOPs for PIs or students that need special conditions (specimen tilt, X-ray analysis at 10 mm working distance, large specimens, etc.), but the same conditions apply: the PI and his/her department agree to be responsible for damage. One advantage of this system is that the PI is thoroughly familiar with the instrument and is responsible for training his/her own students. We also don't have a technician, and if I had to train every student that uses the scope, I'd go insane. Julian Smith

I am in a similar position, with the exception that there is a technician hired to run the EMs and the facility as a whole (me). For training, I strongly encourage, and where I can, require that any student who wants to use one of our EMs or other microscopes take the relevant class. If you do not have classes in microscopy (SEM in your instance), I strongly encourage you to start one. This is perhaps the best way to make sure users are properly trained. For those users who can't take a class, I train them one-on-one, with particular emphasis on the parts of instrument operation where they could cause damage. For these steps, typically sample loading and removal, I'll have the user practice the step, and I make it very clear that they are responsible for any damage and will be billed for it. Being told the price of a new specimen rod for the TEM or BSE detector for the SEM is a strong inducement to do things right. Use of the microscopes is also restricted to hours when I am present until I am comfortable with a user's abilities. With demonstrated competence, users can work after hours. Since instruments these days are generally well protected by safety switches and software, the possible failure points are much fewer. I also have service contracts on the EMs and confocals; this is another major help. TL;DR: Classes, individual training, and being clear about responsibility for damages, including paying for them. Phil Oshel

I concur with Phil and Michael's comments. We do not allow derivative training. It is like the children's game of telephone: what one student thinks is important and passes on to another student is often not the same set of information that I would pass along. I warn trained users whom I see doing it, and their trainees are not allowed to operate the SEM by themselves until they have gone through official training. The new Schottky-gun field emission SEMs are quite user friendly, maybe friendlier than W-gun SEMs. Of course, I think some brands or models are friendlier than others. They generally now have lots of interlocks to prevent damage, although it is still possible to drive samples into detectors. For instance, there is an issue with the software on one model that must have been written in Europe. When you want to set the stage height to 15 mm, be sure to enter “15” or “15.0”. Do not enter “15.” because it will be taken as 15.000 (i.e., 15 thousand according to European number formatting). That will definitely cause a problem. I would definitely get a chamber camera so users can see where they are driving the stages. I cannot bear the thought of students working blind. I also highly recommend a navigation camera, or at least the ability to register with an external image. It makes it much easier to find the area of interest in the case of large samples or multiple samples.
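That pitfall is the classic decimal-versus-thousands separator ambiguity. A small, self-contained illustration of the two conventions (hypothetical parsing rules; the actual SEM software's behavior is known only from the anecdote above):

    def parse_us(text: str) -> float:
        # US convention: '.' is the decimal separator, ',' groups thousands.
        return float(text.replace(",", ""))

    def parse_eu(text: str) -> float:
        # European convention: ',' is the decimal separator, '.' groups thousands.
        return float(text.replace(".", "").replace(",", "."))

    print(parse_us("15.000"))  # 15.0
    print(parse_eu("15.000"))  # 15000.0, the stage-height surprise described above

The same string differs by three orders of magnitude depending on the convention, which for a stage height is the difference between 15 mm and a crashed detector.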

As one who has been using SEM and EDS for a long time and now manages a facility, I recommend you have an experienced applications person on hand. I don't think that you want that to be you. You probably don't want to be sucked into the mundane, repetitive questions. You need to have someone who knows their microscopy in general and the peculiarities of the microscope in particular. I respect professors for knowledge in their field, but I have yet to meet one who was a top microscopist. Grad students are often tagged to be the resident expert. That may be better than nothing; however, they often have a rather short tenure and are interested in working on their own research. There is usually not much of a hand-off to the next grad student. Be careful to develop a solid SOP. I think a short but complete checklist is better than an exhaustive SOP with pictures that covers everything. If it is too long, users won't refer back to it. I want users to help themselves. Our necessary work flow is on a 2-page checklist. I make a point of sending users back to the point they missed when they encounter problems. It is usually because they skipped a step or took steps out of order. Soon enough, they learn to do it by themselves. Warren Straszheim

Confocal and widefield training

Can someone suggest any existing resources to pull from when designing new and advanced user training for confocal and widefield microscopy? I need to design training for a Zeiss LSM 980 Airyscan 2 confocal and an Axio Observer widefield microscope. Interactive resources or existing quizzes that can ensure users know how to operate the microscopes would also be helpful. Thanks, Niyanta Kumar (email not available)

Alyona Minina has tons of videos on YouTube. Check her channel https://www.youtube.com/channel/UCfRx0TPF-R1TKYrPPu7OwWA/videos Sylvie Le Guyader

Thanks for pointing to these video tutorials by Alyona Minina. They nicely complement traditional PowerPoint material and could hopefully save some of my time. I am curious how well long video material keeps trainees' attention focused, especially for the millennial YouTube generation. I have noticed that traditional verbal and PowerPoint presentations make a significant fraction of basic confocal trainees bored after 1 to 2 hours. Arvydas Matiukas

Definitely! That would bore me too, and I am not from the millennial generation! I believe that the only way is to make very short videos and to give quizzes halfway through or just after viewing. For example, our trainees watch our video about bit depth, saturation, and underexposure knowing that right after the video we will ask them to show us where the buttons are and whether there was saturation or underexposure in the image they took immediately before watching the video. They are encouraged to stop the video and push the buttons in the software whenever necessary. Sylvie Le Guyader

That's a useful approach. Whenever possible I try an individual approach matching the scope and depth of the material to the trainee's background and skill level. The worst scenario is when a lab sends in several trainees of different levels of expertise (e.g., student, technician and postdoc). Do you accept several trainees for the same session?

The next major difficulty for a trainee is the transition from imaging a training sample to a self-prepared one. Here the ability to duplicate tutorial actions usually is not enough; in-depth understanding and quality samples are required. Arvydas Matiukas

We never use a training sample. The reason our training is so long (and expensive: 1200 €) is that we troubleshoot the sample preparation and experimental design extensively before, during, and after the training. After our first 1-hour meeting briefing the researcher on how to modify their sample preparation if needed (most of the time it is), we require that they bring the correct samples to the training. It is very rare that they instead arrive with samples obtained from a colleague; if they do, we cancel the training and send them back to the bench. Most of them understand after our 1-hour meeting that it is fully in their interest to bring the best samples possible to the training, because we help them acquire data. Our training is strictly one-to-one; we do not train people together. The training is therefore always adapted to their sample/experiment/experience/personality. My colleague Gabriela Imreh does all of the training. She squeezes in about 40 training sessions per year outside of holidays. Sylvie Le Guyader

Hi Sylvie, thanks for providing the whole outline of your training. Now I see that some other core managers and I were comparing basic (confocal) microscopy training aimed at independent use of core equipment with the experimental aspects of the training. I think both types of training are useful and have their advantages. Ninety percent of my core users are new lab members who just continue following their lab protocols and require only instruction on how to use (and not break) core microscopes. I guess cores may use different approaches and still make their users happy. Anyway, it is very useful to exchange various training approaches. For Niyanta, I would mention the iBiology Microscopy Course available on their website or YouTube. Arvydas Matiukas

The discussions on microscopy training are extremely valuable. I need to review all the replies again and, based on them, revise the rules I post, modify how we do training, and write directions for new users to read before training.

In response to Arvydas's question about trainees, I find that status as student, postdoc, technician, or PI has no bearing on existing expertise or willingness or ability to learn the material. Why or how people learn diverges widely based on many factors. Small groups of trainees are good for saving time on the material that can be shown and discussed, however, only the person who gets to drive the machine may be ready to use it unassisted. Everyone else needs to follow up with practical training. Michael Cammer

We train people in the same way as Sylvie. This works not only to make sure people are well trained and supported but also to take care of the equipment. I also want to mention the great MyScope resource from Microscopy Australia, hosted by the University of Sydney (https://myscope.training/index.html). Jacqueline Ross

I have collected a few resources on our website so that I can refer our users to them more easily: https://www.bioimaging.bmc.med.uni-muenchen.de/learn/. Have a look at “Educational Websites” and “Materials for teaching.” Maybe you will find something helpful. The page from Australia (mentioned by others) contains a quiz that you can take, and you can print out a certificate if you answer 8+ of 10 questions correctly. In terms of hands-on training, we start with a 3-hour introduction session for up to 3 people. Usually we use our own samples, for two reasons. First, we want the user to pay attention to the microscope and software, not to the biological content of the images we make. Second, we can be sure that the sample is all right, and we don't have users with crappy samples blaming it on the microscope. I agree with others that after 3 hours it makes no more sense; brains are full. Next we do a “guided session” with the user's sample until the user is obtaining good results. This is a 1:1 session. Depending on sample difficulty, prior knowledge, and user talent, we sometimes have a third session. I am impressed by Sylvie's approach, but I doubt our group leaders would be willing to come up with so much money for training. Some think what we do now is already too time-consuming (but most are happy about it). Steffen Dietzel

Does anybody offer their confocal training as an official grad-level class? In the past, offering a hands-on class was problematic for us since (i) no tuition dollars flow back to the core and (ii) PIs wanted their students using their own samples, so “free class time” meant no income for us when those projects would normally have been done outside of class. Our campus is moving to a new budget model, and tuition dollars may flow more directly back to us. We are considering combining training and supervised independent use as part of the class design and allowing student efforts to be credentialed. I would be interested to know, if anyone else is taking this approach, how many credit hours are offered and what is covered as part of the course. Thomas E. Phillips

Very interesting discussion. Here in Leicester I train users in 3.5-hour, one-to-one sessions on the microscope they want to use and with their own sample, so I can show them how to set up the microscope and optimize it for their use. If I am happy with their progress they get an account, and the next time they want to use the microscope they have to make an appointment with me or bring an experienced user from their group, so we are sure they are able to start the system and get an image on the screen. After that it is up to the user. I also would not have the time/resources to spend more time training each individual user. My facility is spread over 4 buildings, so I am not always around to guide and solve problems, but I walk in and try to keep an eye on their progress. Occasionally I have a user who fails the “progress test,” and they will get a second induction session. If I am still not happy with their ability to use the microscope, they have to make appointments with me to do their imaging. With some users I do the imaging, or I keep a close eye on their imaging because their subject is difficult. As for teaching, the facility is involved in several courses. I think it is always good if the students know we are around, as they are potential future users. None of the lectures bring in money, nor does undergrad use of the microscopes during courses. Postgraduate courses on the microscopes are charged for hourly use of the system. Kees Straatman

We teach a grad-level microscopy course in confocal (also SEM & TEM courses). Tuition dollars don't come to the facility, but the instrument use (student use hours) is billed to the course, and that money does come to the facility. The instruments are billed at the standard “in-house” rate. Note: These are the same courses the undergrads in our “microscopy major” take. Phil Oshel

Here at a predominantly undergraduate institution (PUI), I teach a microscopy methods course that includes training on the confocal (mixed grad/undergrad, about 10 students every other year). Most of our undergrads doing research start in their second year (or even before), so I only have 1 or 2 (sometimes none) to train between course offerings. We have no internal recharge (and no technician); the Dean's office pays for service contracts on the confocal, SEM, and TEM, and the course has a budget of $1,500. Beyond that, the facility gets a small amount of support from external-user recharges. I do help with protocol optimization once the students are out of the course and in someone's lab. Julian Smith, III