Marin Van Heel (LNNano)

Instrumental Resolution versus Results Resolution in 2D and 3D Imaging

The Instrumental Resolution of an imaging system is primarily given by the physical properties of the microscope, telescope, photographic camera, or any other 2D- or 3D-imaging device. The classical case is that of a light microscope, where the numerical aperture (NA) of the objective lens determines the ultimate instrumental resolution of the microscope. However, having a certain instrumental resolution level available in a device is no guarantee that this resolution level will be reflected in the results.
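
To make the NA claim concrete (an added illustration, not part of the abstract): the Rayleigh criterion puts the smallest resolvable distance at d ≈ 0.61 λ / NA, so green light (λ ≈ 550 nm) through an objective with NA = 1.4 resolves details down to roughly 0.61 × 550 / 1.4 ≈ 240 nm.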

The reproducible Results Resolution that can be achieved from a given sample, collected with an instrument with a given instrumental resolution, is a very different concept! Suppose you forget to switch on the illumination of the microscope; what good will the expensive high-NA optics of your instrument do you? If, on the other hand, you can only use a limited exposure on your radiation-sensitive samples, the resulting images will be noisy but nevertheless the best possible. The emerging question is: how to define a results-oriented quality metric that reflects the image information you have managed to collect on a certain object in a given experiment? (Keyword: Fourier Shell Correlation / Wikipedia).
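
To make the FSC idea concrete, here is a minimal NumPy sketch (added for illustration; the function and variable names are my own) of its 2D analogue, the Fourier Ring Correlation, which correlates two independent images of the same object ring by ring in Fourier space:

    import numpy as np

    def fourier_ring_correlation(img1, img2):
        """Correlate two independent images of the same object over rings
        of constant spatial frequency (the 2D analogue of the 3D FSC)."""
        f1 = np.fft.fftshift(np.fft.fft2(img1))
        f2 = np.fft.fftshift(np.fft.fft2(img2))

        # Integer radius of every Fourier pixel from the zero-frequency origin.
        ny, nx = img1.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(y - ny // 2, x - nx // 2).astype(int).ravel()

        # Ring-wise sums of the cross-spectrum and the two power spectra.
        num = np.bincount(r, weights=(f1 * np.conj(f2)).real.ravel())
        d1 = np.bincount(r, weights=(np.abs(f1) ** 2).ravel())
        d2 = np.bincount(r, weights=(np.abs(f2) ** 2).ravel())
        return num / np.sqrt(d1 * d2 + 1e-30)  # one value per frequency ring

The 3D Fourier Shell Correlation is the same computation over spherical shells; the results resolution is then read off where the curve drops below a threshold, such as the 1/2-bit criterion of van Heel and Schatz.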

Related issues on camera properties, 3D reconstruction geometries, and algorithmic considerations will also be covered.

Ferenc Borondics (SOLEIL)

Abstract not available.

Dave Bond

Scientific Computing at Diamond Light Source – Challenges and development

Detectors and instrumentation at Diamond Light Source are following Moore's law: detector data rates and the overall amount of processed data are doubling every two years. Often this happens in large jumps with the arrival of new detectors or equipment such as electron microscopes, rather than as a gradual increase. Diamond has had to develop its HPC and storage infrastructure to accommodate this workload. This talk gives an overview of our systems, workflows, and experiences, and of the design considerations we have made both for current operations and looking into the future.
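
As a back-of-the-envelope illustration of that doubling (the starting rate below is my assumption, not a Diamond figure):

    # Project a detector data rate that doubles every two years,
    # starting from an assumed 10 GB/s today.
    rate_gb_s = 10.0
    for year in range(0, 11, 2):
        print(f"year {year:2d}: {rate_gb_s * 2 ** (year / 2):7.1f} GB/s")

On these assumptions a 10 GB/s stream today implies roughly 320 GB/s within a decade, which is why the step changes from new equipment dominate the planning.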

Eduardo Miqueles (LNLS)

The phase problem and future perspectives

A brief introduction to the phase problem will be presented, with an algorithmic look at the different strategies, from PhaseLift to standard iterative techniques, leading up to the phase-reconstruction pipeline for Sirius.
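
For readers unfamiliar with the standard iterative techniques, here is a minimal NumPy sketch (illustrative only, not the Sirius pipeline) of classic error-reduction phase retrieval, which alternates between enforcing the measured Fourier magnitudes and a real-space support constraint:

    import numpy as np

    def error_reduction(magnitudes, support, n_iter=200, seed=0):
        """Recover an image from its Fourier magnitudes by alternating
        projections; `support` is a boolean mask of the object region."""
        rng = np.random.default_rng(seed)
        # Start from the measured magnitudes with random phases.
        phases = np.exp(2j * np.pi * rng.random(magnitudes.shape))
        g = np.fft.ifft2(magnitudes * phases)
        for _ in range(n_iter):
            G = np.fft.fft2(g)
            # Fourier constraint: keep the phases, restore the magnitudes.
            G = magnitudes * np.exp(1j * np.angle(G))
            g = np.fft.ifft2(G)
            # Object constraint: real, non-negative, zero outside the support.
            g = np.where(support & (g.real > 0), g.real, 0.0)
        return g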

Thiago Spina

Image Segmentation and Analysis at LNLS/Sirius: Yesterday, Today, and Tomorrow

In this talk, I will be presenting, from both theoretical and practical points of view, some of the techniques being developed for image segmentation and analysis by the Scientific Computing Group. Those techniques aim to address the needs of the new imaging beamlines of the Sirius synchrotron light source. I will overview the Image Processing methods most commonly applied in the past for segmenting images acquired using the UVX light source, point out their limitations, and showcase the current Machine Learning tools being tested at the IMX microtomography beamline. Based on what we have learned, I shall conclude the talk with some future perspectives and ongoing developments that aim to help the Sirius beamline users to segment and analyze their images more efficiently and robustly than ever before.
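
To give a flavor of the kind of machine-learning segmentation mentioned above (a toy sketch of mine, not the group's actual tool), a pixel classifier can be trained from a handful of user-scribbled labels and simple per-pixel features:

    import numpy as np
    from scipy import ndimage
    from sklearn.ensemble import RandomForestClassifier

    def segment_from_scribbles(image, scribbles):
        """Train on user-labelled pixels (scribbles > 0) and classify
        every pixel of a 2D grayscale image."""
        # Per-pixel features: raw intensity, two Gaussian smoothings,
        # and a gradient magnitude, stacked along the last axis.
        feats = np.stack([
            image,
            ndimage.gaussian_filter(image, 1.0),
            ndimage.gaussian_filter(image, 4.0),
            ndimage.gaussian_gradient_magnitude(image, 1.0),
        ], axis=-1)
        X = feats.reshape(-1, feats.shape[-1])
        y = scribbles.ravel()
        clf = RandomForestClassifier(n_estimators=50).fit(X[y > 0], y[y > 0])
        return clf.predict(X).reshape(image.shape)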

Edgar Gadbem

Volumetric data visualization in virtual reality

In this talk we’ll look at the benefits of using virtual reality to visualize data and the constraints imposed when developing for head-mounted displays. Then we will discuss the visualization of volumetric data and the computational challenges it presents compared to rendering modeled 3D structures. Finally, we’ll examine how these two topics mix and what the benefits and roadblocks of this combination are.
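
As an added illustration of why volumes are harder than meshes: instead of rasterizing triangles once, every output pixel must march a ray through the volume and composite many samples. A minimal front-to-back compositing loop for one ray looks like this:

    def composite_ray(colors, opacities):
        """Front-to-back alpha compositing of the samples along one ray,
        ordered from the eye into the volume."""
        color, alpha = 0.0, 0.0
        for c, a in zip(colors, opacities):
            color += (1.0 - alpha) * a * c
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:  # early ray termination
                break
        return color, alpha

In VR this work is done twice per frame (once per eye) at the headset's refresh rate, which is why optimizations such as early ray termination and empty-space skipping are essential.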

Brian Toby

Computational Science Research within the APS

The APS provides x-rays to 67 different beamlines; each has a unique science mission, and the beamlines are highly heterogeneous in design, which means that they have disparate software needs. The X-ray Science Division (XSD) of the APS runs slightly more than half of the APS beamlines. With very limited resources for software engineering and computational science, a strategic focus is placed on the areas where XSD's needs and expertise are greatest. This has resulted in a number of very successful projects, including a data management system, real-time XPCS data reduction, and the creation of several open-source packages: TomoPy, for tomographic reconstruction; GSAS-II, a general crystallographic data analysis package; and MIDAS, for grain characterization in high-energy diffraction microscopy.
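
For readers unfamiliar with TomoPy, a typical reconstruction looks roughly like this (a minimal sketch; the input file and parameter choices are placeholders of mine):

    import numpy as np
    import tomopy

    # proj: projections with shape (n_angles, n_rows, n_cols).
    proj = np.load('projections.npy')          # hypothetical input file
    theta = tomopy.angles(proj.shape[0])       # evenly spaced angles over 180 degrees
    proj = tomopy.minus_log(proj)              # transmission -> line integrals
    center = tomopy.find_center(proj, theta)   # estimate the rotation axis
    recon = tomopy.recon(proj, theta, center=center, algorithm='gridrec')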

This talk will summarize some of these projects, but will concentrate on the computational research being done within XSD, which includes topics such as multimodal reconstruction and correction for experimental errors, joint ptychographic-tomographic reconstruction, and the introduction of feedback into beamline controls based on streaming data analysis on high-performance computing clusters. For the latter, a mechanism using direct memory-to-memory transfers allows reconstructions to run as the experiment is being performed; work is in progress to deploy this for routine operations.
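
A minimal sketch of that streaming pattern, assuming a ZeroMQ pub/sub transport (the endpoint, frame format, and feedback rule below are hypothetical, not the actual APS mechanism):

    import numpy as np
    import zmq

    # Subscribe to a detector frame stream and analyze frames as they
    # arrive, so feedback can reach the beamline controls while the
    # experiment is still running.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://detector-host:5555")   # hypothetical endpoint
    sock.setsockopt(zmq.SUBSCRIBE, b"")

    while True:
        frame = np.frombuffer(sock.recv(), dtype=np.uint16).reshape(2048, 2048)
        if frame.mean() < 100.0:               # stand-in for real analysis
            print("low signal: request a beam adjustment from controls")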