local_stats.image
Stores the Image class, and its subclasses.
- class local_stats.image.DiffractionImage(image_array: numpy.ndarray, beam_centre: Tuple[int])[source]
Bases:
local_stats.image.Image
A container for images obtained as a result of a diffraction experiment.
- property pixel_chi
Returns each pixel’s azimuthal rotation for a polar coordinate mapping. This is equivalent to the typical diffraction motor chi.
- property pixel_radius
Returns each pixel’s radial distance from the beam centre, in units of pixels.
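The two properties above can be sketched with plain numpy. This is a hypothetical illustration of what a polar coordinate mapping about a beam centre looks like; the (row, col) convention, the use of degrees, and the zero direction of chi are assumptions, not guaranteed to match DiffractionImage's actual implementation.

```python
import numpy as np

# Hypothetical (row, col) beam centre on a small 5x7 detector.
beam_centre = (2, 3)

rows, cols = np.indices((5, 7))
dy = rows - beam_centre[0]
dx = cols - beam_centre[1]

# Radial distance of each pixel from the beam centre, in pixels.
pixel_radius = np.hypot(dy, dx)

# Azimuthal angle (chi) of each pixel about the beam centre, in degrees.
pixel_chi = np.degrees(np.arctan2(dy, dx))
```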
- class local_stats.image.Image(image_array: numpy.ndarray)[source]
Bases:
object
The base class for all images.
- Attrs:
- image_array:
Numpy array representing the image.
- cluster(signal_length_scale: int, bkg_length_scale: int, n_sigma: float = 4, significance_mask: Optional[numpy.ndarray] = None, frac_pixels_needed: float = 0.3183098861837907) List[local_stats.cluster.Cluster] [source]
Returns the clustered significant pixels. Significance calculations are carried out under the hood.
- Parameters
signal_length_scale – The length scale over which signal is present. This is usually just a few pixels for typical magnetic diffraction data.
bkg_length_scale – The length scale over which the background level varies in a CCD image. If your CCD is perfect, you can set this to the total number of pixels in the detector, but larger values run more slowly. Something like 1/10th of the number of pixels in your detector is typically sensible.
n_sigma – The number of standard deviations above the mean a pixel must be for it to be considered significant.
significance_mask – Pixels that should never be considered to be statistically significant (useful if, for example, stats are biased in this region due to a physical barrier like a beamstop).
frac_pixels_needed – The fraction of pixels within a distance of signal_length_scale of a pixel that must also be statistically significant for the clustering algorithm to classify that pixel as a core point of a cluster. Defaults to 1/pi.
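The per-pixel significance test described by n_sigma can be sketched as follows. The background statistics below are hypothetical stand-ins for whatever local estimates the library computes over bkg_length_scale; only the thresholding rule itself is taken from the documentation above.

```python
import numpy as np

n_sigma = 4
# Hypothetical local background statistics, as would be estimated over
# a region of size bkg_length_scale.
background_mean, background_std = 100.0, 5.0

pixels = np.array([104.0, 130.0, 95.0, 121.0])

# A pixel is significant if it lies more than n_sigma standard deviations
# above the local background mean (threshold = 120 here).
significant = pixels > background_mean + n_sigma * background_std
```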
- classmethod from_file(path_to_file: str)[source]
Instantiates an image from a path to a data file that can be opened using PIL.Image.open().
- Parameters
path_to_file – The path to the image file of interest.
- Returns
An instance of Image.
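Since from_file is documented as opening files via PIL.Image.open(), a minimal round trip can be sketched with Pillow and numpy directly. Writing a test image to the working directory and the exact conversion step are assumptions about what from_file does internally.

```python
import numpy as np
from PIL import Image as PILImage

# Write a small 8x8 grayscale test image to disk...
arr = np.arange(64, dtype=np.uint8).reshape(8, 8)
path_to_file = "example_image.png"
PILImage.fromarray(arr).save(path_to_file)

# ...then load it back the way from_file presumably does:
# PIL.Image.open followed by conversion to a numpy array.
image_array = np.asarray(PILImage.open(path_to_file))
```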
- mask_from_clusters(clusters: List[local_stats.cluster.Cluster]) numpy.ndarray [source]
Generates a mask array from clusters.
- Parameters
clusters – A list of the cluster objects that we’ll use to generate our mask.
- Returns
A boolean numpy mask array.
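A boolean mask built from cluster pixels can be sketched like this. The coordinate list and detector shape are hypothetical; the actual Cluster objects presumably carry their own pixel coordinates.

```python
import numpy as np

# Hypothetical pixel coordinates belonging to clusters on a 4x4 detector.
cluster_pixel_coords = [(0, 1), (2, 2), (2, 3)]

# Build a boolean mask that is True at every pixel belonging to a cluster.
mask = np.zeros((4, 4), dtype=bool)
rows, cols = zip(*cluster_pixel_coords)
mask[list(rows), list(cols)] = True
```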
- subtract_background(background_array: numpy.ndarray, zero_clip=True) None [source]
Carries out a simple background subtraction on self.image_array. If zero_clip is True, any pixels in image_array that are pushed below zero by the background subtraction are clipped to zero. This is particularly useful if there’s a hot pixel in your background array.
- Parameters
background_array – A numpy array representing the background to be subtracted, or an instance of Image representing the background.
zero_clip – Boolean determining if the background subtracted image_array should be clipped at 0.
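The zero_clip behaviour can be demonstrated with plain numpy, which is presumably all the subtraction amounts to. The arrays are made up for illustration; the hot pixel (4.0) in the background exceeds the corresponding signal pixel.

```python
import numpy as np

image_array = np.array([[5.0, 1.0], [3.0, 9.0]])
# A background containing a hot pixel (the 4.0) larger than the signal.
background_array = np.array([[2.0, 4.0], [1.0, 2.0]])

subtracted = image_array - background_array

# With zero_clip=True, pixels driven below zero are clipped back to zero.
clipped = np.clip(subtracted, 0.0, None)
```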
- wavelet_denoise(signal_length_scale: int = 20, cutoff_factor: float = 0.2, max_cutoff_factor: float = 0.8, wavelet_choice: str = 'sym4') None [source]
Runs wavelet denoising on the image. Called without arguments, it runs the default denoising.
- Parameters
signal_length_scale – We preferentially project the image away from wavelets whose length scales are significantly smaller than the signal length scale. This is the most important parameter for decimating noise wavelets. A value of 20 will kill most typical noise wavelets, but if your signal length scale is much larger than 20 pixels it may be productive to increase this number.
cutoff_factor – If any wavelet coefficient is less than cutoff_factor*(maximum wavelet coefficient), it is set to zero. The idea is that noise is represented by small coefficients, whereas meaningful data, as long as it is large compared to the background, requires large coefficients to be constructed in the wavelet representation.
max_cutoff_factor – The cutoff factor to be applied to signal occurring on length scales much smaller than signal_length_scale.
wavelet_choice – Fairly arbitrary. 'sym4' is currently the only supported wavelet. See http://wavelets.pybytes.com/ for more info. If you want a new wavelet supported, please feel free to raise an issue on the GitHub page.
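The cutoff_factor rule described above can be sketched on a one-dimensional array of coefficients. The coefficient values are invented for illustration, and this shows only the thresholding step, not the full wavelet decomposition and reconstruction the method performs.

```python
import numpy as np

# Hypothetical wavelet coefficients from a single decomposition level.
coeffs = np.array([10.0, 0.5, -8.0, 1.5, 0.1])

cutoff_factor = 0.2
cutoff = cutoff_factor * np.max(np.abs(coeffs))  # 0.2 * 10 = 2.0 here

# Coefficients smaller in magnitude than the cutoff are set to zero;
# large coefficients, which encode genuine signal, survive.
denoised = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)
```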