2. API¶
2.1. Patchify¶
- class torchmfbd.patchify4D.Patchify4D¶
Bases:
object
Methods
patchify(x[, patch_size, stride_size, ...])
Splits the input tensor into patches.
unpatchify(x[, apodization, weight_type, ...])
Reconstructs the original image from patches.
- patchify(x, patch_size=64, stride_size=64, flatten_sequences=True, return_coordinates=False)¶
Splits the input tensor into patches.
- Parameters:
x (torch.Tensor) – Input tensor of shape (n_scans, n_frames, nx, ny).
patch_size (int, optional) – Size of each patch. Default is 64.
stride_size (int, optional) – Stride size for patch extraction. Default is 64.
flatten_sequences (bool, optional) – If True, the output tensor will have shape (n_scans * n_frames, patch_size, patch_size). Default is True.
return_coordinates (bool, optional) – If True, the function also returns the coordinates of the patches. Default is False.
- Returns:
Tensor containing the patches with shape (n_scans, L, n_frames, patch_size, patch_size), where L is the number of patches extracted.
- Return type:
torch.Tensor
- unpatchify(x, apodization=0, weight_type=None, weight_params=None)¶
Reconstructs the original image from patches.
- Parameters:
x (torch.Tensor) – The input tensor containing image patches with shape (n, L, f, x, y), where: n – number of scans; L – number of patches; o – number of objects; f – number of frames (optional); x, y – patch dimensions.
apodization (int, optional) – Number of pixels to apodize at the edges of the image. Default is 0.
- Returns:
The reconstructed image tensor with shape (n, o, f, x, y), where: n – number of scans; o – number of objects; f – number of frames (optional); x, y – image dimensions.
- Return type:
torch.Tensor
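A minimal round-trip sketch using the two methods above. The signatures follow the documentation; the tensor shapes and the apodization value are illustrative only.

import torch
from torchmfbd.patchify4D import Patchify4D

p = Patchify4D()

# Synthetic stack: 1 scan, 12 frames, 256x256 pixels.
frames = torch.randn(1, 12, 256, 256)

# Extract overlapping 64x64 patches with a 32-pixel stride.
patches = p.patchify(frames, patch_size=64, stride_size=32)

# ... process the patches (e.g. deconvolve them) ...

# Stitch the patches back together, apodizing 6 pixels at the patch edges.
reconstructed = p.unpatchify(patches, apodization=6)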
2.2. Destretch¶
- torchmfbd.destretch.align(frames, lr=0.01, border=10, n_iterations=20, mode='bilinear')¶
Perform image alignment between two images using gradient-based optimization. It optimizes the affine transformation matrix to align the second frame to the first frame, which is used as reference. To this end, it uses the correlation between the reference frame and the warped frames as defined in “Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization” by Georgios D. Evangelidis and Emmanouil Z. Psarakis.
- Parameters:
frames (torch.Tensor) – Input tensor of shape (n_f, n_x, n_y) representing the sequence of frames.
lr (float, optional) – Learning rate for the optimizer. Default is 0.01.
border (int, optional) – Border size to exclude from the loss computation. Default is 10.
n_iterations (int, optional) – Number of optimization iterations. Default is 20.
mode (str, optional) – Interpolation mode for the warping (‘nearest’ or ‘bilinear’). Default is ‘bilinear’.
- Returns:
- A tuple containing:
warped (torch.Tensor): Warped frames after alignment.
tt (torch.Tensor): Estimated affine matrix.
- Return type:
tuple
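A usage sketch for align(), following the signature above; the frame dimensions are arbitrary.

import torch
from torchmfbd.destretch import align

# Ten frames of 128x128 pixels: (n_f, n_x, n_y).
frames = torch.randn(10, 128, 128)

warped, affine = align(frames, lr=0.01, border=10, n_iterations=20, mode='bilinear')

# The estimated affine matrices can presumably be re-applied to a co-temporal
# channel with apply_align(other_frames, affine).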
- torchmfbd.destretch.apply_align(frames, affine, mode='bilinear')¶
- torchmfbd.destretch.apply_destretch(frames, tt, mode='bilinear')¶
- torchmfbd.destretch.destretch(frames, ngrid=32, lr=0.01, reference_frame=0, border=10, n_iterations=20, lambda_tt=0.1, mode='bilinear')¶
Perform image destretching on a sequence of frames using gradient-based optimization. It optimizes the optical flow in the field-of-view to align the frames to a reference frame. To this end, it uses the correlation between the reference frame and the warped frames as defined in “Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization” by Georgios D. Evangelidis and Emmanouil Z. Psarakis.
- Parameters:
frames (torch.Tensor) – Input tensor of shape (n_seq, n_o, n_f, n_x, n_y) representing the sequence of frames.
ngrid (int, optional) – Grid size for the tip-tilt estimation. Default is 32.
lr (float, optional) – Learning rate for the optimizer. Default is 0.01.
reference_frame (int, optional) – Index of the reference frame to which other frames are aligned. Default is 0.
border (int, optional) – Border size to exclude from the loss computation. Default is 10.
n_iterations (int, optional) – Number of optimization iterations. Default is 20.
lambda_tt (float, optional) – Regularization parameter for the tip-tilt smoothness. Default is 0.1.
mode (str, optional) – Interpolation mode for the warping (‘nearest’ or ‘bilinear’). Default is ‘bilinear’.
- Returns:
- A tuple containing:
warped (torch.Tensor): Warped frames after destretching.
tt (torch.Tensor): Estimated tip-tilt values.
- Return type:
tuple
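A sketch for destretch(); the shapes follow the (n_seq, n_o, n_f, n_x, n_y) convention documented above and are otherwise arbitrary.

import torch
from torchmfbd.destretch import destretch

# One sequence, one object, ten frames of 256x256 pixels.
frames = torch.randn(1, 1, 10, 256, 256)

warped, tt = destretch(frames, ngrid=32, reference_frame=0,
                       n_iterations=20, lambda_tt=0.1)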
2.3. Basis generation¶
- class torchmfbd.nmf.Basis(n_pixel=128, wavelength=8542.0, diameter=100.0, pix_size=0.059, central_obs=0.0, n_modes=250, r0_min=15.0, r0_max=50.0)¶
Bases:
object
Class that generates a set of Point Spread Functions (PSFs) using Kolmogorov turbulence and computes the Non-negative Matrix Factorization (NMF) of the PSFs.
Methods
compute([type, n, n_iter, verbose])
Compute Non-negative Matrix Factorization (NMF) for a set of generated Point Spread Functions (PSFs).
- compute(type='nmf', n=100, n_iter=400, verbose=0)¶
Compute Non-negative Matrix Factorization (NMF) for a set of generated Point Spread Functions (PSFs).
- Parameters:
n (int) – Number of random PSFs to generate.
n_iter (int, optional) – Maximum number of iterations for the NMF algorithm. Default is 400.
verbose (int, optional) – Verbosity level of the NMF algorithm. Default is 0.
- Returns:
None
This function generates n random PSFs using Kolmogorov turbulence with a specified range of r0 values. It then computes the NMF of the reshaped PSFs and saves the resulting basis, diffraction PSF, modes, and coefficients to a file in the ‘basis’ directory. The filename includes the wavelength, number of modes, and r0 range.
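A sketch of generating an NMF basis of Kolmogorov PSFs; the constructor values are the documented defaults.

from torchmfbd.nmf import Basis

basis = Basis(n_pixel=128, wavelength=8542.0, diameter=100.0, pix_size=0.059,
              central_obs=0.0, n_modes=250, r0_min=15.0, r0_max=50.0)

# Generate 100 random PSFs and factorize them; the result is saved in the
# 'basis' directory as described above.
basis.compute(type='nmf', n=100, n_iter=400, verbose=0)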
2.4. Movie generation¶
- torchmfbd.movie.gen_movie(frames, frames2=None, filename='movie.gif', fps=1, deltat=300)¶
Generate an animated movie from a sequence of frames.
- Parameters:
frames (torch.Tensor) – A tensor containing the frames to be animated.
frames2 (torch.Tensor, optional) – An optional second tensor containing additional frames to be animated side-by-side. Default is None.
filename (str, optional) – The name of the output file. Default is ‘movie.gif’.
fps (int, optional) – Frames per second for the output animation. Default is 1.
deltat (int, optional) – Time interval between frames in milliseconds. Default is 300.
- Returns:
None
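A hedged sketch for gen_movie(); the exact frame layout expected by the function is not documented here, so a simple (n_frames, nx, ny) stack is assumed.

import torch
from torchmfbd.movie import gen_movie

frames = torch.randn(20, 256, 256)   # synthetic stack of 20 frames (assumed layout)

gen_movie(frames, filename='movie.gif', fps=2, deltat=300)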
2.5. Deconvolution¶
- class torchmfbd.deconvolution.Deconvolution(config, add_piston=False)¶
Bases:
object
Methods
add_external_regularizations(external_regularization)
Adds external regularizations to the model.
add_frames(frames[, sigma, id_object, ...])
Add frames to the deconvolution object.
combine_frames()
Combine the frames from all objects and sequences into a single tensor.
compute_annealing(modes, n_iterations)
Annealing schedule for the number of active modes.
compute_diffraction_masks()
Compute the diffraction masks for the given dimensions and store them as class attributes.
compute_object(images_ft, psf_ft, sigma, plane)
Compute the object in Fourier space using the specified filter.
compute_psf_diffraction()
Compute the Point Spread Functions (PSFs) from diffraction.
compute_psfs(modes, diversity)
Compute the Point Spread Functions (PSFs) from the given modes.
compute_psfs_nmf(modes)
Compute the Point Spread Functions (PSFs) from the given modes.
deconvolve([simultaneous_sequences, ...])
Perform deconvolution on a set of frames using specified parameters.
fft_filter(image_ft)
Applies a Fourier filter to the input image in the frequency domain.
get_defocus_basis(overfill)
Precalculate Zernike polynomials for a given overfill factor.
lofdahl_scharmer_filter(Sconj_S, Sconj_I, sigma)
Applies the Löfdahl-Scharmer filter to the given input tensors.
precalculate_zernike(overfill)
Precalculate Zernike polynomials for a given overfill factor.
read_config_file(filename)
Read a configuration file in YAML format.
remove_frames()
Remove the frames previously added to the deconvolution object.
update_object([cutoffs])
Update the object estimate with new cutoffs in the Fourier filter.
write(filename[, extra])
Write the deconvolved object to a file.
define_basis
find_basis_wavefront
set_regularizations
- add_external_regularizations(external_regularization)¶
Adds external regularizations to the model.
- add_frames(frames, sigma=None, id_object=0, id_diversity=0, diversity=0.0, XY=None)¶
Add frames to the deconvolution object.
- Parameters:
frames (torch.Tensor) – The input frames to be deconvolved (n_sequences, n_objects, n_frames, nx, ny).
sigma (torch.Tensor) – The noise standard deviation for each object.
id_object (int, optional) – The object index to which the frames belong (default is 0).
diversity (torch.Tensor, optional) – The diversity coefficient to use for the deconvolution (n_sequences, n_objects). If None, the diversity coefficient is set to zero for all objects.
- Returns:
None
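A hedged sketch of setting up a deconvolution and adding observations. It assumes that config is the path to a YAML file (as suggested by read_config_file() below); the file name, noise level, and tensor shapes are hypothetical.

import torch
from torchmfbd.deconvolution import Deconvolution

decon = Deconvolution('mfbd_config.yaml')   # hypothetical configuration file

# One sequence, one object, 12 frames of 64x64 pixels.
frames = torch.randn(1, 1, 12, 64, 64)
sigma = torch.tensor([1e-3])                # assumed per-object noise level

decon.add_frames(frames, sigma=sigma, id_object=0, id_diversity=0, diversity=0.0)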
- combine_frames()¶
Combine the frames from all objects and sequences into a single tensor. Observations with different diversity channels are concatenated along the frame axis.
- Returns:
A tensor of shape (n_sequences, n_objects, n_frames, nx, ny) containing the combined frames.
- Return type:
torch.Tensor
- compute_annealing(modes, n_iterations)¶
Annealing schedule for the number of active modes: the optimization starts with 2 modes and ends with all modes, increasing in steps given by the number of Zernike modes for each n.
- Parameters:
annealing (_type_) – _description_
n_iterations (_type_) – _description_
- Returns:
_description_
- Return type:
_type_
- compute_diffraction_masks()¶
Compute the diffraction masks for the given dimensions and store them as class attributes.
- Parameters:
n_x (int) – The number of pixels in the x-dimension.
n_y (int) – The number of pixels in the y-dimension.
- mask_diffraction¶
A 3D array of shape (n_o, n_x, n_y) containing the diffraction masks.
- Type:
numpy.ndarray
- mask_diffraction_th¶
A tensor containing the diffraction masks, converted to float32 and moved to the specified device.
- Type:
torch.Tensor
- mask_diffraction_shift¶
A 3D array of shape (n_o, n_x, n_y) containing the FFT-shifted diffraction masks.
- Type:
numpy.ndarray
- compute_object(images_ft, psf_ft, sigma, plane, type_filter='tophat')¶
Compute the object in Fourier space using the specified filter.
- Parameters:
images_ft (torch.Tensor) – The Fourier transform of the observed images.
psf_ft (torch.Tensor) – The Fourier transform of the point spread function (PSF).
type_filter (str, optional) – The type of filter to use (‘tophat’/’scharmer’). Default is ‘tophat’.
- Returns:
The computed object in Fourier space.
- Return type:
torch.Tensor
- compute_psf_diffraction()¶
Compute the Point Spread Functions (PSFs) from diffraction.
- Returns:
- A tuple containing:
psf_norm (torch.Tensor): The normalized PSFs.
psf_ft (torch.Tensor): The FFT of the normalized PSFs.
- Return type:
tuple
- compute_psfs(modes, diversity)¶
Compute the Point Spread Functions (PSFs) from the given modes.
- Parameters:
modes (torch.Tensor) – A tensor of shape (batch_size, num_modes, height, width) representing the modes.
- Returns:
- A tuple containing:
wavefront (torch.Tensor): The computed wavefronts from the estimated modes.
psf_norm (torch.Tensor): The normalized PSFs.
psf_ft (torch.Tensor): The FFT of the normalized PSFs.
- Return type:
tuple
- compute_psfs_nmf(modes)¶
Compute the Point Spread Functions (PSFs) from the given modes.
- Parameters:
modes (torch.Tensor) – A tensor of shape (batch_size, num_modes, height, width) representing the modes.
- Returns:
- A tuple containing:
wavefront (torch.Tensor): The computed wavefronts from the estimated modes.
psf_norm (torch.Tensor): The normalized PSFs.
psf_ft (torch.Tensor): The FFT of the normalized PSFs.
- Return type:
tuple
- deconvolve(simultaneous_sequences=1, infer_object=False, optimizer='adam', obj_in=None, modes_in=None, n_iterations=20)¶
Perform deconvolution on a set of frames using specified parameters.
- Parameters:
frames (torch.Tensor) – The input frames to be deconvolved (n_sequences, n_objects, n_frames, nx, ny).
sigma (torch.Tensor) – The noise standard deviation for each object.
simultaneous_sequences (int, optional) – Number of sequences to be processed simultaneously (default is 1).
infer_object (bool, optional) – Whether to infer the object during optimization (default is False).
optimizer (str, optional) – The optimizer to use (‘adam’ for Adam, ‘lbfgs’ for LBFGS) (default is ‘adam’).
obj_in (torch.Tensor, optional) – Initial object to use for deconvolution (default is None).
modes_in (torch.Tensor, optional) – Initial modes to use for deconvolution (default is None).
annealing (bool or str, optional) – Annealing schedule to use (‘linear’, ‘sigmoid’, ‘none’) (default is ‘linear’).
n_iterations (int, optional) – Number of iterations for the optimization (default is 20).
- Returns:
None
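Continuing the add_frames() sketch above: run the optimization and write the result. The keyword values and the output file name are illustrative.

decon.deconvolve(simultaneous_sequences=1, infer_object=False,
                 optimizer='adam', n_iterations=40)

decon.write('deconvolved_output')   # output name/format is an assumption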
- define_basis(n_modes=None)¶
- fft_filter(image_ft)¶
Applies a Fourier filter to the input image in the frequency domain.
- find_basis_wavefront(basis, nmax, wavelength)¶
- get_defocus_basis(overfill)¶
Precalculate Zernike polynomials for a given overfill factor. This function computes the Zernike polynomials up to self.n_modes and returns them in a 3D numpy array. The Zernike polynomials are calculated over a grid defined by self.npix and scaled by the overfill factor.
- Parameters:
overfill (float) – The overfill factor used to scale the radial coordinate rho.
- Returns:
Z – A 3D array of shape (self.n_modes, self.npix, self.npix) containing the precalculated Zernike polynomials. Each slice Z[mode, :, :] corresponds to a Zernike polynomial mode.
- Return type:
numpy.ndarray
- lofdahl_scharmer_filter(Sconj_S, Sconj_I, sigma)¶
Applies the Löfdahl-Scharmer filter to the given input tensors.
- Parameters:
Sconj_S (torch.Tensor) – The conjugate of the Fourier transform of the observed image.
Sconj_I (torch.Tensor) – The conjugate of the Fourier transform of the ideal image.
- Returns:
A tensor representing the mask after applying the Löfdahl-Scharmer filter.
- Return type:
torch.Tensor
- precalculate_zernike(overfill)¶
Precalculate Zernike polynomials for a given overfill factor. This function computes the Zernike polynomials up to self.n_modes and returns them in a 3D numpy array. The Zernike polynomials are calculated over a grid defined by self.npix and scaled by the overfill factor.
- Parameters:
overfill (float) – The overfill factor used to scale the radial coordinate rho.
- Returns:
Z – A 3D array of shape (self.n_modes, self.npix, self.npix) containing the precalculated Zernike polynomials. Each slice Z[mode, :, :] corresponds to a Zernike polynomial mode.
- Return type:
numpy.ndarray
- read_config_file(filename)¶
Read a configuration file in YAML format.
- remove_frames()¶
Remove the frames previously added to the deconvolution object.
- Returns:
None
- set_regularizations()¶
- update_object(cutoffs=None)¶
Update the object estimate with new cutoffs in the Fourier filter.
- Parameters:
cutoffs (list) – A list containing the new cutoffs for each object. Each cutoff contains two numbers, indicating the lower and upper frequencies for the transition.
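Continuing the same sketch, the object estimate can be re-filtered with new cutoffs; the two numbers per object (lower and upper transition frequencies) are purely illustrative.

# One [lower, upper] cutoff pair per object (illustrative values).
decon.update_object(cutoffs=[[0.6, 0.8]])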
2.6. Deconvolution SV¶
- class torchmfbd.deconvolution_sv.DeconvolutionSV(config)¶
Bases:
Deconvolution
Methods
add_external_regularizations(...)
Adds external regularizations to the model.
add_frames(frames[, sigma, id_object, ...])
Add frames to the deconvolution object.
combine_frames()
Combine the frames from all objects and sequences into a single tensor.
compute_annealing(modes, n_iterations)
Annealing schedule for the number of active modes.
compute_diffraction_masks()
Compute the diffraction masks for the given dimensions and store them as class attributes.
compute_object(images_ft, psf_ft, sigma, plane)
Compute the object in Fourier space using the specified filter.
compute_psf_diffraction()
Compute the Point Spread Functions (PSFs) from diffraction.
compute_psfs(modes, diversity)
Compute the Point Spread Functions (PSFs) from the given modes.
compute_psfs_nmf(modes)
Compute the Point Spread Functions (PSFs) from the given modes.
compute_syn(im, obj_filtered, tiptilt_infer, ...)
Compute the synthetic image based on the inferred object, tip-tilt, and modes.
deconvolve([simultaneous_sequences, ...])
Perform spatially variant deconvolution on a set of frames.
fft_filter(image_ft)
Applies a Fourier filter to the input image in the frequency domain.
fft_filter_image(obj, mask_diffraction)
Filter the object in Fourier space by multiplying the FFT of the (apodized) object by the Fourier filter.
get_defocus_basis(overfill)
Precalculate Zernike polynomials for a given overfill factor.
lofdahl_scharmer_filter(Sconj_S, Sconj_I, sigma)
Applies the Löfdahl-Scharmer filter to the given input tensors.
precalculate_zernike(overfill)
Precalculate Zernike polynomials for a given overfill factor.
read_config_file(filename)
Read a configuration file in YAML format.
remove_frames()
Remove the frames previously added to the deconvolution object.
update_object([cutoffs])
Update the object estimate with new cutoffs in the Fourier filter.
write(filename[, extra])
Write the deconvolved object to a file.
active_annealing
define_basis
find_basis_wavefront
set_regularizations
- active_annealing(iter)¶
- compute_syn(im, obj_filtered, tiptilt_infer, modes_infer, infer_tiptilt, infer_modes, i_o)¶
Compute the synthetic image based on the inferred object, tip-tilt, and modes.
- Parameters:
im (torch.Tensor) – Input image tensor of shape (ns, no, nf, nx, ny).
obj_infer (torch.Tensor) – Inferred object tensor.
tiptilt_infer (torch.Tensor) – Inferred tip-tilt tensor.
modes_infer (torch.Tensor) – Inferred modes tensor.
infer_modes (bool) – Flag indicating whether to infer modes or simply apply tip-tilt.
- Returns:
Synthetic image tensor.
- Return type:
torch.Tensor
- deconvolve(simultaneous_sequences=1, infer_tiptilt=True, infer_modes=True, tiptilt_init=None, n_iterations=20, batch_size=64)¶
Perform spatially variant deconvolution on a set of frames.
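A hedged sketch of the spatially variant solver, mirroring the Deconvolution example above; the configuration file name, noise level, and tensor shapes are hypothetical.

import torch
from torchmfbd.deconvolution_sv import DeconvolutionSV

decon_sv = DeconvolutionSV('mfbd_config.yaml')   # hypothetical configuration file

decon_sv.add_frames(torch.randn(1, 1, 12, 128, 128), sigma=torch.tensor([1e-3]))

decon_sv.deconvolve(simultaneous_sequences=1, infer_tiptilt=True,
                    infer_modes=True, n_iterations=20, batch_size=64)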
- define_basis()¶
- fft_filter_image(obj, mask_diffraction)¶
Filter the object in Fourier space. It simply multiplies the FFT of the object (properly apodized) by the Fourier filter and returns the inverse FFT of the result. This can be used to avoid structures above the diffraction limit.
2.7. Utilities¶
- torchmfbd.util.aperture(npix=256, cent_obs=0.0, spider=0, overfill=1.0)¶
Compute the aperture image of a telescope
- Parameters:
npix (int, optional) – number of pixels of the aperture image
cent_obs (float, optional) – central obscuration fraction
spider (int, optional) – spider size in pixels
- Returns:
The aperture image of the telescope.
- Return type:
real
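A sketch of building a telescope pupil with the function above; the 10% central obscuration and 256-pixel grid are illustrative values.

from torchmfbd.util import aperture

pupil = aperture(npix=256, cent_obs=0.1, spider=0, overfill=1.0)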
- torchmfbd.util.apodize(frames, window, gradient=False)¶
Apodizes the input frames by subtracting the mean value and applying a window function. The mean value is computed along the last two dimensions of the input tensor. The window function is applied differently depending on the number of dimensions of the input tensor. The mean value is added back to the frames after applying the window function.
- Parameters:
frames (torch.Tensor) – The input tensor containing the frames to be apodized. The tensor can have 2, 3, 4, or 5 dimensions.
window (torch.Tensor) – The window function to be applied to the frames. The shape of the window should match the last two dimensions of the input tensor.
gradient (bool, optional) – If True, the global gradient of the image is removed. Default is False.
- Returns:
The apodized frames with the same shape as the input tensor.
- Return type:
torch.Tensor
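A sketch for apodize(). The separable 2D Hann window built here is an assumption about the expected window format; it only needs to match the last two dimensions of the frames.

import torch
from torchmfbd.util import apodize

frames = torch.randn(10, 128, 128)

# Separable 2D Hann window matching the last two dimensions.
w = torch.hann_window(128, periodic=False)
window = torch.outer(w, w)

apodized = apodize(frames, window, gradient=False)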
- torchmfbd.util.azimuthal_power(image, d=1, apodization=None, angles=None, range_angles=5)¶
Compute the azimuthal power spectrum of an image.
- Parameters:
image (numpy.ndarray) – The input image for which the azimuthal power spectrum is to be computed.
d (float, optional) – The pixel size in the image. Default is 1.
apodization (int, optional) – The size of the apodization window. Default is None.
angles (list, optional) – A list of angles in degrees for which to compute the azimuthal power spectrum. Default is None.
range_angles (float, optional) – The range of angles around each specified angle (± range_angles) to include in the computation. Default is 5.
- Returns:
The normalized frequency array (kvals) and the azimuthally averaged power spectrum (Abins).
- Return type:
(kvals, Abins) (tuple)
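A short sketch of computing the azimuthally averaged power spectrum of a NumPy image; the image and the pixel size are illustrative.

import numpy as np
from torchmfbd.util import azimuthal_power

image = np.random.randn(256, 256)      # synthetic image
kvals, Abins = azimuthal_power(image, d=0.059, apodization=16)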
- torchmfbd.util.psf_scale(wavelength, telescope_diameter, simulation_pixel_size)¶
Return the PSF scale appropriate for the required pixel size, wavelength and telescope diameter. The aperture is padded by this amount, so the resulting pixel scale is lambda/D/psf_scale. For instance, a full 256-pixel frame for a 3.5 m telescope at 532 nm spans 256 * 5.32e-7 / 3.5 / 3 = 2.67 arcsec for psf_scale = 3.
https://www.strollswithmydog.com/wavefront-to-psf-to-mtf-physical-units/#iv
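A sketch reproducing the worked example in the docstring (3.5 m telescope at 532 nm, psf_scale ≈ 3). The argument units are an assumption, taken to mirror the Basis constructor above (wavelength in Å, diameter in cm, pixel size in arcsec).

from torchmfbd.util import psf_scale

# lambda/D = 5.32e-7 / 3.5 rad ~ 0.0314", so a ~0.0105" pixel should give psf_scale ~ 3.
scale = psf_scale(wavelength=5320.0, telescope_diameter=350.0,
                  simulation_pixel_size=0.0105)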