
Scaling fitted transform #40

@racng


Thank you for creating this useful tool and writing great tutorials.
I have tried using your package to align an H&E image with an immunofluorescence (IF) image. For quick troubleshooting, I downsized the H&E and IF images by factors of 100 and 50, respectively. I was able to fit a functional model. However, I am having trouble figuring out whether it is possible to use the results to transform the original high-resolution images.

The shapes of these scaled-down images are (3, 207, 203) and (3, 206, 204).

# img_he and img_if are PIL Images loaded beforehand
import numpy as np
import torch
from PIL import Image
from STalign import STalign

# Downscale images
height, width = img_he.size # (20704, 20325)
new_width = int(width / 100)
new_height = int(height / 100)
img_he_lores = img_he.resize((new_width, new_height), Image.LANCZOS)

height, width = img_if.size # (10318, 10206)
new_width = int(width / 50)
new_height = int(height / 50)
img_if_lores = img_if.resize((new_width, new_height), Image.LANCZOS)

# Normalize matrix to values ranging from 0 to 1
Inorm = STalign.normalize(img_he_lores)
print(Inorm.min())
print(Inorm.max())
# Remove scale bar
Inorm[0:20, 175:, :] = 1

Jnorm = STalign.normalize(img_if_lores)
print(Jnorm.min())
print(Jnorm.max())
# Transpose normalized matrices to 3xNxM
I = Inorm.transpose(2,0,1)
print(I.shape)
# Coordinate grids in original full-resolution pixel units
# (each low-res pixel spans 100 original pixels)
YI = np.array(range(I.shape[1]))*100.
XI = np.array(range(I.shape[2]))*100.
extentI = STalign.extent_from_x((YI,XI))

J = Jnorm.transpose(2,0,1)
# (each low-res pixel spans 50 original pixels)
YJ = np.array(range(J.shape[1]))*50.
XJ = np.array(range(J.shape[2]))*50.
extentJ = STalign.extent_from_x((YJ,XJ))
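
Since the downscale factors only enter through these coordinate grids, the construction can be wrapped in a small helper so each factor appears in exactly one place (pixel_grid is my own name, not an STalign function):

import numpy as np

def pixel_grid(img, scale):
    # Coordinate grids for a (3, N, M) array, expressed in original
    # full-resolution pixel units: low-res pixel i sits at i * scale.
    Y = np.arange(img.shape[1]) * float(scale)
    X = np.arange(img.shape[2]) * float(scale)
    return Y, X

# equivalent to the lines above:
# YI, XI = pixel_grid(I, 100)
# YJ, XJ = pixel_grid(J, 50)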

# Compute initial affine transformation from points
pointsI = ...
pointsJ = ...
L,T = STalign.L_T_from_points(pointsI, pointsJ)
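
For reference, the points are just small arrays of corresponding landmarks. The values below are made up purely for illustration (the real ones were picked manually on the two images), and the (row, column) ordering and original-pixel units are my reading of the tutorials:

# Illustrative values only; the real landmarks were picked manually.
# Assumed format: N x 2 arrays in (row, column) order, in the same
# units as YI/XI and YJ/XJ (original full-resolution pixels).
pointsI = np.array([[ 2000.,  3000.],
                    [15000.,  4000.],
                    [10000., 18000.]])
pointsJ = np.array([[ 1000.,  1500.],
                    [ 7500.,  2000.],
                    [ 5000.,  9000.]])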

I fitted the model with the following parameters; it took only about 3 minutes:

params = {'L':L,'T':T,
          'niter':2000,
          'pointsI':pointsI,
          'pointsJ':pointsJ,
          'device':device,
          'sigmaM':0.15,
          'sigmaB':0.05,
          'sigmaA':0.05,
          'epV': 10,
          'a': 7500,
          'muB': torch.tensor([0,0,0]), # black is background in target
          'muA': torch.tensor([1,1,1]) # use white as artifact
          }

out = STalign.LDDMM([YI,XI],I,[YJ,XJ],J,**params)
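
(As an aside, the fitted transform can be stashed with plain torch.save so it does not need to be refit before transforming the full-resolution images later; the file name here is arbitrary.)

# Save the pieces needed for transform_image_target_to_source
torch.save({'A': out['A'], 'v': out['v'], 'xv': out['xv']}, 'lddmm_fit.pt')
# later: fit = torch.load('lddmm_fit.pt'); A, v, xv = fit['A'], fit['v'], fit['xv']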

I then tried to transform the original high-resolution IF image (3, 10206, 10318), but the result is a very low-resolution image (3, 207, 203), the same size as the reference grid I used for fitting.

# get necessary output variables
A = out['A']
v = out['v']
xv = out['xv']

# Full-resolution IF (target) image
Knorm = STalign.normalize(img_if)
print(Knorm.min())
print(Knorm.max())

K = Knorm.transpose(2,0,1)
print(K.shape) # (3, 10206, 10318)
# Coordinate grids with 1-pixel spacing (the image is already at full resolution)
YK = np.array(range(K.shape[1]))*1.
XK = np.array(range(K.shape[2]))*1.
extentK = STalign.extent_from_x((YK,XK))

# Transform high res image
newK = STalign.transform_image_target_to_source(xv,v,A,[YK,XK],K,[YI,XI])
newK = newK.cpu()
print(newK.shape) #  (3, 207, 203)

Does this mean I need to train with a high-resolution reference? Would that increase the training time a lot? Or is the training time mostly determined by the size of the target image?
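
Or is retraining not needed at all? From the shapes above it looks like the output of transform_image_target_to_source is sampled on whatever source grid is passed as the last argument, so maybe passing a finer grid over the same extent is enough. An untested sketch of what I mean (the *_hi names are mine, and I am not sure the full-resolution grid fits in memory):

# Assumption: the last argument sets the output sampling grid in source (H&E)
# coordinates, so a 1-pixel-spaced grid over the same extent should give a
# full-resolution result. Untested.
YI_hi = np.arange(I.shape[1] * 100, dtype=float)   # ~20700 samples, spacing 1
XI_hi = np.arange(I.shape[2] * 100, dtype=float)   # ~20300 samples, spacing 1
newK_hi = STalign.transform_image_target_to_source(xv, v, A, [YK, XK], K, [YI_hi, XI_hi])
print(newK_hi.shape)  # hoping for roughly (3, 20700, 20300)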
