Scripts mostly taken from Pete, and also from the forums. For easy access and reference.
TOC
Alignment: Several scripts to assist in alignment as of M9. The Store scripts write files containing the Affine transformation, while TransferObjects uses the stored file in the Affine folder with the current image name to move objects into it. The final result is that one set of files can be generated for the transforms, and those transforms can be accessed to move objects back and forth between the images.
Change annotations into Cell objects.groovy - Converts annotations into PathCellObjects, which allows certain functions to work within them, namely subcellular detection.
Change some annotations into detections.groovy - Similar to the above, except with generic detections.
Change some annotations into detections 0.2.0M11.groovy - Updated version for 0.2.0M11.
Copying annotations between images.groovy - Two scripts to save and restore annotations using a separate file.
Create cells from detections.groovy - Allows editing of base nuclei, then uses those ROIs to create cells with cell expansion.
Create detection objects from annotations.groovy - Another version of creating detection objects, with a few other options (bounding box).
Force update selected annotation.groovy - Updates a single annotation in the case where the cells within it are not considered child objects.
Lock all annotations.groovy
Merge touching detections.groovy - What it says! Similar to the mergeSelectedAnnotations() command.
Mirroring objects.groovy - Creating a mirror-image object (or applying other transformations).
Object Subtraction.groovy - Performing subtraction between two objects (to create holes, for instance).
Points to detections conversion.groovy - Change point annotations into detections.
Polyline to polygon.groovy - Convert a polyline into a closed polygon. No holes.
Rename annotations and modify annotation lists.groovy - Mass renaming. Creates a subset of a list and incrementally names annotations.
Rings around an annotation.groovy - Create bands around an annotation, for example stroma within X um, 2X um, 3X um, etc. of a tumor.
Rotate all annotations.groovy - Another transformation to rotate annotations; useful for partial tissue alignment when overlaying a QPDATA file.
Set all TMA cores to valid.groovy - Newly added cores, through the TMA menu, will not be "valid" and will be ignored. This will fix that.
Set TMA cores to valid if tissue area over threshold.groovy - Same, but runs simple tissue detection and sets TMA cores to valid based on the area of tissue detected.
Split annotations into contiguous areas.groovy - And version 2: two ways to split annotations.
TMA-add cell measurements by class.groovy - Adds percentage by class, and cells/mm^2, to the TMA core for every existing cell class.
Tumor invasion areas 0.2.0.groovy - Uses the tissue border and tumor annotation to generate regions of increasing distance from the tumor border.
Update detections in annotations.groovy - Another version of Force update selected annotation.groovy that works for multiple annotations.
Use points to create cells.groovy - https://gist.github.com/Svidro/ffb6951e70187de5eb007290f61aea4a#file-use-points-to-place-cells-groovy (duplicate entry)
/** | |
From https://forum.image.sc/t/qupath-multiple-image-alignment-and-object-transfer/35521/3 | |
0.2.0m9 | |
* Script to transfer QuPath objects from one group of images to another single image, applying an AffineTransform to any ROIs. | |
* This script should be run from the image you want to move the objects into, and will access all files within the Affine subfolder. | |
* You must have generated Affine files first through another script before using this script. | |
1. Create objects in source images. | |
2. Create alignments to destination image from within each of the source images. | |
3. Run this script from the destination image. | |
Script based on Pete's here: https://forum.image.sc/t/interactive-image-alignment/23745/9
Michael Nelson 3/2020 | |
* All objects in the source images should be imported into the destination image. | |
*/ | |
// SET ME! Delete existing objects | |
def deleteExisting = false | |
// SET ME! Change this if things end up in the wrong place | |
def createInverse = false | |
import qupath.lib.objects.PathCellObject | |
import qupath.lib.objects.PathDetectionObject | |
import qupath.lib.objects.PathObject | |
import qupath.lib.objects.PathObjects | |
import qupath.lib.objects.PathTileObject | |
import qupath.lib.roi.RoiTools | |
import qupath.lib.roi.interfaces.ROI | |
import java.awt.geom.AffineTransform | |
import static qupath.lib.gui.scripting.QPEx.* | |
path = buildFilePath(PROJECT_BASE_DIR, 'Affine') | |
new File(path).eachFile{ f-> | |
f.withObjectInputStream { | |
matrix = it.readObject() | |
def name = getProjectEntry().getImageName() | |
// Get the project & the requested image name | |
def project = getProject() | |
def entry = project.getImageList().find {it.getImageName() == f.getName()} | |
if (entry == null) { | |
print 'Could not find image with name ' + f.getName() | |
return | |
} | |
def otherHierarchy = entry.readHierarchy() | |
def pathObjects = otherHierarchy.getAnnotationObjects() | |
// Define the transformation matrix | |
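// Note: java.awt.geom.AffineTransform's constructor takes (m00, m10, m01, m11, m02, m12),
// while the stored matrix is assumed to be row-major [m00, m01, m02, m10, m11, m12] (see the Store affine script), hence the reordering below.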
def transform = new AffineTransform( | |
matrix[0], matrix[3], matrix[1], | |
matrix[4], matrix[2], matrix[5] | |
) | |
if (createInverse) | |
transform = transform.createInverse() | |
if (deleteExisting) | |
clearAllObjects() | |
def newObjects = [] | |
for (pathObject in pathObjects) { | |
newObjects << transformObject(pathObject, transform) | |
} | |
addObjects(newObjects) | |
} | |
} | |
print 'Done!' | |
/** | |
* Transform object, recursively transforming all child objects | |
* | |
* @param pathObject | |
* @param transform | |
* @return | |
*/ | |
PathObject transformObject(PathObject pathObject, AffineTransform transform) { | |
// Create a new object with the converted ROI | |
def roi = pathObject.getROI() | |
def roi2 = transformROI(roi, transform) | |
def newObject = null | |
if (pathObject instanceof PathCellObject) { | |
def nucleusROI = pathObject.getNucleusROI() | |
if (nucleusROI == null) | |
newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
else | |
newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else if (pathObject instanceof PathTileObject) { | |
newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else if (pathObject instanceof PathDetectionObject) { | |
newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else { | |
newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} | |
// Handle child objects | |
if (pathObject.hasChildren()) { | |
newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)})) | |
} | |
return newObject | |
} | |
/** | |
* Transform ROI (via conversion to Java AWT shape) | |
* | |
* @param roi | |
* @param transform | |
* @return | |
*/ | |
ROI transformROI(ROI roi, AffineTransform transform) { | |
def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses! | |
shape2 = transform.createTransformedShape(shape) | |
return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5) | |
} |
/** | |
From https://forum.image.sc/t/qupath-multiple-image-alignment-and-object-transfer/35521/2 | |
0.2.0m9 | |
If you have annotations within annotations, you may get duplicates. Ask on the forum or change the def pathObjects line. | |
To use, have all objects desired in one image, and alignment files in the Affine folder within your project folder. | |
If you have not saved those, this script will not work. | |
It will use ALL of the affine transforms in that folder to transform the objects in the current image to the destination images | |
that are named after the affine files. | |
Requires creating each affine transformation from the destination images so that there are multiple transform files with different names. | |
Michael Nelson 03/2020 | |
Script based on: https://forum.image.sc/t/interactive-image-alignment/23745/9
and adjusted thanks to Pete's script: https://forum.image.sc/t/writing-objects-to-another-qpdata-file-in-the-project/35495/2 | |
*/ | |
// SET ME! Delete existing objects | |
def deleteExisting = true | |
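// Note: in this variant the transformed objects are written into each destination image, so deleteExisting
// clears the destination image's existing objects (via otherHierarchy.clearAll() below) before the new objects are added.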
// SET ME! Change this if things end up in the wrong place | |
def createInverse = true | |
import qupath.lib.objects.PathCellObject | |
import qupath.lib.objects.PathDetectionObject | |
import qupath.lib.objects.PathObject | |
import qupath.lib.objects.PathObjects | |
import qupath.lib.objects.PathTileObject | |
import qupath.lib.roi.RoiTools | |
import qupath.lib.roi.interfaces.ROI | |
import java.awt.geom.AffineTransform | |
import static qupath.lib.gui.scripting.QPEx.* | |
path = buildFilePath(PROJECT_BASE_DIR, 'Affine') | |
new File(path).eachFile{ f-> | |
f.withObjectInputStream { | |
matrix = it.readObject() | |
def name = getProjectEntry().getImageName() | |
// Get the project & the requested image name | |
def project = getProject() | |
def entry = project.getImageList().find {it.getImageName() == f.getName()} | |
if (entry == null) { | |
print 'Could not find image with name ' + f.getName() | |
return | |
} | |
def imageData = entry.readImageData() | |
def otherHierarchy = imageData.getHierarchy() | |
def pathObjects = getAnnotationObjects() | |
// Define the transformation matrix | |
def transform = new AffineTransform( | |
matrix[0], matrix[3], matrix[1], | |
matrix[4], matrix[2], matrix[5] | |
) | |
if (createInverse) | |
transform = transform.createInverse() | |
if (deleteExisting) | |
otherHierarchy.clearAll() | |
def newObjects = [] | |
for (pathObject in pathObjects) { | |
newObjects << transformObject(pathObject, transform) | |
} | |
otherHierarchy.addPathObjects(newObjects) | |
entry.saveImageData(imageData) | |
} | |
} | |
print 'Done!' | |
/** | |
* Transform object, recursively transforming all child objects | |
* | |
* @param pathObject | |
* @param transform | |
* @return | |
*/ | |
PathObject transformObject(PathObject pathObject, AffineTransform transform) { | |
// Create a new object with the converted ROI | |
def roi = pathObject.getROI() | |
def roi2 = transformROI(roi, transform) | |
def newObject = null | |
if (pathObject instanceof PathCellObject) { | |
def nucleusROI = pathObject.getNucleusROI() | |
if (nucleusROI == null) | |
newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
else | |
newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else if (pathObject instanceof PathTileObject) { | |
newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else if (pathObject instanceof PathDetectionObject) { | |
newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else { | |
newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} | |
// Handle child objects | |
if (pathObject.hasChildren()) { | |
newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)})) | |
} | |
return newObject | |
} | |
/** | |
* Transform ROI (via conversion to Java AWT shape) | |
* | |
* @param roi | |
* @param transform | |
* @return | |
*/ | |
ROI transformROI(ROI roi, AffineTransform transform) { | |
def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses! | |
shape2 = transform.createTransformedShape(shape) | |
return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5) | |
} |
// From https://forum.image.sc/t/qupath-multiple-image-alignment-and-object-transfer/35521 | |
//0.2.0m9 | |
//Script writes out a file with the name of the current image, and the Affine Transformation in effect in the current viewer. | |
//Can get confused if there is more than one overlay active at once. | |
//Current image should be the destination image | |
// Michael Nelson 03/2020 | |
def name = getProjectEntry().getImageName() | |
path = buildFilePath(PROJECT_BASE_DIR, 'Affine') | |
mkdirs(path) | |
path = buildFilePath(PROJECT_BASE_DIR, 'Affine', name) | |
import qupath.lib.gui.align.ImageServerOverlay | |
def overlay = getCurrentViewer().getCustomOverlayLayers().find {it instanceof ImageServerOverlay} | |
affine = overlay.getAffine() | |
print affine | |
afString = affine.toString() | |
afString = afString.minus('Affine [').minus(']').trim().split('\n') | |
cleanAffine =[] | |
afString.each{ | |
temp = it.split(',') | |
temp.each{cleanAffine << Double.parseDouble(it)} | |
} | |
def matrix = [] | |
affineList = [0,1,3,4,5,7] | |
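// Assumed layout: the overlay's Affine toString prints a 3x4 row-major matrix, so indices 0, 1, 3 are mxx, mxy, tx
// and indices 4, 5, 7 are myx, myy, ty; the remaining entries belong to the unused third dimension and are skipped.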
for (i=0;i<12; i++){ | |
if (affineList.contains(i)) | |
matrix << cleanAffine[i] | |
} | |
new File(path).withObjectOutputStream { | |
it.writeObject(matrix) | |
} | |
print 'Done!' |
// From https://forum.image.sc/t/qupath-multiple-image-alignment-and-object-transfer/35521 | |
// 0.2.0m9 | |
//Paste matrix between brackets for 'matrix' variable | |
//Current image should be the destination image | |
// Michael Nelson 03/2020 | |
def name = getProjectEntry().getImageName() | |
path = buildFilePath(PROJECT_BASE_DIR, 'Affine') | |
mkdirs(path) | |
path = buildFilePath(PROJECT_BASE_DIR, 'Affine', name) | |
def matrix = [] | |
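// Illustrative example only (an identity transform), using the same row-major order as the other Affine scripts:
// def matrix = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]   // [mxx, mxy, tx, myx, myy, ty]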
new File(path).withObjectOutputStream { | |
it.writeObject(matrix) | |
} | |
print 'Done!' |
/** | |
* Script to transfer QuPath objects from one image to another, applying an AffineTransform to any ROIs. | |
* Should be run from the image you want to move the objects into | |
* otherImageName should be the name of the file the objects are coming from... | |
* and will also be the name of the affine transformation matrix file in the Affine folder | |
* The easiest way to do this is to create objects in image A, perform the alignment between images in image B | |
* and then run this script from image B (destination). | |
* The script can be reworked to behave in different ways as well, for example you could | |
* create an alignment in image B, C, and D to image A (open 3 different images, create the alignment to the first image | |
* in each), which will give you Affine files named B, C and D. Then the objects in B, C and D could all be transferred back | |
* to A, or by flipping the invert boolean, distribute the objects in image A to B, C and D. Performing the alignment to A | |
* from all other images streamlines downstream mass transfers, though some of the script may need to be edited for specific cases | |
* There is also a variant of the script that loops through all Affine objects within a folder and uses them to transfer
* objects in a loop. | |
Built off of : https://forum.image.sc/t/interactive-image-alignment/23745/9 | |
*/ | |
def name = getProjectEntry().getImageName() | |
def path = buildFilePath(PROJECT_BASE_DIR, 'Affine',name) | |
def matrix = null | |
new File(path).withObjectInputStream { | |
matrix = it.readObject() | |
} | |
// SET ME! Define image containing the original objects (must be in the current project) | |
def otherImageName = "name.ndpi" | |
// SET ME! Delete existing objects | |
def deleteExisting = true | |
// SET ME! Change this if things end up in the wrong place | |
def createInverse = true | |
import qupath.lib.objects.PathCellObject | |
import qupath.lib.objects.PathDetectionObject | |
import qupath.lib.objects.PathObject | |
import qupath.lib.objects.PathObjects | |
import qupath.lib.objects.PathTileObject | |
import qupath.lib.roi.RoiTools | |
import qupath.lib.roi.interfaces.ROI | |
import java.awt.geom.AffineTransform | |
import static qupath.lib.gui.scripting.QPEx.* | |
// Get the project & the requested image name | |
def project = getProject() | |
def entry = project.getImageList().find {it.getImageName() == otherImageName} | |
if (entry == null) { | |
print 'Could not find image with name ' + otherImageName | |
return | |
} | |
def otherHierarchy = entry.readHierarchy() | |
def pathObjects = otherHierarchy.getAnnotationObjects() | |
// Define the transformation matrix | |
def transform = new AffineTransform( | |
matrix[0], matrix[3], matrix[1], | |
matrix[4], matrix[2], matrix[5] | |
) | |
if (createInverse) | |
transform = transform.createInverse() | |
if (deleteExisting) | |
clearAllObjects() | |
def newObjects = [] | |
for (pathObject in pathObjects) { | |
newObjects << transformObject(pathObject, transform) | |
} | |
addObjects(newObjects) | |
print 'Done!' | |
/** | |
* Transform object, recursively transforming all child objects | |
* | |
* @param pathObject | |
* @param transform | |
* @return | |
*/ | |
PathObject transformObject(PathObject pathObject, AffineTransform transform) { | |
// Create a new object with the converted ROI | |
def roi = pathObject.getROI() | |
def roi2 = transformROI(roi, transform) | |
def newObject = null | |
if (pathObject instanceof PathCellObject) { | |
def nucleusROI = pathObject.getNucleusROI() | |
if (nucleusROI == null) | |
newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
else | |
newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else if (pathObject instanceof PathTileObject) { | |
newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else if (pathObject instanceof PathDetectionObject) { | |
newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} else { | |
newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList()) | |
} | |
// Handle child objects | |
if (pathObject.hasChildren()) { | |
newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)})) | |
} | |
return newObject | |
} | |
/** | |
* Transform ROI (via conversion to Java AWT shape) | |
* | |
* @param roi | |
* @param transform | |
* @return | |
*/ | |
ROI transformROI(ROI roi, AffineTransform transform) { | |
def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses! | |
shape2 = transform.createTransformedShape(shape) | |
return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5) | |
} |
//Change annotations into Cell objects: converts annotations into PathCellObjects, so functions such as subcellular detection work within them
import qupath.lib.objects.PathCellObject
// Get the current hierarchy | |
def hierarchy = getCurrentHierarchy() | |
// Get the non-top level annotations. This assumes you have a tissue area selected, and have hand drawn some cells within that. | |
def targets = getObjects{return it.getLevel()!=1 && it.isAnnotation()} | |
// Check we have anything to work with | |
if ( targets.isEmpty()) { | |
print("No objects selected!") | |
return | |
} | |
// Loop through objects | |
def newDetections = new ArrayList<>() | |
for (def pathObject in targets) { | |
// Unlikely to happen... but skip any objects not having a ROI | |
if (!pathObject.hasROI()) { | |
print("Skipping object without ROI: " + pathObject) | |
continue | |
} | |
def roi = pathObject.getROI() | |
def detection = new PathCellObject(roi,roi,pathObject.getPathClass()) | |
newDetections.add(detection) | |
print("Adding " + detection) | |
} | |
removeObjects( targets, true) | |
// Actually add the objects | |
hierarchy.addPathObjects(newDetections, false) | |
//Remove nucleus ROI | |
newDetections.each{it.nucleus = null} | |
fireHierarchyUpdate() | |
if (newDetections.size() > 1) | |
print("Added " + newDetections.size() + " detections(s)") | |
//Script works as of 0.2.0M11 | |
//Purpose: to change objects from annotations to detections or vice versa | |
//Measurements WILL be lost. | |
//By default it changes annotations of class "Positive" into detections, and removes the original annotations. | |
//Adjust the == section to fit your desired target classes | |
//To change detections into annotations, swap getAnnotationObjects to getDetectionObjects in the first line | |
//and createDetectionObject to createAnnotationObject in the middle of the script | |
toChange = getAnnotationObjects().findAll {it.getPathClass() == getPathClass("Positive")} | |
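// Illustrative example only: to target several classes at once, something like this should work:
// toChange = getAnnotationObjects().findAll {it.getPathClass() in [getPathClass("Positive"), getPathClass("Negative")]}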
newObjects = [] | |
toChange.each{
roi = it.getROI()
detection = PathObjects.createDetectionObject(roi, it.getPathClass())
newObjects.add(detection)
}
// Actually add the objects | |
addObjects(newObjects) | |
//Comment this line out if you want to keep the original objects | |
removeObjects(toChange,true) | |
resolveHierarchy() | |
print("Done!") |
// https://groups.google.com/forum/#!topic/qupath-users/rKHqWQHhaEE | |
// Get all annotations with ellipse ROIs | |
def annotations = getAnnotationObjects() | |
//change this line to subset = annotations if you want to convert ALL current annotations to detections. Otherwise adjust as desired. | |
def subset = annotations.findAll {it.getROI() instanceof qupath.lib.roi.EllipseROI} | |
// Create corresponding detections (name this however you like) | |
def classification = getPathClass('Node') | |
def detections = subset.collect { | |
new qupath.lib.objects.PathDetectionObject(it.getROI(), classification) | |
} | |
// Remove ellipse annotations & replace with detections | |
removeObjects(subset, true) | |
addObjects(detections) |
//This is actually two scripts, the first should be run in the image you are copying from | |
//and the second in the image you are copying to. | |
//Export section | |
def path = buildFilePath(PROJECT_BASE_DIR, 'annotations-' + getProjectEntry().getImageName() + '.txt') | |
def annotations = getAnnotationObjects().collect {new qupath.lib.objects.PathAnnotationObject(it.getROI(), it.getPathClass())} | |
new File(path).withObjectOutputStream { | |
it.writeObject(annotations) | |
} | |
print 'Done!' | |
//There should now be a saved .txt file in the base project directory that holds the annotation information. | |
//Swap images and run the second set of code using the file name generated in the first half of the code. | |
// Current path works for an image that has the same name as the original image, but you can adjust that. | |
def path = buildFilePath(PROJECT_BASE_DIR, 'annotations-' + getProjectEntry().getImageName() + '.txt') | |
def annotations = null | |
new File(path).withObjectInputStream { | |
annotations = it.readObject() | |
} | |
addObjects(annotations) | |
getAnnotationObjects().each{it.setLocked(true)} | |
print 'Added ' + annotations |
//QuPath 0.3.0 | |
//https://forum.image.sc/t/is-it-possible-to-modify-the-cell-expansion-parameter-after-cells-have-been-detected/61665/3 | |
def detections = getDetectionObjects() | |
def cells = CellTools.detectionsToCells(detections, 10, -1) | |
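// Assumed parameter meanings (check the CellTools javadoc to confirm): 10 = cell expansion distance in pixels, -1 = no constraint on the nucleus-to-cell size ratio.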
removeObjects(detections, true) | |
addObjects(cells) |
/* | |
* A script to create detection object(s) having the same ROI as all other annotation objects | |
*/ | |
import qupath.lib.objects.PathTileObject | |
import qupath.lib.roi.RectangleROI | |
import qupath.lib.scripting.QP | |
// Set this to true to use the bounding box of the ROI, rather than the ROI itself | |
boolean useBoundingBox = false | |
// Get the current hierarchy | |
def hierarchy = QP.getCurrentHierarchy() | |
// Get all annotation objects: You may want to change this to select only certain annotations based on class | |
// or just annotations you have selected using =getSelectedObjects() | |
def selected = getAnnotationObjects() | |
// Check we have anything to work with | |
if (selected.isEmpty()) { | |
print("No objects selected!") | |
return | |
} | |
// Loop through objects | |
def newDetections = new ArrayList<>() | |
for (def pathObject in selected) { | |
// Unlikely to happen... but skip any objects not having a ROI | |
if (!pathObject.hasROI()) { | |
print("Skipping object without ROI: " + pathObject) | |
continue | |
} | |
// Skip objects that are already detections, unless we want a bounding box
if (!useBoundingBox && pathObject.isDetection()) {
print("Skipping detection: " + pathObject)
continue | |
} | |
// Create a detection for whichever object is selected, with the same class
// Note: because ROIs are (or should be) immutable, the same ROI is used here, rather than a duplicate | |
def roi = pathObject.getROI() | |
if (useBoundingBox) | |
roi = new RectangleROI( | |
roi.getBoundsX(), | |
roi.getBoundsY(), | |
roi.getBoundsWidth(), | |
roi.getBoundsHeight(), | |
roi.getC(), | |
roi.getZ(), | |
roi.getT()) | |
def detection = new PathTileObject(roi, pathObject.getPathClass()) | |
newDetections.add(detection) | |
print("Adding " + detection) | |
} | |
// Actually add the objects | |
hierarchy.addPathObjects(newDetections, false) | |
if (newDetections.size() > 1) | |
print("Added " + newDetections.size() + " detections(s)") |
//This script forces the annotation to detect whether cells are inside of it | |
//Useful when pasting an annotation onto a set of detections | |
//Added warning, this may appear to freeze the program if a lot of detections are being updated. Be patient. | |
selected = getSelectedObject() | |
removeObject(selected, true) | |
addObject(selected) | |
//Locking is not absolutely necessary, but it is good practice so you don't "jiggle" your updated annotation
selected.setLocked(true) |
//Lock all annotations
getAnnotationObjects().each{it.setLocked(true)}
//Grabs all detections, removes them, and creates new detections with touching detections being merged. | |
//All previous information is lost.
// 0.2.0M9 | |
// Base code by @smcardle, cleaned up by @Research_Associate and @bpavie https://forum.image.sc/t/custom-segmentation-by-tiles/35501/8 | |
import org.locationtech.jts.geom.util.GeometryCombiner; | |
import org.locationtech.jts.operation.union.UnaryUnionOp | |
import qupath.lib.roi.GeometryTools | |
import qupath.lib.roi.RoiTools | |
import qupath.lib.objects.PathObjects | |
def islets= getDetectionObjects() | |
def geos=islets.collect{it.getROI().getGeometry()} | |
def combined=GeometryCombiner.combine(geos) | |
def merged= UnaryUnionOp.union(combined) | |
def mergedRois=GeometryTools.geometryToROI(merged, islets[0].getROI().getImagePlane()) | |
def splitRois=RoiTools.splitROI(mergedRois) | |
def newObjs=[] | |
splitRois.each{ | |
newObjs << PathObjects.createDetectionObject(it,getPathClass("Region")) | |
} | |
addObjects(newObjs) | |
removeObjects(islets,true) |
/** https://github.com/qupath/qupath/issues/199 | |
* Flip a QuPath ROI vertically or horizontally. | |
* | |
* This creates a new annotation & adds it to the current image hierarchy. | |
* | |
* This also shows the method by which any arbitrary AffineTransform may be | |
* applied to a ROI by scripting. | |
* | |
* @author Pete Bankhead | |
*/ | |
import qupath.lib.objects.PathAnnotationObject | |
import qupath.lib.roi.PathROIToolsAwt | |
import qupath.lib.scripting.QPEx | |
import java.awt.geom.AffineTransform | |
// Get selected object & its ROI | |
def selected = QPEx.getSelectedObject() | |
def roi = selected?.getROI() | |
if (roi == null) { | |
print 'No ROI selected!' | |
return | |
} | |
// Create the relevant transforms, incorporating the image dimensions | |
def server = QPEx.getCurrentImageData().getServer() | |
def flipHorizontal = AffineTransform.getScaleInstance(-1, 1) | |
flipHorizontal.translate(-server.getWidth(), 0) | |
def flipVertical = AffineTransform.getScaleInstance(1, -1) | |
flipVertical.translate(0, -server.getHeight()) | |
// Get a Shape for the ROI & apply required transform | |
def shape = PathROIToolsAwt.getShape(roi) | |
def shape2 = flipHorizontal.createTransformedShape(shape) | |
//def shape2 = flipVertical.createTransformedShape(shape) | |
// Create & add a new annotation | |
def roi2 = PathROIToolsAwt.getShapeROI(shape2, roi.getC(), roi.getZ(), roi.getT(), 0.5) | |
QPEx.addObject(new PathAnnotationObject(roi2, selected.getPathClass())) |
//QP 0.2.3 | |
//from https://forum.image.sc/t/qupath-scripting-combining-two-annotations/48152/2 | |
//For 0.1.2 see below | |
def tissue = getAnnotationObjects().find {it.getPathClass() == getPathClass("Positive")} | |
def vessel = getAnnotationObjects().find {it.getPathClass() == getPathClass("Negative")} | |
def plane = tissue.getROI().getImagePlane() | |
if (plane != vessel.getROI().getImagePlane()) { | |
println 'Annotations are on different planes!' | |
return | |
} | |
// Convert to geometries & compute distance | |
// Note: see https://locationtech.github.io/jts/javadoc/org/locationtech/jts/geom/Geometry.html#distance-org.locationtech.jts.geom.Geometry- | |
def g1 = tissue.getROI().getGeometry() | |
def g2 = vessel.getROI().getGeometry() | |
def difference = g1.difference(g2) | |
if (difference.isEmpty()) | |
println "No intersection between areas" | |
else { | |
def roi = GeometryTools.geometryToROI(difference, plane) | |
def annotation = PathObjects.createAnnotationObject(roi, getPathClass('Difference')) | |
addObject(annotation) | |
selectObjects(annotation) | |
println "Annotated created for subtraction" | |
} | |
//remove original objects | |
removeObject(tissue, true) | |
removeObject(vessel, true) | |
// https://groups.google.com/forum/#!topic/qupath-users/WlxDfgjqrCU | |
//0.1.2 | |
import qupath.lib.roi.* | |
import qupath.lib.objects.* | |
classToSubtract = "Endothel" | |
def topLevel = getObjects{return it.getLevel()==1 && it.isAnnotation()} | |
for (parent in topLevel){ | |
def total = [] | |
def polygons = [] | |
subtractions = parent.getChildObjects().findAll{it.isAnnotation() && it.getPathClass() == getPathClass(classToSubtract)} | |
for (subtractyBit in subtractions){
if (subtractyBit.getROI() instanceof AreaROI){
subtractionROIs = PathROIToolsAwt.splitAreaToPolygons(subtractyBit.getROI())
total.addAll(subtractionROIs[1])
} else {total.add(subtractyBit.getROI())}
}
if (parent.getROI() instanceof AreaROI){
polygons = PathROIToolsAwt.splitAreaToPolygons(parent.getROI())
total.addAll(polygons[0])
} else { polygons[1] = [parent.getROI()]}
def newPolygons = polygons[1].collect { | |
updated = it | |
for (hole in total) | |
updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT) | |
return updated | |
} | |
// Remove original annotation, add new ones | |
annotations = newPolygons.collect {new PathAnnotationObject(it, parent.getPathClass())}
addObjects(annotations) | |
removeObjects(subtractions, true) | |
removeObject(parent, true) | |
} | |
//0.2.0 version of https://forum.image.sc/t/assign-point-objects-to-different-rois-by-overlap/26905/2?u=research_associate | |
// Convert points into detections, then resolve the hierarchy so they become child objects of any annotations.
import qupath.lib.roi.EllipseROI; | |
import qupath.lib.objects.PathDetectionObject | |
points = getAnnotationObjects().findAll{it.getROI().isPoint() } | |
print points[0].getROI() | |
describe(points[0].getROI()) | |
//Cycle through each points object (which is a collection of points) | |
points.each{ | |
//Cycle through all points within a points object | |
pathClass = it.getPathClass() | |
it.getROI().getAllPoints().each{ | |
//for each point, create a circle on top of it that is "size" pixels in diameter | |
x = it.getX() | |
y = it.getY() | |
size = 5 | |
def roi = ROIs.createEllipseROI(x-size/2,y-size/2,size,size, ImagePlane.getDefaultPlane()) | |
def aCell = new PathDetectionObject(roi, pathClass) | |
addObject(aCell) | |
} | |
} | |
//remove points if desired. | |
removeObjects(points, false) | |
resolveHierarchy() |
// https://forum.image.sc/t/converting-a-polyline-annotation-into-a-polygon/36307/2?u=research_associate | |
//0.2.0 | |
def lineObjects = getAnnotationObjects().findAll {it.getROI().isLine()} | |
def polygonObjects = [] | |
for (lineObject in lineObjects) { | |
def line = lineObject.getROI() | |
def polygon = ROIs.createPolygonROI(line.getAllPoints(), line.getImagePlane()) | |
def polygonObject = PathObjects.createAnnotationObject(polygon, lineObject.getPathClass()) | |
polygonObject.setName(lineObject.getName()) | |
polygonObject.setColorRGB(lineObject.getColorRGB()) | |
polygonObjects << polygonObject | |
} | |
removeObjects(lineObjects, true) | |
addObjects(polygonObjects) |
//Takes all non ellipses and modifies the names of those annotations | |
//Classifies all ellipses | |
// https://groups.google.com/forum/#!topic/qupath-users/rKHqWQHhaEE | |
// 0.1.2 0.2.0 | |
// Get all annotations with ellipse ROIs | |
def annotations = getAnnotationObjects() | |
def ellipses = annotations.findAll {it.getROI() instanceof qupath.lib.roi.EllipseROI} | |
// Assign classifications to nodes | |
def classification = getPathClass('Node') | |
ellipses.each {it.setPathClass(classification)} | |
// Assign names to all other annotations | |
annotations.removeAll(ellipses) | |
annotations.eachWithIndex {annotation, i -> annotation.setName('Annotation ' + (i+1))} | |
fireHierarchyUpdate() |
guiscript=true | |
//Script demonstrates the use of guiscript=true to prevent threading errors (try turning it off!) | |
//and using subtraction and selection in combination with built in functions like dilation | |
//Loop starting from one annotation object, with no other objects present to constrain it
//Create bands | |
//0.1.2 | |
import qupath.lib.roi.* | |
import qupath.lib.objects.* | |
bands = 5 | |
first = getAnnotationObjects() | |
first[0].setName("0") | |
firstROI = first[0].getROI() | |
firstROIClass = first[0].getPathClass() | |
//test = [] | |
for (i=1; i<=bands; i++){ | |
j=i-1 | |
print j | |
currentObjects = getAnnotationObjects().findAll{it.getName() == j.toString()} | |
currentObjects[0].setName(i.toString()) | |
selectObjects{it.getName() == i.toString()} | |
//Thread.sleep(2000); | |
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": 100.0, "removeInterior": false, "constrainToParent": true}'); | |
fireHierarchyUpdate() | |
j = j.toString() | |
print(currentObjects) | |
currentObjects[0].setName(j.toString()) | |
} | |
for (i=bands; i>0; i--){ | |
toRingify = getAnnotationObjects().findAll{it.getName() == i.toString()} | |
j=i-1 | |
ringHole = getAnnotationObjects().findAll{it.getName() == j.toString()} | |
ring = PathROIToolsAwt.combineROIs(toRingify[0].getROI(), ringHole[0].getROI(), PathROIToolsAwt.CombineOp.SUBTRACT) | |
addObjects(new PathAnnotationObject(ring, toRingify[0].getPathClass())) | |
ring = getAnnotationObjects().findAll{it.getName() == null} | |
ring[0].setName(i.toString()) | |
removeObjects(toRingify,true) | |
fireHierarchyUpdate() | |
} |
// from https://groups.google.com/forum/#!searchin/qupath-users/rotate$20%7Csort:date/qupath-users/UvkNb54fYco/ri_4K6tiCwAJ | |
//0.1.2 | |
import qupath.lib.objects.* | |
import qupath.lib.roi.* | |
//As usual, change this to target the annotations you want to rotate | |
def annotations = getAnnotationObjects() | |
//Set the number of degrees you wish to rotate your objects | |
double deg = -60 | |
double degrees = Math.toRadians(deg) // sets the number of degrees to rotate | |
for (j = 0; j < annotations.size; j++) | |
{ | |
def roi = annotations[j].getROI() | |
def roiPoints = annotations[j].getROI().getPolygonPoints() | |
def roiPointsArx = roiPoints.x.toArray() //convert each point to an array | |
def roiPointsAry = roiPoints.y.toArray() //for x and y coordinates | |
//the centroid of the roi - which will be used to center the shape around (0,0) | |
double centroidX = roi.getCentroidX() | |
double centroidY = roi.getCentroidY() | |
for (i= 0; i< roiPointsAry.length; i++) | |
{ | |
// correct the center to 0 | |
roiPointsArx[i] = roiPointsArx[i] - centroidX | |
roiPointsAry[i] = roiPointsAry[i] - centroidY | |
//Makes prime placeholders, which allows the calculations x'=xcos(theta)-ysin(theta), y'=ycos(theta)+xsin(theta) to be performed | |
double newPointX = roiPointsArx[i] | |
double newPointY = roiPointsAry[i] | |
// then rotate | |
roiPointsArx[i] = (newPointX * Math.cos(degrees)) - (newPointY * Math.sin(degrees)) | |
roiPointsAry[i] = (newPointY * Math.cos(degrees)) + (newPointX * Math.sin(degrees)) | |
// then move it back | |
roiPointsArx[i] = roiPointsArx[i] + centroidX | |
roiPointsAry[i] = roiPointsAry[i] + centroidY | |
} | |
// then to convert it back into an object | |
def xFloat = roiPointsArx as float[] | |
def yFloat = roiPointsAry as float[] | |
def roiNew = new PolygonROI(xFloat, yFloat, -1, 0, 0) | |
def pathObjectNew = new PathAnnotationObject(roiNew) | |
addObject(pathObjectNew) | |
} | |
removeObjects(annotations, true) |
//0.1.2 0.2.0
//Set all TMA cores to valid (not missing)
getTMACoreList().each{ | |
it.setMissing(false) | |
} | |
fireHierarchyUpdate() | |
println("done") |
//REPLACE TISSUE DETECTION LINE BELOW WITH YOUR OWN | |
//Set TMA cores where the tissue area is below a threshold to Missing so as to avoid inclusion in future calculations. | |
//Any annotations below the MINIMUM_AREA_um2 threshold will also be deleted.
MINIMUM_AREA_um2 = 30000 | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def pixelSize = server.getPixelHeightMicrons() | |
getTMACoreList().each{ | |
it.setMissing(false) | |
} | |
selectTMACores(); | |
///////////////////////REPLACE WITH YOUR OWN | |
runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 233, "requestedPixelSizeMicrons": 5.0, "minAreaMicrons": 10000.0, "maxHoleAreaMicrons": 1000000.0, "darkBackground": false, "smoothImage": true, "medianCleanup": true, "dilateBoundaries": false, "smoothCoordinates": true, "excludeOnBoundary": false, "singleAnnotation": true}'); | |
//////////////////////////////////////// | |
//Potentially alter script to run off of mean intensity in core? | |
//runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 2.0, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": true, "colorStain1": false, "colorStain2": false, "colorStain3": false, "colorRed": false, "colorGreen": false, "colorBlue": false, "colorHue": false, "colorSaturation": false, "colorBrightness": false, "doMean": false, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}'); | |
getTMACoreList().each{ | |
it.setMissing(true) | |
list = it.getChildObjects() | |
println(list) | |
if ( list.size() > 0){ | |
double total = 0 | |
for (object in list) { | |
total = total+object.getROI().getArea() | |
} | |
total = total*pixelSize*pixelSize | |
println(it.getName() + " list " + list.size() + " total " + total)
if ( total > MINIMUM_AREA_um2 ){ | |
it.setMissing(false) | |
} else {removeObjects(list, true)} | |
} | |
} | |
fireHierarchyUpdate() | |
println("done") |
//from forums: https://groups.google.com/d/msg/qupath-users/DugpYQJq9Ic/J1qHWkZ4CgAJ | |
//0.1.2 | |
import qupath.lib.roi.* | |
import qupath.lib.objects.* | |
//selects all annotation areas. Change this if you want only a subset of your areas split into contiguous area components. | |
def areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI} | |
areaAnnotations.each { selected -> | |
def polygons = PathROIToolsAwt.splitAreaToPolygons(selected.getROI()) | |
def newPolygons = polygons[1].collect { | |
updated = it | |
for (hole in polygons[0]) | |
updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT) | |
return updated | |
} | |
// Remove original annotation, add new ones | |
annotations = newPolygons.collect {new PathAnnotationObject(it)} | |
resetSelection() | |
removeObject(selected, true) | |
addObjects(annotations) | |
} |
//Another version of the split annotations script from : https://github.com/qupath/qupath/issues/99 | |
//0.1.2 | |
import static qupath.lib.roi.PathROIToolsAwt.splitAreaToPolygons | |
import qupath.lib.roi.AreaROI | |
import qupath.lib.objects.PathAnnotationObject | |
// Get all the annotations | |
def annotations = getAnnotationObjects() | |
// Prepare to add/remove annotations in batch | |
def toAdd = [] | |
def toRemove = [] | |
// Loop through the annotations, preparing to make changes | |
for (annotation in annotations) { | |
def roi = annotation.getROI() | |
// If we have an area, prepare to remove it - | |
// and add the separated polygons | |
if (roi instanceof AreaROI) { | |
toRemove << annotation | |
for (p in splitAreaToPolygons(roi)[1]) { | |
toAdd << new PathAnnotationObject(p, annotation.getPathClass()) | |
} | |
} | |
} | |
// Perform the changes | |
removeObjects(toRemove, true) | |
addObjects(toAdd) |
//Checks for all detections within a TMA, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS. | |
//That last bit should make it compatible with trained classifiers. | |
//Cells per mm^2 assumes an annotation within the TMA, if there are no annotations within the TMA, remove related lines. | |
//0.1.2 | |
import qupath.lib.objects.PathCellObject | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def pixelSize = server.getPixelHeightMicrons() | |
Set classList = [] | |
for (object in getCellObjects()) { | |
classList << object.getPathClass() | |
} | |
println(classList) | |
hierarchy = getCurrentHierarchy() | |
getTMACoreList().each{ | |
totalCells = hierarchy.getDescendantObjects(it,null, PathCellObject) | |
for (aClass in classList){ | |
if (aClass){ | |
if (totalCells.size() > 0){ | |
def cells = hierarchy.getDescendantObjects(it,null, PathCellObject).findAll{it.getPathClass() == aClass} | |
//it.getMeasurementList().putMeasurement(aClass.getName()+" cells", cells.size()) | |
it.getMeasurementList().putMeasurement(aClass.getName()+" %", cells.size()*100/totalCells.size()) | |
def annotationArea = it.getChildObjects()[0].getROI().getArea() | |
it.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", cells.size()/(annotationArea*pixelSize*pixelSize/1000000)) | |
} else { | |
//it.getMeasurementList().putMeasurement(aClass.getName()+" cells", 0) | |
it.getMeasurementList().putMeasurement(aClass.getName()+" %", 0) | |
it.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", 0) | |
} | |
} | |
} | |
} | |
println("done") |
/** | |
0.2.0? 0.3.0 yes | |
* Script to help with annotating tumor regions, chopping increasing chunks into the tumor. | |
* SEE THREAD HERE FOR DESCRIPTION ON USE: https://forum.image.sc/t/reduce-annotations/24305/12?u=research_associate | |
* Here, each of the margin regions is approximately 100 microns in width. | |
Recommended BEFORE running the script if your tumor is on the tissue border: | |
1. Create a tissue annotation for boundaries (class tissue, eliminate whitespace) | |
2. Create a tumor annotation | |
3. Run the script, it should only take into account tumor areas inside of the tissue annotation. | |
* @author Pete Bankhead | |
* @mangled by Svidro because reasons | |
* mix and match with @Mike_Nelson version | |
*/ | |
//----- | |
// Some things you might want to change | |
// How much to expand each region | |
double expandMarginMicrons = 20.0 | |
// How many times you want to chop into your annotation. Edit color script around line 115 if you go over 5 | |
int howManyTimes = 4 | |
// Define the colors | |
// Inner layers are given scripted colors, but greater than 6 or 7 layers may require adjustments
def colorOuterMargin = getColorRGB(0, 200, 0) | |
PrecisionModel PM = new PrecisionModel(PrecisionModel.FIXED) | |
// Choose whether to lock the annotations or not (it's generally a good idea to avoid accidentally moving them) | |
def lockAnnotations = true | |
// Extract the main info we need | |
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
def server = imageData.getServer() | |
// We need the pixel size | |
def cal = server.getPixelCalibration() | |
if (!cal.hasPixelSizeMicrons()) { | |
print 'We need the pixel size information here!' | |
return | |
} | |
//----- | |
//Setup - Merge all Tumor objects into one, they can be split later. Get Geometries for each object | |
selectObjectsByClassification("Tumor") | |
mergeSelectedAnnotations() | |
double expandPixels = expandMarginMicrons / cal.getAveragedPixelSizeMicrons() | |
initialTumorObject = getAnnotationObjects().find{it.getPathClass() == getPathClass("Tumor")} | |
def tumorGeom = getAnnotationObjects().find{it.getPathClass() == getPathClass("Tumor")}.getROI().getGeometry() | |
def plane = ImagePlane.getDefaultPlane() | |
def tissueGeom = getAnnotationObjects().find{it.getPathClass() == getPathClass("Tissue")}.getROI().getGeometry() | |
//Clean up the Tumor geometry | |
cleanTumorGeom = tissueGeom.intersection(tumorGeom) | |
tumorROIClean = GeometryTools.geometryToROI(cleanTumorGeom, plane) | |
cleanTumor = PathObjects.createAnnotationObject(tumorROIClean, getPathClass("Tumor")) | |
cleanTumor.setName("CleanTumor") | |
// Get the outer margin area | |
def geomOuter = cleanTumorGeom.buffer(expandPixels) | |
geomOuter = geomOuter.difference(cleanTumorGeom) | |
geomOuter = geomOuter.intersection(tissueGeom) | |
def roiOuter = GeometryTools.geometryToROI(geomOuter, plane) | |
def annotationOuter = PathObjects.createAnnotationObject(roiOuter) | |
annotationOuter.setName("Outer margin") | |
annotationOuter.setColorRGB(colorOuterMargin) | |
innerAnnotations = [] | |
innerAnnotations << cleanTumor | |
innerAnnotations << annotationOuter | |
//innerAnnotations << selected | |
for (i=0; i<howManyTimes;i++){ | |
currentArea = innerAnnotations[innerAnnotations.size()-1].getROI().getGeometry() | |
areaExpansion = currentArea.buffer(expandPixels) | |
areaExpansion = areaExpansion.intersection(cleanTumorGeom) | |
areaExpansion = areaExpansion.buffer(0) | |
//println(areaExpansion) | |
areaExpansion = areaExpansion.intersection(tissueGeom) | |
//println(areaExpansion) | |
//remove outer areas previously defined as other innerAnnotations | |
if(i>=1){ | |
for (k=1; k<=i;k++){ | |
remove = innerAnnotations[innerAnnotations.size()-k].getROI().getGeometry() | |
areaExpansion = areaExpansion.difference(remove) | |
} | |
} | |
areaExpansion= GeometryPrecisionReducer.reduce(areaExpansion, PM) | |
roiExpansion = GeometryTools.geometryToROI(areaExpansion, plane) | |
j = i+1 | |
annotationExpansion = PathObjects.createAnnotationObject(roiExpansion) | |
int nameValue = j*expandMarginMicrons | |
annotationExpansion.setName("Inner margin "+nameValue+" microns") | |
annotationExpansion.setColorRGB(getColorRGB(20*i, 40*i, 200-30*i)) | |
innerAnnotations << annotationExpansion | |
//println("innerannotations size "+innerAnnotations.size()) | |
} | |
//add one last inner annotation that contains the rest of the tumor | |
core = cleanTumorGeom | |
for (i=1; i<=howManyTimes;i++){ | |
core = core.difference(innerAnnotations[i].getROI().getGeometry()) | |
} | |
coreROI = GeometryTools.geometryToROI(core, plane) | |
coreAnno = PathObjects.createAnnotationObject(coreROI) | |
coreAnno.setName("Remaining Tumor") | |
print "core geom: " +coreAnno.getClass() | |
innerAnnotations << coreAnno | |
print innerAnnotations | |
// Add the annotations | |
//hierarchy.removeObject(selected, true) | |
getAnnotationObjects().each {it.setLocked(lockAnnotations)} | |
addObjects(innerAnnotations) | |
removeObject(initialTumorObject, true) | |
println("Done! Wheeeee!") | |
import org.locationtech.jts.geom.Geometry | |
import qupath.lib.common.GeneralTools | |
import qupath.lib.objects.PathObject | |
import qupath.lib.objects.PathObjects | |
import qupath.lib.roi.GeometryTools | |
import qupath.lib.roi.ROIs | |
import java.awt.Rectangle | |
import java.awt.geom.Area | |
import org.locationtech.jts.precision.GeometryPrecisionReducer | |
import org.locationtech.jts.geom.PrecisionModel |
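// Note: Groovy allows import statements anywhere at the top level of a script, so the imports above apply to the whole script even though they appear at the end.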
//Description of use found here: https://petebankhead.github.io/qupath/scripts/2018/08/08/three-regions.html | |
//0.1.2 | |
/** | |
* Script to help with annotating tumor regions, separating the tumor margin from the center. | |
* | |
* Here, each of the margin regions is approximately 500 microns in width. | |
* | |
* @author Pete Bankhead | |
*/ | |
import qupath.lib.common.GeneralTools | |
import qupath.lib.objects.PathAnnotationObject | |
import qupath.lib.objects.PathObject | |
import qupath.lib.roi.PathROIToolsAwt | |
import java.awt.Rectangle | |
import java.awt.geom.Area | |
import static qupath.lib.scripting.QPEx.* | |
//----- | |
// Some things you might want to change | |
// How much to expand each region | |
double expandMarginMicrons = 500.0 | |
// Define the colors | |
def colorInnerMargin = getColorRGB(0, 0, 200)
def colorOuterMargin = getColorRGB(0, 200, 0) | |
def colorCentral = getColorRGB(0, 0, 0) | |
// Choose whether to lock the annotations or not (it's generally a good idea to avoid accidentally moving them) | |
def lockAnnotations = true | |
//----- | |
// Extract the main info we need | |
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
def server = imageData.getServer() | |
// We need the pixel size | |
if (!server.hasPixelSizeMicrons()) { | |
print 'We need the pixel size information here!' | |
return | |
} | |
if (!GeneralTools.almostTheSame(server.getPixelWidthMicrons(), server.getPixelHeightMicrons(), 0.0001)) { | |
print 'Warning! The pixel width & height are different; the average of both will be used' | |
} | |
// Get annotation & detections | |
def annotations = getAnnotationObjects() | |
def selected = getSelectedObject() | |
if (selected == null || !selected.isAnnotation()) { | |
print 'Please select an annotation object!' | |
return | |
} | |
// We need one selected annotation as a starting point; if we have other annotations, they will constrain the output | |
annotations.remove(selected) | |
// If we have at most one other annotation, it represents the tissue | |
Area areaTissue | |
PathObject tissueAnnotation | |
if (annotations.isEmpty()) { | |
areaTissue = new Area(new Rectangle(0, 0, server.getWidth(), server.getHeight())) | |
} else if (annotations.size() == 1) { | |
tissueAnnotation = annotations.get(0) | |
areaTissue = PathROIToolsAwt.getArea(tissueAnnotation.getROI()) | |
} else { | |
    print 'Sorry, this script only supports one selected annotation for the tumor region, and at most one other annotation to constrain the expansion' | |
return | |
} | |
// Calculate how much to expand | |
double expandPixels = expandMarginMicrons / server.getAveragedPixelSizeMicrons() | |
def roiOriginal = selected.getROI() | |
def areaTumor = PathROIToolsAwt.getArea(roiOriginal) | |
// Get the outer margin area | |
def areaOuter = PathROIToolsAwt.shapeMorphology(areaTumor, expandPixels) | |
areaOuter.subtract(areaTumor) | |
areaOuter.intersect(areaTissue) | |
def roiOuter = PathROIToolsAwt.getShapeROI(areaOuter, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT()) | |
def annotationOuter = new PathAnnotationObject(roiOuter) | |
annotationOuter.setName("Outer margin") | |
annotationOuter.setColorRGB(colorOuterMargin) | |
// Get the central area | |
def areaCentral = PathROIToolsAwt.shapeMorphology(areaTumor, -expandPixels) | |
areaCentral.intersect(areaTissue) | |
def roiCentral = PathROIToolsAwt.getShapeROI(areaCentral, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT()) | |
def annotationCentral = new PathAnnotationObject(roiCentral) | |
annotationCentral.setName("Center") | |
annotationCentral.setColorRGB(colorCentral) | |
// Get the inner margin area | |
areaInner = areaTumor | |
areaInner.subtract(areaCentral) | |
areaInner.intersect(areaTissue) | |
def roiInner = PathROIToolsAwt.getShapeROI(areaInner, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT()) | |
def annotationInner = new PathAnnotationObject(roiInner) | |
annotationInner.setName("Inner margin") | |
annotationInner.setColorRGB(colorInnerMargin) | |
// Add the annotations | |
hierarchy.getSelectionModel().clearSelection() | |
hierarchy.removeObject(selected, true) | |
def annotationsToAdd = [annotationOuter, annotationInner, annotationCentral]; | |
annotationsToAdd.each {it.setLocked(lockAnnotations)} | |
if (tissueAnnotation == null) { | |
hierarchy.addPathObjects(annotationsToAdd, false) | |
} else { | |
tissueAnnotation.addPathObjects(annotationsToAdd) | |
hierarchy.fireHierarchyChangedEvent(this, tissueAnnotation) | |
if (lockAnnotations) | |
tissueAnnotation.setLocked(true) | |
} |
/** | |
0.2.0 | |
* Script to help with annotating tumor regions, chopping increasing chunks into the tumor. | |
* SEE THREAD HERE FOR DESCRIPTION ON USE: https://forum.image.sc/t/reduce-annotations/24305/12?u=research_associate | |
 * Here, each of the margin regions is approximately 20 microns in width (set by expandMarginMicrons below). | |
Recommended BEFORE running the script if your tumor is on the tissue border: | |
1. Do have a simple tissue detection made to limit the borders. | |
2. Make an inverse annotation of the tissue, delete the original tissue annotation, and then use CTRL+SHIFT with the brush tool to draw the tumor border so that it touches the edge of the simple tissue detection. The wand tool can be used for the tumor interior, but any stray pixel of whitespace it misses along the tumor border will lead to an unwanted expansion, so brush those areas away. | |
3. Invert the inverted tissue detection to recover the original tissue annotation, then delete the inverted annotation. You now have a tumor annotation that sits right up against the tissue border. | |
4. Select the tumor and run the script (a programmatic sketch of the tissue-clipping in steps 2-3 appears after this script) | |
* @author Pete Bankhead | |
* @mangled by Svidro because reasons | |
*/ | |
//----- | |
// Some things you might want to change | |
// How much to expand each region | |
double expandMarginMicrons = 20.0 | |
// How many times you want to chop into your annotation. If you go over 5, adjust the getColorRGB() formula in the expansion loop below (around line 115 of the original script) | |
int howManyTimes = 2 | |
// Define the colors | |
// Inner layers are given scripted colors, but greater than 6 or 7 layers may require adjustments | |
def colorOuterMargin = getColorRGB(0, 200, 0) | |
import org.locationtech.jts.geom.Geometry | |
import qupath.lib.common.GeneralTools | |
import qupath.lib.objects.PathObject | |
import qupath.lib.objects.PathObjects | |
import qupath.lib.roi.GeometryTools | |
import qupath.lib.roi.ROIs | |
import java.awt.Rectangle | |
import java.awt.geom.Area | |
// Extract the main info we need | |
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
def server = imageData.getServer() | |
// We need the pixel size | |
def cal = server.getPixelCalibration() | |
if (!cal.hasPixelSizeMicrons()) { | |
print 'We need the pixel size information here!' | |
return | |
} | |
// Choose whether to lock the annotations or not (it's generally a good idea to avoid accidentally moving them) | |
def lockAnnotations = true | |
//----- | |
if (!GeneralTools.almostTheSame(cal.getPixelWidthMicrons(), cal.getPixelHeightMicrons(), 0.0001)) { | |
print 'Warning! The pixel width & height are different; the average of both will be used' | |
} | |
// Get annotation & detections | |
def annotations = getAnnotationObjects() | |
def selected = getSelectedObject() | |
if (selected == null || !selected.isAnnotation()) { | |
print 'Please select an annotation object!' | |
return | |
} | |
// We need one selected annotation as a starting point; if we have other annotations, they will constrain the output | |
annotations.remove(selected) | |
// If we have at most one other annotation, it represents the tissue | |
Geometry areaTissue | |
PathObject tissueAnnotation | |
// Calculate how much to expand | |
double expandPixels = expandMarginMicrons / cal.getAveragedPixelSizeMicrons() | |
def roiOriginal = selected.getROI() | |
def plane = roiOriginal.getImagePlane() | |
def areaTumor = roiOriginal.getGeometry() | |
if (annotations.isEmpty()) { | |
areaTissue = ROIs.createRectangleROI(0, 0, server.getWidth(), server.getHeight(), plane).getGeometry() | |
} else if (annotations.size() == 1) { | |
tissueAnnotation = annotations.get(0) | |
areaTissue = tissueAnnotation.getROI().getGeometry() | |
} else { | |
    print 'Sorry, this script only supports one selected annotation for the tumor region, and at most one other annotation to constrain the expansion' | |
return | |
} | |
println("Working, give it some time") | |
// Get the outer margin area | |
def geomOuter = areaTumor.buffer(expandPixels) | |
geomOuter = geomOuter.difference(areaTumor) | |
geomOuter = geomOuter.intersection(areaTissue) | |
def roiOuter = GeometryTools.geometryToROI(geomOuter, plane) | |
def annotationOuter = PathObjects.createAnnotationObject(roiOuter) | |
annotationOuter.setName("Outer margin") | |
annotationOuter.setColorRGB(colorOuterMargin) | |
innerAnnotations = [] | |
innerAnnotations << annotationOuter | |
//innerAnnotations << selected | |
for (i=0; i<howManyTimes;i++){ | |
//select the current expansion, which the first time is outside of the tumor, then expand it and intersect it | |
currentArea = innerAnnotations[innerAnnotations.size()-1].getROI().getGeometry() | |
println(currentArea) | |
/* | |
if (getQuPath().getBuildString().split()[1]<"0.2.0-m2"){ | |
areaExpansion = PathROIToolsAwt.shapeMorphology(currentArea, expandPixels) | |
}else {areaExpansion = PathROIToolsAwt.getArea(PathROIToolsAwt.roiMorphology(innerAnnotations[innerAnnotations.size()-1].getROI(), expandPixels))} | |
*/ | |
areaExpansion = currentArea.buffer(expandPixels) | |
areaExpansion = areaExpansion.intersection(areaTumor) | |
//println(areaExpansion) | |
areaExpansion = areaExpansion.intersection(areaTissue) | |
//println(areaExpansion) | |
//remove outer areas previously defined as other innerAnnotations | |
if(i>=1){ | |
for (k=1; k<=i;k++){ | |
remove = innerAnnotations[innerAnnotations.size()-k].getROI().getGeometry() | |
areaExpansion = areaExpansion.difference(remove) | |
} | |
} | |
roiExpansion = GeometryTools.geometryToROI(areaExpansion, plane) | |
j = i+1 | |
annotationExpansion = PathObjects.createAnnotationObject(roiExpansion) | |
int nameValue = j*expandMarginMicrons | |
annotationExpansion.setName("Inner margin "+nameValue+" microns") | |
annotationExpansion.setColorRGB(getColorRGB(20*i, 40*i, 200-30*i)) | |
innerAnnotations << annotationExpansion | |
//println("innerannotations size "+innerAnnotations.size()) | |
} | |
//add one last inner annotation that contains the rest of the tumor | |
core = areaTumor | |
for (i=1; i<=howManyTimes;i++){ | |
core = core.difference(innerAnnotations[i].getROI().getGeometry()) | |
} | |
coreROI = GeometryTools.geometryToROI(core, plane) | |
coreAnno = PathObjects.createAnnotationObject(coreROI) | |
coreAnno.setName("Remaining Tumor") | |
innerAnnotations << coreAnno | |
// Add the annotations | |
hierarchy.getSelectionModel().clearSelection() | |
//hierarchy.removeObject(selected, true) | |
def annotationsToAdd = innerAnnotations; | |
annotationsToAdd.each {it.setLocked(lockAnnotations)} | |
if (tissueAnnotation == null) { | |
hierarchy.addPathObjects(annotationsToAdd, false) | |
} else { | |
tissueAnnotation.addPathObjects(annotationsToAdd) | |
hierarchy.fireHierarchyChangedEvent(this, tissueAnnotation) | |
if (lockAnnotations) | |
tissueAnnotation.setLocked(true) | |
} | |
println("Done! Wheeeee!") |
//This script forces the annotations to detect whether cells are inside of it | |
//Useful when pasting an annotation onto a set of detections | |
//Added warning, this may appear to freeze the program if a lot of detections are being updated. Be patient. | |
//0.1.2 0.2.0 | |
import qupath.lib.roi.* | |
import qupath.lib.objects.* | |
selected = getSelectedObjects() | |
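//Remember each selected annotation's ROI and classification so fresh copies can be added back below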
def rois = [:] | |
for (object in selected){ | |
rois.put(object.getROI(),object.getPathClass()) | |
} | |
removeObjects(selected, true) | |
rois.each{r,c-> addObject(new PathAnnotationObject(r, c))} | |
//Locking is not absolutely necessary, but it is good practice so you don't accidentally "jiggle" your updated annotations | |
for (annotation in getAnnotationObjects()) | |
annotation.setLocked(true) | |
Thank you for your help
Hi,
I want to change some parameters in the runPlugin() call of the cell detection Groovy script and pass those changes in from Python. How can I modify that specific line of the Groovy script from Python?
The tileExporter script you have a picture of exports tiles. Have you followed the steps to create a pixel classifier with your annotations?
I'd also recommend never running your pixel classifier on the training image, as you will not be able to retrain it later. Duplicate the image or create a new copy of the project before deleting all of your training objects.
https://qupath.readthedocs.io/en/stable/docs/tutorials/pixel_classification.html
This is also a pixel classifier, not an object classifier. I bring that up since some of your "None"-classified annotations include tissue, which will confuse the pixel classifier. Also, as written, you will be creating a lot of "None" objects, which is usually not wanted. Better not to create the "None" objects at all, as described in the documentation link above.
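For anyone following the pixel classifier route above, a minimal sketch of applying an already-trained and saved project classifier from a script (QuPath 0.2+). The classifier name and the minimum object/hole areas below are placeholders, not values from this thread:
// Create annotation objects from a pixel classifier saved in the current project
// "MyClassifier" is a hypothetical name; 1000 and 250 are example minimum object / hole areas in calibrated units
createAnnotationsFromPixelClassifier("MyClassifier", 1000.0, 250.0)
Per the advice above, run this on a duplicate of the training image (or a copy of the project) so the original training annotations stay available for retraining.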