Overview
External Analyses are an experimental feature in Slide Score. Please note that they are under active development and breaking changes may be introduced at any time.
In the back-end, External Analyses are currently called "webhooks".
Key features of external analyses are the following:
- Allow easy integration with custom 3rd party tools, for example for running a Machine Learning model on a user-specified Region Of Interest on the slide.
- Give non-technical users the ability to run external analyses without needing to interact with command-line tools, including specifying custom parameters in the UI.
- Provide a low-friction way to develop and test new functionality using a simple HTTP call.
An external analysis endpoint can be written in any language; its only requirement is being able to respond to HTTP requests made by the Slide Score server. Since the external analysis will most likely interact with the Slide Score instance, it is recommended to use Python along with the slidescore SDK. If you use C#, you can also use this example client in C#, which includes the SlideScoreClient.cs class that will help you make the API calls.
In order to configure and run an external analysis, the following steps need to be followed:
- An external analysis server needs to be running at a location that can be reached by the Slide Score server.
- The external analysis needs to be configured via the Site administrator menu, along with any needed questions/parameters.
- The analysis needs to be enabled by a Study Admin for a specific study.
- A trigger needs to be sent from the slide viewing page, where the user is asked any configured questions.
  - If no slide-specific questions are configured, the external analysis can also be triggered for multiple slides in the Edit Study page by a Study admin.
- The response from the external analysis is shown to the user.
In order to get familiar with these steps, it is recommended to follow the example given below.
Configuration
To get started with Slide Score external analyses we have provided an example in the python slidescore-sdk: examples/webhook_slide_analysis.py
At the time of writing it contains the following code:
DESC = """
Example Slide Score external analysis (webhook) endpoint: finds dark regions in a
user-selected Region Of Interest by thresholding and returns them as polygons,
points, anno2 annotations, a heatmap and a text summary.
Date: 24-5-2024
Author: Bart Grosman & Jan Hudecek (SlideScore B.V.)
"""
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import argparse
import tempfile
import traceback
import time

import slidescore
import cv2  # $ pip install opencv-python
import numpy as np  # $ pip install numpy


def create_tmp_file(content: str, suffix='.tmp'):
    """Creates a temporary file, used for intermediate files"""
    fd, name = tempfile.mkstemp(suffix)
    if content:
        with open(fd, 'w') as fh:
            fh.write(content)
    return name


def convert_2_anno2_uuid(items, client, metadata=''):
    # Convert to anno2 zip, upload, and return uploaded anno2 uuid
    local_anno2_path = create_tmp_file('', '.zip')
    client.convert_to_anno2(items, metadata, local_anno2_path)
    response = client.perform_request("CreateOrphanAnno2", {}, method="POST").json()
    assert response["success"] is True
    client.upload_using_token(local_anno2_path, response["uploadToken"])
    return response["annoUUID"]


def convert_polygons_2_centroids(polygons):
    centroids = []
    for polygon in polygons:
        sum_x = 0
        sum_y = 0
        for point in polygon['points']:
            sum_x += point['x']
            sum_y += point['y']
        centroids.append({
            "x": sum_x / len(polygon['points']),
            "y": sum_y / len(polygon['points']),
        })
    return centroids


def convert_points_2_heatmap(points, size_per_pixel=64):
    """Creates an anno1 heatmap object from a set of points, size_per_pixel is in image pixels per heatmap "pixel" """
    # Figure out the size of the heatmap
    min_x, max_x = float('inf'), float('-inf')
    min_y, max_y = float('inf'), float('-inf')
    for point in points:
        min_x, max_x = min(min_x, point['x']), max(max_x, point['x'])
        min_y, max_y = min(min_y, point['y']), max(max_y, point['y'])
    # Fill the heatmap data with empty rows
    num_columns = int((max_x - min_x) // size_per_pixel + 1)
    num_rows = int((max_y - min_y) // size_per_pixel + 1)
    heatmap_data = [[0] * num_columns for row_i in range(num_rows)]
    # Populate the heatmap with the points data
    max_heatmap_val = 1
    for point in points:
        heatmap_x = int((point['x'] - min_x) // size_per_pixel)
        heatmap_y = int((point['y'] - min_y) // size_per_pixel)
        heatmap_data[heatmap_y][heatmap_x] += 1
        max_heatmap_val = max(max_heatmap_val, heatmap_data[heatmap_y][heatmap_x])
    # Remap heatmap data to be between 0 and 255
    for heatmap_y in range(num_rows):
        for heatmap_x in range(num_columns):
            heatmap_data[heatmap_y][heatmap_x] = round((heatmap_data[heatmap_y][heatmap_x] / max_heatmap_val) * 255)
    # Return full object
    heatmap = {
        "x": min_x,
        "y": min_y,
        "height": max_y - min_y,
        "data": heatmap_data,
        "type": "heatmap"
    }
    return heatmap


def convert_contours_2_polygons(contours, cur_img_dims, roi):
    """Converts OpenCV2 contours to AnnoShape Polygons format of SlideScore
    Also needs the original img width and height to properly map the coordinates"""
    x_factor = roi["size"]["x"] / cur_img_dims[0]
    y_factor = roi["size"]["y"] / cur_img_dims[1]
    x_offset = roi["corner"]["x"]
    y_offset = roi["corner"]["y"]
    polygons = []
    for contour in contours:
        points = []
        for point in contour:
            # The contours are based on a scaled down version of the image
            # so translate these coordinates to coordinates of the original image
            orig_x, orig_y = int(point[0][0]), int(point[0][1])
            points.append({"x": x_offset + int(x_factor * orig_x), "y": y_offset + int(y_factor * orig_y)})
        polygon = {
            "type": "polygon",
            "points": points
        }
        polygons.append(polygon)
    return polygons


def threshold_image(client, image_id: int, rois: list):
    # Extract pixel information by making a "screenshot" of each region of interest
    polygons = []
    for roi in rois:
        if roi["corner"]["x"] is None or roi["corner"]["y"] is None:
            continue  # Basic validation
        image_response = client.perform_request("GetScreenshot", {
            "imageid": image_id,
            "x": roi["corner"]["x"],
            "y": roi["corner"]["y"],
            "width": roi["size"]["x"],
            "height": roi["size"]["y"],
            "level": 15,
            "showScalebar": "false"
        }, method="GET")
        jpeg_bytes = image_response.content
        print("Retrieved image from server, performing analysis using OpenCV")

        # Parse the returned JPEG using OpenCV, and extract the contours from it.
        threshold = 220
        jpeg_as_np = np.frombuffer(jpeg_bytes, dtype=np.uint8)
        img = cv2.imdecode(jpeg_as_np, flags=1)
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, thresh = cv2.threshold(img_gray, threshold, 255, 0)
        contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        print("Performed local image analysis")

        # Convert OpenCV2 contours to AnnoShape Polygons format of SlideScore
        cur_img_dims = (img.shape[1], img.shape[0])
        roi_polygons = convert_contours_2_polygons(contours, cur_img_dims, roi)
        polygons += roi_polygons
        print("Converted image analysis results to SlideScore annotation")
    # AnnoShape polygons
    return polygons


def get_rois(answers: list):
    roi_json = next((answer["value"] for answer in answers if answer["name"] == "ROI"), None)
    if roi_json is None:
        raise Exception("Failed to find the ROI answer")
    rois = json.loads(roi_json)
    if len(rois) == 0:
        raise Exception("No ROI given")
    return rois


class ExampleAPIServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(bytes("Hello world", "utf-8"))

    def do_POST(self):
        content_len = int(self.headers.get('Content-Length', 0))
        if content_len < 10 or content_len > 4096:
            self.send_response(400)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(bytes("Invalid request", "utf-8"))
            return
        post_body = ""
        try:
            post_body = self.rfile.read(content_len).decode()
            request = json.loads(post_body)
            time_got_request = time.time()
            """
            default http post payload:
            "host": "${document.location.origin}",
            "studyid": %STUDY_ID%,
            "imageid": %IMAGE_ID%,
            "imagename": "%IMAGE_NAME%",
            "caseid": %CASE_ID%,
            "casename": "%CASE_NAME%",
            "email": "%USER_EMAIL%",
            "analysisid": %ANALYSIS_ID%,
            "analysisname": "%ANALYSIS_NAME%",
            "answers": %ANSWERS%,
            "apitoken": "%API_TOKEN%"
            """
            host = request["host"]
            study_id = int(request["studyid"])
            image_id = int(request["imageid"])
            imagename = request["imagename"]
            case_id = int(request["caseid"])
            email = request["email"]
            analysis_id = int(request["analysisid"])
            analysis_name = request["analysisname"]
            case_name = request["casename"]
            answers = request["answers"]  # Answers to the questions field, needs to be validated to contain the expected vals
            apitoken = request["apitoken"]  # Api token that is generated on the fly for this request

            rois = get_rois(answers)  # Get Regions Of Interest
            client = slidescore.APIClient(host, apitoken)
            result_polygons = threshold_image(client, image_id, rois)
            # [{type: "polygon", points: [{x: 1, y: 1}, ...]}]

            request['apitoken'] = "HIDDEN"
            print('Successfully contoured image', request)
            self.send_response(200)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            # Return a JSON array of results: the polygons surrounding the dark parts of the ROI,
            # their centroids, anno2 versions of both, a heatmap and a text description.
            points = convert_polygons_2_centroids(result_polygons)
            # Convert centroids to a heatmap
            heatmap = convert_points_2_heatmap(points)
            self.wfile.write(bytes(json.dumps([{
                "type": "polygons",
                "name": "Dark parts",
                "value": result_polygons,
                "color": "#0000FF"
            }, {
                "type": "points",
                "name": "Dark parts centroids",
                "value": points,
                "color": "#00FFFF"
            }, {
                "type": "anno2",
                "name": "anno2 dark polygons",
                "value": convert_2_anno2_uuid(result_polygons, client, metadata='{ "comment": "dark polygons"}'),
                "color": "#00FF00"
            }, {
                "type": "anno2",
                "name": "anno2 dark points",
                "value": convert_2_anno2_uuid(points, client, metadata='{ "comment": "dark points"}'),
                "color": "#FFFF00"
            }, {
                "type": "anno2",
                "name": "anno2 heatmap",
                "value": convert_2_anno2_uuid([heatmap], client, metadata='{ "comment": "heatmap of dark points"}'),
                "color": "Turbo"
            }, {
                "type": "text",
                "name": "Description of results",
                "value": f'These results took {(time.time() - time_got_request):.2f} s to generate'
            }]), "utf-8"))
            # Give up token, cannot be used after this request
            client.perform_request("GiveUpToken", {}, "POST")
        except Exception as e:
            print("Caught exception:", e)
            print(traceback.format_exc())
            print(post_body)
            self.send_response(500)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(bytes("Unknown error: " + str(e), "utf-8"))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        prog='SlideScore example external analysis endpoint',
        description=DESC)
    parser.add_argument('--host', type=str, default='localhost', help='HOST to listen on')
    parser.add_argument('--port', type=int, default=8000, help='PORT to listen on')
    args = parser.parse_args()

    webServer = HTTPServer((args.host, args.port), ExampleAPIServer)
    print(f"Server started http://{args.host}:{args.port}, configure your slidescore instance with a default analysis endpoint pointing to this host.")
    try:
        webServer.serve_forever()
    except KeyboardInterrupt:
        pass
    webServer.server_close()
    print("Server stopped.")
You can view all other examples in the webhooks branch of the GitHub repository of our Python SDK.
Example code explained
This example starts by running an HTTP handler and waiting for POST requests. Once it receives a POST request, presumably because a user triggered the external analysis, it retrieves the parameters and does basic input validation.
Then the Region Of Interest specified by the user is downloaded using the Slide Score API. It continues by finding dark regions in the downloaded image using the OpenCV Python library and thresholding.
Finally, it converts the output of the OpenCV library to all 4 formats that can be returned.
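If you want to exercise the endpoint without going through the Slide Score UI, you can POST a payload to it directly. The sketch below assumes the example is listening on http://localhost:8000; the host, image id, API token and ROI values are placeholders, and the call back to GetScreenshot will only succeed if you fill in a real Slide Score host and token.

import json
import urllib.request

# Hypothetical values; Slide Score fills these in when a user triggers the analysis
roi = {"corner": {"x": 10000, "y": 12000}, "size": {"x": 2048, "y": 2048}}
payload = {
    "host": "https://your-slidescore-server.example",
    "studyid": 1,
    "imageid": 12345,
    "imagename": "example-slide",
    "caseid": 1,
    "casename": "example-case",
    "email": "user@example.com",
    "analysisid": 1,
    "analysisname": "Threshold example",
    "answers": [{"name": "ROI", "value": json.dumps([roi])}],
    "apitoken": "PASTE-A-VALID-API-TOKEN",
}
req = urllib.request.Request(
    "http://localhost:8000",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as response:
    print(response.read().decode())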
Adding the endpoint to Slide Score
In addition to running the above Python code on a server that the Slide Score server can reach, the endpoint needs to be configured in the Slide Score UI.
Start by navigating to the Site Administrators page, and selecting the "External Analyses" page.
Then click the Add external analysis + button, give your external analysis a name and description, and set the URL to the location of the server running the example Python code, e.g. http://localhost:8000 if you are running it on the same server. Please use HTTPS if the traffic needs to go over the internet.
For the questions string and the HTTP POST body, click Load example to load an example questions sheet for the external analysis parameters and a complete HTTP POST body with all the needed parameters.
Finally press the Save button to save the External Analysis endpoint configuration in the database.
In order to use the analysis, please create a new test study and navigate to the Study Administration page. Select the External Analyses tab, and enable your newly created External Analysis. You should now be able to test it from any image in the Study.
Trigger the external analysis
Please add a slide image to the study and navigate to its viewing page. If the external analysis was successfully configured in the study, a Run external analysis button should be visible in the left sidebar. Press it and observe the external analysis pop-up.
To actually trigger the analysis, select the Region of Interest using the Start button. If you are satisfied with your selection, press the Done and Ready to send buttons, and finally the Start Analysis button in the external analysis pop-up.
A new element called External analysis results should now be shown below the Run external analysis button. Several options are available there, such as disabling individual results of the external analysis or changing their transparency.
Now you can press any of the resulting annotations to view them and see the results of the external analysis.
Some external analyses do not require user-selected regions on the slide image, for example out-of-focus detection. These analyses can be triggered for one or multiple slides in the Edit Study page by a Study admin. A separate request is sent to the analysis endpoint for each selected slide, so the endpoint code can remain agnostic of this batch mode. Upon completion of each analysis, an email with its status (completed, failed, etc.) is sent to the Study administrator that requested it. Since these analyses are performed in the background, the HTTP timeout is currently configured at 3 hours. It is strongly recommended to use an asynchronous endpoint for this use case.
Troubleshooting
If the example code fails to run, the user is shown the generated error in the External analysis results element. The external analysis log in the left sidebar can give additional hints as to the reason of the failure.
If you suspect a bug or have trouble setting up the example, just send us an email and we would be glad to help.
Additional settings
In order to tailor the usage of External Analyses to your purpose, multiple settings are exposed in the Study Administration page. Currently the following options are available:
- Share External Analysis results among study participants
- Show the most recent External Analysis to the user on opening the slide
- Allow pathologists to run enabled external analyses
  - If this setting is enabled, only Study administrators are able to trigger External Analyses through the Slides tab.
Response
The HTTP timeout for external analyses triggered in the slide page is currently configured for 10 minutes.
If you would like to show a visual response to the user on triggering an external analysis, a JSON array is expected as a response, containing one or more of 4 response types: polygons, points, the more versatile anno2, or simply text. The general format of these results can be surmised from the example external analysis endpoint code, but is further specified below:
[
  {
    "type": "polygons",
    "value": [{
      "type": "polygon",
      "points": [{"x": 1, "y": 1}, {"x": 100, "y": 1}, {"x": 100, "y": 100}, {"x": 1, "y": 1}]
    }],
    "name": "Polygon result",
    "color": "#FFFF00"
  },
  {
    "type": "points",
    "value": [{"x": 1, "y": 1}, {"x": 100, "y": 1}, {"x": 100, "y": 100}, {"x": 1, "y": 1}],
    "name": "Points result"
  },
  {
    "type": "anno2",
    "value": "a72a2644-37b9-4bb7-b69a-...",
    "name": "Anno2 result"
  },
  {
    "type": "text",
    "name": "Description of results",
    "value": "This text is shown to the user"
  }
]
Anno2
The anno2 format is better suited if:
- Better performance is needed
- You need to show a heatmap or mask
- Caching of results on the SlideScore server is wanted
If you would like to use the anno2 response option, the anno2 zip needs to have been uploaded before the external analysis response is returned. This can be done using the CreateOrphanAnno2 API method, as is done in the example.
See more docs on the new annotation format.
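For reference, the upload flow from the example's convert_2_anno2_uuid helper, condensed into a standalone sketch. It assumes client is a slidescore.APIClient created from the request's host and %API_TOKEN%, and items is a list of anno1 shapes (polygons, points, a heatmap, ...).

import os
import tempfile

def upload_as_anno2(client, items, metadata='{"comment": "analysis result"}'):
    # Convert the anno1 items to an anno2 zip in a temporary file
    fd, zip_path = tempfile.mkstemp(suffix=".zip")
    os.close(fd)
    client.convert_to_anno2(items, metadata, zip_path)
    # Create an orphan anno2 on the server and upload the zip into it
    response = client.perform_request("CreateOrphanAnno2", {}, method="POST").json()
    client.upload_using_token(zip_path, response["uploadToken"])
    # The returned UUID is what goes into the "value" of an anno2 result
    return response["annoUUID"]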
Synchronous and Asynchronous endpoints
There are two types of external analysis endpoints possible:
- Synchronous endpoint
- Asynchronous endpoint
Synchronous (simple)
The synchronous endpoint is simpler and is suitable for shorter-running tasks (< 1 minute). The working principle is as follows: the Slide Score server sends a single HTTP POST request with the user parameters as body to the endpoint URL. Upon receiving this POST request, the endpoint server processes the request and returns its response in the HTTP body. A minimal sketch is given below.
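The following sketch of a minimal synchronous endpoint assumes the default POST body template is used and only returns a text result; the class name MinimalSyncEndpoint is ours, not part of the SDK.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MinimalSyncEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POST body that Slide Score sends when the analysis is triggered
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        request = json.loads(body)
        # Build the response in the format described in the Response section
        results = [{
            "type": "text",
            "name": "Greeting",
            "value": f"Analysis '{request['analysisname']}' ran on image {request['imageid']}"
        }]
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(json.dumps(results).encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), MinimalSyncEndpoint).serve_forever()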
Asynchronous
For longer-running tasks an asynchronous endpoint provides a more robust interface. The task is still started with a single HTTP POST with the user parameters as body, but the endpoint is expected to immediately return a JSON object with at least an id property of type string (e.g. e1798a9e). This id is an identifier for this specific run. Following this, the Slide Score server will check the status of this run by performing an HTTP GET request to a separate "status" endpoint (e.g. https://your-analysis-server.com/status/), appending the id to the URL (https://your-analysis-server.com/status/e1798a9e). This is done every 5 seconds for up to 10 minutes (3 hours in the case of a background job). The analysis server must then respond with a JSON object containing at least a status property of type string. This status is shown to the user in the UI. Once the status becomes finished, the HTTP response must also include a property named output, which can contain a JSON array of results in the format specified in the Response section. A sketch of such an endpoint is given below.
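The sketch below keeps runs in an in-memory dictionary and performs the work on a background thread; run_analysis is a placeholder for your actual analysis, and a production endpoint would want to persist run state rather than keep it in memory.

import json
import threading
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

RUNS = {}  # run id -> {"status": ..., "output": ...}

def run_analysis(request):
    # Placeholder for the actual long-running analysis
    return [{"type": "text", "name": "Result", "value": "done"}]

def execute_run(run_id, request):
    try:
        RUNS[run_id]["output"] = run_analysis(request)
        RUNS[run_id]["status"] = "finished"
    except Exception as e:
        RUNS[run_id]["status"] = "failed: " + str(e)

class AsyncEndpoint(BaseHTTPRequestHandler):
    def _reply(self, obj):
        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(obj).encode("utf-8"))

    def do_POST(self):
        # Start the run in the background and immediately return its id
        request = json.loads(self.rfile.read(int(self.headers.get("Content-Length", 0))))
        run_id = uuid.uuid4().hex[:8]
        RUNS[run_id] = {"status": "running", "output": None}
        threading.Thread(target=execute_run, args=(run_id, request), daemon=True).start()
        self._reply({"id": run_id})

    def do_GET(self):
        # Slide Score polls GET <status endpoint>/<id> every 5 seconds
        run_id = self.path.rstrip("/").rsplit("/", 1)[-1]
        run = RUNS.get(run_id)
        if run is None:
            self._reply({"status": "unknown run id"})
        elif run["status"] == "finished":
            self._reply({"status": "finished", "output": run["output"]})
        else:
            self._reply({"status": run["status"]})

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AsyncEndpoint).serve_forever()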
Parameters
When configuring an external analysis, an HTTP POST body can be specified that will be sent to the external analysis endpoint. It can contain the following parameters:
| Name | Type | Explanation |
|---|---|---|
| %STUDY_ID% | int | Numerical identifier of the study on which the analysis was triggered |
| %IMAGE_ID% | int | Numerical identifier of the image |
| %IMAGE_NAME% | string | Name of the image |
| %CASE_ID% | int | Numerical identifier of the case |
| %CASE_NAME% | string | Name of the case |
| %USER_EMAIL% | string | Email of the user that triggered the analysis |
| %ANALYSIS_ID% | int | Numerical identifier of the analysis |
| %ANALYSIS_NAME% | string | Name of the analysis |
| %ANSWERS% | array | JSON array of the answers to the questions of the questions_str |
| %API_TOKEN% | string | On-the-fly generated API_TOKEN that is valid (3 hours) for the study in %STUDY_ID%, including getting pixels and setting scores. Should be given up when the analysis is done |
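As an illustration, an endpoint typically parses these parameters from the POST body, builds an SDK client from %API_TOKEN%, and gives the token up again when it is done. The sketch below condenses what the example does; handle_request and run_my_analysis are hypothetical names, not part of the SDK.

import json
import slidescore

def run_my_analysis(client, study_id, image_id, answers):
    # Placeholder for the actual analysis
    return [{"type": "text", "name": "Result", "value": f"Analysed image {image_id}"}]

def handle_request(post_body: str):
    request = json.loads(post_body)
    study_id = int(request["studyid"])
    image_id = int(request["imageid"])
    answers = request["answers"]  # answers to the configured questions
    client = slidescore.APIClient(request["host"], request["apitoken"])
    try:
        results = run_my_analysis(client, study_id, image_id, answers)
    finally:
        # The on-the-fly token should be given up once the analysis is done
        client.perform_request("GiveUpToken", {}, "POST")
    return results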
Questions
In order to pass certain parameters to the analysis, a questions form can be specified that the user will be presented with when triggering an external analysis. The questions can, for example, include a description of the analysis, a region of interest, or a selection of the model to be used.
These questions use the same format as used for scoring elsewhere in Slide Score and can be configured by the Site administrator when configuring the external analysis.
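A configured question's answer can be read from the %ANSWERS% array in the same way the example reads the ROI answer. The sketch below assumes the same name/value structure and uses a hypothetical extra question named "Model".

import json

def get_answer(answers, name, default=None):
    # The answers array contains one {"name": ..., "value": ...} entry per question
    return next((a["value"] for a in answers if a["name"] == name), default)

# Example of what the "answers" field of the POST body could look like
answers = [
    {"name": "ROI", "value": '[{"corner": {"x": 100, "y": 200}, "size": {"x": 512, "y": 512}}]'},
    {"name": "Model", "value": "tumor-segmentation-v2"}
]
rois = json.loads(get_answer(answers, "ROI", "[]"))
model = get_answer(answers, "Model", "default-model")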