JTransient is a Java library for transient extraction and moving-object track linking on aligned monochrome astronomical image sequences. It works on `short[][]` pixel matrices and can be used in three ways:

- run the full pipeline with `JTransientEngine`
- stop after transient extraction with `JTransientEngine.detectTransients(...)`
- use `SourceExtractor.extractSources(...)` directly on a single frame

It is the core detection engine powering SpacePixels.
- `JTransientAutoTuner.tune(...)`: derives a cleaner `DetectionConfig` from a representative subset of frames
- `JTransientEngine.runPipeline(...)`: full extraction, quality filtering, master-stack masking, slow-mover detection, and track linking
- `JTransientEngine.detectTransients(...)`: same early pipeline, but stops after the stationary-star veto and returns per-frame transients
- `JTransientEngine.generateMasterStack(...)`: precomputes a reusable median master stack
- `SourceExtractor.extractSources(...)`: standalone single-frame object extraction
- PIPELINE.md: what each public entrypoint runs and returns
- ALGORITHM.md: internal phases of `JTransientEngine.runPipeline(...)`
- CONFIG.md: `DetectionConfig` field-by-field reference
- AUTOTUNER.md: detailed walkthrough of `JTransientAutoTuner.tune(...)`
- PUBLISHING.md: Maven Central staging and release bundle workflow
This repository is a Gradle Java library project:
```
.\gradlew.bat build
```

The project name is JTransient and the current library version in build.gradle is 1.0.0.
To prepare a Maven Central release bundle locally:
```
.\gradlew.bat mavenCentralBundle
```

See PUBLISHING.md for the required signing and Portal setup.
All engine entrypoints operate on ImageFrame objects:
```java
ImageFrame frame = new ImageFrame(
    sequenceIndex,
    "frame_001.fit",
    pixelData,       // short[][]
    timestampMillis, // use -1 if unavailable
    exposureMillis   // use -1 if unavailable
);
```

Notes:

- frames must all have the same dimensions
- the data should already be aligned/registered to the same pixel grid
- the engine sorts the supplied `List<ImageFrame>` in place by `sequenceIndex` before processing
- time-based linking only activates when timestamps are present
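Because the engine requires uniform frame dimensions, a quick pre-flight check can fail fast before a long run. This is a plain-Java sketch over raw `short[][]` pixel matrices, independent of any JTransient API; the helper name is illustrative:

```java
import java.util.List;

public final class FrameDimensionCheck {

    /** Throws if any pixel matrix differs in size from the first one. */
    static void requireUniformDimensions(List<short[][]> pixelMatrices) {
        if (pixelMatrices.isEmpty()) {
            throw new IllegalArgumentException("No frames supplied.");
        }
        int height = pixelMatrices.get(0).length;
        int width = pixelMatrices.get(0)[0].length;
        for (int i = 1; i < pixelMatrices.size(); i++) {
            short[][] m = pixelMatrices.get(i);
            if (m.length != height || m[0].length != width) {
                throw new IllegalArgumentException(
                    "Frame " + i + " is " + m[0].length + "x" + m.length
                        + ", expected " + width + "x" + height);
            }
        }
    }

    public static void main(String[] args) {
        // Two 6x4 frames pass; a mismatched frame would throw.
        requireUniformDimensions(List.of(new short[4][6], new short[4][6]));
        System.out.println("uniform");
    }
}
```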
The following examples are written as standalone skeletons. Any load...() helper
shown in an example is an application-specific placeholder that you should replace.
JTransientAutoTuner clones the base config, evaluates a representative frame sample, and returns an AutoTunerResult.
```java
import io.github.ppissias.jtransient.config.DetectionConfig;
import io.github.ppissias.jtransient.engine.ImageFrame;
import io.github.ppissias.jtransient.engine.JTransientAutoTuner;

import java.util.List;

public final class AutoTuneExample {

    public static void main(String[] args) {
        List<ImageFrame> frames = loadFrames();
        DetectionConfig baseConfig = new DetectionConfig();

        JTransientAutoTuner.AutoTunerResult tuning = JTransientAutoTuner.tune(
            frames,
            baseConfig,
            JTransientAutoTuner.AutoTuneProfile.BALANCED,
            (percent, message) -> System.out.printf("%3d%% %s%n", percent, message)
        );

        DetectionConfig config = tuning.optimizedConfig;
        System.out.println("Auto-tune success: " + tuning.success);
        System.out.println(tuning.telemetryReport);
        if (tuning.finalValidationTelemetry != null) {
            System.out.println(tuning.finalValidationTelemetry.statusMessage);
        }
    }

    private static List<ImageFrame> loadFrames() {
        throw new UnsupportedOperationException("Replace with your frame-loading code.");
    }
}
```

If you do not need to pick a profile explicitly, use the three-argument overload; it defaults to BALANCED.
This is the main entrypoint. It performs extraction, frame rejection, master-stack generation, optional slow-mover detection, streak linking, time-based linking when timestamps exist, geometric fallback linking, and anomaly rescue.
```java
import io.github.ppissias.jtransient.config.DetectionConfig;
import io.github.ppissias.jtransient.engine.ImageFrame;
import io.github.ppissias.jtransient.engine.JTransientEngine;
import io.github.ppissias.jtransient.engine.PipelineResult;

import java.util.List;

public final class RunPipelineExample {

    public static void main(String[] args) throws Exception {
        List<ImageFrame> frames = loadFrames();
        DetectionConfig config = new DetectionConfig();

        JTransientEngine engine = new JTransientEngine();
        try {
            PipelineResult result = engine.runPipeline(
                frames,
                config,
                (percent, message) -> System.out.printf("%3d%% %s%n", percent, message)
            );

            System.out.println("Tracks found: " + result.tracks.size());
            System.out.println("Anomalies rescued: " + result.anomalies.size());
            System.out.println("Slow mover candidates: " + result.slowMoverAnalysis.candidates.size());
            System.out.println(result.telemetry.generateReport());

            result.tracks.forEach(track -> {
                System.out.println(
                    "Track points=" + track.points.size()
                        + " streak=" + track.isStreakTrack
                        + " suspectedStreak=" + track.isSuspectedStreakTrack
                        + " timeBased=" + track.isTimeBasedTrack
                );
            });
        } finally {
            engine.shutdown();
        }
    }

    private static List<ImageFrame> loadFrames() {
        throw new UnsupportedOperationException("Replace with your frame-loading code.");
    }
}
```

Key `PipelineResult` fields:
- `tracks`: returned `TrackLinker.Track` objects, including confirmed tracks and suspected same-frame streak groupings
- `anomalies`: rescued single-frame anomalies kept separate from normal tracks
- `allTransients`: per-frame export of the full post-veto transient population carried through tracking, including point detections and mobile streak detections
- `unclassifiedTransients`: the true leftover detections that remain after tracks and anomalies are exported
- `residualTransientAnalysis`: post-processing of `unclassifiedTransients` into weak local rescue candidates and broad activity clusters
- `masterStackData`: median master stack used to extract stationary stars
- `maximumStackData`: maximum stack exported for visualization/post-processing
- `masterStars`: stationary objects extracted from the master stack
- `masterVetoMask`: boolean veto mask used to purge stationary stars
- `slowMoverAnalysis`: grouped slow-mover result with per-candidate diagnostics and aggregate stage telemetry
- `slowMoverStackData`, `slowMoverMedianVetoMask`, and `slowMoverCandidates`: legacy slow-mover exports kept temporarily for compatibility
- `driftPoints`: per-frame border-drift diagnostics
- `telemetry`: pipeline and tracker counters, including nested `slowMoverTelemetry`
If you are iterating on parameters or running UI workflows, you can precompute the median master stack once and pass it into the overloads that accept providedMasterStack.
```java
import io.github.ppissias.jtransient.config.DetectionConfig;
import io.github.ppissias.jtransient.engine.FrameTransients;
import io.github.ppissias.jtransient.engine.ImageFrame;
import io.github.ppissias.jtransient.engine.JTransientEngine;
import io.github.ppissias.jtransient.engine.PipelineResult;

import java.util.List;

public final class ReuseMasterStackExample {

    public static void main(String[] args) throws Exception {
        List<ImageFrame> frames = loadFrames();
        DetectionConfig config = new DetectionConfig();

        JTransientEngine engine = new JTransientEngine();
        try {
            short[][] masterStack = engine.generateMasterStack(frames, config, null);

            PipelineResult pipeline = engine.runPipeline(frames, config, null, masterStack);
            System.out.println("Tracks found: " + pipeline.tracks.size());

            List<FrameTransients> transients =
                engine.detectTransients(frames, config, null, masterStack);
            System.out.println("Frames with exported transients: " + transients.size());
        } finally {
            engine.shutdown();
        }
    }

    private static List<ImageFrame> loadFrames() {
        throw new UnsupportedOperationException("Replace with your frame-loading code.");
    }
}
```

`generateMasterStack(...)` is lighter than a full run: it performs quality evaluation and session rejection, then stacks the retained frames, but it does not extract frame objects or link tracks.
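To illustrate what a median master stack is (this is a conceptual sketch, not the library's implementation, and it omits the quality filtering the engine applies first), the per-pixel median across a frame sequence can be computed in plain Java as:

```java
import java.util.Arrays;
import java.util.List;

public final class MedianStackSketch {

    /** Per-pixel median across equally sized frames. */
    static short[][] medianStack(List<short[][]> frames) {
        int height = frames.get(0).length;
        int width = frames.get(0)[0].length;
        short[][] stack = new short[height][width];
        short[] column = new short[frames.size()];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Gather the same pixel from every frame and take the median.
                for (int i = 0; i < frames.size(); i++) {
                    column[i] = frames.get(i)[y][x];
                }
                Arrays.sort(column);
                stack[y][x] = column[column.length / 2]; // upper median for even counts
            }
        }
        return stack;
    }

    public static void main(String[] args) {
        // Pixel 0 is a stationary star; pixel 1 has a transient (200) in one frame.
        short[][] a = {{10, 10}}, b = {{12, 200}}, c = {{11, 10}};
        short[][] median = medianStack(List.of(a, b, c));
        System.out.println(median[0][0] + " " + median[0][1]); // prints "11 10"
    }
}
```

Because a transient is bright in only a few frames, the median suppresses it while retaining stationary stars, which is why the master stack can drive the stationary-star veto.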
detectTransients(...) runs the same early stages as the full engine and returns the per-frame export produced after stationary-star filtering, with preserved streak detections included.
```java
import io.github.ppissias.jtransient.config.DetectionConfig;
import io.github.ppissias.jtransient.engine.FrameTransients;
import io.github.ppissias.jtransient.engine.ImageFrame;
import io.github.ppissias.jtransient.engine.JTransientEngine;

import java.util.List;

public final class DetectTransientsExample {

    public static void main(String[] args) throws Exception {
        List<ImageFrame> frames = loadFrames();
        DetectionConfig config = new DetectionConfig();

        JTransientEngine engine = new JTransientEngine();
        try {
            List<FrameTransients> frameTransients =
                engine.detectTransients(frames, config, null);

            for (FrameTransients frame : frameTransients) {
                System.out.println(frame.filename + " -> " + frame.transients.size() + " transients");
                System.out.println("Seed threshold: " + frame.extractionResult.seedThreshold);
                System.out.println("Grow threshold: " + frame.extractionResult.growThreshold);
            }
        } finally {
            engine.shutdown();
        }
    }

    private static List<ImageFrame> loadFrames() {
        throw new UnsupportedOperationException("Replace with your frame-loading code.");
    }
}
```

This entrypoint is useful when you want JTransient's extraction and stationary-star filtering but plan to do your own higher-level linking.
If you only want object detection on one image, call SourceExtractor.extractSources(...) directly.
```java
import io.github.ppissias.jtransient.config.DetectionConfig;
import io.github.ppissias.jtransient.core.SourceExtractor;

public final class ExtractSingleFrameExample {

    public static void main(String[] args) {
        short[][] image = loadImage();
        DetectionConfig config = new DetectionConfig();

        SourceExtractor.ExtractionResult extraction = SourceExtractor.extractSources(
            image,
            config.detectionSigmaMultiplier,
            config.minDetectionPixels,
            config
        );

        System.out.println("Objects: " + extraction.objects.size());
        System.out.println("Background median: " + extraction.backgroundMetrics.median);
        System.out.println("Background sigma: " + extraction.backgroundMetrics.sigma);

        for (SourceExtractor.DetectedObject object : extraction.objects) {
            System.out.printf(
                "x=%.2f y=%.2f area=%.0f elongation=%.2f streak=%s%n",
                object.x,
                object.y,
                object.pixelArea,
                object.elongation,
                object.isStreak
            );
        }
    }

    private static short[][] loadImage() {
        throw new UnsupportedOperationException("Replace with your single-frame loading code.");
    }
}
```

The extractor returns:
- `objects`: detected blobs that survived the size and artifact filters
- `backgroundMetrics`: sigma-clipped background median and sigma
- `seedThreshold`: threshold used to start a blob
- `growThreshold`: hysteresis threshold used to expand the blob
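The seed/grow pair is hysteresis thresholding: a blob may only start at a pixel above the stricter seed threshold, then expand through neighbors above the looser grow threshold. The construction below, deriving both thresholds from background statistics, is a common pattern shown as an assumption; the multiplier values and exact formulas used internally by `SourceExtractor` may differ:

```java
public final class HysteresisThresholdSketch {

    public static void main(String[] args) {
        // Assumed background metrics; in JTransient these would come from
        // extraction.backgroundMetrics after sigma clipping.
        double median = 1000.0;
        double sigma = 15.0;

        // Hypothetical multipliers: seed is strict, grow is looser.
        double seedSigma = 5.0;
        double growSigma = 2.5;

        double seedThreshold = median + seedSigma * sigma; // pixel may start a blob
        double growThreshold = median + growSigma * sigma; // pixel may extend a blob

        System.out.println("seed=" + seedThreshold + " grow=" + growThreshold);
        // prints "seed=1075.0 grow=1037.5": a pixel at 1080 seeds a blob,
        // and a neighbor at 1050 (below seed, above grow) still joins it.
    }
}
```

Using two thresholds keeps faint object wings attached to a confident core without letting isolated noise pixels start blobs of their own.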
- use `JTransientAutoTuner.tune(...)` before production runs if the dataset changes often
- use `runPipeline(...)` when you want confirmed tracks and full telemetry
- use `detectTransients(...)` when you want frame-by-frame candidates after stationary-star masking
- use `generateMasterStack(...)` plus the overloads with `providedMasterStack` when repeated runs would otherwise spend too much time stacking
- use `SourceExtractor.extractSources(...)` when you only need single-frame object detection
BSD License. See LICENSE.