Benchmarks

The large-scale image processing required for ADI algorithms can lead to concerns about runtime efficiency. To this end, ADI.jl (and the associated JuliaHCI packages) are developed with performance in mind. These packages do not aim to be as fast as possible; rather, they focus on being as fast as is convenient for both users and developers.

The Vortex Image Processing package (VIP) is the inspiration for ADI.jl. It is one of the major Python HCI packages and offers many more features than ADI.jl. Uses common to both packages include full-frame ADI processing, S/N maps, and contrast curves.

System/Setup Information

The benchmark scripts live in the bench/ folder, organized into Julia files. They use BenchmarkTools.jl for accurate timings, PyCall.jl (with a Python virtual environment) to call the Python code, and CSV.jl to organize the results for reproducibility.
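
As a rough sketch of how a single measurement is organized (the actual scripts live in bench/), a timing can be collected with BenchmarkTools.jl and written to a CSV table. The workload, table, and file name below are purely illustrative, not the real benchmark code:

using BenchmarkTools, CSV, DataFrames

# illustrative only: time some stand-in work and record it with its metadata
t = @belapsed sum(abs2, randn(501, 501))
results = DataFrame(framework="ADI.jl", alg="example", N=501^2, time=t)
CSV.write("example_benchmarks.csv", results)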

Julia Version 1.6.0-beta1
Commit b84990e1ac* (2021-01-08 12:42 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin19.6.0)
  CPU: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.0 (ORCJIT, skylake)
Environment:
  JULIA_NUM_THREADS = 4

For the Python code, there is a requirements.txt file in bench/. To reproduce this environment, (optionally) activate a virtual environment, then install from the requirements file:

(venv) $ pip install -r requirements.txt

For reproducibility, there is a Manifest.toml file in bench/. To reproduce this environment, activate the bench project and instantiate it:

$ julia --project=bench -e 'using Pkg; Pkg.instantiate()'

PyCall.jl and virtual environments

The interface between Julia and Python is handled by PyCall.jl. When using a virtual environment, PyCall may not use the correct Python library. Before running the benchmarks, please read the PyCall.jl documentation on specifying the Python version.
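
Following the PyCall documentation, one common approach is to point PyCall at the virtual environment's interpreter and rebuild it; the path below is a placeholder for your own venv:

ENV["PYTHON"] = "/path/to/venv/bin/python"  # placeholder path to the venv interpreter
using Pkg
Pkg.build("PyCall")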

Multi-threading

Some of the image-processing methods in ADI.jl and HCIToolbox.jl are multi-threaded, which leads to a noticeable difference in some benchmarks. To take advantage of this, set the JULIA_NUM_THREADS environment variable before starting Julia; see the Julia multi-threading documentation for details.
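
For example, to launch Julia with 4 threads for these benchmarks (Threads.nthreads() confirms the count at runtime):

$ JULIA_NUM_THREADS=4 julia --project=bench

julia> Threads.nthreads()
4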

ADI Reduction

These benchmarks show the duration to fully reduce ADI data for various algorithms. The data used are $\beta$ Pictoris and HR8799 from HCIDatasets.jl.
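
For reference, a single ADI.jl reduction in these benchmarks looks roughly like the sketch below, assuming the HCIDatasets.jl accessors and the algorithm-call convention shown in the ADI.jl documentation:

using ADI
using BenchmarkTools
using HCIDatasets: BetaPictoris

# load the cube and parallactic angles (downloaded on first access)
cube, angles = BetaPictoris[:cube, :pa]

# time a full-frame PCA reduction with 20 components
@btime PCA(20)($cube, $angles)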

using CSV, DataFrames, StatsPlots

# benchdir(...) is a small helper that resolves paths inside the bench/ folder
adi_data = CSV.File(benchdir("adi_benchmarks.csv")) |> DataFrame |> sort!
cube_labels = @. ifelse(adi_data.N == 622261, "Beta Pictoris", "HR8799")
insertcols!(adi_data, 4, :cube => cube_labels)
adi_groups = groupby(adi_data, :framework)

GroupedDataFrame with 2 groups based on key: framework

First Group (5 rows): framework = InlineStrings.String7("ADI.jl")

 Row │ framework  alg      N         cube           time
     │ String7    String7  Int64     String         Float64
─────┼───────────────────────────────────────────────────────
   1 │ ADI.jl     median     622261  Beta Pictoris  0.0353724
   2 │ ADI.jl     median   18574074  HR8799         1.16283
   3 │ ADI.jl     nmf_20     622261  Beta Pictoris  1.06399
   4 │ ADI.jl     pca_20     622261  Beta Pictoris  0.0535118
   5 │ ADI.jl     pca_20   18574074  HR8799         2.38231

Last Group (5 rows): framework = InlineStrings.String7("VIP")

 Row │ framework  alg      N         cube           time
     │ String7    String7  Int64     String         Float64
─────┼───────────────────────────────────────────────────────
   1 │ VIP        median     622261  Beta Pictoris  0.044207
   2 │ VIP        median   18574074  HR8799         1.00312
   3 │ VIP        nmf_20     622261  Beta Pictoris  0.79571
   4 │ VIP        pca_20     622261  Beta Pictoris  0.0603534
   5 │ VIP        pca_20   18574074  HR8799         2.42638

cube_groups = groupby(adi_data, :cube)
plot(
    @df(cube_groups[1], groupedbar(:alg, :time, group=:framework, yscale=:log10)),
    @df(cube_groups[2], groupedbar(:alg, :time, group=:framework)),
    size=(700, 350),
    leg=:topleft,
    ylabel="time (s)",
    title=["Beta Pictoris" "HR8799"]
)

Please note the log-scale for the left figure.

Detection Maps

This benchmark measures the duration to produce a signal-to-noise ratio (S/N) map. Rather than using the ADI cubes above, these benchmarks use randomly generated frames of various sizes. The FWHM is fixed at 5.
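
A single ADI.jl measurement corresponds roughly to the sketch below, assuming ADI.jl's detectionmap function with the snr estimator as described in its metrics documentation:

using ADI

# random frame standing in for a residual image; FWHM fixed at 5 as in the benchmark
frame = randn(101, 101)
fwhm = 5

# compute the S/N at every pixel of the frame
snmap = detectionmap(snr, frame, fwhm)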

snrmap_data = CSV.File(benchdir("snrmap_benchmarks.csv")) |> DataFrame |> sort!
snrmap_groups = groupby(snrmap_data, :framework)

GroupedDataFrame with 2 groups based on key: framework

First Group (5 rows): framework = InlineStrings.String7("ADI.jl")

 Row │ framework  N       time
     │ String7    Int64   Float64
─────┼──────────────────────────────
   1 │ ADI.jl       2601   0.0131519
   2 │ ADI.jl      10201   0.135207
   3 │ ADI.jl      40401   1.24764
   4 │ ADI.jl      90601   4.30251
   5 │ ADI.jl     160801  10.2164

Last Group (3 rows): framework = InlineStrings.String7("VIP")

 Row │ framework  N      time
     │ String7    Int64  Float64
─────┼────────────────────────────
   1 │ VIP         2601   0.950529
   2 │ VIP        10201   6.9494
   3 │ VIP        40401  47.5746

@df snrmap_data scatter(
    :N,
    :time,
    group=:framework,
    ms=6,
    xlabel="number of pixels",
    ylabel="time (s)"
)

Contrast Curves

Finally, this benchmark measures the duration to generate a contrast curve, which is used for analyzing the algorithmic throughput of an ADI algorithm. For both benchmarks, 3 azimuthal branches are used for the throughput injections with a FWHM of 8, and a Gaussian PSF is evaluated on a (21, 21) grid for the injections. The data used are $\beta$ Pictoris and HR8799 from HCIDatasets.jl.
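
The ADI.jl side of this benchmark looks roughly like the sketch below; contrast_curve is assumed to follow the interface in the ADI.jl metrics documentation, and keyword names such as nbranch may differ between versions. The Gaussian PSF is a hand-rolled stand-in evaluated on a (21, 21) grid:

using ADI
using HCIDatasets: BetaPictoris

cube, angles = BetaPictoris[:cube, :pa]
fwhm = 8

# synthetic Gaussian PSF on a (21, 21) grid (σ derived from the FWHM)
σ = fwhm / (2 * sqrt(2 * log(2)))
psf = [exp(-((x - 11)^2 + (y - 11)^2) / (2σ^2)) for y in 1:21, x in 1:21]

# contrast curve for 20-component PCA with 3 azimuthal branches (assumed keyword)
curve = contrast_curve(PCA(20), cube, angles, psf; fwhm=fwhm, nbranch=3)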

contrast_data = CSV.File(benchdir("contrast_benchmarks.csv")) |> DataFrame |> sort!
cube_labels = @. ifelse(contrast_data.N == 622261, "Beta Pictoris", "HR8799")
insertcols!(contrast_data, 4, :cube => cube_labels)
contrast_groups = groupby(contrast_data, :framework)

GroupedDataFrame with 2 groups based on key: framework

First Group (2 rows): framework = InlineStrings.String7("ADI.jl")

 Row │ framework  alg      N         cube           time
     │ String7    String7  Int64     String         Float64
─────┼───────────────────────────────────────────────────────
   1 │ ADI.jl     pca_20     622261  Beta Pictoris   0.706419
   2 │ ADI.jl     pca_20   18574074  HR8799         39.0223

Last Group (2 rows): framework = InlineStrings.String7("VIP")

 Row │ framework  alg      N         cube           time
     │ String7    String7  Int64     String         Float64
─────┼───────────────────────────────────────────────────────
   1 │ VIP        pca_20     622261  Beta Pictoris   1.55966
   2 │ VIP        pca_20   18574074  HR8799         93.6585

@df contrast_data groupedbar(
    :cube,
    :time,
    group=:framework,
    leg=:topleft,
    ylabel="time (s)",
    yscale=:log10,
)

Please note the log-scale.