Webinar: Ottmar Cronie (CTH/GU) December 12

The second webinar of autumn 2025 will be given by Ottmar Cronie (CTH/GU).

When: Friday December 12, 11.00 – 11.45

Where: Zoom (https://chalmers.zoom.us/j/63810947897)

Title: Point Process Learning: a cross-validation-based statistical theory for Gibbs point processes

Abstract: Heuristically, a point process can be viewed as a generalisation of an i.i.d. random sample, which allows both a random sample size and dependence among the sample points. The dependence structure of a point process typically manifests through clustering/attraction, inhibition/repulsion or independence among the points. The literature offers a variety of models capturing these dependence structures, most of which belong to the class of Gibbs point processes, including prominent examples such as Poisson, Cox, determinantal and Hawkes processes. Given their interpretation as generalisations of i.i.d. random samples, point processes have become fundamental in modelling point pattern data representing spatial (and temporal) locations of naturally occurring events, such as tree centres, earthquakes, disease cases and accidents. However, when it comes to the associated modelling, exact maximum likelihood estimation for parametrised Gibbs models is generally infeasible, since their likelihood functions involve intractable normalising constants that depend on the model parameters. This challenge has motivated the development of numerous alternative inference procedures over the years.

In this talk, we introduce a new cross-validation-based statistical framework for Gibbs processes, termed Point Process Learning, which has been inspired by cross-validation’s ability to mitigate overfitting and reduce mean squared errors. The approach combines two novel concepts for point processes: cross-validation and prediction errors. Specifically, the method uses thinning to partition a point pattern into training and validation sets, while the prediction errors quantify discrepancies between two arbitrary point processes via a parametrised Gibbs model. Point Process Learning evaluates how well a given model predicts validation sets based on the associated training sets. More specifically, a prediction error evaluated on a training-validation pair has expectation zero if and only if the parameter used in the associated Gibbs model matches the actual parameter of the (unsplit) Gibbs process, and this enables the formulation of parametrised risk functions to be minimised. After presenting the theoretical foundations and properties of Point Process Learning, we discuss how it can be used in data analysis and assess its numerical performance through a simulation study.
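As a rough illustration of the thinning-based training/validation split mentioned in the abstract, the minimal Python sketch below simulates a homogeneous Poisson point pattern, splits it by independent p-thinning, and checks a toy prediction of the validation count from the training set. This is not the Point Process Learning estimator itself (the actual prediction errors in the talk are defined for general Gibbs models); the intensity lam, the retention probability p and the Poisson assumption are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a homogeneous Poisson point pattern on the unit square W = [0, 1]^2.
# The intensity lam is an assumption made only to keep the sketch self-contained.
lam = 200.0
n = rng.poisson(lam)                         # random number of points
points = rng.uniform(0.0, 1.0, size=(n, 2))  # their locations in W

# Independent p-thinning: each point is retained in the training set with
# probability p and otherwise assigned to the validation set.
p = 0.5
keep = rng.uniform(size=n) < p
train, valid = points[keep], points[~keep]

# Toy prediction in the spirit of comparing a model-based prediction (fitted
# to the training set) with the validation set: under p-thinning the training
# count divided by p estimates the total intensity measure lam * |W|, so
# (1 - p) * n_train / p predicts the validation count, and the discrepancy
# below has expectation zero under the assumed Poisson model.
predicted_valid_count = (1 - p) * len(train) / p
toy_error = len(valid) - predicted_valid_count

print(f"points: {n}, training: {len(train)}, validation: {len(valid)}")
print(f"predicted validation count: {predicted_valid_count:.1f}, toy error: {toy_error:.1f}")
```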