This time I am going to talk about a specific, easy-to-understand tool for revealing patterns in our data. It is called the Probability/Strangeness matrix, and it has been around for decades, although I’ve yet to see it used to its full potential.
The matrix is very simple. After we have investigated a case, we rate the case on a scale of 0 to 5 for both probability and strangeness. These two ratings are required to be independent – a case can be strange but improbable, probable but not very strange, or any combination. What we are of course most interested in is the small fraction of cases in the upper right-hand corner of the matrix that are both probable and strange, and how they move on the matrix as the investigation proceeds. It’s very clear from the plot – usually done with probability on the horizontal axis and strangeness on the vertical – what is going on.
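As a minimal sketch of the idea (in Python, with field names I’ve invented for illustration), the matrix is just a 6×6 grid of case counts, with the explained or discredited zeros left off the plot:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A single investigated case with its two independent ratings."""
    name: str
    probability: int  # 0-5: belief that the witnesses' claims are true
    strangeness: int  # 0-5: how anomalous the observations are

def ps_matrix(cases):
    """Bin cases into a 6x6 count grid: rows = strangeness, cols = probability.
    Cases rated zero on either axis are closed/explained and not plotted."""
    grid = [[0] * 6 for _ in range(6)]
    for c in cases:
        if c.probability > 0 and c.strangeness > 0:
            grid[c.strangeness][c.probability] += 1
    return grid

# Invented example data for illustration only
cases = [
    Case("satellite flare", 3, 0),       # explained: not plotted
    Case("Skylab 3", 4, 3),              # probable and strange
    Case("single-witness light", 1, 2),  # strange-ish but weakly supported
]
grid = ps_matrix(cases)
```

Plotting `grid` as a heat map or scatter, with probability horizontal and strangeness vertical, gives the picture described above; the upper right corner is where the interesting cases live.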
What we are trying to do at API is to define what we mean by Probability and Strangeness as unambiguously as possible, so that they’re not entirely a judgment call and don’t depend on the investigator’s subjective impression of witness credibility. I’ll describe our criteria, and then I’d like to hear what you think. This is still a work in progress, so we are happy to consider peer comments on it.
Let’s start with probability. This is a measure of our belief that the claims made by the witnesses in the case are essentially true. Note that this has nothing to do with whether the facts have a mundane explanation or not.
As I said, we rate probability on a scale of 0 to 5. Anything rated a zero has been closed as a hoax or hallucination and we usually don’t plot it.
0 – Strong evidence of a hoax or hallucination. Witness credibility demonstrably low.
1 – Little difference between the report and a fictional story: only a single witness willing to come forth, no physical evidence, no corroboration, no contemporaneous notes, photographs or videos.
2 – Single witness with contemporaneous notes, sketches or sighting reports. Possibly a second witness, but without strong corroboration, or with considerable collusion on the story before discussion with investigators. Photographs or videos do not have clear provenance.
3 – Multiple credible witness reports within short time of the sighting with good consistency between witnesses. Photographs or videos with clear provenance. Photographs have EXIF data consistent with testimony and have been subjected to careful analysis. Witnesses do not seek publicity.
4 – All criteria of (3), plus a high degree of independence between highly credible witnesses. More than 1 video or photograph or other physical evidence with clear provenance at the same time. Physical evidence subjected to analysis. A thorough investigation was conducted shortly after the event.
5 – Both remote sensing (e.g. RADAR, optical) and in-situ physical evidence with clear chains of custody from highly credible sources in addition to the criteria of 4.
A good example would be the famous Skylab 3 case from 20 September 1973, which I have been looking at more closely lately. We have 3 astronauts as witnesses, contemporaneous notes and reports, and four photographs. That would be at least a 3 in probability, if not a 4.
The criteria aren’t perfect. Take the Stephenville, Texas case. There we do have RADAR and multiple witnesses, but lack other physical evidence. It’s a complex case, but I would rate it a 4. When we have a case like that, we need to be able to think through our rating and compare it to precedent.
Now, strangeness. Strangeness is a measure of how anomalous the reported observations are, without reference to their credibility. As strangeness increases, it gets harder to nail down. A zero is something that is clearly explained, like a satellite, the planet Venus or a lenticular cloud. API gets lots of cases like that. We normally don’t plot the zeros on the matrix. I personally find strangeness a little harder to define than probability, but here is the rest of the scale as we have it so far.
1 – Possibly a known man-made or natural object if one aspect of the report is misreported or misperceived.
2 – More than one significant aspect of the report is highly puzzling.
3 – The report consistently describes behavior and appearance of an object or objects that defy conventional explanation.
4 – The report meets the criteria of (3), plus indicates interaction with the witness, animals or the environment, such as landing traces, interference with equipment, or communication.
5 – Report meets some or all of the criteria of (4), plus additional strange aspects such as repetition of events, missing time or time distortion, artifacts, implants, or other highly strange phenomena.
Now, I have to admit that “defies conventional explanation” in criterion 3 is a little too vague. Maybe we haven’t tried hard enough, or have just not been imaginative enough. We’re working on something more measurable.
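One way to keep ratings consistent across investigators is to encode the two scales directly, so every report is scored against the same written rubric. Here is a sketch; the short labels are my own condensations of the criteria above, not official API wording:

```python
PROBABILITY = {
    0: "hoax or hallucination; witness credibility demonstrably low",
    1: "single witness, no corroboration, records, or physical evidence",
    2: "single witness with contemporaneous notes; weak or colluded second witness",
    3: "multiple consistent, credible witnesses; provenanced, analyzed photos",
    4: "criteria of 3, plus independent witnesses and multiple items of evidence",
    5: "criteria of 4, plus remote sensing and in-situ evidence with chain of custody",
}

STRANGENESS = {
    0: "clearly explained (satellite, Venus, lenticular cloud)",
    1: "possibly a known object if one aspect is misreported or misperceived",
    2: "more than one significant highly puzzling aspect",
    3: "behavior and appearance defy conventional explanation",
    4: "criteria of 3, plus interaction with witness, animals, or environment",
    5: "criteria of 4, plus repetition, missing time, artifacts, or similar",
}

def rate(probability, strangeness):
    """Validate a pair of independent ratings and return them as a tuple."""
    for value in (probability, strangeness):
        if value not in range(6):
            raise ValueError(f"rating must be 0-5, got {value}")
    return probability, strangeness
```

An investigator (or a database front end) can then call `rate(4, 3)` and be forced to stay on the defined scale, with the rubric text one lookup away.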
So, how is this a tool? There are lots of ways we can plot the P-S data to explore the evolution of cases. We want to see if the cases cluster anywhere, and if so, ask why. We can plot the matrix as time slices, as a function of location, or as a function of how well surveilled an area is, or we can do plots by region – does New England have stranger high-probability cases than the Midwest? Of course, the key thing here is consistency, which is why we need clear criteria.
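The regional question can be sketched in a few lines. This is illustrative only – the data and the probability cutoff of 3 are my own assumptions, not API’s:

```python
from collections import defaultdict

def mean_strangeness_by_region(cases, min_probability=3):
    """Among high-probability cases, average the strangeness per region."""
    totals = defaultdict(lambda: [0, 0])  # region -> [strangeness sum, count]
    for region, prob, strange in cases:
        if prob >= min_probability:
            t = totals[region]
            t[0] += strange
            t[1] += 1
    return {region: s / n for region, (s, n) in totals.items()}

# Invented example data: (region, probability, strangeness)
cases = [
    ("New England", 4, 4),
    ("New England", 3, 2),
    ("Midwest", 4, 1),
    ("Midwest", 2, 5),  # below the probability cutoff: excluded
]
averages = mean_strangeness_by_region(cases)
```

The same grouping idea works for time slices or surveillance density – change the key from region to year or to a coverage bucket, and compare the resulting distributions.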
If you’re a field investigator, I’d like to hear what you think the criteria should be, and how your cases are coming out on the matrix. Just use our contact form.