Optimal Binary Hypothesis Testing Based on the Behavioral Kullback-Leibler Divergence Criterion


BERBER A., DÜLEK B.

IEEE SIGNAL PROCESSING LETTERS, vol. 33, pp. 161-165, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 33
  • Publication Date: 2026
  • DOI: 10.1109/lsp.2025.3640513
  • Journal Name: IEEE SIGNAL PROCESSING LETTERS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC
  • Page Numbers: pp. 161-165
  • Hacettepe University Affiliated: Yes

Abstract

The Kullback-Leibler (KL) divergence plays a central role in hypothesis testing, providing a measure of the statistical distance between two probability distributions. In the distributed detection problem, it serves as a design criterion in the absence of information about the fusion center's (FC) decision rule: the local sensor decision rules are designed to maximize the KL divergence between the distributions of the quantized messages sent to the FC under the alternative and null hypotheses. In decision-making tasks involving humans, the subjective perception of probability values due to behavioral biases needs to be taken into account. In this letter, the notion of behavioral KL divergence is proposed: the statistical distance between two distributions is computed from the perceived values of the probabilities, which are obtained from the actual probabilities via the probability weighting function employed in prospect theory. It is proved that the behavioral KL divergence between the distributions of the quantized decision at the output of a detector under the two hypotheses is maximized by either the Neyman-Pearson (NP) rule or the flipped Neyman-Pearson (FNP) rule for any fixed false alarm probability. Based on this result, it is also established that, under a constraint on the average perceived false alarm probability, the average behavioral KL divergence is maximized by time-sharing between at most two single-threshold likelihood-ratio tests, each of which is either an NP or an FNP rule. The theoretical results are supported by numerical examples.
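To make the quantities in the abstract concrete, consider a detector whose quantized output is a single bit, with detection probability P_D and false alarm probability P_FA. One plausible formalization of the behavioral KL divergence, assuming Prelec's probability weighting function w(p) = \exp(-(-\ln p)^{\alpha}) with 0 < \alpha \le 1 (the letter's exact definition and choice of weighting function may differ), is

\[
D_w(P_D \,\|\, P_{FA}) = w(P_D)\,\ln\frac{w(P_D)}{w(P_{FA})} + w(1-P_D)\,\ln\frac{w(1-P_D)}{w(1-P_{FA})},
\]

which reduces to the ordinary KL divergence between the two decision distributions when w(p) = p. Since w(P_D) + w(1 - P_D) \ne 1 in general, D_w need not be nonnegative.

The following minimal Python sketch (not the letter's code) evaluates this quantity for a Gaussian mean-shift problem, H0: N(0,1) versus H1: N(mu,1), and compares, at a fixed false alarm probability, the NP rule, the FNP rule, and an equal time-sharing mixture of the two. All parameter values (the weighting exponent, the mean shift, the false alarm level) are illustrative assumptions, not values from the letter.

import numpy as np
from scipy.stats import norm

def prelec(p, alpha=0.65):
    # Prelec probability weighting w(p) = exp(-(-ln p)^alpha); clipping
    # avoids log(0) at the endpoints.
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return np.exp(-(-np.log(p)) ** alpha)

def behavioral_kl(p_d, p_fa, alpha=0.65):
    # Behavioral KL divergence between the binary decision distributions
    # (p_d, 1 - p_d) under H1 and (p_fa, 1 - p_fa) under H0, computed on
    # the perceived probabilities w(.) (an assumed formalization).
    w = lambda p: prelec(p, alpha)
    return (w(p_d) * np.log(w(p_d) / w(p_fa))
            + w(1.0 - p_d) * np.log(w(1.0 - p_d) / w(1.0 - p_fa)))

mu = 1.0      # mean shift under H1 (illustrative)
p_fa = 0.1    # fixed false alarm probability (illustrative)

# NP rule: decide H1 when x > tau, with tau chosen so P(x > tau | H0) = p_fa.
p_d_np = norm.sf(norm.isf(p_fa) - mu)
# FNP rule: decide H1 when x < tau', with tau' chosen so P(x < tau' | H0) = p_fa.
p_d_fnp = norm.cdf(norm.ppf(p_fa) - mu)
# Equal time-sharing between NP and FNP keeps the same false alarm probability.
p_d_mix = 0.5 * (p_d_np + p_d_fnp)

for name, p_d in [("NP", p_d_np), ("FNP", p_d_fnp), ("mixture", p_d_mix)]:
    print(f"{name:8s} P_D = {p_d:.4f}  behavioral KL = {behavioral_kl(p_d, p_fa):+.4f}")

For parameter choices of this kind, the largest behavioral KL divergence at the fixed false alarm level is attained at the NP or the FNP point rather than at the interior mixture, in line with the result stated in the abstract.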