brian _at_ psych -dot- columbia -dot- edu
Page last updated September 17 2019 (added Alan Lee's Python scripts for meta-d' analysis)
The original version of this webpage is archived here.
MATLAB files for meta-d' estimation can be found further down the page.
meta-d' analysis quantifies metacognitive sensitivity (i.e. the efficacy with which confidence ratings discriminate between correct and incorrect judgments) in a signal detection theory framework. A central idea is that primary task performance can influence metacognitive sensitivity, and it is informative to take this influence into account. Thus we quantify metacognitive sensitivity with an SDT-based measure, meta-d', and account for the influence of primary task performance (d') on metacognitive sensitivity by computing a numerical comparison between the two, e.g. a difference (meta-d' - d') or a ratio (meta-d' / d'). See the papers below for more details.
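As a minimal sketch of the comparison measures described above (the numerical values here are hypothetical, not from any dataset), given estimates of d' and meta-d':

```python
# Hypothetical example values for illustration only.
d_prime = 2.0        # primary task sensitivity (type 1 d')
meta_d_prime = 1.5   # metacognitive sensitivity (meta-d') from the model fit

m_diff = meta_d_prime - d_prime    # difference measure: meta-d' - d'
m_ratio = meta_d_prime / d_prime   # ratio measure: meta-d' / d'

print(m_diff)   # -0.5
print(m_ratio)  # 0.75
```

A ratio of 1 (or a difference of 0) indicates that metacognitive sensitivity is at the level expected from primary task performance alone.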
The original description of the methodology can be found in the 2012 Consciousness and Cognition paper linked below. This serves as a good intuitive introduction.
(Please note an error in Figure 1C: the legend entry should read "expected from d'=2, c1=0" rather than "expected from d'=1, c1=0")
Maniscalco, B., & Lau, H. (2012). A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430. doi:10.1016/j.concog.2011.09.021
We have recently published a more formal and complete treatment of the meta-d' methodology in the book chapter linked below.
Maniscalco, B., & Lau, H. (2014). Signal detection theory analysis of type 1 and type 2 data: meta-d', response-specific meta-d', and the unequal variance SDT model. In S. M. Fleming & C. D. Frith (Eds.), The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer.
In addition to providing a more in-depth presentation of the meta-d' methodology, in this book chapter we extend the original meta-d' treatment in two key ways.
We introduce methodology for computing response-specific meta-d'. Whereas meta-d' quantifies overall metacognitive sensitivity in a 2-choice perceptual or cognitive task, response-specific meta-d' quantifies metacognitive sensitivity separately for each response type. Response-specific analysis may be desirable in situations where the response types are qualitatively different, e.g. in a perceptual detection task where the two response types would be "yes, I saw the stimulus" and "no, I did not see the stimulus."
We raise the issue that differential patterns in response-specific confidence rating data may be modeled as arising from either (a) an unequal variance SDT model, or (b) an equal variance SDT model that posits different meta-d' values for each response type. Because of this ambiguity, considerable care is warranted in estimating meta-d' based on an unequal variance SDT fit to the data, as well as in interpreting the results of response-specific metacognitive data.
In the previous version of this webpage, we provided a wrapper function named type2_SDT_MLE which by default applied an unequal variance SDT fit to the input data by constructing pseudo-type 1 ROC curves from the confidence rating data, and used this fit as input to the function for estimating meta-d', fit_meta_d_MLE. However, following the discussion in Maniscalco & Lau 2014, we no longer feel it is best practice to use such pseudo-type 1 ROC curves for estimating an unequal variance SDT model in the context of characterizing metacognitive sensitivity. Thus we recommend using an equal variance SDT model (i.e. setting s = 1), unless the unequal variance model can be estimated using a "true" type 1 ROC curve that does not rely on confidence rating data to generate multiple data points on the type 1 ROC curve (e.g. by using base rate or payoff manipulations). See section 3.6 of Maniscalco & Lau 2014 for more in-depth discussion.
Below are two sets of functions for conducting type 2 SDT analysis. One set uses maximum likelihood estimation (MLE), and the other works by minimizing the sum of squared errors (SSE). The MLE functions make use of MATLAB's Optimization Toolbox. The SSE functions do not require the Optimization Toolbox, but use a cruder and slower model-fitting algorithm.
If you use these analysis files, please reference Maniscalco & Lau 2012 and/or Maniscalco & Lau 2014, as well as this website.
Converts trial-by-trial data for stimulus, response, and confidence into the response count input format used by the meta-d' functions below. See the comments in the help section for full details.
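To illustrate the response-count format, here is a hedged Python sketch of this kind of conversion. The function name, variable names, and the exact ordering convention shown here are our assumptions for illustration; check the help comments of the distributed MATLAB function for its actual conventions.

```python
def trials_to_counts(stims, resps, ratings, n_ratings):
    """Convert trial-by-trial data into response-count vectors.

    stims, resps: 0 = "S1", 1 = "S2"; ratings: 1..n_ratings.
    Returns (nR_S1, nR_S2), each of length 2*n_ratings, counting
    responses from "S1, highest confidence" through "S2, highest
    confidence" (the ordering assumed here for illustration).
    """
    nR_S1 = [0] * (2 * n_ratings)
    nR_S2 = [0] * (2 * n_ratings)
    for s, r, c in zip(stims, resps, ratings):
        # indices 0..n_ratings-1: "S1" responses, high -> low confidence
        # indices n_ratings..2*n_ratings-1: "S2" responses, low -> high
        i = (n_ratings - c) if r == 0 else (n_ratings - 1 + c)
        (nR_S1 if s == 0 else nR_S2)[i] += 1
    return nR_S1, nR_S2
```

For example, with two confidence levels, an S1 trial answered "S1" with confidence 2 increments the first entry of nR_S1, while an S2 trial answered "S2" with confidence 2 increments the last entry of nR_S2.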
This function estimates meta-d' as well as basic type 1 SDT parameters. It takes as input a count of the number of times the subject used each available response for each stimulus type, as well as an estimate of the SDT parameter s. Per the discussion above, for most purposes we recommend setting s = 1. See the comments in the help section for full details.
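As a reminder of the basic type 1 SDT quantities involved, here is a short Python sketch of the standard equal-variance (s = 1) formulas for d' and the criterion c; the function name and input labels are ours, and the response-coding convention used by the distributed function may differ.

```python
from statistics import NormalDist

def type1_sdt(n_hits, n_misses, n_fas, n_crs):
    """Standard equal-variance (s = 1) type 1 SDT parameters.

    Hit = "S2" response to an S2 stimulus; false alarm = "S2"
    response to an S1 stimulus (a common convention, assumed here).
    """
    z = NormalDist().inv_cdf           # inverse of the standard normal CDF
    hr = n_hits / (n_hits + n_misses)  # type 1 hit rate
    far = n_fas / (n_fas + n_crs)      # type 1 false alarm rate
    d_prime = z(hr) - z(far)
    c = -0.5 * (z(hr) + z(far))
    return d_prime, c
```

With 80 hits, 20 misses, 20 false alarms, and 80 correct rejections, this gives d' of about 1.68 and an unbiased criterion of c = 0.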
This function estimates response-specific meta-d', i.e. meta-d' computed separately for "S1" and "S2" responses, as described in Maniscalco & Lau 2014. See the comments in the help section for full details.
This function estimates meta-d' using response-conditional type 2 HRs and FARs and the empirical type 1 criterion c' as input. See the comments in the help section for full details.
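For reference, the type 2 hit rate and false alarm rate for one response type can be computed from confidence counts as in the Python sketch below (binary high/low confidence is assumed here for simplicity; the function name and argument labels are ours):

```python
def type2_rates(n_corr_high, n_corr_low, n_inc_high, n_inc_low):
    """Type 2 HR and FAR for one response type, assuming binary confidence.

    A type 2 hit is high confidence on a correct trial; a type 2
    false alarm is high confidence on an incorrect trial.
    """
    t2_hr = n_corr_high / (n_corr_high + n_corr_low)  # type 2 hit rate
    t2_far = n_inc_high / (n_inc_high + n_inc_low)    # type 2 false alarm rate
    return t2_hr, t2_far
```

With multiple confidence levels, the same calculation applied at each confidence threshold traces out the response-conditional type 2 ROC curve.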
type2_SDT_SSE.m (requires fit_meta_d_SSE.m)
This is a wrapper function you can use to pass in raw behavioral data and have it processed appropriately for input into fit_meta_d_SSE. It also provides a basic type 1 SDT analysis and a comparison between type 1 d' and meta-d'. Overall it is more convenient to use type2_SDT_SSE than to use fit_meta_d_SSE directly, but using fit_meta_d_SSE directly gives you more control over the fine details of how meta-d' is estimated. See the comments in the help section for full details.
This function estimates the optimal overall type 2 ROC curve when computed using the type 2 likelihood ratio as the type 2 decision variable, as discussed by Galvin, Podd, Drga, and Whitmore (2003).