These are the slides that Harm Schütt (LMU Munich School of Management) presented at the 2018 EAA PhD Forum in Milan.
The learning objectives were as follows:
The session was aimed at young accounting researchers interested in learning new, robust methods to measure the many latent constructs we deal with in accounting. Because of advances in computing power and the public debate about problems in the application of classical hypothesis testing (e.g., Simmons, Nelson, and Simonsohn 2011; Dyckman and Zeff 2014; Gelman and Carlin 2014; Harvey 2017), Bayesian statistics is gaining traction in various areas of the social sciences, including accounting research.
Bayesian statistics excels at two things. First, it helps incorporate external knowledge into the model, thereby regularizing estimates (i.e., reducing the chance of fitting noise). Second, it provides a flexible approach to modeling latent variables. Both use cases hold significant potential for accounting research questions that involve hard-to-measure constructs and noisy data. Examples of such constructs include: disclosure characteristics (e.g., readability), accrual quality, undetected fraud (Hahn, Murray, and Manolopoulou 2016), latent topics and their distributions in a corpus of documents (e.g., Dyer, Lang, and Stice-Lawrence 2017), latent financial news audiences (Schütt 2017), or incrementally useful variables (Cremers 2002).
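The first use case, priors as regularization, can be sketched in a few lines. The example below is not from the slides: it assumes a normal likelihood with known variance and a weakly informative N(0, 1) prior, so the posterior is available in closed form, and all numbers are purely illustrative. The point is that the posterior mean is a precision-weighted average that shrinks a noisy sample estimate toward the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: estimate an effect mu from a small, noisy sample.
true_mu = 0.5
data = rng.normal(true_mu, 2.0, size=10)   # noisy observations, known sd = 2.0

# Frequentist point estimate: the sample mean.
mle = data.mean()

# Bayesian estimate with a weakly informative prior mu ~ N(0, 1).
# With a normal likelihood and known variance, the posterior is also normal;
# its mean is a precision-weighted average of the prior mean and the MLE.
prior_mean, prior_var = 0.0, 1.0
like_var = 2.0**2 / len(data)              # sampling variance of the sample mean
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * (prior_mean / prior_var + mle / like_var)

# The posterior mean is pulled toward the prior mean, which guards
# against fitting noise in small samples.
print(f"MLE: {mle:.3f}, posterior mean: {post_mean:.3f}")
```

The noisier the data (larger `like_var`), the more weight the prior receives; with abundant, precise data the posterior mean converges to the frequentist estimate.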
The agenda of the presentation was as follows:
Bayesian and frequentist statistics are two tools, each with advantages and disadvantages. Discussing the differences between the two approaches is instructive not only for understanding how Bayesian data analysis works, but also for better understanding how to apply frequentist methods.

The remainder of the slides use a few examples to illustrate how Bayesian analysis works and when it is most useful for us: settings where we want to model heterogeneity, and settings with noisy data or hard-to-measure constructs. In such situations, the chance of accidentally fitting noise and producing false positives is high, so we would like to use every bit of uncontroversial prior knowledge we have to improve the precision of our inferences. Bayesian methods offer a very flexible and intuitive approach to doing just that (Gelman et al. 2013). Many people find, as Nobel laureate Christopher Sims remarked: “Once one becomes used to thinking about inference from a Bayesian perspective, it becomes difficult to understand why many econometricians are uncomfortable with that way of thinking” (Sims 2010, 1; Sims 2007).

However, the downside is that these methods are more complex to code and computationally intensive. Thus, the slides also consider the practical hurdles of Bayesian approaches.
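The heterogeneity setting mentioned above is where hierarchical Bayesian models shine: group-level estimates are partially pooled toward a common mean. The sketch below is not from the slides and uses a simple precision-weighted approximation with known within- and between-group variances rather than a full posterior simulation; the industry setup and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heterogeneity setting: each of 20 industries has its own mean
# effect, but some industries contribute only a handful of observations.
n_groups = 20
true_means = rng.normal(0.0, 0.5, n_groups)
sizes = rng.integers(2, 30, n_groups)

group_means = np.empty(n_groups)
for g in range(n_groups):
    obs = rng.normal(true_means[g], 1.0, sizes[g])
    group_means[g] = obs.mean()

# Partial pooling: shrink each raw group mean toward the grand mean,
# assuming known between-group variance tau2 and within-group variance sigma2.
tau2 = 0.5**2
sigma2 = 1.0
grand_mean = group_means.mean()
weights = tau2 / (tau2 + sigma2 / sizes)   # small groups get small weights
pooled = grand_mean + weights * (group_means - grand_mean)
```

Industries with few observations are shrunk most strongly toward the grand mean, which stabilizes their estimates; a full Bayesian treatment would additionally estimate the variance components from the data.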