The Pre-Forum Courses are specialized courses lasting 5 hours, held at the venue just before the event. The courses carry a symbolic fee and have a limited number of seats.


Variational Bayesian inference and beyond:
Bayesian inference for big data

Tamara Broderick | MIT | United States
Pre-Forum Course | 08:00 to 13:30 hrs. | Language: English

Bayesian methods exhibit a number of desirable properties for modern data analysis---including

  1. coherent quantification of uncertainty
  2. a modular modeling framework able to capture complex phenomena
  3. the ability to incorporate prior information from an expert source
  4. interpretability

In practice, though, Bayesian inference necessitates approximation of a high-dimensional integral, and some traditional algorithms for this purpose can be slow---notably at data scales of current interest. The tutorial will cover modern tools for fast, approximate Bayesian inference at scale. One increasingly popular framework is provided by "variational Bayes" (VB), which formulates Bayesian inference as an optimization problem. We will examine key benefits and pitfalls of using VB in practice, with a focus on the widespread "mean-field variational Bayes" (MFVB) subtype. We will highlight properties that anyone working with VB, from the data analyst to the theoretician, should be aware of. We will cover modern corrections to VB for the purposes of uncertainty and robustness quantification. In addition to VB, we will cover recent data summarization techniques for scalable Bayesian inference that come equipped with finite-data theoretical guarantees on quality. We will motivate our exploration throughout with practical data analysis examples and point to a number of open problems in the field.
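The "optimization problem" view of VB, and the mean-field pitfall the tutorial alludes to, can be illustrated with a toy example (not from the course materials): for a zero-mean bivariate Gaussian target with covariance Sigma, the optimal factorized Gaussian approximation is known in closed form, with marginal variances equal to the reciprocals of the diagonal entries of the precision matrix Sigma^{-1}. When the target is correlated, these MFVB variances are smaller than the true marginal variances. The function name below is our own:

```python
# Mean-field VB for a 2-D Gaussian target N(0, Sigma): the optimal factorized
# Gaussian q(z1)q(z2) has variances 1/Lambda_ii, where Lambda = Sigma^{-1}.
# This closed-form case exhibits the classic MFVB pitfall of underestimating
# marginal variance when the target is correlated.

def mfvb_marginal_variances(s11, s12, s22):
    """Optimal mean-field variances for a zero-mean 2-D Gaussian target."""
    det = s11 * s22 - s12 * s12     # determinant of Sigma
    lam11 = s22 / det               # diagonal entries of Lambda = Sigma^{-1}
    lam22 = s11 / det
    return 1.0 / lam11, 1.0 / lam22

# Correlated target: true marginal variances are 1.0, correlation 0.9.
v1, v2 = mfvb_marginal_variances(1.0, 0.9, 1.0)
print(v1, v2)  # both are about 0.19, far below the true value of 1.0
```

The stronger the correlation, the more severe the shrinkage, which is one reason the tutorial's "corrections to VB for uncertainty quantification" matter in practice.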



Machine learning

Elmer Garduño | Google Inc. | United States
Pre-Forum Course | 08:00 to 13:30 hrs.

Learn how to build machine learning applications with TensorFlow. This hands-on workshop will walk you through using TensorFlow to build classification, regression, and recommendation models. We will also discuss building and deploying production models and reusing ML models with TensorFlow Hub.

The course will be taught in Spanish.



Bayesian computing with INLA

Haavard Rue | KAUST | Saudi Arabia
Pre-Forum Course | 15:30 to 21:00 hrs.

In this course, we will discuss approximate Bayesian inference for a class of models named latent Gaussian models (LGMs). LGMs are perhaps the most commonly used class of models in statistical applications. The class includes, among others, most (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatio-temporal models, log-Gaussian Cox processes, and geostatistical and geoadditive models. The concept of an LGM is intended for the modelling stage, but it turns out to be extremely useful when doing inference, as we can treat all the models listed above in a unified way, using the *same* algorithm and software tool. Our approach to (approximate) Bayesian inference is to use integrated nested Laplace approximations (INLA). Using this tool, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage of our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged. In this course, we will introduce the class of latent Gaussian models, describe the "big picture" of the INLA algorithm, and introduce the R-INLA package. We will focus on applied aspects, with the use of the package illustrated on several examples.
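The Laplace approximation at the heart of INLA can be sketched in a few lines of plain Python (this is a toy illustration, not the INLA algorithm itself, which nests such approximations inside numerical integration over hyperparameters; the real tool is the R-INLA package). A conjugate Poisson-Gamma model is chosen here so the approximation can be checked against the exact posterior:

```python
# Toy Laplace approximation to a posterior: fit a Gaussian at the posterior
# mode, with variance given by the negative inverse curvature there.
# Model: y_i ~ Poisson(lam), prior lam ~ Gamma(1, 1), so the exact posterior
# is Gamma(1 + sum(y), n + 1) and the approximation can be checked directly.

def laplace_poisson(y):
    """Gaussian (Laplace) approximation to p(lam | y) at its mode."""
    s, n = sum(y), len(y)
    # log p(lam | y) = s*log(lam) - (n + 1)*lam + const; maximize in closed form.
    mode = s / (n + 1)
    # Curvature of the log-posterior is -s / lam^2; variance = -1 / curvature.
    var = mode ** 2 / s
    return mode, var

y = [3, 5, 4, 6, 2]
mode, var = laplace_poisson(y)
exact_mean = (1 + sum(y)) / (len(y) + 1)   # exact Gamma posterior mean
print(mode, var, exact_mean)
```

Even in this crude form the Gaussian sits close to the exact posterior mean; INLA's accuracy comes from applying such approximations in a nested fashion rather than just once.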

This course will be taught jointly with Daniela Castro-Camilo.



Statistical and Psychometric Intricacies of
Educational Survey Assessments

Andreas Oranje | Educational Testing Service | United States
Pre-Forum Course | 15:30 to 21:00 hrs. | Language: English

During this course, we will briefly introduce the core goals of educational survey assessments (also known as group score assessments or large-scale assessments) and the most common designs to meet those goals. From there, we will discuss the core psychometric and statistical principles, including the use of item response theory and latent regression analysis to develop key group statistics of interest. The remainder of the course will focus on statistical topics including

  1. typical item and population sampling designs
  2. statistical inference (including the three main components of variance and how they are computed, estimation of degrees of freedom, multiple comparisons, and various statistical rules that are often applied)
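The item response theory building block mentioned above can be sketched with the standard two-parameter logistic (2PL) model: the probability that a test-taker with ability theta answers an item correctly, given the item's discrimination a and difficulty b. This is a generic illustration, not course material, and the parameter values below are made up:

```python
import math

# 2PL item response function: P(correct | theta) for an item with
# discrimination a and difficulty b. At theta == b the probability is
# exactly 0.5, which is what makes b interpretable as "difficulty".

def p_correct_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability matches the item's difficulty succeeds half
# the time; a stronger examinee succeeds more often.
print(p_correct_2pl(theta=0.5, a=1.2, b=0.5))
print(p_correct_2pl(theta=2.0, a=1.2, b=0.5))
```

Latent regression analysis, also mentioned above, then relates the unobserved theta to background variables in order to produce the group-level statistics these assessments report.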

We will end with a brief overview of where these assessments are heading in terms of digitization, the use of behavioral and process data, and using adaptive approaches to test design and administration. At the end of this course, we hope that participants can more quickly read psychometric and statistical papers about group score assessment and possibly gain an interest in working on and developing these types of assessments. Besides a set of slides, we will provide access to a new summary article about NAEP statistical and psychometric research (Oranje and Kolstad, in press, Special Issue of Journal of Educational and Behavioral Statistics) as well as a bibliography with easily accessible/downloadable papers that address the aforementioned topics in more detail.