Facial Expression Recognition (Face Recognition Techniques) Part 3

Geometric Feature Extraction

As shown in Fig. 19.4, in order to detect and track changes of facial components in near-frontal face images, Tian et al. develop multi-state models to extract the geometric facial features. A three-state lip model describes the lip state: open, closed, or tightly closed. A two-state model (open or closed) is used for each of the eyes. Each brow and cheek has a one-state model. Some appearance features, such as nasolabial furrows and crows-feet wrinkles (Fig. 19.5b), are represented explicitly by two states: present and absent. Given an image sequence, the region of the face and the approximate location of individual face features are detected automatically in the initial frame [78]. The contours of the face features and components are then adjusted manually in the initial frame. After this initialization, all face feature changes are automatically detected and tracked throughout the image sequence. The system groups 15 parameters for the upper face and 9 parameters for the lower face, which describe the shape, motion, and state of face components and furrows. To remove the effects of variation in planar head motion and of differences in face scale between image sequences, all parameters are computed as ratios of their current values to those in the reference frame. Details of geometric feature extraction and representation can be found in [95].
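To make this normalization step concrete, the sketch below expresses each geometric parameter as a ratio of its current value to its value in the reference frame; the parameter names and values are hypothetical and not taken from [95].

```python
import numpy as np

def normalize_to_reference(current_params, reference_params):
    """Express each geometric parameter as a ratio of its current value to its
    value in the reference (typically neutral) frame, removing between-sequence
    differences in face scale.  A hypothetical sketch, not the procedure of [95]."""
    current = np.asarray(current_params, dtype=float)
    reference = np.asarray(reference_params, dtype=float)
    return current / reference  # 1.0 means "unchanged from the reference frame"

# Hypothetical example: lip height and brow distance measured in pixels
reference_frame = [12.0, 35.0]   # measured in the initial (neutral) frame
current_frame = [18.0, 40.0]     # measured in the current frame
print(normalize_to_reference(current_frame, reference_frame))  # [1.5, 1.142...]
```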

Automatic active appearance model (AAM) mapping can be employed to reduce the manual preprocessing of the geometric feature initialization [66, 105]. Xiao et al. [104] performed 3D head tracking to handle large out-of-plane head motion (Sect. 19.4.1) and to track nonrigid features. Once the head pose is recovered, the face region is stabilized by transforming the image to a common orientation for expression recognition [18, 67].


The systems in [15, 102] use an explicit 3D wireframe face model to track geometric facial features defined on the model [91]. The 3D model is fitted to the first frame of the sequence by manually selecting landmark facial features such as the corners of the eyes and mouth. The generic face model, which consists of 16 surface patches, is warped to fit the selected facial features. Figure 19.6b shows an example of the geometric feature extraction in [102].


Fig. 19.5 Example results of feature extraction [95]. a Permanent feature extraction (eyes, brows, and mouth). b Transient feature extraction (crows-feet wrinkles, wrinkles at nasal root, and nasolabial furrows)


Fig. 19.6 Example of feature extraction [102]. a Input video frame. b Snapshot of the geometric tracking system. c Extracted texture map. d Selected facial regions for appearance feature extraction [102]

Appearance Feature Extraction

Gabor wavelets [22] are widely used to extract facial appearance changes as a set of multiscale and multiorientation coefficients. The Gabor filter may be applied to specific locations on a face [59, 94, 96, 116] or to the whole face image [4, 23, 37]. Zhang et al. [116] were the first to compare two types of features for expression recognition: the geometric positions of 34 fiducial points on a face and 612 Gabor wavelet coefficients extracted from the face image at these 34 fiducial points. The recognition rates for six emotion-specified expressions (e.g., joy and anger) were significantly higher for the Gabor wavelet coefficients. Donato et al. [23] compared several techniques for recognizing six single upper face AUs and six lower face AUs. These techniques include optical flow, principal component analysis, independent component analysis, local feature analysis, and Gabor wavelet representation. The best performances were obtained using the Gabor wavelet representation and independent component analysis. All of these systems [23, 116] used a manual step to align each input image with a standard face image using the centers of the eyes and mouth.
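As a rough illustration of how such multiscale, multiorientation Gabor coefficients can be sampled at fiducial points, the following sketch builds a small filter bank with OpenCV; the scales, orientations, and point coordinates are illustrative assumptions, not the settings used in [116] or [96].

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, scales=(4, 8, 16), orientations=8):
    """Build a small Gabor filter bank: one kernel per (wavelength, orientation)
    pair.  Parameter choices are illustrative only."""
    kernels = []
    for lambd in scales:
        for k in range(orientations):
            theta = np.pi * k / orientations
            # args: ksize, sigma, theta, lambda (wavelength), gamma, psi
            kernels.append(cv2.getGaborKernel((ksize, ksize), 0.5 * lambd,
                                              theta, lambd, 0.5, 0))
    return kernels

def gabor_features_at_points(gray_face, points, kernels):
    """Sample every filter response at each fiducial point, yielding a vector
    of length len(points) * len(kernels)."""
    responses = [cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, k)
                 for k in kernels]
    return np.array([[r[y, x] for r in responses] for (x, y) in points]).ravel()

# Hypothetical usage with a synthetic face image and two fiducial points
face = np.random.rand(128, 128).astype(np.float32)
feats = gabor_features_at_points(face, [(40, 50), (88, 50)], gabor_bank())
print(feats.shape)  # 2 points * 24 kernels = (48,)
```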

Tian et al. [96] studied geometric features and Gabor coefficients to recognize single AUs and AU combinations. In their system, they used 480 Gabor coefficients at 20 locations in the upper face and 432 Gabor coefficients at 18 locations in the lower face (Fig. 19.4). They found that Gabor wavelets work well for single AU recognition for homogeneous subjects without head motion. However, for recognition of AU combinations when image sequences include nonhomogeneous subjects with small head motions, the recognition results are relatively poor if only the Gabor appearance features are used. Several factors may account for the difference. First, the previous studies used homogeneous subjects. For instance, Zhang et al. [116] included only Japanese subjects and Donato et al. [23] included only Euro-Americans. Tian et al. used the Cohn-Kanade database, which contains diverse subjects of European, African, and Asian ancestry. Second, the previous studies recognized emotion-specified expressions or only single AUs. Tian et al. tested the Gabor-wavelet-based method on both single AUs and AU combinations, including nonadditive combinations in which the occurrence of one AU modifies another. Third, the previous studies manually aligned and cropped the face images; the system in [96] omitted this preprocessing step. In summary, using Gabor wavelets alone, recognition is adequate only for AU6, AU43, and AU0. Using geometric features alone, recognition is consistently good and shows high AU recognition rates, with the exception of AU7. Combining Gabor wavelet coefficients and geometric features increases the recognition performance for all AUs.

In the system of [4], 3D pose and face geometry are estimated from hand-labeled feature points by using a canonical wire-mesh face model [73]. Once the 3D pose is estimated, faces are rotated to the frontal view and warped to a canonical face geometry. Then, the face images are automatically scaled and cropped to a standard face with a fixed distance between the two eyes. Difference images are obtained by subtracting a neutral expression face. A family of Gabor wavelets at five spatial frequencies and eight orientations is applied to the difference images. Instead of at specific locations on a face, the Gabor filters are applied to the whole face image. To provide robustness to lighting conditions and to image shifts, they employed a representation in which the outputs of two Gabor filters in quadrature are squared and then summed. This representation, known as Gabor energy filtering, models complex cells of the primary visual cortex. Recently, Bartlett and her colleagues extended the system with fully automatic face and eye detection. For facial expression analysis, they continue to employ Gabor wavelets as appearance features [5].
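The quadrature idea can be sketched as follows: an even (psi = 0) and an odd (psi = π/2) Gabor filter are applied to the image and their squared responses summed. The filter parameters below are illustrative assumptions, not those of [4].

```python
import cv2
import numpy as np

def gabor_energy(image, ksize=31, sigma=4.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Square and sum the responses of an even (psi=0) and odd (psi=pi/2)
    Gabor filter pair: the "Gabor energy", which is robust to small image
    shifts and overall lighting changes.  Parameter values are illustrative."""
    even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
    odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, np.pi / 2)
    img = image.astype(np.float32)
    r_even = cv2.filter2D(img, cv2.CV_32F, even)
    r_odd = cv2.filter2D(img, cv2.CV_32F, odd)
    return r_even ** 2 + r_odd ** 2

# Hypothetical usage on a synthetic difference image
diff_image = np.random.rand(96, 96).astype(np.float32)
energy = gabor_energy(diff_image)
print(energy.shape)  # (96, 96)
```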

Wen and Huang [102] use a ratio-image-based method to extract appearance features, which is independent of a person’s face albedo. To limit the effects of tracking noise and individual variation, they extracted the appearance features from facial regions instead of from individual points, and then used a weighted average as the final feature for each region. Eleven regions were defined on the geometric-motion-free texture map of the face (Fig. 19.6d). Gabor wavelets with two spatial frequencies and six orientations are used to calculate the Gabor coefficients. A 12-dimensional appearance feature vector is computed in each of the 11 selected regions by weighted averaging of the Gabor coefficients. To track face appearance variations, an appearance model (texture image) is trained using a Gaussian mixture model based on exemplars. An online adaptation algorithm is then employed to progressively adapt the appearance model to new conditions such as lighting changes or differences between individuals. See [102] for details.
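A minimal sketch of the region-averaging step, assuming a rectangular region mask and uniform weights by default (the actual weights, regions, and filter settings of [102] differ): the 2 frequencies × 6 orientations = 12 Gabor responses are averaged over the region into one 12-dimensional vector.

```python
import cv2
import numpy as np

def region_appearance_feature(texture_map, region_mask, weights=None):
    """Weighted average of 2 frequencies x 6 orientations = 12 Gabor responses
    over one facial region of a geometry-normalized texture map.  A hypothetical
    sketch of the region-averaging idea, not the exact method of [102]."""
    img = texture_map.astype(np.float32)
    if weights is None:                       # default: uniform weights in region
        weights = region_mask.astype(np.float32)
    weights = weights * (region_mask > 0)
    feature = []
    for lambd in (6.0, 12.0):                 # two spatial frequencies (illustrative)
        for k in range(6):                    # six orientations
            theta = np.pi * k / 6
            kern = cv2.getGaborKernel((21, 21), 0.5 * lambd, theta, lambd, 0.5, 0)
            resp = cv2.filter2D(img, cv2.CV_32F, kern)
            feature.append((resp * weights).sum() / (weights.sum() + 1e-8))
    return np.array(feature)                  # 12-dimensional region feature

# Hypothetical usage: one rectangular region on a synthetic texture map
tex = np.random.rand(128, 128).astype(np.float32)
mask = np.zeros_like(tex)
mask[40:60, 30:70] = 1.0
print(region_appearance_feature(tex, mask).shape)  # (12,)
```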

Facial Expression Recognition

The last step of AFEA systems is to recognize facial expression based on the extracted features. Many classifiers have been applied to expression recognition, such as neural networks (NN), support vector machines (SVM), linear discriminant analysis (LDA), K-nearest neighbor, multinomial logistic ridge regression (MLR), hidden Markov models (HMM), tree-augmented naive Bayes, RankBoost, and others. Some systems use only a rule-based classification based on the definition of the facial actions. Here, we group expression recognition methods into frame-based and sequence-based methods. A frame-based recognition method uses only the current frame, with or without a reference image (mainly a neutral face image), to recognize the expressions of that frame. A sequence-based recognition method uses the temporal information of the sequence to recognize the expressions for one or more frames. Table 19.7 summarizes the recognition methods, recognition rates, recognition outputs, and the databases used in the most recent systems. For systems that used more than one classifier, the best performance for the person-independent test has been selected.

Frame-Based Expression Recognition Frame-based expression recognition does not use temporal information from the input images. It uses the information of the current input image, with or without a reference frame. The input image can be a static image or a frame of a sequence that is treated independently. Several methods can be found in the literature for facial expression recognition, such as neural networks [95, 96, 116], support vector machines [4, 37], linear discriminant analysis [17], Bayesian networks [15], and rule-based classifiers [70].

Tian et al. [96] employed a neural-network-based recognizer to recognize FACS AUs. They used three-layer neural networks with one hidden layer to recognize AUs by a standard back-propagation method [78]. Separate networks are used for the upper and lower face. The inputs can be the normalized geometric features, the appearance features, or both. The outputs are the recognized AUs. The network is trained to respond to the designated AUs whether they occur alone or in combination. When AUs occur in combination, multiple output nodes are excited. To our knowledge, the system of [96] was the first to handle AU combinations. Although several other systems tried to recognize AU combinations [17, 23, 57], they treated each combination as if it were a separate AU. More than 7000 different AU combinations have been observed [83], and a system that can handle AU combinations is more efficient. An overall recognition rate of 95.5% was achieved for the neutral expression and 16 AUs, whether they occurred individually or in combination.
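To illustrate how multiple output nodes can represent AU combinations, here is a hedged multi-label sketch using scikit-learn's MLPClassifier (which accepts a binary indicator target per AU); it is not the network of [96], and the feature dimensions and AU set are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical toy data: 24-dimensional normalized geometric features and a
# binary indicator for each of 3 AUs (multi-label, so combinations are allowed).
rng = np.random.RandomState(0)
X = rng.rand(200, 24)
Y = (rng.rand(200, 3) > 0.5).astype(int)     # e.g., columns = AU1, AU2, AU4

# One hidden layer trained with back-propagation; several output units may be
# "excited" at once, so AU combinations are handled without enumerating them.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X, Y)

pred = net.predict(X[:5])                    # each row is a set of active AUs
print(pred)
```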

Table 19.7 FACS AU or expression recognition of recent advances. SVM, support vector machines; MLR, multinomial logistic ridge regression; HMM, hidden Markov models; BN, Bayesian network; GMM, Gaussian mixture model; RegRankBoost, RankBoost with l1 regularization

Systems | Recognition methods | Recognition rate | Recognized outputs | Databases
[94-96] | Neural network (frame) | 95.5% | 16 single AUs and their combinations | Ekman-Hager [31], Cohn-Kanade [49]
[18, 67] | Rule-based (sequence) | 100% | Blink, nonblink | Frank-Ekman [40]
[18, 67] | Rule-based (sequence) | 57% | Brow up, down, and non-motion | Frank-Ekman [40]
[37] | SVM + MLR (frame) | 91.5% | 6 basic expressions | Cohn-Kanade [49]
[5] | Adaboost + SVM (sequence) | 80.1% | 20 facial actions | Frank-Ekman [40]
[15] | BN + HMM (frame & sequence) | 73.22% | 6 basic expressions | Cohn-Kanade [49]
[15] | BN + HMM (frame & sequence) | 66.53% | 6 basic expressions | UIUC-Chen [14]
[102] | NN + GMM (frame) | 71% | 6 basic expressions | Cohn-Kanade [49]
[111] | RegRankBoost (frame) | 88% | 6 basic expressions | Cohn-Kanade [49]

In [37], a two-stage classifier was employed to recognize the neutral expression and six emotion-specified expressions. First, SVMs were used as pairwise classifiers; that is, each SVM is trained to distinguish two emotions. Then several approaches, such as nearest neighbor, a simple voting scheme, and multinomial logistic ridge regression (MLR), were tested to convert the representation produced by the first stage into a probability distribution over the six emotion-specified expressions and neutral. The best performance, 91.5%, was achieved by MLR.
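A minimal sketch of a two-stage scheme in this spirit, using scikit-learn: one-vs-one SVMs supply pairwise scores, and a ridge-regularized multinomial logistic regression maps them to class probabilities. The data and dimensions are synthetic, and this is an illustration rather than the exact pipeline of [37].

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(300, 40)                 # hypothetical appearance feature vectors
y = rng.randint(0, 7, 300)            # 7 classes: neutral + 6 basic expressions

# Stage 1: pairwise SVMs (one-vs-one); decision_function returns one score per
# class pair, i.e. 7*6/2 = 21 scores per sample.
svm = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
pair_scores = svm.decision_function(X)

# Stage 2: multinomial logistic regression (L2/ridge regularized by default)
# converts the pairwise scores into a probability distribution over the classes.
mlr = LogisticRegression(max_iter=1000).fit(pair_scores, y)
probs = mlr.predict_proba(pair_scores[:3])
print(probs.shape)                    # (3, 7)
```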

Wen and Huang [102] also employed a two-stage classifier to recognize the neutral expression and six emotion-specified expressions. First, a neural network is used to classify frames as neutral or nonneutral-like [93]. Then Gaussian mixture models (GMMs) are used for the remaining expressions. The overall average recognition rate was 71% for a person-independent test.
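A hedged sketch of such a cascade (not the implementation of [102]): a small neural network first separates neutral from nonneutral frames, and one Gaussian mixture model per expression scores the remaining frames; the class with the highest likelihood is selected.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = rng.rand(400, 12)                          # hypothetical appearance features
y = rng.randint(0, 7, 400)                     # 0 = neutral, 1..6 = expressions

# Stage 1: neutral vs. nonneutral classifier.
neutral_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                            random_state=0).fit(X, (y == 0).astype(int))

# Stage 2: one GMM per nonneutral expression, used as a likelihood model.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
        for c in range(1, 7)}

def classify(x):
    x = x.reshape(1, -1)
    if neutral_net.predict(x)[0] == 1:
        return 0                               # neutral
    scores = {c: g.score(x) for c, g in gmms.items()}
    return max(scores, key=scores.get)         # most likely expression class

print(classify(X[0]))
```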

Yang et al. [111] employ RankBoost with l1 regularization for expression recognition. They also estimate the intensity of expressions by using the output ranking scores. For six emotion-specified expressions in the Cohn-Kanade database, they achieved an 88% recognition rate.

Sequence-Based Expression Recognition The sequence-based recognition method uses the temporal information of the sequences to recognize the expressions of one or more frames. To exploit this temporal information, techniques such as HMMs [4, 15, 17, 57], recurrent neural networks [52, 77], and rule-based classifiers [18] have been employed in facial expression analysis. The systems of [4, 15, 18] employed a sequence-based classifier. Note that the systems of [4] and [18] are comparative studies of FACS AU recognition in spontaneously occurring behavior using the same database [40]. In that database, subjects were ethnically diverse, AUs occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. So far, only a few systems have tried to recognize AUs or expressions in spontaneously occurring behavior [4, 5, 18, 97].

The system of [18] employed a rule-based classifier to recognize AUs of the eyes and brows in spontaneously occurring behavior by using a number of frames in the sequence. The algorithm achieved an overall accuracy of 98% for three eye behaviors: blink (AU 45), flutter, and no blink (AU 0). Flutter is defined as two or more rapidly repeating blinks (AU 45) with only partial eye opening (AU 41 or AU 42) between them. An accuracy of 100% was achieved for distinguishing blinks from nonblinks. Accuracy across the three categories in the brow region (brow-up, brow-down, nonbrow motion) was 57%. The number of brow-down actions was too small for reliable point estimates. Omitting brow-down from the analysis, recognition accuracy would increase to 80%. Human FACS coders had similar difficulty with brow-down, agreeing only about 50% of the time in this database. The small number of occurrences was no doubt a factor for the FACS coders as well. The combination of occlusion from eyeglasses and correlation of forward head pitch with brow-down complicated FACS coding.

The system of [4] first employed SVMs to recognize AUs using Gabor representations. It then used hidden Markov models (HMMs) to deal with AU dynamics. HMMs were applied in two ways: (1) taking Gabor representations as input, and (2) taking the outputs of the SVMs as input. When Gabor representations were used as input to train HMMs, the Gabor coefficients were reduced to 100 dimensions per image using PCA. Two HMMs, one for blinks and one for nonblinks, were trained and tested using leave-one-out cross-validation. A best recognition rate of 95.7% was obtained using five states and three Gaussians. Using the SVM outputs as input to train HMMs with five states and three Gaussians, they achieved a 98.1% recognition rate for blink versus nonblink. Accuracy across the three categories in the brow region (brow-up, brow-down, nonbrow motion) was 70.1% for HMMs trained on PCA-reduced Gabor coefficients and 66.9% for HMMs trained on SVM outputs. Omitting brow-down, the accuracy increases to 90.9% and 89.5%, respectively.
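The one-HMM-per-class decision rule can be sketched with the hmmlearn package (an assumption; the original work did not necessarily use it): each class HMM is trained on its own sequences, and a test sequence is assigned to the class whose HMM yields the higher log-likelihood. The five-state, three-Gaussian configuration mirrors the one reported above, but the data are synthetic.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.RandomState(0)

def make_sequences(n_seq, length, dim, offset):
    """Synthetic stand-in for per-frame feature vectors (e.g., SVM outputs or
    PCA-reduced Gabor coefficients)."""
    return [rng.randn(length, dim) + offset for _ in range(n_seq)]

def fit_hmm(sequences):
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    # Five states with three Gaussians per state, as in the configuration above.
    model = GMMHMM(n_components=5, n_mix=3, covariance_type="diag",
                   n_iter=50, random_state=0)
    return model.fit(X, lengths)

blink_hmm = fit_hmm(make_sequences(20, 30, 4, offset=+0.5))
nonblink_hmm = fit_hmm(make_sequences(20, 30, 4, offset=-0.5))

test = rng.randn(30, 4) + 0.5
label = "blink" if blink_hmm.score(test) > nonblink_hmm.score(test) else "nonblink"
print(label)
```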

Cohen et al. [15] first evaluated frame-based Bayesian network classifiers such as Gaussian naive Bayes (NB-Gaussian), Cauchy naive Bayes (NB-Cauchy), and tree-augmented naive Bayes (TAN), focusing on changes in distribution assumptions and feature dependency structures. They also proposed a new HMM architecture to segment and recognize the neutral expression and six emotion-specified expressions from video sequences. For the person-independent test on the Cohn-Kanade database [49], the best performance, a recognition rate of 73.2%, was achieved by the TAN classifier. See Cohen et al. [15] for details.


Fig. 19.7 Example of the face and body feature extraction employed in the FABO system [45]. a Face features. b Body features—shoulder extraction procedure. Shoulder regions found and marked on the neutral frame (first row), estimating the movement within the shoulder regions using optical flow (second row)

Multimodal Expression Analysis

Facial expression is one of several modes of nonverbal communication. The message value of the various modes may differ depending on context and may be congruent or discrepant with each other. Recently, several researchers have integrated facial expression analysis with other modes such as gesture, prosody, and speech [20, 44, 45, 84]. Cohn et al. [20] investigated the relation between facial actions and vocal prosody for depression detection. They achieved the same accuracy of 79% using facial actions and using vocal prosody; no results were reported for the combination. Gunes and Piccardi [45] combined facial actions and body gestures to recognize nine expressions. They found that recognition from the fused face and body modalities performs better than recognition from the face or the body modality alone.

For facial feature extraction in [45], following frame-by-frame face detection, a combination of appearance features (e.g., wrinkles) and geometric features (e.g., feature points) is extracted from the face videos. A reference frame with the neutral expression is employed for feature comparison. For body feature extraction and tracking, they detected and tracked the head, shoulders, and hands in the body videos by using the mean-shift method. Figure 19.7 shows examples of the face and body feature extraction in [45]. A total of 152 features for the face modality and 170 features for the body modality were used for the detection of face and body temporal segments with various classifiers, including both frame-based and sequence-based methods. They tested the system on the FABO database [44] and achieved a recognition rate of 35.22% using only face features and 76.87% using only body features. The recognition rate increased to 85% when face and body features were combined. More details can be found in [45].
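A hedged sketch of feature-level fusion in this spirit: face and body feature vectors are simply concatenated before classification. The classifier choice and the synthetic data are illustrative; only the 152/170 feature dimensions come from [45].

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n = 240
face_feats = rng.rand(n, 152)      # 152 face features per sample, as in [45]
body_feats = rng.rand(n, 170)      # 170 body features per sample, as in [45]
labels = rng.randint(0, 9, n)      # nine expression classes

# Feature-level fusion: concatenate the two modalities before classification.
fused = np.hstack([face_feats, body_feats])

for name, X in [("face only", face_feats), ("body only", body_feats),
                ("fused", fused)]:
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=3).mean()
    print(name, round(acc, 3))     # synthetic data, so accuracies are ~chance
```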

Table 19.8 Summary of databases for facial expression analysis

Databases | Images/Videos | Subjects | Expressions | Neutral | Spontaneous | Multimodal | 3D data
Cohn-Kanade [49] | videos | 210 | basic expressions, single AUs, AU combinations | yes | no | frontal face, 30° face | no
FABO [44] | videos | 23 | 9 expressions, hand gestures | yes | no | frontal face, upper body | no
JAFFE [59] | images | 10 | 6 basic expressions | yes | no | frontal face | no
MMI [71] | images, videos | 19 | single AUs, AU combinations | yes | no | frontal face, profile face | no
RU-FACS [5] | videos | 100 | AU combinations | yes | yes | 4 face poses, speech | no
BU-3DFE [112] | static | 100 | 6 basic expressions | yes | no | face | yes
BU-4DFE [113] | dynamic | 101 | 6 basic expressions | yes | no | face | yes

Databases for Facial Expression Analysis

Standard databases play an important role in training, evaluating, and comparing different methods and systems for facial expression analysis. Several publicly available databases (images or videos) exist for conducting comparative tests of expression analysis [5, 24, 40, 44, 49, 59, 63, 71, 74, 88, 112, 113]. In this topic, we summarize several commonly used standard databases for facial expression analysis in Table 19.8.

The Cohn-Kanade AU-Coded Face Expression Database (Cohn-Kanade) [49] is the most commonly used comprehensive database in research on automated facial expression analysis. In the Cohn-Kanade database, facial behavior was recorded from two views of the face (frontal view and 30-degree view) in 210 adults between the ages of 18 and 50 years. They were 69% female, 31% male, 81% Euro-American, 13% Afro-American, and 6% other groups. In the database, 1917 image sequences from the frontal-view videos of 182 subjects have been FACS coded for either target action units or the entire sequence.

The Japanese Female Facial Expression (JAFFE) Database [59] contains 213 images of the 6 basic facial expressions and the neutral expression posed by 10 Japanese female subjects. It is the first downloadable database for facial expression analysis.

The MMI Facial Expression Database (MMI) [71] contains more than 1500 samples of both static images and image sequences of faces from 19 subjects in frontal and profile views, displaying various facial expressions of emotion, single AUs, and AU combinations. It also identifies the temporal segments (onset, apex, offset) of the shown AU and emotion displays.

The Bi-modal Face and Body Gesture Database (FABO) [44] contains image sequences captured by two synchronized cameras (one for frontal-view facial actions and another for frontal-view upper-body gestures, as shown in Fig. 19.7) from 23 subjects. The database is coded for neutral and nine general expressions (uncertainty, anger, surprise, fear, anxiety, happiness, disgust, boredom, and sadness) based on facial actions and body gestures.

The RU-FACS Spontaneous Expression Database (RU-FACS) [5] is a dataset of spontaneous facial behavior with rigorous FACS coding. The dataset consists of 100 subjects participating in a ‘false opinion’ paradigm, with speech-related mouth movements and out-of-plane head rotations, recorded from four views of the face (frontal, left 45°, right 45°, and up about 22°). To date, image sequences from the frontal view of 33 subjects have been FACS coded. The database is being prepared for release.

The Binghamton University 3D Facial Expression Database (BU-3DFE) [112] contains 2500 3D facial expression models, including neutral and the 6 basic expressions, from 100 subjects. Associated with each 3D expression model are two corresponding facial texture images captured from two views (about +45° and -45°). The BU-4DFE database [113] extends the static 3D data of BU-3DFE to dynamic 3D sequences at a video rate of 25 frames per second. The BU-4DFE database contains 606 3D facial expression sequences captured from 101 subjects. Associated with each 3D expression sequence is a facial texture video with a high resolution of 1040 × 1329 pixels per frame.

Open Questions

Although many recent advances and successes in automatic facial expression analysis have been achieved, as described in the previous sections, many questions remain open, for which answers must be found. Some major points are considered here.

1. How do humans correctly recognize facial expressions?

Research on human perception and cognition has been conducted for many years, but it is still unclear how humans recognize facial expressions. Which types of parameters are used by humans and how are they processed? By comparing human and automatic facial expression recognition we may be able to advance our understanding of each and discover new ways of improving automatic facial expression recognition.

2. Is it always better to analyze finer levels of expression?

Although it is often assumed that more fine-grained recognition is preferable, the answer depends on both the quality of the face images and the type of application. Ideally, an AFEA system should recognize all action units and their combinations. In high-quality images, this goal seems achievable; emotion-specified expressions can then be identified based on the emotion prototypes identified in the psychology literature. For each emotion, prototypic action units have been identified. In lower-quality image data, only a subset of action units and emotion-specified expressions may be recognized, and recognition of emotion-specified expressions directly may be needed. We seek systems that become “self-aware” about the degree of recognition that is possible given the information in the images and that adjust their processing and outputs accordingly. Recognition from coarse to fine, for example from emotion-specified expressions to subtle action units, depends on image quality and the type of application. Indeed, for some purposes, it may be sufficient that a system is able to distinguish between positive, neutral, and negative expressions, or recognize only a limited number of target action units, such as brow lowering to signal confusion, cognitive effort, or negative affect.

3. Is there any better way to code facial expressions for computer systems?

Almost all existing work has focused on recognition of facial expression, either emotion-specified expressions or FACS-coded action units. Emotion-specified expressions describe expressions at a coarse level and are not sufficient for some applications. Although FACS was designed to detect subtle changes in facial features, it is a human-observer-based system with only limited ability to distinguish intensity variation. Intensity variation is scored at an ordinal level; interval-level measurement is not defined and anchor points may be subjective. Challenges remain in designing a computer-based facial expression coding system with more quantitative definitions.

4. How do we obtain reliable ground truth?

Whereas some approaches have used FACS, which is a criterion measure widely used in the psychology community for facial expression analysis, most vision-based work uses emotion-specified expressions. A problem is that emotion-specified expressions are not well defined. The same label may apply to very different facial expressions, and different labels may refer to the same expressions, which confounds system comparisons. Another problem is that the reliability of labels typically is unknown. With few exceptions, investigators have failed to report interobserver reliability and the validity of the facial expressions they have analyzed. Often there is no way to know whether subjects actually showed the target expression or whether two or more judges would agree that the subject showed the target expression. At a minimum, investigators should make explicit labeling criteria and report interobserver agreement for the labels. When the dynamics of facial expression are of interest, temporal resolution should be reported as well. Because intensity and duration measurements are critical, it is important to include descriptive data on these features as well. Unless adequate data about stimuli are reported, discrepancies across studies are difficult to interpret. Such discrepancies could be due to algorithms or to errors in ground truth determination.

5. How do we recognize facial expressions in real life?

Real-life facial expression analysis is much more difficult than the posed actions studied predominantly to date. Head motion, low-resolution input images, absence of a neutral face for comparison, and low-intensity expressions are among the factors that complicate facial expression analysis. Recent work on 3D modeling of spontaneous head motion and on action unit recognition in spontaneous facial behavior are exciting developments. How elaborate a head model needs to be for such work is still a research question. A cylindrical model is relatively robust and has proven effective as part of a blink detection system [104], but highly parametric, generic, or even custom-fitted head models may prove necessary for more complete action unit recognition.

Most works to date have used a single, passive camera. Although there are clear advantages to approaches that require only a single passive camera or video source, multiple cameras are feasible in a number of settings and can be expected to provide improved accuracy. Active cameras can be used to acquire high resolution face images [46]. Also, the techniques of super-resolution can be used to obtain higher resolution images from multiple low resolution images [2]. At present, it is an open question how to recognize expressions in situations in which a neutral face is unavailable, expressions are of low intensity, or other facial or nonverbal behaviors, such as occlusion by the hands, are present.

6. How do we best use the temporal information?

Almost all work has emphasized recognition of discrete facial expressions, whether defined as emotion-specified expressions or action units. The timing of facial actions may be as important as their configuration. Recent work by our group has shown that intensity and duration of expression vary with context and that the timing of these parameters is highly consistent with automatic movement [85]. Related work suggests that spontaneous and deliberate facial expressions may be discriminated in terms of timing parameters [19], which is consistent with neuropsychological models [75] and may be important to lie detection efforts. Attention to timing is also important in guiding the behavior of computer avatars. Without veridical timing, believable avatars that convey intended emotions and communicative intents may be difficult to achieve.

7. How may we integrate facial expression analysis with other modalities?

Facial expression is one of several modes of nonverbal communication. The message value of various modes may differ depending on context and may be congruent or discrepant with each other. An interesting research topic is the integration of facial expression analysis with that of gesture, prosody, and speech. Combining facial features with acoustic features would help to separate the effects of facial actions due to facial expression and those due to speech-related movements. The combination of facial expression and speech can be used to improve speech recognition and multimodal person identification [39].

Conclusions

Five recent trends in automatic facial expression analysis are (1) diversity of facial features in an effort to increase the number of expressions that may be recognized; (2) recognition of facial action units and their combinations rather than more global and easily identified emotion-specified expressions; (3) more robust systems for face acquisition, facial data extraction and representation, and facial expression recognition to handle head motion (both in-plane and out-of-plane), occlusion, lighting change, and low-intensity expressions, all of which are common in spontaneous facial behavior in naturalistic environments; (4) fully automatic and real-time AFEA systems; and (5) combination of facial actions with other modes such as gesture, prosody, and speech. All of these developments move AFEA toward real-life applications. Several databases that address most problems for deliberate facial expression analysis have been released to researchers for comparative tests of their methods. Databases with ground-truth labels, preferably both action units and emotion-specified expressions, are needed for the next generation of systems, which are intended for naturally occurring behavior (spontaneous and multimodal) in real-life settings. Work in spontaneous facial expression analysis is just now emerging and potentially will have significant impact across a range of theoretical and applied topics.
