Dataset Viewer
Auto-converted to Parquet
title: string
authors: string
abstract: string
pdf_url: string
source_url: string
id: string
related_notes: string
year: string
conference: string
content: string
content_meta: string
Brain2GAN; Reconstructing perceived faces from the primate brain via StyleGAN3
Thirza Dado, Paolo Papale, Antonio Lozano, Lynn Le, Feng Wang, Marcel van Gerven, Pieter R. Roelfsema, Yağmur Güçlütürk, Umut Güçlü
Neural coding characterizes the relationship between stimuli and their corresponding neural responses. The usage of synthesized yet photorealistic reality by generative adversarial networks (GANs) allows for superior control over these data: the underlying feature representations that account for the semantics in synthesized data are known a priori and their relationship is perfect rather than approximated post-hoc by feature extraction models. We exploit this property in neural decoding of multi-unit activity responses that we recorded from the primate brain upon presentation with synthesized face images in a passive fixation experiment. The face reconstructions we acquired from brain activity were astonishingly similar to the originally perceived face stimuli. This provides strong evidence that the neural face manifold and the disentangled w-latent space conditioned on StyleGAN3 (rather than the z-latent space of arbitrary GANs or other feature representations we encountered so far) share how they represent the high-level semantics of the high-dimensional space of faces.
https://openreview.net/pdf?id=hT1S68yza7
https://openreview.net/forum?id=hT1S68yza7
hT1S68yza7
[{"review_id": "LYZxYFJqI5", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "Summary: Decoding neural activity while subjects are presented with images into the w-latent space of StyleGAN3.\n\nStrengths: A new dataset. The first reconstruction of faces from intracranial data.\n\nWeaknesses: Limited novelty and a lack of clarity about what the ICLR community can learn from this work. It's unclear how the dataset could be reused, it's fairly small (2 subjects), and designed very specifically for this experiment. An audience that would appreciate the value of decoding faces from intracranial data on its own would be more appropriate and receptive.", "strengths": "", "weaknesses": "", "comments": "", "overall_score": "Reject", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "hfXPQVdFv6", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "I thank the authors for their response.The revised version of the paper has additional subject data and clarification about the relationship with previous work. I think this is an improvement. The authors also clarify that they are the first to apply their method to intracranial data. I will recommend acceptance and update my score to a 6. My main concern is with the novelty for the ML community. From what I can tell, the main takeaway is that neural decoding can be done better with better off-the-shelf representations, i.e., from StyleGAN3. The downstream consequences of this finding for ML researchers seems limited.", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "nU330zeBaM", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "**Could we see the full results of the permutation test?**\n\nWe now included plots with the six similarity metrics over iterations for randomly sampled latents/faces as well as our predictions from brain activity (Appendix A.2). The random samples are never closer to the ground-truth than our predictions which indicates that our high decoding performance is not just a consequence of the high-quality images by StyleGAN.", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "jU7sS9Tl37M", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "**The analysis of reconstructed faces in terms of attributes is also introduced in these previous works.**\n\nThe reviewer is right that an attribute similarity metric has been used by [Dado et al., 2022]. Importantly, this metric was based on the decision boundaries identified by SVMs in a supervised setting. The attribute similarity in our work is based on the intrinsic latent semantics of the generator weights that are extracted by the unsupervised SeFa algorithm [Shen & Zhou, 2021], which makes it more straightforward to use due to the lack of label requirements. 
As such, we introduce a new \"SeFa\" attribute similarity metric for neural decoding.", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "jiXHfBhEjN", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "**Why the author use a AlexNet pretrained on object recognition task rather than on the face recognition task? What is the different between the cosine similarity from the AlexNet and the VGGFace?**\n\nWe use the features from alexnet pretrained on object recognition as well as the features from VGG16 pretrained on face recognition to evaluate similarity of stimuli and their reconstructions. Earlier studies have shown that object recognition models are the most accurate to explain neural representations during visual perception, so we include this metric in our analysis for completeness. We can explain the difference between alexnet and VGG16 similarity because our study uses face images and alexnet models more generic features whereas VGG16 is better at detecting differences in facial features.\n\n**For the conclusions in the summary, the author should add some qualitative comparison to make a visual support. Also, the qualitative comparison can give the reader a visual feeling of the difference in image space caused by the difference in six decoding performance evaluation metrics.**\n\nWe agree with the reviewer that the qualitative reconstructions from z-space together with a visual guide regarding perceptual similarity would be a valuable addition to the manuscript. The reconstructions from z-latent space can be found in Appendix A.1 and the visual guide in Appendix A.4. \n\n**The gap between the z-latent and w-latent in the “Alexnet sim” and “VGGFace sim” are much more lower than in the “Lat. sim.” and the “Lat. corr.”, how does the author explain this phenomenon? How much does the difference in “Alexnet sim” and “VGGFace sim” affect the reconstruction image quality? If the visualized difference is not obvious, it will weaken the conclustion that “This provides strong evidence that the neural face manifold and the disentangled w-latent space conditioned on StyleGAN3 (rather than the z-latent space of arbitrary GANs or other feature representations we encountered so far) share how they represent the high-level semantics of the high-dimensional space of faces” in the abstract.**\n**More comparison are suggest to be added between the reconstruction quality of z-latent and w-latent.**\n\nThe different metrics have different scales and dimensionality, and are thus not directly comparable among each other. That being said, we have now included how different metrics rank the reconstructions (Appendix A.4) as well as the reconstructions from z-latent space (Appendix A.1), showing obvious differences in reconstruction accuracy. 
\n\n**The experiments are not based on a public dataset and no implementation was released, which makes difficulty of reproduction.**\n \nThe complete dataset will be released upon publication so that the experiment can be reproduced and used as a benchmark for future studies.\n", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "uYZqMt1_JZ", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "**I believe this paper is more experimental. There is not much novelty.**\n\nWe clarified the methodological and empirical novelty of our contributions by including the following section in the introduction of the updated manuscript:\n\n*“Deep convnets have been used to explain neural responses during visual perception, imagery and dreaming (Horikawa & Kamitani, 2017b;a; St-Yves & Naselaris, 2018; Shen et al., 2019b;a; Gucluturk et al., 2017; VanRullen & Reddy, 2019; Dado et al., 2022). To our knowledge, the latter three are the most similar studies that also attempted to decode perceived faces from brain activity. Gucluturk et al. (2017) used the feature representations from VGG16 pretrained on face recognition (i.e., trained in a supervised setting). Although more biologically plausible, unsupervised learning paradigms seemed to appear less successful in modeling neural representations in the primate brain than their supervised counterparts (Khaligh, 2014) with the exception of VanRullen & Reddy (2019) and Dado et al. (2022) who used adversarially learned latent representations of a variational autoencoder-GAN (VAE-GAN) and a GAN, respectively. Importantly, Dado et al. (2022) used synthesized stimuli to have direct access to the ground-truth latents instead of using post-hoc approximate inference, as VAE-GANs do by design.*\n\n*The current work improves the experimental paradigm of Dado et al. (2022) and provides several novel contributions: face stimuli were synthesized by a feature-disentangled GAN and presented to a macaque with cortical implants in a passive fixation task. A decoder model was fit on the recorded brain activity and the ground-truth latents. Reconstructions were created by feeding the predicted latents from brain activity from a held-out test set to the GAN. Previous neural decoding studies used noninvasive fMRI signals that have a low signal-to-noise ratio and poor temporal resolution leading to a reconstruction bottleneck and precluding detailed spatio-temporal analysis. This work is the first to decode photorealistic faces from intracranial recordings which resulted in state-of-the-art reconstructions as well as new opportunities to study the brain. First, the high performance of decoding via w-latent space indicates the importance of disentanglement to explain neural representations upon perception, offering a new way forward for the previously limited yet biologically more plausible unsupervised models of brain function. Second, we show how decoding performance evolves over time and observe that the largest contribution is explained by the inferior temporal (IT) cortex which is located at the end of the visual ventral pathway. Third, the application of Euclidean vector arithmetic to w-latents and brain activity yielded similar results which further suggests functional overlap between these representational spaces. 
Taken together, the high quality of the neural recordings and feature representations resulted in novel and unprecedented experimental findings that not only demonstrate how advances in machine learning extend to neuroscience but also will serve as an important benchmark for future research.”*\n\n**It might be hard to reproduce the result, since it needs a macaque.**\n\nThe data and code will be shared upon publication which will be used as a benchmark for future research and will be one of the largest and highest-quality publicly available datasets of its kind.\n\n**Some of the paper’s claims have minor issues. A few statements are not well-supported, or require small changes to be made correct.**\n\nWe double-checked our claims and updated them where necessary. We are happy to apply more specific changes to any remaining issues when pointed out.\n", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "pImSBKf_LyW", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "**First attempt at reconstructing faces from intracranial data? Concerns about originality. Relationship with previous works must be more clearly explained. Not a lot of novelty.**\n\nWe now added the following section under the introduction:\n\n_“Deep convnets have been used to explain neural responses during visual perception, imagery and dreaming (Horikawa & Kamitani, 2017b;a; St-Yves & Naselaris, 2018; Shen et al., 2019b;a; Gucluturk et al., 2017; VanRullen & Reddy, 2019; Dado et al., 2022). To our knowledge, the latter three are the most similar studies that also attempted to decode perceived faces from brain activity. Gucluturk et al. (2017) used the feature representations from VGG16 pretrained on face recognition (i.e., trained in a supervised setting). Although more biologically plausible, unsupervised learning paradigms seemed to appear less successful in modeling neural representations in the primate brain than their supervised counterparts (Khaligh, 2014) with the exception of VanRullen & Reddy (2019) and Dado et al. (2022) who used adversarially learned latent representations of a variational autoencoder-GAN (VAE-GAN) and a GAN, respectively. Importantly, Dado et al. (2022) used synthesized stimuli to have direct access to the ground-truth latents instead of using post-hoc approximate inference, as VAE-GANs do by design._\n\n_The current work improves the experimental paradigm of Dado et al. (2022) and provides several novel contributions: face stimuli were synthesized by a feature-disentangled GAN and presented to a macaque with cortical implants in a passive fixation task. A decoder model was fit on the recorded brain activity and the ground-truth latents. Reconstructions were created by feeding the predicted latents from brain activity from a held-out test set to the GAN. Previous neural decoding studies used noninvasive fMRI signals that have a low signal-to-noise ratio and poor temporal resolution leading to a reconstruction bottleneck and precluding detailed spatio-temporal analysis. This work is the first to decode photorealistic faces from intracranial recordings which resulted in state-of-the-art reconstructions as well as new opportunities to study the brain. 
First, the high performance of decoding via w-latent space indicates the importance of disentanglement to explain neural representations upon perception, offering a new way forward for the previously limited yet biologically more plausible unsupervised models of brain function. Second, we show how decoding performance evolves over time and observe that the largest contribution is explained by the inferior temporal (IT) cortex which is located at the end of the visual ventral pathway. Third, the application of Euclidean vector arithmetic to w-latents and brain activity yielded similar results which further suggests functional overlap between these representational spaces. Taken together, the high quality of the neural recordings and feature representations resulted in novel and unprecedented experimental findings that not only demonstrate how advances in machine learning extend to neuroscience but also will serve as an important benchmark for future research.”_\n\n**The difference between the w-latent space and z-latent space could be explained more clearly.**\n\nWe now added the following explanation under 2.2.1 Stimuli:\n\n_“That is, the original z-latent space is restricted to follow the data distribution that it is trained on (e.g., older but not younger people wear eyeglasses in the training set images) and such biases are entangled in the z-latents. The less entangled w-latent space overcomes this such that unfamiliar latent elements can be mapped to their respective visual features [Karras, 2019].\"_\n\n**The novel contribution of this work seems mainly to be the newly collected data. But even this is only done for one subject.**\n\nWe now performed the experiment with another macaque as well (Appendix A.3). As such, we report the quantitative results for two subjects which is common in the field.\n\n**Some figures (figure 7) have hard to read portions.**\n\nWe now changed the texture of VGG16 and showed the outcomes in three distinct graphs.\n\n**Could we see the full results of the permutation test?**\n\nYes, we are currently re-running the permutation analyses to report the average closeness of random vectors to the ground-truth vectors and will update the manuscript with these results.\n\n**figure 8B: typo - \"Brunet\" --> \"Brunette man\"**\n\nThis suggestion is now incorporated.\n\n**Section 2.3.2 Yi is defined, but I can't see where it is ever used?**\n\nWe now double-checked all the defined variables and corrected them where necessary. \n\n**Plot of all electrode locations?**\n\nWe now added a schematic illustration showing the electrode placings (Fig. 3).\n", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": ""}, {"review_id": "VUgdajX6Uew", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "The reconstruction results are impressive, but have already been accomplished in many previous works. The decoding techniques are not significantly different than what was introduced in prior work, although the data is new. But even then, the data consists only of neural recordings for a single subject.", "strengths": "# strengths\n- New macaque monkey data.\n- This seems like the first attempt at reconstructing faces from intracranial data. (Is it?) If so, that should be mentioned somewhere\n- The reconstructed images are impressive\n\n# weaknesses\n- I have concerns about the originality of this work. 
Decoding high quality faces from brain signals has been previously accomplished (Dado et al. 2021, VanRullen and Reddy 2019, Güçlütürk and Güçlü 2017). The decoding and encoding techniques, namely the use of GAN latent spaces, have already been discussed in previous works. In this work, the authors use a slightly different model (StyleGAN3 vs StyleGAN), but the overall methods remain the same.\n- The analysis of reconstructed faces in terms of attributes is also introduced in these previous works. (The authors do acknowledge this)\n- I think the paper could be much improved if the relationship with previous works were more clearly explained.\n- The novel contribution of this work seems mainly to be the newly collected data. But even this is only done for one subject.\n- The difference between the $w$ latent space and $z$ latent space could be explained more clearly. It's mentioned that the $w$ latent space is obtained by passing the $z$ latent space through an MLP, but since it results in much better reproductions (relative to $z$), could the authors explain a little bit more about what makes the two spaces different?\n\n## references\n- Dado et al. 2021 Hyperrealistic neural decoding: Reconstructing faces from fMRI activations via the GAN latent space (https://www.biorxiv.org/content/10.1101/2020.07.01.168849v3.full)\n- VanRullen and Reddy 2019 Reconstructing faces from fMRI patterns using deep generative neural networks https://www.nature.com/articles/s42003-019-0438-y\n- Güçlütürk and Güçlü et al. 2017 Reconstructing perceived faces from brain activations with deep adversarial neural decoding", "weaknesses": "", "comments": "", "overall_score": "6: marginally above the acceptance threshold", "confidence": "3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.", "contribution": {"technical": {"rating": 2, "description": "The contributions are only marginally significant or novel."}, "empirical": {"rating": 1, "description": "The contributions are neither significant nor novel."}, "overall_assessment": "not significant"}, "correctness": "3: Some of the paper’s claims have minor issues. A few statements are not well-supported, or require small changes to be made correct.", "clarity_quality_novelty_reproducibility": "- originality: many previous works should be mentioned (see above section). As it stands, I don't think this paper adds a lot of novelty.\n- clarity: Clarity is good. Some figures (figure 7) have hard to read portions.\n- reproducability: The authors have said they will release the data, including the macaque data and code\n- quality: The reconstructed images are impressive. But confidence in the significance of these results could be improved by a more complete discussion of the permutation test results. (See below) \n\n# questions/minor comments\n- figure 8B: typo - \"Brunet\" --> \"Brunette man\"\n- Section 2.3.2 $Y_i$ is defined, but I can't see where it is ever used?\n- figure 7A: Hard to see the texture of VGG16\n- If it's simple to produce, could we see a plot of all electrode locations?\n- The authors mention a permutation test in section 3.1? Could we see the full results of the permutation test? What is the average closeness of the random latent vector to the ground truth? 
", "recommendation": "6: marginally above the acceptance threshold", "tldr": ""}, {"review_id": "HFgUHMKVCU", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "It is my first time to know \"neural coding\" field. \nThe experiments that to explore brain actively with neural representation is interesting. \nAlthough the there is a face that StyleGAN3 W space is more disentangled than Z space, the experimental shows its closer relationship to brain perceived stimuli is exciting. \n", "strengths": "Pro: \nThe idea to evaluate StyleGAN3 W space and Z space by using a macaque is interesting. \nThe experiments are extensive and solid. \nCon:\nI believe this paper is more experimental. There is not much novelty.", "weaknesses": "", "comments": "", "overall_score": "5: marginally below the acceptance threshold", "confidence": "2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.", "contribution": {"technical": {"rating": 2, "description": "The contributions are only marginally significant or novel."}, "empirical": {"rating": 2, "description": "The contributions are only marginally significant or novel."}, "overall_assessment": "marginally significant"}, "correctness": "3: Some of the paper’s claims have minor issues. A few statements are not well-supported, or require small changes to be made correct.", "clarity_quality_novelty_reproducibility": "The experiments and writing are clear and sound. It might be hard to reproduce the result, since it needs a macaque.\n", "recommendation": "5: marginally below the acceptance threshold", "tldr": ""}, {"review_id": "kU0QsgirW8", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "This paper provides evidence that the neural face manifold and the disentangled w-latent space conditioned on StyleGAN3 are similar in how they represent the high-level semantics of the high-dimensional space of faces. But more comparison are suggest to be added between the reconstruction quality of z-latent and w-latent.", "strengths": "Strength:The author finds evidence that the neural face manifold and the disentangled w-latent space conditioned on StyleGAN3 are similar in how they represent the high-level semantics of the high-dimensional space of faces. Especially, the StyleGAN3 has never been optimized on neural data.\n\nWeaknesses: \n1) Why the author use a AlexNet pretrained on object recognition task rather than on the face recognition task? What is the different between the cosine similarity from the AlexNet and the VGGFace?\n2) For the conclusions in the summary, “This provides strong evidence that the neural face manifold and the disentangled w-latent space conditioned on StyleGAN3 (rather than the z-latent space of arbitrary GANs or other feature representations we encountered so far) share how they represent the high-level semantics of the high-dimensional space of faces”, the author should add some qualitative comparison to make a visual support. Also, the qualitative comparison can give the reader a visual feeling of the difference in image space caused by the difference in six decoding performance evaluation metrics.\n3) The gap between the z-latent and w-latent in the “Alexnet sim” and “VGGFace sim” are much more lower than in the “Lat. sim.” and the “Lat. corr.”, how does the author explain this phenomenon? 
How much does the difference in “Alexnet sim” and “VGGFace sim” affect the reconstruction image quality? If the visualized difference is not obvious, it will weaken the conclustion that “This provides strong evidence that the neural face manifold and the disentangled w-latent space conditioned on StyleGAN3 (rather than the z-latent space of arbitrary GANs or other feature representations we encountered so far) share how they represent the high-level semantics of the high-dimensional space of faces” in the abstract.", "weaknesses": "", "comments": "", "overall_score": "3: reject, not good enough", "confidence": "4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.", "contribution": {"technical": {"rating": 2, "description": "The contributions are only marginally significant or novel."}, "empirical": {"rating": 2, "description": "The contributions are only marginally significant or novel."}, "overall_assessment": "marginally significant"}, "correctness": "3: Some of the paper’s claims have minor issues. A few statements are not well-supported, or require small changes to be made correct.", "clarity_quality_novelty_reproducibility": "The experiments are not based on a public dataset and no implementation was released, which makes difficulty of reproduction.", "recommendation": "3: reject, not good enough", "tldr": ""}, {"review_id": "hT1S68yza7", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "", "overall_score": "", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommendation": "", "tldr": "Reconstruction of perceived faces by neural decoding of cortical responses from the primate brain"}]
2023
ICLR
# BRAIN2GAN; RECONSTRUCTING PERCEIVED FACES FROM THE PRIMATE BRAIN VIA STYLEGAN3 #### Anonymous authors Paper under double-blind review ## ABSTRACT Neural coding characterizes the relationship between stimuli and their corresponding neural responses. The usage of synthesized yet photorealistic reality by generative adversarial networks (GANs) allows for superior control over these data: the underlying feature representations that account for the semantics in synthesized data are known a priori and their relationship is perfect rather than approximated post-hoc by feature extraction models. We exploit this property in neural decoding of multi-unit activity (MUA) responses that we recorded from the primate brain upon presentation with synthesized face images in a passive fixation experiment. First, the face reconstructions we acquired from brain activity were remarkably similar to the originally perceived face stimuli. Second, our findings show that responses from the inferior temporal (IT) cortex (i.e., the recording site furthest downstream) contributed most to the decoding performance among the three brain areas. Third, applying Euclidean vector arithmetic to neural data (in combination with neural decoding) yielded similar results as on w-latents. Together, this provides strong evidence that the neural face manifold and the feature-disentangled w-latent space conditioned on StyleGAN3 (rather than the z-latent space of arbitrary GANs or other feature representations we encountered so far) share how they represent the high-level semantics of the high-dimensional space of faces. ## 1 INTRODUCTION The field of neural coding aims at deciphering the neural code to characterize how the brain recognizes the statistical invariances of structured yet complex naturalistic environments. *Neural encoding* seeks to find how properties of external phenomena are stored in the brain by modeling the stimulus-response transformation [\(van Gerven, 2017\)](#page-11-0). Vice versa, *neural decoding* aims to find what information about the original stimulus is present in and can be retrieved from the measured brain activity by modeling the response-stimulus transformation [\(Haynes & Rees, 2006;](#page-10-0) [Kamitani](#page-10-1) [& Tong, 2005\)](#page-10-1). In particular, reconstruction is concerned with re-creating the literal stimulus image from brain activity. In both cases, it is common to factorize the direct transformation into two by ![](imgs/hT1S68yza7__page_0_Figure_8.jpeg) <span id="page-0-0"></span>Figure 1: Neural coding. The transformation between sensory stimuli and brain responses via an intermediate feature space. Neural encoding is factorized into a nonlinear "analysis" and a linear "encoding" mapping. Neural decoding is factorized into a linear "decoding" and a nonlinear "synthesis" mapping. invoking an in-between feature space (Figure [1\)](#page-0-0). Not only does this favor data efficiency as neural data is scarce but it also allows one to test alternative hypotheses about the relevant stimulus features that are stored in and can be retrieved from the brain. The brain can effectively represent an infinite amount of visual phenomena to interpret and act upon the environment. Although such neural representations are constructed from experience, novel yet plausible situations that respect the statistics of the natural environment can also be mentally simulated or *imagined* [\(Dijkstra et al., 2019\)](#page-9-0). 
From a machine learning perspective, generative models achieve the same objective: they capture the probability density that underlies a (very large) set of observations and can be used to synthesize new instances which appear to be from the original data distribution yet are suitably different from the observed instances. Particularly, generative adversarial networks (GANs) [\(Goodfellow et al., 2014\)](#page-9-1) are among the most impressive generative models to date, which can synthesize novel yet realistic-looking images (e.g., natural images and images of human faces, bedrooms, cars and cats) [\(Brock et al., 2018;](#page-9-2) [Karras et al., 2017;](#page-10-2) [2019;](#page-10-3) [2021\)](#page-10-4) from randomly-sampled latent vectors. A GAN consists of two neural networks: a generator network that synthesizes images from randomly-sampled latent vectors and a discriminator network that distinguishes synthesized from real images. During training, these networks are pitted against each other until the generated data are indistinguishable from the real data. The bijective latent-to-image relationship of the generator can be exploited in neural decoding to disambiguate the synthesized images, as visual content is specified by their underlying latent code [\(Kriegeskorte, 2015\)](#page-10-5), and to perform *analysis by synthesis* [\(Yuille & Kersten, 2006\)](#page-11-1). Deep convnets have been used to explain neural responses during visual perception, imagery and dreaming [\(Horikawa & Kamitani, 2017b;](#page-10-6)[a;](#page-10-7) [St-Yves & Naselaris, 2018;](#page-11-2) [Shen et al., 2019b;](#page-11-3)[a;](#page-11-4) [Güçlütürk et al., 2017;](#page-10-8) [VanRullen & Reddy, 2019;](#page-11-5) [Dado et al., 2022\)](#page-9-3). To our knowledge, the latter three are the most similar studies that also attempted to decode perceived faces from brain activity. [Güçlütürk et al. (2017)](#page-10-8) used the feature representations from VGG16 pretrained on face recognition (i.e., trained in a supervised setting). Although more biologically plausible, unsupervised learning paradigms have appeared less successful in modeling neural representations in the primate brain than their supervised counterparts [\(Khaligh-Razavi & Kriegeskorte, 2014\)](#page-10-9), with the exception of [VanRullen & Reddy (2019)](#page-11-5) and [Dado et al. (2022)](#page-9-3), who used adversarially learned latent representations of a variational autoencoder-GAN (VAE-GAN) and a GAN, respectively. Importantly, [Dado et al. (2022)](#page-9-3) used synthesized stimuli to have direct access to the ground-truth latents instead of using post-hoc approximate inference, as VAE-GANs do by design. The current work improves the experimental paradigm of [Dado et al. (2022)](#page-9-3) and provides several novel contributions: face stimuli were synthesized by a feature-disentangled GAN and presented to a macaque with cortical implants in a passive fixation task. A decoder model was fit on the recorded brain activity and the ground-truth latents. Reconstructions were created by feeding the predicted latents from brain activity from a held-out test set to the GAN. ![](imgs/hT1S68yza7__page_1_Figure_5.jpeg) <span id="page-1-0"></span>Figure 2: StyleGAN3 generator architecture. The generator takes a 512-dim. latent vector as input and transforms it into a 1024<sup>2</sup> resolution RGB image. We collected a dataset of 4000 training set images and 100 test set images. Previous neural decoding studies
used noninvasive fMRI signals that have a low signal-to-noise ratio and poor temporal resolution leading to a reconstruction bottleneck and precluding detailed spatio-temporal analysis. This work is the first to decode photorealistic faces from intracranial recordings which resulted in state-of-theart reconstructions as well as new opportunities to study the brain. First, the high performance of decoding via w-latent space indicates the importance of disentanglement to explain neural representations upon perception, offering a new way forward for the previously limited yet biologically more plausible unsupervised models of brain function. Second, we show how decoding performance evolves over time and observe that the largest contribution is explained by the inferior temporal (IT) cortex which is located at the end of the visual ventral pathway. Third, the application of Euclidean vector arithmetic to w-latents and brain activity yielded similar results which further suggests functional overlap between these representational spaces. Taken together, the high quality of the neural recordings and feature representations resulted in novel and unprecedented experimental findings that not only demonstrate how advances in machine learning extend to neuroscience but also will serve as an important benchmark for future research. ## 2 METHODS ### 2.1 DATA #### 2.1.1 STIMULI We synthesized photorealistic face images of 1024<sup>2</sup> resolution from (512-dim.) z-latent vectors with the generator network of StyleGAN3 [\(Karras et al., 2020\)](#page-10-10) (Figure [2\)](#page-1-0) which is pretrained on the highquality Flickr-Faces-HQ (FFHQ) dataset [\(Karras et al., 2019\)](#page-10-3). The z-latents were randomly sampled from the standard Gaussian. First, StyleGAN3 maps the z-latent space via an 8-layer MLP to an intermediate (512-dim.) w-latent space in favor of feature disentanglement. That is, the original zlatent space is restricted to follow the data distribution that it is trained on (e.g., older but not younger people wear eyeglasses in the training set images) and such biases are entangled in the z-latents. The less entangled w-latent space overcomes this such that unfamiliar latent elements can be mapped to their respective visual features [\(Karras et al., 2019\)](#page-10-3). Second, we specified a truncation of 0.7 so that the sampled values are ensured to fall within this range to benefit image quality. During synthesis, learned affine transformations integrate w-latents into the generator network with adaptive instance normalization (like *style transfer* [\(Huang & Belongie, 2017\)](#page-10-11)) as illustrated in Figure [2.](#page-1-0) Finally, we synthesized a training set of 4000 face images that were each presented once to cover a large stimulus space to fit a general model. The test set consisted of 100 synthesized faces that were averaged over twenty repetitions. ## 2.1.2 FEATURES For the main analysis, the z- and w-latent space of StyleGAN3 were both used as the in-between feature space. In addition, we also extracted the intermediate layer activations to our face stimuli from alexnet for object recognition [\(Krizhevsky, 2014\)](#page-10-12), VGG16 for face recognition [\(Parkhi et al.,](#page-10-13) [2015\)](#page-10-13) and the discriminator network of StyleGAN3. We fit multiple encoding models to see how well their feature representations can explain the recorded responses. 
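As a rough illustration of this feature-extraction step, the following is a minimal sketch using torchvision's pretrained AlexNet with a forward hook; the hooked layer index and all variable names are illustrative choices rather than the authors' implementation (VGGFace and the StyleGAN3 discriminator would be handled analogously).

```python
import torch
from torchvision import models

# Pretrained AlexNet for object recognition (weights download on first use).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Flatten the feature map per image so it can serve as a feature matrix.
        activations[name] = output.detach().flatten(start_dim=1)
    return hook

# Hook an intermediate convolutional layer (index chosen for illustration only).
alexnet.features[10].register_forward_hook(save_activation("conv"))

# Stand-in batch of face stimuli, already resized to the 224x224 model input size.
stimuli = torch.rand(4, 3, 224, 224)

with torch.no_grad():
    alexnet(stimuli)

features = activations["conv"]  # (4, n_features) activations for encoding/decoding
print(features.shape)
```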
Because the features from VGGFace and the discriminator were very large (∼10<sup>6</sup>), we performed downsampling, as done in [\(Eickenberg et al., 2017\)](#page-9-4). That is, for each channel in the activation, the feature map was spatially smoothed with a Gaussian filter and subsampled such that the total number of output features was lower than 50,000 per image. The kernel size was set equal to the downsampling factor. #### 2.1.3 RESPONSES We recorded multi-unit activity [\(Super & Roelfsema, 2005\)](#page-11-6) with 15 chronically implanted electrode arrays (64 channels each) in one macaque (male, 7 years old) upon presentation with the synthesized face images in a passive fixation experiment (Figure [3\)](#page-3-0). Neural responses were recorded in V1 (7 arrays), V4 (4 arrays) and IT (4 arrays), leading to a total of 960 channels (see electrode placings in Figure [1\)](#page-0-0). For each trial, we averaged the early response of each channel using the following time windows: 25-125 ms for V1, 50-150 ms for V4 and 75-175 ms for IT. The data were normalized as in [\(Bashivan et al., 2019\)](#page-9-5) such that, for each channel, the mean was subtracted from all responses, which were then divided by the standard deviation. All procedures complied with the NIH Guide for Care and Use of Laboratory Animals and were approved by the local institutional animal care and use committee. <span id="page-3-0"></span>Figure 3: Passive fixation task. The monkey fixated a red dot on a gray background for 300 ms, followed by a fast sequence of four face images (500<sup>2</sup> pixels): 200 ms stimulus presentation and 200 ms inter-trial interval. The stimuli were slightly shifted to the lower right such that the fovea corresponded with pixel (150, 150). The monkey was rewarded with juice if fixation was kept for the whole sequence. #### 2.2 MODELS We used linear mapping to evaluate our claim that the feature and neural representations effectively encode the same stimulus properties, as is standard in the field. A more complex nonlinear transformation would not be valid to support this claim since it could theoretically map anything to anything. We used regularization for encoding due to the high dimensionality of the feature space. #### 2.2.1 DECODING Multiple linear regression was used to model how the individual units within feature representations y (e.g., w-latents) are linearly dependent on brain activity x per electrode: $$\mathcal{L} = \frac{1}{2} \sum_{i=1}^{N} (y_i - \mathbf{w}^T \mathbf{x}_i)^2 \tag{1}$$ where $i$ ranges over samples. This was implemented by prepending a dense layer to the generator architecture to transform brain responses into feature representations, which were then run through the generator as usual. This response-feature layer was fit with ordinary least squares while the remainder of the network was kept fixed. Note that no truncation was applied for the reconstruction from predicted features/latents.
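A minimal sketch of this decoding step under stated assumptions: the response-to-latent map is fit with ordinary least squares on illustrative random arrays, and `synthesize` stands in for the frozen pretrained generator (these names are not the authors' code).

```python
import numpy as np

# Illustrative shapes: X_train holds normalized MUA responses (N x 960 channels),
# W_train the ground-truth 512-dim w-latents of the training stimuli.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((4000, 960))
W_train = rng.standard_normal((4000, 512))
X_test = rng.standard_normal((100, 960))

# Add a bias column and fit the response-to-latent map with ordinary least squares,
# i.e. minimize ||W - X B||^2 (the per-dimension form of Eq. 1).
X_aug = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
B, *_ = np.linalg.lstsq(X_aug, W_train, rcond=None)

# Predict w-latents for the held-out test responses ...
X_test_aug = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
W_pred = X_test_aug @ B  # (100, 512) predicted w-latents

# ... which would then be fed through the frozen StyleGAN3 synthesis network
# (not shown here; `synthesize` is a placeholder for the pretrained generator).
# images = synthesize(W_pred)
```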
#### 2.2.2 ENCODING Kernel ridge regression was used to model how every recording site in the visual cortex is linearly dependent on the stimulus features. That is, an encoding model is defined for each electrode. In contrast to decoding, encoding required regularization to avoid overfitting since we predicted from the feature space $\mathbf{x}_i \mapsto \phi(\mathbf{x}_i)$, where $\phi(\cdot)$ is the feature extraction model. Hence we used ridge regression, where the norm of w is penalized, to define encoding models as a weighted sum of $\phi(\mathbf{x})$: $$\mathcal{L} = \frac{1}{2} \sum_{i=1}^{N} \left( y_i - \mathbf{w}^T \phi(\mathbf{x}_i) \right)^2 + \frac{1}{2} \lambda_j ||\mathbf{w}||^2 \tag{2}$$ where $\mathbf{x} = (\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N)^T \in \mathbb{R}^{N \times d}$, $\mathbf{y} = (y_1, y_2, \ldots, y_N)^T \in \mathbb{R}^{N \times 1}$, $N$ is the number of stimulus-response pairs, $d$ the number of pixels, and $\lambda_j \geq 0$ the regularization parameter. We then solved for w by applying the "kernel trick" [\(Welling, 2013\)](#page-11-7): $$\mathbf{w}_j = \Phi^T (\lambda_j \mathbf{I}_N + \Phi \Phi^T)^{-1} \mathbf{y} \tag{3}$$ where $\Phi = (\phi(\mathbf{x}_1), \phi(\mathbf{x}_2), \ldots, \phi(\mathbf{x}_N)) \in \mathbb{R}^{N \times q}$ is the design matrix and $q$ the number of feature elements. This means that w must lie in the space induced by the training data even when $q \gg N$. The optimal $\lambda_j$ is determined with grid search, as in [\(Güçlü & van Gerven, 2014\)](#page-9-6). The grid is obtained by dividing the domain of $\lambda$ into $M$ values and evaluating model performance at every value. This hyperparameter domain is controlled by the capacity of the model, i.e., the effective degrees of freedom (dof) of the ridge regression fit, which range over $[1, N]$: $$\operatorname{dof}(\lambda_j) = \sum_{i=1}^{N} \frac{s_i^2}{s_i^2 + \lambda_j} \tag{4}$$ where $s_i$ are the non-zero singular values of the design matrix $\Phi$ as obtained by singular value decomposition. We can solve for each $\lambda_j$ with Newton's method. Once the grid of lambda values is defined, we search for the optimal $\lambda_j$ that minimizes the 10-fold cross-validation error. ### 2.3 EVALUATION Decoding performance was evaluated by six metrics that compared the stimuli from the held-out test set with their reconstructions from brain activity: latent similarity, alexnet perceptual similarity (object recognition), VGG16 perceptual similarity (face recognition), latent correlation, pixel correlation and the structural similarity index measure (SSIM). For *latent similarity*, we considered the cosine similarity between predicted and ground-truth latent vectors: $$\text{Latent cos. similarity} = \frac{\hat{\mathbf{z}} \cdot \mathbf{z}}{\sqrt{\sum_{i=1}^{512} \hat{z}_i^2}\,\sqrt{\sum_{i=1}^{512} z_i^2}}$$ where $\hat{\mathbf{z}}$ and $\mathbf{z}$ are the 512-dimensional predicted and ground-truth latent vectors, respectively. For *perceptual similarity*, we computed the cosine similarity between deeper-layer activations (rather than pixel space, which is the model input) extracted by deep neural networks. Specifically, we fed the stimuli and their reconstructions to alexnet pretrained on object recognition [\(Krizhevsky, 2014\)](#page-10-12) and VGG16 pretrained on face recognition [\(Parkhi et al., 2015\)](#page-10-13) and extracted the activations of their last convolutional layer. We then considered the cosine similarity between activation vectors: $$\text{Perceptual cos. similarity} = \frac{f(\hat{x}) \cdot f(x)}{\sqrt{\sum_{i=1}^{n} (f(x)_i)^2}\,\sqrt{\sum_{i=1}^{n} (f(\hat{x})_i)^2}}$$ where $x$ and $\hat{x}$ are the 224 × 224 RGB visual stimuli (the image dimensionality that the models expect) and their reconstructions, respectively, $n$ the number of activation elements, and $f(\cdot)$ the image-activation transformation.
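As a concrete illustration of the latent and perceptual similarity metrics above, here is a minimal sketch that assumes precomputed latent vectors and feature activations as NumPy arrays; all variable names and array contents are illustrative stand-ins.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative inputs: ground-truth vs. predicted 512-dim w-latents ...
rng = np.random.default_rng(1)
w_true = rng.standard_normal(512)
w_pred = w_true + 0.5 * rng.standard_normal(512)

# ... and last-conv-layer activations of a stimulus and its reconstruction
# (e.g., from AlexNet or VGGFace; here just random stand-ins).
feat_stim = rng.standard_normal(43264)
feat_recon = feat_stim + rng.standard_normal(43264)

latent_sim = cosine_similarity(w_pred, w_true)             # "Lat. sim."
latent_corr = np.corrcoef(w_pred, w_true)[0, 1]            # "Lat. corr."
perceptual_sim = cosine_similarity(feat_recon, feat_stim)  # "Alexnet/VGG16 sim."
print(latent_sim, latent_corr, perceptual_sim)
```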
*Latent- and pixel correlation* measure the standard linear (Pearson product-moment) correlation coefficient between the latent dimensions of the predicted and groundtruth latent vectors and the luminance pixel values of stimuli and their reconstructions, respectively. *SSIM* looked at similarities in terms of luminance, contrast and structure [\(Wang et al., 2004\)](#page-11-8). Furthermore, we introduce a new *SeFa attribute similarity* metric between stimuli and their reconstructions using the intrinsic semantic vectors of the generator which we accessed using closed-form factorization ("SeFa") [\(Shen & Zhou, 2021\)](#page-11-9). In short, the unsupervised SeFa algorithm decomposes the pre-trained weights of the generator into 512 different latent semantics (of 512 dimensions each) which can be used for editing the synthesized images in w-space. This is also a means to understand what each latent semantic encodes: if a face becomes younger or older when traversing the latent in the negative or positive direction of the latent semantic, we can conclude post-hoc that it encodes the attribute "age". In our case, we used it to score each stimulus and reconstruction by taking the inner product between their w-latent and latent semantic and check for similarity. #### 2.4 IMPLEMENTATION DETAILS We used the original PyTorch implementation of StyleGAN3 [\(Karras et al., 2021\)](#page-10-4), the PyTorch implementation of alexnet and the keras implementation of VGG16. All analyses were carried out in Python 3.8 on the internal cluster. # 3 RESULTS #### 3.1 NEURAL DECODING We performed neural decoding from the primate brain via the feature-disentangled w-latent space of StyleGAN3; see Figure [4](#page-5-0) and Table [1](#page-5-1) for qualitative and quantitative results, respectively. Perceptually, it is obvious that the stimuli and their reconstructions share a significant degree of similarity ![](imgs/hT1S68yza7__page_5_Picture_1.jpeg) Figure 4: **Qualitative results:** The 100 test set stimuli (top row) and their reconstructions from brain activity (bottom row). <span id="page-5-0"></span>(e.g., gender, age, pose, haircut, lighting, hair color, skin tone, smile and eyeglasses). The importance of feature disentanglement for neural decoding is highlighted when compared to the decoding performance via the original z-latent space. The qualitative results from z-latent space can be found in Appendix A.1. We provided a visual guide for the six evaluation metrics in Appendix A.4 by showing the stimulus-reconstruction pairs with the five highest- and the five lowest similarity for each metric. In addition, we also repeated the experiment with another macaque that had silicon-based electrodes in V1, V2, V3 and V4; see Appendix A.3. The contribution of each brain region to the overall reconstruction performance was determined by occluding the recordings from the other two sites. Concretely, the two site responses were replaced with the average response of all but the corresponding response such that only the site of interest remained. Alternatively, one could evaluate region contribution by training three different decoders <span id="page-5-1"></span>Table 1: **Quantitative results.** The upper block decoded the z-latent whereas the lower block decoded the w-latent from brain activity. 
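Referring back to the SeFa attribute similarity metric of Section 2.3, the following is a minimal sketch of the scoring step only, assuming the 512 semantic directions have already been extracted from the generator weights (e.g., with the publicly available SeFa code); all arrays below are random stand-ins, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins: 512 latent semantic directions (one per row), plus the
# ground-truth and predicted w-latents of the 100 test faces.
semantics = rng.standard_normal((512, 512))
semantics /= np.linalg.norm(semantics, axis=1, keepdims=True)
w_true = rng.standard_normal((100, 512))
w_pred = w_true + 0.5 * rng.standard_normal((100, 512))

# Score every face on every semantic via the inner product with its w-latent ...
scores_true = w_true @ semantics.T  # (100 faces, 512 attribute scores)
scores_pred = w_pred @ semantics.T

# ... and measure, per semantic, how well the decoded scores track the true ones.
attr_similarity = np.array([
    np.corrcoef(scores_true[:, k], scores_pred[:, k])[0, 1] for k in range(512)
])
print(attr_similarity.max(), attr_similarity.min())
```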
Decoding performance is quantified in terms of six metrics: latent cosine similarity, latent correlation (Student's t-test), perceptual cosine similarity using alexnet and VGGFace, pixel-wise correlation (Student's t-test) and structural similarity index (SSIM) in pixel space between stimuli and their reconstructions ( $mean \pm std.error$ ). To make the comparison more fair, the predicted z-latents were transformed to w-space and truncated at 0.7 for comparison with the ground-truth w-latents. The rows display performance when using either the recordings from "all" recording sites or only from a specific brain area. The latter is achieved by occluding the recordings from the other two brain regions in the test set. Neural decoding from all brain regions into the w-latent resulted in the overall highest reconstruction performance. | | Lat. sim. | Lat. corr. | Alexnet sim. | VGG16 sim. | Pixel corr. | SSIM | |-----------|---------------------|---------------------|---------------------|---------------------|---------------------|--------| | All | $0.3909 \pm 0.0070$ | $0.2824 \pm 0.0054$ | $0.2416 \pm 0.0023$ | $0.1789 \pm 0.0013$ | $0.4331 \pm 0.0001$ | 0.3811 | | ~ V1 | $0.2785 \pm 0.0079$ | $0.1398 \pm 0.0051$ | $0.1871 \pm 0.0022$ | $0.1430 \pm 0.0012$ | $0.2887 \pm 0.0001$ | 0.2640 | | $^{z}$ V4 | $0.2763 \pm 0.0079$ | $0.1362 \pm 0.0050$ | $0.1956 \pm 0.0022$ | $0.1485 \pm 0.0012$ | $0.2430 \pm 0.0001$ | 0.2211 | | IT | $0.3012 \pm 0.0076$ | $0.1747 \pm 0.0055$ | $0.2054 \pm 0.0022$ | $0.1498 \pm 0.0012$ | $0.3105 \pm 0.0001$ | 0.2794 | | All | $0.4579 \pm 0.0076$ | $0.2908 \pm 0.0047$ | $0.2740 \pm 0.0029$ | $0.2391 \pm 0.0017$ | $0.6055 \pm 0.0001$ | 0.5547 | | V1 | $0.3792 \pm 0.0089$ | $0.1478 \pm 0.0047$ | $0.1447 \pm 0.0023$ | $0.1151 \pm 0.0012$ | $0.2958 \pm 0.0001$ | 0.2256 | | $^w$ V4 | $0.3783 \pm 0.0023$ | $0.1450 \pm 0.0049$ | $0.1856 \pm 0.0020$ | $0.1315 \pm 0.0011$ | $0.1816 \pm 0.0001$ | 0.1684 | | IT | $0.4009 \pm 0.0085$ | $0.1828 \pm 0.0051$ | $0.1861 \pm 0.0026$ | $0.1451 \pm 0.0014$ | $0.4119 \pm 0.0001$ | 0.3275 | ![](imgs/hT1S68yza7__page_6_Figure_1.jpeg) <span id="page-6-0"></span>Figure 5: SeFa attribute similarity. The SeFa attribute scores were computed for stimuli and their reconstructions and evaluated for similarity in terms of correlation (Student's t-test). The six plots display the attribute scores of the 100 stimuli (True; Y-axis) and the predictions (Pred; X-axis) for the six latent semantics with the highest similarity. We travel in the semantic direction and edit an arbitrary face to reveal what facial attributes a latent semantic encodes. That is, a latent semantic was subtracted of or added to a w-latent which was fed to the generator to create the corresponding face image. For instance, when the semantic boundary with the highest similarity (r = 0.8217) is subtracted from a w-latent, the corresponding face changes pose to the left and becomes younger whereas its pose changes to the right and gets older when this semantic is added. on neural data subsets, but the current occlusion method made it more interpretable how a region contributed to the same decoder's performance. As such, we found that this is for the largest part determined by responses from IT - which is the most downstream site we recorded from. We validated our results with a permutation test as follows: for a thousand times, we sampled a hundred random latents from the same distribution as our original test set and generated their corresponding face images. 
Per iteration, we checked whether these random latents and faces were closer to the ground-truth latent and faces than our predicted latents and faces. We found that our predicted latents from brain activity and corresponding faces were always closer to the original stimuli for the w-latents and all six metrics, yielding statistical significance (p < 0.001). This indicates that our high decoding performance is not just a consequence of the high-quality images that the generator synthesizes. The charts showing the six similarity metrics over iterations for the random samples and our predictions based on brain activity can be found in Appendix A.2. Next, we quantified how well facial attributes were predicted from brain activity (Figure [5\)](#page-6-0). The 512 latent semantics are known to be hierarchically organized [\(Shen & Zhou, 2021\)](#page-11-9) and we also find this back in our predictive performance where the highest and lowest correlations are found at the earlier ![](imgs/hT1S68yza7__page_6_Figure_6.jpeg) <span id="page-6-1"></span>Figure 6: Neural decoding performance in time. The development of decoding performance (Yaxis) in terms of all six metrics based on the recorded response data in V1, V4, IT and all brain regions and over the full time course of 300 ms (X-axis). Stimulus onset happened at 100 ms. For visualization purposes, a sigmoid was fit to these data points and the shaded areas denote the predefined time windows we used for our original analysis. and later latent semantics, respectively. Face editing revealed that the earlier latent semantics encode clear and well-known facial attributes (e.g., gender, age, skin color, lighting and pose) whereas those in later latent semantics remain unclear since editing did not result in perceptual changes. Finally, Figure [6](#page-6-1) shows how decoding performance evolved in time. Rather than taking the average response over the predefined time windows for V1, V4 and IT, we took the average response over a 10 ms window that was slided without overlap over the full time course of 300 ms. This resulted in thirty responses over time per stimulus. As expected, performance climbed and peaked first for (early) V1, then (intermediate) V4 and lastly (deep) IT. We can see that IT outperformed the other two regions in terms of all six metrics. #### 3.2 NEURAL ENCODING Neural encoding predicted the brain activity from the eight (feature) layers from alexnet for object recognition, VGG16 for face recognition and the StyleGAN3 discriminator (i.e., 3 × 8 = 24 encoding models in total). Encoding performance was assessed by correlating the predicted and recorded responses (Student's t-test) after which the model layer with the highest performance was assigned to each recording site on the brain (Figure [7A](#page-7-0)). Our results show a gradient from early to deeper brain areas for all three models. That is, visual experience is partially determined by the selective responses of neuronal populations along the visual ventral "what" pathway [\(Ungerleider](#page-11-10) [& Mishkin, 1982\)](#page-11-10) such that the receptive fields of neurons in early cortical regions are selective for simple features (e.g., local edge orientations [\(Hubel & Wiesel, 1962\)](#page-10-14)) whereas those of neurons in deeper regions respond to more complex patterns of features [\(Gross et al., 1972;](#page-9-7) [Hung et al.,](#page-10-15) [2005\)](#page-10-15). 
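A minimal sketch of the layer-assignment procedure described above: correlate each encoding model's predictions with the recorded responses and assign the best-fitting layer to each electrode. All arrays are random stand-ins; shapes follow the setup described in the text (8 layers per model, 960 recording sites).

```python
import numpy as np

rng = np.random.default_rng(3)
n_test, n_electrodes, n_layers = 100, 960, 8

# Illustrative stand-ins: recorded test-set responses and the predictions of the
# eight per-layer encoding models for the same electrodes.
recorded = rng.standard_normal((n_test, n_electrodes))
predicted = rng.standard_normal((n_layers, n_test, n_electrodes))

def pearson(a, b):
    # Per-electrode Pearson correlation across test stimuli.
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Correlate predicted and recorded responses per layer and electrode ...
corr = np.stack([pearson(predicted[l], recorded) for l in range(n_layers)])

# ... and assign to each electrode the layer whose encoding model fits best.
best_layer = corr.argmax(axis=0)  # (960,) layer index per recording site
print(np.bincount(best_layer, minlength=n_layers))
```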
Previous work has shown how the features extracted by deep convolutional neural networks predict neural responses in the ventral visual stream to perceived naturalistic stimuli in the human brain [\(Yamins et al., 2014;](#page-11-11) [Cadieu et al., 2014;](#page-9-8) [Khaligh-Razavi & Kriegeskorte, 2014;](#page-10-9) Guc¸l ¨ [u & van](#page-10-16) ¨ [Gerven, 2015;](#page-10-16) [Yamins & DiCarlo, 2016;](#page-11-12) [Cichy et al., 2016;](#page-9-9) [Eickenberg et al., 2017\)](#page-9-4) as well as in the primate brain [\(Freiwald & Tsao, 2010;](#page-9-10) [Chang & Tsao, 2017\)](#page-9-11). In line with the literature, our results show that (early) V1 encodes earlier model layers whereas (deeper) IT encodes deeper model layers. #### 3.3 WALKING THE NEURAL FACE MANIFOLD VIA W-LATENT SPACE Manifold hypothesis states that real-world data instances can be viewed as points in highdimensional space that are concentrated on manifolds which (locally) resemble Euclidean space. Linear changes in the (low-dimensional) GAN latent landscape directly translate to the corresponding (high-dimensional) pixel space and thus approximate local manifolds [\(Shao et al., 2018\)](#page-11-13). That is, visual data that look perceptually similar in terms of certain features are also closely positioned in latent space. As such, interpolation between two distinct latent vectors resulted in an ordered set ![](imgs/hT1S68yza7__page_7_Figure_7.jpeg) <span id="page-7-0"></span>Figure 7: A. Layer assignment across all electrodes to visual areas V1, V4 and IT. We observe a gradient for Alexnet, VGG16 and the discriminator where early layers are mostly assigned to V1, intermediate layers to V4 and deep layers to IT. B. Performance of all 24 encoding models (x-axis) in terms of correlation (Student's t-test, y-axis) between all predicted and recorded brain responses shows that encoding performance was good overall. ![](imgs/hT1S68yza7__page_8_Figure_1.jpeg) <span id="page-8-0"></span>Figure 8: Linear operations were applied to ground-truth w-latents (row 1), w-latents decoded from brain activity (row 2), recorded responses which we then decoded into w-latents (row 3) and encoded responses from ground-truth w-latents which we then decoded w-latents. A. Linear interpolation between the w-latents of two test set stimuli. B. Vector arithmetic with the average w-latent of specific facial attributes. To obtain the average w-latent we averaged as many relevant samples as possible from the test set. of latents where contained semantics in the corresponding images vary gradually with latent code. We used spherical rather than regular linear interpolation to account for the spherical geometry of the latent space (i.e., multimodal Gaussian distribution). The latent space also obeys simple arithmetic operations [\(Mikolov et al., 2013\)](#page-10-17). The generated faces from interpolation (Figure [8A](#page-8-0)) and vector arithmetic (Figure [8B](#page-8-0)) in neural space were perceptually similar as when applied directly to the w-latent space. This indicates that the functional neural face manifold and w-latent space are organized similarly such that responses can be linearly modified to obtain responses to unseen faces. ## 4 DISCUSSION Neural decoding of brain activity during visual perception via the feature-disentangled w-latent space conditioned on StyleGAN3 resulted in image reconstructions that strongly resemble the originally-perceived stimuli, making it the state-of-the-art in the field. 
Although it is safe to assume that the brain represents the visual stimuli it is presented with, it has been largely unsolved *how* it represents them as there are virtually infinite candidate representations possible to encode the same image. The goal of this study was to find the correct representation. Our results demonstrate that StyleGAN3 features/latents are linearly related to brain responses such that latent and response must encode the same real-world phenomena similarly. This indicates that StyleGAN3 successfully disentangled the neural face manifold [\(DiCarlo & Cox, 2007\)](#page-9-12) rather than the conventional z-latent space of arbitrary GANs or any other feature representation we encountered so far. Note that StyleGAN3 has never been optimized on neural data. We also found that the features of the discriminator are predictive of neural responses. The StyleGAN3-brain correspondence can shed light on what drives the organization of (neural) information processing in vision. For instance, the analogy between adversarial training of StyleGAN3 and predictive coding where the brain is continuously generating and updating its mental model of the world to minimize prediction errors. To conclude, unsupervised generative modeling can be used to study biological vision which in turn supports the development of better computational models thereof and other (clinical) applications. # 5 ETHICS STATEMENT In conjunction with the evolving field of neural decoding grows the concern regarding mental privacy. Because we think it is likely that access to subjective experience will be possible in the foreseeable future, we want to emphasize that it is important to at all times strictly follow the ethical rules and regulations regarding data extraction, storage and protection. It should never be possible to invade subjective contents of the mind. ## 6 REPRODUCIBILITY STATEMENT Upon publication, we will make our data publicly available together with the code and a detailed description on how to recreate our results to ensure transparency and reproducibility. ## REFERENCES - <span id="page-9-5"></span>Pouya Bashivan, Kohitij Kar, and James J DiCarlo. Neural population control via deep image synthesis. *Science*, 364(6439), 2019. - <span id="page-9-2"></span>Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. *arXiv preprint arXiv:1809.11096*, 2018. - <span id="page-9-8"></span>Charles F Cadieu, Ha Hong, Daniel LK Yamins, Nicolas Pinto, Diego Ardila, Ethan A Solomon, Najib J Majaj, and James J DiCarlo. Deep neural networks rival the representation of primate it cortex for core visual object recognition. *PLoS computational biology*, 10(12):e1003963, 2014. - <span id="page-9-11"></span>Le Chang and Doris Y Tsao. The code for facial identity in the primate brain. *Cell*, 169(6):1013– 1028, 2017. - <span id="page-9-9"></span>Radoslaw Martin Cichy, Aditya Khosla, Dimitrios Pantazis, Antonio Torralba, and Aude Oliva. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. *Scientific reports*, 6(1):1–13, 2016. - <span id="page-9-3"></span>Thirza Dado, Yagmur G ˘ uc¸l ¨ ut¨ urk, Luca Ambrogioni, Gabri ¨ elle Ras, Sander Bosch, Marcel van Ger- ¨ ven, and Umut Guc¸l ¨ u. Hyperrealistic neural decoding for reconstructing faces from fmri activa- ¨ tions via the gan latent space. *Scientific reports*, 12(1):1–9, 2022. 
- <span id="page-9-12"></span>James J DiCarlo and David D Cox. Untangling invariant object recognition. *Trends in cognitive sciences*, 11(8):333–341, 2007. - <span id="page-9-0"></span>Nadine Dijkstra, S.E. Sander E. Bosch, and Marcel A.J. M.A.J. van Gerven. Shared neural mechanisms of visual perception and imagery. *Trends in Cognitive Sciences*, 23(5):423–434, 2019. ISSN 1879307X. doi: 10.1016/j.tics.2019.02.004. URL [https://doi.org/10.1016/j.](https://doi.org/10.1016/j.tics.2019.02.004) [tics.2019.02.004](https://doi.org/10.1016/j.tics.2019.02.004). - <span id="page-9-4"></span>Michael Eickenberg, Alexandre Gramfort, Gael Varoquaux, and Bertrand Thirion. Seeing it all: ¨ Convolutional network layers map the function of the human visual system. *NeuroImage*, 152: 184–194, 2017. - <span id="page-9-10"></span>Winrich A Freiwald and Doris Y Tsao. Functional compartmentalization and viewpoint generalization within the macaque face-processing system. *Science*, 330(6005):845–851, 2010. - <span id="page-9-1"></span>Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in neural information processing systems*, 27, 2014. - <span id="page-9-7"></span>Charles G Gross, CE de Rocha-Miranda, and DB Bender. Visual properties of neurons in inferotemporal cortex of the macaque. *Journal of neurophysiology*, 35(1):96–111, 1972. - <span id="page-9-6"></span>Umut Guc¸l ¨ u and Marcel AJ van Gerven. Unsupervised feature learning improves prediction of ¨ human brain activity in response to natural images. *PLoS computational biology*, 10(8):e1003724, 2014. - <span id="page-10-16"></span>Umut Guc¸l ¨ u and Marcel AJ van Gerven. Deep neural networks reveal a gradient in the complexity of ¨ neural representations across the ventral stream. *Journal of Neuroscience*, 35(27):10005–10014, 2015. - <span id="page-10-8"></span>Yagmur G ˘ uc¸l ¨ ut¨ urk, Umut G ¨ uc¸l ¨ u, Katja Seeliger, Sander Bosch, Rob van Lier, and Marcel A van ¨ Gerven. Reconstructing perceived faces from brain activations with deep adversarial neural decoding. *Advances in neural information processing systems*, 30, 2017. - <span id="page-10-0"></span>John-Dylan Haynes and Geraint Rees. Decoding mental states from brain activity in humans. *Nature reviews neuroscience*, 7(7):523–534, 2006. - <span id="page-10-7"></span>Tomoyasu Horikawa and Yukiyasu Kamitani. Generic decoding of seen and imagined objects using hierarchical visual features. *Nature communications*, 8(1):1–15, 2017a. - <span id="page-10-6"></span>Tomoyasu Horikawa and Yukiyasu Kamitani. Hierarchical neural representation of dreamed objects revealed by brain decoding with deep neural network features. *Frontiers in computational neuroscience*, 11:4, 2017b. - <span id="page-10-11"></span>Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In *Proceedings of the IEEE international conference on computer vision*, pp. 1501–1510, 2017. - <span id="page-10-14"></span>David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. *The Journal of physiology*, 160(1):106–154, 1962. - <span id="page-10-15"></span>Chou P Hung, Gabriel Kreiman, Tomaso Poggio, and James J DiCarlo. Fast readout of object identity from macaque inferior temporal cortex. *Science*, 310(5749):863–866, 2005. - <span id="page-10-1"></span>Yukiyasu Kamitani and Frank Tong. 
Decoding the visual and subjective contents of the human brain. *Nature neuroscience*, 8(5):679–685, 2005. - <span id="page-10-2"></span>Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. *arXiv preprint arXiv:1710.10196*, 2017. - <span id="page-10-3"></span>Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 4401–4410, 2019. - <span id="page-10-10"></span>Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8110–8119, 2020. - <span id="page-10-4"></span>Tero Karras, Miika Aittala, Samuli Laine, Erik Hark ¨ onen, Janne Hellsten, Jaakko Lehtinen, and ¨ Timo Aila. Alias-free generative adversarial networks. *Advances in Neural Information Processing Systems*, 34, 2021. - <span id="page-10-9"></span>Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. Deep supervised, but not unsupervised, models may explain it cortical representation. *PLoS computational biology*, 10(11):e1003915, 2014. - <span id="page-10-5"></span>Nikolaus Kriegeskorte. Deep neural networks: a new framework for modeling biological vision and brain information processing. *Annual review of vision science*, 1:417–446, 2015. - <span id="page-10-12"></span>Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. *arXiv preprint arXiv:1404.5997*, 2014. - <span id="page-10-17"></span>Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. *Advances in neural information processing systems*, 26, 2013. - <span id="page-10-13"></span>Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam (eds.), *Proceedings of the British Machine Vision Conference (BMVC)*, pp. 41.1–41.12. BMVA Press, September 2015. ISBN 1-901725-53-7. doi: 10.5244/C. 29.41. URL <https://dx.doi.org/10.5244/C.29.41>. - <span id="page-11-13"></span>Hang Shao, Abhishek Kumar, and P Thomas Fletcher. The riemannian geometry of deep generative models. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops*, pp. 315–323, 2018. - <span id="page-11-4"></span>Guohua Shen, Kshitij Dwivedi, Kei Majima, Tomoyasu Horikawa, and Yukiyasu Kamitani. Endto-end deep image reconstruction from human brain activity. *Frontiers in Computational Neuroscience*, pp. 21, 2019a. - <span id="page-11-3"></span>Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani. Deep image reconstruction from human brain activity. *PLoS computational biology*, 15(1):e1006633, 2019b. - <span id="page-11-9"></span>Yujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in gans. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 1532–1540, 2021. - <span id="page-11-2"></span>Ghislain St-Yves and Thomas Naselaris. Generative adversarial networks conditioned on brain activity reconstruct seen images. In *2018 IEEE international conference on systems, man, and cybernetics (SMC)*, pp. 1054–1061. IEEE, 2018. - <span id="page-11-6"></span>Hans Super and Pieter R Roelfsema. 
Chronic multiunit recordings in behaving animals: advantages and limitations. *Progress in brain research*, 147:263–282, 2005. - <span id="page-11-10"></span>L.G. Ungerleider and M. Mishkin. Two cortical visual systems. In *Analysis of visual behavior*, pp. 549–586–. MIT Press, Cambridge, MA, 1982. - <span id="page-11-0"></span>Marcel AJ van Gerven. A primer on encoding models in sensory neuroscience. *Journal of Mathematical Psychology*, 76:172–183, 2017. - <span id="page-11-5"></span>Rufin VanRullen and Leila Reddy. Reconstructing faces from fmri patterns using deep generative neural networks. *Communications biology*, 2(1):1–10, 2019. - <span id="page-11-8"></span>Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. *IEEE transactions on image processing*, 13(4):600– 612, 2004. - <span id="page-11-7"></span>Max Welling. Kernel ridge regression. *Max Welling's classnotes in machine learning*, pp. 1–3, 2013. - <span id="page-11-12"></span>Daniel LK Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. *Nature neuroscience*, 19(3):356–365, 2016. - <span id="page-11-11"></span>Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. *Proceedings of the national academy of sciences*, 111(23):8619–8624, 2014. - <span id="page-11-1"></span>Alan Yuille and Daniel Kersten. Vision as bayesian inference: analysis by synthesis? *Trends in cognitive sciences*, 10(7):301–308, 2006.
{ "table_of_contents": [ { "title": "BRAIN2GAN; RECONSTRUCTING PERCEIVED FACES\nFROM THE PRIMATE BRAIN VIA STYLEGAN3", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 503.5880126953125, 80.49505615234375 ], [ 503.5880126953125, 117.63543701171875 ], [ 107.578125, 117.63543701171875 ] ] }, { "title": "Anonymous authors", "heading_level": null, "page_id": 0, "polygon": [ [ 112.060546875, 136.8984375 ], [ 200.05490112304688, 136.8984375 ], [ 200.05490112304688, 146.89208984375 ], [ 112.060546875, 146.89208984375 ] ] }, { "title": "ABSTRACT", "heading_level": null, "page_id": 0, "polygon": [ [ 277.013671875, 187.55859375 ], [ 333.7221984863281, 187.55859375 ], [ 333.7221984863281, 199.66949462890625 ], [ 277.013671875, 199.66949462890625 ] ] }, { "title": "1 INTRODUCTION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 431.506103515625 ], [ 205.9888458251953, 431.506103515625 ], [ 205.9888458251953, 443.4613037109375 ], [ 107.578125, 443.4613037109375 ] ] }, { "title": "2 METHODS", "heading_level": null, "page_id": 2, "polygon": [ [ 107.876953125, 254.0721435546875 ], [ 178.72157287597656, 254.0721435546875 ], [ 178.72157287597656, 266.02734375 ], [ 107.876953125, 266.02734375 ] ] }, { "title": "2.1 DATA", "heading_level": null, "page_id": 2, "polygon": [ [ 106.3828125, 277.6640625 ], [ 155.2249755859375, 277.6640625 ], [ 155.2249755859375, 289.1649169921875 ], [ 106.3828125, 289.1649169921875 ] ] }, { "title": "2.1.1 STIMULI", "heading_level": null, "page_id": 2, "polygon": [ [ 107.279296875, 299.3203125 ], [ 177.205078125, 299.3203125 ], [ 177.205078125, 309.68792724609375 ], [ 107.279296875, 309.68792724609375 ] ] }, { "title": "2.1.2 FEATURES", "heading_level": null, "page_id": 2, "polygon": [ [ 107.876953125, 493.83984375 ], [ 184.5867156982422, 493.83984375 ], [ 184.5867156982422, 505.90972900390625 ], [ 107.876953125, 505.90972900390625 ] ] }, { "title": "2.1.3 RESPONSES", "heading_level": null, "page_id": 2, "polygon": [ [ 108.17578125, 623.77734375 ], [ 190.17987060546875, 623.77734375 ], [ 190.17987060546875, 636.3777313232422 ], [ 108.17578125, 636.3777313232422 ] ] }, { "title": "2.2 MODELS", "heading_level": null, "page_id": 3, "polygon": [ [ 106.083984375, 277.6640625 ], [ 169.1543426513672, 277.6640625 ], [ 169.1543426513672, 288.7630615234375 ], [ 106.083984375, 288.7630615234375 ] ] }, { "title": "2.2.1 DECODING", "heading_level": null, "page_id": 3, "polygon": [ [ 107.876953125, 355.0078125 ], [ 187.365234375, 355.0078125 ], [ 187.365234375, 365.7980651855469 ], [ 107.876953125, 365.7980651855469 ] ] }, { "title": "2.2.2 ENCODING", "heading_level": null, "page_id": 3, "polygon": [ [ 106.681640625, 509.012451171875 ], [ 186.91212463378906, 509.012451171875 ], [ 186.91212463378906, 518.9750671386719 ], [ 106.681640625, 518.9750671386719 ] ] }, { "title": "2.3 EVALUATION", "heading_level": null, "page_id": 4, "polygon": [ [ 107.578125, 189.10546875 ], [ 187.6640625, 189.10546875 ], [ 187.6640625, 200.71502685546875 ], [ 107.578125, 200.71502685546875 ] ] }, { "title": "2.4 IMPLEMENTATION DETAILS", "heading_level": null, "page_id": 4, "polygon": [ [ 107.578125, 586.1273956298828 ], [ 247.27737426757812, 586.1273956298828 ], [ 247.27737426757812, 596.0899963378906 ], [ 107.578125, 596.0899963378906 ] ] }, { "title": "3 RESULTS", "heading_level": null, "page_id": 4, "polygon": [ [ 107.876953125, 654.71484375 ], [ 172.538330078125, 654.71484375 ], [ 172.538330078125, 667.2764205932617 ], [ 107.876953125, 667.2764205932617 ] ] 
}, { "title": "3.1 NEURAL DECODING", "heading_level": null, "page_id": 4, "polygon": [ [ 108.2489242553711, 679.46484375 ], [ 216.41868591308594, 679.46484375 ], [ 216.41868591308594, 689.8279876708984 ], [ 108.2489242553711, 689.8279876708984 ] ] }, { "title": "3.2 NEURAL ENCODING", "heading_level": null, "page_id": 7, "polygon": [ [ 107.578125, 201.8671875 ], [ 216.41876220703125, 201.8671875 ], [ 216.41876220703125, 212.65899658203125 ], [ 107.578125, 212.65899658203125 ] ] }, { "title": "3.3 WALKING THE NEURAL FACE MANIFOLD VIA W-LATENT SPACE", "heading_level": null, "page_id": 7, "polygon": [ [ 105.78515625, 412.3812255859375 ], [ 397.2401428222656, 412.3812255859375 ], [ 397.2401428222656, 422.3438415527344 ], [ 105.78515625, 422.3438415527344 ] ] }, { "title": "4 DISCUSSION", "heading_level": null, "page_id": 8, "polygon": [ [ 107.578125, 518.58984375 ], [ 190.20143127441406, 518.58984375 ], [ 190.20143127441406, 530.7714233398438 ], [ 107.578125, 530.7714233398438 ] ] }, { "title": "5 ETHICS STATEMENT", "heading_level": null, "page_id": 9, "polygon": [ [ 107.876953125, 81.59765625 ], [ 227.91500854492188, 81.59765625 ], [ 227.91500854492188, 94.7125244140625 ], [ 107.876953125, 94.7125244140625 ] ] }, { "title": "6 REPRODUCIBILITY STATEMENT", "heading_level": null, "page_id": 9, "polygon": [ [ 107.578125, 179.3482666015625 ], [ 285.0504455566406, 179.3482666015625 ], [ 285.0504455566406, 191.303466796875 ], [ 107.578125, 191.303466796875 ] ] }, { "title": "REFERENCES", "heading_level": null, "page_id": 9, "polygon": [ [ 107.578125, 243.061279296875 ], [ 175.2598419189453, 243.061279296875 ], [ 175.2598419189453, 255.0164794921875 ], [ 107.578125, 255.0164794921875 ] ] } ], "page_stats": [ { "page_id": 0, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 101 ], [ "Line", 38 ], [ "SectionHeader", 4 ], [ "Text", 3 ], [ "PageHeader", 1 ], [ "Figure", 1 ], [ "Caption", 1 ], [ "PageFooter", 1 ], [ "FigureGroup", 1 ], [ "Reference", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 1, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 139 ], [ "Line", 43 ], [ "Text", 4 ], [ "PageHeader", 1 ], [ "Figure", 1 ], [ "Caption", 1 ], [ "PageFooter", 1 ], [ "FigureGroup", 1 ], [ "Reference", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 2, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 174 ], [ "Line", 55 ], [ "SectionHeader", 5 ], [ "Text", 4 ], [ "PageHeader", 1 ], [ "PageFooter", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 3, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 375 ], [ "Line", 72 ], [ "Text", 7 ], [ "SectionHeader", 3 ], [ "Equation", 3 ], [ "PageHeader", 2 ], [ "Caption", 1 ], [ "PageFooter", 1 ], [ "Reference", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 4, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 301 ], [ "Line", 72 ], [ "Text", 8 ], [ "SectionHeader", 4 ], [ "Equation", 3 ], [ "PageHeader", 1 ], [ "PageFooter", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, 
"llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 5, "text_extraction_method": "surya", "block_counts": [ [ "TableCell", 63 ], [ "Line", 30 ], [ "Span", 24 ], [ "Text", 4 ], [ "Reference", 2 ], [ "PageHeader", 1 ], [ "Picture", 1 ], [ "Caption", 1 ], [ "Table", 1 ], [ "PageFooter", 1 ], [ "PictureGroup", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 6, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 92 ], [ "Line", 31 ], [ "Text", 3 ], [ "Figure", 2 ], [ "Caption", 2 ], [ "FigureGroup", 2 ], [ "Reference", 2 ], [ "PageHeader", 1 ], [ "PageFooter", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 7, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 122 ], [ "Line", 40 ], [ "Text", 4 ], [ "SectionHeader", 2 ], [ "PageHeader", 1 ], [ "Figure", 1 ], [ "Caption", 1 ], [ "PageFooter", 1 ], [ "FigureGroup", 1 ], [ "Reference", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 8, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 116 ], [ "Line", 33 ], [ "Text", 2 ], [ "PageHeader", 1 ], [ "Figure", 1 ], [ "Caption", 1 ], [ "SectionHeader", 1 ], [ "PageFooter", 1 ], [ "FigureGroup", 1 ], [ "Reference", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 9, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 162 ], [ "Line", 46 ], [ "ListItem", 13 ], [ "Reference", 13 ], [ "SectionHeader", 3 ], [ "Text", 2 ], [ "PageHeader", 1 ], [ "PageFooter", 1 ], [ "ListGroup", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 10, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 178 ], [ "Line", 49 ], [ "ListItem", 18 ], [ "Reference", 18 ], [ "PageHeader", 1 ], [ "PageFooter", 1 ], [ "ListGroup", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } }, { "page_id": 11, "text_extraction_method": "pdftext", "block_counts": [ [ "Span", 107 ], [ "Line", 35 ], [ "ListItem", 14 ], [ "Reference", 14 ], [ "PageHeader", 1 ], [ "PageFooter", 1 ], [ "ListGroup", 1 ] ], "block_metadata": { "llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0 } } ], "debug_data_path": "debug_data/hT1S68yza7" }
Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, Sara Hooker
"Modern machine learning research relies on relatively few carefully curated datasets. Even in these(...TRUNCATED)
https://openreview.net/pdf?id=PvLnIaJbt9
https://openreview.net/forum?id=PvLnIaJbt9
PvLnIaJbt9
"[{\"review_id\": \"B2vdYFQIL3M\", \"paper_id\": \"PvLnIaJbt9\", \"reviewer\": null, \"paper_summary(...TRUNCATED)
2023
ICLR
"# <span id=\"page-0-0\"></span>Metadata Archaeology: Unearthing Data Subsets by Leveraging Training(...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"Metadata Archaeology: Unearthing Data\\nSu(...TRUNCATED)
Light Sampling Field and BRDF Representation for Physically-based Neural Rendering
Jing Yang, Hanyuan Xiao, Wenbin Teng, Yunxuan Cai, Yajie Zhao
"Physically-based rendering (PBR) is key for immersive rendering effects used widely in the industry(...TRUNCATED)
https://openreview.net/pdf?id=yYEb8v65X8
https://openreview.net/forum?id=yYEb8v65X8
yYEb8v65X8
"[{\"review_id\": \"UKUFPSVsxWo\", \"paper_id\": \"yYEb8v65X8\", \"reviewer\": null, \"paper_summary(...TRUNCATED)
2023
ICLR
"# LIGHT SAMPLING FIELD AND BRDF REPRESENTA-TION FOR PHYSICALLY-BASED NEURAL RENDERING\n\nJing Yang (...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"LIGHT SAMPLING FIELD AND BRDF REPRESENTA-\(...TRUNCATED)
"EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with(...TRUNCATED)
Michael Crawshaw, Yajie Bao, Mingrui Liu
" Gradient clipping is an important technique for deep neural networks with exploding gradients, suc(...TRUNCATED)
https://openreview.net/pdf?id=ytZIYmztET
https://openreview.net/forum?id=ytZIYmztET
ytZIYmztET
"[{\"review_id\": \"6MMbTRgNFU\", \"paper_id\": \"ytZIYmztET\", \"reviewer\": null, \"paper_summary\(...TRUNCATED)
2023
ICLR
"## EPISODE: EPISODIC GRADIENT CLIPPING WITH PE-RIODIC RESAMPLED CORRECTIONS FOR FEDERATED LEARNING (...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"EPISODE: EPISODIC GRADIENT CLIPPING WITH P(...TRUNCATED)
On the Expressiveness of Rational ReLU Neural Networks With Bounded Depth
Gennadiy Averkov, Christopher Hojny, Maximilian Merkert
"To confirm that the expressive power of ReLU neural networks grows with their depth, the function $(...TRUNCATED)
https://openreview.net/pdf?id=uREg3OHjLL
https://openreview.net/forum?id=uREg3OHjLL
uREg3OHjLL
"[{\"review_id\": \"zsfIdvmO9S\", \"paper_id\": \"uREg3OHjLL\", \"reviewer\": null, \"paper_summary\(...TRUNCATED)
2025
ICLR
"# ON THE EXPRESSIVENESS OF RATIONAL RELU NEURAL NETWORKS WITH BOUNDED DEPTH\n\nGennadiy Averkov BTU(...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"ON THE EXPRESSIVENESS OF RATIONAL RELU NEU(...TRUNCATED)
Trajectory attention for fine-grained video motion control
Zeqi Xiao, Wenqi Ouyang, Yifan Zhou, Shuai Yang, Lei Yang, Jianlou Si, Xingang Pan
"Recent advancements in video generation have been greatly driven by video diffusion models, with ca(...TRUNCATED)
https://openreview.net/pdf?id=2z1HT5lw5M
https://openreview.net/forum?id=2z1HT5lw5M
2z1HT5lw5M
"[{\"review_id\": \"Dm8G2Y7Zsh\", \"paper_id\": \"2z1HT5lw5M\", \"reviewer\": null, \"paper_summary\(...TRUNCATED)
2025
ICLR
"# TRAJECTORY ATTENTION FOR FINE-GRAINED VIDEO MOTION CONTROL\n\nZeqi Xiao<sup>1</sup>, Wenqi Ouyang(...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"TRAJECTORY ATTENTION FOR FINE-GRAINED VIDE(...TRUNCATED)
Adversarial Imitation Learning with Preferences
Aleksandar Taranovic, Andras Gabor Kupcsik, Niklas Freymuth, Gerhard Neumann
"Designing an accurate and explainable reward function for many Reinforcement Learning tasks is a cu(...TRUNCATED)
https://openreview.net/pdf?id=bhfp5GlDtGe
https://openreview.net/forum?id=bhfp5GlDtGe
bhfp5GlDtGe
"[{\"review_id\": \"FtJO6roG6Hq\", \"paper_id\": \"bhfp5GlDtGe\", \"reviewer\": null, \"paper_summar(...TRUNCATED)
2023
ICLR
"# ADVERSARIAL IMITATION LEARNING WITH PREFER-ENCES\n\nAleksandar Taranovic1,2<sup>∗</sup> , Andra(...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"ADVERSARIAL IMITATION LEARNING WITH PREFER(...TRUNCATED)
Consistency Checks for Language Model Forecasters
"Daniel Paleka, Abhimanyu Pallavi Sudhir, Alejandro Alvarez, Vineeth Bhat, Adam Shen, Evan Wang, Flo(...TRUNCATED)
"Forecasting is a task that is difficult to evaluate: the ground truth can only be known in the futu(...TRUNCATED)
https://openreview.net/pdf?id=r5IXBlTCGc
https://openreview.net/forum?id=r5IXBlTCGc
r5IXBlTCGc
"[{\"review_id\": \"ME5Mo9tI3M\", \"paper_id\": \"r5IXBlTCGc\", \"reviewer\": null, \"paper_summary\(...TRUNCATED)
2025
ICLR
"# CONSISTENCY CHECKS FOR LANGUAGE MODEL FORECASTERS\n\nDaniel Paleka <sup>∗</sup> ETH Zurich Abhi(...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"CONSISTENCY CHECKS FOR LANGUAGE MODEL\\nFO(...TRUNCATED)
Offline RL with Observation Histories: Analyzing and Improving Sample Complexity
Joey Hong, Anca Dragan, Sergey Levine
"Offline reinforcement learning (RL) can in principle synthesize more optimal behavior from a datase(...TRUNCATED)
https://openreview.net/pdf?id=GnOLWS4Llt
https://openreview.net/forum?id=GnOLWS4Llt
GnOLWS4Llt
"[{\"review_id\": \"I0mag2gBuh\", \"paper_id\": \"GnOLWS4Llt\", \"reviewer\": null, \"paper_summary\(...TRUNCATED)
2024
ICLR
"# OFFLINE RL WITH OBSERVATION HISTORIES: ANALYZING AND IMPROVING SAMPLE COMPLEXITY\n\nJoey Hong Anc(...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"OFFLINE RL WITH OBSERVATION HISTORIES:\\nA(...TRUNCATED)
A Neural PDE Solver with Temporal Stencil Modeling
Zhiqing Sun, Yiming Yang, Shinjae Yoo
"Numerical simulation of non-linear partial differential equations plays a crucial role in modeling (...TRUNCATED)
https://openreview.net/pdf?id=Nvlqsofsc6-
https://openreview.net/forum?id=Nvlqsofsc6-
Nvlqsofsc6-
"[{\"review_id\": \"79w4ICIpxY2\", \"paper_id\": \"Nvlqsofsc6-\", \"reviewer\": null, \"paper_summar(...TRUNCATED)
2023
ICLR
"# A NEURAL PDE SOLVER WITH TEMPORAL STENCIL MODELING\n\nAnonymous authors Paper under double-blind (...TRUNCATED)
"{\n \"table_of_contents\": [\n {\n \"title\": \"A NEURAL PDE SOLVER WITH TEMPORAL STENCIL\(...TRUNCATED)
End of preview.

Dataset Card for ICLR Papers with Reviews (2023-2025)

Dataset Description

This dataset contains paper submissions and review data from the International Conference on Learning Representations (ICLR) for the years 2023, 2024, and 2025. The data is sourced from OpenReview, an open peer review platform that hosts the review process for top ML conferences.

Focus on Review Data

This dataset emphasizes the peer review ecosystem surrounding academic papers. Each record includes comprehensive review-related information:

  • Related Notes (related_notes): Contains review discussions, meta-reviews, author responses, and community feedback from the OpenReview platform
  • Full Paper Content: Complete paper text in Markdown format, enabling analysis of the relationship between paper content and review outcomes
  • Review Metadata: Structured metadata including page statistics, table of contents, and document structure analysis

The review data captures the full peer review workflow:

  • Initial submission reviews from multiple reviewers
  • Author rebuttal and response rounds
  • Meta-reviews from area chairs
  • Final decision notifications (Accept/Reject)
  • Post-publication discussions and community comments

This makes the dataset particularly valuable for:

  • Review Quality Analysis: Studying patterns in peer review quality and consistency
  • Decision Prediction: Building models to predict acceptance decisions based on paper content and reviews
  • Review Generation: Training models to generate constructive paper reviews
  • Bias Detection: Analyzing potential biases in the peer review process
  • Scientific Discourse Analysis: Understanding how scientific consensus forms through discussion

Dataset Statistics

  • Total Papers: 8,310
  • Year Coverage: 2023-2025
  • Source: OpenReview platform
  • Conference: ICLR (International Conference on Learning Representations)
  • Content: Full paper text + complete review discussions

Dataset Structure

Data Instances

Each instance represents a paper with its associated review data:

{
  "id": "RUzSobdYy0V",
  "title": "Quantifying and Mitigating the Impact of Label Errors on Model Disparity Metrics",
  "authors": "Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern",
  "abstract": "Errors in labels obtained via human annotation adversely affect...",
  "year": "2023",
  "conference": "ICLR",
  "related_notes": "[Review discussions, meta-reviews, and author responses]",
  "pdf_url": "https://openreview.net/pdf?id=RUzSobdYy0V",
  "source_url": "https://openreview.net/forum?id=RUzSobdYy0V",
  "content": "[Full paper text in Markdown format]",
  "content_meta": "[JSON metadata with TOC and page statistics]"
}

Data Fields

Field           Type     Description
id              string   Unique OpenReview paper ID
title           string   Paper title
authors         string   Author names (comma-separated)
abstract        string   Paper abstract
year            string   Publication year (2023-2025)
conference      string   Conference name (ICLR)
related_notes   string   Review data - includes reviews, meta-reviews, discussions
pdf_url         string   Link to PDF on OpenReview
source_url      string   Link to paper forum on OpenReview
content         string   Full paper content in Markdown
content_meta    string   JSON metadata (TOC, page stats, structure)
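
The content and content_meta fields pair each paper's full Markdown text with structural metadata extracted from the PDF. As a small illustration, the sketch below lists a paper's section titles; it assumes content_meta is a JSON string containing a table_of_contents list whose entries carry a title key (the exact schema may vary between records).

import json

def section_titles(paper):
    """Return the section titles recorded in a paper's content_meta field."""
    meta = json.loads(paper['content_meta'])
    return [entry.get('title', '') for entry in meta.get('table_of_contents', [])]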

Review Data Structure

The related_notes field contains the complete review history from OpenReview (a minimal parsing sketch follows this list), including:

  1. Primary Reviews: Detailed reviews from 3-4 reviewers per paper
  2. Reviewer Ratings: Numerical scores and confidence levels
  3. Author Responses: Rebuttals and clarifications from authors
  4. Meta-Reviews: Summary and recommendations from area chairs
  5. Final Decisions: Accept/reject decisions with rationale
  6. Post-Decision Discussions: Community comments and feedback
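
As a sketch of how these entries might be accessed programmatically, the snippet below walks over one record's related_notes value. It assumes the field is a JSON-encoded list of dictionaries with keys such as paper_summary and overall_score, as in the preview records; if a record stores unstructured text instead, the json.loads call will fail and the raw string should be inspected directly.

import json

def iter_review_entries(related_notes):
    """Yield review/decision entries from a JSON-encoded related_notes value."""
    try:
        entries = json.loads(related_notes)
    except (TypeError, json.JSONDecodeError):
        return  # unstructured or missing; handle separately
    for entry in entries:
        yield entry

# Example: print a one-line digest per entry
# for entry in iter_review_entries(paper['related_notes']):
#     print(entry.get('overall_score'), '-', (entry.get('paper_summary') or '')[:80])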

Data Splits

The dataset does not have predefined splits. Users should create their own train/validation/test splits based on their specific use case.
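
For example, a year-based split is a natural choice when studying how review norms shift over time. The sketch below assumes the records have been loaded into a list of dictionaries (as in the loading example further down) and holds out one year as a test set; a random split keyed on the paper id would work just as well.

def split_by_year(papers, test_years=('2025',)):
    """Split records into train/test lists by publication year (one possible scheme)."""
    train = [p for p in papers if p['year'] not in test_years]
    test = [p for p in papers if p['year'] in test_years]
    return train, test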

Dataset Creation

Curation Rationale

This dataset was created to enable research on understanding and improving the peer review process in machine learning conferences. By combining full paper content with complete review discussions, researchers can:

  • Analyze how paper characteristics relate to review outcomes
  • Study the language and patterns in constructive reviews
  • Build systems to assist reviewers or authors
  • Investigate fairness and bias in peer review

Source Data

The data was collected from the OpenReview platform, which hosts the ICLR review process in an open format. All reviews, discussions, and decisions are publicly available on the OpenReview website.

Data Processing

  1. Paper Content Extraction: Full papers were converted to Markdown format from PDF sources
  2. Review Aggregation: Review discussions were extracted from OpenReview forums
  3. Quality Filtering: Records with missing essential fields (ID, content, or related_notes) were removed (a filter sketch follows this list)
  4. Metadata Extraction: Structural metadata (TOC, page statistics) was extracted from papers
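
To reproduce the quality-filtering step on a local copy of the data, a minimal sketch might look like the following (the curation criteria are not specified beyond the fields named above, so treat this as an approximation):

REQUIRED_FIELDS = ('id', 'content', 'related_notes')

def passes_quality_filter(record):
    """Keep only records whose essential fields are present and non-empty."""
    return all(record.get(field) for field in REQUIRED_FIELDS)

# cleaned = [r for r in records if passes_quality_filter(r)]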

Considerations for Using the Data

Social Impact of the Dataset

This dataset provides transparency into the peer review process, which is typically opaque. By making reviews and discussions publicly available, it enables:

  • Analysis of review quality and consistency
  • Identification of potential biases in evaluation
  • Development of tools to assist the review process
  • Educational resources for understanding peer review

However, users should be aware that:

  • Reviews represent subjective opinions of reviewers
  • Reviewer identities are not included to protect privacy
  • Reviews should be interpreted within the context of the specific conference and time period

Discussion of Biases

The dataset may contain several biases:

  • Reviewer Bias: Different reviewers may have different standards and tendencies
  • Conference-Specific Norms: ICLR review norms may differ from other venues
  • Temporal Shifts: Review criteria may have evolved across 2023-2025
  • Selection Bias: Papers in this dataset represent ICLR submissions, which may not generalize to all ML research

Other Known Limitations

  • Reviewer identities are anonymized to protect privacy
  • Some papers may have incomplete review histories (e.g., withdrawn submissions)
  • The related_notes field contains unstructured text that may require parsing for specific analyses

Additional Information

Dataset Curators

This dataset was compiled from publicly available OpenReview data.

Licensing Information

The papers and reviews in this dataset are subject to the copyright and terms of use of the OpenReview platform and the respective authors.

Citation Information

If you use this dataset, please cite:

@dataset{iclr_papers_with_reviews,
  title = {ICLR Papers with Reviews (2023-2025)},
  author = {Dataset Curator},
  year = {2025},
  note = {Compiled from OpenReview platform data}
}

Contributions

This dataset was created by extracting and aggregating publicly available data from the OpenReview platform for research purposes.


Usage Examples

Loading the Dataset

import json

# Load from JSONL
with open('ICLR_merged_cleaned_huggingface.jsonl', 'r', encoding='utf-8') as f:
    for line in f:
        paper = json.loads(line)

        print(f"Title: {paper['title']}")
        print(f"Year: {paper['year']}")
        print(f"Review Data: {paper['related_notes'][:200]}...")
        break
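
If the dataset is hosted on the Hugging Face Hub, it can also be loaded through the datasets library. The repository id below is a placeholder for wherever this dataset lives on the Hub, and a single default split named train is assumed; adjust both to match the actual listing.

from datasets import load_dataset

# Placeholder repository id; replace with the dataset's actual Hub location.
dataset = load_dataset('your-namespace/iclr-papers-with-reviews', split='train')

print(dataset[0]['title'])
print(dataset[0]['year'])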

Analyzing Review Content

import json

# Extract reviews for analysis
def extract_reviews(paper):
    """Parse review-related information from the related_notes field."""
    notes = paper['related_notes']

    # related_notes is typically a JSON-encoded list of review/decision entries;
    # fall back to an empty list if it is missing or not valid JSON.
    try:
        entries = json.loads(notes) if notes else []
    except json.JSONDecodeError:
        entries = []

    # Entries carrying review text are treated as reviews; an entry whose
    # overall_score mentions Accept or Reject is treated as the decision.
    reviews = [e for e in entries if e.get('paper_summary') or e.get('comments')]
    decision = next((e['overall_score'] for e in entries
                     if any(w in str(e.get('overall_score') or '')
                            for w in ('Accept', 'Reject'))), None)

    return {
        'paper_id': paper['id'],
        'title': paper['title'],
        'reviews': reviews,
        'decision': decision
    }
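
A possible way to apply this across the whole file, reusing the JSONL loading pattern from above (adjust the path if your local filename differs):

with open('ICLR_merged_cleaned_huggingface.jsonl', 'r', encoding='utf-8') as f:
    parsed = [extract_reviews(json.loads(line)) for line in f]

accepted = sum(1 for p in parsed if p['decision'] and 'Accept' in p['decision'])
print(f"Records whose parsed decision mentions Accept: {accepted} / {len(parsed)}")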

Acknowledgments

This dataset would not be possible without the open peer review platform provided by OpenReview and the contributions of the ICLR community.
