Keith Rayner Eye Movements in Reading Data Collection
About this Collection

This collection consists of eye movement data from published studies conducted in Keith Rayner’s Eyetracking Lab in the Department of Psychology at UCSD. The collection does not represent a single project, but rather a snapshot of the work produced by Rayner’s highly productive lab from his arrival at UCSD in 2008 until his death in 2015. During this time, Rayner published more than 130 papers with collaborators at UCSD and around the world. Included in this collection are data and materials from 29 studies that were conducted in-house, so the collection reflects the subset of work carried out primarily by Rayner’s graduate students, post-docs, and research assistants at UCSD.

Rayner’s academic interests were broad, but much of his work focused on how visual and cognitive processes guide eye movements in reading, visual search, and scene perception. This collection consists of data from reading studies and covers a range of topics (e.g., parafoveal processing, phonological coding, lexical ambiguity, word predictability), paradigms (e.g., gaze-contingent display change, proofreading), and populations (e.g., undergraduate students, deaf adult readers, bilinguals). Each package is a complete set of data and materials from a published study, some of which contain multiple experiments. Our goals are to make a rich source of information accessible to interested researchers to satisfy a variety of inquiries, and to preserve some of the work from Rayner’s lab for the future.

Each data package contains (1) a “Readme” file with a detailed description of the package contents, (2) a “Data” directory, which in most cases contains both raw and processed data, and (3) a “Materials” directory containing the relevant experiment scripts or files detailing the words and/or sentences that were presented. In addition, some packages contain data processing and analysis scripts (e.g., .R files) to facilitate reanalysis.
The Readme file contains contact information for both the corresponding author of the published paper and the lab member who compiled the data package. Questions about the data package itself should be directed to the compiler, while questions about the design of materials, data analysis, or interpretation should be directed to the corresponding author.

Rayner’s contributions to experimental cognitive psychology are many, including advancing theory, discovering phenomena, and developing novel eyetracking methodologies. Eyetrackers enable precise measurement of where and for how long readers position their eyes (i.e., during fixations) and where they move next (i.e., via saccades). By knowing where people look, researchers can control what readers see both within central vision and outside of it (i.e., in the parafovea) on each fixation via the gaze-contingent boundary paradigm (Rayner, 1975). With this paradigm, Rayner demonstrated that readers frequently access information about an upcoming word before looking at it, and that an incorrect parafoveal preview (one that does not match the word that appears once it is fixated) slows reading. Many of the packages in this collection contain data from experiments employing this paradigm. Throughout his career, Rayner and his family of researchers (students, post-docs, and local and international collaborators) made a number of important advances in our understanding of how eye movements reflect information processing in reading.
The collection contains studies comparing different groups of readers (keywords: College students, Adults (25–45), Seniors (65–95), Deaf individuals, Bilingual individuals), using different experimental methods (keywords: Gaze-contingent display change, Moving window paradigm, Proofreading), and examining various empirical and theoretical issues (keywords: Word identification, Sentence processing, Word frequency, Word predictability, Sentence context, Lexical ambiguity, Preview benefit, Parafoveal processing, Reading ability).