<?xml version="1.0" encoding="UTF-8"?>
<resource xmlns="http://datacite.org/schema/kernel-4" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4/metadata.xsd"><identifier identifierType="DOI">10.5287/bodleian:vmJOOm7KD</identifier><creators><creator><creatorName nameType="Personal">Koolschijn RS</creatorName><givenName>Ren&#xE9;e S</givenName><familyName>Koolschijn</familyName><nameIdentifier nameIdentifierScheme="ORCID" schemeURI="https://orcid.org">https://orcid.org/0000-0001-9553-4213</nameIdentifier></creator><creator><creatorName nameType="Personal">Shpektor A</creatorName><givenName>Anna</givenName><familyName>Shpektor</familyName></creator><creator><creatorName nameType="Personal">Emir UE</creatorName><givenName>U E</givenName><familyName>Emir</familyName><nameIdentifier nameIdentifierScheme="ORCID" schemeURI="https://orcid.org">https://orcid.org/0000-0001-5376-0431</nameIdentifier></creator><creator><creatorName nameType="Personal">Barron HC</creatorName><givenName>H C</givenName><familyName>Barron</familyName><nameIdentifier nameIdentifierScheme="ORCID" schemeURI="https://orcid.org">https://orcid.org/0000-0002-4575-6472</nameIdentifier></creator></creators><titles><title xml:lang="en">Combined fMRI-fMRS dataset in an inference task in humans</title></titles><resourceType resourceTypeGeneral="Dataset">Combined fMRI-fMRS dataset in an inference task in humans</resourceType><publisher>University of Oxford</publisher><publicationYear>2021</publicationYear><dates><date dateType="Issued">2021</date></dates><language>en</language><rightsList><rights rightsURI="https://creativecommons.org/licenses/by-sa/4.0/legalcode">Creative Commons Attribution Share Alike 4.0 International</rights></rightsList><descriptions><description xml:lang="en" descriptionType="TechnicalInfo"><![CDATA[This dataset consists of the following components:


	fMRI data showing group maps for contrasts of interest (NIfTI)
	Raw fMRS data from 19 subjects (DICOM)
	Preprocessed fMRS data from 19 subjects, preprocessed in MRspa (MAT)
	Behavioural data from the inference task performed during the MRI scan (MAT)
	Behavioural data from the associative test performed after the MRI scan (MAT)


Participants performed a three-stage inference task across three days. On day 1, participants learned up to 80 auditory-visual associations. On day 2, each visual cue was paired with either a rewarding (set 1, monetary reward) or neutral outcome (set 2, woodchip). On day 3, auditory cues were presented in isolation (‘inference test’), without visual cues or outcomes, and we measured evidence for inference from the auditory cues to the appropriate outcome. Participants performed the inference test in an MRI scanner where combined fMRI-fMRS data were acquired. After the MRI scan, participants completed a surprise associative test for the auditory-visual associations learned on day 1.

fMRI data:

SPM group maps in MNI space showing:


	1. BOLD signal on inference test trials with a contrast between auditory cues where the associated visual cue was ‘remembered’ versus ‘forgotten’
	2. Correlation between the contrast described in (1) and V1 fMRS measures of the glu/GABA ratio for ‘remembered’ versus ‘forgotten’ trials in the inference test
	3. BOLD signal on inference test trials contrasted with the BOLD signal on conditioning trials, smoothed using a 5 mm kernel prior to the second-level analysis
	4. BOLD signal on inference test trials contrasted with the BOLD signal on conditioning trials, smoothed using a 5 mm kernel at the first-level analysis
	5. BOLD signal on inference test trials contrasted with the BOLD signal on conditioning trials, smoothed using an 8 mm kernel at the first-level analysis
	6. BOLD signal on inference test trials with a contrast between auditory cues where the associated visual cue was ‘remembered’ versus ‘forgotten’, smoothed using an 8 mm kernel at the first-level analysis
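
As an illustration only, the group maps listed above can be inspected with standard NIfTI tooling; the sketch below is written in Python with nibabel, and the filename is a placeholder rather than the name of an actual file in this dataset.

import nibabel as nib
import numpy as np

# Placeholder filename; substitute one of the group-map files from this dataset.
img = nib.load("groupmap_remembered_vs_forgotten.nii.gz")
stat = img.get_fdata()

print("map dimensions (MNI space):", stat.shape)
print("voxel-to-world affine:")
print(img.affine)
print("peak statistic:", float(np.nanmax(stat)))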


Regions of interest (ROI) in MNI space:


	Hippocampal ROI
	Parietal-occipital cortex ROI
	Brainstem ROI
	Cumulative map of MRS voxel position across participants
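
Assuming the ROI masks are stored on the same MNI grid as the group maps, a mean statistic within an ROI can be extracted as sketched below (Python with nibabel and numpy; the filenames are placeholders, not the actual file names in this dataset).

import nibabel as nib
import numpy as np

# Placeholder filenames; substitute the actual group-map and ROI files.
stat_img = nib.load("groupmap_remembered_vs_forgotten.nii.gz")
roi_img = nib.load("hippocampal_roi.nii.gz")

stat = stat_img.get_fdata()
mask = roi_img.get_fdata() > 0.5   # treat the ROI image as a binary mask

# This assumes both images share the same grid; resample the ROI first if they do not.
print("ROI voxels:", int(mask.sum()))
print("mean statistic within ROI:", float(np.nanmean(stat[mask])))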


fMRS data:

The raw fMRS data are included in DICOM format. The preprocessed data are included as one MATLAB structure per subject (see the loading sketch after the field list), containing the following fields:


	Arrayedmetab: preprocessed spectra
	ws: water signal
	procsteps: preprocessing information
	ntmetab: total number of spectra
	params: acquisition parameters
	TR of each acquisition
	Block length: number of spectra acquired in each block
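
As a rough guide, the per-subject structure can also be read outside MATLAB with scipy; the filename and the name of the top-level variable below are assumptions and should be checked against the actual .mat files.

from scipy.io import loadmat

# Placeholder filename; the top-level variable name is also an assumption.
mat = loadmat("subject01_fmrs_preprocessed.mat", squeeze_me=True, struct_as_record=False)
fmrs = mat["data"]                 # adjust to the variable name actually stored in the file

spectra = fmrs.Arrayedmetab        # preprocessed spectra
water = fmrs.ws                    # water signal
n_spectra = fmrs.ntmetab           # total number of spectra
params = fmrs.params               # acquisition parameters

print("number of spectra:", n_spectra)
print("spectra array shape:", getattr(spectra, "shape", None))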



Behavioural data from the inference task performed in the MRI scanner:

On each trial of the inference task, participants were presented with an auditory cue before being asked whether they would like to look in the wooden box (‘yes’ or ‘no’) where they had previously found the outcomes. The behavioural data from the inference test include the following columns (see the loading sketch after the column list):


	1. Auditory stimulus: 0 (none) for conditioning trials, 1-80 for inference test trials
	2. Visual stimulus associated with the presented auditory stimulus (1-4)
	3. Migrating visual stimulus (1: no, 4: yes)
	4. Rewarded visual stimulus (0: no, 1: yes)
	5. Set during learning (1-8)
	6. Video number for inference test trials (1-32)
	7. Video number for conditioning trials (1-16)
	8. Trial type (2: conditioning, 3: inference)
	9. Trial start time
	10. Auditory stimulus/video play start time
	11. Question display time (inference trials) or outcome presentation time (conditioning trials)
	12. Trial end time
	13. Reaction time for inference test trials
	14. Response accuracy (0: incorrect, 1: correct)
	15. Wall on which the visual stimulus was presented on conditioning trials
	16. Inter-trial interval
	17. Button pressed (0: no, 1: yes)
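
Assuming the trial data are stored as a trials-by-columns numeric matrix with the columns in the order listed above, accuracy and reaction times on inference trials could be summarised as below; the filename and variable name are placeholders.

import numpy as np
from scipy.io import loadmat

# Placeholder filename and variable name; column order assumed to match the list above.
mat = loadmat("subject01_inference_task.mat", squeeze_me=True)
trials = np.atleast_2d(mat["trialdata"])

TRIAL_TYPE, RT, CORRECT = 7, 12, 13      # 0-based indices of columns 8, 13 and 14

inference = trials[:, TRIAL_TYPE] == 3   # trial type 3 = inference test
print("inference trials:", int(inference.sum()))
print("mean accuracy:", float(np.nanmean(trials[inference, CORRECT])))
print("mean reaction time:", float(np.nanmean(trials[inference, RT])))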



Behavioural data from post MRI-scan associative test:

On each trial of the associative test, participants were presented with an auditory cue and then asked which of the four visual stimuli was associated with it. The columns contain the following information (a similar loading sketch follows the list):

1. Auditory stimulus number (1-80, 3 repeats)
2. Visual stimulus associated with the presented auditory stimulus (1-4)
3. Migrating visual stimulus (1: no, 4: yes)
4. Rewarded visual stimulus (0: no, 1: yes)
5-8. Visual stimulus positions (top left/right, bottom left/right; 1-4)
9-12. Wall on which each visual stimulus is presented (1-4)
13-16. Angle of the visual stimulus still image (2-30)
17. Background image presented during the auditory stimulus (2-57)
18. Chosen visual stimulus (1-4)
19. Reaction time
20. Response accuracy (0: incorrect, 1: correct)
21. Overall performance on the presented visual stimulus
22. Overall performance on the presented auditory stimulus (3 presentations)
23. Set during learning (1-8)
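
Under the same assumptions (a trials-by-columns matrix ordered as listed above; placeholder filename and variable name), overall and per-cue performance on the associative test could be computed as follows.

import numpy as np
from scipy.io import loadmat

# Placeholder filename and variable name; column order assumed to match the list above.
mat = loadmat("subject01_associative_test.mat", squeeze_me=True)
trials = np.atleast_2d(mat["assocdata"])

AUD_STIM, CORRECT = 0, 19                # 0-based indices of columns 1 and 20

print("overall accuracy:", float(np.nanmean(trials[:, CORRECT])))

# Accuracy per auditory cue across its three repeats
for stim in np.unique(trials[:, AUD_STIM]):
    rows = trials[:, AUD_STIM] == stim
    print("cue", int(stim), "accuracy:", float(np.nanmean(trials[rows, CORRECT])))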


For a more detailed description of the scanning sequence and behavioural tasks, see the associated publication.
]]></description></descriptions><fundingReferences><fundingReference><funderName>EPSRC/MRC, UKRI</funderName><awardNumber>EP/L016052/1</awardNumber></fundingReference><fundingReference><funderName>Wellcome Trust</funderName><awardNumber>203836/Z/16/Z</awardNumber></fundingReference><fundingReference><funderName>Royal Society Dorothy Hodgkin Research Fellowship</funderName></fundingReference><fundingReference><funderName>Biotechnology and Biological Sciences Research Council, UKRI</funderName><awardNumber>BB/N0059TX/1</awardNumber></fundingReference><fundingReference><funderName>Medical Research Council, UKRI</funderName><awardNumber>MC_UU_12024/3</awardNumber></fundingReference><fundingReference><funderName>John Fell Oxford University Press Research Fund</funderName><awardNumber>153/046</awardNumber></fundingReference><fundingReference><funderName>Wellcome Centre for Integrative Neuroimaging</funderName><awardNumber>Seed grant</awardNumber></fundingReference><fundingReference><funderName>Junior Research Fellowship from Merton College</funderName></fundingReference><fundingReference><funderName>Wellcome Trust</funderName><awardNumber>203139/Z/16/Z</awardNumber></fundingReference></fundingReferences></resource>
