Environments
annubes
AnnubesEnv
AnnubesEnv(session: dict[str, float] | None = None, stim_intensities: list[float] | None = None, stim_time: int = 1000, catch_prob: float = 0.5, max_sequential: int | None = None, fix_intensity: float = 0, fix_time: Any = 500, iti: Any = 0, dt: int = 100, tau: int = 100, output_behavior: list[float] | None = None, noise_std: float = 0.01, rewards: dict[str, float] | None = None, random_seed: int | None = None)
Bases: TrialEnv
General class for the Annubes type of tasks.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
session | dict[str, float] \| None | Configuration of the trials that can appear during a session. It is given by a dictionary representing the ratio (values) of the different trials (keys) within the task. Trials with a single modality (e.g., a visual trial) must be represented by single characters, while trials with multiple modalities (e.g., an audiovisual trial) are represented by the character combination of those trials. Note that values are read relative to each other, such that e.g. … | None |
stim_intensities | list[float] \| None | List of possible intensity values of each stimulus, when the stimulus is present. Note that when the stimulus is not present, the intensity is set to 0. Defaults to [0.8, 0.9, 1]. | None |
stim_time | int | Duration of each stimulus in ms. Defaults to 1000. | 1000 |
catch_prob | float | Probability of catch trials in the session. Must be between 0 and 1 (inclusive). Defaults to 0.5. | 0.5 |
max_sequential | int \| None | Maximum number of sequential trials of the same modality. It applies only to the modalities defined in `session`. | None |
fix_intensity | float | Intensity of the input signal during fixation. Defaults to 0. | 0 |
fix_time | Any | Fixation time specification. Can be one of the following: a number (int or float) giving a fixed duration in milliseconds; a callable that returns the duration when called; a list of numbers, from which the duration is chosen at random; or a tuple specifying a distribution: ("uniform", (min, max)) for a uniform distribution between min and max; ("choice", [options]) for a random choice from the given options; ("truncated_exponential", [parameters]) for a truncated exponential distribution; ("constant", value) to always return the given value; ("until", end_time) to set the duration so as to reach the specified end time. The final duration is rounded down to the nearest multiple of the simulation timestep (dt). Note that the duration of each input and output signal is increased by this time. Defaults to 500. | 500 |
iti | Any | Inter-trial interval, or time window between sequential trials, in ms. Same format as `fix_time`. | 0 |
dt | int | Time step in ms. Defaults to 100. | 100 |
tau | int | Time constant in ms. Defaults to 100. | 100 |
output_behavior | list[float] \| None | List of possible intensity values of the behavioral output. Currently only the smallest and largest values of this list are used. Defaults to [0, 1]. | None |
noise_std | float | Standard deviation of the input noise. Defaults to 0.01. | 0.01 |
rewards | dict[str, float] \| None | Dictionary of rewards for different outcomes. The keys are "abort", "correct", and "fail". Defaults to {"abort": -0.1, "correct": +1.0, "fail": 0.0}. | None |
random_seed | int \| None | Seed for numpy's random number generator (rng). If an int is given, it will be used to seed the generator. | None |
Source code in neurogym/envs/annubes.py
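For orientation, a minimal usage sketch instantiating AnnubesEnv directly from neurogym/envs/annubes.py with a custom session ratio and a tuple-style fix_time specification; the session keys/values and the Gymnasium-style 5-tuple returned by step() are assumptions, not part of this reference.

```python
from neurogym.envs.annubes import AnnubesEnv

env = AnnubesEnv(
    session={"v": 1, "a": 3},           # assumed ratio of visual to auditory trials
    catch_prob=0.3,                     # fewer catch trials than the 0.5 default
    fix_time=("uniform", (300, 700)),   # fixation drawn uniformly between 300 and 700 ms
    random_seed=42,
)

ob, _ = env.reset()                     # assumes Gymnasium-style (ob, info) return
for _ in range(10):
    action = env.action_space.sample()  # stand-in for an agent's policy
    ob, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        ob, _ = env.reset()
```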
antireach
Anti-reach or anti-saccade task.
AntiReach
Bases: TrialEnv
Anti-response task.
During the fixation period, the agent fixates on a fixation point. During the following stimulus period, the agent is then shown a stimulus away from the fixation point. Finally, the agent needs to respond in the opposite direction of the stimulus during the decision period.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
anti | bool | If True, requires an anti-response. If False, requires a pro-response, i.e., a response towards the stimulus. | True |
Source code in neurogym/envs/antireach.py
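A minimal sketch of selecting the pro- versus anti-response variant, assuming AntiReach can be instantiated directly from neurogym/envs/antireach.py:

```python
# Sketch only: selects the response rule documented in the table above.
from neurogym.envs.antireach import AntiReach

anti_env = AntiReach(anti=True)   # respond opposite to the stimulus (default)
pro_env = AntiReach(anti=False)   # respond towards the stimulus

print(anti_env.observation_space, anti_env.action_space)
```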
bandit
Multi-arm Bandit task.
Bandit
Bandit(dt: int = 100, n: int = 2, p: tuple[float, ...] | list[float] = (0.5, 0.5), rewards: None | list[float] | ndarray = None, timing: None | dict = None)
Bases: TrialEnv
Multi-arm bandit task.
On each trial, the agent is presented with multiple choices. Each option produces a reward of a certain magnitude given a certain probability.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n | int | Number of choices (arms). | 2 |
p | tuple[float, ...] \| list[float] | Tuple of length n; probability of each arm leading to reward. | (0.5, 0.5) |
rewards | None \| list[float] \| ndarray | Array-like of length n; reward magnitude of each option when rewarded. | None |
Source code in neurogym/envs/bandit.py
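Given the constructor signature above, a sketch of a three-armed bandit; the probabilities and reward magnitudes here are illustrative, not defaults.

```python
from neurogym.envs.bandit import Bandit

# Three arms with different reward probabilities and unit reward magnitudes.
env = Bandit(dt=100, n=3, p=(0.2, 0.5, 0.8), rewards=[1.0, 1.0, 1.0])

env.reset()
for _ in range(5):
    action = env.action_space.sample()  # stand-in for an agent's policy
    step_result = env.step(action)      # (ob, reward, terminated, truncated, info) under Gymnasium
```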
contextdecisionmaking
SingleContextDecisionMaking
Bases: TrialEnv
Context-dependent decision-making task.
The agent simultaneously receives stimulus inputs from two modalities (for example, a colored random dot motion pattern with color and motion modalities). The agent needs to make a perceptual decision based on only one of the two modalities, while ignoring the other. The agent reports its decision during the decision period, with an optional delay period between the stimulus period and the decision period. The relevant modality is not explicitly signaled.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
context | int | 0 or 1 for the two contexts (rules). If 0, the agent needs to focus on modality 0 (the first one). | 0 |
Source code in neurogym/envs/contextdecisionmaking.py
ContextDecisionMaking
Bases: TrialEnv
Context-dependent decision-making task.
The agent simultaneously receives stimulus inputs from two modalities (for example, a colored random dot motion pattern with color and motion modalities). The agent needs to make a perceptual decision based on only one of the two modalities, while ignoring the other. The relevant modality is explicitly indicated by a rule signal.
Source code in neurogym/envs/contextdecisionmaking.py
dawtwostep
DawTwoStep
Bases: TrialEnv
Daw Two-step task.
On each trial, an initial choice between two options leads to one of two second-stage states. In turn, both of these demand another two-option choice, each of which is associated with a different chance of receiving reward.
Source code in neurogym/envs/dawtwostep.py
delaycomparison
DelayComparison
Bases: TrialEnv
Delayed comparison.
The agent needs to compare the magnitudes of two stimuli that are separated by a delay period. The agent reports which stimulus was stronger during the decision period.
Source code in neurogym/envs/delaycomparison.py
represent
Input representation of stimulus value.
delaymatchcategory
DelayMatchCategory
Bases: TrialEnv
Delayed match-to-category task.
A sample stimulus is shown during the sample period. The stimulus is characterized by a one-dimensional variable, such as its orientation between 0 and 360 degrees. This one-dimensional variable is separated into two categories (for example, 0-180 degrees and 180-360 degrees). After a delay period, a test stimulus is shown. The agent needs to determine whether the sample and the test stimuli belong to the same category, and report that decision during the decision period.
Source code in neurogym/envs/delaymatchcategory.py
delaymatchsample
DelayMatchSample
Bases: TrialEnv
Delayed match-to-sample task.
A sample stimulus is shown during the sample period. The stimulus is characterized by a one-dimensional variable, such as its orientation between 0 and 360 degrees. After a delay period, a test stimulus is shown. The agent needs to determine whether the sample and the test stimuli are equal, and report that decision during the decision period.
Source code in neurogym/envs/delaymatchsample.py
DelayMatchSampleDistractor1D
Bases: TrialEnv
Delayed match-to-sample with multiple, potentially repeating distractors.
A sample stimulus is shown during the sample period. The stimulus is characterized by a one-dimensional variable, such as its orientation between 0 and 360 degrees. After a delay period, the first test stimulus is shown. The agent needs to determine whether the sample and this test stimulus are equal. If so, it needs to produce the match response. If the first test is not equal to the sample stimulus, another delay period and then a second test stimulus follow, and so on.
Source code in neurogym/envs/delaymatchsample.py
delaypairedassociation
DelayPairedAssociation
Bases: TrialEnv
Delayed paired-association task.
The agent is shown a pair of two stimuli separated by a delay period. For half of the stimulus pairs shown, the agent should choose the Go response. The agent is rewarded if it chooses the Go response correctly.
Source code in neurogym/envs/delaypairedassociation.py
detection
Created on Mon Jan 27 11:00:26 2020.
@author: martafradera
Detection
Bases: TrialEnv
The agent has to GO if a stimulus is presented.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
delay | int | If not None, the delay in ms from the start of the stimulus period until the actual stimulus is presented. Otherwise, the delay is drawn from a uniform distribution. | None |
stim_dur | int | Stimulus duration in ms. | 100 |
Source code in neurogym/envs/detection.py
dualdelaymatchsample
DualDelayMatchSample
Bases: TrialEnv
Two-item Delay-match-to-sample.
The trial starts with a fixation period. Then, during the sample period, two sample stimuli are shown simultaneously. After the first delay period, a cue is shown, indicating which sample stimulus will be tested. The first test stimulus is then shown, and the agent needs to report whether it matches the cued sample stimulus. Another delay and test period follow, and the agent needs to report whether the other sample stimulus matches the second test stimulus.
Source code in neurogym/envs/dualdelaymatchsample.py
economicdecisionmaking
EconomicDecisionMaking
Bases: TrialEnv
Economic decision making task.
An agent chooses between two options. Each option offers a certain amount of juice. Its amount is indicated by the stimulus. The two options offer different types of juice, and the agent prefers one over the other.
Source code in neurogym/envs/economicdecisionmaking.py
gonogo
GoNogo
Bases: TrialEnv
Go/No-go task.
A stimulus is shown during the stimulus period. The stimulus period is followed by a delay period, and then a decision period. If the stimulus is a Go stimulus, then the subject should choose the action Go during the decision period; otherwise, the subject should maintain fixation.
Source code in neurogym/envs/gonogo.py
hierarchicalreasoning
Hierarchical reasoning tasks.
HierarchicalReasoning
Bases: TrialEnv
Hierarchical reasoning of rules.
On each trial, the subject receives two flashes separated by a delay period. The subject needs to judge whether the duration of this delay period is shorter than a threshold. Both flashes appear at the same location on each trial. For one trial type, the network should report its decision by going to the location of the flashes if the delay is shorter than the threshold. In the other trial type, the network should go to the opposite direction of the flashes if the delay is short. The two types of trials are alternated across blocks, and the block transition is unannounced.
Source code in neurogym/envs/hierarchicalreasoning.py
intervaldiscrimination
IntervalDiscrimination
Bases: TrialEnv
Comparing the time length of two stimuli.
Two stimuli are shown sequentially, separated by a delay period. The duration of each stimulus is randomly sampled on each trial. The subject needs to judge which stimulus has a longer duration, and report its decision during the decision period by choosing one of the two choice options.
Source code in neurogym/envs/intervaldiscrimination.py
multisensory
Multi-Sensory Integration.
MultiSensoryIntegration
Bases: TrialEnv
Multi-sensory integration.
Two stimuli are shown in two input modalities. Each stimulus points to one of the possible responses with a certain strength (coherence). The correct choice is the response with the highest summed strength from both stimuli. The agent is therefore encouraged to integrate information from both modalities equally.
Source code in neurogym/envs/multisensory.py
null
perceptualdecisionmaking
PerceptualDecisionMaking
Bases: TrialEnv
Two-alternative forced choice task: subject has to integrate two stimuli to decide which is higher on average.
A noisy stimulus is shown during the stimulus period. The strength (coherence) of the stimulus is randomly sampled every trial. Because the stimulus is noisy, the agent is encouraged to integrate the stimulus over time.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
cohs | list of float | Coherence levels controlling the difficulty of the task. | None |
sigma | float | Input noise level. | 1.0 |
dim_ring | int | Dimension of the ring input and output. | 2 |
Source code in neurogym/envs/perceptualdecisionmaking.py
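As a sketch of passing these arguments through the registry (assumptions: the task is registered under the ID "PerceptualDecisionMaking-v0", neurogym exposes a make() helper that forwards keyword arguments, and the coherence values shown are illustrative rather than defaults):

```python
import neurogym as ngym

# Make the task harder: fewer/lower coherence levels and more input noise.
env = ngym.make(
    "PerceptualDecisionMaking-v0",
    cohs=[0, 6.4, 12.8],  # illustrative coherence levels
    sigma=1.5,            # input noise level
    dim_ring=2,           # two-alternative ring input/output
)
print(env.observation_space, env.action_space)
```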
PerceptualDecisionMakingDelayResponse
Bases: TrialEnv
Perceptual decision-making with delayed responses.
Agents have to integrate two stimuli and report which one is larger on average after a delay.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
stim_scale | float | Controls the difficulty of the experiment. | 1.0 |
Source code in neurogym/envs/perceptualdecisionmaking.py
PulseDecisionMaking
Bases: TrialEnv
Pulse-based decision making task.
Discrete stimuli are presented briefly as pulses.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
p_pulse | array-like | Probability of pulses for each choice. | (0.3, 0.7) |
n_bin | int | Number of stimulus bins. | 6 |
Source code in neurogym/envs/perceptualdecisionmaking.py
postdecisionwager
PostDecisionWager
Bases: TrialEnv
Post-decision wagering task assessing confidence.
The agent first performs a perceptual discrimination task (see for more details the PerceptualDecisionMaking task). On a random half of the trials, the agent is given the option to abort the sensory discrimination and to choose instead a sure-bet option that guarantees a small reward. Therefore, the agent is encouraged to choose the sure-bet option when it is uncertain about its perceptual decision.
Source code in neurogym/envs/postdecisionwager.py
probabilisticreasoning
Random dot motion task.
ProbabilisticReasoning
Bases: TrialEnv
Probabilistic reasoning.
The agent is shown a sequence of stimuli. Each stimulus is associated with a certain log-likelihood of the correct response being one choice versus the other. The final log-likelihood of the target response being, for example, option 1, is the sum of all the log-likelihoods associated with the presented stimuli. A delay period separates each stimulus, so the agent is encouraged to learn the log-likelihood associations and integrate these values over time within a trial.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
shape_weight | array-like | Evidence weight of each shape. | None |
n_loc | int | Number of locations at which shapes are shown. | 4 |
Source code in neurogym/envs/probabilisticreasoning.py
reaching
Reaching to target.
Reaching1D
Bases: TrialEnv
Reaching to the stimulus.
The agent is shown a stimulus during the fixation period. The stimulus encodes a one-dimensional variable such as a movement direction. At the end of the fixation period, the agent needs to respond by reaching towards the stimulus direction.
Source code in neurogym/envs/reaching.py
post_step
Reaching1DWithSelfDistraction
Bases: TrialEnv
Reaching with self distraction.
In this task, the reaching state itself generates strong inputs that overshadow the actual target input. This task is inspired by behavior in electric fish, where the electric sensing organ is distracted by discharges from the fish's own electric organ during active sensing. Similar phenomena occur in bats.
Source code in neurogym/envs/reaching.py
post_step
reachingdelayresponse
ReachingDelayResponse
Bases: TrialEnv
Reaching task with a delay period.
A reaching direction is presented by the stimulus during the stimulus period. After a delay period, the agent needs to respond in the direction of the stimulus during the decision period.
Source code in neurogym/envs/reachingdelayresponse.py
readysetgo
Ready-set-go task.
ReadySetGo
Bases: TrialEnv
Agents have to measure and produce different time intervals.
A stimulus is briefly shown during a ready period, then again during a set period. The ready and set periods are separated by a measure period, the duration of which is randomly sampled on each trial. The agent is required to produce a response after the set cue such that the interval between the response and the set cue is as close as possible to the duration of the measure period.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
gain | int | Controls the measure that the agent has to produce. | 1 |
prod_margin | | Controls the interval around the ground truth production time within which the agent receives proportional reward. | 0.2 |
Source code in neurogym/envs/readysetgo.py
MotorTiming
Bases: TrialEnv
Agents have to produce different time intervals using different effectors (actions).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prod_margin | | Controls the interval around the ground truth production time within which the agent receives proportional reward. | 0.2 |
Source code in neurogym/envs/readysetgo.py
OneTwoThreeGo
Bases: TrialEnv
Agents reproduce time intervals based on two samples.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prod_margin | | Controls the interval around the ground truth production time within which the agent receives proportional reward. | 0.2 |
Source code in neurogym/envs/readysetgo.py
registration
all_envs
Return a list of all envs in neurogym.
Source code in neurogym/envs/registration.py
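A quick sketch of listing the registered tasks, assuming all_envs() can be imported from neurogym.envs.registration as documented here and takes no required arguments:

```python
from neurogym.envs.registration import all_envs

# Print the ID of every environment registered in neurogym.
for env_id in all_envs():
    print(env_id)
```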
spatialsuppressmotion
SpatialSuppressMotion
Bases: TrialEnv
Spatial suppression motion task.
This task is useful to study center-surround interaction in monkey MT and human psychophysical performance in motion perception.
The task is derived from Tadin et al. (Nature, 2003). In this task, there is no fixation or decision stage. We only present a stimulus, and the subject needs to perform a 4-AFC motion direction judgement. The ground truth is the probabilities of choosing the four directions at a given time point. The probabilities depend on stimulus contrast and size, and they are derived from empirically measured human psychophysical performance.
In this version, the input size is 4 (directions) x 8 (size) = 32 neurons. This setting aims to simulate four pools (8 neurons in each pool) of neurons that are selective for four directions.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dt | | Milliseconds per image frame; default 8.3 (given a 120 Hz monitor). | required |
win_size | | Size per image frame. | required |
timing | | Stimulus duration in milliseconds; default 8.3 * 36 frames, roughly 300 ms. This is the longest duration we need (i.e., the probability reaches ceiling). | required |
Note: please use the default seq_len = 36 frames when creating the dataset object.
FIXME: find a more stable way of enforcing the above.
Source code in neurogym/envs/spatialsuppressmotion.py
getgroundtruth
Utility function to obtain the ground truth probabilities for the four directions.
The input trial is a dict that contains the relevant fields.
We output a (4,) tuple indicating the probabilities of perceiving the left/right/up/down direction. This label comes from empirically measured human performance.
Source code in neurogym/envs/spatialsuppressmotion.py
tonedetection
Auditory tone detection task.
ToneDetection
Bases: TrialEnv
A subject is asked to report whether a pure tone is embedded within background noise.
If yes, the subject should indicate the position of the tone. The tone lasts 50 ms and can appear at 500 ms, 1000 ms, or 1500 ms. The tone is embedded within noise.
By Ru-Yuan Zhang (ruyuanzhang@gmail.com)
Note that in this version we did not include a fixation period, as we mainly aim to model human data.
For an animal version of this task, please consider including fixation and saccade cues. See https://www.nature.com/articles/nn1386
Note that the output labels are of shape (seq_len, batch_size). For a human perceptual task, you can simply run labels = labels[-1, :] to get the final output.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dt | | Time step in milliseconds. | required |
sigma | float | Input noise level; controls the task difficulty. | required |
timing | | Stimulus timing. | required |
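To illustrate the note above about the label shape, here is a sketch of building a supervised dataset and keeping only the final label of each sequence (assumptions: the task is registered as "ToneDetection-v0", neurogym provides a Dataset helper accepting batch_size and seq_len, and the chosen values are illustrative):

```python
import neurogym as ngym

# Batched supervised data; labels come out with shape (seq_len, batch_size), as noted above.
dataset = ngym.Dataset("ToneDetection-v0", batch_size=16, seq_len=100)
inputs, labels = dataset()

final_labels = labels[-1, :]   # final output for each trial, as suggested above
print(inputs.shape, final_labels.shape)
```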