Environments

annubes

AnnubesEnv

AnnubesEnv(session: dict[str, float] | None = None, stim_intensities: list[float] | None = None, stim_time: int = 1000, catch_prob: float = 0.5, max_sequential: int | None = None, fix_intensity: float = 0, fix_time: Any = 500, iti: Any = 0, dt: int = 100, tau: int = 100, output_behavior: list[float] | None = None, noise_std: float = 0.01, rewards: dict[str, float] | None = None, random_seed: int | None = None)

Bases: TrialEnv

General class for the Annubes type of tasks.

Parameters:

Name Type Description Default
session dict[str, float] | None

Configuration of the trials that can appear during a session. It is given by a dictionary representing the ratio (values) of the different trials (keys) within the task. Trials with a single modality (e.g., a visual trial) must be represented by single characters, while trials with multiple modalities (e.g., an audiovisual trial) are represented by the character combination of those trials. Note that values are read relative to each other, such that e.g. {"v": 0.25, "a": 0.75} is equivalent to {"v": 1, "a": 3}. Defaults to {"v": 0.5, "a": 0.5}.

None
stim_intensities list[float] | None

List of possible intensity values of each stimulus, when the stimulus is present. Note that when the stimulus is not present, the intensity is set to 0. Defaults to [0.8, 0.9, 1].

None
stim_time int

Duration of each stimulus in ms. Defaults to 1000.

1000
catch_prob float

Probability of catch trials in the session. Must be between 0 and 1 (inclusive). Defaults to 0.5.

0.5
max_sequential int | None

Maximum number of sequential trials of the same modality. It applies only to the modalities defined in session, i.e., it does not apply to catch trials. Defaults to None (no maximum).

None
fix_intensity float

Intensity of input signal during fixation. Defaults to 0.

0
fix_time Any

Fixation time specification. Can be one of the following:

- A number (int or float): fixed duration in milliseconds.
- A callable: function that returns the duration when called.
- A list of numbers: random choice from the list.
- A tuple specifying a distribution:
    - ("uniform", (min, max)): uniform distribution between min and max.
    - ("choice", [options]): random choice from the given options.
    - ("truncated_exponential", [parameters]): truncated exponential distribution.
    - ("constant", value): always returns the given value.
    - ("until", end_time): sets the duration to reach the specified end time.

The final duration is rounded down to the nearest multiple of the simulation timestep (dt). Note that the duration of each input and output signal is increased by this time. Defaults to 500.

500
iti Any

Inter-trial interval, or time window between sequential trials, in ms. Same format as fix_time. Defaults to 0.

0
dt int

Time step in ms. Defaults to 100.

100
tau int

Time constant in ms. Defaults to 100.

100
output_behavior list[float] | None

List of possible intensity values of the behavioral output. Currently only the smallest and largest value of this list are used. Defaults to [0, 1].

None
noise_std float

Standard deviation of the input noise. Defaults to 0.01.

0.01
rewards dict[str, float] | None

Dictionary of rewards for different outcomes. The keys are "abort", "correct", and "fail". Defaults to {"abort": -0.1, "correct": +1.0, "fail": 0.0}.

None
random_seed int | None

Seed for numpy's random number generator (rng). If an int is given, it will be used as the seed for np.random.default_rng(). Defaults to None (i.e. the initial state itself is random).

None
Source code in neurogym/envs/annubes.py
def __init__(
    self,
    session: dict[str, float] | None = None,
    stim_intensities: list[float] | None = None,
    stim_time: int = 1000,
    catch_prob: float = 0.5,
    max_sequential: int | None = None,
    fix_intensity: float = 0,
    fix_time: Any = 500,
    iti: Any = 0,
    dt: int = 100,
    tau: int = 100,
    output_behavior: list[float] | None = None,
    noise_std: float = 0.01,
    rewards: dict[str, float] | None = None,
    random_seed: int | None = None,
):
    if session is None:
        session = {"v": 0.5, "a": 0.5}
    if stim_intensities is None:
        stim_intensities = [0.8, 0.9, 1.0]
    if output_behavior is None:
        output_behavior = [0, 1]
    super().__init__(dt=dt)
    self.session = {i: session[i] / sum(session.values()) for i in session}
    self.stim_intensities = stim_intensities
    self.stim_time = stim_time
    self.catch_prob = catch_prob
    self.max_sequential = max_sequential
    self.sequential_count = 1
    self.last_modality: str | None = None
    self.fix_intensity = fix_intensity
    self.fix_time = fix_time
    self.iti = iti
    self.dt = dt
    self.tau = tau
    self.output_behavior = output_behavior
    self.noise_std = noise_std
    self.random_seed = random_seed
    alpha = dt / self.tau
    self.noise_factor = self.noise_std * np.sqrt(2 * alpha) / alpha
    # Set random state
    if random_seed is None:
        rng = np.random.default_rng(random_seed)
        self._random_seed = rng.integers(2**32)
    else:
        self._random_seed = random_seed
    self._rng = np.random.default_rng(self._random_seed)
    # Rewards
    if rewards is None:
        self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    else:
        self.rewards = rewards
    self.timing = {"fixation": self.fix_time, "stimulus": self.stim_time, "iti": self.iti}
    # Set the name of each input dimension
    obs_space_name = {"fixation": 0, "start": 1, **{trial: i for i, trial in enumerate(session, 2)}}
    self.observation_space = ngym.spaces.Box(low=0.0, high=1.0, shape=(len(obs_space_name),), name=obs_space_name)
    # Set the name of each action value
    self.action_space = ngym.spaces.Discrete(
        n=len(self.output_behavior),
        name={"fixation": self.fix_intensity, "choice": self.output_behavior[1:]},
    )
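
A minimal usage sketch (assuming the gymnasium-style reset/step API of neurogym's TrialEnv): here "a" trials appear three times as often as "v" trials, since session ratios are relative, and the fixation time is drawn via the tuple-based distribution spec described above.

from neurogym.envs.annubes import AnnubesEnv

env = AnnubesEnv(
    session={"v": 1, "a": 3},          # ratios are relative: 25% visual, 75% auditory
    catch_prob=0.2,
    fix_time=("uniform", (200, 600)),  # fixation duration drawn uniformly, in ms
    random_seed=42,
)
ob, info = env.reset()
for _ in range(10):
    action = env.action_space.sample()  # placeholder for a trained policy
    ob, reward, terminated, truncated, info = env.step(action)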

antireach

Anti-reach or anti-saccade task.

AntiReach

AntiReach(dt=100, anti=True, rewards=None, timing=None, dim_ring=32)

Bases: TrialEnv

Anti-response task.

During the fixation period, the agent fixates on a fixation point. During the following stimulus period, the agent is shown a stimulus away from the fixation point. Finally, during the decision period, the agent needs to respond in the direction opposite to the stimulus.

Parameters:

Name Type Description Default
anti

bool, if True, requires an anti-response. If False, requires a pro-response, i.e. response towards the stimulus.

True
Source code in neurogym/envs/antireach.py
def __init__(self, dt=100, anti=True, rewards=None, timing=None, dim_ring=32) -> None:
    super().__init__(dt=dt)

    self.anti = anti

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 500, "stimulus": 500, "delay": 0, "decision": 500}
    if timing:
        self.timing.update(timing)

    self.abort = False

    # action and observation spaces
    self.dim_ring = dim_ring
    self.theta = np.arange(0, 2 * np.pi, 2 * np.pi / dim_ring)
    self.choices = np.arange(dim_ring)

    name = {"fixation": 0, "stimulus": range(1, dim_ring + 1)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1 + dim_ring,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "choice": range(1, dim_ring + 1)}
    self.action_space = spaces.Discrete(1 + dim_ring, name=name)

bandit

Multi-arm Bandit task.

Bandit

Bandit(dt: int = 100, n: int = 2, p: tuple[float, ...] | list[float] = (0.5, 0.5), rewards: None | list[float] | ndarray = None, timing: None | dict = None)

Bases: TrialEnv

Multi-arm bandit task.

On each trial, the agent is presented with multiple choices. Each option yields a reward of a certain magnitude with a certain probability.

Parameters:

Name Type Description Default
n int

int, the number of choices (arms)

2
p tuple[float, ...] | list[float]

tuple of length n, describes the probability of each arm leading to reward

(0.5, 0.5)
rewards None | list[float] | ndarray

tuple of length n, describes the reward magnitude of each option when rewarded

None
Source code in neurogym/envs/bandit.py
def __init__(
    self,
    dt: int = 100,
    n: int = 2,
    p: tuple[float, ...] | list[float] = (0.5, 0.5),
    rewards: None | list[float] | np.ndarray = None,
    timing: None | dict = None,
) -> None:
    super().__init__(dt=dt)
    if timing is not None:
        print("Warning: Bandit task does not require timing variable.")

    self.n = n
    self._p = np.array(p)  # Reward probabilities

    if rewards is not None:
        self._rewards = np.array(rewards)
    else:
        self._rewards = np.ones(n)  # 1 for every arm

    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1,),
        dtype=np.float32,
    )
    self.action_space = spaces.Discrete(n)
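
A short interaction sketch (assuming the gymnasium-style step API): each step is one pull, so arm 2 below should pay off on roughly 80% of pulls.

from neurogym.envs.bandit import Bandit

env = Bandit(n=3, p=(0.2, 0.5, 0.8), rewards=[1.0, 1.0, 1.0])
ob, info = env.reset()
for _ in range(5):
    ob, reward, terminated, truncated, info = env.step(2)  # always pull arm 2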

contextdecisionmaking

SingleContextDecisionMaking

SingleContextDecisionMaking(dt=100, context=0, rewards=None, timing=None, sigma=1.0, dim_ring=2)

Bases: TrialEnv

Context-dependent decision-making task.

The agent simultaneously receives stimulus inputs from two modalities (for example, a colored random dot motion pattern with color and motion modalities). The agent needs to make a perceptual decision based on only one of the two modalities, while ignoring the other. The agent reports its decision during the decision period, with an optional delay period between the stimulus period and the decision period. The relevant modality is not explicitly signaled.

Parameters:

Name Type Description Default
context

int, 0 or 1 for the two contexts (rules). If 0, the agent needs to focus on modality 0 (the first one)

0
Source code in neurogym/envs/contextdecisionmaking.py
def __init__(
    self,
    dt=100,
    context=0,
    rewards=None,
    timing=None,
    sigma=1.0,
    dim_ring=2,
) -> None:
    super().__init__(dt=dt)

    # trial conditions
    self.cohs = [5, 15, 50]
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise
    self.context = context

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 300,
        # 'target': 350, # noqa: ERA001
        "stimulus": 750,
        "delay": ngym.random.TruncExp(600, 300, 3000),
        "decision": 100,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    # set action and observation space
    self.theta = np.linspace(0, 2 * np.pi, dim_ring + 1)[:-1]
    self.choices = np.arange(dim_ring)

    name = {
        "fixation": 0,
        "stimulus_mod1": range(1, dim_ring + 1),
        "stimulus_mod2": range(dim_ring + 1, 2 * dim_ring + 1),
    }
    shape = (1 + 2 * dim_ring,)
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=shape,
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "choice": range(1, dim_ring + 1)}
    self.action_space = spaces.Discrete(1 + dim_ring, name=name)
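
The defaults above can be overridden per key; for example, a sketch that attends to modality 1 and lengthens the decision window (a key passed in timing replaces only that entry):

from neurogym.envs.contextdecisionmaking import SingleContextDecisionMaking

env = SingleContextDecisionMaking(context=1, timing={"decision": 300})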

ContextDecisionMaking

ContextDecisionMaking(dt=100, rewards=None, timing=None, sigma=1.0)

Bases: TrialEnv

Context-dependent decision-making task.

The agent simultaneously receives stimulus inputs from two modalities (for example, a colored random dot motion pattern with color and motion modalities). The agent needs to make a perceptual decision based on only one of the two modalities, while ignoring the other. The relevant modality is explicitly indicated by a rule signal.

Source code in neurogym/envs/contextdecisionmaking.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0) -> None:
    super().__init__(dt=dt)

    # trial conditions
    self.contexts = [0, 1]  # index for context inputs
    self.choices = [1, 2]  # left, right choice
    self.cohs = [5, 15, 50]
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 300,
        # 'target': 350, # noqa: ERA001
        "stimulus": 750,
        "delay": ngym.random.TruncExp(600, 300, 3000),
        "decision": 100,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    # set action and observation space
    names = [
        "fixation",
        "stim1_mod1",
        "stim2_mod1",
        "stim1_mod2",
        "stim2_mod2",
        "context1",
        "context2",
    ]
    name = {name: i for i, name in enumerate(names)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(7,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "choice1": 1, "choice2": 2}
    self.action_space = spaces.Discrete(3, name=name)

dawtwostep

DawTwoStep

DawTwoStep(dt=100, rewards=None, timing=None)

Bases: TrialEnv

Daw Two-step task.

On each trial, an initial choice between two options leads to one of two second-stage states. Each of these states in turn demands another two-option choice, and each option is associated with a different probability of receiving reward.

Source code in neurogym/envs/dawtwostep.py
def __init__(self, dt=100, rewards=None, timing=None) -> None:
    super().__init__(dt=dt)
    if timing is not None:
        print("Warning: Two-step task does not require timing variable.")
    # Actions are ('FIXATE', 'ACTION1', 'ACTION2')
    self.actions = [0, 1, 2]

    # trial conditions
    self.p1 = 0.8  # prob of transitioning to state1 with action1 (>= 0.5)
    self.p2 = 0.8  # prob of transitioning to state2 with action2 (>= 0.5)
    self.p_switch = 0.025  # switch reward contingency
    self.high_reward_p = 0.9
    self.low_reward_p = 0.1
    self.tmax = 3 * self.dt
    self.mean_trial_duration = self.tmax
    self.state1_high_reward = True
    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0}
    if rewards:
        self.rewards.update(rewards)

    self.action_space = spaces.Discrete(3)
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
    )

delaycomparison

DelayComparison

DelayComparison(dt=100, vpairs=None, rewards=None, timing=None, sigma=1.0)

Bases: TrialEnv

Delayed comparison.

The agent needs to compare the magnitudes of two stimuli that are separated by a delay period. The agent reports which stimulus was stronger during the decision period.

Source code in neurogym/envs/delaycomparison.py
def __init__(self, dt=100, vpairs=None, rewards=None, timing=None, sigma=1.0) -> None:
    super().__init__(dt=dt)

    # Pairs of stimulus strengths
    if vpairs is None:
        self.vpairs = [(18, 10), (22, 14), (26, 18), (30, 22), (34, 26)]
    else:
        self.vpairs = vpairs

    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 500,
        "stimulus1": 500,
        "delay": 1000,
        "stimulus2": 500,
        "decision": 100,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    # Input scaling
    self.vall = np.ravel(self.vpairs)
    self.vmin = np.min(self.vall)
    self.vmax = np.max(self.vall)

    # action and observation space
    name: dict[str, int | list] = {"fixation": 0, "stimulus": 1}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(2,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "choice": [1, 2]}
    self.action_space = spaces.Discrete(3, name=name)

    self.choices = [1, 2]

represent

represent(v)

Input representation of stimulus value.

Source code in neurogym/envs/delaycomparison.py
def represent(self, v):
    """Input representation of stimulus value."""
    # Scale to be between 0 and 1
    v_ = (v - self.vmin) / (self.vmax - self.vmin)
    # positive encoding, between 0.5 and 1
    return (1 + v_) / 2
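
With the default vpairs, vmin = 10 and vmax = 34, so represent maps stimulus values linearly onto [0.5, 1]; a quick check:

from neurogym.envs.delaycomparison import DelayComparison

env = DelayComparison()
assert env.represent(10) == 0.5   # weakest value maps to 0.5
assert env.represent(22) == 0.75  # midpoint maps to 0.75
assert env.represent(34) == 1.0   # strongest value maps to 1.0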

delaymatchcategory

DelayMatchCategory

DelayMatchCategory(dt=100, rewards=None, timing=None, sigma=1.0, dim_ring=2)

Bases: TrialEnv

Delayed match-to-category task.

A sample stimulus is shown during the sample period. The stimulus is characterized by a one-dimensional variable, such as its orientation between 0 and 360 degrees. This one-dimensional variable is separated into two categories (for example, 0-180 degrees and 180-360 degrees). After a delay period, a test stimulus is shown. The agent needs to determine whether the sample and the test stimuli belong to the same category, and report that decision during the decision period.

Source code in neurogym/envs/delaymatchcategory.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0, dim_ring=2) -> None:
    super().__init__(dt=dt)
    self.choices = ["match", "non-match"]  # match, non-match

    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 500, "sample": 650, "first_delay": 1000, "test": 650}

    if timing:
        self.timing.update(timing)

    self.abort = False

    self.theta = np.linspace(0, 2 * np.pi, dim_ring + 1)[:-1]

    name = {"fixation": 0, "stimulus": range(1, dim_ring + 1)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1 + dim_ring,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "match": 1, "non-match": 2}
    self.action_space = spaces.Discrete(3, name=name)

delaymatchsample

DelayMatchSample

DelayMatchSample(dt=100, rewards=None, timing=None, sigma=1.0, dim_ring=2)

Bases: TrialEnv

Delayed match-to-sample task.

A sample stimulus is shown during the sample period. The stimulus is characterized by a one-dimensional variable, such as its orientation between 0 and 360 degrees. After a delay period, a test stimulus is shown. The agent needs to determine whether the sample and the test stimuli are equal, and report that decision during the decision period.

Source code in neurogym/envs/delaymatchsample.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0, dim_ring=2) -> None:
    super().__init__(dt=dt)
    self.choices = [1, 2]
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 300,
        "sample": 500,
        "delay": 1000,
        "test": 500,
        "decision": 900,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    self.theta = np.linspace(0, 2 * np.pi, dim_ring + 1)[:-1]

    name = {"fixation": 0, "stimulus": range(1, dim_ring + 1)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1 + dim_ring,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "match": 1, "non-match": 2}
    self.action_space = spaces.Discrete(3, name=name)
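
For supervised training, trials can be batched with neurogym's Dataset helper (a sketch assuming the environment is registered under the id "DelayMatchSample-v0"):

import neurogym as ngym

dataset = ngym.Dataset("DelayMatchSample-v0", env_kwargs={"dt": 100}, batch_size=16, seq_len=30)
inputs, target = dataset()  # inputs: (seq_len, batch, obs_dim); target: (seq_len, batch)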

DelayMatchSampleDistractor1D

DelayMatchSampleDistractor1D(dt=100, rewards=None, timing=None, sigma=1.0)

Bases: TrialEnv

Delayed match-to-sample with multiple, potentially repeating distractors.

A sample stimulus is shown during the sample period. The stimulus is characterized by a one-dimensional variable, such as its orientation between 0 and 360 degrees. After a delay period, the first test stimulus is shown. The agent needs to determine whether the sample and this test stimulus are equal. If so, it needs to produce the match response. If the first test is not equal to the sample stimulus, another delay period and then a second test stimulus follow, and so on.

Source code in neurogym/envs/delaymatchsample.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0) -> None:
    super().__init__(dt=dt)
    self.choices = [1, 2, 3]
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": -1.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 300,
        "sample": 500,
        "delay1": 1000,
        "test1": 500,
        "delay2": 1000,
        "test2": 500,
        "delay3": 1000,
        "test3": 500,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    self.theta = np.arange(0, 2 * np.pi, 2 * np.pi / 32)

    name = {"fixation": 0, "stimulus": range(1, 33)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(33,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "match": 1}
    self.action_space = spaces.Discrete(2, name=name)

delaypairedassociation

DelayPairedAssociation

DelayPairedAssociation(dt=100, rewards=None, timing=None, sigma=1.0)

Bases: TrialEnv

Delayed paired-association task.

The agent is shown a pair of two stimuli separated by a delay period. For half of the stimulus pairs shown, the agent should choose the Go response, and it is rewarded if it chooses the Go response correctly.

Source code in neurogym/envs/delaypairedassociation.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0) -> None:
    super().__init__(dt=dt)
    self.choices = [0, 1]
    # trial conditions
    self.pairs = [(1, 3), (1, 4), (2, 3), (2, 4)]
    self.association = 0  # GO if np.diff(self.pair)[0]%2==self.association
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise
    # Durations (stimulus duration will be drawn from an exponential)

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": -1.0, "miss": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 0,
        "stim1": 1000,
        "delay_btw_stim": 1000,
        "stim2": 1000,
        "delay_aft_stim": 1000,
        "decision": 500,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False
    # action and observation spaces
    name = {"fixation": 0, "stimulus": range(1, 5)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(5,),
        dtype=np.float32,
        name=name,
    )

    self.action_space = spaces.Discrete(2, name={"fixation": 0, "go": 1})

detection

Created on Mon Jan 27 11:00:26 2020.

@author: martafradera

Detection

Detection(dt=100, rewards=None, timing=None, sigma=1.0, delay=None, stim_dur=100)

Bases: TrialEnv

The agent has to GO if a stimulus is presented.

Parameters:

Name Type Description Default
delay

If not None, indicates the delay from the start of the stimulus period until the actual stimulus is presented. Otherwise, the delay is drawn from a uniform distribution. (def: None (ms), int)

None
stim_dur

Stimulus duration. (def: 100 (ms), int)

100
Source code in neurogym/envs/detection.py
def __init__(
    self,
    dt=100,
    rewards=None,
    timing=None,
    sigma=1.0,
    delay=None,
    stim_dur=100,
) -> None:
    super().__init__(dt=dt)
    # Possible decisions at the end of the trial
    self.choices = [0, 1]

    self.sigma = sigma / np.sqrt(self.dt)  # Input noise
    self.delay = delay
    self.stim_dur = int(stim_dur / self.dt)  # in steps; should be greater
    # than 1 step, else the model won't have enough time to respond within the window
    if self.stim_dur == 1:
        self.extra_step = 1
        if delay is None:
            warnings.warn(
                "Added an extra stp after the actual stimulus, else model will not be able to respond "
                "within response window (stimulus epoch).",
                UserWarning,
                stacklevel=2,
            )
    else:
        self.extra_step = 0

    if self.stim_dur < 1:
        warnings.warn("Stimulus duration shorter than dt", stacklevel=2)

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": -1.0, "miss": -1}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 500,
        "stimulus": ngym.random.TruncExp(1000, 500, 1500),
    }
    if timing:
        self.timing.update(timing)

    # whether to abort (T) or not (F) the trial when breaking fixation:
    self.abort = False

    name = {"fixation": 0, "stimulus": 1}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(2,),
        dtype=np.float32,
        name=name,
    )

    self.action_space = spaces.Discrete(2, name={"fixation": 0, "go": 1})

dualdelaymatchsample

DualDelayMatchSample

DualDelayMatchSample(dt=100, rewards=None, timing=None, sigma=1.0)

Bases: TrialEnv

Two-item Delay-match-to-sample.

The trial starts with a fixation period. During the sample period, two sample stimuli are then shown simultaneously. After the first delay period, a cue is shown, indicating which sample stimulus will be tested. Then the first test stimulus is shown, and the agent needs to report whether it matches the cued sample stimulus. Another delay period and a second test period follow, and the agent needs to report whether the other sample stimulus matches the second test stimulus.

Source code in neurogym/envs/dualdelaymatchsample.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0) -> None:
    super().__init__(dt=dt)
    self.choices = [1, 2]
    self.cues = [0, 1]

    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 500,
        "sample": 500,
        "delay1": 500,
        "cue1": 500,
        "test1": 500,
        "delay2": 500,
        "cue2": 500,
        "test2": 500,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    name = {
        "fixation": 0,
        "stimulus1": range(1, 3),
        "stimulus2": range(3, 5),
        "cue1": 5,
        "cue2": 6,
    }
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(7,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "match": 1, "non-match": 2}
    self.action_space = spaces.Discrete(3, name=name)

economicdecisionmaking

EconomicDecisionMaking

EconomicDecisionMaking(dt=100, rewards=None, timing=None)

Bases: TrialEnv

Economic decision making task.

An agent chooses between two options. Each option offers a certain amount of juice, indicated by the stimulus. The two options offer different types of juice, and the agent prefers one over the other.

Source code in neurogym/envs/economicdecisionmaking.py
def __init__(self, dt=100, rewards=None, timing=None) -> None:
    super().__init__(dt=dt)

    # trial conditions
    self.B_to_A = 1 / 2.2
    self.juices = [("a", "b"), ("b", "a")]
    self.offers = [
        (0, 1),
        (1, 3),
        (1, 2),
        (1, 1),
        (2, 1),
        (3, 1),
        (4, 1),
        (6, 1),
        (2, 0),
    ]

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +0.22}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 1500,
        "offer_on": lambda: self.rng.uniform(1000, 2000),
        "decision": 750,
    }
    if timing:
        self.timing.update(timing)

    self.R_B = self.B_to_A * self.rewards["correct"]
    self.R_A = self.rewards["correct"]
    self.abort = False
    # Increase initial policy -> baseline weights
    self.baseline_Win = 10

    name = {
        "fixation": 0,
        "a1": 1,
        "b1": 2,  # a or b for choice 1
        "a2": 3,
        "b2": 4,  # a or b for choice 2
        "n1": 5,
        "n2": 6,  # amount for choice 1 or 2
    }
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(7,),
        dtype=np.float32,
        name=name,
    )

    self.act_dict = {"fixation": 0, "choice1": 1, "choice2": 2}
    self.action_space = spaces.Discrete(3, name=self.act_dict)

gonogo

GoNogo

GoNogo(dt=100, rewards=None, timing=None)

Bases: TrialEnv

Go/No-go task.

A stimulus is shown during the stimulus period. The stimulus period is followed by a delay period, and then a decision period. If the stimulus is a Go stimulus, the subject should choose the action Go during the decision period; otherwise, the subject should maintain fixation.

Source code in neurogym/envs/gonogo.py
def __init__(self, dt=100, rewards=None, timing=None) -> None:
    super().__init__(dt=dt)
    # Actions are (FIXATE, GO)
    self.actions = [0, 1]
    # trial conditions
    self.choices = [0, 1]

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": -0.5, "miss": -0.5}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 0, "stimulus": 500, "delay": 500, "decision": 500}
    if timing:
        self.timing.update(timing)

    self.abort = False
    # set action and observation spaces
    name = {"fixation": 0, "nogo": 1, "go": 2}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
        name=name,
    )
    self.action_space = spaces.Discrete(2, name={"fixation": 0, "go": 1})
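
Because user-supplied rewards are merged with rewards.update(...), a dict overrides only the listed keys; a sketch that penalizes misses more heavily while keeping the other defaults:

from neurogym.envs.gonogo import GoNogo

env = GoNogo(rewards={"miss": -1.0})  # "abort", "correct", and "fail" keep their defaults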

hierarchicalreasoning

Hierarchical reasoning tasks.

HierarchicalReasoning

HierarchicalReasoning(dt=100, rewards=None, timing=None)

Bases: TrialEnv

Hierarchical reasoning of rules.

On each trial, the subject receives two flashes separated by a delay period. The subject needs to judge whether the duration of this delay period is shorter than a threshold. Both flashes appear at the same location on each trial. For one trial type, the network should report its decision by going to the location of the flashes if the delay is shorter than the threshold. In another trial type, the network should go to the opposite direction of the flashes if the delay is short. The two types of trials are alternated across blocks, and the block transition is unannounced.

Source code in neurogym/envs/hierarchicalreasoning.py
def __init__(self, dt=100, rewards=None, timing=None) -> None:
    super().__init__(dt=dt)
    self.choices = [0, 1]

    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": ngym.random.TruncExp(600, 400, 800),
        "rule_target": 1000,
        "fixation2": ngym.random.TruncExp(600, 400, 900),
        "flash1": 100,
        "delay": (530, 610, 690, 770, 850, 930, 1010, 1090, 1170),
        "flash2": 100,
        "decision": 700,
    }
    if timing:
        self.timing.update(timing)
    self.mid_delay = np.median(self.timing["delay"][1])

    self.abort = False

    name = {"fixation": 0, "rule": [1, 2], "stimulus": [3, 4]}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(5,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "rule": [1, 2], "choice": [3, 4]}
    self.action_space = spaces.Discrete(5, name=name)

    self.chose_correct_rule = False
    self.rule = 0
    self.trial_in_block = 0
    self.block_size = 10
    self.new_block()

intervaldiscrimination

IntervalDiscrimination

IntervalDiscrimination(dt=80, rewards=None, timing=None)

Bases: TrialEnv

Comparing the time length of two stimuli.

Two stimuli are shown sequentially, separated by a delay period. The duration of each stimulus is randomly sampled on each trial. The subject needs to judge which stimulus has a longer duration, and reports its decision during the decision period by choosing one of the two choice options.

Source code in neurogym/envs/intervaldiscrimination.py
def __init__(self, dt=80, rewards=None, timing=None) -> None:
    super().__init__(dt=dt)
    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 300,
        "stim1": lambda: self.rng.uniform(300, 600),
        "delay1": lambda: self.rng.uniform(800, 1500),
        "stim2": lambda: self.rng.uniform(300, 600),
        "delay2": 500,
        "decision": 300,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    name = {"fixation": 0, "stim1": 1, "stim2": 2}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "choice1": 1, "choice2": 2}
    self.action_space = spaces.Discrete(3, name=name)

multisensory

Multi-Sensory Integration.

MultiSensoryIntegration

MultiSensoryIntegration(dt=100, rewards=None, timing=None, sigma=1.0, dim_ring=2)

Bases: TrialEnv

Multi-sensory integration.

Two stimuli are shown in two input modalities. Each stimulus points to one of the possible responses with a certain strength (coherence). The correct choice is the response with the highest summed strength from both stimuli. The agent is therefore encouraged to integrate information from both modalities equally.

Source code in neurogym/envs/multisensory.py
def __init__(self, dt=100, rewards=None, timing=None, sigma=1.0, dim_ring=2) -> None:
    super().__init__(dt=dt)

    # trial conditions
    self.cohs = [5, 15, 50]

    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 300, "stimulus": 750, "decision": 100}
    if timing:
        self.timing.update(timing)
    self.abort = False

    # set action and observation space
    self.theta = np.linspace(0, 2 * np.pi, dim_ring + 1)[:-1]
    self.choices = np.arange(dim_ring)

    name = {
        "fixation": 0,
        "stimulus_mod1": range(1, dim_ring + 1),
        "stimulus_mod2": range(dim_ring + 1, 2 * dim_ring + 1),
    }
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1 + 2 * dim_ring,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "choice": range(1, dim_ring + 1)}
    self.action_space = spaces.Discrete(1 + dim_ring, name=name)

null

Null

Null(dt=100)

Bases: TrialEnv

Null task.

Source code in neurogym/envs/null.py
def __init__(self, dt=100) -> None:
    super().__init__(dt=dt)
    self.action_space = spaces.Discrete(1)
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1,),
        dtype=np.float32,
    )

perceptualdecisionmaking

PerceptualDecisionMaking

PerceptualDecisionMaking(dt=100, rewards=None, timing=None, cohs=None, sigma=1.0, dim_ring=2)

Bases: TrialEnv

Two-alternative forced choice task in which the subject has to integrate two stimuli to decide which one is higher on average.

A noisy stimulus is shown during the stimulus period. The strength (coherence) of the stimulus is randomly sampled every trial. Because the stimulus is noisy, the agent is encouraged to integrate the stimulus over time.

Parameters:

Name Type Description Default
cohs

list of float, coherence levels controlling the difficulty of the task

None
sigma

float, input noise level

1.0
dim_ring

int, dimension of ring input and output

2
Source code in neurogym/envs/perceptualdecisionmaking.py
def __init__(
    self,
    dt=100,
    rewards=None,
    timing=None,
    cohs=None,
    sigma=1.0,
    dim_ring=2,
) -> None:
    super().__init__(dt=dt)
    if cohs is None:
        self.cohs = np.array([0, 6.4, 12.8, 25.6, 51.2])
    else:
        self.cohs = cohs
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 100, "stimulus": 2000, "delay": 0, "decision": 100}
    if timing:
        self.timing.update(timing)

    self.abort = False

    self.theta = np.linspace(0, 2 * np.pi, dim_ring + 1)[:-1]
    self.choices = np.arange(dim_ring)

    name = {"fixation": 0, "stimulus": range(1, dim_ring + 1)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1 + dim_ring,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "choice": range(1, dim_ring + 1)}
    self.action_space = spaces.Discrete(1 + dim_ring, name=name)
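
A rollout sketch (assuming the gymnasium-style API; in neurogym's TrialEnv the info dict flags trial boundaries with "new_trial"):

from neurogym.envs.perceptualdecisionmaking import PerceptualDecisionMaking

env = PerceptualDecisionMaking(dt=100, cohs=[6.4, 51.2])
ob, info = env.reset()
for _ in range(50):
    ob, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if info.get("new_trial", False):
        break  # one full trial completed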

PerceptualDecisionMakingDelayResponse

PerceptualDecisionMakingDelayResponse(dt=100, rewards=None, timing=None, stim_scale=1.0, sigma=1.0)

Bases: TrialEnv

Perceptual decision-making with delayed responses.

Agents have to integrate two stimuli and report which one is larger on average after a delay.

Parameters:

Name Type Description Default
stim_scale

Controls the difficulty of the experiment. (def: 1., float)

1.0
Source code in neurogym/envs/perceptualdecisionmaking.py
def __init__(self, dt=100, rewards=None, timing=None, stim_scale=1.0, sigma=1.0) -> None:
    super().__init__(dt=dt)
    self.choices = [1, 2]
    # cohs specifies the amount of evidence (modulated by stim_scale)
    self.cohs = np.array([0, 6.4, 12.8, 25.6, 51.2]) * stim_scale
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 0,
        "stimulus": 1150,
        #  TODO: sampling of delays follows exponential
        "delay": (300, 500, 700, 900, 1200, 2000, 3200, 4000),
        # 'go_cue': 100,  # noqa: ERA001 TODO: Not implemented
        "decision": 1500,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    # action and observation spaces
    self.action_space = spaces.Discrete(3)
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
    )

PulseDecisionMaking

PulseDecisionMaking(dt=10, rewards=None, timing=None, p_pulse=(0.3, 0.7), n_bin=6)

Bases: TrialEnv

Pulse-based decision making task.

Discrete stimuli are presented briefly as pulses.

Parameters:

Name Type Description Default
p_pulse

array-like, probability of pulses for each choice

(0.3, 0.7)
n_bin

int, number of stimulus bins

6
Source code in neurogym/envs/perceptualdecisionmaking.py
def __init__(self, dt=10, rewards=None, timing=None, p_pulse=(0.3, 0.7), n_bin=6) -> None:
    super().__init__(dt=dt)
    self.p_pulse = p_pulse
    self.n_bin = n_bin

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 500, "decision": 500}
    for i in range(n_bin):
        self.timing[f"cue{i}"] = 10
        self.timing[f"bin{i}"] = 240
    if timing:
        self.timing.update(timing)

    self.abort = False

    name = {"fixation": 0, "stimulus": [1, 2]}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "choice": [1, 2]}
    self.action_space = spaces.Discrete(3, name=name)
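
The constructor builds one cue and one bin entry per stimulus bin, so the timing dict grows with n_bin; a quick check of the generated keys:

from neurogym.envs.perceptualdecisionmaking import PulseDecisionMaking

env = PulseDecisionMaking(n_bin=4, p_pulse=(0.4, 0.6))
print(sorted(env.timing))  # bin0..bin3 and cue0..cue3, plus decision and fixation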

postdecisionwager

PostDecisionWager

PostDecisionWager(dt=100, rewards=None, timing=None, dim_ring=2, sigma=1.0)

Bases: TrialEnv

Post-decision wagering task assessing confidence.

The agent first performs a perceptual discrimination task (see for more details the PerceptualDecisionMaking task). On a random half of the trials, the agent is given the option to abort the sensory discrimination and to choose instead a sure-bet option that guarantees a small reward. Therefore, the agent is encouraged to choose the sure-bet option when it is uncertain about its perceptual decision.

Source code in neurogym/envs/postdecisionwager.py
def __init__(self, dt=100, rewards=None, timing=None, dim_ring=2, sigma=1.0) -> None:
    super().__init__(dt=dt)

    self.wagers = [True, False]
    self.theta = np.linspace(0, 2 * np.pi, dim_ring + 1)[:-1]
    self.choices = np.arange(dim_ring)
    self.cohs = [0, 3.2, 6.4, 12.8, 25.6, 51.2]
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)
    self.rewards["sure"] = 0.7 * self.rewards["correct"]

    self.timing = {
        "fixation": 100,
        # 'target':  0,  # noqa: ERA001
        "stimulus": ngym.random.TruncExp(180, 100, 900),
        "delay": ngym.random.TruncExp(1350, 1200, 1800),
        "pre_sure": lambda: self.rng.uniform(500, 750),
        "decision": 100,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    # set action and observation space
    name = {"fixation": 0, "stimulus": [1, 2], "sure": 3}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(4,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "choice": [1, 2], "sure": 3}
    self.action_space = spaces.Discrete(4, name=name)

probabilisticreasoning

Random dot motion task.

ProbabilisticReasoning

ProbabilisticReasoning(dt=100, rewards=None, timing=None, shape_weight=None, n_loc=4)

Bases: TrialEnv

Probabilistic reasoning.

The agent is shown a sequence of stimuli. Each stimulus is associated with a certain log-likelihood of the correct response being one choice versus the other. The final log-likelihood of the target response being, for example, option 1, is the sum of all log-likelihoods associated with the presented stimuli. A delay period separates each stimulus, so the agent is encouraged to learn the log-likelihood associations and integrate these values over time within a trial.

Parameters:

Name Type Description Default
shape_weight

array-like, evidence weight of each shape

None
n_loc

int, number of locations at which shapes are shown

4
Source code in neurogym/envs/probabilisticreasoning.py
def __init__(self, dt=100, rewards=None, timing=None, shape_weight=None, n_loc=4) -> None:
    super().__init__(dt=dt)
    # The evidence weight of each stimulus
    if shape_weight is not None:
        self.shape_weight = shape_weight
    else:
        self.shape_weight = [-10, -0.9, -0.7, -0.5, -0.3, 0.3, 0.5, 0.7, 0.9, 10]

    self.n_shape = len(self.shape_weight)
    dim_shape = self.n_shape
    # Shape representation needs to be fixed cross-platform
    self.shapes = np.eye(self.n_shape, dim_shape)
    self.n_loc = n_loc

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 500,
        "delay": lambda: self.rng.uniform(450, 550),
        "decision": 500,
    }
    for i_loc in range(n_loc):
        self.timing[f"stimulus{i_loc}"] = 500
    if timing:
        self.timing.update(timing)

    self.abort = False

    obs_name: dict[str, int | range] = {"fixation": 0}
    start = 1
    for i_loc in range(n_loc):
        obs_name[f"loc{i_loc}"] = range(start, start + dim_shape)
        start += dim_shape
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1 + dim_shape * n_loc,),
        dtype=np.float32,
        name=obs_name,
    )

    action_name = {"fixation": 0, "choice": [1, 2]}
    self.action_space = spaces.Discrete(3, name=action_name)

reaching

Reaching to target.

Reaching1D

Reaching1D(dt=100, rewards=None, timing=None, dim_ring=16)

Bases: TrialEnv

Reaching to the stimulus.

The agent is shown a stimulus during the fixation period. The stimulus encodes a one-dimensional variable such as a movement direction. At the end of the fixation period, the agent needs to respond by reaching towards the stimulus direction.

Source code in neurogym/envs/reaching.py
def __init__(self, dt=100, rewards=None, timing=None, dim_ring=16) -> None:
    super().__init__(dt=dt)
    # Rewards
    self.rewards = {"correct": +1.0, "fail": -0.1}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 500, "reach": 500}
    if timing:
        self.timing.update(timing)

    # action and observation spaces
    obs_name = {"self": range(dim_ring, 2 * dim_ring), "target": range(dim_ring)}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(2 * dim_ring,),
        dtype=np.float32,
        name=obs_name,
    )
    action_name = {"fixation": 0, "left": 1, "right": 2}
    self.action_space = spaces.Discrete(3, name=action_name)

    self.theta = np.arange(0, 2 * np.pi, 2 * np.pi / dim_ring)
    self.state = np.pi
    self.dim_ring = dim_ring

post_step

post_step(ob, reward, terminated, truncated, info)

Modify observation.

Source code in neurogym/envs/reaching.py
def post_step(self, ob, reward, terminated, truncated, info):
    """Modify observation."""
    ob[self.dim_ring :] = np.cos(self.theta - self.state)
    return ob, reward, terminated, truncated, info
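
post_step overwrites the second half of the observation with a cosine bump centered on the current arm state, so the "self" units form a population code; a standalone sketch of that encoding:

import numpy as np

dim_ring = 16
theta = np.arange(0, 2 * np.pi, 2 * np.pi / dim_ring)
state = np.pi
self_input = np.cos(theta - state)   # peaks at the unit whose preferred angle equals state
print(theta[np.argmax(self_input)])  # 3.141592... (= state)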

Reaching1DWithSelfDistraction

Reaching1DWithSelfDistraction(dt=100, rewards=None, timing=None)

Bases: TrialEnv

Reaching with self distraction.

In this task, the reaching state itself generates strong inputs that overshadow the actual target input. This task is inspired by behavior in electric fish, where the electric sensing organ is distracted by discharges from its own electric organ used for active sensing. Similar phenomena occur in bats.

Source code in neurogym/envs/reaching.py
def __init__(self, dt=100, rewards=None, timing=None) -> None:
    super().__init__(dt=dt)
    # Rewards
    self.rewards = {"correct": +1.0, "fail": -0.1}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"fixation": 500, "reach": 500}
    if timing:
        self.timing.update(timing)

    # action and observation spaces
    self.action_space = spaces.Discrete(3)
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(32,),
        dtype=np.float32,
    )
    self.theta = np.arange(0, 2 * np.pi, 2 * np.pi / 32)
    self.state = np.pi

post_step

post_step(ob, reward, terminated, truncated, info)

Modify observation.

Source code in neurogym/envs/reaching.py
def post_step(self, ob, reward, terminated, truncated, info):
    """Modify observation."""
    ob += np.cos(self.theta - self.state)
    return ob, reward, terminated, truncated, info

reachingdelayresponse

ReachingDelayResponse

ReachingDelayResponse(dt=100, rewards=None, timing=None, lowbound=0.0, highbound=1.0)

Bases: TrialEnv

Reaching task with a delay period.

A reaching direction is presented by the stimulus during the stimulus period. Following a delay period, the agent needs to respond in the direction of the stimulus during the decision period.

Source code in neurogym/envs/reachingdelayresponse.py
def __init__(self, dt=100, rewards=None, timing=None, lowbound=0.0, highbound=1.0) -> None:
    super().__init__(dt=dt)
    self.lowbound = lowbound
    self.highbound = highbound

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": -0.0, "miss": -0.5}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {"stimulus": 500, "delay": (0, 1000, 2000), "decision": 500}
    if timing:
        self.timing.update(timing)

    self.r_tmax = self.rewards["miss"]
    self.abort = False

    name = {"go": 0, "stimulus": 1}
    self.observation_space = spaces.Box(
        low=np.array([0.0, -2]),
        high=np.array([1, 2.0]),
        dtype=np.float32,
        name=name,
    )

    self.action_space = spaces.Box(
        low=np.array((-1.0, -1.0)),
        high=np.array((1.0, 2.0)),
        dtype=np.float32,
    )

readysetgo

Ready-set-go task.

ReadySetGo

ReadySetGo(dt=80, rewards=None, timing=None, gain=1, prod_margin=0.2)

Bases: TrialEnv

Agents have to measure and produce different time intervals.

A stimulus is briefly shown during a ready period, then again during a set period. The ready and set periods are separated by a measure period, the duration of which is randomly sampled on each trial. The agent is required to produce a response after the set cue such that the interval between the response and the set cue is as close as possible to the duration of the measure period.

Parameters:

Name Type Description Default
gain

Scales the interval that the agent has to produce relative to the measured interval. (def: 1, int)

1
prod_margin

controls the interval around the ground truth production time within which the agent receives proportional reward

0.2
Source code in neurogym/envs/readysetgo.py
def __init__(self, dt=80, rewards=None, timing=None, gain=1, prod_margin=0.2) -> None:
    super().__init__(dt=dt)
    self.prod_margin = prod_margin

    self.gain = gain

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 100,
        "ready": 83,
        "measure": lambda: self.rng.uniform(800, 1500),
        "set": 83,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False
    # set action and observation space
    name = {"fixation": 0, "ready": 1, "set": 2}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
        name=name,
    )

    name = {"fixation": 0, "go": 1}
    self.action_space = spaces.Discrete(2, name=name)  # (fixate, go)
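
A construction sketch: gain scales the interval to be produced relative to the measured one (gain=2 below asks for twice the measured interval), and a smaller prod_margin tightens the window of proportional reward.

from neurogym.envs.readysetgo import ReadySetGo

env = ReadySetGo(dt=80, gain=2, prod_margin=0.1)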

MotorTiming

MotorTiming(dt=80, rewards=None, timing=None, prod_margin=0.2)

Bases: TrialEnv

Agents have to produce different time intervals using different effectors (actions).

Parameters:

Name Type Description Default
prod_margin

controls the interval around the ground truth production time within which the agent receives proportional reward.

0.2
Source code in neurogym/envs/readysetgo.py
def __init__(self, dt=80, rewards=None, timing=None, prod_margin=0.2) -> None:
    super().__init__(dt=dt)
    self.prod_margin = prod_margin
    self.production_ind = [0, 1]
    self.intervals = [800, 1500]

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": 500,  # XXX: not specified
        "cue": lambda: self.rng.uniform(1000, 3000),
        "set": 50,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False
    # set action and observation space
    self.action_space = spaces.Discrete(2)  # (fixate, go)
    # Fixation, Interval indicator x2, Set
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(4,),
        dtype=np.float32,
    )

OneTwoThreeGo

OneTwoThreeGo(dt=80, rewards=None, timing=None, prod_margin=0.2)

Bases: TrialEnv

Agents reproduce time intervals based on two samples.

Parameters:

Name Type Description Default
prod_margin

controls the interval around the ground truth production time within which the agent receives proportional reward

0.2
Source code in neurogym/envs/readysetgo.py
def __init__(self, dt=80, rewards=None, timing=None, prod_margin=0.2) -> None:
    super().__init__(dt=dt)

    self.prod_margin = prod_margin

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    self.timing = {
        "fixation": ngym.random.TruncExp(400, 100, 800),
        "target": ngym.random.TruncExp(1000, 500, 1500),
        "s1": 100,
        "interval1": (600, 700, 800, 900, 1000),
        "s2": 100,
        "interval2": 0,
        "s3": 100,
        "interval3": 0,
        "response": 1000,
    }
    if timing:
        self.timing.update(timing)

    self.abort = False
    # set action and observation space
    name = {"fixation": 0, "stimulus": 1, "target": 2}
    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(3,),
        dtype=np.float32,
        name=name,
    )
    name = {"fixation": 0, "go": 1}
    self.action_space = spaces.Discrete(2, name=name)

registration

all_envs

all_envs(tag=None, psychopy=False, contrib=False, collections=False)

Return a list of all envs in neurogym.

Source code in neurogym/envs/registration.py
def all_envs(tag=None, psychopy=False, contrib=False, collections=False):
    """Return a list of all envs in neurogym."""
    envs = ALL_NATIVE_ENVS.copy()
    if psychopy:
        envs.update(ALL_PSYCHOPY_ENVS)
    if contrib:
        envs.update(ALL_CONTRIB_ENVS)
    if collections:
        envs.update(ALL_COLLECTIONS_ENVS)
    env_list = sorted(envs.keys())
    if tag is None:
        return env_list
    if not isinstance(tag, str):
        msg = f"{type(tag)=} must be a string."
        raise TypeError(msg)

    new_env_list = []
    for env in env_list:
        from_, class_ = envs[env].split(":")
        imported = getattr(__import__(from_, fromlist=[class_]), class_)
        env_tag = imported.metadata.get("tags", [])
        if tag in env_tag:
            new_env_list.append(env)
    return new_env_list
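
Usage sketch (the tag value is just an example; tags come from each environment's metadata):

from neurogym.envs.registration import all_envs

print(all_envs()[:5])          # first few native env ids, sorted
print(all_envs(tag="timing"))  # only envs whose metadata tags include "timing"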

spatialsuppressmotion

SpatialSuppressMotion

SpatialSuppressMotion(dt=8.3, timing=None, rewards=None)

Bases: TrialEnv

Spatial suppression motion task.

This task is useful to study center-surround interaction in monkey MT and human psychophysical performance in motion perception.

The task is derived from Tadin et al. (Nature, 2003). In this task, there is no fixation or decision stage. We only present a stimulus, and the subject needs to perform a 4-AFC motion direction judgment. The ground truth is the probabilities of choosing the four directions at a given time point. These probabilities depend on stimulus contrast and size, and are derived from empirically measured human psychophysical performance.

In this version, the input size is 4 (directions) x 8 (sizes) = 32 neurons. This setting simulates four pools of neurons (8 per pool), each selective for one of the four directions.

Parameters:

Name Type Description Default
dt

millisecs per image frame, default: 8.3 (given a 120 Hz monitor)

required
win_size

size per image frame

required
timing

millisecs, stimulus duration, default: 8.3 * 36 frames ~ 300 ms. This is the longest duration we need (i.e., until the probability reaches ceiling)

required

Note: please use the default seq_len = 36 frames when creating the dataset object.

FIXME: find more stable way of enforcing above.

Source code in neurogym/envs/spatialsuppressmotion.py
def __init__(self, dt=8.3, timing=None, rewards=None) -> None:
    if timing is None:
        timing = {"stimulus": 300}
    super().__init__(dt=dt)

    # Rewards
    self.rewards = {"abort": -0.1, "correct": +1.0, "fail": 0.0}
    if rewards:
        self.rewards.update(rewards)

    # Timing
    self.timing = {
        "stimulus": 300,  # we only need stimulus period for psychophysical task
    }
    if timing:
        self.timing.update(timing)

    self.abort = False

    # define action space four directions
    self.action_space = spaces.Box(
        0,
        1,
        shape=(4,),
        dtype=np.float32,
    )  # the probabilities for four direction

    # define observation space
    self.observation_space = spaces.Box(
        0,
        np.inf,
        shape=(32,),
        dtype=np.float32,
    )  # observation space, 4 directions * 8 sizes
    # larger stimulus could elicit more neurons to fire

    self.directions = [1, 2, 3, 4]  # motion direction left/right/up/down
    self.theta = [-np.pi / 2, np.pi / 2, 0, np.pi]  # direction angle of the four directions
    self.directions_anti = [2, 1, 4, 3]
    self.directions_ortho = [[3, 4], [3, 4], [1, 2], [1, 2]]

getgroundtruth

getgroundtruth(trial)

Utility function to obtain ground-truth probabilities for the four directions.

The input trial is a dict containing the fields duration, contrast, and diameter.

The output is a (seq_len, 4) array giving, at each time point, the probabilities of perceiving the left/right/up/down direction. These labels come from empirically measured human performance.

Source code in neurogym/envs/spatialsuppressmotion.py
def getgroundtruth(self, trial):
    """The utility function to obtain ground truth probabilities for four direction.

    Input trial is a dict, contains fields <duration>, <contrast>, <diameter>

    We output a (4,) tuple indicate the probabilities to perceive left/right/up/down direction. This label comes
    from emprically measured human performance
    """
    frame_ind = [8, 9, 10, 13, 15, 18, 21, 28, 36, 37, 38, 39]
    xx = [1, 2, 3, 4, 5, 6, 7]
    yy = [0.249] * 7

    frame_ind = xx + frame_ind  # fill in the first few frames
    frame_ind = [i - 1 for i in frame_ind]  # convert to 0-based frame indices

    seq_len = self.view_ob(period="stimulus").shape[0]
    xnew = np.arange(seq_len)

    if trial["contrast"] > 0.5:
        # large size (11 deg radius), High contrast
        prob_corr = [*yy, 0.249, 0.249, 0.249, 0.27, 0.32, 0.4583, 0.65, 0.85, 0.99, 0.99, 0.99, 0.99]
        prob_anti = [*yy, 0.249, 0.29, 0.31, 0.4, 0.475, 0.4167, 0.3083, 0.075, 0.04, 0.04, 0.03, 0.03]

    else:  # trial["contrast"] <= 0.5, so the probability arrays are always defined
        # large size (11 deg radius), low contrast
        prob_corr = [*yy, 0.25, 0.26, 0.2583, 0.325, 0.45, 0.575, 0.875, 0.933, 0.99, 0.99, 0.99, 0.99]
        prob_anti = [*yy, 0.25, 0.26, 0.2583, 0.267, 0.1417, 0.1167, 0.058, 0.016, 0.003, 0.003, 0.003, 0.003]

    corr_prob = interp1d(
        frame_ind,
        prob_corr,
        kind="slinear",
        fill_value="extrapolate",
    )(xnew)
    anti_prob = interp1d(
        frame_ind,
        prob_anti,
        kind="slinear",
        fill_value="extrapolate",
    )(xnew)
    ortho_prob = (1 - (corr_prob + anti_prob)) / 2

    direction = trial["direction"] - 1
    direction_anti = self.directions_anti[direction] - 1
    direction_ortho = [i - 1 for i in self.directions_ortho[direction]]

    gt = np.zeros((4, seq_len))
    gt[direction, :] = corr_prob
    gt[direction_anti, :] = anti_prob
    gt[direction_ortho, :] = ortho_prob

    return gt.T  # gt is a seq_len x 4 numpy array

tonedetection

auditory tone detection task.

ToneDetection

ToneDetection(dt=50, sigma=0.2, timing=None)

Bases: TrialEnv

A subject is asked to report whether a pure tone is embedded in background noise.

If yes, the subject should indicate the position of the tone. The tone lasts 50 ms and can appear at 500 ms, 1000 ms, or 1500 ms. It is embedded in noise.

By Ru-Yuan Zhang (ruyuanzhang@gmail.com)

Note that this version does not include a fixation period, as we mainly aim to model human data.

For an animal version of this task, consider including fixation and saccade cues. See https://www.nature.com/articles/nn1386

Note that the output labels are of shape (seq_len, batch_size). For a human perceptual task, you can simply run labels = labels[-1, :] to get the final output.

Parameters:

Name Type Description Default
dt

milliseconds, delta time

required
sigma

float, input noise level; controls the task difficulty

required
timing

stimulus timing

required
Source code in neurogym/envs/tonedetection.py
def __init__(self, dt=50, sigma=0.2, timing=None) -> None:
    super().__init__(dt=dt)
    """
    Here the key variables are
    <self.toneDur>: ms, duration of the tone
    <self.toneTiming>: ms, onset of the tone
    """
    self.sigma = sigma / np.sqrt(self.dt)  # Input noise

    # Rewards
    self.rewards = {
        "abort": -0.1,
        "correct": +1.0,
        "noresp": -0.1,
    }  # need to change here

    self.timing = {
        "stimulus": 2000,
        "toneTiming": [500, 1000, 1500],
        "toneDur": 50,
    }
    if timing:
        self.timing.update(timing)

    self.toneTiming = self.timing["toneTiming"]
    self.toneDur = self.timing["toneDur"]  # ms, the duration of a tone

    if dt > self.toneDur:
        msg = f"{dt=} must be smaller than or equal to the tone duration {self.toneDur} (default=50)."
        raise ValueError(msg)

    self.toneDurIdx = int(self.toneDur / dt)  # how many data point it lasts

    self.toneTimingIdx = [int(i / dt) for i in self.toneTiming]
    self.stimArray = np.zeros(int(self.timing["stimulus"] / dt))

    self.abort = False

    self.signals = np.linspace(0, 1, 5)[:-1]  # signal strength
    self.conditions = [0, 1, 2, 3]  # no tone, tone at position 1/2/3

    self.observation_space = spaces.Box(
        -np.inf,
        np.inf,
        shape=(1,),
        dtype=np.float32,
    )
    self.ob_dict = {"fixation": 0, "stimulus": 1}
    self.action_space = spaces.Discrete(4)
    self.act_dict = {"fixation": 0, "choice": range(1, 3 + 1)}