Wrappers

Monitor-v0

class neurogym.wrappers.monitor.Monitor(env, folder=None, sv_per=100000, sv_stp='trial', verbose=False, sv_fig=False, num_stps_sv_fig=100, name='', fig_type='png')[source]

Monitor task.

Saves relevant behavioral information: rewards, actions, observations, new-trial flags, and ground truth.

Parameters:
  • folder – Folder where the data will be saved. (def: None, str)

  • sv_per and sv_stp – Data will be saved every sv_per sv_stp's. (def: 100000, int)

  • verbose – Whether to print information about average reward and number of trials. (def: False, bool)

  • sv_fig – Whether to save a figure of the experiment structure. If True, the figure is updated every sv_per sv_stp's. (def: False, bool)

  • num_stps_sv_fig – Number of trial steps to include in the figure. (def: 100, int)

reset(step_fn=None)[source]

Resets the environment with kwargs.

step(action)[source]

Steps through the environment with action.
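The monitoring pattern above can be sketched with a minimal stand-in wrapper. This is plain Python, not neurogym's implementation: DummyEnv, the in-memory "save" list, and the field names are illustrative assumptions; it only shows the idea of accumulating per-trial data and flushing it every sv_per trials.

```python
class DummyEnv:
    """Toy trial-based environment (hypothetical stand-in for a neurogym task)."""
    def reset(self):
        return 0.0
    def step(self, action):
        # obs, reward, done, info; here every step ends a "trial"
        return 0.0, float(action == 1), False, {"new_trial": True, "gt": 1}

class MiniMonitor:
    """Sketch of a Monitor-like wrapper: records rewards, actions, and ground
    truth per trial, and flushes a snapshot every `sv_per` trials (kept in
    memory here instead of being written to `folder`)."""
    def __init__(self, env, sv_per=2, verbose=False):
        self.env = env
        self.sv_per = sv_per
        self.verbose = verbose
        self.data = {"action": [], "reward": [], "gt": []}
        self.saved = []          # stands in for files written to disk
        self.num_trials = 0

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if info.get("new_trial"):
            self.num_trials += 1
            self.data["action"].append(action)
            self.data["reward"].append(reward)
            self.data["gt"].append(info["gt"])
            if self.num_trials % self.sv_per == 0:
                self.saved.append({k: list(v) for k, v in self.data.items()})
                if self.verbose:
                    avg = sum(self.data["reward"]) / len(self.data["reward"])
                    print(f"trials: {self.num_trials}, avg reward: {avg:.2f}")
                self.data = {"action": [], "reward": [], "gt": []}
        return obs, reward, done, info

env = MiniMonitor(DummyEnv(), sv_per=2)
env.reset()
for a in [1, 0, 1, 1]:
    env.step(a)
```

After four trials with sv_per=2, two snapshots have been flushed, each holding the data of the two trials preceding it.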

Noise-v0

class neurogym.wrappers.noise.Noise(env, std_noise=0.1)[source]

Add Gaussian noise to the observations.

Parameters:
  • std_noise – Standard deviation of noise. (def: 0.1)

  • perf_th – If not None, the wrapper adjusts the noise so that the mean performance does not exceed perf_th. (def: None, float)

  • w – Window used to compute the mean performance. (def: 100, int)

  • step_noise – Step used to increment/decrease std. (def: 0.001, float)

reset(step_fn=None)[source]

Resets the environment with kwargs.

step(action)[source]

Steps through the environment with action.

PassReward-v0

class neurogym.wrappers.pass_reward.PassReward(env)[source]

Modifies observation by adding the previous reward.
reset(step_fn=None)[source]

Resets the environment with kwargs.

step(action)[source]

Steps through the environment with action.
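The effect on the observation can be sketched as a simple concatenation (the helper name is illustrative, not neurogym's API):

```python
import numpy as np

def pass_reward(obs, prev_reward):
    """Append the reward from the previous step to the observation vector,
    mirroring what the PassReward wrapper does on each step."""
    return np.concatenate([np.asarray(obs, float), [float(prev_reward)]])

obs = pass_reward([0.2, 0.8], prev_reward=1.0)
```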

PassAction-v0

class neurogym.wrappers.pass_action.PassAction(env)[source]

Modifies observation by adding the previous action.

reset(step_fn=None)[source]

Resets the environment with kwargs.

step(action)[source]

Steps through the environment with action.
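A sketch of the same idea for actions follows. The previous action is encoded one-hot here, which is an illustrative choice (the wrapper's exact encoding may differ), and the helper name is not part of neurogym's API:

```python
import numpy as np

def pass_action(obs, prev_action, n_actions):
    """Append the previous action to the observation, encoded one-hot over
    the `n_actions` possible actions (illustrative encoding)."""
    onehot = np.zeros(n_actions)
    onehot[prev_action] = 1.0
    return np.concatenate([np.asarray(obs, float), onehot])

obs = pass_action([0.5], prev_action=2, n_actions=3)
```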

ReactionTime-v0

class neurogym.wrappers.reaction_time.ReactionTime(env, urgency=0.0)[source]

Allow reaction-time responses.

Modifies a given environment by allowing the network to act at any time after the fixation period.

reset(step_fn=None, **kwargs)[source]

Resets the environment with kwargs.

step(action)[source]

Steps through the environment with action.
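The reaction-time rule can be sketched as follows, under the common convention (an assumption here) that action 0 means "fixate" and any other action is a response: once fixation is over, the first non-fixation action ends the trial.

```python
def run_reaction_time_trial(actions, fixation_len=3):
    """Sketch of the reaction-time rule: after the fixation period
    (`fixation_len` steps), the first non-fixation action (anything != 0)
    ends the trial. Returns the step index of the response, or None if the
    agent never responded."""
    for t, a in enumerate(actions):
        if t >= fixation_len and a != 0:
            return t
    return None

# agent holds fixation for 4 steps, then responds with action 1
rt = run_reaction_time_trial([0, 0, 0, 0, 1, 0], fixation_len=3)
```

The returned index is a direct measure of the network's reaction time.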

SideBias-v0

class neurogym.wrappers.side_bias.SideBias(env, probs=None, block_dur=200)[source]

Changes the probability of ground truth.

Parameters:
  • probs – Specifies the probability of each choice. Within each block, the probabilities must sum to 1. (def: None, numpy array (n_block, n_choices))

  • block_dur – Number of trials per block. (def: 200, int)
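The block structure can be sketched as follows: each block of block_dur trials draws its ground truths from one row of probs. The helper name and the cycling order over blocks are illustrative assumptions, not neurogym's implementation.

```python
import numpy as np

def sample_ground_truths(probs, block_dur, n_blocks, rng):
    """Sketch of SideBias sampling: for each block, draw `block_dur`
    ground-truth choices from the corresponding row of `probs`
    (shape (n_block, n_choices), each row summing to 1)."""
    gts = []
    for b in range(n_blocks):
        row = probs[b % len(probs)]
        gts.extend(rng.choice(len(row), size=block_dur, p=row).tolist())
    return gts

rng = np.random.default_rng(1)
# block 1 favors choice 0, block 2 favors choice 1
probs = np.array([[0.9, 0.1], [0.1, 0.9]])
gts = sample_ground_truths(probs, block_dur=50, n_blocks=2, rng=rng)
```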

RandomGroundTruth-v0

class neurogym.wrappers.block.RandomGroundTruth(env, p=None)[source]

ScheduleAttr-v0

class neurogym.wrappers.block.ScheduleAttr(env, schedule, attr_list)[source]

Schedule attributes.

Parameters:
  • env – TrialEnv object

  • schedule

seed(seed=None)[source]

Seeds the environment.

ScheduleEnvs-v0

class neurogym.wrappers.block.ScheduleEnvs(envs, schedule, env_input=False)[source]

Schedule environments.

Parameters:
  • envs – list of env object

  • schedule – utils.scheduler.BaseSchedule object

  • env_input – bool, if True, add scalar inputs indicating current environment. default False.

reset(**kwargs)[source]

Reset each environment in self.envs and use the scheduler to select the environment that returns the initial observation. That environment also becomes the current environment, self.env.

seed(seed=None)[source]

Seeds the environment.

set_i(i)[source]

Set the current environment to the i-th environment in the list envs.
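The scheduling pattern can be sketched with a minimal stand-in (plain Python, not neurogym's implementation; SequentialSchedule stands in for a utils.scheduler.BaseSchedule object, and the environments are represented by plain strings):

```python
class SequentialSchedule:
    """Illustrative round-robin scheduler: returns 0, 1, ..., n-1, 0, ..."""
    def __init__(self, n):
        self.n = n
        self.i = -1
    def __call__(self):
        self.i = (self.i + 1) % self.n
        return self.i

class MiniScheduleEnvs:
    """Sketch of ScheduleEnvs: on each reset, the scheduler picks which
    environment in `envs` becomes the current one (self.env)."""
    def __init__(self, envs, schedule):
        self.envs = envs
        self.schedule = schedule
        self.env = None
    def set_i(self, i):
        self.env = self.envs[i]
    def reset(self):
        self.set_i(self.schedule())
        return self.env  # a real wrapper would return self.env.reset()

envs = ["task_a", "task_b", "task_c"]
wrapper = MiniScheduleEnvs(envs, SequentialSchedule(len(envs)))
picked = [wrapper.reset() for _ in range(4)]
```

With a round-robin schedule, successive resets cycle through the environment list.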

TrialHistoryV2-v0

class neurogym.wrappers.block.TrialHistoryV2(env, probs=None)[source]

Change ground truth probability based on previous outcome.

Parameters:
  • probs – Matrix of probabilities of the current choice conditioned on the previous choice. Shape: (num-choices, num-choices).
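The sampling rule can be sketched as drawing the next ground truth from the row of the transition matrix indexed by the previous choice (the helper name is illustrative, not neurogym's API):

```python
import numpy as np

def next_ground_truth(probs, prev_choice, rng):
    """Sketch of TrialHistoryV2: draw the next ground truth from the row of
    the (num-choices x num-choices) transition matrix indexed by the
    previous choice."""
    return int(rng.choice(len(probs), p=probs[prev_choice]))

rng = np.random.default_rng(0)
# repeat-biased transition matrix: 80% chance of repeating the last choice
probs = np.array([[0.8, 0.2], [0.2, 0.8]])
choice = 0
seq = []
for _ in range(200):
    choice = next_ground_truth(probs, choice, rng)
    seq.append(choice)
repeats = sum(seq[i] == seq[i - 1] for i in range(1, len(seq)))
```

With this matrix, roughly 80% of consecutive trials share the same ground truth, inducing the serial correlations the wrapper is designed to create.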