Wrappers
block
¶
ScheduleAttr
¶
ScheduleAttr(env: TrialEnv, schedule, attr_list)
Bases: TrialWrapper
Schedule attributes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| env | TrialEnv | TrialEnv object | required |
| schedule | | | required |
| attr_list | | | required |
Source code in neurogym/wrappers/block.py
MultiEnvs
¶
Bases: TrialWrapper
Wrap multiple environments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| envs | | List of environment objects. | required |
| env_input | | bool; if True, add scalar inputs indicating the current environment. | False |
Source code in neurogym/wrappers/block.py
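The `env_input` option can be pictured with a small sketch (the helper name is hypothetical, not the library's code): one plausible encoding of "scalar inputs indicating the current environment" is a one-hot indicator appended to the observation.

```python
import numpy as np

# Illustrative sketch only: append a one-hot indicator of the active
# environment to the observation vector. Helper name is hypothetical.
def with_env_indicator(obs: np.ndarray, env_index: int, n_envs: int) -> np.ndarray:
    indicator = np.zeros(n_envs)
    indicator[env_index] = 1.0  # mark the active environment
    return np.concatenate([np.asarray(obs, dtype=float), indicator])

obs = with_env_indicator(np.array([0.5, 0.2]), env_index=1, n_envs=3)
```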
ScheduleEnvs
¶
Bases: TrialWrapper
Schedule environments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| envs | | List of environment objects. | required |
| schedule | | utils.scheduler.BaseSchedule object | required |
| env_input | | bool; if True, add scalar inputs indicating the current environment. | False |
Source code in neurogym/wrappers/block.py
reset
¶
Resets environments.
Reset each environment in self.envs and use the scheduler to select the environment whose initial observation is returned. This environment also becomes the current environment self.env.
Source code in neurogym/wrappers/block.py
set_i
¶
Set the current environment to the i-th environment in the list envs.
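The scheduler contract that ScheduleEnvs relies on can be illustrated with a minimal sketch (hypothetical class; a real schedule would come from neurogym's scheduler utilities): each call returns the index of the environment to run next.

```python
# Hypothetical minimal schedule: cycle through the environments in order.
class RoundRobinSchedule:
    def __init__(self, n_envs: int) -> None:
        self.n_envs = n_envs
        self.i = -1

    def __call__(self) -> int:
        self.i = (self.i + 1) % self.n_envs  # advance to the next environment
        return self.i

schedule = RoundRobinSchedule(3)
order = [schedule() for _ in range(4)]  # 0, 1, 2, 0
```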
TrialHistoryV2
¶
TrialHistoryV2(env: TrialEnv, probs: ndarray | None = None)
Bases: TrialWrapper
Change ground truth probability based on previous outcome.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| probs | ndarray \| None | Matrix of probabilities of the current choice conditioned on the previous one. Shape: num-choices x num-choices. | None |
Source code in neurogym/wrappers/block.py
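The conditional-probability idea can be sketched as follows (this is an illustration of the matrix semantics, not the library's code): the next trial's ground truth is drawn from the row of `probs` indexed by the previous ground truth.

```python
import numpy as np

# Sketch: sample the next choice conditioned on the previous one.
def next_ground_truth(prev: int, probs: np.ndarray, rng: np.random.Generator) -> int:
    return int(rng.choice(probs.shape[1], p=probs[prev]))

rng = np.random.default_rng(0)
probs = np.array([[0.9, 0.1],   # after choice 0, repeat 0 with p=0.9
                  [0.1, 0.9]])  # after choice 1, repeat 1 with p=0.9
gt = 0
sequence = [gt := next_ground_truth(gt, probs, rng) for _ in range(10)]
```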
monitor
¶
Monitor
¶
Monitor(
env: TrialEnv,
config: Config | str | Path | None = None,
name: str | None = None,
trigger: str = "trial",
interval: int = 1000,
verbose: bool = True,
plot_create: bool = False,
plot_steps: int = 1000,
ext: str = "png",
step_fn: Callable | None = None,
save_dir: str | Path | None = None,
)
Bases: Wrapper
Monitor class to log, visualize, and evaluate NeuroGym environment behavior.
Wraps a NeuroGym TrialEnv to track actions, rewards, and performance metrics, save them to disk, and optionally generate trial visualizations. Supports logging at trial or step level, with configurable frequency and verbosity.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| env | TrialEnv | The NeuroGym environment to wrap. | required |
| config | Config \| str \| Path \| None | Optional configuration source (Config object, TOML file path, or dictionary). | None |
| name | str \| None | Optional monitor name; defaults to the environment class name. | None |
| trigger | str | When to save data ("trial" or "step"). | 'trial' |
| interval | int | How often to save data, in number of trials or steps. | 1000 |
| plot_create | bool | Whether to generate and save visualizations of environment behavior. | False |
| plot_steps | int | Number of steps to visualize in each plot. | 1000 |
| ext | str | Image file extension for saved plots (e.g., "png"). | 'png' |
| step_fn | Callable \| None | Optional custom step function to override the environment's. | None |
| verbose | bool | Whether to log information when logging or saving data. | True |
| level | | Logging verbosity level (e.g., "INFO", "DEBUG"). | required |
| log_trigger | | When to log progress ("trial" or "step"). | required |
| log_interval | | How often to log, in trials or steps. | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| config | Config | Final validated configuration object. |
| data | dict[str, list] | Collected behavioral data for each completed trial. |
| data_eval | dict[str, Any] | Evaluation data collected during policy evaluation runs. |
| cum_reward | | Cumulative reward for the current trial. |
| num_tr | | Number of completed trials. |
| t | | Step counter (used when trigger is "step"). |
| save_dir | | Directory where data and plots are saved. |
Source code in neurogym/wrappers/monitor.py
reset
¶
Reset the environment.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| seed | | Random seed for the environment. | None |

Returns:

| Type | Description |
|---|---|
| | The initial observation from the environment reset. |
Source code in neurogym/wrappers/monitor.py
step
¶
step(
action: Any,
collect_data: bool = True,
record: bool = True,
) -> tuple[Any, float, bool, bool, dict[str, Any]]
Execute one environment step.
This method:
1. Takes a step in the environment
2. Collects data if sv_fig is enabled
3. Saves data when a trial completes and the saving conditions are met
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| action | Any | The action to take in the environment. | required |
| collect_data | bool | If True, collect and save data. | True |
| record | bool | A toggle for recording activation traces. | True |

Returns:

| Type | Description |
|---|---|
| tuple[Any, float, bool, bool, dict[str, Any]] | Tuple of (observation, reward, terminated, truncated, info). |
Source code in neurogym/wrappers/monitor.py
reset_data
¶
store_data
¶
Store data for visualization figures.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obs | Any | Current observation. | required |
| action | Any | Current action. | required |
| rew | float | Current reward. | required |
| info | dict[str, Any] | Info dictionary from the environment. | required |
Source code in neurogym/wrappers/monitor.py
evaluate_policy
¶
evaluate_policy(
num_trials: int = 100,
model: Any | None = None,
verbose: bool = True,
) -> dict[str, float | list[float]]
Evaluate the average performance of the RL agent in the environment.
This method runs the given model (or random policy if None) on the environment for a specified number of trials and collects performance metrics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_trials | int | Number of trials to run for evaluation. | 100 |
| model | Any \| None | The policy model to evaluate (if None, uses random actions). | None |
| verbose | bool | If True, prints progress information. | True |
Returns:

| Type | Description |
|---|---|
| dict | Dictionary containing performance metrics: mean_performance (average performance, if reported by the environment), mean_reward (proportion of positive rewards), performances (list of per-trial performance values), rewards (list of per-trial rewards). |
Source code in neurogym/wrappers/monitor.py
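The relationship between the returned metrics can be sketched with a small helper (hedged: this mirrors the Returns description above, not the library's exact implementation, and the helper name is illustrative).

```python
# Hypothetical helper mirroring the documented return fields of
# evaluate_policy: mean_reward is the proportion of positive rewards,
# mean_performance averages the per-trial performance values.
def summarize_trials(rewards: list[float], performances: list[float]) -> dict:
    return {
        "rewards": rewards,
        "performances": performances,
        "mean_reward": sum(1 for r in rewards if r > 0) / len(rewards),
        "mean_performance": sum(performances) / len(performances),
    }

metrics = summarize_trials(rewards=[1.0, -1.0, 1.0, 1.0], performances=[1, 0, 1, 1])
```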
plot_training_history
¶
plot_training_history(
figsize: tuple[int, int] = (12, 6),
save_fig: bool = True,
plot_performance: bool = True,
) -> Figure | None
Plot rewards and performance training history from saved data files with one data point per trial.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| figsize | tuple[int, int] | Figure size as a (width, height) tuple. | (12, 6) |
| save_fig | bool | Whether to save the figure to disk. | True |
| plot_performance | bool | Whether to plot performance in a separate plot. | True |

Returns:

| Type | Description |
|---|---|
| Figure \| None | The Matplotlib figure object. |
Source code in neurogym/wrappers/monitor.py
record_activations
¶
record_activations(
layer: Module,
name: str | None = None,
steps: int | None = None,
) -> ActivationProbe
Record the output activations of a layer over a trial.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| layer | Module | The layer whose activations are being monitored. | required |
| name | str \| None | The name to use for the activation monitor; useful for retrieving activation monitors at a later stage. | None |
| steps | int \| None | The number of steps to record, which can be less than the total number of steps in a trial. | None |

Returns:

| Type | Description |
|---|---|
| ActivationProbe | An ActivationProbe instance. |
Source code in neurogym/wrappers/monitor.py
plot_activations
¶
plot_activations(
name: str,
population: str,
figsize: tuple[int, ...] | None = None,
neurons: int | list[int] | None = None,
mean: bool = False,
batch_dim: int | None = None,
) -> tuple[Figure, Axes]
Plot the neuron activations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the layer. | required |
| population | str | The neuron population to plot. | required |
| figsize | tuple[int, ...] \| None | The size of the figure. | None |
| neurons | int \| list[int] \| None | List of neuron ids to plot. If None, all neurons are plotted. | None |
| mean | bool | If set, plot the mean activation over all trials rather than each separate trial. | False |
| batch_dim | int \| None | The batch dimension, if it exists. | None |

Raises:

| Type | Description |
|---|---|
| ValueError | Raised if there is no such layer in the history. |
| KeyError | Raised if activations have not been recorded for the requested neuron population. |
| ValueError | Raised if the requested neuron IDs are outside the layer range. |

Returns:

| Type | Description |
|---|---|
| tuple[Figure, Axes] | A plot of all neuron activations. |
Source code in neurogym/wrappers/monitor.py
noise
¶
Noise
¶
Noise(env: TrialEnv, std_noise: float = 0.1)
Bases: Wrapper
Add Gaussian noise to the observations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| env | TrialEnv | The NeuroGym environment to wrap. | required |
| std_noise | float | Standard deviation of the noise. | 0.1 |
| perf_th | float \| None | If not None, the wrapper adjusts the noise so that the mean performance does not exceed perf_th. | required |
| w | int | Window used to compute the mean performance. | required |
| step_noise | float | Step used to increase or decrease std. | required |
Source code in neurogym/wrappers/noise.py
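The wrapper's per-observation effect can be sketched in a few lines (illustrative helper, not the library's code): independent Gaussian noise with standard deviation std_noise is added to each observation.

```python
import numpy as np

# Minimal sketch of the wrapper's effect on each observation.
def add_observation_noise(obs: np.ndarray, std_noise: float,
                          rng: np.random.Generator) -> np.ndarray:
    return obs + rng.normal(scale=std_noise, size=obs.shape)

rng = np.random.default_rng(0)
noisy = add_observation_noise(np.zeros(4), std_noise=0.1, rng=rng)
```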
pass_action
¶
PassAction
¶
PassAction(env: TrialEnv)
Bases: Wrapper
Modifies observation by adding the previous action.
Source code in neurogym/wrappers/pass_action.py
pass_reward
¶
PassReward
¶
PassReward(env: TrialEnv)
Bases: Wrapper
Modifies observation by adding the previous reward.
Source code in neurogym/wrappers/pass_reward.py
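Conceptually, PassReward appends the previous reward to the observation vector, and PassAction does the same with the previous action. A hedged sketch (the helper name is illustrative, not the library's code):

```python
import numpy as np

# Sketch: extend the observation with the previous reward (PassAction
# would append an encoding of the previous action instead).
def append_prev_reward(obs: np.ndarray, prev_reward: float) -> np.ndarray:
    return np.concatenate([np.asarray(obs, dtype=float), [prev_reward]])

obs = append_prev_reward(np.array([0.3, 0.7]), prev_reward=1.0)
```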
reaction_time
¶
ReactionTime
¶
ReactionTime(
env: TrialEnv,
urgency: float = 0.0,
end_on_stimulus: bool = True,
)
Bases: Wrapper
Allow reaction time response.
Modifies a given environment by allowing the network to act at any time after the fixation period. By default, the trial ends when the stimulus period ends. Optionally, the original trial structure can be preserved while still allowing early responses.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| env | TrialEnv | The environment to wrap. | required |
| urgency | float | Urgency signal added to the reward at each timestep. | 0.0 |
| end_on_stimulus | bool | If True (default), the trial ends when the stimulus ends. If False, the original trial timing is preserved while still allowing early responses during the stimulus period. | True |
Source code in neurogym/wrappers/reaction_time.py
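The reaction-time rule with end_on_stimulus=True can be sketched as follows (a hedged illustration: the fixation action is assumed to be 0 here, and the helper name is hypothetical).

```python
# Sketch: after the fixation period, the first non-fixation action
# (any action other than 0 in this illustration) ends the trial.
def steps_until_response(actions: list[int], fixation_steps: int, max_steps: int) -> int:
    for t, a in enumerate(actions):
        if t >= fixation_steps and a != 0:
            return t + 1  # trial ends at the step of the response
    return max_steps      # no response: trial runs to the end of the stimulus

n = steps_until_response([0, 0, 2, 0], fixation_steps=1, max_steps=4)  # responds at step 3
```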
side_bias
¶
SideBias
¶
SideBias(
env: TrialEnv,
probs: list[list[float]],
block_dur: float | tuple[int, int] = 200,
)
Bases: TrialWrapper
Changes the probability of ground truth with block-wise biases.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| env | TrialEnv | The task environment to wrap (must expose …). | required |
| probs | list[list[float]] | Explicit probability matrix with shape (n_blocks, n_choices). Each row defines the choice probabilities for one block and must sum to 1.0. The number of columns (n_choices) must match the number of choices defined in the task (i.e., …). | required |
| block_dur | float \| tuple[int, int] | Duration of each block, with behavior depending on the type. int (≥ 1): a fixed number of trials per block (e.g., block_dur=20 means each block has exactly 20 trials). float (0 < value < 1): a per-trial probability of switching to a new block (e.g., block_dur=0.1 means a 10% chance of switching blocks after each trial). tuple (low, high): the number of trials per block is drawn uniformly from the integer range [low, high] (inclusive). | 200 |
Examples:
- probs=[[0.8, 0.2], [0.2, 0.8], [0.4, 0.6]], block_dur=200: stay 200 trials per block, then switch to a new random block;
- probs=[[0.8, 0.2], [0.2, 0.8], [0.4, 0.6]], block_dur=0.1: 10% probability per trial of switching to a new random block;
- probs=[[0.8, 0.2], [0.2, 0.8], [0.4, 0.6]], block_dur=(200, 400): a random number of trials in the [200, 400] range per block, then switch.
Source code in neurogym/wrappers/side_bias.py
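The three block_dur interpretations listed above can be sketched in one function (hypothetical helper, not the library's code; the geometric draw below is one way to realize a per-trial switch probability).

```python
import random

# Sketch of the three block_dur forms: fixed length, per-trial switch
# probability, or a uniform integer draw from an inclusive range.
def draw_block_length(block_dur, rng: random.Random) -> int:
    if isinstance(block_dur, tuple):            # (low, high): uniform, inclusive
        low, high = block_dur
        return rng.randint(low, high)
    if isinstance(block_dur, float) and 0 < block_dur < 1:
        n = 1                                   # per-trial switch probability:
        while rng.random() >= block_dur:        # geometric number of trials
            n += 1
        return n
    return int(block_dur)                       # fixed trials per block

rng = random.Random(0)
fixed = draw_block_length(20, rng)              # always 20
ranged = draw_block_length((200, 400), rng)     # somewhere in [200, 400]
```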
new_trial
¶
Generate new trial with block-based probability biases.