Wrappers
block
¶
ScheduleAttr
¶
Bases: TrialWrapper
Schedule attributes.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
env | | TrialEnv object | required |
schedule | | | required |
Source code in neurogym/wrappers/block.py
MultiEnvs
¶
Bases: TrialWrapper
Wrap multiple environments.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
envs | | list of env objects | required |
env_input | | bool; if True, add scalar inputs indicating the current environment. | False |
Source code in neurogym/wrappers/block.py
ScheduleEnvs
¶
Bases: TrialWrapper
Schedule environments.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
envs | | list of env objects | required |
schedule | | utils.scheduler.BaseSchedule object | required |
env_input | | bool; if True, add scalar inputs indicating the current environment. | False |
Source code in neurogym/wrappers/block.py
reset
¶
Resets environments.
Reset each environment in self.envs and use the scheduler to select the environment returning the initial observation. This environment is also used to set the current environment self.env.
Source code in neurogym/wrappers/block.py
set_i
¶
Set the current environment to the i-th environment in the list envs.
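The scheduling mechanism above can be illustrated with a minimal, self-contained sketch. `RandomSchedule` and `ScheduleEnvsSketch` below are hypothetical stand-ins written for this example, not neurogym's actual classes; the point is only to show how a scheduler picks the index of the next environment and how a `set_i`-style switch makes it current.

```python
import random


class RandomSchedule:
    """Minimal stand-in for a scheduler: picks one of n environments at random."""

    def __init__(self, n, seed=None):
        self.n = n
        self.rng = random.Random(seed)

    def __call__(self):
        return self.rng.randrange(self.n)


class ScheduleEnvsSketch:
    """Illustrative sketch of ScheduleEnvs: on each new trial, ask the
    scheduler for an index and make that environment the current one."""

    def __init__(self, envs, schedule):
        self.envs = envs
        self.schedule = schedule
        self.env = envs[0]

    def new_trial(self):
        i = self.schedule()      # scheduler selects the next environment
        self.env = self.envs[i]  # set_i-style switch of the current env
        return i


envs = ["task_a", "task_b", "task_c"]
wrapper = ScheduleEnvsSketch(envs, RandomSchedule(len(envs), seed=0))
indices = [wrapper.new_trial() for _ in range(5)]
```

In the real wrapper the scheduler is a `utils.scheduler.BaseSchedule` object and the environments are TrialEnv instances; the selection-then-switch pattern is the same.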
TrialHistoryV2
¶
Bases: TrialWrapper
Change ground truth probability based on previous outcome.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
probs | | Matrix of probabilities of the current choice conditioned on the previous one. Shape: num-choices x num-choices. | None |
Source code in neurogym/wrappers/block.py
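The conditioning on the previous outcome amounts to sampling each trial's ground truth from the row of the probability matrix indexed by the previous choice. A minimal sketch of that sampling step (stdlib only; the 2x2 `probs` matrix here is an arbitrary example, not a neurogym default):

```python
import random

# Row i gives P(current choice | previous choice == i); rows sum to 1.
probs = [
    [0.8, 0.2],  # after choice 0, choice 0 is likely to repeat
    [0.2, 0.8],  # after choice 1, choice 1 is likely to repeat
]

rng = random.Random(0)
prev = 0
history = []
for _ in range(10):
    # draw the current ground truth conditioned on the previous one
    prev = rng.choices(range(len(probs)), weights=probs[prev])[0]
    history.append(prev)
```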
monitor
¶
Monitor
¶
Monitor(env: TrialEnv, config: Config | str | Path | None = None, name: str | None = None, trigger: str = 'trial', interval: int = 1000, plot_create: bool = False, plot_steps: int = 1000, ext: str = 'png', step_fn: Callable | None = None, verbose: bool = True, level: str = 'INFO', log_trigger: str = 'trial', log_interval: int = 1000)
Bases: Wrapper
Monitor class to log, visualize, and evaluate NeuroGym environment behavior.
Wraps a NeuroGym TrialEnv to track actions, rewards, and performance metrics, save them to disk, and optionally generate trial visualizations. Supports logging at trial or step level, with configurable frequency and verbosity.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
env | TrialEnv | The NeuroGym environment to wrap. | required |
config | Config \| str \| Path \| None | Optional configuration source (Config object, TOML file path, or dictionary). | None |
name | str \| None | Optional monitor name; defaults to the environment class name. | None |
trigger | str | When to save data ("trial" or "step"). | 'trial' |
interval | int | How often to save data, in number of trials or steps. | 1000 |
plot_create | bool | Whether to generate and save visualizations of environment behavior. | False |
plot_steps | int | Number of steps to visualize in each plot. | 1000 |
ext | str | Image file extension for saved plots (e.g., "png"). | 'png' |
step_fn | Callable \| None | Optional custom step function to override the environment's. | None |
verbose | bool | Whether to print information when logging or saving data. | True |
level | str | Logging verbosity level (e.g., "INFO", "DEBUG"). | 'INFO' |
log_trigger | str | When to log progress ("trial" or "step"). | 'trial' |
log_interval | int | How often to log, in trials or steps. | 1000 |
Attributes:
Name | Type | Description |
---|---|---|
config | Config | Final validated configuration object. |
data | dict[str, list] | Collected behavioral data for each completed trial. |
cum_reward | | Cumulative reward for the current trial. |
num_tr | | Number of completed trials. |
t | | Step counter (used when trigger is "step"). |
save_dir | | Directory where data and plots are saved. |
Source code in neurogym/wrappers/monitor.py
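The trigger/interval bookkeeping can be sketched with a toy counter. `MonitorSketch` below is a hypothetical stand-in written for this example, not the actual Monitor implementation: with `trigger='trial'` a save fires when a trial completes and the completed-trial count hits a multiple of `interval`; with `trigger='step'` it fires on step-count multiples instead.

```python
class MonitorSketch:
    """Illustrative sketch of Monitor's save-trigger logic."""

    def __init__(self, trigger="trial", interval=1000):
        self.trigger = trigger
        self.interval = interval
        self.num_tr = 0  # completed trials
        self.t = 0       # step counter (used when trigger == 'step')
        self.saves = 0   # number of times buffered data were flushed

    def end_of_step(self, new_trial):
        self.t += 1
        if new_trial:
            self.num_tr += 1
        if self.trigger == "trial":
            # save only when a trial just completed, every `interval` trials
            if new_trial and self.num_tr % self.interval == 0:
                self.saves += 1  # stand-in for writing data to save_dir
        elif self.t % self.interval == 0:
            self.saves += 1


mon = MonitorSketch(trigger="trial", interval=10)
for step in range(1, 101):
    mon.end_of_step(new_trial=(step % 5 == 0))  # a trial ends every 5 steps
# mon.num_tr == 20, mon.saves == 2
```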
reset
¶
Reset the environment.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
seed | | Random seed for the environment. | None |
Returns:
Type | Description |
---|---|
| The initial observation from the environment reset. |
Source code in neurogym/wrappers/monitor.py
step
¶
Execute one environment step.
This method:

1. Takes a step in the environment.
2. Collects data if sv_fig is enabled.
3. Saves data when a trial completes and the saving conditions are met.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
action | Any | The action to take in the environment. | required |
collect_data | bool | If True, collect and save data. | True |
Returns:
Type | Description |
---|---|
tuple[Any, float, bool, bool, dict[str, Any]] | Tuple of (observation, reward, terminated, truncated, info). |
Source code in neurogym/wrappers/monitor.py
reset_data
¶
store_data
¶
Store data for visualization figures.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
obs | Any | Current observation. | required |
action | Any | Current action. | required |
rew | float | Current reward. | required |
info | dict[str, Any] | Info dictionary from the environment. | required |
Source code in neurogym/wrappers/monitor.py
evaluate_policy
¶
evaluate_policy(num_trials: int = 100, model: Any | None = None, verbose: bool = True) -> dict[str, float | list[float]]
Evaluates the average performance of the RL agent in the environment.
This method runs the given model (or random policy if None) on the environment for a specified number of trials and collects performance metrics.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
num_trials | int | Number of trials to run for evaluation. | 100 |
model | Any \| None | The policy model to evaluate (if None, uses random actions). | None |
verbose | bool | If True, prints progress information. | True |
Returns:

dict: Dictionary containing performance metrics:

- mean_performance: Average performance (if reported by the environment).
- mean_reward: Proportion of positive rewards.
- performances: List of performance values for each trial.
- rewards: List of rewards for each trial.
Source code in neurogym/wrappers/monitor.py
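The random-policy branch (model=None) reduces to a loop that samples actions until enough trials complete and then summarizes rewards. A hedged sketch under stated assumptions: `evaluate_random_policy`, `env_step`, and `toy_step` are hypothetical names invented here, and the toy environment (a single-step trial where action 1 is always correct) stands in for a real TrialEnv.

```python
import random


def evaluate_random_policy(env_step, n_actions, num_trials=100, seed=0):
    """Sketch of evaluate_policy with model=None: take random actions
    until num_trials trials complete, then report reward statistics."""
    rng = random.Random(seed)
    rewards, trials = [], 0
    while trials < num_trials:
        action = rng.randrange(n_actions)
        reward, new_trial = env_step(action)
        if new_trial:
            rewards.append(reward)
            trials += 1
    return {
        # proportion of trials with a positive reward
        "mean_reward": sum(r > 0 for r in rewards) / len(rewards),
        "rewards": rewards,
    }


def toy_step(action):
    # hypothetical environment: action 1 is always correct, each step ends a trial
    return (1.0 if action == 1 else 0.0), True


metrics = evaluate_random_policy(toy_step, n_actions=2, num_trials=50)
```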
plot_training_history
¶
plot_training_history(figsize: tuple[int, int] = (12, 6), save_fig: bool = True, plot_performance: bool = True) -> Figure | None
Plot rewards and performance training history from saved data files with one data point per trial.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
figsize | tuple[int, int] | Figure size as a (width, height) tuple. | (12, 6) |
save_fig | bool | Whether to save the figure to disk. | True |
plot_performance | bool | Whether to plot performance in a separate plot. | True |
Returns: The matplotlib Figure object (or None).
Source code in neurogym/wrappers/monitor.py
noise
¶
Noise wrapper.
Created on Thu Feb 28 15:07:21 2019
@author: molano
Noise
¶
Bases: Wrapper
Add Gaussian noise to the observations.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
std_noise | | Standard deviation of the noise (default 0.1). | 0.1 |
perf_th | | If not None, the wrapper adjusts the noise so that the mean performance is not larger than perf_th (float; default None). | required |
w | | Window used to compute the mean performance (int; default 100). | required |
step_noise | | Step used to increment/decrease the std (float; default 0.001). | required |
Source code in neurogym/wrappers/noise.py
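The core of this wrapper is a per-step perturbation of the observation. A minimal sketch, assuming a list-like observation and using the stdlib instead of numpy; `add_observation_noise` is a hypothetical helper written for this example, not the wrapper's actual API:

```python
import random


def add_observation_noise(obs, std_noise=0.1, rng=None):
    """Sketch of the Noise wrapper's core step: add zero-mean Gaussian
    noise with standard deviation std_noise to every observation entry."""
    rng = rng or random.Random()
    return [x + rng.gauss(0.0, std_noise) for x in obs]


rng = random.Random(0)
noisy = add_observation_noise([1.0, 0.0, 0.0], std_noise=0.1, rng=rng)
```

The adaptive part (perf_th, w, step_noise) would additionally track mean performance over a window of `w` trials and nudge `std_noise` up or down by `step_noise` to keep performance at or below `perf_th`.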
pass_action
¶
PassAction
¶
Bases: Wrapper
Modifies observation by adding the previous action.
Source code in neurogym/wrappers/pass_action.py
pass_reward
¶
PassReward
¶
Bases: Wrapper
Modifies observation by adding the previous reward.
Source code in neurogym/wrappers/pass_reward.py
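Both PassAction and PassReward augment the observation with a signal from the previous step. A combined sketch of that idea, assuming the previous action is appended as a one-hot vector and the previous reward as a scalar; `augment_observation` is a hypothetical helper, not the wrappers' actual code:

```python
def augment_observation(obs, prev_action, n_actions, prev_reward):
    """Sketch of PassAction + PassReward: append a one-hot encoding of
    the previous action and the previous scalar reward to the observation."""
    one_hot = [1.0 if a == prev_action else 0.0 for a in range(n_actions)]
    return list(obs) + one_hot + [prev_reward]


aug = augment_observation([0.2, 0.8], prev_action=1, n_actions=3, prev_reward=1.0)
# aug -> [0.2, 0.8, 0.0, 1.0, 0.0, 1.0]
```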
reaction_time
¶
Reaction-time wrapper.
Created on Thu Feb 28 15:07:21 2019
@author: molano
ReactionTime
¶
Bases: Wrapper
Allow reaction time response.
Modifies a given environment by allowing the network to act at any time after the fixation period.
Source code in neurogym/wrappers/reaction_time.py
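The rule being described, acting at any time after fixation ends the trial, can be sketched as a small decision function. `reaction_time_step` and its phase strings are hypothetical names for this illustration, not the wrapper's API:

```python
def reaction_time_step(phase, action, fixation_action=0):
    """Sketch of the ReactionTime idea: after the fixation period, any
    non-fixation action immediately ends the trial as the response."""
    if phase == "fixation":
        # responding during fixation is a fixation break, not a valid response
        return {"end_trial": False, "broke_fixation": action != fixation_action}
    # stimulus/decision phase: responding at any time is allowed
    return {"end_trial": action != fixation_action, "broke_fixation": False}


r = reaction_time_step("stimulus", action=2)
# r["end_trial"] -> True
```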
side_bias
¶
SideBias
¶
Bases: TrialWrapper
Changes the probability of ground truth.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prob | | Specifies the probability of each choice. Within each block, the probabilities should sum to 1 (numpy array of shape (n_block, n_choices); default None). | required |
block_dur | | Number of trials per block (int; default 200). | 200 |
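The block structure amounts to drawing each trial's ground truth from the current block's row of `prob` and advancing to the next row every `block_dur` trials. A stdlib sketch under those assumptions; `side_bias_trials` is a hypothetical helper, not the wrapper's code:

```python
import random


def side_bias_trials(prob, block_dur, num_trials, seed=0):
    """Sketch of SideBias: within each block of block_dur trials, draw the
    ground-truth choice from that block's row of prob (rows sum to 1)."""
    rng = random.Random(seed)
    choices, block = [], 0
    for trial in range(num_trials):
        if trial > 0 and trial % block_dur == 0:
            block = (block + 1) % len(prob)  # advance to the next block
        weights = prob[block]
        choices.append(rng.choices(range(len(weights)), weights=weights)[0])
    return choices


# two blocks of 200 trials: left-biased, then right-biased
gt = side_bias_trials([[0.8, 0.2], [0.2, 0.8]], block_dur=200, num_trials=400)
```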