core.module.Module Class Reference
Inheritance diagram for core.module.Module (graph omitted).


Public Member Functions

def __init__
def backward
def bind
def borrow_optimizer
def data_names
def data_shapes
def forward
def get_input_grads
def get_outputs
def get_params
def get_states
def init_optimizer
def init_params
def install_monitor
def label_names
def label_shapes
def load_optimizer_states
def output_names
def output_shapes
def reshape
def save_checkpoint
def save_optimizer_states
def set_params
def set_states
def update
def update_metric

Static Public Member Functions

def load

Public Attributes

 binded
 for_training
 inputs_need_grad
 optimizer_initialized
 params_initialized

Private Member Functions

def _reset_bind
def _sync_params_from_devices

Private Attributes

 _arg_params
 _aux_names
 _aux_params
 _context
 _data_names
 _data_shapes
 _exec_group
 _fixed_param_names
 _grad_req
 _kvstore
 _label_names
 _label_shapes
 _optimizer
 _output_names
 _param_names
 _params_dirty
 _preload_opt_states
 _state_names
 _symbol
 _update_on_kvstore
 _updater
 _work_load_list

Detailed Description

Module is a basic module that wraps a `Symbol`. It is functionally the same
as the `FeedForward` model, except that it uses the module API.

Parameters
----------
symbol : Symbol
data_names : list of str
    Default is `('data',)` for a typical model used in image classification.
label_names : list of str
    Default is `('softmax_label',)` for a typical model used in image
    classification.
logger : Logger
    Default is `logging`.
context : Context or list of Context
    Default is `cpu()`.
work_load_list : list of number
    Default `None`, indicating uniform workload.
fixed_param_names: list of str
    Default `None`, indicating no network parameters are fixed.
state_names : list of str
    States are similar to data and label, but are not provided by the data iterator.
    Instead they are initialized to 0 and can be set by `set_states()`.

Definition at line 30 of file module.py.
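
As a rough construction sketch, assuming `mxnet` is importable as `mx` and this class is importable as `core.module.Module`; the small network below is purely illustrative::
    >>> import mxnet as mx
    >>> from core.module import Module
    >>> # an illustrative network: one fully-connected layer plus softmax output
    >>> data = mx.sym.Variable('data')
    >>> fc = mx.sym.FullyConnected(data, num_hidden=10, name='fc')
    >>> net = mx.sym.SoftmaxOutput(fc, name='softmax')
    >>> # the data/label names below are simply the documented defaults
    >>> mod = Module(symbol=net, data_names=('data',),
    ...              label_names=('softmax_label',), context=mx.cpu())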


Constructor & Destructor Documentation

def core.module.Module.__init__ (   self,
  symbol,
  data_names = ('data',),
  label_names = ('softmax_label',),
  logger = logging,
  context = ctx.cpu(),
  work_load_list = None,
  fixed_param_names = None,
  state_names = None 
)

Definition at line 54 of file module.py.


Member Function Documentation

def core.module.Module._reset_bind (   self) [private]
Internal function to reset the bound state.

Definition at line 165 of file module.py.

def core.module.Module._sync_params_from_devices (   self) [private]
Synchronize parameters from devices to CPU. This function should be called after
calling `update`, which updates the parameters on the devices, before reading the
latest parameters from `self._arg_params` and `self._aux_params`.

Definition at line 665 of file module.py.

def core.module.Module.backward (   self,
  out_grads = None 
)
Backward computation.

Parameters
----------
out_grads : NDArray or list of NDArray, optional
    Gradient on the outputs to be propagated back.
    This parameter is only needed when bind is called
    on outputs that are not a loss function.

Definition at line 549 of file module.py.

def core.module.Module.bind (   self,
  data_shapes,
  label_shapes = None,
  for_training = True,
  inputs_need_grad = False,
  force_rebind = False,
  shared_module = None,
  grad_req = 'write' 
)
Bind the symbols to construct executors. This is necessary before one
can perform computation with the module.

Parameters
----------
data_shapes : list of (str, tuple)
    Typically is `data_iter.provide_data`.
label_shapes : list of (str, tuple)
    Typically is `data_iter.provide_label`.
for_training : bool
    Default is `True`. Whether the executors should be bound for training.
inputs_need_grad : bool
    Default is `False`. Whether the gradients to the input data need to be computed.
    Typically this is not needed, but it might be needed when implementing
    composition of modules.
force_rebind : bool
    Default is `False`. This function does nothing if the executors are already
    bound. But if this is `True`, the executors will be forced to rebind.
shared_module : Module
    Default is `None`. This is used in bucketing. When not `None`, the shared module
    essentially corresponds to a different bucket -- a module with different symbol
    but with the same sets of parameters (e.g. unrolled RNNs with different lengths).

Definition at line 328 of file module.py.
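
For example, binding the illustrative module from the construction sketch above for training, with a batch size of 32 and a 100-dimensional input (these shapes are assumptions for the example, not values prescribed by this class)::
    >>> mod.bind(data_shapes=[('data', (32, 100))],
    ...          label_shapes=[('softmax_label', (32,))],
    ...          for_training=True)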

def core.module.Module.borrow_optimizer (   self,
  shared_module 
)
Borrow optimizer from a shared module. Used in bucketing, where exactly the same
optimizer (esp. kvstore) is used.

Parameters
----------
shared_module : Module

Definition at line 521 of file module.py.

def core.module.Module.data_names (   self)
A list of names for data required by this module.

Definition at line 173 of file module.py.

def core.module.Module.data_shapes (   self)
Get data shapes.
Returns
-------
A list of `(name, shape)` pairs.

Definition at line 188 of file module.py.

def core.module.Module.forward (   self,
  data_batch,
  is_train = None 
)
Forward computation.

Parameters
----------
data_batch : DataBatch
    Could be anything with a similar API implemented.
is_train : bool
    Default is `None`, which means `is_train` takes the value of `self.for_training`.

Definition at line 536 of file module.py.
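
A sketch of a single forward step on a hand-built batch, assuming `mod` has been bound and had its parameters initialized as in the surrounding examples::
    >>> batch = mx.io.DataBatch(data=[mx.nd.ones((32, 100))],
    ...                         label=[mx.nd.zeros((32,))])
    >>> # is_train=True forces training mode; is_train=None would fall back to
    >>> # the for_training flag given at bind time
    >>> mod.forward(batch, is_train=True)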

def core.module.Module.get_input_grads (   self,
  merge_multi_context = True 
)
Get the gradients with respect to the inputs of the module.

Parameters
----------
merge_multi_context : bool
    Default is `True`. In the case when data-parallelism is used, the outputs
    will be collected from multiple devices. A `True` value indicates that we
    should merge the collected results so that they look as if they came from
    a single executor.

Returns
-------
If `merge_multi_context` is `True`, it is like `[grad1, grad2]`. Otherwise, it
is like `[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]`. All the output
elements are `NDArray`.

Definition at line 600 of file module.py.
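
A sketch, assuming a module `mod` constructed as in the earlier example but not yet bound, and a `batch` built as under `forward`; input gradients are only available when `inputs_need_grad=True` was passed to `bind`::
    >>> mod.bind(data_shapes=[('data', (32, 100))],
    ...          label_shapes=[('softmax_label', (32,))],
    ...          inputs_need_grad=True)
    >>> mod.init_params()
    >>> mod.forward(batch, is_train=True)
    >>> mod.backward()
    >>> data_grad = mod.get_input_grads()[0]   # gradient w.r.t. the 'data' input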

def core.module.Module.get_outputs (   self,
  merge_multi_context = True 
)
Get outputs of the previous forward computation.

Parameters
----------
merge_multi_context : bool
    Default is `True`. In the case when data-parallelism is used, the outputs
    will be collected from multiple devices. A `True` value indicates that we
    should merge the collected results so that they look as if they came from
    a single executor.

Returns
-------
If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it
is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output
elements are `NDArray`.

Definition at line 580 of file module.py.
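
For example, after a forward pass as sketched above::
    >>> outs = mod.get_outputs()                   # merged: one NDArray per output
    >>> per_dev = mod.get_outputs(merge_multi_context=False)   # nested per-device lists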

def core.module.Module.get_params (   self)
Get current parameters.
Returns
-------
`(arg_params, aux_params)`, each a dictionary of name to parameters (in
`NDArray`) mapping.

Definition at line 219 of file module.py.

def core.module.Module.get_states (   self,
  merge_multi_context = True 
)
Get states from all devices.

Parameters
----------
merge_multi_context : bool
    Default is `True`. In the case when data-parallelism is used, the states
    will be collected from multiple devices. A `True` value indicates that we
    should merge the collected results so that they look as if they came from
    a single executor.

Returns
-------
If `merge_multi_context` is `True`, it is like `[out1, out2]`. Otherwise, it
is like `[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]`. All the output
elements are `NDArray`.

Definition at line 620 of file module.py.

def core.module.Module.init_optimizer (   self,
  kvstore = 'local',
  optimizer = 'sgd',
  optimizer_params = (('learning_rate', 0.01),),
  force_init = False 
)
Install and initialize optimizers.

Parameters
----------
kvstore : str or KVStore
    Default `'local'`.
optimizer : str or Optimizer
    Default `'sgd'`.
optimizer_params : dict
    Default `(('learning_rate', 0.01),)`. The default value is not a dictionary,
    just to avoid a pylint warning about dangerous default values.
force_init : bool
    Default `False`, indicating whether we should force re-initializing the
    optimizer in the case an optimizer is already installed.

Definition at line 443 of file module.py.
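
A sketch of installing a plain SGD optimizer on a bound, initialized module; the learning rate below simply mirrors the documented default::
    >>> mod.init_optimizer(kvstore='local', optimizer='sgd',
    ...                    optimizer_params=(('learning_rate', 0.01),))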

def core.module.Module.init_params (   self,
  initializer = Uniform(0.01),
  arg_params = None,
  aux_params = None,
  allow_missing = False,
  force_init = False,
  allow_extra = False 
)
Initialize the parameters and auxiliary states.

Parameters
----------
initializer : Initializer
    Called to initialize parameters if needed.
arg_params : dict
    If not None, should be a dictionary of existing arg_params. Initialization
    will be copied from that.
aux_params : dict
    If not None, should be a dictionary of existing aux_params. Initialization
    will be copied from that.
allow_missing : bool
    If true, params could contain missing values, and the initializer will be
    called to fill those missing params.
force_init : bool
    If true, will force re-initialization even if already initialized.

Definition at line 232 of file module.py.
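
A sketch of the two common uses on a bound module `mod`: random initialization, and initializing from parameters loaded off an existing checkpoint (the `'mymodel'` prefix and epoch number are illustrative)::
    >>> mod.init_params(initializer=mx.init.Uniform(0.01))
    >>> # or copy values from a previously saved model
    >>> sym, arg_params, aux_params = mx.model.load_checkpoint('mymodel', 3)
    >>> mod.init_params(arg_params=arg_params, aux_params=aux_params,
    ...                 allow_missing=False, force_init=True)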

def core.module.Module.install_monitor (   self,
  mon 
)
Install a monitor on all executors.

Definition at line 704 of file module.py.

def core.module.Module.label_names (   self)
A list of names for labels required by this module.

Definition at line 178 of file module.py.

def core.module.Module.label_shapes (   self)
Get label shapes.
Returns
-------
A list of `(name, shape)` pairs. The return value could be `None` if
the module does not need labels, or if the module is not bound for
training (in this case, label information is not available).

Definition at line 198 of file module.py.

def core.module.Module.load (   prefix,
  epoch,
  load_optimizer_states = False,
  **kwargs 
) [static]
Create a model from previously saved checkpoint.

Parameters
----------
prefix : str
    path prefix of saved model files. You should have
    "prefix-symbol.json", "prefix-xxxx.params", and
    optionally "prefix-xxxx.states", where xxxx is the
    epoch number.
epoch : int
    epoch to load.
load_optimizer_states : bool
    whether to load optimizer states. Checkpoint needs
    to have been made with save_optimizer_states=True.
data_names : list of str
    Default is `('data',)` for a typical model used in image classification.
label_names : list of str
    Default is `('softmax_label',)` for a typical model used in image
    classification.
logger : Logger
    Default is `logging`.
context : Context or list of Context
    Default is `cpu()`.
work_load_list : list of number
    Default `None`, indicating uniform workload.
fixed_param_names: list of str
    Default `None`, indicating no network parameters are fixed.

Definition at line 105 of file module.py.
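
A sketch of restoring a module from a checkpoint; `'mymodel'` and epoch `3` are illustrative and require `mymodel-symbol.json`, `mymodel-0003.params` and (for optimizer states) `mymodel-0003.states` on disk::
    >>> mod = Module.load('mymodel', 3, load_optimizer_states=True,
    ...                   context=mx.cpu())
    >>> # the returned module still has to be bound (and have init_optimizer
    >>> # called, which picks up the preloaded states) before training resumes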

def core.module.Module.load_optimizer_states (   self,
  fname 
)
Load optimizer (updater) state from a file.

Parameters
----------
fname : str
    Path to input states file.

Definition at line 689 of file module.py.

def core.module.Module.output_names (   self)
A list of names for the outputs of this module.

Definition at line 183 of file module.py.

def core.module.Module.output_shapes (   self)
Get output shapes.
Returns
-------
A list of `(name, shape)` pairs.

Definition at line 210 of file module.py.

def core.module.Module.reshape (   self,
  data_shapes,
  label_shapes = None 
)
Reshape the module for new input shapes.

Parameters
----------
data_shapes : list of (str, tuple)
    Typically is `data_iter.provide_data`.
label_shapes : list of (str, tuple)
    Typically is `data_iter.provide_label`.

Definition at line 424 of file module.py.
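
For example, switching the bound module over to a batch size of 64 without rebinding (shapes continue the earlier illustrative example)::
    >>> mod.reshape(data_shapes=[('data', (64, 100))],
    ...             label_shapes=[('softmax_label', (64,))])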

def core.module.Module.save_checkpoint (   self,
  prefix,
  epoch,
  save_optimizer_states = False 
)
Save current progress to checkpoint.
Use mx.callback.module_checkpoint as epoch_end_callback to save during training.

Parameters
----------
prefix : str
    The file prefix to checkpoint to.
epoch : int
    The current epoch number.
save_optimizer_states : bool
    Whether to save optimizer states for continued training.

Definition at line 143 of file module.py.
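
A sketch with an illustrative prefix; per the file-name convention documented under `load`, this writes `mymodel-symbol.json`, `mymodel-0003.params` and, because optimizer states are requested, `mymodel-0003.states`::
    >>> mod.save_checkpoint('mymodel', 3, save_optimizer_states=True)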

def core.module.Module.save_optimizer_states (   self,
  fname 
)
Save optimizer (updater) state to a file.

Parameters
----------
fname : str
    Path to output states file.

Definition at line 673 of file module.py.

def core.module.Module.set_params (   self,
  arg_params,
  aux_params,
  allow_missing = False,
  force_init = True 
)
Assign parameter and aux state values.

Parameters
----------
arg_params : dict
    Dictionary of name to value (`NDArray`) mapping.
aux_params : dict
    Dictionary of name to value (`NDArray`) mapping.
allow_missing : bool
    If true, params could contain missing values, and the initializer will be
    called to fill those missing params.
force_init : bool
    If true, will force re-initialization even if already initialized.

Examples
--------
An example of setting module parameters::
    >>> sym, arg_params, aux_params = \
    >>>     mx.model.load_checkpoint(model_prefix, n_epoch_load)
    >>> mod.set_params(arg_params=arg_params, aux_params=aux_params)

Definition at line 290 of file module.py.

def core.module.Module.set_states (   self,
  states = None,
  value = None 
)
Set values for states. Only one of `states` and `value` can be specified.

Parameters
----------
states : list of list of NDArray
    Source state arrays, formatted like `[[state1_dev1, state1_dev2],
    [state2_dev1, state2_dev2]]`.
value : number
    A single scalar value used for all state arrays.

Definition at line 640 of file module.py.

def core.module.Module.update (   self)
Update parameters according to the installed optimizer and the gradients computed
in the previous forward-backward batch.

Definition at line 562 of file module.py.
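
Putting the pieces together, a sketch of one training step on a single batch, assuming the module has been bound and has had `init_params` and `init_optimizer` called, with `batch` built as under `forward`::
    >>> mod.forward(batch, is_train=True)   # compute outputs
    >>> mod.backward()                      # compute gradients
    >>> mod.update()                        # apply the installed optimizer to the gradients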

def core.module.Module.update_metric (   self,
  eval_metric,
  labels 
)
Evaluate and accumulate evaluation metric on outputs of the last forward computation.

Parameters
----------
eval_metric : EvalMetric
labels : list of NDArray
    Typically `data_batch.label`.

Definition at line 654 of file module.py.
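
For example, accumulating accuracy over the batch used in the forward sketch above::
    >>> metric = mx.metric.Accuracy()
    >>> mod.update_metric(metric, batch.label)
    >>> name, value = metric.get()          # e.g. ('accuracy', ...)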


Member Data Documentation

All of the public and private attributes listed above are initialized in the
constructor, at line 54 of module.py (with later assignments at lines 165, 249,
350 and 457 of module.py).


The documentation for this class was generated from the following file: module.py


rail_object_detector
autogenerated on Sat Jun 8 2019 20:26:31