LeabraConSpec connection-level specifications:
Each entry is listed as variable (default): description.

rnd: Controls the random initialization of the weights.
  .type (UNIFORM): Type of random distribution to use (e.g., UNIFORM, NORMAL (Gaussian)).
  .mean (.5): Mean of the random distribution (mean random weight value).
  .var (.25): Variance of the distribution (range around the mean for UNIFORM).
  .par (0): Second parameter for distributions like BINOMIAL and GAMMA that require it (not typically used).
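Under these defaults, UNIFORM initialization with var read as a range around the mean yields weights in roughly [.25, .75]. A minimal Python sketch of this behavior (the function name and the range interpretation are illustrative assumptions, not the simulator's API):

    import numpy as np

    def init_weights(shape, dist="UNIFORM", mean=0.5, var=0.25, rng=None):
        # Sketch of rnd weight initialization. For UNIFORM, var is read as a
        # half-range around the mean; for NORMAL, as a variance.
        rng = rng or np.random.default_rng()
        if dist == "UNIFORM":
            return rng.uniform(mean - var, mean + var, size=shape)  # defaults: [.25, .75]
        if dist == "NORMAL":
            return rng.normal(mean, np.sqrt(var), size=shape)
        raise ValueError(f"unsupported distribution: {dist}")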
wt_limits: Sets limits on the weight values; Leabra weights are constrained between 0 and 1 and are initialized to be symmetric.
  .type (MIN_MAX): Type of constraint (GT_MIN = greater than min, LT_MAX = less than max, MIN_MAX = within both min and max).
  .min (0): Minimum weight value (if GT_MIN or MIN_MAX).
  .max (1): Maximum weight value (if LT_MAX or MIN_MAX).
  .sym (true): Symmetrizes the weights (only done at initialization).
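The clipping part of these limits is simple to sketch; the symmetrization shown here (averaging a reciprocal weight matrix with its transpose) is one illustrative way to realize the constraint, not necessarily how the simulator does it:

    import numpy as np

    def apply_wt_limits(w, wtype="MIN_MAX", wmin=0.0, wmax=1.0):
        # Clip weights according to wt_limits.type.
        if wtype in ("GT_MIN", "MIN_MAX"):
            w = np.maximum(w, wmin)
        if wtype in ("LT_MAX", "MIN_MAX"):
            w = np.minimum(w, wmax)
        return w

    def symmetrize(w):
        # One way to make reciprocal weights symmetric at initialization.
        return (w + w.T) / 2.0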
inhib (false): Makes the connection inhibitory (net input goes to g_i instead of net).
wt_scale: Controls relative and absolute scaling of weights from different projections (see equation 2.17).
  .abs (1): Absolute scaling (s_k): directly multiplies the weight value.
  .rel (1): Relative scaling (r_k): effect is normalized by the sum of rel values across all incoming projections.
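The normalization means only the ratios of the rel values matter. A sketch of the effective per-projection scale, s_k * r_k / sum_p(r_p), with illustrative names:

    def projection_scales(abs_vals, rel_vals):
        # Effective scale for each projection k: abs_k * rel_k / sum(rel).
        rel_sum = sum(rel_vals)
        return [a * r / rel_sum for a, r in zip(abs_vals, rel_vals)]

    # Two projections with rel = [2, 1] contribute 2/3 and 1/3 of the net input
    # (times their abs factors), whatever the absolute rel magnitudes are.
    print(projection_scales([1.0, 1.0], [2.0, 1.0]))  # [0.666..., 0.333...]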
wt_sig: Parameters for the sigmoidal weight contrast enhancement function.
  .gain (6): Gain parameter: how sharp the contrast enhancement is (1 = linear function).
  .off (1.25): Offset parameter: for values > 1, how far above .5 the neutral point on the contrast enhancement curve sits (1 = neutral point at .5; values < 1 are not used; 2 is probably the maximum usable value).
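A common statement of this sigmoid, consistent with the gain and off descriptions above (assumed here rather than quoted from the text), is hat_w = 1 / (1 + (off * (1 - w) / w) ** gain); with gain = 1 and off = 1 it reduces exactly to hat_w = w:

    import numpy as np

    def wt_sig(w, gain=6.0, off=1.25):
        # Sigmoidal contrast enhancement of a linear weight w in (0, 1).
        w = np.clip(w, 1e-6, 1.0 - 1e-6)  # avoid division by zero at the limits
        return 1.0 / (1.0 + (off * (1.0 - w) / w) ** gain)

    # off = 1.25 puts the neutral point (hat_w = .5) at w = off/(1+off) ~ .556,
    # i.e., above .5 as the description states.
    print(wt_sig(np.array([0.3, 0.5, 0.556, 0.7])))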
lrate (.01): Learning rate (ε).
cur_lrate (.01): Current learning rate as affected by lrate_sched; note that this is only updated when the network is actually run (and only for ConSpecs that are actually used in the network).
lrate_sched: Schedule of the learning rate over training epochs. To use, create elements in the list and assign each start_ctr to the epoch at which its lrate (given by start_val) takes effect. These start_val lrates multiply the basic lrate, so use .1 for a cur_lrate of .001 if the basic lrate = .01.
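A minimal sketch of this stepwise schedule (the list-of-pairs representation is illustrative):

    def scheduled_lrate(epoch, base_lrate=0.01, sched=((0, 1.0), (50, 0.1))):
        # Each (start_ctr, start_val) entry takes effect at its epoch;
        # start_val multiplies the base lrate.
        mult = 1.0
        for start_ctr, start_val in sched:
            if epoch >= start_ctr:
                mult = start_val
        return base_lrate * mult

    print(scheduled_lrate(60))  # 0.001 once the epoch-50 entry is in effect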
lmix: Sets the mixture of Hebbian and error-driven learning.
  .hebb (.01): Amount of Hebbian learning: unless using pure Hebbian learning (1), values greater than .05 are usually too big. For large networks trained on many patterns, values as low as .00005 are still useful.
  .err (.99): Amount of error-driven learning: automatically set to 1 - hebb, so it cannot be set independently.
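The mixture is a convex combination of the two weight-change terms: dwt = lrate * (hebb * dwt_hebb + (1 - hebb) * dwt_err). The sketch below assumes the CPCA-style Hebbian term y+(x+ - w) and the CHL-style error-driven term x+y+ - x-y- defined elsewhere in the text; the point here is just the mixture itself:

    def leabra_dwt(x_plus, y_plus, x_minus, y_minus, w, lrate=0.01, hebb=0.01):
        # Mix Hebbian (model) and error-driven (task) learning; err = 1 - hebb.
        dwt_hebb = y_plus * (x_plus - w)                # Hebbian term (assumed form)
        dwt_err = x_plus * y_plus - x_minus * y_minus   # error-driven term (assumed form)
        return lrate * (hebb * dwt_hebb + (1.0 - hebb) * dwt_err)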