print "L1 (100.0) number of zeros weights: " +
str(sum(model_l1_100.weights.array == 0))
We can see from the results that, as we might expect, the number of zero feature weights in
the model's weight vector increases as greater levels of L1 regularization are applied:
L1 (1.0) number of zero weights: 4
L1 (10.0) number of zero weights: 20
L1 (100.0) number of zero weights: 55
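The three models above can be trained in a single loop. The following is a minimal sketch, assuming train_data is the RDD of LabeledPoint instances prepared earlier in the chapter, that the same iteration count and step size (10 and 0.1) are used as in the evaluation calls below, and that the model's weight vector exposes its values via the .array attribute, as in the preceding snippet:

import numpy as np
from pyspark.mllib.regression import LinearRegressionWithSGD

# Train one model per L1 regularization level and count the zero weights.
# Assumes train_data (an RDD of LabeledPoint) is already prepared, as in
# the earlier examples in this chapter.
for reg in [1.0, 10.0, 100.0]:
    model = LinearRegressionWithSGD.train(train_data, iterations=10,
        step=0.1, regParam=reg, regType='l1', intercept=False)
    # model.weights.array is assumed to be a NumPy array of the weights
    num_zeros = sum(model.weights.array == 0)
    print "L1 (" + str(reg) + ") number of zero weights: " + str(num_zeros)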
Intercept
The final parameter option for the linear model is whether or not to use an intercept. An
intercept is a constant term added to the model's prediction in addition to the weighted
features; it effectively accounts for the mean value of the target variable. If the data is
already centered or normalized, an intercept is not strictly necessary; however, it often
does not hurt to include one in any case.
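For reference, the evaluate helper used in the following snippet was introduced earlier in the chapter. A minimal sketch of such a helper is shown here, assuming squared_log_error is the per-record squared-log-error function from the earlier RMSLE calculation:

import numpy as np
from pyspark.mllib.regression import LinearRegressionWithSGD

# Sketch of the evaluate helper: train a model with the given settings and
# return the RMSLE on the test set. squared_log_error is assumed to be the
# helper defined earlier for the RMSLE computation.
def evaluate(train, test, iterations, step, regParam, regType, intercept):
    model = LinearRegressionWithSGD.train(train, iterations, step,
        regParam=regParam, regType=regType, intercept=intercept)
    tp = test.map(lambda p: (p.label, model.predict(p.features)))
    rmsle = np.sqrt(tp.map(lambda (t, p): squared_log_error(t, p)).mean())
    return rmsle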
We will evaluate the effect of adding an intercept term to the model here:
# Evaluate RMSLE with and without an intercept term
params = [False, True]
metrics = [evaluate(train_data, test_data, 10, 0.1, 1.0,
    'l2', param) for param in params]
print params
print metrics
# Bar plot of RMSLE for intercept=False versus intercept=True
bar(params, metrics, color='lightblue')
fig = matplotlib.pyplot.gcf()
We can see from the result and plot that adding the intercept term results in a very slight
increase in RMSLE:
[False, True]
[1.4900275345312988, 1.506469812020645]