// Configure the RL step size (beta) and the DP-specific parameters:
agentSettings.setBetaType(
    RecommAgentSettings.RL_STEP_SIZE_MEAN );
agentSettings.setBeta( 1.0 );
agentSettings.setControlGroupLearning( false );
agentSettings.setScaleCuDP( 1.1 );
agentSettings.setMinProbCountDP( 1 );
agentSettings.verifySettings();
// Get agent specification:
String agentName = "DPRecommAgent";
AgentSpecification agentSpecification =
    getAgentSpecification( agentName );
if( agentSpecification == null )
    throw new MiningException( "Can't find application " +
        agentName );
// Create agent object:
RecommAgent agent =
    (RecommAgent) agentSpecification.createAgentInstance();
// Create action-value function:
ActionValueFunction actionValueFct =
    new ActionValueFunction( recoEnv );
// Put it all together:
agent.setAgentSettings( agentSettings );
agent.setEnv( recoEnv );
agent.setQfunction( actionValueFct );
agent.setInnerAgent( createInnerAgent() );
agent.verify();
return agent;
}
After we have created the agent, we can assign initial values to its action-value
function. In practice, this means that after the recommendation engine has been
restarted, we load the previous rule base and continue the online learning from there.
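For illustration, restoring a persisted rule base into the agent's action-value function might look like the following sketch. Note that the names createAgent, loadRuleBase, RuleBase, getQfunction, and setInitialValues are assumptions made for this example and are not part of the API shown in this excerpt.

// Sketch only: loadRuleBase, RuleBase, getQfunction, and
// setInitialValues are assumed, illustrative names.
RecommAgent agent = createAgent();
RuleBase previousRules = loadRuleBase( "rulebase.dat" );
if( previousRules != null )
    // Seed the action-value function with the persisted values so
    // that online learning resumes where it stopped before restart:
    agent.getQfunction().setInitialValues( previousRules );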
Now we turn to the online learning and demonstrate the first three steps and the
last step of the first session 1 → 5* → 4 → ... → 6* and the transition to the next
session starting with product 6.
/**
 * Do the online learning.
 *
 * @param agent the recommendation agent
 * @throws MiningException
 */
private void onlineLearning(RecommAgent agent) throws MiningException {
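    // NOTE: The body below is a sketch only; the original excerpt
    // breaks off at this point. The calls recoEnv.startSession and
    // agent.learn are assumed, illustrative names, not confirmed API.
    // First session: 1 -> 5* -> 4 -> ... -> 6*
    recoEnv.startSession();
    agent.learn( 1 );   // step 1: user requests product 1
    agent.learn( 5 );   // step 2: recommendation 5 is accepted (5*)
    agent.learn( 4 );   // step 3: user moves on to product 4
    // ... remaining steps of the session ...
    agent.learn( 6 );   // last step: recommendation 6 is accepted (6*)
    // Transition to the next session, which starts with product 6:
    recoEnv.startSession();
    agent.learn( 6 );
}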