Understanding the Retraining Process

Our models are initially trained in batches, which is efficient for handling large datasets. Retraining with new data, however, often means a single game, and that presents a different challenge: a model originally trained on batches of games now has to learn from just one game at a time.
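For a rough sense of what that shift looks like in code, here is a minimal sketch. It assumes a Keras-style regression model and made-up feature and score arrays; none of the names, shapes, or values come from the actual product.

```python
import numpy as np
from tensorflow import keras

# Toy stand-in for a custom prediction model (hypothetical architecture).
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),  # e.g. a predicted point margin
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")

# Original training: many games at once, processed in batches.
many_games = np.random.rand(500, 4)          # made-up features for 500 games
many_margins = np.random.randn(500, 1) * 10  # made-up final margins
model.fit(many_games, many_margins, batch_size=32, epochs=5, verbose=0)

# Retraining: a single completed game, so the "batch" is just one example.
one_game = np.random.rand(1, 4)
one_margin = np.array([[7.0]])
model.fit(one_game, one_margin, batch_size=1, epochs=1, verbose=0)
```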

The Role of Variable Learning Rates

This is where variable learning rates come in. When retraining a model, you can now adjust the learning rate so that the new game data doesn't disproportionately influence the model. This adjustment is key to balancing the retention of previously learned patterns against adaptation to new information.
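Conceptually, the adjustment just scales the optimizer's step size before the single-game update runs. Continuing the hypothetical Keras sketch above (the 25% scale factor is only an example):

```python
# Retrain on one game at a reduced learning rate so it nudges, rather than
# overwrites, what the model has already learned.
original_lr = 0.001
retrain_scale = 0.25  # e.g. 25% of the original learning rate

model.optimizer.learning_rate.assign(original_lr * retrain_scale)
model.fit(one_game, one_margin, batch_size=1, epochs=1, verbose=0)

# Restore the original rate for any future full training runs.
model.optimizer.learning_rate.assign(original_lr)
```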

How to Use This Feature

When you're ready to retrain a model, you'll find a slider next to the retrain button. The slider lets you set the learning rate anywhere from 1% to 200% of its original value, giving you direct control over how much the new game data affects the model.
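Behind a slider like this, the setting typically just maps a percentage onto the model's stored learning rate. A hypothetical helper illustrating the 1%-200% range (the names are illustrative, not taken from the app):

```python
def effective_learning_rate(base_learning_rate: float, slider_percent: float) -> float:
    """Map a slider value of 1-200 (percent) onto the model's original learning rate."""
    if not 1 <= slider_percent <= 200:
        raise ValueError("Slider value must be between 1 and 200 percent.")
    return base_learning_rate * (slider_percent / 100.0)

# 100% keeps the original rate, 50% halves it, 200% doubles it.
print(effective_learning_rate(0.001, 100))  # 0.001
print(effective_learning_rate(0.001, 50))   # 0.0005
print(effective_learning_rate(0.001, 200))  # 0.002
```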

Key Considerations

Eligibility for Retraining: Only completed games with a final score can be used for retraining, and retraining is limited to the model that made the original prediction.

Custom Models: This feature is exclusively available for custom models. The default model cannot be retrained.

Retraining Visibility: The retrain option appears only if a custom model was used for the initial prediction.
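
Taken together, these rules amount to a simple guard that decides whether the retrain option is shown. A hypothetical sketch (the field names are illustrative, not the app's actual data model):

```python
def can_retrain(game: dict, model_used: dict) -> bool:
    """Rough eligibility check mirroring the rules above (hypothetical field names)."""
    return (
        game.get("status") == "completed"               # only finished games
        and game.get("final_score") is not None         # with a final score
        and model_used.get("type") == "custom"          # the default model cannot be retrained
        and game.get("predicted_with") == model_used.get("id")  # same model as the prediction
    )
```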

Practical Implications

Retraining with a single game, especially with an adjusted learning rate, allows for more precise model tuning. This process is particularly beneficial if the model's initial prediction was off. By retraining with the actual game data, the model can recalibrate and improve its predictive accuracy.
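As a rough before-and-after check, the earlier hypothetical snippets can be combined: predict, retrain on the actual result at a reduced learning rate, then predict again. This is only a sketch; whether the error shrinks depends on the model and the learning rate chosen.

```python
# Prediction error against the actual margin before the single-game retrain.
before = model.predict(one_game, verbose=0)

# Retrain on the actual result at 50% of the original learning rate.
model.optimizer.learning_rate.assign(effective_learning_rate(0.001, 50))
model.fit(one_game, one_margin, batch_size=1, epochs=1, verbose=0)

after = model.predict(one_game, verbose=0)
print("error before retrain:", abs(before[0, 0] - one_margin[0, 0]))
print("error after retrain: ", abs(after[0, 0] - one_margin[0, 0]))
```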