Implementing NEAT-Python to retrain after each prediction
I want to understand how to implement neat-python so that it retrains after each prediction it makes, so that the size of the training set grows after every prediction.
I am trying to set up neat-python through the configuration file so that it retrains after each prediction on the test/unseen set. Take the XOR "evolve-minimal" example: as I understand it, it can be adapted so that it trains on part of the data (up to a certain fitness level, producing the best genome) and then predicts on the remaining data that was set aside as a test set. See the code below for what I mean:
from __future__ import print_function
import neat
import visualize

# 2-input XOR inputs and expected outputs. Training set
xor_inputs = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
xor_outputs = [(1.0,), (1.0,), (1.0,), (0.0,), (0.0,)]

# Test set
xor_inputs2 = [(1.0, 0.0, 1.0), (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
xor_outputs2 = [(1.0,), (0.0,), (0.0,)]


def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        genome.fitness = 5
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo[0]) ** 2


# Load configuration.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     'config-feedforward')

# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)

# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
stats = neat.StatisticsReporter()
p.add_reporter(stats)

# Run until a solution is found.
winner = p.run(eval_genomes)

# Display the winning genome.
print('\nBest genome:\n{!s}'.format(winner))

# Show output of the most fit genome against training data.
print('\nOutput:')
winner_net = neat.nn.FeedForwardNetwork.create(winner, config)

count = 0
# To make predictions using the best genome
for xi, xo in zip(xor_inputs2, xor_outputs2):
    prediction = winner_net.activate(xi)
    print("  input {!r}, expected output {!r}, got {!r}".format(
        xi, xo[0], round(prediction[0])))
    # To get prediction accuracy
    if int(xo[0]) == int(round(prediction[0])):
        count = count + 1
accuracy = count / len(xor_outputs2)
print('\nAccuracy: ', accuracy)

node_names = {-1: 'A', -2: 'B', 0: 'A XOR B'}
visualize.draw_net(config, winner, True, node_names=node_names)
visualize.plot_stats(stats, ylog=False, view=True)
visualize.plot_species(stats, view=True)
The configuration file is:
#--- parameters for the XOR-2 experiment ---#
[NEAT]
fitness_criterion = max
fitness_threshold = 4.8
pop_size = 150
reset_on_extinction = True
[DefaultGenome]
# node activation options
activation_default = sigmoid
activation_mutate_rate = 0.0
activation_options = sigmoid
# node aggregation options
aggregation_default = sum
aggregation_mutate_rate = 0.0
aggregation_options = sum
# node bias options
bias_init_mean = 0.0
bias_init_stdev = 1.0
bias_max_value = 30.0
bias_min_value = -30.0
bias_mutate_power = 0.5
bias_mutate_rate = 0.7
bias_replace_rate = 0.1
# genome compatibility options
compatibility_disjoint_coefficient = 1.0
compatibility_weight_coefficient = 0.5
# connection add/remove rates
conn_add_prob = 0.5
conn_delete_prob = 0.5
# connection enable options
enabled_default = True
enabled_mutate_rate = 0.01
feed_forward = True
initial_connection = full_direct
# node add/remove rates
node_add_prob = 0.2
node_delete_prob = 0.2
# network parameters
num_hidden = 0
num_inputs = 3
num_outputs = 1
# node response options
response_init_mean = 1.0
response_init_stdev = 0.0
response_max_value = 30.0
response_min_value = -30.0
response_mutate_power = 0.0
response_mutate_rate = 0.0
response_replace_rate = 0.0
# connection weight options
weight_init_mean = 0.0
weight_init_stdev = 1.0
weight_max_value = 30
weight_min_value = -30
weight_mutate_power = 0.5
weight_mutate_rate = 0.8
weight_replace_rate = 0.1
[DefaultSpeciesSet]
compatibility_threshold = 3.0
[DefaultStagnation]
species_fitness_func = max
max_stagnation = 20
species_elitism = 2
[DefaultReproduction]
elitism = 2
survival_threshold = 0.2
However, the problem here is that no retraining takes place after each prediction on the test set. I believe the parameters in the configuration file are static and cannot be changed once the training process has started, which would be a problem if your fitness is based on the number of correct classifications on the training set (which is what I want to implement, very similar to the setup used here). So I would like to know whether a model that retrains can be achieved by adjusting the settings in the configuration file, or whether there is more to it than that?
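To make what I mean by a classification-based fitness concrete, it might look something like the sketch below (just an illustration of the idea, not something I have working; it reuses the xor_inputs/xor_outputs lists from the script above and thresholds the sigmoid output at 0.5):

# Sketch: fitness = number of training samples classified correctly.
def eval_genomes_accuracy(genomes, config):
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        correct = 0
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            if int(round(output[0])) == int(xo[0]):
                correct += 1
        genome.fitness = correct

With a fitness like this, fitness_threshold in the [NEAT] section would have to be expressed as a count of correct classifications (for example 5 for all five training samples) rather than 4.8.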
If I understand your request correctly, this cannot be done simply in the config_file.
The parameters defined in the config_file only change what happens when the model runs the data directly, making predictions without any retraining.
If you want the model to retrain after each prediction, you will have to implement that functionality in the eval_genomes and/or run functions. You could add another for loop inside the loop that iterates over each genome, to take each output and retrain the model. However, this will likely increase computation time significantly, because instead of just producing outputs you are running another set of training generations for each output.
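As a rough sketch of that idea (untested, reusing the names from your script, and assuming p, config, eval_genomes and the xor_* lists are already set up and the initial p.run call has produced winner), you could append each test sample to the training lists and call p.run again before the next prediction:

# Sketch: retrain on a growing training set after each test prediction.
winner_net = neat.nn.FeedForwardNetwork.create(winner, config)

for xi, xo in zip(xor_inputs2, xor_outputs2):
    prediction = winner_net.activate(xi)
    print("  input {!r}, expected output {!r}, got {!r}".format(
        xi, xo[0], round(prediction[0])))

    # Grow the training set with the sample that was just predicted...
    xor_inputs.append(xi)
    xor_outputs.append(xo)

    # ...and retrain before the next prediction. eval_genomes reads the
    # module-level xor_inputs/xor_outputs, so this run continues evolving
    # the same population against the enlarged training set.
    winner = p.run(eval_genomes)
    winner_net = neat.nn.FeedForwardNetwork.create(winner, config)

Each extra p.run call here runs more generations against the larger training set, which is exactly where the additional computation time comes from.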