Is there a way to implement a custom learning rate in Tensorflow Object Detection?
I am working on an object detection problem and I would like to use a cyclical learning rate. The problem is that this particular schedule does not exist in the protos of TensorFlow Object Detection. I would like to know whether I can modify the protos (or other files) to implement this new learning rate schedule.
I am using TensorFlow 1.14 and the latest version of the TensorFlow Object Detection repository.
I tried to modify the optimizer.proto and optimizer_pb2.py files. I am only showing the parts I modified.
optimizer.proto
// Configuration message for optimizer learning rate.
message LearningRate {
  oneof learning_rate {
    ConstantLearningRate constant_learning_rate = 1;
    ExponentialDecayLearningRate exponential_decay_learning_rate = 2;
    ManualStepLearningRate manual_step_learning_rate = 3;
    CosineDecayLearningRate cosine_decay_learning_rate = 4;
    CosineDecayRestartLearningRate cosine_decay_restart_learning_rate = 5; // Added
  }
}
...
// Added for test
message CosineDecayRestartLearningRate {
  optional uint32 total_steps = 1 [default = 400000];
}
optimizer_pb2.py
_LEARNINGRATE = _descriptor.Descriptor(
  ...
  full_name='object_detection.protos.LearningRate.cosine_decay_restart_learning_rate', index=4,
  number=5, type=11, cpp_type=10, label=1,
  has_default_value=False, default_value=None,
  message_type=None, enum_type=None, containing_type=None,
  is_extension=False, extension_scope=None,
  options=None),
  ...)
...
_COSINEDECAYRESTARTLEARNINGRATE = _descriptor.Descriptor(
  name='CosineDecayRestartLearningRate',
  full_name='object_detection.protos.CosineDecayRestartLearningRate',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='total_steps', full_name='object_detection.protos.CosineDecayLearningRate.total_steps', index=1,
      number=2, type=13, cpp_type=3, label=1,
      has_default_value=True, default_value=4000000,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  options=None,
  is_extendable=False,
  syntax='proto2',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=1671,
  serialized_end=1861,
)
_LEARNINGRATE.fields_by_name['cosine_decay_restart_learning_rate'].message_type = _COSINEDECAYRESTARTLEARNINGRATE
_LEARNINGRATE.oneofs_by_name['learning_rate'].fields.append(
    _LEARNINGRATE.fields_by_name['cosine_decay_restart_learning_rate'])
_LEARNINGRATE.fields_by_name['cosine_decay_restart_learning_rate'].containing_oneof = _LEARNINGRATE.oneofs_by_name['learning_rate']
DESCRIPTOR.message_types_by_name['CosineDecayRestartLearningRate'] = _COSINEDECAYRESTARTLEARNINGRATE
CosineDecayRestartLearningRate = _reflection.GeneratedProtocolMessageType('CosineDecayRestartLearningRate', (_message.Message,), dict(
    DESCRIPTOR = _COSINEDECAYRESTARTLEARNINGRATE,
    __module__ = 'object_detection.protos.optimizer_pb2'
    # @@protoc_insertion_point(class_scope:object_detection.protos.CosineDecayRestartLearningRate)
))
_sym_db.RegisterMessage(CosineDecayRestartLearningRate)
I did not expect this to work, since I never got as far as actually wiring in the code for the learning rate, and indeed it gives me an error.
  File "/home/renart/Tensorflow/models/research/object_detection/protos/optimizer_pb2.py", line 253, in <module>
    options=None),
  File "/home/renart/Tensorflow/venv-1.13/lib/python3.5/site-packages/google/protobuf/descriptor.py", line 534, in __new__
    return _message.default_pool.FindFieldByName(full_name)
KeyError: "Couldn't find field object_detection.protos.LearningRate.cosine_decay_restart_learning_rate"
Unfortunately the error comes from Protobuf rather than TensorFlow, so I still do not know where to look in order to implement the learning rate.
I don't think you should modify optimizer_pb2.py by hand. It is generated by the protoc command; the compilation steps are described here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md#protobuf-compilation
You should only modify optimizer.proto and recompile the .proto files following the instructions above.
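For reference, the recompilation step from the linked instructions is a single protoc invocation, run from the models/research directory (the path below assumes the standard repository layout):

```shell
# Run from tensorflow/models/research/; regenerates every *_pb2.py file,
# including optimizer_pb2.py, from the .proto definitions.
protoc object_detection/protos/*.proto --python_out=.
```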
To add a new learning rate schedule, you could consider adding it to this file: https://github.com/tensorflow/models/blob/master/research/object_detection/utils/learning_schedules.py, since that is where the official learning rate schedules live.
The learning rate configuration is handled in this file: https://github.com/tensorflow/models/blob/0b3a8abf095cb8866ca74c2e118c1894c0e6f947/research/object_detection/builders/optimizer_builder.py#L103. That is where you should add the call that builds your learning rate schedule.
@Fan Luo has already answered my question, but I will still write down the steps I took to get a working setup.
First, go to the protos/optimizer.proto file and add your learning rate, as in the first code box of my question. You will notice that I have since changed the name of my learning rate; don't be confused by that.
Then, modify the builders/optimizer_builder.py file to describe what the message from the proto should return:
if learning_rate_type == 'cyclical_learning_rate':
  config = learning_rate_config.cyclical_learning_rate
  learning_rate = learning_schedules.cyclical_learning_rate(
      tf.train.get_or_create_global_step())
Go to utils/learning_schedules.py and add the logic behind the new learning rate:
def cyclical_learning_rate(global_step):
  return tf.Variable(0.001, name='learning_rate')
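The stub above just returns a constant 0.001 to prove the wiring works. As a sketch, the triangular cyclical schedule from Leslie Smith's CLR paper can be prototyped in plain Python first; base_lr, max_lr and step_size are hypothetical parameters you would read from the proto message, and the arithmetic would then be rewritten with tf.floor, tf.abs and tf.maximum on global_step inside learning_schedules.py:

```python
import math

def triangular_lr(step, base_lr=1e-4, max_lr=1e-3, step_size=2000):
    # Triangular CLR: the rate climbs linearly from base_lr to max_lr over
    # step_size steps, then descends back, repeating every 2 * step_size steps.
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
```

At step 0 this yields base_lr, at step_size it peaks at max_lr, and at 2 * step_size it is back down to base_lr.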
Finally, compile your protos again and you are done. You can now use your learning rate like any other learning rate in TensorFlow Object Detection.
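Assuming a cyclical_learning_rate message was added to the LearningRate oneof in optimizer.proto as above, selecting it in a pipeline config then looks like any other schedule (the message and field names here follow this answer's sketch and may differ in your setup):

```
optimizer {
  momentum_optimizer {
    learning_rate {
      cyclical_learning_rate {
      }
    }
  }
}
```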