I am trying to train a DNNClassifier:
labels = ['BENIGN', 'Syn', 'UDPLag', 'UDP', 'LDAP', 'MSSQL', 'NetBIOS', 'WebDDoS']
# Build a DNN
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[30, 10],
    n_classes=len(labels),
    label_vocabulary=labels)
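feature_columns is not shown above; a minimal sketch of how it could be built for 20 numeric features (treating the column names of train_features as the feature keys is an assumption, not necessarily the exact code):

import tensorflow as tf

# Assumed: train_features is a DataFrame with 20 numeric columns.
feature_columns = [
    tf.feature_column.numeric_column(name) for name in train_features.columns
]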
def input_fn(features, labels, training=True, batch_size=32):
    '''
    An input function for training or evaluating
    '''
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    # Shuffle and repeat if you are in training mode.
    if training:
        dataset = dataset.shuffle(1000).repeat()
    return dataset.batch(batch_size)
# Train the model
classifier.train(
    input_fn=lambda: input_fn(train_features, train_label, training=True),
    steps=5000)
Training worked fine until I switched to a larger dataset:
train_features.shape
>>> (15891114, 20)
train_label.shape
>>> (15891114,)
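For scale, a rough estimate of the raw footprint of these arrays, assuming float64 features (the dtype-casting warning in the log below suggests float64 inputs):

import numpy as np

n_rows, n_cols = 15_891_114, 20
feature_bytes = n_rows * n_cols * np.dtype(np.float64).itemsize
print(feature_bytes / 1e9)  # ~2.5 GB for the feature matrix alone, before
                            # any copies made while building the Dataset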
I am using Google Colaboratory, and once training starts my session crashes because RAM usage exceeds the limit (12 GB of RAM):
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/adagrad.py:106: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
Only 1 GB of RAM was in use before training started, but RAM saturated quickly once training began.
I got it to work by feeding the model chunks of the dataframe for training/evaluation, as sketched below.
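The chunked version looks roughly like this (a minimal sketch: CSV_PATH, LABEL_NAME and the chunk size are placeholders, and chunk_input_fn mirrors input_fn above minus the repeat()):

import pandas as pd
import tensorflow as tf

CSV_PATH = 'train.csv'   # placeholder path to the full training CSV
LABEL_NAME = 'Label'     # placeholder name of the label column

def chunk_input_fn(features, labels, batch_size=32):
    # One finite pass over a single chunk, so no .repeat().
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    return dataset.shuffle(1000).batch(batch_size)

for chunk in pd.read_csv(CSV_PATH, chunksize=1_000_000):
    chunk_features = chunk.drop(columns=[LABEL_NAME])
    chunk_label = chunk[LABEL_NAME]
    # Each train() call resumes from the latest checkpoint in model_dir,
    # so the classifier keeps learning across chunks.
    classifier.train(
        input_fn=lambda: chunk_input_fn(chunk_features, chunk_label))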
Still, I don't understand why RAM saturates when I feed the Estimator the entire dataframe for training or evaluation.