How to use tensorflow_io's IODataset?

I'm trying to write a program that uses malicious pcap files as a dataset and predicts whether other pcap files contain malicious packets. After digging through the TensorFlow documentation I found TensorFlow I/O (`tensorflow_io`), but I don't know how to build a model from the dataset and use it to make predictions.

Here is my code:

%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
from tensorflow import keras

try:
  import tensorflow_io as tfio
  import tensorflow_datasets as tfds
except ImportError:
  !pip install tensorflow-io
  !pip install tensorflow-datasets

import tensorflow_io as tfio
import tensorflow_datasets as tfds

# print(tf.__version__)

dataset = tfio.IODataset.from_pcap("dataset.pcap")
print(dataset) # <PcapIODataset shapes: ((), ()), types: (tf.float64, tf.string)>

(Using Google Colab)

I've tried searching online for an answer but couldn't find one.

I downloaded two pcap files and concatenated them, then extracted packet_timestamp and packet_data. Preprocess packet_data according to your requirements. If you have labels to add, you can add them to the training dataset (in the model example below I created a dummy all-zeros label and added it as a column). If the labels live in a separate file, you can read them and zip them with the pcap data. `Model.fit` and `Model.evaluate` only need to be passed a dataset of (feature, label) pairs.

Below is an example of extracting packet_data. Perhaps you could modify it along the lines of: if packet_data is valid then label = valid, else label = malicious.

%tensorflow_version 2.x
import tensorflow as tf
import tensorflow_io as tfio 
import numpy as np

# Create an IODataset from a pcap file  
first_file = tfio.IODataset.from_pcap('/content/fuzz-2006-06-26-2594.pcap')
second_file = tfio.IODataset.from_pcap('/content/fuzz-2006-08-27-19853.pcap')

# Concatenate the Read Files
feature = first_file.concatenate(second_file)
# List for pcap 
packet_timestamp_list = []
packet_data_list = []

# some dummy labels
labels = []

packets_total = 0
for v in feature:
    (packet_timestamp, packet_data) = v
    packet_timestamp_list.append(packet_timestamp.numpy())
    packet_data_list.append(packet_data.numpy())
    labels.append(0)
    if packets_total == 0:
        assert np.isclose(
            packet_timestamp.numpy(), 1084443427.311224, rtol=1e-15
        )  # we know this is the correct value in the test pcap file
        assert (
            len(packet_data.numpy()) == 62
        )  # we know this is the correct packet data buffer length in the test pcap file
    packets_total += 1
assert (
    packets_total == 43
)  # we know this is the correct number of packets in the test pcap file
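One way to do the preprocessing mentioned above is to truncate or zero-pad each packet's raw bytes to a fixed length and scale them to numeric features. This is just a sketch; `MAX_LEN` and `packet_bytes_to_features` are made-up names, not part of tensorflow_io, and the right feature length depends on your own data:

```python
import numpy as np

MAX_LEN = 64  # assumed fixed feature length; choose based on your packets

def packet_bytes_to_features(packet_bytes, max_len=MAX_LEN):
    """Truncate/zero-pad raw packet bytes to max_len and scale to [0, 1]."""
    arr = np.frombuffer(packet_bytes[:max_len], dtype=np.uint8)
    padded = np.zeros(max_len, dtype=np.float32)
    padded[: len(arr)] = arr
    return padded / 255.0

# Example with a dummy 10-byte "packet"
features = packet_bytes_to_features(b"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09")
print(features.shape)  # (64,)
```

You could apply this to every element of `packet_data_list` and feed the resulting fixed-shape float arrays into `tf.data.Dataset.from_tensor_slices` instead of the raw byte strings.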

Below is an example of using it in a model. The model will not work as-is, because I have not preprocessed the string-typed packet_data. Do the preprocessing according to your requirements and then use it in the model.

%tensorflow_version 2.x
import tensorflow as tf
import tensorflow_io as tfio 
import numpy as np

# Create an IODataset from a pcap file  
first_file = tfio.IODataset.from_pcap('/content/fuzz-2006-06-26-2594.pcap')
second_file = tfio.IODataset.from_pcap('/content/fuzz-2006-08-27-19853.pcap')

# Concatenate the Read Files
feature = first_file.concatenate(second_file)

# List for pcap 
packet_timestamp = []
packet_data = []

# some dummy labels
labels = []

# add 0 as label. You can use your actual labels here
for v in feature:
  (timestamp, data) = v
  packet_timestamp.append(timestamp.numpy())
  packet_data.append(data.numpy())
  labels.append(0)

## Do the preprocessing of packet_data here

# Add labels to the training data
# Preprocess the packet_data to convert string to meaningful value and use here
train_ds = tf.data.Dataset.from_tensor_slices(((packet_timestamp, packet_data), labels))
# Set the batch size
train_ds = train_ds.shuffle(5000).batch(32)

##### PROGRAM WILL RUN SUCCESSFULLY TILL HERE. TO USE IN THE MODEL DO THE PREPROCESSING OF PACKET DATA AS EXPLAINED ### 

# Define a simple example model
model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(100),
  tf.keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), 
              metrics=['accuracy'])

model.fit(train_ds, epochs=2)
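As for the "if packet_data is valid then label = valid, else malicious" idea: here is a minimal, purely illustrative sketch of how labels could be derived inside the loop above. The signature bytes and the `label_packet` helper are hypothetical placeholders; real labels should come from your ground-truth data:

```python
# Hypothetical signature-based labeling: 0 = valid, 1 = malicious.
# The signature below is a made-up example, not a real malware indicator.
MALICIOUS_SIGNATURES = [b"\xde\xad\xbe\xef"]

def label_packet(packet_bytes):
    """Return 1 if any known signature appears in the packet, else 0."""
    return int(any(sig in packet_bytes for sig in MALICIOUS_SIGNATURES))

print(label_packet(b"\x00\x11\xde\xad\xbe\xef\x22"))  # 1
print(label_packet(b"\x00\x11\x22"))                  # 0
```

In the loop you would then call `labels.append(label_packet(data.numpy()))` instead of appending a constant 0.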

Hope this answers your question. Happy learning.