Quantization-aware training: ValueError: `to_quantize` can only be a Keras Sequential or Functional model


I am trying to test quantization-aware training for TensorFlow Lite. The source code below creates an AI model (variable: model) trained on the MNIST dataset (only 1 epoch, for testing purposes), and I save the model to a file (model.h5). When I use the quantize_model method to create a quantization-aware model named q_aware_model directly from the model variable, it works. However, when I load the model stored in the model.h5 file and try to create a quantization-aware model named q_aware_model1, it fails with this error:

q_aware_model1 = quantize_model1(model1)
ValueError: `to_quantize` can only be a keras Sequential or Functional model.

I don't understand why. What is the problem? Thanks, eddy33.

My environment: Spyder 5.5.6, Python 3.12.6 (64-bit), tensorflow 2.17.0, tensorflow-model-optimization 0.8.0, keras 3.5.0.

My code:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras.datasets import mnist

import tf_keras as keras

(x_train, y_train), (x_test, y_test) = mnist.load_data()


x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

num_classes = 10

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

from keras.models import Sequential
from keras import models, layers
from keras import regularizers

model = keras.Sequential([
    keras.layers.Dropout(0.2, input_shape=(784,)),
    keras.layers.Dense(1000, kernel_regularizer=regularizers.l2(0.01), activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1000, kernel_regularizer=regularizers.l2(0.01), activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax')
])

#model.summary()

model.compile(loss=keras.losses.categorical_crossentropy, 
          optimizer='adam', 
          metrics=['accuracy'])

hist = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=1, # just 1
                    verbose=1,
                    validation_data=(x_test,y_test))

score = model.evaluate(x_test, y_test, verbose=1)
print("Test loss {:.4f}, accuracy {:.2f}%".format(score[0], score[1] * 100))

print("Saved model.h5 to disk")
model.save("model.h5")


# Quantization Aware Training
import tensorflow_model_optimization as tfmot

print("\n\n\nDirect QAT")
quantize_model = tfmot.quantization.keras.quantize_model

q_aware_model = quantize_model(model)

q_aware_model.compile(loss=keras.losses.categorical_crossentropy, 
          optimizer='adam', 
          metrics=['accuracy'])

q_aware_model.summary()


print("\n\n\nQAT from loading model.h5")
model1 = tf.keras.models.load_model('model.h5')
quantize_model1 = tfmot.quantization.keras.quantize_model

q_aware_model1 = quantize_model1(model1)

q_aware_model1.compile(loss=keras.losses.categorical_crossentropy, 
          optimizer='adam', 
          metrics=['accuracy'])

q_aware_model1.summary()
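
For reference while debugging, a quick way to see which Keras implementation each model object actually belongs to (a minimal diagnostic sketch, not part of the training script; run it just before the failing quantize_model1(model1) call):

# Diagnostic only: print the module and class name of each model object.
# tensorflow-model-optimization expects tf_keras (Keras 2) models, so a
# keras 3 class showing up here would likely explain the ValueError.
print(type(model).__module__, type(model).__name__)
print(type(model1).__module__, type(model1).__name__)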
Tags: tensorflow-lite, quantization-aware-training
1 Answer

Thanks to this post (https://github.com/tensorflow/model-optimization/issues/426), I solved my problem by replacing:

model1 = tf.keras.models.load_model('model.h5')

with:

with tfmot.quantization.keras.quantize_scope():                                 
    model1 = tf.keras.models.load_model('model.h5')
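
As a side note, quantize_scope also accepts dictionaries of additional custom objects, in case the saved model contains your own layers as well (a minimal sketch; MyCustomLayer is a placeholder for such a class):

# Hypothetical example: register your own layer class alongside the
# tfmot quantization objects while deserializing the saved model.
with tfmot.quantization.keras.quantize_scope({'MyCustomLayer': MyCustomLayer}):
    model1 = tf.keras.models.load_model('model.h5')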

I hope this helps! eddy33

My new code:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras.datasets import mnist

import tf_keras as keras

(x_train, y_train), (x_test, y_test) = mnist.load_data()


x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
num_classes = 10

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

from keras.models import Sequential
from keras import models, layers
from keras import regularizers

model = keras.Sequential([
    keras.layers.Dropout(0.2, input_shape=(784,)),
    keras.layers.Dense(1000, kernel_regularizer=regularizers.l2(0.01), activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1000, kernel_regularizer=regularizers.l2(0.01), activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax')
])
    
#model.summary()

model.compile(loss=keras.losses.categorical_crossentropy, 
              optimizer='adam', 
              metrics=['accuracy'])

hist = model.fit(x_train, y_train,
                        batch_size=128,
                        epochs=1, # just 1
                        verbose=1,
                        validation_data=(x_test,y_test))

score = model.evaluate(x_test, y_test, verbose=1)
print("Test loss {:.4f}, accuracy {:.2f}%".format(score[0], score[1] * 100))

print("Saved model.h5 to disk")
model.save("model.h5")


# Quantization Aware Training
import tensorflow_model_optimization as tfmot

print("\n\n\nDirect QAT")
quantize_model = tfmot.quantization.keras.quantize_model

q_aware_model = quantize_model(model)

q_aware_model.compile(loss=keras.losses.categorical_crossentropy, 
              optimizer='adam', 
              metrics=['accuracy'])

q_aware_model.summary()


print("\n\n\nQAT from loading model.h5")    
with tfmot.quantization.keras.quantize_scope():                                 
    model1 = tf.keras.models.load_model('model.h5')  
quantize_model1 = tfmot.quantization.keras.quantize_model

q_aware_model1 = quantize_model1(model1)

q_aware_model1.compile(loss=keras.losses.categorical_crossentropy, 
              optimizer='adam', 
              metrics=['accuracy'])

q_aware_model1.summary()
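
Since the end goal is TensorFlow Lite, the usual next step after (optionally) fine-tuning q_aware_model1 is to convert it with the standard TFLiteConverter API; a minimal sketch (the output filename is illustrative):

# Convert the quantization-aware model to a quantized TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model1)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('model_qat.tflite', 'wb') as f:  # illustrative filename
    f.write(tflite_model)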