**Important**: please also add the tag for the language you are working in. TensorFlow supports more than one language. TensorFlow is an open-source library for machine learning and machine intelligence. It was developed by Google and was open-sourced in November 2015.
Why does a Keras model with multiple inputs accept the shape of the training data in .call() but not in .evaluate()?
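For context, a Keras model with multiple inputs expects the same nested structure of arrays (a list or dict matching its declared inputs) whether the model is called directly or passed to .evaluate(). Below is a minimal, hypothetical sketch of that structure; the layer names and shapes are made up and are not taken from the question.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical two-input model, only to illustrate the expected input structure.
in_a = keras.Input(shape=(8,), name="a")
in_b = keras.Input(shape=(4,), name="b")
x = layers.Concatenate()([in_a, in_b])
out = layers.Dense(1)(x)
model = keras.Model([in_a, in_b], out)
model.compile(optimizer="adam", loss="mse")

a = np.random.rand(32, 8).astype("float32")
b = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

# Calling the model directly takes a list (or dict) of arrays ...
_ = model([a, b])
# ... and .evaluate() expects the same nesting, paired with the targets.
model.evaluate([a, b], y, verbose=0)
```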
I am currently investigating the effect of masking attention scores in the multi-head attention layers of a transformer model for classifying time-series data. I have built a model that accepts time ...
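For reference, Keras's built-in layers.MultiHeadAttention accepts an attention_mask of shape (batch, target_len, source_len) and can return the per-head attention scores, which is one way to inspect the effect of masking. The sketch below is a standalone illustration with made-up shapes (a causal mask on random data), not the asker's model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical dimensions: 8 sequences of 50 time steps with 20 features each.
batch, seq_len, n_features = 8, 50, 20
x = tf.random.normal((batch, seq_len, n_features))

# Boolean mask of shape (batch, target_len, source_len); True = attend, False = blocked.
# A lower-triangular (causal) mask is used purely to show where the mask plugs in.
mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))[np.newaxis, ...]
mask = np.repeat(mask, batch, axis=0)

mha = layers.MultiHeadAttention(num_heads=4, key_dim=16)
out, scores = mha(query=x, value=x, attention_mask=mask,
                  return_attention_scores=True)

print(out.shape)     # (8, 50, 20): output projected back to the query feature size
print(scores.shape)  # (8, 4, 50, 50): per-head attention weights after masking
```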
Given that your base model looks something like this:

    input_layer = layers.Input(shape=(50, 20))
    layer = layers.Dense(123, activation='relu')
    layer = layers.LSTM(128, return_sequences ...
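The snippet above is truncated, so the following is only a hedged reconstruction of the kind of model it describes; everything after the LSTM line (the return_sequences value, the output head) is an assumption added for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

input_layer = keras.Input(shape=(50, 20))
x = layers.Dense(123, activation='relu')(input_layer)  # applied per time step -> (50, 123)
x = layers.LSTM(128, return_sequences=False)(x)        # collapses the time axis -> (128,)
output_layer = layers.Dense(1)(x)                       # assumed head, not in the original
model = keras.Model(input_layer, output_layer)
model.summary()
```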
Thu Feb 13 03:13:38 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1650 ...    Off |   00000000:02:00.0 Off |                  N/A |
| N/A   51C    P8              5W /  35W  |     227MiB /   4096MiB |     11%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|=========================================================================================|
|    0   N/A  N/A      2370      G   /usr/lib/xorg/Xorg                            79MiB |
|    0   N/A  N/A      2745      G   /usr/bin/gnome-shell                         122MiB |
|    0   N/A  N/A      3442      G   /usr/bin/nautilus                             22MiB |
+-----------------------------------------------------------------------------------------+

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
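If the point of posting this output is that TensorFlow does not see the GTX 1650, a quick sanity check from Python is to compare the devices TensorFlow can actually use against the CUDA/cuDNN versions the installed wheel was built with; this is a generic check, not specific to the driver shown above.

```python
import tensorflow as tf

# An empty list here means the GPU is not usable by this TensorFlow build,
# regardless of what nvidia-smi reports.
print(tf.config.list_physical_devices("GPU"))

# CUDA/cuDNN versions the installed TensorFlow wheel was compiled against
# (only present in GPU builds), to compare with the driver's CUDA version.
build = tf.sysconfig.get_build_info()
print(build.get("cuda_version"), build.get("cudnn_version"))
```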
import random
from collections import deque

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers


def build_model():
    # Simple MLP mapping a 400-dimensional state to Q-values for 3 actions.
    model = keras.Sequential([
        layers.Input(shape=(400,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(3)
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="huber")
    return model


class DQNAgent:
    def __init__(self):
        self.model = build_model()
        self.target_model = build_model()
        self.target_model.set_weights(self.model.get_weights())
        self.memory = deque(maxlen=1000)
        self.epsilon = 1.0
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.995
        self.gamma = 0.95
        self.batch_size = 32

    def choose_action(self, state):
        # Epsilon-greedy action selection.
        if np.random.rand() < self.epsilon:
            return random.choice([0, 1, 2])
        q_values = self.model.predict(np.array([state]), verbose=0)
        return np.argmax(q_values[0])

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def train(self):
        if len(self.memory) < self.batch_size:
            return
        batch = random.sample(self.memory, self.batch_size)
        states, targets = [], []
        for state, action, reward, next_state, done in batch:
            target = reward
            if not done:
                # Bootstrap the target from the (frozen) target network.
                target += self.gamma * np.max(
                    self.target_model.predict(np.array([next_state]), verbose=0))
            q_values = self.model.predict(np.array([state]), verbose=0)
            q_values[0][action] = target
            states.append(state)
            targets.append(q_values[0])
        self.model.fit(np.array(states), np.array(targets), epochs=1, verbose=0)
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

    def update_target_model(self):
        self.target_model.set_weights(self.model.get_weights())
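For context, a typical driver loop for an agent like this follows the standard DQN pattern: act, store the transition, train on a replay sample, and periodically sync the target network. The sketch below uses a made-up placeholder environment so it runs end to end; the real environment is not described in the post.

```python
import numpy as np

class DummyEnv:
    """Hypothetical stand-in with a 400-dim observation and 3 actions."""
    def reset(self):
        self.t = 0
        return np.random.rand(400)

    def step(self, action):
        self.t += 1
        next_state = np.random.rand(400)
        reward = float(np.random.rand())
        done = self.t >= 50
        return next_state, reward, done

env = DummyEnv()
agent = DQNAgent()
for episode in range(10):
    state = env.reset()
    done = False
    while not done:
        action = agent.choose_action(state)
        next_state, reward, done = env.step(action)
        agent.remember(state, action, reward, next_state, done)
        agent.train()
        state = next_state
    agent.update_target_model()  # sync the target network once per episode
```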
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, GlobalAveragePooling2D, Flatten, Dense, Dropout

inceptionv3_model = InceptionV3(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
inceptionv3_model.trainable = False

model = Sequential()
model.add(inceptionv3_model)
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(GlobalAveragePooling2D())
# flatten
model.add(Flatten())
# hidden layer
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.2))
# output layer
model.add(Dense(3, activation='softmax'))
This is the code I have so far, but I cannot get it to work. I get errors such as: ValueError: an object of type 'KerasTensor' was passed ...
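One common way to avoid KerasTensor errors when stacking new layers on a pre-trained application model is to build the head with the functional API instead of nesting the whole InceptionV3 model inside a Sequential. The sketch below mirrors the head sizes from the code above (and drops the redundant Flatten after global average pooling); it is a possible workaround under that assumption, not a confirmed diagnosis of the asker's exact error.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)                  # frozen backbone in inference mode
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)            # output is already flat after pooling
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(3, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.summary()
```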
NVIDIA L4T kernel 32.6.1-20210726122000 (which I believe corresponds to JetPack 4.6), CUDA 10.2, cuDNN 8.2.1
I tried exporting a YOLOv11 model to TensorFlow, and it reported: 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape (1, 84, 8400) (5.4 MB). Now I have this model summary in Keras 3: Model: ...
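For reference, the Ultralytics Python API can export a .pt checkpoint to a TensorFlow SavedModel, which can then be loaded from TensorFlow; a minimal sketch, assuming the ultralytics package is installed and yolo11n.pt is available locally (the output directory name follows Ultralytics' usual naming and may differ):

```python
import tensorflow as tf
from ultralytics import YOLO

# Export the PyTorch checkpoint to a TensorFlow SavedModel.
model = YOLO("yolo11n.pt")
model.export(format="saved_model")  # typically writes a yolo11n_saved_model/ directory

# Load the exported SavedModel back from TensorFlow for inference.
tf_model = tf.saved_model.load("yolo11n_saved_model")
print(tf_model.signatures)
```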