How to load a dataset into arrays

Problem description · Votes: 0 · Answers: 1

I've been through all the tutorials and searched for "load csv tensorflow", but I just can't get the logic of it. I'm not a beginner, but I don't have much time to finish this, and I was suddenly thrown into TensorFlow, which is unexpectedly difficult.

Let me spell it out:

Very simple CSV file with 184 columns, all floats. A row is just today's price, three buy signals, and the prices from the previous 180 days:

close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name='previous')

This page, https://www.tensorflow.org/guide/datasets, covers loading fairly well. It even has a section on converting to numpy arrays, which is what I need for training and testing the network. But, as the author admits in the article that leads to that page, it is quite complicated. Everything seems geared toward manipulating the data, and our data is already normalized (in terms of inputs, outputs, and layers, AI hasn't really changed since 1983).

Here is one way to load it, but it isn't NumPy, and none of the examples avoid manipulating the data.

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1

What I need to know is how to get the csv file into

close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name='previous')

so that I can follow the tutorials to train and test the network.

csv numpy tensorflow artificial-intelligence
1 Answer
1 vote

Your question isn't all that clear. Correct me if I'm wrong, but I take it to be: how do you feed data into your model? There are several ways to do it:

  1. Using placeholders with feed_dict during the session. This is the basic, easier way, but it often suffers from training performance issues. For further explanation, check this post.
  2. Using queues. I don't recommend them: they are hard to implement and badly documented, and they have been superseded by the third method.
  3. The tf.data API.

...
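As an aside: since all 184 columns are numeric, the whole file can also be pulled into a NumPy array in a single call. A minimal sketch, assuming the file has no header row:

import numpy as np

dataset = np.loadtxt('/BTC1.csv', delimiter=',', dtype=np.float32)  # shape: (n_rows, 184)

This produces the same array as the csv.reader loop below, in one line.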

So, to answer your question with the first method:

import csv
import numpy as np
import tensorflow as tf

# get your array outside the session
with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)  # csv.reader yields strings, so convert
    close_col = dataset[:, 0]                    # column 0: today's price
    signal_cols = dataset[:, 1:4].astype(bool)   # columns 1-3: the three buy signals (the placeholder is bool)
    previous_cols = dataset[:, 4:]               # columns 4-183: the previous 180 days' prices

# let's say you load 100 rows each time for training
batch_size = 100

# define your placeholders as in the question (see the note on batch dimensions below)
...

with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
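One detail worth flagging: to feed 100-row slices like this, the placeholders from the question need a leading batch dimension, or the shapes won't match. A minimal sketch of the adjusted definitions:

close = tf.placeholder(tf.float32, shape=[None], name='close')
signals = tf.placeholder(tf.bool, shape=[None, 3], name='signals')
previous = tf.placeholder(tf.float32, shape=[None, 180], name='previous')

With shape=[None, ...], the first dimension accepts any batch size, so the same graph works for training batches of 100 and for single-row predictions.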

And with the third method:

# retrieve your columns like before
...

# let's say you load 100 rows each time for training
batch_size = 100

# construct your input pipeline
c_col, s_col, p_col = wrapper(filename)  # wrapper stands for the CSV-loading code above
batch = tf.data.Dataset.from_tensor_slices((c_col, s_col, p_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size).prefetch(1)  # mix rows --> assemble batches --> prefetch so loading overlaps training
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next()  # fetching these tensors pulls the next batch automatically at each iteration within the session

# when you define your model, replace the close, signals, previous placeholders with c_it, s_it, p_it
...

with tf.Session() as sess:
    # you need to initialize both the variables and the iterator
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)  # no manual slicing: the iterator delivers the next batch by itself
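If you'd rather skip NumPy entirely, tf.data can also parse the CSV itself. A sketch, assuming TensorFlow >= 1.13, where tf.data.experimental.CsvDataset is available (the shuffle buffer of 10000 is an arbitrary choice):

record_defaults = [tf.float32] * 184                        # every column is a float
dataset = tf.data.experimental.CsvDataset('/BTC1.csv', record_defaults)

def split_row(*cols):
    # CsvDataset yields one scalar tensor per column; regroup them into the three inputs
    close = cols[0]
    signals = tf.stack(cols[1:4])
    previous = tf.stack(cols[4:])
    return close, signals, previous

dataset = dataset.map(split_row).shuffle(10000).batch(batch_size)

From here you build the iterator exactly as above.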

Good luck!
