I tried to run an example from torch7, only to run into this error:
sandesh@sandesh-H87M-D3H:~/Downloads/tutorials-master/2_supervised$ luajit doall.lua
==> processing options
==> executing all
==> downloading dataset
==> using regular, full training data
==> loading dataset
==> preprocessing data
==> preprocessing data: colorspace RGB -> YUV
==> preprocessing data: normalize each feature (channel) globally
==> preprocessing data: normalize all three channels locally
==> verify statistics
training data, y-channel, mean: 0.00067706172257129
training data, y-channel, standard deviation: 0.39473240322794
test data, y-channel, mean: -0.0010822884348063
test data, y-channel, standard deviation: 0.38091408093043
training data, u-channel, mean: -0.0048219975630079
training data, u-channel, standard deviation: 0.29768662619471
test data, u-channel, mean: -0.0030795217110624
test data, u-channel, standard deviation: 0.22289780235542
training data, v-channel, mean: 0.0036312269637064
training data, v-channel, standard deviation: 0.25405592463897
test data, v-channel, mean: 0.0033847450016769
test data, v-channel, standard deviation: 0.20362829592977
==> visualizing data
==> define parameters
==> construct model
==> here is the model:
nn.Sequential {
  [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> output]
  (1): nn.SpatialConvolutionMM(3 -> 64, 5x5)
  (2): nn.Tanh
  (3): nn.Sequential {
    [input -> (1) -> (2) -> (3) -> (4) -> output]
    (1): nn.Square
    (2): nn.SpatialAveragePooling(2,2,2,2)
    (3): nn.MulConstant
    (4): nn.Sqrt
  }
  (4): nn.SpatialSubtractiveNormalization
  (5): nn.SpatialConvolutionMM(64 -> 64, 5x5)
  (6): nn.Tanh
  (7): nn.Sequential {
    [input -> (1) -> (2) -> (3) -> (4) -> output]
    (1): nn.Square
    (2): nn.SpatialAveragePooling(2,2,2,2)
    (3): nn.MulConstant
    (4): nn.Sqrt
  }
  (8): nn.SpatialSubtractiveNormalization
  (9): nn.Reshape(1600)
  (10): nn.Linear(1600 -> 128)
  (11): nn.Tanh
  (12): nn.Linear(128 -> 10)
}
==> define loss
==> here is the loss function: nn.ClassNLLCriterion
==> defining some tools
luajit: /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: attempt to index local 'f' (a nil value)
stack traceback:
  /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: in function 'execute'
  /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:71: in function 'uname'
  /home/sandesh/torch/install/share/lua/5.1/optim/Logger.lua:38: in function '__init'
  /home/sandesh/torch/install/share/lua/5.1/torch/init.lua:91: in function
  [C]: in function 'Logger'
  4_train.lua:60: in main chunk
  [C]: in function 'dofile'
  doall.lua:70: in main chunk
  [C]: at 0x00406670
I have not changed any code in any of the lua files... Here is the 4_train.lua file:
----------------------------------------------------------------------
-- This script demonstrates how to define a training procedure,
-- irrespective of the model/loss functions chosen.
--
-- It shows how to:
--   + construct mini-batches on the fly
--   + define a closure to estimate (a noisy) loss
--     function, as well as its derivatives wrt the parameters of the
--     model to be trained
--   + optimize the function, according to several optimization
--     methods: SGD, L-BFGS.
--
-- Clement Farabet
----------------------------------------------------------------------
require 'torch' -- torch
require 'xlua' -- xlua provides useful tools, like progress bars
require 'optim' -- an optimization package, for online and batch methods
----------------------------------------------------------------------
-- parse command line arguments
if not opt then
   print '==> processing options'
   cmd = torch.CmdLine()
   cmd:text()
   cmd:text('SVHN Training/Optimization')
   cmd:text()
   cmd:text('Options:')
   cmd:option('-save', 'results', 'subdirectory to save/log experiments in')
   cmd:option('-visualize', false, 'visualize input data and weights during training')
   cmd:option('-plot', false, 'live plot')
   cmd:option('-optimization', 'SGD', 'optimization method: SGD | ASGD | CG | LBFGS')
   cmd:option('-learningRate', 1e-3, 'learning rate at t=0')
   cmd:option('-batchSize', 1, 'mini-batch size (1 = pure stochastic)')
   cmd:option('-weightDecay', 0, 'weight decay (SGD only)')
   cmd:option('-momentum', 0, 'momentum (SGD only)')
   cmd:option('-t0', 1, 'start averaging at t0 (ASGD only), in nb of epochs')
   cmd:option('-maxIter', 2, 'maximum nb of iterations for CG and LBFGS')
   cmd:text()
   opt = cmd:parse(arg or {})
end
----------------------------------------------------------------------
-- CUDA?
if opt.type == 'cuda' then
   model:cuda()
   criterion:cuda()
end
----------------------------------------------------------------------
print '==> defining some tools'
-- classes
classes = {'1','2','3','4','5','6','7','8','9','0'}
-- This matrix records the current confusion across classes
confusion = optim.ConfusionMatrix(classes)
-- Log results to files
trainLogger = optim.Logger(paths.concat(opt.save, 'train.log'))
testLogger = optim.Logger(paths.concat(opt.save, 'test.log'))
-- Retrieve parameters and gradients:
-- this extracts and flattens all the trainable parameters of the model
-- into a 1-dim vector
if model then
   parameters,gradParameters = model:getParameters()
end
----------------------------------------------------------------------
print '==> configuring optimizer'
if opt.optimization == 'CG' then
   optimState = {
      maxIter = opt.maxIter
   }
   optimMethod = optim.cg

elseif opt.optimization == 'LBFGS' then
   optimState = {
      learningRate = opt.learningRate,
      maxIter = opt.maxIter,
      nCorrection = 10
   }
   optimMethod = optim.lbfgs

elseif opt.optimization == 'SGD' then
   optimState = {
      learningRate = opt.learningRate,
      weightDecay = opt.weightDecay,
      momentum = opt.momentum,
      learningRateDecay = 1e-7
   }
   optimMethod = optim.sgd

elseif opt.optimization == 'ASGD' then
   optimState = {
      eta0 = opt.learningRate,
      t0 = trsize * opt.t0
   }
   optimMethod = optim.asgd

else
   error('unknown optimization method')
end
----------------------------------------------------------------------
print '==> defining training procedure'
function train()

   -- epoch tracker
   epoch = epoch or 1

   -- local vars
   local time = sys.clock()

   -- set model to training mode (for modules that differ in training and testing, like Dropout)
   model:training()

   -- shuffle at each epoch
   shuffle = torch.randperm(trsize)

   -- do one epoch
   print('==> doing epoch on training data:')
   print("==> online epoch # " .. epoch .. ' [batchSize = ' .. opt.batchSize .. ']')
   for t = 1,trainData:size(),opt.batchSize do
      -- disp progress
      xlua.progress(t, trainData:size())

      -- create mini batch
      local inputs = {}
      local targets = {}
      for i = t,math.min(t+opt.batchSize-1,trainData:size()) do
         -- load new sample
         local input = trainData.data[shuffle[i]]
         local target = trainData.labels[shuffle[i]]
         if opt.type == 'double' then input = input:double()
         elseif opt.type == 'cuda' then input = input:cuda() end
         table.insert(inputs, input)
         table.insert(targets, target)
      end

      -- create closure to evaluate f(X) and df/dX
      local feval = function(x)
         -- get new parameters
         if x ~= parameters then
            parameters:copy(x)
         end

         -- reset gradients
         gradParameters:zero()

         -- f is the average of all criterions
         local f = 0

         -- evaluate function for complete mini batch
         for i = 1,#inputs do
            -- estimate f
            local output = model:forward(inputs[i])
            local err = criterion:forward(output, targets[i])
            f = f + err

            -- estimate df/dW
            local df_do = criterion:backward(output, targets[i])
            model:backward(inputs[i], df_do)

            -- update confusion
            confusion:add(output, targets[i])
         end

         -- normalize gradients and f(X)
         gradParameters:div(#inputs)
         f = f/#inputs

         -- return f and df/dX
         return f,gradParameters
      end

      -- optimize on current mini-batch
      if optimMethod == optim.asgd then
         _,_,average = optimMethod(feval, parameters, optimState)
      else
         optimMethod(feval, parameters, optimState)
      end
   end

   -- time taken
   time = sys.clock() - time
   time = time / trainData:size()
   print("\n==> time to learn 1 sample = " .. (time*1000) .. 'ms')

   -- print confusion matrix
   print(confusion)

   -- update logger/plot
   trainLogger:add{['% mean class accuracy (train set)'] = confusion.totalValid * 100}
   if opt.plot then
      trainLogger:style{['% mean class accuracy (train set)'] = '-'}
      trainLogger:plot()
   end

   -- save/log current net
   local filename = paths.concat(opt.save, 'model.net')
   os.execute('mkdir -p ' .. sys.dirname(filename))
   print('==> saving model to '..filename)
   torch.save(filename, model)

   -- next epoch
   confusion:zero()
   epoch = epoch + 1
end
Here is doall.lua:
----------------------------------------------------------------------
-- This tutorial shows how to train different models on the street
-- view house number dataset (SVHN),
-- using multiple optimization techniques (SGD, ASGD, CG), and
-- multiple types of models.
--
-- This script demonstrates a classical example of training
-- well-known models (convnet, MLP, logistic regression)
-- on a 10-class classification problem.
--
-- It illustrates several points:
-- 1/ description of the model
-- 2/ choice of a loss function (criterion) to minimize
-- 3/ creation of a dataset as a simple Lua table
-- 4/ description of training and test procedures
--
-- Clement Farabet
----------------------------------------------------------------------
require 'torch'
----------------------------------------------------------------------
print '==> processing options'
cmd = torch.CmdLine()
cmd:text()
cmd:text('SVHN Loss Function')
cmd:text()
cmd:text('Options:')
-- global:
cmd:option('-seed', 1, 'fixed input seed for repeatable experiments')
cmd:option('-threads', 2, 'number of threads')
-- data:
cmd:option('-size', 'full', 'how many samples do we load: small | full | extra')
-- model:
cmd:option('-model', 'convnet', 'type of model to construct: linear | mlp | convnet')
-- loss:
cmd:option('-loss', 'nll', 'type of loss function to minimize: nll | mse | margin')
-- training:
cmd:option('-save', 'results', 'subdirectory to save/log experiments in')
cmd:option('-plot', false, 'live plot')
cmd:option('-optimization', 'SGD', 'optimization method: SGD | ASGD | CG | LBFGS')
cmd:option('-learningRate', 1e-3, 'learning rate at t=0')
cmd:option('-batchSize', 1, 'mini-batch size (1 = pure stochastic)')
cmd:option('-weightDecay', 0, 'weight decay (SGD only)')
cmd:option('-momentum', 0, 'momentum (SGD only)')
cmd:option('-t0', 1, 'start averaging at t0 (ASGD only), in nb of epochs')
cmd:option('-maxIter', 2, 'maximum nb of iterations for CG and LBFGS')
cmd:option('-type', 'double', 'type: double | float | cuda')
cmd:text()
opt = cmd:parse(arg or {})
-- nb of threads and fixed seed (for repeatable experiments)
if opt.type == 'float' then
   print('==> switching to floats')
   torch.setdefaulttensortype('torch.FloatTensor')
elseif opt.type == 'cuda' then
   print('==> switching to CUDA')
   require 'cunn'
   torch.setdefaulttensortype('torch.FloatTensor')
end
torch.setnumthreads(opt.threads)
torch.manualSeed(opt.seed)
----------------------------------------------------------------------
print '==> executing all'
dofile '1_data.lua'
dofile '2_model.lua'
dofile '3_loss.lua'
dofile '4_train.lua'
dofile '5_test.lua'
----------------------------------------------------------------------
print '==> training!'
while true do
   train()
   test()
end
The git link is https://github.com/torch/tutorials/blob/master/2_supervised/4_train.lua . I am also not using cuda, since I don't have a GPU.
I won't tell you what went wrong, because you haven't shown any effort to solve the problem yourself. But I will tell you how to proceed.
luajit: /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: attempt to index local 'f' (a nil value)
stack traceback:
  /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: in function 'execute'
  /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:71: in function 'uname'
  /home/sandesh/torch/install/share/lua/5.1/optim/Logger.lua:38: in function '__init'
  /home/sandesh/torch/install/share/lua/5.1/torch/init.lua:91: in function
  [C]: in function 'Logger'
This tells you that some local 'f' in line 38 of init.lua is nil, which causes the problem. So open that file and find out where the value of f should come from, and why it is nil. Then fix it. Also check whether there is a newer version of torch that handles a nil f correctly. If not, change the code yourself if you can. Otherwise, try to prevent the situation by validating the inputs you feed to torch.
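For orientation: the traceback shows that optim.Logger calls sys.uname(), which runs a shell command through sys.execute, and the failing line indexes a local 'f' that came from io.popen. io.popen returns nil when it cannot create the pipe, and reading from that nil handle is exactly the "attempt to index local 'f'" error above. A minimal sketch of that failure mode and the obvious guard, assuming the usual popen/read pattern (this is an illustration, not the verbatim sys source):

-- Illustrative sketch of the pattern behind sys/init.lua:38 (not the exact source).
-- io.popen returns nil when the command or the pipe itself cannot be created.
local function execute(cmd)
   local f = io.popen(cmd)          -- f is nil if popen fails
   if not f then
      error('io.popen failed for: ' .. cmd)
   end
   local s = f:read('*all')         -- indexing a nil f here raises the error seen above
   f:close()
   return s
end

-- Quick diagnostic in a luajit prompt: does io.popen work at all on this system?
print(io.popen('uname -a'))         -- nil printed here means popen itself is failing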
I am playing around with torch right now and keep stumbling over this problem. Check your input and see whether a wrong file / no file is being read. If the code worked before, and you haven't tried to change it, it should be intact, and your input is what's at fault.
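Concretely, you can check from a luajit prompt that the files the scripts touch actually exist before running doall.lua. A short sketch, assuming the dataset tensors sit in the working directory under these names (the names are assumptions; look in 1_data.lua for the ones your copy actually downloads):

-- Sanity-check inputs before training (a sketch; file names are hypothetical).
require 'paths'

local function check(file)
   if paths.filep(file) then
      print('ok:      ' .. file)
   else
      print('MISSING: ' .. file)
   end
end

check('housenumbers/train_32x32.t7')  -- hypothetical dataset locations
check('housenumbers/test_32x32.t7')
paths.mkdir('results')                -- the -save directory the Loggers write into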