I have a directory structure containing many directories with non-ASCII characters, mostly Sanskrit. I'm indexing these directories/files in a script, but I can't figure out how best to handle these cases. My process:

1. Walk the tree and write out TSVs of path/filename/md5 hash.
2. Read those TSVs back into a dictionary keyed to the hash, where each entry looks like {'path': columns[0], 'filename': columns[1], 'status': True} and status determines whether the file gets acted on later.
3. Move the duplicates (effectively mv a b); this part isn't important, but I thought I'd include it.

Here is some sample data and what I've written so far:
Sample of the generated TSV (path/name/hash):
./Personal Research/Ramnad 91410	DSC_0004.JPG	850cd9dcb0075febd4c0dcd549dd7860
./Personal Research/Ramnad 91410	DSC_0010.JPG	9db2219fc4c9423016fb9e295452f1ad
./Personal Research/Ramnad 91410	DSC_0006.JPG	ef7d13b88bbaabc029390bcef1319bb1
"
实际上是unicode:
Block:私人使用区Unicode: U + F019UTF-8: 0xEF 0x80 0x99JavaScript: 0xF019
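A quick interpreter check lines those values up (plain Python, nothing from my script):

>>> ch = '\uf019'
>>> ch.encode('utf-8')
b'\xef\x80\x99'
>>> hex(ord(ch))
'0xf019'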
Code: writing the above out to file (the full TSV):
import csv
import hashlib
import os
import re

for root, dirs, files in os.walk(SRC_DIR, topdown=True):
    files[:] = [f for f in files if any(ext in f for ext in EXT_LIST) if not f.startswith('.')]
    for file in files:
        with open(os.path.join(root, file), 'r') as f:
            with open(SAVE_DIR + re.sub(r'\W+', '', os.path.basename(root).lower()) + '.tsv', 'a') as fout:
                writer = csv.writer(fout, delimiter='\t', quotechar='\"', quoting=csv.QUOTE_MINIMAL)
                checksums = []
                with open(os.path.join(root, file), 'rb') as _file:
                    checksums.append([root, file, hashlib.md5(_file.read()).hexdigest()])
                writer.writerows(checksums)
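Side note: hashlib.md5(_file.read()) pulls each file into memory in one go. That's fine for JPGs, but for anything large a chunked read does the same job without the memory spike. Something like this (a hypothetical helper, not in my script yet):

def md5_file(path, blocksize=65536):
    # hash in fixed-size chunks so big files never load fully into memory
    h = hashlib.md5()
    with open(path, 'rb') as fh:
        for block in iter(lambda: fh.read(blocksize), b''):
            h.update(block)
    return h.hexdigest()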
Reading back from that file:
# generate list of all tsv
for (dir, subs, files) in os.walk(ROOT):
    # remove the new-root from the search
    subs = [s for s in subs if NROOT not in s]
    for f in files:
        fpath = os.path.join(dir, f)
        if ".tsv" in fpath:
            TSVLIST.append(fpath)

# open/append all TSV content to a single new TSV
with open(FULL, 'w') as wfd:
    for f in TSVLIST:
        with open(f, 'r') as fd:
            wfd.write(fd.read())
            lines = sum(1 for line in f)

# add all entries to a dictionary keyed to their hash
entrydict = {}
ec = 0
with open(FULL, 'r') as fulltsv:
    for line in fulltsv:
        columns = line.strip().split('\t')
        if not columns[2].startswith('.'):
            if columns[2] not in entrydict.keys():
                entrydict[str(columns[2])] = []
            entrydict[str(columns[2])].append({'path': columns[0], 'filename': columns[1], 'status': True})
            if len(entrydict[str(columns[2])]) > 1:
                ec += 1
ed = {k: v for k, v in entrydict.items() if len(v) >= 2}
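After that filter, ed holds only the hashes that matched more than one file; structurally it looks like this (values invented for illustration):

ed = {
    '850cd9dcb0075febd4c0dcd549dd7860': [
        {'path': './Personal Research/A', 'filename': 'DSC_0004.JPG', 'status': True},
        {'path': './Personal Research/B', 'filename': 'DSC_0004.JPG', 'status': True},
    ],
}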
Moving the duplicates:
# (from context: f is one of the lists in ed and h its hash; the outer loop isn't shown)
for e in f:
    if len(f) - mvcnt > 1:
        if e['status'] is True:
            p = e['path']      # path
            n = e['filename']  # name
            n0, n0ext = os.path.splitext(n)
            n1 = n
            # directory structure for new file
            FROOT = p.replace(p.split('/')[0], NROOT, 1)
            n1 = n
            rebk = 'mv {0}/{1} {2}/{3}'.format(FROOT, n, p, n)
            shutil.move('{0}/{1}'.format(p, n), '{0}/{1}'.format(FROOT, n))
            dupelist.write('{0} #{1}\n'.format(rebk, str(h)))
            mvcnt += 1
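For reference, the FROOT line is just swapping the leading path segment for NROOT; e.g. (assuming NROOT = './duplicateRoot', which matches the destination path in the traceback below):

>>> p = './Personal Research/Ramnad 9\uf01914\uf01910'
>>> p.replace(p.split('/')[0], './duplicateRoot', 1)
'./duplicateRoot/Personal Research/Ramnad 9\uf01914\uf01910'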
The error I'm getting:
Traceback (most recent call last):
  File "/usr/lib/python3.6/shutil.py", line 550, in move
    os.rename(src, real_dst)
FileNotFoundError: [Errno 2] No such file or directory: '"./Personal Research/Ramnad 9""14""10"/DSC_0003.NEF' -> './duplicateRoot/Personal Research/Ramnad 9""14""10"/DSC_0003.NEF'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dCompare.py", line 164, in <module>
    shutil.move('{0}/{1}'.format(p,n),'{0}/{1}'.format(FROOT,n))
  File "/usr/lib/python3.6/shutil.py", line 564, in move
    copy_function(src, real_dst)
  File "/usr/lib/python3.6/shutil.py", line 263, in copy2
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.6/shutil.py", line 120, in copyfile
    with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: '"./Personal Research/Ramnad 9""14""10"/DSC_0003.NEF'
Obviously this has something to do with how I'm handling the Unicode characters, but I've never worked with them before and am not sure at what point, or how, I should be handling the filenames. Working in Ubuntu (on Windows Subsystem for Linux) with Python 3.
One problem I see reading the stack trace is that, given the OP's sample TSV, the Unicode characters are wrong (they aren't there):
FileNotFoundError: [Errno 2] No such file or directory: '"./Personal Research/Ramnad 9""14""10"/DSC_0003.NEF' -> './duplicateRoot/Personal Research/Ramnad 9""14""10"/DSC_0003.NEF'
There are quotes in both the source and destination paths that I don't think should be there, and the extra, doubled quotes look as if the path was taken apart and stitched back together (or something :]
'"./Personal Research/Ramnad 9""14""10"/DSC_0003.NEF'
I tried to recreate the OP's error, but couldn't. When I first worked through the example below I did get a FileNotFoundError (because the destination folder was missing; my example didn't have the os.makedirs() yet), but the path was properly encoded:
FileNotFoundError: [Errno 2] No such file or directory: 'foo/Personal Research/Ramnad 9\uf01914\uf01910/DSC_0006.JPG'
All I can offer is a guess that the encoding is getting mangled either in the TSV file or in entrydict. OP, have you inspected the file or the dict in the interpreter and confirmed that you see \uf019 in the paths where you expect it? Maybe something like the following, to make sure those code points are there:
>>> print(path.encode('unicode_escape'))
b'./Personal Research/Ramnad 9\\uf01914\\uf01910'
>>> # or, look for 61465
>>> [ord(char) for char in path]
[46, 47, 80, 101, 114, 115, 111, 110, 97, 108, 32, 82, 101,
115, 101, 97, 114, 99, 104, 47, 82, 97, 109, 110, 97, 100,
32, 57, 61465, 49, 52, 61465, 49, 48]
Here's my attempt, in case it helps...

I created a sample TSV file and the matching directory structure:
>>> p='./Personal Research/Ramnad 9\uf01914\uf01910'
>>> os.makedirs(p)
>>> checksums=[[p, 'DSC_0006.JPG', 'hash']]
>>> with open('full.tsv', 'a') as fout:
        writer = csv.writer(fout, delimiter='\t', quotechar='\"', quoting=csv.QUOTE_MINIMAL)
        writer.writerows(checksums)
and touched the file from the shell:
$ touch Personal\ Research/Ramnad\ 91410/DSC_0006.JPG
I checked full.tsv to make sure it was written correctly:
$ cat full.tsv
./Personal Research/Ramnad 91410	DSC_0006.JPG	hash
The empty boxes are the code points, correctly utf-8 encoded per the Unicode description of  that the OP included.
I ran hexdump -C full.tsv to make sure of the utf-8 encoding (look for the two sets of ef 80 99):
00000010 72 63 68 2f 52 61 6d 6e 61 64 20 39 ef 80 99 31 |rch/Ramnad 9...1|
00000020 34 ef 80 99 31 30 09 44 53 43 5f 30 30 30 36 2e |4...10.DSC_0006.|
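(The same check works from Python if hexdump isn't handy; count the raw byte sequence directly:)

>>> data = open('full.tsv', 'rb').read()
>>> data.count(b'\xef\x80\x99')
2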
Then I ran:
>>> entrydict = {}
>>> ec = 0
>>> with open('full.tsv', 'r') as fulltsv:
        for line in fulltsv:
            columns = line.strip().split('\t')
            if not columns[2].startswith('.'):
                if columns[2] not in entrydict.keys():
                    entrydict[str(columns[2])] = []
                entrydict[str(columns[2])].append({'path': columns[0], 'filename': columns[1], 'status': True})
                if len(entrydict[str(columns[2])]) > 1:
                    ec += 1
>>> entrydict
{'hash': [{'path': './Personal Research/Ramnad 9\uf01914\uf01910', 'filename': 'DSC_0006.JPG', 'status': True}]}
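Separate from the encoding question: reading with csv.reader instead of split('\t') would also undo any quoting that csv.writer added on the way out. A sketch, untested against the OP's real data:

import csv

entrydict = {}
with open('full.tsv', newline='') as fulltsv:
    for columns in csv.reader(fulltsv, delimiter='\t', quotechar='"'):
        # setdefault replaces the "not in keys" dance above
        entrydict.setdefault(columns[2], []).append(
            {'path': columns[0], 'filename': columns[1], 'status': True})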
And finally:
>>> e = entrydict['hash'][0]
>>> e
{'path': './Personal Research/Ramnad 9\uf01914\uf01910', 'filename': 'DSC_0006.JPG', 'status': True}
>>> NROOT='foo'
>>> if e['status'] is True:
        p = e['path']      # path
        n = e['filename']  # name
        n0, n0ext = os.path.splitext(n)
        n1 = n
        # directory structure for new file
        FROOT = p.replace(p.split('/')[0], NROOT, 1)
        rebk = 'mv {0}/{1} {2}/{3}'.format(FROOT, n, p, n)
        print(rebk)
        src = '{0}/{1}'.format(p, n)
        dst = '{0}/{1}'.format(FROOT, n)
        os.makedirs(FROOT)
        shutil.move(src, dst)
And it worked. *shrug*
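One last detail: os.makedirs(FROOT) was enough for my single-file test, but in a loop over many duplicates the target directory will already exist by the second move, so guarding it avoids a FileExistsError:

os.makedirs(FROOT, exist_ok=True)  # exist_ok needs Python 3.2+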