MalformedXML: The XML you provided was not well-formed or did not validate against our published schema

Question · 0 votes · 11 answers

I'm running into this strange issue with AWS S3. I'm building an application that stores images in an AWS bucket, using Multer as middleware and the S3FS library to connect and upload to AWS.

But when I try to upload something, the following error pops up:

"MalformedXML: The XML you provided was not well-formed or did not validate against our published schema"

Index.js

var express = require('express');
var router = express();
var multer = require('multer');
var fs = require('fs');
var S3FS = require('s3fs');
var upload = multer({
  dest: 'uploads'
})
var S3fsImpl = new S3FS('bucket-name', {
  region: 'us-east-1',
  accessKeyId: 'XXXXXXXXXXXX',
  secretAccessKey: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
});

/* GET home page. */
router.get('/', function (req, res, next) {
  res.render('profile', {
    title: 'Express'
  });
});

router.post('/testupload', upload.single('file'), function (req, res) {
  var file = req.file;
  console.log(file);

  var path = req.file.path;
  var stream = fs.createReadStream(path);
  console.log(stream);

  S3fsImpl.writeFile(file.name, stream).then(function () {
    fs.unlink(file.path, function (err) {
      if (err) {
        console.log(err);
      }
    });
    res.redirect('/profile');
  })
});

module.exports = router;

Edit: Output

{ fieldname: 'file',
  originalname: '441_1.docx',
  encoding: '7bit',
  mimetype: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  destination: 'uploads',
  filename: '662dcbe544804e4f50dfef1f52b40d22',
  path: 'uploads\\662dcbe544804e4f50dfef1f52b40d22',
  size: 13938 }
ReadStream {
  _readableState:
   ReadableState {
     objectMode: false,
     highWaterMark: 65536,
     buffer: BufferList { head: null, tail: null, length: 0 },
     length: 0,
     pipes: null,
     pipesCount: 0,
     flowing: null,
     ended: false,
     endEmitted: false,
     reading: false,
     sync: true,
     needReadable: false,
     emittedReadable: false,
     readableListening: false,
     resumeScheduled: false,
     defaultEncoding: 'utf8',
     ranOut: false,
     awaitDrain: 0,
     readingMore: false,
     decoder: null,
     encoding: null },
  readable: true,
  domain: null,
  _events: { end: [Function] },
  _eventsCount: 1,
  _maxListeners: undefined,
  path: 'uploads\\662dcbe544804e4f50dfef1f52b40d22',
  fd: null,
  flags: 'r',
  mode: 438,
  start: undefined,
  end: undefined,
  autoClose: true,
  pos: undefined,
  bytesRead: 0 }

Package.json

{
  "name": "aws-s3-images",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.17.1",
    "connect-multiparty": "^2.0.0",
    "cookie-parser": "~1.4.3",
    "debug": "~2.6.3",
    "express": "~4.15.2",
    "hbs": "~4.0.1",
    "morgan": "~1.8.1",
    "multer": "^1.3.0",
    "s3fs": "^2.5.0",
    "serve-favicon": "~2.4.2"
  },
  "description": "AWS S3 uploading images",
  "main": "app.js",
  "devDependencies": {},
  "keywords": [
    "javascript"
  ],
  "author": "reeversedev",
  "license": "MIT"
}
javascript node.js amazon-web-services amazon-s3
11 Answers
23 votes

S3 limits each DeleteObjectsRequest to 1000 keys. So after fetching the full list of KeyVersions, I check whether there are more than 1000 keys; if so, I partition the list into sublists and pass each sublist to its own DeleteObjectsRequest, like this:

if (keys.size() > 1000) {
    int count = 0;
    List<List<KeyVersion>> partition = ListUtils.partition(keys, 1000);
    for (List<KeyVersion> list : partition) {
        count = count + list.size();
        DeleteObjectsRequest request = new DeleteObjectsRequest(
                fileSystemConfiguration.getTrackingS3BucketName()).withKeys(list);
        amazonS3Client.deleteObjects(request);
        logger.info("Deleted the completed directory files " + list.size() + " from folder "
                + eventSpecificS3bucket);
    }
    logger.info("Deleted the total directory files " + count + " from folder " + eventSpecificS3bucket);
} else {
    DeleteObjectsRequest request = new DeleteObjectsRequest(
            fileSystemConfiguration.getTrackingS3BucketName()).withKeys(keys);
    amazonS3Client.deleteObjects(request);
    logger.info("Deleted the completed directory files from folder " + eventSpecificS3bucket);
}

23 votes

In case anyone is still hitting this: in my case, the problem only occurred when passing an empty array of objects to delete, which crashes the request with the "MalformedXML" error.

const data: S3.DeleteObjectsRequest = {
  Bucket: bucketName,
  Delete: {
    Objects: [], // <-- here: an empty array triggers MalformedXML
  },
}

return s3Bucket.deleteObjects(data).promise()

So just check that the `Objects` array of keys is not empty before sending the request to AWS.
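A minimal sketch of such a guard, assuming an AWS SDK for JavaScript v2 `S3` client (`s3Bucket`, as in the snippet above) and a hypothetical helper `buildDeleteRequest`:

```javascript
// Skip the DeleteObjects call entirely when there is nothing to delete;
// an empty Objects array makes S3 reject the request with MalformedXML.
function buildDeleteRequest(bucketName, keys) {
  if (keys.length === 0) return null; // nothing to delete
  return {
    Bucket: bucketName,
    Delete: { Objects: keys.map((Key) => ({ Key })) },
  };
}

async function safeDeleteObjects(s3Bucket, bucketName, keys) {
  const params = buildDeleteRequest(bucketName, keys);
  if (!params) return; // avoid the MalformedXML error on empty input
  await s3Bucket.deleteObjects(params).promise();
}
```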


22 votes

I hit this issue while using the AmplifyJS library. Per the AWS documentation on the multipart upload overview:

When you upload a part, Amazon S3 returns an ETag header in its response. For each part upload, you must record the part number and the ETag value. You need to include these values in the subsequent request to complete the multipart upload.

But the S3 default configuration does not expose this header. Just go to the Permissions tab and add

<ExposeHeader>ETag</ExposeHeader>

to the CORS configuration. https://github.com/aws-amplify/amplify-js/issues/61
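For reference, a minimal CORS rule exposing the ETag header might look like the following sketch (the allowed origins and methods here are placeholders; adjust them to your own setup):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
```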


3 votes

If you are using ActiveStorage with Minio, add

force_path_style: true

to your configuration:

# config/storage.yml
minio:
  service: S3
  access_key_id: name
  secret_access_key: password
  endpoint: http://example.com:9000/
  region: us-east-1
  bucket: myapp-production
  force_path_style: true # add this

3 votes

I added ETag to ExposeHeaders in the CORS configuration and it solved the problem.


2 votes
input := &s3.DeleteObjectsInput{
    Bucket: bucketName,
    Delete: &s3.Delete{
        Objects: objs, // <- up to 1000 keys
        Quiet:   aws.Bool(false),
    },
}

I am using the aws-sdk-go SDK. When the number of keys in objs exceeded 1000, I got the same error: MalformedXML: The XML you provided was not well-formed or did not validate against our published schema.

The request can contain a list of up to 1000 keys. Reference: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
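The same 1000-key limit applies in any SDK. A minimal Node.js sketch of splitting keys into compliant batches, assuming an AWS SDK for JavaScript v2 `S3` client named `s3` and a `bucketName` variable:

```javascript
// Split an array into batches of at most `size` elements.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Issue one DeleteObjects request per batch of up to 1000 keys,
// since larger requests are rejected with MalformedXML.
async function deleteAllKeys(s3, bucketName, keys) {
  for (const batch of chunk(keys, 1000)) {
    await s3.deleteObjects({
      Bucket: bucketName,
      Delete: { Objects: batch.map((Key) => ({ Key })) },
    }).promise();
  }
}
```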


0 votes

As far as I can tell, just cross-check the bucket name.

final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, accessKeyId, is, meta);

0 votes

For those coming from Talend: in my case, cross-check the bucket name in the tS3Put component, and in the Key field give whatever name you want the uploaded file to have in S3.

Since I'm new to StackOverflow, I'm not allowed to attach images here. You can copy the URL below to view it. Thanks.

https://i.sstatic.net/Q1pW0.png


0 votes

I got this error message when running the following command:

program-that-prints-to-stdout | \
aws s3 cp \
  --expected-size=${INCORRECT_SIZE} \
  --sse=AES256 \
  - \
  s3://path/to/uploaded-file

In my case, `program-that-prints-to-stdout` did not print the 23 GB I expected (and had set in the `INCORRECT_SIZE` argument); in fact, it printed nothing at all.

I hope this helps someone.


-1 votes

This code should work for you. Two things to remember: 1) use a unique bucket name; 2) under the file object, use `originalname` instead of `name` (the `name` property does not exist).

app.post('/testupload', function (req, res) {

    var file = req.files[0];

    console.log(file.path);
    console.log(file.originalname);

    console.log('FIRST TEST: ' + JSON.stringify(file));

    var stream = fs.createReadStream(file.path);

    S3fsImpl.writeFile(file.originalname, stream).then(
        function () {
            console.log('File has been sent - OK');
        },
        function (reason) {
            throw reason;
        }
    );

    res.redirect('/index');

});

-2 votes

Can you try this code:

var S3fsImpl = new S3FS('bucket-name', {
  region: 'us-east-1',
  accessKeyId: 'XXXXXXXXXXXX',
  secretAccessKey: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
});

var fsImplStyles = S3fsImpl.getPath(file.name);

// Change us-east-1 for your region
var url = 'https://s3-us-east-1.amazonaws.com/' + fsImplStyles;

Please send feedback if this code works for you.
