Delimiter not found error - loading AWS Redshift from S3 using Kinesis Firehose

Mas*_*one 2 amazon-s3 amazon-web-services amazon-redshift amazon-kinesis-firehose

I am using Kinesis Firehose to stream data into Redshift via S3. I have a very simple CSV file that looks like the samples below. Firehose puts it into S3, but Redshift errors out with a "Delimiter not found" error. I have gone through all the posts related to this error, and I made sure the delimiter is included.

File

GOOG,2017-03-16T16:00:01Z,2017-03-17 06:23:56.986397,848.78
GOOG,2017-03-16T16:00:01Z,2017-03-17 06:24:02.061263,848.78
GOOG,2017-03-16T16:00:01Z,2017-03-17 06:24:07.143044,848.78
GOOG,2017-03-16T16:00:01Z,2017-03-17 06:24:12.217930,848.78

or

"GOOG","2017-03-17T16:00:02Z","2017-03-18 05:48:59.993260","852.12"
"GOOG","2017-03-17T16:00:02Z","2017-03-18 05:49:07.034945","852.12"
"GOOG","2017-03-17T16:00:02Z","2017-03-18 05:49:12.306484","852.12"
"GOOG","2017-03-17T16:00:02Z","2017-03-18 05:49:18.020833","852.12"
"GOOG","2017-03-17T16:00:02Z","2017-03-18 05:49:24.203464","852.12"

Redshift table

CREATE TABLE stockvalue
( symbol                   VARCHAR(4),
  streamdate               VARCHAR(20),
  writedate                VARCHAR(26),
  stockprice               VARCHAR(6)
);
  • The error (screenshot)

  • Just in case, this is what my Kinesis Firehose stream looks like (screenshot)

Can someone point out what might be wrong with the file? I added a comma between the fields. All the columns in the target table are varchar, so there should be no data-type mismatch. Also, the column lengths match exactly between the file and the Redshift table. I have tried both with and without the columns embedded in double quotes.

小智 5

Can you post the complete COPY command? It is truncated in the screenshot.

My guess is that you are missing DELIMITER ',' in your COPY command. Try adding it to the COPY command.
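As a sketch, the COPY command with the delimiter specified might look like this. The S3 path, IAM role ARN, and region below are placeholders, not values from the question:

```sql
-- Hypothetical COPY command; the bucket, role ARN, and region are placeholders.
COPY stockvalue
FROM 's3://your-firehose-bucket/prefix/'
IAM_ROLE 'arn:aws:iam::123456789012:role/your-redshift-role'
DELIMITER ','
REGION 'us-east-1';
```

For the double-quoted variant of the file, also add REMOVEQUOTES, or use the CSV option instead (which handles both the comma delimiter and the quoting in one setting).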