How to generate a schema from a CSV for PostgreSQL COPY

DPS*_*ial 20 csv postgresql schema postgresql-copy

Given a CSV with dozens (or more) of columns, how can you create a "schema" that can be used in a CREATE TABLE SQL expression in PostgreSQL, for use with the COPY tool?

I see plenty of examples for the COPY tool and for basic CREATE TABLE expressions, but nothing that covers the case where the number of columns makes creating the schema by hand impractical.

Dan*_*ler 19

If the CSV is not too large and is available on your local machine, csvkit is the simplest solution. It also contains a number of other utilities for working with CSVs, so it is a very useful tool in general.

At its simplest, type in a shell:

$ csvsql myfile.csv

This will print out the required CREATE TABLE SQL statement, which can be saved to a file using output redirection.

If you also provide a connection string, csvsql will create the table and upload the file in one go:

$ csvsql --db "$MY_DB_URI" --insert myfile.csv

There are also options for specifying the flavor of SQL and CSV you are working with. They are documented in the built-in help:

$ csvsql -h
usage: csvsql [-h] [-d DELIMITER] [-t] [-q QUOTECHAR] [-u {0,1,2,3}] [-b]
              [-p ESCAPECHAR] [-z MAXFIELDSIZE] [-e ENCODING] [-S] [-H] [-v]
              [--zero] [-y SNIFFLIMIT]
              [-i {access,sybase,sqlite,informix,firebird,mysql,oracle,maxdb,postgresql,mssql}]
              [--db CONNECTION_STRING] [--query QUERY] [--insert]
              [--tables TABLE_NAMES] [--no-constraints] [--no-create]
              [--blanks] [--no-inference] [--db-schema DB_SCHEMA]
              [FILE [FILE ...]]

Generate SQL statements for one or more CSV files, create execute those
statements directly on a database, and execute one or more SQL queries.

positional arguments:
  FILE                  The CSV file(s) to operate on. If omitted, will accept
                        input on STDIN.

optional arguments:
  -h, --help            show this help message and exit
  -d DELIMITER, --delimiter DELIMITER
                        Delimiting character of the input CSV file.
  -t, --tabs            Specifies that the input CSV file is delimited with
                        tabs. Overrides "-d".
  -q QUOTECHAR, --quotechar QUOTECHAR
                        Character used to quote strings in the input CSV file.
  -u {0,1,2,3}, --quoting {0,1,2,3}
                        Quoting style used in the input CSV file. 0 = Quote
                        Minimal, 1 = Quote All, 2 = Quote Non-numeric, 3 =
                        Quote None.
  -b, --doublequote     Whether or not double quotes are doubled in the input
                        CSV file.
  -p ESCAPECHAR, --escapechar ESCAPECHAR
                        Character used to escape the delimiter if --quoting 3
                        ("Quote None") is specified and to escape the
                        QUOTECHAR if --doublequote is not specified.
  -z MAXFIELDSIZE, --maxfieldsize MAXFIELDSIZE
                        Maximum length of a single field in the input CSV
                        file.
  -e ENCODING, --encoding ENCODING
                        Specify the encoding the input CSV file.
  -S, --skipinitialspace
                        Ignore whitespace immediately following the delimiter.
  -H, --no-header-row   Specifies that the input CSV file has no header row.
                        Will create default headers.
  -v, --verbose         Print detailed tracebacks when errors occur.
  --zero                When interpreting or displaying column numbers, use
                        zero-based numbering instead of the default 1-based
                        numbering.
  -y SNIFFLIMIT, --snifflimit SNIFFLIMIT
                        Limit CSV dialect sniffing to the specified number of
                        bytes. Specify "0" to disable sniffing entirely.
  -i {access,sybase,sqlite,informix,firebird,mysql,oracle,maxdb,postgresql,mssql}, --dialect {access,sybase,sqlite,informix,firebird,mysql,oracle,maxdb,postgresql,mssql}
                        Dialect of SQL to generate. Only valid when --db is
                        not specified.
  --db CONNECTION_STRING
                        If present, a sqlalchemy connection string to use to
                        directly execute generated SQL on a database.
  --query QUERY         Execute one or more SQL queries delimited by ";" and
                        output the result of the last query as CSV.
  --insert              In addition to creating the table, also insert the
                        data into the table. Only valid when --db is
                        specified.
  --tables TABLE_NAMES  Specify one or more names for the tables to be
                        created. If omitted, the filename (minus extension) or
                        "stdin" will be used.
  --no-constraints      Generate a schema without length limits or null
                        checks. Useful when sampling big tables.
  --no-create           Skip creating a table. Only valid when --insert is
                        specified.
  --blanks              Do not coerce empty strings to NULL values.
  --no-inference        Disable type inference when parsing the input.
  --db-schema DB_SCHEMA
                        Optional name of database schema to create table(s)
                        in.

Several other tools also do schema inference, including:

  • Apache Spark
  • Pandas (Python)
  • Blaze (Python)
  • read.csv + your favorite database package in R

Each of these can read a CSV (and other formats) into a tabular data structure usually called a DataFrame or similar, inferring the column types in the process. They can then write out an equivalent SQL schema with other commands, or upload the DataFrame directly into a specified database. The choice of tool depends on the volume of data, how it is stored, the peculiarities of your CSV, the target database, and your preferred language.
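To illustrate what these tools do under the hood, here is a minimal, stdlib-only Python sketch of the same idea: read the header and some sample rows, guess a type per column, and emit a CREATE TABLE statement. The heuristics below are deliberately simplified assumptions for the sketch, not what any particular library actually implements:

```python
import csv
import io
import re

def infer_type(values):
    """Guess a PostgreSQL type from sample values (simplified heuristics)."""
    if all(re.fullmatch(r'[+-]?\d+', v) for v in values):
        return 'integer'
    if all(re.fullmatch(r'[+-]?\d*\.\d+', v) for v in values):
        return 'numeric'
    if all(re.fullmatch(r'\d{4}-\d{2}-\d{2}', v) for v in values):
        return 'date'
    return 'text'

def csv_to_create_table(f, table_name):
    """Read the header and sample rows, emit a CREATE TABLE statement."""
    reader = csv.reader(f)
    header = next(reader)
    sample = list(reader)  # in practice, cap the sample for large files
    cols = []
    for i, name in enumerate(header):
        values = [row[i] for row in sample if row[i] != '']
        cols.append(f'{name} {infer_type(values)}')
    return f'create table {table_name} ({", ".join(cols)});'

data = io.StringIO('id,name,price\n1,apple,0.5\n2,pear,1.25\n')
print(csv_to_create_table(data, 'fruit'))
# create table fruit (id integer, name text, price numeric);
```

A real tool would additionally handle quoting of column names, dialect sniffing, and NULL coercion, which is exactly what csvsql's options above expose.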


kli*_*lin 10

Basically, you should prepare the data (including its structure) outside the database, using ready-made tools or python, ruby or a language of your choice. However, in the absence of such options, you can do quite a lot with plpgsql.

Creating a table with text columns

A file in csv format does not contain any information about column types, primary or foreign keys, etc. You can relatively easily create a table with text columns and copy the data into it. After that, you should change the column types and add constraints manually.

create or replace function import_csv(csv_file text, table_name text)
returns void language plpgsql as $$
begin
    -- load the raw file, one line per row
    create temp table import (line text) on commit drop;
    execute format('copy import from %L', csv_file);

    -- build "create table ... (col1 text, col2 text, ...)" from the header line;
    -- the from clause works because plpgsql evaluates the expression
    -- by prepending select, turning it into a query over import
    execute format('create table %I (%s);', 
        table_name, concat(replace(line, ',', ' text, '), ' text'))
    from import limit 1;

    execute format('copy %I from %L (format csv, header)', table_name, csv_file);
end $$;

Sample data in the file c:\data\test.csv:

id,a_text,a_date,a_timestamp,an_array
1,str 1,2016-08-01,2016-08-01 10:10:10,"{1,2}"
2,str 2,2016-08-02,2016-08-02 10:10:10,"{1,2,3}"
3,str 3,2016-08-03,2016-08-03 10:10:10,"{1,2,3,4}"

Import:

select import_csv('c:\data\test.csv', 'new_table');

select * from new_table;

 id | a_text |   a_date   |     a_timestamp     | an_array  
----+--------+------------+---------------------+-----------
 1  | str 1  | 2016-08-01 | 2016-08-01 10:10:10 | {1,2}
 2  | str 2  | 2016-08-02 | 2016-08-02 10:10:10 | {1,2,3}
 3  | str 3  | 2016-08-03 | 2016-08-03 10:10:10 | {1,2,3,4}
(3 rows)

Large csv files

The function above imports the data twice (into the temporary table and into the target table). For large files this can mean a serious waste of time and unnecessary load on the server. A solution is to split the csv file into two files, one with the header and one with the data. The function should then look like this:

create or replace function import_csv(header_file text, data_file text, table_name text)
returns void language plpgsql as $$
begin
    create temp table import (line text) on commit drop;
    execute format('copy import from %L', header_file);

    execute format('create table %I (%s);', 
        table_name, concat(replace(line, ',', ' text, '), ' text'))
    from import;

    execute format('copy %I from %L (format csv)', table_name, data_file);
end $$;
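The split itself is easy to do outside the database. As a sketch, in Python (the file names are placeholders, and the sample content mirrors a tiny csv created on the spot):

```python
# create a small sample file to split (placeholder content)
with open('test.csv', 'w') as f:
    f.write('id,a_text\n1,str 1\n2,str 2\n')

def split_csv(src, header_file, data_file):
    """Copy the first line of src to header_file and the rest to data_file."""
    with open(src) as fin, \
         open(header_file, 'w') as fhead, \
         open(data_file, 'w') as fdata:
        fhead.write(fin.readline())   # header row only
        for line in fin:              # stream the rest; no full read into memory
            fdata.write(line)

split_csv('test.csv', 'test_header.csv', 'test_data.csv')
```

Streaming line by line keeps memory usage constant regardless of file size, which is the point of the split in the first place.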

Changing column types

You can try to change the column types automatically based on their content. This can succeed if you are dealing with simple types and the data in the file consistently keeps a specific format. In general, however, this is a complex task and the functions listed below should be treated only as an example.

Determine a column type based on its content (edit the function to add the conversions you need):

create or replace function column_type(val text)
returns text language sql as $$
    select 
        case 
            when val ~ '^[\+-]{0,1}\d+$' then 'integer'
            when val ~ '^[\+-]{0,1}\d*\.\d+$' then 'numeric'
            when val ~ '^\d\d\d\d-\d\d-\d\d$' then 'date'
            when val ~ '^\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d$' then 'timestamp'
        end
$$;
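The same regular expressions can be tried out outside the database first. Here is a small Python port of the column_type function above, useful for checking how sample values would be classified:

```python
import re

# same patterns, in the same order, as the plpgsql column_type() function
PATTERNS = [
    (r'^[+-]?\d+$', 'integer'),
    (r'^[+-]?\d*\.\d+$', 'numeric'),
    (r'^\d\d\d\d-\d\d-\d\d$', 'date'),
    (r'^\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d$', 'timestamp'),
]

def column_type(val):
    """Return the first matching type name, or None (mirrors the SQL CASE)."""
    for pattern, type_name in PATTERNS:
        if re.match(pattern, val):
            return type_name
    return None

print(column_type('2016-08-01'))            # date
print(column_type('2016-08-01 10:10:10'))   # timestamp
print(column_type('{1,2}'))                 # None
```

Note that a value like {1,2} matches none of the patterns, so (as in the SQL version) such a column keeps its text type.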

Change the column types using the above function:

create or replace function alter_column_types(table_name text)
returns void language plpgsql as $$
declare
    rec record;
    qry text;
begin
    -- examine the first row, column by column (as json key/value pairs)
    for rec in
        execute format(
            'select key, column_type(value) ctype
            from (
                select row_to_json(t) a_row 
                from %I t 
                limit 1
            ) s, json_each_text (a_row)',
            table_name)
    loop
        if rec.ctype is not null then
            -- accumulate one "alter table" per recognized column;
            -- format() renders the initially null qry as an empty string
            qry:= format(
                '%salter table %I alter %I type %s using %s::%s;', 
                qry, table_name, rec.key, rec.ctype, rec.key, rec.ctype);
        end if;
    end loop;
    execute(qry);
end $$;

Usage:

select alter_column_types('new_table');

\d new_table

               Table "public.new_table"
   Column    |            Type             | Modifiers 
-------------+-----------------------------+-----------
 id          | integer                     | 
 a_text      | text                        | 
 a_date      | date                        | 
 a_timestamp | timestamp without time zone | 
 an_array    | text                        |

(Well, correctly recognizing array types is quite complex.)
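As a rough idea of what array detection could look like, here is a naive Python check for integer-array literals such as {1,2}. It is only a heuristic (arbitrary braced text can also look like this), which is why the functions above simply leave such columns as text:

```python
import re

def looks_like_int_array(val):
    """Naive check for a Postgres integer-array literal such as {1,2,3}.
    Heuristic only: it ignores nested arrays, quoting, and NULL elements."""
    return re.fullmatch(r'\{\s*[+-]?\d+(\s*,\s*[+-]?\d+)*\s*\}', val) is not None

print(looks_like_int_array('{1,2,3}'))   # True
print(looks_like_int_array('{a,b}'))     # False
```

A robust solution would have to deal with element quoting, nested arrays, and mixed element types, which is precisely why the plpgsql example does not attempt it.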