
Methods of Importing Data to MySQL Servers in Batches

This article describes two techniques for importing data to MySQL servers in batches, along with examples.

By Digoal

Background

There are two ways to import data to MySQL servers quickly.

1) LOAD DATA LOCAL INFILE

Import data from files on the client host to the MySQL server.

Here is an example:

create table test (id int, c1 numeric);    
    
Generate the data directly in PostgreSQL and pipe it over:
    
psql -c "copy (select generate_series(1,10000), random()*100) to stdout with (format 'csv')" | mysql -h rm-bp1wv992ym962k85888370.mysql.rds.aliyuncs.com -P 3306 -u user123 --password=Dts_test123! -D db1 -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE test FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (id, c1)"    

2) LOAD DATA INFILE

Import data from a file that resides on the MySQL server host, as sketched below. Alternatively, generate the rows with a stored procedure that inserts in a loop; don't forget to wrap the call in a transaction, otherwise the process will be very slow.
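A minimal LOAD DATA INFILE sketch, assuming the CSV file has already been copied to the server host into a directory permitted by secure_file_priv (the path below is a placeholder):

LOAD DATA INFILE '/var/lib/mysql-files/test.csv'
INTO TABLE test
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(id, c1);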

Here is the stored-procedure example:

CREATE TABLE employees (    
  id INT NOT NULL,    
  fname VARCHAR(30),    
  lname VARCHAR(30),    
  birth TIMESTAMP,    
  hired DATE NOT NULL DEFAULT '1970-01-01',    
  separated DATE NOT NULL DEFAULT '9999-12-31',    
  job_code INT NOT NULL,    
  store_id INT NOT NULL    
);    
    
DROP PROCEDURE IF EXISTS BatchInsert;    
    
delimiter //   -- change the statement delimiter to //    
CREATE PROCEDURE BatchInsert(IN init INT, IN loop_time INT)  -- first argument: starting ID (customizable); second: number of records to generate    
  BEGIN    
      DECLARE Var INT;    
      DECLARE ID INT;    
      SET Var = 0;    
      SET ID = init;    
      WHILE Var < loop_time DO    
          insert into employees    
          (id, fname, lname, birth, hired, separated, job_code, store_id)     
          values     
          (ID, CONCAT('chen', ID), CONCAT('haixiang', ID), Now(), Now(), Now(), 1, ID);    
          SET ID = ID + 1;    
          SET Var = Var + 1;    
      END WHILE;    
  END;    
//    
delimiter ;  -- change the delimiter back to ;    
    
-- insert inside a transaction, otherwise it will be very slow    
    
begin;    
CALL BatchInsert(1, 200000);    
commit;    
    
Query OK, 1 row affected (7.53 sec)    
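A quick sanity check after the call, assuming the table was empty beforehand:

select count(*) from employees;
-- expect 200000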

In contrast, PostgreSQL can load the same 200,000 rows with a single set-based INSERT ... SELECT over generate_series, which is much faster:

CREATE TABLE employees (  
  id INT NOT NULL,  
  fname VARCHAR(30),  
  lname VARCHAR(30),  
  birth TIMESTAMP,  
  hired DATE NOT NULL DEFAULT '1970-01-01',  
  separated DATE NOT NULL DEFAULT '9999-12-31',  
  job_code INT NOT NULL,  
  store_id INT NOT NULL  
);  
  
\timing  
  
insert into employees  
    (id, fname, lname, birth, hired, separated, job_code, store_id)   
select   
    ID, CONCAT('chen', ID), CONCAT('haixiang', ID), Now(), Now(), Now(), 1, ID   
from generate_series(1,200000) id;  
  
INSERT 0 200000  
Time: 355.652 ms  