To ensure better recoverability:
Break large input files into smaller units.
For example, suppose you use bcp with a batch size of 100,000 rows to bulk copy in 300,000 rows, and a fatal error occurs after row 200,000. The first two batches (200,000 rows) are already committed to Adaptive Server, so only the final batch needs to be reloaded. Had you not used batching, the failure would have prevented bcp from copying any rows into Adaptive Server.
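Breaking a large input file into smaller units can be done with the standard split utility before running bcp. The sketch below uses a small stand-in file; all file, table, login, and server names are placeholders, and the bcp invocation is shown commented out because it requires a live server.

```shell
# Create a small stand-in input file (30 rows) for illustration.
seq 1 30 > bigfile.txt

# Break it into 10-row pieces named part_aa, part_ab, part_ac.
split -l 10 bigfile.txt part_
ls part_*

# Each piece can then be bulk copied separately, e.g.
# (placeholder names; requires a live Adaptive Server):
# bcp pubs2..mytable in part_aa -c -b 10000 -U sa -S MYSERVER
```

If one piece fails, only that piece needs attention; the pieces already loaded stay committed.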
Set the trunc log on chkpt database option to true (on).
The log records for a batch become available for truncation as soon as the batch completes. If you copy into a database that has the trunc log on chkpt option set on (true), the next automatic checkpoint removes the log entries for completed batches. Together, batching breaks up a large bcp operation and this log truncation keeps the transaction log from filling.
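Assuming you can reach the server through isql, the option can be enabled roughly as follows; the database name (pubs2), login, and server name are placeholders. sp_dboption must be executed from the master database, and a manual checkpoint in the target database makes the change take effect immediately.

```shell
# Sketch: enable trunc log on chkpt before a large bcp load.
cat <<'EOF' > enable_trunc.sql
use master
go
exec sp_dboption pubs2, "trunc log on chkpt", true
go
use pubs2
go
checkpoint
go
EOF

# Run it (commented out; requires a live Adaptive Server):
# isql -U sa -S MYSERVER -i enable_trunc.sql
```

Remember that with this option on, the log is not being saved for dump-based recovery, which is why it suits one-off bulk loads rather than production databases with regular log dumps.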
Set -b batch_size to 10.
Setting the batch size to 10 causes bcp to reject only the 10-row batch that contains the defective row. The error log from this setting allows you to identify exactly which row failed.
A batch size of 10 is the smallest that bcp accepts. If you specify a smaller number, bcp automatically raises it to 10.
Because bcp allocates at least one data page per batch, setting -b batch_size to 10 creates data pages with only 10 rows on each page. This setting therefore loads data slowly and wastes storage space, so use it for isolating a defective row rather than for routine loads.
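The 10-row batch arithmetic can be sketched in shell: given the row at which a load fails, compute which 10-row batch bcp would reject. The row number below is hypothetical, and the commented bcp invocation uses placeholder database, login, and server names.

```shell
# Rerun the failed load at the minimum batch size, writing rejected rows
# to an error file (placeholder names; requires a live server):
# bcp pubs2..authors in authors.txt -c -b 10 -e bcp.errors -U sa -S MYSERVER

# Sketch of the batch arithmetic (1-based batch numbering assumed):
defective_row=237
batch=$(( (defective_row - 1) / 10 + 1 ))
first=$(( (batch - 1) * 10 + 1 ))
last=$(( batch * 10 ))
echo "Row $defective_row falls in batch $batch (rows $first-$last)"
```

Only that one 10-row batch is rejected; every other batch is committed, so the repair is limited to the rows listed in the error file.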