Bulk importing from CSV files 


This utility allows you to bulk import from a CSV file into a Table. When run, it imports each line of the CSV file as a separate new Record in the specified Table.

See also:
Simple CSV import

Updating existing records

If you want the import to update some existing records, rather than always append, you need to mark the fields that should be used to match against (fields are marked with a U - see syntax below). Typically this will be a field containing a unique product identifier. If there is a match, the existing record is updated, and its built-in updated date/time field is set to the moment of import.
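The update-or-append rule above can be pictured with a minimal Python sketch (an illustration only - the real logic lives inside LoadFormFromCsv.wsf, and the field names here are hypothetical):

```python
from datetime import datetime

# Illustrative sketch of the U-marked matching rule: if an existing record
# has the same value in the match field, that record is updated in place and
# its built-in updated timestamp is refreshed; otherwise the row is appended.
def import_row(table, row, match_key):
    for record in table:
        if record.get(match_key) == row[match_key]:
            record.update(row)                  # update the matched record
            record["updated"] = datetime.now()  # refresh built-in timestamp
            return record
    new_record = dict(row)
    new_record["updated"] = datetime.now()
    table.append(new_record)                    # no match: append as new
    return new_record
```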

Deleting old records

In a typical import and update scenario, there will be some old records which were not updated; their absence from the import file means they are no longer needed, and should be deleted from the table. To identify which records should be deleted, once the import stage is complete, the system inspects the built-in updated date/time field for each record, and deletes any record which is 'old' - ie which has a date earlier than when the import process started. It only does this if one or more of the fields is marked with a D (see syntax below).

However, since tables often consist of an aggregate of imports from different sources (for example, a product table might contain data from several suppliers, each with their own import file), it is important to only delete old records from the same source that the current import file is processing. To assist with this, the field that identifies the source (ie the supplier) can be marked (fields are marked with a D - see syntax below). As the import stage is executed, a list is kept of all the different values of this field (which may just be one value for an import from a single supplier), and only old records matching these values are then deleted.
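The two-stage delete described above can be sketched like this (a Python illustration under the assumptions stated in the text, not the actual script):

```python
from datetime import datetime

# Sketch of the delete stage: after the import pass, a record is deleted
# when (a) its built-in updated timestamp predates the start of the import
# (ie it was not touched by this import), and (b) its D-marked source field
# holds one of the values seen in the import file.
def records_to_delete(records, import_started, source_field, seen_sources):
    return [r for r in records
            if r["updated"] < import_started
            and r[source_field] in seen_sources]
```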

Note: When a Record is added, any Events (sending emails etc) that would normally be triggered by the addition of a Record will be triggered. If you are bulk loading a large number of records you should therefore be careful not to overwhelm your email system.

The utility is provided in the form of a script, LoadFormFromCsv.wsf, which should be installed and run from the neatComponents server itself. It is included in the msi install, and is found by default at: C:\Program Files\Enstar\neatComponents\Scripts

Note that the CSV file should not contain a set of field names as its first line, or the script will attempt to import them as a record too.

The script can be placed in any location, but we recommend the Enstar\neatComponents\Scripts directory.

Example usage:

C:\>cscript "C:\Program Files\Enstar\neatComponents\scripts\LoadFormFromCsv.wsf" /siteid 107 /formid 55 /fields "[1,U2,3,D4>25]" /csvpath "C:\myCsv.csv" /filespath "C:\myFiles"



cscript [script path] /siteid {siteid} /formid {formid} /fields "{fields list}" /csvpath "{full path to csv file}" /filespath "{full path to location of images / files for import into records}" [/separator {"char"}]


Note: on 64-bit operating systems, run c:\windows\syswow64\cscript instead of cscript


[script path]

Path to the script. eg:

"C:\Program Files\Enstar\neatComponents\scripts\LoadFormFromCsv.wsf"

Note: if you are running on a 64-bit operating system the path will normally be "C:\Program Files (x86)\Enstar\neatComponents\scripts\LoadFormFromCsv.wsf"


/siteid

Identifies the ID of the website to import into


{siteid} - The ID of the Website (See 'How to identify the siteid of a site')


/formid

Identifies the page number of the Table to import into


{formid} - The page number of the table - this is generally the number shown after the domain name when you are configuring the Table


/fields

Maps the columns in the CSV file to the fields in the Form


{fields list} - This is a comma separated list of the target fields the data is to be imported into, enclosed in square brackets:

The list follows the order that the fields are listed in the CSV file.

  • If a field is not to be imported, this should be 0 (a zero)
  • If a field is to be imported, this should be the field number. (The field number is displayed in the Table field editor, at the top of the Edit dialog for the field)
  • If a field should be used to match existing records (to update the existing record if there is a match, rather than append as a new record) then the field number should be given a U prefix
  • If a field should be used as a delete criterion, it should be prefixed with a D
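To make the list format concrete, here is a small Python sketch (a hypothetical helper, not part of LoadFormFromCsv.wsf) that interprets a fields list such as "[1,U2,0,D4>25]":

```python
# Hypothetical parser illustrating the /fields list format described above.
# Each comma-separated entry maps one CSV column, in order, to a field:
#   0      - skip this column
#   3      - import into field 3
#   U2     - field 2, used to match (update) existing records
#   D4>25  - field 4, delete criteria; Recordlink matched on field 25
def parse_fields_list(spec):
    mapping = []
    for entry in spec.strip()[1:-1].split(","):   # strip the [ ] brackets
        entry = entry.strip()
        update = entry.startswith("U")
        delete = entry.startswith("D")
        if update or delete:
            entry = entry[1:]
        field, _, match = entry.partition(">")
        mapping.append({
            "field": int(field),
            "update": update,
            "delete": delete,
            "match_field": int(match) if match else None,
        })
    return mapping
```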


If the target field is a Recordlink field, the data in your CSV file may either be the RecordID of a record in the related Table, or it may match another field in the related Table.

  • If it contains the RecordID, simply use the field number, as normal
  • If it contains data to be matched against another field in the related Table, you need to also specify the field to match against in the related Table

using the syntax fieldnumber>matchfieldnumber

For example, if the Recordlink field is id 4, and it relates to a Table which has a text field containing the name, and that field is id 25, then the syntax would be 4>25

(You do not specify the formid of the related table, as the system can obtain that automatically from the Recordlink field definition)
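The matching step can be illustrated with a hypothetical Python sketch (the real resolution happens inside the import script; the records and field id here are examples only):

```python
# Sketch: resolving a Recordlink value. If the CSV holds a name rather than
# a RecordID, the value is matched against the specified field (here 25) in
# the related Table, and the matched record's RecordID is what gets stored.
def resolve_recordlink(csv_value, match_field, related_records):
    for record in related_records:
        if record.get(match_field) == csv_value:
            return record["RecordID"]
    return None  # no match: the link cannot be resolved

related = [
    {"RecordID": 101, 25: "Acme Widgets"},
    {"RecordID": 102, 25: "Bolt Supplies"},
]
```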

Images and files

If you are uploading images or files, the CSV file needs to specify the filename of the resource.

This can be an absolute URL (ie starting with http://) or it can be a local filename. If it is a local filename, it should be relative to the value in the /filespath parameter. Note that the full path and filename of the file or image is formed by combining the /filespath value with the value for the field in the csv file. You can choose how much of the path to place in /filespath, and place the remainder of the path in the csv.

If a resource cannot be found, the record is still imported, but without the missing resource.
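The combination of /filespath and the CSV value can be pictured like this (a Python sketch; the join behaviour is assumed from the description above, not taken from the script itself):

```python
import ntpath  # Windows-style path joining, since the script runs on the server

# Sketch: an absolute URL in the CSV is used as-is; anything else is taken
# as a path relative to the /filespath parameter.
def resolve_resource(files_path, csv_value):
    if csv_value.lower().startswith("http://"):
        return csv_value
    return ntpath.join(files_path, csv_value)
```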

Images and files in zip files

The files can be within a zip file (which makes uploading a batch of files to the server easier, and possibly faster). If the files are within a zip file, the zip filename is treated as a folder name. For example, if a set of images were zipped and uploaded, then the /filespath might be "C:\my-upload-folder" (where 'my-upload-folder' is substituted with the folder the zip file was uploaded to).

The zip file can contain a folder structure of its own (ie if you have a folder structure of files, and zip that). 


/csvpath

Full path to the csv file, eg:

/csvpath "C:\product-data.csv"


/filespath

Optional (only required if you are importing locally stored files)

Full path to the directory containing the files, eg:

/filespath "C:\images"

Note: do not include a trailing \


/separator

Optional (only required if the default comma separator is not being used in the csv file)

Character used to separate the fields in the CSV file, eg, for a semicolon:

/separator ";"



Usage suggestions

Use the Upload component to upload the csv file, and any other files or images involved in the import, to the server. (This will ensure they have the correct permissions on the server). After you have run the import script, you can delete the uploaded files using the Upload component (right click on a file, and choose Delete).

If you are uploading a set of files or images:

  • Ensure any images are resized for display on the site before you upload. Free tools like IrfanView provide batch resizing, cropping and renaming.
  • Don't upload them individually, but zip them into one file and upload that. There is no need to unzip it on the server, as the import script can 'see into' the zip file. Note there is a 200MB file size limit on uploads.

To create a file with a list of the files in a folder, browse to the folder at a command prompt, and type:

dir /B > files.csv

This will create a file named files.csv, containing a list of the filenames, one per line.

If you then need to add extra columns in to this, you can open the files.csv file in Excel, add the columns, and resave it in csv format.
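If you prefer to script that step rather than use Excel, a small Python sketch can produce the same kind of file (the folder and extra-column values here are hypothetical):

```python
import csv
import os

# Sketch: write one row per file in a folder, with optional extra columns
# appended after each filename (mirrors 'dir /B > files.csv' plus editing).
def write_files_csv(folder, out_path, extra_columns=None):
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for name in sorted(os.listdir(folder)):
            writer.writerow([name] + list(extra_columns or []))
```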

Direct ODBC connection
While it would be possible to use ODBC to access the database directly, such access bypasses the business rules and security inherent in neatComponents, and is not supported for general use. Instead, we provide the scripting interface described above, which allows programmatic data access with the same safety restrictions and triggers as if the data were entered via the web user interface.
If you have an application that would only function using a direct ODBC connection please contact us to discuss your requirement.