CSV-Text File
Syntax
Details
Syntax
Details
From version 5.3.1
Write a very long text file on the server file system through a three-step approach:
first, open the output stream through the "openTextFile" method
next, add as many rows as needed, by invoking the "writeToTextFile" method once for each row to add
finally, close the output stream, by invoking the "closeTextFile" method
Data can be retrieved from a SQL query through the executeQueryWithCallback method, since this ensures a limited amount of memory consumption.
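The three steps above can be sketched as follows. The method names come from this page, but the argument order and the in-memory "utils" stub are assumptions for illustration only; the exact signatures are defined in the Syntax sections below.

```javascript
// Illustrative sketch only: "utils" is a minimal in-memory stub standing in for
// the platform object; the real methods write to the server file system.
const files = {};
const utils = {
  openTextFile(fileName, overwrite, directoryId, fileAppend) {
    const fileId = fileName;            // the real method returns a text id for the stream
    if (!fileAppend || !(fileId in files)) files[fileId] = "";
    return fileId;
  },
  writeToTextFile(fileId, row) {        // no carriage return is added automatically
    files[fileId] += row;
  },
  closeTextFile(fileId) { /* the real method flushes and closes the stream */ }
};

// 1. open the output stream
const fileId = utils.openTextFile("export.txt", true, "DOC_DIR", false);
// 2. add as many rows as needed (e.g. one per record from executeQueryWithCallback)
["first line", "second line"].forEach(line => utils.writeToTextFile(fileId, line + "\r\n"));
// 3. close the output stream
utils.closeTextFile(fileId);
```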
Syntax for openTextFile method
or
Details
Syntax for writeToTextFile method
Details
Syntax for closeTextFile method
Example
Important note: do not use this method for very long text files (e.g. > 10 MB), since it can consume an excessive amount of memory on the server and reduce the scalability of your application. For very long files, use the method described below.
Details
A text file can be saved with different charsets. You have to know in advance which charset to use; examples are UTF-8 and Windows-1252.
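Why the charset matters can be seen with the standard TextDecoder (used here purely for illustration; the platform method takes the charset as an argument): the same bytes decode differently depending on the charset chosen.

```javascript
// 0xE8 is the letter "è" in Windows-1252, but is not a valid stand-alone byte in UTF-8.
const bytes = new Uint8Array([0x63, 0x61, 0x66, 0xE8]); // "cafè" saved as Windows-1252
const win1252 = new TextDecoder("windows-1252").decode(bytes);
const utf8 = new TextDecoder("utf-8").decode(bytes);
// win1252 holds the intended text; utf8 ends with the U+FFFD replacement character
```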
Important note: do not use this method for very long text files (e.g. > 10 MB), since it can consume an excessive amount of memory on the server and reduce the scalability of your application. For very long files, use the method described below.
Details
From version 5.3.1
For a very long text file, it is better to read it row by row, in order to limit the amount of data kept in server memory.
The following method requires a callback function, which is invoked automatically for each line read from the file. The current line (a string) is passed to the callback.
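The callback contract can be sketched like this. Note that, per the Details below, the real method receives the callback function name as a string; the reader function here is an in-memory placeholder, not the platform method.

```javascript
// The callback must be declared before invoking the read method;
// it receives one line (string) per invocation.
let total = 0;
function processLine(row) {          // invoked once per line read from the file
  total += row.length;
}

// In-memory stand-in for the platform read method, for illustration only:
function readTextFileWithCallback(content, callbackFun) {
  content.split("\n").forEach(line => callbackFun(line));
}

readTextFileWithCallback("abc\nde", processLine);
```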
Details
From version 5.3.1
For a very long text file, it is better to read it row by row, in order to limit the amount of data kept in server memory.
The following method requires a callback function, which is invoked automatically for each line read from the file. The current line (a string) is passed to the callback.
Details
From version 5.3.1
For a very long text file, it is better to read it row by row, in order to limit the amount of data kept in server memory.
The following method requires a callback function, which is invoked automatically for each line read from the file. The current line (a string) is passed to the callback.
Details
Syntax
Details
Syntax
Details
Syntax
Details
Read up to the specified number of rows, starting from the specified row index (0..x), from the CSV file stored in the specified path, and get back its content.
dirId - path identifier
fileName - name of the CSV file
attributeNames - list of attributes assigned to each column, from the leftmost column to the right
Syntax
Important note: this method should NOT be used for very long CSV files, since it can consume a large amount of memory on the server. For a very long CSV file, please refer to the next method.
Details
startRow - row index; if null, the first row (0) is assumed
blockSize - max number of rows to read, if available; if null, it is set to 10000
attributeNames - list of attributes assigned to each column, from the leftmost column to the right
A list of js objects is returned, where each object is expressed as a set of <attribute name, related value> pairs; a list of length 0 is returned when no more rows are available. Nested objects and lists of objects are also supported, depending on the definition of the attribute list. A few examples of attributes: [ "attr1", "subobject.attrsub1", "subobject.attrsub2", "sublist[0].subattr3", "sublist[0].subattr4", "sublist[1].subattr5" ]
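How a flat CSV row maps to nested objects through attribute paths like the examples above can be sketched with a small helper. This is an illustrative reimplementation of the mapping idea, not the platform code.

```javascript
// Build a js object from a row of values and a list of attribute paths
// such as "attr1", "subobject.attrsub1", "sublist[0].subattr3".
function rowToObject(attributeNames, values) {
  const obj = {};
  attributeNames.forEach((path, i) => {
    const parts = path.split(".");
    let node = obj;
    parts.forEach((part, j) => {
      const m = part.match(/^(\w+)\[(\d+)\]$/);   // e.g. "sublist[0]"
      const last = j === parts.length - 1;
      if (m) {
        const [, name, idx] = m;
        node[name] = node[name] || [];
        if (last) node[name][idx] = values[i];
        else node = node[name][idx] = node[name][idx] || {};
      } else {
        if (last) node[part] = values[i];
        else node = node[part] = node[part] || {};
      }
    });
  });
  return obj;
}

const row = rowToObject(
  ["attr1", "subobject.attrsub1", "sublist[0].subattr3"],
  ["A", "B", "C"]
);
```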
From version 5.3.2
For very long CSV files (e.g. > 1 MB), it is better to avoid reading the whole content and keeping it in server memory until the end of processing, since this could lead to excessive memory consumption.
The best approach is to read the file row by row, keeping only one row in memory at a time. The following method allows this: a callback function, invoked automatically while reading the CSV file, processes a single row.
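The row-by-row pattern can be sketched as follows. The reader function is an in-memory placeholder (the real method reads from the file system and receives the callback function name, per the Details below); what matters is the contract: one js object per row, never the whole file.

```javascript
// Callback invoked once per CSV row, with a js object built from attributeNames.
function processRow(obj) {
  results.push(obj.code + ":" + obj.qty);
}

// In-memory stand-in for the platform method, for illustration only:
function readCSVWithCallback(content, sep, skipFirstRow, callbackFun, attributeNames) {
  const lines = content.split("\n");
  lines.slice(skipFirstRow ? 1 : 0).forEach(line => {
    const obj = {};
    line.split(sep).forEach((v, i) => { obj[attributeNames[i]] = v; });
    callbackFun(obj);               // only one row is held in memory at a time
  });
}

const results = [];
readCSVWithCallback("CODE,QTY\nA1,10\nB2,20", ",", true, processRow, ["code", "qty"]);
```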
Syntax
Details
Example
From version 5.3.2
For very long CSV files (e.g. > 1 MB), it is better to avoid reading the whole content and keeping it in server memory until the end of processing, since this could lead to excessive memory consumption.
The best approach is to read the file row by row, keeping only one row in memory at a time. The following method allows this; moreover, it is optimized for loading data into a database table.
This method allows reading part of the data from the CSV file and part from default values. You can also read only a subset of the columns provided in the CSV file.
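The mapping rules (mapped columns are kept, unmapped columns are ignored, defaults fill in the remaining fields) can be sketched like this. It is an illustrative reimplementation of the logic described by the "csvFields" and "defaultFieldNames" arguments below, not the platform code.

```javascript
// Build one table record from one CSV row:
// - csvFields entries with a "fieldName" map the column to a table field (optionally trimmed)
// - entries without "fieldName" mean the column is ignored when importing
// - defaultFieldNames supplies the constant additional fields
function mapRow(values, csvFields, defaultFieldNames) {
  const record = Object.assign({}, defaultFieldNames);
  csvFields.forEach((field, i) => {
    if (!field.fieldName) return;            // column ignored
    let v = values[i];
    if (field.trim && typeof v === "string") v = v.trim();
    record[field.fieldName] = v;
  });
  return record;
}

const record = mapRow(
  [" A1 ", "ignored", "10"],
  [{ fieldName: "CODE", trim: true }, {}, { fieldName: "QTY" }],
  { STATUS: "E" }
);
```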
Syntax
Details
Example
Write a CSV file on the server file system, starting from a list of data provided in input.
Data can be retrieved from a SQL query through the executeQuery method.
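The shape of the input expected by this method can be sketched as follows: a list of js objects (for instance the result of executeQuery), a separator, the attributes to export and the column headers. The serialization below is illustrative only; the real method writes the file itself.

```javascript
// Turn a list of js objects into CSV content, with optional header row.
function toCSV(data, sep, exportAttributes, exportColumns, printTitles) {
  const rows = [];
  if (printTitles) rows.push(exportColumns.join(sep));
  data.forEach(obj => rows.push(exportAttributes.map(a => obj[a]).join(sep)));
  return rows.join("\r\n");
}

const csv = toCSV(
  [{ code: "A1", qty: 10 }, { code: "B2", qty: 20 }],
  ";",
  ["code", "qty"],
  ["Code", "Quantity"],
  true
);
```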
Syntax
Important note: this method should NOT be used for very long CSV files, since it can consume a large amount of memory on the server. For a very long CSV file, please refer to the next method.
Details
From version 5.3.2
Write a very long CSV file on the server file system through a three-step approach:
first, open the output stream through the "openCSVFile" method
next, add as many rows as needed, by invoking the "writeToCSVFile" method once for each row to add
finally, close the output stream, by invoking the "closeCSVFile" method
Data can be retrieved from a SQL query through the executeQueryWithCallback method, since this ensures a limited amount of memory consumption.
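The three steps can be sketched as follows. The method names come from this page, but the argument order and the in-memory "utils" stub are assumptions for illustration only; the exact signatures are defined in the Syntax sections below.

```javascript
// Illustrative stub: each row is a js object, serialized according to
// the attributes declared when opening the stream.
const csvFiles = {};
const utils = {
  openCSVFile(fileName, sep, attributesToWrite) {
    csvFiles[fileName] = { sep, attributesToWrite, rows: [] };
    return fileName;                         // the real method returns a stream id
  },
  writeToCSVFile(fileId, obj) {              // one js object per row
    const f = csvFiles[fileId];
    f.rows.push(f.attributesToWrite.map(a => obj[a]).join(f.sep));
  },
  closeCSVFile(fileId) { /* the real method flushes and closes the stream */ }
};

const fileId = utils.openCSVFile("export.csv", ";", ["code", "qty"]);
utils.writeToCSVFile(fileId, { code: "A1", qty: 10 });   // e.g. inside an executeQueryWithCallback callback
utils.writeToCSVFile(fileId, { code: "B2", qty: 20 });
utils.closeCSVFile(fileId);
```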
Syntax for openCSVFile method
Details
Syntax for writeToCSVFile method
Details
Syntax for closeCSVFile method
Details
Example
From version 5.3.2
Write a very long CSV file on the server file system, starting from a SQL query.
Details
Example
From version 5.3.2
Starting from a CSV file stored in Google Cloud Storage, this method loads the whole content into a CloudSQL instance, in the specified table/schema, always in append mode.
In order to use this function, a Google Cloud administrator must grant the correct permissions to the GCS and CloudSQL services.
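The completion check described by the "where" and "timeout" arguments below (a COUNT query polled every 2 seconds until at least one record is found or the timeout expires) can be sketched as a generic polling loop. "runCountQuery" is a hypothetical stand-in for the real query execution.

```javascript
// Poll a COUNT query until it returns > 0 or the timeout expires.
async function waitForLoad(runCountQuery, timeoutSeconds, intervalMs) {
  const deadline = Date.now() + timeoutSeconds * 1000;
  while (Date.now() < deadline) {
    if (await runCountQuery() > 0) return true;   // data has been loaded
    await new Promise(r => setTimeout(r, intervalMs));
  }
  throw new Error("timeout waiting for the CSV load to complete");
}

// Simulated query: the table becomes non-empty at the third poll.
let calls = 0;
const done = waitForLoad(async () => (++calls >= 3 ? 1 : 0), 5, 10);
```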
Details
Example
From version 6.0.2
Starting from a text file stored on the server file system, this method returns the file's charset.
Syntax
The return value can be any of the following:
Chinese
ISO-2022-CN
BIG-5
EUC-TW
HZ-GB-2312
GB-18030
Cyrillic
ISO-8859-5
KOI8-R
WINDOWS-1251
MACCYRILLIC
IBM866
IBM855
Greek
ISO-8859-7
WINDOWS-1253
Hebrew
ISO-8859-8
WINDOWS-1255
Japanese
ISO-2022-JP
Shift_JIS
EUC-JP
Korean
ISO-2022-KR
EUC-KR
Unicode
UTF-8
UTF-16BE / UTF-16LE
UTF-32BE / UTF-32LE / X-ISO-10646-UCS-4-3412 / X-ISO-10646-UCS-4-2143
Others
WINDOWS-1252
US-ASCII
Argument - Description
fileName - file name; it can include a subpath to append to the base path specified through directoryId
fileContent - text content to save
overwrite - true to overwrite the file content if it already exists; false to skip the operation and return false as result
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
outcome - true if the operation has been executed successfully; an exception is thrown otherwise
Argument - Description
fileName - file name; it can include a subpath to append to the base path specified through directoryId
fileContent - text content to save
overwrite - true to overwrite the file content if it already exists; false to skip the operation and return false as result
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
outcome - true if the operation has been executed successfully; an exception is thrown otherwise
charsetName - charset name
Argument - Description
fileName - text file name to create within the specified directory
overwrite - true|false flag used to define whether the file must overwrite a previous one
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
fileAppend - true|false flag used to define whether the rows to add must be appended at the end of an already existing file
fileId - a text id representing the output stream, to be referred to in the next two methods
charsetName - charset name
Argument - Description
fileId - a text id representing the output stream, needed to work with the correct output stream
row - a text to write; this text does NOT contain a carriage return, so it is up to the programmer to add it, if needed (e.g. "\r\n")
Argument - Description
fileId - a text id representing the output stream, needed to work with the correct output stream
Argument - Description
filePath - file path + file name of the file to read
textString - the content of the read file
Argument - Description
filePath - file path + file name of the file to read
charSet - the charset to use when reading the text file
textString - the content of the read file
Argument - Description
fileName - text file name to read
directoryId - directory id, identifying where the file is located; it works both with the server file system and with Google Cloud Storage
callbackFunName - callback function name, invoked by this method for each row
Argument - Description
filePath - text file name to read
directoryId - directory id, identifying where the file is located; it works both with the server file system and with Google Cloud Storage
callbackFunName - callback function name, invoked by this method for each row
charset - charset to use when reading the file; e.g. UTF-8
Argument - Description
filePath - absolute path on the server file system, including the file name to read
callbackFunName - callback function name, invoked by this method for each row
charset - charset to use when reading the file; e.g. UTF-8
Argument - Description
fileName - it can include a subpath to append to the base path specified through directoryId
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
outcome - true if the operation has been executed successfully; an exception is thrown otherwise
Argument - Description
fileName - CSV file name to read from the specified directory
directoryId - directory identifier, used to define the absolute path, on the central server, where the file is already located
sep - field separator; allowed values: , or ;
skipFirstRow - true|false flag, indicating whether the first row in the file should be skipped, for instance because it contains the column header names
callbackFunName - callback function name, invoked for each row read from the CSV file. Such a function must be declared before the invocation of this one, and must include an argument used to receive the js object representing the data read for the CSV row
attributeNames - list of attributes assigned to each column; for each row, a js object will be created and filled with the values coming from the fields, each with the specified attribute name
Argument - Description
fileName - CSV file name to read from the specified directory
directoryId - directory identifier, used to define the absolute path, on the central server, where the file is already located; it can be null
sep - field separator; allowed values: , or ;
skipFirstRow - true|false flag, indicating whether the first row in the file should be skipped, for instance because it contains the column header names
datastoreId - data source id which identifies where the table to load is located
tableName - name of the table where data will be loaded
defaultFieldNames - javascript object containing the list of additional table fields to fill in, expressed as field name + value; example: { STATUS: "E", CREATE_DATE: new java.sql.Date() }
csvFields - list of javascript objects, one for each CSV column; each object can optionally contain these attributes: { fieldName: "....", convertToSqlDate: "yyyy-MM-dd", convertToSqlTimestamp: "yyyy-MM-dd HH:mm:ss" }. This object defines whether the CSV column must be used to fill in the corresponding table field, specified through the "fieldName" attribute; if this attribute is omitted, the CSV column is ignored when importing the row. The two convertXXXX attributes are optional and must be used when a CSV column must be mapped to a DATE or DATETIME table field. Another property is trim: true|false, used to remove spaces from text type fields. Another property is "progressive": INTERNAL_UUID or INTERNAL_P.
Argument - Description
fileName - CSV file name to create within the specified directory
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
data - list of js objects, one for each row to save into the CSV file
sep - field separator; allowed values: , or ;
languageId - IT, EN, ...
exportAttributes - list of attribute names to export into the CSV file
exportColums - list of column headers to include as the first row in the CSV file
formatColumns - optional list of formatters to apply to the data defined in each js object, in order to convert data to text (e.g. "0.00" to convert a number to a string)
printTitles - true|false flag, indicating whether the column headers must be included as the first row in the CSV file
outcome - "" if the operation has been executed successfully; the error message otherwise
Argument - Description
fileName - CSV file name to create within the specified directory
overwrite - true|false flag used to define whether the file must overwrite a previous one
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
sep - field separator; allowed values: , or ;
fileAppend - true|false flag used to define whether the rows to add must be appended at the end of an already existing file
attributesToWrite - list of attribute names to export into the CSV file
formatters - optional list of formatters to apply to the data defined in each js object, in order to convert data to text (e.g. "0.00" to convert a number to a string)
fileId - a text id representing the output stream, to be referred to in the next two methods
Argument - Description
fileId - a text id representing the output stream, needed to work with the correct output stream
obj - javascript object containing the data to write as a row in the CSV file; it contains the attributes referred to in the previous method through the "attributesToWrite" argument
Argument - Description
fileId - a text id representing the output stream, needed to work with the correct output stream
Argument - Description
fileName - CSV file name to create within the specified directory
overwrite - true|false flag used to define whether the file must overwrite a previous one
directoryId - directory identifier, used to define the absolute path, on the central server, where the file will be stored; if null, there must be only one entry for this application
sep - field separator; allowed values: , or ;
languageId - language to use when formatting date values
printTitles - true|false flag used to include, as the first row of the CSV file, the SELECT field names
fileAppend - true|false flag used to define whether the rows to add must be appended at the end of an already existing file
formatColumns - optional list of formatters to apply to the data defined in each js object, in order to convert data to text (e.g. "0.00" to convert a number to a string)
datasourceId - datasource id which identifies the database schema where data is read from
sqlQuery - SQL query to execute, in order to fill in the CSV file
pars - list of javascript values, related to binding variables expressed as ?
Argument - Description
instance - CloudSQL instance name; this value is reported within the Google Cloud Console
bucketPath - GCS bucket name + subpath + file name for the CSV, expressed as gs://bucketname/filename.csv
dbSchema - database schema where the table to load is located
dataSourceId - datasource id related to the schema specified with the previous argument
tableName - table where the data coming from the CSV file will be saved
where - can be null; optional WHERE condition to include in the SQL query automatically executed (every 2 seconds) in order to check whether the data to import has been loaded. If null, it is assumed that the table is empty and the executed query is "SELECT COUNT(*) FROM tablename"; this method terminates when the returned value is > 0. If this argument is filled in, the query is "SELECT COUNT(*) FROM tablename WHERE yourwhere"; again, the method terminates when there is at least one record
timeout - max number of seconds to wait before a timeout is fired, while waiting for the completion of the data loading on the table
columns - list of table fields where the CSV data will be saved; these fields must be as many as the columns in the CSV file