https://man.liquidfiles.com
LiquidFiles Documentation

Before uploading our first file, a little bit of background information is required. To upload the actual file to the LiquidFiles system, we have two alternative methods. We can either use html form based uploads, or we can use straight JSON.

JSON based uploads

JSON is what's being used for all other aspects of the LiquidFiles API but when it comes to sending files it poses a bit of a problem. Consider the following example request:

{"file":
  {
    "name":"logo.gif",
    "data":?????
  }
}

Since JSON is a text based protocol, we can't just insert binary data into the "data" tag. The standard programmatic solution to problems like this is to Base64 encode the binary data into something that can be transmitted as text. This works, but Base64 encoding has several problems, some of which include:

  • Approximately 33% file size increase. We're taking 3 bytes of binary data and spreading it over 4 bytes using only text writable characters. With large files, this leads to a significant increase.
  • It's impossible to do any fancy server side file handling. The entire JSON request will be loaded into the web application's memory. Normally when files are sent using LiquidFiles, the web server (nginx) takes care of the binary file data and the web application only deals with moving files around; the actual file data never passes through the web application. This keeps LiquidFiles fast and efficient. If you were to send a 1GB file using this method, first it would be 1.3GB transferred (Base64), and this would be loaded (twice - as raw Base64 data and decoded) into the memory of LiquidFiles before it could be written to disk. This is very slow and inefficient.
  • With these limitations, the maximum message size you can upload with this method is 100MB. While this should still be plenty for a lot of applications, if you upload files near that size on a regular basis, we recommend switching to the form based upload mechanism instead.

Request Info and Parameters for JSON based Uploads

Request Info
Info Value
Request URL /shares/_share_id_/folders/_folder_id_/files
Request VERB POST
Request Parameters
Parameter Type Description
name String The filename of the file you're uploading.
data String The data of the file, encoded in Base64 encoding (max size 100MB).
content_type String (Optional) The Content-Type of the uploaded file. If not present the server will calculate the Content-Type.
checksum String (Optional) The SHA-1 checksum of the uploaded file. If not present the server will calculate the SHA-1 checksum.
crc32 String (Optional) The CRC32 checksum of the uploaded file. If not present the server will calculate the CRC32 checksum.

To upload files using Base64 Encoding, please see the following example using curl and bash:

cat <<EOF | curl -s -X POST -H 'Content-Type: application/json' --user "nkpIxMK9ucUUE7FvfNpdAf:x" -d @- https://test.host/shares/project-alpha/folders/root/files
{"file":
  {
    "name":"Presentation1.pptx",
    "data":"$(base64 /path/to/Presentation1.pptx | tr -d '\n')"
  }
}
EOF

Note the tr -d '\n': some base64 implementations (GNU coreutils, for example) wrap their output at 76 columns by default, and the embedded newlines would break the JSON string.
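If you want to supply the optional checksum and crc32 parameters yourself instead of letting the server calculate them, a sketch like the following works. The sample file is created inline just to keep the sketch self-contained (substitute the file you're actually uploading), and we're assuming the server expects the standard zlib-style CRC32, which is what the example response later in this document looks like:

```shell
#!/bin/sh
# Sketch: compute the optional "checksum" (SHA-1) and "crc32" parameter
# values locally. A tiny sample file is created here so the sketch is
# self-contained; substitute the file you're actually uploading.
file="sample.txt"
printf 'hello' > "$file"

# SHA-1 hex digest for the "checksum" parameter
# (on macOS, use: shasum -a 1 "$file")
checksum=$(sha1sum "$file" | awk '{print $1}')

# CRC32 as 8 hex digits for the "crc32" parameter; this assumes the
# standard zlib CRC32 variant
crc32=$(python3 -c '
import sys, zlib
print(format(zlib.crc32(open(sys.argv[1], "rb").read()) & 0xffffffff, "08x"))
' "$file")

echo "checksum=$checksum crc32=$crc32"
```

The two values can then be passed alongside name and data in the JSON request above.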

HTML form based uploads

This is a much more efficient way of sending files. The only real problem with it is that it doesn't conform to the API's standard way of using JSON for everything, which can lead to some kludges when you're implementing this in your application.

It's based on this simple html form:

<form action="https://liquidfiles.example.com/shares/project-alpha/folders/root/files" enctype="multipart/form-data" method="post">
  <input type="file" name="Filedata">
</form>

which would lead to the raw data being transmitted like this:

Content-type: multipart/form-data; boundary=AaB03x

--AaB03x
content-disposition: form-data; name="Filedata"; filename="filename.ext"
Content-Type: image/gif

... contents of filename.ext ...
--AaB03x--

While this may not look very different, it enables us to send binary data as content, and the web server can intercept it before the web application sees it, and so on. Much more efficient.

This will also send the files separate from the message, and we'll just include references to the files when sending the message.

Request

Request URL: /shares/_project_id_/folders/_folder_id_/files
Request VERB: POST
Parameters:
  file:     # MultiPart:  The html multipart file data
Response:
  File      # JSON File:  The JSON File API Response, please see the
            #             View Share File API for more info on the response

Example Request using curl

curl -X POST --user nkpIxMK9ucUUE7FvfNpdAf:x -F file=@Presentation1.pptx https://test.host/shares/project-alpha/folders/goals/files

{"file":
  {
    "id":"presentation1-pptx",
    "folder_id":"goals",
    "name":"Presentation1.pptx",
    "size":35203,
    "size_to_human":"34 KB",
    "content_type":"application/vnd.ms-powerpoint",
    "checksum":"a85c83ddd1db53aeec9225c139a73f5c417aec2a",
    "crc32":"03661ea5",
    "av_scanned":false,
    "av_infected":false,
    "deleted":false,
    "created_at":"2017-01-12T00:13:52.030Z",
    "updated_at":"2017-01-12T00:13:52.030Z"
  }
}

Please note that in this example the curl syntax @Presentation1.pptx means to load the data from the file Presentation1.pptx. If you enter '-F file=Presentation1.pptx' without the @, it means send the string "Presentation1.pptx" as the file, which won't send the data from the file.

Also please note that the file will most likely not have been AV scanned by the time you get the API response. Scanning happens in the background after the file has been uploaded, so by the time LiquidFiles responds it won't have happened yet.

Sending files in chunks

One of the additions in API v3 is the ability to send files in pieces. This only works with the html form based upload, and works by you splitting a large file into smaller pieces (chunks) and sending the chunks individually. When completed, the server will rebuild the complete file. The benefit of this is that if one upload fails, the entire file doesn't have to be retransmitted. Also, some devices such as Microsoft ISA and TMG proxies struggle with files larger than 2GB. Sending files in chunks gets around this and enables files of unlimited (well, limited by disk space) size.

Request

Request URL: /shares/_project_id_/folders/_folder_id_/files
Request VERB: POST
Parameters:
  file:          # The html multipart file data
  name           # String.  The file name. This is needed because we can no longer use the filename from the
                 #          html multipart file data.
  chunk          # Integer. The current piece, between 0 and (number of pieces - 1)
  chunks         # Integer. The total number of pieces
Response:
  File      # JSON File:  The JSON File API Response, please see the
            #             View Share File API for more info on the response.

Example Request using curl

In this example, we're taking bigfile.zip, splitting it into two files: bigfile.zip.00 and bigfile.zip.01, and sending them individually like this:

#!/bin/sh

# Some nice variables
api_key="Y9fdTmZdv0THButt5ZONIY"
server="https://liquidfiles.example.com"

# Split bigfile.zip into pieces named bigfile.zip.00, bigfile.zip.01, ...
# (-d gives numeric suffixes; this is GNU split syntax)
split -d -b 100M bigfile.zip bigfile.zip.

curl -X POST --user "$api_key:x" -F file=@bigfile.zip.00 -F name=bigfile.zip -F chunk=0 -F chunks=2 $server/shares/project-alpha/folders/root/files
curl -X POST --user "$api_key:x" -F file=@bigfile.zip.01 -F name=bigfile.zip -F chunk=1 -F chunks=2 $server/shares/project-alpha/folders/root/files

In this example there are a few things to highlight:

  • We will get the attachment id only when the last piece has been uploaded.
  • The individual chunk sizes don't matter. If you're sending three chunks, they can be two big ones and one small, three of equal size, or one big, one medium and one small. It doesn't matter.
  • You can send chunks in any order you want, as long as you number the chunks correctly. You can, for instance, begin by sending the second chunk with chunk=1, followed by the first one with chunk=0. It's the chunk number that orders the pieces correctly on the server.
  • The script above doesn't have any error handling. You need to make sure that you get an http response code 200 (success) after each chunk, and resend any chunks that fail accordingly. Only when all pieces are uploaded will the server rebuild the attachment and give you the attachment id.
  • The "name" parameter needs to be unique for that user until the entire file has been uploaded. It's the only thing we have to identify the file. If you try to send multiple files with the same name to the same user at the same time (the user with the api key Y9fdTmZdv0THButt5ZONIY in this example), there will be a right mess on the server, as it has no way of distinguishing between the two different files if the name is the same.
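To illustrate the error handling point above, here's a minimal retry sketch. The api key, server and share path are the hypothetical values from the example above, and the 3-attempt limit and 1 second pause are arbitrary choices:

```shell
#!/bin/sh
# Sketch: upload chunks with basic error handling, resending a chunk
# until the server answers HTTP 200 and giving up after 3 attempts.
api_key="Y9fdTmZdv0THButt5ZONIY"
server="https://liquidfiles.example.com"
url="$server/shares/project-alpha/folders/root/files"

# Upload a single chunk; succeed only if the server returns HTTP 200.
upload_chunk() {
  chunk_file=$1; chunk_no=$2; total=$3
  code=$(curl -s -o /dev/null -w '%{http_code}' -X POST --user "$api_key:x" \
    -F "file=@$chunk_file" -F "name=bigfile.zip" \
    -F "chunk=$chunk_no" -F "chunks=$total" "$url")
  [ "$code" = "200" ]
}

# Retry a chunk up to 3 times before giving up.
send_with_retries() {
  tries=0
  until upload_chunk "$1" "$2" "$3"; do
    tries=$((tries + 1))
    [ "$tries" -ge 3 ] && return 1
    sleep 1
  done
}

# Usage (as in the two-chunk example above):
# send_with_retries bigfile.zip.00 0 2 && send_with_retries bigfile.zip.01 1 2
```

Remember that only the response to the final chunk carries the attachment id, so it's the output of the last successful call you'll want to capture.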

Checking what chunks have been uploaded

This API call requires LiquidFiles v2.5 or later.

When chunks are uploaded, they are stored for a week before being removed (if the file was not uploaded completely). Another thing to note is that if a chunk (or a file, for that matter) is interrupted during transit, the half uploaded chunk (or file) will be discarded. If you want to resume uploads, you can query the server to see what chunks are available and upload any missing chunks.

Also, a final thing to note is that chunks are unique per user and per filename. We don't have an Attachment ID (which would otherwise be unique) until the file has been completely uploaded. This means that if the same user starts uploading a new file in chunks with the same filename as the old one, and you use this API call to check what's already uploaded, you may well end up completing the old file with chunks from the new file.


Request

Request URL: /shares/_share_id_/folders/_folder_id_/files/available_chunks
Request VERB: GET
Format: JSON
Parameters:
  name           # String.  The file name.
Response:
  chunks         # Array.   The response will be an array with chunk ID's and chunk sizes (in bytes).

Example Request using curl

#!/bin/sh

# Some nice variables
api_key="Y9fdTmZdv0THButt5ZONIY"
server="https://liquidfiles.company.com"

curl -X GET --user "$api_key:x" -H 'Content-Type: application/json' "$server/shares/project-alpha/folders/root/files/available_chunks?name=Presentation1.pptx"

{"chunks":[
  {
    "id":0,
    "size":104857600
  }, {
    "id":1,
    "size":104857600
  }]
}

In this case the first two chunks have been uploaded, both 100MB in size. You can now continue with the third chunk (chunk id 2, counting from 0), starting at the 200MB offset.
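A resuming client needs to turn that response into the list of chunk ids still to send. Here's a minimal sketch of that step; the response is the sample from above, total_chunks=3 is an assumed total for this upload, and python3 is our choice for the JSON parsing rather than anything the API requires:

```shell
#!/bin/sh
# Sketch: given an available_chunks response, work out which chunk ids
# still need to be uploaded. In a real client, $response would come from
# the available_chunks GET request shown above.
response='{"chunks":[{"id":0,"size":104857600},{"id":1,"size":104857600}]}'
total_chunks=3

# Print the ids in 0..(total_chunks - 1) that the server doesn't have yet
missing=$(printf '%s' "$response" | python3 -c '
import json, sys
have = {c["id"] for c in json.load(sys.stdin)["chunks"]}
total = int(sys.argv[1])
print(" ".join(str(i) for i in range(total) if i not in have))
' "$total_chunks")

echo "chunks still to upload: $missing"
# → chunks still to upload: 2
```

Each missing id would then be re-sent with the chunked POST shown earlier; once all pieces are in, the server rebuilds the file and returns the attachment details.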