Welcome to the MEGA SDK! We hope that it will prove useful to developers who are interested in integrating
MEGA support into their applications.
The MEGA SDK consists of code and documentation that enables you to make use of MEGA's API functionality
at a comfortably high level of abstraction. Its core component, a code module called the client access engine,
maintains a current copy of the user's account in memory (which includes all relevant files, folders, contacts and
shares), accepts commands from the application and notifies the application of command results and other
updates through callbacks.
The MEGA client access engine comes as a set of C++ classes and interfaces. If you are using C++, you can
simply add them to your project. You then instantiate the MegaClient class (which holds the session state) and
pass it an instance of your implementation of the HttpIO interface (which handles network requests and blocking)
and MegaApp (through which you receive the engine's callbacks).
The core code is reasonably platform independent (if you encounter any issues with your specific C++ compiler
environment, please let us know). To illustrate practical usage, a sample application (a basic ftp-style interactive
console client) is included.
3 Why do you provide a code module rather than documenting the API interface in sufficient
detail for me to implement it myself?
Two reasons:
Complexity/efficiency Since all of MEGA's crypto logic runs on the client side, you'd be looking at a project
exceeding 5,000 lines of code. And, as natural language is rather inefficient when it comes to specifying
algorithms, the documentation would be similarly voluminous.
4 Thanks, but why C++? I am using C, Objective C, C#, Java, Scala, Python, Ruby, Perl, PHP,
VB...
The requirement to integrate with projects that compile to native code rules out all languages that rely on specific
interpreters or runtime environments. C, being the "lingua franca" of nearly all modern systems, would have been
the obvious choice, but the code compactness and readability benefits provided by C++'s syntactic sugar and
template library are well worth the minor additional integration overhead. We will work with interested developers
to add MEGA support to their preferred environments by way of native code modules/extensions (rather than by
porting the functionality to the target language itself). Please contact us at developers@mega.nz if you are willing
and able to contribute to a particular integration effort.
5 Programming model
5.1 Interaction
The application submits requests to the client access engine through nonblocking calls to methods of the
MegaClient object; the engine signals events to the application by invoking methods of the application's
implementation of the MegaApp interface.
Files and folders are represented by Node objects and referenced by node handles. Nodes point to parent
nodes, forming trees. Trees have exactly one root node (circular linkage is not allowed). Node updates (caused
by the session's own actions, other sessions of the same account or other accounts, e.g. through activity in a
shared folder) are notified in real time through a callback specifying the affected nodes. Deleted nodes are first
notified with their removed flag set before being purged to give the application an opportunity to remove them
from the UI view.
There are at least three node trees per account: Root, incoming and rubbish. Additional trees can originate from
other users as shared folders.
Users are referenced by their user handle and/or their primary e-mail address. The engine maintains a User
object for every user account that has appeared in the context of the current session: As a contact or merely as
the owner of a filesystem node. A visibility flag turns a user into a contact if set.
User attributes can be used to store details such as avatar pictures, address, date of birth etc. It is
recommended to store application-private user data AES-CBC encrypted.
Three types of operation are subject to acceleration by a mechanism called "speculative instant completion":
Node attribute updates, moves and deletions. As these merely receive a highly predictable "OK" or "failure"
response from the API, there is some benefit in immediately updating the local nodes and reloading the session
state in the rare event of an inconsistency. The engine loosely protects them with an access check that follows
the same semantics as the authoritative check on the API server side.
Shared access to a resource (writable nodes accessible to the user's account) based on a potentially outdated
(due to network latency) view is naturally prone to race conditions, leading to inconsistencies when two parties
make conflicting updates within the latency window. The engine contains heuristics to detect these conflicts and
will ask the application to discard and reload its view if needed (the user should be informed accordingly).
Due to its nonblocking nature, the MEGA client access engine integrates extremely well with single-threaded
applications (although on platforms without a nonblocking DNS lookup facility, you may have to use a worker
thread for name resolution). If you prefer to use multiple threads, you are welcome to do so - as long as you
ensure that no two threads enter the engine or access its data structures at the same time.
There are three approaches to integrating the engine with the application. The goal is to have the engine get the
CPU (through MegaClient's exec() method) swiftly whenever one of its wakeup triggers fires:
block inside the engine's own blocking callback (which waits for socket I/O and timeouts) and include the
application's own wakeup triggers
record the engine's wakeup triggers and include them in the application's existing blocking facility
dedicate a worker thread to the MEGA engine and interact with the application through e.g. a bidirectional
message queue (inefficient, but the only option if, for some reason, you cannot modify the application's event
processing)
5.8 File name conflicts
Applications must be prepared to deal with file and folder name clashes. In many scenarios, this is trivial - the
user sees all copies and decides which file he is interested in, mostly based on its timestamp. Some
applications, however, map a MEGA node tree to a resource that uses file paths as unique keys, e.g. filesystems.
In this case, we recommend that only the most recent node is used.
Filename characters that are not allowed on the host must be urlencoded using %xx. When writing files back to
the server, valid urlencoded sequences must be replaced with the encoded character. This has the potential
unwanted side effect of mangling filenames that originally contained valid %xx sequences, but this should be
rare, and they'll be unmangled when read back to the local machine.
We kindly ask all application developers wishing to introduce new node, file or user attributes to coordinate the
numbering/naming and formatting conventions with us to maximize cross-application interoperability. Should you
elect not to do so, please avoid cluttering the namespaces and prefix the new attribute names with your
abbreviated company or application name.
6 Implementation
6.1 Interfaces
The SDK provides reference implementations of FileAccess (using POSIX calls), HttpIO (using cURL) and
of PrnGen, SymmCipher and AsymmCipher (using Crypto++). If you decide to use cURL in your application,
please ensure that it was built with c-ares support for asynchronous DNS requests. Some platforms (e.g. macOS
and Fedora) bundle cURL binaries that were compiled with threaded-resolver support - these will not work.
6.2 Usage
MegaClient
The application must call MegaClient's wait() immediately before or instead of blocking for events
itself. If wait() wishes to block, it calls HttpIO's waitio(), supplying a timeout. The application can either
piggyback its own wakeup criteria onto the socket events/timeout in waitio(), or record the
criteria waitio() was woken up by and include these in its own blocking logic. The supplied SDK example
(megaapp.cpp) uses the former approach, adding fileno(stdin) to the select() fd set to process user
input in real time.
The application must call MegaClient's exec() at least once after every wakeup by the waitio() criteria (it
doesn't hurt to call it too often).
nodes ( MegaClient::nodes )
users ( MegaClient::users )
the logged in user's own handle and e-mail address ( MegaClient::me , MegaClient::myemail )
6.3.1 Nodes
handle (caveat: the application-side handle bit layout is endian-dependent - do not transfer between systems
using different CPU architectures!)
crypto key
owner
share information (share key, outgoing share peers, incoming share properties)
6.3.2 Users
A third-party user is part of the session either because he is in a contact relationship with the session user, or
because he owns at least one of the session's nodes. A user record is also created when a share is added to a
previously unregistered e-mail address. Users can be referenced by their handle or by their e-mail address.
User properties:
handle
name
contact visibility
public key
user properties
Multiple concurrent file transfers are supported. It is strongly suggested not to run more than one large upload
and one large download in parallel to avoid network congestion (there will be little, if any, speed benefit). A file
transfer can aggregate multiple TCP channels (recommended starting point: 4) for greater throughput. File
transfers can be aborted at any time by calling MegaClient::tclose() . Significant local network congestion
during uploads is common with ADSL uplinks and can be prevented by enabling an automatic or fixed rate limit.
Applications that transfer batches of files should do so using the engine's transfer queueing functionality. It uses
pipelining (new transfers are dispatched approximately three seconds before the end of the current transfer) to
reduce or eliminate the dead time between files. Failed transfers are retried with exponential backoff.
Files can have attributes. Only the original creator of a file can update its attributes. All nodes referencing the
same encrypted file see the same attributes. Attributes carry a 16-bit type field. The client access engine
supports attaching file attributes during or after the upload and their bulk retrieval.
6.4.4 Thumbnails
All applications capable of uploading image files should add thumbnails in the process (remember that there is
no way for us to do this on the server side). Thumbnails are stored as type 0 file attributes and should be
120×120-pixel JPEGs compressed to around 3-4 KB. The sample application supplied with the SDK demonstrates
how to do this using the FreeImage library. As the extraction of a thumbnail from a large image can take a
considerable amount of time, it is also suggested to perform this in separate worker threads to avoid stalling the
application.
There are two types of quota limitations an application can encounter during its operation: Storage and
bandwidth. MEGA, by policy, is quite generous on both, which means that only a small fraction of your user base
will ever run out of quota, but it is essential that if it happens, the situation is handled correctly - the user needs to
be informed about the reason for his upload or download failing rather than being left in the dark with what looks
like a malfunction.
A download attempt will be rejected with error EOVERQUOTA if the bandwidth consumption during the past five to
six hours plus the residual size of all running downloads plus the size of the file to download would exceed the
current per-IP bandwidth limit (if any).
In contrast to uploads, running downloads can be interrupted with an out-of-quota error under certain
circumstances, which will trigger a quota_exceeded() callback, and the download will retry automatically until
bandwidth quota becomes available.
To access the MEGA API, applications need to present a valid API key, which can be obtained free of charge
from https://mega.nz/sdk. Please see the SDK terms of service for details.
7 Development process
The release of the C++ SDK is merely a starting point, and it will evolve over time. We'd like to hear from
developers actually using it in their applications - if you find bugs, design flaws or other shortcomings, please get
in touch at developers@mega.nz. MEGA is hiring, so be prepared to receive an offer if your feedback indicates
that you are a bright mind.
A developer forum and source code repository will be made available soon.
Large MEGA accounts can take a considerable amount of time to load. This is due to the sheer size of the state
information that has to be read from the API cluster. There are two ways around this:
Maintaining the account state in a local persistent database rather than in memory. Upon login, the client
would ask the server for the number of operations that have affected the account since the last database
update, and then decide whether it is more efficient to wipe the local state and reload, or to continue with the
stored state and apply the queued updates to it (in the vast majority of cases, it will be the latter).
The application initially loads only folder nodes and retrieves file nodes dynamically when they are actually
needed.
Currently, files cannot be modified after creation. This limitation will be overcome by using encrypted delta files.
Currently, a file's integrity is verified only after it was downloaded in full. Chunk MAC verification will enable
applications to ensure the integrity of partial reads.
a POSIX-compliant environment
libcrypto++
libreadline
At this time, the SDK does not come with an autoconf script, so you may have to manually adapt the Makefile to
your system.
Similar to the UNIX ftp command, megaclient displays a prompt and accepts commands to log into a MEGA
account or exported folder link, list folder contents, create folders, copy, move and delete files and folders, view
inbound and outbound folder shares, establish, modify and delete outbound folder shares, upload and download
files, export links to folders and files, import file links, view the account's status and change its password.
To log into an exported folder link, specify the full link, including the key (prompting for the key has not been
implemented yet).
This command takes no parameters and displays the paths of your available filesystem trees (typically / for the
main tree, //in for the inbox, //bin for the rubbish bin and email:sharename for inbound shares from
useremail).
ls lists the contents of the current folder (if given with the qualifier -R , it does so recursively) or the specified
path.
Paths can be relative or absolute. Valid absolute paths start with one of the prefixes displayed by the mount
command.
File and folder properties are displayed along with their names: File size and available file attributes, or exported
folder links and outbound folder shares.
cd changes the current working directory to the specified folder path. If no path is given, it changes to / .
lcd changes the current local working directory to the specified path.
put queues the specified file for upload. Patterns are supported - put *.jpg will upload all .jpg files present in
the current local directory.
The queued transfers are listed with an index, target path (uploads only) and activity status. To cancel a transfer,
specify its index.
mkdir creates an empty subfolder in the specified (or current) folder. Although MEGA permits identical folder
names, mkdir fails if the folder already exists.
The specified source path or folder is copied or moved to the destination folder. If the destination indicates a new
name in an existing folder, a rename takes place along the way.
rm deletes the specified file or folder. If the folder contains files or subfolders, these are recursively deleted as
well. All affected outbound shares and exported folder links are canceled in the process.
The deletion is final. To take advantage of the rubbish bin functionality, use mv path //bin instead.
share lists, creates, updates or deletes outbound shares on the specified folder. The folder cannot be in an
inbound share.
export creates a read-only file or folder link that contains the related encryption key. To cancel an existing link,
add the keyword del.
import adds the file described by the supplied link (importing folder links is currently unsupported) to the current
folder.
Uploading through a DSL line can cause significant outbound packet loss. You can limit the send rate by
specifying an absolute maximum in bytes per second, auto to have the server figure out your line speed and
leave approximately 10% of it idle, or none to transfer at full speed.
It is currently not possible to change the send rate of an active upload. The setting will only affect subsequent
uploads.
whoami displays the account's quotas, balances and session history.
passwd prompts for the current password and then asks for the new password and its confirmation. No
password quality checking is performed.
recon tears down all existing server connections. This has no effect on ongoing operations other than causing
transfers to take longer due to partially transferred chunks being discarded and having to be resent.
Debug mode outputs HTTP connection activity and the raw JSON API requests and responses.
10 Method/callback overview
Hashes a UTF-8-encoded password and stores the result in the supplied buffer.
Initiates a session login based on the user e-mail address (case insensitive) and the corresponding password
hash.
Callback: login_result(error e)
Error codes: API_ENOENT : Invalid e-mail address or password, API_EKEY : Private key could not be decrypted
Callback: none (does not interact with the server, proceed with calling fetchnodes() )
Updates the supplied AccountDetails structure with information on current storage and transfer utilization and
quota, Pro status, and the transaction and session history. You can specify which types of information you are
interested in by setting the related flag to a non-zero value.
Callback: changepw_result(error e)
The node's current attributes are pushed to the server. The optional newattr parameter specifies attribute
deltas as a NULL-terminated sequence of attribute name/attribute C-string pointer pairs.
In setattr_result(), nodehandle is the node's handle.
The node (along with all of its children) is moved to the new parent node, which must be part of the same user
account as the node itself. You cannot move a node to its own subtree.
Return value: API_EACCESS if the node's parent is not writable (with full access).
The node and all of its subnodes are deleted. The node's parent must be writable (with full access). Affected
outbound shares are canceled.
Uploads and downloads are started with topen() , which returns a transfer descriptor identifying the transfer.
Multiple transfers can run in parallel, and each transfer can use multiple TCP connections in parallel. Efficient
transfer queueing with pipelining is available. Uploads can be speed-limited using an absolute or a dynamic cap.
Progress information is conveyed through the callback transfer_update(int td, m_off_t bytes,
m_off_t size, dstime starttime) , whereby td identifies the transfer, bytes denotes the number of bytes
transferred so far, size indicates the total transfer size, and starttime is the time the transfer started.
A running transfer can be aborted at any time by calling tclose(int td). Failure-indicating callbacks perform
the tclose() implicitly. The callback indicating a transient HTTP error, transfer_error(int td, int
httpcode, int count), will call tclose() if the application returns a non-zero value.
10.10.1 Uploading
speedlimit - maximum upload speed in bytes/second or -1 for approx. 90% of the line speed
connections - number of parallel connections to use (default: 3)
Return value: transfer descriptor or API_ETOOMANY if all transfer channels are busy, API_ENOENT if file failed to
open.
10.10.2 Downloading
Method: int topen(handle nodehandle, const byte* key, m_off_t start, m_off_t len,
int c)
nodehandle is the handle of the node to download (if key is set, the handle part of the exported file link)
key is the base64-decoded key part of the exported file link if set
start and len can be set to download only a slice of the file (default: 0, -1 for the full file)
c denotes the number of parallel TCP connections that this download should employ.
Return value: transfer descriptor or API_ETOOMANY if all transfer channels are busy, API_ENOENT if local file is
not present, API_EACCESS if attempting to download a non-file
Callback upon successful opening of the file to download: topen_result(int td, string* filename,
const char* fa, int pfa)
filename is the name of the file as specified by the node attribute 'n'
pfa is a flag that indicates whether the requesting user is allowed to write file attributes (i.e., is the file's owner)
The engine maintains separate upload and download queues, putq and getq. Transfers are started by pushing
transfer objects (classes FilePut and FileGet) onto these queues. You no longer need to
call topen()/tclose() yourself, but you still need to process the transfer callbacks. A transfer is considered
failed if no bytes were transmitted for at least XFERTIMEOUT deciseconds, in which case it will be aborted and
repeated indefinitely with exponential backoff.
The main benefit of using the engine-supplied transfer queueing is not the reduced application complexity, but
the built-in overlapping transfer pipelining that reduces the impact of transitions between files on your overall
throughput.
newnodes is an array of populated NewNode structures. Under some circumstances, this array will be accessed
after the call to putnodes() has already returned. Therefore, always allocate this array from the heap and free
it in putnodes_result() .
Callback: share_result(client,error)
Callback: share_result(client,int,error)
Method: loggedin()
Method: checkaccess(node,level)
Method: checkmove(node,targetnode)
10.16 Prefix and encrypt JSON object for use as node attribute string
Method: makeattr(cipher,targetstring,jsonstring,length)
Callback: newfile()
Callback: users_updated(client,users,count) - users were added or updated (users are never deleted)
11 Error codes
API_EINTERNAL (-1): An internal error has occurred. Please submit a bug report, detailing the exact
circumstances in which this error occurred.
API_EAGAIN (-3) (always at the request level): A temporary congestion or server malfunction prevented your
request from being processed. No data was altered. Retry. Retries must be spaced with exponential backoff.
API_ERATELIMIT (-4): You have exceeded your command weight per time quota. Please wait a few seconds,
then try again (this should never happen in sane real-life applications).
API_ETOOMANY (-6): Too many concurrent IP addresses are accessing this upload target URL.
API_ERANGE (-7): The upload file packet is out of range or not starting and ending on a chunk boundary.
API_EEXPIRED (-8): The upload target URL you are trying to access has expired. Please request a fresh one.
API_ETEMPUNAVAIL (-18): Resource temporarily not available, please try again later.
Knowledge of the following low-level details is not required to successfully develop MEGA client applications. It is
provided as a reference to those who are interested in fully understanding how MEGA's API works.
The MEGA API is based on a simple HTTP/JSON request-response scheme. Requests are submitted as arrays
of command objects and can be initiated by both the client and the server, effectively resulting in bidirectional
RPC capability. To see the raw request flow in the SDK's megaclient sample app, use the debug command.
12.2 Cryptography
All symmetric cryptographic operations are based on AES-128. It operates in cipher block chaining mode for the
file and folder attribute blocks and in counter mode for the actual file data. Each file and each folder node uses its
own randomly generated 128 bit key. File nodes use the same key for the attribute block and the file data, plus a
64 bit random counter start value and a 64 bit meta MAC to verify the file's integrity.
Each user account uses a symmetric master key to ECB-encrypt all keys of the nodes it keeps in its own trees.
This master key is stored on MEGA's servers, encrypted with a hash derived from the user's login password.
File integrity is verified using chunked CBC-MAC. Chunk sizes start at 128 KB and increase to 1 MB, which is a
reasonable balance between space required to store the chunk MACs and the average overhead for integrity-
checking partial reads.
In addition to the symmetric key, each user account has a 2048 bit RSA key pair to securely receive data such as
share keys or file/folder keys. Its private component is stored encrypted with the user's symmetric master key.
The owner of the folder is solely responsible for managing access; shares are non-transitive (shares cannot be
created on folders in incoming shares). All participants in a shared folder gain cryptographic access through a
common share-specific key, which is passed from the owner (theoretically, from anyone participating in the share,
but this would create a significant security risk in the event of a compromise of the core infrastructure) to new
participants through RSA. All keys of the nodes in a shared folder, including its root node, are encrypted to this
share key. The party adding a new node to a shared folder is responsible for supplying the appropriate
node/share-specific key. Missing node/share-specific keys can only be supplied by the share owner.
MEGA supports secure unauthenticated data delivery. Any fully registered user can receive files or folders in their
inbox through their RSA public key.
Each login starts a new session. For regular accounts, this involves the server generating a random session
token and encrypting it to the user's private key. The user password is considered verified if it successfully
decrypts the private key, which then decrypts the session token.
To prevent remote offline dictionary attacks on a user's password, the encrypted private key is only supplied to
the client if a hash derived from the password is presented to the server.
API requests flow in two directions: client → server and server → client.
Client-server requests are issued as HTTP POST with a raw JSON payload. A request consists of one or
multiple commands and is executed as a single atomic, isolated and consistent transaction - in the event of a
request-level error response, no data was altered. Requests are idempotent - sending the same request multiple
times is equivalent to sending it once, which makes it safe to retry them, e.g. in case of intermittent network
issues. Each request must therefore be tagged with a session-unique identifier (e.g., a sequence number) to
prevent inadvertent cache hits caused by preceding identical requests.
While a request is executed, all users that may be affected by it are locked. This includes the requesting user and
all users that are in a shared folder relationship and/or in the contact list. A request may return the error code
EAGAIN in the event of a failed locking attempt or a temporary server-side malfunction. The request is likely to
complete when retried. Client applications must implement exponential backoff (with user-triggerable immediate
retry) and should inform the user of a possible server or network issue if the EAGAIN condition persists or no
response is received for more than a few seconds.
A successfully executed request returns an array of result objects, with each result appearing in the same array
index location as the corresponding command.
sequence_number is a session-unique number that is incremented per dispatched request (but not changed
when requests are repeated in response to network issues or EAGAIN)
The JSON object shall be sent as the payload of a raw POST request. No additional framing shall take place.
The Content-Type HTTP header is not processed, but should be set to application/json.
Its structure is an array of commands: [cmd1,cmd2,...]
with cmd = { a : command type, [argument : value]* }.
The response is structured as a single number (e.g. -3 for EAGAIN) in the case of a request-level error, or as an
array of per-command return objects: [res1,res2,...]
To prevent infrastructure overload, dynamic rate limiting is in effect. Before a request is executed, the total
"weight" of the commands it contains is computed and checked against the current balance of the requesting IP
address. If the total exceeds a defined threshold, the request is rejected as a whole and must be repeated with
exponential backoff.
As a server cannot reliably establish a connection to a client, server-client requests have to be polled by the latter
through a blocking read loop.
sequence_reference tells the server which server-client request(s) to deliver next. It is initialized from the
response to a filesystem tree fetch (f command).
sessionid is the session ID of the authenticated user session
ssl=1 forces an HTTPS URL for the returned wait_url (which is needed for most browsers, but not in an
application context)
A request-level error is received as a single number (e.g. -3 for EAGAIN); otherwise, a raw JSON object with
content-type application/json is received. Its structure is as follows:
{ a : [req1,req2,...], [ sn : sequence_reference | w : wait_url ] }
sequence_reference updates the sequence reference that is used in the invocation of the /cs request URL
wait_url requests that the client connect to this (potentially long-blocking) URL; it will block until new requests
are ready for delivery, so once it disconnects (with an HTTP 200 OK response and a content-length of 0), the
polling process shall loop
As JSON is not binary clean, all non-ASCII data has to be encoded. For binary data, the MEGA API uses a
variation of base64 with -_ instead of +/ and the trailing = stripped (where necessary, the actual payload length is
heuristically inferred after decoding, e.g. by stripping trailing NULs). Unicode text has to be encoded as UTF-8.
12.9.1
Node handles
Node handles are eight alphanumeric characters in length and case sensitive.
12.9.2
User handles
Node and file keys in a share context are transmitted in a compound per-share format:
sharehandle:key/sharehandle:key/... - each key is encrypted to its corresponding share handle
MEGA uses client-side encryption/decryption to end-to-end-protect file transfers and storage. Data received from
clients is stored and transmitted verbatim; servers neither decrypt, nor re-encrypt, nor verify the encryption of
incoming user files. All cryptographic processing is under the control of the end user.
To allow for integrity-checked partial reads, a file is treated as a series of chunks. To simplify server-side
processing, partial uploads can only start and end on a chunk boundary. Furthermore, partial downloads can only
be integrity-checked if they fulfil the same criterion.
Chunk boundaries are located at the following positions:
0 / 128K / 384K / 768K / 1280K / 1920K / 2688K / 3584K / 4608K / ... (every 1024 KB) / EOF
A file key is 256 bits long and consists of the following components:
The upper 64 bits n of the counter start value (the lower 64 bits start at 0 and increment by 1 for each
16-byte AES block)
A chunk MAC is computed as follows (this is essentially CBC-MAC, which was chosen instead of the more
efficient OCB over intellectual property concerns):
h := (n << 64) + n
For each AES block d: h := AES(k,h XOR d)
A chunk is encrypted using standard counter mode:
For each AES block d at block position p: d' := d XOR AES(k,(n << 64)+p)
MAC computation and encryption can be performed in the same loop.
Decryption is analogous.
To obtain the meta-MAC m, apply the same CBC-MAC to the resulting block MACs with a start value of 0. The 64
bit meta-MAC m is computed as ((bits 0-31 XOR bits 32-63) << 32) + (bits 64-95 XOR bits 96-127).
12.11 Uploads
Uploads are performed by POSTing raw data to the target URL returned by the API u command. If so desired, an
upload can be performed in chunks. Chunks can be sent in any order and can be of any size, but they must
begin and end on a chunk boundary. The byte offset x of a chunk within the file is indicated by appending /x to
the URL. Multiple chunks can be sent in parallel. After a chunk completes, the server responds with a status
message, which can be:
A (negative) error code in decimal ASCII representation, typically requiring a restart of the upload from scratch
A 27-character base64-encoded completion handle that must be used in conjunction with the p (put node) API
command to complete the upload
The per-upload encryption key must be generated by a strong random number generator. Using a weak one will
undermine the confidentiality and integrity of your data.
12.12 Downloads
TCP throughput on high-latency links is adversely affected by slow congestion window growth, insufficient send
or receive buffer size and (even mild) packet loss. All of these factors can be mitigated by using multiple transfer
connections in parallel. Client applications are encouraged to let users configure up to six parallel
connections in each direction. The recommended default value is four.
All MEGA servers support HTTPS access - this is due to many web browsers enforcing a policy where HTTP
requests cannot be made from an HTTPS page at all (IE, Firefox 18+) or at least trigger a visual warning
(Chrome, Firefox until 17). However, only two types of requests actually benefit from and therefore require
HTTPS: The loading of https://mega.nz/index.html and the API request interface. Neither the hash-protected
loading of static .html and .js components, nor the waiting for new server-client requests, nor already encrypted
and MAC'ed data transfers from and to the storage cluster benefit from HTTPS in any meaningful way. Client
applications are therefore required to use SSL for access to the API interface, but strongly discouraged from
doing so for wait requests and bulk file transfers.
MEGA's HTTPS access supports most ciphers/hashes and uses strong 2048 bit RSA where SSL is relevant (i.e.
on the root HTML and the API servers) and RC4/MD5 with CPU-saving 1024 bit RSA where it is not (i.e. on static
HTML and storage servers).
PFS ("perfect forward secrecy") is supported on the API servers only, because secrecy is not required for public
static content.