The DDC consists of two kinds of nodes: CDN and Storage. Both kinds are daemon applications and communicate with each other via HTTP requests, with the data wrapped in protocol buffers. Both kinds share some code:
There is also an SDK-JS module in the DDC: the code that runs on the client side and makes requests to the CDN cluster. The important notion here is a piece. A piece is an abstraction that represents a unit of data stored in the DDC. It has no fixed size and can represent logically complete data or a part of it. It consists of:
- `data` - the raw bytes that this piece contains,
- `bucketId` - the bucket (in the smart contract sense) that this piece belongs to,
- `tags` - key/value pairs that contain metadata attached to the piece, for example, encryption options or a file name,
- `links` - if this piece is interpreted as a file, the linked pieces make up the file content; otherwise empty.

Nodes store data locally in a BadgerDB database. The database API is wrapped into the datastore package, which adds a notion of buckets. A datastore bucket (not to be confused with the buckets in the smart contract) is like a partition: a subset of keys that store similar kinds of values. For example, Storage nodes currently have 4 buckets:
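The piece fields and the datastore's bucket-as-key-prefix idea can be sketched as follows. This is a minimal illustration, not the actual node code: the `Tag` and link representations are assumptions, and an in-memory map stands in for BadgerDB (which likewise exposes a flat key space).

```go
package main

import "fmt"

// Tag is a key/value metadata pair attached to a piece (layout assumed).
type Tag struct{ Key, Value string }

// Piece mirrors the fields described above.
type Piece struct {
	Data     []byte
	BucketId uint64
	Tags     []Tag
	Links    []string // cids of linked pieces; representation assumed
}

// Datastore sketches the bucket-as-key-prefix idea with an in-memory map
// standing in for BadgerDB's flat key space.
type Datastore struct{ kv map[string][]byte }

func NewDatastore() *Datastore { return &Datastore{kv: map[string][]byte{}} }

// Put namespaces the key with the datastore bucket name.
func (d *Datastore) Put(bucket, key string, val []byte) {
	d.kv[bucket+"/"+key] = val
}

// Get looks the key up under the same bucket prefix.
func (d *Datastore) Get(bucket, key string) ([]byte, bool) {
	v, ok := d.kv[bucket+"/"+key]
	return v, ok
}

func main() {
	p := Piece{Data: []byte("hello"), BucketId: 42, Tags: []Tag{{"file-name", "a.txt"}}}
	ds := NewDatastore()
	ds.Put("pieces", "cid123", p.Data)
	if v, ok := ds.Get("pieces", "cid123"); ok {
		fmt.Println(string(v)) // hello
	}
}
```

Because every datastore bucket is just a key prefix, different kinds of values stay isolated even though they live in one underlying key space.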
To read a piece, the `ddcClient.read(ddcUri, readOptions)` API is used on the client side [code]. On the CDN node, the request is handled by `pieceRouterV1.getPiece` [code], which serves `/api/rest/pieces/<cid>?bucketId=<bucket>`: it resolves the piece by its `cid` and updates metric counters [code].