FatPointer
This is a proposal for reference counting based on fat pointers. Several use cases are considered.
TODO:
- add missing use cases
- destroy
Contents
- 1 Fat pointer
- 2 Mutable data
- 3 Immutable data
- 4 Const data
- 5 Shared data
- 6 Interior pointer
- 7 Scoped data
- 8 Implicit this parameter
- 9 out, ref, lazy
- 10 Function parameters
- 11 Return value
- 12 Function pointer
- 13 Delegate pointer
- 14 Consuming pointers in generic code
- 15 Garbage collector
- 16 Interfacing with C
- 17 Interfacing with Objective-C
- 18 Interfacing with COM
- 19 Variadic arguments
- 20 Custom reference counting
- 21 Cyclic data structures
Fat pointer
A slim pointer is a traditional C-style pointer.
A fat pointer combines a slim pointer with a reference counter.
import core.atomic : atomicOp;
import core.stdc.stdint : intptr_t;

struct ReferenceCounter
{
    intptr_t count; // reference count; incremented non-atomically when thread-local
    union
    {
        shared ReferenceCounter* sharedCounter; // chained shared counter, or null
        intptr_t sharingCount;
    }
    intptr_t atomicIncrement() shared
    {
        return atomicOp!"+="(count, 1);
    }
    intptr_t atomicDecrement() shared
    {
        return atomicOp!"-="(count, 1);
    }
}
struct FatPointer(TPayload)
{
    ReferenceCounter* __counter; // the counter travels with the pointer
    TPayload __data;             // the slim pointer itself
    alias __data this;           // usable wherever TPayload is expected
}
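For illustration, the alias this declaration makes a fat pointer usable wherever the payload type is expected, so counter maintenance can stay invisible to user code. A minimal sketch, assuming the definitions above:

FatPointer!(int*) p;
// ... initialized as in the allocation examples below ...
*p = 42;       // alias this forwards to p.__data (an int*)
int* slim = p; // implicit conversion to the slim pointer drops the counter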
Mutable data
Every mutable object allocated from the heap has its own ReferenceCounter, initialized to { count: 1, sharedCounter: null } when the object is allocated.
FatPointer!(T) ptr;
ptr.__counter = referenceCounterAllocator.allocate();
ptr.__counter.count = 1;            // one reference: ptr itself
ptr.__counter.sharedCounter = null; // thread-local data: no shared counter
ptr.__data = data;
Usage:
FatPointer!(T) ptr;
// increment
ptr.__counter.count++;
// decrement
ptr.__counter.count--;
if(ptr.__counter.count==0)
_d_rc_free(ptr.__counter, ptr.__data);
extern(C) void _d_rc_free(ReferenceCounter* counter, void* data)
{
// const and mutable thread-local data have simple counter
if(counter.sharedCounter==null)
{
referenceCounterAllocator.deallocate(counter);
GC.free(data);
return;
}
// immutable and shared data have two chained counters
shared scounter = counter.sharedCounter;
referenceCounterAllocator.deallocate(counter);
if (scounter.atomicDecrement() > 0) return;
referenceCounterAllocator.deallocate(scounter);
GC.free(data);
}
TODO: how to decrement counters of pointers in complex data?
Mutable data allocated in the tls section has a thread-local ReferenceCounter allocated in the same section and initialized to { count: 2, sharedCounter: null }
so that the counter never reaches zero even if the reference to the data is lost. When a reference to the data is taken, the counter is incremented non-atomically. A single statically allocated ReferenceCounter can be used for all objects in the tls section.
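A sketch of that single per-thread counter, assuming the definitions above (the names are illustrative, not part of the proposal):

// one counter serves every object in this thread's tls section;
// count starts at 2, so losing all references can never free static data
ReferenceCounter __tlsSectionCounter = { count: 2, sharedCounter: null };
int tlsVariable; // ordinary thread-local variable

FatPointer!(int*) takeTlsReference()
{
    FatPointer!(int*) ptr;
    __tlsSectionCounter.count++; // non-atomic: the counter is thread-local
    ptr.__counter = &__tlsSectionCounter;
    ptr.__data = &tlsVariable;
    return ptr;
}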
Mutable data allocated (__gshared) in a writable non-tls section has a shared ReferenceCounter allocated in a writable section and initialized to { count: 2, sharedCounter: null }
so that the counter never reaches zero even if the reference to the data is lost. When a reference to the data is taken, the shared counter is incremented atomically, and a thread-local ReferenceCounter is allocated, initialized to 1, and chained with the shared counter.
A fat pointer stored in __gshared data has only a shared counter. When it is copied from __gshared data to thread-local data, the shared counter is incremented atomically, and a thread-local ReferenceCounter is allocated, initialized to 1, and chained with the shared counter. When it is copied from thread-local data to __gshared data, the shared counter is incremented atomically and stored as the counter for the pointer. Reading the pointer with an atomic increment is thread-safe; writing is not thread-safe when the pointer at the destination is non-null, because that pointer's counter must be decremented. User code can accelerate access to __gshared data by caching it in the tls section; then only a non-atomic increment of a thread-local counter is needed.
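A sketch of both copy directions, assuming the definitions above; copyFromGshared and copyToGshared are illustrative names, and T stands for any payload type:

__gshared FatPointer!(T) g; // holds only a shared counter

// __gshared -> thread-local: chain a fresh thread-local counter
FatPointer!(T) copyFromGshared()
{
    auto scounter = cast(shared ReferenceCounter*)g.__counter;
    scounter.atomicIncrement();  // the new chained counter references it
    FatPointer!(T) local;
    local.__counter = referenceCounterAllocator.allocate();
    local.__counter.count = 1;
    local.__counter.sharedCounter = scounter;
    local.__data = g.__data;
    return local;
}

// thread-local -> __gshared: store the shared counter directly
void copyToGshared(FatPointer!(T) local)
{
    auto scounter = local.__counter.sharedCounter;
    scounter.atomicIncrement();  // the __gshared copy is a new reference
    // not thread-safe if g already holds a pointer: that one must be decremented first
    g.__counter = cast(ReferenceCounter*)scounter;
    g.__data = local.__data;
}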
Immutable data
An immutable data pointer has chained thread-local and shared counters. In each thread the pointer has a distinct thread-local counter dedicated to that thread; all those counters are chained to the single shared counter. The majority of reference counting is handled by the thread-local counter. When the thread-local counter reaches zero, the shared counter is decremented atomically (see _d_rc_free).
FatPointer!(immutable T) ptr;
ptr.__counter = referenceCounterAllocator.allocate();
ptr.__counter.count = 1;
ptr.__counter.sharedCounter = referenceCounterAllocator.allocate();
ptr.__counter.sharedCounter.count = 1; // one thread-local counter is chained so far
ptr.__data = data;
Usage:
FatPointer!(immutable T) ptr;
// increment
ptr.__counter.count++;
// decrement
ptr.__counter.count--;
if(ptr.__counter.count==0)
_d_rc_free(ptr.__counter, ptr.__data);
Immutable data statically allocated in a read-only section has a shared ReferenceCounter allocated in a writable section; due to constant folding, static data can contain many pointers to a single immutable object, so the shared ReferenceCounter should be initialized to the number of such pointers + 1
so that the counter never reaches zero even if the reference to the data is lost. When a reference to the static immutable data is taken, the shared counter is incremented atomically, and a thread-local ReferenceCounter is allocated, initialized to 1, and chained with the shared counter.
TODO: how to cast mutable data to immutable (assumeUnique)? Mutable data can be cast to immutable: all thread-local counters in the data structure become shared. What to do with pointers to immutable data inside mutable data? They have thread-local counters, which must not be put into a shared context.
TODO: what to do with thread-local counters when passing immutable data between threads (std.concurrency)?
Const data
Const data can be mutable or immutable. In both cases the immediate counter is thread-local. If the data is mutable, sharedCounter can be null; otherwise it points to the shared counter.
FatPointer!(const T) ptr;
// increment
ptr.__counter.count++;
// decrement
ptr.__counter.count--;
if(ptr.__counter.count==0)
_d_rc_free(ptr.__counter, ptr.__data);
Shared data
A shared data pointer has chained thread-local and shared counters.
FatPointer!(shared T) ptr;
// increment
ptr.__counter.count++;
// decrement
ptr.__counter.count--;
if(ptr.__counter.count==0)
_d_rc_free(ptr.__counter, ptr.__data);
Shared data statically allocated in a writable section has a shared ReferenceCounter allocated in a writable section and initialized to { count: 2, sharedCounter: null }
so that the counter never reaches zero even if the reference to the data is lost. When a reference to the data is taken, the shared counter is incremented atomically, and a thread-local ReferenceCounter is allocated, initialized to 1, and chained with the shared counter.
A fat pointer stored in shared data has only a shared counter. When it is copied from shared data to thread-local data, the shared counter is incremented atomically, and a thread-local ReferenceCounter is allocated, initialized to 1, and chained with the shared counter. When it is copied from thread-local data to shared data, the shared counter is incremented atomically and stored as the counter for the pointer. Reading the pointer with an atomic increment is thread-safe; writing is not thread-safe when the pointer at the destination is non-null, because that pointer's counter must be decremented. User code can accelerate access to shared data by caching it in the tls section; then only a non-atomic increment of a thread-local counter is needed (the same scheme as for __gshared data above).
TODO: how to cast shared data to thread-local (locking)? Shared counters must not be incremented non-atomically. Immutable data inside shared data should have only a shared counter; the same problem arises when casting from shared to thread-local.
TODO: what to do with thread-local counters when passing shared data between threads (std.concurrency)?
Interior pointer
A pointer to a field of an object and a slice of an array share the reference counter with the enclosing object. Counting is done transparently; there is no need to compute the beginning of the object.
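For example, slicing a counted array reuses the array's counter; a sketch, assuming the definitions above:

FatPointer!(int[]) arr;   // a counted array
FatPointer!(int[]) slice;
arr.__counter.count++;            // one more reference to the same allocation
slice.__counter = arr.__counter;  // the interior pointer shares the counter
slice.__data = arr.__data[2 .. 5];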
TODO: structs are not pointers, so they carry no reference counter.
Scoped data
The reference counter is passed along with the scoped data, but it is not maintained, because the caller holds a reference. The reference count is incremented when the data escapes or is copied to a fat pointer. Use scoped stack variables to save on reference counting.
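A sketch of the convention, assuming the definitions above (T stands for any payload type):

FatPointer!(T) g_saved;

// the counter travels with p, but no increments or decrements are generated:
// the caller's reference keeps the data alive for the duration of the call
void process(scope FatPointer!(T) p)
{
    // ... read through p.__data ...
}

// only escaping costs a count
void save(scope FatPointer!(T) p)
{
    p.__counter.count++; // copied to a longer-lived fat pointer: increment
    g_saved = p;
}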
TODO: investigate whether scoped data can be passed with slim pointers. It would then be impossible to assign a scoped pointer to a fat pointer (escape) or to take interior pointers. Data structures with slim pointers would have to be created to allow working on scoped data.
TODO: tail call optimization.
Implicit this parameter
The implicit this parameter is scoped. The reference counter is incremented when it escapes or when an interior pointer is taken.
TODO: how to call the constructor when creating a new object through a slim pointer, and what to do with the reference counter?
out, ref, lazy
The out and ref references themselves are slim. If they reference a pointer, that pointer must be fat.
The lazy delegate is scoped.
Function parameters
The caller doesn't increment the counter when passing a pointer as a function argument. The callee increments the counter when copying the argument to another variable.
Return value
The callee returns a value with an incremented reference counter; the caller should decrement it if the value is not used. The same applies to the new operator.
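A sketch of both sides of the convention, assuming the definitions above (makeT and allocateT are illustrative names, T stands for any payload type):

// callee: the returned pointer already carries one reference for the caller
FatPointer!(T) makeT()
{
    FatPointer!(T) ptr;
    ptr.__counter = referenceCounterAllocator.allocate();
    ptr.__counter.count = 1;    // the one reference handed to the caller
    ptr.__counter.sharedCounter = null;
    ptr.__data = allocateT();   // hypothetical payload allocation
    return ptr;                 // no extra increment on return
}

void caller()
{
    auto p = makeT();
    // ... use p ...
    p.__counter.count--;        // release the returned reference when done
    if (p.__counter.count == 0)
        _d_rc_free(p.__counter, p.__data);
}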
Function pointer
A function pointer is slim. TODO: what to do with functions constructed at run time (trampolines)?
Delegate pointer
A delegate pointer is counted like a const pointer: it may have a shared counter. If the delegate contains thread-local data, it has no shared counter and must not be put into a shared context.
Consuming pointers in generic code
TODO: how?
Garbage collector
The GC can be used to collect cycles. Client code can disable it only if it is certain that no cycles exist.
Interfacing with C
C structs have slim pointers; in this respect they differ from D structs. Code that interfaces with C without declaring C structs as extern(C) is incorrect. The compiler should report an error if it detects fat pointers being passed to C code.
C function arguments are slim too.
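A sketch of the boundary, with an illustrative C function:

extern(C) void c_process(int* data); // C sees only the slim pointer

void callC(FatPointer!(int*) p)
{
    c_process(p.__data); // pass the slim part; the counter stays on the D side
    // passing p itself would hand a fat pointer to C and should be a compile error
}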
TODO: what to do with return value?
Interfacing with Objective-C
If D adopts the Objective-C reference-counting scheme, interacting with Objective-C code may be simpler.
Interfacing with COM
Variadic arguments
Custom reference counting
If a data structure needs custom reference counting, it declares an opRCFree alias to a function with the signature void function(ReferenceCounter*, void*), which handles freeing.
extern void NodeRCFree(@slim ReferenceCounter* counter, @slim void* data);
interface INode
{
INode parent();
INode previousSibling();
INode nextSibling();
INode child();
alias opRCFree = NodeRCFree;
}
Usage:
FatPointer!(INode) ptr;
// increment
ptr.__counter.count++;
// decrement
ptr.__counter.count--;
if(ptr.__counter.count==0)
INode.opRCFree(ptr.__counter, ptr.__data);
Cyclic data structures
Cyclic data structures can be designed for reliable reference counting. See the (incomplete) example below.
The cyclic data structure (a graph) has a single reference counter shared among all nodes; the count equals the number of external references to the graph's nodes. Client code increments the counter as usual. When the counter reaches zero, the custom opRCFree is called, which frees the entire graph. This way the entire graph stays alive as long as there is a reference to any of its nodes, and code can navigate the entire graph from any node. The nodes are owned by the graph; to split or merge graphs, ownership must be transferred.
interface INode
{
INode parent();
INode previousSibling();
INode nextSibling();
INode child();
void child(INode);
alias opRCFree = NodeRCFree;
}
class Node: INode
{
INode parent()
{
    FatPointer!(INode) ptr;
    _counter.count++;         // new external reference to the graph
    ptr.__counter = _counter;
    ptr.__data = _parent;
    return ptr;
}
//INode previousSibling();
//INode nextSibling();
//INode child();
alias opRCFree = NodeRCFree;
Node newNode(string name)
{
@slim Node node = new Node(name);
node._counter = _counter;
node._parent = this;
FatPointer!(Node) ptr;
_counter.count++;
ptr.__counter = _counter;
ptr.__data = node;
return ptr;
}
void child(INode node) // setter
{
Node newChild = cast(Node)node;
enforce(newChild._counter == _counter, "node should belong to this graph");
//TODO: how to assign? What to do with orphan child node?
}
private:
string _name; // fat pointers for non-cyclic data
@slim ReferenceCounter* _counter;
// slim pointers for cyclic data
@slim Node _parent;
@slim Node _previousSibling;
@slim Node _nextSibling;
@slim Node _child;
void freeRecurse()
{
    // follow only the forward links; _parent and _previousSibling are back
    // references, and following them would free the same node twice
    if (_nextSibling != null)
        _nextSibling.freeRecurse(), GC.free(_nextSibling);
    if (_child != null)
        _child.freeRecurse(), GC.free(_child);
}
}
void NodeRCFree(@slim ReferenceCounter* counter, @slim void* data)
{
    @slim Node root = cast(Node)data;
    while (root._parent != null) root = root._parent; // find the graph root
    root.freeRecurse(); // free the entire graph
    GC.free(root);
    referenceCounterAllocator.deallocate(counter);
}