Arc revisions (pt1/2(+?))

pull/255/head
ThePuzzlemaker 4 years ago committed by Alexis Beingessner
parent 9da6fbf6cd
commit 57fd6cf32b

@ -60,7 +60,6 @@
* [Base Code](arc-base.md)
* [Cloning](arc-clone.md)
* [Dropping](arc-drop.md)
* [Final Code](arc-final.md)
* [FFI](ffi.md)
* [Beneath `std`](beneath-std.md)

@ -10,11 +10,6 @@ We'll first need a way to construct an `Arc<T>`.
This is pretty simple, as we just need to box the `ArcInner<T>` and get a
`NonNull<T>` pointer to it.
We start the reference counter at 1, as that first reference is the current
pointer. As the `Arc` is cloned or dropped, it is updated. It is okay to call
`unwrap()` on the `Option` returned by `NonNull::new`, as `Box::into_raw`
guarantees that the pointer returned is not null.
```rust,ignore
impl<T> Arc<T> {
pub fn new(data: T) -> Arc<T> {
@ -28,7 +23,7 @@ impl<T> Arc<T> {
// It is okay to call `.unwrap()` here as we get a pointer from
// `Box::into_raw` which is guaranteed to not be null.
ptr: NonNull::new(Box::into_raw(boxed)).unwrap(),
phantom: PhantomData,
}
}
}
@ -43,8 +38,8 @@ more information on these, see [the section on `Send` and
This is okay because:
* You can only get a mutable reference to the value inside an `Arc` if and only
if it is the only `Arc` referencing that data (which only happens in `Drop`)
* We use atomics for the shared mutable reference counting
```rust,ignore
unsafe impl<T: Sync + Send> Send for Arc<T> {}
@ -56,26 +51,59 @@ bounds, it would be possible to share values that are thread-unsafe across a
thread boundary via an `Arc`, which could possibly cause data races or
unsoundness.
For example, if those bounds were not present, `Arc<Rc<u32>>` would be `Sync` or
`Send`, meaning that you could clone the `Rc` out of the `Arc` to send it across
a thread (without creating an entirely new `Rc`), which would create data races
as `Rc` is not thread-safe.
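To see what these bounds allow and forbid, here's a hedged standalone sketch using the standard library's `Arc` (which carries the same bounds), since the snippets in this chapter aren't compiled:

```rust
use std::sync::Arc; // std's Arc, purely for illustration
use std::thread;

fn main() {
    let shared = Arc::new(0u32);
    let clone = Arc::clone(&shared);
    // u32 is Send + Sync, so Arc<u32> is too, and the clone may cross threads.
    let handle = thread::spawn(move || *clone);
    assert_eq!(handle.join().unwrap(), 0);
    // An Arc<Rc<u32>> would fail to compile here, because Rc<u32> is
    // neither Send nor Sync.
}
```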
## Getting the `ArcInner`
To dereference the `NonNull<T>` pointer into a `&T`, we can call
`NonNull::as_ref`. This is unsafe, unlike the typical `as_ref` function, so we
must call it like this:
```rust,ignore
unsafe { self.ptr.as_ref() }
```
We'll be using this snippet a few times in this code (usually with an associated
`let` binding).
This unsafety is okay because while this `Arc` is alive, we're guaranteed that
the inner pointer is valid.
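As a standalone sketch of this pattern (the names here are illustrative, not part of our `Arc`):

```rust
use std::ptr::NonNull;

fn main() {
    // Build a NonNull the same way Arc::new does: via Box::into_raw.
    let boxed = Box::new(10u32);
    let ptr: NonNull<u32> = NonNull::new(Box::into_raw(boxed)).unwrap();
    // NonNull::as_ref is unsafe: the caller must guarantee the pointer is
    // valid. Here the allocation is still live, so this is sound.
    let value = unsafe { ptr.as_ref() };
    assert_eq!(*value, 10);
    // Reclaim the allocation so the example doesn't leak.
    unsafe { drop(Box::from_raw(ptr.as_ptr())); }
}
```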
## Deref
Alright. Now we can make `Arc`s (and soon will be able to clone and destroy them correctly), but how do we get
to the data inside?
What we need now is an implementation of `Deref`.
We'll need to import the trait:
```rust,ignore
use std::ops::Deref;
```
And here's the implementation:
```rust,ignore
impl<T> Deref for Arc<T> {
type Target = T;
fn deref(&self) -> &T {
let inner = unsafe { self.ptr.as_ref() };
&inner.data
}
}
```
Pretty simple, eh? This simply dereferences the `NonNull` pointer to the
`ArcInner<T>`, then gets a reference to the data inside.
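As a quick illustration of what `Deref` buys us (shown with the standard library's `Arc`, since the snippets in this chapter aren't compiled):

```rust
use std::sync::Arc; // std's Arc, purely for illustration

fn main() {
    let a = Arc::new(5u32);
    // Deref lets us treat `a` like a &u32:
    assert_eq!(*a, 5);
    // ...and call u32's methods directly through the Arc.
    assert_eq!(a.pow(2), 25);
}
```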
## Code
Here's all the code from this section:
```rust,ignore
use std::ops::Deref;
impl<T> Arc<T> {
pub fn new(data: T) -> Arc<T> {
// We start the reference count at 1, as that first reference is the
@ -88,17 +116,21 @@ impl<T> Arc<T> {
// It is okay to call `.unwrap()` here as we get a pointer from
// `Box::into_raw` which is guaranteed to not be null.
ptr: NonNull::new(Box::into_raw(boxed)).unwrap(),
phantom: PhantomData,
}
}
}
unsafe impl<T: Sync + Send> Send for Arc<T> {}
unsafe impl<T: Sync + Send> Sync for Arc<T> {}
impl<T> Deref for Arc<T> {
type Target = T;
fn deref(&self) -> &T {
let inner = unsafe { self.ptr.as_ref() };
&inner.data
}
}
```

@ -3,13 +3,17 @@
Now that we've got some basic code set up, we'll need a way to clone the `Arc`.
Basically, we need to:
1. Increment the atomic reference count
2. Construct a new instance of the `Arc` from the inner pointer
First, we need to get access to the `ArcInner`:
```rust,ignore
let inner = unsafe { self.ptr.as_ref() };
```
We can update the atomic reference count as follows:
```rust,ignore
let old_rc = inner.rc.fetch_add(1, Ordering::Relaxed);
```
As described in [the standard library's implementation of `Arc` cloning][2]:
@ -30,15 +34,37 @@ We'll need to add another import to use `Ordering`:
use std::sync::atomic::Ordering;
```
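The key property we rely on is that `fetch_add` returns the value the counter held *before* the addition; a quick standalone check:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let rc = AtomicUsize::new(1);
    // fetch_add atomically adds and returns the previous value.
    let old_rc = rc.fetch_add(1, Ordering::Relaxed);
    assert_eq!(old_rc, 1);
    assert_eq!(rc.load(Ordering::Relaxed), 2);
}
```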
However, we have one problem with this implementation right now. What if someone
decides to `mem::forget` a bunch of Arcs? The code we have written so far (and
will write) assumes that the reference count accurately portrays how many Arcs
are in memory, but with `mem::forget` this is false. Thus, when more and more
Arcs are cloned from this one without them being `Drop`ped and the reference
count being decremented, we can overflow! This will cause use-after-free which
is **INCREDIBLY BAD!**
To handle this, we need to check that the reference count does not go over some
arbitrary value (below `usize::MAX`, as we're storing the reference count as an
`AtomicUsize`), and do *something*.
The standard library's implementation decides to just abort the program (as it
is an incredibly unlikely case in normal code and if it happens, the program is
probably incredibly degenerate) if the reference count reaches `isize::MAX`
(about half of `usize::MAX`) on any thread, on the assumption that there are
probably not about 2 billion threads (or about **9 quintillion** on some 64-bit
machines) incrementing the reference count at once. This is what we'll do.
It's pretty simple to implement this behaviour:
```rust,ignore
if old_rc >= isize::MAX as usize {
std::process::abort();
}
```
Then, we need to return a new instance of the `Arc`:
```rust,ignore
Self {
ptr: self.ptr,
phantom: PhantomData
}
```
@ -48,12 +74,18 @@ use std::sync::atomic::Ordering;
impl<T> Clone for Arc<T> {
fn clone(&self) -> Arc<T> {
let inner = unsafe { self.ptr.as_ref() };
// Using a relaxed ordering is alright here as knowledge of the original
// reference prevents other threads from wrongly deleting the object.
let old_rc = inner.rc.fetch_add(1, Ordering::Relaxed);
if old_rc >= isize::MAX as usize {
std::process::abort();
}
Self {
ptr: self.ptr,
phantom: PhantomData,
}
}
}

@ -1,25 +0,0 @@

@ -6,22 +6,28 @@ low enough, otherwise the data will live forever on the heap.
To do this, we can implement `Drop`.
Basically, we need to:
1. Decrement the reference count
2. If there is only one reference remaining to the data, then:
3. Atomically fence the data to prevent reordering of the use and deletion of
the data
4. Drop the inner data
First, we'll need to get access to the `ArcInner`:
```rust,ignore
let inner = unsafe { self.ptr.as_ref() };
```
Now, we need to decrement the reference count. To streamline our code, we can
also return if the returned value from `fetch_sub` (the value of the reference
count before decrementing it) is not equal to `1` (which happens when we are not
the last reference to the data).
```rust,ignore
if inner.rc.fetch_sub(1, Ordering::Release) != 1 {
return;
}
```
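A standalone sketch of why comparing `fetch_sub`'s return value against `1` detects the last reference:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    // Two notional references to the data.
    let rc = AtomicUsize::new(2);
    // First decrement: the previous value is 2, so we are not the last.
    assert_ne!(rc.fetch_sub(1, Ordering::Release), 1);
    // Second decrement: the previous value is 1, so we were the last
    // reference and may free the data (after an Acquire fence).
    assert_eq!(rc.fetch_sub(1, Ordering::Release), 1);
}
```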
We use the `Release` ordering here so that every use of the data in the current
thread is ordered before the decrement.
We then need to create an atomic fence to prevent reordering of the use of the
data and deletion of the data. As described in [the standard library's
implementation of `Arc`][3]:
@ -78,7 +84,8 @@ Now, let's wrap this all up inside the `Drop` implementation:
```rust,ignore
impl<T> Drop for Arc<T> {
fn drop(&mut self) {
let inner = unsafe { self.ptr.as_ref() };
if inner.rc.fetch_sub(1, Ordering::Release) != 1 {
return;
}
// This fence is needed to prevent reordering of the use and deletion

@ -9,7 +9,7 @@ use std::sync::atomic::{self, AtomicUsize, Ordering};
pub struct Arc<T> {
ptr: NonNull<ArcInner<T>>,
phantom: PhantomData<ArcInner<T>>,
}
pub struct ArcInner<T> {
@ -29,36 +29,47 @@ impl<T> Arc<T> {
// It is okay to call `.unwrap()` here as we get a pointer from
// `Box::into_raw` which is guaranteed to not be null.
ptr: NonNull::new(Box::into_raw(boxed)).unwrap(),
phantom: PhantomData,
}
}
}
unsafe impl<T: Sync + Send> Send for Arc<T> {}
unsafe impl<T: Sync + Send> Sync for Arc<T> {}
impl<T> Deref for Arc<T> {
type Target = T;
fn deref(&self) -> &T {
let inner = unsafe { self.ptr.as_ref() };
&inner.data
}
}
impl<T> Clone for Arc<T> {
fn clone(&self) -> Arc<T> {
let inner = unsafe { self.ptr.as_ref() };
// Using a relaxed ordering is alright here as knowledge of the original
// reference prevents other threads from wrongly deleting the object.
let old_rc = inner.rc.fetch_add(1, Ordering::Relaxed);
if old_rc >= isize::MAX as usize {
std::process::abort();
}
Self {
ptr: self.ptr,
phantom: PhantomData,
}
}
}
impl<T> Drop for Arc<T> {
fn drop(&mut self) {
let inner = unsafe { self.ptr.as_ref() };
if inner.rc.fetch_sub(1, Ordering::Release) != 1 {
return;
}
// This fence is needed to prevent reordering of the use and deletion
@ -69,12 +80,4 @@ impl<T> Drop for Arc<T> {
unsafe { Box::from_raw(self.ptr.as_ptr()); }
}
}
```
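To check that the finished type behaves like a reference-counted pointer, the code above can be compiled and exercised directly; the `main` below is an illustrative addition, not part of the chapter's code:

```rust
use std::marker::PhantomData;
use std::ops::Deref;
use std::ptr::NonNull;
use std::sync::atomic::{self, AtomicUsize, Ordering};

pub struct Arc<T> {
    ptr: NonNull<ArcInner<T>>,
    phantom: PhantomData<ArcInner<T>>,
}

pub struct ArcInner<T> {
    rc: AtomicUsize,
    data: T,
}

impl<T> Arc<T> {
    pub fn new(data: T) -> Arc<T> {
        let boxed = Box::new(ArcInner {
            rc: AtomicUsize::new(1),
            data,
        });
        Arc {
            ptr: NonNull::new(Box::into_raw(boxed)).unwrap(),
            phantom: PhantomData,
        }
    }
}

unsafe impl<T: Sync + Send> Send for Arc<T> {}
unsafe impl<T: Sync + Send> Sync for Arc<T> {}

impl<T> Deref for Arc<T> {
    type Target = T;
    fn deref(&self) -> &T {
        let inner = unsafe { self.ptr.as_ref() };
        &inner.data
    }
}

impl<T> Clone for Arc<T> {
    fn clone(&self) -> Arc<T> {
        let inner = unsafe { self.ptr.as_ref() };
        let old_rc = inner.rc.fetch_add(1, Ordering::Relaxed);
        if old_rc >= isize::MAX as usize {
            std::process::abort();
        }
        Self { ptr: self.ptr, phantom: PhantomData }
    }
}

impl<T> Drop for Arc<T> {
    fn drop(&mut self) {
        let inner = unsafe { self.ptr.as_ref() };
        if inner.rc.fetch_sub(1, Ordering::Release) != 1 {
            return;
        }
        atomic::fence(Ordering::Acquire);
        unsafe { drop(Box::from_raw(self.ptr.as_ptr())); }
    }
}

fn main() {
    let a = Arc::new(String::from("hello"));
    let b = a.clone();
    // Both handles deref to the same data, even across threads.
    let worker = std::thread::spawn(move || b.len());
    assert_eq!(*a, "hello");
    assert_eq!(worker.join().unwrap(), 5);
}
```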

@ -2,14 +2,25 @@
Let's start by making the layout for our implementation of `Arc`.
An `Arc<T>` provides thread-safe shared ownership of a value of type `T`,
allocated in the heap. Sharing implies immutability in Rust, so we don't need to
design anything that manages access to that value, right? Although interior
mutability types like Mutex allow Arc's users to create shared mutability, Arc
itself doesn't need to concern itself with these issues.
However there _is_ one place where Arc needs to concern itself with mutation:
destruction. When all the owners of the Arc go away, we need to be able to
`drop` its contents and free its allocation. So we need a way for an owner to
know if it's the _last_ owner, and the simplest way to do that is with a count
of the owners -- Reference Counting.
Unfortunately, this reference count is inherently shared mutable state, so Arc
_does_ need to think about synchronization. We _could_ use a Mutex for this, but
that's overkill. Instead, we'll use atomics. And since everyone already needs a
pointer to the T's allocation, we might as well put the reference count in that
same allocation.
Naively, it would look something like this:
```rust,ignore
use std::sync::atomic;
@ -41,7 +52,7 @@ To fix the second problem, we can include a `PhantomData` marker containing an
`ArcInner<T>`. This will tell the drop checker that we have some notion of
ownership of a value of `ArcInner<T>` (which itself contains some `T`).
With these changes we get our final structure:
```rust,ignore
use std::marker::PhantomData;
use std::ptr::NonNull;
