Adjust Vec to build on stable Rust (#223)

Co-authored-by: Yuki Okushi <jtitor@2k36.org>
Brent Kerby 4 years ago committed by GitHub
parent 132a746984
commit 951371fb74

@ -1,41 +1,52 @@
# Allocating Memory
Using `NonNull` throws a wrench in an important feature of Vec (and indeed all of
the std collections): creating an empty Vec doesn't actually allocate at all. This
is not the same as allocating a zero-sized memory block, which is not allowed by
the global allocator (it results in undefined behavior!). So if we can't allocate,
but also can't put a null pointer in `ptr`, what do we do in `Vec::new`? Well, we
just put some other garbage in there!
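
You can observe the same behavior on the standard `Vec` (a quick illustrative
check, not part of the original text):

```rust
fn main() {
    // A fresh, empty Vec owns no heap memory at all.
    let v: Vec<i32> = Vec::new();
    assert_eq!(v.capacity(), 0);
}
```
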
This is perfectly fine because we already have `cap == 0` as our sentinel for no
allocation. We don't even need to handle it specially in almost any code because
we usually need to check if `cap > len` or `len > 0` anyway. The recommended
Rust value to put here is `mem::align_of::<T>()`. `NonNull` provides a convenience
for this: `NonNull::dangling()`. There are quite a few places where we'll
want to use `dangling` because there's no real allocation to talk about but
`null` would make the compiler do bad things.
So:
```rust,ignore
use std::mem;

impl<T> Vec<T> {
    fn new() -> Self {
        assert!(mem::size_of::<T>() != 0, "We're not ready to handle ZSTs");
        Vec {
            ptr: NonNull::dangling(),
            len: 0,
            cap: 0,
            _marker: PhantomData,
        }
    }
}
# fn main() {}
```
I slipped in that assert there because zero-sized types will require some
special handling throughout our code, and I want to defer the issue for now.
Without this assert, some of our early drafts will do some Very Bad Things.
Next we need to figure out what to actually do when we *do* want space. For that,
we use the global allocation functions [`alloc`][alloc], [`realloc`][realloc],
and [`dealloc`][dealloc] which are available in stable Rust in
[`std::alloc`][std_alloc]. These functions are expected to become deprecated in
favor of the methods of [`std::alloc::Global`][Global] after this type is stabilized.
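
To get a feel for how these functions fit together, here is a small hedged
sketch (not from the chapter) of an allocate, grow, free cycle done by hand:

```rust
use std::alloc::{self, Layout};

fn main() {
    unsafe {
        // Allocate room for 4 u32s.
        let old_layout = Layout::array::<u32>(4).unwrap();
        let ptr = alloc::alloc(old_layout);
        assert!(!ptr.is_null());

        // Grow the block (possibly moving it) so it can hold 8 u32s.
        let new_layout = Layout::array::<u32>(8).unwrap();
        let ptr = alloc::realloc(ptr, old_layout, new_layout.size());
        assert!(!ptr.is_null());

        // Free it with a layout describing its *current* size.
        alloc::dealloc(ptr, new_layout);
    }
}
```
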

We'll also need a way to handle out-of-memory (OOM) conditions. The standard
library provides a function [`alloc::handle_alloc_error`][handle_alloc_error],
which will abort the program in a platform-specific manner.
The reason we abort and don't panic is because unwinding can cause allocations
to happen, and that seems like a bad thing to do when your allocator just came
back with "hey I don't have any more memory".
@ -152,52 +163,48 @@ such we will guard against this case explicitly.
Ok with all the nonsense out of the way, let's actually allocate some memory:
```rust,ignore
use std::alloc::{self, Layout};

impl<T> Vec<T> {
    fn grow(&mut self) {
        let (new_cap, new_layout) = if self.cap == 0 {
            (1, Layout::array::<T>(1).unwrap())
        } else {
            // This can't overflow since self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;

            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();
            (new_cap, new_layout)
        };

        // Ensure that the new allocation doesn't exceed `isize::MAX` bytes.
        assert!(new_layout.size() <= isize::MAX as usize, "Allocation too large");

        let new_ptr = if self.cap == 0 {
            unsafe { alloc::alloc(new_layout) }
        } else {
            let old_layout = Layout::array::<T>(self.cap).unwrap();
            let old_ptr = self.ptr.as_ptr() as *mut u8;
            unsafe { alloc::realloc(old_ptr, old_layout, new_layout.size()) }
        };

        // If allocation fails, `new_ptr` will be null, in which case we abort.
        self.ptr = match NonNull::new(new_ptr as *mut T) {
            Some(p) => p,
            None => alloc::handle_alloc_error(new_layout),
        };

        self.cap = new_cap;
    }
}
# fn main() {}
```
Nothing particularly tricky here. Just computing sizes and alignments and doing
some careful multiplication checks.
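
For a concrete feel for what `Layout::array` hands back, here is a quick
illustrative check (not part of the original text):

```rust
use std::alloc::Layout;

fn main() {
    // Layout::array computes size = n * size_of::<T>() with T's alignment,
    // and returns an error if that multiplication would overflow.
    let layout = Layout::array::<u32>(3).unwrap();
    assert_eq!(layout.size(), 12);
    assert_eq!(layout.align(), 4);
}
```
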
[Global]: ../std/alloc/struct.Global.html
[handle_alloc_error]: ../alloc/alloc/fn.handle_alloc_error.html
[alloc]: ../alloc/alloc/fn.alloc.html
[realloc]: ../alloc/alloc/fn.realloc.html
[dealloc]: ../alloc/alloc/fn.dealloc.html
[std_alloc]: ../alloc/alloc/index.html

@ -7,20 +7,17 @@ ask Rust if `T` `needs_drop` and omit the calls to `pop`. However in practice
LLVM is *really* good at removing simple side-effect free code like this, so I
wouldn't bother unless you notice it's not being stripped (in this case it is).
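
If you did want to make that check explicit, `std::mem::needs_drop` is the
stable way to ask; a tiny illustrative example (not part of the chapter):

```rust
use std::mem;

fn main() {
    // needs_drop reports whether dropping a value of this type runs any code.
    assert!(!mem::needs_drop::<i32>());
    assert!(mem::needs_drop::<String>());
}
```
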
We must not call `alloc::dealloc` when `self.cap == 0`, as in this case we
haven't actually allocated any memory.
```rust,ignore
impl<T> Drop for Vec<T> {
    fn drop(&mut self) {
        if self.cap != 0 {
            while let Some(_) = self.pop() { }
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe {
                alloc::dealloc(self.ptr.as_ptr() as *mut u8, layout);
            }
        }
    }

@ -1,79 +1,85 @@
# The Final Code
```rust
use std::alloc::{self, Layout};
use std::marker::PhantomData;
use std::mem;
use std::ops::{Deref, DerefMut};
use std::ptr::{self, NonNull};
struct RawVec<T> {
    ptr: NonNull<T>,
    cap: usize,
    _marker: PhantomData<T>,
}
unsafe impl<T: Send> Send for RawVec<T> {}
unsafe impl<T: Sync> Sync for RawVec<T> {}
impl<T> RawVec<T> {
    fn new() -> Self {
        // !0 is usize::MAX. This branch should be stripped at compile time.
        let cap = if mem::size_of::<T>() == 0 { !0 } else { 0 };

        // `NonNull::dangling()` doubles as "unallocated" and "zero-sized allocation"
        RawVec {
            ptr: NonNull::dangling(),
            cap: cap,
            _marker: PhantomData,
        }
    }
    fn grow(&mut self) {
        // since we set the capacity to usize::MAX when T has size 0,
        // getting to here necessarily means the Vec is overfull.
        assert!(mem::size_of::<T>() != 0, "capacity overflow");

        let (new_cap, new_layout) = if self.cap == 0 {
            (1, Layout::array::<T>(1).unwrap())
        } else {
            // This can't overflow because we ensure self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;

            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();
            (new_cap, new_layout)
        };

        // Ensure that the new allocation doesn't exceed `isize::MAX` bytes.
        assert!(
            new_layout.size() <= isize::MAX as usize,
            "Allocation too large"
        );

        let new_ptr = if self.cap == 0 {
            unsafe { alloc::alloc(new_layout) }
        } else {
            let old_layout = Layout::array::<T>(self.cap).unwrap();
            let old_ptr = self.ptr.as_ptr() as *mut u8;
            unsafe { alloc::realloc(old_ptr, old_layout, new_layout.size()) }
        };

        // If allocation fails, `new_ptr` will be null, in which case we abort.
        self.ptr = match NonNull::new(new_ptr as *mut T) {
            Some(p) => p,
            None => alloc::handle_alloc_error(new_layout),
        };

        self.cap = new_cap;
    }
}
impl<T> Drop for RawVec<T> {
    fn drop(&mut self) {
        let elem_size = mem::size_of::<T>();
        if self.cap != 0 && elem_size != 0 {
            unsafe {
                alloc::dealloc(
                    self.ptr.as_ptr() as *mut u8,
                    Layout::array::<T>(self.cap).unwrap(),
                );
            }
        }
    }
@ -85,21 +91,30 @@ pub struct Vec<T> {
}
impl<T> Vec<T> {
    fn ptr(&self) -> *mut T {
        self.buf.ptr.as_ptr()
    }

    fn cap(&self) -> usize {
        self.buf.cap
    }

    pub fn new() -> Self {
        Vec {
            buf: RawVec::new(),
            len: 0,
        }
    }
    pub fn push(&mut self, elem: T) {
        if self.len == self.cap() {
            self.buf.grow();
        }

        unsafe {
            ptr::write(self.ptr().offset(self.len as isize), elem);
        }

        // Can't overflow, we'll OOM first.
        self.len += 1;
    }
@ -108,22 +123,22 @@ impl<T> Vec<T> {
            None
        } else {
            self.len -= 1;
            unsafe { Some(ptr::read(self.ptr().offset(self.len as isize))) }
        }
    }
    pub fn insert(&mut self, index: usize, elem: T) {
        assert!(index <= self.len, "index out of bounds");
        if self.cap() == self.len {
            self.buf.grow();
        }

        unsafe {
            if index < self.len {
                ptr::copy(
                    self.ptr().offset(index as isize),
                    self.ptr().offset(index as isize + 1),
                    self.len - index,
                );
            }
            ptr::write(self.ptr().offset(index as isize), elem);
            self.len += 1;
        }
@ -134,9 +149,11 @@ impl<T> Vec<T> {
        unsafe {
            self.len -= 1;
            let result = ptr::read(self.ptr().offset(index as isize));
            ptr::copy(
                self.ptr().offset(index as isize + 1),
                self.ptr().offset(index as isize),
                self.len - index,
            );
            result
        }
    }
@ -181,23 +198,15 @@ impl<T> Drop for Vec<T> {
impl<T> Deref for Vec<T> {
    type Target = [T];
    fn deref(&self) -> &[T] {
        unsafe { std::slice::from_raw_parts(self.ptr(), self.len) }
    }
}

impl<T> DerefMut for Vec<T> {
    fn deref_mut(&mut self) -> &mut [T] {
        unsafe { std::slice::from_raw_parts_mut(self.ptr(), self.len) }
    }
}
struct RawValIter<T> {
start: *const T,
@ -239,8 +248,8 @@ impl<T> Iterator for RawValIter<T> {
    fn size_hint(&self) -> (usize, Option<usize>) {
        let elem_size = mem::size_of::<T>();
        let len = (self.end as usize - self.start as usize) /
                  if elem_size == 0 { 1 } else { elem_size };
        (len, Some(len))
    }
}
@ -262,9 +271,6 @@ impl<T> DoubleEndedIterator for RawValIter<T> {
}
}
pub struct IntoIter<T> {
_buf: RawVec<T>, // we don't actually care about this. Just need it to live.
iter: RawValIter<T>,
@ -272,12 +278,18 @@ pub struct IntoIter<T> {
impl<T> Iterator for IntoIter<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        self.iter.next()
    }
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

impl<T> DoubleEndedIterator for IntoIter<T> {
    fn next_back(&mut self) -> Option<T> {
        self.iter.next_back()
    }
}
impl<T> Drop for IntoIter<T> {
@ -286,9 +298,6 @@ impl<T> Drop for IntoIter<T> {
}
}
pub struct Drain<'a, T: 'a> {
vec: PhantomData<&'a mut Vec<T>>,
iter: RawValIter<T>,
@ -296,12 +305,18 @@ pub struct Drain<'a, T: 'a> {
impl<'a, T> Iterator for Drain<'a, T> {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        self.iter.next()
    }
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

impl<'a, T> DoubleEndedIterator for Drain<'a, T> {
    fn next_back(&mut self) -> Option<T> {
        self.iter.next_back()
    }
}
impl<'a, T> Drop for Drain<'a, T> {
@ -321,6 +336,7 @@ impl<'a, T> Drop for Drain<'a, T> {
#
# mod tests {
# use super::*;
#
# pub fn create_push_pop() {
# let mut v = Vec::new();
# v.push(1);

@ -20,12 +20,10 @@ pub fn insert(&mut self, index: usize, elem: T) {
if self.cap == self.len { self.grow(); }
unsafe {
if index < self.len {
// ptr::copy(src, dest, len): "copy from source to dest len elems"
// ptr::copy(src, dest, len): "copy from src to dest len elems"
ptr::copy(self.ptr.as_ptr().offset(index as isize),
self.ptr.as_ptr().offset(index as isize + 1),
self.len - index);
}
ptr::write(self.ptr.as_ptr().offset(index as isize), elem);
self.len += 1;
}
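
As an aside, here is a small self-contained sketch (not from the chapter) of
what `ptr::copy` does; like C's `memmove`, the source and destination may
overlap:

```rust
use std::ptr;

fn main() {
    let mut a = [1, 2, 3, 4, 0];
    let p = a.as_mut_ptr();
    unsafe {
        // Shift a[1..4] one slot to the right, making room at index 1.
        ptr::copy(p.add(1), p.add(2), 3);
    }
    assert_eq!(a, [1, 2, 2, 3, 4]);
}
```
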

@ -44,10 +44,11 @@ So we're going to use the following struct:
```rust,ignore
pub struct IntoIter<T> {
    buf: NonNull<T>,
    cap: usize,
    start: *const T,
    end: *const T,
    _marker: PhantomData<T>,
}
```
@ -75,6 +76,7 @@ impl<T> Vec<T> {
} else {
ptr.as_ptr().offset(len as isize)
},
_marker: PhantomData,
}
}
}
@ -134,11 +136,9 @@ impl<T> Drop for IntoIter<T> {
        if self.cap != 0 {
            // drop any remaining elements
            for _ in &mut *self {}
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe {
                alloc::dealloc(self.buf.as_ptr() as *mut u8, layout);
            }
        }
    }

@ -22,64 +22,39 @@ conservatively assume we don't own any values of type `T`. See [the chapter
on ownership and lifetimes][ownership] for all the details on variance and
drop check.
As we saw in the ownership chapter, the standard library uses `Unique<T>` in place of
`*mut T` when it has a raw pointer to an allocation that it owns. Unique is unstable,
so we'd like to not use it if possible, though.
As a recap, Unique is a wrapper around a raw pointer that declares that:
* We are covariant over `T` (see the short covariance sketch after this list)
* We may own a value of type `T` (for drop check)
* We are Send/Sync if `T` is Send/Sync
* Our pointer is never null (so `Option<Vec<T>>` is null-pointer-optimized)
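
For instance, covariance is what lets this purely illustrative snippet compile
with the standard `Vec`:

```rust
fn append<'a>(mut v: Vec<&'a str>, s: &'a str) -> Vec<&'a str> {
    v.push(s);
    v
}

fn main() {
    // Vec<T> is covariant in T: a Vec<&'static str> can be passed where a
    // Vec<&'a str> with some shorter lifetime 'a is expected.
    let long_lived: Vec<&'static str> = vec!["hello"];
    let local = String::from("world");
    let both = append(long_lived, &local);
    assert_eq!(both, ["hello", "world"]);
}
```
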
We can implement all of the above requirements in stable Rust. To do this, instead
of using `Unique<T>` we will use [`NonNull<T>`][NonNull], another wrapper around a
raw pointer, which gives us two of the above properties, namely it is covariant
over `T` and is declared to never be null. By adding a `PhantomData<T>` (for drop
check) and implementing Send/Sync if `T` is, we get the same results as using
`Unique<T>`:
```rust
use std::ptr::NonNull;
use std::marker::PhantomData;
use std::ops::Deref;
use std::mem;

pub struct Vec<T> {
    ptr: NonNull<T>,
    cap: usize,
    len: usize,
    _marker: PhantomData<T>,
}

unsafe impl<T: Send> Send for Vec<T> {}
unsafe impl<T: Sync> Sync for Vec<T> {}
# fn main() {}
```
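
A quick way to see the "never null" niche pay off (an illustrative check, not
from the original text):

```rust
use std::mem::size_of;
use std::ptr::NonNull;

fn main() {
    // NonNull can never be null, so Option uses null to encode None:
    // Option<NonNull<T>> is still just one pointer wide.
    assert_eq!(size_of::<Option<NonNull<u32>>>(), size_of::<*mut u32>());
    // The same optimization kicks in for the real Vec.
    assert_eq!(size_of::<Option<Vec<u32>>>(), size_of::<Vec<u32>>());
}
```
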
[ownership]: ownership.html
[NonNull]: ../std/ptr/struct.NonNull.html

@ -10,56 +10,64 @@ allocating, growing, and freeing:
```rust,ignore
struct RawVec<T> {
    ptr: NonNull<T>,
    cap: usize,
    _marker: PhantomData<T>,
}
unsafe impl<T: Send> Send for RawVec<T> {}
unsafe impl<T: Sync> Sync for RawVec<T> {}
impl<T> RawVec<T> {
    fn new() -> Self {
        assert!(mem::size_of::<T>() != 0, "TODO: implement ZST support");
        RawVec {
            ptr: NonNull::dangling(),
            cap: 0,
            _marker: PhantomData,
        }
    }
// unchanged from Vec
    fn grow(&mut self) {
        let (new_cap, new_layout) = if self.cap == 0 {
            (1, Layout::array::<T>(1).unwrap())
        } else {
            // This can't overflow because we ensure self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;

            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();
            (new_cap, new_layout)
        };

        // Ensure that the new allocation doesn't exceed `isize::MAX` bytes.
        assert!(new_layout.size() <= isize::MAX as usize, "Allocation too large");

        let new_ptr = if self.cap == 0 {
            unsafe { alloc::alloc(new_layout) }
        } else {
            let old_layout = Layout::array::<T>(self.cap).unwrap();
            let old_ptr = self.ptr.as_ptr() as *mut u8;
            unsafe { alloc::realloc(old_ptr, old_layout, new_layout.size()) }
        };

        // If allocation fails, `new_ptr` will be null, in which case we abort.
        self.ptr = match NonNull::new(new_ptr as *mut T) {
            Some(p) => p,
            None => alloc::handle_alloc_error(new_layout),
        };

        self.cap = new_cap;
    }
}
impl<T> Drop for RawVec<T> {
    fn drop(&mut self) {
        if self.cap != 0 {
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe {
                alloc::dealloc(self.ptr.as_ptr() as *mut u8, layout);
            }
        }
    }
@ -75,18 +83,25 @@ pub struct Vec<T> {
}
impl<T> Vec<T> {
    fn ptr(&self) -> *mut T {
        self.buf.ptr.as_ptr()
    }

    fn cap(&self) -> usize {
        self.buf.cap
    }

    pub fn new() -> Self {
        Vec {
            buf: RawVec::new(),
            len: 0,
        }
    }
    // push/pop/insert/remove largely unchanged:
    // * `self.ptr.as_ptr() -> self.ptr()`
    // * `self.cap -> self.cap()`
    // * `self.grow() -> self.buf.grow()`
}
impl<T> Drop for Vec<T> {

@ -19,7 +19,7 @@ Thankfully we abstracted out pointer-iterators and allocating handling into
## Allocating Zero-Sized Types
So if the allocator API doesn't support zero-sized allocations, what on earth
do we store as our allocation? `NonNull::dangling()` of course! Almost every operation
with a ZST is a no-op since ZSTs have exactly one value, and therefore no state needs
to be considered to store or load them. This actually extends to `ptr::read` and
`ptr::write`: they won't actually look at the pointer at all. As such we never need
@ -38,56 +38,62 @@ impl<T> RawVec<T> {
// !0 is usize::MAX. This branch should be stripped at compile time.
let cap = if mem::size_of::<T>() == 0 { !0 } else { 0 };
        // `NonNull::dangling()` doubles as "unallocated" and "zero-sized allocation"
        RawVec {
            ptr: NonNull::dangling(),
            cap: cap,
            _marker: PhantomData,
        }
    }
    fn grow(&mut self) {
        // since we set the capacity to usize::MAX when T has size 0,
        // getting to here necessarily means the Vec is overfull.
        assert!(mem::size_of::<T>() != 0, "capacity overflow");

        let (new_cap, new_layout) = if self.cap == 0 {
            (1, Layout::array::<T>(1).unwrap())
        } else {
            // This can't overflow because we ensure self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;

            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();
            (new_cap, new_layout)
        };

        // Ensure that the new allocation doesn't exceed `isize::MAX` bytes.
        assert!(new_layout.size() <= isize::MAX as usize, "Allocation too large");

        let new_ptr = if self.cap == 0 {
            unsafe { alloc::alloc(new_layout) }
        } else {
            let old_layout = Layout::array::<T>(self.cap).unwrap();
            let old_ptr = self.ptr.as_ptr() as *mut u8;
            unsafe { alloc::realloc(old_ptr, old_layout, new_layout.size()) }
        };

        // If allocation fails, `new_ptr` will be null, in which case we abort.
        self.ptr = match NonNull::new(new_ptr as *mut T) {
            Some(p) => p,
            None => alloc::handle_alloc_error(new_layout),
        };

        self.cap = new_cap;
    }
}
impl<T> Drop for RawVec<T> {
    fn drop(&mut self) {
        let elem_size = mem::size_of::<T>();

        // don't free zero-sized allocations, as they were never allocated.
        if self.cap != 0 && elem_size != 0 {
            unsafe {
                alloc::dealloc(
                    self.ptr.as_ptr() as *mut u8,
                    Layout::array::<T>(self.cap).unwrap(),
                );
            }
        }
    }
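
To make the "no-op" claim concrete, here is a small illustrative sketch (not
from the chapter) showing that zero-sized reads and writes through a dangling
but well-aligned `NonNull` are fine:

```rust
use std::ptr::{self, NonNull};

fn main() {
    // For a zero-sized type, reads and writes never touch memory, so the
    // dangling placeholder pointer works as a perfectly good "allocation".
    let p = NonNull::<()>::dangling().as_ptr();
    unsafe {
        ptr::write(p, ());
        let _unit: () = ptr::read(p);
    }
}
```
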

@ -1,16 +1,10 @@
# Example: Implementing Vec
To bring everything together, we're going to write `std::Vec` from scratch.
We will limit ourselves to stable Rust. In particular we won't use any
intrinsics that could make our code a little bit nicer or more efficient,
because intrinsics are permanently unstable. Many intrinsics *do* become
stabilized elsewhere, though (`std::ptr` and `std::mem` consist of many intrinsics).
Ultimately this means our implementation may not take advantage of all
possible optimizations, though it will be by no means *naive*. We will
