Apply suggested style fixes

Co-authored-by: Yuki Okushi <jtitor@2k36.org>
Brent Kerby 4 years ago committed by GitHub
parent 83e41531e1
commit 0e33cf622f

@@ -1,16 +1,16 @@
# Allocating Memory
Using `NonNull` throws a wrench in an important feature of Vec (and indeed all of
the std collections): creating an empty Vec doesn't actually allocate at all. This
is not the same as allocating a zero-sized memory block, which is not allowed by
the global allocator (it results in undefined behavior!). So if we can't allocate,
but also can't put a null pointer in `ptr`, what do we do in `Vec::new`? Well, we
just put some other garbage in there!
This is perfectly fine because we already have `cap == 0` as our sentinel for no
allocation. We don't even need to handle it specially in almost any code because
we usually need to check if `cap > len` or `len > 0` anyway. The recommended
Rust value to put here is `mem::align_of::<T>()`. `NonNull` provides a convenience
for this: `NonNull::dangling()`. There are quite a few places where we'll
want to use `dangling` because there's no real allocation to talk about but
`null` would make the compiler do bad things.
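A quick standalone check (not part of the chapter's listing) confirms what the text says: `NonNull::dangling()` is non-null, touches no allocator, and its address is exactly `mem::align_of::<T>()`:

```rust
use std::mem;
use std::ptr::NonNull;

fn main() {
    // `dangling()` produces a well-aligned, non-null pointer without
    // allocating; its address equals the type's alignment.
    let p = NonNull::<u64>::dangling();
    assert_eq!(p.as_ptr() as usize, mem::align_of::<u64>());

    // Creating an empty std Vec likewise allocates nothing:
    // the capacity stays 0, which is the sentinel described above.
    let v: Vec<u64> = Vec::new();
    assert_eq!(v.capacity(), 0);
}
```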
@@ -23,11 +23,11 @@ use std::mem;
impl<T> Vec<T> {
    fn new() -> Self {
        assert!(mem::size_of::<T>() != 0, "We're not ready to handle ZSTs");
        Vec {
            ptr: NonNull::dangling(),
            len: 0,
            cap: 0,
            _marker: PhantomData,
        }
    }
}
@@ -45,7 +45,7 @@ and [`dealloc`][dealloc] which are available in stable Rust in
favor of the methods of [`std::alloc::Global`][Global] after this type is stabilized.
We'll also need a way to handle out-of-memory (OOM) conditions. The standard
library provides a function [`alloc::handle_alloc_error`][handle_alloc_error],
which will abort the program in a platform-specific manner.
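In isolation the pattern looks like this (a minimal sketch using the raw `std::alloc` API, not the chapter's final code):

```rust
use std::alloc::{self, handle_alloc_error, Layout};

fn main() {
    // A small, non-zero-sized layout; calling `alloc` with a
    // zero-sized layout would be undefined behavior.
    let layout = Layout::array::<u32>(16).unwrap();
    unsafe {
        let ptr = alloc::alloc(layout);
        if ptr.is_null() {
            // Abort, don't panic: unwinding may itself allocate.
            handle_alloc_error(layout);
        }
        alloc::dealloc(ptr, layout);
    }
}
```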
The reason we abort and don't panic is that unwinding can cause allocations
to happen, and that seems like a bad thing to do when your allocator just came
@@ -171,9 +171,9 @@ impl<T> Vec<T> {
            (1, Layout::array::<T>(1).unwrap())
        } else {
            // This can't overflow since self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;
            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();
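Both claims in those comments can be checked directly: doubling an in-range capacity produces a valid layout, while `Layout::array` rejects any byte count over `isize::MAX` rather than wrapping (a standalone sketch, not the chapter's code):

```rust
use std::alloc::Layout;

fn main() {
    // An in-range capacity: `Layout::array` succeeds, so `unwrap` is fine.
    assert!(Layout::array::<u32>(1024).is_ok());

    // A byte count past isize::MAX is rejected with an error rather than
    // overflowing, which is what makes the `unwrap` in `grow` safe.
    let too_big = (isize::MAX as usize / 2) + 1; // * 2 bytes > isize::MAX
    assert!(Layout::array::<u16>(too_big).is_err());
}
```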

@@ -16,8 +16,8 @@ impl<T> Drop for Vec<T> {
        if self.cap != 0 {
            while let Some(_) = self.pop() { }
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe {
                alloc::dealloc(self.ptr.as_ptr() as *mut u8, layout);
            }
        }
    }
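The pop-until-empty strategy is what runs every element's destructor before the buffer is freed. The effect can be observed with std's `Vec` and a drop-counting type (an illustrative stand-in, not the chapter's `Vec`):

```rust
use std::cell::Cell;

thread_local! {
    // Counts how many `Noisy` values have been dropped.
    static DROPS: Cell<usize> = Cell::new(0);
}

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

fn main() {
    let mut v = vec![Noisy, Noisy, Noisy];
    // Same shape as the Drop impl above: pop until empty so every element
    // is dropped, leaving only the raw buffer to deallocate.
    while let Some(_) = v.pop() {}
    DROPS.with(|d| assert_eq!(d.get(), 3));
}
```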

@@ -21,7 +21,7 @@ impl<T> RawVec<T> {
        // !0 is usize::MAX. This branch should be stripped at compile time.
        let cap = if mem::size_of::<T>() == 0 { !0 } else { 0 };
        // `NonNull::dangling()` doubles as "unallocated" and "zero-sized allocation"
        RawVec {
            ptr: NonNull::dangling(),
            cap: cap,
@@ -40,7 +40,7 @@ impl<T> RawVec<T> {
            // This can't overflow because we ensure self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;
            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();
@@ -248,7 +248,7 @@ impl<T> Iterator for RawValIter<T> {
    fn size_hint(&self) -> (usize, Option<usize>) {
        let elem_size = mem::size_of::<T>();
        let len = (self.end as usize - self.start as usize) /
                  if elem_size == 0 { 1 } else { elem_size };
        (len, Some(len))
    }
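The divide-by-one guard is what keeps this correct for zero-sized types, where `start` and `end` act as a counter rather than real addresses. A tiny helper mirroring that arithmetic (hypothetical, for illustration only):

```rust
use std::mem;

// Mirrors the arithmetic in `size_hint`: byte distance divided by element
// size, with ZSTs treated as size 1 so we never divide by zero.
fn remaining<T>(start: usize, end: usize) -> usize {
    let elem_size = mem::size_of::<T>();
    (end - start) / if elem_size == 0 { 1 } else { elem_size }
}

fn main() {
    // Sized elements: 40 bytes of u32 is 10 elements.
    assert_eq!(remaining::<u32>(0, 40), 10);
    // ZSTs: the "pointers" count elements directly.
    assert_eq!(remaining::<()>(0, 10), 10);
}
```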
@@ -336,7 +336,7 @@ impl<'a, T> Drop for Drain<'a, T> {
#
# mod tests {
# use super::*;
#
# pub fn create_push_pop() {
# let mut v = Vec::new();
# v.push(1);
@@ -354,7 +354,7 @@ impl<'a, T> Drop for Drain<'a, T> {
# assert_eq!(5, x);
# assert_eq!(1, v.len());
# }
#
# pub fn iter_test() {
# let mut v = Vec::new();
# for i in 0..10 {
@@ -367,7 +367,7 @@ impl<'a, T> Drop for Drain<'a, T> {
# assert_eq!(0, *first);
# assert_eq!(9, *last);
# }
#
# pub fn test_drain() {
# let mut v = Vec::new();
# for i in 0..10 {
@@ -384,19 +384,19 @@ impl<'a, T> Drop for Drain<'a, T> {
# v.push(Box::new(1));
# assert_eq!(1, *v.pop().unwrap());
# }
#
# pub fn test_zst() {
# let mut v = Vec::new();
# for _i in 0..10 {
# v.push(())
# }
#
# let mut count = 0;
#
# for _ in v.into_iter() {
# count += 1
# }
#
# assert_eq!(10, count);
# }
# }

@@ -137,8 +137,8 @@ impl<T> Drop for IntoIter<T> {
            // drop any remaining elements
            for _ in &mut *self {}
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe {
                alloc::dealloc(self.buf.as_ptr() as *mut u8, layout);
            }
        }
    }

@@ -33,11 +33,11 @@ As a recap, Unique is a wrapper around a raw pointer that declares that:
* We are Send/Sync if `T` is Send/Sync
* Our pointer is never null (so `Option<Vec<T>>` is null-pointer-optimized)
We can implement all of the above requirements in stable Rust. To do this, instead
of using `Unique<T>` we will use [`NonNull<T>`][NonNull], another wrapper around a
raw pointer, which gives us two of the above properties, namely it is covariant
over `T` and is declared to never be null. By adding a `PhantomData<T>` (for drop
check) and implementing Send/Sync if `T` is, we get the same results as using
`Unique<T>`:
```rust
@@ -57,4 +57,4 @@ unsafe impl<T: Sync> Sync for Vec<T> {}
```
[ownership]: ownership.html
[NonNull]: ../std/ptr/struct.NonNull.html
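The "never null" property is what makes `Option<Vec<T>>` cost nothing, as claimed above; this is observable on the std types (a standalone check, not part of the listing):

```rust
use std::mem::size_of;
use std::ptr::NonNull;

fn main() {
    // Because the pointer can never be null, `None` is encoded as the null
    // address: the Option adds no space over the bare pointer.
    assert_eq!(size_of::<Option<NonNull<u8>>>(), size_of::<*mut u8>());
    // The same optimization carries through to Vec itself.
    assert_eq!(size_of::<Option<Vec<u8>>>(), size_of::<Vec<u8>>());
}
```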

@@ -51,5 +51,5 @@ pub fn pop(&mut self) -> Option<T> {
            Some(ptr::read(self.ptr.as_ptr().offset(self.len as isize)))
        }
    }
}
```

@@ -77,7 +77,6 @@ impl<T> Drop for RawVec<T> {
And change Vec as follows:
```rust,ignore
pub struct Vec<T> {
    buf: RawVec<T>,
    len: usize,

@@ -38,7 +38,7 @@ impl<T> RawVec<T> {
        // !0 is usize::MAX. This branch should be stripped at compile time.
        let cap = if mem::size_of::<T>() == 0 { !0 } else { 0 };
        // `NonNull::dangling()` doubles as "unallocated" and "zero-sized allocation"
        RawVec {
            ptr: NonNull::dangling(),
            cap: cap,
@@ -57,7 +57,7 @@ impl<T> RawVec<T> {
            // This can't overflow because we ensure self.cap <= isize::MAX.
            let new_cap = 2 * self.cap;
            // `Layout::array` checks that the number of bytes is <= usize::MAX,
            // but this is redundant since old_layout.size() <= isize::MAX,
            // so the `unwrap` should never fail.
            let new_layout = Layout::array::<T>(new_cap).unwrap();

@@ -1,9 +1,9 @@
# Example: Implementing Vec
To bring everything together, we're going to write `std::Vec` from scratch.
We will limit ourselves to stable Rust. In particular we won't use any
intrinsics that could make our code a little bit nicer or more efficient because
intrinsics are permanently unstable. Although many intrinsics *do* become
stabilized elsewhere (`std::ptr` and `std::mem` consist of many intrinsics).
Ultimately this means our implementation may not take advantage of all
