Audit `ignore` annotations

pull/288/head
Yuki Okushi 3 years ago
parent 5de61f9784
commit 5e789618d9

@ -56,6 +56,7 @@ and `output` overlap, such as `compute(&x, &mut x)`.
With that input, we could get this execution:
<!-- ignore: expanded code -->
```rust,ignore
// input == output == 0xabad1dea
// *input == *output == 20
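// (a sketch of how the rest of this execution plausibly plays out)
if *input > 10 {    // true  (*input == 20)
    *output = 1;    // also overwrites *input, because they are the same
}
if *input > 5 {     // false (*input is now 1)
    *output *= 2;
}
                    // *input == *output == 1
```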

@ -10,6 +10,7 @@ We'll first need a way to construct an `Arc<T>`.
This is pretty simple, as we just need to box the `ArcInner<T>` and get a
`NonNull<T>` pointer to it.
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Arc<T> {
pub fn new(data: T) -> Arc<T> {
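        // (sketch of how the body continues) start the reference count at 1,
        // for this first Arc
        let boxed = Box::new(ArcInner {
            rc: AtomicUsize::new(1),
            data,
        });
        Arc {
            // `Box::into_raw` never returns a null pointer, so `unwrap`
            // cannot fail here
            ptr: NonNull::new(Box::into_raw(boxed)).unwrap(),
            phantom: PhantomData,
        }
    }
}
```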
@ -41,6 +42,7 @@ This is okay because:
if it is the only `Arc` referencing that data (which only happens in `Drop`)
* We use atomics for the shared mutable reference counting
<!-- ignore: simplified code -->
```rust,ignore
unsafe impl<T: Sync + Send> Send for Arc<T> {}
unsafe impl<T: Sync + Send> Sync for Arc<T> {}
@ -61,6 +63,8 @@ as `Rc` is not thread-safe.
To dereference the `NonNull<T>` pointer into a `&T`, we can call
`NonNull::as_ref`. This is unsafe, unlike the typical `as_ref` function, so we
must call it like this:
<!-- ignore: simplified code -->
```rust,ignore
unsafe { self.ptr.as_ref() }
```
@ -79,11 +83,15 @@ to the data inside?
What we need now is an implementation of `Deref`.
We'll need to import the trait:
<!-- ignore: simplified code -->
```rust,ignore
use std::ops::Deref;
```
And here's the implementation:
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Deref for Arc<T> {
type Target = T;
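    // (sketch of how this continues) hand out a shared borrow of the data
    // stored inside the ArcInner
    fn deref(&self) -> &T {
        let inner = unsafe { self.ptr.as_ref() };
        &inner.data
    }
}
```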
@ -101,6 +109,8 @@ Pretty simple, eh? This simply dereferences the `NonNull` pointer to the
## Code
Here's all the code from this section:
<!-- ignore: simplified code -->
```rust,ignore
use std::ops::Deref;

@ -9,12 +9,14 @@ Basically, we need to:
First, we need to get access to the `ArcInner`:
<!-- ignore: simplified code -->
```rust,ignore
let inner = unsafe { self.ptr.as_ref() };
```
We can update the atomic reference count as follows:
<!-- ignore: simplified code -->
```rust,ignore
let old_rc = inner.rc.fetch_add(1, Ordering::???);
```
@ -30,13 +32,14 @@ ordering, see [the section on atomics](../atomics.md).
Thus, the code becomes this:
<!-- ignore: simplified code -->
```rust,ignore
let old_rc = inner.rc.fetch_add(1, Ordering::Relaxed);
```
We'll need to add another import to use `Ordering`:
```rust,ignore
```rust
use std::sync::atomic::Ordering;
```
@ -61,6 +64,7 @@ machines) incrementing the reference count at once. This is what we'll do.
It's pretty simple to implement this behavior:
<!-- ignore: simplified code -->
```rust,ignore
if old_rc >= isize::MAX as usize {
std::process::abort();
@ -69,6 +73,7 @@ if old_rc >= isize::MAX as usize {
Then, we need to return a new instance of the `Arc`:
<!-- ignore: simplified code -->
```rust,ignore
Self {
ptr: self.ptr,
@ -78,6 +83,7 @@ Self {
Now, let's wrap this all up inside the `Clone` implementation:
<!-- ignore: simplified code -->
```rust,ignore
use std::sync::atomic::Ordering;
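
// (a sketch assembling the snippets shown above into one impl)
impl<T> Clone for Arc<T> {
    fn clone(&self) -> Arc<T> {
        let inner = unsafe { self.ptr.as_ref() };
        // Relaxed suffices here: holding an existing Arc already keeps the
        // data alive while we bump the count.
        let old_rc = inner.rc.fetch_add(1, Ordering::Relaxed);
        // Guard against the count overflowing.
        if old_rc >= isize::MAX as usize {
            std::process::abort();
        }
        Self {
            ptr: self.ptr,
            phantom: PhantomData,
        }
    }
}
```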

@ -15,6 +15,7 @@ Basically, we need to:
First, we'll need to get access to the `ArcInner`:
<!-- ignore: simplified code -->
```rust,ignore
let inner = unsafe { self.ptr.as_ref() };
```
@ -24,6 +25,7 @@ also return if the returned value from `fetch_sub` (the value of the reference
count before decrementing it) is not equal to `1` (which happens when we are not
the last reference to the data).
<!-- ignore: simplified code -->
```rust,ignore
if inner.rc.fetch_sub(1, Ordering::Relaxed) != 1 {
return;
@ -63,20 +65,17 @@ implementation of `Arc`][3]:
To do this, we do the following:
```rust,ignore
atomic::fence(Ordering::Acquire);
```
We'll need to import `std::sync::atomic` itself:
```rust,ignore
```rust
# use std::sync::atomic::Ordering;
use std::sync::atomic;
atomic::fence(Ordering::Acquire);
```
Finally, we can drop the data itself. We use `Box::from_raw` to drop the boxed
`ArcInner<T>` and its data. This takes a `*mut T` and not a `NonNull<T>`, so we
must convert using `NonNull::as_ptr`.
<!-- ignore: simplified code -->
```rust,ignore
unsafe { Box::from_raw(self.ptr.as_ptr()); }
```
@ -86,6 +85,7 @@ pointer is valid.
Now, let's wrap this all up inside the `Drop` implementation:
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Drop for Arc<T> {
fn drop(&mut self) {
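        // (a sketch assembling the snippets shown above into one impl)
        let inner = unsafe { self.ptr.as_ref() };
        if inner.rc.fetch_sub(1, Ordering::Release) != 1 {
            // we were not the last Arc, so there is nothing else to do
            return;
        }
        // Prevent reordering of uses of the data with its deletion
        // (assumption: a Release decrement paired with this Acquire fence).
        atomic::fence(Ordering::Acquire);
        // We were the last Arc: reclaim the boxed ArcInner and its data.
        unsafe { Box::from_raw(self.ptr.as_ptr()); }
    }
}
```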

@ -22,7 +22,7 @@ same allocation.
Naively, it would look something like this:
```rust,ignore
```rust
use std::sync::atomic;
pub struct Arc<T> {
@ -31,7 +31,7 @@ pub struct Arc<T> {
pub struct ArcInner<T> {
rc: atomic::AtomicUsize,
data: T
data: T,
}
```
@ -56,18 +56,18 @@ ownership of a value of `ArcInner<T>` (which itself contains some `T`).
With these changes we get our final structure:
```rust,ignore
```rust
use std::marker::PhantomData;
use std::ptr::NonNull;
use std::sync::atomic::AtomicUsize;
pub struct Arc<T> {
ptr: NonNull<ArcInner<T>>,
phantom: PhantomData<ArcInner<T>>
phantom: PhantomData<ArcInner<T>>,
}
pub struct ArcInner<T> {
rc: AtomicUsize,
data: T
data: T,
}
```

@ -27,16 +27,18 @@ exactly what we said but, you know, fast. Wouldn't that be great?
Compilers fundamentally want to be able to do all sorts of complicated
transformations to reduce data dependencies and eliminate dead code. In
particular, they may radically change the actual order of events, or make events
never occur! If we write something like
never occur! If we write something like:
<!-- ignore: simplified code -->
```rust,ignore
x = 1;
y = 3;
x = 2;
```
The compiler may conclude that it would be best if your program did
The compiler may conclude that it would be best if your program did:
<!-- ignore: simplified code -->
```rust,ignore
x = 2;
y = 3;

@ -3,6 +3,7 @@
What the language *does* provide is full-blown automatic destructors through the
`Drop` trait, which provides the following method:
<!-- ignore: function header -->
```rust,ignore
fn drop(&mut self);
```
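For illustration, a minimal sketch of implementing it for a made-up `FileHandle` type:
```rust
struct FileHandle {
    name: String,
}

impl Drop for FileHandle {
    // Runs automatically when a FileHandle goes out of scope; it cannot be
    // called explicitly.
    fn drop(&mut self) {
        println!("closing {}", self.name);
    }
}

fn main() {
    let _f = FileHandle { name: String::from("log.txt") };
    // `drop` runs here, at the end of `_f`'s scope.
}
```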

@ -8,11 +8,15 @@ when we talked about `'a: 'b`, it was ok for `'a` to live _exactly_ as long as
gets dropped at the same time as another, right? This is why we used the
following desugaring of `let` statements:
<!-- ignore: simplified code -->
```rust,ignore
let x;
let y;
```
desugaring to:
<!-- ignore: desugared code -->
```rust,ignore
{
let x;
@ -29,6 +33,7 @@ definition. There are some more details about order of drop in [RFC 1857][rfc185
Let's do this:
<!-- ignore: simplified code -->
```rust,ignore
let tuple = (vec![], vec![]);
```
@ -259,7 +264,8 @@ lifetime `'b` and that the only uses of `T` will be moves or drops, but omit
the attribute from `'a` and `U`, because we do access data with that lifetime
and that type:
```rust,ignore
```rust
#![feature(dropck_eyepatch)]
use std::fmt::Display;
struct Inspector<'a, 'b, T, U: Display>(&'a u8, &'b u8, T, U);
@ -283,7 +289,7 @@ other avenues for such indirect access.)
Here is an example of invoking a callback:
```rust,ignore
```rust
struct Inspector<T>(T, &'static str, Box<for <'r> fn(&'r T) -> String>);
impl<T> Drop for Inspector<T> {
@ -297,7 +303,7 @@ impl<T> Drop for Inspector<T> {
Here is an example of a trait method call:
```rust,ignore
```rust
use std::fmt;
struct Inspector<T: fmt::Display>(T, &'static str);

@ -35,6 +35,7 @@ needs to be careful and consider exception safety.
`Vec::push_all` is a temporary hack to get extending a Vec by a slice reliably
efficient without specialization. Here's a simple implementation:
<!-- ignore: simplified code -->
```rust,ignore
impl<T: Clone> Vec<T> {
fn push_all(&mut self, to_push: &[T]) {
@ -75,7 +76,6 @@ bubble_up(heap, index):
while index != 0 && heap[index] < heap[parent(index)]:
heap.swap(index, parent(index))
index = parent(index)
```
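A direct Rust transcription might look like this (a sketch; `parent` is assumed to be the usual `(i - 1) / 2` of a binary heap):
```rust
fn parent(index: usize) -> usize {
    (index - 1) / 2
}

fn bubble_up<T: Ord>(heap: &mut [T], mut index: usize) {
    // `index != 0` is checked first, so `parent(index)` never underflows.
    while index != 0 && heap[index] < heap[parent(index)] {
        heap.swap(index, parent(index));
        index = parent(index);
    }
}
```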
A literal transcription of this code to Rust is totally fine, but has an annoying
@ -147,6 +147,7 @@ way to do this is to store the algorithm's state in a separate struct with a
destructor for the "finally" logic. Whether we panic or not, that destructor
will run and clean up after us.
<!-- ignore: simplified code -->
```rust,ignore
struct Hole<'a, T: 'a> {
data: &'a mut [T],

@ -139,7 +139,7 @@ other is still UB).
The following *could* also compile:
```rust,ignore
```rust,compile_fail
enum Void {}
let res: Result<u32, Void> = Ok(0);

@ -28,6 +28,7 @@ and add `extern crate libc;` to your crate root.
The following is a minimal example of calling a foreign function which will
compile if snappy is installed:
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;
use libc::size_t;
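
// (sketch of the rest of the minimal example; `snappy_max_compressed_length`
// is part of snappy's C API)
#[link(name = "snappy")]
extern "C" {
    fn snappy_max_compressed_length(source_length: size_t) -> size_t;
}

fn main() {
    let x = unsafe { snappy_max_compressed_length(100) };
    println!("max compressed length of a 100 byte buffer: {}", x);
}
```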
@ -61,6 +62,7 @@ of keeping the binding correct at runtime.
The `extern` block can be extended to cover the entire snappy API:
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;
use libc::{c_int, size_t};
@ -96,6 +98,7 @@ vectors as pointers to memory. Rust's vectors are guaranteed to be a contiguous
length is the number of elements currently contained, and the capacity is the total size in elements of
the allocated memory. The length is less than or equal to the capacity.
<!-- ignore: requires libc crate -->
```rust,ignore
# extern crate libc;
# use libc::{c_int, size_t};
@ -120,6 +123,7 @@ required capacity to hold the compressed output. The vector can then be passed t
`snappy_compress` function as an output parameter. An output parameter is also passed to retrieve
the true length after compression for setting the length.
<!-- ignore: requires libc crate -->
```rust,ignore
# extern crate libc;
# use libc::{size_t, c_int};
@ -146,6 +150,7 @@ pub fn compress(src: &[u8]) -> Vec<u8> {
Decompression is similar, because snappy stores the uncompressed size as part of the compression
format and `snappy_uncompressed_length` will retrieve the exact buffer size required.
<!-- ignore: requires libc crate -->
```rust,ignore
# extern crate libc;
# use libc::{size_t, c_int};
@ -180,6 +185,7 @@ pub fn uncompress(src: &[u8]) -> Option<Vec<u8>> {
Then, we can add some tests to show how to use them.
<!-- ignore: requires libc crate -->
```rust,ignore
# extern crate libc;
# use libc::{c_int, size_t};
@ -452,6 +458,7 @@ Foreign APIs often export a global variable which could do something like track
global state. In order to access these variables, you declare them in `extern`
blocks with the `static` keyword:
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;
@ -470,6 +477,7 @@ Alternatively, you may need to alter global state provided by a foreign
interface. To do this, statics can be declared with `mut` so we can mutate
them.
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;
@ -502,6 +510,7 @@ Most foreign code exposes a C ABI, and Rust uses the platform's C calling conven
calling foreign functions. Some foreign functions, most notably the Windows API, use other calling
conventions. Rust provides a way to tell the compiler which convention to use:
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;
@ -582,7 +591,7 @@ fn main() {
Normal Rust functions can *not* be variadic:
```ignore
```rust,compile_fail
// This will not compile
fn foo(x: i32, ...) {}
@ -613,6 +622,7 @@ callback, which gets called in certain situations. The callback is passed a func
and an integer and it is supposed to run the function with the integer as a parameter. So
we have function pointers flying across the FFI boundary in both directions.
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;
use libc::c_int;
@ -712,6 +722,7 @@ void bar(void *arg);
We can represent this in Rust with the `c_void` type:
<!-- ignore: requires libc crate -->
```rust,ignore
extern crate libc;

@ -28,6 +28,7 @@ fn main() {
If we try to naively desugar this code in the same way that we did in the
lifetimes section, we run into some trouble:
<!-- ignore: desugared code -->
```rust,ignore
struct Closure<F> {
data: (u8, u16),
@ -60,6 +61,7 @@ named until we enter the body of `call`! Also, that isn't some fixed lifetime;
This job requires The Magic of Higher-Rank Trait Bounds (HRTBs). The way we
desugar this is as follows:
<!-- ignore: simplified code -->
```rust,ignore
where for<'a> F: Fn(&'a (u8, u16)) -> &'a u8,
```
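Put back together, the program being desugared is roughly this (a sketch reconstructed around the fragments above):
```rust
struct Closure<F> {
    data: (u8, u16),
    func: F,
}

impl<F> Closure<F>
    where for<'a> F: Fn(&'a (u8, u16)) -> &'a u8,
{
    fn call(&self) -> &u8 {
        (self.func)(&self.data)
    }
}

// A plain fn item satisfies the bound: lifetime elision gives it the
// higher-ranked signature `for<'a> fn(&'a (u8, u16)) -> &'a u8`.
fn do_it(data: &(u8, u16)) -> &u8 { &data.0 }

fn main() {
    let clo = Closure { data: (0, 1), func: do_it };
    println!("{}", clo.call());
}
```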

@ -69,6 +69,7 @@ unwinding-safe! Easy!
Now consider the following:
<!-- ignore: simplified code -->
```rust,ignore
let mut vec = vec![Box::new(0); 4];
@ -116,6 +117,7 @@ Nope.
Let's consider a simplified implementation of Rc:
<!-- ignore: simplified code -->
```rust,ignore
struct Rc<T> {
ptr: *mut RcBox<T>,
@ -179,6 +181,7 @@ data on their parent's stack without any synchronization over that data by
ensuring the parent joins the thread before any of the shared data goes out
of scope.
<!-- ignore: simplified code -->
```rust,ignore
pub fn scoped<'a, F>(f: F) -> JoinGuard<'a>
where F: FnOnce() + Send + 'a
@ -196,6 +199,7 @@ of the closed-over data goes out of scope in the parent.
Usage looked like:
<!-- ignore: simplified code -->
```rust,ignore
let mut data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
{
@ -224,6 +228,7 @@ let mut data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
In principle, this totally works! Rust's ownership system perfectly ensures it!
...except it relies on a destructor being called to be safe.
<!-- ignore: simplified code -->
```rust,ignore
let mut data = Box::new(0);
{

@ -5,6 +5,7 @@ In order to make common patterns more ergonomic, Rust allows lifetimes to be
A *lifetime position* is anywhere you can write a lifetime in a type:
<!-- ignore: simplified code -->
```rust,ignore
&'a T
&'a mut T
@ -38,6 +39,7 @@ Elision rules are as follows:
Examples:
<!-- ignore: simplified code -->
```rust,ignore
fn print(s: &str); // elided
fn print<'a>(s: &'a str); // expanded
@ -60,5 +62,4 @@ fn args<'a, 'b, T: ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command // ex
fn new(buf: &mut [u8]) -> BufWriter; // elided
fn new<'a>(buf: &'a mut [u8]) -> BufWriter<'a> // expanded
```

@ -40,6 +40,7 @@ What happened? Well, we got the exact same reasoning as we did for
[Example 2 in the previous section][ex2]. We desugar the program and we get
the following:
<!-- ignore: desugared code -->
```rust,ignore
struct Foo;

@ -45,6 +45,7 @@ let z = &y;
The borrow checker always tries to minimize the extent of a lifetime, so it will
likely desugar to the following:
<!-- ignore: desugared code -->
```rust,ignore
// NOTE: `'a: {` and `&'b x` is not valid syntax!
'a: {
@ -72,6 +73,7 @@ let y = &x;
z = y;
```
<!-- ignore: desugared code -->
```rust,ignore
'a: {
let x: i32 = 0;
@ -100,6 +102,7 @@ fn as_str(data: &u32) -> &str {
desugars to:
<!-- ignore: desugared code -->
```rust,ignore
fn as_str<'a>(data: &'a u32) -> &'a str {
'b: {
@ -127,6 +130,7 @@ up in our face.
To make this more clear, we can expand the example:
<!-- ignore: desugared code -->
```rust,ignore
fn as_str<'a>(data: &'a u32) -> &'a str {
'b: {
@ -178,6 +182,7 @@ data.push(4);
println!("{}", x);
```
<!-- ignore: desugared code -->
```rust,ignore
'a: {
let mut data: Vec<i32> = vec![1, 2, 3];

@ -17,7 +17,7 @@ issue...). This is a pervasive problem that C and C++ programs need to deal
with. Consider this simple mistake that all of us who have used a non-GC'd
language have made at one point:
```rust,ignore
```rust,compile_fail
fn as_str(data: &u32) -> &str {
// compute the string
let s = format!("{}", data);
@ -46,7 +46,7 @@ verifying that references don't escape the scope of their referent. That's
because ensuring pointers are always valid is much more complicated than this.
For instance in this code,
```rust,ignore
```rust,compile_fail
let mut data = vec![1, 2, 3];
// get an internal reference
let x = &data[0];
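
// OH NO! `push` may reallocate `data`'s backing storage, which would leave
// `x` dangling -- so the borrow checker rejects this program.
data.push(4);

println!("{}", x);
```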

@ -17,9 +17,10 @@ Below is shown an example where an application has different panicking behavior
depending on whether it is compiled using the dev profile (`cargo build`) or using the release
profile (`cargo build --release`).
``` rust, ignore
// crate: panic-semihosting -- log panic messages to the host stderr using semihosting
`panic-semihosting` crate -- log panic messages to the host stderr using semihosting:
<!-- ignore: simplified code -->
```rust,ignore
#![no_std]
use core::fmt::{Write, self};
@ -49,8 +50,10 @@ fn panic(info: &PanicInfo) -> ! {
}
```
`panic-halt` crate -- halt the thread on panic; messages are discarded:
<!-- ignore: simplified code -->
```rust,ignore
// crate: panic-halt -- halt the thread on panic; messages are discarded
#![no_std]
@ -62,8 +65,10 @@ fn panic(_info: &PanicInfo) -> ! {
}
```
`app` crate:
<!-- ignore: requires external crate -->
```rust,ignore
// crate: app
#![no_std]
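
// (sketch of how the app crate continues) pick a panic implementation based
// on the build profile
#[cfg(debug_assertions)]
extern crate panic_semihosting; // dev profile: log panic messages via semihosting

#[cfg(not(debug_assertions))]
extern crate panic_halt; // release profile: halt on panic

fn main() {
    // ..
}
```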

@ -104,6 +104,7 @@ Rust lays out the fields in the order specified, we expect it to pad the
values in the struct to satisfy their alignment requirements. So if Rust
didn't reorder fields, we would expect it to produce the following:
<!-- ignore: explanation code -->
```rust,ignore
struct Foo<u16, u32> {
count: u16,

@ -34,6 +34,7 @@ But unlike normal traits, we can use them as concrete and sized types, just like
Now, say we have a very simple function that takes an Animal, like this:
<!-- ignore: simplified code -->
```rust,ignore
fn love(pet: Animal) {
pet.snuggle();
@ -43,6 +44,7 @@ fn love(pet: Animal) {
By default, static types must match *exactly* for a program to compile. As such,
this code won't compile:
<!-- ignore: simplified code -->
```rust,ignore
let mr_snuggles: Cat = ...;
love(mr_snuggles); // ERROR: expected Animal, found Cat
@ -78,6 +80,7 @@ of our static type system, making it worse than useless (and leading to Undefine
Here's a simple example of this happening when we apply subtyping in a completely naive
"find and replace" way.
<!-- ignore: simplified code -->
```rust,ignore
fn evil_feeder(pet: &mut Animal) {
let spike: Dog = ...;
@ -199,6 +202,7 @@ and look at some examples.
First off, let's revisit the meowing dog example:
<!-- ignore: simplified code -->
```rust,ignore
fn evil_feeder(pet: &mut Animal) {
let spike: Dog = ...;
@ -344,6 +348,7 @@ are guaranteed to be the only one with access to it.
Consider the following code:
<!-- ignore: simplified code -->
```rust,ignore
let mr_snuggles: Box<Cat> = ..;
let spike: Box<Dog> = ..;
@ -369,6 +374,7 @@ Only one thing left to explain: function pointers.
To see why `fn(T) -> U` should be covariant over `U`, consider the following signature:
<!-- ignore: simplified code -->
```rust,ignore
fn get_animal() -> Animal;
```
@ -376,6 +382,7 @@ fn get_animal() -> Animal;
This function claims to produce an Animal. As such, it is perfectly valid to
provide a function with the following signature instead:
<!-- ignore: simplified code -->
```rust,ignore
fn get_animal() -> Cat;
```
@ -388,12 +395,14 @@ just forget that fact.
However, the same logic does not apply to *arguments*. Consider trying to satisfy:
<!-- ignore: simplified code -->
```rust,ignore
fn handle_animal(Animal);
```
with
with:
<!-- ignore: simplified code -->
```rust,ignore
fn handle_animal(Cat);
```
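Since `Animal` and `Cat` are stand-in types, here is a sketch of the same idea in real Rust using lifetimes, where `&'static str` plays the role of the subtype `Cat` and `&'a str` the supertype `Animal`:
```rust
fn cat() -> &'static str { "cat" }                      // think: fn() -> Cat

// Expects a producer of `&'a str` (think: fn() -> Animal). A producer of the
// more specific `&'static str` is accepted: return position is covariant.
fn feed<'a>(get: fn() -> &'a str) {
    println!("feeding {}", get());
}

fn handle_any(s: &str) { println!("handling {}", s); }  // think: fn(Animal)

// Expects a handler of `&'static str` (think: fn(Cat)). A handler that copes
// with *any* `&'a str` is accepted: argument position is contravariant.
fn call_with_static(f: fn(&'static str)) {
    f("mr_snuggles");
}

fn main() {
    feed(cat);
    call_with_static(handle_any);
}
```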

@ -17,6 +17,7 @@ boundaries.
Given a function, any output lifetimes that don't derive from inputs are
unbounded. For instance:
<!-- ignore: simplified code -->
```rust,ignore
fn get_str<'a>() -> &'a str;
```
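A sketch of where such a signature typically comes from: dereferencing a raw pointer conjures a reference whose lifetime is whatever the caller asks for, with nothing tying it to the pointed-to data:
```rust
fn get_str<'a>(ptr: *const String) -> &'a str {
    // The lifetime `'a` is unbounded: it is not derived from any input
    // reference, so the caller may pick anything -- even 'static.
    unsafe { &*ptr }
}

fn main() {
    let s = String::from("hello");
    let r: &'static str = get_str(&s); // compiles, but the 'static claim is a lie
    println!("{}", r);
}
```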

@ -82,6 +82,7 @@ It's worth spending a bit more time on the loop in the middle, and in particular
the assignment operator and its interaction with `drop`. If we would have
written something like:
<!-- ignore: simplified code -->
```rust,ignore
*x[i].as_mut_ptr() = Box::new(i as u32); // WRONG!
```
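For contrast, a sketch of the pattern that avoids this problem, using `MaybeUninit::write` (which stores a value without reading or dropping the old, uninitialized contents); `SIZE` is just a made-up constant for the example:
```rust
use std::mem::{self, MaybeUninit};

const SIZE: usize = 10;

fn main() {
    // An uninitialized array *of MaybeUninit* is fine to assume_init:
    // MaybeUninit itself has no validity requirements.
    let mut x: [MaybeUninit<Box<u32>>; SIZE] =
        unsafe { MaybeUninit::uninit().assume_init() };

    for i in 0..SIZE {
        // `write` never reads or drops the old (garbage) contents.
        x[i].write(Box::new(i as u32));
    }

    // Every element is initialized now, so convert to the real array type.
    let x: [Box<u32>; SIZE] = unsafe { mem::transmute(x) };
    println!("{:?}", x);
}
```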

@ -17,6 +17,7 @@ want to use `dangling` because there's no real allocation to talk about but
So:
<!-- ignore: explanation code -->
```rust,ignore
use std::mem;
@ -83,6 +84,7 @@ compiler to be able to reason about data dependencies and aliasing.
As a simple example, consider the following fragment of code:
<!-- ignore: simplified code -->
```rust,ignore
*x *= 7;
*y *= 3;
@ -162,6 +164,7 @@ such we will guard against this case explicitly.
Ok with all the nonsense out of the way, let's actually allocate some memory:
<!-- ignore: simplified code -->
```rust,ignore
use std::alloc::{self, Layout};

@ -10,6 +10,7 @@ wouldn't bother unless you notice it's not being stripped (in this case it is).
We must not call `alloc::dealloc` when `self.cap == 0`, as in this case we
haven't actually allocated any memory.
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Drop for Vec<T> {
fn drop(&mut self) {
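        // (sketch of how this continues)
        if self.cap != 0 {
            // pop everything so each element's destructor runs
            while let Some(_) = self.pop() {}
            let layout = Layout::array::<T>(self.cap).unwrap();
            unsafe {
                alloc::dealloc(self.ptr.as_ptr() as *mut u8, layout);
            }
        }
    }
}
```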

@ -11,6 +11,7 @@ All we need is `slice::from_raw_parts`. It will correctly handle empty slices
for us. Later once we set up zero-sized type support it will also Just Work
for those too.
<!-- ignore: simplified code -->
```rust,ignore
use std::ops::Deref;
@ -26,6 +27,7 @@ impl<T> Deref for Vec<T> {
And let's do DerefMut too:
<!-- ignore: simplified code -->
```rust,ignore
use std::ops::DerefMut;
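
// (sketch of how this continues)
impl<T> DerefMut for Vec<T> {
    fn deref_mut(&mut self) -> &mut [T] {
        unsafe {
            std::slice::from_raw_parts_mut(self.ptr.as_ptr(), self.len)
        }
    }
}
```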

@ -4,6 +4,7 @@ Let's move on to Drain. Drain is largely the same as IntoIter, except that
instead of consuming the Vec, it borrows the Vec and leaves its allocation
untouched. For now we'll only implement the "basic" full-range version.
<!-- ignore: simplified code -->
```rust,ignore
use std::marker::PhantomData;
@ -26,6 +27,7 @@ impl<'a, T> Iterator for Drain<'a, T> {
-- wait, this is seeming familiar. Let's do some more compression. Both
IntoIter and Drain have the exact same structure, let's just factor it out.
<!-- ignore: simplified code -->
```rust,ignore
struct RawValIter<T> {
start: *const T,
@ -57,6 +59,7 @@ impl<T> RawValIter<T> {
And IntoIter becomes the following:
<!-- ignore: simplified code -->
```rust,ignore
pub struct IntoIter<T> {
_buf: RawVec<T>, // we don't actually care about this. Just need it to live.
@ -103,6 +106,7 @@ We also take a slice to simplify Drain initialization.
Alright, now Drain is really easy:
<!-- ignore: simplified code -->
```rust,ignore
use std::marker::PhantomData;

@ -12,6 +12,7 @@ definitely happen here).
If we insert at index `i`, we want to shift the `[i .. len]` to `[i+1 .. len+1]`
using the old len.
<!-- ignore: simplified code -->
```rust,ignore
pub fn insert(&mut self, index: usize, elem: T) {
// Note: `<=` because it's valid to insert after everything
@ -33,6 +34,7 @@ pub fn insert(&mut self, index: usize, elem: T) {
Remove behaves in the opposite manner. We need to shift all the elements from
`[i+1 .. len + 1]` to `[i .. len]` using the *new* len.
<!-- ignore: simplified code -->
```rust,ignore
pub fn remove(&mut self, index: usize) -> T {
// Note: `<` because it's *not* valid to remove after everything
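    // (sketch of how this continues; `ptr` is `std::ptr`)
    assert!(index < self.len, "index out of bounds");
    unsafe {
        self.len -= 1;
        let result = ptr::read(self.ptr.as_ptr().add(index));
        // shift [index+1 .. old_len] down to [index .. new_len]
        ptr::copy(
            self.ptr.as_ptr().add(index + 1),
            self.ptr.as_ptr().add(index),
            self.len - index,
        );
        result
    }
}
```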

@ -42,6 +42,7 @@ dropped.
So we're going to use the following struct:
<!-- ignore: simplified code -->
```rust,ignore
pub struct IntoIter<T> {
buf: NonNull<T>,
@ -54,6 +55,7 @@ pub struct IntoIter<T> {
And this is what we end up with for initialization:
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Vec<T> {
pub fn into_iter(self) -> IntoIter<T> {
@ -85,6 +87,7 @@ impl<T> Vec<T> {
Here's iterating forward:
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Iterator for IntoIter<T> {
type Item = T;
@ -110,6 +113,7 @@ impl<T> Iterator for IntoIter<T> {
And here's iterating backwards.
<!-- ignore: simplified code -->
```rust,ignore
impl<T> DoubleEndedIterator for IntoIter<T> {
fn next_back(&mut self) -> Option<T> {
@ -129,6 +133,7 @@ Because IntoIter takes ownership of its allocation, it needs to implement Drop
to free it. However it also wants to implement Drop to drop any elements it
contains that weren't yielded.
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Drop for IntoIter<T> {
fn drop(&mut self) {

@ -6,6 +6,7 @@ elements that have been initialized.
Naively, this means we just want this design:
<!-- ignore: simplified code -->
```rust,ignore
pub struct Vec<T> {
ptr: *mut T,
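    // (sketch of the remaining fields in the naive design)
    cap: usize,
    len: usize,
}
```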

@ -17,6 +17,7 @@ target address with the bits of the value we provide. No evaluation involved.
For `push`, if the old len (before push was called) is 0, then we want to write
to the 0th index. So we should offset by the old len.
<!-- ignore: simplified code -->
```rust,ignore
pub fn push(&mut self, elem: T) {
if self.len == self.cap { self.grow(); }
@ -41,6 +42,7 @@ of T there.
For `pop`, if the old len is 1, we want to read out of the 0th index. So we
should offset by the new len.
<!-- ignore: simplified code -->
```rust,ignore
pub fn pop(&mut self) -> Option<T> {
if self.len == 0 {
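        // (sketch of how this continues; `ptr` is `std::ptr`)
        None
    } else {
        self.len -= 1;
        unsafe {
            // read the value out by offsetting to the *new* len
            Some(ptr::read(self.ptr.as_ptr().add(self.len)))
        }
    }
}
```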

@ -8,6 +8,7 @@ time to perform some logic compression.
We're going to abstract out the `(ptr, cap)` pair and give them the logic for
allocating, growing, and freeing:
<!-- ignore: simplified code -->
```rust,ignore
struct RawVec<T> {
ptr: NonNull<T>,
@ -76,6 +77,7 @@ impl<T> Drop for RawVec<T> {
And change Vec as follows:
<!-- ignore: simplified code -->
```rust,ignore
pub struct Vec<T> {
buf: RawVec<T>,
@ -114,6 +116,7 @@ impl<T> Drop for Vec<T> {
And finally we can really simplify IntoIter:
<!-- ignore: simplified code -->
```rust,ignore
pub struct IntoIter<T> {
_buf: RawVec<T>, // we don't actually care about this. Just need it to live.

@ -29,6 +29,7 @@ overflow for zero-sized types.
Due to our current architecture, all this means is writing 3 guards, one in each
method of `RawVec`.
<!-- ignore: simplified code -->
```rust,ignore
impl<T> RawVec<T> {
fn new() -> Self {
@ -107,6 +108,7 @@ initialize `start` and `end` as the same value, and our iterators will yield
nothing. The current solution to this is to cast the pointers to integers,
increment, and then cast them back:
<!-- ignore: simplified code -->
```rust,ignore
impl<T> RawValIter<T> {
unsafe fn new(slice: &[T]) -> Self {
@ -130,6 +132,7 @@ Also, our size_hint computation code will divide by 0 for ZSTs. Since we'll
basically be treating the two pointers as if they point to bytes, we'll just
map size 0 to divide by 1.
<!-- ignore: simplified code -->
```rust,ignore
impl<T> Iterator for RawValIter<T> {
type Item = T;

@ -84,6 +84,7 @@ impl<T> Vec<T> {
This code is simple enough to reasonably audit and informally verify. Now consider
adding the following method:
<!-- ignore: simplified code -->
```rust,ignore
fn make_room(&mut self) {
// grow the capacity
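    // (sketch of how this continues) note that this is ordinary safe code,
    // yet it silently breaks the `cap` invariant that Vec's unsafe code
    // relies on
    self.cap += 1;
}
```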
