diff --git a/vec-alloc.md b/vec-alloc.md
index 6f98220..93efbbb 100644
--- a/vec-alloc.md
+++ b/vec-alloc.md
@@ -61,9 +61,7 @@ like the standard library as much as possible, so we'll just kill the whole
 program.
 
 We said we don't want to use intrinsics, so doing *exactly* what `std` does is
-out. `std::rt::util::abort` actually exists, but it takes a message to print,
-which will probably allocate. Also it's still unstable. Instead, we'll call
-`std::process::exit` with some random number.
+out. Instead, we'll call `std::process::exit` with some random number.
 
 ```rust
 fn oom() {
@@ -78,7 +76,7 @@ if cap == 0:
     allocate()
     cap = 1
 else:
-    reallocate
+    reallocate()
     cap *= 2
 ```
 
@@ -109,7 +107,7 @@ the same location in memory, the operations need to be done to the same value,
 and they can't just be merged afterwards.
 
 When you use GEP inbounds, you are specifically telling LLVM that the offsets
-you're about to do are within the bounds of a single allocated entity. The
+you're about to do are within the bounds of a single "allocated" entity. The
 ultimate payoff being that LLVM can assume that if two pointers are known to
 point to two disjoint objects, all the offsets of those pointers are *also*
 known to not alias (because you won't just end up in some random place in
@@ -162,7 +160,8 @@ elements. This is a runtime no-op because every element takes up no space,
 and it's fine to pretend that there's infinite zero-sized types allocated at
 `0x01`. No allocator will ever allocate that address, because they won't
 allocate `0x00` and they generally allocate to some minimal alignment higher
-than a byte.
+than a byte. Also generally the whole first page of memory is
+protected from being allocated anyway (a whole 4k, on many platforms).
 
 However what about for positive-sized types? That one's a bit trickier. In
 principle, you can argue that offsetting by 0 gives LLVM no information: either
diff --git a/vec-drain.md b/vec-drain.md
index 2367197..3be295f 100644
--- a/vec-drain.md
+++ b/vec-drain.md
@@ -83,6 +83,7 @@ impl<T> Vec<T> {
     pub fn into_iter(self) -> IntoIter<T> {
         unsafe {
             let iter = RawValIter::new(&self);
+            let buf = ptr::read(&self.buf);
 
             mem::forget(self);
 
@@ -112,7 +113,7 @@ pub struct Drain<'a, T: 'a> {
 
 impl<'a, T> Iterator for Drain<'a, T> {
     type Item = T;
-    fn next(&mut self) -> Option<T> { self.iter.next_back() }
+    fn next(&mut self) -> Option<T> { self.iter.next() }
     fn size_hint(&self) -> (usize, Option<usize>) { self.iter.size_hint() }
 }
 
diff --git a/vec-layout.md b/vec-layout.md
index bce9a2f..325399d 100644
--- a/vec-layout.md
+++ b/vec-layout.md
@@ -1,7 +1,10 @@
 % Layout
 
-First off, we need to come up with the struct layout. Naively we want this
-design:
+First off, we need to come up with the struct layout. A Vec has three parts:
+a pointer to the allocation, the size of the allocation, and the number of
+elements that have been initialized.
+
+Naively, this means we just want this design:
 
 ```rust
 pub struct Vec<T> {
diff --git a/vec-raw.md b/vec-raw.md
index 40de019..8f78462 100644
--- a/vec-raw.md
+++ b/vec-raw.md
@@ -1,9 +1,9 @@
 % RawVec
 
 We've actually reached an interesting situation here: we've duplicated the logic
-for specifying a buffer and freeing its memory. Now that we've implemented it
-and identified *actual* logic duplication, this is a good time to perform some
-logic compression.
+for specifying a buffer and freeing its memory in Vec and IntoIter. Now that
+we've implemented it and identified *actual* logic duplication, this is a good
+time to perform some logic compression.
 
 We're going to abstract out the `(ptr, cap)` pair and give them the logic for
 allocating, growing, and freeing:
@@ -64,7 +64,7 @@ impl<T> Drop for RawVec<T> {
 }
 ```
 
-And change vec as follows:
+And change Vec as follows:
 
 ```rust,ignore
 pub struct Vec<T> {
diff --git a/vec.md b/vec.md
index 39d9686..63f8378 100644
--- a/vec.md
+++ b/vec.md
@@ -12,7 +12,7 @@ bit nicer or efficient because intrinsics are permanently unstable. Although
 many intrinsics *do* become stabilized elsewhere (`std::ptr` and `str::mem`
 consist of many intrinsics).
 
-Ultimately this means out implementation may not take advantage of all
+Ultimately this means our implementation may not take advantage of all
 possible optimizations, though it will be by no means *naive*. We will
 definitely get into the weeds over nitty-gritty details, even when the
 problem doesn't *really* merit it.