linker_block_allocator: Remove 4kB page size assumption

The block allocator mmaps an arbitrarily large area ("LinkerBlockAllocatorPage")
of size 4kB * 100 in order to reduce the number of mmap syscalls and the
kernel's VMA memory usage.

This works fine for a 16kB page size, since 100 4kB pages are equal to
25 16kB pages.

But for a 64kB page size the area would include a partial page (a 16kB
sized region at the end).

Change the size to 96 4kB pages (6 64kB pages); this works for all
aarch64-supported page sizes (4kB, 16kB, and 64kB).

Also remove the use of PAGE_SIZE.
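
For reference, a minimal sketch (not part of this patch) of how the new
allocation size lines up with every aarch64 page size granule:

    // Sketch only: kAllocateSize = 64kB * 6 = 384kB is a whole number of
    // pages for every page size aarch64 supports.
    static constexpr size_t kAllocateSize = 65536 * 6;  // 393216 bytes
    static_assert(kAllocateSize % 4096 == 0,  "not a multiple of 4kB pages");   // 96 pages
    static_assert(kAllocateSize % 16384 == 0, "not a multiple of 16kB pages");  // 24 pages
    static_assert(kAllocateSize % 65536 == 0, "not a multiple of 64kB pages");  // 6 pages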

Bug: 294438799
Test: atest -c linker-unit-tests
Change-Id: I7782406d1470183097ce9391c9b70b177e1750e6
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
diff --git a/linker/linker_block_allocator.cpp b/linker/linker_block_allocator.cpp
index 60e5e1c..e70e6ae 100644
--- a/linker/linker_block_allocator.cpp
+++ b/linker/linker_block_allocator.cpp
@@ -37,8 +37,9 @@
 
 #include "linker_debug.h"
 
-static constexpr size_t kAllocateSize = PAGE_SIZE * 100;
-static_assert(kAllocateSize % PAGE_SIZE == 0, "Invalid kAllocateSize.");
+static constexpr size_t kMaxPageSize = 65536;
+static constexpr size_t kAllocateSize = kMaxPageSize * 6;
+static_assert(kAllocateSize % kMaxPageSize == 0, "Invalid kAllocateSize.");
 
 struct LinkerBlockAllocatorPage {
   LinkerBlockAllocatorPage* next;