original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH > Class Template Reference

Hash table implementation with separate chaining.
#include <hashTable.h>
Classes

class hashNode
    Internal node type for hash table storage.
class Iterator
    Forward iterator for hashTable.
Public Types

using rebind_alloc_node = ALLOC::template rebind_alloc< hashNode >
    Rebound allocator type for node storage.
using rebind_alloc_pointer = ALLOC::template rebind_alloc< hashNode * >
    Rebound allocator type for pointer storage.
using buckets_type = vector< hashNode *, rebind_alloc_pointer >
    Type representing the hash table buckets container.
Public Member Functions

buckets_type bucketsCopy (const buckets_type &old_buckets) const
    Creates a deep copy of the hash table's bucket array.
hashNode * createNode (const K_TYPE &key=K_TYPE{}, const V_TYPE &value=V_TYPE{}, hashNode *next=nullptr) const
    Creates a new hash node.
void destroyNode (hashNode *node) noexcept
    Destroys a hash node.
u_integer getHashCode (const K_TYPE &key) const
    Computes hash code for a key.
u_integer getBucketCount () const
    Gets current number of buckets.
hashNode * getBucket (const K_TYPE &key) const
    Gets bucket head for a key.
floating loadFactor () const
    Calculates current load factor.
u_integer getNextSize () const
    Gets next appropriate bucket size for expansion.
u_integer getPrevSize () const
    Gets previous appropriate bucket size for shrinking.
void rehash (u_integer new_bucket_count)
    Rehashes table to new bucket count.
void adjust ()
    Adjusts table size based on load factor.
hashTable (HASH hash=HASH{})
    Constructs empty hashTable.
hashNode * find (const K_TYPE &key) const
    Finds node for given key.
bool modify (const K_TYPE &key, const V_TYPE &value)
    Modifies value for existing key.
bool insert (const K_TYPE &key, const V_TYPE &value)
    Inserts new key-value pair.
bool erase (const K_TYPE &key)
    Removes key-value pair.
virtual ~hashTable ()
    Destroys hashTable.
Public Attributes

u_integer size_
buckets_type buckets
HASH hash_
rebind_alloc_node rebind_alloc {}
Static Public Attributes

static constexpr floating LOAD_FACTOR_MIN = 0.25
    Minimum load factor before shrinking.
static constexpr floating LOAD_FACTOR_MAX = 0.75
    Maximum load factor before expanding.
static constexpr u_integer BUCKETS_SIZES_COUNT = 30
    Number of entries in BUCKETS_SIZES.
static constexpr u_integer BUCKETS_SIZES []
    Predefined bucket sizes for hash table resizing.
Hash table implementation with separate chaining.
Template Parameters
    K_TYPE    Key type (must be hashable)
    V_TYPE    Value type
    ALLOC     Allocator type (default: allocator<K_TYPE>)
    HASH      Hash function type (default: hash<K_TYPE>)
This class provides a generic hash table implementation that serves as the base for hash-based containers. It implements separate chaining for collision resolution, with automatic resizing driven by the load factor.
Performance Characteristics:
The implementation guarantees:
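As a purely illustrative usage sketch of the public interface listed above (assuming the header shown at the top of this page, the default ALLOC and HASH template arguments, and that direct instantiation of the base class is permitted):

    #include <hashTable.h>   // header named at the top of this page
    #include <iostream>
    #include <string>

    int main() {
        // Default ALLOC and HASH template arguments are assumed to handle int keys.
        original::hashTable<int, std::string> table;

        table.insert(42, "answer");              // true if the key was absent
        table.modify(42, "the answer");          // true if the key exists

        if (table.find(42) != nullptr) {         // find() returns the matching hashNode*
            std::cout << "key 42 is present\n";
        }

        table.erase(42);                         // true if a pair was removed

        std::cout << "buckets: " << table.getBucketCount()
                  << ", load factor: " << table.loadFactor() << '\n';
    }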
Constructs empty hashTable.
Parameters
    hash    Hash function to use
Destroys hashTable.
Cleans up all nodes and buckets.
Adjusts table size based on load factor.
Checks the current load factor and resizes the table if it rises above LOAD_FACTOR_MAX (expanding to the next predefined bucket size) or falls below LOAD_FACTOR_MIN (shrinking to the previous one).
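A sketch of that policy in terms of the members documented on this page; it mirrors the described behavior and is not the library's actual code:

    void adjust() {
        const floating lf = loadFactor();       // size_ divided by the bucket count
        if (lf > LOAD_FACTOR_MAX) {
            rehash(getNextSize());              // grow to the next predefined size
        } else if (lf < LOAD_FACTOR_MIN) {
            rehash(getPrevSize());              // shrink to the previous predefined size
        }
    }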
original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::buckets_type original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::bucketsCopy ( const buckets_type & old_buckets ) const
Creates a deep copy of the hash table's bucket array.
Parameters
    old_buckets    The original bucket array to copy
This function performs a deep copy of the entire old_buckets array used by the hash table, including the internal linked list in each bucket (used for separate chaining).
Copying strategy:
    Uses createNode() to allocate a new node for each entry

This method is typically used when making a copy of the hash table. The function ensures that the new bucket array is fully independent of the original: each chain is duplicated node by node, so modifying one table does not affect the other.
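An illustrative sketch of the strategy; the hashNode field names (key, value, next) and size-and-fill construction of buckets_type are assumptions:

    buckets_type bucketsCopy(const buckets_type& old_buckets) const {
        buckets_type new_buckets(old_buckets.size(), nullptr);          // same bucket count, empty chains
        for (u_integer i = 0; i < old_buckets.size(); ++i) {
            hashNode** tail = &new_buckets[i];
            for (hashNode* src = old_buckets[i]; src != nullptr; src = src->next) {
                *tail = createNode(src->key, src->value, nullptr);      // fresh node per entry
                tail = &(*tail)->next;
            }
        }
        return new_buckets;
    }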
original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::hashNode * original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::createNode ( const K_TYPE & key = K_TYPE{}, const V_TYPE & value = V_TYPE{}, hashNode * next = nullptr ) const
Creates a new hash node.
Parameters
    key      Key for new node
    value    Value for new node
    next     Next node in chain
Destroys a hash node.
Parameters
    node    Node to destroy
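The node lifecycle presumably pairs allocate/construct with destroy/deallocate through the rebound node allocator. A standalone sketch of that pattern using std::allocator (the Node type and its fields below are placeholders, not the library's hashNode):

    #include <memory>
    #include <string>

    struct Node { int key; std::string value; Node* next; };   // placeholder for hashNode

    using NodeAlloc = std::allocator<Node>;

    Node* makeNode(NodeAlloc& alloc, int key, std::string value, Node* next = nullptr) {
        Node* n = alloc.allocate(1);                                          // raw storage for one node
        std::allocator_traits<NodeAlloc>::construct(alloc, n, Node{key, std::move(value), next});
        return n;
    }

    void dropNode(NodeAlloc& alloc, Node* n) {
        std::allocator_traits<NodeAlloc>::destroy(alloc, n);                  // run the destructor
        alloc.deallocate(n, 1);                                               // release the storage
    }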
Removes key-value pair.
Parameters
    key    Key to remove
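A sketch of chain removal consistent with separate chaining; the modulo bucket mapping, indexed access to buckets, and the key/next fields are assumptions:

    bool erase(const K_TYPE& key) {
        const u_integer index = getHashCode(key) % getBucketCount();  // assumed bucket mapping
        hashNode** link = &buckets[index];
        while (*link != nullptr) {
            if ((*link)->key == key) {
                hashNode* victim = *link;
                *link = victim->next;                                 // unlink from the chain
                destroyNode(victim);
                size_ -= 1;
                adjust();                                             // may shrink the table
                return true;
            }
            link = &(*link)->next;
        }
        return false;                                                 // key not present
    }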
original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::hashNode * original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::find ( const K_TYPE & key ) const
Finds node for given key.
Parameters
    key    Key to search for
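Lookup presumably walks the chain returned by getBucket(); a sketch assuming a key field on hashNode and operator== on K_TYPE:

    hashNode* find(const K_TYPE& key) const {
        for (hashNode* node = getBucket(key); node != nullptr; node = node->next) {
            if (node->key == key) {
                return node;         // first matching node in the chain
            }
        }
        return nullptr;              // not found
    }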
original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::hashNode * original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::getBucket ( const K_TYPE & key ) const
Gets bucket head for a key.
Parameters
    key    Key to look up
original::u_integer original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::getBucketCount ( ) const
Gets current number of buckets.
original::u_integer original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::getHashCode ( const K_TYPE & key ) const
Computes hash code for a key.
Parameters
    key    Key to hash
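The hash code is presumably reduced modulo the current bucket count to select a bucket; a sketch of getBucket() under that assumption:

    hashNode* getBucket(const K_TYPE& key) const {
        const u_integer index = getHashCode(key) % getBucketCount();  // assumed modulo reduction
        return buckets[index];                                        // head of the chain, or nullptr
    }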
original::u_integer original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::getNextSize ( ) const
Gets next appropriate bucket size for expansion.
Exceptions
    outOfBoundError    if already at maximum size
original::u_integer original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::getPrevSize ( ) const
Gets previous appropriate bucket size for shrinking.
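Both size lookups presumably scan the BUCKETS_SIZES table relative to the current bucket count; an illustrative sketch with simplified handling at the table ends:

    u_integer getNextSize() const {
        const u_integer current = getBucketCount();
        for (u_integer i = 0; i < BUCKETS_SIZES_COUNT; ++i) {
            if (BUCKETS_SIZES[i] > current) {
                return BUCKETS_SIZES[i];           // smallest predefined size above the current one
            }
        }
        throw outOfBoundError{};                   // already at the maximum predefined size
    }

    u_integer getPrevSize() const {
        const u_integer current = getBucketCount();
        for (u_integer i = BUCKETS_SIZES_COUNT; i > 0; --i) {
            if (BUCKETS_SIZES[i - 1] < current) {
                return BUCKETS_SIZES[i - 1];       // largest predefined size below the current one
            }
        }
        return BUCKETS_SIZES[0];                   // cannot shrink below the smallest size
    }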
bool original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::insert ( const K_TYPE & key, const V_TYPE & value )
Inserts new key-value pair.
Parameters
    key      Key to insert
    value    Value to associate
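A sketch of insertion consistent with the descriptions above; rejecting duplicate keys and prepending to the bucket chain are assumptions suggested by the bool return and the chaining layout:

    bool insert(const K_TYPE& key, const V_TYPE& value) {
        if (find(key) != nullptr) {
            return false;                                             // key already present
        }
        const u_integer index = getHashCode(key) % getBucketCount();
        buckets[index] = createNode(key, value, buckets[index]);      // prepend to the chain
        size_ += 1;
        adjust();                                                     // may grow the table
        return true;
    }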
original::floating original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::loadFactor ( ) const
Calculates current load factor.
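The load factor is conventionally the ratio of stored elements to buckets; a one-line sketch in terms of the public attributes:

    floating loadFactor() const {
        return static_cast<floating>(size_) / static_cast<floating>(getBucketCount());
    }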
bool original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::modify ( const K_TYPE & key, const V_TYPE & value )
Modifies value for existing key.
Parameters
    key      Key to modify
    value    New value
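A sketch built on find(); the value field name on hashNode is an assumption:

    bool modify(const K_TYPE& key, const V_TYPE& value) {
        if (hashNode* node = find(key)) {
            node->value = value;     // overwrite the stored value in place
            return true;
        }
        return false;                // no such key
    }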
Rehashes table to new bucket count.
Parameters
    new_bucket_count    New number of buckets
Rebuilds the hash table with the new bucket count, recomputing each node's bucket index and relinking every existing node into the new bucket array (see the sketch below).
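An illustrative version of that rebuild; the node field names, the modulo mapping, and size-and-fill construction plus iteration of buckets_type are assumptions:

    void rehash(u_integer new_bucket_count) {
        buckets_type new_buckets(new_bucket_count, nullptr);
        for (hashNode* head : buckets) {
            while (head != nullptr) {
                hashNode* next = head->next;                                   // remember the rest of the chain
                const u_integer index = getHashCode(head->key) % new_bucket_count;
                head->next = new_buckets[index];                               // relink the node, no reallocation
                new_buckets[index] = head;
                head = next;
            }
        }
        buckets = std::move(new_buckets);
    }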
static constexpr u_integer original::hashTable< K_TYPE, V_TYPE, ALLOC, HASH >::BUCKETS_SIZES []
Predefined bucket sizes for hash table resizing.
An array of prime numbers carefully selected for hash table bucket sizes. These primes are used during table resizing to maintain optimal performance characteristics.
Key Properties:
Selection Criteria:
Performance Impact:
The sequence continues until reaching sizes suitable for maximum practical in-memory hash tables (over 100 million buckets).