How does Python implement dictionaries?
I would like to know how Python dictionaries work, in particular the dynamic aspects.
When we create a dictionary, what is its initial size?
If we update it with a lot of elements, I suppose the hash table has to be enlarged. I suppose the hashes then need to be recomputed to fit the size of the new, bigger hash table while keeping some kind of consistency with the previous table?
As you can see, I do not fully understand the internals of this structure.
Parts of the following answer are taken from Upgrade your Python skills: Examining the Dictionary. More information on Python hash tables can be found in Python Hash Tables Under The Hood:
When we create a dictionary, what is its initial size?
- As can be seen in the source code:
/* PyDict_MINSIZE is the starting size for any new dict.
* 8 allows dicts with no more than 5 active entries; experiments suggested
* this suffices for the majority of dicts (consisting mostly of usually-small
* dicts created to pass keyword arguments).
* Making this 8, rather than 4 reduces the number of resizes for most
* dictionaries, without any significant extra memory use.
*/
#define PyDict_MINSIZE 8
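A rough way to observe this from the Python side (a hedged sketch: sys.getsizeof reports bytes rather than slots, and the exact numbers and the point of the first jump depend on the CPython version and build) is to watch when the reported size of a dict jumps as keys are added:

import sys

d = {}
last = sys.getsizeof(d)
print(0, last)
for i in range(20):
    d[i] = None
    size = sys.getsizeof(d)
    if size != last:
        # a jump here indicates the underlying hash table was resized
        print(i + 1, size)
        last = size

On a typical CPython build the first resize shows up once the number of entries exceeds the usable slots of the initial 8-slot table.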
Imagine we update it with a lot of key-value pairs; I suppose we need to extend the hash table. I suppose we need to recompute the hash function to adapt to the size of the new, bigger hash table while keeping a kind of consistency with the previous hash table...
CPython checks the size of the hash table every time we add a key. If the table is two-thirds full, it resizes the hash table by GROWTH_RATE (currently set to used*3) and inserts all the elements:
/* GROWTH_RATE. Growth rate upon hitting maximum load.
* Currently set to used*3.
* This means that dicts double in size when growing without deletions,
* but have more head room when the number of deletions is on a par with the
* number of insertions. See also bpo-17563 and bpo-33205.
*
* GROWTH_RATE was set to used*4 up to version 3.2.
* GROWTH_RATE was set to used*2 in version 3.3.0
* GROWTH_RATE was set to used*2 + capacity/2 in 3.4.0-3.6.0.
*/
#define GROWTH_RATE(d) ((d)->ma_used*3)
The USABLE_FRACTION is the two-thirds I mentioned above:
/* USABLE_FRACTION is the maximum dictionary load.
* Increasing this ratio makes dictionaries more dense resulting in more
* collisions. Decreasing it improves sparseness at the expense of spreading
* indices over more cache lines and at the cost of total memory consumed.
*
* USABLE_FRACTION must obey the following:
* (0 < USABLE_FRACTION(n) < n) for all n >= 2
*
* USABLE_FRACTION should be quick to calculate.
* Fractions around 1/2 to 2/3 seem to work well in practice.
*/
#define USABLE_FRACTION(n) (((n) << 1)/3)
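Putting the two macros together, the resize policy can be sketched in Python (an approximation for illustration only; the real logic lives in dictresize in Objects/dictobject.c and has changed across versions):

PyDict_MINSIZE = 8

def usable_fraction(n):
    # USABLE_FRACTION(n): at most two-thirds of the n slots may hold entries
    return (n << 1) // 3

def growth_rate(used):
    # GROWTH_RATE(d): ask for room for used*3 entries when resizing
    return used * 3

def next_table_size(used):
    # the new table is the smallest power of two >= the requested room
    new_size = PyDict_MINSIZE
    while new_size < growth_rate(used):
        new_size <<= 1
    return new_size

# e.g. an 8-slot table whose 5 usable slots are full: the next insert
# triggers a resize sized for GROWTH_RATE(5) = 15 entries
used, size = 5, 8
if used >= usable_fraction(size):
    print(next_table_size(used))   # -> 16, the smallest power of two >= 15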
Additionally, the index is computed as:
i = (size_t)hash & mask;
where the mask is HASH_TABLE_SIZE-1.
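Since the table size is always a power of two, masking with size - 1 is equivalent to taking the hash modulo the table size. A tiny sketch (string hashes vary between runs because of hash randomization):

size = 8
mask = size - 1              # 0b111 for an 8-slot table
h = hash("some key")
i = h & mask                 # initial slot index
print(i == h % size)         # True: for power-of-two sizes the mask is a cheap modulo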
Hash collisions are handled as follows:
perturb >>= PERTURB_SHIFT;
i = (i*5 + perturb + 1) & mask;
This is explained in the source code:
The first half of collision resolution is to visit table indices via this
recurrence:
j = ((5*j) + 1) mod 2**i
For any initial j in range(2**i), repeating that 2**i times generates each
int in range(2**i) exactly once (see any text on random-number generation for
proof). By itself, this doesn't help much: like linear probing (setting
j += 1, or j -= 1, on each loop trip), it scans the table entries in a fixed
order. This would be bad, except that's not the only thing we do, and it's
actually *good* in the common cases where hash keys are consecutive. In an
example that's really too small to make this entirely clear, for a table of
size 2**3 the order of indices is:
0 -> 1 -> 6 -> 7 -> 4 -> 5 -> 2 -> 3 -> 0 [and here it's repeating]
If two things come in at index 5, the first place we look after is index 2,
not 6, so if another comes in at index 6 the collision at 5 didn't hurt it.
Linear probing is deadly in this case because there the fixed probe order
is the *same* as the order consecutive keys are likely to arrive. But it's
extremely unlikely hash codes will follow a 5*j+1 recurrence by accident,
and certain that consecutive hash codes do not.
The other half of the strategy is to get the other bits of the hash code
into play. This is done by initializing a (unsigned) vrbl "perturb" to the
full hash code, and changing the recurrence to:
perturb >>= PERTURB_SHIFT;
j = (5*j) + 1 + perturb;
use j % 2**i as the next table index;
Now the probe sequence depends (eventually) on every bit in the hash code,
and the pseudo-scrambling property of recurring on 5*j+1 is more valuable,
because it quickly magnifies small differences in the bits that didn't affect
the initial index. Note that because perturb is unsigned, if the recurrence
is executed often enough perturb eventually becomes and remains 0. At that
point (very rarely reached) the recurrence is on (just) 5*j+1 again, and
that's certain to find an empty slot eventually (since it generates every int
in range(2**i), and we make sure there's always at least one empty slot).
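To make this concrete, here is a small Python sketch of the probe sequence (an approximation of the C loop; PERTURB_SHIFT is 5 in CPython, and perturb is an unsigned integer in C, so the sketch assumes a non-negative hash):

import itertools

PERTURB_SHIFT = 5

def probe_sequence(hash_value, size):
    # yield the table indices visited for this hash, in order
    mask = size - 1
    perturb = hash_value
    i = hash_value & mask
    while True:
        yield i
        perturb >>= PERTURB_SHIFT
        i = (i * 5 + perturb + 1) & mask

# once perturb is exhausted (here: hash 0), this reproduces the 5*j + 1
# order quoted above for a table of size 2**3
print(list(itertools.islice(probe_sequence(0, 8), 9)))
# [0, 1, 6, 7, 4, 5, 2, 3, 0]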