Complexity of Two Sum III on LeetCode

I am trying to learn Two Sum III on LeetCode. Here is the problem:

Design and implement a TwoSum class. It should support the following operations: add and find.
add - Add the number to an internal data structure.
find - Find if there exists any pair of numbers whose sum is equal to the value.
For example: add(1); add(3); add(5);
find(4) -> true
find(7) -> false

I know of two accepted approaches, the first using a List and the second using a Map. Here are my two solutions:

List implementation:

import java.util.ArrayList;
import java.util.List;

public class TwoSum {
    private List<Integer> list;

    public TwoSum() {
        list = new ArrayList<Integer>();
    }

    public void add(int num) {
        list.add(num);
    }

    public boolean find(int sum) {
        for (int i = 0; i < list.size(); i++) {
            // contains() and indexOf() are themselves linear scans
            if (list.contains(sum - list.get(i))) {
                // the only occurrence of the complement is this element itself,
                // so it cannot form a pair on its own; keep looking
                if (list.indexOf(sum - list.get(i)) == i)
                    continue;
                return true;
            }
        }
        return false;
    }
}

Map implementation:

import java.util.HashMap;
import java.util.Map;

public class TwoSum2 {
    private Map<Integer, Integer> map;

    public TwoSum2() {
        map = new HashMap<Integer, Integer>();
    }

    public void add(int num) {
        // keep a count per value so duplicate pairs (e.g. 3 + 3 = 6) are detectable
        if (!map.containsKey(num))
            map.put(num, 1);
        else
            map.put(num, map.get(num) + 1);
    }

    public boolean find(int sum) {
        for (int num : map.keySet()) {
            if (map.containsKey(sum - num)) {
                // the complement is the number itself: a pair needs two copies
                if (num == sum - num && map.get(num) == 1)
                    continue;
                return true;
            }
        }
        return false;
    }
}
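
For reference, here is a minimal driver exercising both classes on the example from the problem statement (my own sketch, not part of either solution):

public class TwoSumDemo {
    public static void main(String[] args) {
        TwoSum a = new TwoSum();
        a.add(1); a.add(3); a.add(5);
        System.out.println(a.find(4)); // true  (1 + 3)
        System.out.println(a.find(7)); // false

        TwoSum2 b = new TwoSum2();
        b.add(1); b.add(3); b.add(5);
        System.out.println(b.find(4)); // true
        System.out.println(b.find(7)); // false
    }
}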

Both solutions should give me the same complexity: O(1) for add and O(n) for find. However, on the few test cases I have, the List implementation is slightly faster. The only reason I can think of is that it takes some time to compute a key's hash code and locate it in the map, but I am not sure about that. What are the possible reasons the Map implementation is slower?
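
A rough harness along the lines below can be used to compare the two. It is only a sketch: hand-rolled timings like this are easily distorted by JIT warm-up and GC, so a benchmarking tool such as JMH would give more trustworthy numbers, and the sizes N and QUERIES are arbitrary.

import java.util.Random;

public class TwoSumTiming {
    public static void main(String[] args) {
        final int N = 2_000;     // arbitrary: how many values to add
        final int QUERIES = 200; // arbitrary: how many find() calls to time
        Random rnd = new Random(42);

        TwoSum list = new TwoSum();
        TwoSum2 map = new TwoSum2();
        for (int i = 0; i < N; i++) {
            int v = rnd.nextInt(N);
            list.add(v);
            map.add(v);
        }

        // use the same queries for both so the comparison is fair
        int[] queries = new int[QUERIES];
        for (int i = 0; i < QUERIES; i++)
            queries[i] = rnd.nextInt(2 * N);

        long t0 = System.nanoTime();
        for (int q : queries) list.find(q);
        long t1 = System.nanoTime();
        for (int q : queries) map.find(q);
        long t2 = System.nanoTime();

        System.out.printf("list find: %.1f ms%n", (t1 - t0) / 1e6);
        System.out.printf("map  find: %.1f ms%n", (t2 - t1) / 1e6);
    }
}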

Thanks.

For a HashMap there is a parameter called the load factor. When the load-factor threshold is reached, the HashMap allocates a larger table and rehashes all of the key-value pairs. A single put that triggers this costs O(n) (add remains amortized O(1) overall), so it can be slower. Take a look at the documentation:

An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.

As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
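
In practical terms: if you know an upper bound on how many distinct numbers will be added, you can presize the map so that no rehash ever happens. A minimal sketch, assuming a hypothetical bound expectedEntries and the default load factor of 0.75:

import java.util.HashMap;
import java.util.Map;

public class PresizedTwoSum2 {
    private final Map<Integer, Integer> map;

    // expectedEntries is a hypothetical upper bound on the number of distinct values
    public PresizedTwoSum2(int expectedEntries) {
        // per the documentation above: if capacity > entries / loadFactor,
        // no rehash operation will ever occur
        int capacity = (int) Math.ceil(expectedEntries / 0.75) + 1;
        map = new HashMap<Integer, Integer>(capacity);
    }

    public void add(int num) {
        map.merge(num, 1, Integer::sum); // same counting logic as TwoSum2.add
    }

    // find() would be identical to TwoSum2.find above
}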