
Complexity of Ruby's Hash#has_key?

I have a hash:

vars = {"a" => "Name", "b" => "Address", "c" => "Phone"}

I want to test the performance of this line:

vars.has_key?("b")

Is it O(1) or O(hash size)?
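As a quick sanity check on the call itself (not part of the original question): the predicate's name includes the question mark, and since the keys in this hash are strings, a symbol lookup will miss:

```ruby
vars = { "a" => "Name", "b" => "Address", "c" => "Phone" }

# Hash#key?, Hash#include?, and Hash#member? are aliases of has_key?.
vars.has_key?("b")  # => true
vars.has_key?(:b)   # => false -- the keys here are strings, not symbols
```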

+3




4 answers


A simple benchmark:

require 'benchmark'

iterations = 10_000
small      = 10
big        = 1_000_000

small_hash = {}
big_hash   = {}

(1..small).each do |i|
  small_hash[i] = i
end

(1..big).each do |i|
  big_hash[i] = i
end

Benchmark.bmbm do |bm|
  bm.report('Small Hash') do
    iterations.times { small_hash.has_key?(1) }
  end

  bm.report('Big Hash') do
    iterations.times { big_hash.has_key?(1) }
  end
end


Test run:



$ ruby has_key_test.rb 
                 user     system      total        real
Small Hash   0.000000   0.000000   0.000000 (  0.001167)
Big Hash     0.000000   0.000000   0.000000 (  0.001171)


So, yes, I think we can treat the cost as constant, O(1) (at least without digging into the internal MRI implementation).

+4




The expected (average-case) complexity of the lookup is constant.
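To check this empirically (a sketch, not part of the original answer), Benchmark.realtime over hashes of very different sizes should report roughly flat lookup times:

```ruby
require 'benchmark'

# Sketch: if Hash#has_key? is O(1) on average, the lookup time should stay
# roughly flat as the hash grows by three orders of magnitude.
[1_000, 1_000_000].each do |size|
  h = (1..size).map { |i| [i, i] }.to_h
  t = Benchmark.realtime { 100_000.times { h.has_key?(size) } }
  puts format('%9d entries: %.4f s', size, t)
end
```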



+2




require 'benchmark'

def fake_h(n)
  n.times.inject({}) { |h, i| h[i] = :a; h }
end

n = 1_000_000
h1       = fake_h(1)
h10      = fake_h(10)
h100     = fake_h(100)
h1000    = fake_h(1_000)
h10000   = fake_h(10_000)
h100000  = fake_h(100_000)
h1000000 = fake_h(1_000_000)

Benchmark.bm do |x| 
  x.report { n.times{|t| h1.has_key?(t) } }
  x.report { n.times{|t| h10.has_key?(t) } }
  x.report { n.times{|t| h100.has_key?(t) } }
  x.report { n.times{|t| h1000.has_key?(t) } }
  x.report { n.times{|t| h10000.has_key?(t) } }
  x.report { n.times{|t| h100000.has_key?(t) } }
  x.report { n.times{|t| h1000000.has_key?(t) } }
end

# Result:
    user     system      total         real
0.200000   0.000000   0.200000 (  0.204647)
0.210000   0.000000   0.210000 (  0.205677)
0.210000   0.000000   0.210000 (  0.214393)
0.210000   0.000000   0.210000 (  0.206382)
0.210000   0.000000   0.210000 (  0.208998)
0.200000   0.000000   0.200000 (  0.206821)
0.220000   0.000000   0.220000 (  0.213316)


The difference between a hash with 1 entry and one with 1 million entries is minimal.

0




The source of has_key?
( http://ruby-doc.org/core-1.9.3/Hash.html#method-i-has_key-3F ) is:

static VALUE
rb_hash_has_key(VALUE hash, VALUE key)
{
    if (!RHASH(hash)->ntbl)
        return Qfalse;
    if (st_lookup(RHASH(hash)->ntbl, key, 0)) {
        return Qtrue;
    }
    return Qfalse;
}


st_lookup has the following snippet ( https://github.com/ruby/ruby/blob/ca6b174078fa15f33655be704d9409fdbc4f9929/st.c#L383 ):

if (table->entries_packed) {
    st_index_t i = find_packed_index(table, hash_val, key);
    if (i < table->real_entries) {
        if (value != 0) *value = PVAL(table, i);
        return 1;
    }
    return 0;
}


This tells us that when entries_packed is set, Ruby does a linear scan over the packed entries (via find_packed_index, O(n) in the number of packed entries); otherwise it falls back to the regular hashed lookup, which is O(1) on average.

The value of entries_packed depends on the size of the hash ( https://github.com/ruby/ruby/blob/ca6b174078fa15f33655be704d9409fdbc4f9929/st.c#L41 ):

#define MAX_PACKED_HASH (int)(ST_DEFAULT_PACKED_TABLE_SIZE * sizeof(st_table_entry*) / sizeof(st_packed_entry))


https://github.com/ruby/ruby/blob/ca6b174078fa15f33655be704d9409fdbc4f9929/st.c#L219

tbl->entries_packed = size <= MAX_PACKED_HASH;


Here, size is the number of entries the table is created with.

You can find more details in the Ruby sources, but the point is that the lookup is not always a hashed probe: for small (packed) hashes, Ruby scans linearly instead. Since that scan is bounded by MAX_PACKED_HASH, though, the overall cost is still effectively constant.
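As an aside (not from the original answer), the bounded packed scan can be sanity-checked from Ruby. This sketch assumes a 4-entry hash is stored packed while 100 entries is well past any packed threshold; the exact cutoff is an MRI implementation detail that has changed across versions:

```ruby
require 'benchmark'

# Assumption: a tiny hash uses the packed (linear-scan) representation,
# while 100 entries uses the hashed one. If the packed scan mattered,
# the first report would be noticeably slower; in practice both are flat.
packed   = (1..4).map   { |i| [i, i] }.to_h
unpacked = (1..100).map { |i| [i, i] }.to_h

Benchmark.bm(10) do |bm|
  bm.report('packed')   { 1_000_000.times { packed.has_key?(4) } }
  bm.report('unpacked') { 1_000_000.times { unpacked.has_key?(100) } }
end
```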

0



