List creation performance in Python


What is the fastest way to create an initialized list in Python? I always wondered about this, so when the question was asked on Stack Overflow I saw a chance to measure it once and for all. We have the following alternatives:

Good old loop¶

This is the typical approach in most languages:

In [ ]:
my_list = []
for i in xrange(n):
    my_list.append(0)

List comprehension¶

This is the classic Pythonic way; in Python 3, replace xrange with range:

In [ ]:
my_list = [0 for _ in xrange(n)]

Sequence multiplication¶

This Python idiom has a caveat: every element refers to the same object. With an immutable value like the int 0 this is harmless, so for many applications (read-only lists) it isn't a problem, but with mutable elements modifying "one" element modifies them all.

In [ ]:
my_list = [0] * n
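As a side note (not part of the original benchmark), the shared-reference caveat only bites with mutable elements; a minimal sketch in Python 3 syntax:

```python
# Caveat demo: with a mutable element every slot shares one object,
# so "changing one element" changes them all.
grid = [[]] * 3
grid[0].append('x')
print(grid)   # [['x'], ['x'], ['x']] -- all three slots are the same list

# A list comprehension creates independent objects instead:
grid2 = [[] for _ in range(3)]
grid2[0].append('x')
print(grid2)  # [['x'], [], []]
```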


In order to run the benchmarks I decided to write my own decorator, which makes the following code more concise. Good alternatives are the timeit module from the standard library or IPython's %timeit magic.

In [ ]:
import time

def timeit(f):

    def timed(*args, **kwargs):
        start = time.clock()
        for _ in range(100):
            f(*args, **kwargs)
        end = time.clock()
        return end - start

    return timed
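For comparison, a rough standard-library equivalent of the decorator above, using the timeit module (Python 3 syntax; the two snippets and n = 1000 are arbitrary illustrative choices):

```python
import timeit

# Time 100 runs of two of the list-creation snippets, mirroring the
# 100-iteration loop in the decorator above.
n = 1000
t_mult = timeit.timeit('[0] * n', globals={'n': n}, number=100)
t_comp = timeit.timeit('[0 for _ in range(n)]', globals={'n': n}, number=100)
print(t_mult, t_comp)
```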


Now we can decorate all the functions; when called, they will return the corresponding timings:

In [ ]:
@timeit
def append_loop(n):
    """Simple loop with append"""
    my_list = []
    for i in xrange(n):
        my_list.append(0)

@timeit
def add_loop(n):
    """Simple loop with +="""
    my_list = []
    for i in xrange(n):
        my_list += [0]

@timeit
def list_comprehension(n):
    """List comprehension"""
    my_list = [0 for _ in xrange(n)]

@timeit
def integer_multiplication(n):
    """List and integer multiplication"""
    my_list = [0] * n


In order to run the benchmarks we will create a pandas DataFrame with each method as a column and the length of the list as rows (lower values mean faster execution):

In [ ]:
import pandas as pd

df = pd.DataFrame([(append_loop(n), add_loop(n), list_comprehension(n), integer_multiplication(n))
                   for n in range(1000)],
                  columns=['Simple loop with append',
                           'Simple loop with +=',
                           'List comprehension',
                           'List and integer multiplication'])
df.plot()


As we can see, there are no surprises here: the sequence multiplication method is the fastest, as only one integer object is created and the rest are just references to it. This becomes clearer if we analyze the bytecode of each function; in the first case there is no iteration at all (at the bytecode level):

In [31]:
import dis

dis.dis(integer_multiplication)

  4           0 LOAD_CONST               1 (0)
              3 BUILD_LIST               1
              6 LOAD_FAST                0 (n)
              9 BINARY_MULTIPLY
             10 STORE_FAST               1 (my_list)
             13 LOAD_CONST               0 (None)
             16 RETURN_VALUE

In [30]:
dis.dis(list_comprehension)

  4           0 BUILD_LIST               0
              3 LOAD_GLOBAL              0 (xrange)
              6 LOAD_FAST                0 (n)
              9 CALL_FUNCTION            1
             12 GET_ITER
        >>   13 FOR_ITER                12 (to 28)
             16 STORE_FAST               1 (_)
             19 LOAD_CONST               1 (0)
             22 LIST_APPEND              2
             25 JUMP_ABSOLUTE           13
        >>   28 STORE_FAST               2 (my_list)
             31 LOAD_CONST               0 (None)
             34 RETURN_VALUE


Now, when we talk about performance in Python, numpy should at least be mentioned. While a numpy array is not exactly a list, for almost all purposes it behaves like one: it can be iterated, fancy-indexed, etc. Also important is the fact that the number of elements must be known a priori.

In [ ]:
import numpy as np

@timeit
def numpy_array(n):
    my_list = np.zeros(n)
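To illustrate the list-like behaviour mentioned above, a small sketch (Python 3 syntax, arbitrary size of 5):

```python
import numpy as np

# A numpy zeros array supports the usual list-style operations.
a = np.zeros(5)
print(len(a))              # length works: 5
print(a[0])                # plain indexing works: 0.0
print(a[[1, 3]])           # fancy indexing selects elements 1 and 3
print([x * 2 for x in a])  # iteration works like any sequence
```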


Numpy's optimized C implementation becomes faster for lists longer than approximately 300 elements. Below this value, the boilerplate needed to initialize the numpy data structure slows list creation down. So now we know which is fastest:

1. Numpy (list length known and greater than ~300)
2. Sequence multiplication
3. List comprehension
4. Loop + Append