I have a project that reads ASCII hex values from a microcontroller through a serial port (it looks like: aa ff ba 11 43 cf etc.). The input comes in at 38 two-character sets per second. I'm taking this input and appending it to a running list of all measurements.
After about 5 hours, my list has grown to ~855,000 entries.
I'm given to understand that the larger a list becomes, the slower list operations become. My intent is to have this test run for 24 hours, which should yield around 3M results.
Is there a more efficient, faster way to append to a list than list.append()?
Thanks everyone.
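For context, a minimal sketch of the parsing side, assuming each reading arrives as a two-character hex string; the serial read itself is stubbed out with a sample string here, since the actual port setup isn't shown:

```python
# Stand-in for a line of data read off the serial port.
raw = "aa ff ba 11 43 cf"

# Convert each two-character hex token to an integer and append it
# to the running list of measurements.
readings = []
for tok in raw.split():
    readings.append(int(tok, 16))

print(readings)  # [170, 255, 186, 17, 67, 207]
```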
"I'm given to understand that the larger a list becomes, the slower list operations become."
That's not true in general. Lists in Python are, despite the name, not linked lists but arrays. There are operations that are O(n) on arrays (copying and searching, for instance), but you don't seem to use any of those. As a rule of thumb: if it's widely used and idiomatic, some smart people went and chose a smart way to do it. list.append is a widely-used builtin (and the underlying C function is also used in other places, e.g. list comprehensions). If there were a faster way, it would already be in use.
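A quick, unscientific way to convince yourself of this: time a batch of appends onto an empty list and onto a list that already holds about a million items. If appends got slower as the list grew, the second measurement would be dramatically worse; amortized O(1) behavior means the two stay in the same ballpark.

```python
import timeit

def append_n(lst, n):
    # Append n items to lst, the same way the measurement loop would.
    for i in range(n):
        lst.append(i)
    return lst

# 20 runs of 10,000 appends, starting from an empty list...
small = timeit.timeit(lambda: append_n([], 10_000), number=20)

# ...and the same appends onto a list that already has ~1M entries.
big_base = list(range(1_000_000))
big = timeit.timeit(lambda: append_n(big_base, 10_000), number=20)

print(f"appends onto empty list: {small:.4f}s")
print(f"appends onto ~1M list:   {big:.4f}s")
```

Exact numbers vary by machine, but the per-append cost should be roughly independent of the list's current size.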
As you will see when you inspect the source code, lists are overallocated: when a list is resized, it allocates more memory than needed for one item, so the next n items can be appended without the need to resize (which is O(n)). The growth isn't constant, it is proportional to the list size, so resizing becomes rarer as the list grows larger. Here's the snippet from listobject.c:list_resize that determines the overallocation:
/* This over-allocates proportional to the list size, making room
 * for additional growth.  The over-allocation is mild, but is
 * enough to give linear-time amortized behavior over a long
 * sequence of appends() in the presence of a poorly-performing
 * system realloc().
 * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
 */
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
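You can observe this overallocation from Python itself with sys.getsizeof: the reported size stays flat while appends consume the spare pre-allocated slots, then jumps when a resize happens. (The exact byte counts and plateau lengths depend on your CPython version and platform.)

```python
import sys

# Record the allocated size of the list after each append.
lst = []
sizes = []
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# The plateaus in this sequence are the overallocated spare capacity;
# each jump corresponds to one resize.
print(sizes)
```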
As Mark Ransom points out, older Python versions (< 2.7, 3.0) have a bug that makes the GC sabotage this. If you have such a Python version, you may want to disable the gc. If you can't because you generate other garbage (that slips refcounting), you're out of luck though.
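A minimal sketch of that workaround, in case you are on an affected version: disable the cyclic collector around the long append loop, and re-enable it afterwards. Reference counting still reclaims non-cyclic garbage while the collector is off; the measurement loop below is a stand-in for your serial-reading loop.

```python
import gc

gc.disable()  # stop the cyclic collector from scanning the growing list
try:
    data = []
    for reading in range(1000):  # stand-in for the serial read loop
        data.append(reading)
finally:
    gc.enable()  # always restore normal collection
```

On Python 2.7+/3.1+ this is unnecessary, since the collector no longer tracks long-lived containers this way.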