Python: How to read huge text file into memory
Posted by asmaier on Stack Overflow, 2009-12-13.
I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:
$ ls -l links.csv; file links.csv; tail links.csv
-rw-r--r-- 1 user user 469904280 30 Nov 22:42 links.csv
links.csv: ASCII text, with CRLF line terminators
4757187,59883
4757187,99822
4757187,66546
4757187,638452
4757187,4627959
4757187,312826
4757187,6143
4757187,6141
4757187,3081726
4757187,58197
So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory, but I thought that for a 500MB file I should still be able to do it in memory, since I have 1GB available.
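The sort step itself should be straightforward once everything is in memory; roughly what I had in mind (assuming the tuples end up in a list called edges, as in the code further below) is:
import operator
# sort the list of (first, second) tuples by the second column
edges.sort(key=operator.itemgetter(1))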
However, when I try to read in the file, Python seems to allocate a lot more memory than the file needs on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys

infile=open("links.csv", "r")
edges=[]
count=0
# count the total number of lines in the file
for line in infile:
    count=count+1
total=count
print "Total number of lines: ",total

infile.seek(0)
count=0
for line in infile:
    edge=tuple(map(int,line.strip().split(",")))
    edges.append(edge)
    count=count+1
    # for every million lines print memory consumption
    if count%1000000==0:
        print "Position: ", edge
        print "Read ",float(count)/float(total)*100,"%."
        mem=sys.getsizeof(edges)
        for edge in edges:
            mem=mem+sys.getsizeof(edge)
            for node in edge:
                mem=mem+sys.getsizeof(node)
        print "Memory (Bytes): ", mem
The output I got was:
Total number of lines: 30609720
Position: (9745, 2994)
Read 3.26693612356 %.
Memory (Bytes): 64348736
Position: (38857, 103574)
Read 6.53387224712 %.
Memory (Bytes): 128816320
Position: (83609, 63498)
Read 9.80080837067 %.
Memory (Bytes): 192553000
Position: (139692, 1078610)
Read 13.0677444942 %.
Memory (Bytes): 257873392
Position: (205067, 153705)
Read 16.3346806178 %.
Memory (Bytes): 320107588
Position: (283371, 253064)
Read 19.6016167413 %.
Memory (Bytes): 385448716
Position: (354601, 377328)
Read 22.8685528649 %.
Memory (Bytes): 448629828
Position: (441109, 3024112)
Read 26.1354889885 %.
Memory (Bytes): 512208580
Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory-efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
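To illustrate the per-object overhead I mean, the tuple object and the two int objects created for a single line together take far more memory than the roughly 15 bytes the same pair occupies as text in the file (the exact sizes depend on the platform and Python build):
import sys
# memory footprint of one parsed line as Python objects
print sys.getsizeof((4757187, 59883))  # the tuple object itself
print sys.getsizeof(4757187)           # one of the two int objects
print sys.getsizeof(59883)             # the other int object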