Search Results

Search found 13596 results on 544 pages for 'mechanize python'.


  • How to read a specific number of floats from a file in Python?

    - by sahel
    I am reading a text file from the web. The file starts with some header lines containing the number of data points, followed by the actual vertices (3 coordinates each). The file looks like:

        # comment
        HEADER TEXT
        POINTS 6 float
        1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9
        1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9
        POLYGONS

    The line starting with the word POINTS contains the number of vertices (in this case we have 3 vertices per line, but that could change). This is how I am reading it right now:

        ur = urlopen("http://.../file.dat")
        j = 0
        contents = []
        while 1:
            line = ur.readline()
            if not line:
                break
            else:
                line = line.lower()
                if 'points' in line:
                    myline = line.strip()
                    word = myline.split()
                    node_number = int(word[1])
                    node_type = word[2]
                    while 'polygons' not in line:
                        line = ur.readline()
                        line = line.lower()
                        myline = line.split()
                        i = 0
                        while i < len(myline):
                            contents[j] = float(myline[i])
                            i = i + 1
                            j = j + 1

    How can I read a specified number of floats instead of reading line by line as strings and converting them to floating-point numbers? Instead of ur.readline() I want to read the specified number of elements in the file. Any suggestion is welcome.

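    One way to avoid the line-by-line bookkeeping is to read the whole response, split it into whitespace-separated tokens, and convert exactly the number of coordinates announced on the POINTS line. A minimal sketch (it assumes the coordinate tokens follow immediately after the POINTS header, as in the sample above, and read_vertices is just an illustrative name):

        from urllib2 import urlopen  # Python 2; on Python 3 use urllib.request.urlopen

        def read_vertices(url):
            """Return a flat list of node_number * 3 floats from the file at url."""
            words = urlopen(url).read().lower().split()
            idx = words.index('points')        # the "POINTS 6 float" header
            node_number = int(words[idx + 1])  # number of vertices
            start = idx + 3                    # skip 'points', the count and the type
            wanted = node_number * 3           # 3 coordinates per vertex
            return [float(w) for w in words[start:start + wanted]]

    Calling read_vertices("http://.../file.dat") on the sample header above would return its 18 floats in one go.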

  • What is the fastest way to scale and display an image in Python?

    - by Knut Eldhuset
    I am required to display a two-dimensional numpy.array of int16 at 20 fps or so. Matplotlib's imshow chokes on anything above 10 fps. There are obviously some issues with scaling and interpolation. I should add that the dimensions of the array are not known, but will probably be around thirty by four hundred. The data come from a sensor that is supposed to have a real-time display, so they have to be re-sampled on the fly.

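    One commonly suggested route for this kind of frame rate is to blit the array with an SDL-style toolkit instead of Matplotlib. The sketch below uses pygame, which is an assumption on my part (any toolkit with fast surface blitting would look similar); the window size and the normalisation are placeholders:

        import numpy as np
        import pygame

        def show_frames(frames, size=(800, 600), fps=20):
            """Display an iterable of 2-D int16 arrays, scaled to `size`."""
            pygame.init()
            screen = pygame.display.set_mode(size)
            clock = pygame.time.Clock()
            for frame in frames:
                # Normalise the int16 data to 0..255 and stack it into an RGB array.
                span = max(1, int(frame.max()) - int(frame.min()))
                norm = ((frame - frame.min()) * 255.0 / span).astype(np.uint8)
                rgb = np.dstack([norm, norm, norm])
                surf = pygame.surfarray.make_surface(rgb)
                surf = pygame.transform.scale(surf, size)  # cheap nearest-neighbour scaling
                screen.blit(surf, (0, 0))
                pygame.display.flip()
                clock.tick(fps)                            # cap the loop at `fps`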

  • python: how to jump to a particular line in a huge text file?

    - by photographer
    Are there any alternatives to the code below:

        startFromLine = 141978  # or whatever line I need to jump to
        urlsfile = open(filename, "rb", 0)
        linesCounter = 1
        for line in urlsfile:
            if linesCounter > startFromLine:
                DoSomethingWithThisLine(line)
            linesCounter += 1

    if I'm processing a huge text file (~15MB) with lines of unknown but different length, and need to jump to a particular line whose number I know in advance? I feel bad processing them one by one when I know I could ignore at least the first half of the file. Looking for a more elegant solution if there is one.

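    Since the line lengths are unknown, every byte before the target line still has to be read, but itertools.islice at least pushes the skipping into C and removes the manual counter. A minimal sketch (DoSomethingWithThisLine is the placeholder from the question):

        from itertools import islice

        def process_from(filename, start_from_line):
            with open(filename) as urlsfile:
                # islice skips the first `start_from_line` lines, matching the
                # original `linesCounter > startFromLine` condition.
                for line in islice(urlsfile, start_from_line, None):
                    DoSomethingWithThisLine(line)

    Truly jumping over the first half of the file would need a precomputed index of byte offsets (so file.seek() can be used), which only pays off if the same file is read many times.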

  • Trouble with this Python newbie exercise. Using Lists and finding if two adjacent elements are the same

    - by Sergio Tapia
    Here's what I got:

        # D. Given a list of numbers, return a list where
        # all adjacent == elements have been reduced to a single element,
        # so [1, 2, 2, 3] returns [1, 2, 3]. You may create a new list or
        # modify the passed in list.
        def remove_adjacent(nums):
            for number in nums:
                numberHolder = number
                # +++your code here+++
            return

    I'm kind of stuck here. What can I do?

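    One straightforward way to finish the exercise is to build a new list and only append a number when it differs from the last value kept. A sketch of that idea:

        def remove_adjacent(nums):
            result = []
            for number in nums:
                # Append only if the result is empty or the previous kept value differs.
                if not result or result[-1] != number:
                    result.append(number)
            return result

        print(remove_adjacent([1, 2, 2, 3]))  # [1, 2, 3]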

  • Is it possible to make an OCR in Python to check words...

    - by Shady
    in open applications? I want to automate Firefox on some web page and I don't have a way to "know" if the page has already loaded completely or if it is still loading... I was thinking about making an OCR to check the status bar... is that difficult? For example, when the word DONE appears in the status bar, the program continues to the next command...

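    For what it's worth, the OCR idea itself can be sketched with a screenshot library plus an OCR binding. The example below uses PIL's ImageGrab and pytesseract, both of which are assumptions (the question names no library), and the status-bar region is a made-up bounding box:

        import time
        from PIL import ImageGrab      # screen capture (Windows/macOS)
        import pytesseract

        def wait_for_word(word='done', region=(0, 1000, 400, 1040), timeout=30):
            """Poll a screen region until `word` shows up in the OCR'd text."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                text = pytesseract.image_to_string(ImageGrab.grab(bbox=region))
                if word.lower() in text.lower():
                    return True
                time.sleep(0.5)
            return False

    Browser-automation tools that can ask the browser directly whether the page has finished loading are usually simpler than OCR, but the sketch shows the screen-reading approach asked about.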

  • How do you composite an image onto another image with PIL in Python?

    - by Sebastian
    I need to take an image and place it onto a new, generated white background in order for it to be converted into a downloadable desktop wallpaper. So the process would go:
    1) Generate a new, all-white image with 1440x900 dimensions
    2) Place the existing image on top, centered
    3) Save as a single image
    In PIL, I see the ImageDraw object, but nothing indicates it can draw existing image data onto another image. Suggestions or links anyone can recommend?

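    PIL's Image.paste covers this without ImageDraw: create the white canvas with Image.new and paste the photo at a computed offset. A short sketch of the three steps (1440x900 comes from the question; the file names are placeholders):

        from PIL import Image

        def make_wallpaper(src_path, out_path, size=(1440, 900)):
            background = Image.new('RGB', size, 'white')   # 1) all-white canvas
            img = Image.open(src_path)
            x = (size[0] - img.size[0]) // 2               # 2) centre the image
            y = (size[1] - img.size[1]) // 2
            background.paste(img, (x, y))
            background.save(out_path)                      # 3) save as a single image

        make_wallpaper('photo.jpg', 'wallpaper.jpg')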

  • How do I insert data from a Python dictionary to MySQL?

    - by NJTechie
    I manipulated some data from MySQL and the resulting dictionary "data" (print data) displays something like this:

        {'1': ['1', 'K', abc, 'xyz', None, None, datetime.date(2009, 6, 18)],
         '2': ['2', 'K', efg, 'xyz', None, None, None, None],
         '3': ['3', 'K', ijk, 'xyz', None, None, None, datetime.date(2010, 2, 5, 16, 31, 2)]}

    How do I create a table and insert these values in a MySQL table? In other words, how do I dump them to MySQL or CSV? Not sure how to deal with datetime.date and None values. Any help is appreciated.

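    With any DB-API driver the None entries map straight to SQL NULL, and date/datetime objects are converted by the driver, so the rows can be passed through as query parameters. A sketch using MySQLdb (the driver choice, table name and column types are all assumptions):

        import MySQLdb

        def dump_to_mysql(data, **connect_kwargs):
            conn = MySQLdb.connect(**connect_kwargs)
            cur = conn.cursor()
            cur.execute("""CREATE TABLE IF NOT EXISTS mytable (
                               c1 VARCHAR(20), c2 VARCHAR(20), c3 VARCHAR(100),
                               c4 VARCHAR(100), c5 VARCHAR(100), c6 VARCHAR(100),
                               c7 DATETIME, c8 DATETIME)""")
            # Pad the shorter rows with None so every row has 8 columns.
            rows = [tuple(row) + (None,) * (8 - len(row)) for row in data.values()]
            cur.executemany(
                "INSERT INTO mytable VALUES (%s, %s, %s, %s, %s, %s, %s, %s)", rows)
            conn.commit()

    For CSV output, the csv module's writer.writerows() accepts the same padded rows; None comes out as an empty field.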

  • Python: saving objects and using pickle. Error using pickle.dump

    - by Peterstone
    Hello, I have an error and I don't know the reason:

        >>> class Fruits: pass
        ...
        >>> banana = Fruits()
        >>> banana.color = 'yellow'
        >>> banana.value = 30
        >>> import pickle
        >>> filehandler = open("Fruits.obj", 'w')
        >>> pickle.dump(banana, filehandler)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python31\lib\pickle.py", line 1354, in dump
            Pickler(file, protocol, fix_imports=fix_imports).dump(obj)
        TypeError: must be str, not bytes
        >>>

    I don't know how to solve this error because I don't understand it. Thank you so much.

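    The traceback comes from Python 3.1 (note C:\Python31): pickle produces bytes, but the file was opened in text mode with 'w', which only accepts str. Opening the file in binary mode is the usual fix; a sketch:

        import pickle

        class Fruits:
            pass

        banana = Fruits()
        banana.color = 'yellow'
        banana.value = 30

        with open("Fruits.obj", 'wb') as filehandler:   # 'wb' instead of 'w'
            pickle.dump(banana, filehandler)

        with open("Fruits.obj", 'rb') as filehandler:   # read it back in binary mode too
            loaded = pickle.load(filehandler)
        print(loaded.color)                             # yellow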

  • Python - Why use anything other than uuid4() for unique strings?

    - by orokusaki
    I see quite a few implementations of unique string generation for things like uploaded image names, session IDs, et al, and many of them employ the usage of hashes like SHA1, or others. I'm not questioning the legitimacy of using custom methods like this, but rather just the reason. If I want a unique string, I just say this:

        >>> import uuid
        >>> uuid.uuid4()
        07033084-5cfd-4812-90a4-e4d24ffb6e3d

    And I'm done with it. I wasn't very trusting before I read up on uuid, so I did this:

        >>> import uuid
        >>> s = set()
        >>> for i in range(5000000):  # That's 5 million!
        ...     s.add(uuid.uuid4())
        ...
        >>> len(s)
        5000000

    Not one repeater (I didn't expect one considering the odds are like 1.108e+50, but it's comforting to see it in action). You could even halve the odds by just making your string by combining 2 uuid4()s. So, with that said, why do people spend time on random() and other stuff for unique strings, etc.? Is there an important security issue or something else regarding uuid?

  • How to get the list of price offers on an item from Amazon with python-amazon-product-api item_lookup

    - by miernik
    I am trying to write a function to get a list of offers (their prices) for an item based on the ASIN:

        def price_offers(asin):
            from amazonproduct import API, ResultPaginator, AWSError
            from config import AWS_KEY, SECRET_KEY
            api = API(AWS_KEY, SECRET_KEY, 'de')
            str_asin = str(asin)
            node = api.item_lookup(id=str_asin, ResponseGroup='Offers',
                                   Condition='All', MerchantId='All')
            for a in node:
                print a.Offer.OfferListing.Price.FormattedPrice

    I am reading http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?ItemLookup.html and trying to make this work, but all the time it just says:

        Failure instance: Traceback: <type 'exceptions.AttributeError'>:
        no such child: {http://webservices.amazon.com/AWSECommerceService/2009-10-01}Offer

  • Python: Recursively access dict via attributes as well as index access?

    - by Luke Stanley
    I'd like to be able to do something like this:

        from dotDict import dotdictify

        life = {'bigBang': {'stars': {'planets': []}}}
        dotdictify(life)

        # this would be the regular way:
        life['bigBang']['stars']['planets'] = {'earth': {'singleCellLife': {}}}

        # But how can we make this work?
        life.bigBang.stars.planets.earth = {'singleCellLife': {}}

        # Also creating new child objects if none exist, using the following syntax:
        life.bigBang.stars.planets.earth.multiCellLife = {'reptiles': {}, 'mammals': {}}

    My motivations are to improve the succinctness of the code, and if possible use similar syntax to JavaScript for accessing JSON objects, for efficient cross-platform development. (I also use Py2JS and similar.)

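    One possible way to get this behaviour is a dict subclass whose attribute access falls through to item access and which creates missing children on demand. A sketch (this is not the dotDict module named in the question, just an illustration of the idea):

        class dotdictify(dict):
            """dict whose keys can also be read and written as attributes."""

            def __init__(self, value=None):
                super(dotdictify, self).__init__()
                if value:
                    for key, val in value.items():
                        self[key] = val

            def __setitem__(self, key, value):
                # Convert nested plain dicts so dotted access works all the way down.
                if isinstance(value, dict) and not isinstance(value, dotdictify):
                    value = dotdictify(value)
                super(dotdictify, self).__setitem__(key, value)

            def __getitem__(self, key):
                # Auto-create missing children so new paths can be built up.
                if key not in self:
                    self[key] = dotdictify()
                return super(dotdictify, self).__getitem__(key)

            __setattr__ = __setitem__
            __getattr__ = __getitem__

        life = dotdictify({'bigBang': {'stars': {'planets': {}}}})
        life.bigBang.stars.planets.earth = {'singleCellLife': {}}
        life.bigBang.stars.planets.earth.multiCellLife = {'reptiles': {}, 'mammals': {}}
        print(life.bigBang.stars.planets.earth.multiCellLife)  # {'reptiles': {}, 'mammals': {}}

    The trade-off of the auto-creation is that any attribute name, including a typo, silently creates a new empty child.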

  • How do I print out objects in an array in python?

    - by Jonathan
    I'm writing code that performs k-means clustering on a set of data. I'm actually using the code from the O'Reilly book Programming Collective Intelligence. Everything works, but in the book he uses the interactive command line, and I want to write everything in Notepad++. As a reference, his lines are:

        >>> kclust = clusters.kcluster(data, k=10)
        >>> [rownames[r] for r in k[0]]

    Here is my code:

        import random  # needed by kcluster; not shown in the original excerpt
        from math import sqrt
        from PIL import Image, ImageDraw

        def readfile(filename):
            lines = [line for line in file(filename)]
            # First line is the column titles
            colnames = lines[0].strip().split('\t')[1:]
            rownames = []
            data = []
            for line in lines[1:]:
                p = line.strip().split('\t')
                # First column in each row is the rowname
                rownames.append(p[0])
                # The data for this row is the remainder of the row
                data.append([float(x) for x in p[1:]])
            return rownames, colnames, data

        def pearson(v1, v2):
            # Simple sums
            sum1 = sum(v1)
            sum2 = sum(v2)
            # Sums of the squares
            sum1Sq = sum([pow(v, 2) for v in v1])
            sum2Sq = sum([pow(v, 2) for v in v2])
            # Sum of the products
            pSum = sum([v1[i] * v2[i] for i in range(len(v1))])
            # Calculate r (Pearson score)
            num = pSum - (sum1 * sum2 / len(v1))
            den = sqrt((sum1Sq - pow(sum1, 2) / len(v1)) * (sum2Sq - pow(sum2, 2) / len(v1)))
            if den == 0:
                return 0
            return 1.0 - num / den

        class bicluster:
            def __init__(self, vec, left=None, right=None, distance=0.0, id=None):
                self.left = left
                self.right = right
                self.vec = vec
                self.id = id
                self.distance = distance

        def hcluster(rows, distance=pearson):
            distances = {}
            currentclustid = -1
            # Clusters are initially just the rows
            clust = [bicluster(rows[i], id=i) for i in range(len(rows))]
            while len(clust) > 1:
                lowestpair = (0, 1)
                closest = distance(clust[0].vec, clust[1].vec)
                # loop through every pair looking for the smallest distance
                for i in range(len(clust)):
                    for j in range(i + 1, len(clust)):
                        # distances is the cache of distance calculations
                        if (clust[i].id, clust[j].id) not in distances:
                            distances[(clust[i].id, clust[j].id)] = distance(clust[i].vec, clust[j].vec)
                        #print 'i'
                        #print i
                        #print
                        #print 'j'
                        #print j
                        #print
                        d = distances[(clust[i].id, clust[j].id)]
                        if d < closest:
                            closest = d
                            lowestpair = (i, j)
                # calculate the average of the two clusters
                mergevec = [(clust[lowestpair[0]].vec[i] + clust[lowestpair[1]].vec[i]) / 2.0
                            for i in range(len(clust[0].vec))]
                # create the new cluster
                newcluster = bicluster(mergevec, left=clust[lowestpair[0]],
                                       right=clust[lowestpair[1]],
                                       distance=closest, id=currentclustid)
                # cluster ids that weren't in the original set are negative
                currentclustid -= 1
                del clust[lowestpair[1]]
                del clust[lowestpair[0]]
                clust.append(newcluster)
            return clust[0]

        def kcluster(rows, distance=pearson, k=4):
            # Determine the minimum and maximum values for each point
            ranges = [(min([row[i] for row in rows]), max([row[i] for row in rows]))
                      for i in range(len(rows[0]))]
            # Create k randomly placed centroids
            clusters = [[random.random() * (ranges[i][1] - ranges[i][0]) + ranges[i][0]
                         for i in range(len(rows[0]))] for j in range(k)]
            lastmatches = None
            for t in range(100):
                print 'Iteration %d' % t
                bestmatches = [[] for i in range(k)]
                # Find which centroid is the closest for each row
                for j in range(len(rows)):
                    row = rows[j]
                    bestmatch = 0
                    for i in range(k):
                        d = distance(clusters[i], row)
                        if d < distance(clusters[bestmatch], row):
                            bestmatch = i
                    bestmatches[bestmatch].append(j)
                # If the results are the same as last time, this is complete
                if bestmatches == lastmatches:
                    break
                lastmatches = bestmatches
                # Move the centroids to the average of their members
                for i in range(k):
                    avgs = [0.0] * len(rows[0])
                    if len(bestmatches[i]) > 0:
                        for rowid in bestmatches[i]:
                            for m in range(len(rows[rowid])):
                                avgs[m] += rows[rowid][m]
                        for j in range(len(avgs)):
                            avgs[j] /= len(bestmatches[i])
                        clusters[i] = avgs
            return bestmatches

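    To run the two interactive lines from a script instead of the interpreter, a main block at the bottom of the same file is enough: kcluster returns a list of k lists of row indices, so the names in a cluster can be printed directly. A sketch (the data file name is a placeholder):

        if __name__ == '__main__':
            rownames, colnames, data = readfile('data.txt')
            kclust = kcluster(data, k=10)
            # Row names belonging to the first cluster
            print [rownames[r] for r in kclust[0]]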
