Search Results

Search found 13815 results on 553 pages for 'gae python'.

Page 109/553 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • Python string manipulation

    - by paradox
    I'm trying to split a string into a list of ints for further processing, but somehow I can't remove certain whitespace characters in between elements of the list. The string x is supposed to have a length of 1000 instead of 1019. I read the Python documentation and found strip(), but it only removes leading and trailing whitespace. How should I go about removing these whitespace characters, and how do I convert a list of strings to a list of ints? My code is as follows:

        import array

        x = """73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""

        y = []
        for i in range(0, len(x)):
            # String is now in a string list
            if x[i] != '':
                y.append(x[i])
                print(y[i])
        print(len(x))
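
    A minimal sketch of one way to do both steps at once, assuming only the digits are wanted: filter out anything that is not a digit (which drops the embedded newlines) and convert each remaining character with int():

        digits = [int(ch) for ch in x if ch.isdigit()]
        print(len(digits))   # 1000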

    Read the article

  • JavaScript-like Object in Python standard library?

    - by David Wolever
    Quite often, I find myself wanting a simple, "dumb" object in Python which behaves like a JavaScript object (i.e., its members can be accessed either with .member or with ['member']). Usually I'll just stick this at the top of the .py:

        class DumbObject(dict):
            def __getattr__(self, attr):
                return self[attr]
            def __stattr__(self, attr, value):
                self[attr] = value

    But that's kind of lame, and there is at least one bug with that implementation (although I can't remember what it is). So, is there something similar in the standard library? And, for the record, simply instantiating object doesn't work:

        obj = object()
        obj.airspeed = 42
        Traceback (most recent call last):
          File "", line 1, in
        AttributeError: 'object' object has no attribute 'airspeed'

    Thanks, David
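
    A minimal sketch of one common fix for the pattern above, as an illustration rather than a standard-library class (one likely culprit in the original is the misspelled __stattr__, since Python looks for __setattr__, so assignments bypass the dict entirely):

        class AttrDict(dict):
            """Dict whose items are also reachable as attributes."""
            def __getattr__(self, name):
                try:
                    return self[name]
                except KeyError:
                    raise AttributeError(name)
            def __setattr__(self, name, value):
                self[name] = value

        obj = AttrDict()
        obj.airspeed = 42
        print(obj['airspeed'])   # 42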

    Read the article

  • Lost in UTF-8 hell. (Django and Python)

    - by user140314
    I am working through the Django RSS reader project here. The RSS feed will read something like "OKLAHOMA CITY (AP) — James Harden let". The RSS feed's encoding reads encoding="UTF-8", so I believe I am passing utf-8 to markdown in the code snippet below. The em dash is where it chokes. I get the Django error "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)", which is a UnicodeEncodeError. In the variables being passed I see "OKLAHOMA CITY (AP) \u2014 James Harden". The line that is not working is:

        content = content.encode(parsed_feed.encoding, "xmlcharrefreplace")

    I am using markdown 2.0, Django 1.1, and Python 2.4. What is the magic sequence of encoding and decoding that I need to do to make this work? Thanks.
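
    A sketch of the usual Python 2 pattern, under the assumption that content is still a byte string at this point: calling .encode() on a byte string makes Python decode it with the ascii codec first, which is what raises the UnicodeEncodeError. Decoding to a unicode object before re-encoding avoids that implicit step:

        if isinstance(content, str):    # still raw bytes: decode before re-encoding
            content = content.decode(parsed_feed.encoding or 'utf-8')
        content = content.encode(parsed_feed.encoding, 'xmlcharrefreplace')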

    Read the article

  • python list mysteriously getting set to something within my django/piston handler

    - by Anverc
    To start, I'm very new to Python, let alone Django and Piston. Anyway, I've created a new BaseHandler subclass, "class BaseApiHandler(BaseHandler)", so that I can extend some of the stuff that BaseHandler does. This has been working fine until I added a new filter that can limit results to the first or last result. Now I can refresh the API page over and over, and sometimes it will limit the result even if I don't include /limit/whatever in my URL. I've added some debug info to my return value to see what is happening, and that's where it gets more weird. These return values will make more sense after you see the code, but here they are for reference.

    When the results are correct:

        "statusmsg": "2 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',}",

    When the results are incorrect (once you read the code you'll notice two things wrong: first, it doesn't have 'limit':'None'; second, it shouldn't even get this far to begin with):

        "statusmsg": "1 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',with limit[0,1](limit,None),}",

    It may be important to note that I'm the only person with access to the server running this right now, so even if it were a cache issue, it doesn't make sense that I can get different results just by hitting F5 while viewing: http://localhost/api/hours_detail/datestamp/2009-03-02/empid/22

    Here's the code, broken into urls.py and handlers.py, so that you can see what I'm doing.

    URLS.PY

        urlpatterns = patterns('',
            #hours_detail/id/{id}/empid/{empid}/projid/{projid}/datestamp/{datestamp}/daterange/{fromdate}to{todate}/limit/{first|last}/exact
            #empid is required
            # id, empid, projid, datestamp, daterange can be in any order
            url(r'^api/hours_detail/(?:' + \
                r'(?:[/]?id/(?P<id>\d+))?' + \
                r'(?:[/]?empid/(?P<empid>\d+))?' + \
                r'(?:[/]?projid/(?P<projid>\d+))?' + \
                r'(?:[/]?datestamp/(?P<datestamp>\d{4,}[-/\.]\d{2,}[-/\.]\d{2,}))?' + \
                r'(?:[/]?daterange/(?P<daterange>(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})(?:to|/-)(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})))?' + \
                r')+' + \
                r'(?:/limit/(?P<limit>(?:first|last)))?' + \
                r'(?:/(?P<exact>exact))?$',
                hours_detail_resource),

    HANDLERS.PY

        # inherit from BaseHandler to add the extra functionality i need to process the possibly null URL params
        class BaseApiHandler(BaseHandler):
            # keep track of the handler so the data is represented back to me correctly
            post_name = 'base'

            # THIS IS THE LIST IN QUESTION - SOMETIMES IT IS GETTING SET TO [0,1] MYSTERIOUSLY
            # this gets set to a list when the results are to be limited
            limit = None

            def has_limit(self):
                return (isinstance(self.limit, list) and len(self.limit) == 2)

            def process_kwarg_read(self, key, value, d_post, b_exact):
                """ this should be overridden in the derived classes to process kwargs """
                pass

            # override 'read' so we can better handle our api's searching capabilities
            def read(self, request, *args, **kwargs):
                d_post = {'status':0,'statusmsg':'Nothing Happened'}
                try:
                    # setup the named response object
                    # select all employees then filter - querysets are lazy in django
                    # the actual query is only done once data is needed, so this may
                    # seem like some memory hog slow beast, but it's actually not.
                    d_post[self.post_name] = self.queryset(request)

                    # this is a string that holds debug information...
                    # it's the string I mentioned before pasting this code
                    s_query = ''
                    b_exact = False
                    if 'exact' in kwargs and kwargs['exact'] <> None:
                        b_exact = True
                        s_query = '\'exact\':True,'
                    for key,value in kwargs.iteritems():
                        # the regex url possibilities will push None into the kwargs dictionary
                        # if not specified, so just continue looping through if that's the case
                        if value == None or key == 'exact':
                            continue
                        # write to the s_query string so we have a nice error message
                        s_query = '%s\'%s\':\'%s\',' % (s_query, key, value)
                        # now process this key/value kwarg
                        self.process_kwarg_read(key=key, value=value, d_post=d_post, b_exact=b_exact)
                    # end of the kwargs for loop
                    else:
                        if self.has_limit():
                            # THIS SEEMS TO GET HIT SOMETIMES IF YOU CONSTANTLY REFRESH THE API PAGE,
                            # EVEN THOUGH THE LINE IN THE FOR LOOP WHICH UPDATES s_query DOESN'T GET HIT
                            # AND THUS self.process_kwarg_read ALSO DOESN'T GET HIT, SO NEITHER DOES limit = [0,1]
                            s_query = '%swith limit[%s,%s](limit,%s),' % (s_query, self.limit[0], self.limit[1], kwargs['limit'])
                            d_post[self.post_name] = d_post[self.post_name][self.limit[0]:self.limit[1]]

                    if d_post[self.post_name].count() == 0:
                        d_post['status'] = 0
                        d_post['statusmsg'] = '%s not found with query: {%s}' % (self.post_name, s_query)
                    else:
                        d_post['status'] = 1
                        d_post['statusmsg'] = '%s %s found with query: {%s}' % (d_post[self.post_name].count(), self.post_name, s_query)
                except:
                    e = sys.exc_info()[1]
                    d_post['status'] = 0
                    d_post['statusmsg'] = 'error: %s' % e
                    d_post[self.post_name] = []
                return d_post

        class HoursDetailHandler(BaseApiHandler):
            #allowed_methods = ('GET',)
            model = HoursDetail
            exclude = ()
            post_name = 'hours_detail'

            def process_kwarg_read(self, key, value, d_post, b_exact):
                if ...
                    # I have several if/elif statements here that check for other things...
                    # 'self.limit =' only shows up in the following elif:
                elif key == 'limit':
                    order_by = 'clock_time'
                    if value == 'last':
                        order_by = '-clock_time'
                    d_post[self.post_name] = d_post[self.post_name].order_by(order_by)
                    # TO GET HERE, THE ONLY PLACE IN CODE WHERE self.limit IS SET,
                    # YOU MUST HAVE GONE THROUGH THE value == None CHECK????
                    self.limit = [0, 1]
                else:
                    raise NameError

            def read(self, request, *args, **kwargs):
                # empid is required, so make sure it exists before running BaseApiHandler's read method
                if not('empid' in kwargs and kwargs['empid'] <> None and kwargs['empid'] >= 0):
                    return {'status':0,'statusmsg':'empid cannot be empty'}
                else:
                    return BaseApiHandler.read(self, request, *args, **kwargs)

    Does anyone have a clue how else self.limit might be getting set to [0, 1]? Am I misunderstanding kwargs or loops or anything in Python?
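
    A minimal sketch of one thing worth ruling out (an assumption, not something the post confirms): limit is declared at class level, so it is shared state. If the handler object, or the class itself, is reused across requests in a long-running server process, a value assigned during one request can still be visible in the next one unless it is reset:

        class Handler(object):
            limit = None                 # class attribute: one object shared by all uses

        h = Handler()
        h.limit = [0, 1]                 # request 1 sets it on the instance
        print(h.limit)                   # [0, 1]
        # If the framework reuses h for the next request, the old value is still there
        # unless it is cleared, e.g. self.limit = None at the top of read()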

    Read the article

  • [Python] Detect destination of shortened, or "tiny" url

    - by conradlee
    I have just scraped a bunch of Google Buzz data, and I want to know which Buzz posts reference the same news articles. The problem is that many of the links in these posts have been modified by URL shorteners, so many distinct shortened URLs may actually all point to the same news article. Given that I have millions of posts, what is the most efficient way (preferably in Python) to:

    1. detect whether a URL is a shortened URL (from any of the many URL shortening services, or at least the largest), and
    2. find the "destination" of the shortened URL, i.e., the long, original version.

    Does anyone know whether the URL shorteners impose strict request rate limits? If I keep this down to 100/second (all coming from the same IP address), do you think I'll run into trouble?
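
    A minimal sketch of the expansion step with the standard library (Python 2), with no external service assumed: let urllib2 follow the redirects and read back the final URL. A small set of known shortener domains, illustrative only, is one cheap way to decide whether to bother:

        import urllib2
        import urlparse

        SHORTENERS = set(['bit.ly', 'tinyurl.com', 'goo.gl', 'ow.ly', 't.co'])  # illustrative list

        def is_shortened(url):
            return urlparse.urlparse(url).netloc.lower() in SHORTENERS

        def expand(url):
            # urllib2 follows HTTP redirects by default; geturl() is the final address
            return urllib2.urlopen(url).geturl()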

    Read the article

  • How to structure Python package that contains Cython code

    - by Craig McQueen
    I'd like to make a Python package containing some Cython code. I've got the Cython code working nicely. However, now I want to know how best to package it. For most people, who just want to install the package, I'd like to include the .c file that Cython creates, and arrange for setup.py to compile that to produce the module. Then the user doesn't need Cython installed in order to install the package. But for people who may want to modify the package, I'd also like to provide the Cython .pyx files, and somehow also allow setup.py to build them using Cython (so those users would need Cython installed). How should I structure the files in the package to cater for both these scenarios? The Cython documentation gives a little guidance, but it doesn't say how to make a single setup.py that handles both the with- and without-Cython cases.
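
    A sketch of one conventional layout, assuming a single extension named my_module (the names here are placeholders): ship both my_module.pyx and the generated my_module.c, and let setup.py prefer Cython when it is importable:

        from distutils.core import setup
        from distutils.extension import Extension

        try:
            from Cython.Distutils import build_ext
        except ImportError:
            use_cython = False
        else:
            use_cython = True

        if use_cython:
            ext_modules = [Extension("my_module", ["my_module.pyx"])]
            cmdclass = {"build_ext": build_ext}   # Cython compiles .pyx -> .c -> extension
        else:
            ext_modules = [Extension("my_module", ["my_module.c"])]
            cmdclass = {}                         # plain C compile of the shipped .c file

        setup(name="my_package", cmdclass=cmdclass, ext_modules=ext_modules)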

    Read the article

  • Fastest way to list all primes below N in python

    - by jbochi
    This is the best algorithm I could come up with after struggling with a couple of Project Euler questions.

        def get_primes(n):
            numbers = set(range(n, 1, -1))
            primes = []
            while numbers:
                p = numbers.pop()
                primes.append(p)
                numbers.difference_update(set(range(p*2, n+1, p)))
            return primes

        >>> timeit.Timer(stmt='get_primes.get_primes(1000000)', setup='import get_primes').timeit(1)
        1.1499958793645562

    Can it be made even faster?

    EDIT: This code has a flaw: since numbers is an unordered set, there is no guarantee that numbers.pop() will remove the lowest number from the set. Nevertheless, it works (at least for me) for some input numbers:

        >>> sum(get_primes(2000000))
        142913828922L   # that's the correct sum of all primes below 2 million
        >>> 529 in get_primes(1000)
        False
        >>> 529 in get_primes(530)
        True

    EDIT: The ranking so far (pure Python, no external sources, all primes below 1 million):

        Sundaram's Sieve implementation by myself: 327 ms
        Daniel's Sieve: 435 ms
        Alex's recipe from the Cookbook: 710 ms

    EDIT: ~unutbu is leading the race.
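
    For comparison, a minimal sketch of a plain sieve of Eratosthenes (not one of the timed entries above, and untimed here; n >= 2 assumed):

        def primes_below(n):
            """Return a list of all primes < n."""
            sieve = [True] * n
            sieve[0] = sieve[1] = False
            for i in range(2, int(n ** 0.5) + 1):
                if sieve[i]:
                    # mark every multiple of i starting at i*i as composite
                    sieve[i*i::i] = [False] * len(sieve[i*i::i])
            return [i for i, is_prime in enumerate(sieve) if is_prime]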

    Read the article

  • Python In-memory table

    - by nsharish
    What is the right way to form an in-memory table in Python with direct lookups for rows and columns? I thought of using a dict of dicts this way:

        class Table(dict):
            def __getitem__(self, key):
                if key not in self:
                    self[key] = {}
                return dict.__getitem__(self, key)

        table = Table()
        table['row1']['column1'] = 'value11'
        table['row1']['column2'] = 'value12'
        table['row2']['column1'] = 'value21'
        table['row2']['column2'] = 'value22'

        >>> table
        {'row1': {'column1': 'value11', 'column2': 'value12'}, 'row2': {'column1': 'value21', 'column2': 'value22'}}

    I had difficulty looking up values in columns:

        >>> 'row1' in table
        True
        >>> 'value11' in table['row1'].values()
        True

    Now how do I check whether 'column1' has 'value11'? Is this method of forming tables wrong? Is there a better way to implement such tables with easier lookups? Thanks.
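
    A small sketch of a column lookup over the dict-of-dicts layout above (a hypothetical helper, not a library function):

        def rows_where(table, column, value):
            """Return the row keys whose given column holds the given value."""
            return [row for row, columns in table.items() if columns.get(column) == value]

        print(rows_where(table, 'column1', 'value11'))   # ['row1']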

    Read the article

  • Python Vector Class

    - by sfjedi
    I'm coming from a C# background, where this stuff is super easy, and I'm trying to translate it into Python for Maya. There has got to be a better way to do this. Basically, I'm looking to create a Vector class that will simply have x, y and z coordinates, but it would be ideal if this class returned a tuple with all 3 coordinates and if you could edit the values of this tuple through x, y and z properties, somehow. This is what I have so far, but there must be a better way to do this than using an exec statement, right? I hate using exec statements.

        class Vector(object):
            '''Creates a Maya vector/triple, having x, y and z coordinates as float values'''

            def __init__(self, x=0, y=0, z=0):
                self.x, self.y, self.z = x, y, z

            def attrsetter(attr):
                def set_float(self, value):
                    setattr(self, attr, float(value))
                return set_float

            for xyz in 'xyz':
                exec("%s = property(fget=attrgetter('_%s'), fset=attrsetter('_%s'))" % (xyz, xyz, xyz))
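
    A sketch of the same idea without exec, under the assumption that the goal is float-coercing x/y/z properties plus tuple conversion (tuple(v) works once __iter__ is defined); the helper name _coord is made up for the illustration:

        class Vector(object):
            def __init__(self, x=0.0, y=0.0, z=0.0):
                self.x, self.y, self.z = x, y, z

            def _coord(name):                        # runs at class-creation time
                def getter(self):
                    return getattr(self, '_' + name)
                def setter(self, value):
                    setattr(self, '_' + name, float(value))
                return property(getter, setter)

            x, y, z = _coord('x'), _coord('y'), _coord('z')

            def __iter__(self):                      # lets tuple(v) yield (x, y, z)
                return iter((self.x, self.y, self.z))

        v = Vector(1, 2, 3)
        v.x = 5
        print(tuple(v))   # (5.0, 2.0, 3.0)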

    Read the article

  • Convert string to JSON using Python

    - by Luiz Fernando
    Hi, I'm a little bit confused by JSON in Python. To me, it looks like a dictionary, so I'm trying to do this:

        json = """{
            "glossary": {
                "title": "example glossary",
                "GlossDiv": {
                    "title": "S",
                    "GlossList": {
                        "GlossEntry": {
                            "ID": "SGML",
                            "SortAs": "SGML",
                            "GlossTerm": "Standard Generalized Markup Language",
                            "Acronym": "SGML",
                            "Abbrev": "ISO 8879:1986",
                            "GlossDef": {
                                "para": "A meta-markup language, used to create markup languages such as DocBook.",
                                "GlossSeeAlso": ["GML", "XML"]
                            },
                            "GlossSee": "markup"
                        }
                    }
                }
            }
        }
        """

    But when I do print dict(json), it gives an error. How can I transform this string into a structure and then call json["title"] to obtain "example glossary"? Thanks.
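
    A minimal sketch with the standard-library json module (simplejson provides the same loads() on older Pythons); the variable is renamed here so it does not shadow the module, and the title lives one level down under "glossary":

        import json

        raw = '{"glossary": {"title": "example glossary"}}'   # trimmed example string
        data = json.loads(raw)                                 # str -> nested dicts/lists
        print(data["glossary"]["title"])                       # 'example glossary'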

    Read the article

  • Python: Filter a dictionary

    - by Adam Matan
    Hi, I have a dictionary of points, say:

        >>> points = {'a': (3, 4), 'b': (1, 2), 'c': (5, 5), 'd': (3, 3)}

    I want to create a new dictionary with all the points whose x and y values are smaller than 5, i.e. points 'a', 'b' and 'd'. According to the book, each dictionary has the items() function, which returns a list of (key, pair) tuples:

        >>> points.items()
        [('a', (3, 4)), ('c', (5, 5)), ('b', (1, 2)), ('d', (3, 3))]

    So I have written this:

        >>> for item in [i for i in points.items() if i[1][0] < 5 and i[1][1] < 5]:
        ...     points_small[item[0]] = item[1]
        ...
        >>> points_small
        {'a': (3, 4), 'b': (1, 2), 'd': (3, 3)}

    Is there a more elegant way? I was expecting Python to have some super-awesome dictionary.filter(f) function... Adam
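
    A sketch of the one-liner version, passing a generator expression to dict() (a dict comprehension is another option on Python 2.7+):

        points_small = dict((k, v) for k, v in points.items() if v[0] < 5 and v[1] < 5)
        print(points_small)   # {'a': (3, 4), 'b': (1, 2), 'd': (3, 3)}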

    Read the article

  • How to generate SSH key pairs with Python

    - by Lee
    Hello, I'm attempting to write a script to generate SSH identity key pairs for me.

        from M2Crypto import RSA
        key = RSA.gen_key(1024, 65337)
        key.save_key("/tmp/my.key", cipher=None)

    The file /tmp/my.key looks great now. By running

        ssh-keygen -y -f /tmp/my.key > /tmp/my.key.pub

    I can extract the public key. My question is: how can I extract the public key from Python? Using key.save_pub_key("/tmp/my.key.pub") saves something like:

        -----BEGIN PUBLIC KEY-----
        MFwwDQYJKoZIhvcNAQEBBQADASDASDASDASDBarYRsmMazM1hd7a+u3QeMP
        ...
        FZQ7Ic+BmmeWHvvVP4Yjyu1t6vAut7mKkaDeKbT3yiGVUgAEUaWMXqECAwEAAQ==
        -----END PUBLIC KEY-----

    when I'm looking for something like:

        ssh-rsa AAAABCASDDBM$%3WEAv/3%$F ..... OSDFKJSL43$%^DFg==
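
    One pragmatic sketch is to let ssh-keygen do the conversion, exactly as in the shell command above, but driven from Python; this avoids hand-building the OpenSSH wire format:

        import subprocess

        pub = subprocess.Popen(["ssh-keygen", "-y", "-f", "/tmp/my.key"],
                               stdout=subprocess.PIPE).communicate()[0]
        open("/tmp/my.key.pub", "w").write(pub)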

    Read the article

  • Python thread pool similar to the multiprocessing Pool?

    - by Martin
    Is there a Pool class for worker threads, similar to the multiprocessing module's Pool class? I like, for example, the easy way to parallelize a map function:

        def long_running_func(p):
            c_func_no_gil(p)

        p = multiprocessing.Pool(4)
        xs = p.map(long_running_func, range(100))

    However, I would like to do it without the overhead of creating new processes. I know about the GIL; however, in my use case the function will be an IO-bound C function for which the Python wrapper will release the GIL before the actual function call. Do I have to write my own threading pool?
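
    A sketch using multiprocessing.dummy, which exposes the same Pool interface backed by threads rather than processes:

        from multiprocessing.dummy import Pool as ThreadPool

        pool = ThreadPool(4)
        xs = pool.map(long_running_func, range(100))
        pool.close()
        pool.join()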

    Read the article

  • Calculating spam probability in python

    - by Hobhouse
    I am building a website in Python/Django and want to predict whether a user submission is valid or whether it is spam. Users have an accept rate on their submissions, like this website has. Users can moderate other users' submissions, and these moderations are later metamoderated by an admin. Given this:

        user A, with a submission accept rate of 60%, submits something.
        user B moderates A's post as a valid submission. However, his moderations are often wrong; his moderations' accept rate is a mere 30%.
        user C moderates A's post as spam. User C is usually right; his moderations' accept rate is 80%.

    How can I predict the chance of A's post being spam?
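
    A rough sketch of one way to combine these numbers with Bayes' rule, treating each accept rate as an independent probability of being right (a strong assumption, and only one of several reasonable models):

        def spam_probability(prior_spam, votes):
            """votes: list of (says_spam, moderator_accuracy) pairs."""
            p_spam, p_valid = prior_spam, 1.0 - prior_spam
            for says_spam, accuracy in votes:
                if says_spam:
                    p_spam *= accuracy
                    p_valid *= 1.0 - accuracy
                else:
                    p_spam *= 1.0 - accuracy
                    p_valid *= accuracy
            return p_spam / (p_spam + p_valid)

        # A's 60% accept rate read as a 0.4 prior spam chance;
        # B says valid (30% accurate), C says spam (80% accurate)
        print(spam_probability(0.4, [(False, 0.30), (True, 0.80)]))   # ~0.86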

    Read the article

  • Python - Test directory permissions

    - by Sean
    In Python on Windows, is there a way to determine whether a user has permission to access a directory? I've taken a look at os.access but it gives false results.

        >>> os.access('C:\haveaccess', os.R_OK)
        False
        >>> os.access(r'C:\haveaccess', os.R_OK)
        True
        >>> os.access('C:\donthaveaccess', os.R_OK)
        False
        >>> os.access(r'C:\donthaveaccess', os.R_OK)
        True

    Am I doing something wrong? Is there a better way to check whether a user has permission to access a directory?
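
    A sketch of the EAFP alternative, which sidesteps os.access entirely: attempt the operation and catch the failure.

        import os

        def can_read_dir(path):
            try:
                os.listdir(path)          # raises if the directory can't be read
                return True
            except OSError:               # WindowsError is a subclass of OSError
                return False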

    Read the article

  • To find first N prime numbers in python

    - by Rahul Tripathi
    Hi all, I am new to the programming world. I was writing this code in Python to generate the first N prime numbers. The user should input the value of N, which is the total number of prime numbers to print out. I have written this code, but it doesn't produce the desired output; instead it prints the prime numbers up to N. For example, the user enters N = 7.

        Desired output: 2, 3, 5, 7, 11, 13, 17
        Actual output:  2, 3, 5, 7

    Kindly advise.

        i = 1
        x = int(input("Enter the number:"))
        for k in range(1, (x+1), 1):
            c = 0
            for j in range(1, (i+1), 1):
                a = i % j
                if (a == 0):
                    c = c + 1
            if (c == 2):
                print(i)
            else:
                k = k - 1
            i = i + 1
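
    A minimal sketch that counts primes as they are found and stops after N of them; reassigning the loop variable (k = k - 1) has no effect on a Python for loop, which is why the version above stops early:

        def first_n_primes(n):
            primes = []
            candidate = 2
            while len(primes) < n:
                # trial-divide by the primes found so far
                if all(candidate % p != 0 for p in primes):
                    primes.append(candidate)
                candidate += 1
            return primes

        print(first_n_primes(7))   # [2, 3, 5, 7, 11, 13, 17]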

    Read the article

  • Scraping *.aspx content using Python

    - by tomato
    I'm having difficulty scraping a dynamically generated table in ASPX. I'm trying to scrape the gas prices from a site like this: GasPrices. I can extract all the information in the gas price table (address, time submitted, etc.) except for the actual gas price. Is there a way I could scrape the gas prices, i.e. somehow get a text representation of them? I'm not very familiar with ASP/ASPX, but what's being generated now is not showing up in the final HTML. I'm using Python to do the scraping, but that's irrelevant unless there's a specific library... Thanks in advance.
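
    If the missing values are filled in by an ASP.NET postback, one common workaround is to replay the form POST (including the hidden __VIEWSTATE/__EVENTVALIDATION fields) and parse the response; the URL and the "price" selector below are placeholders, not taken from the actual site:

        import urllib, urllib2
        from BeautifulSoup import BeautifulSoup

        url = "http://example.com/GasPrices.aspx"                  # placeholder URL
        soup = BeautifulSoup(urllib2.urlopen(url).read())
        # collect the hidden ASP.NET form fields and post them back
        form = dict((f["name"], f.get("value", ""))
                    for f in soup.findAll("input", {"type": "hidden"}))
        html = urllib2.urlopen(url, urllib.urlencode(form)).read()
        prices = BeautifulSoup(html).findAll("span", {"class": "price"})   # placeholder selector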

    Read the article

  • Python utf-8, howto align printout

    - by Fredrik
    Hi, I have an array containing Japanese characters as well as "normal" ones. How do I align the printout of these?

        #!/usr/bin/python
        # coding=utf-8
        a1 = ['??', '???', 'trazan', '??', '????']
        a2 = ['dipsy', 'laa-laa', 'banarne', 'po', 'tinky winky']
        for i, j in zip(a1, a2):
            print i.ljust(12), ':', j
        print '-' * 8
        for i, j in zip(a1, a2):
            print i, len(i)
            print j, len(j)

    Output:

        ?? : dipsy
        ??? : laa-laa
        trazan : banarne
        ?? : po
        ???? : tinky winky
        --------
        ?? 6
        dipsy 5
        ??? 9
        laa-laa 7
        trazan 6
        banarne 7
        ?? 6
        po 2
        ???? 12
        tinky winky 11

    thanks, //Fredrik
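
    A sketch of the usual first step on Python 2: decode the UTF-8 byte strings to unicode so that len() and ljust() count characters instead of bytes (fully even columns for double-width CJK glyphs would additionally need unicodedata.east_asian_width):

        # -*- coding: utf-8 -*-
        for name, friend in zip(a1, a2):
            uname = name.decode('utf-8')              # unicode object: len() counts characters
            print uname.ljust(12).encode('utf-8'), ':', friend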

    Read the article

  • Python file-io code listing current folder path instead of the specified

    - by Tom Brito
    I have the code:

        import os
        import sys

        fileList = os.listdir(sys.argv[1])
        for file in fileList:
            if os.path.isfile(file):
                print "File >> " + os.path.abspath(file)
            else:
                print "Dir >> " + os.path.abspath(file)

    located in my music folder ("/home/tom/Music"). When I call it with:

        python test.py "/tmp"

    I expected it to list my "/tmp" files and folders with the full path. But it printed lines like:

        Dir >> /home/tom/Music/seahorse-gw2jNn
        Dir >> /home/tom/Music/FlashXX9kV847
        Dir >> /home/tom/Music/pulse-DcIEoxW5h2gz

    That is, the correct file names but the wrong path (and these files are not in my Music folder either). What's wrong with this code?
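
    A sketch of the usual fix, as an assumption about the cause: os.listdir() returns bare names, and os.path.abspath() resolves them against the current working directory, so each name needs to be joined with the directory that was listed:

        import os
        import sys

        target = sys.argv[1]
        for name in os.listdir(target):
            path = os.path.join(target, name)         # anchor the name to the listed directory
            label = "File" if os.path.isfile(path) else "Dir"
            print "%s >> %s" % (label, os.path.abspath(path))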

    Read the article

  • Python list comprehension to return edge values of a list

    - by mvid
    If I have a list in Python such as:

        stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]

    with length n (in this case 9), and I am interested in creating lists of length n/2 (in this case 4), I want all possible sets of n/2 values in the original list, for example:

        [1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3]

    Is there some list comprehension code I could use to iterate through the list and retrieve all of those sublists? I don't care about the order of the values within the lists; I am just trying to find a clever method of generating them.
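
    A sketch of one wrap-around list comprehension matching the example, assuming the windows are meant to be consecutive and to wrap past the end of the list:

        stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]
        n = len(stuff)
        windows = [[stuff[(i + j) % n] for j in range(n // 2)] for i in range(n)]
        # [[1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3]]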

    Read the article

  • Error in unzipping a file from a python script running as daemon

    - by Fedrick
    I am getting an error whenever I try to run the following unzip command from a Python script which is running as a daemon.

    Command:

        unzip abcd.zip /dev/null

    Error:

        End-of-central-directory signature not found.  Either this file is not
        a zip file, or it constitutes one disk of a multi-part archive.  In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip:  cannot find zipfile directory in one of abcd.zip or
                abcd.zip.zip, and cannot find abcd.zip.ZIP, period.

    Could anyone help me in this regard? Thanks in advance.
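
    A sketch of one way to take the external command out of the picture and do the extraction in-process with the standard library; it also makes it obvious which path the daemon is actually reading, since a daemon's working directory often differs from an interactive shell's (the output directory below is an arbitrary example):

        import os
        import zipfile

        archive = os.path.abspath("abcd.zip")     # log/inspect the path the daemon really sees
        zf = zipfile.ZipFile(archive)
        zf.extractall("/tmp/abcd")                # extract everything to a known directory
        zf.close()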

    Read the article

  • MySQL LOAD DATA LOCAL INFILE example in python?

    - by David Perron
    I am looking for a syntax definition, example, sample code, wiki, etc. for executing a LOAD DATA LOCAL INFILE command from Python. I believe I can use mysqlimport as well if that is available, so any feedback (and code snippet) on which is the better route is welcome. A Google search is not turning up much in the way of current info. The goal in either case is the same: automate loading hundreds of files, with a known naming convention and date structure, into a single MySQL table. David
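
    A minimal sketch with MySQLdb, under an assumed table name, CSV layout, and file names; note that local_infile has to be allowed on both the client connection and the server for LOAD DATA LOCAL to work:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="loader", passwd="secret",
                               db="mydb", local_infile=1)
        cur = conn.cursor()
        for path in ["/data/report_2010-04-01.csv", "/data/report_2010-04-02.csv"]:   # hypothetical files
            # plain string formatting is fine here because the file names are our own, trusted values
            cur.execute(
                "LOAD DATA LOCAL INFILE '%s' INTO TABLE my_table "
                "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'" % path)
        conn.commit()
        conn.close()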

    Read the article

  • OpenCV in Python can't scan through pixels

    - by Marco L.
    Hi everyone, I'm stuck with a problem in the Python wrapper for OpenCV. I have this function that returns 1 if the number of black pixels is greater than threshold:

        def checkBlackPixels(img, threshold):
            width     = img.width
            height    = img.height
            nchannels = img.nChannels
            step      = img.widthStep
            dimtot    = width * height
            data      = img.imageData

            black = 0
            for i in range(0, height):
                for j in range(0, width):
                    r = data[i*step + j*nchannels + 0]
                    g = data[i*step + j*nchannels + 1]
                    b = data[i*step + j*nchannels + 2]
                    if r == 0 and g == 0 and b == 0:
                        black = black + 1

            if black >= threshold * dimtot:
                return 1
            else:
                return 0

    The loop (which scans each pixel of a given image) works fine when the input is an RGB image, but if the input is a single-channel image I get this error:

        for j in range( width ):
        TypeError: Nested sequences should have 2 or 3 dimensions

    The input single-channel image (called 'rg' in the next example) is taken from an RGB image called 'src', processed with cvSplit and then cvAbsDiff:

        cvSplit( src, r, g, b, 'NULL' )
        rg = cvCreateImage( cvGetSize(src), src.depth, 1 )
        # R - G
        cvAbsDiff( r, g, rg )

    I've also noticed that the problem comes from the difference image produced by cvSplit... Can anyone help me? Thank you
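
    A sketch of one way to make the counting loop channel-agnostic, using only the attributes already referenced above; the assumption is that a single-channel image simply has nChannels == 1, so each pixel is one value rather than an (r, g, b) triple:

        def count_black_fraction(img):
            black = 0
            for i in range(img.height):
                for j in range(img.width):
                    base = i * img.widthStep + j * img.nChannels
                    # gather however many channel values this image has
                    values = [img.imageData[base + c] for c in range(img.nChannels)]
                    if all(v == 0 for v in values):
                        black += 1
            return float(black) / (img.width * img.height)

        # checkBlackPixels(img, t) then becomes: 1 if count_black_fraction(img) >= t else 0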

    Read the article

  • Problem simulating a key press on Python for Symbian platform on a Nokia 5th ed phone

    - by abhishekbhardwaj007
    I am developing an app for the Nokia 5800 XpressMusic (S60 5th edition) using PyS60 (Python for S60). I want to simulate a key press, say when a message arrives: I detect the message and press a key. There is a Keypress module for PyS60 for 2nd edition phones which allows this; however, I have not been able to install it on my 5th edition phone. Is this module portable? If yes, how do I install it (I get a certificate error)? And if not, are there any alternatives for simulating a key press on a 5th edition touch phone?

    Read the article
