Search Results

Search found 19662 results on 787 pages for 'python module'.


  • Lost in UTF-8 hell. (Django and Python)

    - by user140314
    I am working through the Django RSS reader project here. The RSS feed will read something like "OKLAHOMA CITY (AP) — James Harden let". The RSS feed's encoding reads encoding="UTF-8" so I believe I am passing utf-8 to markdown in the code snippet below. The em dash is where it chokes. I get the Django error of "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)" which is a UnicodeEncodeError. In the variables being passed I see "OKLAHOMA CITY (AP) \u2014 James Harden". The code line that is not working is: content = content.encode(parsed_feed.encoding, "xmlcharrefreplace") I am using markdown 2.0, django 1.1, and python 2.4. What is the magic sequence of encoding and decoding that I need to do to make this work? Thanks.
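
    A minimal sketch of the usual cure, assuming the goal is to hand markdown a unicode object and only encode back to bytes at the output boundary (the feed/entry names below are placeholders for whatever feedparser returns):

        import markdown

        raw = entry.summary                          # placeholder: text pulled from the feed
        if isinstance(raw, str):                     # byte string: decode with the feed's encoding
            content = raw.decode(parsed_feed.encoding or 'utf-8')
        else:
            content = raw                            # already unicode
        html = markdown.markdown(content)            # markdown 2.x accepts unicode input
        output = html.encode('utf-8')                # encode only when writing bytes out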

    Read the article

  • python list mysteriously getting set to something within my django/piston handler

    - by Anverc
    To start, I'm very new to python, let alone Django and Piston. Anyway, I've created a new BaseHandler class "class BaseApiHandler(BaseHandler)" so that I can extend some of the stff that BaseHandler does. This has been working fine until I added a new filter that could limit results to the first or last result. Now I can refresh the api page over and over and sometimes it will limit the result even if I don't include /limit/whatever in my URL... I've added some debug info into my return value to see what is happening, and that's when it gets more weird. this return value will make more sense after you see the code, but here they are for reference: When the results are correct: "statusmsg": "2 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',}", when the results are incorrect (once you read the code you'll notice two things wrong. First, it doesn't have 'limit':'None', secondly it shouldn't even get this far to begin with. "statusmsg": "1 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',with limit[0,1](limit,None),}", It may be important to note that I'm the only person with access to the server running this right now, so even if it was a cache issue, it doesn't make sense that I can just refresh and get different results by hitting F5 while viewing: http://localhost/api/hours_detail/datestamp/2009-03-02/empid/22 Here's the code broken into urls.py and handlers.py so that you can see what i'm doing: URLS.PY urlpatterns = patterns('', #hours_detail/id/{id}/empid/{empid}/projid/{projid}/datestamp/{datestamp}/daterange/{fromdate}to{todate}/limit/{first|last}/exact #empid is required # id, empid, projid, datestamp, daterange can be in any order url(r'^api/hours_detail/(?:' + \ r'(?:[/]?id/(?P<id>\d+))?' + \ r'(?:[/]?empid/(?P<empid>\d+))?' + \ r'(?:[/]?projid/(?P<projid>\d+))?' + \ r'(?:[/]?datestamp/(?P<datestamp>\d{4,}[-/\.]\d{2,}[-/\.]\d{2,}))?' + \ r'(?:[/]?daterange/(?P<daterange>(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})(?:to|/-)(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})))?' + \ r')+' + \ r'(?:/limit/(?P<limit>(?:first|last)))?' + \ r'(?:/(?P<exact>exact))?$', hours_detail_resource), HANDLERS.PY # inherit from BaseHandler to add the extra functionality i need to process the possibly null URL params class BaseApiHandler(BaseHandler): # keep track of the handler so the data is represented back to me correctly post_name = 'base' # THIS IS THE LIST IN QUESTION - SOMETIMES IT IS GETTING SET TO [0,1] MYSTERIOUSLY # this gets set to a list when the results are to be limited limit = None def has_limit(self): return (isinstance(self.limit, list) and len(self.limit) == 2) def process_kwarg_read(self, key, value, d_post, b_exact): """ this should be overridden in the derived classes to process kwargs """ pass # override 'read' so we can better handle our api's searching capabilities def read(self, request, *args, **kwargs): d_post = {'status':0,'statusmsg':'Nothing Happened'} try: # setup the named response object # select all employees then filter - querysets are lazy in django # the actual query is only done once data is needed, so this may # seem like some memory hog slow beast, but it's actually not. d_post[self.post_name] = self.queryset(request) # this is a string that holds debug information... 
it's the string I mentioned before pasting this code s_query = '' b_exact = False if 'exact' in kwargs and kwargs['exact'] <> None: b_exact = True s_query = '\'exact\':True,' for key,value in kwargs.iteritems(): # the regex url possibilities will push None into the kwargs dictionary # if not specified, so just continue looping through if that's the case if value == None or key == 'exact': continue # write to the s_query string so we have a nice error message s_query = '%s\'%s\':\'%s\',' % (s_query, key, value) # now process this key/value kwarg self.process_kwarg_read(key=key, value=value, d_post=d_post, b_exact=b_exact) # end of the kwargs for loop else: if self.has_limit(): # THIS SEEMS TO GET HIT SOMETIMES IF YOU CONSTANTLY REFRESH THE API PAGE, EVEN THOUGH # THE LINE IN THE FOR LOOP WHICH UPDATES s_query DOESN'T GET HIS AND THUS self.process_kwarg_read ALSO # DOESN'T GET HIT SO NEITHER DOES limit = [0,1] s_query = '%swith limit[%s,%s](limit,%s),' % (s_query, self.limit[0], self.limit[1], kwargs['limit']) d_post[self.post_name] = d_post[self.post_name][self.limit[0]:self.limit[1]] if d_post[self.post_name].count() == 0: d_post['status'] = 0 d_post['statusmsg'] = '%s not found with query: {%s}' % (self.post_name, s_query) else: d_post['status'] = 1 d_post['statusmsg'] = '%s %s found with query: {%s}' % (d_post[self.post_name].count(), self.post_name, s_query) except: e = sys.exc_info()[1] d_post['status'] = 0 d_post['statusmsg'] = 'error: %s' % e d_post[self.post_name] = [] return d_post class HoursDetailHandler(BaseApiHandler): #allowed_methods = ('GET',) model = HoursDetail exclude = () post_name = 'hours_detail' def process_kwarg_read(self, key, value, d_post, b_exact): if ... # I have several if/elif statements here that check for other things... # 'self.limit =' only shows up in the following elif: elif key == 'limit': order_by = 'clock_time' if value == 'last': order_by = '-clock_time' d_post[self.post_name] = d_post[self.post_name].order_by(order_by) # TO GET HERE, THE ONLY PLACE IN CODE WHERE self.limit IS SET, YOU MUST HAVE GONE THROUGH # THE value == None CHECK???? self.limit = [0, 1] else: raise NameError def read(self, request, *args, **kwargs): # empid is required, so make sure it exists before running BaseApiHandler's read method if not('empid' in kwargs and kwargs['empid'] <> None and kwargs['empid'] >= 0): return {'status':0,'statusmsg':'empid cannot be empty'} else: return BaseApiHandler.read(self, request, *args, **kwargs) Does anyone have a clue how else self.limit might be getting set to [0, 1] ? Am I misunderstanding kwargs or loops or anything in Python?
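
    One hedged guess at the cause, based only on the symptoms above: Piston keeps handler instances around between requests, so anything stored on self (such as self.limit) can survive into a later request. A tiny self-contained demo of that effect:

        # One instance serving many "requests" keeps whatever earlier calls stored on self.
        class Handler(object):
            limit = None

            def read(self, kwargs):
                if kwargs.get('limit'):
                    self.limit = [0, 1]      # sticks to the instance...
                return self.limit

        h = Handler()                        # the framework reuses a single instance
        print h.read({'limit': 'first'})     # [0, 1]
        print h.read({})                     # still [0, 1] -- leaked from the previous call

    If that is what is happening, keeping the limit in a local variable inside read() (or resetting self.limit at the top of every call) would make each request independent.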

    Read the article

  • [Python] Detect destination of shortened, or "tiny" url

    - by conradlee
    I have just scraped a bunch of Google Buzz data, and I want to know which Buzz posts reference the same news articles. The problem is that many of the links in these posts have been modified by URL shorteners, so it could be the case that many distinct shortened URLs actually all point to the same news article. Given that I have millions of posts, what is the most efficient way (preferably in python) for me to (1) detect whether a URL is a shortened URL (from any of the many URL shortening services, or at least the largest) and (2) find the "destination" of the shortened URL, i.e., the long, original version of it? Does anyone know if the URL shorteners impose strict request rate limits? If I keep this down to 100/second (all coming from the same IP address), do you think I'll run into trouble?
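
    A hedged sketch of the second part: urllib2 follows HTTP redirects for you, so the "destination" is simply where you end up, and geturl() reports it (the short link below is a placeholder):

        import urllib2

        def expand(url):
            """Follow redirects and return the final URL."""
            resp = urllib2.urlopen(url, timeout=10)   # note: this fetches the final page too
            return resp.geturl()

        print expand('http://bit.ly/example')          # hypothetical shortened link

    For millions of posts you would want HEAD requests and a cache keyed on the short URL; the rate-limit question is service-specific and not answered by this sketch.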

    Read the article

  • Python In-memory table

    - by nsharish
    What is the right way to form an in-memory table in Python with direct lookups for rows and columns? I thought of using a dict of dicts this way, class Table(dict): def __getitem__(self, key): if key not in self: self[key]={} return dict.__getitem__(self, key) table = Table() table['row1']['column1'] = 'value11' table['row1']['column2'] = 'value12' table['row2']['column1'] = 'value21' table['row2']['column2'] = 'value22' >>>table {'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}} I had difficulty looking up values in columns. >>>'row1' in table True >>>'value11' in table['row1'].values() True Now how do I check whether 'column1' has 'value11'? Is this method of forming tables wrong? Is there a better way to implement such tables with easier lookups? Thanks
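
    One hedged way to make column lookups as direct as row lookups is to keep a second, column-major dict alongside the row-major one; a minimal sketch:

        class Table(object):
            """Toy row/column store with direct lookups on either axis."""
            def __init__(self):
                self.rows = {}   # rows['row1']['column1'] -> value
                self.cols = {}   # cols['column1']['row1'] -> value

            def set(self, row, col, value):
                self.rows.setdefault(row, {})[col] = value
                self.cols.setdefault(col, {})[row] = value

        t = Table()
        t.set('row1', 'column1', 'value11')
        t.set('row2', 'column1', 'value21')
        print 'value11' in t.cols['column1'].values()   # True, without scanning every row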

    Read the article

  • Python: Filter a dictionary

    - by Adam Matan
    Hi, I have a dictionary of points, say: >>> points={'a':(3,4), 'b':(1,2), 'c':(5,5), 'd':(3,3)} I want to create a new dictionary with all the points whose x and y value is smaller than 5, i.e. points 'a', 'b' and 'd'. According to the book, each dictionary has the items() function, which returns a list of (key, value) tuples: >>> points.items() [('a', (3, 4)), ('c', (5, 5)), ('b', (1, 2)), ('d', (3, 3))] So I have written this: >>> for item in [i for i in points.items() if i[1][0]<5 and i[1][1]<5]: ... points_small[item[0]]=item[1] ... >>> points_small {'a': (3, 4), 'b': (1, 2), 'd': (3, 3)} Is there a more elegant way? I was expecting Python to have some super-awesome dictionary.filter(f) function... Adam
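
    For what it's worth, a dict built from a generator expression (or, on Python 2.7+/3.x, a dict comprehension) does this in one step; a small sketch:

        points = {'a': (3, 4), 'b': (1, 2), 'c': (5, 5), 'd': (3, 3)}

        # Works on Python 2.4+:
        points_small = dict((k, v) for k, v in points.items() if v[0] < 5 and v[1] < 5)

        # Python 2.7+ / 3.x spelling:
        # points_small = {k: v for k, v in points.items() if v[0] < 5 and v[1] < 5}

        print points_small   # {'a': (3, 4), 'b': (1, 2), 'd': (3, 3)}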

    Read the article

  • Fastest way to list all primes below N in python

    - by jbochi
    This is the best algorithm I could come up with after struggling with a couple of Project Euler's questions. def get_primes(n): numbers = set(range(n, 1, -1)) primes = [] while numbers: p = numbers.pop() primes.append(p) numbers.difference_update(set(range(p*2, n+1, p))) return primes >>> timeit.Timer(stmt='get_primes.get_primes(1000000)', setup='import get_primes').timeit(1) 1.1499958793645562 Can it be made even faster? EDIT: This code has a flaw: Since numbers is an unordered set, there is no guarantee that numbers.pop() will remove the lowest number from the set. Nevertheless, it works (at least for me) for some input numbers: >>> sum(get_primes(2000000)) 142913828922L #That's the correct sum of all primes below 2 million >>> 529 in get_primes(1000) False >>> 529 in get_primes(530) True EDIT: The rank so far (pure python, no external sources, all primes below 1 million): Sundaram's Sieve implementation by myself: 327ms Daniel's Sieve: 435ms Alex's recipe from the Cookbook: 710ms EDIT: ~unutbu is leading the race.
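
    For comparison, a plain sieve of Eratosthenes over a boolean list is usually the fastest simple pure-Python answer; a sketch:

        def primes_below(n):
            """All primes strictly below n, via the sieve of Eratosthenes."""
            if n < 3:
                return []
            sieve = [True] * n
            sieve[0] = sieve[1] = False
            for p in range(2, int(n ** 0.5) + 1):
                if sieve[p]:
                    sieve[p * p::p] = [False] * len(sieve[p * p::p])
            return [i for i, flag in enumerate(sieve) if flag]

        print sum(primes_below(2000000))   # 142913828922, matching the check above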

    Read the article

  • How to structure Python package that contains Cython code

    - by Craig McQueen
    I'd like to make a Python package containing some Cython code. I've got the Cython code working nicely. However, now I want to know how best to package it. For most people who just want to install the package, I'd like to include the .c file that Cython creates, and arrange for setup.py to compile that to produce the module. Then the user doesn't need Cython installed in order to install the package. But for people who may want to modify the package, I'd also like to provide the Cython .pyx files, and somehow also allow for setup.py to build them using Cython (so those users would need Cython installed). How should I structure the files in the package to cater for both these scenarios? The Cython documentation gives a little guidance. But it doesn't say how to make a single setup.py that handles both the with/without Cython cases.
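
    A hedged sketch of the usual arrangement: ship both the .pyx and the generated .c file, and have setup.py build from the .pyx only when Cython is importable ("mypkg" and "fast" are placeholder names):

        # setup.py sketch -- module names are placeholders.
        from distutils.core import setup
        from distutils.extension import Extension

        try:
            from Cython.Build import cythonize
            extensions = cythonize([Extension('mypkg.fast', ['mypkg/fast.pyx'])])
        except ImportError:
            # No Cython available: compile the pre-generated C file shipped in the sdist.
            extensions = [Extension('mypkg.fast', ['mypkg/fast.c'])]

        setup(name='mypkg', packages=['mypkg'], ext_modules=extensions)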

    Read the article

  • Python Vector Class

    - by sfjedi
    I'm coming from a C# background where this stuff is super easy—trying to translate into Python for Maya. There's gotta' be a better way to do this. Basically, I'm looking to create a Vector class that will simply have x, y and z coordinates, but it would be ideal if this class returned a tuple with all 3 coordinates and if you could edit the values of this tuple through x, y and z properties, somehow. This is what I have so far, but there must be a better way to do this than using an exec statement, right? I hate using exec statements. class Vector(object): '''Creates a Maya vector/triple, having x, y and z coordinates as float values''' def __init__(self, x=0, y=0, z=0): self.x, self.y, self.z = x, y, z def attrsetter(attr): def set_float(self, value): setattr(self, attr, float(value)) return set_float for xyz in 'xyz': exec("%s = property(fget=attrgetter('_%s'), fset=attrsetter('_%s'))" % (xyz, xyz, xyz))
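
    A hedged alternative that avoids exec: keep the three floats in one tuple and build the x/y/z properties with an ordinary helper at class-definition time; a sketch:

        class Vector(object):
            """Minimal x/y/z triple backed by a single tuple, no exec involved."""
            def __init__(self, x=0.0, y=0.0, z=0.0):
                self._v = (float(x), float(y), float(z))

            def _component(index):               # runs while the class body executes
                def get(self):
                    return self._v[index]
                def set(self, value):
                    v = list(self._v)
                    v[index] = float(value)
                    self._v = tuple(v)
                return property(get, set)

            x, y, z = _component(0), _component(1), _component(2)

            def as_tuple(self):
                return self._v

        v = Vector(1, 2, 3)
        v.y = 5
        print v.as_tuple()    # (1.0, 5.0, 3.0)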

    Read the article

  • How to generate SSH key pairs with Python

    - by Lee
    Hello, I'm attempting to write a script to generate SSH Identity key pairs for me. from M2Crypto import RSA key = RSA.gen_key(1024, 65337) key.save_key("/tmp/my.key", cipher=None) The file /tmp/my.key looks great now. By running ssh-keygen -y -f /tmp/my.key > /tmp/my.key.pub I can extract the public key. My question is how can I extract the public key from python? Using key.save_pub_key("/tmp/my.key.pub") saves something like: -----BEGIN PUBLIC KEY----- MFwwDQYJKoZIhvcNAQEBBQADASDASDASDASDBarYRsmMazM1hd7a+u3QeMP ... FZQ7Ic+BmmeWHvvVP4Yjyu1t6vAut7mKkaDeKbT3yiGVUgAEUaWMXqECAwEAAQ== -----END PUBLIC KEY----- When I'm looking for something like: ssh-rsa AAAABCASDDBM$%3WEAv/3%$F ..... OSDFKJSL43$%^DFg==
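
    A hedged sketch of producing the OpenSSH line directly: the ssh-rsa blob is the string "ssh-rsa" plus the exponent and modulus, each length-prefixed, base64-encoded. It assumes (true for the M2Crypto versions I have seen, but worth verifying) that key.e and key.n already come back as 4-byte-length-prefixed big-endian byte strings:

        import base64
        import struct

        def openssh_public_key(key, comment='generated-key'):
            prefix = struct.pack('>I', len('ssh-rsa')) + 'ssh-rsa'
            blob = prefix + key.e + key.n        # assumption: already length-prefixed
            return 'ssh-rsa %s %s' % (base64.b64encode(blob), comment)

        # print openssh_public_key(key)          # a line suitable for authorized_keys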

    Read the article

  • Convert string to JSON using Python

    - by Luiz Fernando
    Hi, I'm a little bit confused with JSON in Python. To me, it seems like a dictionary, and for that reason I'm trying to do that: json = """{ "glossary": { "title": "example glossary", "GlossDiv": { "title": "S", "GlossList": { "GlossEntry": { "ID": "SGML", "SortAs": "SGML", "GlossTerm": "Standard Generalized Markup Language", "Acronym": "SGML", "Abbrev": "ISO 8879:1986", "GlossDef": { "para": "A meta-markup language, used to create markup languages such as DocBook.", "GlossSeeAlso": ["GML", "XML"] }, "GlossSee": "markup" } } } } } """ But when I do print dict(json), it gives an error. How can I transform this string into a structure and then call json["title"] to obtain "example glossary"? Thanks.
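
    A minimal sketch using the json module bundled with Python 2.6+ (the same API exists as simplejson on older interpreters); the string is renamed here so it does not shadow the module:

        import json   # or: import simplejson as json

        json_text = '{"glossary": {"title": "example glossary"}}'   # shortened placeholder
        data = json.loads(json_text)
        print data['glossary']['title']   # 'example glossary' -- 'title' sits under 'glossary'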

    Read the article

  • To find first N prime numbers in python

    - by Rahul Tripathi
    Hi All, I am new to the programming world. I was just writing this code in python to generate N prime numbers. The user should input the value for N, which is the total number of prime numbers to print out. I have written this code but it doesn't produce the desired output; instead it prints the prime numbers up to the Nth number. For example: the user enters N = 7. Desired output: 2, 3, 5, 7, 11, 13, 17 Actual output: 2, 3, 5, 7 Kindly advise. i=1 x = int(input("Enter the number:")) for k in range (1, (x+1), 1): c=0 for j in range (1, (i+1), 1): a = i%j if (a==0): c = c+1 if (c==2): print (i) else: k = k-1 i=i+1
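
    A hedged, corrected sketch that keeps going until it has printed N primes (rather than only testing the numbers 1..N), which is the behaviour described above:

        n = int(input("Enter the number:"))   # how many primes to print

        count = 0
        candidate = 2
        while count < n:
            for divisor in range(2, int(candidate ** 0.5) + 1):
                if candidate % divisor == 0:
                    break                     # composite
            else:
                print candidate               # no divisor found, so it is prime
                count += 1
            candidate += 1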

    Read the article

  • Python thread pool similar to the multiprocessing Pool?

    - by Martin
    Is there a Pool class for worker threads, similar to the multiprocessing module's Pool class? I like for example the easy way to parallelize a map function def long_running_func(p): c_func_no_gil(p) p = multiprocessing.Pool(4) xs = p.map(long_running_func, range(100)) however I would like to do it without the overhead of creating new processes. I know about the GIL. However, in my usecase, the function will be an IO-bound C function for which the python wrapper will release the GIL before the actual function call. Do I have to write my own threading pool?
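
    For reference, the multiprocessing package already ships a thread-backed pool with the same map interface; a minimal sketch (c_func_no_gil stands in for the GIL-releasing C call mentioned above):

        from multiprocessing.dummy import Pool   # same API as multiprocessing.Pool, but threads

        def long_running_func(p):
            return c_func_no_gil(p)               # placeholder for the real C wrapper

        pool = Pool(4)
        xs = pool.map(long_running_func, range(100))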

    Read the article

  • Calculating spam probability in python

    - by Hobhouse
    I am building a website in python/django and want to predict whether a user submission is valid or whether it is spam. Users have an accept rate on their submissions, like this website has. Users can moderate other users' submissions; and these moderations are later metamoderated by an admin. Given this: user A with a submission accept rate of 60% submits something. user B moderates A's post as a valid submission. However, his moderations are often wrong, and his moderations' accept rate is a mere 30%. user C moderates A's post as spam. User C is usually right. His moderations' accept rate is 80%. How can I predict the chance of A's post being spam?
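
    One hedged way to frame it is the usual naive-Bayes combination of independent signals, treating each accept rate as the probability that the corresponding signal is right (a simplifying assumption); a sketch with the numbers above:

        def combine(probabilities):
            """Combine independent P(spam) estimates, naive-Bayes style."""
            p_spam, p_ham = 1.0, 1.0
            for p in probabilities:
                p_spam *= p
                p_ham *= 1.0 - p
            return p_spam / (p_spam + p_ham)

        # A's 60% accept rate -> 0.40 prior of spam; B (30% reliable) voted "valid",
        # which under this model is weak evidence *for* spam (0.70); C (80% reliable)
        # voted "spam" (0.80).
        print combine([0.40, 0.70, 0.80])   # rough P(spam), about 0.86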

    Read the article

  • Python - Test directory permissions

    - by Sean
    In Python on Windows, is there a way to determine if a user has permission to access a directory? I've taken a look at os.access but it gives false results. >>> os.access('C:\haveaccess', os.R_OK) False >>> os.access(r'C:\haveaccess', os.R_OK) True >>> os.access('C:\donthaveaccess', os.R_OK) False >>> os.access(r'C:\donthaveaccess', os.R_OK) True Am I doing something wrong? Is there a better way to check if a user has permission to access a directory?
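
    A hedged workaround, since os.access on Windows does not reliably consult NTFS ACLs: just attempt the directory read and catch the failure (note the raw strings, so the backslashes survive):

        import os

        def can_list(path):
            """True if the current user can actually read the directory."""
            try:
                os.listdir(path)
                return True
            except OSError:       # access denied, path missing, etc.
                return False

        print can_list(r'C:\haveaccess')
        print can_list(r'C:\donthaveaccess')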

    Read the article

  • Python utf-8, howto align printout

    - by Fredrik
    Hi, I have an array containing Japanese characters as well as "normal" ones. How do I align the printout of these? #!/usr/bin/python # coding=utf-8 a1=['??', '???', 'trazan', '??', '????'] a2=['dipsy', 'laa-laa', 'banarne', 'po', 'tinky winky'] for i,j in zip(a1,a2): print i.ljust(12),':',j print '-'*8 for i,j in zip(a1,a2): print i,len(i) print j,len(j) Output: ?? : dipsy ??? : laa-laa trazan : banarne ?? : po ???? : tinky winky -------- ?? 6 dipsy 5 ??? 9 laa-laa 7 trazan 6 banarne 7 ?? 6 po 2 ???? 12 tinky winky 11 thanks, //Fredrik
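
    A hedged sketch of padding by display width rather than by character count, using unicodedata.east_asian_width (full-width CJK characters occupy two terminal columns):

        # -*- coding: utf-8 -*-
        import unicodedata

        def display_width(text):
            """Columns a unicode string occupies; wide/full-width chars count as 2."""
            return sum(2 if unicodedata.east_asian_width(ch) in 'WF' else 1 for ch in text)

        def pad(text, width):
            return text + u' ' * max(0, width - display_width(text))

        for left, right in zip(a1, a2):                  # a1/a2 as defined above
            print pad(left.decode('utf-8'), 12), ':', right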

    Read the article

  • Scraping *.aspx content using Python

    - by tomato
    I'm having difficulties scraping a dynamically generated table in ASPX. I'm trying to scrape the gas prices from a site like this: GasPrices. I can extract all the information in the gas price table (address, time submitted, etc.), except for the actual gas price. Is there a way I could scrape the gas prices? i.e. somehow get a text representation of them. I'm not very familiar with ASP/ASPX, but what's being generated now is not showing up in the final HTML. I'm using Python to do the scraping, but that's irrelevant unless there's a specific library... Thanks in advance.
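
    If the prices arrive via a normal ASP.NET postback rather than client-side JavaScript, one hedged approach is to replay the POST with the hidden state fields included; a rough sketch (the URL and any extra form fields are placeholders):

        import re
        import urllib
        import urllib2

        url = 'http://example.com/GasPrices.aspx'        # placeholder
        page = urllib2.urlopen(url).read()

        viewstate = re.search(r'id="__VIEWSTATE" value="([^"]*)"', page).group(1)
        validation = re.search(r'id="__EVENTVALIDATION" value="([^"]*)"', page).group(1)

        post = urllib.urlencode({
            '__VIEWSTATE': viewstate,
            '__EVENTVALIDATION': validation,
            # plus whatever search fields the page's form actually posts
        })
        result = urllib2.urlopen(url, post).read()

    If the prices still do not appear in the returned HTML, they are probably drawn by JavaScript or served as images, in which case a browser-driving tool or the site's underlying data request is the thing to look for.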

    Read the article

  • Python file-io code listing current folder path instead of the specified

    - by Tom Brito
    I have the code: import os import sys fileList = os.listdir(sys.argv[1]) for file in fileList: if os.path.isfile(file): print "File >> " + os.path.abspath(file) else: print "Dir >> " + os.path.abspath(file) The script is located in my music folder ("/home/tom/Music"). When I call it with: python test.py "/tmp" I expected it to list my "/tmp" files and folders with the full path. But it printed lines like: Dir >> /home/tom/Music/seahorse-gw2jNn Dir >> /home/tom/Music/FlashXX9kV847 Dir >> /home/tom/Music/pulse-DcIEoxW5h2gz That is, the correct file names but the wrong path (and these files are not in my Music folder either). What's wrong with this code?
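
    A hedged explanation of the output: os.listdir returns bare names, and os.path.abspath resolves a bare name against the current working directory (the Music folder here), not against the directory that was listed. Joining each name onto the listed directory first gives the expected paths; a corrected sketch:

        import os
        import sys

        base = sys.argv[1]
        for name in os.listdir(base):
            path = os.path.join(base, name)     # anchor each name to the listed directory
            if os.path.isfile(path):
                print "File >> " + os.path.abspath(path)
            else:
                print "Dir >> " + os.path.abspath(path)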

    Read the article

  • Python list comprehension to return edge values of a list

    - by mvid
    If I have a list in python such as: stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9] with length n (in this case 9) and I am interested in creating lists of length n/2 (in this case 4). I want all possible sets of n/2 values in the original list, for example: [1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3] is there some list comprehension code I could use to iterate through the list and retrieve all of those sublists? I don't care about the order of the values within the lists, I am just trying to find a clever method of generating the lists.
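
    A small sketch of one comprehension that yields exactly the wrapped windows shown, by indexing modulo the list length:

        stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]
        n = len(stuff)
        k = n // 2    # 4 here

        windows = [[stuff[(i + j) % n] for j in range(k)] for i in range(n)]
        print windows[0]     # [1, 2, 3, 4]
        print windows[-1]    # [9, 1, 2, 3]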

    Read the article

  • how to copy a dictionary in python 3.1 and edit ONLY the copy

    - by MadSc13ntist
    can someone please explain this to me??? this doesn't make any sense to me.... I copy a dictionary into another and edit the second and both are changed???? ActivePython 3.1.0.1 (ActiveState Software Inc.) based on Python 3.1 (r31:73572, Jun 28 2009, 19:55:39) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. dict1 = {"key1":"value1", "key2":"value2"} dict2 = dict1 dict2 {'key2': 'value2', 'key1': 'value1'} dict2["key2"] = "WHY?!?!?!!?!?!?!" dict1 {'key2': 'WHY?!?!?!!?!?!?!', 'key1': 'value1'}
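
    What the transcript shows is two names bound to the same dict object; a copy has to be requested explicitly. A minimal Python 3 sketch (copy() is shallow; copy.deepcopy also clones nested containers):

        import copy

        dict1 = {"key1": "value1", "key2": "value2"}

        dict2 = dict1.copy()            # new dict, same (immutable) values
        dict3 = copy.deepcopy(dict1)    # also copies any nested containers

        dict2["key2"] = "changed"
        print(dict1["key2"])            # still 'value2' -- the original is untouched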

    Read the article

  • MySQL LOAD DATA LOCAL INFILE example in python?

    - by David Perron
    I am looking for a syntax definition, example, sample code, wiki, etc. for executing a LOAD DATA LOCAL INFILE command from python. I believe I can use mysqlimport as well if that is available, so any feedback (and code snippet) on which is the better route is welcome. A Google search is not turning up much in the way of current info. The goal in either case is the same: automate loading hundreds of files with a known naming convention & date structure into a single MySQL table. David
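
    A hedged sketch using MySQLdb: the statement is ordinary SQL, so a normal cursor can run it, provided local-infile is enabled on both client and server (the table name, column layout, and path below are placeholders):

        import MySQLdb

        conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                               db='mydb', local_infile=1)
        cur = conn.cursor()

        sql = """LOAD DATA LOCAL INFILE %s
                 INTO TABLE daily_data
                 FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'
                 IGNORE 1 LINES"""
        cur.execute(sql, ('/data/2010-04-01.csv',))
        conn.commit()

    Looping that over a glob of the known filenames would cover the "hundreds of files" case; mysqlimport is essentially the same statement driven from the shell.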

    Read the article

  • Error in unzipping a file from a python script running as daemon

    - by Fedrick
    I am getting an error whenever I try to run the following unzip command from a Python script which is running as a daemon. Command: unzip abcd.zip > /dev/null Error: End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of abcd.zip or abcd.zip.zip, and cannot find abcd.zip.ZIP, period. Could anyone help me in this regard? Thanks in advance.
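
    A hedged alternative that avoids the shell entirely and sidesteps the daemon's working directory (usually / rather than wherever the script was launched): open the archive by absolute path with the zipfile module:

        import zipfile

        archive = '/absolute/path/to/abcd.zip'   # placeholder; daemons do not keep your cwd

        z = zipfile.ZipFile(archive)
        bad = z.testzip()                        # first corrupt member, or None
        if bad is not None:
            raise RuntimeError('corrupt member in archive: %s' % bad)
        z.extractall('/absolute/path/to/dest')   # Python 2.6+
        z.close()

    If the same archive also fails from an interactive shell, the file itself is likely truncated or still being written when the daemon picks it up.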

    Read the article

  • OpenCV in Python can't scan through pixels

    - by Marco L.
    Hi everyone, I'm stuck with a problem with the Python wrapper for OpenCV. I have this function that returns 1 if the number of black pixels is greater than a threshold: def checkBlackPixels( img, threshold ): width = img.width height = img.height nchannels = img.nChannels step = img.widthStep dimtot = width * height data = img.imageData black = 0 for i in range( 0, height ): for j in range( 0, width ): r = data[i*step + j*nchannels + 0] g = data[i*step + j*nchannels + 1] b = data[i*step + j*nchannels + 2] if r == 0 and g == 0 and b == 0: black = black + 1 if black >= threshold * dimtot: return 1 else: return 0 The loop (scanning each pixel of a given image) works fine when the input is an RGB image, but if the input is a single-channel image I get this error: for j in range( width ): TypeError: Nested sequences should have 2 or 3 dimensions The input single-channel image (called 'rg' in the next example) is taken from an RGB image called 'src' processed with cvSplit and then cvAbsDiff: cvSplit( src, r, g, b, 'NULL' ) rg = cvCreateImage( cvGetSize(src), src.depth, 1 ) # R - G cvAbsDiff( r, g, rg ) I've also noticed that the problem comes from the difference image obtained from cvSplit... Can anyone help me? Thank you
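
    A hedged guess at the failure: a single-channel image has one value per pixel, so indexing it three channels at a time walks past the data. A sketch that branches on nChannels, reusing the names from the function above:

        def check_black_pixels(img, threshold):
            nchannels = img.nChannels
            step = img.widthStep
            data = img.imageData
            black = 0
            for i in range(img.height):
                for j in range(img.width):
                    if nchannels == 1:
                        is_black = (data[i * step + j] == 0)
                    else:
                        base = i * step + j * nchannels
                        is_black = (data[base] == 0 and data[base + 1] == 0
                                    and data[base + 2] == 0)
                    if is_black:
                        black += 1
            return 1 if black >= threshold * img.width * img.height else 0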

    Read the article

  • Issue with python string join.

    - by Pradyot
    I have some code in which I apply a join to a list. The list before the join looks like this: ["'DealerwebAgcy_NYK_GW_UAT'", "'DealerwebAgcy'", "'UAT'", '@ECNPhysicalMarketConfigId', "'GATEWAY'", "'DEALERWEB_MD_AGCY'", "'NU1MKVETC'", "'mkvetcu'", "'C:\temp'", '0', "'NYK'", '0', '1', "'isqlw.exe'", 'GetDate()', '12345', "'NYK'", '350', '7'] After the join this is the resulting string: 'DealerwebAgcy_NYK_GW_UAT','DealerwebAgcy','UAT',@ECNPhysicalMarketConfigId,'GATEWAY','DEALERWEB_MD_AGCY','NU1MKVETC','mkvetcu','C: emp',0,'NYK',0,1,'isqlw.exe',GetDate(),12345,'NYK',350,7 Note the element "'C:\temp'" which ends up as ,'C: emp', I tried something similar on the python command prompt, but I wasn't able to repeat this. The relevant code responsible for this magic is as follows. values_dict["ECNMarketInstance"] = [strVal(self.EcnInstance_),strVal (self.DisplayName_) ,strVal(self.environment_), '@ECNPhysicalMarketConfigId', strVal(self.EcnGatewaTypeId_),strVal(self.ConnectionComponent_) ,strVal(self.UserName_),strVal(self.Password_),strVal(self.WorkingDir_),"0",strVal(self.region_),"0","1", strVal(self.LUVersion_), "GetDate()" , self.LUUserId_ ,strVal(self.LUOwningSite_),self.QuoteColumnId_ , self.Capabilities_] delim = "," joined = delim.join(values) print values print joined
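
    A hedged note on the cause: the join is innocent; the '\t' inside the plain string literal 'C:\temp' was already a tab character before the join ever ran. Raw strings or doubled backslashes preserve the backslash; a tiny sketch:

        plain = 'C:\temp'       # contains a real tab: C, :, <TAB>, e, m, p
        raw = r'C:\temp'        # raw string keeps the backslash
        escaped = 'C:\\temp'    # doubling it works too

        print plain             # C:      emp
        print raw               # C:\temp
        print len(plain), len(raw)   # 6 7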

    Read the article

  • Python mechanize - two buttons of type 'submit'

    - by directedition
    I have a mechanize script written in python that fills out a web form and is supposed to click on the 'create' button. But there's a problem: the form has two buttons, one for 'add attached file' and one for 'create'. Both are of type 'submit', and the attach button is the first one listed. So when I select the form and do br.submit(), it clicks on the 'attach' button instead of 'create'. Extensive Googling has yielded nothing useful for selecting a specific button in a form. Does anyone know of any methods for skipping over the first 'submit' button and clicking the second?
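
    One hedged option with mechanize is to tell submit() which control to click, by position (nr is zero-based among the clickable controls) or by name, instead of letting it take the first one; a sketch with placeholder URL and field names:

        import mechanize

        br = mechanize.Browser()
        br.open('http://example.com/post/new')   # placeholder
        br.select_form(nr=0)
        br['title'] = 'my post'                   # hypothetical text field

        response = br.submit(nr=1)                # click the second submit control ("create")
        # or, if the button has a name attribute:
        # response = br.submit(name='create')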

    Read the article
