Search Results

Search found 13683 results on 548 pages for 'python sphinx'.

Page 351/548 | < Previous Page | 347 348 349 350 351 352 353 354 355 356 357 358  | Next Page >

  • Finding inline style with lxml.cssselector

    - by ropa
    New to this library (no more familiar with BeautifulSoup either, sadly), trying to do something very simple (search by inline style):

        <td style="padding: 20px">blah blah </td>

    I just want to select all tds where style="padding: 20px", but I can't seem to figure it out. All the examples show how to select td, such as:

        for col in page.cssselect('td'):

    but that doesn't help me much.
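
    One approach (a minimal sketch, assuming lxml's cssselect support is available; the input file name is only a placeholder) is a CSS attribute selector, which matches the style attribute's literal value:

        from lxml import html

        # Parse whatever the page actually comes from; 'page.html' is an assumption.
        page = html.parse('page.html').getroot()
        padded = page.cssselect('td[style="padding: 20px"]')

    Note that [style="padding: 20px"] matches the attribute text exactly as written, so a cell with style="padding:20px" (no space) would not be selected.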

    Read the article

  • Difference between URLLIB2 call in IDLE and from Django?

    - by danspants
    The following piece of code works as expected when running in a local install of Django on Apache 2.2:

        fx = urllib2.Request(f)
        fx.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.36 Safari/525.19')
        url_opened = urllib2.urlopen(fx)

    However, when I enter that code into IDLE on the same machine I get the following error:

        url_opened = urllib2.urlopen(fx)
          File "C:\Python25\lib\urllib2.py", line 124, in urlopen
            return _opener.open(url, data)
          File "C:\Python25\lib\urllib2.py", line 387, in open
            response = meth(req, response)
          File "C:\Python25\lib\urllib2.py", line 498, in http_response
            'http', request, response, code, msg, hdrs)
          File "C:\Python25\lib\urllib2.py", line 425, in error
            return self._call_chain(*args)
          File "C:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "C:\Python25\lib\urllib2.py", line 506, in http_error_default
            raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        HTTPError: HTTP Error 407: Proxy Authentication Required

    Any ideas?
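
    The 407 suggests the IDLE session goes through an HTTP proxy that wants credentials, which the Apache/Django environment is presumably supplying some other way. A hedged sketch of installing proxy credentials in urllib2 (proxy host, port and credentials are placeholders):

        import urllib2

        # Placeholder proxy host, port and credentials.
        proxy = urllib2.ProxyHandler({'http': 'http://USER:PASSWORD@proxyhost:8080'})
        opener = urllib2.build_opener(proxy)
        urllib2.install_opener(opener)      # later urllib2.urlopen() calls now go through the proxy

        url_opened = urllib2.urlopen(fx)    # fx built exactly as in the snippet above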

    Read the article

  • Django stupid mark_safe?

    - by Mark
    I wrote this little function for writing out HTML tags:

        def html_tag(tag, content=None, close=True, attrs={}):
            lst = ['<', tag]
            for key, val in attrs.iteritems():
                lst.append(' %s="%s"' % (key, escape_html(val)))
            if close:
                if content is None:
                    lst.append(' />')
                else:
                    lst.extend(['>', content, '</', tag, '>'])
            else:
                lst.append('>')
            return mark_safe(''.join(lst))

    Which worked great, but then I read this article on efficient string concatenation (I know it doesn't really matter for this, but I wanted consistency) and decided to update my script:

        def html_tag(tag, body=None, close=True, attrs={}):
            s = StringIO()
            s.write('<%s' % tag)
            for key, val in attrs.iteritems():
                s.write(' %s="%s"' % (key, escape_html(val)))
            if close:
                if body is None:
                    s.write(' />')
                else:
                    s.write('>%s</%s>' % (body, tag))
            else:
                s.write('>')
            return mark_safe(s.getvalue())

    But now my HTML gets escaped when I try to render it from my template. Everything else is exactly the same. It works properly if I replace the last line with return mark_safe(unicode(s.getvalue())). I checked the return type of s.getvalue(): it should be a str, just like in the first function, so why is this failing? It also fails with SafeString(s.getvalue()) but succeeds with SafeUnicode(s.getvalue()). I'd also like to point out that I used return mark_safe(s.getvalue()) in a different function with no odd behavior.

    Read the article

  • How to "signal" interested child processes (without signals)?

    - by Teddy
    I'm trying to find a good and simple method to signal child processes (created through SocketServer with ForkingMixIn) from the parent process. While Unix signals could be used, I want to avoid them since only children who are interested should receive the signal, and it would be overkill and complicated to require some kind of registration mechanism to identify to the parent process who is interested. (Please don't suggest threads, as this particular program won't work with threads, and thus has to use forks.)
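
    One fork-friendly possibility (a sketch, not the only way): create a pipe in the parent before any children are forked. The parent signals by writing a byte that is never consumed, which leaves the read end permanently readable; children that care poll that read end with select(), and children that don't care simply never look at it, so no registration mechanism is needed.

        import os
        import select

        # Created in the parent before any fork, so every child inherits notify_r.
        notify_r, notify_w = os.pipe()

        def signal_children():
            # Parent side: the byte is never read, so the pipe's read end stays
            # readable for every child that chooses to check it.
            os.write(notify_w, b'x')

        def was_signalled(timeout=0.0):
            # Child side: a non-blocking poll; uninterested children never call this.
            readable, _, _ = select.select([notify_r], [], [], timeout)
            return bool(readable)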

    Read the article

  • Compound dictionary keys

    - by John Keyes
    I have a particular case where using compound dictionary keys would make a task easier. I have a working solution, but feel it is inelegant. How would you do it?

        context = {
            'database': {
                'port': 9990,
                'users': ['number2', 'dr_evil']
            },
            'admins': ['[email protected]', '[email protected]'],
            'domain.name': 'virtucon.com'
        }

        def getitem(key, context):
            if hasattr(key, 'upper') and key in context:
                return context[key]
            keys = key if hasattr(key, 'pop') else key.split('.')
            k = keys.pop(0)
            if keys:
                try:
                    return getitem(keys, context[k])
                except KeyError, e:
                    raise KeyError(key)
            if hasattr(context, 'count'):
                k = int(k)
            return context[k]

        if __name__ == "__main__":
            print getitem('database', context)
            print getitem('database.port', context)
            print getitem('database.users.0', context)
            print getitem('admins', context)
            print getitem('domain.name', context)
            try:
                getitem('database.nosuchkey', context)
            except KeyError, e:
                print "Error:", e

    Thanks.
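
    One possibly tidier variant (a sketch, keeping the same dotted-key behaviour for the cases shown above): walk the parts with reduce, treating a part as a list index when the current object is a list, and checking for a literal key first so 'domain.name' keeps working.

        def getitem(path, context):
            if path in context:                      # literal keys that happen to contain dots
                return context[path]

            def step(obj, part):
                if isinstance(obj, list):
                    return obj[int(part)]
                return obj[part]

            try:
                # reduce is a builtin on Python 2; use functools.reduce on Python 3
                return reduce(step, path.split('.'), context)
            except (KeyError, IndexError, ValueError):
                raise KeyError(path)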

    Read the article

  • Display additional data while iterating over a Django formset

    - by Jannis
    Hi, I have a list of soccer matches for which I'd like to display forms. The list comes from a remote source.

        matches = ["A vs. B", "C vs. D", "E vs. F"]
        MatchFormset = formset_factory(MatchForm, extra=len(matches))
        formset = MatchFormset()

    On the template side, I would like to display the formset with the corresponding title (i.e. "A vs. B"):

        {% for form in formset.forms %}
        <fieldset>
            <legend>{{ TITLE }}</legend>
            {{ form.team1 }} : {{ form.team2 }}
        </fieldset>
        {% endfor %}

    Now how do I get TITLE to contain the right title for the current form? Or, asked in a different way: how do I iterate over matches with the same index as the iteration over formset.forms? Thanks for your input!
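
    One way (a sketch; the view and template names are placeholders): zip the titles and forms together in the view and hand the pairs to the template.

        from django.shortcuts import render_to_response

        def match_list(request):
            formset = MatchFormset()
            pairs = zip(matches, formset.forms)      # titles and forms, index-aligned
            return render_to_response('matches.html',
                                      {'formset': formset, 'pairs': pairs})

    The template can then loop with {% for title, form in pairs %} and render {{ title }} inside the <legend>; the formset itself (management form included) still needs to be emitted so it validates on POST.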

    Read the article

  • Crossfading audio with PyQT4 and Phonon

    - by dwelch
    I'm trying to get audio files to crossfade with Phonon. I'm using PyQt4. I have tracks queuing properly, but I'm stuck with the fade effect. I think I need to be using the KVolumeFader effect. Here's my current code:

        def music_play(self):
            self.delayedInit()
            self.m_media.setCurrentSource(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            self.m_media.play()

        def music_stop(self):
            self.m_media.stop()

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                Phonon.createPath(self.m_media, audioOutput)

        def enqueueNextSource(self):
            if len(self.playlist) >= self.playlist_pos + 1:
                self.playlist_pos += 1
                self.m_media.enqueue(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            else:
                self.m_media.stop()

    Can anyone give me some advice on implementing the effect?

    Read the article

  • Django and conditional aggregates

    - by piquadrat
    I have two models, authors and articles:

        class Author(models.Model):
            name = models.CharField('name', max_length=100)

        class Article(models.Model):
            title = models.CharField('title', max_length=100)
            pubdate = models.DateTimeField('publication date')
            authors = models.ManyToManyField(Author)

    Now I want to select all authors and annotate them with their respective article count. That's a piece of cake with Django's aggregates. The problem is, it should only count the articles that are already published. According to ticket 11305 in the Django ticket tracker, this is not yet possible. I tried to use the CountIf annotation mentioned in that ticket, but it doesn't quote the datetime string and doesn't make all the joins it would need. So, what's the best solution, other than writing custom SQL?
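
    A sketch of one workaround that often covers this case: filtering on the related model before annotating restricts the join, so Count only sees the rows that matched the filter. The trade-off is that authors with no published articles drop out of the queryset entirely.

        import datetime
        from django.db.models import Count

        now = datetime.datetime.now()
        published_authors = (Author.objects
                             .filter(article__pubdate__lte=now)
                             .annotate(num_articles=Count('article')))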

    Read the article

  • How should I rewrite my code to make it amenable to unittesting?

    - by justin
    I've been trying to get started with unit-testing while working on a little CLI program. My program basically parses the command line arguments and options, and decides which function to call. Each of the functions performs some operation on a database. So, for instance, I might have a create function:

        def create(self, opts, args):
            # I've left out the error handling.
            strtime = datetime.datetime.now().strftime("%D %H:%M")
            vals = (strtime, opts.message, opts.keywords, False)
            self.execute("insert into mytable values (?, ?, ?, ?)", vals)
            self.commit()

    Should my test case call this function, then execute the select SQL to check that the row was entered? That sounds reasonable, but also makes the tests more difficult to maintain. Would you rewrite the function to return something and check for the return value? Thanks
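
    One common shape for this kind of test (a sketch; MyApp and the Opts stand-in are hypothetical names, assuming the class wraps a sqlite3 connection as the question-mark placeholders suggest): point the object at an in-memory database in setUp, call create(), and assert on a SELECT.

        import sqlite3
        import unittest
        from collections import namedtuple

        Opts = namedtuple('Opts', 'message keywords')        # stand-in for the real options object

        class CreateTests(unittest.TestCase):
            def setUp(self):
                # In-memory database: fast, isolated, and thrown away after each test.
                self.conn = sqlite3.connect(':memory:')
                self.conn.execute("create table mytable "
                                  "(created text, message text, keywords text, done integer)")

            def test_create_inserts_one_row(self):
                app = MyApp(self.conn)                        # hypothetical wrapper under test
                app.create(Opts(message='hello', keywords='a,b'), [])
                rows = self.conn.execute("select message, keywords from mytable").fetchall()
                self.assertEqual(rows, [('hello', 'a,b')])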

    Read the article

  • Munging non-printable characters to dots using string.translate()

    - by Jim Dennis
    So I've done this before and it's a surprisingly ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" does exclude the last few characters from string.printable (newlines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format, or anything similar to that (where additional whitespace will mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table, so I use string.maketrans() for that. Here's the best I could come up with:

        filter = string.maketrans(
            string.translate(string.maketrans('', ''),
                             string.maketrans('', ''), string.printable[:-5]),
            '.' * len(string.translate(string.maketrans('', ''),
                                       string.maketrans('', ''), string.printable[:-5])))

    ...which is an unreadable mess (though it does work). From there you can use something like:

        for each_line in sometext:
            print string.translate(each_line, filter)

    ...and be happy (so long as you don't look under the hood). Now it is more readable if I break that horrid expression into separate statements:

        ascii = string.maketrans('', '')  # The whole ASCII character set
        nonprintable = string.translate(ascii, ascii, string.printable[:-5])  # Optional delchars argument
        filter = string.maketrans(nonprintable, '.' * len(nonprintable))

    And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this!
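
    One arguably more direct way to build the same table (a sketch, relying on Python 2 str.translate semantics): spell out, for each of the 256 byte values, either the character itself or a dot.

        import string

        printable = set(string.printable[:-5])    # keeps the space, drops tab/newline/CR/VT/FF
        table = ''.join(chr(i) if chr(i) in printable else '.' for i in range(256))

        # usage: print each_line.translate(table)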

    Read the article

  • Check if the integer in a list is not duplicated, and sequential

    - by prosseek
    testGroupList is a list of integers. I need to check that the numbers in testGroupList are sequential (i.e. 1-2-3-4...) and not duplicated, ignoring negative integers. I implemented it as follows, and it's pretty ugly. Is there any clever way to do this?

        buff = filter(lambda x: x > 0, testGroupList)
        maxval = max(buff)
        for i in range(maxval):
            id = i + 1
            val = buff.count(id)
            if val == 1:
                print id,
            elif val >= 2:
                print "(Test Group %d duplicated %d times)" % (id, val),
            elif val == 0:
                print "(Test Group %d missing)" % id,
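
    A shorter check (a sketch, using the question's testGroupList): after dropping the negatives, the list is sequential-from-1 with no duplicates exactly when sorting it reproduces range(1, max + 1).

        positives = [x for x in testGroupList if x > 0]
        expected = range(1, max(positives) + 1)
        ok = sorted(positives) == list(expected)       # True only if no gaps and no duplicates

        # To report the problems rather than just detect them:
        missing = set(expected) - set(positives)
        duplicated = set(x for x in positives if positives.count(x) > 1)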

    Read the article

  • Server authorization with MD5 and SQL.

    - by Charles
    I currently have a SQL database of passwords stored in MD5. The server needs to generate a unique key and send it to the client. In the client, it will use the key as a salt, hash it together with the password, and send the result back to the server. The only problem is that the SQL DB already has the passwords in MD5. Therefore, for this to work, I would have to MD5 the password client side, then MD5 it again with the salt. Am I doing this wrong? It doesn't seem like a proper solution. Any information is appreciated.
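
    A sketch of the double-hash scheme described above (function names are illustrative; MD5 is a weak choice for password storage in general, and this only mirrors the existing design): both sides hash the password first, so the server can verify against the MD5 it already stores.

        import binascii
        import hashlib
        import os

        def make_nonce():
            # Server: a random per-login challenge sent to the client.
            return binascii.hexlify(os.urandom(16))

        def client_response(password, nonce):
            # Client: hash the password first (matching what the DB already stores),
            # then hash again with the server's nonce as the salt.
            inner = hashlib.md5(password).hexdigest()
            return hashlib.md5(nonce + inner).hexdigest()

        def server_check(stored_md5_hex, nonce, response):
            # Server: recompute from the stored MD5 and compare.
            return hashlib.md5(nonce + stored_md5_hex).hexdigest() == response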

    Read the article

  • Textually diffing JSON

    - by Richard Levasseur
    As part of my release processes, I have to compare some JSON configuration data used by my application. As a first attempt, I just pretty-printed the JSON and diffed it (using kdiff3 or just diff). As that data has grown, however, kdiff3 confuses different parts of the output, making additions look like giant modifies, odd deletions, etc. It makes it really hard to figure out what is different. I've tried other diff tools, too (meld, kompare, diff, a few others), but they all have the same problem. Despite my best efforts, I can't seem to format the JSON in a way that the diff tools can understand. Example data:

        [
          {
            "name": "date",
            "type": "date",
            "nullable": true,
            "state": "enabled"
          },
          {
            "name": "owner",
            "type": "string",
            "nullable": false,
            "state": "enabled",
          }
          ...lots more...
        ]

    The above probably wouldn't cause the problem (the problem occurs when there begin to be hundreds of lines), but that's the gist of what is being compared. That's just a sample; the full objects have 4-5 attributes, and some attributes have 4-5 attributes in them. The attribute names are pretty uniform, but their values are pretty varied. In general, it seems like all the diff tools confuse the closing "}" of one object with the next object's closing "}". I can't seem to break them of this habit. I've tried adding whitespace, changing indentation, and adding some "BEGIN" and "END" strings before and after the respective objects, but the tools still get confused.
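
    One approach that usually makes textual diffs line up (a sketch; the file names are placeholders): parse both files and re-serialize with sorted keys and a fixed indent before diffing, so equivalent objects always print identically.

        import json

        def normalized(path):
            with open(path) as f:
                data = json.load(f)
            # sort_keys plus a fixed indent gives a canonical, line-oriented form
            return json.dumps(data, indent=2, sort_keys=True) + '\n'

        with open('a.normalized.json', 'w') as out:
            out.write(normalized('a.json'))
        with open('b.normalized.json', 'w') as out:
            out.write(normalized('b.json'))
        # then run: diff -u a.normalized.json b.normalized.json

    When only a yes/no answer is needed, comparing the parsed objects directly (json.load both, then ==) sidesteps the text diff entirely.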

    Read the article

  • How to call Twitter's Streaming/Filter Feed with urllib2/httplib?

    - by Simon
    Update: I switched this back from answered, as I tried the solution posed in Nick's cogent answer and switched to Google's urlfetch:

        logging.debug("starting urlfetch for http://%s%s" % (self.host, self.url))
        result = urlfetch.fetch("http://%s%s" % (self.host, self.url), payload=self.body,
                                method="POST", headers=self.headers, allow_truncated=True, deadline=5)
        logging.debug("finished urlfetch")

    but unfortunately "finished urlfetch" is never printed - I see the timeout happen in the logs (it returns 200 after 5 seconds), but execution doesn't seem to return.

    Hi all - I'm attempting to play around with Twitter's Streaming (aka firehose) API with Google App Engine (I'm aware this probably isn't a great long-term play as you can't keep the connection perpetually open with GAE), but so far I haven't had any luck getting my program to actually parse the results returned by Twitter. Some code:

        logging.debug("firing up urllib2")
        req = urllib2.Request(url="http://%s%s" % (self.host, self.url), data=self.body, headers=self.headers)
        logging.debug("called urlopen for %s %s, about to call urlopen" % (self.host, self.url))
        fobj = urllib2.urlopen(req)
        logging.debug("called urlopen")

    When this executes, unfortunately, my debug output never shows the "called urlopen" line printed. I suspect what's happening is that Twitter keeps the connection open and urllib2 doesn't return because the server doesn't terminate the connection. Wireshark shows the request being sent properly and a response returned with results. I tried adding Connection: close to my request header, but that didn't yield a successful result. Any ideas on how to get this to work? Thanks, Simon

    Read the article

  • Favorite Django Tips & Features?

    - by Haes
    Inspired by the question series 'Hidden features of ...', I am curious to hear about your favorite Django tips or lesser known but useful features you know of. Please, include only one tip per answer. Add Django version requirements if there are any.

    Read the article

  • Does the sender address for Google App Engine mail have to be my own Gmail?

    - by zjm1126
    My Gmail address is [email protected]. Can I only use [email protected] as the sender="..." value?

        from google.appengine.api import mail

        message = mail.EmailMessage(sender="[email protected]",
                                    subject="Your account has been approved")
        message.to = "[email protected]"
        message.body = """
        Dear Albert:

        Your example.com account has been approved. You can now visit
        http://www.example.com/ and sign in using your Google Account to
        access new features.

        Please let us know if you have any questions.

        The example.com Team
        """
        message.send()

    Thanks

    Read the article

  • Rewriting Live TCP Streams

    - by user213060
    I want to rewrite TCP/IP streams. Ettercap's etterfilter command lets you perform simple live replacements of TCP/IP data based on fixed strings or regexes. Example: http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833 I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to a VPN software or something? Thanks!

    Read the article

  • Start PyGTK cellrenderer edit from code

    - by mkotechno
    I have a treeview with an editable CellRendererText:

        self.renderer = gtk.CellRendererText()
        self.renderer.set_property('editable', True)

    But now I need to start editing from code instead of waiting for the user, to focus the user's attention on the fact that they just created a new row and it needs to be named. I tried this, but it does not work:

        self.renderer.start_editing(
            gtk.gdk.Event(gtk.gdk.NOTHING), self.treeview, str(index),
            gtk.gdk.Rectangle(), gtk.gdk.Rectangle(), 0)

    It doesn't throw errors either, but the documentation about what each argument is for is not clear; in fact I really don't know if the start_editing method is meant for this. All suggestions are welcome, thanks.
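
    One approach worth trying (a sketch; it assumes the editable renderer sits in the tree view's first column): gtk.TreeView.set_cursor() accepts a start_editing flag, which puts the given cell straight into edit mode.

        # Right after appending the new row at position `index`:
        self.treeview.grab_focus()
        column = self.treeview.get_column(0)             # column holding the editable renderer
        self.treeview.set_cursor((index,), column, start_editing=True)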

    Read the article

  • Use the output of logs in the execution of a program

    - by myle
    When I try to create a specific object, the program crashes. However, I use a module (mechanize) which logs useful information just before the crash; if I had that information available, I could avoid the crash. Is there any way to use the information which is logged (when I use the function set_debug_redirects) during the normal execution of the program?

    To be a bit more specific, I try to emulate the login behavior of a web page. The program crashes because it can't handle a specific "Following HTTP-EQUIV=REFRESH to <omitted_url>". Given this URL, which is available in the logs but not as part of the exception which is thrown, I could visit the page and complete the login process successfully. Any other suggestions that may solve the problem are welcome. The code so far:

        SERVICE_LOGIN_BOX_URL = "https://www.google.com/accounts/ServiceLoginBox?service=adsense&ltmpl=login&ifr=true&rm=hide&fpui=3&nui=15&alwf=true&passive=true&continue=https%3A%2F%2Fwww.google.com%2Fadsense%2Flogin-box-gaiaauth&followup=https%3A%2F%2Fwww.google.com%2Fadsense%2Flogin-box-gaiaauth&hl=en_US"

        def init_browser():
            # Browser
            br = mechanize.Browser()
            # Cookie Jar
            cj = cookielib.LWPCookieJar()
            br.set_cookiejar(cj)
            # Browser options
            br.set_handle_equiv(True)
            br.set_handle_gzip(False)
            br.set_handle_redirect(True)
            br.set_handle_referer(True)
            br.set_handle_robots(True)
            br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=30.0, honor_time=False)
            # Want debugging messages?
            #br.set_debug_http(True)
            br.set_debug_redirects(True)
            #br.set_debug_responses(True)
            return br

        def adsense_login(login, password):
            br = init_browser()
            r = br.open(SERVICE_LOGIN_BOX_URL)
            html = r.read()
            # Select the first (index zero) form
            br.select_form(nr=0)
            br.form['Email'] = login
            br.form['Passwd'] = password
            br.submit()
            req = br.click_link(text='click here to continue')
            try:
                # this is where it crashes
                br.open(req)
            except HTTPError, e:
                sys.exit("post failed: %d: %s" % (e.code, e.msg))
            return br
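
    If mechanize routes these debug messages through the standard logging module (it appears to; treat that as an assumption), one option is a sketch like the following: attach a handler that keeps the records in memory, and after the failure search the captured lines for the HTTP-EQUIV=REFRESH target.

        import logging

        class CapturingHandler(logging.Handler):
            """Keep formatted log records in memory so they can be inspected later."""
            def __init__(self):
                logging.Handler.__init__(self)
                self.lines = []
            def emit(self, record):
                self.lines.append(self.format(record))

        capture = CapturingHandler()
        root = logging.getLogger()           # handlers on the root logger see propagated messages
        root.addHandler(capture)
        root.setLevel(logging.DEBUG)

        # ... call adsense_login() and catch the failure ...

        refresh_lines = [l for l in capture.lines if 'HTTP-EQUIV=REFRESH' in l]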

    Read the article

  • Preserve time stamp when shrinking an image

    - by Ckhrysze
    My digital camera takes pictures at a very high resolution, and I have a PIL script to shrink them to 800x600 (or 600x800). However, it would be nice for the resulting file to retain the original timestamp. I noticed in the docs that I can use a file object instead of a name in PIL's image save method, but I don't know if that will help or not. My code is basically:

        name, ext = os.path.splitext(filename)
        # open an image file (.bmp, .jpg, .png, .gif) you have in the working folder
        image = Image.open(filename)
        width = 800
        height = 600
        w, h = image.size
        if h > w:
            width = 600
            height = 800
        name = name + ".jpg"
        shrunken = image.resize((width, height), Image.ANTIALIAS)
        shrunken.save(name)

    Thank you for any help you can give!
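
    The resize itself doesn't touch the original file's filesystem times; it's the newly written JPEG that gets a fresh timestamp. One simple fix (a sketch): read the access and modification times with os.stat before saving and copy them onto the new file with os.utime.

        import os

        st = os.stat(filename)                            # times of the original photo
        shrunken.save(name)
        os.utime(name, (st.st_atime, st.st_mtime))        # give the new file the old times

    If "timestamp" means the EXIF capture date rather than the file's mtime, that is separate metadata which a plain resize-and-save does not carry over.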

    Read the article
