Search Results

Search found 27655 results on 1107 pages for 'visual python'.

  • Which webserver to use with bottle?

    - by luc
    Bottle can use several web servers: the built-in HTTP development server, plus support for paste, fapws3, flup, cherrypy or any other WSGI-capable server. I am using bottle for a desktop app, and I guess the development server is enough in this case. I would like to know if some of you have experience with one of the alternative servers. Which server for which purpose? Thanks in advance.
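
    A minimal sketch of how bottle picks its server backend (the host, port and route below are made up for illustration): the built-in development server is the default, and switching to an installed alternative such as cherrypy or paste is a one-keyword change.

        from bottle import Bottle, run

        app = Bottle()

        @app.route('/')
        def index():
            return 'hello'

        # Default: bottle's built-in single-threaded development server.
        run(app, host='127.0.0.1', port=8080)

        # With an alternative WSGI backend installed, only the server= keyword changes,
        # e.g. run(app, host='127.0.0.1', port=8080, server='cherrypy')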

    Read the article

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because the networking may get mucked up during the tests, and because I need some of the output from them, I would like to just redirect output to a file. In similar scenarios the common response is to use 'screen -m -d' or 'nohup'. However, with paramiko's exec_cmd, nohup doesn't actually exit. Using "bash -c -l nohup test_cmd &" doesn't work either; exec_cmd still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, as best I can figure out, it doesn't work at all). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture their output in such a way as to end paramiko's exec_cmd blocking? Update: the dtach command works nicely for this!
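
    A minimal sketch of the dtach approach mentioned in the update, assuming a hypothetical host, user and test_cmd: dtach -n starts the command in a detached session, so the remote shell (and paramiko's exec call) returns immediately while output still lands in a file.

        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('hostA', username='tester')   # hypothetical host and user

        # Run test_cmd detached; stdout/stderr are redirected inside the dtach session.
        cmd = "dtach -n /tmp/task1.sock sh -c 'test_cmd > /tmp/task1.log 2>&1'"
        stdin, stdout, stderr = client.exec_command(cmd)
        stdout.channel.recv_exit_status()   # returns as soon as dtach has detached
        client.close()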

    Read the article

  • How to turn a list of tuples into a string?

    - by matt
    I have a list of tuples that I'm trying to incorporate into a SQL query, but I can't figure out how to join them together without adding slashes. My list looks like this: list = [('val', 'val'), ('val', 'val'), ('val', 'val')] If I turn each tuple into a string and try to join them with a comma, I get something like ' (\'val\', \'val\'), ... ' What's the right way to do this, so I can get the list (without the surrounding brackets) as a string?
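
    A small sketch (the values and table names are made up): the backslashes are only repr() escaping, so joining the str() of each tuple already gives the comma-separated text; for a real query, though, passing the values as driver parameters is the safer route.

        tuples = [('a', 'b'), ('c', 'd'), ('e', 'f')]

        # Plain string form, without the outer list brackets:
        joined = ', '.join(str(t) for t in tuples)
        print(joined)          # ('a', 'b'), ('c', 'd'), ('e', 'f')

        # Safer for SQL: let the driver substitute the values, e.g. with sqlite3:
        # placeholders = ', '.join(['(?, ?)'] * len(tuples))
        # cursor.execute('INSERT INTO t (c1, c2) VALUES ' + placeholders,
        #                [v for tup in tuples for v in tup])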

    Read the article

  • Serve external template in Django

    - by AlexeyMK
    Hey, I want to do something like return render_to_response("http://docs.google.com/View?id=bla", args) and serve an external page with Django arguments. Django doesn't like this (it looks for templates in very particular places). What's the easiest way to make this work? Right now I'm thinking of using urllib to save the page somewhere local on my server and then pointing the template loader there. Note: I'm not looking for anything particularly scalable here; I realize my proposal above is a little dirty.
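
    A rough sketch (not from the question): instead of saving the page to disk, the fetched HTML can be fed straight to Django's Template class and rendered with a Context, so the normal template loaders never get involved; the URL is the one from the question and the context values are placeholders.

        import urllib2
        from django.http import HttpResponse
        from django.template import Template, Context

        def external_page(request):
            raw = urllib2.urlopen('http://docs.google.com/View?id=bla').read()
            tmpl = Template(raw)   # treat the fetched page as template source
            return HttpResponse(tmpl.render(Context({'name': 'value'})))   # placeholder args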

    Read the article

  • Loading url with cyrillic symbols

    - by Ockonal
    Hi guys, I have to load a URL containing Cyrillic symbols. My script should work with this: http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/ If I open it in a browser the escapes are shown as normal characters, but my urllib code fails with a 404 error. How do I encode this URL correctly? When I use the URL directly in code, like address = 'that address', it works perfectly. But I got this URL by parsing a page, and I have a list of such URLs containing Cyrillic. Maybe they have an incorrect encoding? Here is more code:

        requestData = urllib2.Request( %SOME_ADDRESS%, None, {"User-Agent": user_agent})
        requestHandler = pageHandler.open(requestData)
        pageData = requestHandler.read().decode('utf-8')
        soupHandler = BeautifulSoup(pageData)
        topicLinks = []
        for postBlock in soupHandler.findAll('a', href=re.compile('%SOME_REGEXP%')):
            topicLinks.append(postBlock['href'])
        postAddress = choice(topicLinks)
        postRequestData = urllib2.Request(postAddress, None, {"User-Agent": user_agent})
        postHandler = pageHandler.open(postRequestData)
        postData = postHandler.read()

    The failure:

        File "/usr/lib/python2.6/urllib2.py", line 518, in http_error_default
          raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        urllib2.HTTPError: HTTP Error 404: Not Found
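
    A sketch of one likely fix (an assumption, not from the question): if the scraped href comes back as a unicode string with raw Cyrillic characters, the path has to be percent-encoded as UTF-8 before urllib2 can request it. The href below is hypothetical; user_agent is the variable from the code above.

        # -*- coding: utf-8 -*-
        import urllib
        import urllib2
        import urlparse

        href = u'http://wincode.org/программирование/'   # hypothetical scraped link
        parts = urlparse.urlsplit(href)
        quoted_path = urllib.quote(parts.path.encode('utf-8'))
        safe_url = urlparse.urlunsplit((parts.scheme, parts.netloc, quoted_path,
                                        parts.query, parts.fragment))
        postData = urllib2.urlopen(urllib2.Request(safe_url, None,
                                                   {"User-Agent": user_agent})).read()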

    Read the article

  • Huge amount of time sending data with suds and proxy

    - by Roman
    Hi everyone, I have the following code to send data through a proxy using suds:

        import urllib2
        import suds

        t = suds.transport.http.HttpTransport()
        proxy = urllib2.ProxyHandler({'http': 'http://192.168.3.217:3128'})
        opener = urllib2.build_opener(proxy)
        t.urlopener = opener
        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL', transport=t)
        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)

    I don't know why, but a single request can spend two or three minutes sending data, or even more, and it sometimes raises a "Gateway timeout" exception. If I don't use the proxy, the request finishes in about two seconds or less. Here is the SOAP reply:

        (ActionResponse){
          Id = None
          Action = "Action.None"
          Objects = ""
        }

    The proxy works fine with other requests through urllib2, and with normal web browsers like Firefox. Does anyone have any idea what's happening here with suds? Thanks a lot in advance!
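
    A small sketch of an alternative worth trying (an assumption, not from the question): suds can be told about the proxy through its own proxy option, which avoids replacing the transport's opener by hand.

        import suds

        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL')
        ws.set_options(proxy={'http': '192.168.3.217:3128'})   # suds' own proxy option
        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)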

    Read the article

  • How to set up source control in VS2010

    - by Jouke van der Maas
    Hi, I want to set up source control for my project, but it seems like I need a server for this. I've never done this before, and I couldn't find anything helpful yet. Is there any way to host a server locally so Visual Studio can use it? Or do you know of any online (free) servers I can use? By the way, if source control is not actually what I should use for keeping track of changes in my files, please suggest a better option. Thanks in advance.

    Read the article

  • Why is win32com so much slower than xlrd?

    - by Josh
    I have the same code written using win32com and xlrd. xlrd performs the algorithm in less than a second, while win32com takes minutes. Here is the win32com version:

        def makeDict(ws):
            """makes dict with key as header name, value as tuple of column begin and column end (inclusive)"""
            wsHeaders = {}  # key is header name, value is column begin and end inclusive
            for cnum in xrange(9, find_last_col(ws)):
                if ws.Cells(7, cnum).Value:
                    wsHeaders[str(ws.Cells(7, cnum).Value)] = (cnum, find_last_col(ws))
                    for cend in xrange(cnum + 1, find_last_col(ws)):  # finds end column
                        if ws.Cells(7, cend).Value:
                            wsHeaders[str(ws.Cells(7, cnum).Value)] = (cnum, cend - 1)
                            break
            return wsHeaders

    And the xlrd version:

        def makeDict(ws):
            """makes dict with key as header name, value as tuple of column begin and column end (inclusive)"""
            wsHeaders = {}  # key is header name, value is column begin and end inclusive
            for cnum in xrange(8, ws.ncols):
                if ws.cell_value(6, cnum):
                    wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, ws.ncols)
                    for cend in xrange(cnum + 1, ws.ncols):  # finds end column
                        if ws.cell_value(6, cend):
                            wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, cend - 1)
                            break
            return wsHeaders
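
    A sketch of why the win32com version is so much slower, with a possible workaround (an assumption, not from the question): every ws.Cells(...).Value is a separate COM round-trip into Excel, so fetching the whole header row with one Range call and looping over it in Python removes almost all of that overhead. find_last_col and the row/column offsets are the ones used above.

        last = find_last_col(ws)
        # One COM call returns the whole header row as a tuple of tuples.
        header_row = ws.Range(ws.Cells(7, 9), ws.Cells(7, last - 1)).Value[0]
        for offset, value in enumerate(header_row):
            cnum = 9 + offset
            if value:
                # ... same end-column search as above, but against header_row
                # rather than ws.Cells(), so no further COM traffic is needed.
                pass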

    Read the article

  • Artificial Intelligence in online game using Google App Engine

    - by Hortinstein
    I am currently in the planning stages of a game for Google App Engine, but I cannot wrap my head around how I am going to handle AI. I intend to have persistent NPCs that move about the map, but short of writing a program that generates the same XML requests I use to control player actions and running it on another server, I am stuck on how to do it. I have looked at the Task Queue feature, but because long-running processes are not an option on App Engine, I am a little stuck. I intend to run multiple server instances with 200+ persistent NPC entities that I will need to update. Most of the action is slow roaming around based on player movements/concentrations, and attacking players at close range... (you can probably guess the type of game I'm developing).
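
    A rough sketch of one way around the long-running-process limit (an assumption, not from the question): a cron-triggered handler fans the NPC population out into many short task-queue tasks, each updating a small batch, so no single request has to run long. The handler URL, batch size and npc_keys parameter are made up.

        from google.appengine.api import taskqueue

        BATCH = 20   # hypothetical batch size

        def enqueue_npc_updates(npc_keys):
            # Called from a cron handler; each task updates one small slice of NPCs.
            for i in range(0, len(npc_keys), BATCH):
                taskqueue.add(url='/tasks/update_npcs',
                              params={'keys': ','.join(npc_keys[i:i + BATCH])})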

    Read the article

  • Error -3 while decompressing data: incorrect header check

    - by Rahul99
    I have a .zip file which contains CSV data. I am uploading the .zip file using <input type="file" name="select_file"/> and I want to decompress it and read the CSV data:

        file_data = self.request.get('select_file')
        file_str = zlib.decompress(file_data)
        #file_data_list = file_str.split('\n')
        #file_Reader = csv.reader(file_data_list, quoting=csv.QUOTE_NONE)

    I am expecting CSV data in file_str, but instead I get this error:

        Error -3 while decompressing data: incorrect header check

    What should I use?
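
    A small sketch of the usual fix (an assumption, not from the question): a .zip upload is an archive with its own container format, not a raw zlib stream, so the zipfile module is what can open it; the member handling below assumes a single CSV inside the archive.

        import csv
        import zipfile
        from StringIO import StringIO

        file_data = self.request.get('select_file')   # same upload as above
        archive = zipfile.ZipFile(StringIO(file_data))
        member = archive.namelist()[0]                 # assumes one CSV in the archive
        file_reader = csv.reader(StringIO(archive.read(member)), quoting=csv.QUOTE_NONE)
        for row in file_reader:
            pass  # process each CSV row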

    Read the article

  • Simple XML over http web service

    - by Mark
    I have a simple HTML service, developed in Django. You enter your name, it posts this, and it returns a value (male/female). I need to offer this as a web service. I have no idea where to start. I want to accept an XML request and provide an XML response - that's it. Can anyone give me any pointers? Googling it is difficult when you don't know what you're searching for.
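
    A bare-bones sketch of an XML-over-HTTP view (the view name, element names and the male/female rule are all made up): the request body is parsed as XML and the answer is written back as XML, with no extra web-service framework involved.

        from django.http import HttpResponse
        from xml.etree import ElementTree as ET

        def gender_service(request):
            # Expects a POST body such as <request><name>Anna</name></request>
            doc = ET.fromstring(request.raw_post_data)
            name = doc.findtext('name', '')
            gender = 'female' if name.lower().endswith('a') else 'male'   # placeholder rule
            return HttpResponse('<response><gender>%s</gender></response>' % gender,
                                mimetype='text/xml')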

    Read the article

  • Way to get VS 2008 to stop forcing indentation on namespaces?

    - by Earlz
    I've never really been a big fan of the way most editors handle namespaces. They always force you to add an extra, pointless level of indentation. For instance, I have a lot of code that I would much rather have formatted as

        namespace mycode{
        class myclass{
          void function(){
            foo();
          }
          void foo(){
            bar();
          }
          void bar(){
            //code..
          }
        }
        }

    and not something like

        namespace mycode{
          class myclass{
            void function(){
              foo();
            }
            void foo(){
              bar();
            }
            void bar(){
              //code..
            }
          }
        }

    Honestly, I don't even like the class being indented most of the time, because I usually have only one class per file. It doesn't look so bad here, but when you get a ton of code and a lot of scopes you can easily have indentation that pushes you off the screen; plus, here I used 2-space indents rather than the 4-space indents we actually use. Anyway, is there some way to get Visual Studio to stop indenting namespaces for me like that?

    Read the article

  • How can I detect whether an image is a PNG or APNG format?

    - by perlit
    APNG is backwards compatible with PNG. I opened an APNG and a PNG file in a hex editor and the first few bytes look identical. So if a user uploads either of these formats, how do I detect what the format really is? I've seen this done on some sites that block APNG. I'm guessing the ImageMagick library makes this easy, but what if I wanted to do the detection without an image processing library (for learning purposes)? Can I look for specific bytes that tell me whether the file is an APNG? Solutions in any language are welcome.
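
    A small sketch in Python (an assumption, not from the question): an APNG carries an 'acTL' animation-control chunk before the first 'IDAT', so walking the chunk headers is enough to tell the two apart without any imaging library.

        import struct

        def is_apng(path):
            with open(path, 'rb') as f:
                if f.read(8) != '\x89PNG\r\n\x1a\n':   # PNG signature
                    return False
                while True:
                    head = f.read(8)
                    if len(head) < 8:
                        return False
                    length, ctype = struct.unpack('>I4s', head)
                    if ctype == 'acTL':                # animation control chunk => APNG
                        return True
                    if ctype in ('IDAT', 'IEND'):      # image data reached, plain PNG
                        return False
                    f.seek(length + 4, 1)              # skip chunk data + CRC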

    Read the article

  • Avoiding nesting two for loops

    - by chavanak
    Hi, please have a look at the code below:

        import string
        from collections import defaultdict

        first_complex=open( "residue_a_chain_a_b_backup.txt", "r" )
        first_complex_lines=first_complex.readlines()
        first_complex_lines=map( string.strip, first_complex_lines )
        first_complex.close()

        second_complex=open( "residue_a_chain_a_c_backup.txt", "r" )
        second_complex_lines=second_complex.readlines()
        second_complex_lines=map( string.strip, second_complex_lines )
        second_complex.close()

        list_1=[]
        list_2=[]
        for x in first_complex_lines:
            if x[0]!="d":
                list_1.append( x )
        for y in second_complex_lines:
            if y[0]!="d":
                list_2.append( y )

        j=0
        list_3=[]
        list_4=[]
        for a in list_1:
            pass
        for b in list_2:
            pass
        if a==b:
            list_3.append( a )

        kvmap=defaultdict( int )
        for k in list_3:
            kvmap[k]+=1
        print kvmap

    Normally I use izip or izip_longest to club two for loops, but this time the lengths of the files are different, and I don't want a None entry. If I use the above method, the run time grows and becomes useless. How am I supposed to get the two for loops going? Cheers, Chavanak
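
    A short sketch of one way to get the counts the final loop is after (an assumption about the intent, not from the question): intersecting the two lists as sets avoids pairing them up element by element, so differing lengths and None padding never come into it. list_1 and list_2 are the filtered line lists built above.

        from collections import defaultdict

        common = set(list_1) & set(list_2)
        kvmap = defaultdict(int)
        for line in list_1:
            if line in common:
                kvmap[line] += 1
        print kvmap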

    Read the article

  • App only spawns one thread

    - by tipu
    I have what I thought was a thread-friendly app, but after adding some output I've concluded that of the 15 threads I am attempting to run, only one actually does. I have:

        if __name__ == "__main__":
            fhf = FileHandlerFactory()
            tweet_manager = TweetManager("C:/Documents and Settings/Administrator/My Documents/My Dropbox/workspace/trie/Tweet Search Engine/data/partitioned_raw_tweets/raw_tweets.txt.001")
            start = time.time()
            for i in range(15):
                Indexer(tweet_manager, fhf).start()

    Then in my thread entry point, I do:

        def run(self):
            print(threading.current_thread())
            self.index()

    That results in this:

        <Indexer(Thread-3, started 1168)>

    So of the 15 threads that I thought were running, only one is. Any idea as to why? Edit: code
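
    A tiny diagnostic sketch (not from the question) that can be dropped in right after the start() loop: listing the live threads shows whether all 15 Indexer threads were actually created and are still alive, or whether they exited immediately.

        import threading

        for t in threading.enumerate():
            print(t.name, t.is_alive())   # the main thread plus every Indexer still running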

    Read the article

  • Django: Get remote IP address inside settings.py

    - by Silver Light
    Hello! I want to enable debugging (DEBUG = True) for my Django project only if it runs on localhost. How can I get the user's IP address inside settings.py? I would like something like this to work:

        # Debugging only on localhost
        if user_ip == '127.0.0.1':
            DEBUG = True
        else:
            DEBUG = False

    How do I put the user's IP address into the user_ip variable inside the settings.py file?
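
    A short sketch of the usual workaround (an assumption, not from the question): settings.py is loaded before any request exists, so there is no client IP to read there; keying DEBUG off the machine the project is running on gives a similar effect. The hostname below is made up.

        import socket

        # DEBUG is on only when the project runs on the developer's own machine.
        DEBUG = socket.gethostname() in ('my-dev-machine', 'localhost')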

    Read the article

  • How am I supposed to deploy an ASP.NET MVC 4.0 website?

    - by Matt Frear
    What's the recommended way to deploy a website created with ASP.NET 4.0 and Visual Studio 2010? I've previously always added a web setup project to my solution and used that to create an MSI, even for small applications. But when I build a web setup project in VS2010 it kind of works, yet some things still seem broken:

    1. I need to turn on IIS 6 compatibility on a Windows 2008 R2 box to get the MSI to run.
    2. The MSI includes web.config, web.debug.config, and web.release.config. I thought VS's web.config transformations were supposed to take care of that.

    -Matt

    Read the article

  • How can I display multiple django modelformset forms together?

    - by JT
    I have a problem with needing to provide multiple model-backed forms on the same page. I understand how to do this with single forms, i.e. just create both forms, call them something different, and then use the appropriate names in the template. Now how exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that each 'form' must be rendered together in the appropriate fieldset. For example, I want my template to produce something like this:

        <fieldset>
          <label for="id_base-0-desc">Home Base Description:</label>
          <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" />
          <label for="id_likes-0-icecream">Want ice cream?</label>
          <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" />
        </fieldset>
        <fieldset>
          <label for="id_base-1-desc">Home Base Description:</label>
          <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" />
          <label for="id_likes-1-icecream">Want ice cream?</label>
          <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" />
        </fieldset>

    I am using a loop like this to process the results:

        for base_form, likes_form in map(None, base_forms, likes_forms):

    which works as I'd expect (I'm using map because the number of forms can be different). The problem is that I can't figure out a way to do the same thing with the templating engine. The system does work if I lay out all the base forms together and then all the likes forms afterwards, but that doesn't meet the layout requirements.
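
    A sketch of one way to get the pairing into the template (an assumption, not from the question): zip base_forms and likes_forms into a list of pairs in the view, then let the template's for tag unpack each pair inside a single fieldset; izip_longest pads the shorter list with None, which the template can simply skip.

        from itertools import izip_longest

        pairs = list(izip_longest(base_forms, likes_forms))
        # passed to the template as {'pairs': pairs, ...}

        # Template sketch:
        # {% for base_form, likes_form in pairs %}
        #   <fieldset>
        #     {% if base_form %}{{ base_form.as_p }}{% endif %}
        #     {% if likes_form %}{{ likes_form.as_p }}{% endif %}
        #   </fieldset>
        # {% endfor %}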

    Read the article

  • .NET Application using a native DLL (build management)

    - by moogs
    I have a .NET app that is dependent on a native DLL. I have the .NET app set as AnyCPU. In the post-build step, I plan to copy the correct native DLL from some directory (x86 or AMD64) and place it in the target path. However, this doesn't work. On a 64-bit machine, the environment variable PROCESSOR_ARCHITECTURE is "x86" in Visual Studio. My alternative right now is to create a small tool that outputs the processor architecture. This will be used by the post-build step. Is there a better alternative? (Side Note: when deploying/packaging the app, the right native DLL is copied to the right platform. But this means we have two separate release folders for x86 and AMD64, which is OK since this is for a device driver. The app is a utility tool for the driver).

    Read the article

  • How do I store multiple copies of the same field in Django?

    - by Alistair
    I'm storing OLAC metadata, which describes linguistic resources. Many of the elements of the metadata are repeatable -- for example, a resource can have two languages, three authors and four dates associated with it. Is there any way of storing this in one model? It seems like overkill to define a model for each repeatable metadata element -- especially since each of those models would have only one field: its value.
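
    A small sketch of one common shape for this (the model and field names are made up): a single key/value model with a ForeignKey back to the resource covers every repeatable element, so no per-element model is needed.

        from django.db import models

        class Resource(models.Model):
            title = models.CharField(max_length=200)

        class MetadataElement(models.Model):
            resource = models.ForeignKey(Resource, related_name='elements')
            element = models.CharField(max_length=50)    # e.g. 'language', 'author', 'date'
            value = models.CharField(max_length=200)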

    Read the article
