Search Results

Search found 14007 results on 561 pages for 'python embedding'.


  • why does python.subprocess hang after proc.communicate()?

    - by ccfenix
    I've got an interactive program called my_own_exe. First it prints out alive, then you input S\n and it prints out alive again. Finally you input L\n, it does some processing and exits. However, when I call it from the following Python script, the program seems to hang after printing out the first 'alive'. Can anyone here tell me why this is happening? Thanks. proc2 = subprocess.Popen("my_own_exe", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE) print proc2.communicate()[0] time.sleep(2); print "alive" # hangs after printing this line proc2.communicate('S\n')[0] print "alive" print proc2.communicate()[0] time.sleep(6)
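
    A likely culprit: communicate() waits for the child to exit and may only be called once per process, so the later calls can never work and the first one blocks until my_own_exe gives up waiting for input. A minimal sketch of two alternatives (Python 2, untested against the real my_own_exe):

        import subprocess

        # Option 1: send everything the child will read in one shot;
        # communicate() writes it, closes stdin, and waits for exit.
        proc = subprocess.Popen("my_own_exe", shell=True,
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        out, _ = proc.communicate("S\nL\n")
        print out

        # Option 2: drive the pipes line by line (assumes the child
        # flushes its output after each prompt).
        proc = subprocess.Popen("my_own_exe", shell=True,
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        print proc.stdout.readline()      # first "alive"
        proc.stdin.write("S\n")
        print proc.stdout.readline()      # second "alive"
        proc.stdin.write("L\n")
        proc.stdin.close()
        print proc.stdout.read()          # whatever it prints before exiting
        proc.wait()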

    Read the article

  • How to install distribute for Python 3

    - by chris.nullptr
    I am trying to install distribute using ActivePython 3.1.2 on Windows. Running python distribute_setup.py as described at the cheese shop gives me: No setuptools distribution found running install ... File "build\src\setuptools\command\easy_install.py", line 16, in <module> from setuptools.sandbox import run_setup File "build\src\setuptools\sandbox.py", line 164, in <module> fromlist=['__name__']).__file__) AttributeError: 'module' object has no attribute '__file__' Something went wrong during the installation. See the error message above. Is there possibly an unknown dependency that I'm missing?

    Read the article

  • Python McNuggets problem

    - by challarao
    Hi! I am a student at IIIT. I am new to Python. This question is one of the problems in my problem set. Please help me understand how I should write a program for it. Show that it is possible to buy exactly 50, 51, 52, 53, 54, and 55 McNuggets, by finding solutions to the Diophantine equation. You can solve this in your head, using paper and pencil, or by writing a program. However you choose to solve this problem, list the combinations of 6, 9 and 20 packs of McNuggets you need to buy in order to get each of the exact amounts. Given that it is possible to buy sets of 50, 51, 52, 53, 54 or 55 McNuggets by combinations of 6, 9 and 20 packs, show that it is possible to buy 56, 57, ..., 65 McNuggets. In other words, show how, given solutions for 50-55, one can derive solutions for 56-65, where 6a + 9b + 20c = n.
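
    A minimal brute-force sketch of the Diophantine search; since 50 through 55 all turn out to be solvable, every larger total follows by adding 6-packs to one of those six solutions:

        def mcnuggets_combo(n):
            """Return (a, b, c) with 6a + 9b + 20c == n, or None if impossible."""
            for c in range(n // 20 + 1):
                for b in range((n - 20 * c) // 9 + 1):
                    rest = n - 20 * c - 9 * b
                    if rest % 6 == 0:
                        return (rest // 6, b, c)
            return None

        for n in range(50, 66):
            print n, mcnuggets_combo(n)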

    Read the article

  • Python cookie: request another page

    - by polovinamozga
    #!/usr/bin/python # -*- coding: utf-8 -*- import urllib2 import urllib import httplib import Cookie import cookielib Login = 'user' Password = 'password' Domain = 'inbox.ru' Auth = 'https://auth.mail.ru/cgi-bin/auth' cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) login_data = urllib.urlencode({'Login' : Login, 'Domain' :Domain, 'Password' : Password }) opener.open('https://auth.mail.ru/cgi-bin/auth', login_data) resp = opener.open('https://auth.mail.ru/cgi-bin/auth').read() print resp.decode('cp1251') #output page in cp1251 When the script executes successfully, print resp.decode('cp1251') shows my authenticated page. But when I try to request another page, for example http://my.mail.ru, I see an authorization request. How can I use the cookie with another page?
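
    The opener built with HTTPCookieProcessor(cj) already carries the session cookies, so the fix is simply to reuse that same opener (or install it globally) for every later request, assuming the cookies set by auth.mail.ru are valid for my.mail.ru. A sketch:

        import urllib
        import urllib2
        import cookielib

        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        login_data = urllib.urlencode({'Login': 'user', 'Domain': 'inbox.ru',
                                       'Password': 'password'})
        opener.open('https://auth.mail.ru/cgi-bin/auth', login_data)

        # Reuse the SAME opener: it sends the cookies collected during login.
        urllib2.install_opener(opener)       # optional: plain urllib2.urlopen now uses it too
        page = opener.open('http://my.mail.ru/').read()
        print page.decode('cp1251')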

    Read the article

  • elegant way to make directory recursively in Python?

    - by user248237
    I would like to make a directory in a "recursive" way, i.e. have a function make_directory() that behaves as follows: make_directory("a/b/c/d/") should create directory a, then child b, then c, then d. If any of the parent directories already exist, it should not recreate them; e.g. if "a" exists, then b should be made as a subdirectory of it, then c inside b, and so on. How can I do this in Python? os.mkdir does not have this behavior.
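
    The standard library already covers this: os.makedirs creates all missing intermediate directories (and Python 3.2+ adds an exist_ok flag). A small Python 2 style wrapper that also tolerates directories which already exist:

        import errno
        import os

        def make_directory(path):
            """Create path, including any missing parent directories."""
            try:
                os.makedirs(path)             # makes a/, a/b/, a/b/c/, a/b/c/d/ as needed
            except OSError, e:
                if e.errno != errno.EEXIST:   # ignore "already exists", re-raise anything else
                    raise

        make_directory("a/b/c/d/")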

    Read the article

  • Problem with a callback in the Python optparse module

    - by PierrOz
    Hi guys, I'm playing with Python 2.6 and its optparse module. I would like to convert one of my arguments to a datetime through a callback, but it fails. Here is the code: def parsedate(option, opt_str, value, parser): option.date = datetime.strptime(value, "%Y/%m/%d") def parse_options(args): parser = OptionParser(usage="%prog -l LOGFOLDER [-e]", version="%prog 1.0") parser.add_option("-d", "--date", action="callback", callback="parsedate", dest="date") global options (options, args) = parser.parse_args(args) print option.date.strftime() if __name__ == "__main__": parse_options(sys.argv[1:]) I get an error from optparse.py in _check_callback: "callback not callable". I guess I'm doing something wrong in the way I define my callback, but what, and why? Can anyone help?
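
    Two things look off: callback="parsedate" passes the string rather than the function object (hence "callback not callable"), and without a type= the callback is never handed a value to convert; the converted result also belongs on parser.values rather than on the option. A sketch of a corrected version:

        import sys
        from datetime import datetime
        from optparse import OptionParser

        def parsedate(option, opt_str, value, parser):
            # store the converted value where optparse keeps parsed results
            setattr(parser.values, option.dest, datetime.strptime(value, "%Y/%m/%d"))

        def parse_options(args):
            parser = OptionParser(usage="%prog -d DATE", version="%prog 1.0")
            parser.add_option("-d", "--date", action="callback",
                              callback=parsedate,   # the function itself, not "parsedate"
                              type="string",        # makes optparse pass a value to the callback
                              dest="date")
            options, args = parser.parse_args(args)
            print options.date
            return options

        if __name__ == "__main__":
            parse_options(sys.argv[1:])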

    Read the article

  • How to play sound in Python WITHOUT interrupting music/other sounds from playing

    - by Morlock
    I'm working on a timer in Python which sounds a chime when the waiting time is over. I use the following code: from wave import open as wave_open from ossaudiodev import open as oss_open def _play_chime(): """ Play a sound file once. """ sound_file = wave_open('chime.wav','rb') (nc,sw,fr,nf,comptype, compname) = sound_file.getparams( ) dsp = oss_open('/dev/dsp','w') try: from ossaudiodev import AFMT_S16_NE except ImportError: if byteorder == "little": AFMT_S16_NE = ossaudiodev.AFMT_S16_LE else: AFMT_S16_NE = ossaudiodev.AFMT_S16_BE dsp.setparameters(AFMT_S16_NE, nc, fr) data = sound_file.readframes(nf) sound_file.close() dsp.write(data) dsp.close() It works pretty well, unless something else is already outputting sound. How could I do basically the same thing (under Linux) without requiring that no other sound be playing? If this needs an API that does software mixing, please suggest one. Thanks for the support.
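
    Writing to /dev/dsp directly needs exclusive access to the device, which is why it fails while something else is playing. One possible workaround (a sketch, not the only option) is to let a library that does its own software mixing handle the output, for example pygame's mixer:

        import time
        import pygame

        def play_chime():
            """Play chime.wav through pygame's mixer instead of raw /dev/dsp."""
            pygame.mixer.init()                    # SDL backend, mixes in software
            chime = pygame.mixer.Sound('chime.wav')
            chime.play()
            while pygame.mixer.get_busy():         # block until playback finishes
                time.sleep(0.1)

        play_chime()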

    Read the article

  • vlc python bindings - how to receive keyboard input?

    - by itsadok
    I'm trying to use VLC's python bindings to create my own little video player. The demo implementation is quite simple and nice, but it requires all the keyboard commands to be typed into the console from which the script was run. Is there any way I can handle keyboard input also when the video player itself has focus? Specifically, I care about controlling the video while in fullscreen mode. Perhaps there's a way to keep the keyboard focus in the console (or maybe another window) while showing the video? I'm using Windows XP, if that has any relevance.

    Read the article

  • Python unit-testing with nose: Making sequential tests

    - by cool-RR
    I am just learning how to do unit-testing. I'm on Python / nose / Wing IDE. (The project that I'm writing tests for is a simulations framework, and among other things it lets you run simulations both synchronously and asynchronously, and the results of the simulation should be the same in both.) The thing is, I want some of my tests to use simulation results that were created in other tests. For example, synchronous_test calculates a certain simulation in synchronous mode, but then I want to calculate it in asynchronous mode, and check that the results came out the same. How do I structure this? Do I put them all in one test function, or make a separate asynchronous_test? Do I pass these objects from one test function to another? Also, keep in mind that all these tests will run through a test generator, so I can do the tests for each of the simulation packages included with my program.
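
    One structure that keeps the tests independent of execution order is to compute the simulation once in a lazily filled module-level cache and have separate test functions read from it; make_simulation and the asynchronous flag below are placeholders for whatever the framework actually exposes:

        # test_sync_async.py -- placeholder API names, adapt to the real framework
        _results = {}

        def _get_results():
            """Run the simulation once in each mode and cache the outputs."""
            if not _results:
                sim = make_simulation()                          # assumed factory
                _results['sync'] = sim.run(asynchronous=False)   # assumed signature
                _results['async'] = sim.run(asynchronous=True)
            return _results

        def test_synchronous_mode_runs():
            assert _get_results()['sync'] is not None

        def test_async_matches_sync():
            results = _get_results()
            assert results['async'] == results['sync']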

    Read the article

  • computing z-scores for 2D matrices in scipy/numpy in Python

    - by user248237
    How can I compute the z-score for matrices in Python? Suppose I have the array: a = array([[ 1, 2, 3], [ 30, 35, 36], [2000, 6000, 8000]]) and I want to compute the z-score for each row. The solution I came up with is: array([zs(item) for item in a]) where zs is in scipy.stats.stats. Is there a better built-in vectorized way to do this? Also, is it always good to z-score numbers before using hierarchical clustering with euclidean or seuclidean distance? Can anyone discuss the relative advantages/disadvantages? thanks.
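
    In recent SciPy versions scipy.stats.zscore accepts an axis argument, so the per-row case is a single vectorized call; plain NumPy broadcasting does the same thing. A sketch:

        import numpy as np
        from scipy import stats

        a = np.array([[   1,    2,    3],
                      [  30,   35,   36],
                      [2000, 6000, 8000]], dtype=float)

        z = stats.zscore(a, axis=1)          # z-score each row

        # equivalent by hand, via broadcasting
        z_manual = (a - a.mean(axis=1)[:, np.newaxis]) / a.std(axis=1)[:, np.newaxis]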

    Read the article

  • Python re.sub MULTILINE caret match

    - by cdleary
    The Python docs say: re.MULTILINE: When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline)... By default, '^' matches only at the beginning of the string... So what's going on when I get the following unexpected result? >>> import re >>> s = """// The quick brown fox. ... // Jumped over the lazy dog.""" >>> re.sub('^//', '', s, re.MULTILINE) ' The quick brown fox.\n// Jumped over the lazy dog.'
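
    Nothing is wrong with MULTILINE itself: the fourth positional argument of re.sub is count, not flags, so re.MULTILINE there is silently interpreted as a replacement count. Compiling the pattern with the flag (or, on Python 2.7+, passing flags= by keyword) behaves as expected:

        import re

        s = "// The quick brown fox.\n// Jumped over the lazy dog."

        # re.sub('^//', '', s, re.MULTILINE) passes the flag as the count argument.
        # Bake the flag into a compiled pattern instead (works on any version):
        pattern = re.compile('^//', re.MULTILINE)
        print pattern.sub('', s)

        # Python 2.7+ also accepts: re.sub('^//', '', s, flags=re.MULTILINE)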

    Read the article

  • python: how to convert list of lists into a single nested list

    - by Bhuski
    I have a Python list of lists as shown below: mylist=[ [['orphan1', ['some value1']]], [['parent1', ['child1', ['child', ['some value2']]]]], [['parent1', ['child2', ['child', ['some value3']]]]] ] I need to convert the above list to something like this: result=[ ['orphan1', ['some value1']], ['parent1', ['child1', ['child', ['some value2']]], ['child2', ['child', ['some value3']]]] ] Kindly help me approach this problem. I have given only a simple list here; in the actual scenario my list also contains grandparents and grandchildren. However deep the input nested list is, I need to convert it to a single nested list, with common list elements (parents and grandparents) appearing only once (but the next-to-innermost list element, 'child' in the above example, should appear as many times as it occurs in the input list). I have been trying to do this for the last two days, but did not end up with a working solution. I need to use the output in the Django template filter unordered_list, so that the resultant nested list appears as a nested unordered list in my HTML page.

    Read the article

  • Python: accessing psycopg2 from a def?

    - by i-Malignus
    I'm trying to put a group of defs in one file so I can just import them whenever I want to write a script in Python. I have tried this: def get_dblink( dbstring): """ Return a database cnx. """ global psycopg2 try cnx = psycopg2.connect( dbstring) except Exception, e: print "Unable to connect to DB. Error [%s]" % ( e,) exit( ) but I get this error: global name 'psycopg2' is not defined. In my main file script.py I have: import psycopg2, psycopg2.extras from misc_defs import * hostname = '192.168.10.36' database = 'test' username = 'test' password = 'test' dbstring = "host='%s' dbname='%s' user='%s' password='%s'" % ( hostname, database, username, password) cnx = get_dblink( dbstring) Can anyone give me a hand?
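
    global psycopg2 only declares a name; it never imports anything, and from misc_defs import * does not copy the caller's imports into misc_defs. Importing psycopg2 at the top of misc_defs.py itself fixes it (the try also needs its missing colon). A sketch:

        # misc_defs.py
        import sys
        import psycopg2

        def get_dblink(dbstring):
            """Return a database connection."""
            try:
                return psycopg2.connect(dbstring)
            except Exception, e:
                print "Unable to connect to DB. Error [%s]" % (e,)
                sys.exit(1)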

    Read the article

  • Run Python CGI Script on Windows XP

    - by daveywc
    I have a Windows XP machine that has Apache installed via a VisualSVNServer installation. I am trying to get a simple Python CGI script to run in my browser, e.g. http://build.procepts.com.au:8080/hg/cgi-bin/test.cgi. However, despite trying all the recommended approaches, the browser only ever displays the plain text of the CGI script. Amongst many other attempted solutions I have followed the instructions contained here. My ultimate aim is to be able to use the Apache web server to serve repositories from a new Mercurial installation. Seeing as Apache is already installed by VisualSVNServer, I thought I might as well make use of it. Is there some other trick to getting this working?

    Read the article

  • Web scraping with Python

    - by Jack
    I'm currently trying to scrape a website that has fairly poorly-formatted HTML (often missing closing tags, no use of classes or ids so it's incredibly difficult to go straight to the element you want, etc.). I've been using BeautifulSoup with some success so far, but every once in a while (though quite rarely) I run into a page where BeautifulSoup creates the HTML tree a bit differently from, for example, Firefox or WebKit. While this is understandable, as the formatting of the HTML leaves this ambiguous, if I were able to get the same parse tree as Firefox or WebKit produce I would be able to parse things much more easily. The problems are usually something like the site opening a <b> tag twice; when BeautifulSoup sees the second <b> tag, it immediately closes the first, while Firefox and WebKit nest the <b> tags. Is there a web scraping library for Python (or even any other language, I'm getting desperate) that can reproduce the parse tree generated by Firefox or WebKit, or at least get closer than BeautifulSoup in cases of ambiguity?
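
    Parsers that implement browser-style error recovery tend to get much closer to what Firefox or WebKit build than BeautifulSoup's guesses; lxml.html and html5lib are the usual candidates in Python. A sketch with lxml (the URL is a placeholder):

        import urllib2
        from lxml import html

        page = urllib2.urlopen('http://example.com/badly-formed.html').read()
        tree = html.fromstring(page)          # lenient, browser-like HTML parser

        for bold in tree.findall('.//b'):     # ElementTree-style queries on the recovered tree
            print bold.text_content()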

    Read the article

  • plotting results of hierarchical clustering ontop of a matrix of data in python

    - by user248237
    How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? An example is in the bottom of the following figure: http://www.coriell.org/images/microarray.gif I use scipy.cluster.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data. How can I then plot the data as a matrix where the rows have been reordered to reflect the clustering induced by cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it. Any help on this would be greatly appreciated.
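
    A rough sketch of the usual recipe: run the clustering, take the leaf order from the dendrogram, reorder the matrix rows to match, and draw the dendrogram and an imshow heatmap (with a colorbar as the scale bar) in side-by-side axes. The data here is a random placeholder, and lining the two orderings up exactly usually takes some fiddling:

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.cluster.hierarchy import linkage, dendrogram, leaves_list

        data = np.random.rand(20, 8)                       # placeholder matrix

        Z = linkage(data, method='average')                # hierarchical clustering of the rows

        fig = plt.figure(figsize=(8, 6))

        ax_dendro = fig.add_axes([0.05, 0.1, 0.15, 0.8])   # dendrogram on the left
        dendrogram(Z, orientation='right')                 # draws into the current axes
        ax_dendro.set_xticks([])
        ax_dendro.set_yticks([])

        order = leaves_list(Z)                             # row order implied by the tree
        ax_heat = fig.add_axes([0.25, 0.1, 0.6, 0.8])
        im = ax_heat.imshow(data[order, :], aspect='auto', interpolation='nearest')
        ax_heat.set_yticks([])

        fig.colorbar(im, cax=fig.add_axes([0.88, 0.1, 0.03, 0.8]))
        plt.show()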

    Read the article

  • Python File Meta Tag reading

    - by Jeff
    Does anyone know of a Python module that can pull tag data from multiple media formats? I'm trying to build an app that allows for manipulation of ASF (Windows Media Player files, i.e. WMA, WMV, etc.), ID3, including both ID3v1 and ID3v2 (MPEG files, i.e. MP3), MPEG Audio Bit Stream (i.e. ABS, MP1, MP2, MP3), MPEG Program Stream (MPEG movies, and DVD and HD DVD video discs, i.e. MPG, MPEG, VOB, EVO), and ISO Base Media File Format (e.g. QuickTime, MPEG-4 and iTunes AAC files, i.e. QT, MOV, MP4, M4A, M4B, M4P, M4V, etc.). I don't need ALL of that, just the most standard consumer formats like MOV and MPEG. I can't seem to find a good module or library to support this. Any recommendations?
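
    mutagen is the usual suggestion here: one File() entry point auto-detects the container and reads tags from MP3/ID3, MP4/M4A, ASF/WMA/WMV, OGG, FLAC and more (plain MPEG program streams and VOBs carry little tag data and may need something like hachoir-metadata instead). A sketch, with a placeholder filename:

        import mutagen

        audio = mutagen.File('example.mp3', easy=True)   # container auto-detected
        if audio is not None:
            print audio.info.length                      # duration in seconds
            for key, value in audio.items():             # normalised tag names with easy=True
                print key, value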

    Read the article

  • Importing a function/class from a Python module of the same name

    - by Brendan
    I have a Python package mymodule with a sub-package utils (i.e. a subdirectory which contains modules, each with a function). The functions have the same name as the file/module in which they live. I would like to be able to access the functions as follows: from mymodule.utils import a_function Strangely, however, sometimes I can import functions using the above notation and other times I cannot, and I have not been able to work out why. (Recently, for example, I renamed a function and the file it was in, and reflected this rename in the utils.__init__.py file, but it was then imported as a module rather than as a function in one of my scripts.) The utils.__init__.py reads something like: __all__ = ['a_function', 'b_function' ...] from a_function import a_function from b_function import b_function ... mymodule.__init__.py has no reference to utils. Ideas?
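
    A common way to make this deterministic is to spell the imports out absolutely in utils/__init__.py, so the function always rebinds the submodule name of the same name regardless of import order; the Python 2 implicit-relative form from a_function import a_function is exactly the kind of thing that breaks quietly after a rename. A sketch of the __init__.py:

        # mymodule/utils/__init__.py
        from mymodule.utils.a_function import a_function   # absolute imports: the function
        from mymodule.utils.b_function import b_function   # rebinds the submodule name

        __all__ = ['a_function', 'b_function']

        # callers can then rely on:
        #   from mymodule.utils import a_function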

    Read the article

  • Python equivalent?

    - by user304014
    Is there any way to transform the following Java code into its Python equivalent? public class Animal{ public enum AnimalBreed{ Dog, Cat, Cow, Chicken, Elephant } private static final int Animals = AnimalBreed.Dog.ordinal(); private static final String[] myAnimal = new String[Animals]; private static Animal[] animal = new Animal[Animals]; public static final Animal DogAnimal = new Animal(AnimalBreed.Dog, "woff"); public static final Animal CatAnimal = new Animal(AnimalBreed.Cat, "meow"); private AnimalBreed breed; public static Animal myDog (String name) { return new Animal(AnimalBreed.Dog, name); } }

    Read the article

  • Python string manipulation

    - by paradox
    I'm trying to split a string into a int list for further processing. But somehow I can't remove certain whitespaces in between elements of the list. The string x is supposed to have a length of 1000 instead of 1019. I tried reading the documentation for python and saw the function strip() for stripping whitespaces from strings. However, it only works for trailing and leading whitespaces. How should I go about removing these whitespaces and also how do I convert a str list to a int list? My code is as follows : import array x = """73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450""" y=[] for i in range(0,len(x)): #String is now in a string list if x[i]!='': y.append(x[i]) print(y[i]) print(len(x))
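
    The comparison x[i] != '' is always true (no single character equals the empty string); the 19 extra characters are the newlines inside the triple-quoted string, which strip() only removes at the ends. Filtering on isdigit() and converting in the same pass handles both problems; a sketch with a shortened placeholder block:

        x = "73167176531330624919225119674426574742355349194934\n" \
            "96983520312774506326239578318016984801869478851843"   # 2 of the 20 rows

        digits = [int(ch) for ch in x if ch.isdigit()]   # drops the embedded newlines

        print len(digits)     # 100 here, 1000 for the full block
        print digits[:10]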

    Read the article

  • JavaScript-like Object in Python standard library?

    - by David Wolever
    Quite often, I find myself wanting a simple, "dumb" object in Python which behaves like a JavaScript object (i.e., its members can be accessed either with .member or with ['member']). Usually I'll just stick this at the top of the .py: class DumbObject(dict): def __getattr__(self, attr): return self[attr] def __stattr__(self, attr, value): self[attr] = value But that's kind of lame, and there is at least one bug with that implementation (although I can't remember what it is). So, is there something similar in the standard library? And, for the record, simply instantiating object doesn't work: obj = object() obj.airspeed = 42 Traceback (most recent call last): File "", line 1, in AttributeError: 'object' object has no attribute 'airspeed' Thanks, David
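
    There is no exact drop-in in the Python 2 standard library (collections.namedtuple covers fixed field sets, and much later Python 3.3 added types.SimpleNamespace), so a dict subclass remains the usual recipe. A slightly hardened version that fixes the __stattr__ typo and raises AttributeError for missing keys, which keeps hasattr, copy and pickle happy:

        class AttrDict(dict):
            """dict whose items can also be read and written as attributes."""
            def __getattr__(self, attr):
                try:
                    return self[attr]
                except KeyError:
                    raise AttributeError(attr)   # missing attributes must raise AttributeError
            def __setattr__(self, attr, value):  # the snippet above had a typo: __stattr__
                self[attr] = value

        obj = AttrDict()
        obj.airspeed = 42
        print obj.airspeed, obj['airspeed']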

    Read the article

  • Lost in UTF-8 hell. (Django and Python)

    - by user140314
    I am working through the Django RSS reader project here. The RSS feed will read something like "OKLAHOMA CITY (AP) — James Harden let". The RSS feed's encoding reads encoding="UTF-8", so I believe I am passing utf-8 to markdown in the code snippet below. The em dash is where it chokes. I get the Django error "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)", which is a UnicodeEncodeError. In the variables being passed I see "OKLAHOMA CITY (AP) \u2014 James Harden". The line that is not working is: content = content.encode(parsed_feed.encoding, "xmlcharrefreplace") I am using markdown 2.0, Django 1.1, and Python 2.4. What is the magic sequence of encoding and decoding that I need to make this work? Thanks.
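
    In Python 2 this error usually means a unicode string reached something that expected bytes and was encoded with the default ascii codec along the way. The usual cure is the "unicode sandwich": decode bytes to unicode once at the input boundary, keep unicode through markdown and the templates, and encode to UTF-8 only where bytes are actually required. A sketch (parsed_feed is assumed to be the feedparser result from the project's code):

        import markdown

        raw = parsed_feed.entries[0].summary          # assumed feedparser access
        if isinstance(raw, str):                      # bytes -> unicode, using the feed's encoding
            content = raw.decode(parsed_feed.encoding or 'utf-8')
        else:
            content = raw                             # feedparser often hands back unicode already

        html = markdown.markdown(content)             # keep unicode all the way through
        output_bytes = html.encode('utf-8')           # encode once, only where bytes are needed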

    Read the article

  • python list mysteriously getting set to something within my django/piston handler

    - by Anverc
    To start, I'm very new to python, let alone Django and Piston. Anyway, I've created a new BaseHandler class "class BaseApiHandler(BaseHandler)" so that I can extend some of the stff that BaseHandler does. This has been working fine until I added a new filter that could limit results to the first or last result. Now I can refresh the api page over and over and sometimes it will limit the result even if I don't include /limit/whatever in my URL... I've added some debug info into my return value to see what is happening, and that's when it gets more weird. this return value will make more sense after you see the code, but here they are for reference: When the results are correct: "statusmsg": "2 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',}", when the results are incorrect (once you read the code you'll notice two things wrong. First, it doesn't have 'limit':'None', secondly it shouldn't even get this far to begin with. "statusmsg": "1 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',with limit[0,1](limit,None),}", It may be important to note that I'm the only person with access to the server running this right now, so even if it was a cache issue, it doesn't make sense that I can just refresh and get different results by hitting F5 while viewing: http://localhost/api/hours_detail/datestamp/2009-03-02/empid/22 Here's the code broken into urls.py and handlers.py so that you can see what i'm doing: URLS.PY urlpatterns = patterns('', #hours_detail/id/{id}/empid/{empid}/projid/{projid}/datestamp/{datestamp}/daterange/{fromdate}to{todate}/limit/{first|last}/exact #empid is required # id, empid, projid, datestamp, daterange can be in any order url(r'^api/hours_detail/(?:' + \ r'(?:[/]?id/(?P<id>\d+))?' + \ r'(?:[/]?empid/(?P<empid>\d+))?' + \ r'(?:[/]?projid/(?P<projid>\d+))?' + \ r'(?:[/]?datestamp/(?P<datestamp>\d{4,}[-/\.]\d{2,}[-/\.]\d{2,}))?' + \ r'(?:[/]?daterange/(?P<daterange>(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})(?:to|/-)(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})))?' + \ r')+' + \ r'(?:/limit/(?P<limit>(?:first|last)))?' + \ r'(?:/(?P<exact>exact))?$', hours_detail_resource), HANDLERS.PY # inherit from BaseHandler to add the extra functionality i need to process the possibly null URL params class BaseApiHandler(BaseHandler): # keep track of the handler so the data is represented back to me correctly post_name = 'base' # THIS IS THE LIST IN QUESTION - SOMETIMES IT IS GETTING SET TO [0,1] MYSTERIOUSLY # this gets set to a list when the results are to be limited limit = None def has_limit(self): return (isinstance(self.limit, list) and len(self.limit) == 2) def process_kwarg_read(self, key, value, d_post, b_exact): """ this should be overridden in the derived classes to process kwargs """ pass # override 'read' so we can better handle our api's searching capabilities def read(self, request, *args, **kwargs): d_post = {'status':0,'statusmsg':'Nothing Happened'} try: # setup the named response object # select all employees then filter - querysets are lazy in django # the actual query is only done once data is needed, so this may # seem like some memory hog slow beast, but it's actually not. d_post[self.post_name] = self.queryset(request) # this is a string that holds debug information... 
it's the string I mentioned before pasting this code s_query = '' b_exact = False if 'exact' in kwargs and kwargs['exact'] <> None: b_exact = True s_query = '\'exact\':True,' for key,value in kwargs.iteritems(): # the regex url possibilities will push None into the kwargs dictionary # if not specified, so just continue looping through if that's the case if value == None or key == 'exact': continue # write to the s_query string so we have a nice error message s_query = '%s\'%s\':\'%s\',' % (s_query, key, value) # now process this key/value kwarg self.process_kwarg_read(key=key, value=value, d_post=d_post, b_exact=b_exact) # end of the kwargs for loop else: if self.has_limit(): # THIS SEEMS TO GET HIT SOMETIMES IF YOU CONSTANTLY REFRESH THE API PAGE, EVEN THOUGH # THE LINE IN THE FOR LOOP WHICH UPDATES s_query DOESN'T GET HIS AND THUS self.process_kwarg_read ALSO # DOESN'T GET HIT SO NEITHER DOES limit = [0,1] s_query = '%swith limit[%s,%s](limit,%s),' % (s_query, self.limit[0], self.limit[1], kwargs['limit']) d_post[self.post_name] = d_post[self.post_name][self.limit[0]:self.limit[1]] if d_post[self.post_name].count() == 0: d_post['status'] = 0 d_post['statusmsg'] = '%s not found with query: {%s}' % (self.post_name, s_query) else: d_post['status'] = 1 d_post['statusmsg'] = '%s %s found with query: {%s}' % (d_post[self.post_name].count(), self.post_name, s_query) except: e = sys.exc_info()[1] d_post['status'] = 0 d_post['statusmsg'] = 'error: %s' % e d_post[self.post_name] = [] return d_post class HoursDetailHandler(BaseApiHandler): #allowed_methods = ('GET',) model = HoursDetail exclude = () post_name = 'hours_detail' def process_kwarg_read(self, key, value, d_post, b_exact): if ... # I have several if/elif statements here that check for other things... # 'self.limit =' only shows up in the following elif: elif key == 'limit': order_by = 'clock_time' if value == 'last': order_by = '-clock_time' d_post[self.post_name] = d_post[self.post_name].order_by(order_by) # TO GET HERE, THE ONLY PLACE IN CODE WHERE self.limit IS SET, YOU MUST HAVE GONE THROUGH # THE value == None CHECK???? self.limit = [0, 1] else: raise NameError def read(self, request, *args, **kwargs): # empid is required, so make sure it exists before running BaseApiHandler's read method if not('empid' in kwargs and kwargs['empid'] <> None and kwargs['empid'] >= 0): return {'status':0,'statusmsg':'empid cannot be empty'} else: return BaseApiHandler.read(self, request, *args, **kwargs) Does anyone have a clue how else self.limit might be getting set to [0, 1] ? Am I misunderstanding kwargs or loops or anything in Python?
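
    A plausible explanation rather than a certainty: django-piston creates one handler instance and reuses it across requests, and limit starts life as a class attribute, so the self.limit = [0, 1] assigned while serving a /limit/... request is still sitting on the instance when the next plain request arrives, which matches the on-and-off behaviour under F5. Resetting the value at the top of read() (or, better, keeping the limit in a local variable instead of on self) keeps the state request-scoped. A minimal sketch:

        from piston.handler import BaseHandler

        class BaseApiHandler(BaseHandler):
            post_name = 'base'

            def read(self, request, *args, **kwargs):
                self.limit = None   # reset per-request; instance state survives between requests
                d_post = {'status': 0, 'statusmsg': 'Nothing Happened'}
                # ... the rest of the existing read() body goes here unchanged ...
                return d_post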

    Read the article

  • [Python] Detect destination of a shortened, or "tiny", URL

    - by conradlee
    I have just scraped a bunch of Google Buzz data, and I want to know which Buzz posts reference the same news articles. The problem is that many of the links in these posts have been modified by URL shorteners, so it could be the case that many distinct shortened URLs actually all point to the same news article. Given that I have millions of posts, what is the most efficient way (preferably in Python) for me to (1) detect whether a URL is a shortened URL (from any of the many URL shortening services, or at least the largest), and (2) find the "destination" of the shortened URL, i.e., the long, original version of the shortened URL? Does anyone know if the URL shorteners impose strict request rate limits? If I keep this down to 100/second (all coming from the same IP address), do you think I'll run into trouble?
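
    A sketch of both halves: recognising shorteners by hostname against a (necessarily partial) list, and resolving the destination by letting urllib2 follow the redirect chain and asking the response where it actually ended up. The short link below is a placeholder; for millions of posts, switching to HEAD requests avoids downloading article bodies:

        import urllib2
        import urlparse

        SHORTENER_HOSTS = set(['bit.ly', 'goo.gl', 'tinyurl.com', 't.co', 'ow.ly'])  # partial list

        def is_shortened(url):
            return urlparse.urlparse(url).netloc.lower() in SHORTENER_HOSTS

        def unshorten(url):
            """Follow redirects and return the final destination URL."""
            try:
                return urllib2.urlopen(url).geturl()   # urllib2 follows 301/302 chains itself
            except urllib2.URLError:
                return url                             # leave unresolvable links as they are

        link = 'http://bit.ly/example'                 # placeholder
        if is_shortened(link):
            print unshorten(link)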

    Read the article
