Search Results

Search found 13596 results on 544 pages for 'mechanize python'.

  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a tab-delimited csv file in python. I use the following function:

        def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False,
                      missing=None, with_header=True):
            """
            Parse a file name into an array. Return the array and additional
            header lines. By default, parse the header lines into
            dictionaries, assuming the parameters are numeric, using
            'parse_header'.
            """
            f = open(filename, 'r')
            skipped_rows = []
            for n in range(skiprows):
                header_line = f.readline().strip()
                if raw_header:
                    skipped_rows.append(header_line)
                else:
                    skipped_rows.append(parse_header(header_line))
            f.close()
            if missing:
                data = genfromtxt(filename, dtype=None, names=with_header,
                                  deletechars='', skiprows=skiprows,
                                  missing=missing)
            else:
                if delimiter != '\t':
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      delimiter=delimiter, deletechars='',
                                      skiprows=skiprows)
                else:
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      deletechars='', skiprows=skiprows)
            if data.ndim == 0:
                data = array([data.item()])
            return (data, skipped_rows)

    The problem is that genfromtxt complains about my files, e.g. with the error:

        Line #27100 (got 12 columns instead of 16)

    I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem:

        #Gene 120-1 120-3 120-4 30-1 30-3 30-4 C-1 C-2 C-5 genesymbol genedesc
        ENSMUSG00000000001 7.32 9.5 7.76 7.24 11.35 8.83 6.67 11.35 7.12 Gnai3 guanine nucleotide binding protein alpha
        ENSMUSG00000000003 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Pbsn probasin

    Is there a better way to write a generic csv2array function? Thanks.
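
    One detail worth checking, judging from the code above: when delimiter is '\t', the genfromtxt call omits the delimiter argument entirely, so numpy falls back to splitting on any whitespace, and free-text columns like genedesc change the apparent column count from row to row. A minimal sketch of the idea, always passing the delimiter (invalid_raise is assumed to be available in your numpy version; it skips ragged lines instead of aborting):

        import numpy as np

        # Pass the tab delimiter explicitly so embedded spaces in text
        # columns do not split into extra fields.
        data = np.genfromtxt('expression.txt', dtype=None, delimiter='\t',
                             names=True, deletechars='', invalid_raise=False)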

  • Python many-to-one mapping (creating equivalence classes)

    - by Adam Matan
    Hi, I have a project of converting one database to another. One of the original database columns defines the row's category. This column should be mapped to a new category in the new database. For example, let's assume the original categories are: parrot, spam, cheese_shop, Cleese, Gilliam, Palin. Now that's a little verbose for me, and I want to have these rows categorized as sketch, actor. That is, define all the sketches and all the actors as two equivalence classes.

        >>> monty={'parrot':'sketch', 'spam':'sketch', 'cheese_shop':'sketch',
        ...        'Cleese':'actor', 'Gilliam':'actor', 'Palin':'actor'}
        >>> monty
        {'Gilliam': 'actor', 'Cleese': 'actor', 'parrot': 'sketch', 'spam': 'sketch', 'Palin': 'actor', 'cheese_shop': 'sketch'}

    That's quite awkward. I would prefer having something like:

        monty={ ('parrot','spam','cheese_shop'): 'sketch',
                ('Cleese', 'Gilliam', 'Palin') : 'actors'}

    But this, of course, sets the entire tuple as a key:

        >>> monty['parrot']
        Traceback (most recent call last):
          File "<pyshell#29>", line 1, in <module>
            monty['parrot']
        KeyError: 'parrot'

    Any ideas how to create an elegant many-to-one dictionary in Python? Thanks, Adam
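
    A minimal sketch of one common approach: keep the readable tuple-keyed dict as the source of truth and expand it once into a flat lookup table.

        groups = {('parrot', 'spam', 'cheese_shop'): 'sketch',
                  ('Cleese', 'Gilliam', 'Palin'): 'actor'}

        # Flatten the tuple keys into one entry per member.
        monty = {}
        for members, category in groups.items():
            for name in members:
                monty[name] = category

        print monty['parrot']    # sketch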

  • Python access an object byref / Need tagging

    - by Aaron C. de Bruyn
    I need to suck data from stdin and create an object. The incoming data is between 5 and 10 lines long. Each line has a process number and either an IP address or a hash. For example:

        pid=123 ip=192.168.0.1 - some data
        pid=123 hash=ABCDEF0123 - more data
        hash=ABCDEF123 - More data
        ip=192.168.0.1 - even more data

    I need to put this data into a class like:

        class MyData():
            pid = None
            hash = None
            ip = None
            lines = []

    I need to be able to look up the object by IP, HASH, or PID. The tough part is that there are multiple streams of data intermixed coming from stdin. (There could be hundreds or thousands of processes writing data at the same time.) I have regular expressions pulling out the PID, IP, and HASH that I need, but how can I access the object by any of those values? My thought was to do something like this:

        myarray = {}
        for line in sys.stdin.readlines():
            if pid and ip:  # If we can get a PID out of the line
                # Create a new MyData object, assign the PID, and stick it
                # in myarray, accessible by PID.
                myarray[pid] = MyData().pid = pid
                myarray[pid].ip = ip              # Add the IP address to the new object
                myarray[pid].lines.append(data)   # Append the data
                myarray[ip] = myarray[pid]        # Key the same object by IP as well.
            # <snip>do something similar for pid and hash, hash and ip, etc...</snip>

    This gives me a dict with two keys (a PID and an IP) that both point to the same object. But on the next iteration of the loop, if I find (for example) an IP and HASH and do:

        myarray[hash] = myarray[ip]

    the following is False:

        myarray[hash] == myarray[ip]

    Hopefully that was clear. I hate to admit that waaay back in the VB days, I remember being able to handle objects byref instead of byval. Is there something similar in Python? Or am I just approaching this wrong?
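
    Python always binds objects by reference, so a plain dict already gives byref behaviour; the likely trap in the code above is that myarray[pid] = MyData().pid = pid is a chained assignment, which stores the pid value itself under the key rather than the new MyData instance. A minimal sketch of the intended pattern (register is a hypothetical helper; field names follow the class above):

        registry = {}

        def register(pid=None, ip=None, hash_=None, data=None):
            # Reuse the record if any of the keys is already known.
            obj = registry.get(pid) or registry.get(ip) or registry.get(hash_)
            if obj is None:
                obj = MyData()
                obj.lines = []   # per-instance list; the class-level [] would be shared
            for key in (pid, ip, hash_):
                if key is not None:
                    registry[key] = obj   # every key maps to the same object
            if data is not None:
                obj.lines.append(data)
            return obj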

  • Python socket error on UDP data receive. (10054)

    - by Charles
    I currently have a problem using UDP and Python's socket module. We have a server and clients. The problem occurs when we send data to a user. It's possible that the user may have closed their connection to the server through a client crash, disconnect by ISP, or some other improper method, so it is possible to send data to a closed socket. Of course with UDP you can't tell if the data really arrived or if the destination is closed, as UDP doesn't care (at least, it doesn't raise an exception on send). However, if you send data to a closed destination, you somehow get data back (???), which ends up giving you a socket error on sock.recvfrom:

        [Errno 10054] An existing connection was forcibly closed by the remote host.

    It almost seems like an automatic response from the connection. This is fine, and can be handled by a try/except block (even if it lowers the performance of the server a little bit). The problem is that I can't tell who this is coming from or which socket is closed. Is there any way to find out 'who' (ip, socket #) sent this? It would be great, as I could instantly just disconnect them and remove them from the data. Any suggestions? Thanks.
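
    What comes back is an ICMP "port unreachable" triggered by an earlier sendto; Windows latches it onto the socket and reports it on the next recvfrom as WSAECONNRESET, by which time the offending address is no longer attached to the exception, so the peer cannot be identified from the error itself. A sketch of the usual coping pattern (the timeout-based pruning is an assumption about your server loop):

        import socket
        import time

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(('', 9999))
        last_seen = {}   # addr -> timestamp of the last packet received

        while True:
            try:
                data, addr = sock.recvfrom(4096)
            except socket.error as e:
                if e.args[0] == 10054:   # WSAECONNRESET from an earlier send
                    continue             # no peer info available; ignore it
                raise
            last_seen[addr] = time.time()
            # ... handle data; periodically drop addrs silent for too long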

  • Figure out if element is present in multi-dimensional array in python

    - by Terje
    I am parsing a log containing nicknames and hostnames. I want to end up with an array that contains the hostname and the latest used nickname. I have the following code, which only creates a list of the hostnames:

        hostnames = []
        # while(parsing):
        #     nick = nick_on_current_line
        #     host = host_on_current_line
        if host in hostnames:
            # Hostname is already present.
            pass
        else:
            # Hostname is not present.
            hostnames.append(host)

        print hostnames
        # ['[email protected]', '[email protected]', '[email protected]']

    I thought it would be nice to end up with something along the lines of the following:

        # [['[email protected]', 'John'], ['[email protected]', 'Mary'], ['[email protected]', 'Joe']]

    My problem is finding out if the hostname is present in such a list:

        hostnames = []
        # while(parsing):
        #     nick = nick_on_current_line
        #     host = host_on_current_line
        if host in hostnames[0]:   # This doesn't work.
            # Hostname is already present.
            # Somehow check if the nick stored together
            # with the hostname is the latest one.
        else:
            # Hostname is not present.
            hostnames.append([host, nick])

    Is there an easy fix to this, or should I try a different approach? I could always have an array with objects or structs (if there is such a thing in Python), but I would prefer a solution to my array problem.
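
    A dict keyed on hostname avoids scanning the list altogether and makes "keep the latest nick" a plain assignment; a minimal sketch:

        latest_nick = {}   # host -> most recently seen nick

        # while(parsing):
        #     nick = nick_on_current_line
        #     host = host_on_current_line
        latest_nick[host] = nick   # later lines simply overwrite older nicks

        # If the [[host, nick], ...] shape is still needed afterwards:
        hostnames = [[host, nick] for host, nick in latest_nick.items()]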

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, my current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want it to be improved, knowing that I will never get numbers bigger than 1 billion. The context is that I can't get an implementation that is quick enough for solving problem 60 of Project Euler: I'm getting the answer to the problem in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60 I have very little memory at my disposal, so I can't store all the prime numbers below 1 billion. I'm currently using the standard trial division tuned with 6k±1. Is there anything better than this? Do I already need to use the Rabin-Miller method for numbers that are this large?

        primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
                            43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

        def isprime(n):
            if n <= 100:
                return n in primes_under_100
            if n % 2 == 0 or n % 3 == 0:
                return False
            for f in range(5, int(n ** .5), 6):
                if n % f == 0 or n % (f + 2) == 0:
                    return False
            return True

    How can I improve this algorithm?
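
    For this range, Miller-Rabin with a fixed base set is deterministic, so it gives exact answers with no stored primes: bases 2, 7 and 61 are known to cover every n below 4,759,123,141, comfortably past 1 billion. A sketch in pure Python:

        def is_prime(n):
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 61):
                if n % p == 0:
                    return n == p
            # write n - 1 as d * 2**s with d odd
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for a in (2, 7, 61):        # deterministic for n < 4,759,123,141
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = x * x % n
                    if x == n - 1:
                        break
                else:
                    return False        # a is a witness: n is composite
            return True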

  • Python multiprocessing doesn't play nicely with uuid.uuid4().

    - by yig
    I'm trying to generate a uuid for a filename, and I'm also using the multiprocessing module. Unpleasantly, all of my uuids end up exactly the same. Here is a small example:

        import multiprocessing
        import uuid

        def get_uuid( a ):
            ## Doesn't help to cycle through a bunch.
            #for i in xrange(10): uuid.uuid4()

            ## Doesn't help to reload the module.
            #reload( uuid )

            ## Doesn't help to load it at the last minute.
            ## (I simultaneously comment out the module-level import.)
            #import uuid

            ## uuid1() does work, but it differs only in the first 8 characters
            ## and includes identifying information about the computer.
            #return uuid.uuid1()

            return uuid.uuid4()

        def main():
            pool = multiprocessing.Pool( 20 )
            uuids = pool.map( get_uuid, range( 20 ) )
            for id in uuids:
                print id

        if __name__ == '__main__':
            main()

    I peeked into uuid.py's code, and it seems to use, depending on the platform, some OS-level routines for randomness, so I'm stumped as to a Python-level solution (to do something like reload the uuid module or choose a new random seed). I could use uuid.uuid1(), but only 8 digits differ and I think those are derived exclusively from the time, which seems dangerous, especially given that I'm multiprocessing (so the code could be executing at exactly the same time). Is there some Wisdom out there about this issue?
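
    One hedged explanation: with fork-based multiprocessing, every worker inherits the parent's random state, so any uuid4 fallback that draws from the random module (rather than os.urandom) yields identical sequences in every child. Reseeding each worker via the pool initializer is a cheap guard either way; the initializer argument is standard multiprocessing.Pool API:

        import multiprocessing
        import random
        import uuid

        def reseed():
            random.seed()   # reseeds from OS entropy, once per worker process

        def get_uuid(_):
            return uuid.uuid4()

        if __name__ == '__main__':
            pool = multiprocessing.Pool(20, initializer=reseed)
            for uid in pool.map(get_uuid, range(20)):
                print uid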

  • Managing Instances in Python

    - by BeensTheGreat
    Hello, I am new to Python and this is my first time asking a StackOverflow question, but I am a long time reader. I am working on a simple card based game but am having trouble managing instances of my Hand class. If you look below you can see that the Hand class is a simple container for cards (which are just int values) and each Player class contains a Hand class. However, whenever I create multiple instances of my Player class they all seem to manipulate a single instance of the Hand class. From my experience in C and Java it seems that I am somehow making my Hand class static. If anyone could help with this problem I would appreciate it greatly. Thank you, Thad

    To clarify, an example of this situation would be:

        p = player.Player()
        p1 = player.Player()
        p.recieveCard(15)
        p1.recieveCard(21)
        p.viewHand()

    which would result in [15, 21] even though only one card was added to p.

    Hand class:

        class Hand:
            index = 0
            cards = []   # Collection of cards

            # Constructor
            def __init__(self):
                self.index
                self.cards

            def addCard(self, card):
                """Adds a card to current hand"""
                self.cards.append(card)
                return card

            def discardCard(self, card):
                """Discards a card from current hand"""
                self.cards.remove(card)
                return card

            def viewCards(self):
                """Returns a collection of cards"""
                return self.cards

            def fold(self):
                """Folds the current hand"""
                temp = self.cards
                self.cards = []
                return temp

    Player class:

        import hand

        class Player:
            name = ""
            position = 0
            chips = 0
            dealer = 0
            pHand = []

            def __init__ (self, nm, pos, buyIn, deal):
                self.name = nm
                self.position = pos
                self.chips = buyIn
                self.dealer = deal
                self.pHand = hand.Hand()
                return

            def recieveCard(self, card):
                """Recieve card from the dealer"""
                self.pHand.addCard(card)
                return card

            def discardCard(self, card):
                """Throw away a card"""
                self.pHand.discardCard(card)
                return card

            def viewHand(self):
                """View the players hand"""
                return self.pHand.viewCards()

            def getChips(self):
                """Get the number of chips the player currently holds"""
                return self.chips

            def setChips(self, chip):
                """Sets the number of chips the player holds"""
                self.chips = chip
                return

            def makeDealer(self):
                """Makes this player the dealer"""
                self.dealer = 1
                return

            def notDealer(self):
                """Makes this player not the dealer"""
                self.dealer = 0
                return

            def isDealer(self):
                """Returns flag wether this player is the dealer"""
                return self.dealer

            def getPosition(self):
                """Returns position of the player"""
                return self.position

            def getName(self):
                """Returns name of the player"""
                return self.name
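
    The shared state comes from cards = [] being a class attribute: every Hand appends to the single list created when the class body ran, which is exactly the "static" behaviour described. Binding a fresh list inside __init__ gives each instance its own storage; a minimal sketch of the corrected constructor:

        class Hand:
            def __init__(self):
                self.index = 0
                self.cards = []   # a new list per instance, not shared across Hands

            def addCard(self, card):
                """Adds a card to the current hand"""
                self.cards.append(card)
                return card

    The same applies to pHand = [] in Player, though that one is immediately rebound to a Hand in __init__, so only the Hand list actually bites here.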

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was because I wanted to take that plain text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster. My question now is: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64

        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
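
    For (1), the main portability concern is byte order: tostring() emits the array's in-memory layout, so pinning an explicit little-endian dtype on both ends removes the ambiguity. A sketch of both directions in Python (the decoding line assumes the receiver also has numpy; in another language you would base64-decode and reinterpret the bytes as little-endian doubles):

        import numpy as np
        import base64

        x = np.arange(100, dtype='<f8')           # '<f8' = little-endian float64
        payload = base64.b64encode(x.tostring())  # bytes are now platform-independent

        # client side, after transport:
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype='<f8')       # same values on any platform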

  • Confused as to use a class or a function: Writing XML files using lxml and Python

    - by PulpFiction
    Hi. I need to write XML files using lxml and Python. However, I can't figure out whether to use a class to do this or a function. The point being, this is the first time I am developing proper software, and deciding where and why to use a class still seems mysterious. I will illustrate my point. For example, consider the following function based code I wrote for adding a subelement to an etree root:

        from lxml import etree

        root = etree.Element('document')

        def createSubElement(text, tagText = ""):
            etree.SubElement(root, text)
            # How do I do this: element.text = tagText

        createSubElement('firstChild')
        createSubElement('SecondChild')

    As expected, the output of this is:

        <document>
          <firstChild/>
          <SecondChild/>
        </document>

    However, as you can see from the comment, I have no idea how to set the text variable using this approach. Is using a class the only way to solve this? And if yes, can you give me some pointers on how to achieve this?
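
    etree.SubElement returns the element it creates, so the text attribute is reachable without any class machinery; a minimal sketch of the function-based version:

        from lxml import etree

        root = etree.Element('document')

        def create_sub_element(tag, tag_text=""):
            element = etree.SubElement(root, tag)   # SubElement returns the new node
            if tag_text:
                element.text = tag_text
            return element

        create_sub_element('firstChild', 'hello')
        create_sub_element('SecondChild')
        print etree.tostring(root, pretty_print=True)

    Whether to wrap this in a class is then a separate question: a class starts to earn its keep once root stops being a module-level global (for instance, one builder object per document).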

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi. The confusing thing is, which database to pick?

    The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly-read-only clients that are accessing the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.)

    My understanding, which I'd like corrected/advised, is that:

    - Postgres is a great DB, but almost all the Python bindings (and there is a confusing maze of them) are abandonware.
    - There is psycopg2, but that makes a lot of noise about doing its own connection-pooling and things; does this co-exist gracefully/usefully/transparently with the Twisted async database connection pooling and such?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage.
    - MySQL: after the Oracle takeover, who'd want to adopt it now, or adopt a fork?

    Is there anything else out there?
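
    For reference, psycopg2 does coexist with Twisted's pooling: adbapi.ConnectionPool manages the pool itself and only needs the DB-API module's name, treating psycopg2 as a plain driver. A minimal sketch (connection parameters are placeholders):

        from twisted.enterprise import adbapi

        # Twisted owns the pool; psycopg2 is used as an ordinary DB-API module.
        dbpool = adbapi.ConnectionPool('psycopg2',
                                       host='localhost', database='mydb',
                                       user='me', password='secret',
                                       cp_min=3, cp_max=10)

        d = dbpool.runQuery('SELECT count(*) FROM items')
        d.addCallback(lambda rows: rows[0][0])   # fires with the count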

  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next url. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C
        Interrupted; stopping...    // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
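
    Python 2.6 exposes signal.siginterrupt(), which asks the OS to restart system calls interrupted by that signal instead of failing them with EINTR; combined with the flag-setting handler, recv() keeps blocking across the interrupt. A sketch (the handler body is assumed from the description above):

        import signal

        stop_requested = False

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True
            print 'Interrupted; stopping...'

        signal.signal(signal.SIGINT, handle_sigint)
        signal.siginterrupt(signal.SIGINT, False)   # restart syscalls (SA_RESTART)

    If some call still surfaces EINTR, the traditional fallback is a retry loop that re-issues recv() when the caught socket.error has errno.EINTR.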

  • Intellisense on custom types in Iron Python

    - by Anish Patel
    Hi everybody, I'm just starting to play around with IronPython and am having a hard time using it with custom types created in C#. I can get IronPython to load in assemblies from C# classes, but I'm struggling without the help of IntelliSense. If I have a class in C# as defined below, how can I make it so that IronPython will be able to see the methods/properties that are available in it?

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public double Weight { get; set; }
            public double Height { get; set; }

            public double CalculateBMI()
            {
                return Weight/Math.Pow(Height, 2);
            }
        }

    In IronPython I'd instance a Person object as follows:

        newPerson = Person()
        newPerson.Name = 'John'
        newPerson.Age = 25
        newPerson.Weight = 75
        newPerson.Height = 1.70
        newPerson.CalculateBMI()

    The thing that is annoying me is that I want to be able to say newPerson = Person() and then be able to see all the methods and properties associated with the Person object whenever I type newPerson. Does anyone have any ideas if this can be done?

  • python mongokit Connection() AssertionError

    - by zalew
    I just installed mongokit and can't figure out why I get an AssertionError.

    Python console:

        >>> from mongokit import Connection
        >>> c = Connection()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/mongokit-0.5.3-py2.6.egg/mongokit/connection.py", line 35, in __init__
            super(Connection, self).__init__(*args, **kwargs)
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 169, in __init__
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 338, in __find_master
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 226, in __master
          File "build/bdist.linux-i686/egg/pymongo/database.py", line 220, in command
          File "build/bdist.linux-i686/egg/pymongo/collection.py", line 356, in find_one
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 485, in next
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 461, in _refresh
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 429, in __send_message
          File "build/bdist.linux-i686/egg/pymongo/helpers.py", line 98, in _unpack_response
        AssertionError

    mongodb console:

        Wed Mar 31 10:27:34 connection accepted from 127.0.0.1:60480 #30
        Wed Mar 31 10:27:34 end connection 127.0.0.1:60480

    Versions: db 1.5, pymongo 1.5 (tested also on 1.4), mongokit 0.5.3 (also 0.5.2).

  • Multiple Unpacking Assignment in Python when you don't know the sequence length

    - by doug
    The textbook examples of multiple unpacking assignment are something like:

        import numpy as NP

        M = NP.arange(5)
        a, b, c, d, e = M
        # so of course, a = 0, b = 1, etc.

        M = NP.arange(20).reshape(5, 4)   # numpy 5x4 array
        a, b, c, d, e = M
        # here, a = M[0,:], b = M[1,:], etc.
        # (i.e., a single row of M is assigned each to a through e)

    (My question is not numpy specific; indeed, I would prefer a pure Python solution.) W/r/t the piece of code I'm looking at now, I see two complications on that straightforward scenario:

    - I usually won't know the shape of M; and
    - I want to unpack a certain number of items (definitely less than all items) and put the remainder into a single container.

    So back to the 5x4 array above: what I would very much like to be able to do is, for instance, assign the first three rows of M to a, b, and c respectively (exactly as above), and the rest of the rows (I have no idea how many there will be, just some positive integer) to a single container, all_the_rest = []. I'm not sure if I have explained this clearly; in any event, if I get feedback I'll promptly edit my question.
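
    Since extended unpacking (a, b, *rest) only arrived in Python 3, a small helper covers the 2.x case without knowing len(M) up front; it works on any iterable, numpy arrays included. A sketch:

        def unpack(seq, n):
            """Return the first n items plus a list holding the remainder."""
            it = iter(seq)
            head = [next(it) for _ in range(n)]
            return head + [list(it)]

        a, b, c, all_the_rest = unpack(M, 3)   # all_the_rest holds rows 3..end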

  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere around 256^32 subdirectories with 256 files in each end, and each file is only a few bytes long. I did this so I would have fast access to these files (since I'm not searching; I'm just directly accessing them via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total filesize, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files. Performing this function 1000 times sometimes takes a minute now, but before (when there were only a few files), 1000 calls completed in less than a second. How do you suggest I fix this?
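
    Each addInFile call costs two whole-file copies plus three opens; since the files are a few bytes each, reading and rewriting in place halves the I/O and skips /tmp entirely. A sketch of the same operation (assuming, as above, each file holds a single integer string):

        def add_in_file(path, score_to_add):
            num = score_to_add
            try:
                fp = open(path, 'r')
                num += int(fp.read())    # the file holds one integer string
                fp.close()
            except (IOError, ValueError):
                pass                     # missing or empty file: start from score_to_add
            fp = open(path, 'w')
            fp.write(str(num))
            fp.close()

    Beyond that, the pattern (fine at first, bad past ~1 GB) usually points at the filesystem's handling of huge directory trees, so fewer, larger stores (for example one dbm or sqlite file keyed by the path) tend to win by a large margin.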

  • Python IOError: Not a gzipped file (Gzip and Blowfish Encrypt/Compress)

    - by notbad.jpeg
    I'm having some problems with Python's built-in gzip library. I've looked through almost every other Stack question about it, and none of them seem to work. My problem is that when I try to decompress, I get an IOError:

        Traceback (most recent call last):
          File "mymodule.py", line 61, in <module>
            return gz.read()
          File "/usr/lib/python2.7/gzip.py", line 245, in read
            self._read(readsize)
          File "/usr/lib/python2.7/gzip.py", line 287, in _read
            self._read_gzip_header()
          File "/usr/lib/python2.7/gzip.py", line 181, in _read_gzip_header
            raise IOError, 'Not a gzipped file'
        IOError: Not a gzipped file

    This is my code to send it over SMB. It might not make sense why I do things this way, but it's normally in a while loop and memory efficient; I just simplified it:

        buffer = cStringIO.StringIO(output)   # output is from a subprocess call
        small_buffer = cStringIO.StringIO()
        small_string = buffer.read()          # need a string to write to buffer
        gzip_obj = gzip.GzipFile(fileobj=small_buffer, compresslevel=6, mode='wb')
        gzip_obj.write(small_string)
        compressed_str = small_buffer.getvalue()

        blowfish = Blowfish.new('abcd', Blowfish.MODE_ECB)
        remainder = '|' * (8 - (len(compressed_str) % 8))
        compressed_str += remainder
        encrypted = blowfish.encrypt(compressed_str)
        # I send it over smb, then retrieve it later

    Then this is the code that retrieves it:

        # buffer is a cStringIO object filled with data from smb retrieval
        decrypter = Blowfish.new('abcd', Blowfish.MODE_ECB)
        value = buffer.getvalue()
        decrypted = decrypter.decrypt(value)
        buff = cStringIO.StringIO(decrypted)
        buff.seek(0)
        gz = gzip.GzipFile(fileobj=buff)
        return gz.read()   # here's the problem
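
    Two things in the sending code would produce exactly this symptom: getvalue() is read before the GzipFile is closed, so the buffer is missing data and the gzip trailer that only get flushed on close, and the '|' padding added for Blowfish is still glued to the end after decryption. A sketch of both fixes (cipher setup as above; note rstrip can in principle eat legitimate trailing '|' bytes of compressed data, so a count-based pad like PKCS#5 would be more robust):

        # sender side: close the gzip stream before reading the buffer
        gzip_obj.write(small_string)
        gzip_obj.close()                        # flushes data + gzip trailer
        compressed_str = small_buffer.getvalue()

        compressed_str += '|' * (8 - (len(compressed_str) % 8))
        encrypted = blowfish.encrypt(compressed_str)

        # receiver side: strip the padding before handing bytes to gzip
        decrypted = decrypter.decrypt(value)
        decrypted = decrypted.rstrip('|')       # remove the Blowfish padding
        buff = cStringIO.StringIO(decrypted)
        gz = gzip.GzipFile(fileobj=buff)
        data = gz.read()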

  • Python script to calculate added combinations from a dictionary

    - by dayde
    I am trying to write a script that will take a dictionary of items, each containing properties with values from 0 to 10, and add the various elements to select which combinations of items achieve the desired totals. I also need the script to do this using only one item per "slot". For example:

        item_list = {
            'item_1': {'slot': 'top', 'prop_a': 2, 'prop_b': 0, 'prop_c': 2, 'prop_d': 1 },
            'item_2': {'slot': 'top', 'prop_a': 5, 'prop_b': 0, 'prop_c': 1, 'prop_d':-1 },
            'item_3': {'slot': 'top', 'prop_a': 2, 'prop_b': 5, 'prop_c': 2, 'prop_d':-2 },
            'item_4': {'slot': 'mid', 'prop_a': 5, 'prop_b': 5, 'prop_c':-5, 'prop_d': 0 },
            'item_5': {'slot': 'mid', 'prop_a':10, 'prop_b': 0, 'prop_c':-5, 'prop_d': 0 },
            'item_6': {'slot': 'mid', 'prop_a':-5, 'prop_b': 2, 'prop_c': 3, 'prop_d': 5 },
            'item_7': {'slot': 'bot', 'prop_a': 1, 'prop_b': 3, 'prop_c':-4, 'prop_d': 4 },
            'item_8': {'slot': 'bot', 'prop_a': 2, 'prop_b': 2, 'prop_c': 0, 'prop_d': 0 },
            'item_9': {'slot': 'bot', 'prop_a': 3, 'prop_b': 1, 'prop_c': 4, 'prop_d':-4 },
        }

    The script would then need to select which combinations from the item_list dict, using one item per slot, achieve a desired result when added. For example, if the desired result was prop_a: 3, prop_b: 3, prop_c: 8, prop_d: 0, the script would select 'item_2', 'item_6', and 'item_9' (along with any other combination that worked):

        'item_2': {'slot': 'top', 'prop_a': 5, 'prop_b': 0, 'prop_c': 1, 'prop_d':-1 }
        'item_6': {'slot': 'mid', 'prop_a':-5, 'prop_b': 2, 'prop_c': 3, 'prop_d': 5 }
        'item_9': {'slot': 'bot', 'prop_a': 3, 'prop_b': 1, 'prop_c': 4, 'prop_d':-4 }
        'total':                  'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0

    Any ideas how to accomplish this? It does not need to be in Python, or even a thorough script; just an explanation of how to do this in theory would be enough for me. I have tried working out looping through every combination, but that seems to very quickly get out of hand and unmanageable. The actual script will need to do this for about 1,000 items using 20 different "slots", each with 8 properties. Thanks for the help!
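
    itertools.product over the per-slot item lists enumerates exactly the one-item-per-slot combinations; it will not scale to 1,000 items over 20 slots without pruning, but as a starting sketch (names follow the structure above):

        import itertools

        def find_combos(item_list, target):
            # group item names by slot
            slots = {}
            for name, props in item_list.items():
                slots.setdefault(props['slot'], []).append(name)

            keys = list(target)   # e.g. ['prop_a', 'prop_b', ...]
            for combo in itertools.product(*slots.values()):
                totals = dict((k, sum(item_list[n][k] for n in combo)) for k in keys)
                if totals == target:
                    yield combo

        target = {'prop_a': 3, 'prop_b': 3, 'prop_c': 8, 'prop_d': 0}
        for combo in find_combos(item_list, target):
            print combo   # e.g. ('item_2', 'item_6', 'item_9')

    For the full-sized problem this brute force explodes, so meet-in-the-middle (precompute the sums for half the slots, then look each complement up in a dict) is the usual way to cut it down.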

  • Finding specific words in a file (Python language)

    - by Caroline Yi
    I have to write a program in Python where the user is given a menu with four different "word games". There is a file called dictionary.txt, and one of the games requires the user to input a) the number of letters in a word and b) a letter to exclude from the words being searched in the dictionary (dictionary.txt has the whole dictionary). Then the program prints the words that meet the user's requirements. My question is: how on earth do I open the file and search for words of a certain length in it? I only have a basic code which only asks the user for inputs. I'm very new at this, please help :( This is what I have up to the first option. The others are fine and I know how to break the loop, but this specific one is really giving me trouble. I have tried everything and I just keep getting errors. Honestly, I only took this class because someone said it would be fun. It is, but recently I've really been falling behind and I have no idea what to do now. This is an intro level course, so please be nice, I've never done this before until now :(

        print
        print "Choose Which Game You Want to Play"
        print "a) Find words with only one vowel and excluding a specific letter."
        print "b) Find words containing all but one of a set of letters."
        print "c) Find words containing a specific character string."
        print "d) Find words containing state abbreviations."
        print "e) Find US state capitals that start with months."
        print "q) Quit."
        print

        choice = raw_input("Enter a choice: ")
        choice = choice.lower()
        print choice

        while choice != "q":
            if choice == "a":
                # wordlen = word length user is looking for
                wordlen = raw_input("Please enter the word length you are looking for: ")
                wordlen = int(wordlen)
                print wordlen
                # letterex = letter user wishes to exclude
                letterex = raw_input("Please enter the letter you'd like to exclude: ")
                letterex = letterex.lower()
                print letterex
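
    For the "length n, excluding letter x" game, the whole search is a single pass over the file. A sketch, assuming dictionary.txt holds one word per line:

        matches = []
        f = open('dictionary.txt', 'r')
        for line in f:
            word = line.strip().lower()
            if len(word) == wordlen and letterex not in word:
                matches.append(word)
        f.close()

        print matches

    Note that wordlen arrives from raw_input as a string, so the int() conversion already in the code above matters before the len() comparison.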

  • Python and a "time value of money" problem.

    - by jamieb
    (I asked this question earlier today, but I did a poor job of explaining myself. Let me try again.)

    I have a client who is an industrial maintenance company. They sell service agreements that are prepaid 20-hour blocks of a technician's time. Some of their larger customers might burn through that agreement in two weeks, while customers with fewer problems might go eight months on that same contract. I would like to use Python to help model projected sales revenue and determine how many billable hours per month they'll be on the hook for.

    If each customer only ever bought a single service contract (never renewed), it would be easy to figure sales: monthly_revenue = contract_value * qty_contracts_sold. Billable hours would also be easy: billable_hrs = hrs_per_contract * qty_contracts_sold. However, how do I account for renewals? Assuming that 90% (or some other arbitrary amount) of customers renew, their monthly revenue ought to grow geometrically. Another important variable is how long the average customer takes to burn through a contract. How do I determine what the revenue and billable hours will be 3, 6, or 12 months from now, based on various renewal and burn rates? I assume that I'd use some type of recursive function, but math was never one of my strong points. Any suggestions, please?

    Edit: I'm thinking that the best way to approach this is to think of it as a "time value of money" problem. I've retitled the question as such.
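
    Treating each month as one simulation step makes the recursion unnecessary: carry a count of active contracts forward, retire a fraction each month based on the burn rate, and renew a fraction of those retired. A sketch under stated assumptions (contracts burn uniformly, so 1/burn_months of the active pool finishes each month; all parameter values below are placeholders):

        def project(months, new_per_month, renew_rate=0.9,
                    burn_months=4.0, contract_value=2000.0, hours=20.0):
            active = 0.0
            for m in range(1, months + 1):
                finished = active / burn_months     # contracts used up this month
                renewals = finished * renew_rate    # 90% of finishers buy again
                sold = new_per_month + renewals
                active += sold - finished
                revenue = sold * contract_value
                billable = active * hours / burn_months
                print 'month %2d: revenue %8.0f, billable hrs %6.0f' % (
                    m, revenue, billable)

        project(12, new_per_month=10)

    Varying renew_rate and burn_months then shows directly how sensitive month 3, 6, and 12 are to each assumption.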

  • Extracting a .app from a zip file in Python, using ZipFile

    - by Yakattak
    I'm trying to extract new revisions of Chromium.app from their snapshots, and I can download the file fine, but when it comes to extracting it, ZipFile either extracts the chrome-mac folder within as a file, says that directories don't exist, etc. I am very new to Python, so these errors make little sense to me. Here is what I have so far:

        import urllib2

        response = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/LATEST')
        latestRev = response.read()
        print latestRev

        # we have the revision, now we need to download the zip and extract it
        latestZip = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/%i/chrome-mac.zip' % (int(latestRev)),
                                    '~/Desktop/ChromiumUpdate/%i-update' % (int(latestRev)))

        # declare some vars that hold paths n shit
        workingDir = '/Users/slehan/Desktop/ChromiumUpdate/'
        chromiumZipPath = '%s%i-update.zip' % (workingDir, (int(latestRev)))
        chromiumAppPath = 'chrome-mac/'   # path of the chromium executable within the zip file
        chromiumAppExtracted = '%s/Chromium.app' % (workingDir)   # path of the extracted executable

        output = open(chromiumZipPath, 'w')   # delete any current file there
        output.write(latestZip.read())
        output.close()

        # we have the .zip, now we need to extract the Chromium.app file;
        # it's in ziproot/chrome-mac/Chromium.app
        import zipfile, os

        zippedFile = open(chromiumZipPath)
        zippedChromium = zipfile.ZipFile(zippedFile, 'r')
        zippedChromium.extract(chromiumAppPath, workingDir)
        #print zippedChromium.namelist()
        zippedChromium.close()

    Any ideas?
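
    ZipFile.extract() on a bare directory name only creates that directory; pulling out everything under chrome-mac/ needs a loop over namelist() (or extractall()). A sketch, reusing the paths defined above:

        import zipfile

        zipped_file = open(chromiumZipPath, 'rb')   # binary mode matters for zips
        zf = zipfile.ZipFile(zipped_file, 'r')
        for member in zf.namelist():
            if member.startswith('chrome-mac/'):
                zf.extract(member, workingDir)      # recreates the folder tree
        zf.close()

    extract()/extractall() require Python 2.6 or later. Two caveats: the zip was written with open(path, 'w'), where 'wb' is the safe choice for binary data, and the zipfile module does not restore Unix permission bits, so the extracted Chromium.app may need its executable flags reset before it will launch.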

  • Need help running Python app as service in Ubuntu with Upstart

    - by GreeenGuru
    I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job:

        # /etc/init/greeenlog.conf
        # greeenlog

        description "I log stuff."

        start on startup
        stop on shutdown

        script
            exec /usr/local/greeenlog/main.pyw
        end script

    My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting, and running start greeenlog returns:

        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61"
        (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job"
        member="Start" error name="(unset)" requested_reply=0
        destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))

    Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.
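
    Two separate things appear to be going on: the "Rejected send message" reply is D-Bus refusing a job-control request from a non-root user (uid=1000 in the output), so the manual test needs sudo start greeenlog; and "start on startup" fires very early, before local filesystems and networking are up, which can kill a script that touches either. A hedged revision of the job file (event names follow common Ubuntu Upstart recipes; adjust for your release, and use expect fork only if the process daemonizes itself):

        # /etc/init/greeenlog.conf
        description "I log stuff."

        start on (local-filesystems and net-device-up IFACE!=lo)
        stop on runlevel [016]

        respawn
        exec /usr/bin/python /usr/local/greeenlog/main.pyw

    Invoking the interpreter explicitly also removes any dependence on the script's shebang line and execute bit.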

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected Python scripts and runs them in a cmd window next to the GUI window. I am able to get my launcher to work on my (Windows XP 32-bit) laptop, but when I upload it to the server (64-bit Windows, IIS 7) I am running into some issues. The script runs, to my knowledge, but spits back no information into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together, and I am fairly certain it is a very bad example of the subprocess module; I'm just wondering if I could get a hand :). My question is: how do I have to alter my code to work on a 64-bit Windows server?

        from Tkinter import *
        import pickle, subprocess, errno, time, sys, os

        PIPE = subprocess.PIPE

        if subprocess.mswindows:
            from win32file import ReadFile, WriteFile
            from win32pipe import PeekNamedPipe
            import msvcrt
        else:
            import select
            import fcntl

        def recv_some(p, t=.1, e=1, tr=5, stderr=0):
            if tr < 1:
                tr = 1
            x = time.time() + t
            y = []
            r = ''
            pr = p.recv
            if stderr:
                pr = p.recv_err
            while time.time() < x or r:
                r = pr()
                if r is None:
                    if e:
                        raise Exception(message)
                    else:
                        break
                elif r:
                    y.append(r)
                else:
                    time.sleep(max((x - time.time()) / tr, 0))
            return ''.join(y)

        def send_all(p, data):
            while len(data):
                sent = p.send(data)
                if sent is None:
                    raise Exception(message)
                data = buffer(data, sent)

    The code above isn't mine.

        def Run():
            print filebox.get(0)
            location = filebox.get(0)
            location = location.__str__().replace(listbox.get(ANCHOR).__str__(), "")
            theTime = time.asctime(time.localtime(time.time()))
            lastbox.delete(0, END)
            lastbox.insert(END, theTime)
            for line in CookieCont:
                if listbox.get(ANCHOR) in line and len(line) > 4:
                    line[4] = theTime
                else:
                    "Fill In the rip Details to record the time"
            if __name__ == '__main__':
                if sys.platform == 'win32' or sys.platform == 'win64':
                    shell, commands, tail = ('cmd', ('cd "' + location + '"', listbox.get(ANCHOR).__str__()), '\r\n')
                else:
                    return "Please use contact admin"
            a = Popen(shell, stdin=PIPE, stdout=PIPE)
            print recv_some(a)
            for cmd in commands:
                send_all(a, cmd + tail)
                print recv_some(a)
            send_all(a, 'exit' + tail)
            print recv_some(a, e=0)

    The code above is mine :)
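
    The recv_some/send_all helpers come from ActiveState recipe 440554, which also defines a Popen subclass providing the recv()/send() methods this snippet calls; the standard subprocess.Popen has neither, so the paste is incomplete as shown. If live streaming isn't strictly needed, a plain communicate() round trip avoids the platform-specific pipe code entirely; a sketch (paths are placeholders):

        import subprocess

        location = 'C:\\some\\working\\dir'   # placeholder
        command = 'myscript.py'               # placeholder

        p = subprocess.Popen('cmd', stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT,
                             cwd=location)    # cwd replaces the 'cd' command
        out, _ = p.communicate('%s\r\nexit\r\n' % command)
        print out

    communicate() blocks until cmd exits, trading live output for code that behaves the same on 32- and 64-bit Windows.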

  • Python virtualenv questions

    - by orokusaki
    I'm using virtualenv on Windows XP. I'm wondering if I have my brain wrapped around it correctly.

    I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV. I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts. Then, I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk, updated my PYTHONPATH variable to point at the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory).

    I understand why I should use virtualenv, so I can run multiple versions of Django and other libraries on the same machine, but does this mean that to switch between environments I have to basically change my PATH and PYTHONPATH variables? So, I go from developing one Django project which uses Django 1.2 in an environment called ENV, and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to automatically do all this? (I could update my path in Python code, but that would require me to write machine-specific code in my application.)

    Also, how does this process compare to using virtualenv on Linux? (I'm quite the beginner at Linux.)
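
    virtualenv generates per-environment activate scripts that do the PATH bookkeeping for the current shell session, so nothing machine-specific needs to live in application code. A sketch of the Windows workflow (directories are placeholders; installing ENV under C:\WINDOWS\system32 is unusual, and any user directory works):

        :: create two independent environments
        virtualenv C:\envs\ENV
        virtualenv C:\envs\ENV2

        :: switch the current console to ENV (adjusts PATH for python, easy_install)
        C:\envs\ENV\Scripts\activate.bat

        :: ...work on the Django 1.2 project, then switch:
        deactivate
        C:\envs\ENV2\Scripts\activate.bat

    Packages installed while an environment is active land in that environment's site-packages, so the PYTHONPATH edits become unnecessary. On Linux the flow is the same, except activation is source bin/activate instead of a .bat file.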

  • Internationalizing a Python 2.6 application via Babel

    - by Malcolm
    We're evaluating Babel 0.9.5 [1] under Windows for use with Python 2.6 and have the following questions, which we've been unable to answer through reading the documentation or googling.

    1) I would like to use an _-like abbreviation for ungettext. Is there a consensus on whether one should use n_ or N_ for this? n_ does not appear to work: Babel does not extract the text. N_ appears to partially work: Babel extracts the text as it does for gettext, but does not format it for ngettext (the plural argument and msgstr[n] entries are missing).

    2) Is there a way to set the initial msgstr fields like the following when creating a POT file? I suspect there may be a way to do this via Babel cfg files, but I've been unable to find documentation on the Babel cfg file format.

        "Project-Id-Version: PROJECT VERSION\n"
        "Language-Team: en_US \n"

    3) Is there a way to preserve 'obsolete' msgid/msgstr's in our PO files? When I use the Babel update command, newly created obsolete strings are marked with #~ prefixes, but existing obsolete message strings get deleted.

    Thanks, Malcolm

    [1] http://babel.edgewall.org/
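
    On question (1), a likely cause: Babel's extractor only recognizes the keywords it has been told about, and a plural-aware keyword needs its argument positions spelled out, xgettext-style. A hedged sketch (the -k/--keyword option with a "name:arg1,arg2" spec is assumed to behave in pybabel extract as it does in xgettext; verify against your Babel version):

        from gettext import ngettext as n_   # the alias used in source files

        count = 3
        print n_('%d file', '%d files', count) % count

        # extraction, run from the project root:
        #   pybabel extract -k "n_:1,2" -o messages.pot .

    N_ is conventionally reserved for deferred (no-op) marking of singular strings, so a lowercase n_ plus an explicit keyword spec avoids colliding with that convention.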
