Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.


  • Replace text in file with Python

    - by Aaron Hoffman
    I'm trying to replace some text in a file with a value. Everything works fine, but when I look at the file after it's completed there is a new (blank) line after each line in the file. Is there something I can do to prevent this from happening? Here is the code as I have it:

        import fileinput
        for line in fileinput.FileInput("testfile.txt", inplace=1):
            line = line.replace("newhost", host)
            print line

    Thank you, Aaron
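
    A likely cause (an assumption, not stated in the question): each line read by fileinput keeps its trailing newline, and the print statement adds a second one. A minimal sketch of the usual workaround, with host as a hypothetical value since the question does not show how it is set:

        import fileinput
        import sys

        host = "example-host"  # hypothetical; comes from the surrounding script in the question

        for line in fileinput.FileInput("testfile.txt", inplace=1):
            # line already ends with '\n'; write it back unchanged instead of
            # letting print append a second newline
            sys.stdout.write(line.replace("newhost", host))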

    Read the article

  • Python Numpy Structured Array (recarray) assigning values into slices

    - by user368877
    Hi, the following example shows what I want to do:

        >>> test
        rec.array([(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
                   (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)],
                  dtype=[('ifAction', '|i1'), ('ifDocu', '|i1'), ('ifComedy', '|i1')])
        >>> test[['ifAction', 'ifDocu']][0]
        (0, 0)
        >>> test[['ifAction', 'ifDocu']][0] = (1,1)
        >>> test[['ifAction', 'ifDocu']][0]
        (0, 0)

    So, I want to assign the values (1,1) to test[['ifAction', 'ifDocu']][0]. (Eventually, I want to do something like test[['ifAction', 'ifDocu']][0:10] = (1,1), assigning the same values for 0:10.) I have tried many ways but never succeeded. Is there any way to do this? Thank you, Joon
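
    A sketch of the usual workaround, assuming an array with the same dtype as in the question: indexing a structured array with a list of field names returns a copy, so assignments to that copy are silently discarded, whereas indexing a single field returns a view that can be assigned to.

        import numpy as np

        # same field layout as the array in the question
        test = np.zeros(10, dtype=[('ifAction', '|i1'), ('ifDocu', '|i1'), ('ifComedy', '|i1')])

        # assign one field at a time; each single-field index is a view into test
        for name in ('ifAction', 'ifDocu'):
            test[name][0:10] = 1

        print test[['ifAction', 'ifDocu']][0]   # (1, 1)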

    Read the article

  • python lxml problem

    - by David ???
    I'm trying to print/save a certain element's HTML from a web page. I've retrieved the requested element's XPath from Firebug. All I wish is to save this element to a file. I don't seem to succeed in doing so. (I tried the XPath with and without a /text() at the end.) I would appreciate any help, or past experience. 10x, David

        import urllib2, StringIO
        from lxml import etree

        url = 'http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm'
        seite = urllib2.urlopen(url)
        html = seite.read()
        seite.close()

        parser = etree.HTMLParser()
        tree = etree.parse(StringIO.StringIO(html), parser)

        xpath = "/html/body/table/tbody/tr/td[2]/div/table/tbody/tr[6]/td/table/tbody/tr/td[3]/table/tbody/tr[3]/td/table/tbody/tr/td/table/tbody/tr/td/table/tbody/text()"
        elem = tree.xpath(xpath)
        print elem[0].strip().encode("utf-8")
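
    One common pitfall (an assumption, not confirmed by the question): Firebug shows tbody elements that the served HTML may not actually contain, so an XPath copied from Firebug can match nothing. A minimal sketch that drops the /text() step, uses a shorter, hypothetical XPath, and serializes whatever element it matches:

        import urllib2, StringIO
        from lxml import etree

        url = 'http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm'
        html = urllib2.urlopen(url).read()
        tree = etree.parse(StringIO.StringIO(html), etree.HTMLParser())

        # hypothetical, looser XPath without the Firebug-only tbody steps
        elems = tree.xpath("//table//td[3]//table")
        if elems:
            out = open("element.html", "w")
            out.write(etree.tostring(elems[0], encoding="utf-8"))  # element's HTML, not just its text
            out.close()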

    Read the article

  • Fetch wrong SVN credentials with Python

    - by user1029968
    Could anyone here let me know how I can check whether the provided SVN credentials (username and password) are valid? With pysvn there is a callback_get_login parameter, but in case the credentials are wrong the callback is prompted over and over with no way to cancel and return failure information. Please let me know how I can (not necessarily with pysvn) check whether the provided SVN credentials are okay. Thank you in advance!
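
    A sketch of one way to break the retry loop with pysvn, under the assumption that callback_get_login behaves as documented: it returns a (retcode, username, password, save) tuple, and returning False as the first element cancels authentication so the operation fails with a ClientError instead of prompting again.

        import pysvn

        def check_credentials(url, username, password):
            """Return True if the SVN server accepts the credentials."""
            attempts = {'count': 0}

            def get_login(realm, existing_username, may_save):
                if attempts['count'] > 0:
                    # asked a second time: the first answer was rejected, so cancel
                    return False, '', '', False
                attempts['count'] += 1
                return True, username, password, False

            client = pysvn.Client()
            client.callback_get_login = get_login
            try:
                client.info2(url, recurse=False)   # any cheap remote operation will do
                return True
            except pysvn.ClientError:
                return False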

    Read the article

  • Use of infix operator hack in production code (Python)

    - by Casebash
    What is your opinion of using the infix operator hack in production code? Issues: the effect this will have on speed, and the potential for clashes with an object that already defines these operators. The latter seems particularly dangerous with generic code that is intended to handle objects of any type. It is a shame that this isn't built in - it really does improve readability.
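
    For context, the hack under discussion is usually a small wrapper class that abuses __or__/__ror__ so a two-argument function can be written between its operands. A minimal sketch (the names Infix and add are illustrative, not from the question):

        class Infix(object):
            """Wrap a two-argument function so it can be written as x |op| y."""
            def __init__(self, func):
                self.func = func
            def __ror__(self, left):
                # left |op  ->  an Infix partially applied to the left operand
                return Infix(lambda right: self.func(left, right))
            def __or__(self, right):
                # (left |op) | right  ->  call the wrapped function
                return self.func(right)

        add = Infix(lambda a, b: a + b)
        print 2 |add| 3   # 5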

    Read the article

  • Read a file on App Engine with Python?

    - by PanosJee
    Is it possible to open a file on GAE just to read its contents and get the last-modified tag? I get an IOError: [Errno 13] file not accessible. I know that I cannot delete or update files, but I believe reading should be possible. Has anyone faced a similar problem? This is what I tried:

        os.stat(f,'r').st_mtim
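
    A minimal sketch of how this usually works, with the caveat (an assumption based on typical App Engine setups, not on the question) that the file must be deployed as an application file rather than matched only by a static handler, or the runtime cannot see it. Also, os.stat takes just a path, and the attribute is st_mtime:

        import os

        path = 'data/example.txt'   # hypothetical application file shipped with the app

        # read-only access works for files deployed with the application
        f = open(path)
        contents = f.read()
        f.close()

        # no mode argument for os.stat; the attribute is st_mtime (not st_mtim)
        last_modified = os.stat(path).st_mtime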

    Read the article

  • Python beginner confused by a complex line of code

    - by Protean
    I understand the gist of the code, that it forms permutations; however, I was wondering if someone could explain exactly what is going on in the return statement.

        def perm(l):
            sz = len(l)
            print (l)
            if sz <= 1:
                print ('sz <= 1')
                return [l]
            return [p[:i]+[l[0]]+p[i:] for i in range(sz) for p in perm(l[1:])]
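
    For reference, a sketch of the same return statement unrolled into explicit loops, which may make the recursion easier to follow (perm_expanded is an illustrative name, not from the question):

        def perm_expanded(l):
            sz = len(l)
            if sz <= 1:
                return [l]
            result = []
            for i in range(sz):                    # every position i where l[0] can be inserted
                for p in perm_expanded(l[1:]):     # every permutation of the remaining elements
                    # build a new permutation with l[0] inserted at position i of p
                    result.append(p[:i] + [l[0]] + p[i:])
            return result

        print perm_expanded([1, 2, 3])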

    Read the article

  • problems with unpickling an 80 megabyte file in python

    - by tipu
    I am using the pickle module to read and write large amounts of data to a file. After writing an 80 megabyte pickled file, I load it in a SocketServer using:

        class MyTCPHandler(SocketServer.BaseRequestHandler):
            def handle(self):
                print("in handle")
                words_file_handler = open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb')
                words = pickle.load(words_file_handler)
                tweets = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r');
                results_per_page = 25
                query_details = self.request.recv(1024).strip()
                query_details = eval(query_details)
                query = query_details["query"]
                page = int(query_details["page"]) - 1
                return_ = []
                booleanquery = BooleanQuery(MyTCPHandler.words)
                if query.find("(") > -1:
                    result = booleanquery.processAdvancedQuery(query)
                else:
                    result = booleanquery.processQuery(query)
                result = list(result)
                i = 0
                for tweet_id in result and i < 25:
                    #return_.append(MyTCPHandler.tweets[str(tweet_id)])
                    return_.append(tweet_id)
                    i += 1
                self.request.send(str(return_))

    However, the file never seems to load after the pickle.load line and it eventually halts the connection attempt. Is there anything I can do to speed this up?
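
    One common change, offered as a suggestion rather than a confirmed fix: load the 80 MB pickle once when the server starts instead of inside every handle() call, and use cPickle, which is much faster than the pure-Python pickle module. A minimal sketch:

        import cPickle as pickle
        import SocketServer

        # load once at module import time, not once per connection
        WORDS_PATH = '/home/tipu/Dropbox/dev/workspace/search/words.db'
        f = open(WORDS_PATH, 'rb')
        WORDS = pickle.load(f)
        f.close()

        class MyTCPHandler(SocketServer.BaseRequestHandler):
            def handle(self):
                # every request reuses the already-loaded dictionary
                query_details = self.request.recv(1024).strip()
                # ... query processing as in the question, using WORDS ...
                self.request.send(str(len(WORDS)))  # placeholder response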

    Read the article

  • Python regular expression help

    - by dlw
    Hi SO, I can't seem to create the correct regular expression to extract the correct tokens from my string. Padding the beginning of the string with a space generates the correct output, but seems less than optimal:

        >>> import re
        >>> s = '-edge_0triggered a-b | -level_Sensitive c-d | a-b-c'
        >>> re.findall(r'\W(-[\w_]+)',' '+s)
        ['-edge_0triggered', '-level_Sensitive'] # correct output

    Here are some of the regular expressions I've tried. Does anyone have a regex suggestion that doesn't involve changing the original string and still generates the correct output?

        >>> re.findall(r'(-[\w_]+)',s)
        ['-edge_0triggered', '-b', '-level_Sensitive', '-d', '-b', '-c']
        >>> re.findall(r'\W(-[\w_]+)',s)
        ['-level_Sensitive']

    Thanks -- DW
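
    A sketch of one variant that leaves the original string untouched, assuming (as in the sample) that the tokens of interest only appear at the start of the string or after whitespace - match either position with a non-capturing group. Note that \w already includes the underscore, so [\w_] can be written as \w:

        import re

        s = '-edge_0triggered a-b | -level_Sensitive c-d | a-b-c'

        # a dash token counts only when it starts the string or follows whitespace
        print re.findall(r'(?:^|\s)(-\w+)', s)
        # ['-edge_0triggered', '-level_Sensitive']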

    Read the article

  • Compute divergence of vector field using python

    - by nyvltak
    Is there a function that can be used to calculate the divergence of a vector field? (In MATLAB: http://www.mathworks.ch/help/techdoc/ref/divergence.html) I would expect it to exist in numpy/scipy, but I can not find it using Google. I need to calculate div[A * grad(F)], where

        F = np.array([[1,2,3,4],[5,6,7,8]])  # 2D numpy ndarray
        A = np.array([[1,2,3,4],[1,2,3,4]])  # 2D numpy ndarray

    so grad(F) is a set of 2D ndarrays. I know I can calculate divergence like this: http://en.wikipedia.org/wiki/Divergence#Application_in_Cartesian_coordinates but I do not want to reinvent the wheel. (I also expect there is some optimized function for this.)
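
    A sketch of how this is commonly assembled from np.gradient, assuming A multiplies grad(F) component-wise and unit grid spacing; np.gradient returns the partial derivative along each axis, and the divergence is the sum of the i-th derivative of the i-th component:

        import numpy as np

        def divergence(field):
            """field[i] is the i-th vector component, varying along axis i."""
            return sum(np.gradient(field[i])[i] for i in range(len(field)))

        F = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=float)
        A = np.array([[1, 2, 3, 4], [1, 2, 3, 4]], dtype=float)

        grad_F = np.gradient(F)              # [dF/d_axis0, dF/d_axis1]
        field = [A * g for g in grad_F]      # A * grad(F), component by component
        result = divergence(field)           # div[A * grad(F)]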

    Read the article

  • Python: Traffic-Simulation (cars on a road)

    - by kame
    Hello! I want to create a traffic simulator like the one here: http://www.doobybrain.com/wp-content/uploads/2008/03/traffic-simulation.gif But I haven't thought about this very deeply yet. I would create a Car class; every car has its own color, position and so on. And I could represent the road with an array. But how do I tell the car where to go? Could I hear your ideas? EDIT: Is it forbidden to get new ideas from good programmers? Why do some people want to close this thread? Or where should I ask such questions? I don't understand them. :(
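
    A very small sketch of the structure being described, with an illustrative rule (advance one cell unless the cell ahead is occupied) standing in for a real traffic model:

        class Car(object):
            def __init__(self, position, color):
                self.position = position
                self.color = color

        def step(road, cars):
            """Advance every car one time step on a circular one-lane road."""
            occupied = set(car.position for car in cars)
            for car in cars:
                ahead = (car.position + 1) % len(road)
                if ahead not in occupied:          # move only if the next cell is free
                    occupied.discard(car.position)
                    car.position = ahead
                    occupied.add(ahead)

        road = [None] * 20
        cars = [Car(0, 'red'), Car(5, 'blue'), Car(6, 'green')]
        for _ in range(10):
            step(road, cars)
        print [(car.color, car.position) for car in cars]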

    Read the article

  • Python csv reader acting weird

    - by PylonsN00b
    So OK, if I run this wrong code:

        csvReader1 = csv.reader(file('new_categories.csv', "rU"), delimiter=',')
        for row1 in csvReader1:
            print row1[0]
            print row1[8]
            category_sku = str(row[8])
            if category_sku == sku:
                classifications["Craft"] = row[0]
                classifications["Theme"] = row[1]

    I get:

        Knitting
        391
        Traceback (most recent call last):
          File "upload_all_inventory_ebay.py", line 403, in <module>
            inventory_item_list = get_item_list(product)
          File "upload_all_inventory_ebay.py", line 294, in get_item_list
            category_sku = str(row[8])
        NameError: global name 'row' is not defined

    where Knitting and 391 are exactly right. Of course I need to refer to row[8] as row1[8], so I do this:

        csvReader1 = csv.reader(file('new_categories.csv', "rU"), delimiter=',')
        for row1 in csvReader1:
            print row1[0]
            print row1[8]
            category_sku = str(row1[8])
            if category_sku == sku:
                classifications["Craft"] = row1[0]
                classifications["Theme"] = row1[1]

    And I get this:

        ...........
        Crochet
        107452
        Knitting
        107454
        Knitting
        107455
        Knitting
        107456
        Knitting
        107457
        Crochet
        108200
        Crochet
        108201
        Crochet
        108205
        Crochet
        108213
        Crochet
        108214
        Crochet
        108217
        108432
        Quilt
        108451
        108482
        108488
        Scrapbooking
        108711
        Knitting
        122363
        Needlework
        Beading
        Crafts & Decorating
        Crochet
        Crochet
        Crochet
        Traceback (most recent call last):
          File "upload_all_inventory_ebay.py", line 403, in <module>
            inventory_item_list = get_item_list(product)
          File "upload_all_inventory_ebay.py", line 292, in get_item_list
            print row1[0]
        IndexError: list index out of range

    where the output you see there is every effing thing in column 0 and column 1! Why? And WHY is row1[0] out of range if it wasn't before? YAY Fridays!
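
    A guess at the cause, not something the data shown can confirm: the CSV contains blank or short rows, so row1[0] or row1[8] is eventually out of range. A defensive sketch that skips such rows (sku and classifications come from the surrounding code in the question):

        import csv

        csvReader1 = csv.reader(open('new_categories.csv', 'rU'), delimiter=',')
        for row1 in csvReader1:
            if len(row1) < 9:        # skip blank lines and rows without a SKU column
                continue
            category_sku = str(row1[8])
            if category_sku == sku:
                classifications["Craft"] = row1[0]
                classifications["Theme"] = row1[1]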

    Read the article

  • Python - List of Lists Slicing Behavior

    - by Dan Dobint
    When I define a list and try to change a single item like this:

        list_of_lists = [['a', 'a', 'a'], ['a', 'a', 'a'], ['a', 'a', 'a']]
        list_of_lists[1][1] = 'b'
        for row in list_of_lists:
            print row

    it works as intended. But when I try to use list comprehension to create the list:

        row = ['a' for range in xrange(3)]
        list_of_lists = [row for range in xrange(3)]
        list_of_lists[1][1] = 'b'
        for row in list_of_lists:
            print row

    it results in an entire column of items in the list being changed. Why is this? How can I achieve the desired effect with list comprehension?
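
    The usual explanation: the second version stores the same inner list object three times, so one assignment shows up in every "row". A sketch of a comprehension that builds an independent inner list per row (using _ instead of shadowing the built-in range):

        list_of_lists = [['a' for _ in xrange(3)] for _ in xrange(3)]
        list_of_lists[1][1] = 'b'
        for row in list_of_lists:
            print row
        # ['a', 'a', 'a']
        # ['a', 'b', 'a']
        # ['a', 'a', 'a']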

    Read the article

  • how to check the read write status of storing media in python

    - by mukul sharma
    Hi all, how can I check the read/write permission of the medium a file is stored on? Assume I have to write a file inside a directory, and that directory may be on read-only media (a CD or DVD, etc.). So how can I check whether the storage medium (CD, hard disk) allows read-only or read-write access? I am using the Windows XP OS. Thanks.
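
    Two common approaches, sketched under the assumption that "writable" only needs to be checked for one specific directory: ask the OS with os.access, and, since that can be optimistic on Windows, simply try to create a temporary file there:

        import os
        import tempfile

        def is_writable(directory):
            """Return True if a file can actually be created in directory."""
            if not os.access(directory, os.W_OK):      # quick check
                return False
            try:
                handle, path = tempfile.mkstemp(dir=directory)
                os.close(handle)
                os.remove(path)
                return True
            except (OSError, IOError):
                return False

        print is_writable('D:\\')      # e.g. a CD/DVD drive
        print is_writable('C:\\temp')  # a hard-disk directory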

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which is working for the most part: Link to PasteBin. The script's job is to start a number of threads, each of which starts a subprocess with Popen. The output from each subprocess is as follows:

        1
        2
        3
        .
        .
        .
        n
        Done

    Basically the subprocess is transferring 10M records from tables in one database to different tables in another db, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any time in its execution (bad records, duplicate primary keys, etc.), or if it completes successfully, it will output "Done\n". If there are no more records to select against for transfer, then it will output "NO DATA\n".

    My intent was to create my script "tableTransfer.py", which would spawn a number of these processes, read their output, and in turn output information such as number of updates completed, time remaining, time elapsed, and number of transfers per second. I started running the process last night and checked in this morning to see it had deadlocked. There were no subprocesses running, there are still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information, because no subprocesses were running to update the total number complete, which is what controls updates to the output. This is running on OS X. I am looking for three things:

    I would like to get rid of the possibility of this deadlock occurring so I don't need to check in on it as frequently. Is there some issue with locking? Am I doing this in a bad way (a gThreading variable to control the looping that spawns additional threads, etc.)?

    I would appreciate some suggestions for improving my overall methodology.

    How should I handle ctrl-c exit? Right now I need to kill the process, but I assume I should be able to use the signal module or something similar to catch the signal and kill the threads. Is that right?

    I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.
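
    For the ctrl-c part, a minimal sketch of the usual pattern, assuming the worker threads can poll a shared stop flag (all names here are illustrative, not taken from the pasted script):

        import signal
        import threading

        stop_event = threading.Event()

        def worker():
            while not stop_event.is_set():
                # poll subprocess output, update counters, etc.
                stop_event.wait(1.0)

        def handle_sigint(signum, frame):
            # ask the workers to finish instead of killing the whole process
            stop_event.set()

        signal.signal(signal.SIGINT, handle_sigint)

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()

        # join with a timeout so the main thread keeps receiving signals
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(0.1)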

    Read the article

  • python: send a list/dict over network

    - by facha
    Hi, everyone. I'm looking for an easy way of packing/unpacking data structures for sending over the network. On the client, just before sending:

        a = ((1,2),(11,22,),(111,222))
        message = pack(a)

    and then on the server:

        a = unpack(message)

    Is there a library that could do pack/unpack magic? Thanks in advance
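
    A sketch of two standard-library options; the names pack and unpack come from the question. json turns tuples into lists, and pickle preserves Python types but should only be used between endpoints that trust each other:

        import json
        import cPickle as pickle

        def pack(obj):
            return json.dumps(obj)        # or: pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)

        def unpack(message):
            return json.loads(message)    # or: pickle.loads(message)

        a = ((1, 2), (11, 22), (111, 222))
        message = pack(a)
        print unpack(message)             # [[1, 2], [11, 22], [111, 222]] with json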

    Read the article

  • [Python] Download an image embedded in a MIME multipart message

    - by michele
    Hi, I have to download some images from links. These links return a file in which a TIFF image is embedded in a multipart MIME message. The code I have written downloads the file with the MIME wrapper still around it. How can I remove the MIME parts from this file and get just the image? Can I do this with wget or curl? My code:

        def download(url, local):
            import urllib
            urllib.urlretrieve(url, local)
            urllib.urlcleanup()

    Thanks a lot.
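
    A sketch using the standard-library email parser, assuming the downloaded body really is a MIME multipart message containing an image/tiff part (the URL and content type are illustrative and may differ in practice):

        import urllib2
        import email

        def download_image(url, local):
            data = urllib2.urlopen(url).read()
            msg = email.message_from_string(data)           # parse the multipart body
            for part in msg.walk():
                if part.get_content_type() == 'image/tiff':
                    f = open(local, 'wb')
                    f.write(part.get_payload(decode=True))  # undo base64/quoted-printable
                    f.close()
                    return True
            return False

        download_image('http://example.com/some-link', 'image.tiff')  # hypothetical URL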

    Read the article

  • ImageChops.duplicate - python

    - by ariel
    Hi, I am trying to use the function ImageChops.duplicate from the PIL module and I get an error I don't understand. This is the code:

        import PIL
        import Image
        import ImageChops
        import os

        PathDemo4a = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4a'
        PathDemo4b = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4b'
        PathDemo4c = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4c'
        PathBlackBoard = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/BlackBoard.bmp'

        Slides = os.listdir(PathDemo4a)
        for slide in Slides:
            #BB=Image.open(PathBlackBoard)
            BB = ImageChops.duplicate(PathBlackBoard)
            #BB=BlackBoard

    and this is the error:

        Traceback (most recent call last):
          File "", line 1, in
            ImageChops.duplicate('c:/1.BMP')
          File "C:\Python26\lib\site-packages\PIL\ImageChops.py", line 57, in duplicate
            return image.copy()
        AttributeError: 'str' object has no attribute 'copy'

    Any help would be much appreciated. Ariel
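
    The traceback says duplicate() received a string: it expects an already-opened Image object, not a file path. A minimal sketch of the fix:

        import Image
        import ImageChops

        PathBlackBoard = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/BlackBoard.bmp'

        blackboard = Image.open(PathBlackBoard)    # open the file to get an Image object
        BB = ImageChops.duplicate(blackboard)      # duplicate() copies the Image, not the path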

    Read the article

  • Python Twisted Client Connection Lost

    - by MovieYoda
    I have this Twisted client, which connects to a Twisted server holding an index. I ran this client from the command line and it worked fine. Now I have modified it to run in a loop (see main()) so that I can keep querying. But the client runs only once; the next time it simply says "connection lost / Connection lost - goodbye!". What am I doing wrong? In the loop I am reconnecting to the server; is that wrong?

        from twisted.internet import reactor
        from twisted.internet import protocol
        from settings import AS_SERVER_HOST, AS_SERVER_PORT

        # a client protocol
        class Spell_client(protocol.Protocol):
            """Once connected, send a message, then print the result."""

            def connectionMade(self):
                self.transport.write(self.factory.query)

            def dataReceived(self, data):
                "As soon as any data is received, write it back."
                if data == '!':
                    self.factory.results = ''
                else:
                    self.factory.results = data
                self.transport.loseConnection()

            def connectionLost(self, reason):
                print "\tconnection lost"

        class Spell_Factory(protocol.ClientFactory):
            protocol = Spell_client

            def __init__(self, query):
                self.query = query
                self.results = ''

            def clientConnectionFailed(self, connector, reason):
                print "\tConnection failed - goodbye!"
                reactor.stop()

            def clientConnectionLost(self, connector, reason):
                print "\tConnection lost - goodbye!"
                reactor.stop()

        # this connects the protocol to a server running on port 8090
        def main():
            print 'Connecting to %s:%d' % (AS_SERVER_HOST, AS_SERVER_PORT)
            while True:
                print
                query = raw_input("Query:")
                if query == '':
                    return
                f = Spell_Factory(query)
                reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, f)
                reactor.run()
                print f.results
            return

        if __name__ == '__main__':
            main()
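
    The usual explanation is that a Twisted reactor can only be started once per process: after the first query's clientConnectionLost calls reactor.stop(), the second reactor.run() returns immediately. A rough sketch of scheduling the next query from inside the running reactor instead of restarting it (on_done is an illustrative hook that the factory would call from clientConnectionLost instead of reactor.stop(); note that raw_input blocks the reactor, which is tolerable only for a simple interactive client):

        from twisted.internet import reactor

        def ask_next_query():
            query = raw_input("Query: ")
            if query == '':
                reactor.stop()                  # stop only when the user is finished
                return
            f = Spell_Factory(query)            # factory from the question, minus reactor.stop()
            f.on_done = ask_next_query          # to be called from clientConnectionLost
            reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, f)

        reactor.callWhenRunning(ask_next_query)
        reactor.run()                           # started exactly once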

    Read the article

  • Python PyQT4 - Adding an unknown number of QComboBox widgets to QGridLayout

    - by ZZ
    Hi all, I want to retrieve a list of people's names from a queue and, for each person, place a checkbox with their name into a QGridLayout using the addWidget() function. I can successfully place the items in a QListView, but they just write over the top of each other rather than creating a new row. Does anyone have any thoughts on how I could fix this?

        self.chk_People = QtGui.QListView()
        items = self.jobQueue.getPeopleOffQueue()
        for item in items:
            QtGui.QCheckBox('%s' % item, self.chk_People)

    self.jobQueue.getPeopleOffQueue() would return something like ['Bob', 'Sally', 'Jimmy'], if that helps.
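
    A sketch of the layout-based approach the question describes, with a list literal standing in for jobQueue.getPeopleOffQueue(); each checkbox goes into its own row of the grid, so the widgets no longer overlap:

        import sys
        from PyQt4 import QtGui

        class PeoplePanel(QtGui.QWidget):
            def __init__(self, names, parent=None):
                QtGui.QWidget.__init__(self, parent)
                layout = QtGui.QGridLayout(self)
                self.checkboxes = []
                for row, name in enumerate(names):
                    box = QtGui.QCheckBox(name)
                    layout.addWidget(box, row, 0)    # one checkbox per grid row
                    self.checkboxes.append(box)

        app = QtGui.QApplication(sys.argv)
        panel = PeoplePanel(['Bob', 'Sally', 'Jimmy'])  # stand-in for getPeopleOffQueue()
        panel.show()
        app.exec_()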

    Read the article

  • regular expression search in python

    - by Richard
    Hello all, I am trying to parse some data and have just started reading up on regular expressions, so I am pretty new to them. This is the code I have so far:

        String = "MEASUREMENT 3835 303 Oxygen: 235.78 Saturation: 90.51 Temperature: 24.41 DPhase: 33.07 BPhase: 29.56 RPhase: 0.00 BAmp: 368.57 BPot: 18.00 RAmp: 0.00 RawTem.: 68.21"
        String = String.strip('\t\x11\x13')
        String = String.split("Oxygen:")
        print String[1]
        String[1].lstrip
        print String[1]

    What I am trying to do is extract the oxygen value (235.78) and put it in its own variable using a regular expression search. I realize that there should be an easy solution, but I am trying to figure out how regular expressions work and they are making my head hurt. Thanks for any help. Richard
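
    A sketch of the regular expression version, assuming the value of interest always follows the literal label "Oxygen:":

        import re

        data = ("MEASUREMENT 3835 303 Oxygen: 235.78 Saturation: 90.51 "
                "Temperature: 24.41 DPhase: 33.07 BPhase: 29.56 RPhase: 0.00 "
                "BAmp: 368.57 BPot: 18.00 RAmp: 0.00 RawTem.: 68.21")

        match = re.search(r'Oxygen:\s*([\d.]+)', data)
        if match:
            oxygen = float(match.group(1))   # 235.78
            print oxygen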

    Read the article

  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight message queue proxy. Its job is to receive messages from a web application (PHP) and send them to the message queue server asynchronously. The reason for this proxy is that the MQ isn't always available and is sometimes lagging, or even down, but I want to make sure the messages are delivered and the web application returns immediately.

    So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite.

    Now, the way I understand it, there are these components in this service: a message listener (listens for the messages from PHP and writes them to an incoming queue); a DB flusher (reads messages from the incoming queue and saves them to the database, needed because of SQLite's single-threadedness); an MQ connection handler (keeps the connection to the MQ server alive by reconnecting); and a message sender (collects messages from the SQLite db, sends them to the MQ server, then removes them from the db).

    I was thinking of using Twisted for #1 (TCPServer), but I'm having problems integrating it with the other points, which aren't event-driven. Intuition tells me that each of these points should run in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples of how to implement this worker thread alongside Twisted's main loop. The example I've started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
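
    A rough sketch of one way to keep the extra work inside Twisted's own machinery rather than in hand-rolled threads: a LoopingCall drives the periodic flush/send steps, and the blocking SQLite and MQ calls are pushed onto the reactor's thread pool with deferToThread. Everything apart from the Twisted APIs (the port, the empty save/send functions) is illustrative:

        from twisted.internet import reactor, protocol
        from twisted.internet.task import LoopingCall
        from twisted.internet.threads import deferToThread

        incoming = []                            # messages from PHP waiting to be persisted

        class ProxyListener(protocol.Protocol):  # 1) message listener
            def dataReceived(self, data):
                incoming.append(data)

        class ProxyFactory(protocol.Factory):
            protocol = ProxyListener

        def save_batch(batch):
            pass                                 # 2) blocking SQLite insert goes here

        def send_pending():
            pass                                 # 3) + 4) reconnect to the MQ, send rows, delete them

        def flush():
            if incoming:
                batch = incoming[:]
                del incoming[:]
                deferToThread(save_batch, batch)  # keep blocking work off the reactor thread
            deferToThread(send_pending)

        reactor.listenTCP(8001, ProxyFactory())   # 8001 is an arbitrary example port
        flusher = LoopingCall(flush)
        flusher.start(1.0)                        # attempt a flush every second
        reactor.run()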

    Read the article

  • how to create a dynamic sql statement w/ python and mysqldb

    - by Elias Bachaalany
    I have the following code:

        def sql_exec(self, sql_stmt, args = tuple()):
            """
            Executes an SQL statement and returns a cursor.
            An SQL exception might be raised on error

            @return: SQL cursor object
            """
            cursor = self.conn.cursor()
            if self.__debug_sql:
                try:
                    print "sql_exec: " % (sql_stmt % args)
                except:
                    print "sql_exec: " % sql_stmt
            cursor.execute(sql_stmt, args)
            return cursor

        def test(self, limit = 0):
            result = sql_exec("""
                SELECT *
                FROM table
                """ + ("LIMIT %s" if limit else ""), (limit, ))
            while True:
                row = result.fetchone()
                if not row:
                    break
                print row
            result.close()

    How can I nicely write test() so it works with or without 'limit', without having to write two queries?
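
    A sketch of one tidy way to do it, assuming the MySQLdb-style %s placeholders from the question: build the statement and the argument tuple together, so the LIMIT clause and its parameter are either both present or both absent:

        def test(self, limit=0):
            sql = "SELECT * FROM table"
            args = tuple()
            if limit:
                sql += " LIMIT %s"     # placeholder added only when there is an argument for it
                args = (limit,)

            result = self.sql_exec(sql, args)
            for row in iter(result.fetchone, None):
                print row
            result.close()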

    Read the article
