Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.

Page 141/548

  • assign operator to variable in python?

    - by abhilashm86
    The usual way to apply arithmetic to variables is a * b. Is it possible to read in two operands and an operator like this?

        a = input('enter a value')
        b = input('enter a value')
        op = raw_input('enter an operator')

    How do I then connect op with the two variables a and b? I know I can compare op against +, -, %, $ and then assign and compute, but can I do something like a op b? How do I tell the interpreter that op is an operator? Are any tweaks possible?
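
    One clean approach (a minimal sketch) is to map each operator symbol to the matching function from the standard operator module, avoiding both eval and a chain of if/elif comparisons:

        import operator

        # map each symbol the user may type to the matching function
        ops = {'+': operator.add, '-': operator.sub,
               '*': operator.mul, '/': operator.truediv,
               '%': operator.mod}

        a = float(raw_input('enter a value: '))
        b = float(raw_input('enter a value: '))
        op = raw_input('enter an operator: ')

        print ops[op](a, b)  # raises KeyError for an unrecognised symbol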

  • A Python Wrapper for Shutterfly. Uploading an Image

    - by iJames
    I'm working on a Django app in which I want to order prints through Shutterfly's Open API: http://www.shutterfly.com/documentation/start.sfly So far I've been able to build the appropriate POSTs and GETs using the suggested modules and code snippets, including httplib, httplib2, urllib, urllib2, mimetypes, etc. But I'm stuck on the image upload when placing an order. (Ordering is not the same process as uploading images to albums, which I haven't tried.) From what I can tell, I'm supposed to create the multipart form data by concatenating the HTTP request body with the binary data of the image. I take the strings:

        --myuniqueboundary1273149960.175.1
        Content-Disposition: form-data; name="AuthenticationID"

        auniqueauthenticationid
        --myuniqueboundary1273149960.175.1
        Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg"
        Content-Type: image/jpeg

    then put the image data into it and end with the final boundary:

        ...\xb5|\xf88\x1dj\t@\xd9\'\x1f\xc6j\x88{\x8a\xc0\x18\x8eGaJG\x03\xe9J-\xd8\x96[\x91T\xc3\x0eTu\xf4\xaa\xa5Ty\x80\x01\x8c\x9f\xe9Z\xad\x8cg\xba# g\x18\xe2\xaa:\x829\x02\xb4["\x17Q\xe7\x801\xea?\xad7j\xfd\xa2\xdf\x81\xd2\x84D\xb6)\xa8\xcb\xc8O\\\x9a\xaf(\x1cqM\x98\x8d*\xb8\'h\xc8+\x8e:u\xaa\xf3*\x9b\x95\x05F8\xedN%\xcb\xe1B2\xa9~Tw\xedF\xc4\xfe\xe8\xfc\xa9\x983\xff\xd9...

    so that the whole body, when I print it to debug, looks like this:

        ...
        --myuniqueboundary1273149960.175.1
        Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg"
        Content-Type: image/jpeg

        ????q?ExifMM* ? ??(1?2?<??i?b?NIKON CORPORATIONNIKON D40HHQuickTime 7.62009:02:17 13:05:25Mac OS X 10.5.6%??????"?'??0220?????? ???? ? ?|_???,b???50??5 ...
        --myuniqueboundary1273149960.175.1--

    My code for grabbing the binary data is pretty much this:

        filedata = open('myjpegfile.jpeg', 'rb').read()

    which I then append to the rest of the body. I've seen code like this everywhere. I then post the full request (with the headers too) using:

        response = urllib2.urlopen(request).read()

    This seems to me to be the standard way that form POSTs with files are done. Am I missing something here? At some point I might be able to turn this into a library worth posting on GitHub, but this problem has stopped me cold in my tracks. Thanks for any insight!
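
    For comparison, here is a minimal, generic multipart/form-data encoder (a sketch, not Shutterfly-specific; the field names are the ones from the question). A common pitfall with hand-built bodies is joining the parts with '\n' instead of the '\r\n' line endings the multipart spec requires:

        import mimetypes
        import uuid

        def encode_multipart(fields, file_name, file_data, file_field='Image.Data'):
            # build a multipart/form-data body; CRLF line endings throughout
            boundary = uuid.uuid4().hex
            lines = []
            for name, value in fields.items():
                lines.append('--' + boundary)
                lines.append('Content-Disposition: form-data; name="%s"' % name)
                lines.append('')
                lines.append(str(value))
            content_type = mimetypes.guess_type(file_name)[0] or 'application/octet-stream'
            lines.append('--' + boundary)
            lines.append('Content-Disposition: file; name="%s"; filename="%s"'
                         % (file_field, file_name))
            lines.append('Content-Type: %s' % content_type)
            lines.append('')
            lines.append(file_data)  # raw bytes go in as-is
            lines.append('--' + boundary + '--')
            lines.append('')
            body = '\r\n'.join(lines)
            headers = {'Content-Type': 'multipart/form-data; boundary=%s' % boundary,
                       'Content-Length': str(len(body))}
            return body, headers

    The body and headers can then be handed to urllib2.Request as usual.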

  • unevenly centered subplots in matplotlib in Python?

    - by user248237
    I am plotting a simple pair of subplots in matplotlib that are for some reason unevenly centered. I plot them as follows:

        plt.figure()
        # first subplot
        s1 = plt.subplot(2, 1, 1)
        plt.bar([1, 2, 3], [4, 5, 6])
        # second subplot
        s2 = plt.subplot(2, 1, 2)
        plt.pcolor(rand(5, 5))
        # add colorbar
        plt.colorbar()
        # square axes
        axes_square(s1)
        axes_square(s2)

    where axes_square is simply:

        def axes_square(plot_handle):
            plot_handle.axes.set_aspect(1 / plot_handle.axes.get_data_ratio())

    The top and bottom plots come out unevenly centered: I'd like their y-axes and their boxes to be aligned. If I remove the plt.colorbar() call, the plots become centered. How can I keep the plots centered while still showing the colorbar of pcolor? I want the axes to be aligned and the colorbar to sit outside that alignment, either to the left or to the right of the pcolor matrix. Thanks.
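
    One way to keep the boxes aligned (a sketch, assuming a matplotlib new enough to have mpl_toolkits.axes_grid1) is to carve the colorbar's space out of the second subplot alone, rather than letting plt.colorbar() squeeze it:

        import numpy as np
        import matplotlib.pyplot as plt
        from mpl_toolkits.axes_grid1 import make_axes_locatable

        fig, (s1, s2) = plt.subplots(2, 1)
        s1.bar([1, 2, 3], [4, 5, 6])
        im = s2.pcolor(np.random.rand(5, 5))

        # take the colorbar's width from s2 alone, so the boxes of s1
        # and s2 stay vertically aligned
        divider = make_axes_locatable(s2)
        cax = divider.append_axes("right", size="5%", pad=0.1)
        fig.colorbar(im, cax=cax)

        plt.show()

    The axes_square calls from the question can still be applied to s1 and s2 afterwards.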

  • In python: how to apply itertools.product to elements of a list of lists

    - by Guilherme Rocha
    I have a list of arrays and I would like to get the Cartesian product of the elements in the arrays. I will use an example to make this more concrete. itertools.product seems to do the trick, but I am stuck on a little detail.

        arrays = [(-1, +1), (-2, +2), (-3, +3)]

    If I do

        cp = list(itertools.product(arrays))

    I get

        cp = cp0 = [((-1, 1),), ((-2, 2),), ((-3, 3),)]

    But what I want to get is

        cp1 = [(-1, -2, -3), (-1, -2, +3), (-1, +2, -3), (-1, +2, +3), ..., (+1, +2, -3), (+1, +2, +3)]

    I have tried a few different things:

        cp = list(itertools.product(itertools.islice(arrays, len(arrays))))
        cp = list(itertools.product(iter(arrays, len(arrays))))

    They all gave me cp0 instead of cp1. Any ideas? Thanks in advance.
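
    itertools.product takes each input iterable as a separate positional argument, so passing the list as a single argument yields a product over one iterable. Unpacking the list with * gives the intended result:

        import itertools

        arrays = [(-1, +1), (-2, +2), (-3, +3)]

        # equivalent to itertools.product((-1, +1), (-2, +2), (-3, +3))
        cp = list(itertools.product(*arrays))
        print cp[:2]   # [(-1, -2, -3), (-1, -2, 3)]
        print cp[-1]   # (1, 2, 3)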

  • recursively implementing 'minimum number of coins' in python

    - by user5198
    This problem is the same as the one asked here. Given a list of coin values (c1, c2, c3, ..., cj, ...) and a total sum i, find the minimum number of coins whose sum is i (we can use as many coins of one type as we want), or report that it's not possible to select coins that sum to i. I was introduced to dynamic programming only yesterday, and I tried to write code for it:

        # Optimal substructure: C[i] = 1 + min_j(C[i - cj])
        cdict = {}

        def C(i, coins):
            if i <= 0:
                return 0
            if i in cdict:
                return cdict[i]
            else:
                answer = 1 + min([C(i - cj, coins) for cj in coins])
                cdict[i] = answer
                return answer

    Here, C[i] is the optimal solution for the amount of money i, and the available coins are {c1, c2, ..., cj, ...}. For the program, I've increased the recursion limit to avoid the "maximum recursion depth exceeded" error. But the program gives the right answer only sometimes, and when a solution is not possible, it doesn't indicate that. What is wrong with my code and how can I correct it?
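
    The base case is the bug: returning 0 for every i <= 0 makes an overshoot (i < 0) look like a free, valid solution. One way to repair it (a sketch) is to return infinity for negative amounts, so invalid branches never win the min, and an unreachable total surfaces as float('inf'):

        cdict = {}  # note: clear this between runs with different coin sets

        def C(i, coins):
            if i == 0:
                return 0             # exactly reached the target
            if i < 0:
                return float('inf')  # overshot: this branch is invalid
            if i not in cdict:
                cdict[i] = 1 + min(C(i - cj, coins) for cj in coins)
            return cdict[i]

        print C(11, [1, 5, 6])  # 2 (5 + 6)
        print C(7, [2, 4])      # inf, i.e. not possible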

  • Python class decorator and maximum recursion depth exceeded

    - by Michal Lula
    I am trying to define a class decorator. I have a problem with the __init__ method in the decorated class: if __init__ invokes super, "RuntimeError: maximum recursion depth exceeded" is raised. Example code:

        def decorate(cls):
            class NewClass(cls):
                pass
            return NewClass

        @decorate
        class Foo(object):
            def __init__(self, *args, **kwargs):
                super(Foo, self).__init__(*args, **kwargs)

    What am I doing wrong? Thanks, Michal
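
    The decorator rebinds the name Foo to NewClass, so inside __init__ the call super(Foo, self) is really super(NewClass, self); its MRO hands control straight back to the original class's __init__, which executes the same line again, forever. One way around it (a sketch) is to modify the class in place instead of wrapping it in a subclass:

        def decorate(cls):
            # mutate and return the original class, so the global name
            # `Foo` still refers to the class that super(Foo, self) expects
            original_init = cls.__init__

            def __init__(self, *args, **kwargs):
                # hypothetical extra behaviour added by the decorator
                original_init(self, *args, **kwargs)

            cls.__init__ = __init__
            return cls

        @decorate
        class Foo(object):
            def __init__(self, *args, **kwargs):
                super(Foo, self).__init__(*args, **kwargs)

        Foo()  # no recursion: Foo is still the class named in super(Foo, self)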

  • Reading Python Documentation for 3rd party modules

    - by Shadyabhi
    I recently downloaded the IMDbPY module. When I do

        import imdb
        help(imdb)

    I don't get the full documentation. I have to do

        im = imdb.IMDb()
        help(im)

    to see the available methods. I don't like this console interface. Is there a better way of reading the docs, i.e. all the documentation for the imdb module on one page?
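
    The standard library's pydoc can render a module's documentation as a single HTML page, which may be closer to what you want (a sketch, using Python 2's pydoc functions):

        import pydoc

        # write imdb.html with the module's full docs into the current
        # directory; repeat for submodules as needed
        pydoc.writedoc('imdb')

        # or serve browsable docs for every importable module at
        # http://localhost:1234/ (the same as running `pydoc -p 1234`)
        pydoc.serve(1234)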

  • python socket.recv/sendall call blocking

    - by fsm
    Hi everyone. (This post is tagged 'send' only because I cannot create new tags.) I have a very basic question about this simple echo server. Here are some code snippets.

    Client:

        while True:
            data = raw_input("Enter data: ")
            mySock.sendall(data)
            echoedData = mySock.recv(1024)
            if not echoedData:
                break
            print echoedData

    Server:

        while True:
            print "Waiting for connection"
            (clientSock, address) = serverSock.accept()
            print "Entering read loop"
            while True:
                print "Waiting for data"
                data = clientSock.recv(1024)
                if not data:
                    break
                clientSock.send(data)
            clientSock.close()

    Now this works all right, except when the client sends an empty string (by hitting the return key in response to "Enter data: "), in which case I see some deadlock-ish behaviour. What exactly happens when the user presses return on the client side? I can only imagine that the sendall call blocks waiting for some data to be added to the send buffer, causing the recv call to block in turn. What's going on here? Thanks for reading!
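
    sendall('') doesn't block; it transmits zero bytes, so nothing ever reaches the server, and it is the client's recv() that then waits forever for an echo that will never come. (An empty return from recv() means the peer closed the connection, not that an empty string arrived.) A minimal client-side guard (a sketch, reusing mySock from the client above):

        while True:
            data = raw_input("Enter data: ")
            if not data:
                # sendall('') sends nothing, so the recv() below would
                # block forever; skip empty input instead
                continue
            mySock.sendall(data)
            echoedData = mySock.recv(1024)
            if not echoedData:
                break
            print echoedData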

  • Python - How to pickle yourself?

    - by Mark
    I want my class to implement save and load methods that simply pickle the class. But apparently you cannot use self in the fashion below. How can you do this?

        self = cPickle.load(f)

        cPickle.dump(self, f, 2)
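
    Dumping self works as written, but assigning to self inside a method only rebinds a local name; the caller's instance is untouched. Two common patterns (a sketch, with a hypothetical class name):

        import cPickle

        class Model(object):  # hypothetical example class
            def save(self, path):
                with open(path, 'wb') as f:
                    cPickle.dump(self, f, 2)

            def load(self, path):
                # `self = cPickle.load(f)` would only rebind the local
                # name, so copy the unpickled state into this instance
                with open(path, 'rb') as f:
                    self.__dict__.update(cPickle.load(f).__dict__)

            @classmethod
            def restore(cls, path):
                # alternative: return a fresh instance and let the
                # caller bind it to a name
                with open(path, 'rb') as f:
                    return cPickle.load(f)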

  • Simplest way to handle and display errors in a Python Pylons controller without a helper class

    - by ensnare
    I have a class User() that throws exceptions when attributes are incorrectly set. I am currently passing the exceptions from the model through the controller to the templates by essentially catching exceptions twice for each variable. Is this a correct way of doing it? Is there a better (but still simple) way? I'd prefer not to use any third-party error or form handlers, given the extensive database queries we already have in place in our classes. Furthermore, how can I "stop" the chain of processing in the class if one of the values is invalid? Is there a "break" syntax or something like it? Thanks.

        >>> u = User()
        >>> u.name = 'Jason Mendez'
        >>> u.password = '1234'
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "topic/model/user.py", line 79, in password
            return self._password
        ValueError: Your password must be greater than 6 characters

    In my "register" controller I have:

        class RegisterController(BaseController):
            def index(self):
                if request.POST:
                    c.errors = {}
                    u = User()
                    try:
                        u.name = c.name = request.POST['name']
                    except ValueError, error:
                        c.errors['name'] = error
                    try:
                        u.email = c.email = request.POST['email']
                    except ValueError, error:
                        c.errors['email'] = error
                    try:
                        u.password = c.password = request.POST['password']
                    except ValueError, error:
                        c.errors['password'] = error
                    try:
                        u.commit()
                    except ValueError, error:
                        pass
                return render('/register.mako')
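
    One way to collapse the repetition (a sketch, reusing the Pylons names request, c, and render from the question) is to drive the assignments from a list of field names and let the errors dict double as the "stop" signal, committing only when every field validated:

        class RegisterController(BaseController):
            def index(self):
                if request.POST:
                    c.errors = {}
                    u = User()
                    # one try/except per field, driven by the field name
                    for field in ('name', 'email', 'password'):
                        try:
                            value = request.POST[field]
                            setattr(u, field, value)
                            setattr(c, field, value)
                        except ValueError, error:
                            c.errors[field] = error
                    if not c.errors:  # "stop": skip commit on any error
                        u.commit()
                return render('/register.mako')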

  • Python Copy Through Assignment?

    - by Marcus Whybrow
    I would expect the following code to just initialise the dict_a, dict_b and dict_c dictionaries, but it seems to have a copy-through effect:

        dict_a = dict_b = dict_c = {}
        dict_c['hello'] = 'goodbye'
        print dict_a
        print dict_b
        print dict_c

    As you can see, the result is as follows:

        {'hello': 'goodbye'}
        {'hello': 'goodbye'}
        {'hello': 'goodbye'}

    Why does the program give this result, when I would expect it to print:

        {}
        {}
        {'hello': 'goodbye'}
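
    Chained assignment binds all three names to one and the same dict object, so a mutation through any name is visible through all of them. Each name needs its own dict:

        # one object, three names: all three prints show the same dict
        dict_a = dict_b = dict_c = {}

        # three separate objects, one per name
        dict_a, dict_b, dict_c = {}, {}, {}
        dict_c['hello'] = 'goodbye'
        print dict_a  # {}
        print dict_b  # {}
        print dict_c  # {'hello': 'goodbye'}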

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code:

        reader = pickle.load(open('save.p', 'rb'))

    which on first reading looks like it would allocate a system file descriptor, read its contents, and then "leak" the open descriptor, since no handle remains on which to call close(). This got me wondering whether there is any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor, which led to the real question: what is the lifetime of the file object returned by the example code above? After that statement executes, does the object indeed become unreferenced, so that the fd will be subject to a real close(2) call at some future garbage-collection sweep? If so, is the example line good practice, or should one not count on the fd being released, at the risk of exhausting the kernel's per-process descriptor table?
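
    In CPython the temporary file object's reference count drops to zero as soon as the expression finishes evaluating, so the descriptor is closed immediately; on implementations with deferred garbage collection (Jython, PyPy) it can linger until a sweep. Making the lifetime explicit avoids relying on either behaviour (a sketch):

        import pickle

        # the descriptor's lifetime is explicit: it is closed when the
        # block exits, regardless of when the garbage collector runs
        with open('save.p', 'rb') as f:
            reader = pickle.load(f)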

  • Python Process won't call atexit

    - by Brian M. Hunt
    I'm trying to use atexit in a Process, but unfortunately it doesn't seem to work. Here's some example code:

        import time
        import atexit
        import logging
        import multiprocessing

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                logging.debug("%s Started" % self.name)

                @atexit.register
                def log_terminate():
                    # ever called?
                    logging.debug("%s Terminated!" % self.name)

                while True:
                    time.sleep(10)

        @atexit.register
        def log_exit():
            logging.debug("Main process terminated")

        logging.debug("Main process started")

        a = W()
        b = W()
        a.start()
        b.start()
        time.sleep(1)
        a.terminate()
        b.terminate()

    The output of this code is:

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:Main process terminated

    I would expect W.run's log_terminate() to be called when a.terminate() and b.terminate() are called, with output something like this (emphasis added):

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:W-1 Terminated!
        DEBUG:root:W-2 Terminated!
        DEBUG:root:Main process terminated

    Why isn't this working, and is there a better way to log a message (from the Process context) when a Process is terminated? Thank you for your input - it's much appreciated.
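
    Process.terminate() delivers SIGTERM on Unix, which kills the child without unwinding, so atexit handlers never fire (multiprocessing children also exit via os._exit, which bypasses atexit even on a normal return from run()). One workaround (a sketch, Unix-only) is to trap the signal inside run() and exit cleanly:

        import time
        import sys
        import signal
        import logging
        import multiprocessing

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                def log_terminate(signum, frame):
                    logging.debug("%s Terminated!" % self.name)
                    sys.exit(0)  # raise SystemExit so cleanup can run

                # terminate() sends SIGTERM; handle it explicitly
                signal.signal(signal.SIGTERM, log_terminate)
                logging.debug("%s Started" % self.name)
                while True:
                    time.sleep(10)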

  • Fast iterating over first n items of an iterable in python

    - by martinthenext
    Hello! I'm looking for a pythonic way of iterating over the first n items of an iterable, and it's quite important to do this as fast as possible. This is how I do it now:

        count = 0
        for item in iterable:
            do_something(item)
            count += 1
            if count >= n:
                break

    That doesn't seem neat to me. Another way of doing it is:

        for item in itertools.islice(iterable, n):
            do_something(item)

    This looks good; the question is, is it fast enough to use with generators? For example:

        pair_generator = lambda iterable: itertools.izip(*[iter(iterable)] * 2)
        for item in itertools.islice(pair_generator(iterable), n):
            do_something(item)

    Will it run as fast as the first method? Is there an easier way to do it?
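
    islice is implemented in C and adds no Python-level bookkeeping per item, so it composes well with generator pipelines. A quick way to check on your own data (a sketch using timeit; the million-element range is an arbitrary stand-in):

        import itertools
        import timeit

        n = 1000

        def manual(iterable):
            count = 0
            for item in iterable:
                count += 1
                if count >= n:
                    break

        def sliced(iterable):
            for item in itertools.islice(iterable, n):
                pass

        print timeit.timeit(lambda: manual(iter(xrange(10 ** 6))), number=1000)
        print timeit.timeit(lambda: sliced(iter(xrange(10 ** 6))), number=1000)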

  • Double linking array in Python

    - by cdecker
    Since I'm pretty new, this question will certainly sound stupid, but I have no idea how to approach it. I'm trying to take a list of nodes and, for each node, build an array of its predecessors and successors in the ordered array of all nodes. Currently my code looks like this:

        nodes = self.peers.keys()
        nodes.sort()
        peers = {}
        numPeers = len(nodes)
        for i in nodes:
            peers[i] = [self.coordinator]
        for i in range(0, len(nodes)):
            peers[nodes[i % numPeers]].append(nodes[(i + 1) % numPeers])
            peers[nodes[(i + 1) % numPeers]].append(nodes[i % numPeers])
            # peers[nodes[i % numPeers]].append(nodes[(i + 4) % numPeers])
            # peers[nodes[(i + 4) % numPeers]].append(nodes[i % numPeers])

    The last two (commented) lines should later be used to create a skip graph, but that's not really important here. The problem is that it doesn't work reliably: sometimes a predecessor or a successor is skipped and the next one is used instead, and so forth. Is this correct at all, or is there a better way to do this? Basically I need to get the array indices at certain offsets from each other. Any ideas?
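
    A more direct way to express "the neighbours at given index offsets in the sorted ring" (a standalone sketch; self.coordinator can be prepended to each list as in the code above):

        def ring_neighbours(nodes, offsets=(-1, +1)):
            # map each node to the nodes at the given index offsets in
            # the sorted ring, wrapping around with modulo
            ordered = sorted(nodes)
            n = len(ordered)
            return dict((node, [ordered[(i + off) % n] for off in offsets])
                        for i, node in enumerate(ordered))

        print ring_neighbours([30, 10, 20])
        # {10: [30, 20], 20: [10, 30], 30: [20, 10]}

        # for a skip graph, extend the offsets:
        # ring_neighbours(nodes, offsets=(-4, -1, +1, +4))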

  • Reading numeric Excel data as text using xlrd in Python

    - by Brian
    Hi guys, I am trying to read in an Excel file using xlrd, and I am wondering if there is a way to ignore the cell formatting used in the Excel file and just import all data as text. Here is the code I am using so far:

        import xlrd

        xls_file = 'xltest.xls'
        xls_workbook = xlrd.open_workbook(xls_file)
        xls_sheet = xls_workbook.sheet_by_index(0)

        raw_data = [[''] * xls_sheet.ncols for _ in range(xls_sheet.nrows)]
        raw_str = ''
        field_delim = ','
        text_delim = '"'

        for rnum in range(xls_sheet.nrows):
            for cnum in range(xls_sheet.ncols):
                raw_data[rnum][cnum] = str(xls_sheet.cell(rnum, cnum).value)

        for rnum in range(len(raw_data)):
            for cnum in range(len(raw_data[rnum])):
                if cnum == len(raw_data[rnum]) - 1:
                    field_delim = '\n'
                else:
                    field_delim = ','
                raw_str += text_delim + raw_data[rnum][cnum] + text_delim + field_delim

        final_csv = open('FINAL.csv', 'w')
        final_csv.write(raw_str)
        final_csv.close()

    This code is functional, but certain fields, such as zip codes, are imported as numbers and so pick up a decimal-zero suffix. For example, if there is a zip code of '79854' in the Excel file, it will be imported as '79854.0'. I tried to find a solution in the xlrd spec, but was unsuccessful.
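
    Excel stores every number as a float, so xlrd has no "as entered" text to hand back. One workaround (a sketch) is to check the cell type and strip the spurious ".0" from integer-valued floats:

        import xlrd

        def cell_text(sheet, rnum, cnum):
            # render a cell as text, undoing the float conversion that
            # Excel applies to all numbers
            cell = sheet.cell(rnum, cnum)
            if cell.ctype == xlrd.XL_CELL_NUMBER and cell.value == int(cell.value):
                return str(int(cell.value))  # 79854.0 -> '79854'
            return unicode(cell.value)

    Substituting cell_text(xls_sheet, rnum, cnum) for str(xls_sheet.cell(rnum, cnum).value) in the first loop above keeps zip codes like 79854 intact.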

  • Python: How efficient is substring extraction?

    - by Cameron
    I've got the entire contents of a text file (at least a few KB) in the string myStr. Will the following code create a copy of the string (less the first character) in memory?

        myStr = myStr[1:]

    I'm hoping it just refers to a different location in the same internal buffer. If not, is there a more efficient way to do this? Thanks!
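
    In CPython, slicing a str always copies; strings never share buffers (the one exception is the full slice s[:], which returns the very same object). For a zero-copy view in Python 2 there is buffer() (a sketch; the literal is a stand-in for the file contents):

        myStr = 'x' * 100000     # stand-in for the file contents

        copied = myStr[1:]       # allocates a new string of len - 1 bytes
        view = buffer(myStr, 1)  # zero-copy view into the same memory

        # str(view) == copied; in Python 3, memoryview plays the same
        # role, but only over bytes, not str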

  • Threading in python: retrieve return value when using target=

    - by Philipp Keller
    I want to get the "free memory" of a bunch of servers like this:

        def get_mem(servername):
            res = os.popen('ssh %s "grep MemFree /proc/meminfo | sed \'s/[^0-9]//g\'"' % servername)
            return res.read().strip()

    Since this can be threaded, I want to do something like this:

        import threading
        thread1 = threading.Thread(target=get_mem, args=("server01", ))
        thread1.start()

    But now: how can I access the return value(s) of the get_mem functions? Do I really need to go the full-fledged route of creating a class MemThread(threading.Thread) and overriding __init__ and run?
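
    No subclass is needed: the target can push its result onto a shared Queue (a sketch, assuming get_mem as defined above; the server names are placeholders):

        import os
        import threading
        import Queue

        def get_mem(servername):
            res = os.popen('ssh %s "grep MemFree /proc/meminfo | sed \'s/[^0-9]//g\'"' % servername)
            return res.read().strip()

        results = Queue.Queue()

        def worker(servername):
            # Queue.put is thread-safe, so no locking is needed
            results.put((servername, get_mem(servername)))

        threads = [threading.Thread(target=worker, args=(name,))
                   for name in ('server01', 'server02')]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        while not results.empty():
            print results.get()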

  • Confusion Matrix with number of classified/misclassified instances on it (Python/Matplotlib)

    - by Pinkie
    I am plotting a confusion matrix with matplotlib using the following code:

        from numpy import *
        import matplotlib.pyplot as plt
        from pylab import *

        conf_arr = [[33, 2, 0, 0, 0, 0, 0, 0, 0, 1, 3],
                    [3, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 4, 41, 0, 0, 0, 0, 0, 0, 0, 1],
                    [0, 1, 0, 30, 0, 6, 0, 0, 0, 0, 1],
                    [0, 0, 0, 0, 38, 10, 0, 0, 0, 0, 0],
                    [0, 0, 0, 3, 1, 39, 0, 0, 0, 0, 4],
                    [0, 2, 2, 0, 4, 1, 31, 0, 0, 0, 2],
                    [0, 1, 0, 0, 0, 0, 0, 36, 0, 2, 0],
                    [0, 0, 0, 0, 0, 0, 1, 5, 37, 5, 1],
                    [3, 0, 0, 0, 0, 0, 0, 0, 0, 39, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 38]]

        norm_conf = []
        for i in conf_arr:
            tmp_arr = []
            a = sum(i, 0)
            for j in i:
                tmp_arr.append(float(j) / float(a))
            norm_conf.append(tmp_arr)

        plt.clf()
        fig = plt.figure()
        ax = fig.add_subplot(111)
        res = ax.imshow(array(norm_conf), cmap=cm.jet, interpolation='nearest')
        cb = fig.colorbar(res)
        savefig("confmat.png", format="png")

    But I want the confusion matrix to show the numbers on it, like this graphic (the right-hand one): http://i48.tinypic.com/2e30kup.jpg How can I plot conf_arr on the graphic?
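
    ax.annotate can write each raw count at the centre of its cell; this snippet drops into the script above just before savefig (it uses the ax and conf_arr already defined there):

        # cell (row i, col j) sits at data coordinates (x=j, y=i) under
        # imshow, so annotate at (j, i) with centred alignment
        for i in range(len(conf_arr)):
            for j in range(len(conf_arr[i])):
                ax.annotate(str(conf_arr[i][j]), xy=(j, i),
                            horizontalalignment='center',
                            verticalalignment='center')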

  • Python and Gstreamer

    - by Seif Sallam
    Hi, I'm creating a streaming application using GStreamer with a TCP pipeline, and I have implemented start, pause, and stop. The problem is that I can't seek. I tried changing the playback position from the server side, then from the client side, and finally changing the value on both at the same time, but in all cases it doesn't work. I even tried pausing the playback before seeking, but nothing happens. I'm having this problem with both seeking and volume. Any help please? I've searched everywhere but couldn't find anything that worked. This is the code I use for seeking:

        self.pipeline.seek_simple(gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, time)
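
    A few things worth checking (a sketch against the old gst-python 0.10 API; whether they apply depends on the pipeline, which the question doesn't show): seek_simple returns False when the pipeline rejects the seek, the pipeline must have prerolled (PAUSED or PLAYING) before a seek can succeed, and every element in the chain must be seekable, which a plain TCP source generally is not:

        import gst

        def seek_to(pipeline, seconds):
            target = seconds * gst.SECOND  # positions are in nanoseconds
            ok = pipeline.seek_simple(gst.FORMAT_TIME,
                                      gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_KEY_UNIT,
                                      target)
            if not ok:
                print "seek rejected: the source is probably not seekable"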

  • sending instant messages through python (msn)

    - by code_by_night
    OK, I am well aware there are many other questions about this, but I have been searching and have yet to find a solid, proper answer that doesn't revolve around Jabber or something worse (no offence to Jabber users; I just don't want all the extras that come with it). I currently have msnp and twisted.words. I simply want to send and receive messages; I have read many examples that failed to work, and msnp is poorly documented. My preference is msnp, as it requires much less code; I'm not looking for something complicated. Using this code I can log in and view which of my friends are online (I can't send them messages, though):

        import msnp
        import time, threading

        msn = msnp.Session()
        msn.login('[email protected]', 'XXXXXX')
        msn.sync_friend_list()

        class MSN_Thread(threading.Thread):
            def run(self):
                msn.start_chat("[email protected]")  # this does not work
                while True:
                    msn.process()
                    time.sleep(1)

        start_msn = MSN_Thread()
        start_msn.start()

    I hope I have been clear enough; it's pretty late and my head is not in a clear state after all this MSN frustration. Edit: since it seems msnp is extremely outdated, could anyone recommend, with simple examples, how I could achieve this? I don't need anything fancy that requires other accounts.

  • hierarchical clustering on correlations in Python scipy/numpy?

    - by user248237
    How can I run hierarchical clustering on a correlation matrix in scipy/numpy? I have a matrix of 100 rows by 9 columns, and I'd like to hierarchically cluster by the correlations of each entry across the 9 conditions, using 1 - (Pearson correlation) as the distance for clustering. Assuming I have a numpy array X that contains the 100 x 9 matrix, how can I do this? I tried using hcluster, based on this example:

        Y = pdist(X, 'seuclidean')
        Z = linkage(Y, 'single')
        dendrogram(Z, color_threshold=0)

    However, 'seuclidean' is not what I want, since that's standardised Euclidean distance. Any ideas? Thanks.
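
    pdist supports exactly this distance: metric='correlation' computes 1 - (Pearson correlation) between rows. A sketch using scipy.cluster.hierarchy, which provides the same linkage/dendrogram API as hcluster (the random array stands in for the real data):

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, dendrogram

        X = np.random.rand(100, 9)  # stand-in for the 100 x 9 matrix

        Y = pdist(X, metric='correlation')  # condensed 1 - r distances
        Z = linkage(Y, method='single')
        dendrogram(Z, color_threshold=0)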
