Search Results

Search found 27205 results on 1089 pages for 'python imaging library'.

Page 159/1089

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code:

        reader = pickle.load(open('save.p', 'rb'))

    which upon first read looked like it would allocate a system file descriptor, read its contents, and then "leak" the open descriptor, for there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor, which led to the real question. What is the duration of the file object returned by the example code above? After that statement executes, does the object indeed become unreferenced, and therefore will the fd be subject to a real close(2) call at some future garbage collection sweep? If so, is the example line good practice, or should one not count on the fd being released, thus risking kernel per-process descriptor table exhaustion?
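
    A minimal sketch of the usual way to sidestep the question in application code (assuming the same save.p file): a with block closes the descriptor deterministically instead of relying on a garbage-collection sweep.

        import pickle

        # the file is closed as soon as the block exits, regardless of GC timing
        with open('save.p', 'rb') as f:
            reader = pickle.load(f)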

    Read the article

  • Python Process won't call atexit

    - by Brian M. Hunt
    I'm trying to use atexit in a Process, but unfortunately it doesn't seem to work. Here's some example code:

        import time
        import atexit
        import logging
        import multiprocessing

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                logging.debug("%s Started" % self.name)

                @atexit.register
                def log_terminate():
                    # ever called?
                    logging.debug("%s Terminated!" % self.name)

                while True:
                    time.sleep(10)

        @atexit.register
        def log_exit():
            logging.debug("Main process terminated")

        logging.debug("Main process started")

        a = W()
        b = W()
        a.start()
        b.start()
        time.sleep(1)
        a.terminate()
        b.terminate()

    The output of this code is:

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:Main process terminated

    I would expect W.run.log_terminate() to be called when a.terminate() and b.terminate() are called, and the output to be something like this (emphasis added):

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:W-1 Terminated!
        DEBUG:root:W-2 Terminated!
        DEBUG:root:Main process terminated

    Why isn't this working, and is there a better way to log a message (from the Process context) when a Process is terminated? Thank you for your input - it's much appreciated.
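
    Process.terminate() kills the child with SIGTERM, so the child never gets to run its atexit hooks. A hedged workaround (a sketch, POSIX-only, not the asker's code) is to install a SIGTERM handler inside run() and log from there:

        import sys
        import signal
        import logging
        import multiprocessing

        class W(multiprocessing.Process):
            def run(self):
                def on_sigterm(signum, frame):
                    logging.debug("%s Terminated!" % self.name)
                    sys.exit(0)                    # raise SystemExit in the child
                signal.signal(signal.SIGTERM, on_sigterm)
                logging.debug("%s Started" % self.name)
                while True:
                    signal.pause()                 # sleep until a signal arrives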

    Read the article

  • hierarchical clustering on correlations in Python scipy/numpy?

    - by user248237
    How can I run hierarchical clustering on a correlation matrix in scipy/numpy? I have a matrix of 100 rows by 9 columns, and I'd like to hierarchically cluster the entries by their correlations across the 9 conditions. I'd like to use 1 - Pearson correlation as the distance for clustering. Assuming I have a numpy array X that contains the 100 x 9 matrix, how can I do this? I tried using hcluster, based on this example:

        Y = pdist(X, 'seuclidean')
        Z = linkage(Y, 'single')
        dendrogram(Z, color_threshold=0)

    However, pdist is not what I want, since that's Euclidean distance. Any ideas? Thanks.
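
    For reference, a sketch of one way to get 1 - Pearson correlation distances: pdist already supports a 'correlation' metric, which computes exactly that, so only the metric string needs to change. The random X below just stands in for the real 100 x 9 array, and the scipy import paths shown are one option (the hcluster package exposes the same names).

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, dendrogram

        X = np.random.rand(100, 9)        # stand-in for the real 100 x 9 data

        Y = pdist(X, 'correlation')       # condensed matrix of 1 - Pearson correlation
        Z = linkage(Y, 'single')
        dendrogram(Z, color_threshold=0)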

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

        metadata.csv: contains an ID, followed by vendor name, a filename, etc.
        hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract out all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv.

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))

        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    P.S. The csv files are in the popular HashKeeper format, and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks to everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
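
    A hedged variant of that fix (assuming the same options object and column layout): a set gives the same constant-time membership test as the dict of dummy values, and streaming the matches avoids holding every result from the 1 GB file in memory at once.

        import csv

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = set(row[0] for row in entries if row[2] == options.vendor)

        for row in csv.reader(open(options.hashes, "rb")):
            if row[0] in stored_ids:
                print "%s,%s" % (row[2], row[4])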

    Read the article

  • Passing list and dictionary type parameter with Python

    - by prosseek
    When I run this code

        def func(x, y, *w, **z):
            print x
            print y
            if w:
                print w
            if z:
                print z
            else:
                print "None"

        func(10, 20, 1, 2, 3, {'k': 'a'})

    I get the result as follows.

        10
        20
        (1, 2, 3, {'k': 'a'})
        None

    But I expected as follows, I mean the list parameters (1, 2, 3) matching *w, and the dictionary matching **z.

        10
        20
        (1, 2, 3)
        {'k': 'a'}

    Q: What went wrong? How can I pass the list and dictionary as parameters?

    Added: func(10, 20, 10, 20, 30, k='a') seems to be working.
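
    A short sketch of the distinction: a dict passed as a plain positional argument is just one more positional value and lands in *w; it only fills **z when passed as name=value pairs or unpacked with the ** operator, and a list only spreads into *w when unpacked with *.

        def func(x, y, *w, **z):
            print x, y, w, z

        args = [1, 2, 3]
        kwargs = {'k': 'a'}

        func(10, 20, *args, **kwargs)   # prints: 10 20 (1, 2, 3) {'k': 'a'}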

    Read the article

  • Python - How to pickle yourself?

    - by Mark
    I want my class to implement Save and Load functions which simply do a pickle of the class. But apparently you cannot use 'self' in the fashion below. How can you do this?

        self = cPickle.load(f)
        cPickle.dump(self, f, 2)
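
    One common pattern (a sketch, with a made-up class name): dump self as an ordinary object in save(), and make load a classmethod that returns a new instance rather than trying to rebind self.

        import cPickle

        class Model(object):                    # hypothetical stand-in for the asker's class
            def save(self, path):
                with open(path, 'wb') as f:
                    cPickle.dump(self, f, 2)    # passing self as an argument is fine

            @classmethod
            def load(cls, path):
                with open(path, 'rb') as f:
                    return cPickle.load(f)      # returns the unpickled instance

        m = Model()
        m.save('save.p')
        m2 = Model.load('save.p')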

    Read the article

  • Web-based game in Python + Django and client browser polling

    - by ty
    I am creating a text-based game that implements a basic model in which multiple (10+) players interact with data and one moderator watches them and sets certain environmental statistics that affect gameplay. Recently I have begun to familiarize myself with Django. It seems to me that it would be an excellent tool for creating a game quickly, particularly because the nature of my game depends largely on sets of data (which lends itself quite well to a database). I am wondering how to "push" changes made by the game moderator to the players (for example, the moderator can decide to display an image to all players). The game is turn-based, not real-time, but certain messages need to be pushed out in roughly real-time. My thoughts: I could have each player's browser poll a status periodically (say, every 30 seconds) to see if there is a message from a moderator. But this forces a lag and means different players might receive it at different times. And reducing this interval to <10 seems like a bad idea for the server. Is there a better way to inform clients of changes? Would you suggest something other than using a web framework like Django? Thanks!
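
    If plain polling turns out to be acceptable, a minimal sketch of the server side is below. The model and view names are made up for illustration, and this is only one option; pushing updates in closer to real time would need long-polling/Comet machinery outside stock Django.

        # views.py -- hypothetical polling endpoint, not part of the question
        import json
        from django.http import HttpResponse
        from myapp.models import ModeratorMessage   # assumed model holding moderator updates

        def poll_messages(request, last_seen_id):
            msgs = ModeratorMessage.objects.filter(id__gt=int(last_seen_id)).order_by('id')
            payload = [{'id': m.id, 'text': m.text} for m in msgs]
            return HttpResponse(json.dumps(payload), mimetype='application/json')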

    Read the article

  • Problem building a complete binary tree of height 'h' in Python

    - by Jack
    Here is my code. The complete binary tree has 2^k nodes at depth k.

        class Node:
            def __init__(self, data):
                # initializes the data members
                self.left = None
                self.right = None
                self.data = data

        root = Node(data_root)

        def create_complete_tree():
            row = [root]
            for i in range(h):
                newrow = []
                for node in row:
                    left = Node(data1)
                    right = Node(data2)
                    node.left = left
                    node.right = right
                    newrow.append(left)
                    newrow.append(right)
                row = copy.deepcopy(newrow)

        def traverse_tree(node):
            if node == None:
                return
            else:
                traverse_tree(node.left)
                print node.data
                traverse_tree(node.right)

        create_complete_tree()
        print 'Node traversal'
        traverse_tree(root)

    The tree traversal only gives the data of root and its children. What am I doing wrong?
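
    The culprit is the copy.deepcopy: it replaces row with copies of the newly created nodes, so the next level gets attached to those copies rather than to the nodes reachable from root. A sketch of the fix (same names as above) is to keep the real references:

        def create_complete_tree():
            row = [root]
            for i in range(h):
                newrow = []
                for node in row:
                    node.left = Node(data1)
                    node.right = Node(data2)
                    newrow.append(node.left)
                    newrow.append(node.right)
                row = newrow      # no deepcopy -- keep the nodes that are actually in the tree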

    Read the article

  • Confusion Matrix with number of classified/misclassified instances on it (Python/Matplotlib)

    - by Pinkie
    I am plotting a confusion matrix with matplotlib with the following code:

        from numpy import *
        import matplotlib.pyplot as plt
        from pylab import *

        conf_arr = [[33,2,0,0,0,0,0,0,0,1,3],
                    [3,31,0,0,0,0,0,0,0,0,0],
                    [0,4,41,0,0,0,0,0,0,0,1],
                    [0,1,0,30,0,6,0,0,0,0,1],
                    [0,0,0,0,38,10,0,0,0,0,0],
                    [0,0,0,3,1,39,0,0,0,0,4],
                    [0,2,2,0,4,1,31,0,0,0,2],
                    [0,1,0,0,0,0,0,36,0,2,0],
                    [0,0,0,0,0,0,1,5,37,5,1],
                    [3,0,0,0,0,0,0,0,0,39,0],
                    [0,0,0,0,0,0,0,0,0,0,38]]

        norm_conf = []
        for i in conf_arr:
            a = 0
            tmp_arr = []
            a = sum(i, 0)
            for j in i:
                tmp_arr.append(float(j) / float(a))
            norm_conf.append(tmp_arr)

        plt.clf()
        fig = plt.figure()
        ax = fig.add_subplot(111)
        res = ax.imshow(array(norm_conf), cmap=cm.jet, interpolation='nearest')
        cb = fig.colorbar(res)
        savefig("confmat.png", format="png")

    But I want the confusion matrix to show the numbers on it, like this graphic (the right one): http://i48.tinypic.com/2e30kup.jpg How can I plot conf_arr on the graphic?
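
    A sketch of one way to overlay the counts, placed after the imshow call and before savefig: ax.text puts a label at each cell, where the x coordinate is the column index and the y coordinate is the row index.

        for i, row in enumerate(conf_arr):
            for j, count in enumerate(row):
                ax.text(j, i, str(count), ha='center', va='center', color='white')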

    Read the article

  • In Python BeautifulSoup How to move tags

    - by JJ
    I have a partially converted XML document in soup coming from HTML. After some replacement and editing in the soup, the body is essentially:

        <Text...></Text>   # This replaces <a href..> tags but automatically creates the </Text>
        <p class=norm ...</p>
        <p class=norm ...</p>
        <Text...></Text>
        <p class=norm ...</p>

    and so forth. I need to "move" the <p> tags to be children of <Text>, or know how to suppress the </Text>. I want:

        <Text...>
        <p class=norm ...</p>
        <p class=norm ...</p>
        </Text>
        <Text...>
        <p class=norm ...</p>
        </Text>

    I've tried using item.insert and item.append but I'm thinking there must be a more elegant solution.

        for item in soup.findAll(['p', 'span']):
            if item.name == 'span' and item.has_key('class') and item['class'] == 'section':
                xBCV = short_2_long(item._getAttrMap().get('value', ''))
                if currentnode:
                    pass
                currentnode = Tag(soup, 'Text', attrs=[('TypeOf', 'Section'), ... ])
                item.replaceWith(currentnode)  # works but creates end tag
            elif item.name == 'p' and item.has_key('class') and item['class'] == 'norm':
                childcdatanode = None
                for ahref in item.findAll('a'):
                    if childcdatanode:
                        pass
                    newlink = filter_hrefs(str(ahref))
                    childcdatanode = Tag(soup, newlink)
                    ahref.replaceWith(childcdatanode)

    Thanks
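
    A hedged sketch of the reparenting step (BeautifulSoup 3 API, run after the <Text> tags exist): detach each following <p class="norm"> sibling with extract() and re-append it inside the <Text> tag, which moves the node rather than copying it.

        for text_tag in soup.findAll('Text'):
            sibling = text_tag.nextSibling
            while getattr(sibling, 'name', None) == 'p':
                next_one = sibling.nextSibling   # remember it before extract() unlinks the node
                sibling.extract()                # detach the <p> from its current position
                text_tag.append(sibling)         # re-attach it as a child of <Text>
                sibling = next_one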

    Read the article

  • Reading numeric Excel data as text using xlrd in Python

    - by Brian
    Hi guys, I am trying to read in an Excel file using xlrd, and I am wondering if there is a way to ignore the cell formatting used in the Excel file and just import all data as text. Here is the code I am using so far:

        import xlrd

        xls_file = 'xltest.xls'
        xls_workbook = xlrd.open_workbook(xls_file)
        xls_sheet = xls_workbook.sheet_by_index(0)

        raw_data = [[''] * xls_sheet.ncols for _ in range(xls_sheet.nrows)]
        raw_str = ''
        feild_delim = ','
        text_delim = '"'

        for rnum in range(xls_sheet.nrows):
            for cnum in range(xls_sheet.ncols):
                raw_data[rnum][cnum] = str(xls_sheet.cell(rnum, cnum).value)

        for rnum in range(len(raw_data)):
            for cnum in range(len(raw_data[rnum])):
                if (cnum == len(raw_data[rnum]) - 1):
                    feild_delim = '\n'
                else:
                    feild_delim = ','
                raw_str += text_delim + raw_data[rnum][cnum] + text_delim + feild_delim

        final_csv = open('FINAL.csv', 'w')
        final_csv.write(raw_str)
        final_csv.close()

    This code is functional, but there are certain fields, such as a zip code, that are imported as numbers, so they have the decimal zero suffix. For example, if there is a zip code of '79854' in the Excel file, it will be imported as '79854.0'. I have tried finding a solution in the xlrd spec, but was unsuccessful.
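
    Excel stores numeric cells as floating-point values, so xlrd has no integer type to fall back on; the usual workaround is to inspect the cell type and drop the fractional part when it is zero. A sketch of a helper that could replace the str(...) call above:

        import xlrd

        def cell_as_text(sheet, rnum, cnum):
            cell = sheet.cell(rnum, cnum)
            if cell.ctype == xlrd.XL_CELL_NUMBER and cell.value == int(cell.value):
                return str(int(cell.value))     # 79854.0 -> '79854'
            return str(cell.value)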

    Read the article

  • How to display locale sensitive time format without seconds in python

    - by Tim Kersten
    I can output a locale sensitive time format using strftime('%X'), but this always includes seconds. How might I display this time format without seconds?

        >>> import locale
        >>> import datetime
        >>> locale.setlocale(locale.LC_ALL, 'en_IE.utf-8')
        'en_IE.utf-8'
        >>> print datetime.datetime.now().strftime('%X')
        12:22:43
        >>> locale.setlocale(locale.LC_ALL, 'zh_TW.utf-8')
        'zh_TW.utf-8'
        >>> print datetime.datetime.now().strftime('%X')
        12?22?58?

    The only way I can think of doing this is attempting to parse the output of locale.nl_langinfo(locale.T_FMT) and strip out the seconds bit, but that brings its own trickery.

        >>> print locale.nl_langinfo(locale.T_FMT)
        %H?%M?%S?
        >>> locale.setlocale(locale.LC_ALL, 'en_IE.utf-8')
        'en_IE.utf-8'
        >>> print locale.nl_langinfo(locale.T_FMT)
        %T
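
    A sketch of the strip-the-seconds approach, under the assumption that dropping %S (after expanding %T, which implies seconds) is acceptable. This is brittle for locales whose separators are multi-byte characters, so it is only a starting point rather than a general solution.

        import re
        import locale
        import datetime

        locale.setlocale(locale.LC_ALL, '')

        fmt = locale.nl_langinfo(locale.T_FMT)
        fmt = fmt.replace('%T', '%H:%M:%S')    # expand %T so %S can be stripped uniformly
        fmt = re.sub(r'\W?%S\W?', '', fmt)     # drop %S and one adjacent separator, if any
        print datetime.datetime.now().strftime(fmt)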

    Read the article

  • Python and Gstreamer

    - by Seif Sallam
    Hi, I'm creating a streaming application, using GStreamer with a TCP pipeline, and I implemented start, pause, and stop. But the problem is that I can't seek. I tried to change the playback position from the server side, then I tried on the client side, and finally tried to change it on both at the same time, but in all cases it doesn't work. I even tried to pause the playback and then continue, but nothing happens. I'm having this problem with both seek and volume. Any help please; I searched everywhere but I couldn't find anything that worked. This is the code that I use for seeking:

        self.pipeline.seek_simple(gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, time)
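
    A hedged sketch of a seek helper (gst-python 0.10 API assumed): flush, snap to a keyframe, and then wait for the pipeline to settle. Note that a plain TCP source is often not seekable at all, so the server side has to support repositioning for any client-side seek to succeed.

        import gst

        def seek_to(pipeline, seconds):
            target = int(seconds * gst.SECOND)
            pipeline.seek_simple(gst.FORMAT_TIME,
                                 gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_KEY_UNIT,
                                 target)
            pipeline.get_state(gst.SECOND)   # block until the seek has been processed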

    Read the article

  • Python: How efficient is substring extraction?

    - by Cameron
    I've got the entire contents of a text file (at least a few KB) in string myStr. Will the following code create a copy of the string (less the first character) in memory?

        myStr = myStr[1:]

    I'm hoping it just refers to a different location in the same internal buffer. If not, is there a more efficient way to do this? Thanks!
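
    For what it's worth, in CPython slicing a str always copies the selected bytes, so myStr[1:] allocates a new string. A sketch of a zero-copy alternative in Python 2, if a read-only view of the tail is enough for the later processing:

        tail = buffer(myStr, 1)      # a view starting at offset 1; no bytes are copied
        print len(tail)              # length of the remaining text
        print tail[:20]              # slicing a buffer materialises only that slice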

    Read the article

  • Python: deleting rows in a text file

    - by Jenny
    A sample of the text file I have is:

        1    -4.6    -4.6    -7.6
        2    -1.7    -3.8    -3.1
        3    -1.6    -1.6    -3.1

    The data is separated by tabs in the text file, and the first column indicates the position. I need to iterate through every value in the text file apart from column 0 and find the lowest value. Once the lowest value has been found, that value needs to be written to a new text file along with the column name and position. Column 0 has the name "position", column 1 "fifteen", column 2 "sixteen" and column 3 "seventeen". For example, the lowest value in the above data is "-7.6" and is in column 3, which has the name "seventeen". Therefore "-7.6", "seventeen" and its position value, which in this case is 1, need to be written to the new text file. I then need a number of rows deleted from the above text file. E.g. the lowest value above is "-7.6", is found at position "1", and is in column 3, which has the name "seventeen". I therefore need seventeen rows deleted from the text file, starting from and including position 1. So the column in which the lowest value is found denotes the number of rows that need to be deleted, and the position it is found at gives the start point of the deletion.
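
    A sketch under stated assumptions: the input file is tab-separated, column 0 is the position, columns 1-3 are named as above, and the column name spelled out as a number gives the count of rows to delete. The file names are made up.

        import csv

        names = {1: 'fifteen', 2: 'sixteen', 3: 'seventeen'}
        counts = {1: 15, 2: 16, 3: 17}

        rows = list(csv.reader(open('data.txt', 'rb'), delimiter='\t'))

        # overall minimum across columns 1..3, remembering where it was found
        min_val, min_row, min_col = min((float(r[c]), i, c)
                                        for i, r in enumerate(rows)
                                        for c in (1, 2, 3))

        with open('lowest.txt', 'w') as out:
            out.write("%s\t%s\t%s\n" % (min_val, names[min_col], rows[min_row][0]))

        # remove counts[min_col] rows, starting from and including the row of the minimum
        remaining = rows[:min_row] + rows[min_row + counts[min_col]:]
        with open('data.txt', 'w') as out:
            csv.writer(out, delimiter='\t').writerows(remaining)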

    Read the article

  • [Python] - Getting data from external program

    - by Kenny M.
    Hey, I need a method to get the data from an external editor.

        def _get_content():
            from subprocess import call
            f = open(file, "w")            # 'file' holds the path of a scratch file
            f.write(some_name)
            f.close()                      # close it before the editor opens it
            call(editor + " " + file, shell=True)
            f = open(file)
            x = f.readlines()
            # [snip]

    I personally think this is a very ugly way. You see, I need to interact with an external editor and get the data. Do you know any better approaches or have better ideas?
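
    One cleaner pattern (a sketch; the function name is made up) is to use tempfile for the scratch file, launch the editor without a shell, and read the result back when it exits:

        import os
        import subprocess
        import tempfile

        def edit_externally(initial_text, editor=None):
            editor = editor or os.environ.get('EDITOR', 'vi')
            fd, path = tempfile.mkstemp(suffix='.txt')
            try:
                os.write(fd, initial_text)
                os.close(fd)
                subprocess.call([editor, path])   # blocks until the editor exits
                return open(path).read()
            finally:
                os.remove(path)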

    Read the article

  • Python - Blackjack

    - by user335932
        def showCards():
            # SUM
            sum = playerCards[0] + playerCards[1]
            # Print cards
            print "Player's Hand: " + str(playerCards) + " : " + "sum"
            print "Dealer's Hand: " + str(compCards[0]) + " : " + "sum"

        compCards = [Deal(), Deal()]
        playerCards = [Deal(), Deal()]

    How can I add up the integer elements of a list containing two values? Under #SUM I get an error about not being able to combine lists and ints.
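
    If the elements of playerCards are plain integers, the built-in sum() adds them directly; naming a local variable sum hides that built-in, so a different name is safer. A short sketch:

        def show_cards(playerCards, compCards):
            player_total = sum(playerCards)          # adds the numeric elements of the list
            print "Player's Hand: %s : %d" % (playerCards, player_total)
            print "Dealer's Hand: %s : %d" % (compCards[0], sum(compCards))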

    Read the article

  • Accessing Class Variables from a List in a nice way in Python

    - by Dennis
    Suppose I have a list X = [a, b, c] where a, b, c are instances of the same class C. Now, all these instances a, b, c have a variable called v: a.v, b.v, c.v ... I simply want a list Y = [a.v, b.v, c.v]. Is there a nice command to do this? The best way I can think of is:

        Y = []
        for i in X:
            Y.append(i.v)

    But it doesn't seem very elegant, since this needs to be repeated for any given "v". Any suggestions? I couldn't figure out a way to use "map" to do this.
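
    A short sketch of the two usual spellings: a list comprehension, and map with operator.attrgetter, which takes the attribute name as a string so the "v" can vary at run time.

        import operator

        Y = [i.v for i in X]                  # list comprehension

        getter = operator.attrgetter('v')     # attribute name supplied as a string
        Y = map(getter, X)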

    Read the article

  • Python: Figure out local timezone

    - by Adam Matan
    I want to compare UTC timestamps from a log file with local timestamps. When creating the local datetime object, I use something like:

        >>> local_time = datetime.datetime(2010, 4, 27, 12, 0, 0, 0, tzinfo=pytz.timezone('Israel'))

    I want to find an automatic tool that would replace the tzinfo=pytz.timezone('Israel') part with the current local time zone. Any ideas?
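
    A sketch using dateutil, which exposes the operating system's configured zone as a tzinfo object (and, as an aside, pytz zones are normally attached with localize() rather than the tzinfo constructor argument):

        import datetime
        from dateutil import tz

        local_zone = tz.tzlocal()                     # whatever the OS says local time is
        local_time = datetime.datetime(2010, 4, 27, 12, 0, 0, 0, tzinfo=local_zone)
        print local_time.isoformat()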

    Read the article

  • strange behavior in python

    - by fsm
    The tags might not be accurate since I am not sure where the problem is. I have a module where I am trying to read some data from a socket and write the results into a file (append). It looks something like this (only relevant parts included):

        if __name__ == "__main__":
            # <some init code>
            for line in file:
                t = Thread(target=foo, args=(line,))
                t.start()
            while nThreads > 0:
                time.sleep(1)

    Here are the other modules:

        def foo(text):
            global countLock, nThreads
            countLock.acquire()
            nThreads += 1
            countLock.release()
            # connect to socket, send data, read response
            writeResults(text, result)
            countLock.acquire()
            nThreads -= 1
            countLock.release()

        def writeResults(text, result):
            # acquire file lock
            # append to file
            # release file lock

    Now here's the problem. Initially, I had a typo in the function 'foo', where I was passing the variable 'line' to writeResults instead of 'text'. 'line' is not defined in the function foo; it's defined in the main block, so I should have seen an error, but instead it worked fine, except that the data was appended to the file multiple times instead of being written just once, which is the required behavior, and which I got when I fixed the typo. My questions are: 1) Why didn't I get an error? 2) Why was the writeResults function being called multiple times?
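
    On question 1, a likely explanation (assuming foo lives in the same module as the main block): a name that is not bound locally is looked up in the module's globals at call time, so the loop variable 'line' is silently visible inside functions defined in that module. A tiny sketch of the effect:

        def show():
            print item          # not a local name; resolved in the module globals when called

        for item in ['a', 'b', 'c']:
            pass

        show()                  # prints 'c', the last value bound by the loop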

    Read the article

  • python- scipy optimization

    - by pear
    In scipy fmin_slsqp (Sequential Least Squares Quadratic Programming), I tried reading the code 'slsqp.py' provided with the scipy package, to find what are the criteria to get the exit_modes 0? I cannot find which statements in the code produce this exit mode? Please help me 'slsqp.py' code as follows, exit_modes = { -1 : "Gradient evaluation required (g & a)", 0 : "Optimization terminated successfully.", 1 : "Function evaluation required (f & c)", 2 : "More equality constraints than independent variables", 3 : "More than 3*n iterations in LSQ subproblem", 4 : "Inequality constraints incompatible", 5 : "Singular matrix E in LSQ subproblem", 6 : "Singular matrix C in LSQ subproblem", 7 : "Rank-deficient equality constraint subproblem HFTI", 8 : "Positive directional derivative for linesearch", 9 : "Iteration limit exceeded" } def fmin_slsqp( func, x0 , eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None, bounds = [], fprime = None, fprime_eqcons=None, fprime_ieqcons=None, args = (), iter = 100, acc = 1.0E-6, iprint = 1, full_output = 0, epsilon = _epsilon ): # Now do a lot of function wrapping # Wrap func feval, func = wrap_function(func, args) # Wrap fprime, if provided, or approx_fprime if not if fprime: geval, fprime = wrap_function(fprime,args) else: geval, fprime = wrap_function(approx_fprime,(func,epsilon)) if f_eqcons: # Equality constraints provided via f_eqcons ceval, f_eqcons = wrap_function(f_eqcons,args) if fprime_eqcons: # Wrap fprime_eqcons geval, fprime_eqcons = wrap_function(fprime_eqcons,args) else: # Wrap approx_jacobian geval, fprime_eqcons = wrap_function(approx_jacobian, (f_eqcons,epsilon)) else: # Equality constraints provided via eqcons[] eqcons_prime = [] for i in range(len(eqcons)): eqcons_prime.append(None) if eqcons[i]: # Wrap eqcons and eqcons_prime ceval, eqcons[i] = wrap_function(eqcons[i],args) geval, eqcons_prime[i] = wrap_function(approx_fprime, (eqcons[i],epsilon)) if f_ieqcons: # Inequality constraints provided via f_ieqcons ceval, f_ieqcons = wrap_function(f_ieqcons,args) if fprime_ieqcons: # Wrap fprime_ieqcons geval, fprime_ieqcons = wrap_function(fprime_ieqcons,args) else: # Wrap approx_jacobian geval, fprime_ieqcons = wrap_function(approx_jacobian, (f_ieqcons,epsilon)) else: # Inequality constraints provided via ieqcons[] ieqcons_prime = [] for i in range(len(ieqcons)): ieqcons_prime.append(None) if ieqcons[i]: # Wrap ieqcons and ieqcons_prime ceval, ieqcons[i] = wrap_function(ieqcons[i],args) geval, ieqcons_prime[i] = wrap_function(approx_fprime, (ieqcons[i],epsilon)) # Transform x0 into an array. 
x = asfarray(x0).flatten() # Set the parameters that SLSQP will need # meq = The number of equality constraints if f_eqcons: meq = len(f_eqcons(x)) else: meq = len(eqcons) if f_ieqcons: mieq = len(f_ieqcons(x)) else: mieq = len(ieqcons) # m = The total number of constraints m = meq + mieq # la = The number of constraints, or 1 if there are no constraints la = array([1,m]).max() # n = The number of independent variables n = len(x) # Define the workspaces for SLSQP n1 = n+1 mineq = m - meq + n1 + n1 len_w = (3*n1+m)*(n1+1)+(n1-meq+1)*(mineq+2) + 2*mineq+(n1+mineq)*(n1-meq) \ + 2*meq + n1 +(n+1)*n/2 + 2*m + 3*n + 3*n1 + 1 len_jw = mineq w = zeros(len_w) jw = zeros(len_jw) # Decompose bounds into xl and xu if len(bounds) == 0: bounds = [(-1.0E12, 1.0E12) for i in range(n)] elif len(bounds) != n: raise IndexError, \ 'SLSQP Error: If bounds is specified, len(bounds) == len(x0)' else: for i in range(len(bounds)): if bounds[i][0] > bounds[i][1]: raise ValueError, \ 'SLSQP Error: lb > ub in bounds[' + str(i) +'] ' + str(bounds[4]) xl = array( [ b[0] for b in bounds ] ) xu = array( [ b[1] for b in bounds ] ) # Initialize the iteration counter and the mode value mode = array(0,int) acc = array(acc,float) majiter = array(iter,int) majiter_prev = 0 # Print the header if iprint >= 2 if iprint >= 2: print "%5s %5s %16s %16s" % ("NIT","FC","OBJFUN","GNORM") while 1: if mode == 0 or mode == 1: # objective and constraint evaluation requird # Compute objective function fx = func(x) # Compute the constraints if f_eqcons: c_eq = f_eqcons(x) else: c_eq = array([ eqcons[i](x) for i in range(meq) ]) if f_ieqcons: c_ieq = f_ieqcons(x) else: c_ieq = array([ ieqcons[i](x) for i in range(len(ieqcons)) ]) # Now combine c_eq and c_ieq into a single matrix if m == 0: # no constraints c = zeros([la]) else: # constraints exist if meq > 0 and mieq == 0: # only equality constraints c = c_eq if meq == 0 and mieq > 0: # only inequality constraints c = c_ieq if meq > 0 and mieq > 0: # both equality and inequality constraints exist c = append(c_eq, c_ieq) if mode == 0 or mode == -1: # gradient evaluation required # Compute the derivatives of the objective function # For some reason SLSQP wants g dimensioned to n+1 g = append(fprime(x),0.0) # Compute the normals of the constraints if fprime_eqcons: a_eq = fprime_eqcons(x) else: a_eq = zeros([meq,n]) for i in range(meq): a_eq[i] = eqcons_prime[i](x) if fprime_ieqcons: a_ieq = fprime_ieqcons(x) else: a_ieq = zeros([mieq,n]) for i in range(mieq): a_ieq[i] = ieqcons_prime[i](x) # Now combine a_eq and a_ieq into a single a matrix if m == 0: # no constraints a = zeros([la,n]) elif meq > 0 and mieq == 0: # only equality constraints a = a_eq elif meq == 0 and mieq > 0: # only inequality constraints a = a_ieq elif meq > 0 and mieq > 0: # both equality and inequality constraints exist a = vstack((a_eq,a_ieq)) a = concatenate((a,zeros([la,1])),1) # Call SLSQP slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw) # Print the status of the current iterate if iprint > 2 and the # major iteration has incremented if iprint >= 2 and majiter > majiter_prev: print "%5i %5i % 16.6E % 16.6E" % (majiter,feval[0], fx,linalg.norm(g)) # If exit mode is not -1 or 1, slsqp has completed if abs(mode) != 1: break majiter_prev = int(majiter) # Optimization loop complete. 
Print status if requested if iprint >= 1: print exit_modes[int(mode)] + " (Exit mode " + str(mode) + ')' print " Current function value:", fx print " Iterations:", majiter print " Function evaluations:", feval[0] print " Gradient evaluations:", geval[0] if not full_output: return x else: return [list(x), float(fx), int(majiter), int(mode), exit_modes[int(mode)] ]

    Read the article

  • Python - Nested List to Tab Delimited File?

    - by Seafoid
    Hi, I have a nested list comprising ~30,000 sub-lists, each with three entries, e.g., nested_list = [['x', 'y', 'z'], ['a', 'b', 'c']]. I wish to create a function to output this data structure in a tab delimited format, e.g.,

        x    y    z
        a    b    c

    Any help greatly appreciated! Thanks in advance, Seafoid.
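
    A sketch using the csv module, which handles the delimiters and newlines for a list of rows directly (the output file name is made up):

        import csv

        def write_tab_delimited(nested_list, path):
            with open(path, 'wb') as f:
                csv.writer(f, delimiter='\t').writerows(nested_list)

        write_tab_delimited([['x', 'y', 'z'], ['a', 'b', 'c']], 'out.txt')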

    Read the article
