Search Results

Search found 99 results on 4 pages for 'xrange'.

Page 3 of 4

  • Idiomatic Python: 'times' loop

    - by perimosocordiae
    Say I have a function foo that I want to call n times. In Ruby, I would write:

        n.times { foo }

    In Python, I could write:

        for _ in xrange(n):
            foo()

    But that seems like a hacky way of doing things. My question: is there an idiomatic way of doing this in Python?
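
    A minimal sketch of the common options (assuming Python 2, to match the question) - the xrange loop is in fact the accepted idiom, and itertools.repeat is the usual alternative when the loop counter should not appear at all:

        import itertools

        def foo():
            print "called"

        n = 3

        # The question's version is already idiomatic Python:
        for _ in xrange(n):
            foo()

        # Alternative: repeat(None, n) yields None n times without
        # producing integers along the way.
        for _ in itertools.repeat(None, n):
            foo()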

  • Ruby: Continue a loop after catching an exception

    - by Santa
    Basically, I want to do something like this (in Python, or similar imperative languages):

        for i in xrange(1, 5):
            try:
                do_something_that_might_raise_exceptions(i)
            except:
                continue  # continue the loop at i = i + 1

    How do I do this in Ruby? I know there are the redo and retry keywords, but they seem to re-execute the "try" block, instead of continuing the loop:

        for i in 1..5
          begin
            do_something_that_might_raise_exceptions(i)
          rescue
            retry  # do_something_* again, with the same i
          end
        end
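
    A hedged Ruby sketch (not quoted from the thread): `next` inside a rescue clause resumes the loop with the following i, which matches Python's `continue`:

        (1..5).each do |i|
          begin
            do_something_that_might_raise_exceptions(i)
          rescue
            next  # continue the loop at i = i + 1
          end
        end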

  • Memory efficient int-int dict in Python

    - by Bolo
    Hi, I need a memory-efficient int-int dict in Python that would support the following operations in O(log n) time:

        d[k] = v  # replace if present
        v = d[k]  # None or a negative number if not present

    I need to hold ~250M pairs, so it really has to be tight. Do you happen to know a suitable implementation (Python 2.7)?

    EDIT: Removed impossible requirement and other nonsense. Thanks, Craig and Kylotan! To rephrase, here's a trivial int-int dictionary with 1M pairs:

        >>> import random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> d = {}
        >>> for _ in xrange(1000000):
        ...     d[random.randint(0, sys.maxint)] = random.randint(0, sys.maxint)
        ...
        >>> h.heap()
        Partition of a set of 1999530 objects. Total size = 49161112 bytes.
         Index   Count   %     Size   % Cumulative  % Kind (class / dict of class)
             0       1   0 25165960  51   25165960  51 dict (no owner)
             1 1999521 100 23994252  49   49160212 100 int

    On average, a pair of integers uses 49 bytes. Here's an array of 2M integers:

        >>> import array, random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> a = array.array('i')
        >>> for _ in xrange(2000000):
        ...     a.append(random.randint(0, sys.maxint))
        ...
        >>> h.heap()
        Partition of a set of 14 objects. Total size = 8001108 bytes.
         Index   Count   %     Size   % Cumulative  % Kind (class / dict of class)
             0       1   7  8000028 100    8000028 100 array.array

    On average, a pair of integers uses 8 bytes. I accept that 8 bytes/pair in a dictionary is rather hard to achieve in general. Rephrased question: is there a memory-efficient implementation of an int-int dictionary that uses considerably less than 49 bytes/pair?
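
    One hedged direction, a sketch rather than a tested answer: keep two parallel array('l') arrays sorted by key. Lookup is O(log n) via bisect at roughly 16 bytes per pair on a 64-bit build; the catch is that inserting a new key is O(n), so this fits a build-once, read-mostly workload:

        import array, bisect

        class IntIntDict(object):
            def __init__(self):
                self._keys = array.array('l')    # 8 bytes per key
                self._values = array.array('l')  # 8 bytes per value

            def __setitem__(self, k, v):
                i = bisect.bisect_left(self._keys, k)
                if i < len(self._keys) and self._keys[i] == k:
                    self._values[i] = v          # replace if present
                else:
                    self._keys.insert(i, k)      # O(n) shift on new keys
                    self._values.insert(i, v)

            def __getitem__(self, k):
                i = bisect.bisect_left(self._keys, k)
                if i < len(self._keys) and self._keys[i] == k:
                    return self._values[i]
                return None                      # None if not present

        d = IntIntDict()
        d[42] = 7
        print d[42], d[13]                       # 7 None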

  • Static lock in Python?

    - by roddik
    Hello. I've got the following code:

        import time
        import threading

        class BaseWrapper:
            # static class lock
            lock = threading.Lock()

            @staticmethod
            def synchronized_def():
                BaseWrapper.lock.acquire()
                time.sleep(5)
                BaseWrapper.lock.release()

        def test():
            print time.ctime()

        if __name__ is '__main__':
            for i in xrange(10):
                threading.Thread(target = test).start()

    I want to have a method synchronized using a static lock. However, the above code prints the same time ten times, so it isn't really locking. How can I fix it? TIA
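
    For contrast, a hedged sketch of a version that does serialize the threads. The posted code starts its threads on test, which never touches the lock; the lock only has an effect when the threaded target is the method that acquires it:

        import time
        import threading

        class BaseWrapper(object):
            lock = threading.Lock()              # shared, class-level lock

            @staticmethod
            def synchronized_def():
                with BaseWrapper.lock:           # released even on error
                    print time.ctime()
                    time.sleep(1)

        if __name__ == '__main__':               # note '==', not 'is'
            for i in xrange(3):
                threading.Thread(target=BaseWrapper.synchronized_def).start()

    Run it and the three timestamps come out roughly a second apart, one per lock holder.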

  • Using fft2 with reshaping for an RGB filter

    - by Mahmoud Aladdin
    I want to apply a filter to an image, for example the blurring filter:

        [[1/9.0, 1/9.0, 1/9.0],
         [1/9.0, 1/9.0, 1/9.0],
         [1/9.0, 1/9.0, 1/9.0]]

    Also, I'd like to use the fact that convolution in the spatial domain is equivalent to multiplication in the frequency domain. So my algorithm will be: load the image, create the filter, convert both filter and image to the frequency domain, multiply them, then convert the output back to the spatial domain, and that should be the required output.

    The following is the basic code I use. The image is loaded and displayed as a cv.cvmat object. Image is a class of my creation; it has a member image which is a scipy.matrix object, toFrequencyDomain(size = None) uses spf.fftshift(spf.fft2(self.image, size)) where spf is scipy.fftpack, and dotMultiply(img) uses scipy.multiply(self.image, image).

        f = Image.fromMatrix([[1/9.0, 1/9.0, 1/9.0],
                              [1/9.0, 1/9.0, 1/9.0],
                              [1/9.0, 1/9.0, 1/9.0]])
        lena = Image.fromFile("Test/images/lena.jpg")
        print lena.image.shape
        lenaf = lena.toFrequencyDomain(lena.image.shape)
        ff = f.toFrequencyDomain(lena.image.shape)
        lenafm = lenaf.dotMultiplyImage(ff)
        lenaff = lenafm.toTimeDomain()
        lena.display()
        lenaff.display()

    The previous code works pretty well if I tell OpenCV to load the image as GRAY_SCALE. However, if I let the image be loaded in color, lena.image.shape will be (512, 512, 3), and scipy.fftpack.fft2 gives me an error saying "When given, Shape and Axes should be of same length". What I tried next was converting my filter to 3-D, as:

        [[[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]],
         [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]],
         [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]]

    And, not knowing what the axes argument does, I added it with random values such as (-2, -1, -1), (-1, -1, -2), etc. until it gave me the correct filter output shape for the dotMultiply to work. But of course the values weren't correct; things were totally worse. My final trial was using the fft2 function on each of the three 2-D component matrices, and then re-making the 3-D one, using the following code:

        # Splitting the 3-D matrix into three 2-D matrices.
        for i, row in enumerate(self.image):
            r.append(list())
            g.append(list())
            b.append(list())
            for pixel in row:
                r[i].append(pixel[0])
                g[i].append(pixel[1])
                b[i].append(pixel[2])
        rfft = spf.fftshift(spf.fft2(r, size))
        gfft = spf.fftshift(spf.fft2(g, size))
        bfft = spf.fftshift(spf.fft2(b, size))
        newImage.image = sp.asarray([[[rfft[i][j], gfft[i][j], bfft[i][j]]
                                      for j in xrange(len(rfft[i]))]
                                     for i in xrange(len(rfft))])
        return newImage

    Any help on what I did wrong, or on how I can achieve this for both grayscale and coloured pictures?
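
    A hedged numpy sketch of the per-channel route (the helper names are illustrative, not the poster's Image API): slice out each colour plane, transform it with fft2, multiply by the zero-padded kernel spectrum, and invert - numpy slicing replaces the hand-built r/g/b lists:

        import numpy as np
        import scipy.fftpack as spf

        def filter_channel(channel, kernel):
            shape = channel.shape
            F = spf.fft2(channel, shape)      # channel spectrum
            H = spf.fft2(kernel, shape)       # kernel spectrum, zero-padded
            return np.real(spf.ifft2(F * H))  # back to the spatial domain

        def filter_rgb(image, kernel):
            # image has shape (rows, cols, 3)
            return np.dstack([filter_channel(image[:, :, c], kernel)
                              for c in range(3)])

        kernel = np.ones((3, 3)) / 9.0        # the blurring filter
        img = np.random.rand(8, 8, 3)         # stand-in for the lena array
        print filter_rgb(img, kernel).shape   # (8, 8, 3)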

  • Core Plot: x-axis labels not plotted when using scaleToFitPlots

    - by AlexR
    Problem: I can't get Core Plot (1.1) to plot automatic labels for my x-axis when using autoscaling ([plotSpace scaleToFitPlots:[graph allPlots]]).

    What I have tried: I changed the values for the offsets and paddings, but this did not change the result. However, when turning autoscaling off (not using [plotSpace scaleToFitPlots:[graph allPlots]]) and setting the y scale manually, the automatic labeling of the x-axis works.

    Question: Is there a bug in Core Plot, or what did I do wrong? I would appreciate any help! Thank you! This is how I have set up my chart:

        CPTBarPlot *barPlot = [CPTBarPlot tubularBarPlotWithColor:[CPTColor blueColor]
                                                   horizontalBars:NO];
        barPlot.baseValue = CPTDecimalFromInt(0);
        barPlot.barOffset = CPTDecimalFromFloat(0.0f); // CPTDecimalFromFloat(0.5f);
        barPlot.barWidth = CPTDecimalFromFloat(0.4f);
        barPlot.barCornerRadius = 4;
        barPlot.labelOffset = 5;
        barPlot.dataSource = self;
        barPlot.delegate = self;

        graph = [[CPTXYGraph alloc] initWithFrame:self.view.bounds];
        self.hostView.hostedGraph = graph;
        graph.paddingLeft = 40.0f;
        graph.paddingTop = 30.0f;
        graph.paddingRight = 30.0f;
        graph.paddingBottom = 50.0f;
        [graph addPlot:barPlot];
        graph.plotAreaFrame.masksToBorder = NO;
        graph.plotAreaFrame.cornerRadius = 0.0f;
        graph.plotAreaFrame.borderLineStyle = borderLineStyle;

        double xAxisStart = 0;
        CPTXYAxisSet *xyAxisSet = (CPTXYAxisSet *)graph.axisSet;
        CPTXYAxis *xAxis = xyAxisSet.xAxis;
        CPTMutableLineStyle *lineStyle = [xAxis.axisLineStyle mutableCopy];
        lineStyle.lineCap = kCGLineCapButt;
        xAxis.axisLineStyle = lineStyle;
        xAxis.majorTickLength = 10;
        xAxis.orthogonalCoordinateDecimal = CPTDecimalFromDouble(yAxisStart);
        xAxis.paddingBottom = 5;
        xyAxisSet.delegate = self;
        xAxis.delegate = self;
        xAxis.labelOffset = 0;
        xAxis.labelingPolicy = CPTAxisLabelingPolicyAutomatic;

        [plotSpace scaleToFitPlots:[graph allPlots]];
        CPTMutablePlotRange *yRange = plotSpace.yRange.mutableCopy;
        [yRange expandRangeByFactor:CPTDecimalFromDouble(1.3)];
        plotSpace.yRange = yRange;
        NSInteger xLength = CPTDecimalIntegerValue(plotSpace.xRange.length) + 1;
        plotSpace.xRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromDouble(xAxisStart)
                                                        length:CPTDecimalFromDouble(xLength)];

  • Project Euler #18 - how to brute force all possible paths in tree-like structure using Python?

    - by euler user
    I am trying to learn Python the Atlantic way and am stuck on Project Euler #18. All of the stuff I can find on the web (and there's a LOT more googling that happened beyond that) is some variation on 'well you COULD brute force it, but here's a more elegant solution'... I get it, I totally do. There are really neat solutions out there, and I look forward to the day when the phrase 'acyclic graph' conjures up something more than a hazy, 1 megapixel resolution in my head. But I need to walk before I run here, see the state, and toy around with the brute-force answer.

    So, question: how do I generate (enumerate?) all valid paths for the triangle in Project Euler #18 and store them in an appropriate Python data structure? (A list of lists is my initial inclination.) I don't want the answer - I want to know how to brute force all the paths and store them in a data structure.

    Here's what I've got. I'm definitely looping over the data set wrong. The desired behavior would be to go 'depth first(?)' rather than just looping over each row ineffectually. I read ch. 3 of Norvig's book but couldn't translate the pseudo-code. I tried reading over the AIMA Python library for ch. 3, but it makes too many leaps.

        triangle = [
            [75],
            [95, 64],
            [17, 47, 82],
            [18, 35, 87, 10],
            [20, 4, 82, 47, 65],
            [19, 1, 23, 75, 3, 34],
            [88, 2, 77, 73, 7, 63, 67],
            [99, 65, 4, 28, 6, 16, 70, 92],
            [41, 41, 26, 56, 83, 40, 80, 70, 33],
            [41, 48, 72, 33, 47, 32, 37, 16, 94, 29],
            [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],
            [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],
            [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48],
            [63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31],
            [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23],
        ]

        def expand_node(r, c):
            return [[r+1, c+0], [r+1, c+1]]

        all_paths = []
        my_path = []
        for i in xrange(0, len(triangle)):
            for j in xrange(0, len(triangle[i])):
                print 'row ', i, ' and col ', j, ' value is ', triangle[i][j]
                ??my_path = somehow chain these together???
                if my_path not in all_paths:
                    all_paths.append(my_path)

    Answers that avoid external libraries (like itertools) preferred.
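
    A hedged sketch of one brute-force enumeration (no itertools): every root-to-base path is a sequence of stay/right choices, so a triangle of depth d has 2**(d-1) paths and a single counter can enumerate them, reading one choice bit per row:

        def all_paths(triangle):
            depth = len(triangle)
            paths = []
            for choices in xrange(2 ** (depth - 1)):
                col, path = 0, [triangle[0][0]]
                for row in xrange(1, depth):
                    col += (choices >> (row - 1)) & 1  # 0 = down, 1 = down-right
                    path.append(triangle[row][col])
                paths.append(path)
            return paths

        small = [[75], [95, 64], [17, 47, 82]]
        for p in all_paths(small):
            print p, sum(p)   # four paths for a depth-3 triangle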

  • speed of map() vs. list comprehension vs. numpy vectorized function in python

    - by mcstrother
    I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing a:

        a = [foo(i) for i in xrange(100)]

        a = map(foo, range(100))

        vfoo = numpy.vectorize(foo)
        a = vfoo(range(100))

    (I don't care whether the output is a list or a numpy array.) Is there some other, better way of doing this? Thanks.
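
    A hedged way to settle it empirically (foo below is a cheap stand-in; the question's foo is expensive, in which case per-call work dominates and the three variants differ only marginally - numpy.vectorize in particular is a convenience loop, not true vectorization):

        import timeit
        import numpy

        def foo(i):
            return i * i                 # stand-in for the expensive foo

        vfoo = numpy.vectorize(foo)
        setup = "from __main__ import foo, vfoo"

        print timeit.timeit("[foo(i) for i in xrange(100)]", setup, number=1000)
        print timeit.timeit("map(foo, xrange(100))", setup, number=1000)
        print timeit.timeit("vfoo(range(100))", setup, number=1000)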


  • Incremental PCA

    - by smichak
    Hi, lately I've been looking into an implementation of an incremental PCA algorithm in Python. I couldn't find something that would meet my needs, so I did some reading and implemented an algorithm I found in a paper. Here is the module's code - the paper it is based on is cited in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha

        #!/usr/bin/env python
        """Incremental PCA calculation module.

        Based on P. Hall, D. Marshall and R. Martin, "Incremental Eigenanalysis
        for Classification", British Machine Vision Conference, volume 1,
        pages 286-295, September 1998.

        Principal components are updated sequentially as new observations are
        introduced. Each new observation (x) is projected on the eigenspace
        spanned by the current principal components (U) and the residual vector
        (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]).
        The new principal components are then rotated by a rotation matrix (R)
        whose columns are the eigenvectors of the transformed covariance matrix
        (D = U'.T*C*U') to yield p + 1 principal components. From those, only
        the first p are selected.
        """

        __author__ = "Micha Kalfon"

        import numpy as np

        _ZERO_THRESHOLD = 1e-9  # Everything below this is zero

        class IPCA(object):
            """Incremental PCA calculation object.

            General Parameters:
                m - Number of variables per observation
                n - Number of observations
                p - Dimension to which the data should be reduced
            """

            def __init__(self, m, p):
                """Creates an incremental PCA object for m-dimensional
                observations in order to reduce them to a p-dimensional
                subspace.

                @param m: Number of variables per observation.
                @param p: Number of principal components.
                @return: An IPCA object.
                """
                self._m = float(m)
                self._n = 0.0
                self._p = float(p)
                self._mean = np.matrix(np.zeros((m, 1), dtype=np.float64))
                self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64))
                self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64))
                self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64))

            def update(self, x):
                """Updates with a new observation vector x.

                @param x: Next observation as a column vector (m x 1).
                """
                m = self._m
                n = self._n
                p = self._p
                mean = self._mean
                C = self._covariance
                U = self._eigenvectors
                E = self._eigenvalues

                if type(x) is not np.matrix or x.shape != (m, 1):
                    raise TypeError('Input is not a matrix (%d, 1)' % int(m))

                # Update covariance matrix and mean vector and centralize
                # input around new mean
                oldmean = mean
                mean = (n*mean + x) / (n + 1.0)
                C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0)
                x -= mean

                # Project new input on current p-dimensional subspace and
                # calculate the normalized residual vector
                g = U.T*x
                r = x - (U*g)
                r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r)

                # Extend the transformation matrix with the residual vector
                # and find the rotation matrix by solving the eigenproblem
                # DR=RE
                U = np.concatenate((U, r), 1)
                D = U.T*C*U
                (E, R) = np.linalg.eigh(D)

                # Sort eigenvalues and eigenvectors from largest to smallest
                # to get the rotation matrix R
                sorter = list(reversed(E.argsort(0)))
                E = E[sorter]
                R = R[:,sorter]

                # Apply the rotation matrix
                U = U*R

                # Select only p largest eigenvectors and values and update
                # state
                self._n += 1.0
                self._mean = mean
                self._covariance = C
                self._eigenvectors = U[:, 0:p]
                self._eigenvalues = E[0:p]

            @property
            def components(self):
                """Returns a matrix with the current principal components as
                columns.
                """
                return self._eigenvectors

            @property
            def variances(self):
                """Returns a list with the appropriate variance along each
                principal component.
                """
                return self._eigenvalues

        def _is_zero(x):
            """Return a boolean indicating whether the given vector is a zero
            vector up to a threshold.
            """
            return np.fabs(x).min() < _ZERO_THRESHOLD

        if __name__ == '__main__':
            import sys

            def pca_svd(X):
                X = X - X.mean(0).repeat(X.shape[0], 0)
                [_, _, V] = np.linalg.svd(X)
                return V

            N = 1000
            obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)])

            V = pca_svd(obs)
            print V[0:2]

            pca = IPCA(obs.shape[1], 2)
            for i in xrange(obs.shape[0]):
                x = obs[i,:].transpose()
                pca.update(x)

            U = pca.components
            print U

  • Project Euler 14: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 14. As always, any feedback is welcome.

        # Euler 14
        # http://projecteuler.net/index.php?section=problems&id=14
        # The following iterative sequence is defined for the set
        # of positive integers:
        #   n -> n/2 (n is even)
        #   n -> 3n + 1 (n is odd)
        # Using the rule above and starting with 13, we generate
        # the following sequence:
        #   13 40 20 10 5 16 8 4 2 1
        # It can be seen that this sequence (starting at 13 and
        # finishing at 1) contains 10 terms. Although it has not
        # been proved yet (Collatz Problem), it is thought that all
        # starting numbers finish at 1. Which starting number,
        # under one million, produces the longest chain?
        # NOTE: Once the chain starts the terms are allowed to go
        # above one million.

        import time
        start = time.time()

        def collatz_length(n):
            # 0 and 1 return self as length
            if n <= 1:
                return n
            length = 1
            while (n != 1):
                if (n % 2 == 0):
                    n /= 2
                else:
                    n = 3*n + 1
                length += 1
            return length

        starting_number, longest_chain = 1, 0
        for x in xrange(1, 1000001):
            l = collatz_length(x)
            if l > longest_chain:
                starting_number, longest_chain = x, l

        print starting_number
        print longest_chain

        # Slow 31 seconds
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')
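
    A hedged follow-up sketch (not the post's code): memoizing chain lengths in a dict lets every sequence stop as soon as it reaches a number already seen, which removes most of the 31 seconds:

        cache = {1: 1}

        def collatz_length(n):
            if n in cache:
                return cache[n]
            if n % 2 == 0:
                length = collatz_length(n // 2) + 1
            else:
                length = collatz_length(3 * n + 1) + 1
            cache[n] = length
            return length

        # prints the starting number with the longest chain
        print max(xrange(1, 1000000), key=collatz_length)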

  • Project Euler 12: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 12. As always, any feedback is welcome.

        # Euler 12
        # http://projecteuler.net/index.php?section=problems&id=12
        # The sequence of triangle numbers is generated by adding
        # the natural numbers. So the 7th triangle number would be
        # 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms
        # would be:
        #   1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
        # Let us list the factors of the first seven triangle
        # numbers:
        #    1: 1
        #    3: 1,3
        #    6: 1,2,3,6
        #   10: 1,2,5,10
        #   15: 1,3,5,15
        #   21: 1,3,7,21
        #   28: 1,2,4,7,14,28
        # We can see that 28 is the first triangle number to have
        # over five divisors. What is the value of the first
        # triangle number to have over five hundred divisors?

        import time
        start = time.time()

        from math import sqrt

        def divisor_count(x):
            count = 2  # itself and 1
            for i in xrange(2, int(sqrt(x)) + 1):
                if ((x % i) == 0):
                    if (i != sqrt(x)):
                        count += 2
                    else:
                        count += 1
            return count

        def triangle_generator():
            i = 1
            while True:
                yield int(0.5 * i * (i + 1))
                i += 1

        triangles = triangle_generator()
        answer = 0
        while True:
            num = triangles.next()
            if (divisor_count(num) >= 501):
                answer = num
                break

        print answer
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')
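
    A hedged alternative sketch: T(n) = n*(n+1)/2 with n and n+1 coprime, and the divisor function is multiplicative, so d(T(n)) = d(n/2)*d(n+1) when n is even and d(n)*d((n+1)/2) when n is odd. Trial division then runs over two numbers half the size of the triangle number:

        from math import sqrt

        def divisor_count(x):
            count = 0
            for i in xrange(1, int(sqrt(x)) + 1):
                if x % i == 0:
                    count += 1 if i * i == x else 2
            return count

        n = 1
        while True:
            if n % 2 == 0:
                d = divisor_count(n // 2) * divisor_count(n + 1)
            else:
                d = divisor_count(n) * divisor_count((n + 1) // 2)
            if d > 500:
                print n * (n + 1) // 2   # first triangle number with >500 divisors
                break
            n += 1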

  • Memory limiting solutions for greedy applications that can crash OS?

    - by Hooked
    I use my computer for scientific programming. It has a healthy 8GB of RAM and 12GB of swap space. Often, as my problems have gotten larger, I exceed all of the available RAM. Rather than crashing (which would be preferred), Ubuntu starts loading everything into swap, including Unity and any open terminals. If I don't catch a runaway program in time, there is nothing I can do but wait - it takes 4-5 minutes to switch to a command prompt (e.g. Ctrl-Alt-F2) where I can kill the offending process. Since my own stupidity is out of scope of this forum, how can I prevent Ubuntu from crashing via thrashing when I use up all of the available memory with a single offending program?

    At-home experiment*! Open a terminal, launch python and, if you have numpy installed, try this:

        >>> import numpy
        >>> [numpy.zeros((10**4, 10**4)) for _ in xrange(50)]

    * Warning: may have adverse effects; monitor the process via iotop or top to kill it in time. If not, I'll see you after your reboot.
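
    A hedged self-protection sketch (the 4 GiB figure is illustrative): resource.setrlimit caps the interpreter's own address space, so a runaway allocation dies with MemoryError instead of dragging the desktop into swap. `ulimit -v` in the launching shell does the same from outside:

        import resource

        GIB = 1024 ** 3
        resource.setrlimit(resource.RLIMIT_AS, (4 * GIB, 4 * GIB))

        import numpy
        try:
            big = [numpy.zeros((10**4, 10**4)) for _ in xrange(50)]
        except MemoryError:
            print "allocation refused at the 4 GiB cap"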

  • Combinatorics, probability, dice

    - by TarGz
    A friend of mine asked: if I have two dice and I throw both of them, what is the most frequent sum of the two dice's numbers? I wrote a small script:

        from random import randrange

        d = dict((i, 0) for i in range(2, 13))
        for i in xrange(100000):
            d[randrange(1, 7) + randrange(1, 7)] += 1
        print d

    which prints:

        2: 2770, 3: 5547, 4: 8379, 5: 10972, 6: 13911, 7: 16610,
        8: 14010, 9: 11138, 10: 8372, 11: 5545, 12: 2746

    The question I have: why is 11 more frequent than 12? In both cases there is only one way (or two, if you count the reverse too) to get such a sum (5 + 6, 6 + 6), so I expected the same probability..?
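
    A hedged enumeration sketch showing why: counting the 36 ordered outcomes, (5, 6) and (6, 5) are distinct rolls while (6, 6) has no mirror, so 11 is twice as likely as 12:

        from itertools import product

        counts = {}
        for a, b in product(range(1, 7), repeat=2):  # 36 ordered outcomes
            counts[a + b] = counts.get(a + b, 0) + 1

        print counts[11], counts[12]   # 2 1
        print counts[7]                # 6, the most frequent sum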

  • Using core plot for iPhone, drawing date on x axis

    - by xmax
    I have an array of dictionaries that contains NSDate and NSNumber values. I want to plot the dates on the x axis. For plotting, I need to supply an xRange to the plot with some decimal values; I don't understand how I can supply NSDate values to the xRange (location and length). And what should be in this method:

        -(NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index

    I mean, how will my date value be returned as an NSNumber? I think I should use some time interval there, but what should the exact conversion be? Can anyone explain the exact requirements for plotting dates on the x axis? I am plotting my plot in half of the view.

  • Cython Speed Boost vs. Usability

    - by zubin71
    I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow, the Python wiki, and the article "General Rules for Optimization". Cython is what grasps my interest the most; instead of writing C code yourself, you can choose to have other data types in your Python code itself. Here is a silly test I tried:

        #!/usr/bin/python
        # test.pyx
        def test(value):
            for i in xrange(value):
                i**2
                if(i==1000000):
                    print i

        test(10000001)

        $ time python test.pyx
        real    0m16.774s
        user    0m16.745s
        sys     0m0.024s

        $ time cython test.pyx
        real    0m0.513s
        user    0m0.196s
        sys     0m0.052s

    Now, honestly, I'm dumbfounded. The code which I have used here is pure Python code, and all I have changed is the interpreter. In this case, if Cython is this good, then why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?
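
    A hedged note on what that timing measured: `cython test.pyx` only translates the file to C - it never executes the loop, which is why it returns in half a second. The usual workflow compiles and imports the module (paths below are illustrative):

        $ cython test.pyx                      # emits test.c; runs nothing
        $ gcc -shared -fPIC -O2 test.c -o test.so \
              -I/usr/include/python2.7 -lpython2.7
        $ python -c "import test"              # only now does the loop run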

  • Why Tokyo Tyrant so slow

    - by Tantra
    I have the following situation: a Tyrant server launched on a FreeBSD host, like this:

        ttserver -uas -log /data/tyrant/1.log -sid 1 -thnum 8 -tout 5 /data/tyrant/data/1.tct

    And I try to communicate with this server from Windows, using Python and pyrant-0.3.5, like this:

        import pyrant
        import time

        t = pyrant.Tyrant(host="192.168.0.220", port=1978)
        tbegin = time.time()
        for i in xrange(4000000):
            if i and ((i % 10000) == 0):
                print time.time() - tbegin
                tbegin = time.time()
            t[i] = {"text": "ruslan text", "value": i}

    And I get, I think, very slow performance: about 5-6 seconds per 10,000 records. But if I start this code on the same machine as the server (ttserver), performance is good - about 0.5 sec per 10,000 records. What must I do to work around this problem?

  • How to make socket.listen(1) work for some time and then continue rest of code???

    - by Rami Jarrar
    I'm making a server that creates a TCP socket and works over a port range. For each port it should listen for some time, then continue with the rest of the code, like this:

        import socket

        sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

        msg = ''
        ports = [x for x in xrange(4000)]

        while True:
            try:
                for i in ports:
                    sck.bind(('', i))
                    sck.listen(1)  ## make it listen just for some time and then continue
                    ## if there is a connection, do this:
                    conn, addr = sck.accept()
                    msg = conn.recv(2048)
                    ## do something
                    ## if no connection, continue the for loop
                    conn.close()
            except KeyboardInterrupt:
                exit()

    So how could I make sck.listen(1) work for just some time?
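
    A hedged sketch using socket.settimeout(): accept() then gives up after the given number of seconds and the loop moves on. A fresh socket is created per port, since a single socket cannot be bound repeatedly:

        import socket

        for port in xrange(4000, 4010):        # illustrative port range
            sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sck.bind(('', port))
            sck.listen(1)
            sck.settimeout(5.0)                # wait at most 5 seconds
            try:
                conn, addr = sck.accept()
                msg = conn.recv(2048)
                conn.close()
            except socket.timeout:
                pass                           # nobody connected; next port
            finally:
                sck.close()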

  • Simple Python Challenge: Fastest Bitwise XOR on Data Buffers

    - by user213060
    Challenge: Perform a bitwise XOR on two equal-sized buffers. The buffers will be required to be the Python str type, since this is traditionally the type for data buffers in Python. Return the resultant value as a str. Do this as fast as possible. The inputs are two 1 megabyte (2**20 byte) strings. The challenge is to substantially beat my inefficient algorithm using Python or existing third-party Python modules (relaxed rules: or create your own module). Marginal increases are useless.

        from os import urandom
        from numpy import frombuffer, bitwise_xor, byte

        def slow_xor(aa, bb):
            a = frombuffer(aa, dtype=byte)
            b = frombuffer(bb, dtype=byte)
            c = bitwise_xor(a, b)
            r = c.tostring()
            return r

        aa = urandom(2**20)
        bb = urandom(2**20)

        def test_it():
            for x in xrange(1000):
                slow_xor(aa, bb)
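
    A hedged sketch, not a benchmarked winner: viewing the same buffers as 64-bit words performs an eighth as many element operations, and the output bytes are identical because XOR never crosses byte boundaries:

        from os import urandom
        from numpy import frombuffer, bitwise_xor, uint64, byte

        def fast_xor(aa, bb):
            a = frombuffer(aa, dtype=uint64)   # 2**20 bytes = 2**17 words
            b = frombuffer(bb, dtype=uint64)
            return bitwise_xor(a, b).tostring()

        aa = urandom(2**20)
        bb = urandom(2**20)
        byte_result = bitwise_xor(frombuffer(aa, dtype=byte),
                                  frombuffer(bb, dtype=byte)).tostring()
        assert fast_xor(aa, bb) == byte_result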

  • Python: Dynamic attribute name generation without exec() or eval()

    - by PyNewbie27
    Hi, I'm trying to dynamically create buttons at runtime with PyQt 4.7. However, this being my first Python program, I'm not sure how to get the functionality I want. I would like to be able to substitute a text string for an attribute name, i.e.:

        for each in xrange(4):
            myname = "tab1_button%s" % each  # tab1_button0, tab1_button1, tab1_button2
            # Normal code to create a named button:
            #   self.ui.tab1_button0 = QtGui.QPushButton(self.ui.tab)
            # Rewrite of the line above to dynamically generate a button:
            setattr(self.ui, myname, QtGui.QPushButton(self.ui.tab))
            # Here's where I get stuck. This code isn't valid, but it shows
            # what I want to do:
            self.ui.gridLayout.addWidget(self.ui.%s) % myname
            # I need to have %s be tab1_button1, tab1_button2, etc.

    I know the % is for string substitution, but how can I substitute the dynamically generated attribute name into that statement? I assume there's a basic language construct I'm missing that allows this. Since it's my first program, please take it easy on me ;)
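
    A hedged, self-contained sketch (widget names are illustrative): getattr() is the read-side twin of setattr(), so the same string that created the attribute fetches it back for addWidget:

        import sys
        from PyQt4 import QtGui

        app = QtGui.QApplication(sys.argv)
        window = QtGui.QWidget()
        layout = QtGui.QGridLayout(window)

        for each in xrange(4):
            myname = "tab1_button%s" % each    # tab1_button0 ... tab1_button3
            setattr(window, myname, QtGui.QPushButton("Button %d" % each))
            layout.addWidget(getattr(window, myname), each, 0)

        window.show()
        app.exec_()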

  • Iterate over a dict or list in Python

    - by Chris Dutrow
    Just wrote some nasty code that iterates over a dict or a list in Python. I have a feeling this was not the best way to go about it. The problem is that in order to iterate over a dict, this is the convention:

        for key in dict_object:
            dict_object[key] = 1

    But modifying the object properties by key does not work if the same thing is done on a list:

        # Throws an error because the value of key is the property value,
        # not the list index:
        for key in list_object:
            list_object[key] = 1

    The way I solved this problem was to write this nasty code:

        if isinstance(obj, dict):
            for key in obj:
                do_loop_contents(obj, key)
        elif isinstance(obj, list):
            for i in xrange(0, len(obj)):
                do_loop_contents(obj, i)

        def do_loop_contents(obj, key):
            obj[key] = 1

    Is there a better way to do this? Thanks!
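
    A hedged sketch of one cleaner shape: treat a list's indices as its keys, so both containers share the same loop body and the isinstance check happens once, outside the loop:

        def keys_of(obj):
            if isinstance(obj, dict):
                return obj.keys()
            return range(len(obj))     # a list's "keys" are its indices

        for obj in ({'a': 0, 'b': 0}, [0, 0, 0]):
            for key in keys_of(obj):
                obj[key] = 1
            print obj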

  • Binning into timeslots - Is there a better way than using list comp?

    - by flyingcrab
    I have a dataset of events (tweets, to be specific) that I am trying to bin / discretize. The following code seems to work fine so far (assuming 100 bins):

        HOUR = timedelta(hours=1)
        start = datetime.datetime(2009, 01, 01)
        z = [start + x*HOUR for x in xrange(1, 100)]

    But then I came across this fateful line in the Python docs: 'This makes possible an idiom for clustering a data series into n-length groups using zip(*[iter(s)]*n)'. The zip idiom does indeed work - but I can't understand how (what is the * operator for, for instance?). How could I use it to make my code prettier? I'm guessing this means I should make a generator / iterable for time that yields the time in graduations of an HOUR?
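
    A hedged illustration of the idiom itself: [iter(s)]*n is one iterator repeated n times, the * unpacks that list into n arguments, and zip() then draws from the same iterator round-robin, so consecutive items land in n-length tuples:

        s = range(9)
        n = 3
        print zip(*[iter(s)] * n)   # [(0, 1, 2), (3, 4, 5), (6, 7, 8)]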

  • Any better algorithm possible here?

    - by Cupidvogel
    I am trying to solve this problem in Python. Noting that only the first kiss requires the alternation, and that any kiss that is not part of the chain started by the first kiss can very well have a hug on the second-next person, this is the code I have come up with. It is just a simple mathematical calculation - no looping, no iteration, nothing - but I am still getting a timed-out message. Is there any way to optimize it?

        import psyco
        psyco.full()

        testcase = int(raw_input())
        for i in xrange(0, testcase):
            n = int(raw_input())
            if n % 2:
                m = n/2
                ans = 2 + 4*(2**m - 1)
                ans = ans % 1000000007
                print ans
            else:
                m = n/2 - 1
                ans = 2 + 2**(n/2) + 4*(2**m - 1)
                ans = ans % 1000000007
                print ans
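
    A hedged optimization sketch: the likely timeout cause is 2**m on huge n, which builds an enormous integer before the final % is applied. Python's three-argument pow(2, m, MOD) keeps every intermediate below the modulus:

        MOD = 1000000007

        def answer(n):
            if n % 2:
                m = n // 2
                return (2 + 4 * (pow(2, m, MOD) - 1)) % MOD
            m = n // 2 - 1
            return (2 + pow(2, n // 2, MOD) + 4 * (pow(2, m, MOD) - 1)) % MOD

        for n in (1, 2, 3, 10):
            print n, answer(n)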

  • Python | How to create dynamic and expandable dictionaries

    - by MMRUser
    I want to create a Python dictionary that holds values in a multidimensional structure, and it should be able to expand. This is the structure the values should be stored in:

        userdata = {'data': [{'username': 'Ronny Leech', 'age': '22', 'country': 'Siberia'},
                             {'username': 'Cronulla James', 'age': '34', 'country': 'USA'}]}

    Let's say I want to add another user:

        def user_list():
            users = []
            for i in xrange(5, 0, -1):
                lonlatuser.append(('username', '%s %s' % firstn, lastn))
                lonlatuser.append(('age', age))
                lonlatuser.append(('country', country))
            return dict(user)

    This only returns a dictionary with a single value in it (since the key names are the same, the values get overwritten). So how do I append a set of values to this dictionary? Note: assume age, firstn, lastn and country are dynamically generated. Thanks.
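
    A hedged sketch of the appending version (the helper name is illustrative): build one dict per user and append it to the list under the top-level 'data' key, so no keys collide:

        userdata = {'data': []}

        def add_user(firstn, lastn, age, country):
            userdata['data'].append({
                'username': '%s %s' % (firstn, lastn),
                'age': age,
                'country': country,
            })

        add_user('Ronny', 'Leech', '22', 'Siberia')
        add_user('Cronulla', 'James', '34', 'USA')
        print userdata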
