Search Results

Search found 143 results on 6 pages for 'scipy'.

Page 3 of 6

  • The most efficient way to calculate an integral in a dataset range

    - by Annalisa
    I have an array of 10 rows by 20 columns. Each column corresponds to a data set that cannot be fitted with any sort of continuous mathematical function (it's a series of numbers derived experimentally). I would like to calculate the integral of each column between row 4 and row 8, then store the obtained result in a new array (20 rows x 1 column). I have tried using different scipy.integrate modules (e.g. quad, trapz, ...). The problem is that, from what I understand, scipy.integrate must be applied to functions, and I am not sure how to convert each column of my initial array into a function. As an alternative, I thought of calculating the average of each column between row 4 and row 8, then multiplying this number by 4 (i.e. 8-4=4, the x-interval) and then storing this into my final 20x1 array. The problem is...ehm...that I don't know how to calculate the average over a given range. The questions I am asking are: Which method is more efficient/straightforward? Can integrals be calculated over a data set like the one that I have described? How do I calculate the average over a range of rows?
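
    A minimal sketch of both approaches, assuming the data sits in a 10x20 NumPy array (all names here are hypothetical): np.trapz integrates down each column with the trapezoidal rule, and mean() gives the average over the same rows.

        import numpy as np

        data = np.random.rand(10, 20)           # hypothetical 10x20 data array
        segment = data[4:9, :]                  # rows 4 through 8, inclusive
        integrals = np.trapz(segment, axis=0)   # trapezoidal rule down each column -> shape (20,)
        averages = segment.mean(axis=0) * 4     # column means times the x-interval (8 - 4 = 4)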

    Read the article

  • How to remove the boundary effects arising due to zero padding in scipy/numpy fft?

    - by Omkar
    I have written Python code to smooth a given signal using the Weierstrass transform, which is basically the convolution of a normalised Gaussian with the signal. The code is as follows:

        # Importing relevant libraries
        from __future__ import division
        from scipy.signal import fftconvolve
        import numpy as np

        def smooth_func(sig, x, t=0.002):
            N = len(x)
            x1 = x[-1]
            x0 = x[0]
            # defining a new array y which is symmetric around zero, to make the gaussian symmetric
            y = np.linspace(-(x1-x0)/2, (x1-x0)/2, N)
            # gaussian centered around zero
            gaus = np.exp(-y**(2)/t)
            # using fftconvolve to speed up the convolution; gaus.sum() is the normalization constant
            return fftconvolve(sig, gaus/gaus.sum(), mode='same')

    If I run this code on, say, a step function, it smooths the corner, but at the boundary it interprets another corner and smooths that too, giving spurious behaviour at the boundary. I explain this with the figure linked below ("Boundary effects"). This problem does not arise if we find the convolution by direct integration, so the problem is not in the Weierstrass transform; it is in scipy's fftconvolve function. To understand why this problem arises, we first need to understand how fftconvolve works: it uses the convolution theorem to speed up the computation, which in short says convolution(int1, int2) = ifft(fft(int1)*fft(int2)). Applying this theorem directly does not give the desired result; to get it, the fft must be taken over an array double the size of max(int1, int2). But this leads to the undesired boundary effects, because when the input is shorter than the length over which the fft is taken, the fft code zero pads the input before transforming it, and this zero padding is exactly what is responsible for the undesired boundary effects. Can you suggest a way to remove them? I have tried to remove them with a simple trick: after smoothing, I compare the smoothed signal with the original near the boundaries and, where they don't match, replace the smoothed value with the input signal at that point. It is as follows:

        i = 0
        eps = 1e-3
        while abs(smooth[i] - sig[i]) > eps:  # comparing the signals on the left boundary
            smooth[i] = sig[i]
            i = i + 1
        j = -1
        while abs(smooth[j] - sig[j]) > eps:  # comparing on the right boundary
            smooth[j] = sig[j]
            j = j - 1

    There is a problem with this method: because of the epsilon, there are small jumps in the smoothed function, as shown in the second linked figure ("jumps in the smooth func"). Can any changes be made to the above method to solve this boundary problem?
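
    A sketch of one common workaround (my own, assuming a 1-D signal): extend the signal with copies of its edge values before convolving, so the zero padding added internally by fftconvolve never reaches the region that is kept, then trim the padding off.

        import numpy as np
        from scipy.signal import fftconvolve

        def smooth_func_padded(sig, x, t=0.002):
            N = len(x)
            y = np.linspace(-(x[-1] - x[0])/2, (x[-1] - x[0])/2, N)
            gaus = np.exp(-y**2/t)
            pad = N // 2
            # Extend the signal with its edge values; the implicit zero
            # padding then falls outside the region we keep.
            padded = np.concatenate((np.repeat(sig[0], pad), sig, np.repeat(sig[-1], pad)))
            out = fftconvolve(padded, gaus/gaus.sum(), mode='same')
            return out[pad:-pad]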

    Read the article

  • Enthought Python, Sage, or others (in Unix clusters)

    - by vailen
    I currently have access to a cluster of Unix machines, but they don't have the software I need (numpy, scipy, matplotlib, etc.), and I have to install it myself (I don't have root permission either, so commands like apt-get or yast don't work). In the worst case, I have to compile everything from source. Is there a better way? I have heard about Enthought Python and Sage, but I'm not sure which is the best approach. Any suggestions?

    Read the article

  • STFT and ISTFT in Python

    - by endolith
    Is there any form of short-time Fourier transform, with a corresponding inverse transform, built into SciPy or NumPy or whatever? There's the pyplot specgram function in matplotlib, which calls ax.specgram(), which calls mlab.specgram(), which calls _spectral_helper():

        # The checks for if y is x are so that we can use the same function to
        # implement the core of psd(), csd(), and spectrogram() without doing
        # extra calculations. We return the unaveraged Pxy, freqs, and t.

    I'm not sure if this can be used to do an STFT and ISTFT, though. Is there anything else, or should I translate something like this?
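
    For reference, a naive overlap-add sketch of the pair (my own, using a Hann window at 50% overlap; not a library API):

        import numpy as np

        def stft(x, fft_size=512, hop=256):
            # Analysis: Hann-windowed frames at 50% overlap.
            w = np.hanning(fft_size)
            return np.array([np.fft.rfft(w * x[i:i+fft_size])
                             for i in range(0, len(x) - fft_size + 1, hop)])

        def istft(X, fft_size=512, hop=256):
            # Synthesis by overlap-add; since Hann windows at 50% overlap sum
            # to (roughly) one, no extra normalization is applied. The edges
            # are attenuated over the first and last half-window.
            n = hop * (len(X) - 1) + fft_size
            x = np.zeros(n)
            for k, frame in enumerate(X):
                x[k*hop:k*hop + fft_size] += np.fft.irfft(frame, fft_size)
            return x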

    Read the article

  • Interpolating 2d data that is piecewise constant on faces

    - by celil
    I have an irregular mesh which is described by two variables - a faces array that stores the indices of the vertices that constitute each face, and a verts array that stores the coordinates of each vertex. I also have a function that is assumed to be piecewise constant over each face, stored as an array of values, one per face. I am looking for a way to construct a function f from this data. Something along the following lines:

        faces = [[0,1,2], [1,2,3], [2,3,4] ...]
        verts = [[0,0], [0,1], [1,0], [1,1], ...]
        vals = [0.0, 1.0, 0.5, 3.0, ...]

        f = interpolate(faces, verts, vals)
        f(0.2, 0.2) = 0.0  # point inside face [0,1,2]
        f(0.6, 0.6) = 1.0  # point inside face [1,2,3]

    The manual way of evaluating f(x,y) would be to find the face that the point (x,y) lies in and return the value stored for that face. Is there a function that already implements this in scipy (or in matlab)?
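
    A sketch of the point-in-face lookup using matplotlib's triangulation tools (this assumes the faces are triangles; the sample data is hypothetical):

        import numpy as np
        from matplotlib.tri import Triangulation

        verts = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        faces = np.array([[0, 1, 2], [1, 2, 3]])
        vals = np.array([0.0, 1.0])

        tri = Triangulation(verts[:, 0], verts[:, 1], triangles=faces)
        find_face = tri.get_trifinder()   # maps (x, y) to a face index, -1 if outside

        def f(x, y):
            face = find_face(x, y)
            return vals[face] if face >= 0 else np.nan

        print(f(0.2, 0.2))   # 0.0, point inside face [0, 1, 2]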

    Read the article

  • Using numpy.apply

    - by andylei
    What's wrong with this snippet of code?

        import numpy as np
        from scipy import stats

        d = np.arange(10.0)
        cutoffs = [stats.scoreatpercentile(d, pct) for pct in range(0, 100, 20)]

        f = lambda x: np.sum(x > cutoffs)
        fv = np.vectorize(f)

        # why don't these two lines output the same values?
        [f(x) for x in d]  # => [0, 1, 2, 2, 3, 3, 4, 4, 5, 5]
        fv(d)              # => array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

    Any ideas?
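
    A sketch of one way to sidestep np.vectorize entirely, using broadcasting to compare every element against every cutoff at once:

        import numpy as np
        from scipy import stats

        d = np.arange(10.0)
        cutoffs = np.array([stats.scoreatpercentile(d, pct) for pct in range(0, 100, 20)])

        # Compare all of d against all cutoffs in one shot, then count per row;
        # no per-element Python calls involved.
        counts = (d[:, None] > cutoffs[None, :]).sum(axis=1)
        print(counts)   # [0 1 2 2 3 3 4 4 5 5], matching the list comprehension above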

    Read the article

  • hierarchical clustering with gene expression matrix in python

    - by user248237
    How can I do hierarchical clustering (in this case of gene expression data) in Python, in a way that shows the matrix of gene expression values along with the dendrogram? What I mean is like the example here: http://www.mathworks.cn/access/helpdesk/help/toolbox/bioinfo/ug/a1060813239b1.html shown after bullet point 6 (Figure 1), where the dendrogram is plotted to the left of the gene expression matrix and the rows have been reordered to reflect the clustering. How can I do this in Python using numpy/scipy or other tools? Also, is it computationally practical to do this with a matrix of about 11,000 genes, using Euclidean distance as the metric? Thanks.
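
    A sketch of the basic layout with scipy's hierarchy module and matplotlib (the data here is a hypothetical random matrix):

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.cluster.hierarchy import linkage, dendrogram, leaves_list

        data = np.random.rand(50, 8)                  # hypothetical expression matrix
        Z = linkage(data, method='average', metric='euclidean')

        fig = plt.figure()
        ax1 = fig.add_axes([0.05, 0.1, 0.2, 0.8])     # dendrogram on the left
        dendrogram(Z, orientation='left', no_labels=True)
        ax2 = fig.add_axes([0.3, 0.1, 0.65, 0.8])     # heatmap with rows reordered
        ax2.matshow(data[leaves_list(Z)], aspect='auto')
        plt.show()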

    Read the article

  • Compute divergence of vector field using python

    - by nyvltak
    Is there a function that can be used to calculate the divergence of a vector field? (In MATLAB there is http://www.mathworks.ch/help/techdoc/ref/divergence.html.) I would expect it to exist in numpy/scipy, but I cannot find it using Google :(. I need to calculate div[A * grad(F)], where

        F = np.array([[1,2,3,4],[5,6,7,8]])  # 2D numpy ndarray
        A = np.array([[1,2,3,4],[1,2,3,4]])  # 2D numpy ndarray

    so grad(F) is a set of 2D ndarrays. I know I can calculate divergence as described at http://en.wikipedia.org/wiki/Divergence#Application_in_Cartesian_coordinates, but I do not want to reinvent the wheel (and I also expect there is some optimized function for it).
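
    A sketch built on np.gradient, which does the finite differencing (the helper name is hypothetical):

        import numpy as np

        def divergence(Fx, Fy):
            # np.gradient returns derivatives along axis 0 (rows, "y") first
            # and axis 1 (columns, "x") second; divergence is dFx/dx + dFy/dy.
            dFx_dy, dFx_dx = np.gradient(Fx)
            dFy_dy, dFy_dx = np.gradient(Fy)
            return dFx_dx + dFy_dy

        F = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=float)
        A = np.array([[1, 2, 3, 4], [1, 2, 3, 4]], dtype=float)

        gy, gx = np.gradient(F)                 # grad(F) as a pair of 2D arrays
        result = divergence(A * gx, A * gy)     # div[A * grad(F)]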

    Read the article

  • "painting" one array onto another using python / numpy

    - by Nate
    I'm writing a library to process gaze tracking in Python, and I'm rather new to the whole numpy/scipy world. Essentially, I'm looking to take an array of (x,y) values in time and "paint" some shape onto a canvas at those coordinates. For example, the shape might be a blurred circle. The operation I have in mind is more or less identical to using the paintbrush tool in Photoshop. I've got an iterative algorithm that trims my "paintbrush" to be within the bounds of my image and adds each point to an accumulator image, but it's slow(!), and it seems like there's probably a fundamentally easier way to do this. Any pointers as to where to start looking?
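
    A sketch of one standard reformulation (all sizes and coordinates hypothetical): accumulate a count at each gaze point, then convolve the whole canvas with the brush kernel once, instead of stamping the brush point by point.

        import numpy as np
        from scipy.signal import fftconvolve

        H, W = 480, 640                          # hypothetical canvas size
        xs = np.array([100, 200, 300])           # hypothetical gaze coordinates
        ys = np.array([50, 150, 250])

        impulses = np.zeros((H, W))
        np.add.at(impulses, (ys, xs), 1)         # correctly handles repeated points

        yy, xx = np.mgrid[-15:16, -15:16]
        brush = np.exp(-(xx**2 + yy**2) / 50.0)  # a blurred-circle kernel

        canvas = fftconvolve(impulses, brush, mode='same')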

    Read the article

  • numpy arange with multiple intervals

    - by Heiko Westermann
    Hi, I have a numpy array which represents multiple x-intervals of a function:

        In [137]: x_foo
        Out[137]:
        array([211, 212, 213, 214, 215, 216, 217, 218, 940, 941, 942, 943,
               944, 945, 946, 947, 948, 949, 950])

    As you can see, x_foo contains two intervals: one from 211 to 218, and one from 940 to 950. These are intervals which I want to interpolate with scipy. For this, I need to refine the spacing, e.g. "211.0 211.1 211.2 ...", which for a single interval you would normally do with:

        arange( x_foo[0], x_foo[-1], 0.1 )

    With multiple intervals this is not possible. So here's my question: is there a numpy-thonic way to do this in array style, or do I need to write a function that loops over the whole array and splits it wherever the difference is greater than 1? Thanks!
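
    A sketch of the array-style version: split wherever consecutive values jump by more than 1, build the fine grid per interval, and glue the pieces back together.

        import numpy as np

        x_foo = np.array([211, 212, 213, 214, 215, 216, 217, 218,
                          940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950])

        breaks = np.where(np.diff(x_foo) > 1)[0] + 1
        pieces = np.split(x_foo, breaks)
        fine = np.concatenate([np.arange(p[0], p[-1], 0.1) for p in pieces])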

    Read the article

  • Selecting dictionary items by key efficiently in Python

    - by user248237
    Suppose I have a dictionary whose keys are strings. How can I efficiently make a new dictionary from it that contains only the keys present in some list? For example:

        # a dictionary mapping strings to stuff
        mydict = {'quux': ..., 'bar': ..., 'foo': ...}
        # list of keys to be selected from mydict
        keys_to_select = ['foo', 'bar', ...]

    The way I came up with is:

        filtered_mydict = dict((k, mydict[k]) for k in mydict.keys()
                               if k in keys_to_select)

    but I think this is highly inefficient because: (1) it requires enumerating all the keys with keys(), and (2) it requires looking up k in keys_to_select each time. At least one of these can be avoided, I would think. Any ideas? I can use scipy/numpy too if needed.
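
    A sketch of the usual fix (values here are hypothetical): iterate over the short key list instead of the whole dictionary, so the cost scales with len(keys_to_select) and each membership test is a hash lookup.

        mydict = {'quux': 1, 'bar': 2, 'foo': 3}     # hypothetical values
        keys_to_select = ['foo', 'bar']

        # Loop over the short key list, not the dict; `k in mydict` is a
        # constant-time hash lookup rather than a scan of a list.
        filtered_mydict = dict((k, mydict[k]) for k in keys_to_select if k in mydict)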

    Read the article

  • How to solve non-linear equations using python

    - by stars83clouds
    I have the following code:

        #!/usr/bin/env python
        from scipy.optimize import fsolve
        import math

        h = 6.634e-27
        k = 1.38e-16
        freq1 = 88633.9360e6
        freq2 = 88631.8473e6
        freq3 = 88630.4157e6

        def J(freq, T):
            return (h*freq/k)/(math.exp(h*freq/(k*T))-1)

        def equations(x, y, z, w, a, b, c, d):
            f1 = a*(J(freq1,y)-J(freq1,2.73))*(1-math.exp(-a*z))-(J(freq2,x)-J(freq2,2.73))*(1-math.exp(-z))
            f2 = b*(J(freq3,w)-J(freq3,2.73))*(1-math.exp(-b*z))-(J(freq2,x)-J(freq2,2.73))*(1-math.exp(-z))
            f3 = c*(J(freq3,w)-J(freq3,2.73))*(1-math.exp(-b*z))-(J(freq1,y)-J(freq1,2.73))*(1-math.exp(-a*z))
            f4 = d*(J((freq3+freq1)/2,(y+w)/2)-J((freq3+freq1)/2,2.73))-(J(freq2,x)-J(freq2,2.73))*(1-math.exp(-z))
            return (f1, f2, f3, f4)

    So, I have defined the equations in the above code. However, I now wish to solve this set of equations using fsolve or another non-linear numerical routine. I tried the following syntax, but to no avail:

        x, y, z, w = fsolve(equations, (1, 1, 1, 1))

    I keep getting the error that "x" is not defined. I am executing all commands at the command line, since I have no idea how to run a batch of commands like the above automatically in Python. I welcome any advice on how to solve this.
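
    For what it's worth, a minimal sketch of the calling convention fsolve expects: a function of a single array of unknowns, with any fixed parameters passed through args. The two-equation system below is made up for illustration, not the one above.

        import math
        from scipy.optimize import fsolve

        def equations(vars, a, b):
            x, y = vars
            return (x + y - a, math.exp(x) - b * y)

        # fsolve unpacks nothing for you: it passes one array of unknowns
        # to `equations` and forwards (3.0, 2.0) as the a, b parameters.
        x, y = fsolve(equations, (1.0, 1.0), args=(3.0, 2.0))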

    Read the article

  • pylab.savefig() and pylab.show() image difference

    - by Jack1990
    I'm writing a script to automatically create plots from .xvg files, but there's a problem when I try to use pylab's savefig() method. Using pylab.show() and saving from the plot window, everything's fine; with pylab.savefig() the image comes out different (the original post compares two screenshots, labelled "Using pylab.show()" and "Using pylab.savefig()").

        def producePlot(timestep, energy_values, type_line='r', jump=1, finish=100):
            fc = sp.interp1d(timestep[::jump], energy_values[::jump], kind='cubic')
            xnew = numpy.linspace(0, finish, finish*2)
            pylab.plot(xnew, fc(xnew), type_line)
            pylab.xlabel('Time in ps ')
            pylab.ylabel('kJ/mol')
            pylab.xlim(xmin=0, xmax=finish)

        def produceSimplePlot(timestep, energy_values, type_line='r', jump=1, finish=100):
            pylab.plot(timestep, energy_values, type_line)
            pylab.xlabel('Time in ps ')
            pylab.ylabel('kJ/mol')
            pylab.xlim(xmin=0, xmax=finish)

        def linearRegression(timestep, energy_values, type_line='g'):
            from scipy import stats
            import numpy
            timestep = numpy.asarray(timestep)
            slope, intercept, r_value, p_value, std_err = stats.linregress(timestep, energy_values)
            line = slope*timestep + intercept
            pylab.plot(timestep, line, type_line)

        def plottingTime(Title, file_name, timestep, energy_values, loc, jump, finish):
            pylab.title(Title)
            producePlot(timestep, energy_values, 'b', jump, finish)
            linearRegression(timestep, energy_values)
            import numpy
            Average = numpy.average(energy_values)
            pylab.legend(("Average = %.2f" % (Average), 'Linear Reg'), loc)
            #pylab.show()
            pylab.savefig('%s.jpg' % file_name[:-4], bbox_inches=None, pad_inches=0)

        def specialCase(Title, file_name, timestep, energy_values, loc, jump, finish):
            pylab.title(Title)
            producePlot(timestep, energy_values, 'b', jump, finish)
            import numpy
            from pylab import *
            Average = numpy.average(energy_values)
            pylab.legend(("Average = %.2g" % (Average), Title), loc)
            locs, labels = yticks()
            yticks(locs, map(lambda x: "%.3g" % x, locs))
            #pylab.show()
            pylab.savefig('%s.jpg' % file_name[:-4], bbox_inches=None, pad_inches=0)

    Thanks in advance, John
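
    As a general note (a sketch, not specific to this script, and the filename is hypothetical): when show() and savefig() are both used, save first, since some backends release the figure once the GUI event loop has consumed it, and the window's save button can apply different padding than savefig's defaults.

        import pylab

        pylab.plot([0, 1, 2], [0, 1, 4])
        # Save first, then show: after show() returns, a later savefig()
        # may produce an empty or differently padded image.
        pylab.savefig('energy_plot.png', dpi=150)   # hypothetical filename
        pylab.show()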

    Read the article

  • How to speed up a python nested loop?

    - by erich
    I'm performing a nested loop in python that is included below. This serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized, arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case the ratio of the close values is greater than 1.0001 and less than 1.5 and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way using numpy/scipy to speed this up substantially in native python? Alternatively, is there a better way to do this in general (e.g. will it be much faster to write this loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
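
    A sketch of one vectorized approach (same test data, shifted by one to avoid dividing by zero): precompute a cumulative volume so every interval sum becomes a subtraction, then handle each offset j - i = k with whole-array operations.

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15
        close = np.arange(1, ARRAY_LENGTH + 1, dtype='float64')
        volume = np.arange(ARRAY_LENGTH, dtype='float64')

        # cumvol[m] = volume[:m].sum(), so sum(volume[i+1:j+1]) is a subtraction.
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))

        chunks = []
        i = np.arange(ARRAY_LENGTH - INTERVAL_LENGTH)
        for k in range(1, INTERVAL_LENGTH):                  # offset k = j - i
            ret = close[i + k] / close[i]
            vol = cumvol[i + k + 1] - cumvol[i + 1]
            hit = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            chunks.append(np.column_stack((i[hit], i[hit] + k, ret[hit], vol[hit])))
        results = np.vstack(chunks)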

    Read the article

  • Reordering matrix elements to reflect column and row clustering in naive python

    - by bgbg
    Hello, I'm looking for a way to perform clustering separately on matrix rows and then on its columns, reorder the data in the matrix to reflect the clustering, and put it all together. The clustering problem is easily solvable, as is the dendrogram creation (for example in this blog or in "Programming Collective Intelligence"). However, how to reorder the data remains unclear to me. Eventually, I'm looking for a way of creating graphs similar to the one below using naive Python (with any "standard" library such as numpy, matplotlib etc., but without using R or other external tools).
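
    A sketch of the reordering step itself with scipy's hierarchy module (hypothetical data): leaves_list() gives the leaf order implied by a linkage, and np.ix_ applies the row and column permutations in one indexing operation.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, leaves_list

        data = np.random.rand(20, 12)                 # hypothetical matrix

        row_order = leaves_list(linkage(data))        # cluster the rows
        col_order = leaves_list(linkage(data.T))      # cluster the columns
        reordered = data[np.ix_(row_order, col_order)]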

    Read the article

  • Stretch array (Numpy, Python)

    - by Snej
    I have a numpy array [1,2,3,4,5,6,7,8,9,10,11,12,13,14] and want to have an array structured like [[1,2,3,4], [2,3,4,5], [3,4,5,6], ..., [11,12,13,14]]. Sure this is possible by looping over the large array and adding arrays of length four to the new array, but I'm curious if there is some secret 'magic' Python method doing just this :)
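
    One such trick, sketched with stride manipulation: build a zero-copy view in which both axes step through the same underlying buffer one element at a time.

        import numpy as np
        from numpy.lib.stride_tricks import as_strided

        a = np.arange(1, 15)       # [1, 2, ..., 14]
        w = 4

        # A read-only style view of all length-4 windows; nothing is copied.
        windows = as_strided(a, shape=(len(a) - w + 1, w),
                             strides=(a.strides[0], a.strides[0]))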

    Read the article

  • How to draw a line inside a scatter plot

    - by ruffy
    I can't believe that this is so complicated, but I have tried and googled for a while now. I just want to analyse my scatter plot with a few graphical features. For starters, I want to add simply a line. So, I have a few (4) points, and like in this plot [1] I want to add a line to it. http://en.wikipedia.org/wiki/File:ROC_space-2.png [1] Now, this won't work. And frankly, the documentation-examples-gallery combo and content of matplotlib is a bad source of information. My code is based on a simple scatter plot from the gallery:

        # definitions for the axes
        left, width = 0.1, 0.85  # 0.65
        bottom, height = 0.1, 0.85  # 0.65
        bottom_h = left_h = left + width + 0.02

        rect_scatter = [left, bottom, width, height]

        # start with a rectangular Figure
        fig = plt.figure(1, figsize=(8,8))
        axScatter = plt.axes(rect_scatter)

        # the scatter plot:
        p1 = axScatter.scatter(x[0], y[0], c='blue', s=70)
        p2 = axScatter.scatter(x[1], y[1], c='green', s=70)
        p3 = axScatter.scatter(x[2], y[2], c='red', s=70)
        p4 = axScatter.scatter(x[3], y[3], c='yellow', s=70)
        p5 = axScatter.plot([1,2,3], "r--")

        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"], loc=2)

        # now determine nice limits by hand:
        binwidth = 0.25
        xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
        lim = (int(xymax/binwidth) + 1) * binwidth

        axScatter.set_xlim((-lim, lim))
        axScatter.set_ylim((-lim, lim))

        xText = axScatter.set_xlabel('FPR / Specificity')
        yText = axScatter.set_ylabel('TPR / Sensitivity')

        bins = np.arange(-lim, lim + binwidth, binwidth)

        plt.show()

    Everything works, except p5, which is the line. Now how is this supposed to work? What's good practice here?
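
    For what it's worth, a stripped-down sketch of the likely sticking point (labels here are hypothetical): plot() returns a list of Line2D objects rather than a single artist, so it needs unpacking before legend() can handle it alongside scatter() handles.

        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = plt.axes()
        p1 = ax.scatter([0.2], [0.7], c='blue', s=70)
        # Note the comma: it unpacks the one-element list returned by plot(),
        # unlike scatter(), which returns a single artist.
        p5, = ax.plot([0, 1], [0, 1], 'r--')
        plt.legend([p1, p5], ['classifier A', 'Random guess'], loc=2)
        plt.show()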

    Read the article

  • removing pairs of elements from numpy arrays that are NaN (or another value) in Python

    - by user248237
    I have an array with two columns in numpy. For example:

        a = array([[1, 5, nan, 6],
                   [10, 6, 6, nan]])
        a = transpose(a)

    I want to efficiently iterate through the two columns, a[:, 0] and a[:, 1], and remove any pairs that meet a certain condition, in this case if either element is NaN. The obvious way I can think of is:

        new_a = []
        for val1, val2 in a:
            if not (isnan(val1) or isnan(val2)):
                new_a.append([val1, val2])

    But that seems clunky. What's the pythonic numpy way of doing this? Thanks.
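
    A sketch of the usual vectorized idiom: build a boolean mask of the rows where neither column is NaN and index with it, no Python-level loop required.

        import numpy as np

        a = np.array([[1, 5, np.nan, 6],
                      [10, 6, 6, np.nan]]).T

        # Keep rows where no entry is NaN.
        new_a = a[~np.isnan(a).any(axis=1)]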

    Read the article

  • problem plotting on logscale in matplotlib in python

    - by user248237
    I am trying to plot the following numbers on a log scale as a scatter plot in matplotlib. The quantities on the x and y axes have very different scales, and one of the variables has a huge dynamic range (roughly 0 to 12 million) while the other is between roughly 0 and 2, so I think it might be good to plot both on a log scale. I tried the following for a subset of the values of the two variables:

        fig = plt.figure(figsize=(8, 8))
        ax = fig.add_subplot(1, 1, 1)
        ax.set_yscale('log')
        ax.set_xscale('log')
        plt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374],
                    [0.37, 0.12, 0.22, 0.4, 0.08])

    The axes appear log scaled, but only two of the points appear. Any idea how to fix this? Also, how can I make this log scale appear on square axes, so that the correlation between the two variables can be interpreted from the scatter plot? Thanks.
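
    A sketch of one workaround often suggested for this symptom: switch the scales after the data are plotted, so autoscaling sees the points first, and use an equal aspect ratio to get square, decade-per-decade axes.

        import matplotlib.pyplot as plt

        x = [1.341, 0.1034, 0.6076, 1.4278, 0.0374]
        y = [0.37, 0.12, 0.22, 0.4, 0.08]

        fig = plt.figure(figsize=(8, 8))
        ax = fig.add_subplot(1, 1, 1)
        ax.scatter(x, y)
        # Set the scales after plotting; 'equal' aspect then makes one log
        # decade the same length on both axes.
        ax.set_xscale('log')
        ax.set_yscale('log')
        ax.set_aspect('equal')
        plt.show()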

    Read the article

  • A good matplotlib tutorial (from beginner to intermediate)?

    - by morpheous
    Can anyone recommend a good matplotlib tutorial? I am a complete beginner, but have used similar software (MATLAB, R, etc.) in my halcyon days at university (i.e. a long time ago). A Google search brings up a list of dubious quality, and the 'official' docs are too terse or provide examples that are more 'edge case' (e.g. drawing dolphins swimming in a bubble) than one is likely to meet in practice. I want a manual that provides the following information in a well structured manner: an introduction to the data types; an introduction to 2D plotting with some simple practical examples (simple 2D graphs); an introduction to 3D plotting with some simple practical examples (contour and surface plots).

    Read the article

  • Incremental PCA

    - by smichak
    Hi, lately I've been looking into an implementation of an incremental PCA algorithm in Python. I couldn't find anything that would meet my needs, so I did some reading and implemented an algorithm I found in a paper. Here is the module's code - the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha

        #!/usr/bin/env python
        """Incremental PCA calculation module.

        Based on P. Hall, D. Marshall and R. Martin "Incremental Eigenanalysis
        for Classification" which appeared in British Machine Vision Conference,
        volume 1, pages 286-295, September 1998.

        Principal components are updated sequentially as new observations are
        introduced. Each new observation (x) is projected on the eigenspace
        spanned by the current principal components (U) and the residual vector
        (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]).
        The new principal components are then rotated by a rotation matrix (R)
        whose columns are the eigenvectors of the transformed covariance matrix
        (D=U'.T*C*U') to yield p + 1 principal components. From those, only the
        first p are selected.
        """

        __author__ = "Micha Kalfon"

        import numpy as np

        _ZERO_THRESHOLD = 1e-9  # Everything below this is zero

        class IPCA(object):
            """Incremental PCA calculation object.

            General Parameters:
                m - Number of variables per observation
                n - Number of observations
                p - Dimension to which the data should be reduced
            """

            def __init__(self, m, p):
                """Creates an incremental PCA object for m-dimensional
                observations in order to reduce them to a p-dimensional subspace.

                @param m: Number of variables per observation.
                @param p: Number of principal components.
                @return: An IPCA object.
                """
                self._m = float(m)
                self._n = 0.0
                self._p = float(p)
                self._mean = np.matrix(np.zeros((m, 1), dtype=np.float64))
                self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64))
                self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64))
                self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64))

            def update(self, x):
                """Updates with a new observation vector x.

                @param x: Next observation as a column vector (m x 1).
                """
                m = self._m
                n = self._n
                p = self._p
                mean = self._mean
                C = self._covariance
                U = self._eigenvectors
                E = self._eigenvalues

                if type(x) is not np.matrix or x.shape != (m, 1):
                    raise TypeError('Input is not a matrix (%d, 1)' % int(m))

                # Update covariance matrix and mean vector and centralize input
                # around new mean
                oldmean = mean
                mean = (n*mean + x) / (n + 1.0)
                C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0)
                x -= mean

                # Project new input on current p-dimensional subspace and
                # calculate the normalized residual vector
                g = U.T*x
                r = x - (U*g)
                r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r)

                # Extend the transformation matrix with the residual vector and
                # find the rotation matrix by solving the eigenproblem DR=RE
                U = np.concatenate((U, r), 1)
                D = U.T*C*U
                (E, R) = np.linalg.eigh(D)

                # Sort eigenvalues and eigenvectors from largest to smallest to
                # get the rotation matrix R
                sorter = list(reversed(E.argsort(0)))
                E = E[sorter]
                R = R[:,sorter]

                # Apply the rotation matrix
                U = U*R

                # Select only p largest eigenvectors and values and update state
                self._n += 1.0
                self._mean = mean
                self._covariance = C
                self._eigenvectors = U[:, 0:p]
                self._eigenvalues = E[0:p]

            @property
            def components(self):
                """Returns a matrix with the current principal components as
                columns.
                """
                return self._eigenvectors

            @property
            def variances(self):
                """Returns a list with the appropriate variance along each
                principal component.
                """
                return self._eigenvalues

        def _is_zero(x):
            """Return a boolean indicating whether the given vector is a zero
            vector up to a threshold.
            """
            return np.fabs(x).min() < _ZERO_THRESHOLD

        if __name__ == '__main__':
            import sys

            def pca_svd(X):
                X = X - X.mean(0).repeat(X.shape[0], 0)
                [_, _, V] = np.linalg.svd(X)
                return V

            N = 1000
            obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)])

            V = pca_svd(obs)
            print V[0:2]

            pca = IPCA(obs.shape[1], 2)
            for i in xrange(obs.shape[0]):
                x = obs[i,:].transpose()
                pca.update(x)

            U = pca.components
            print U

    Read the article

  • unevenly centered subplots in matplotlib in Python?

    - by user248237
    I am plotting a simple pair of subplots in matplotlib that are for some reason unevenly centered. I plot them as follows:

        plt.figure()

        # first subplot
        s1 = plt.subplot(2, 1, 1)
        plt.bar([1, 2, 3], [4, 5, 6])

        # second subplot
        s2 = plt.subplot(2, 1, 2)
        plt.pcolor(rand(5,5))

        # add colorbar
        plt.colorbar()

        # square axes
        axes_square(s1)
        axes_square(s2)

    where axes_square is simply:

        def axes_square(plot_handle):
            plot_handle.axes.set_aspect(1/plot_handle.axes.get_data_ratio())

    The plot I get is attached (image of plots: link). The top and bottom plots are unevenly centered; I'd like their y-axes and their boxes to be aligned. If I remove the plt.colorbar() call, the plots become centered. How can I keep the plots centered while still showing the colorbar from pcolor? I want the axes to be centered, with the colorbar outside of that alignment, either to the left or to the right of the pcolor matrix. Thanks.
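
    A sketch of one way to keep the subplots aligned (coordinates here are hypothetical): give the colorbar its own axes instead of letting it steal width from the lower subplot.

        import numpy as np
        import matplotlib.pyplot as plt

        fig = plt.figure()
        s1 = fig.add_subplot(2, 1, 1)
        s1.bar([1, 2, 3], [4, 5, 6])
        s2 = fig.add_subplot(2, 1, 2)
        im = s2.pcolor(np.random.rand(5, 5))
        # Dedicated colorbar axes as [left, bottom, width, height] in figure
        # fractions, so s1 and s2 keep identical widths.
        cax = fig.add_axes([0.88, 0.1, 0.03, 0.35])
        fig.colorbar(im, cax=cax)
        plt.show()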

    Read the article

  • taking intersection of N-many lists in python

    - by user248237
    What's the easiest way to take the intersection of N-many lists in Python? If I have two lists a and b, I know I can do:

        a = set(a)
        b = set(b)
        intersect = a.intersection(b)

    But I want to do something like a & b & c & d & ... for an arbitrary collection of lists (ideally without converting to sets first, but if that's the easiest / most efficient way, I can deal with it). I.e. I want to write a function intersect(*args) that will do it for arbitrarily many sets efficiently. What's the easiest way to do that? EDIT: My own solution is reduce(set.intersection, [a, b, c]) -- is that good? Thanks.
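
    A sketch of one such intersect(*args), leaning on the fact that set.intersection accepts any number of iterables (Python 2.6+), so only the first list needs converting:

        def intersect(*lists):
            # The remaining lists are consumed as plain iterables.
            return set(lists[0]).intersection(*lists[1:])

        print(intersect([1, 2, 3], [2, 3, 4], [3, 2, 9]))   # set([2, 3])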

    Read the article
