Search Results

Search found 338 results on 14 pages for 'numpy'.

Page 8/14 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Selecting dictionary items by key efficiently in Python

    - by user248237
    Suppose I have a dictionary whose keys are strings. How can I efficiently make a new dictionary containing only the keys present in some list? For example:

        # a dictionary mapping strings to stuff
        mydict = {'quux': ..., 'bar': ..., 'foo': ...}
        # list of keys to be selected from mydict
        keys_to_select = ['foo', 'bar', ...]

    The way I came up with is:

        filtered_mydict = [mydict[k] for k in mydict.keys() if k in keys_to_select]

    but I think this is highly inefficient because (1) it requires enumerating the keys with keys(), and (2) it requires looking up k in keys_to_select each time. At least one of these can be avoided, I would think. Any ideas? I can use scipy/numpy too if needed.
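
    A dict comprehension that iterates over the key list instead of the dictionary avoids both costs, since each dict lookup is O(1). A minimal sketch, assuming missing keys should simply be skipped:

        filtered_mydict = {k: mydict[k] for k in keys_to_select if k in mydict}

    This touches only len(keys_to_select) items rather than scanning every key in mydict, and it builds an actual dictionary rather than a list of values.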

    Read the article

  • Python point lookup (coordinate binning?)

    - by Rince
    Greetings, I am trying to bin an array of points (x, y) into an array of boxes [(x0, y0), (x1, y0), (x0, y1), (x1, y1)] (the tuples are the corner points). So far I have the following routine:

        def isInside(self, point, x0, x1, y0, y1):
            pr1 = getProduct(point, (x0, y0), (x1, y0))
            if pr1 >= 0:
                pr2 = getProduct(point, (x1, y0), (x1, y1))
                if pr2 >= 0:
                    pr3 = getProduct(point, (x1, y1), (x0, y1))
                    if pr3 >= 0:
                        pr4 = getProduct(point, (x0, y1), (x0, y0))
                        if pr4 >= 0:
                            return True
            return False

        def getProduct(origin, pointA, pointB):
            product = (pointA[0] - origin[0])*(pointB[1] - origin[1]) - (pointB[0] - origin[0])*(pointA[1] - origin[1])
            return product

    Is there any better way than point-by-point lookup? Maybe some not-obvious numpy routine? Thank you!
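
    Since the boxes are axis-aligned, the cross-product test can be replaced by np.digitize, which bins every point against the box edges in one vectorized call. A sketch, assuming the boxes form a regular grid described by sorted edge arrays x_edges and y_edges (names invented for illustration):

        import numpy as np

        points = np.array([[0.3, 1.2], [2.7, 0.4], [1.1, 1.9]])
        x_edges = np.array([0.0, 1.0, 2.0, 3.0])
        y_edges = np.array([0.0, 1.0, 2.0])

        # column index and row index of the box containing each point
        ix = np.digitize(points[:, 0], x_edges) - 1
        iy = np.digitize(points[:, 1], y_edges) - 1

    Every point is binned with a binary search in C, with no Python-level loop over points or boxes.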

    Read the article

  • What is the most efficient way of setting rows to zero in a sparse scipy matrix?

    - by Alex Reinking
    I'm trying to convert the following MATLAB code to Python and am having trouble finding a solution that works in any reasonable amount of time.

        M = diag(sum(a)) - a;
        where = vertcat(in, out);
        M(where,:) = 0;
        M(where,where) = 1;

    Here, a is a sparse matrix and where is a vector (as are in/out). The solution I have in Python is:

        M = scipy.sparse.diags([degs], [0]) - A
        where = numpy.hstack((inVs, outVs)).astype(int)
        M = scipy.sparse.lil_matrix(M)
        M[where, :] = 0  # This is the slowest line
        M[where, where] = 1
        M = scipy.sparse.csc_matrix(M)

    But since A is 334863x334863, this takes about three minutes. If anyone has any suggestions on how to make this faster, please contribute them! For comparison, MATLAB does this same step imperceptibly fast. Thanks!
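
    Row-wise operations on sparse matrices tend to be much faster expressed as matrix products than as fancy-indexed assignments, which is where the LIL round-trip loses its time. A sketch, assuming M is already in CSR/CSC form:

        import numpy as np
        import scipy.sparse as sp

        # zero the selected rows by multiplying with a masked identity
        mask = np.ones(M.shape[0])
        mask[where] = 0.0
        M = sp.diags(mask) * M

        # put ones back on the diagonal entries of the zeroed rows
        ones = sp.csr_matrix((np.ones(len(where)), (where, where)), shape=M.shape)
        M = (M + ones).tocsc()

    Both steps stay in compressed form, so there is no costly format conversion on a 334863x334863 matrix.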

    Read the article

  • List of objects or parallel arrays of properties?

    - by Headcrab
    The question is, basically: what would be preferable, both performance-wise and design-wise: a list of objects of a Python class, or several lists of numerical properties? I am writing a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box, so each ball has a number of numerical properties, like x-y-z coordinates, diameter, mass, velocity vector and so on. How to store the system better? The two major options I can think of are:

    1. Make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on.
    2. Make an array of numbers for each property, so for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on.

    To me the first option seems like the better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with a C++ extension?
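
    The parallel-array layout is the vectorization-friendly "structure of arrays" form, and it is also the layout that ports most directly to C/C++. A minimal sketch of the idea, with illustrative property names:

        import numpy as np

        n = 1000                    # number of balls
        pos = np.zeros((n, 3))      # x, y, z per ball
        vel = np.random.randn(n, 3)
        mass = np.ones(n)
        diameter = np.full(n, 0.1)

        dt = 0.01
        pos += vel * dt             # one vectorized step updates every ball

    A thin wrapper class holding these arrays can restore the nicer object-style API without giving up contiguous storage.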

    Read the article

  • Python optimization problem?

    - by user342079
    Alright, I had this homework recently (don't worry, I've already done it, in C++), but I got curious how I could do it in Python. The problem is about two light sources that emit light; I won't get into the details. Here's the code (which I've managed to optimize a bit in the latter part):

        import math, array
        import numpy as np
        from PIL import Image

        size = (800,800)
        width, height = size
        s1x = width * 1./8
        s1y = height * 1./8
        s2x = width * 7./8
        s2y = height * 7./8
        r,g,b = (255,255,255)

        arr = np.zeros((width,height,3))
        hy = math.hypot
        print 'computing distances (%s by %s)'%size,
        for i in xrange(width):
            if i%(width/10)==0: print i,
            if i%20==0: print '.',
            for j in xrange(height):
                d1 = hy(i-s1x,j-s1y)
                d2 = hy(i-s2x,j-s2y)
                arr[i][j] = abs(d1-d2)
        print ''

        arr2 = np.zeros((width,height,3),dtype="uint8")
        for ld in [200,116,100,84,68,52,36,20,8,4,2]:
            print 'now computing image for ld = '+str(ld)
            arr2 *= 0
            arr2 += abs(arr%ld-ld/2)*(r,g,b)/(ld/2)
            print 'saving image...'
            ar2img = Image.fromarray(arr2)
            ar2img.save('ld'+str(ld).rjust(4,'0')+'.png')
            print 'saved as ld'+str(ld).rjust(4,'0')+'.png'

    I have managed to optimize most of it, but there's still a huge performance gap in the part with the two for loops, and I can't seem to think of a way to bypass that using common array operations. I'm open to suggestions :D
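
    The double loop only computes a per-pixel difference of two distances, which broadcasting can handle in one shot. A sketch of a vectorized replacement, assuming the same variables as above:

        import numpy as np

        ii, jj = np.indices((width, height), dtype=float)
        d1 = np.hypot(ii - s1x, jj - s1y)
        d2 = np.hypot(ii - s2x, jj - s2y)
        arr = np.repeat(np.abs(d1 - d2)[:, :, np.newaxis], 3, axis=2)

    np.indices builds the i and j coordinate grids once, so the 800x800 interference pattern is computed entirely in C rather than in a Python loop.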

    Read the article

  • Incremental PCA

    - by smichak
    Hi, lately I've been looking into an implementation of an incremental PCA algorithm in Python. I couldn't find something that would meet my needs, so I did some reading and implemented an algorithm I found in a paper. Here is the module's code; the relevant paper on which it is based is cited in the module's docstring. I would appreciate any feedback from people who are interested in this. Micha

        #!/usr/bin/env python
        """
        Incremental PCA calculation module.

        Based on P. Hall, D. Marshall and R. Martin, "Incremental Eigenanalysis
        for Classification", British Machine Vision Conference, volume 1,
        pages 286-295, September 1998.

        Principal components are updated sequentially as new observations are
        introduced. Each new observation (x) is projected on the eigenspace
        spanned by the current principal components (U) and the residual vector
        (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]).
        The new principal components are then rotated by a rotation matrix (R)
        whose columns are the eigenvectors of the transformed covariance matrix
        (D = U'.T*C*U') to yield p + 1 principal components. From those, only
        the first p are selected.
        """

        __author__ = "Micha Kalfon"

        import numpy as np

        _ZERO_THRESHOLD = 1e-9  # Everything below this is zero


        class IPCA(object):
            """Incremental PCA calculation object.

            General Parameters:
                m - Number of variables per observation
                n - Number of observations
                p - Dimension to which the data should be reduced
            """

            def __init__(self, m, p):
                """Creates an incremental PCA object for m-dimensional
                observations in order to reduce them to a p-dimensional
                subspace.

                @param m: Number of variables per observation.
                @param p: Number of principal components.

                @return: An IPCA object.
                """
                self._m = float(m)
                self._n = 0.0
                self._p = float(p)
                self._mean = np.matrix(np.zeros((m, 1), dtype=np.float64))
                self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64))
                self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64))
                self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64))

            def update(self, x):
                """Updates with a new observation vector x.

                @param x: Next observation as a column vector (m x 1).
                """
                m = self._m
                n = self._n
                p = self._p
                mean = self._mean
                C = self._covariance
                U = self._eigenvectors
                E = self._eigenvalues

                if type(x) is not np.matrix or x.shape != (m, 1):
                    raise TypeError('Input is not a matrix (%d, 1)' % int(m))

                # Update covariance matrix and mean vector and centralize
                # input around new mean
                oldmean = mean
                mean = (n*mean + x) / (n + 1.0)
                C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0)
                x -= mean

                # Project new input on current p-dimensional subspace and
                # calculate the normalized residual vector
                g = U.T*x
                r = x - (U*g)
                r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r)

                # Extend the transformation matrix with the residual vector and
                # find the rotation matrix by solving the eigenproblem DR=RE
                U = np.concatenate((U, r), 1)
                D = U.T*C*U
                (E, R) = np.linalg.eigh(D)

                # Sort eigenvalues and eigenvectors from largest to smallest to
                # get the rotation matrix R
                sorter = list(reversed(E.argsort(0)))
                E = E[sorter]
                R = R[:, sorter]

                # Apply the rotation matrix
                U = U*R

                # Select only p largest eigenvectors and values and update state
                self._n += 1.0
                self._mean = mean
                self._covariance = C
                self._eigenvectors = U[:, 0:p]
                self._eigenvalues = E[0:p]

            @property
            def components(self):
                """Returns a matrix with the current principal components as
                columns.
                """
                return self._eigenvectors

            @property
            def variances(self):
                """Returns a list with the appropriate variance along each
                principal component.
                """
                return self._eigenvalues


        def _is_zero(x):
            """Return a boolean indicating whether the given vector is a zero
            vector up to a threshold.
            """
            return np.fabs(x).min() < _ZERO_THRESHOLD


        if __name__ == '__main__':
            import sys

            def pca_svd(X):
                X = X - X.mean(0).repeat(X.shape[0], 0)
                [_, _, V] = np.linalg.svd(X)
                return V

            N = 1000
            obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)])

            V = pca_svd(obs)
            print V[0:2]

            pca = IPCA(obs.shape[1], 2)
            for i in xrange(obs.shape[0]):
                x = obs[i, :].transpose()
                pca.update(x)

            U = pca.components
            print U

    Read the article

  • Create matplotlib legend out of the figure

    - by Werner
    I added the legend this way:

        leg = fig.legend((l0, l1, l2, l3, l4, l5, l6),
                         ('0 Cl : r2, slope, origin',
                          '1 Cl :'+str(r1b)+' , '+str(m1)+' , '+str(b1),
                          '2 Cl :'+str(r2b)+' , '+str(m2)+' , '+str(b2),
                          '3 Cl :'+str(r3b)+' , '+str(m3)+' , '+str(b3),
                          '4 Cl :'+str(r4b)+' , '+str(m4)+' , '+str(b4),
                          '5 Cl :'+str(r5b)+' , '+str(m5)+' , '+str(b5),
                          '6 Cl :'+str(r6b)+' , '+str(m6)+' , '+str(b6)),
                         'upper right')

    but the legend appears inside the plot. How can I tell matplotlib to put it outside the axes, to the right of the plot?
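
    Legends can be anchored outside the axes with bbox_to_anchor. A sketch, assuming an axes object ax carrying the lines, with "labels" standing in for the tuple of label strings above:

        # place the legend just outside the right edge of the axes
        leg = ax.legend((l0, l1, l2, l3, l4, l5, l6), labels,
                        loc='center left', bbox_to_anchor=(1.02, 0.5))
        fig.subplots_adjust(right=0.75)  # leave room so the legend is not clipped

    loc here names the legend corner that bbox_to_anchor positions, so 'center left' pinned at x=1.02 puts the whole legend to the right of the plot.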

    Read the article

  • combine two arrays and sort

    - by Jun
    Given two arrays like the following:

        a = array([1,2,4,5,6,8,9])
        b = array([3,4,7,10])

    I would like the output to be:

        c = array([1,2,3,4,5,6,7,8,9,10])

    or:

        c = array([1,2,3,4,4,5,6,7,8,9,10])

    I'm aware that I can do the following:

        c = sort(unique(concatenate((a,b))))

    I'm just wondering if there is a faster way to do it, as the arrays I'm dealing with have millions of elements. Any idea is welcomed. Thanks
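
    np.union1d computes the sorted, deduplicated union in one call, and when the inputs are already sorted a stable mergesort on the concatenation handles the duplicate-keeping case well. A sketch covering both desired outputs:

        import numpy as np

        c_unique = np.union1d(a, b)                                # drops duplicates
        c_all = np.sort(np.concatenate((a, b)), kind='mergesort')  # keeps duplicates

    Whether this beats sort(unique(...)) on millions of elements is worth timing on the real data, but it avoids one of the two passes.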

    Read the article

  • How to draw line inside a scatter plot

    - by ruffy
    I can't believe that this is so complicated, but I have tried and googled for a while now. I just want to analyse my scatter plot with a few graphical features. For starters, I want to simply add a line to it. So, I have a few (4) points, and, like in this plot [1], I want to add a line to it.

    [1] http://en.wikipedia.org/wiki/File:ROC_space-2.png

    Now, this won't work. And frankly, the documentation-examples-gallery combo and content of matplotlib is a bad source of information. My code is based upon a simple scatter plot from the gallery:

        # definitions for the axes
        left, width = 0.1, 0.85    # 0.65
        bottom, height = 0.1, 0.85 # 0.65
        bottom_h = left_h = left + width + 0.02

        rect_scatter = [left, bottom, width, height]

        # start with a rectangular Figure
        fig = plt.figure(1, figsize=(8,8))
        axScatter = plt.axes(rect_scatter)

        # the scatter plot:
        p1 = axScatter.scatter(x[0], y[0], c='blue', s=70)
        p2 = axScatter.scatter(x[1], y[1], c='green', s=70)
        p3 = axScatter.scatter(x[2], y[2], c='red', s=70)
        p4 = axScatter.scatter(x[3], y[3], c='yellow', s=70)
        p5 = axScatter.plot([1,2,3], "r--")
        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"],
                   loc=2)

        # now determine nice limits by hand:
        binwidth = 0.25
        xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
        lim = (int(xymax/binwidth) + 1) * binwidth

        axScatter.set_xlim((-lim, lim))
        axScatter.set_ylim((-lim, lim))

        xText = axScatter.set_xlabel('FPR / Specificity')
        yText = axScatter.set_ylabel('TPR / Sensitivity')

        bins = np.arange(-lim, lim + binwidth, binwidth)

        plt.show()

    Everything works except p5, which is a line. Now how is this supposed to work? What's good practice here?
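
    One likely catch: plot() returns a list of Line2D objects, while scatter() returns a single artist, so p5 above is a one-element list and confuses the legend. Unpacking the line fixes it; a sketch:

        # unpack the one-element list that plot() returns
        p5, = axScatter.plot([0, 1], [0, 1], "r--")  # diagonal "random guess" line

    With p5 a proper artist (and explicit x and y data for the diagonal), the existing legend call works unchanged.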

    Read the article

  • plotting results of hierarchical clustering on top of a matrix of data in python

    - by user248237
    How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? An example is at the bottom of the following figure: http://www.coriell.org/images/microarray.gif I use scipy.cluster.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data. How can I then plot the data as a matrix where the rows have been reordered to reflect a clustering induced by cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it. Any help on this would be greatly appreciated.
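
    The usual recipe places two manually sized axes side by side and reorders the matrix rows with the 'leaves' list that dendrogram returns. A sketch, assuming data is an (n, m) numpy array:

        import matplotlib.pyplot as plt
        import scipy.cluster.hierarchy as sch

        fig = plt.figure()

        # dendrogram on the left
        ax_dendro = fig.add_axes([0.09, 0.1, 0.2, 0.8])
        Z = sch.linkage(data, method='average')
        dendro = sch.dendrogram(Z, orientation='left')
        ax_dendro.set_xticks([])
        ax_dendro.set_yticks([])

        # matrix in the middle, rows reordered to match the dendrogram leaves
        index = dendro['leaves']
        ax_matrix = fig.add_axes([0.35, 0.1, 0.55, 0.8])
        im = ax_matrix.matshow(data[index, :], aspect='auto', origin='lower')

        # scale bar on the right
        ax_color = fig.add_axes([0.92, 0.1, 0.02, 0.8])
        plt.colorbar(im, cax=ax_color)
        plt.show()

    The add_axes rectangles are [left, bottom, width, height] fractions and can be tuned to taste.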

    Read the article

  • problem plotting on logscale in matplotlib in python

    - by user248237
    I am trying to plot the following numbers on a log scale as a scatter plot in matplotlib. The quantities on the x and y axes have very different scales: one of the variables has a huge dynamic range (roughly 0 to 12 million), while the other is between roughly 0 and 2, so I think it might be good to plot both on a log scale. I tried the following for a subset of the values of the two variables:

        fig = plt.figure(figsize=(8, 8))
        ax = fig.add_subplot(1, 1, 1)
        ax.set_yscale('log')
        ax.set_xscale('log')
        plt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374],
                    [0.37, 0.12, 0.22, 0.4, 0.08])

    The axes appear log-scaled, but only two of the points appear. Any idea how to fix this? Also, how can I put this log scale on square axes, so that the correlation between the two variables can be read off the scatter plot? Thanks.
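
    One common fix is to set the log scales after the data are plotted, so the axis limits autoscale to the points instead of clipping them; set_aspect('equal') then makes the decades square. A sketch:

        import matplotlib.pyplot as plt

        x = [1.341, 0.1034, 0.6076, 1.4278, 0.0374]
        y = [0.37, 0.12, 0.22, 0.4, 0.08]

        fig, ax = plt.subplots(figsize=(8, 8))
        ax.scatter(x, y)
        ax.set_xscale('log')    # set scales after plotting so limits fit the data
        ax.set_yscale('log')
        ax.set_aspect('equal')  # one decade in x spans the same length as one in y
        plt.show()

    With equal aspect on log-log axes, a slope-one trend appears at 45 degrees, which is what makes the correlation readable.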

    Read the article

  • unevenly centered subplots in matplotlib in Python?

    - by user248237
    I am plotting a simple pair of subplots in matplotlib that are for some reason unevenly centered. I plot them as follows:

        plt.figure()

        # first subplot
        s1 = plt.subplot(2, 1, 1)
        plt.bar([1, 2, 3], [4, 5, 6])

        # second subplot
        s2 = plt.subplot(2, 1, 2)
        plt.pcolor(rand(5,5))

        # add colorbar
        plt.colorbar()

        # square axes
        axes_square(s1)
        axes_square(s2)

    where axes_square is simply:

        def axes_square(plot_handle):
            plot_handle.axes.set_aspect(1/plot_handle.axes.get_data_ratio())

    The plot I get is attached. The top and bottom plots are unevenly centered. I'd like their y-axes and their boxes to be aligned. If I remove the plt.colorbar() call, the plots become centered. How can I keep the plots centered while the colorbar of pcolor is still shown? I want the axes to be centered and the colorbar to sit outside of that alignment, either to the left or to the right of the pcolor matrix. Thanks.
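
    mpl_toolkits.axes_grid1 can carve the colorbar out of the second subplot's own space, so the two axes keep identical widths. A self-contained sketch:

        import numpy as np
        import matplotlib.pyplot as plt
        from mpl_toolkits.axes_grid1 import make_axes_locatable

        fig, (s1, s2) = plt.subplots(2, 1)
        s1.bar([1, 2, 3], [4, 5, 6])
        im = s2.pcolor(np.random.rand(5, 5))

        # steal space for the colorbar from s2 only
        divider = make_axes_locatable(s2)
        cax = divider.append_axes("right", size="5%", pad=0.1)
        fig.colorbar(im, cax=cax)
        plt.show()

    Because the colorbar lives in an axes appended to s2 rather than in figure-level space, the alignment of s1 and s2 is unaffected.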

    Read the article

  • ANCOVA in Python with Scipy/Numpy stats

    - by Shax
    I would like to know a way of performing ANCOVA (analysis of covariance) using Python with scipy. It is basically a statistical comparison of regression lines. I know Python can do ANOVA, and it can also fit regression lines with scipy.stats. I'm not sure how to put those together to get an effective ANCOVA, though, if it is possible. Regards, Shax
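
    ANCOVA for two groups can be framed as a dummy-variable regression, where testing the interaction coefficient against zero compares the slopes. A sketch built from numpy and scipy.stats (the function name and the two-group setup are illustrative assumptions):

        import numpy as np
        from scipy import stats

        def compare_slopes(x1, y1, x2, y2):
            """Test equal slopes via y = b0 + b1*x + b2*g + b3*(x*g);
            b3 == 0 means the two regression lines are parallel."""
            x = np.concatenate((x1, x2))
            y = np.concatenate((y1, y2))
            g = np.concatenate((np.zeros(len(x1)), np.ones(len(x2))))
            X = np.column_stack((np.ones_like(x), x, g, x * g))
            beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X.dot(beta)
            df = len(y) - X.shape[1]
            sigma2 = resid.dot(resid) / df
            cov = sigma2 * np.linalg.inv(X.T.dot(X))
            t = beta[3] / np.sqrt(cov[3, 3])
            return t, 2 * stats.t.sf(abs(t), df)

    Dropping the interaction term and testing b2 instead compares intercepts given a common slope, which is the classical ANCOVA question.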

    Read the article

  • Computing complex math equations in python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples:

        F = B * { [ a * b * sumOf(A / B, for all i) ] / [ sumOf(c * d * j) ] }

    where F is the cost from i to j, and B, a, b, c, d, j are all vectors in the format [[zone_i, zone_j, cost_of_i_to_j], ...]. This should produce a vector F = [[1, 2, F_1_2], ..., [i, j, F_i_j]].

        T_ij = [ P_i * A_j * F_i_j ] / [ sumOf(A_j * F_i_j, for j = 1 to n) ]

    where n is the number of zones, T is a vector [[1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j]] and F is a vector [[1, 2, F_1_2], ..., [i, j, F_i_j]], so P_i would be the sum of all P_i_j over j, and A_j would be the sum of all P_j over i.

    I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods to deal with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict

        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, I really feel that it is a brute-force, hacky method, and unmaintainable if we want to add more variables or parameters. Are there any math equation parsers you'd recommend? Side note: these equations are used to model travel. Currently I use Excel to solve a lot of these equations, and I find that process daunting. I'd rather move to Python, where it pulls the data directly from our database (Postgres) and outputs the results into the database. All of that is figured out; I'm just struggling with evaluating the equations themselves. Thanks :)
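
    Storing each quantity as a zone-indexed numpy matrix rather than a list of [i, j, value] triplets turns the second equation into a three-line function. A sketch, assuming F is an (n, n) cost matrix and P, A are length-n vectors (the matrix form, not the triplet form, is the assumption here):

        import numpy as np

        def trip_distribution(P, A, F):
            """T_ij = P_i * A_j * F_ij / sum_j(A_j * F_ij)."""
            weighted = A * F                              # broadcasts A_j across each row i
            denom = weighted.sum(axis=1, keepdims=True)   # sum over j, one value per i
            return P[:, np.newaxis] * weighted / denom

    No parser or dictionaries are needed once the data are matrices; the triplet form can be rebuilt afterwards if the database requires it.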

    Read the article

  • Reducing size of a character array in Numpy

    - by Morgoth
    Given a character array:

        In [21]: x = np.array(['a ','bb ','cccc '])

    one can remove the whitespace using:

        In [22]: np.char.strip(x)
        Out[22]: array(['a', 'bb', 'cccc'], dtype='|S8')

    but is there a way to also shrink the width of the column to the minimum required size, in the above case |S4?
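
    Rebuilding the array from the stripped strings lets numpy infer the minimal itemsize again; a sketch:

        stripped = np.char.strip(x)
        shrunk = np.array(stripped.tolist())  # dtype inferred as |S4
        # or explicitly, when the target width is known:
        shrunk = stripped.astype('|S4')

    Either way a new array is allocated; numpy cannot shrink the itemsize of an existing array in place.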

    Read the article

  • taking intersection of N-many lists in python

    - by user248237
    What's the easiest way to take the intersection of N-many lists in Python? If I have two lists a and b, I know I can do:

        a = set(a)
        b = set(b)
        intersect = a.intersection(b)

    but I want to do something like a & b & c & d & ... for an arbitrary collection of lists (ideally without converting to a set first, but if that's the easiest or most efficient way, I can deal with that). I.e., I want to write a function intersect(*args) that will do it for arbitrarily many lists efficiently. What's the easiest way to do that? EDIT: My own solution is reduce(set.intersection, [a, b, c]) -- is that good? Thanks.
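
    set.intersection accepts any number of iterables after the first set, so the reduce collapses into a single call; a sketch:

        def intersect(first, *rest):
            """Intersection of one or more iterables."""
            return set(first).intersection(*rest)

        intersect([1, 2, 3], [2, 3, 4], [3, 4, 5])  # set([3])

    Only the first argument is converted to a set; the remaining lists are consumed directly, which also answers the "without converting first" wish for all but one argument.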

    Read the article

  • Easiest way to plot values as symbols in scatter plot?

    - by AllenH
    In an answer to an earlier question of mine regarding fixing the colorspace for scatter images of 4D data, Tom10 suggested plotting values as symbols in order to double-check my data. An excellent idea. I've run some similar demos in the past, but I can't for the life of me find the demo I remember being quite simple. So, what's the easiest way to plot numerical values as the symbols in a scatter plot, instead of 'o' for example? Tom10 suggested plt.text(x, y, value), and that is the implementation used in a number of examples. I wonder, though, if there's an easy way to evaluate "value" from my array of numbers. Can one simply say str(valuearray)? Do you need a loop to evaluate the values for plotting, as in the matplotlib demo for 3D text scatter plots? That demo, however, does something fairly complex, evaluating the locations as well as changing the text direction based on the data. So, is there a clean way to plot x, y, C data (where C is a value usually taken as the color in the plot, but which I instead want to draw as the symbol)? Again, I think there is a fair answer to this; I just wonder if there's an easier way.
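
    str(valuearray) returns one string for the whole array, so a short loop over the points is the standard approach; a sketch with made-up data:

        import numpy as np
        import matplotlib.pyplot as plt

        x, y = np.random.rand(2, 10)
        c = np.random.randint(0, 100, 10)  # the values to draw as symbols

        fig, ax = plt.subplots()
        for xi, yi, ci in zip(x, y, c):
            ax.text(xi, yi, str(ci), ha='center', va='center')
        ax.set_xlim(-0.05, 1.05)
        ax.set_ylim(-0.05, 1.05)
        plt.show()

    Unlike the 3D demo, nothing here depends on the data except the position and the label itself, so the loop stays three lines.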

    Read the article

  • problem with hierarchical clustering in Python

    - by user248237
    I am doing hierarchical clustering of a 2-dimensional matrix using the correlation distance metric (i.e., 1 - Pearson correlation). My code is the following (the data is in a variable called "data"):

        from hcluster import *

        Y = pdist(data, 'correlation')
        cluster_type = 'average'
        Z = linkage(Y, cluster_type)
        dendrogram(Z)

    The error I get is:

        ValueError: Linkage 'Z' contains negative distances.

    What causes this error? The matrix "data" that I use is simply:

        [[ 156.651968 2345.168618]
         [ 158.089968 2032.840106]
         [ 207.996413 2786.779081]
         [ 151.885804 2286.70533 ]
         [ 154.33665  1967.74431 ]
         [ 150.060182 1931.991169]
         [ 133.800787 1978.539644]
         [ 112.743217 1478.903191]
         [ 125.388905 1422.3247  ]]

    I don't see how pdist could ever produce negative numbers when taking 1 - Pearson correlation. Any ideas on this? Thank you.
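
    With only two columns per row, every pairwise correlation is exactly +1 or -1 (two points always lie on a perfect line), and floating-point rounding can push 1 - r to something like -2.2e-16, which linkage then rejects. Clipping the tiny negatives is a common workaround; a sketch:

        import numpy as np

        Y = pdist(data, 'correlation')
        Y = np.clip(Y, 0.0, 2.0)  # remove tiny negative rounding artifacts
        Z = linkage(Y, 'average')
        dendrogram(Z)

    If the two-column shape is itself unintended, correlation distance may simply be the wrong metric for this data.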

    Read the article

  • Convert object to DateRange

    - by user655832
    I'm querying an underlying PostgreSQL database using Pandas 0.8. Pandas is returning the DataFrame properly, but the underlying timestamp column in my database is being returned as a generic "object" type in Pandas. As I would eventually like to do seasonal normalization of my data, I am curious how to convert this generic "object" column to something that is appropriate for analysis. Here is my current code to retrieve the data:

        # get records from db example
        import pandas.io.sql as psql
        import psycopg2

        # define query to get all subs created this year
        QRY = """
        select
            i i,
            i * random() f,
            case when random() > 0.5 then true else false end t,
            (current_date - (i*random())::int)::timestamp with time zone tsz
        from generate_series(1,1000) as s(i)
        order by 4;
        """

        CONN_STRING = "host='localhost' port=5432 dbname='postgres' user='postgres'"

        # connect to db
        conn = psycopg2.connect(CONN_STRING)

        # get some data set index on relid column
        df = psql.frame_query(QRY, con=conn)
        print "Row count retrieved: %i" % (len(df),)

    Thanks for any help you can render. M
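
    Once the frame is loaded, the object column can be parsed into real datetimes and promoted to the index; a sketch, hedged because the conversion helpers moved around across pandas versions:

        import pandas as pd

        df['tsz'] = pd.to_datetime(df['tsz'])  # object -> datetime64
        df = df.set_index('tsz')               # time-indexed frame

    With a datetime index in place, time-based grouping and resampling (the building blocks of seasonal normalization) become one-liners, though the exact resample syntax depends on the pandas version in use.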

    Read the article

  • Doing arithmetic with up to two decimal places in Python?

    - by user248237
    I have two floats in Python that I'd like to subtract, i.e.:

        v1 = float(value1)
        v2 = float(value2)
        diff = v1 - v2

    I want "diff" to be computed to two decimal places, that is, computed using %.2f of v1 and %.2f of v2. How can I do this? I know how to print v1 and v2 to two decimals, but not how to do arithmetic like that. The particular issue I am trying to avoid is this. Suppose that:

        v1 = 0.982769777778
        v2 = 0.985980444444
        diff = v1 - v2

    and then I print the following to a file:

        myfile.write("%.2f\t%.2f\t%.2f\n" % (v1, v2, diff))

    Then I get the output 0.98  0.99  0.00, suggesting that there's no difference between v1 and v2, even though the printed operands suggest a 0.01 difference. How can I get around this? Thanks.
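
    Rounding the operands before subtracting makes the arithmetic consistent with what is printed, and the decimal module keeps the two-decimal values exact; a sketch:

        from decimal import Decimal, ROUND_HALF_UP

        def round2(x):
            return Decimal(str(x)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)

        v1 = 0.982769777778
        v2 = 0.985980444444
        diff = round2(v1) - round2(v2)  # Decimal('-0.01'), matching the printed operands

    Plain round(v1, 2) - round(v2, 2) is usually close enough, but binary floats cannot represent most two-decimal values exactly, which is the pitfall Decimal avoids.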

    Read the article

  • how to set a fixed color bar for pcolor in python matplotlib?

    - by user248237
    I am using pcolor with a custom color map to plot a matrix of values. I set my color map so that low values are white and high values are red, as shown below. All of my matrices have values between 0 and 20 (inclusive), and I'd like 20 to always be pure red and 0 to always be pure white, even if the matrix has values that don't span the entire range. For example, if my matrix only has values between 2 and 7, I don't want it to plot 2 as white and 7 as red, but rather color it as if the range were still 0 to 20. How can I do this? I tried using the "ticks=" option of colorbar, but it did not work. Here is my current code (assume "my_matrix" contains the values to be plotted):

        cdict = {'red':   ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)),
                 'green': ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0)),
                 'blue':  ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0))}
        my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap', cdict, 256)

        colored_matrix = plt.pcolor(my_matrix, cmap=my_cmap)
        plt.colorbar(colored_matrix, ticks=[0, 5, 10, 15, 20])

    Any idea how I can fix this to get the right result? Thanks very much.
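
    The ticks argument only labels the colorbar; the mapping itself is pinned with vmin and vmax on the pcolor call. A sketch:

        # fix the color mapping to the 0-20 range regardless of the data
        colored_matrix = plt.pcolor(my_matrix, cmap=my_cmap, vmin=0, vmax=20)
        plt.colorbar(colored_matrix, ticks=[0, 5, 10, 15, 20])

    With vmin=0 and vmax=20 fixed, a matrix spanning only 2 to 7 is still colored against the full 0-20 scale.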

    Read the article

  • A faster alternative to Pandas `isin` function

    - by user3576212
    I have a very large data frame df that looks like:

        ID     Value1  Value2
        1345   3.2     332
        1355   2.2     32
        2346   1.0     11
        3456   8.9     322

    And I have a list ID_list that contains a subset of IDs. I need the subset of df for the IDs contained in ID_list. Currently, I am using

        df_sub = df[df.ID.isin(ID_list)]

    to do it, but it takes a lot of time. The IDs contained in ID_list don't follow any pattern, so they're not within a certain range. (And I need to apply the same operation to many similar dataframes.) I was wondering if there is any faster way to do this. Would it help a lot to make ID the index? Thanks!
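
    Making ID the index should help: index lookups go through a hash table, while isin rescans the whole column on every call. A sketch:

        df2 = df.set_index('ID')                  # pay the indexing cost once per frame
        wanted = df2.index.intersection(ID_list)  # keeps only IDs actually present
        df_sub = df2.loc[wanted]

    The one-time set_index cost is amortized when many subset lookups hit the same frame; for the many similar dataframes mentioned above, each frame needs its own set_index call.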

    Read the article

  • Get information about a function in python, looking at source code

    - by Werner
    Hi, the following code comes from the matplotlib gallery:

        #!/usr/bin/env python
        from pylab import *

        x = array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
        y = array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])

    I am new to Python and would like to fill x and y from an input file. I have two short questions: (1) I can guess what array means, but when I see it in code, how can I find out which library it belongs to and get more information about it? Should I use some kind of Python debugging commands? (2) How do I read the contents of my input file into x? Thanks
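
    Explicit imports answer the first question by construction, and numpy can load a two-column text file directly. A sketch, assuming a whitespace-separated file named data.txt (a hypothetical filename):

        import numpy as np

        # with "import numpy as np" every name carries its origin, and
        # help(np.array) shows the built-in documentation for it
        x, y = np.loadtxt('data.txt', unpack=True)

    The wildcard "from pylab import *" is exactly what makes it hard to tell where array came from; avoiding it makes that question answerable at a glance.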

    Read the article

  • doing arithmetic up to two significant figures in Python?

    - by user248237
    I have two floats in Python that I'd like to subtract, i.e.:

        v1 = float(value1)
        v2 = float(value2)
        diff = v1 - v2

    I want "diff" to be computed up to two significant figures, that is, computed using %.2f of v1 and %.2f of v2. How can I do this? I know how to print v1 and v2 up to two decimals, but not how to do arithmetic like that. Thanks.

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14  | Next Page >