Search Results

Search found 2210 results on 89 pages for 'sum'.


  • scheme2lisp::define function and pass it as parameter

    - by Stas
    Hi! I need to translate some code from Scheme to Common Lisp. So far I have this:

        (defun sum (term a next b)
          (if (> a b)
              0
              (+ (term a) (sum term (next a) b))))

        (defun sum-int (a b)
          (defun (ident x) x)
          (sum ident a 1+ b))

    But it does not interpret without errors:

        * - DEFUN: the name of a function must be a symbol, not (IDENT X)

    Please help. Thanks!
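
    A possible translation, offered as a hedged sketch rather than the poster's fixed code: Common Lisp has no Scheme-style internal define, so the helper goes into FLET, and since functions live in a separate namespace, function parameters must be called with FUNCALL and passed with #'. (The recursive call below also threads NEXT through, following the usual SICP definition of sum.)

        ;; a minimal sketch of the Common Lisp version
        (defun sum (term a next b)
          (if (> a b)
              0
              (+ (funcall term a)
                 (sum term (funcall next a) next b))))

        (defun sum-int (a b)
          (flet ((ident (x) x))          ; local helper instead of nested DEFUN
            (sum #'ident a #'1+ b)))

        ;; (sum-int 1 4) => 10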


  • Beginner += in Ruby

    - by WANNABE
    Looking at this block, I can follow the whole program until I hit sum += square. What is the point of this line? What does it say?

        sum = 0
        [1, 2, 3, 4].each do |value|
          square = value * value
          sum += square
        end
        puts sum
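
    A short note for the same question, not part of the original post: += is compound assignment, so each pass adds the current square to the running total.

        sum = 0
        [1, 2, 3, 4].each do |value|
          square = value * value
          sum += square   # shorthand for: sum = sum + square
        end
        puts sum          # 1 + 4 + 9 + 16 => 30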


  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

         0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
         1: {
         2:   for (int row = 0; row < M; row++)
         3:   {
         4:     for (int col = 0; col < N; col++)
         5:     {
         6:       float sum = 0.0f;
         7:       for(int i = 0; i < W; i++)
         8:         sum += vA[row * W + i] * vB[i * N + col];
         9:       vC[row * N + col] = sum;
        10:     }
        11:   }
        12: }

    We notice that each loop iteration is independent from the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included #include <amp.h> at the top of your file.

        13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
        14: {
        15:   concurrency::array_view<const float,2> a(M, W, vA);
        16:   concurrency::array_view<const float,2> b(W, N, vB);
        17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
        18:   concurrency::parallel_for_each(c.grid,
        19:     [=](concurrency::index<2> idx) restrict(direct3d) {
        20:       int row = idx[0]; int col = idx[1];
        21:       float sum = 0.0f;
        22:       for(int i = 0; i < W; i++)
        23:         sum += a(row, i) * b(i, col);
        24:       c[idx] = sum;
        25:     });
        26: }

    First, a visual comparison, just for fun: the beginning and end are the same, i.e. lines 0, 1, 12 are identical to lines 13, 14, 26. The doubly nested loop (lines 2, 3, 4, 5 and 10, 11) has been transformed into a parallel_for_each call (18, 19, 20 and 25). The core algorithm (lines 6, 7, 8, 9) is essentially the same (lines 21, 22, 23, 24). We have extra lines in the C++ AMP version (15, 16, 17). Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15, 16, 17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator.

    Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

        extent<2> e(M, W);
        array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two-dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e. with the integers M and W) – aren't you loving C++ AMP already?

    Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda.

    The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a, b, c versus how we index into vA, vB, vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner).

    I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i, j, k, or x, y, z, or M, N, or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. So there would be some code like this anywhere before line 18:

        vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
        accelerator acc = accs[0];

    ...and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

        concurrency::parallel_for_each(acc.default_view, c.grid,

    ...and the rest of your code remains the same... how simple is that?

    Comments about this post by Daniel Moth are welcome at the original blog.


  • codility challenge, test case OK , Evaluation report Wrong Answer

    - by Hussein Fawzy
    The Aluminium 2014 challenge gives me a wrong answer: for [3, 9, -6, 7, -3, 9, -6, -10] it reports "got 25, expected 28". But when I repeat the challenge with the same code and create my own test case, it gives me the correct answer:

        Your test case [3, 9, -6, 7, -3, 9, -6, -10] : NO RUNTIME ERRORS (returned value: 28)

    What is wrong with it? The challenge:

    A non-empty zero-indexed array A consisting of N integers is given. A pair of integers (P, Q), such that 0 <= P <= Q < N, is called a slice of array A. The sum of a slice (P, Q) is the total of A[P] + A[P+1] + ... + A[Q]. The maximum sum is the maximum sum of any slice of A. For example, consider array A such that:

        A[0] = 3  A[1] = 2  A[2] = -6  A[3] = 3  A[4] = 1

    For example, (0, 1) is a slice of A that has sum A[0] + A[1] = 5. This is the maximum sum of A. You can perform a single swap operation in array A. This operation takes two indices I and J, such that 0 <= I <= J < N, and exchanges the values of A[I] and A[J]. The goal is to find the maximum sum you can achieve after performing a single swap. For example, after swapping elements 2 and 4, you will get the following array A:

        A[0] = 3  A[1] = 2  A[2] = 1  A[3] = 3  A[4] = -6

    After that, (0, 3) is a slice of A that has the sum A[0] + A[1] + A[2] + A[3] = 9. This is the maximum sum of A after a single swap. Write a function:

        class Solution { public int solution(int[] A); }

    that, given a non-empty zero-indexed array A of N integers, returns the maximum sum of any slice of A after a single swap operation. For example, given the array above, the function should return 9, as explained above.

    And my code is:

        import java.math.*;

        class Solution {
            public int solution(int[] A) {
                if (A.length == 1)
                    return A[0];
                else if (A.length == 2)
                    return A[0] + A[1];
                else {
                    int finalMaxSum = A[0];
                    for (int l = 0; l < A.length; l++) {
                        for (int k = l + 1; k < A.length; k++) {
                            int[] newA = A;
                            int temp = newA[l];
                            newA[l] = newA[k];
                            newA[k] = temp;
                            int maxSum = newA[0];
                            int current_max = newA[0];
                            for (int i = 1; i < newA.length; i++) {
                                current_max = Math.max(A[i], current_max + newA[i]);
                                maxSum = Math.max(maxSum, current_max);
                            }
                            finalMaxSum = Math.max(finalMaxSum, maxSum);
                        }
                    }
                    return finalMaxSum;
                }
            }
        }

    I don't know what is wrong with it.
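
    Two hedged observations, not from the original post: `int[] newA = A;` aliases the array rather than copying it, so every simulated swap mutates A for all later iterations, and the Kadane line mixes `A[i]` with `newA[i]`. A self-contained sketch of the repaired brute force (class and method names are mine):

        // minimal sketch: clone before swapping, and run Kadane on the copy only
        class MaxSliceAfterSwap {
            static int kadane(int[] a) {
                int best = a[0], cur = a[0];
                for (int i = 1; i < a.length; i++) {
                    cur = Math.max(a[i], cur + a[i]);   // same array throughout
                    best = Math.max(best, cur);
                }
                return best;
            }

            public static void main(String[] args) {
                int[] A = {3, 9, -6, 7, -3, 9, -6, -10};
                int best = Integer.MIN_VALUE;
                for (int l = 0; l < A.length; l++) {
                    for (int k = l + 1; k < A.length; k++) {
                        int[] copy = A.clone();          // real copy, swap cannot leak
                        int t = copy[l]; copy[l] = copy[k]; copy[k] = t;
                        best = Math.max(best, kadane(copy));
                    }
                }
                System.out.println(best);                // prints 28 for this input
            }
        }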


  • Performance surprise with "as" and nullable types

    - by Jon Skeet
    I'm just revising chapter 4 of C# in Depth, which deals with nullable types, and I'm adding a section about using the "as" operator, which allows you to write:

        object o = ...;
        int? x = o as int?;
        if (x.HasValue)
        {
            ... // Use x.Value in here
        }

    I thought this was really neat, and that it could improve performance over the C# 1 equivalent, using "is" followed by a cast - after all, this way we only need to ask for dynamic type checking once, and then a simple value check. This appears not to be the case, however. I've included a sample test app below, which basically sums all the integers within an object array - but the array contains a lot of null references and string references as well as boxed integers. The benchmark measures the code you'd have to use in C# 1, the code using the "as" operator, and just for kicks a LINQ solution. To my astonishment, the C# 1 code is 20 times faster in this case - and even the LINQ code (which I'd have expected to be slower, given the iterators involved) beats the "as" code.

    Is the .NET implementation of isinst for nullable types just really slow? Is it the additional unbox.any that causes the problem? Is there another explanation for this? At the moment it feels like I'm going to have to include a warning against using this in performance sensitive situations...

    Results:

        Cast: 10000000 : 121
        As:   10000000 : 2211
        LINQ: 10000000 : 2143

    Code:

        using System;
        using System.Diagnostics;
        using System.Linq;

        class Test
        {
            const int Size = 30000000;

            static void Main()
            {
                object[] values = new object[Size];
                for (int i = 0; i < Size - 2; i += 3)
                {
                    values[i] = null;
                    values[i + 1] = "";
                    values[i + 2] = 1;
                }
                FindSumWithCast(values);
                FindSumWithAs(values);
                FindSumWithLinq(values);
            }

            static void FindSumWithCast(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is int)
                    {
                        int x = (int) o;
                        sum += x;
                    }
                }
                sw.Stop();
                Console.WriteLine("Cast: {0} : {1}", sum, (long) sw.ElapsedMilliseconds);
            }

            static void FindSumWithAs(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    int? x = o as int?;
                    if (x.HasValue)
                    {
                        sum += x.Value;
                    }
                }
                sw.Stop();
                Console.WriteLine("As: {0} : {1}", sum, (long) sw.ElapsedMilliseconds);
            }

            static void FindSumWithLinq(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = values.OfType<int>().Sum();
                sw.Stop();
                Console.WriteLine("LINQ: {0} : {1}", sum, (long) sw.ElapsedMilliseconds);
            }
        }


  • is Boost Library's weighted median broken?

    - by user624188
    I confess that I am no expert in C++. I am looking for a fast way to compute a weighted median, which Boost seemed to have. But it seems I am not able to make it work.

        #include <iostream>
        #include <boost/accumulators/accumulators.hpp>
        #include <boost/accumulators/statistics/stats.hpp>
        #include <boost/accumulators/statistics/median.hpp>
        #include <boost/accumulators/statistics/weighted_median.hpp>

        using namespace boost::accumulators;

        int main()
        {
            // Define an accumulator set
            accumulator_set<double, stats<tag::median> > acc1;
            accumulator_set<double, stats<tag::median>, float> acc2;

            // push in some data ...
            acc1(0.1); acc1(0.2); acc1(0.3); acc1(0.4); acc1(0.5); acc1(0.6);

            acc2(0.1, weight = 0.); acc2(0.2, weight = 0.); acc2(0.3, weight = 0.);
            acc2(0.4, weight = 1.); acc2(0.5, weight = 1.); acc2(0.6, weight = 1.);

            // Display the results ...
            std::cout << "         Median: " << median(acc1) << std::endl;
            std::cout << "Weighted Median: " << median(acc2) << std::endl;

            return 0;
        }

    This produces the following output, which is clearly wrong:

        Median: 0.3
        Weighted Median: 0.3

    Am I doing something wrong? Any help will be greatly appreciated.

    Update: however, the weighted sum works correctly. @glowcoder: The weighted sum works perfectly fine like this:

        #include <iostream>
        #include <boost/accumulators/accumulators.hpp>
        #include <boost/accumulators/statistics/stats.hpp>
        #include <boost/accumulators/statistics/sum.hpp>
        #include <boost/accumulators/statistics/weighted_sum.hpp>

        using namespace boost::accumulators;

        int main()
        {
            // Define an accumulator set
            accumulator_set<double, stats<tag::sum> > acc1;
            accumulator_set<double, stats<tag::sum>, float> acc2;

            // push in some data ...
            acc1(0.1); acc1(0.2); acc1(0.3); acc1(0.4); acc1(0.5); acc1(0.6);

            acc2(0.1, weight = 0.); acc2(0.2, weight = 0.); acc2(0.3, weight = 0.);
            acc2(0.4, weight = 1.); acc2(0.5, weight = 1.); acc2(0.6, weight = 1.);

            // Display the results ...
            std::cout << "         Sum: " << sum(acc1) << std::endl;
            std::cout << "Weighted Sum: " << sum(acc2) << std::endl;

            return 0;
        }

    and the result is:

        Sum: 2.1
        Weighted Sum: 1.5
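
    A hedged guess at the cause, not from the original post: stats<tag::median> ignores the weights; the weighted statistic has its own feature tag and extractor. This sketch assumes Boost.Accumulators' tag::weighted_median, per its documentation:

        #include <iostream>
        #include <boost/accumulators/accumulators.hpp>
        #include <boost/accumulators/statistics/stats.hpp>
        #include <boost/accumulators/statistics/weighted_median.hpp>

        using namespace boost::accumulators;

        int main()
        {
            // weighted_median feature instead of median; weight type float
            accumulator_set<double, stats<tag::weighted_median>, float> acc;
            acc(0.1, weight = 0.); acc(0.2, weight = 0.); acc(0.3, weight = 0.);
            acc(0.4, weight = 1.); acc(0.5, weight = 1.); acc(0.6, weight = 1.);
            std::cout << "Weighted Median: " << weighted_median(acc) << std::endl;
            return 0;
        }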


  • Difficulty analyzing text from a file

    - by Nikko
    I'm running into a rather amusing error with my output on this lab, and I was wondering if any of you might be able to hint at where my problem lies. The goal is to find the high, low, average, and sum of each record, and output the original record. I started with a rather basic program to solve for one record, and when I achieved this I expanded the program to work with the entire text file. Initially the program would correctly output:

        346 130 982 90 656 117 595 High# Low# Sum# Average#

    When I expanded it to work for the entire record, my output stopped working how I had wanted it to:

        0 0 0 0 0 0 0 High: 0 Low: 0 Sum: 0 Average: 0
        0 0 0 0 0 0 0 High: 0 Low: 0 Sum: 0 Average: 0
        etc...

    I can't quite figure out why my ifstream just completely stopped bothering to input the values from the file. I'll go take a walk and take another crack at it. If that doesn't work, I'll be back here to check for any responses =) Thank you!

        #include <iostream>
        #include <fstream>
        #include <iomanip>
        #include <string>
        using namespace std;

        int main()
        {
            int num;
            int high = 0;
            int low = 1000;
            double average = 0;
            double sum = 0;
            int numcount = 0;
            int lines = 1;
            char endoline;
            ifstream inData;
            ofstream outData;
            inData.open("c:\\Users\\Nikko\\Desktop\\record5ain.txt");
            outData.open("c:\\Users\\Nikko\\Desktop\\record5aout.txt");
            if (!inData) // Reminds me to change path names when working on different computers.
            {
                cout << "Could not open file, program will exit" << endl;
                exit(1);
            }
            while (inData.get(endoline))
            {
                if (endoline == '\n')
                    lines++;
            }
            for (int A = 0; A < lines; A++)
            {
                for (int B = 0; B < 7; B++)
                {
                    while (inData >> num)
                        inData >> num;
                    numcount++;
                    sum += num;
                    if (num < low)
                        low = num;
                    if (num > high)
                        high = num;
                    average = sum / numcount;
                    outData << num << '\t';
                }
                outData << "High: " << high << " " << "Low: " << low << " "
                        << "Sum: " << sum << " " << "Average: " << average << endl;
            }
            inData.close();
            outData.close();
            return (0);
        }
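
    A hedged diagnosis, not from the original post: the line-counting loop reads to end-of-file, so every later inData >> num fails (leaving num at 0) until the stream state is reset; separately, `while (inData >> num) inData >> num;` reads two values per pass. A minimal sketch of the reset (file name assumed):

        #include <fstream>
        #include <iostream>

        int main()
        {
            std::ifstream inData("record5ain.txt"); // assumed path
            char c;
            int lines = 1;
            while (inData.get(c))          // this loop leaves the stream at EOF
                if (c == '\n') lines++;

            inData.clear();                  // clear the EOF/fail flags
            inData.seekg(0, std::ios::beg);  // rewind to the start of the file

            int num;
            while (inData >> num)            // read once per iteration,
                std::cout << num << ' ';     // not twice as in the posted loop
            std::cout << '\n';
            return 0;
        }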


  • Multiple aggregate functions in one SQL query from the same table using different conditions

    - by Eric Ness
    I'm working on creating a SQL query that will pull records from a table based on the value of two aggregate functions. These aggregate functions pull data from the same table, but with different filter conditions. The problem I run into is that the results of the SUMs are much larger than if I only include one SUM function. I know that I can create this query using temp tables, but I'm just wondering if there is an elegant solution that requires only a single query. I've created a simplified version to demonstrate the issue. Here are the table structures:

        EMPLOYEE TABLE
        EMPID
        1
        2
        3

        ABSENCE TABLE
        EMPID  DATE       HOURS_ABSENT
        1      6/1/2009   3
        1      9/1/2009   1
        2      3/1/2010   2

    And here is the query:

        SELECT E.EMPID
              ,SUM(ATOTAL.HOURS_ABSENT) AS ABSENT_TOTAL
              ,SUM(AYEAR.HOURS_ABSENT) AS ABSENT_YEAR
        FROM EMPLOYEE E
        INNER JOIN ABSENCE ATOTAL ON ATOTAL.EMPID = E.EMPID
        INNER JOIN ABSENCE AYEAR ON AYEAR.EMPID = E.EMPID
        WHERE AYEAR.DATE > '1/1/2010'
        GROUP BY E.EMPID
        HAVING SUM(ATOTAL.HOURS_ABSENT) > 10
            OR SUM(AYEAR.HOURS_ABSENT) > 3

    Any insight would be greatly appreciated.
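
    One single-query approach, offered as a sketch rather than a verified answer: join ABSENCE once and move the date filter into conditional aggregates, so the two joins cannot multiply each other's rows:

        SELECT E.EMPID
              ,SUM(A.HOURS_ABSENT) AS ABSENT_TOTAL
              ,SUM(CASE WHEN A.DATE > '1/1/2010' THEN A.HOURS_ABSENT ELSE 0 END) AS ABSENT_YEAR
        FROM EMPLOYEE E
        INNER JOIN ABSENCE A ON A.EMPID = E.EMPID
        GROUP BY E.EMPID
        HAVING SUM(A.HOURS_ABSENT) > 10
            OR SUM(CASE WHEN A.DATE > '1/1/2010' THEN A.HOURS_ABSENT ELSE 0 END) > 3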


  • MySQL GROUP BY and JOIN

    - by christian
    Guys, what's wrong with this SQL query?

        $sql = "SELECT res.Age, res.Gender, answer.*, $get_sum,
                SUM(CASE WHEN res.Gender='Male' THEN 1 ELSE 0 END) AS males,
                SUM(CASE WHEN res.Gender='Female' THEN 1 ELSE 0 END) AS females
                FROM Respondents AS res
                INNER JOIN Answers AS answer ON answer.RespondentID=res.RespondentID
                INNER JOIN Questions AS question ON answer.Answer=question.id
                WHERE answer.Question='Q1'
                GROUP BY res.Age
                ORDER BY res.Age ASC";

    The $get_sum is a list of SQL expressions derived from another table:

        $sum[] = "SUM(CASE WHEN answer.Answer=".$db->f("id")." THEN 1 ELSE 0 END) AS item".$db->f("id");
        $get_sum = implode(', ', $sum);

    The query above returns these values:

        Age: 20
        item1 0, item2 1, item3 1, item4 1, item5 0, item6 0
        Subtotal for Age 20: 3

        Age: 24
        item1 2, item2 2, item3 2, item4 2, item5 1, item6 0
        Subtotal for Age 24: 9

    It should return:

        Subtotal for Age 20: 1
        Subtotal for Age 24: 2

    In my sample data there are 3 respondents: 2 are 24 years of age and the other one is 20 years old.
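
    A hedged reading of the symptom, not from the original post: the join yields one row per answer, so SUM(CASE ...) counts a respondent once for every item they answered. If the subtotal should be the number of distinct respondents per age, COUNT(DISTINCT ...) gives that directly:

        SELECT res.Age,
               COUNT(DISTINCT res.RespondentID) AS respondents
        FROM Respondents AS res
        INNER JOIN Answers AS answer ON answer.RespondentID = res.RespondentID
        WHERE answer.Question = 'Q1'
        GROUP BY res.Age
        ORDER BY res.Age ASC;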


  • How to solve such system with given parts of it? (maple)

    - by Kabumbus
    So I had a system:

        # for given coefficients
        k := 3; n := 3;
        # let us solve the system:
        koefSolution := solve({
          sum(a[i], i = 0 .. k) = 0,
          sum(a[i], i = 0 .. k) - (sum(b[i], i = 0 .. k)) = 0,
          sum(i^n*a[i], i = 0 .. k) - (sum(i^(n-1)*b[i], i = 0 .. k)) = 0
        });

    So I get a solution like:

        koefSolution := {
          a[0] = 7*a[2]+26*a[3]-b[1]-4*b[2]-9*b[3],
          a[1] = -8*a[2]-27*a[3]+b[1]+4*b[2]+9*b[3],
          a[2] = a[2], a[3] = a[3],
          b[0] = -b[1]-b[2]-b[3],
          b[1] = b[1], b[2] = b[2], b[3] = b[3]}

    I have a[0], so I try:

        solve({koefSolution, a[0] = 1});

    Why does it not solve my system for the given a[0]? (The main point here is to fill koefSolution with given a[] and b[] values and optimize.)
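
    A hedged guess, not from the original post: koefSolution is itself a set, so {koefSolution, a[0] = 1} wraps a set inside a set, which solve cannot use as a system. Flattening it keeps a plain set of equations:

        solve(koefSolution union {a[0] = 1});
        # or substitute the known value into the solution directly:
        # eval(koefSolution, a[0] = 1);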


  • Is there a way to optimize this update query?

    - by SchlaWiener
    I have a master table called "parent" and a related table called "childs". Now I run a query against the master table to update some values with sums from the child table, like this:

        UPDATE master m
        SET quantity1 = (SELECT SUM(quantity1) FROM childs c WHERE c.master_id = m.id),
            quantity2 = (SELECT SUM(quantity2) FROM childs c WHERE c.master_id = m.id),
            count     = (SELECT COUNT(*) FROM childs c WHERE c.master_id = m.id)
        WHERE master_id = 666;

    This works as expected, but it is not good style because I basically run multiple SELECT queries over the same result. Is there a way to optimize that? (Making a query first and storing the values is not an option.) I tried this:

        UPDATE master m
        SET (quantity1, quantity2, count) = (
            SELECT SUM(quantity1), SUM(quantity2), COUNT(*)
            FROM childs c
            WHERE c.master_id = m.id
        )
        WHERE master_id = 666;

    but that doesn't work.
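
    A hedged sketch (MySQL syntax; not from the original post): joining the master table to a single aggregated subquery reads the child rows once and sets all three columns together:

        UPDATE master m
        JOIN (
            SELECT master_id,
                   SUM(quantity1) AS q1,
                   SUM(quantity2) AS q2,
                   COUNT(*)       AS cnt
            FROM childs
            GROUP BY master_id
        ) c ON c.master_id = m.id
        SET m.quantity1 = c.q1,
            m.quantity2 = c.q2,
            m.count     = c.cnt
        WHERE m.id = 666;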


  • Excel 2003 - ADDRESS() function issues

    - by hairdresser-101
    I finally thought I had found a way to actually use Excel productively, but the code that I followed does not appear to work. I'm thinking that the function is very limited and can't do what I want, but I thought I'd ask to confirm - maybe it is my formula that is the problem. I want to calculate the sum of a row of values for the previous month based on how many days we are into this month (i.e. it is the 20th of April, so I want to sum the first 20 days of March to compare against):

        =SUM(G4:ADDRESS(ROW(),7+$BR$3,4))

    I basically want to SUM(G4:AA4) and have used the ADDRESS function to return the cell reference by taking G4 and adding 20 to the column count:

        ADDRESS(ROW(),7+$BR$3,4)

    This successfully returns AA7 as expected. HOWEVER, when I try to use the returned value in the SUM() function it throws an error... Am I not able to use this reference in my calculation?
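
    A hedged note, not part of the original post: ADDRESS returns a text string, not a live reference, so SUM cannot consume it directly; wrapping it in INDIRECT converts the string into a reference. Something like:

        =SUM(G4:INDIRECT(ADDRESS(ROW(),7+$BR$3,4)))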


  • Eunit Expected Exception

    - by dagda1
    Hi, is there a way in EUnit to test whether an exception has been thrown under certain circumstances? Say I have a function sum like this:

        sum(N, M) when N > M -> throw({"start is bigger than end", N, M});
        sum(N, M) when N =:= M -> N;
        sum(N, M) when N =< M -> N + sum(N + 1, M).

    Can I test that if N is bigger than M, then an exception is thrown? Cheers, Paul
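
    A hedged sketch, assuming EUnit's standard assert macros and that sum/2 lives in a module called mymod (my name): ?assertThrow matches the thrown term against a pattern.

        -module(sum_tests).
        -include_lib("eunit/include/eunit.hrl").

        throws_when_start_greater_test() ->
            ?assertThrow({"start is bigger than end", 3, 2}, mymod:sum(3, 2)).

        sums_range_test() ->
            ?assertEqual(9, mymod:sum(4, 5)).   % 4 + 5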


  • compute mean in python for a generator

    - by nmaxwell
    Hi, I'm doing some statistics work. I have a (large) collection of random numbers to compute the mean of, and I'd like to work with generators, because I just need to compute the mean, so I don't need to store the numbers. The problem is that numpy.mean breaks if you pass it a generator. I can write a simple function to do what I want, but I'm wondering if there's a proper, built-in way to do this? It would be nice if I could say "sum(values)/len(values)", but len doesn't work for generators, and sum has already consumed values. Here's an example:

        import numpy

        def my_mean(values):
            n = 0
            Sum = 0.0
            try:
                while True:
                    Sum += next(values)
                    n += 1
            except StopIteration:
                pass
            return float(Sum) / n

        X = [k for k in range(1, 7)]
        Y = (k for k in range(1, 7))
        print numpy.mean(X)
        print my_mean(Y)

    These both give the same, correct, answer, but my_mean doesn't work for lists, and numpy.mean doesn't work for generators. I really like the idea of working with generators, but details like this seem to spoil things. Thanks for any help. -nick
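
    A hedged alternative, not from the original post: a plain for loop consumes any iterable, list or generator alike, so one function covers both cases:

        def mean(values):
            n = 0
            total = 0.0
            for v in values:      # works for lists and generators
                total += v
                n += 1
            return total / n

        print(mean(k for k in range(1, 7)))   # 3.5
        print(mean([1, 2, 3, 4, 5, 6]))       # 3.5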


  • Have I checked every consecutive subset of this list?

    - by Nathan
    I'm trying to solve problem 50 on Project Euler. Don't give me the answer or solve it for me; just try to answer this specific question. The goal is to find the longest sum of consecutive primes that adds to a prime below one million. I wrote a sieve to find all the primes below n, and I have confirmed that it is correct. Next, I am going to check the sum of each subset of consecutive primes using the following method: I have an empty list sums. For each prime number, I add it to each element in sums and check the new sum, then I append the prime to sums. Here it is in Python:

        primes = allPrimesBelow(1000000)
        sums = []
        for p in primes:
            for i in range(len(sums)):
                sums[i] += p
                check(sums[i])
            sums.append(p)

    I want to know if I have called check() for every sum of two or more consecutive primes below one million. The problem says that there is a prime, 953, that can be written as the sum of 21 consecutive primes, but I am not finding it.


  • python multiprocessing.Process.Manager not producing consistent results?

    - by COpython
    I've written the following code to illustrate the problem I'm seeing. I'm trying to use a Manager.list() to keep track of a list and increment random indices of that list. Each time, 100 processes are spawned, and each process increments a random index of the list by 1. Therefore, one would expect the SUM of the resulting list to be the same each time, correct? I get something between 203 and 205.

        from multiprocessing import Process, Manager
        import random

        class MyProc(Process):
            def __init__(self, A):
                Process.__init__(self)
                self.A = A

            def run(self):
                i = random.randint(0, len(self.A) - 1)
                self.A[i] = self.A[i] + 1

        if __name__ == '__main__':
            procs = []
            M = Manager()
            a = M.list(range(15))
            print('A: {0}'.format(a))
            print('sum(A) = {0}'.format(sum(a)))
            for i in range(100):
                procs.append(MyProc(a))
            map(lambda x: x.start(), procs)
            map(lambda x: x.join(), procs)
            print('A: {0}'.format(a))
            print('sum(A) = {0}'.format(sum(a)))
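
    A hedged diagnosis, not from the original post: `self.A[i] = self.A[i] + 1` is a separate read and write across process boundaries, so two processes can read the same old value and one increment is lost; that matches totals of 203-205 instead of the expected 205 (sum(range(15)) + 100). A sketch serializing the update with a manager lock:

        from multiprocessing import Process, Manager
        import random

        class MyProc(Process):
            def __init__(self, A, lock):
                Process.__init__(self)
                self.A = A
                self.lock = lock

            def run(self):
                i = random.randint(0, len(self.A) - 1)
                with self.lock:                  # make read-modify-write atomic
                    self.A[i] = self.A[i] + 1

        if __name__ == '__main__':
            M = Manager()
            a = M.list(range(15))
            lock = M.Lock()
            procs = [MyProc(a, lock) for _ in range(100)]
            for p in procs: p.start()
            for p in procs: p.join()
            print('sum(A) = {0}'.format(sum(a)))   # now always 205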


  • OCaml Summation

    - by Supervisor
    I'm trying to make a function in OCaml which does the summation function in math. I tried this:

        sum n m f =
          if n = 0 then 0
          else if n > m then f
          else f + sum (n + 1) m f;;

    However, I get an error:

        Characters 41-44:
          else f * sum(n + 1) m f;;
        Error: Unbound value sum

    with sum underlined (caret signs pointing to it). I looked at this: Simple OCaml exercise. It's the same question, but I see a lot of other things that I do not have. For example, for my n = m case, I do not have f n, and then in the else case, I do not have f m. Why do you need f n if you want the function to return an integer? What's the problem? Thanks in advance.
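
    A hedged sketch, not from the thread: OCaml definitions need `let`, and recursion needs `let rec` (which is what "Unbound value sum" points at); the summation should also apply f at each point rather than add f itself:

        (* minimal sketch: let rec makes sum visible inside its own body *)
        let rec sum n m f =
          if n > m then 0
          else f n + sum (n + 1) m f

        let () = Printf.printf "%d\n" (sum 1 5 (fun x -> x * x))  (* 55 *)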


  • problem in using while loop in php&mysql

    - by Mac Taylor
    Hey guys, I'm using a while loop to show my latest forum topics. Now I need to count some fields as well, and I'm trying to do it all in one query. Here is my code:

        $result = $db->sql_query("SELECT t.*, p.*,
                   SUM(t.topic_approved='1') AS Amount_Of_Topics,
                   SUM(t.topic_views) AS Amount_Of_Topic_Views,
                   SUM(t.topic_replies) AS Amount_Of_Topic_Replies,
                   SUM(p.post_approved='1') AS Amount_Of_Posts
                   FROM bb3topics t
                   LEFT JOIN bb3posts p ON t.topic_id = p.topic_id
                   ORDER BY t.topic_last_post_id DESC LIMIT 10");
        while ($row = $db->sql_fetchrow($result)) {

    The problem: this code shows only one forum topic, not the rest, but if I remove the SUM() part, then it shows the rest. Is there anything wrong with my query?
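
    A hedged guess, not from the original post: with SUM() aggregates and no GROUP BY, MySQL collapses the whole result set into a single row, which matches the "only one topic" symptom. A sketch of the grouped query:

        SELECT t.*,
               SUM(t.topic_approved = '1') AS Amount_Of_Topics,
               SUM(t.topic_views)          AS Amount_Of_Topic_Views,
               SUM(t.topic_replies)        AS Amount_Of_Topic_Replies,
               SUM(p.post_approved = '1')  AS Amount_Of_Posts
        FROM bb3topics t
        LEFT JOIN bb3posts p ON t.topic_id = p.topic_id
        GROUP BY t.topic_id                -- one row per topic again
        ORDER BY t.topic_last_post_id DESC
        LIMIT 10;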


  • Ruby: totalling amounts

    - by Michael
    I have a Donation.rb model with an amount column that takes an integer. I want to sum all the individual donations together and show the total on the home page. In the home_controller I'm doing:

        @donations = Donation.all

    and then in the view I do:

        <% sum = 0 %>
        <% @donations.each do |donation| %>
          <%= sum += donation.amount if donation.amount? %>
        <% end %>

    The problem is that this prints the running sum each time a new donation is added to it. I just want the total sum at the end, after they've all been added together.
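
    One way out, sketched here rather than quoted from an answer: accumulate in non-printing <% %> tags and output only the final value, or let the database compute it.

        <%# in the view: accumulate quietly, print once %>
        <% total = @donations.sum { |d| d.amount || 0 } %>
        <%= total %>

        <%# or, assuming a numeric amount column, in the controller: %>
        <%# @total = Donation.sum(:amount) %>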


  • return type while using GMP.h header file

    - by Raveesh_Kumar
    While using the gmp.h header file, I need a function that takes inputs of type mpz_t and returns an mpz_t type too. I am a complete beginner with GMP. Here is a snapshot of my attempted code:

        mpz_t sum_upto(mpz_t max)
        {
            mpz_t sum;
            mpz_init(sum);
            mpz_init(result);
            for (int i = 0; i <= max - 1; i++)
                mpz_add_ui(sum, sum, pow(2, i));
            return sum;
        }

    But it shows these errors:

        1. "pow has not been declared in this scope", although I added math.h at the very beginning of the file.
        2. sum_upto declared as function returning an array.

    Please tell me as soon as possible; I have to complete a very big project.
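
    A hedged sketch, not from the original post: mpz_t is an array typedef, so it cannot be returned by value; GMP's own convention is to pass the destination as the first argument, and exact powers of two come from mpz_ui_pow_ui rather than the floating-point pow():

        #include <gmp.h>
        #include <stdio.h>

        /* result comes back through the first argument, GMP style */
        void sum_upto(mpz_t sum, unsigned long max)
        {
            mpz_t term;
            mpz_init(term);
            mpz_set_ui(sum, 0);
            for (unsigned long i = 0; i < max; i++) {
                mpz_ui_pow_ui(term, 2, i);   /* term = 2^i, exact integer power */
                mpz_add(sum, sum, term);
            }
            mpz_clear(term);
        }

        int main(void)
        {
            mpz_t s;
            mpz_init(s);
            sum_upto(s, 10);
            gmp_printf("%Zd\n", s);          /* 1023 = 2^10 - 1 */
            mpz_clear(s);
            return 0;
        }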


  • incompatible types in java

    - by user2975357
    Should I point out that I am a beginner at this?

        double averageMonthlyTemp() {
            double[] amt = new double[52];
            int sum = 0;
            int index = 0;
            for (int i = 0; i < temp.length - 1; i = i + 7) { // where temp is an existing,
                                                              // previously initialized array
                                                              // of 365 elements, from 0 to 364
                for (int j = 0; j < 7; j++) {
                    sum = sum + temp[i + j];
                    if (j % 7 == 6) {
                        double average = ((double) sum) / 7;
                        amt[index] = average;
                        index++;
                        sum = (int) 0;
                    }
                }
            }
            return amt;
        }

    When I try to compile, I get an "incompatible types" error, with the amt at return amt marked in red. Does somebody know why?
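
    A hedged fix, not the poster's exact code: `return amt;` returns a double[], but the method is declared to return a single double, hence "incompatible types". Declaring the array return type compiles; the sketch below (names are mine) also simplifies the week loop:

        class WeeklyAverages {
            static double[] averageWeeklyTemp(int[] temp) {  // double[], not double
                double[] amt = new double[52];
                for (int week = 0; week < 52; week++) {
                    int sum = 0;
                    for (int day = 0; day < 7; day++)
                        sum += temp[week * 7 + day];
                    amt[week] = sum / 7.0;                   // average of one week
                }
                return amt;
            }

            public static void main(String[] args) {
                int[] temp = new int[365];
                java.util.Arrays.fill(temp, 20);
                System.out.println(averageWeeklyTemp(temp)[0]);  // 20.0
            }
        }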


  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, that is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

        SELECT *
        FROM Person.BusinessEntity
        JOIN Person.BusinessEntityAddress
            ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
        go
        SELECT *
        FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
            ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 90.2% / 9.8% split. Close, but no cigar.

    Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449, and of query 2 it is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those, and those are the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs. But what's the problem in comparing the estimated costs to derive a meaning of "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit.

    Modifying the second query to also show the total number of lines on each order:

        SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
        FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
            ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy.

    Estimates can be out with actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

        Create Function dbo.udfSumSalesForCustomer(@CustomerId integer)
        returns money
        as
        begin
            Declare @Sum money
            Select @Sum = SUM(SalesOrderHeader.TotalDue)
            from Sales.SalesOrderHeader
            where CustomerID = @CustomerId
            return @Sum
        end

    If we have two statements, one that fires the UDF and another that doesn't:

        Select CustomerID from Sales.Customer order by CustomerID
        go
        Select CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID)
        from Sales.Customer order by CustomerID

    the cost relative to the batch is a 50/50 split, but there has to be an actual cost of firing the UDF. Indeed, Profiler shows us it is nowhere even remotely near 50/50!

    Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding)
        from Sales.SalesOrderdetail
        go
        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid ROWS unbounded preceding)
        from Sales.SalesOrderdetail

    By now it won't be a great surprise that the Profiler trace reads a *tiny* bit different.

    So, the moral of the story: percentage relative to batch can give a rough 'finger in the air' measurement, but don't rely on it as fact.


  • How to invalidate cache when benchmarking?

    - by Michael Buen
    I have this code, where swapping the order of UsingAs and UsingCast also swaps their performance.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.IO;

        class Test
        {
            const int Size = 30000000;

            static void Main()
            {
                object[] values = new MemoryStream[Size];
                UsingAs(values);
                UsingCast(values);
                Console.ReadLine();
            }

            static void UsingCast(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is MemoryStream)
                    {
                        var m = (MemoryStream)o;
                        sum += (int)m.Length;
                    }
                }
                sw.Stop();
                Console.WriteLine("Cast: {0} : {1}", sum, (long)sw.ElapsedMilliseconds);
            }

            static void UsingAs(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is MemoryStream)
                    {
                        var m = o as MemoryStream;
                        sum += (int)m.Length;
                    }
                }
                sw.Stop();
                Console.WriteLine("As: {0} : {1}", sum, (long)sw.ElapsedMilliseconds);
            }
        }

    Outputs:

        As: 0 : 322
        Cast: 0 : 281

    When doing this...

        UsingCast(values);
        UsingAs(values);

    ...the results are:

        Cast: 0 : 322
        As: 0 : 281

    When doing just this...

        UsingAs(values);

    ...the result is:

        As: 0 : 322

    And when doing just this...

        UsingCast(values);

    ...the result is:

        Cast: 0 : 322

    Aside from running them independently, how can I invalidate the cache so the second benchmarked method won't benefit from the cached memory of the first? Benchmarking aside, I just love the fact that modern processors do this caching magic :-)
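
    Not from the original post, but a common way to level the field: run both methods once before timing, so whichever is measured first does not also pay the one-time JIT and memory-touch costs. A sketch of Main with warm-up passes (method bodies unchanged from the post):

        static void Main()
        {
            object[] values = new MemoryStream[Size];

            // warm-up: JIT-compile both methods and touch the array once
            UsingAs(values);
            UsingCast(values);

            Console.WriteLine("--- measured runs ---");
            UsingAs(values);
            UsingCast(values);
            Console.ReadLine();
        }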


  • How to call Postgres function returning SETOF record?

    - by Peter
    I have written the following function:

        -- Gets stats for all markets
        CREATE OR REPLACE FUNCTION GetMarketStats ( )
        RETURNS SETOF record AS
        $$
        BEGIN
            SELECT 'R approved offer' AS Metric,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 end) AS MarketAPlus24,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 end) AS MarketAPlus36,
                   SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 24 THEN LO.Amount ELSE 0 end) AS MarketA24,
                   SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 36 THEN LO.Amount ELSE 0 end) AS MarketA36,
                   SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 24 THEN LO.Amount ELSE 0 end) AS MarketB24,
                   SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 36 THEN LO.Amount ELSE 0 end) AS MarketB36
            FROM "Market" M
            INNER JOIN "Listing" L ON L.MarketID = M.MarketID
            INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END
        $$ LANGUAGE plpgsql;

    And when trying to call it like this...

        select * from GetMarketStats() AS (Metric VARCHAR(50), MarketAPlus24 INT, MarketAPlus36 INT, MarketA24 INT, MarketA36 INT, MarketB24 INT, MarketB36 INT);

    I get an error:

        ERROR:  query has no destination for result data
        HINT:  If you want to discard the results of a SELECT, use PERFORM instead.
        CONTEXT:  PL/pgSQL function "getmarketstats" line 2 at SQL statement

    I don't understand this output. I've tried using PERFORM too, but I thought one only had to use that if the function doesn't return anything.
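
    A hedged sketch, not from the original post: in PL/pgSQL a bare SELECT has no destination, which is exactly what the error says; a set-returning function hands rows back with RETURN QUERY. Declaring RETURNS TABLE (the column types below are my assumption, since Amount's type isn't shown) also removes the column-definition list from every call:

        CREATE OR REPLACE FUNCTION GetMarketStats()
        RETURNS TABLE (Metric varchar, MarketAPlus24 numeric, MarketAPlus36 numeric,
                       MarketA24 numeric, MarketA36 numeric,
                       MarketB24 numeric, MarketB36 numeric) AS
        $$
        BEGIN
            RETURN QUERY
            SELECT 'R approved offer'::varchar,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END),
                   SUM(CASE WHEN M.MarketName = 'A'  AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                   SUM(CASE WHEN M.MarketName = 'A'  AND M.Term = 36 THEN LO.Amount ELSE 0 END),
                   SUM(CASE WHEN M.MarketName = 'B'  AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                   SUM(CASE WHEN M.MarketName = 'B'  AND M.Term = 36 THEN LO.Amount ELSE 0 END)
            FROM "Market" M
            JOIN "Listing" L ON L.MarketID = M.MarketID
            JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END
        $$ LANGUAGE plpgsql;

        -- called without a column-definition list:
        SELECT * FROM GetMarketStats();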


  • subtotals in columns usind reshape2 in R

    - by user1043144
    I have spent some time now learning reshape2 and plyr, but I still do not get it. This time I have a problem with (a) subtotals and (b) passing different aggregate functions. Here is an example using data from the excellent tutorial on mrdwab's blog, http://news.mrdwab.com/

        # libraries
        library(plyr)
        library(reshape2)

        # get data and add a few more variables
        book.sales = read.csv("http://news.mrdwab.com/data-booksales")
        book.sales$Stock = book.sales$Quantity + 10
        book.sales$SubjCat[(book.sales$Subject == 'Economics') |
                           (book.sales$Subject == 'Management')] <- '1_EconSciences'
        book.sales$SubjCat[book.sales$Subject %in%
                           c('Anthropology', 'Politics', 'Sociology')] <- '2_SocSciences'
        book.sales$SubjCat[book.sales$Subject %in%
                           c('Communication', 'Fiction', 'History', 'Research', 'Statistics')] <- '3_other'

        # to get to my starting data frame (close to the project I am working on)
        book.sales1 <- ddply(book.sales,
                             c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
                             summarize,
                             Stock = sum(Stock),
                             Sold = sum(Quantity),
                             Ratio = round((100 * sum(Quantity) / sum(Stock)), digits = 1))

        # melt it
        m.book.sales = melt(data = book.sales1,
                            id.vars = c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
                            measured.vars = c('Stock', 'Sold', 'Ratio'))

        # cast it
        Tab1 <- dcast(data = m.book.sales,
                      formula = Region + Representative ~ Publisher + variable,
                      fun.aggregate = sum,
                      margins = c('Region', 'Representative'))

    Now my questions:

    1. I have been able to add the subtotals in rows. But is it also possible to add margins in the columns, for example the total sold for all publishers?

    2. There is a problem with the "Ratio" columns: how can I get the mean instead of the sum for that variable?

    P.S.: I have seen some examples using reshape. Would you recommend using it instead of reshape2 (which seems not to include the functionality of those two functions)?
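
    For question 2, a hedged sketch (my own names, not from the post): a ratio of sums is usually what is wanted rather than a sum of ratios, so one option is to drop the precomputed Ratio rows before casting and rebuild ratios from the aggregated Stock and Sold columns afterwards:

        # cast only the additive measures
        m2 <- subset(m.book.sales, variable != "Ratio")
        Tab2 <- dcast(data = m2,
                      formula = Region + Representative ~ Publisher + variable,
                      fun.aggregate = sum,
                      margins = c('Region', 'Representative'))
        # ratios can then be recomputed per publisher from the resulting
        # <Publisher>_Stock and <Publisher>_Sold columns, e.g.
        # round(100 * Tab2$"X_Sold" / Tab2$"X_Stock", 1)  # hypothetical names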

