Search Results

Search found 3047 results on 122 pages for 'subset sum'.

Page 10 of 122

  • While Loop in TSQL with Sum totals

    - by RPS
    I have the following T-SQL statement. I am trying to figure out how I can keep getting the results (100 rows at a time), store them in a variable (as I will have to add to the totals after each select), and continue selecting in a WHILE loop until no more records are found, then return the accumulated totals to the calling function.

        SELECT [OrderUser].OrderUserId,
               ISNULL(SUM(total.FileSize), 0),
               ISNULL(SUM(total.CompressedFileSize), 0)
        FROM (
            SELECT DISTINCT TOP(100) ProductSize.OrderUserId, ProductSize.FileInfoId,
                   CAST(ProductSize.FileSize AS BIGINT) AS FileSize,
                   CAST(ProductSize.CompressedFileSize AS BIGINT) AS CompressedFileSize
            FROM ProductSize WITH (NOLOCK)
            INNER JOIN [Version] ON ProductSize.VersionId = [Version].VersionId
        ) AS total
        RIGHT OUTER JOIN [OrderUser] WITH (NOLOCK) ON total.OrderUserId = [OrderUser].OrderUserId
        WHERE NOT ([OrderUser].isCustomer = 1 AND [OrderUser].isEndOrderUser = 0 OR [OrderUser].isLocation = 1)
          AND [OrderUser].OrderUserId = 1
        GROUP BY [OrderUser].OrderUserId
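
    A minimal sketch of one way to batch this: keyset paging in a WHILE loop, accumulating the totals in variables after each batch. This assumes ProductSize.FileInfoId is a unique, increasing key that can serve for paging; the [OrderUser] flag filtering from the original query would still need to be folded back in.

        DECLARE @totalSize BIGINT, @totalCompressed BIGINT, @lastId INT, @rows INT;
        SET @totalSize = 0; SET @totalCompressed = 0; SET @lastId = 0; SET @rows = 1;

        DECLARE @batch TABLE (FileInfoId INT, FileSize BIGINT, CompressedFileSize BIGINT);

        WHILE @rows > 0
        BEGIN
            DELETE FROM @batch;

            -- Pull the next 100 rows, resuming after the last key seen
            INSERT INTO @batch (FileInfoId, FileSize, CompressedFileSize)
            SELECT TOP (100)
                   ps.FileInfoId,
                   CAST(ps.FileSize AS BIGINT),
                   CAST(ps.CompressedFileSize AS BIGINT)
            FROM ProductSize AS ps
            INNER JOIN [Version] AS v ON ps.VersionId = v.VersionId
            WHERE ps.OrderUserId = 1
              AND ps.FileInfoId > @lastId
            ORDER BY ps.FileInfoId;

            SET @rows = @@ROWCOUNT;

            -- Add this batch's totals to the running totals
            SELECT @totalSize       = @totalSize       + ISNULL(SUM(FileSize), 0),
                   @totalCompressed = @totalCompressed + ISNULL(SUM(CompressedFileSize), 0),
                   @lastId          = ISNULL(MAX(FileInfoId), @lastId)
            FROM @batch;
        END

        SELECT @totalSize AS TotalFileSize, @totalCompressed AS TotalCompressedFileSize;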

    Read the article

  • Sum up values in SQL once all values are available

    - by James Brown
    I have events flowing into a MySQL database and I need to group and sum the events into transactions and store them away in another table. The data looks like:

        +----+---------+------+-------+
        | id | transid | code | value |
        +----+---------+------+-------+
        |  1 |       1 | b    |    12 |
        |  2 |       1 | i    |    23 |
        |  3 |       2 | b    |    34 |
        |  4 |       1 | e    |    45 |
        |  5 |       3 | b    |    56 |
        |  6 |       2 | i    |    67 |
        |  7 |       2 | e    |    78 |
        |  8 |       3 | i    |    89 |
        |  9 |       3 | i    |    90 |
        +----+---------+------+-------+

    The events arrive in batches and I would like to create the transactions by summing up the values for each transid, like:

        select transid, sum(value) from eventtable group by transid;

    but only after all the events for that transid have arrived. That is determined by the event with the code e (b for the beginning, e for the end and i for a varying number of intermediates). Being a novice in SQL, how could I implement the requirement that the end code exists before the summing?
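
    A sketch of one way to do this in MySQL: the boolean expression code = 'e' evaluates to 0 or 1, so a HAVING clause can restrict the grouping to transids whose end event has already arrived. The target table name transactions and its columns are assumptions for illustration only.

        INSERT INTO transactions (transid, total)
        SELECT transid, SUM(value)
        FROM eventtable
        GROUP BY transid
        HAVING SUM(code = 'e') > 0;   -- only transids that already have their end event

    Rows already summed away would still need to be deleted from (or flagged in) eventtable so the next batch does not count them twice.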

    Read the article

  • Get the sum by comparing between two tables

    - by Ismail Gunes
    I have two tables, ProdBiscuit AS tb and StockData AS sd. I have to get the sum of the quantity in StockData (quantite) with the condition (sd.status > 0 AND sd.prodid = tb.id AND sd.matcuisine = 3). Here is my SQL query:

        SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno
        FROM ProdBiscuit AS tb
        JOIN (SELECT id, prodid, matcuisine, status, SUM(quantite) AS rq
              FROM StockData) AS sd
          ON (tb.id = sd.prodid AND sd.status > 0 AND sd.matcuisine = 3)
        LIMIT 25 OFFSET @Myid

    This gives me no rows at all. There are only 3 rows in ProdBiscuit and 11 rows in StockData, and only 2 rows in StockData satisfy the condition. As shown in the picture, only two rows meet the condition. What is wrong in my query? PS: The green lines on the image show the rows that satisfy the condition.
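
    A likely cause, offered as a guess from the query alone: the subquery uses SUM(quantite) with no GROUP BY, so it collapses StockData to a single row whose id/prodid/status/matcuisine values come from an arbitrary row, and the join condition then never matches. A sketch with the filtering and grouping pushed into the subquery (LEFT JOIN is used so products without matching stock still appear; switch back to JOIN if they should be dropped):

        SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno, sd.rq
        FROM ProdBiscuit AS tb
        LEFT JOIN (SELECT prodid, SUM(quantite) AS rq
                   FROM StockData
                   WHERE status > 0 AND matcuisine = 3
                   GROUP BY prodid) AS sd
               ON sd.prodid = tb.id
        LIMIT 25 OFFSET @Myid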

    Read the article

  • Failed to sum split text

    - by user1784753
    I have a problem when summing all of bx3.Text into t2.Text. First I split bx3.Text on spaces:

        private void total()
        {
            string[] ps = bx3.Text.Split(new string[] { " " }, StringSplitOptions.None);
            t2.Text = ps.Select(x => Convert.ToInt32(x)).Sum().ToString();
        }

    I did try t2.Text = ps[1] and the number shown was correct, but when I try to sum them all, I get the error "Input string was not in a correct format" on (x => Convert.ToInt32(x)). bx3.Text is full of user-input numbers separated by a single space " ".

    Read the article

  • Sum of a matrix, even or odd

    - by user1790201
    This function receives a numeric matrix represented as a list of rows, where each row is in turn a list. Assume that it is a square matrix: all rows have the same length and there are as many rows as elements in each row. Also assume that the matrix is at least of dimension 2 by 2 (i.e. minimally the matrix has 2 rows with 2 elements each). The function should return a list with as many elements as the number of rows. Element i in the resulting list should have the sum of the values in row i. For example, if the matrix is

        1   2   3
        10  20  30
        100 200 300

    then this function should return the list [6, 60, 600]. That is, addValuesInAllRows( [ [1,2,3], [10,20,30], [100,200,300] ] ) should return [6, 60, 600]. Isn't this sort of similar to summing a single list, but how would you sum up each row individually?

    Read the article

  • Calculating sum of activity

    - by Maddy
    I have a table with the following kind of information:

        activity  cost  order  date  other information
        10        1     100    --
        20        2     100
        10        1     100
        30        4     100
        40        4     100
        20        2     100
        40        4     100
        20        2     100
        10        1     101
        10        1     101
        20        1     101

    My requirement is to get the sum of the cost of all distinct activities over a work order. For example, for order 100: 1 + 2 + 4 + 4 = 11, that is 1 (for activity 10), 2 (for activity 20), 4 (for activity 30) and so on. I tried with GROUP BY, but it is taking a lot of time for the calculation. There are 1 lakh (100,000) plus records in the warehouse. Is there a more efficient way?

        SELECT SUM(MIN(cost)) FROM COST_WAREHOUSE a WHERE order = 100 GROUP BY (order, ACTIVITY)
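
    A sketch of the usual shape, under the assumption that nested aggregates like SUM(MIN(cost)) are rejected by the database and that order, being a reserved word, needs quoting (backticks shown here for MySQL; brackets or double quotes elsewhere). A derived table takes the per-activity minimum first and is then summed; a composite index on (order, activity, cost) is what typically makes the GROUP BY cheap.

        SELECT t.`order`, SUM(t.activity_cost) AS total_cost
        FROM (SELECT `order`, activity, MIN(cost) AS activity_cost
              FROM COST_WAREHOUSE
              WHERE `order` = 100
              GROUP BY `order`, activity) AS t
        GROUP BY t.`order`;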

    Read the article

  • sort django queryset by latest instance of a subset of related model

    - by rsp
    I have an Order model and order_event model. Each order_event has a foreignkey to order. so from an order instance i can get: myorder.order_event_set. I want to get a list of all orders but i want them to be sorted by the date of the last event. A statement like this works to sort by the latest event date: queryset = Order.objects.all().annotate(latest_event_date=Max('order_event__event_datetime')).order_by('latest_event_date') However, what I really need is a list of all orders sorted by latest date of A SUBSET OF EVENTS. For example my events are categorized into "scheduling", "processing", etc. So I should be able to get a list of all orders sorted by the latest scheduling event. This django doc (https://docs.djangoproject.com/en/dev/topics/db/aggregation/#filter-and-exclude) shows how I can get the latest schedule event using a filter but this excludes orders without a scheduling event. I thought I could combine the filtered queryset with a queryset that includes back those orders that are missing a scheduling event...but I'm not quite sure how to do this. I saw answers related to using python list but it would be much more useful to have a proper django queryset (ie for a view with pagination, etc.)

    Read the article

  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same. It is also finite and discrete. So each set may be stored as a bit field (eg: 0000100111...) of a relatively short length (eg: 1024). That is, bit X in the bitfield indicates whether item X (of 1024 possible items) is included in the given set or not. Now, I want to devise a storage structure and an algorithm to efficiently answer the query: what sets in the data store have set Y as a subset. Set Y itself is not present in the data store and is specified at run time. Now the simplest way to solve this would be to AND the bitfield for set Y with bit fields of every set in the data store one by one, picking the ones whose AND result matches Y's bitfield. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bitfield? Are there databases that already support such operations on large collections of sets?

    Read the article

  • Copy subset of xml input using xslt

    - by mdfaraz
    I need an XSLT file to transform an input XML to another with a subset of the nodes in the input XML. For example, if the input has 10 nodes, I need to create output with about 5 nodes. Input:

        <Department diffgr:id="Department1" msdata:rowOrder="0">
          <Department>10</Department>
          <DepartmentDescription>BABY PRODUCTS</DepartmentDescription>
          <DepartmentSeq>7</DepartmentSeq>
          <InsertDateTime>2011-09-29T13:19:28.817-05:00</InsertDateTime>
        </Department>

    Output:

        <Department diffgr:id="Department1" msdata:rowOrder="0">
          <Department>10</Department>
          <DepartmentDescription>BABY PRODUCTS</DepartmentDescription>
        </Department>

    I found one way to suppress the nodes that we don't need. XSLT:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes"/>
          <xsl:template match="node()|@*">
            <xsl:copy>
              <xsl:apply-templates select="node()|@*"/>
            </xsl:copy>
          </xsl:template>
          <xsl:template match="Department/DepartmentSeq"/>
          <xsl:template match="Department/InsertDateTime"/>
        </xsl:stylesheet>

    I need an XSLT that helps me select the nodes I need rather than "copy all and filter out what I don't need", since I may have to change my XSLT whenever the input schema adds more nodes.

    Read the article

  • Algorithm: Find smallest subset containing K 0's

    - by Vishal
    I have an array of 1's and 0's only. Now I want to find the smallest contiguous subset/subarray which contains at least K 0's.

    Example: the array is 1 1 0 1 1 0 1 1 0 0 0 0 1 0 1 1 0 0 0 1 1 0 0 1 0 0 0 and for K(6) the answer should be 0 0 1 0 1 1 0 0 0 or 0 0 0 0 1 0 1 1 0....

    My solution:

        Array:     1 1 0 1 1 0 1 1 0 0 0 0 1 0 1 1 0 0 0 1 1 0 0
        Index:     1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
        Sum:       1 2 2 3 4 4 5 6 6 6 6 6 7 7 8 9 9 9 9 10 11 11 11
        Diff(I-S): 0 0 1 1 1 2 2 2 3 4 5 6 6 7 7 7 8 9 10 10 10 11 12

    For K(6): start with 9-15 and store the difference in diff. Next increase the difference: 8-15 (difference in index), 8-14 (compare difference in index), and so on, keep moving to find the window with the fewest elements... I am looking for a better algorithm than this solution.

    Read the article

  • Time calculations with MySQL TIMEDIFF

    - by Oli
    Hi there, I have the following table:

        mysql> SELECT id,start1,stop1,start2,stop2 FROM times;
        +----+---------------------+---------------------+---------------------+---------------------+
        | id | start1              | stop1               | start2              | stop2               |
        +----+---------------------+---------------------+---------------------+---------------------+
        |  4 | 2010-04-23 08:05:00 | 2010-04-23 12:15:00 | 2010-04-23 12:45:00 | 2010-04-23 16:50:00 |
        |  2 | 2010-04-26 09:30:00 | 2010-04-26 12:10:00 | 2010-04-26 12:50:00 | 2010-04-26 16:50:00 |
        |  7 | 2010-04-28 08:45:00 | 2010-04-28 11:45:00 | 2010-04-28 13:10:00 | 2010-04-28 17:29:00 |
        |  6 | 2010-04-27 09:30:00 | 2010-04-27 12:15:00 | 2010-04-27 12:55:00 | 2010-04-27 18:44:00 |
        +----+---------------------+---------------------+---------------------+---------------------+

    I want to sum the total work time and the difference to the "needed work hours". It works pretty well with the statement below, but for unknown reasons it doesn't work for id 6. The start*/stop* fields are in format datetime.

        SELECT *, TIME_FORMAT(TIMEDIFF(totaltime,'08:24'),'%H:%i') AS diff,
               totaltime > '08:24' AS redorgreen
        FROM (
            SELECT DATE_FORMAT(start1,'%a %e. %M %Y') AS date,
                   TIME_FORMAT(SUM(TIMEDIFF(stop1,start1) + TIMEDIFF(stop2,start2)),'%H:%i') AS totaltime,
                   TIME_FORMAT(start1,'%H:%i') AS start1,
                   TIME_FORMAT(stop1,'%H:%i') AS stop1,
                   TIME_FORMAT(start2,'%H:%i') AS start2,
                   TIME_FORMAT(stop2,'%H:%i') AS stop2,
                   id AS id
            FROM times
            GROUP BY id ASC
        ) AS somethingwedontneed;

    This is the result:

        select id, TIME_FORMAT(SUM(TIMEDIFF(stop1,start1) + TIMEDIFF(stop2,start2)),'%H:%i') AS totaltime
        from times group by id;
        +----+-----------+
        | id | totaltime |
        +----+-----------+
        |  2 | 06:40     |
        |  4 | 08:15     |
        |  6 | NULL      |
        |  7 | 07:19     |
        +----+-----------+

    Thanks in advance for every hint.
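
    A likely explanation, offered as a guess from the data shown: adding TIMEDIFF results with + (and SUM) makes MySQL convert the TIME values to numbers, so for id 6 the two gaps 02:45:00 and 05:49:00 become 24500 + 54900 = 79400, which is not a valid time (94 minutes), and TIME_FORMAT then returns NULL; the other ids only appear to work because their digits happen to add up to something that still parses as a time. A sketch of the usual workaround, summing seconds instead:

        SELECT id,
               TIME_FORMAT(SEC_TO_TIME(SUM(
                   TIME_TO_SEC(TIMEDIFF(stop1, start1)) +
                   TIME_TO_SEC(TIMEDIFF(stop2, start2)))), '%H:%i') AS totaltime
        FROM times
        GROUP BY id;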

    Read the article

  • subset complete or balance dataset in r

    - by SHRram
    I have a dataset with an unequal number of repetitions. I want to subset the data by removing those entries that are incomplete (i.e. with fewer replications than the maximum). Just a small example:

        set.seed(123)
        mydt <- data.frame(name = rep(c("A", "B", "C", "D", "E"), c(1, 2, 4, 4, 3)),
                           var1 = rnorm(14, 3, 1), var2 = rnorm(14, 4, 1))
        mydt
           name     var1     var2
        1     A 2.439524 3.444159
        2     B 2.769823 5.786913
        3     B 4.558708 4.497850
        4     C 3.070508 2.033383
        5     C 3.129288 4.701356
        6     C 4.715065 3.527209
        7     C 3.460916 2.932176
        8     D 1.734939 3.782025
        9     D 2.313147 2.973996
        10    D 2.554338 3.271109
        11    D 4.224082 3.374961
        12    E 3.359814 2.313307
        13    E 3.400771 4.837787
        14    E 3.110683 4.153373

        summary(mydt)
         name      var1            var2
         A:1   Min.   :1.735   Min.   :2.033
         B:2   1st Qu.:2.608   1st Qu.:3.048
         C:4   Median :3.120   Median :3.486
         D:4   Mean   :3.203   Mean   :3.688
         E:3   3rd Qu.:3.446   3rd Qu.:4.412
               Max.   :4.715   Max.   :5.787

    I want to get rid of A, B and E from the data as they are incomplete. Thus the expected output:

           name     var1     var2
        4     C 3.070508 2.033383
        5     C 3.129288 4.701356
        6     C 4.715065 3.527209
        7     C 3.460916 2.932176
        8     D 1.734939 3.782025
        9     D 2.313147 2.973996
        10    D 2.554338 3.271109
        11    D 4.224082 3.374961

    Please note the dataset is big, so the following may not be an option:

        mydt[mydt$name == "C", ]
        mydt[mydt$name == "D", ]

    Read the article

  • Best practices concerning view model and model updates with a subset of the fields

    - by Martin
    By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago, NerdDinner was my guide but with the development of MVC 2, even it seems outdated. It's an thrilling experience and I feel privileged to be in close contact with intelligent programmers daily. Right now I've stumbled upon an issue I can't seem to get a straight answer on - from all the blogs anyway - and I'd like to get some insight from the community. It's about Editing (read: Edit action). The bulk of material out there, tutorials and blogs, deal with creating and view the model. So while this question may not spell out a question, I hope to get some discussion going, contributing to my decision about the path of development I'm to take. My model represents a user with several fields like name, address and email. All the names, in fact, on field each for first name, last name and middle name. The Details view displays all these fields but you can change only one set of fields at a time, for instance, your names. The user expands a form while the other fields are still visible above and below. So the form that is posted back contains a subset of the fields representing the model. While this is appealing to us and our layout concerns, for various reasons, it is to be shunned by serious MVC-developers. I've been reading about some patterns and best practices and it seems that this is not in key with the paradigm of viewmodel == view. Or have I got it wrong? Anyway, NerdDinner dictates using FormCollection och UpdateModel. All the null fields are happily ignored. Since then, the MVC-community has abandoned this approach to such a degree that a bug in MVC 2 was not discovered. UpdateModel does not work without a complete model in your formcollection. The view model pattern receiving most praise seems to be Dedicated view model that contains a custom view model entity and is the only one that my design issue could be made compatible with. It entails a tedious amount of mapping, albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, that may or may not be worthwhile. He also proposes a 1:1 relationship between view and view model. In keeping with these design paradigms, I am to create a view and associated view for each of my expanding sets of fields. The view models would each be nearly identical, differing only in the fields which are read-only, the views also containing much repeated markup. This seems absurd to me. In future I may want to be able to display two, more or all sets of fields open simultaneously. I will most attentively read the discussion I hope to spark. Many thanks in advance.

    Read the article

  • What is wrong here (will update): subset in geom_point does not work as expected

    - by Andreas
    I ask the following in the hope that someone might come up with a generic description about the problem.Basically I have no idea whats wrong with my code. When I run the code below, plot nr. 8 turns out wrong. Specifically the subset in geom_point does not work the way it should. If somebody can tell me what the problem is, I'll update this post. SOdata <- structure(list(id = 10:55, one = c(7L, 8L, 7L, NA, 7L, 8L, 5L, 7L, 7L, 8L, NA, 10L, 8L, NA, NA, NA, NA, 6L, 5L, 6L, 8L, 4L, 7L, 6L, 9L, 7L, 5L, 6L, 7L, 6L, 5L, 8L, 8L, 7L, 7L, 6L, 6L, 8L, 6L, 8L, 8L, 7L, 7L, 5L, 5L, 8L), two = c(7L, NA, 8L, NA, 10L, 10L, 8L, 9L, 4L, 10L, NA, 10L, 9L, NA, NA, NA, NA, 7L, 8L, 9L, 10L, 9L, 8L, 8L, 8L, 8L, 8L, 9L, 10L, 8L, 8L, 8L, 10L, 9L, 10L, 8L, 9L, 10L, 8L, 8L, 7L, 10L, 8L, 9L, 7L, 9L), three = c(7L, 10L, 7L, NA, 10L, 10L, NA, 10L, NA, NA, NA, NA, 10L, NA, NA, 4L, NA, 7L, 7L, 4L, 10L, 10L, 7L, 4L, 7L, NA, 10L, 4L, 7L, 7L, 7L, 10L, 10L, 7L, 10L, 4L, 10L, 10L, 10L, 4L, 10L, 10L, 10L, 10L, 7L, 10L), four = c(7L, 10L, 4L, NA, 10L, 7L, NA, 7L, NA, NA, NA, NA, 10L, NA, NA, 4L, NA, 10L, 10L, 7L, 10L, 10L, 7L, 7L, 7L, NA, 10L, 7L, 4L, 10L, 4L, 7L, 10L, 2L, 10L, 4L, 12L, 4L, 7L, 10L, 10L, 12L, 12L, 4L, 7L, 10L), five = c(7L, NA, 6L, NA, 8L, 8L, 7L, NA, 9L, NA, NA, NA, 9L, NA, NA, NA, NA, 7L, 8L, NA, NA, 7L, 7L, 4L, NA, NA, NA, NA, 5L, 6L, 5L, 7L, 7L, 6L, 9L, NA, 10L, 7L, 8L, 5L, 7L, 10L, 7L, 4L, 5L, 10L), six = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), .Label = c("2010-05-25", "2010-05-27", "2010-06-07"), class = "factor"), seven = c(0.777777777777778, 0.833333333333333, 0.333333333333333, 0.888888888888889, 0.5, 0.888888888888889, 0.777777777777778, 0.722222222222222, 0.277777777777778, 0.611111111111111, 0.722222222222222, 1, 0.888888888888889, 0.722222222222222, 0.555555555555556, NA, 0, 0.666666666666667, 0.666666666666667, 0.833333333333333, 0.833333333333333, 0.833333333333333, 0.833333333333333, 0.722222222222222, 0.833333333333333, 0.888888888888889, 0.666666666666667, 1, 0.777777777777778, 0.722222222222222, 0.5, 0.833333333333333, 0.722222222222222, 0.388888888888889, 0.722222222222222, 1, 0.611111111111111, 0.777777777777778, 0.722222222222222, 0.944444444444444, 0.555555555555556, 0.666666666666667, 0.722222222222222, 0.444444444444444, 0.333333333333333, 0.777777777777778), eight = c(0.666666666666667, 0.333333333333333, 0.833333333333333, 0.666666666666667, 1, 1, 0.833333333333333, 0.166666666666667, 0.833333333333333, 0.833333333333333, 1, 1, 0.666666666666667, 0.666666666666667, 0.333333333333333, 0.5, 0, 0.666666666666667, 0.5, 1, 0.666666666666667, 0.5, 0.666666666666667, 0.666666666666667, 0.666666666666667, 0.333333333333333, 0.333333333333333, 1, 0.666666666666667, 0.833333333333333, 0.666666666666667, 0.666666666666667, 0.5, 0, 0.833333333333333, 1, 0.666666666666667, 0.5, 0.666666666666667, 0.666666666666667, 0.5, 1, 0.833333333333333, 0.666666666666667, 0.833333333333333, 0.666666666666667), nine = c(0.307692307692308, NA, 0.461538461538462, 0.538461538461538, 1, 0.769230769230769, 0.538461538461538, 0.692307692307692, 0, 0.153846153846154, 0.769230769230769, NA, 0.461538461538462, NA, NA, NA, NA, 0, 0.615384615384615, 0.615384615384615, 0.769230769230769, 0.384615384615385, 0.846153846153846, 0.923076923076923, 0.615384615384615, 0.692307692307692, 0.0769230769230769, 0.846153846153846, 0.384615384615385, 0.384615384615385, 
0.461538461538462, 0.384615384615385, 0.461538461538462, NA, 0.923076923076923, 0.692307692307692, 0.615384615384615, 0.615384615384615, 0.769230769230769, 0.0769230769230769, 0.230769230769231, 0.692307692307692, 0.769230769230769, 0.230769230769231, 0.769230769230769, 0.615384615384615), ten = c(0.875, 0.625, 0.375, 0.75, 0.75, 0.75, 0.625, 0.875, 1, 0.125, 1, NA, 0.625, 0.75, 0.75, 0.375, NA, 0.625, 0.5, 0.75, 0.875, 0.625, 0.875, 0.75, 0.625, 0.875, 0.5, 0.75, 0, 0.5, 0.875, 1, 0.75, 0.125, 0.5, 0.5, 0.5, 0.625, 0.375, 0.625, 0.625, 0.75, 0.875, 0.375, 0, 0.875), elleven = c(1, 0.8, 0.7, 0.9, 0, 1, 0.9, 0.5, 0, 0.8, 0.8, NA, 0.8, NA, NA, 0.8, NA, 0.4, 0.8, 0.5, 1, 0.4, 0.5, 0.9, 0.8, 1, 0.8, 0.5, 0.3, 0.9, 0.2, 1, 0.8, 0.1, 1, 0.8, 0.5, 0.2, 0.7, 0.8, 1, 0.9, 0.6, 0.8, 0.2, 1), twelve = c(0.666666666666667, NA, 0.133333333333333, 1, 1, 0.8, 0.4, 0.733333333333333, NA, 0.933333333333333, NA, NA, 0.6, 0.533333333333333, NA, 0.533333333333333, NA, 0, 0.6, 0.533333333333333, 0.733333333333333, 0.6, 0.733333333333333, 0.666666666666667, 0.533333333333333, 0.733333333333333, 0.466666666666667, 0.733333333333333, 1, 0.733333333333333, 0.666666666666667, 0.533333333333333, NA, 0.533333333333333, 0.6, 0.866666666666667, 0.466666666666667, 0.533333333333333, 0.333333333333333, 0.6, 0.6, 0.866666666666667, 0.666666666666667, 0.6, 0.6, 0.533333333333333)), .Names = c("id", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "elleven", "twelve"), class = "data.frame", row.names = c(NA, -46L)) iqr <- function(x, ...) { qs <- quantile(as.numeric(x), c(0.25, 0.5, 0.75), na.rm = T) names(qs) <- c("ymin", "y", "ymax") qs } magic <- function(y, ...) { high <- median(SOdata[[y]], na.rm=T)+1.5*sd(SOdata[[y]],na.rm=T) low <- median(SOdata[[y]], na.rm=T)-1.5*sd(SOdata[[y]],na.rm=T) ggplot(SOdata, aes_string(x="six", y=y))+ stat_summary(fun.data="iqr", geom="crossbar", fill="grey", alpha=0.3)+ geom_point(data = SOdata[SOdata[[y]] > high,], position=position_jitter(w=0.1, h=0),col="green", alpha=0.5)+ geom_point(data = SOdata[SOdata[[y]] < low,], position=position_jitter(w=0.1, h=0),col="red", alpha=0.5)+ stat_summary(fun.y=median, geom="point",shape=18 ,size=4, col="orange") } for (i in names(SOdata)[-c(1,7)]) { p<- magic(i) ggsave(paste("magig_plot_",i,".png",sep=""), plot=p, height=3.5, width=5.5) }

    Read the article

  • Sum of XML duration elements in SQL2008

    - by Matt
    I have a XML column that holds information about my games. Here's a sample of what the information looks like:

        <game xmlns="http://my.name.space" >
          <move>
            <player>PlayerA</player>
            <start movetype="Move">EE5</start>
            <end movetype="Move">DF6</end>
            <movetime>PT1S</movetime>
          </move>
          <move>
            <player>PlayerB</player>
            <start movetype="Move">CG7</start>
            <end movetype="Move">DE6</end>
            <movetime>PT3S</movetime>
          </move>
          <move>
            <player>PlayerA</player>
            <start movetype="Move">FD3</start>
            <end movetype="Move">EG8</end>
            <movetime>PT4S</movetime>
          </move>
        </game>

    I'm trying to design an XML query to take the sum of my movetime elements. Basically I need the sum of each player's move time. So using the above sample, PlayerA would have a total move time of 5 seconds and PlayerB would have a total move time of 3 seconds. Here's the XML query that I've currently been working with:

        SELECT GameHistory.query('declare default element namespace "http://my.name.space"; data(/game/move/movetime)') AS Value
        FROM Games
        WHERE Id=560

    I'm a newbie to XSLT / XPATH functions :P
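
    A possible sketch for SQL Server 2008, offered with the assumption that every movetime is a whole number of seconds in the form PTnS (real ISO 8601 durations with minutes or hours would need proper parsing): shred the moves with nodes(), strip the PT/S markers, and group by player in an outer query.

        WITH XMLNAMESPACES (DEFAULT 'http://my.name.space')
        SELECT x.player, SUM(x.seconds) AS total_seconds
        FROM (
            -- one row per <move>, with the player name and the parsed seconds
            SELECT m.mv.value('(player/text())[1]', 'varchar(50)') AS player,
                   CAST(REPLACE(REPLACE(
                       m.mv.value('(movetime/text())[1]', 'varchar(20)'),
                       'PT', ''), 'S', '') AS int) AS seconds
            FROM Games
            CROSS APPLY GameHistory.nodes('/game/move') AS m(mv)
            WHERE Id = 560
        ) AS x
        GROUP BY x.player;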

    Read the article

  • Core Data @sum aggregate

    - by nasim
    I am getting an exception when I try to get @sum on a column in iPhone Core-Data application. My two models are following - Task model: @interface Task : NSManagedObject { } @property (nonatomic, retain) NSString * taskName; @property (nonatomic, retain) NSSet* completion; @end @interface Task (CoreDataGeneratedAccessors) - (void)addCompletionObject:(NSManagedObject *)value; - (void)removeCompletionObject:(NSManagedObject *)value; - (void)addCompletion:(NSSet *)value; - (void)removeCompletion:(NSSet *)value; @end Completion model: @interface Completion : NSManagedObject { } @property (nonatomic, retain) NSNumber * percentage; @property (nonatomic, retain) NSDate * time; @property (nonatomic, retain) Task * task; @end And here is the fetch: NSFetchRequest *request = [[NSFetchRequest alloc] init]; request.entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:context]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"taskName" ascending:YES]; request.sortDescriptors = [NSArray arrayWithObject:sortDescriptor]; NSError *error; NSArray *results = [context executeFetchRequest:request error:&error]; NSArray *parents = [results valueForKeyPath:@"taskName"]; NSArray *children = [results valueForKeyPath:@"[email protected]"]; NSLog(@"%@ %@", parents, children); [request release]; [sortDescriptor release]; The exception is thrown at the fourth line from bottom. The thrown exception is: *** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30 *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30' I would very much appreciate any kind of help. Thanks.

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table has transactions, and the other two relate to transaction categories. Basically it's financial data, so each transaction has a category (i.e. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this -

        _________________________________
        | ID | Type | Amount | Category |
        ---------------------------------

    I also have two more tables relating a category to a category's parent. So basically, every Category entry in the Transactions table belongs to a parent category (i.e. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables -

    Category Children:

        ____________________________________________
        | ID | Parent Category ID | Child Category |
        --------------------------------------------

    Category Parent:

        ________________________
        | ID | Parent Category |
        ------------------------

    What I'm trying to do is query the database and have it return total spending by parent category. To get "spending", the Type of the transactions must be "Debit". I tried the following statement:

        SELECT category_parents.parent_category, SUM(amount) AS totals
        FROM (transactions
              INNER JOIN category_children
                 ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
           ON category_children.parent_category_id = category_parents._id
        WHERE trans_type = 'Debit'
        GROUP BY parent_category
        ORDER BY totals DESC

    but it gives me the following exception:

        12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException: no such column: category_children.parent_category_id: , while compiling: SELECT category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN category_children ON transactions.category='category_children.child_category') INNER JOIN category_parents ON category_children.parent_category_id=category_parents._id where trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (EXTRA CREDIT: I also need to make another statement to do spending by child category, given the parent category)
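
    Two things stand out, offered as hedged observations rather than a confirmed diagnosis: the quotes around 'category_children.child_category' make SQLite compare the category to that literal string instead of joining on the column, and the "no such column" error means the junction table's column is not actually named parent_category_id, so the join must use whatever name the CREATE TABLE statement really defines (kept below as parent_category_id purely as a placeholder). A sketch with table aliases:

        SELECT p.parent_category, SUM(t.amount) AS totals
        FROM transactions AS t
        INNER JOIN category_children AS c ON t.category = c.child_category
        INNER JOIN category_parents  AS p ON c.parent_category_id = p._id
        WHERE t.trans_type = 'Debit'
        GROUP BY p.parent_category
        ORDER BY totals DESC;

    The child-category breakdown would be the same query grouped by c.child_category, with an extra WHERE filter on p.parent_category.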

    Read the article

  • Trying to generate all sequences of specified numbers up to a max sum

    - by Stecy
    Hi, Given the following list of descending unique numbers (3,2,1) I wish to generate all the sequences consisting of those numbers up to a maximum sum. Let's say that the sum should be below 10. Then the sequences I'm after are: 3 3 3 3 3 2 1 3 3 2 3 3 1 1 1 3 3 1 1 3 3 1 3 3 3 2 2 2 3 2 2 1 1 3 2 2 1 3 2 2 3 2 1 1 1 1 3 2 1 1 1 3 2 1 1 3 2 1 3 2 3 1 1 1 1 1 1 3 1 1 1 1 1 3 1 1 1 1 3 1 1 1 3 1 1 3 1 3 2 2 2 2 1 2 2 2 2 2 2 2 1 1 1 2 2 2 1 1 2 2 2 1 2 2 2 2 2 1 1 1 1 1 2 2 1 1 1 1 2 2 1 1 1 2 2 1 1 2 2 1 2 2 2 1 1 1 1 1 1 1 2 1 1 1 1 1 1 2 1 1 1 1 1 2 1 1 1 1 2 1 1 1 2 1 1 2 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 I'm sure that there's a "standard" way to generate this. I thought of using linq but can't figure it out. Also, I am trying a stack based approach but I am still not successful. Any idea?

    Read the article

  • Ordering by Sum from a separate controller part 2

    - by bgadoci
    Ok, so I had this working just fine before making a few controller additions and the relocation of some code. I feel like I am missing something really simple here but have spent hours trying to figure out what is going on. Here is the situation.

        class Question < ActiveRecord::Base
          has_many :sites
        end

    and

        class Sites < ActiveRecord::Base
          belongs_to :questions
        end

    I am trying to display my Sites in order of the sum of the 'like' column in the Sites table. From my previous StackOverflow question I had this working when the partial was being called in the /views/sites/index.html.erb file. I then moved the partial to being called in the /views/questions/show.html.erb file and it successfully displays the Sites but fails to order them as it did when being called from the Sites view. I am calling the partial from the /views/questions/show.html.erb file as follows:

        <%= render :partial => @question.sites %>

    and here is the SitesController#index code:

        class SitesController < ApplicationController
          def index
            @sites = @question.sites.all(
              :select => "sites.*, SUM(likes.like) as like_total",
              :joins  => "LEFT JOIN likes AS likes ON likes.site_id = sites.id",
              :group  => "sites.id",
              :order  => "like_total DESC")

            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @sites }
            end
          end

    Read the article

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few millions of records. Among the fields I have Id, Count, FirstHit, LastHit.

        Id       - the record id
        Count    - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit  - latest timestamp with which this Id was reported

    This table only has one record for any given Id. Every day I get, into another table (let's call it feed), around half a million records with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

        Count    - log's Count value, plus the count() of records for that Id found in feed
        FirstHit - the earliest of the current value in log or the minimum value in feed for that Id
        LastHit  - the latest of the current value in log or the maximum value in feed for that Id

    It should be noticed that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in

        Select Id, Min(Timestamp) As FirstHit, MAX(Timestamp) as LastHit, Count(*) as Count
        FROM feed GROUP BY Id
        UNION ALL
        Select Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates Min(FirstHit), Max(LastHit) and Sum(Count):

        Select Id, Min(FirstHit), Max(LastHit), Sum(Count) FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything from temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this? Perhaps doing the update in place in the log table?
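
    If this is SQL Server 2008 or later (the @temp syntax suggests it, but that is an assumption), one way to do the update in place is a single MERGE driven by the per-Id aggregates of feed; on MySQL the same idea can be written with INSERT ... ON DUPLICATE KEY UPDATE. A sketch:

        ;WITH agg AS (
            SELECT Id,
                   COUNT(*)         AS Cnt,
                   MIN([Timestamp]) AS FirstHit,
                   MAX([Timestamp]) AS LastHit
            FROM feed
            GROUP BY Id
        )
        MERGE log AS l
        USING agg AS a ON l.Id = a.Id
        WHEN MATCHED THEN UPDATE SET
            l.[Count]  = l.[Count] + a.Cnt,
            l.FirstHit = CASE WHEN a.FirstHit < l.FirstHit THEN a.FirstHit ELSE l.FirstHit END,
            l.LastHit  = CASE WHEN a.LastHit  > l.LastHit  THEN a.LastHit  ELSE l.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, [Count], FirstHit, LastHit)
            VALUES (a.Id, a.Cnt, a.FirstHit, a.LastHit);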

    Read the article

  • Sum and chart in excel

    - by Chris Lively
    I have data like the following. Both Hour and Count are columns in an Excel file:

        Hour  Count
        17    79
        18    122
        19    123
        20    142
        21    150
        22    78
        23    15
        13    33
        14    33
        15    40
        16    33
        17    56
        18    46
        19    35
        20    67
        21    65
        22    45
        23    36

    What I want is to create a chart that shows, over a period of 1 to 24 (hours), the total count. What's the easiest way to do this? The chart should have a horizontal axis that runs from 1 to 24, and a vertical axis that goes from 0 on up. In the case above the values should be combined like:

        1 - 0
        2 - 0
        3 - 0
        4 - 0
        5 - 0
        6 - 0
        7 - 0
        8 - 0
        9 - 0
        10 - 0
        11 - 0
        12 - 0
        13 - 33
        14 - 33
        15 - 40
        16 - 33
        17 - 135
        18 - 168
        19 - 158
        20 - 209
        21 - 215
        22 - 123
        23 - 51
        24 - 0

    Read the article

  • What is the fastest MD5 sum calculator?

    - by netvope
    I tested the speed of md5sum on a few Ubuntu 8.04 servers:

        Pentium III 700 MHz: 52 MB/s
        Atom 1.6 GHz, 32-bit: 119 MB/s
        Core 2 (Yorkfield) 2.5GHz, 32-bit: 194 MB/s
        Core 2 (Yorkfield) 2.5GHz, 64-bit: 222 MB/s

    Then I downloaded a tool (by apt-get install) called md5deep and found that it's roughly 20% faster (as tested on the 32-bit Core 2 server). This makes me feel that the "vanilla" md5sum included in Ubuntu isn't the fastest one. Questions: Other than md5deep, are you aware of any MD5 calculators that are potentially faster than md5sum? (Answers for software from other OSes are also welcome.) If I want to compile md5sum myself, where can I download the source? What compiler options would you suggest for the Core 2 server? (Note: gcc 4.2.4 in Ubuntu 8.04 does not seem to support -march=core2.)

    Read the article

  • Excel sum from column based on another column

    - by jsmars
    I have two columns. The values in the first one are either blank or 1. The values in the second one are numbers. I also have a variable field. At the bottom of each column, I'd like to have a "total" field which checks whether there is a value (of 1) in the first column, and if there is, takes the value from the second column (on the same row), adds these up, and multiplies the sum by the variable. For example:

        variable: 10

        name1   name2   counter
                1       2
        1               3
        1       1       3
        1               4
        totals: 100     50

    Since name1 has 3 1's in its column, it takes each corresponding value from the counter column, adds them up, multiplies by the variable, and outputs the total. I'm sorry if this has been asked; I've tried searching but I have a hard time understanding the Excel syntaxes. Thanks!

    Read the article
