Search Results

Search found 754 results on 31 pages for 'aggregate'.

Page 8/31

  • How to avoid timestamp issue in a long query?

    - by pingi
    Hi, I have the following two tables:

        items:  id int primary key, bla text
        events: id_items int, num int, "when" timestamp without time zone, ble text,
                composite primary key (id_items, num)

    I want to select, for each item, its most recent event (the one with the newest "when"). I wrote a query, but I don't know whether it could be written more efficiently. Also, on PostgreSQL I ran into an issue comparing timestamps (for me, 2010-05-08T10:00:00.123 compared equal to 2010-05-08T10:00:00.321), so I also select MAX(num) as a tiebreaker. Any thoughts on how to make it better? Thanks.

        SELECT i.*, ea.*
        FROM items AS i
        JOIN (
            SELECT t.s AS t_s, t.c AS t_c, MAX(e.num) AS o
            FROM events AS e
            JOIN (
                SELECT DISTINCT id_items AS s, MAX("when") AS c
                FROM events
                GROUP BY s
                ORDER BY c
            ) AS t ON t.s = e.id_items AND e."when" = t.c
            GROUP BY t.s, t.c
        ) AS tt ON tt.t_s = i.id
        JOIN events AS ea
            ON ea.id_items = tt.t_s AND ea."when" = tt.t_c AND ea.num = tt.o;
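
    A common PostgreSQL idiom for "newest row per group" is DISTINCT ON, which avoids the double self-join entirely. A minimal sketch, assuming the table and column names above (note "when" must be quoted, since it is a reserved word):

        SELECT i.*, e.*
        FROM items AS i
        JOIN (
            -- keep only the newest event per item; ties on "when" fall back to num
            SELECT DISTINCT ON (id_items) *
            FROM events
            ORDER BY id_items, "when" DESC, num DESC
        ) AS e ON e.id_items = i.id;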


  • GQL, Aggregation and Order By

    - by Koran
    Hi, how can GQL support ORDER BY when it does not support aggregation? The question is: if the result of a query contains more than 1000 items, does ORDER BY return the fully ordered list, or only the first 1000 items, which are then ordered? To sharpen the question: is MIN() conceptually the same as query.orderby('asc').fetch(1)? If GQL is properly ordering the whole list, then how can it not provide COUNT()? To properly order the list, GQL would presumably have to scan the whole list, in which case COUNT() would be no problem at all. Or is each item indexed and kept in some kind of tree, so that it does not need to scan everything each time?
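
    For what it's worth, the MIN()-equivalent the question alludes to can be written directly in GQL, since the datastore serves every query from a pre-built index and simply stops reading once the LIMIT is reached. A hedged sketch with hypothetical kind and property names (Event, created):

        SELECT * FROM Event ORDER BY created ASC LIMIT 1

    Because results are read in index order rather than sorted at query time, ordering does not imply a full scan, which is also consistent with COUNT() not being offered as a cheap operation.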


  • MySQL - Grouping the result based on a mathematical operation and SUM() function

    - by SpikETidE
    Hi all, I have the following two tables:

        Table: room_type
        type_id  type_name  no_of_rooms  max_guests  rate
        1        Type 1     15           2           1254
        2        Type 2     10           1           3025

        Table: reservation
        reservation_id  start_date   end_date     room_type  booked_rooms
        1               2010-04-12   2010-04-15   1          8
        2               2010-04-12   2010-04-15   1          2

    Now, I have this query:

        SELECT type_id, type_name
        FROM room_type
        WHERE type_id NOT IN (
            SELECT room_type FROM reservation
            WHERE start_date >= '$start_date' AND end_date <= '$end_date'
        )

    This selects the room types that have no bookings at all between the start date and end date. But as you can see from the reservation table, there is also a "number of rooms booked between the two dates" factor, and I need to add that to the query. It should return the room types for which at least one room is free between the two dates. I've worked out the logic but just can't express it as a query! How would you do this? Thanks for your suggestions!
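
    One way to fold the booked-room counts in is to aggregate the reservations per room type and compare the total against the type's capacity. A sketch, assuming the schema above; note it tests for overlapping stays rather than stays fully contained in the range, which is usually what an availability check wants:

        SELECT rt.type_id, rt.type_name
        FROM room_type AS rt
        LEFT JOIN reservation AS r
               ON r.room_type  = rt.type_id
              AND r.start_date < '$end_date'     -- reservation overlaps the range
              AND r.end_date   > '$start_date'
        GROUP BY rt.type_id, rt.type_name, rt.no_of_rooms
        HAVING rt.no_of_rooms > COALESCE(SUM(r.booked_rooms), 0);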


  • Fetch Max from a date column grouped by a particular field

    - by vamyip
    Hi, I have a table similar to this:

        LogId  RefId  Entered
        ==========================
        1      1      2010-12-01
        2      1      2010-12-04
        3      2      2010-12-01
        4      2      2010-12-06
        5      3      2010-12-01
        6      1      2010-12-10
        7      3      2010-12-05
        8      4      2010-12-01

    Here LogId is unique; for each RefId there are multiple entries, each with a timestamp. What I want to extract is the LogId of the latest entry for each RefId. I tried the solutions from http://stackoverflow.com/questions/121387/sql-fetch-the-row-which-has-the-max-value-for-a-column, but they return multiple rows with the same RefId. Can someone help me with this? Thanks, Vamyip
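
    The standard pattern is to join the table back against its own per-RefId maximum. A sketch with a hypothetical table name (logs), breaking any exact ties on Entered by keeping the highest LogId so each RefId appears exactly once:

        SELECT l.RefId, MAX(l.LogId) AS LogId
        FROM logs AS l
        JOIN (
            -- newest timestamp per RefId
            SELECT RefId, MAX(Entered) AS MaxEntered
            FROM logs
            GROUP BY RefId
        ) AS m
          ON m.RefId = l.RefId
         AND m.MaxEntered = l.Entered
        GROUP BY l.RefId;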


  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to the count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255)
        );
        CREATE TABLE items (
            id SERIAL PRIMARY KEY,
            description TEXT
        );
        CREATE TABLE host_item (
            id SERIAL PRIMARY KEY,
            host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
            item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract two different reports:

        - a list of all hosts with the number of items on each, ordered from highest to lowest count
        - a list of all items with the number of hosts each is on, ordered from highest to lowest count

    I use two queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC
        LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC
        LIMIT 10;

    The problem: the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and it will likely grow significantly when (or perhaps if) the application comes into real use. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I tried different indexes, but since the count is computed, it doesn't seem possible to utilize an index for it. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.
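
    Since 9.3, PostgreSQL can express the hourly-job workaround as a materialized view, which is the same precompute-and-index idea with less plumbing. A minimal sketch, assuming an upgrade past 9.2 is possible and that a scheduled REFRESH (e.g. hourly) is acceptable:

        CREATE MATERIALIZED VIEW item_host_counts AS
            SELECT i.id, i.description, COUNT(hi.id) AS count
            FROM items AS i
            LEFT JOIN host_item AS hi ON (i.id = hi.item)
            GROUP BY i.id;

        -- index the precomputed counts so the top-10 read is instant
        CREATE INDEX ON item_host_counts (count DESC);

        -- run on a schedule to keep the counts current
        REFRESH MATERIALIZED VIEW item_host_counts;

        SELECT * FROM item_host_counts ORDER BY count DESC LIMIT 10;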


  • SQL aggregation query, grouping by entries in junction table

    - by cm007
    I have TableA in a many-to-many relationship with TableC via TableB. That is:

        TableA      TableB          TableC
        id | val    fkeyA | fkeyC   id | data

    I wish to do SELECT SUM(val) on TableA, grouping by the relationship(s) to TableC. Every entry in TableA has at least one relationship with TableC. For example,

        TableA      TableB
        1 | 25      1 | 1
        2 | 30      1 | 2
        3 | 50      2 | 1
                    2 | 2
                    2 | 3
                    3 | 1
                    3 | 2

    should output

        75
        30

    since rows 1 and 3 in TableA have the same set of relationships to TableC, but row 2 in TableA has a different set. How can I write a SQL query for this?
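
    One way to express "group by the set of related TableC rows" is to build a canonical signature of each TableA row's TableC keys and group on that. A sketch using MySQL's GROUP_CONCAT (PostgreSQL's string_agg with an ORDER BY inside the aggregate works the same way); sorting inside the aggregate is what makes equal sets produce equal signatures:

        SELECT SUM(a.val) AS total
        FROM TableA AS a
        JOIN (
            -- one signature per TableA row, e.g. '1,2' or '1,2,3'
            SELECT fkeyA, GROUP_CONCAT(fkeyC ORDER BY fkeyC) AS sig
            FROM TableB
            GROUP BY fkeyA
        ) AS s ON s.fkeyA = a.id
        GROUP BY s.sig;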


  • Request for advice about class design, inheritance/aggregation

    - by Lasse V. Karlsen
    I have started writing my own WebDAV server class in .NET, and the first class I'm starting with is a WebDAVListener class, modelled after how the HttpListener class works. Since I don't want to reimplement the core HTTP protocol handling, I will use HttpListener for all it's worth, and thus I have a question. What would be the suggested way to handle this:

        - Implement all the methods and properties found inside HttpListener, just changing the class types where it matters (i.e. the GetContext + EndGetContext methods would return a different class for WebDAV contexts), storing and using a HttpListener object internally?
        - Construct WebDAVListener by passing it a HttpListener object to use?
        - Create a wrapper for HttpListener with an interface, and construct WebDAVListener by passing it an object implementing this interface?

    If going the route of passing a HttpListener (disguised or otherwise) to the WebDAVListener, would you expose the underlying listener object through a property, or would you expect the program that uses the class to keep a reference to the underlying HttpListener? Also, in this case, would you expose some of the methods of HttpListener through the WebDAVListener, like Start and Stop, or would you again expect the program that uses it to keep the HttpListener reference around for all those things? My initial reaction tells me that I want a combination. For one thing, I would like my WebDAVListener class to look like a complete implementation, hiding the fact that there is a HttpListener object beneath it. On the other hand, I would like to build unit tests without actually spinning up a networked server, so some kind of mocking ability would be nice to have as well, which suggests the interface-wrapper way. One way I could solve this would be this:

        public WebDAVListener()
            : this(new HttpListenerWrapper())
        {
        }

        public WebDAVListener(IHttpListenerWrapper listener)
        {
        }

    And then I would implement all the methods of HttpListener (at least all those that make sense) in my own class, mostly by chaining the call to the underlying HttpListener object. What do you think? Final question: if I go the way of the interface, assuming the interface maps 1-to-1 onto the HttpListener class and is written just to add support for mocking, is such an interface called a wrapper or an adapter?


  • How can I do this aggregate, GROUP BY, IN query in LINQ?

    - by Ólafur Waage
    Please do not give me a full working example; I want to know how this is done rather than get some code I can copy-paste. This is the query I need, and I can't for the life of me create it in LINQ:

        SELECT *
        FROM dbo.Schedules s, dbo.Videos v
        WHERE s.VideoID = v.ID
        AND s.ID IN (
            SELECT MAX(ID)
            FROM dbo.Schedules
            WHERE ChannelID = 1
            GROUP BY VideoID
        )
        ORDER BY v.Rating DESC, s.StartTime DESC

    I have the "IN" query in LINQ, I think; it's something like this:

        var uniqueList = from schedule in db.Schedules
                         where schedule.ChannelID == channelID
                         group schedule by schedule.VideoID into s
                         select new { id = s.Max(i => i.ID) };

    It is possibly wrong, but now I cannot check against it in a where clause of another query with uniqueList.Contains(schedule.ID). There is possibly a better way to write this query; if you have any ideas I would love some hints. I get this error, and it's not making much sense to me:

        The type arguments for method 'System.Linq.Queryable.Contains(System.Linq.IQueryable, TSource)' cannot be inferred from the usage. Try specifying the type arguments explicitly.


  • django access to parent

    - by SledgehammerPL
    model:

        class Product(models.Model):
            name = models.CharField(max_length=128)
            (...)
            def __unicode__(self):
                return self.name

        class Receipt(models.Model):
            name = models.CharField(max_length=128)
            (...)
            components = models.ManyToManyField(Product, through='ReceiptComponent')
            def __unicode__(self):
                return self.name

        class ReceiptComponent(models.Model):
            product = models.ForeignKey(Product)
            receipt = models.ForeignKey(Receipt)
            quantity = models.FloatField(max_length=9)
            unit = models.ForeignKey(Unit)
            def __unicode__(self):
                return unicode(self.quantity != 0 and self.quantity or '') + ' ' + unicode(self.unit) + ' ' + self.product.genitive

    And now I'd like to get a list of the most often used products:

        ReceiptComponent.objects.values('product').annotate(Count('product')).order_by('-product__count')

    An example result:

        [{'product': 3, 'product__count': 5}, {'product': 6, 'product__count': 4},
         {'product': 5, 'product__count': 3}, {'product': 7, 'product__count': 2},
         {'product': 1, 'product__count': 2}, {'product': 11, 'product__count': 1},
         {'product': 8, 'product__count': 1}, {'product': 4, 'product__count': 1},
         {'product': 9, 'product__count': 1}]

    It's almost what I need, but I'd prefer to get Product objects rather than product id values, because I'd like to use this in views.py for generating a list.


  • Filter on count(*) in oracle

    - by chris
    I have a grouped query and would like to filter it based on count(*). Can I do this without a subquery? This is what I have currently:

        SELECT *
        FROM (SELECT ID, COUNT(*) cnt FROM name GROUP BY ID)
        WHERE cnt > 1;
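
    For what it's worth, this is exactly what the HAVING clause is for: it filters groups after aggregation, so no subquery is needed. A sketch against the same table:

        SELECT ID, COUNT(*) AS cnt
        FROM name
        GROUP BY ID
        HAVING COUNT(*) > 1;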


  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example finding the average size and standard deviation for all file types. But it's very slow: around 70 seconds for that query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN ANALYZE output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?
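
    One detail stands out in that plan: the sequential scan over files accounts for roughly 62 of the 74 seconds, and the planner's row estimate (1,140,941) is far above the actual count (805,270), which often indicates table bloat from dead tuples. A hedged first step, on the assumption that bloat is the culprit (note VACUUM FULL takes an exclusive lock while it rewrites the table):

        -- reclaim dead-tuple space and refresh the planner's statistics
        VACUUM FULL VERBOSE files;
        ANALYZE files;

        -- then re-check the timing
        EXPLAIN ANALYZE
        SELECT f.filetype, avg(pr.size), stddev(pr.size)
        FROM files AS f
        JOIN properties AS pr ON pr.file_id = f.id
        GROUP BY f.filetype;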


  • Average of a Sum in Mysql query

    - by chupeman
    I am having some problems creating a query that gives me the average of a sum. I read a few examples here on Stack Overflow and couldn't make it work. Can anyone help me understand how to do this, please? Basically, I need the average transaction value by cashier. I can't run a plain AVG() because it would average over all rows, and each transaction can span multiple rows. In the end I want to have:

        Cashier | Average
        131     | 44.31     (the sum divided by 3 transactions, not 5 rows)
        130     | 33.15
        ...

    This is the query I have to SUM the transactions, but I don't know how or where to include the AVG function:

        SELECT `products`.`Transaction_x0020_Number`,
               Sum(`products`.`Sales_x0020_Value`) AS `SUM of Sales_x0020_Value`,
               `products`.`Cashier`
        FROM `products`
        GROUP BY `products`.`Transaction_x0020_Number`, `products`.`Date`, `products`.`Cashier`
        HAVING (`products`.`Date` = {d'2010-06-04'})

    Any help is appreciated.
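
    The usual shape for an "average of a sum" is to compute the per-transaction sums in a derived table and then average those. A sketch built on the query above, keeping its column names but using a plain date literal in place of the ODBC {d'...'} escape, and moving the date test into WHERE since it is not an aggregate:

        SELECT t.`Cashier`, AVG(t.`TransactionTotal`) AS `Average`
        FROM (
            -- one row per transaction: its total value and its cashier
            SELECT `Transaction_x0020_Number`,
                   `Cashier`,
                   SUM(`Sales_x0020_Value`) AS `TransactionTotal`
            FROM `products`
            WHERE `Date` = '2010-06-04'
            GROUP BY `Transaction_x0020_Number`, `Date`, `Cashier`
        ) AS t
        GROUP BY t.`Cashier`;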


  • Use subform record set as domain argument in DAvg()

    - by harto
    Is it possible to use a subform's 'current' record set as the domain argument to DAvg() (etc.)? Basically, I have a subform that displays a subset of records from a query, and I would like to run DAvg() over this subset. This is how I've gotten around it:

        =DAvg([FieldToAvg], [SubformQuery], "ChildField=Forms.MasterForm.MasterField And FieldToAvg > 0")

    but what I actually want is something like:

        =DAvg([FieldToAvg], [SubformCurrentlyDisplayedData], "FieldToAvg > 0")

    Is this possible in Access 2007?


  • MySQL COUNT() total posts within specific criteria?

    - by newbtophp
    Hey, I've been losing my hair trying to figure out what I'm doing wrong, so let me explain a bit about my MySQL structure (so you get a better understanding) before I go straight to the question. I have a simple PHP forum, and both tables (for posts and topics) have a boolean column named 'deleted': if it equals 0 the row is displayed (considered not deleted / existing), and if it equals 1 it is hidden (considered deleted / nonexistent). Now, the 'specific criteria' I'm after: I want the total post count within a specific forum, given its id (forum_id), counting only posts which are not deleted (deleted = 0) and whose parent topics are not deleted either (deleted = 0). The column/table names are self-explanatory (see my attempts below). I've tried the following (using a simple JOIN):

        SELECT COUNT(t1.post_id)
        FROM forum_posts AS t1, forum_topics AS t2
        WHERE t1.forum_id = '{$forum_id}'
        AND t1.deleted = 0
        AND t1.topic_id = t2.topic_id
        AND t2.deleted = 0
        LIMIT 1

    I've also tried this (using a subquery):

        SELECT COUNT(t1.post_id)
        FROM forum_posts AS t1
        WHERE t1.forum_id = '{$forum_id}'
        AND t1.deleted = 0
        AND (SELECT deleted FROM forum_topics WHERE topic_id = t1.topic_id) = 0
        LIMIT 1

    But neither complies with the specific criteria. Appreciate all help! :)
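
    For reference, here is the same count written with an explicit INNER JOIN; logically it should match the comma-join above, so if the numbers still look wrong, the usual suspects are the deleted flags themselves or posts whose topic_id no longer matches any topic row. A sketch, assuming the table and column names from the question:

        SELECT COUNT(*) AS total_posts
        FROM forum_posts AS p
        INNER JOIN forum_topics AS t ON t.topic_id = p.topic_id
        WHERE p.forum_id = '{$forum_id}'   -- the forum being counted
          AND p.deleted = 0                -- post not deleted
          AND t.deleted = 0;               -- parent topic not deleted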


  • Fetch Products Grouped By Total Sales ?

    - by David
    Hi, I have the following MySQL tables:

        TABLE: Products
        id   | productname
        1030 | xBox 360
        1031 | PlayStation 3
        1032 | iPod Touche

        TABLE: Sales
        productid | saledate
        1031      | 2010-06-14 06:30:12
        1031      | 2010-06-14 08:54:38
        1030      | 2010-06-14 08:58:10
        1032      | 2010-06-14 10:12:47

    Using PHP, I want to fetch the products sold today, grouped by their number of sales and ordered by sale date (if possible). Example of the output:

        Today's statistics:
        - PlayStation 3 (2 sales)
        - Xbox 360 (1 sale)
        - iPod Touche (1 sale)

    Thanks
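
    A sketch of the underlying query, assuming the two tables above; it orders by sale count first and then by the most recent sale time, since each product's sales carry several timestamps:

        SELECT p.productname,
               COUNT(s.productid) AS sales,
               MAX(s.saledate)    AS last_sale
        FROM Products AS p
        JOIN Sales AS s ON s.productid = p.id
        WHERE DATE(s.saledate) = CURDATE()   -- today's sales only
        GROUP BY p.id, p.productname
        ORDER BY sales DESC, last_sale DESC;

    Looping over the result rows in PHP then produces the "Today's statistics" list directly.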


  • If I'm projecting with LINQ and not using a range variable, what is the proper syntax?

    - by itchi
    I have a query that sums and aggregates a lot of data, something like this:

        var anonType = from x in collection
                       let value = collection.Where(c => c.Code == "A")
                       select new { sum = value.Sum(v => v.Amount) };

    I find it really weird that I have to declare the range variable x, especially since I'm not using it. So, am I doing something wrong, or is there a different form I should be following? Also, keep in mind that the anonymous type has about 15 different properties that are all aggregates (sums, counts, etc.), so I couldn't do something like:

        int x = collection.Where(c => c.Code == "A").Sum(v => v.Amount);


  • News Aggregator of sorts

    - by shinjuo
    There is a website that my company uses that updates information about three specific things throughout the day. We use the information from one of them, and what we want to do is pull that information as it is added to their site and add it to a page of our own for easier viewing. Is this even possible? Can anyone point me in the direction of setting this up? It is all text that we want to pull.


  • In the Aggregate: How Will We Maintain Legacy Systems?

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. (July 2007 story about a burst steam pipe in Manhattan)

    We've heard about software rot and technical debt. And we've heard from the likes of:

        - "Uncle Bob" Martin, who warned us about "the consequences of making a mess".
        - Michael C. Feathers, who gave us guidance in 'Working Effectively with Legacy Code'.

    So certainly the software engineering community is aware of these issues. But I feel like our society in the aggregate does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes:

        ...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it.

    If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My question: Do you share my concern? And if so, is there a way that we can avoid the software equivalent of NYC and the steam pipes?


  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher, and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and to measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice, when large structures were seen casting shadows on the nearly edge-on rings. (A screenshot of a typical timestep accompanies the original post; each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need is a way to determine the mass of the particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't easily know that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that is obvious to the human eye. So the algorithm I need to write would be code with as few user-entered parameters as possible (for replicability and objectivity) that goes through all the particle positions, figures out which particles belong to clumps, and then calculates the mass. It would be great if it could do it for "each" clump/strand as opposed to everything in the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of O(N²) distance calculation where I'd calculate the distance between every pair of particles, and if, say, the closest 100 particles were within a certain distance, that particle would be considered part of a cluster. But that seems pretty sloppy, and I was hoping that you CS folks and programmers might know of a more elegant solution?

    Edited with my solution: What I did was take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty O(N²) implementation first. So: take every particle, calculate the distance to all other particles, and the threshold for being in a cluster or not is whether there are N particles within distance d (two parameters that have to be set a priori, unfortunately, but as was said in some responses/comments, I wasn't going to get away without some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped things up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide the simulation cell into a set number of grids (best results when the grid size is ≈ 7d), where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides the particles into the grids, and each particle N only has to have distances calculated to the other particles in its grid cell. Theoretically, if this were a real tree, I should get O(N log N) speeds as opposed to O(N²). I got somewhere between the two: for a 50,000-particle subset I got a 17x increase in speed, and for a 150,000-particle cell a 38x increase; 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those speeds are comparable to how long the code takes to run the simulation one timestep forward, so that's reasonable at this point. Oh, and it's fully threaded, so it'll take as many processors as I can throw at it.


  • T-SQL Tuesday: Aggregations in SSIS

    - by andyleonard
    Introduction: Jes Borland ( Blog | @grrl_geek ) is hosting this month's T-SQL Tuesday - started by SQLBlog's own Adam Machanic ( Blog | @AdamMachanic ) - and it is about aggregation. I thought I'd show a couple of ways to do aggregation using SSIS. The Aggregate Transformation in SSIS: The Aggregate transform in SSIS is fast. I built an SSIS package (AggregateScripts.dtsx) with two Data Flow Tasks (Using the Aggregate Transform and Using a Script Component). Using the Aggregate Transform looks like this: ...(read more)


  • Novell eDirectory—How can I aggregate account lockout events?

    - by bshacklett
    I'm seeing an account become locked out pretty frequently, and I want to pull together an aggregated log of all of the lockout events so I can get a better idea of what times they occur at. Normally I'd do this with EventCombMT.exe, but I'm in a Novell environment at the moment. Is there a Novell equivalent to Microsoft's ALTools, or another diagnostic utility I could use to help aggregate lockout events into an easy-to-read log file?

