Search Results

Search found 19090 results on 764 pages for 'greatest n per group'.

Page 74 of 764

  • Simple SELECT query problem...

    - by Povylas
    Hi, I have this MySQL SELECT query which seems to have a bug, but I am quite "green" so I simply cannot see it; maybe you could help? Here is the query:

        SELECT node_id FROM rate
        WHERE node_id='".$cat_node_id_string."'
        LIMIT ".$node_count_star.",".$node_count_end."
        ORDER BY SUM(amount)
        GROUP BY node_id

    Thanks for the help in advance...
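
    For context, a likely fix: MySQL requires the clauses in the order WHERE, GROUP BY, ORDER BY, LIMIT, and ordering by SUM(amount) only makes sense once the rows are grouped. A hedged sketch of the corrected query (table and column names taken from the question; the ? placeholders stand in for the interpolated PHP variables):

        SELECT node_id
        FROM rate
        WHERE node_id = ?     -- value of $cat_node_id_string
        GROUP BY node_id
        ORDER BY SUM(amount)
        LIMIT ?, ?            -- offset and row count ($node_count_star, $node_count_end)

    Note that filtering on a single node_id leaves at most one group, so the LIMIT offset may also be worth revisiting.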

    Read the article

  • How to get multiple counts with one SQL query?

    - by Crobzilla
    I am wondering how to write this query. I know the following syntax is bogus, but it will help you understand what I want. I need it in this format, because it is part of a much bigger query.

        SELECT distributor_id,
               COUNT(*) AS TOTAL,
               COUNT(*) WHERE level = 'exec',
               COUNT(*) WHERE level = 'personal'

    I need this all returned in one query, and it needs to be in one row, so the following won't work:

        SELECT distributor_id, COUNT(*) GROUP BY distributor_id
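
    The standard way to express this is conditional aggregation; a minimal sketch, assuming a table named distributors holding the distributor_id and level columns from the question:

        SELECT distributor_id,
               COUNT(*) AS total,
               SUM(CASE WHEN level = 'exec'     THEN 1 ELSE 0 END) AS exec_count,
               SUM(CASE WHEN level = 'personal' THEN 1 ELSE 0 END) AS personal_count
        FROM distributors
        GROUP BY distributor_id;

    Each SUM counts only the rows matching its condition, so all three counts come back in a single row per distributor_id, which should slot into the bigger query.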

    Read the article

  • Calculating average (AVG) and grouping by week on large data set takes too long

    - by caioiglesias
    I'm getting average prices by week over 7 million rows, and it takes around 30 seconds to get the job done. This is the simple query:

        SELECT AVG(price) as price, YEARWEEK(FROM_UNIXTIME(timelog)) as week
        FROM pricehistory
        WHERE timelog > $range AND product_id = $id
        GROUP BY week

    The only week whose data actually changes (and is worth re-averaging) is always the last one, so recalculating the whole period every time is a waste of resources. I just wanted to know if MySQL has a tool to help out with this.
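
    One common approach, sketched here on the assumption that a summary table can be added (pricehistory_weekly is a hypothetical name): pre-aggregate the closed weeks once, and only average the current week live.

        CREATE TABLE pricehistory_weekly (
          product_id INT NOT NULL,
          week       INT NOT NULL,              -- a YEARWEEK() value
          avg_price  DECIMAL(10,2) NOT NULL,
          PRIMARY KEY (product_id, week)
        );

        -- refresh closed weeks periodically (e.g. from cron)
        INSERT INTO pricehistory_weekly (product_id, week, avg_price)
        SELECT product_id, YEARWEEK(FROM_UNIXTIME(timelog)), AVG(price)
        FROM pricehistory
        WHERE YEARWEEK(FROM_UNIXTIME(timelog)) < YEARWEEK(NOW())
        GROUP BY product_id, YEARWEEK(FROM_UNIXTIME(timelog))
        ON DUPLICATE KEY UPDATE avg_price = VALUES(avg_price);

        -- report: cached weeks plus the live current week
        SELECT week, avg_price
        FROM pricehistory_weekly
        WHERE product_id = ?
        UNION ALL
        SELECT YEARWEEK(FROM_UNIXTIME(timelog)), AVG(price)
        FROM pricehistory
        WHERE product_id = ?
          AND timelog >= ?   -- unix timestamp of the start of the current week
        GROUP BY YEARWEEK(FROM_UNIXTIME(timelog));

    Restricting the live part by timelog (rather than by YEARWEEK of it) lets an index on (product_id, timelog) do the work.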

    Read the article

  • Published software not displayed in Add/Remove Programs

    - by vikramsjn
    I just followed "How to use Group Policy to remotely install software in Windows Server 2003" to try publishing a piece of software (an MSI file). I could follow all the steps, but the supposedly successfully published software does not appear in Add/Remove Programs on the client/user machine. Could someone help me figure out why this isn't working? Update: after reading this question on Experts-Exchange, I tried gpresult. An extract of the output follows:

        COMPUTER SETTINGS
            The following GPOs were not applied because they were filtered out
                XADistribution
                    Filtering: Denied (Security)
                Default Domain Policy
                    Filtering: Denied (Security)

    Read the article

  • Why do two patterns (/.*) and (.*) match different strings? @per-directory (.htaccess) mod_rewrite RewriteRule

    - by Leftium
    Shouldn't the two patterns (/.*) and (.*) match the same string? My real question is actually: where did the "abc" go? Something funky seems to be happening inside the mod_rewrite engine... Given this .htaccess file in www/dir/:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteRule (/.*) print_url_args.php?result=$1

    a request for http://localhost/dir/abc/123/ results in:

        result ($1) = "/123/"
        $_REQUEST_URI = "/dir/abc/123/"

    If the / is removed from the pattern, like

        RewriteRule (.*) print_url_args.php?result=$1

    the same request for http://localhost/dir/abc/123/ results in:

        result ($1) = "print_url_args.php"
        $_REQUEST_URI = "/dir/abc/123/"

    Update: here is the rewrite log:

        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] add path info postfix: C:/db/www/dir/abc -> C:/db/www/dir/abc/123/
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/abc/123/ -> abc/123/
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] applying pattern '(/.*)$' to uri 'abc/123/'
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (2) [perdir C:/db/www/dir/] rewrite 'abc/123/' -> 'print_url_args.php?result=/123/'
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) split uri=print_url_args.php?result=/123/ -> uri=print_url_args.php, args=result=/123/
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (3) [perdir C:/db/www/dir/] add per-dir prefix: print_url_args.php -> C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (2) [perdir C:/db/www/dir/] strip document_root prefix: C:/db/www/dir/print_url_args.php -> /dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#23cd4a8/initial] (1) [perdir C:/db/www/dir/] internal redirect with /dir/print_url_args.php [INTERNAL REDIRECT]
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#43833c8/initial/redir#1] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/print_url_args.php -> print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#43833c8/initial/redir#1] (3) [perdir C:/db/www/dir/] applying pattern '(/.*)$' to uri 'print_url_args.php'
        127.0.0.1 - - [15/Feb/2011:14:21:51 +0900] [localhost/sid#1333140][rid#43833c8/initial/redir#1] (1) [perdir C:/db/www/dir/] pass through C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] add path info postfix: C:/db/www/dir/abc -> C:/db/www/dir/abc/123/
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/abc/123/ -> abc/123/
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] applying pattern '(.*)$' to uri 'abc/123/'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (2) [perdir C:/db/www/dir/] rewrite 'abc/123/' -> 'print_url_args.php?result=abc/123/'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) split uri=print_url_args.php?result=abc/123/ -> uri=print_url_args.php, args=result=abc/123/
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (3) [perdir C:/db/www/dir/] add per-dir prefix: print_url_args.php -> C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (2) [perdir C:/db/www/dir/] strip document_root prefix: C:/db/www/dir/print_url_args.php -> /dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23bf470/initial] (1) [perdir C:/db/www/dir/] internal redirect with /dir/print_url_args.php [INTERNAL REDIRECT]
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) [perdir C:/db/www/dir/] strip per-dir prefix: C:/db/www/dir/print_url_args.php -> print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) [perdir C:/db/www/dir/] applying pattern '(.*)$' to uri 'print_url_args.php'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (2) [perdir C:/db/www/dir/] rewrite 'print_url_args.php' -> 'print_url_args.php?result=print_url_args.php'
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) split uri=print_url_args.php?result=print_url_args.php -> uri=print_url_args.php, args=result=print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (3) [perdir C:/db/www/dir/] add per-dir prefix: print_url_args.php -> C:/db/www/dir/print_url_args.php
        127.0.0.1 - - [15/Feb/2011:14:24:54 +0900] [localhost/sid#1333140][rid#23fda10/initial/redir#1] (1) [perdir C:/db/www/dir/] initial URL equal rewritten URL: C:/db/www/dir/print_url_args.php [IGNORING REWRITE]

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    Hi all. We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?

    Read the article

  • SQL IO and SAN troubles

    - by James
    We are running two servers with an identical software setup but different hardware. The first one is a VM on VMware on a normal tower server with dual-core Xeons, 16 GB RAM and a 7200 RPM drive. The second one is a VM on XenServer on a powerful, brand new rack server with four-core Xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15,000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both. Here are the results for the slow tower server (sqlio v1.5.SG; each test runs 8 threads for 120 secs with 8 outstanding I/Os per thread against a 5120 MB C:\TestFile.dat, hardware disk cache enabled but not file cache; system counter for latency timings, 14318180 counts per second):

        sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat   (8KB random writes)
          IOs/sec: 226.97   MBs/sec: 1.77
          Min_Latency(ms): 0   Avg_Latency(ms): 281   Max_Latency(ms): 467
          histogram: 99% of IOs at 24+ ms

        sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat   (8KB random reads)
          IOs/sec: 91.34   MBs/sec: 0.71
          Min_Latency(ms): 14   Avg_Latency(ms): 699   Max_Latency(ms): 1124
          histogram: 100% of IOs at 24+ ms

        sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat   (64KB sequential writes)
          IOs/sec: 1094.50   MBs/sec: 68.40
          Min_Latency(ms): 0   Avg_Latency(ms): 58   Max_Latency(ms): 467
          histogram: 100% of IOs at 24+ ms

        sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat   (64KB sequential reads)
          IOs/sec: 1155.31   MBs/sec: 72.20
          Min_Latency(ms): 17   Avg_Latency(ms): 55   Max_Latency(ms): 205
          histogram: 100% of IOs at 24+ ms

    And here are the results for the fast rack server (same parameters; 62500000 counts per second). The first run targeted E:\TestFile.dat, and all four tests failed with "open_file: CreateFile (E:\TestFile.dat for write/read): The system cannot find the path specified.", so I re-ran them against c:\TestFile.dat:

        sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat   (8KB random writes)
          IOs/sec: 2575.77   MBs/sec: 20.12
          Min_Latency(ms): 1   Avg_Latency(ms): 24   Max_Latency(ms): 655
          histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
                      %: 0 0 0 5 8 9 9 9 8 5  3  1  1  1  1  0  0  0  0  0  0  0  0  0  37

        sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat   (8KB random reads)
          IOs/sec: 1141.39   MBs/sec: 8.91
          Min_Latency(ms): 1   Avg_Latency(ms): 55   Max_Latency(ms): 652
          histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
                      %: 0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  1  1  1  1  1  1  1  1  91

        sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat   (64KB sequential writes)
          IOs/sec: 341.37   MBs/sec: 21.33
          Min_Latency(ms): 5   Avg_Latency(ms): 186   Max_Latency(ms): 120037
          histogram: 100% of IOs at 24+ ms

        sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat   (64KB sequential reads)
          IOs/sec: 1024.07   MBs/sec: 64.00
          Min_Latency(ms): 5   Avg_Latency(ms): 61   Max_Latency(ms): 81632
          histogram: 100% of IOs at 24+ ms

    Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64KB sequential write test is incredibly slow on the rack server (68 MB/s on the slow tower vs 21 MB/s on the rack), and the 64KB read speed also seems slow. Is this enough to say there is some sort of bottleneck with the shared storage? I need to know if I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.

    Read the article

  • How many megabytes per second may I expect from a gigabit USB 2.0 network card under Linux?

    - by Nakedible
    I'm interested in the actual real-world throughput attainable with an external 1000BaseT USB 2.0 network card under Linux. I have been able to attain 90 megabytes per second on a PCI-E interface, but the USB 2.0 bus has a theoretical limit of 480 Mbit/s, and in practice less than 40 megabytes per second. Is the actual throughput attainable with such a card under Linux 40, 30, 20, or even as low as 10 megabytes per second, i.e. no better than a normal 100BaseT network card?

    Read the article

  • How many guesses per second are possible against an encrypted disk? [closed]

    - by HappyDeveloper
    I understand that guesses per second depend on the hardware and the encryption algorithm, so I don't expect an absolute number as an answer. For example, with an average machine you can make a lot (thousands?) of guesses per second against a hash created with a single MD5 round, because MD5 is fast, making brute-force and dictionary attacks a real danger for most passwords. But if instead you use bcrypt with enough rounds, you can slow the attack down to 1 guess per second, for example.

    1) So how does disk encryption usually work? This is how I imagine it; tell me if it is close to reality: when I enter the passphrase, it is hashed with a slow algorithm to generate a key (always the same one?). Because this is slow, brute force is not a good approach to break it. Then, with the generated key, the disk is decrypted on the fly very fast, so there is no significant performance loss.

    2) How can I test this on my own machine? I want to calculate the guesses per second my machine can make.

    3) How many guesses per second are possible against an encrypted disk with the fastest PC ever so far?

    Read the article

  • Is it possible to restrict the connection duration per client on the router (say with OpenWRT)?

    - by static
    How can I limit the connection duration per client per period (say, one MAC address may connect to the network for only 3 hours per week)? Where would such a rule be defined, in the firewall? The rule should not statically define the times when the client is allowed to access the network (that is simple), but rather the duration of the connection per day/week/month, etc. How and where can such rules be implemented? Is it possible to do so with OpenWRT/DD-WRT?

    Read the article

  • How do I add additional data to a DataGridRowGroupHeader?

    - by Jeff Yates
    I have a DataGrid that is showing some data via a PagedCollectionView with one group definition. I have created a Style for the corresponding DataGridRowGroupHeader, under which I have added a ControlTemplate containing an additional TextBlock and a spacing Rectangle. I would like to bind the widths of these controls to the widths of particular columns, but I am struggling to get this working. I would also like to bind the Text property of the TextBlock to a value. I tried binding the widths via the Width property of a Rectangle in resources, but this didn't work (possibly because the Rectangle was never drawn and therefore never calculated its layout). However, I believe both sets of bindings can be performed with some use of one or more ValueConverter implementations, but I was wondering if there is a better way. Can any of this be achieved through the definition of a ControlTemplate?

    Read the article

  • SQL Server query not showing daily date result

    - by Andrew Jahn
    I have a simple user production report where daily quotas are tracked. The SQL returns a week's worth of data for all users, and for each day it tracks their totals. The problem is that if a user is out for a day and has zero production for that day, the result query just skips that day and leaves it out. I want to return days even if the table has no entries for the person on that day.

    table:

        user  date
        andy  3/22/10
        andy  3/22/10
        andy  3/23/10
        andy  3/24/10
        andy  3/26/10

    result:

        andy  3/22/10  2
              3/23/10  1
              3/24/10  1
              3/25/10  0
              3/26/10  1

    So my question is: how do I get the query to return that 3/25/10 date with a count of 0? Current query:

        SELECT A.USUS_ID as Processor,
               CONVERT(VARCHAR(10), A.CLST_STS_DTM, 101) as Date,
               COUNT(A.CLCL_ID) as DailyTotal
        FROM CMC_CLST_STATUS A
        WHERE A.CLST_STS_DTM >= (@Day)
          AND DATEADD(d, 5, @Day) > A.CLST_STS_DTM
        GROUP BY A.USUS_ID, CONVERT(VARCHAR(10), A.CLST_STS_DTM, 101)
        ORDER BY A.USUS_ID, CONVERT(VARCHAR(10), A.CLST_STS_DTM, 101)
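
    A common fix, sketched here on the assumption that this is SQL Server (the query uses CONVERT and DATEADD) and that @Day carries no time component: generate the five report dates first, then LEFT JOIN the status rows onto them so empty days survive as zero counts.

        ;WITH Days AS (
            SELECT @Day AS d
            UNION ALL
            SELECT DATEADD(d, 1, d) FROM Days WHERE d < DATEADD(d, 4, @Day)
        )
        SELECT U.USUS_ID AS Processor,
               CONVERT(VARCHAR(10), Days.d, 101) AS Date,
               COUNT(A.CLCL_ID) AS DailyTotal
        FROM Days
        CROSS JOIN (SELECT DISTINCT USUS_ID FROM CMC_CLST_STATUS) U
        LEFT JOIN CMC_CLST_STATUS A
               ON A.USUS_ID = U.USUS_ID
              AND A.CLST_STS_DTM >= Days.d
              AND A.CLST_STS_DTM < DATEADD(d, 1, Days.d)
        GROUP BY U.USUS_ID, CONVERT(VARCHAR(10), Days.d, 101)
        ORDER BY U.USUS_ID, CONVERT(VARCHAR(10), Days.d, 101);

    COUNT(A.CLCL_ID) skips NULLs, so a day with no matching rows comes back as 0 rather than disappearing.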

    Read the article

  • Row Number for GROUP BY?

    - by Damien Joe
    I have a table structured like this:

        Category  EmpName
        1         Harry
        1         John
        1         Ford
        2         James
        2         Mark
        2         Shane
        3         Oliver
        3         Ted

    I want results like:

        Category  EmpName  RowNumber
        1         Harry    1
        1         John     2
        1         Ford     3
        2         James    1
        2         Mark     2
        2         Shane    3
        3         Oliver   1
        3         Ted      2

    I am using DB2, and row_number() is not working for different groups of records.
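
    For what it's worth, ROW_NUMBER() restarts per group once it is given a PARTITION BY clause, which DB2 supports; a minimal sketch, assuming the table is named emp:

        SELECT Category,
               EmpName,
               ROW_NUMBER() OVER (PARTITION BY Category
                                  ORDER BY EmpName) AS RowNumber
        FROM emp;

    PARTITION BY Category resets the numbering for each category, and the ORDER BY inside OVER() controls the order within each group; the sample output implies insertion order, which needs an explicit ordering column to reproduce reliably.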

    Read the article

  • Report group headings not repeating on every page.

    - by ProfK
    I have an RDLC report with three tables and associated datasets. In my second table, I cannot get the two header rows to repeat on each printed page. When viewed interactively, each table is on its own page and this isn't a problem. When I switch to print layout, though, my second table now spans two pages, and the second page gets no header rows. Am I missing a setting or something? Added: I do have the 'Repeat Header columns on each page' option checked.

    Read the article

  • Python class design - Splitting up big classes into multiple ones to group functionality

    - by Ivo Wetzel
    OK, I've got two really big classes (1k lines each) that I currently have split up into multiple classes, which then get recombined using multiple inheritance. Now I'm wondering if there is any cleaner/better, more Pythonic way of doing this. Completely factoring them out would result in endless amounts of self.otherself.do_something calls, which I don't think is the way it should be done. To make things clear, here's what it currently looks like:

        # GUI.py
        from gui_events import GUIEvents    # event handlers
        from gui_helpers import GUIHelpers  # helper methods that don't directly modify the GUI

        class GUI(gtk.Window, GUIEvents, GUIHelpers):
            # general stuff here

    One problem that results from this is Pylint complaining, giving me trillions of "init not called" / "undefined attribute" / "attribute accessed before definition" warnings.

    Read the article

  • Linking simple products to their own pages when listed under a grouped product in Magento

    - by Yogesh
    I want to add a URL for each simple product listed under a grouped product in Magento. I changed the template at app\design\frontend\blank\default\template\catalog\product\view\type\grouped.phtml with the code below, but it still does not work for me: each entry links to the main grouped product instead (for example, with a main grouped product and three simple products Item1, Item2 and Item3, all three simple products show the same URL as the main grouped product).

        <td><a href="<?php $_item->getUrlPath() ?>"><?php echo $this->htmlEscape($_item->getName()) ?></a></td>

    and this also:

        <td><a href="<?php $_item->getProductUrl() ?>"><?php echo $this->htmlEscape($_item->getName()) ?></a></td>

    Am I making a mistake somewhere? Please help with how and where to change it.

    Read the article

  • How to split a string into two parts in VB.NET

    - by amol kadam
    Hi, I'm Amol Kadam. I want to know how to split a string into two parts. My string is in time format (12:12), and I want to separate it into hour and minute parts. The data type of all the variables is String: strTimeHr is used for the hour and strTimeMin for the minute. I tried the code below, but there was an exception: "Index and length must refer to a location within the string. Parameter name: length".

        If Not (objDS.Tables(0).Rows(0)("TimeOfAccident") Is Nothing Or objDS.Tables(0).Rows(0)("TimeOfAccident") Is System.DBNull.Value) Then
            strTime = objDS.Tables(0).Rows(0)("TimeOfAccident") ' strTime takes the value 12:12
            index = strTime.IndexOf(":")                        ' index takes the value 2
            lastIndex = strTime.Length                          ' lastIndex takes the value 5
            strTimeHr = strTime.Substring(0, index)             ' strTimeHr correctly takes the value 12
            strTimeMin = strTime.Substring(index + 1, lastIndex) ' BUT HERE THE PROBLEM OCCURS: strTimeMin doesn't take any value
            Me.NumUpDwHr.Text = strTimeHr
            Me.NumUpDwMin.Text = strTimeMin
        End If

    Read the article

  • MS Access group development

    - by Hubidubi
    We are planning to redesign quite a large MS Access application. Is there any way to work concurrently on the same application, or is it possible to merge two separate instances of the same file (not the data, but the forms and code)? Right now Access contains the data, but in the future version MySQL will host the data and Access will only be the frontend (via ODBC).

    Read the article

  • XML Deserialization of complex object

    - by nils_gate
    I have an XML structure like this:

        <Group id="2" name="Third" parentid="0" />
        <Group id="6" name="Five" parentid="4" />
        <Group id="3" name="Four" parentid="2" />
        <Group id="4" name="Six" parentid="1" />

    where parentid denotes the id of another Group. The constructor of Group reads:

        public Group(string name, int ID, Group parent)

    While deserializing, how do I resolve the parent Group from its parentid and pass it into the constructor?

    Read the article

  • Add comma-separated value of grouped rows to existing query

    - by Peter Lang
    I've got a view for reports that looks something like this:

        SELECT a.id, a.value1, a.value2, b.value1, /* (+50 more such columns) */
        FROM a
        JOIN b ON (b.id = a.b_id)
        JOIN c ON (c.id = b.c_id)
        LEFT JOIN d ON (d.id = b.d_id)
        LEFT JOIN e ON (e.id = d.e_id)
        /* (+10 more inner/left joins) */

    It joins quite a few tables and returns lots of columns, but indexes are in place and performance is fine. Now I want to add another column to the result, showing the comma-separated values (ordered by value) from table y, outer-joined via the intersection table x, if a.value3 IS NULL; otherwise take a.value3. To comma-separate the grouped values I use Tom Kyte's stragg; I could use COLLECT later. Pseudo-code for the SELECT would look like this:

        SELECT xx.id,
               COALESCE( a.value3, stragg( xx.val ) ) value3
        FROM ( SELECT x.id, y.val
               FROM x
               WHERE x.a_id = a.id
               JOIN y ON ( y.id = x.y_id )
               ORDER BY y.val ASC
             ) xx
        GROUP BY xx.id

    What is the best way to do it? Any tips?
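
    One way to add the column without grouping the whole view, sketched here on the assumption that this is Oracle (stragg and COLLECT suggest so) and that x carries the a_id foreign key shown in the pseudo-code: compute the aggregate in a correlated scalar subquery, leaving the existing joins untouched.

        SELECT a.id, a.value1, a.value2, b.value1, /* (+50 more such columns) */
               COALESCE( a.value3,
                         ( SELECT stragg(y.val)
                           FROM x
                           JOIN y ON (y.id = x.y_id)
                           WHERE x.a_id = a.id ) ) AS value3
        FROM a
        JOIN b ON (b.id = a.b_id)
        /* (remaining joins as before) */

    Note that stragg does not guarantee the order of the concatenated values; on Oracle 11gR2 and later, LISTAGG(y.val, ',') WITHIN GROUP (ORDER BY y.val) gives the ordered list directly.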

    Read the article

  • SQL COUNT of COUNT

    - by cryptic-star
    I have some data I am querying. The table is composed of two columns: a unique ID, and a value. I would like to count the number of times each unique value appears (which can easily be done with a COUNT and GROUP BY), but then I want to be able to count that. So, I would like to see how many values appear twice, three times, etc. So for the following data (ID, val)...

        1, 2
        2, 2
        3, 1
        4, 2
        5, 1
        6, 7
        7, 1

    the intermediate step would be (val, count)...

        1, 3
        2, 3
        7, 1

    and I would like to have (count_from_above, new_count)...

        3, 2  -- since the count three appears twice in the previous table
        1, 1  -- since the count one appears once in the previous table

    Is there any query which can do that? If it helps, I'm working with Postgres. Thanks!
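
    Nesting one GROUP BY inside another does exactly this; a minimal sketch, assuming the table is named data with columns id and val:

        SELECT cnt AS count_from_above,
               COUNT(*) AS new_count
        FROM ( SELECT val, COUNT(*) AS cnt
               FROM data
               GROUP BY val ) AS per_value
        GROUP BY cnt
        ORDER BY cnt DESC;

    The inner query produces the intermediate (val, count) table; the outer query then groups those counts and counts how often each one occurs.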

    Read the article
