Search Results

Search found 11543 results on 462 pages for 'partition wise join'.


  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but that still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?
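
    A possible line of attack (a sketch, not from the original question): the sequential scan over ~800k rows alone takes 62 seconds, and the planner estimates 1,140,941 rows against 805,270 actual, which often points to table bloat or stale statistics rather than an inherently expensive plan. A maintenance pass along these lines, followed by re-checking the plan, would confirm or rule that out:

        -- Table names are taken from the question above.
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;

        -- Re-run the query under EXPLAIN ANALYZE and compare timings.
        EXPLAIN ANALYZE
        SELECT f.filetype, avg(pr.size), stddev(pr.size)
        FROM files f
        JOIN properties pr ON pr.file_id = f.id
        GROUP BY f.filetype;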

    Read the article

  • Python Beautiful Soup .content Property

    - by Robert Birch
    What does BeautifulSoup's .content do? I am working through crummy.com's tutorial and I don't really understand what .content does. I have looked at the forums and I have not seen any answers. Looking at the code below.... from BeautifulSoup import BeautifulSoup import re doc = ['<html><head><title>Page title</title></head>', '<body><p id="firstpara" align="center">This is paragraph <b>one</b>.', '<p id="secondpara" align="blah">This is paragraph <b>two</b>.', '</html>'] soup = BeautifulSoup(''.join(doc)) print soup.contents[0].contents[0].contents[0].contents[0].name I would expect the last line of the code to print out 'body' instead of... File "pe_ratio.py", line 29, in <module> print soup.contents[0].contents[0].contents[0].contents[0].name File "C:\Python27\lib\BeautifulSoup.py", line 473, in __getattr__ raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr) AttributeError: 'NavigableString' object has no attribute 'name' Is .content only concerned with html, head and title? If, so why is that? Thanks for the help in advance.
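
    A minimal sketch of what each .contents[0] step resolves to (BeautifulSoup 3 on Python 2, as in the question; the document is simplified): the fourth step lands on a NavigableString, which has no .name, hence the AttributeError.

        from BeautifulSoup import BeautifulSoup

        soup = BeautifulSoup('<html><head><title>Page title</title></head>'
                             '<body><p>This is paragraph <b>one</b>.</p></body></html>')
        html = soup.contents[0]    # the <html> Tag
        head = html.contents[0]    # the <head> Tag
        title = head.contents[0]   # the <title> Tag
        text = title.contents[0]   # NavigableString u'Page title' -- has no .name
        print type(text)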

    Read the article

  • Is it possible to have a MySQL table accept a NULL value for a column referencing a different table?

    - by Dr.Dredel
    I have a table that has a column which holds the id of a row in another table. However, when table A is being populated, table B may or may not have a row ready for table A. My question is: is it possible to have MySQL prevent an invalid value from being entered, but be OK with a NULL? Or does a foreign key necessitate a valid related value? So, what I'm looking for (in pseudo code) is this:

        Table "person": id | name
        Table "people": id | group_name | person_id (foreign key id from table person)

        insert into person (1, 'joe');
        insert into people (1, 'foo', 1)    // kosher
        insert into people (1, 'foo', NULL) // also kosher
        insert into people (1, 'foo', 7)    // should fail since there is no id 7 in the person table

    The reason I need this is that I'm having a chicken-and-egg issue where it makes perfect sense for the rows in the people table to be created beforehand (in this example, I'm creating the groups and would like them to pre-exist the people who join them). And I realize that THIS example is silly and I would just put the group id in the person table rather than vice versa, but in my real-world problem that is not workable. Just curious if I need to allow any and all values in order to make this work, or if there's some way to allow for NULL.
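
    A short sketch of the behaviour being asked about (table and column names come from the question; this is an illustration, not the accepted answer): with InnoDB, a nullable foreign-key column accepts NULL but still rejects values that don't exist in the referenced table.

        CREATE TABLE person (
            id   INT PRIMARY KEY,
            name VARCHAR(50)
        ) ENGINE=InnoDB;

        CREATE TABLE people (
            id         INT PRIMARY KEY,
            group_name VARCHAR(50),
            person_id  INT NULL,
            FOREIGN KEY (person_id) REFERENCES person (id)
        ) ENGINE=InnoDB;

        INSERT INTO person VALUES (1, 'joe');
        INSERT INTO people VALUES (1, 'foo', 1);     -- accepted
        INSERT INTO people VALUES (2, 'foo', NULL);  -- accepted, NULL bypasses the FK check
        INSERT INTO people VALUES (3, 'foo', 7);     -- rejected: no person with id 7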

    Read the article

  • CodeIgniter: differentiate field names in JOINed tables

    - by Patrick
    Hi, I need to retrieve data from two tables. The first is a list of events, the second a list of venues. I have fields with the same name in both tables: events.venue (which is an ID) and venues.venue, which is the name of the place, say "blues bar". The tables can be joined on events.venue = venues.id. Snippet of my model:

        $this->db->select('events.*, venues.*');
        $this->db->join('venues', 'events.venue = venues.id');
        if ($date != 'all') {
            $this->db->where('date', $date);
        }
        if ($keyword) {
            $this->db->like('description', $keyword);
            $this->db->or_like('band', $keyword);
            $this->db->or_like('venue', $keyword);
            $this->db->or_like('genre', $keyword);
        }
        $Q = $this->db->get('events');
        if ($Q->num_rows() > 0) {
            foreach ($Q->result() as $row) {
                $data[] = $row;
            }
        }
        $Q->free_result();
        return $data;

    Snippet of the view:

        foreach ($events as $row) {
            echo "<p>{$row->band} ({$row->genre})<br />";
            echo "Playing at: {$row->venue}<br /></p>"; // echoes "blues bar"
            // more here...
        }

    Two questions: 1) Why does $row->venue echo venues.venue instead of events.venue? 2) How can I differentiate them? E.g. what if I want to echo both events.venue and venues.venue? I can probably do something like "SELECT venues.venue AS name_of_the_venue", but how can I do this when I've already selected *?
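
    One possible direction (a sketch, not from the original post): keep the wildcard for events but alias the colliding venues columns, so neither value overwrites the other in the result row.

        $this->db->select('events.*, venues.venue AS venue_name, venues.id AS venue_id');
        $this->db->join('venues', 'events.venue = venues.id');
        $Q = $this->db->get('events');

        foreach ($Q->result() as $row) {
            echo $row->venue;       // events.venue (the id), no longer overwritten
            echo $row->venue_name;  // venues.venue, e.g. "blues bar"
        }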

    Read the article

  • Replace beginning words (SQL Server 2005, set-based)

    - by Newbie
    I have the tables below.

        tblInput
        Id   WordPosition   Words
        --   ------------   -----
        1    1              Hi
        1    2              How
        1    3              are
        1    4              you
        2    1              Ok
        2    2              This
        2    3              is
        2    4              me

        tblReplacement
        Id   ReplacementWords
        --   ----------------
        1    Hi
        2    are
        3    Ok
        4    This

    tblInput holds the list of words, while tblReplacement holds the words that we need to search for in tblInput; if a match is found, we need to replace those. But the catch is that we only replace words if the match is found at the beginning. I.e. in tblInput, for Id 1, the only word that will be replaced is 'Hi' and not 'are', because 'How' comes before 'are' and 'How' is not in the tblReplacement list. For Id 2, the words that will be replaced are 'Ok' and 'This': both are present in tblReplacement, and after the first word 'Ok' is replaced, 'This' becomes the first word in the Id 2 group; since it is also in tblReplacement, it is replaced as well. So the desired output is:

        Id   NewWordsAfterReplacement
        --   ------------------------
        1    How
        1    are
        1    you
        2    is
        2    me

    My approach so far:

        ;WITH Cte1 AS (
            SELECT t1.Id, t1.Words, t2.ReplacementWords
            FROM tblInput t1
            CROSS JOIN tblReplacement t2
        ),
        Cte2 AS (
            SELECT Id, NewWordsAfterReplacement = REPLACE(Words, ReplacementWords, '')
            FROM Cte1
        )
        SELECT * FROM Cte2 WHERE NewWordsAfterReplacement <> ''

    But I am not getting the desired output; it replaces all the matching words. Urgent help needed (SET BASED). I am using SQL Server 2005. Thanks
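
    A set-based sketch along the lines being asked for (a suggestion, assuming WordPosition numbers the words in order within each Id): find, for each Id, the first position whose word is not in tblReplacement and keep everything from that position onward.

        ;WITH FirstKeep AS (
            SELECT i.Id, MIN(i.WordPosition) AS KeepFrom
            FROM tblInput i
            WHERE NOT EXISTS (SELECT 1
                              FROM tblReplacement r
                              WHERE r.ReplacementWords = i.Words)
            GROUP BY i.Id
        )
        SELECT i.Id, i.Words AS NewWordsAfterReplacement
        FROM tblInput i
        JOIN FirstKeep f ON f.Id = i.Id
        WHERE i.WordPosition >= f.KeepFrom
        ORDER BY i.Id, i.WordPosition;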

    Read the article

  • Stored procedure or function expects parameter which is not supplied

    - by user2920046
    I am trying to insert data into a SQL Server database by calling a stored procedure, but I am getting the error: Procedure or function 'SHOWuser' expects parameter '@userID', which was not supplied. My stored procedure is called "SHOWuser". I have checked it thoroughly and no parameter is missing. My code is:

        public void SHOWuser(string userName, string password, string emailAddress, List<int> preferences)
        {
            SqlConnection dbcon = new SqlConnection(conn);
            try
            {
                SqlCommand cmd = new SqlCommand();
                cmd.Connection = dbcon;
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "SHOWuser";
                cmd.Parameters.AddWithValue("@userName", userName);
                cmd.Parameters.AddWithValue("@password", password);
                cmd.Parameters.AddWithValue("@emailAddress", emailAddress);
                dbcon.Open();
                int i = Convert.ToInt32(cmd.ExecuteScalar());
                cmd.Parameters.Clear();
                cmd.CommandText = "tbl_pref";
                foreach (int preference in preferences)
                {
                    cmd.Parameters.Clear();
                    cmd.Parameters.AddWithValue("@userID", Convert.ToInt32(i));
                    cmd.Parameters.AddWithValue("@preferenceID", Convert.ToInt32(preference));
                    cmd.ExecuteNonQuery();
                }
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                dbcon.Close();
            }
        }

    and the stored procedure is:

        ALTER PROCEDURE [dbo].[SHOWuser]
            -- Add the parameters for the stored procedure here
            (
                @userName varchar(50),
                @password nvarchar(50),
                @emailAddress nvarchar(50)
            )
        AS
        BEGIN
            INSERT INTO tbl_user(userName, password, emailAddress)
            VALUES (@userName, @password, @emailAddress)

            SELECT tbl_user.userID, tbl_user.userName, tbl_user.password, tbl_user.emailAddress,
                   STUFF((SELECT ',' + preferenceName
                          FROM tbl_pref_master
                          INNER JOIN tbl_preferences ON tbl_pref_master.preferenceID = tbl_preferences.preferenceID
                          WHERE tbl_preferences.userID = tbl_user.userID
                          FOR XML PATH ('')), 1, 1, ' ') AS Preferences
            FROM tbl_user

            SELECT SCOPE_IDENTITY();
        END

    Please help, thanks in advance.
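
    One hypothetical way to narrow this down (a suggestion, not from the thread): the error names a parameter, @userID, that the procedure text shown here never declares, so it is worth confirming what is actually deployed under that name before changing the C# side.

        -- Show the definition and the declared parameters of the deployed procedure.
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.SHOWuser'));

        SELECT name, is_output
        FROM sys.parameters
        WHERE object_id = OBJECT_ID('dbo.SHOWuser');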

    Read the article

  • How can I read a continuously updating log file in Perl?

    - by Octopus
    I have an application generating logs every 5 seconds. The logs are in the format below:

        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215

    and so on for other interfaces. I am working to convert these into the CSV format below:

        Time,interface.RX,interface.TX,interface.close,...
        11:13:49,0,0,0,...

    Simple so far, but the problem is that I have to get the data in CSV format online, i.e. as soon as the log file is updated the CSV should also be updated. What I have tried, to read the output and build the header, is:

        #!/usr/bin/perl -w
        use strict;
        use File::Tail;

        my $head = ["Time"];
        my $pos = {};
        my $last_pos = 0;
        my $current_event = [];
        my $events = [];

        my $file = shift;
        $file = File::Tail->new($file);
        while (defined($_ = $file->read)) {
            next if $_ =~ some filters;
            my ($time, $interface, $count, $eve, $value) = split /[,\n]/, $_;
            my $eve_key = $interface . "." . $eve;
            if (not defined $pos->{$eve_key}) {
                $last_pos += 1;
                $pos->{$eve_key} = $last_pos;
                push @$head, $eve;
            }
            print join(",", @$head) . "\n";
        }

    Is there any way to do this using Perl?
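
    A rough sketch of one way to go from here (an illustration with invented details, not the asker's final code): buffer the values seen for the current timestamp and emit a CSV row whenever the timestamp changes, reprinting the header when a new interface/event column appears.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Tail;

        my $file = File::Tail->new(shift @ARGV);
        my (@columns, %seen, %row, $current_time);

        while (defined(my $line = $file->read)) {
            chomp $line;
            my ($time, $interface, $count, $event, $value) = split /,/, $line;
            ($time) = $time =~ /^(\d+:\d+:\d+)/;   # drop the milliseconds
            my $key = "$interface.$event";
            unless ($seen{$key}++) {               # remember the column order
                push @columns, $key;
                print join(',', 'Time', @columns), "\n";
            }
            if (defined $current_time && $time ne $current_time) {
                print join(',', $current_time,
                           map { defined $row{$_} ? $row{$_} : '' } @columns), "\n";
                %row = ();
            }
            $current_time = $time;
            $row{$key} = $value;
        }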

    Read the article

  • Refactoring: your way to reduce the code complexity of a big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain:

        class OldClass {
            method1(arg1, arg2) { ... 200 lines of code ... }
            method2(arg1) { ... 200 lines of code ... }
            ...
            method20(arg1, arg2, arg3) { ... 200 lines of code ... }
        }

    The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to mind:

    1. Add several private helper methods to each method and group them in a #region (straightforward refactoring).
    2. Use the Command pattern (one command class per OldClass method, in a separate file).
    3. Create a helper static class per method with one public method and several private helper methods; OldClass methods delegate their implementation to the appropriate static class (very similar to commands).
    4. ?

    Thank you in advance!
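
    For illustration only, a sketch of option 2 with invented names and trivial bodies: each big method moves into its own command class, where it can be decomposed into private helpers without bloating OldClass itself.

        class Method1Command
        {
            public int Execute(int arg1, int arg2)
            {
                int prepared = Prepare(arg1);
                return Combine(prepared, arg2);
            }

            private int Prepare(int arg1) { return arg1 * 2; }
            private int Combine(int prepared, int arg2) { return prepared + arg2; }
        }

        class OldClass
        {
            public int Method1(int arg1, int arg2)
            {
                return new Method1Command().Execute(arg1, arg2);
            }
        }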

    Read the article

  • Return multiple IDs from a function and use the result in a query

    - by NewK
    I have this function that returns all the children of a tree node:

        CREATE OR REPLACE FUNCTION fn_category_get_childs_v2(id_pai integer)
          RETURNS integer[] AS
        $BODY$
        DECLARE
            ids_filhos integer array;
        BEGIN
            SELECT array (
                SELECT category_id FROM category WHERE category_id IN (
                    (WITH RECURSIVE parent AS (
                        SELECT category_id, parent_id FROM category WHERE category_id = id_pai
                        UNION ALL
                        SELECT t.category_id, t.parent_id
                        FROM parent INNER JOIN category t ON parent.category_id = t.parent_id
                    )
                    SELECT category_id FROM parent WHERE category_id <> id_pai)
                )
            ) INTO ids_filhos;
            RETURN ids_filhos;
        END;

    and I would like to use it in a select statement like this:

        select * from teste1_elements
        where category_id in (select * from fn_category_get_childs_v2(12))

    I've also tried this way, with the same result:

        select * from teste1_elements
        where category_id = any(select * from fn_category_get_childs_v2(12))

    But I get the following error:

        ERROR:  operator does not exist: integer = integer[]
        LINE 1: select * from teste1_elements where category_id in (select *...
                                                                 ^
        HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.

    The function returns an integer array; is that the problem? SELECT * FROM fn_category_get_childs_v2(12) retrieves the following array (integer[]):

        '{30,32,34,20,19,18,17,16,15,14}'
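
    A possible fix (a sketch, not from the original post): since the function returns a single integer[] value rather than a set of rows, compare against the array itself with ANY, or expand it with unnest(), instead of wrapping it in a sub-select.

        SELECT *
        FROM teste1_elements
        WHERE category_id = ANY (fn_category_get_childs_v2(12));

        -- Equivalent form using unnest() (available from PostgreSQL 8.4):
        SELECT e.*
        FROM teste1_elements e
        WHERE e.category_id IN (SELECT unnest(fn_category_get_childs_v2(12)));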

    Read the article

  • How to end a thread in Perl

    - by user1672190
    I am new to Perl and I have a question about Perl threads. I am trying to create a new thread to check whether the running function has timed out, and my way of doing it is as below. The logic is: 1. create a new thread; 2. run the main function and see if it has timed out, and if so, kill it.

    Sample code:

        $exit_thread = false;    # a flag to make sure the timeout thread will run
        my $thr_timeout = threads->new( \&timeout );
        # execute main function here
        $exit_thread = true;     # set the flag to true to force the thread to end
        $thr_timeout->join();    # wait for the timeout thread to end

    Code of the timeout function:

        sub timeout {
            $timeout = false;
            my $start_time = time();
            while (!$exit_thread) {
                sleep(1);
                last if (main function is executed);
                if (time() - $start_time >= configured time) {
                    logmsg "process is killed as request timed out";
                    _kill_remote_process();
                    $timeout = true;
                    last;
                }
            }
        }

    Right now the code is running as I expected, but I am just not very clear whether the line $exit_thread = true actually works, because there is a "last" at the end of the while loop. Can anybody give me an answer? Thanks
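
    A minimal sketch of the same idea in runnable form (an illustration, not the asker's code): bare words like true/false are not Perl booleans, and a flag is only visible across threads once it is marked as shared.

        use strict;
        use warnings;
        use threads;
        use threads::shared;

        my $exit_thread :shared = 0;

        my $watchdog = threads->create(sub {
            my $start = time();
            until ($exit_thread) {
                sleep 1;
                if (time() - $start >= 10) {      # stand-in for the configured timeout
                    warn "process killed: request timed out\n";
                    last;
                }
            }
        });

        sleep 3;            # stand-in for the real main function

        $exit_thread = 1;   # tell the watchdog to stop
        $watchdog->join();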

    Read the article

  • Visual Studio warning "Content is not allowed" in ASP.NET project

    - by pstar
    Hi, I just started working as a programmer last month, so there will be plenty of newbie questions coming from me - stay tuned... I am now working on modifying the provided template (from DevExpress) to create a new web form using ASP.NET 2.0 in Visual Studio 2008. While the functionality of the web form is there, I am in the process of getting rid of ninety-something warning messages, most of which come from the provided template. One that has puzzled me for a while is this:

        Warning 75  Content is not allowed between the opening and closing tags for element 'ClientSideEvents'.

    And here is the code:

        <dxe:ASPxListBox id="edtMultiResource" runat="server" width="100%"
            SelectionMode="CheckColumn" DataSource='<%# ResourceDataSource %>'
            Border-BorderWidth="0">
            <ClientSideEvents SelectedIndexChanged="function(s, e) {
                var resourceNames = new Array();
                var items = s.GetSelectedItems();
                var count = items.length;
                if (count > 0) {
                    for(var i=0; i<count; i++)
                        _aspxArrayPush(resourceNames, items[i].text);
                }
                else
                    _aspxArrayPush(resourceNames, ddResource.cp_Caption_ResourceNone);
                ddResource.SetValue(resourceNames.join(', '));
            }"></ClientSideEvents>
        </dxe:ASPxListBox>

    I couldn't see anything wrong with the code myself, so please help me out here.

    Read the article

  • Conditional WHERE Clauses in SQL Server 2008

    - by user336786
    Hello, I am trying to execute a query on a table in my SQL Server 2008 database. I have a stored procedure that uses five int parameters. Currently, my parameters are defined as follows: @memberType int, @color int, @preference int, @groupNumber int, @departmentNumber int This procedure will be passed -1 or higher for each parameter. A value of -1 means that the WHERE clause should not consider that parameter in the join/clause. If the value of the parameter is greater than -1, I need to consider the value in my WHERE clause. I would prefer to NOT use an IF-ELSE statement because it seems sloppy for this case. I saw this question here. However, it did not work for me. I think the reason why is because each of the columns in my table can have a NULL value. Someone pointed this scenario out in the fifth answer. That appears to be happening to me. Is there a slick approach to my question? Or do I just need to brute force it (I hope not :(). Thank you!
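
    A sketch of the usual pattern for this (the table and column names below are invented): each parameter is skipped when it is -1, the OR form still behaves sensibly when the underlying column is NULL, and OPTION (RECOMPILE) lets the optimizer build a plan for the actual parameter values.

        SELECT m.*
        FROM dbo.Members AS m
        WHERE (@memberType       = -1 OR m.memberType       = @memberType)
          AND (@color            = -1 OR m.color            = @color)
          AND (@preference       = -1 OR m.preference       = @preference)
          AND (@groupNumber      = -1 OR m.groupNumber      = @groupNumber)
          AND (@departmentNumber = -1 OR m.departmentNumber = @departmentNumber)
        OPTION (RECOMPILE);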

    Read the article

  • Hibernate CreateSQL Query Problem

    - by Shaded
    Hello all, I'm trying to use Hibernate's built-in createSQLQuery function, but it seems that it doesn't like the following query:

        List results = hibernateSession.createSQLQuery(
                "SELECT number, location FROM table WHERE other_number IN (SELECT f.number FROM table2 AS f JOIN table3 AS g on f.number = g.number WHERE g.other_number = " + var + ") ORDER BY number")
            .addEntity(Table.class).list();

    I have a feeling it's the nested select statement, but I'm not sure. The inner select is used elsewhere in the code and it returns results fine. This is my mapping for the first table:

        <hibernate-mapping>
            <class name="org.efs.openreports.objects.Table" table="table">
                <id name="id" column="other_number" type="java.lang.Integer">
                    <generator class="native"/>
                </id>
                <property name="number" column="number" not-null="true" unique="true"/>
                <property name="location" column="location" not-null="true" unique="true"/>
            </class>
        </hibernate-mapping>

    And the .java:

        public class Table implements Serializable
        {
            private Integer id; // panel_facility
            private Integer number;
            private String location;

            public Table() { }

            public void setId(Integer id) { this.id = id; }
            public Integer getId() { return id; }

            public void setNumber(Integer number) { this.number = number; }
            public Integer number() { return number; }

            public String location() { return location; }
            public void setLocation(String location) { this.location = location; }
        }

    Any suggestions? Edit: added the mapping.
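
    One direction worth checking (a sketch, not a confirmed diagnosis): addEntity(Table.class) asks Hibernate to hydrate full Table entities, so the SQL has to return every mapped column, including the id column other_number; selecting all columns and binding the variable as a named parameter would look roughly like this.

        List<Table> results = hibernateSession
            .createSQLQuery(
                "SELECT t.* FROM table t WHERE t.other_number IN (" +
                "SELECT f.number FROM table2 f JOIN table3 g ON f.number = g.number " +
                "WHERE g.other_number = :other) ORDER BY t.number")
            .addEntity("t", Table.class)
            .setParameter("other", var)
            .list();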

    Read the article

  • Ruby on Rails: how do I access variables of a model inside itself, as in this example?

    - by banditKing
    I have a Model like so: # == Schema Information # # Table name: s3_files # # id :integer not null, primary key # owner :string(255) # notes :text # created_at :datetime not null # updated_at :datetime not null # last_accessed_by_user :string(255) # last_accessed_time_stamp :datetime # upload_file_name :string(255) # upload_content_type :string(255) # upload_file_size :integer # upload_updated_at :datetime # class S3File < ActiveRecord::Base #PaperClip methods attr_accessible :upload attr_accessor :owner Paperclip.interpolates :prefix do |attachment, style| I WOULD LIKE TO ACCESS VARIABLE= owner HERE- HOW TO DO THAT? end has_attached_file( :upload, :path => ":prefix/:basename.:extension", :storage => :s3, :s3_credentials => {:access_key_id => "ZXXX", :secret_access_key => "XXX"}, :bucket => "XXX" ) #Used to connect to users through the join table has_many :user_resource_relationships has_many :users, :through => :user_resource_relationships end Im setting this variable in the controller like so: # POST /s3_files # POST /s3_files.json def create @s3_file = S3File.new(params[:s3_file]) @s3_file.owner = current_user.email respond_to do |format| if @s3_file.save format.html { redirect_to @s3_file, notice: 'S3 file was successfully created.' } format.json { render json: @s3_file, status: :created, location: @s3_file } else format.html { render action: "new" } format.json { render json: @s3_file.errors, status: :unprocessable_entity } end end end Thanks, any help would be appreciated.
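
    A small sketch of one way to get at the record from inside the interpolation (a suggestion, not the accepted answer): Paperclip hands the block the attachment, and the owning model instance is reachable through attachment.instance.

        Paperclip.interpolates :prefix do |attachment, style|
          attachment.instance.owner
        end

    Separately, note that attr_accessor :owner defines plain Ruby accessors that shadow the owner column shown in the schema comment, so the value assigned in the controller lives only on the in-memory object and is not saved to the database; removing that attr_accessor line may be wanted if the column is supposed to hold it.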

    Read the article

  • Many tables for many users?

    - by Seagull
    I am new to web programming, so excuse the ignorance... ;-) I have a web application that in many ways can be considered to be a multi-tenant environment. By this I mean that each user of the application gets their own 'custom' environment, with absolutely no interaction between those users. So far I have built the web application as a 'single user' environment. In other words, I haven't actually done anything to support multi-users, but only worked on the functionality I want from the app. Here is my problem... What's the best way to build a multi-user environment: All users point to the same 'core' backend. In other words, I build the logic to separate users via appropriate SQL queries (eg. select * from table where user='123' and attribute='456'). Each user points to a unique tablespace, which is built separately as they join the system. In this case I would simply generate ALL the relevant SQL tables per user, with some sort of suffix for the user. (eg. now a query would look like 'select * from table_ where attribute ='456'). In short, it's a difference between "select * from table where USER=" and "select * from table_USER".
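
    For concreteness, a sketch of what option 1 tends to look like (MySQL-flavoured DDL with an invented table): every tenant-owned table carries the user id, and a composite index keeps the per-user queries cheap as the data grows.

        CREATE TABLE widgets (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            user_id   INT NOT NULL,
            attribute VARCHAR(64) NOT NULL,
            value     TEXT,
            KEY idx_widgets_user_attr (user_id, attribute)
        );

        SELECT * FROM widgets WHERE user_id = 123 AND attribute = '456';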

    Read the article

  • splice() not working correctly

    - by adardesign
    I am setting a cookie for each navigation container that is clicked on. It sets an array that is joined and used as the cookie value. If a container is clicked again, it is removed from the array. It's somehow buggy: it only splices after clicking on other elements, and then it behaves weirdly. It might be that splice is not the correct method. Thanks much.

        var navLinkToOpen;
        var setNavCookie = function(value) {
            var isSet = false;
            var checkCookies = checkNavCookie()
            setCookieHelper = checkCookies ? checkCookies.split(",") : [];
            for (i in setCookieHelper) {
                if (value == setCookieHelper[i]) {
                    setCookieHelper.splice(value, 1);
                    isSet = true;
                }
            }
            if (!isSet) { setCookieHelper.push(value) }
            setCookieHelper.join(",")
            document.cookie = "navLinkToOpen" + "=" + setCookieHelper;
        }

        var checkNavCookie = function() {
            var allCookies = document.cookie.split(';');
            for (i = 0; i < allCookies.length; i++) {
                temp = allCookies[i].split("=")
                if (temp[0].match("navLinkToOpen")) {
                    var getValue = temp[1]
                }
            }
            return getValue || false
        }

        $(document).ready(function() {
            $("#LeftNav li").has("b").addClass("navHeader").not(":first").siblings("li").hide()
            $(".navHeader").click(function() {
                $(this).toggleClass("collapsed").nextUntil("li:has('b')").slideToggle(300);
                setNavCookie($('.navHeader').index($(this)))
                return false
            })
            var testCookies = checkNavCookie();
            if (testCookies) {
                finalArrayValue = testCookies.split(",")
                for (i in finalArrayValue) {
                    $(".navHeader").eq(finalArrayValue[i]).toggleClass("collapsed").nextUntil(".navHeader").slideToggle(0);
                }
            }
        });
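
    A likely culprit and a possible rewrite (a sketch, not a verified fix): Array.splice takes an index, not a value, so the position of the match has to be passed rather than the stored value itself, and an indexed loop is safer than for...in over an array.

        var setNavCookie = function (value) {
            var cookies = checkNavCookie();
            var setCookieHelper = cookies ? cookies.split(",") : [];
            var index = -1;
            for (var i = 0; i < setCookieHelper.length; i++) {
                if (String(value) === setCookieHelper[i]) { index = i; break; }
            }
            if (index > -1) {
                setCookieHelper.splice(index, 1);   // remove by position
            } else {
                setCookieHelper.push(value);
            }
            document.cookie = "navLinkToOpen=" + setCookieHelper.join(",");
        };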

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName = "Stopwatch"), but for some reason, SQL Server CE hangs up pretty bad when I try to do stuff like this. One of my queries, which isn't really that complicated takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join then manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2Mb. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )
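
    One avenue worth trying before caching everything in memory (a sketch; the context and association names are assumed, not taken from the post): DataLoadOptions.LoadWith tells LINQ to SQL to fetch the child rows together with the parents, instead of issuing a separate query each time the EntitySet is touched.

        using System.Data.Linq;
        using System.Linq;

        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);

        using (var db = new MyDataContext(connectionString))
        {
            db.LoadOptions = options;
            var customers = db.Customers.ToList();   // Orders arrive eagerly with the customers
        }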

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have 2 tables, "products" and "pieces":

        PRODUCTS
        idProd | product | price

        PIECES
        id | idProdMain | idProdChild | quant

    idProdMain and idProdChild are related to the "products" table. Other considerations: one product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. EXAMPLE:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED? I need to update the price of the main product when the price of a child product (piece) is changed. Following the example above, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried this to obtain the price of the main product, but it doesn't work - the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
            ON (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table is there to keep track of the compound products, i.e. to know which products are the children. An example of a compound product is a computer: it is composed of other products (motherboard, hard disk, memory, CPU, etc.).
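
    A sketch of the aggregate part (an illustration in Access SQL, not the accepted answer): join pieces to products through idProdChild only, and group on the main product. Note that the original WHERE filtered on idProdMain = 5 ("Software"), which has no rows in the pieces table, which is one reason it came back empty.

        SELECT pieces.idProdMain,
               Sum(pieces.quant * products.price) AS NewPrice
        FROM pieces INNER JOIN products
            ON products.idProd = pieces.idProdChild
        WHERE pieces.idProdMain = 1
        GROUP BY pieces.idProdMain;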

    Read the article

  • Further filter SQL results

    - by eric
    I've got a query that returns a proper result set, using SQL 2005. It is as follows: select case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4' then '2009 Q2' else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) end as [Quarter], bugtypes.bugtypename, count(bug.bugid) as [Total] from bug left outer join bugtypes on bug.crntbugtypeid = bugtypes.bugtypeid and bug.projectid = bugtypes.projectid where (bug.projectid = 44 and bug.currentowner in (-1000000031,-1000000045) and bug.crntplatformid in (42,37,25,14)) or (bug.projectid = 44 and bug.currentowner in (select memberid from groupmembers where projectid = 44 and groupid in (87,88)) and bug.crntplatformid in (42,37,25,14)) group by case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4' then '2009 Q2' else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) end, bugtypes.bugtypename order by 1,3 desc It produces a nicely grouped list of years and quarters, an associated descriptor, and a count of incidents in descending count order. What I'd like to do is further filter this so it shows only the 10 most submitted incidents per quarter. What I'm struggling with is how to take this result set and achieve that.
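
    A sketch of one way to add that filter (a suggestion, not from the post): since this is SQL Server 2005, ROW_NUMBER() partitioned by the quarter can rank the rows, and an outer query keeps the top ten per quarter; the derived table below stands in for the existing grouped query (minus its final ORDER BY, which is not allowed inside a derived table).

        ;WITH ranked AS (
            SELECT q.[Quarter], q.bugtypename, q.Total,
                   ROW_NUMBER() OVER (PARTITION BY q.[Quarter]
                                      ORDER BY q.Total DESC) AS rn
            FROM ( /* existing grouped query, without the final ORDER BY */ ) AS q
        )
        SELECT [Quarter], bugtypename, Total
        FROM ranked
        WHERE rn <= 10
        ORDER BY [Quarter], Total DESC;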

    Read the article

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU intensive batch process I'm running. Essentially I'm trying to condense multiple single page tiffs into single PDF documents. This works fine with a foreach loop or standard iteration but can be very slow for several 100 page documents. I tried the following based on a some examples I found to use multithreading and it has significant performance improvements however it obliterates the page order instead of 1,2,3,4 it will be 1,3,4,2,6,5 on what thread completes first. My question is how would I utilize this technique while maintaining the page order and if I can will it negate the performance benefit of the multithreading? Thank you in advance. PdfDocument doc = new PdfDocument(); string mail = textBox1.Text; string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None); int counter = split.Count(); // Source must be array or IList. var source = Enumerable.Range(0, 100000).ToArray(); // Partition the entire source array. var rangePartitioner = Partitioner.Create(0, counter); double[] results = new double[counter]; // Loop over the partitions in parallel. Parallel.ForEach(rangePartitioner, (range, loopState) => { // Loop over each range element without a delegate invocation. for (int i = range.Item1; i < range.Item2; i++) { f_prime = split[i].Replace(" " , ""); PdfPage page = doc.AddPage(); XGraphics gfx = XGraphics.FromPdfPage(page); XImage image = XImage.FromFile(f_prime); double x = 0; gfx.DrawImage(image, x, 0); } });
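
    One way to keep the speed-up without losing page order (a sketch that assumes the split array and the PDFsharp types from the question, and that XImage.FromFile can be called from several threads; if it cannot, only the file reads should be parallelised): do the heavy loading into an index-addressed array in parallel, then add pages to the document sequentially.

        var paths = split.Select(s => s.Replace(" ", "")).ToArray();
        var images = new XImage[paths.Length];

        Parallel.For(0, paths.Length, i =>
        {
            images[i] = XImage.FromFile(paths[i]);   // parallel, each slot is independent
        });

        var doc = new PdfDocument();
        for (int i = 0; i < images.Length; i++)      // sequential, preserves page order
        {
            PdfPage page = doc.AddPage();
            using (XGraphics gfx = XGraphics.FromPdfPage(page))
            {
                gfx.DrawImage(images[i], 0, 0);
            }
            images[i].Dispose();
        }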

    Read the article

  • How to add an object to an HTML string?

    - by Philippe Maes
    I'm trying to load several images via a drop action, then resize them and add them as thumbnails. The resize part is very important because the images can be very large and I want to make the thumbnails small. Here is my code:

        loadingGif(drop);
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var reader = new FileReader();
            reader.onload = function(e) {
                var src = e.target.result;
                var img = document.createElement('img');
                img.src = src;
                var scale = 100 / img.height;
                img.height = 100;
                img.width = scale * img.width;
                output.push('<div id="imagecontainer"><div id="image">' + img + '</div><div id="delimage"><img src="img/del.jpg"" /></div></div>');
                if (output.length == files.length) {
                    drop.removeChild(drop.lastChild);
                    drop.innerHTML += output.join('');
                    output.length = 0;
                }
            }
            reader.readAsDataURL(file);
        }

    As you can probably tell, I insert a loading gif into my drop zone until all files are loaded (output.length == files.length). When a file is loaded I add HTML to an array, which I print once the load is complete. The problem is I can't seem to add the img object (I need this object to resize the image) to the HTML string, which seems obvious as img is an object... So my question to you guys is: how do I do this? :)
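
    One possible fix (a sketch, not a drop-in replacement): an element concatenated into a string just becomes "[object HTMLImageElement]", so either emit the markup with the computed size, or measure inside the image's onload handler, where the natural dimensions are actually known.

        var img = new Image();
        img.onload = function () {
            var scale = 100 / img.height;                 // natural size is known here
            var width = Math.round(scale * img.width);
            output.push('<div class="imagecontainer">' +
                        '<img src="' + src + '" width="' + width + '" height="100" />' +
                        '<div class="delimage"><img src="img/del.jpg" /></div></div>');
            // ...then do the output.length == files.length check as before...
        };
        img.src = src;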

    Read the article

  • Unknown reason for code executing the way it does in Python

    - by Jasper
    Hi, I am a beginner programmer using Python on a Mac. I created a function as part of a game which receives the player's input for the main character's name. The code is:

        import time

        def newGameStep2():
            print ' ****************************************** '
            print '\nStep2\t\t\t\tCharacter Name'
            print '\nChoose a name for your character. This cannot\n be changed during the game. Note that there\n are limitations upon the name.'
            print '\nLimitations:\n\tYou cannot use:\n\tCommander\n\tLieutenant\n\tMajor\n\t\tas these are reserved.\n All unusual capitalisations will be removed.\n There is a two-word-limit on names.'
            newStep2Choice = raw_input('>>>')
            newStep2Choice = newStep2Choice.lower()
            if 'commander' in newStep2Choice or 'lieutenant' in newStep2Choice or 'major' in newStep2Choice:
                print 'You cannot use the terms \'commander\', \'lieutenant\' or \'major\' in the name. They are reserved.\n'
                print
                time.sleep(2)
                newGameStep2()
            else:
                newStep2Choice = newStep2Choice.split(' ')
                newStep2Choice = [newStep2Choice[0].capitalize(), newStep2Choice[1].capitalize()]
                newStep2Choice = ' '.join(newStep2Choice)
                return newStep2Choice

        myVar = newGameStep2()
        print myVar

    When I was testing, I entered 'major a', and when it asked me to input another name, I entered 'a b'. However, when the function returned its output, it returned 'major a'. I stepped through this with a debugger, yet I still can't seem to find where the problem occurs. Thanks for any help, Jasper
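
    A likely explanation and fix (a sketch, not a confirmed diagnosis): the rejected-name branch calls newGameStep2() again but throws the recursive call's result away, so the value typed on the retry never reaches the outer call; returning it does.

            if 'commander' in newStep2Choice or 'lieutenant' in newStep2Choice or 'major' in newStep2Choice:
                print 'Those terms are reserved. Please choose another name.\n'
                time.sleep(2)
                return newGameStep2()    # propagate the retry's result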

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL, MySQL):

        CREATE TABLE a (
            a_id INT PRIMARY KEY
        );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
            b_id INT PRIMARY KEY,
            a_id INT,
            name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
            (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
            (4, 2, 'foo'), (5, 2, 'bar'),
            (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id,
               CONCAT('|', GROUP_CONCAT(name ORDER BY name ASC SEPARATOR '|'), '|') AS names
        FROM a
        LEFT JOIN b USING (a_id)
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it's any fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
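
    A portable sketch that avoids GROUP_CONCAT entirely (a suggestion, not from the original post; it runs on SQLite, PostgreSQL and MySQL): cross-join every a_id with the required names and anti-join against b, which lists exactly which names each row is missing.

        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
        LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
        WHERE b.b_id IS NULL
        ORDER BY a.a_id, req.name;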

    Read the article

  • custom sorting or ordering a table without resorting the whole shebang

    - by fuugus
    For ten years we've been using the same custom sorting on our tables, and I'm wondering if there is another solution which involves fewer updates, especially since today we'd like to have a replication/publication date and wouldn't like our replication to replicate unnecessary entries. I had a look into nested sets, but they don't seem to do the job for us.

    Base table:

        id | a_sort
        ---+-------
        1    10
        2    20
        3    30

    After inserting

        insert into table (a_sort) values(15)

    an entry at the second position:

        id | a_sort
        ---+-------
        1    10
        2    20
        3    30
        4    15

    Ordering the table with

        select * from table order by a_sort

    and resorting all the a_sort entries, updating at least id=(2,3,4), will of course produce the desired output:

        id | a_sort
        ---+-------
        1    10
        4    20
        2    30
        3    40

    The column names, the column count, the datatypes, a possible join, possible triggers, or the way the resorting is done are irrelevant to the problem. Also, we've found some pretty neat ways to do this task fast. The only question is: how the heck can we reduce the updates in the db to 1 or 2 at most? It seems like an awfully common problem. The captain obvious in me once thought "use an a_sort float(53), insert using a fixed value of ordervaluefirstentry+abs(ordervaluefirstentry-ordervaluenextentry)/2", but this would only allow around 1040 "in between" entries - so never resorting seems a bit problematic ;)

    Read the article

  • Passing values to a method

    - by Kasun
    I am a beginner at programming. Can you please show me how to pass values to the Compile() method below?

        class CL
        {
            private const string clexe = @"cl.exe";
            private const string exe = "Test.exe", file = "test.cpp";
            private string args;

            public CL(String[] args)
            {
                this.args = String.Join(" ", args);
                this.args += (args.Length > 0 ? " " : "") + "/Fe" + exe + " " + file;
            }

            public Boolean Compile(String content, ref string errors)
            {
                // remove any old copies
                if (File.Exists(exe)) File.Delete(exe);
                if (File.Exists(file)) File.Delete(file);
                File.WriteAllText(file, content);

                Process proc = new Process();
                proc.StartInfo.UseShellExecute = false;
                proc.StartInfo.RedirectStandardOutput = true;
                proc.StartInfo.RedirectStandardError = true;
                proc.StartInfo.FileName = clexe;
                proc.StartInfo.Arguments = this.args;
                proc.StartInfo.CreateNoWindow = true;
                proc.Start();
                //errors += proc.StandardError.ReadToEnd();
                errors += proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();

                bool success = File.Exists(exe);
                return success;
            }
        }
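
    A hypothetical usage sketch (the compiler flags and source text are invented, and cl.exe is assumed to be on the PATH of a Visual Studio command prompt): construct the class with the extra arguments, then pass the source code and a ref string to receive the compiler output.

        var cl = new CL(new[] { "/EHsc", "/nologo" });

        string errors = string.Empty;
        string source = "#include <iostream>\n" +
                        "int main() { std::cout << \"hello\\n\"; return 0; }\n";

        bool ok = cl.Compile(source, ref errors);
        Console.WriteLine(ok ? "Compiled Test.exe" : "Compile failed:\n" + errors);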

    Read the article
