Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.


  • Optimizing MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about a million records in it at any given time. Records are inserted into it at a very fast rate, then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        select url
        from website ws, ping pi
        where ws.idproxy = pi.idproxy
          and pi.entrytime > curdate() - 3
          and contentping + tcpping is not null
        group by url
        having sum(contentping + tcpping) / (count(*) - count(errortype)) < 500
           and count(*) > 3
           and count(errortype) / count(*) < .15
        order by sum(contentping + tcpping) / (count(*) - count(errortype)) asc;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.
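    Since the query joins on idproxy and range-filters on entrytime, an index on entrytime alone leaves the join unsupported. A minimal sketch of the usual first step, assuming the table and column names from the question (the index name is made up):

        -- Composite index covering the join column and the range filter, so
        -- each website row costs one index range scan over its recent pings.
        ALTER TABLE ping ADD INDEX idx_ping_proxy_time (idproxy, entrytime);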

  • select for update with ruby oci8

    - by ash34
    How do I do a 'select for update' and then 'update' the row using ruby-oci8? I have two fields, counter1 and counter2, in a table which has only one record. I want to select the values from this table and then increment them, locking the row using select for update. Thanks.
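    Whatever the exact ruby-oci8 call sequence, the SQL it has to execute is the standard Oracle pattern below; a sketch assuming a hypothetical table named counters (the lock is held from the SELECT until the commit):

        -- Lock the row; other sessions doing FOR UPDATE block here.
        SELECT counter1, counter2 FROM counters FOR UPDATE;

        -- Increment while still holding the lock, then release it.
        UPDATE counters SET counter1 = counter1 + 1, counter2 = counter2 + 1;
        COMMIT;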

  • Is closing/disposing an SqlDataReader needed if you are already closing the SqlConnection?

    - by Brian
    I noticed this question, but my question is a bit more specific. Is there any advantage to using

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            using (SqlCommand command = new SqlCommand())
            {
                // do stuff
            }
        }

    instead of

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            SqlCommand command = new SqlCommand();
            // do stuff
        }

    Obviously it does matter if you run more than one command with the same connection, since closing an SqlDataReader is more efficient than closing and reopening a connection (calling conn.Close(); conn.Open(); will also free up the connection). I see many people insist that failure to close the DataReader means leaving open connection resources around, but doesn't that only apply if you don't close the connection?

  • Trying to verify understanding of Foreign Keys MSSQL

    - by msarchet
    I'm working on a learning project to expose myself to things I don't get to do at work: a simple bug and case tracking app (I know there are a million of these; it's just to work with some tools I don't normally use). While designing my database I realized I've never actually set up a column as a foreign key in any of my projects. I've designed the database in a way I think is close to correct, at least for the initial layout. However, when I try to add the FKs to the linking tables I get an error saying, "The tables present in the relationship must have the same number of columns". I'm doing this in SSMS by going to the Keys 'folder' and adding a FK. Is there something I'm doing wrong here? I don't understand why the tables would have to have the same number of columns for me to add a FK relationship between them.
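    That error usually means the relationship designer has paired up mismatched column lists rather than anything about the tables' total column counts. Doing it in T-SQL makes the pairing explicit; a sketch with hypothetical table and column names, since the actual schema isn't shown:

        -- A foreign key pairs one column list with the referenced key;
        -- the number of other columns in either table is irrelevant.
        ALTER TABLE BugCase
        ADD CONSTRAINT FK_BugCase_Bug FOREIGN KEY (BugId)
            REFERENCES Bug (BugId);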

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query. While the two queries look almost the same, the second one (in this sample) runs much slower:

        SELECT user.id
             , user.first_name
             -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    This query takes 1.2s. Explain on this query gives:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                    first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id      4        func  1       Using where

    The second query:

        SELECT -- user.id
               -- user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    takes 45s to run, with explain:

        id  select_type         table      type            possible_keys                                            key     key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                             141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id  4        func  1       Using where

    Why is it slower if I query only by indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve? Thanks.
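    In older MySQL versions an IN (subquery) is executed as a dependent subquery probed once per user row, so the user table is always scanned; the first query merely gets away with a cheaper scan of the first_name index because id and first_name are both in it, while user.* forces the full table. The usual workaround is to turn the subquery into a join; a sketch assuming the tables from the question:

        -- The derived table is materialized once, and users are then
        -- looked up by primary key instead of probing education per row.
        SELECT u.*
        FROM user u
        JOIN (SELECT DISTINCT ref_id
              FROM education
              WHERE ref_type = 'user'
                AND institute_id = '58'
                AND institute_type = '1') e
          ON e.ref_id = u.id;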

  • How do I perform 'WHERE' on groups of rows?

    - by Drew
    I have a table, which looks like:

        +-----------+----------+
        | person_id | group_id |
        +-----------+----------+
        |         1 |       10 |
        |         1 |       20 |
        |         1 |       30 |
        |         2 |       10 |
        |         2 |       20 |
        |         3 |       10 |
        +-----------+----------+

    I need a query such that only person_ids with groups 10 AND 20 AND 30 are returned (only person_id 1). I am not sure how to do this, as from what I can see it would require me to group the rows by person_id and then select the rows which contain all group_ids. I'm looking for something which will preserve the use of keys without resorting to string operations on group_concat() or such.
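    This is the classic relational-division pattern, and a plain GROUP BY/HAVING handles it with index-friendly predicates; a sketch assuming the table is named person_group:

        -- Keep only people whose rows cover all three groups.
        -- COUNT(DISTINCT ...) guards against duplicate (person, group) rows.
        SELECT person_id
        FROM person_group
        WHERE group_id IN (10, 20, 30)
        GROUP BY person_id
        HAVING COUNT(DISTINCT group_id) = 3;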

  • Update a list of things without hitting every entry

    - by bobobobo
    I have a list in a database that the user should be able to order.

        itemname | order value (int)
        ---------+------------------
        salad    | 1
        mango    | 2
        orange   | 3
        apples   | 4

    On load from the database, I simply order by order_value. By drag 'n drop, he should be able to move apples so that it appears at the top of the list:

        itemname | order value (int)
        ---------+------------------
        apples   | 4
        salad    | 1
        mango    | 2
        orange   | 3

    OK. So now internally I have to update EVERY LIST ITEM! If the list has 20 or 100 items, that's a lot of updates for a simple drag operation:

        itemname | order value (int)
        ---------+------------------
        apples   | 1
        salad    | 2
        mango    | 3
        orange   | 4

    I'd rather do it with only one update. One way I thought of is to make the order value a double:

        itemname | order value (double)
        ---------+---------------------
        salad    | 1.0
        mango    | 2.0
        orange   | 3.0
        apples   | 4.0

    So after the drag 'n drop operation, I give apples a value that is less than the item it is to appear in front of:

        itemname | order value (double)
        ---------+---------------------
        apples   | 0.5
        salad    | 1.0
        mango    | 2.0
        orange   | 3.0

    And if an item is dragged into the middle somewhere, its order_value is bigger than the one it appears after. Here I moved orange to be between salad and mango:

        itemname | order value (double)
        ---------+---------------------
        apples   | 0.5
        salad    | 1.0
        orange   | 1.5
        mango    | 2.0

    Any thoughts on better ways to do this?
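    The fractional-value scheme really does need only one statement per drop: set the moved row to the midpoint of its two new neighbours (or half the first value when it moves to the top). A sketch assuming a hypothetical table named items, for the "apples to the top" move above:

        -- Midpoint between a virtual 0 at the start and salad's 1.0.
        UPDATE items SET order_value = (0 + 1.0) / 2 WHERE itemname = 'apples';

    One caveat: repeated halving eventually exhausts the precision of a double, so an occasional renumbering pass back to 1.0, 2.0, 3.0, ... is still needed, just rarely.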

  • DataSet with many OR clauses

    - by Silvan
    Hello :) I've got a little problem with a query which I create in the Visual Studio Designer. I need a query with a lot of 'OR' clauses for the same column. I found the IN operator, but I don't know how to use it in the Visual Studio Designer. Example of IN:

        SELECT EmployeeID, FirstName, LastName, HireDate, City
        FROM Employees
        WHERE City IN ('Seattle', 'Tacoma', 'Redmond')

    I tried to do it this way:

        SELECT [MachineryId], [ConstructionSiteId], [DateTime], [Latitude], [Longitude], [HoursCounter]
        FROM [PositionData]
        WHERE [MachineryID] IN @MachineryIDs

    But this doesn't work. Is there another way to handle a lot of OR clauses? Thank you very much for your help.
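    A plain ADO.NET parameter can bind only a single scalar value, which is why IN @MachineryIDs fails; the designer has no way to expand one parameter into a list. A common workaround is a fixed set of parameters, sketched below with made-up parameter names, where any unused slot is bound to a value that never matches:

        SELECT [MachineryId], [ConstructionSiteId], [DateTime],
               [Latitude], [Longitude], [HoursCounter]
        FROM [PositionData]
        WHERE [MachineryID] IN (@Id1, @Id2, @Id3)

    For a genuinely variable-length list, the SQL text has to be built dynamically in code rather than in the designer.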

  • Insert query results into table in ms access 2010

    - by CodeMed
    I need to transform data from one schema into another in an MS Access database. This involves writing queries to select data from the old schema and then inserting the results into tables in the new schema. Below is an example of what I am trying to do. The SELECT component works fine, but the INSERT component does not. Can someone show me how to fix it so that it inserts the results of the SELECT statement into the destination table?

        INSERT INTO CompaniesTable (CompanyName)
        VALUES (
            SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
            FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
            LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
            ON a.ContactID = b.ContactID
        );

    The definition of the target table (CompaniesTable) is:

        CompanyID    Autonumber
        CompanyName  Text
        Description  Text
        WebSite      Text
        Email        Text
        TypeNumber   Number
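    The VALUES keyword accepts only literal values; to append the result set of a query, the form is INSERT INTO ... SELECT with no VALUES at all. A sketch of the corrected statement, unchanged apart from dropping VALUES and its parentheses:

        INSERT INTO CompaniesTable (CompanyName)
        SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
        FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
        LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
        ON a.ContactID = b.ContactID;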

  • SSAS dimension source table changed - how to propagate changes to analysis server?

    - by Phil
    Hi, Sorry if the question isn't phrased very well but I'm new to SSAS and don't know the correct terms. I have changed the name of a table and its columns. I am using said table as a dimension for my cube, so now the cube won't process. Presumably I need to make updates in the analysis server to reflect changes to the source database? I have no idea where to start - any help gratefully received. Thanks Phil

  • What is the difference between these two LINQ statements?

    - by jamone
    I had the 1st statement in my code and found it not giving an accurate count; it was returning 1 when the correct answer is 18. To try and debug the problem I broke it out, creating the 2nd statement here, and that count returns 18. I just don't see what the difference is between these two; it seems like the 1st is just more compact. I'm currently running the two statements back to back, and I'm sure the database isn't changing between them.

        int count = (from s in surveysThisQuarter
                     where s.FacilityID == facility.LocationID
                     select s.Deficiencies).Count();

    vs.

        var tempSurveys = from s in surveysThisQuarter
                          where s.FacilityID == facility.LocationID
                          select s;

        int count = 0;
        foreach (Survey s in tempSurveys)
            count += s.Deficiencies.Count();

  • LINQ + Find count of non-null values

    - by Ashutosh
    I have a table with the below structure:

        ID  VALUE
        1   3.2
        2   NULL
        4   NULL
        5   NULL
        7   NULL
        10  1.8
        11  NULL
        12  3.2
        15  4.7
        17  NULL
        22  NULL
        24  NULL
        25  NULL
        27  NULL
        28  7

    I would like to get the max count of consecutive NULL values in the table. Any help would be greatly appreciated. Thanks, Ashutosh
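    This is a gaps-and-islands problem. If the database supports window functions, consecutive NULL runs can be grouped by the difference of two row numbers; a sketch assuming the table is named t:

        -- Rows in one NULL run share the same grp value, because both
        -- row numbers advance together inside the run.
        SELECT MAX(run_length) AS max_consecutive_nulls
        FROM (
            SELECT COUNT(*) AS run_length
            FROM (
                SELECT id, value,
                       ROW_NUMBER() OVER (ORDER BY id)
                     - ROW_NUMBER() OVER (PARTITION BY (CASE WHEN value IS NULL THEN 1 ELSE 0 END)
                                          ORDER BY id) AS grp
                FROM t
            ) marked
            WHERE value IS NULL
            GROUP BY grp
        ) runs;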

  • how to avoid sub-queries to gain performance

    - by chun
    Hi, I have a reporting query with two long sub-queries:

        SELECT r1.code_centre, r1.libelle_centre, r1.id_equipe, r1.equipe, r1.id_file_attente,
               r1.libelle_file_attente, r1.id_date, r1.tranche, r1.id_granularite_de_periode, r1.granularite,
               r1.ContactsTraites, r1.ContactsenParcage, r1.ContactsenComm,
               r1.DureeTraitementContacts, r1.DureeComm, r1.DureeParcage,
               r2.AgentsConnectes, r2.DureeConnexion, r2.DureeTraitementAgents, r2.DureePostTraitement
        FROM (
            SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe,
                   a.id_file_attente, f.libelle_file_attente, a.id_date, g.tranche,
                   g.id_granularite_de_periode, g.granularite,
                   sum(Nb_Contacts_Traites) as ContactsTraites,
                   sum(Nb_Contacts_en_Parcage) as ContactsenParcage,
                   sum(Nb_Contacts_en_Communication) as ContactsenComm,
                   sum(Duree_Traitement / 1000) as DureeTraitementContacts,
                   sum(Duree_Communication / 1000 + Duree_Conference / 1000 + Duree_Com_Interagent / 1000) as DureeComm,
                   sum(Duree_Parcage / 1000) as DureeParcage
            FROM agr_synthese_activite_media_fa_agent a, centre_contact cc, direction_contact dc,
                 granularite_de_periode g, media m, file_attente f
            WHERE m.id_media = a.id_media
              AND cc.id_centre_contact = a.id_centre_contact
              AND a.id_direction_contact = dc.id_direction_contact
              AND dc.direction_contact = 'INCOMING'
              AND a.id_file_attente = f.id_file_attente
              AND m.media = 'PHONE'
              AND ( (g.valeur_min = date_format(a.id_date, '%d/%m') and g.granularite = 'Jour')
                 or (g.granularite = 'Heure' and a.id_th_heure = g.id_granularite_de_periode) )
            GROUP BY cc.id_centre_contact, a.id_equipe, a.id_file_attente, a.id_date,
                     g.tranche, g.id_granularite_de_periode
        ) r1,
        (
            (SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe,
                    a.id_date, g.tranche, g.id_granularite_de_periode, g.granularite,
                    count(distinct a.id_agent) as AgentsConnectes,
                    sum(Duree_Connexion / 1000) as DureeConnexion,
                    sum(Duree_en_Traitement / 1000) as DureeTraitementAgents,
                    sum(Duree_en_PostTraitement / 1000) as DureePostTraitement
             FROM activite_agent a, centre_contact cc, granularite_de_periode g
             WHERE (g.valeur_min = date_format(a.id_date, '%d/%m') and g.granularite = 'Jour')
               AND cc.id_centre_contact = a.id_centre_contact
             GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode)
            UNION
            (SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe,
                    a.id_date, g.tranche, g.id_granularite_de_periode, g.granularite,
                    count(distinct a.id_agent) as AgentsConnectes,
                    sum(Duree_Connexion / 1000) as DureeConnexion,
                    sum(Duree_en_Traitement / 1000) as DureeTraitementAgents,
                    sum(Duree_en_PostTraitement / 1000) as DureePostTraitement
             FROM activite_agent a, centre_contact cc, granularite_de_periode g
             WHERE (g.granularite = 'Heure' AND a.id_th_heure = g.id_granularite_de_periode)
               AND cc.id_centre_contact = a.id_centre_contact
             GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode)
        ) r2
        WHERE r1.id_centre_contact = r2.id_centre_contact
          AND r1.id_equipe = r2.id_equipe
          AND r1.id_date = r2.id_date
          AND r1.tranche = r2.tranche
          AND r1.id_granularite_de_periode = r2.id_granularite_de_periode
        GROUP BY r1.id_centre_contact, r1.id_equipe, r1.id_file_attente, r1.id_date,
                 r1.tranche, r1.id_granularite_de_periode
        ORDER BY r1.code_centre, r1.libelle_centre, r1.equipe, r1.libelle_file_attente,
                 r1.id_date, r1.id_granularite_de_periode, r1.tranche;

    The EXPLAIN shows (columns: id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra):

        '1', 'PRIMARY', '<derived3>', 'ALL', NULL, NULL, NULL, NULL, '2520', 'Using temporary; Using filesort'
        '1', 'PRIMARY', '<derived2>', 'ALL', NULL, NULL, NULL, NULL, '4378', 'Using where; Using join buffer'
        '3', 'DERIVED', 'a', 'ALL', 'fk_Activite_Agent_centre_contact', NULL, NULL, NULL, '83433', 'Using temporary; Using filesort'
        '3', 'DERIVED', 'g', 'ref', 'Index_granularite,Index_Valeur_min', 'Index_Valeur_min', '23', 'func', '1', 'Using where'
        '3', 'DERIVED', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer'
        '4', 'UNION', 'g', 'ref', 'PRIMARY,Index_granularite', 'Index_granularite', '23', '', '24', 'Using where; Using temporary; Using filesort'
        '4', 'UNION', 'a', 'ref', 'fk_Activite_Agent_centre_contact,fk_activite_agent_TH_heure', 'fk_activite_agent_TH_heure', '5', 'reporting_acd.g.Id_Granularite_de_periode', '2979', 'Using where'
        '4', 'UNION', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer'
        NULL, 'UNION RESULT', '<union3,4>', 'ALL', NULL, NULL, NULL, NULL, NULL, ''
        '2', 'DERIVED', 'g', 'range', 'PRIMARY,Index_granularite,Index_Valeur_min', 'Index_granularite', '23', NULL, '389', 'Using where; Using temporary; Using filesort'
        '2', 'DERIVED', 'a', 'ALL', 'fk_agr_synthese_activite_media_fa_agent_centre_contact,fk_agr_synthese_activite_media_fa_agent_direction_contact,fk_agr_synthese_activite_media_fa_agent_file_attente,fk_agr_synthese_activite_media_fa_agent_media,fk_agr_synthese_activite_media_fa_agent_th_heure', NULL, NULL, NULL, '20903', 'Using where; Using join buffer'
        '2', 'DERIVED', 'cc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Centre_Contact', '1', ''
        '2', 'DERIVED', 'f', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_File_Attente', '1', ''
        '2', 'DERIVED', 'dc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Direction_Contact', '1', 'Using where'
        '2', 'DERIVED', 'm', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Media', '1', 'Using where'

    I don't understand it very well, but I think the problem is that it does full scans. I changed all the sub-queries to views (CREATE VIEW ... AS SELECT sub-query) and the result is the same. Thanks for any advice.
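    The plan shows the two derived tables being joined with no usable index ('Using join buffer'), which is expected: on older MySQL, derived tables and plain views are re-executed per query and never indexed, so converting the sub-queries to views changes nothing. A sketch of one common fix, materializing each sub-query into an indexed temporary table first (the view name and index name here are hypothetical):

        -- Run the r1 sub-query once into a real (temporary) table ...
        CREATE TEMPORARY TABLE r1_tmp AS
        SELECT * FROM v_r1;  -- v_r1: a view holding the first sub-query

        -- ... then index the columns the final WHERE clause joins on.
        ALTER TABLE r1_tmp
            ADD INDEX idx_r1_join (id_centre_contact, id_equipe, id_date,
                                   tranche, id_granularite_de_periode);

    Doing the same for r2 lets the final join probe idx_r1_join instead of buffering full scans.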

  • C# - Rollback SqlTransaction in catch block - Problem with object accessibility

    - by Marks
    Hi there. I've got a problem, and all the articles and examples I've found seem not to care about it. I want to do some database actions in a transaction. What I want to do is very similar to most examples:

        using (SqlConnection Conn = new SqlConnection(_ConnectionString))
        {
            try
            {
                Conn.Open();
                SqlTransaction Trans = Conn.BeginTransaction();
                using (SqlCommand Com = new SqlCommand(ComText, Conn))
                {
                    /* DB work */
                }
            }
            catch (Exception Ex)
            {
                Trans.Rollback();
                return -1;
            }
        }

    But the problem is that the SqlTransaction Trans is declared inside the try block, so it is not accessible in the catch block. Most examples just call Conn.Open() and Conn.BeginTransaction() before the try block, but I think that's a bit risky, since both can throw multiple exceptions. Am I wrong, or do most people just ignore this risk? What's the best solution to be able to roll back if an exception happens? Thanks in advance, Marks

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, with reflection for this (get schema from table)
    - database reflection: get a list of table columns, their database storage types, and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop that enumerates a selection from another table, without having to fetch all records of the enumerated table first

    The purpose is to manipulate data with unknown structure at the time of writing the code, which is the opposite of object mapping, where the structure (or most of it) is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?

  • deadlocks in Client Server Application

    - by fakhrad
    Hi (excuse my English). I'm a .NET programmer. Recently I wrote a client-server application that uses System.Net.Sockets for connections and .NET Remoting for communication. When the number of clients increases (e.g. up to 100), the server application sometimes freezes and comes back after several minutes. I use SQL Server 2005 with pooling and a timeout. Please help me!

  • How to update a table with a list of values at a time?

    - by VJ
    I have:

        update NewLeaderBoards
        set MonthlyRank = (Select RowNumber() from LeaderBoards)

    I tried it this way:

        (Select RowNumber() from LeaderBoards) as NewRanks
        update NewLeaderBoards
        set MonthlyRank = NewRanks

    But it doesn't work for me. Can anyone suggest how I can perform an update in such a way?
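    ROW_NUMBER() needs an OVER clause, and the update needs a join key so each rank lands on the right row. A sketch of the usual SQL Server pattern, assuming hypothetical PlayerID and Score columns since the real schema isn't shown:

        -- Rank the source rows once, then join the ranks across by key.
        WITH Ranked AS (
            SELECT PlayerID,
                   ROW_NUMBER() OVER (ORDER BY Score DESC) AS NewRank
            FROM LeaderBoards
        )
        UPDATE nl
        SET nl.MonthlyRank = r.NewRank
        FROM NewLeaderBoards nl
        JOIN Ranked r ON r.PlayerID = nl.PlayerID;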

  • Select a row having a column with max value - On a date range

    - by Abhi
    Excuse me for posting a similar question. Please consider this:

        date             value
        18/5/2010, 1 pm  40
        18/5/2010, 2 pm  20
        18/5/2010, 3 pm  60
        18/5/2010, 4 pm  30
        18/5/2010, 5 pm  60
        18/5/2010, 6 pm  25
        19/5/2010, 6 pm  300
        19/5/2010, 6 pm  450
        19/5/2010, 6 pm  375
        20/5/2010, 6 pm  250
        20/5/2010, 6 pm  310

    The query is to get the date and value for each day such that the value obtained for that day is the max. If the max value is repeated on that day, the lowest timestamp is selected. The result should be like:

        18/5/2010, 3 pm  60
        19/5/2010, 6 pm  450
        20/5/2010, 6 pm  310

    The query should take in a date range like the one given below and find results for that range in the above fashion:

        where date >= to_date('26/03/2010','DD/MM/YYYY') AND date < to_date('27/03/2010','DD/MM/YYYY')
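    With analytic functions (the to_date() calls suggest Oracle) this takes one pass: rank the rows within each calendar day by value descending and timestamp ascending, then keep rank 1. A sketch assuming a hypothetical table readings with the date column named dt, since DATE is a reserved word:

        SELECT dt, value
        FROM (
            SELECT dt, value,
                   -- Within each day: highest value first, earliest time breaks ties.
                   ROW_NUMBER() OVER (PARTITION BY trunc(dt)
                                      ORDER BY value DESC, dt ASC) AS rn
            FROM readings
            WHERE dt >= to_date('26/03/2010', 'DD/MM/YYYY')
              AND dt <  to_date('27/03/2010', 'DD/MM/YYYY')
        )
        WHERE rn = 1;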

  • Custom SQL function for NHibernate dialect

    - by Kristoffer Ahl
    I want to be able to call a custom function called "recent_date" as part of my HQL, like this:

        [Date] >= recent_date()

    I created a new dialect, inheriting from MsSql2000Dialect, and specified the dialect for my configuration:

        public class NordicMsSql2000Dialect : MsSql2000Dialect
        {
            public NordicMsSql2000Dialect()
            {
                RegisterFunction(
                    "recent_date",
                    new SQLFunctionTemplate(
                        NHibernateUtil.Date,
                        "dateadd(day, -15, getdate())"
                    )
                );
            }
        }

        var configuration = Fluently.Configure()
            .Database(
                MsSqlConfiguration.MsSql2000
                    .ConnectionString(c => .... )
                    .Cache(c => c.UseQueryCache().ProviderClass<HashtableCacheProvider>())
                    .Dialect<NordicMsSql2000Dialect>()
            )
            .Mappings(m => ....)
            .BuildConfiguration();

    When calling recent_date() I get the following error:

        System.Data.SqlClient.SqlException: 'recent_date' is not a recognized function name

    I'm using it in a where statement for a HasMany mapping like below:

        HasMany(x => x.RecentValues)
            .Access.CamelCaseField(Prefix.Underscore)
            .Cascade.SaveUpdate()
            .Where("Date >= recent_date()");

    What am I missing here?

  • Meaning of tA tC and tP

    - by Greg Wiley
    If this is a duplicate I apologize, but looking for two-letter strings is quite hard in any search. I'm looking for the meaning of tA, tC and tP in the context of a MySQL query. And in the spirit of "teaching a man to fish", it would be great if you could point me in the right direction of where to find this info in the future. Edit: the query:

        $wpdb->get_row($wpdb->prepare(
            "SELECT tA.*
             FROM ".AMYLITE_ADS." tA, ".AMYLITE_ADS_CAMPAIGNS." tC, ".AMYLITE_PACKAGES." tP
             WHERE tA.id = tC.ad_id
               AND tC.campaign_id = tP.campaign_id
               AND tP.zone_id = %d
               AND tP.date_end > CURDATE()
             GROUP BY tA.id
             ORDER BY RAND()",
            $zone->id
        ));
