Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.


  • ADODB .Find question

    - by every_answer_gets_a_point
    I am using Excel to connect to a MySQL database. I am doing this:
      rs.Find "rowid='105'"
      If Not rs.EOF Then cn.Execute "delete from batchinfo where rowid='105'"
    and it works well. However, I need to be able to match data on multiple columns, for example like this:
      rs.Find "rowid='105'" and "something='sometext'" and "somethingelse='moretext'"
    I need to know whether or not rs.Find matched ALL of the data. How can I do this? According to this article I can't: http://articles.techrepublic.com.com/5100-10878_11-1045830.html# However, perhaps there is a way I can run rs.Execute "some select statement"? Can someone help with this? Would the following do the trick, after which I would check EOF?
      rs.Filter "LastName='Adams' and FirstName='Lamont'"
    [See the SQL sketch after this entry.]

    Read the article
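
    Since ADO's Find only accepts a single-column criterion, one option is to push the multi-column check to the server instead. A minimal SQL sketch under that assumption (table and column names are taken from the examples in the question):

      SELECT COUNT(*) FROM batchinfo
      WHERE rowid = '105' AND something = 'sometext' AND somethingelse = 'moretext';
      -- a count of 1 means every criterion matched; the delete can then reuse the same predicate
      DELETE FROM batchinfo
      WHERE rowid = '105' AND something = 'sometext' AND somethingelse = 'moretext';

    Alternatively, rs.Filter "rowid='105' and something='sometext' and somethingelse='moretext'" followed by an EOF check works client-side, since Filter (unlike Find) accepts several clauses joined with AND in one criteria string.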

  • How to apply GROUP_CONCAT in a MySQL query

    - by Query Master
    How do I apply GROUP_CONCAT to this query? If you have any ideas or an alternative solution, please share them; help is definitely appreciated (see the query and the required result below). Query:
      SELECT WEEK(cpd.added_date) AS week_no, COUNT(cpd.result) AS death_count
      FROM cron_players_data cpd
      WHERE cpd.player_id = 81 AND cpd.result = 2 AND cpd.status = 1
      GROUP BY WEEK(cpd.added_date);
    The query output is shown in the result screen (screenshot). Required result: 23,24,25 AS week_no and 2,3,1 AS death_count. [See the SQL sketch after this entry.]

    Read the article
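
    A minimal sketch of how GROUP_CONCAT could collapse the weekly rows into the two comma-separated lists asked for (table and column names are taken from the question):

      SELECT GROUP_CONCAT(week_no ORDER BY week_no)     AS week_no,      -- e.g. 23,24,25
             GROUP_CONCAT(death_count ORDER BY week_no) AS death_count   -- e.g. 2,3,1
      FROM (
          SELECT WEEK(cpd.added_date) AS week_no,
                 COUNT(cpd.result)    AS death_count
          FROM cron_players_data cpd
          WHERE cpd.player_id = 81 AND cpd.result = 2 AND cpd.status = 1
          GROUP BY WEEK(cpd.added_date)
      ) AS weekly;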

  • Why is SQLite3 using covering indices instead of the indices I created?

    - by Geoff
    I have an extremely large database (contacts has ~3 billion entries, people has ~280 million entries, and the other tables have a negligible number of entries). Most other queries I've run are really fast. However, I've encountered a more complicated query that's really slow. I'm wondering if there's any way to speed this up. First of all, here is my schema:
      CREATE TABLE activities (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
      CREATE TABLE contacts (
        id INTEGER PRIMARY KEY,
        person1_id INTEGER NOT NULL,
        person2_id INTEGER NOT NULL,
        duration REAL NOT NULL, -- hours
        activity_id INTEGER NOT NULL
        -- FOREIGN_KEY(person1_id) REFERENCES people(id),
        -- FOREIGN_KEY(person2_id) REFERENCES people(id)
      );
      CREATE TABLE people (
        id INTEGER PRIMARY KEY,
        state_id INTEGER NOT NULL,
        county_id INTEGER NOT NULL,
        age INTEGER NOT NULL,
        gender TEXT NOT NULL, -- M or F
        income INTEGER NOT NULL
        -- FOREIGN_KEY(state_id) REFERENCES states(id)
      );
      CREATE TABLE states (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        abbreviation TEXT NOT NULL
      );
      CREATE INDEX activities_name_index on activities(name);
      CREATE INDEX contacts_activity_id_index on contacts(activity_id);
      CREATE INDEX contacts_duration_index on contacts(duration);
      CREATE INDEX contacts_person1_id_index on contacts(person1_id);
      CREATE INDEX contacts_person2_id_index on contacts(person2_id);
      CREATE INDEX people_age_index on people(age);
      CREATE INDEX people_county_id_index on people(county_id);
      CREATE INDEX people_gender_index on people(gender);
      CREATE INDEX people_income_index on people(income);
      CREATE INDEX people_state_id_index on people(state_id);
      CREATE INDEX states_abbreviation_index on states(abbreviation);
      CREATE INDEX states_name_index on states(name);
    Note that I've created an index on every column in the database. I don't care about the size of the database; speed is all I care about. Here's an example of a query that, as expected, runs almost instantly:
      SELECT count(*) FROM people, states WHERE people.state_id=states.id and states.abbreviation='IA';
    Here's the troublesome query:
      SELECT * FROM contacts WHERE rowid IN
        (SELECT contacts.rowid FROM contacts, people, states
         WHERE contacts.person1_id=people.id AND people.state_id=states.id AND states.name='Kansas'
         INTERSECT
         SELECT contacts.rowid FROM contacts, people, states
         WHERE contacts.person2_id=people.id AND people.state_id=states.id AND states.name='Missouri');
    Now, what I think would happen is that each subquery would use each relevant index I've created to speed this up. However, when I show the query plan, I see this:
      sqlite> EXPLAIN QUERY PLAN SELECT * FROM contacts WHERE rowid IN (SELECT contacts.rowid FROM contacts, people, states WHERE contacts.person1_id=people.id AND people.state_id=states.id AND states.name='Kansas' INTERSECT SELECT contacts.rowid FROM contacts, people, states WHERE contacts.person2_id=people.id AND people.state_id=states.id AND states.name='Missouri');
      0|0|0|SEARCH TABLE contacts USING INTEGER PRIMARY KEY (rowid=?) (~25 rows)
      0|0|0|EXECUTE LIST SUBQUERY 1
      2|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
      2|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
      2|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person1_id_index (person1_id=?) (~12 rows)
      3|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
      3|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
      3|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person2_id_index (person2_id=?) (~12 rows)
      1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (INTERSECT)
    In fact, if I show the query plan for the first query I posted, I get this:
      sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM people, states WHERE people.state_id=states.id and states.abbreviation='IA';
      0|0|1|SEARCH TABLE states USING COVERING INDEX states_abbreviation_index (abbreviation=?) (~1 rows)
      0|1|0|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
    Why is SQLite using covering indices instead of the indices I created? Shouldn't the search in the people table be able to happen in log(n) time given state_id, which in turn is found in log(n) time? [See the SQL sketch after this entry.]

    Read the article
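
    A note on the plan above: "COVERING INDEX" does not mean SQLite ignored the indexes created in the schema. It is using exactly those indexes, and "covering" only means the subquery could be answered from the index alone without touching the table rows. The cost comes from joining the ~5.5 million people per state against contacts. A hedged sketch of an equivalent rewrite that drops the IN/INTERSECT (and its temp B-tree) in favour of plain joins, which may let SQLite pick a cheaper join order:

      SELECT c.*
      FROM contacts AS c
      JOIN people AS p1 ON p1.id = c.person1_id
      JOIN states AS s1 ON s1.id = p1.state_id AND s1.name = 'Kansas'
      JOIN people AS p2 ON p2.id = c.person2_id
      JOIN states AS s2 ON s2.id = p2.state_id AND s2.name = 'Missouri';

    This is equivalent because person1_id and person2_id each match at most one people row and each person has one state. Whether it is actually faster depends on the data; with ~3 billion contacts, checking it with EXPLAIN QUERY PLAN first would be prudent.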

  • Free tool to watch database for changes?

    - by 01
    I'm looking for a tool that can watch a database (MySQL and Oracle) for changes. When someone inserts or updates something in any table, I want to know about it. I know it can be done using triggers (http://stackoverflow.com/questions/167254/watching-a-table-for-change-in-mysql), but I'm more interested in a free tool that can do it.

    Read the article

  • MySQL: count rows and group them by month

    - by user2661296
    I have a table called cc_calls containing many call records. I want to count them and group them by month. I have a timestamp column called starttime that I can use to extract the month, and I also want to limit the count to 12 months. The results should look like this:
      Month      Count
      January    768768
      February   876786
      March      987979
      April      765765
      May        898797
      June       876876
      July       786575
      August     765765
      September  689787
      October    765879
      November   897989
      December   876876
    Can anyone guide me or show me the MySQL query I need to get this result? [See the SQL sketch after this entry.]

    Read the article
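
    A minimal sketch, assuming starttime is a DATETIME or TIMESTAMP column. Grouping by year as well as month keeps months from different years apart, and the WHERE clause limits the count to the last 12 months:

      SELECT MONTHNAME(MIN(starttime)) AS Month,
             COUNT(*)                  AS Count
      FROM cc_calls
      WHERE starttime >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
      GROUP BY YEAR(starttime), MONTH(starttime)
      ORDER BY MIN(starttime);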

  • MySQL query for initial filling of order column

    - by Sejanus
    Sorry for the vague question title. I've got a table containing a huge list of, say, products belonging to different categories. A foreign key column indicates which category a particular product belongs to; i.e. in the "bananas" row the category might be 3, which indicates "fruits". Now I have added an additional column, "order", which is for the display order within that particular category. I need to do the initial ordering. Since the list is big, I don't want to change every row by hand. Is it possible to do this with one or two queries? I don't care what the initial order is as long as it starts with 1 and goes up. I can't do something like SET order = id because id counts from 1 up regardless of product category, and order must start anew from 1 for every different category. [See the SQL sketch after this entry.]

    Read the article
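
    A minimal sketch using MySQL user variables; the table name products and the column names category, id and `order` are assumptions modelled on the question. It renumbers rows per category in id order, and because it relies on user-variable evaluation order it is worth verifying on a copy of the table first:

      SET @pos := 0, @cat := NULL;
      UPDATE products
      SET `order`  = (@pos := IF(@cat = category, @pos + 1, 1)),  -- restart at 1 on each new category
          category = (@cat := category)                           -- no-op assignment that advances @cat
      ORDER BY category, id;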

  • How to retrieve an identity column value after insert using LINQ

    - by Hobey
    Could any of you please show me how to complete the following task?
      // Prepare object to be saved
      // Note that MasterTable has MasterTableId as a primary key and it is an identity column
      MasterTable masterTable = new MasterTable();
      masterTable.Column1 = "Column 1 Value";
      masterTable.Column2 = 111;
      // Instantiate DataContext
      DataContext myDataContext = new DataContext("<<ConnectionStrin>>");
      // Save the record
      myDataContext.MasterTables.InsertOnSubmit(masterTable);
      myDataContext.SubmitChanges();
      // ?QUESTION?
      // Now I need to retrieve the value of MasterTableId for the record just inserted above.
    Kind Regards [See the SQL sketch after this entry.]

    Read the article
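
    When the column is mapped as database-generated, LINQ to SQL writes the new key back onto the entity, so after SubmitChanges() the value should simply be in masterTable.MasterTableId. For reference, a hedged sketch of what that round trip looks like at the T-SQL level (column names are taken from the question):

      INSERT INTO MasterTable (Column1, Column2)
      OUTPUT INSERTED.MasterTableId              -- returns the new identity value with the insert
      VALUES ('Column 1 Value', 111);

      -- or, immediately after a plain INSERT on the same connection and scope:
      SELECT SCOPE_IDENTITY() AS MasterTableId;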

  • Change column names of a cube action as they appear in Visual Studio

    - by hermann
    The title pretty much says it all. I have a cube with data in it, and I have yet to find a way to change the column names. They appear in a very ugly manner, like [cubeName].[$dimension.columnName]. I have tried everything I know and anything I found on the web, but nothing seems to be working. What I tried in most cases was to create an Action in the Actions tab and write some MDX in there. No results whatsoever; it is as if the action is never run. Does anyone know how to do this? I've spent about 3 days trying to figure it out. Thank you.

    Read the article

  • Storing datetime in database?

    - by Curtis White
    I'm working on a blog and want to show my posts in the Eastern time zone. I figured that storing everything in UTC would be the proper way. This creates a few challenges, though. First, I have to convert all times from UTC to Eastern; that is not a biggie, but it adds a lot of code. The biggie is that I reference posts by passing a short date in a query, a la Blogger. The problem is that there is no way to convert the short date to the proper UTC date because I'm lacking the posted-time info. Hmm, is there any problem with just storing all dates in Eastern time? This would certainly make things easier for the rest of the application, but if I ever needed to change time zones, everything would be stored wrong.

    Read the article

  • Introduce a join to this query: is it possible?

    - by Iain Urquhart
    I'm trying to introduce a join to this query:
      SELECT `n`.*,
             round((`n`.`rgt` - `n`.`lft` - 1) / 2, 0) AS childs,
             count(*) - 1 + (`n`.`lft` > 1) + 1 AS level,
             ((min(`p`.`rgt`) - `n`.`rgt` - (`n`.`lft` > 1)) / 2) > 0 AS lower,
             (((`n`.`lft` - max(`p`.`lft`) > 1))) AS upper
      FROM `exp_node_tree_6` `n`, `exp_node_tree_6` `p`, `exp_node_tree_6`
      WHERE `n`.`lft` BETWEEN `p`.`lft` AND `p`.`rgt`
        AND ( `p`.`node_id` != `n`.`node_id` OR `n`.`lft` = 1 )
      GROUP BY `n`.`node_id`
      ORDER BY `n`.`lft`
    by adding LEFT JOIN `exp_channel_titles` ON (`n`.`entry_id`=`exp_channel_titles`.`entry_id`) after the FROM clause. But when I introduce it, it fails with "Unknown column 'n.entry_id' in 'on clause'". Is it even possible to add a join to this query? Can anybody help? Thanks! [See the SQL sketch after this entry.]

    Read the article
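
    The error is most likely a join-precedence issue rather than a missing column: since MySQL 5.0.12 the comma operator binds more loosely than JOIN, so the ON clause of an appended LEFT JOIN can only see the table immediately to its left, not `n`. A hedged sketch of one fix (assuming `exp_node_tree_6` really does have an entry_id column): wrap the comma-joined tables in parentheses so the alias `n` is visible to the ON clause.

      SELECT ...                                    -- select list and the rest of the query unchanged
      FROM (`exp_node_tree_6` `n`, `exp_node_tree_6` `p`, `exp_node_tree_6`)
      LEFT JOIN `exp_channel_titles`
             ON `n`.`entry_id` = `exp_channel_titles`.`entry_id`
      WHERE `n`.`lft` BETWEEN `p`.`lft` AND `p`.`rgt`
        AND ( `p`.`node_id` != `n`.`node_id` OR `n`.`lft` = 1 )
      GROUP BY `n`.`node_id`
      ORDER BY `n`.`lft`

    Rewriting the comma joins as explicit INNER JOINs would achieve the same thing and is generally easier to read.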

  • Indexes and multi column primary keys

    - by David Jenings
    I went searching and didn't find the answer to this specific noob question. My apologies if I missed it. In a MySQL database I have a table with the following primary key: PRIMARY KEY id (invoice, item). In my application I will also frequently be selecting on "item" by itself and less frequently on only "invoice". I'm assuming I would benefit from indexes on these columns. MySQL does not complain when I define the following: INDEX (invoice), INDEX (item), PRIMARY KEY id (invoice, item). But I don't see any evidence (using DESCRIBE, the only way I know how to look) that separate indexes have been established for these two columns. So the question is: are the columns that make up a primary key automatically indexed individually? Also, is there a better way than DESCRIBE to explore the structure of my table? [See the SQL sketch after this entry.]

    Read the article
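
    A short sketch of the usual way to check (the table name mytable is a placeholder). A composite primary key acts as a leftmost-prefix index, so it already serves queries filtering on invoice alone; queries on item alone generally need their own index:

      SHOW INDEX FROM mytable;        -- lists every index, including the composite PRIMARY key
      SHOW CREATE TABLE mytable;      -- shows the full DDL, indexes included

      -- the (invoice, item) primary key covers lookups by invoice (its leftmost column),
      -- so only item needs a separate index:
      ALTER TABLE mytable ADD INDEX idx_item (item);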

  • Why am I unable to create a trigger using my SqlCommand?

    - by acidzombie24
    The line cmd.ExecuteNonQuery(); throws. cmd.CommandText is:
      CREATE TRIGGER subscription_trig_0 ON subscription
      AFTER INSERT
      AS UPDATE user_data SET msg_count=msg_count+1
      FROM user_data JOIN INSERTED ON user_data.id = INSERTED.recipient;
    The exception: "Incorrect syntax near the keyword 'TRIGGER'." Then, using VS 2010 connected to the very same file (an mdf file), I run the query above and I get a success message. WTF!

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert, and there are surely features of MySQL that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider explain select * from atable where class = somefunction(...) where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?

    Read the article

  • Help! How do I get the total number of rows from my MSSQL paging procedure?

    - by The_AlienCoder
    OK, I have a table in my MSSQL database that stores comments. I want to be able to page through the records using [Back], [Next], page-number, and [Last] buttons in my DataList. I figured the most efficient way was to use a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with:
      @PageIndex INT, @PageSize INT, @postid int
      AS
      SET NOCOUNT ON
      begin
      WITH tmp AS (
          SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
          FROM comments
          WHERE (comments.postid = @postid))
      SELECT tmp.* FROM tmp
      WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex*@PageSize
      end
      RETURN
    Everything works fine and I have been able to implement the [Next] and [Back] buttons in my DataList pager. Now I need the total number of all comments (not just those on the current page) so that I can implement my page numbers and the [Last] button. In other words, I want to return the total number of rows matched by my first SELECT statement, i.e.
      WITH tmp AS (
          SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
          FROM comments
          WHERE (comments.postid = @postid))
      set @TotalRows = @@rowcount
    @@rowcount doesn't work and raises an error. I also can't get count(*) to work. Is there another way to get the total number of rows, or is my approach doomed? [See the SQL sketch after this entry.]

    Read the article
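
    A hedged sketch of one common fix: add a windowed COUNT(*) OVER () inside the CTE so every returned row carries the total, avoiding a second query (names are taken from the procedure above; requires SQL Server 2005 or later):

      WITH tmp AS (
          SELECT comments.*,
                 ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                 COUNT(*) OVER ()                            AS TotalRows  -- total matching comments
          FROM comments
          WHERE comments.postid = @postid
      )
      SELECT tmp.*
      FROM tmp
      WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;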

  • When calling CRUD check if "parent" exists with read or join?

    - by Trick
    All my entities can not be deleted, only deactivated, so they don't appear in any read methods (SELECT ... WHERE active=TRUE). Now I have some 1:M tables on these entities on which all CRUD operations can be executed. Which is more efficient, or has better performance? My first solution: add to all CRUD operations UPDATE ... JOIN entity e ... WHERE e.active=TRUE. My second solution: before all CRUD operations, check whether the entity is active: if (getEntity(someId) != null) { //do some CRUD } where getEntity is just SELECT * FROM entity WHERE id=? AND active=TRUE. Or is there any other solution or recommendation? [See the SQL sketch after this entry.]

    Read the article
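
    A minimal sketch of the join variant in MySQL-style multi-table UPDATE syntax (the child table, its columns, and the values are placeholders modelled on the question). Folding the active check into the write means there is no gap between checking the parent and modifying the child, and it saves the extra round trip of the getEntity call:

      UPDATE child_table c
      JOIN entity e ON e.id = c.entity_id
      SET c.some_column = 'new value'
      WHERE c.id = 42
        AND e.active = TRUE;    -- silently becomes a no-op if the parent was deactivated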

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:
      SELECT *
      FROM SAMPLE SAMPLE
      INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
      INNER JOIN RESULT RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
      WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'
    The biggest table here is RESULT, which contains 11.1M records; the other two tables have about 1M. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300 ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by an index), then everything becomes fast in the first case too. This points to HDD I/O issues, but it doesn't explain the change of plan. So, any ideas? [See the SQL sketch after this entry.]

    Read the article
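
    The plan change is most likely the optimizer's cardinality estimate crossing a tipping point: for the wider date range it expects enough matching rows that one scan of RESULT's clustered index looks cheaper than many seeks, which is also why narrowing the select list to an indexed column flips the plan back. A hedged sketch of two things that often help in this situation (the index name is hypothetical):

      -- fresh statistics improve the row estimate for the date range
      UPDATE STATISTICS SAMPLE;

      -- a narrow index on the filtered column keeps the range seek on SAMPLE cheap
      CREATE NONCLUSTERED INDEX IX_SAMPLE_SAMPLED_DATE
          ON SAMPLE (SAMPLED_DATE)
          INCLUDE (SAMPLE_NUMBER);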

  • Loop Stored Procedure in VB.Net (win form)

    - by Mo
    Hello, I am trying to run a for loop for a backup system, and inside it I want to run a stored procedure in a loop. Below is the code that does not work for me. Any ideas, please?
      Dim TotalTables As Integer
      Dim i As Integer
      TotalTables = 10
      For i = 1 To TotalTables
          objDL.BackupTables(220, i, 001) ' (This is a method from the DL and the 3 parameters are integers)
      Next
    I tried the SP on its own and it works perfectly in SQL Server.

    Read the article

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling and found this article, which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist, without using a pair of queries (i.e., one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if the value is already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks. [See the SQL sketch after this entry.]

    Read the article
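
    With a unique constraint in place, a duplicate insert raises an error, which is presumably what makes the PHP script croak unless the error is handled. A minimal sketch of the usual single-query alternatives (table and column names are placeholders):

      ALTER TABLE mytable ADD UNIQUE KEY uniq_name (name);

      -- silently skips rows that would violate the unique key:
      INSERT IGNORE INTO mytable (name, created_at)
      VALUES ('some value', NOW());

      -- or turn the duplicate into an update instead of an error:
      INSERT INTO mytable (name, hits)
      VALUES ('some value', 1)
      ON DUPLICATE KEY UPDATE hits = hits + 1;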

  • MSSQL: Primary Key Schema Largely Guid but Sometimes Integer Types...

    - by Code Sherpa
    OK, this may be a silly question, but... I have inherited a project and am tasked with going over the primary key relationships. The project largely uses GUIDs. I say "largely" because there are examples where tables use integral types to reflect enumerations. For example, dbo.MessageFolder has MessageFolderId of type int to reflect
      public enum MessageFolderTypes { inbox = 1, sent = 2, trash = 3, etc... }
    This happens a lot. There are tables with primary keys of type int, which is unavoidable because of their reliance on enumerations, and tables with primary keys of type Guid, which reflect the primary key choice of the previous programmer. Should I care that the PK schema is spotty like this? It doesn't feel right, but does it really matter? If this could create a problem, how do I get around it? (I really can't move all PKs to type int without serious legwork, and I have never heard of enumerations that have GUID values.) Thanks.

    Read the article

  • I got an error when implementing TDE in SQL 2008

    - by mahima
    While using
      USE mssqltips_tde;
      CREATE DATABASE ENCRYPTION KEY
      with ALGORITHM = AES_256
      ENCRYPTION BY SERVER CERTIFICATE TDECert
      GO
    I am getting this error:
      Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'KEY'.
      Msg 319, Level 15, State 1, Line 3 Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
    Please help me resolve this, as I need to implement encryption on my DB. [See the SQL sketch after this entry.]

    Read the article
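
    CREATE DATABASE ENCRYPTION KEY only exists in SQL Server 2008 and later (Enterprise/Developer editions), and syntax errors on the 'KEY' and 'with' keywords usually suggest the statement is being parsed by an engine that does not know the 2008 TDE syntax, for example a SQL Server 2005 instance; checking SELECT @@VERSION would confirm. For reference, a hedged sketch of the full TDE sequence this statement belongs to (the certificate and database names are taken from the question, the password is a placeholder):

      USE master;
      CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
      CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
      GO
      USE mssqltips_tde;
      CREATE DATABASE ENCRYPTION KEY
      WITH ALGORITHM = AES_256
      ENCRYPTION BY SERVER CERTIFICATE TDECert;
      GO
      ALTER DATABASE mssqltips_tde SET ENCRYPTION ON;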

  • Disable animations when ending search in iPhone

    - by camilo
    Hi. A quick one: is there a way to dismiss the keyboard and the searchDisplayController without animation? I was able to do it when the user presses "Cancel", but when the user taps the black "window thingy" above the search field (only visible while the user hasn't entered any text), the animation always occurs, even when I change the delegate methods. Is there a way to control this, or, as an alternative, to stop the user from ending the search by tapping the black window? Thanks in advance.

    Read the article

  • DateTime in SqlServer VisualStudio C#

    - by menacheb
    Hi, I have a database in my project with a table named 'ProcessData' and columns named 'Start_At' (type: DateTime) and 'End_At' (type: DateTime). When I try to enter a new record into this table, it enters the data in the format 'YYYY/MM/DD HH:mm', when I actually want it in the format 'YYYY/MM/DD HH:mm:ss' (the seconds don't appear). Does anyone know why, and what I should do to fix this? Here is the code I am using:
      con = new SqlConnection("....");
      String startAt = "20100413 11:05:28";
      String endAt = "20100414 11:05:28";
      ...
      con.Open(); //open the connection, in order to get access to the database
      SqlCommand command = new SqlCommand("insert into ProcessData (Start_At, End_At) values('" + startAt + "','" + endAt + "')", con);
      command.ExecuteNonQuery(); //execute the 'insert' query.
      con.Close();
    Many thanks. [See the SQL sketch after this entry.]

    Read the article
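
    A DateTime column stores the full value, seconds included; 'YYYY/MM/DD HH:mm' is most likely just how the table grid in Visual Studio formats it for display. A quick SQL-level check, as a hedged sketch, that the seconds really are stored:

      SELECT Start_At,
             CONVERT(varchar(19), Start_At, 120) AS Start_At_Text,    -- style 120: yyyy-mm-dd hh:mi:ss
             DATEPART(second, Start_At)          AS Start_Seconds
      FROM ProcessData;

    As a side note, building the INSERT by string concatenation invites SQL injection; passing startAt and endAt as parameters would be safer, though that is a separate issue.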
