Search Results

Search found 27337 results on 1094 pages for 't sql'.


  • Changing MySQL password via Java

    - by Osama Abukmail
I'm trying to change a user's password on MySQL using Java. I successfully changed it in phpMyAdmin, but the same command doesn't work from Java. SET PASSWORD = PASSWORD('12345') changes the password of the currently logged-in user, and I have tried it from Java like this:

```java
statement = connect.createStatement();
statement.executeUpdate("SET PASSWORD = PASSWORD('12345')");
```

but nothing happened. I also tried this while logged in as root:

```java
statement = connect.createStatement();
statement.executeUpdate("SET PASSWORD FOR 'username'@'localhost' = PASSWORD('123456')");
```

and nothing works. Any help, please?
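
For reference, a sketch of the direct grant-table fallback that is sometimes tried when SET PASSWORD appears to have no effect. It assumes a pre-5.7 MySQL server (where mysql.user still has a Password column) and a connection with UPDATE rights on the mysql schema; the user and password values are placeholders:

```sql
-- Write the hash directly into the grant table...
UPDATE mysql.user
SET Password = PASSWORD('123456')
WHERE User = 'username' AND Host = 'localhost';

-- ...then force a reload, since direct updates (unlike SET PASSWORD)
-- do not take effect until the privilege tables are re-read.
FLUSH PRIVILEGES;
```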


  • While Loop in T-SQL with Sum totals

    - by RPS
I have the following T-SQL statement. I am trying to figure out how I can keep fetching the results 100 rows at a time, store the totals in variables (as I will have to add the totals after each select), continue selecting in a WHILE loop until no more records are found, and then return the accumulated totals to the calling function.

```sql
SELECT [OrderUser].OrderUserId,
       ISNULL(SUM(total.FileSize), 0),
       ISNULL(SUM(total.CompressedFileSize), 0)
FROM (
    SELECT DISTINCT TOP(100)
           ProductSize.OrderUserId,
           ProductSize.FileInfoId,
           CAST(ProductSize.FileSize AS BIGINT) AS FileSize,
           CAST(ProductSize.CompressedFileSize AS BIGINT) AS CompressedFileSize
    FROM ProductSize WITH (NOLOCK)
    INNER JOIN [Version] ON ProductSize.VersionId = [Version].VersionId
) AS total
RIGHT OUTER JOIN [OrderUser] WITH (NOLOCK)
    ON total.OrderUserId = [OrderUser].OrderUserId
WHERE NOT ([OrderUser].isCustomer = 1 AND [OrderUser].isEndOrderUser = 0
           OR [OrderUser].isLocation = 1)
  AND [OrderUser].OrderUserId = 1
GROUP BY [OrderUser].OrderUserId
```
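
A minimal sketch of one common batching pattern, assuming SQL Server 2008+ and that FileInfoId is unique so it can drive keyset pagination (the joins and WHERE conditions from the question are omitted for brevity):

```sql
DECLARE @TotalFileSize   BIGINT = 0,
        @TotalCompressed BIGINT = 0,
        @LastId          INT    = 0,
        @Rows            INT    = 1;

WHILE @Rows > 0
BEGIN
    ;WITH batch AS (
        SELECT TOP (100) FileInfoId,
               CAST(FileSize AS BIGINT)           AS FileSize,
               CAST(CompressedFileSize AS BIGINT) AS CompressedFileSize
        FROM ProductSize
        WHERE FileInfoId > @LastId          -- resume where the last batch ended
        ORDER BY FileInfoId
    )
    SELECT @TotalFileSize   += ISNULL(SUM(FileSize), 0),
           @TotalCompressed += ISNULL(SUM(CompressedFileSize), 0),
           @LastId           = ISNULL(MAX(FileInfoId), @LastId),
           @Rows             = COUNT(*)     -- 0 when nothing remains
    FROM batch;
END

SELECT @TotalFileSize AS TotalFileSize, @TotalCompressed AS TotalCompressed;
```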


  • MySQL Query to find consecutive available times of variable length

    - by Armaconn
I have an events table with user_id, date ('2013-10-01'), time ('04:15:00'), and status_id columns. What I am looking for is a solution similar to http://stackoverflow.com/questions/2665574/find-consecutive-rows-calculate-duration, but I need two additional components:

1) Take the date into consideration, so that 10/1/2013 at 11:00 PM through 10/2/2013 at 3:00 AM counts as one run. Feel free to just put in a fake date range (like '2013-10-01' to '2013-10-31').

2) Limit the output to runs of 4+ consecutive times (each event is 15 minutes and I want it to display minimum blocks of an hour, but I would also like to be able to switch this restriction to 1.5 hours or some other duration if possible).

SUMMARY - I'm looking for a query that provides the start and end times for sets of events that have the same user_id and status_id and form a continuous series based on date and time, and that lets me restrict the results by date range and minimum series duration. The output should have: user_id, date_start, time_start, date_end, time_end, status_id, duration.

```sql
CREATE TABLE `events` (
  `event_id` int(11) NOT NULL auto_increment COMMENT 'ID',
  `user_id` int(11) NOT NULL,
  `date` date NOT NULL,
  `time` time NOT NULL,
  `status_id` int(11) default NULL,
  PRIMARY KEY (`event_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1568;

INSERT INTO `events` VALUES(1, 101, '2013-08-14', '23:00:00', 2);
INSERT INTO `events` VALUES(2, 101, '2013-08-14', '23:15:00', 2);
INSERT INTO `events` VALUES(3, 101, '2013-08-14', '23:30:00', 2);
INSERT INTO `events` VALUES(4, 101, '2013-08-14', '23:45:00', 2);
INSERT INTO `events` VALUES(5, 101, '2013-08-15', '00:00:00', 2);
INSERT INTO `events` VALUES(6, 101, '2013-08-15', '00:15:00', 1);
INSERT INTO `events` VALUES(7, 500, '2013-08-14', '23:45:00', 1);
INSERT INTO `events` VALUES(8, 500, '2013-08-15', '00:00:00', 1);
INSERT INTO `events` VALUES(9, 500, '2013-08-15', '00:15:00', 2);
INSERT INTO `events` VALUES(10, 500, '2013-08-15', '00:30:00', 2);
INSERT INTO `events` VALUES(11, 500, '2013-08-15', '00:45:00', 1);
```

Desired output:

```
row | user_id | date_start   | time_start | date_end     | time_end   | status_id | duration
1   | 101     | '2013-08-14' | '23:00:00' | '2013-08-15' | '00:15:00' | 2         | 5
2   | 101     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:30:00' | 1         | 1
3   | 500     | '2013-08-14' | '00:23:45' | '2013-08-15' | '00:15:00' | 1         | 2
4   | 500     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:45:00' | 2         | 2
5   | 500     | '2013-08-15' | '00:00:45' | '2013-08-15' | '01:00:00' | 2         | 1
```

*except that rows 2 and 5 wouldn't appear if the duration had to be greater than 30 minutes.

Thanks for any help that you can provide! And please let me know if there is anything I can further clarify!!
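
A sketch of the gaps-and-islands approach, assuming MySQL 8+ for window functions (a 2013-era server would need the user-variable variant of the same idea). Rows are grouped into a run when the combined datetime advances in exact 15-minute steps for the same user_id and status_id:

```sql
SELECT user_id, status_id,
       MIN(ts) AS block_start,
       MAX(ts) + INTERVAL 15 MINUTE AS block_end,
       COUNT(*) * 15 AS minutes
FROM (
    SELECT user_id, status_id,
           TIMESTAMP(`date`, `time`) AS ts,
           ROW_NUMBER() OVER (PARTITION BY user_id, status_id
                              ORDER BY `date`, `time`) AS rn
    FROM events
    WHERE `date` BETWEEN '2013-08-01' AND '2013-08-31'
) numbered
-- Subtracting a 15-minutes-per-row ramp collapses each consecutive
-- run to a single constant value, which can then be grouped on.
GROUP BY user_id, status_id, ts - INTERVAL rn * 15 MINUTE
HAVING COUNT(*) >= 4              -- at least 1 hour; use 6 for 1.5 hours
ORDER BY user_id, block_start;
```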


  • Aggregate Functions in Index with IBM DB2

    - by Erkan
Is there any way to pre-aggregate the results of aggregate functions (e.g. count()) and store them in an index? The background: I want to speed up count() queries, so that

```sql
SELECT COUNT(users) FROM TE123 WHERE region = 'A';
```

would be supported by an index like

```
Region | Count(Users)
A      | 548
E      | 458
```

I know that MQTs would also help with this problem. However, in this case it is not possible to use an MQT, as we use a kind of ORM and we don't want to define entities on MQTs. I vaguely remember - one DBA told me - that such a feature was planned for DB2 V10.
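
For comparison only, a sketch of the MQT that the question rules out, since it is the standard DB2 answer to pre-aggregated counts (names follow the example above; the deferred refresh policy is a deliberate simplification):

```sql
CREATE TABLE region_counts AS
  (SELECT region, COUNT(users) AS user_count
   FROM TE123
   GROUP BY region)
DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- Populate / re-populate the summary on demand.
REFRESH TABLE region_counts;
```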


  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
My specific concern is the performance of a clustered index on a reference table that sees many rapid inserts and deletes.

```
Table 1 "Collection":               collection_pk int (among other fields)
Table 2 "Item":                     item_pk int (among other fields)
Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)
```

Because the primary key is composed of both pks, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections, and adding and removing items from those collections, very frequently, which constantly churns the "Collection_Items" table and its clustered index.

QUESTION: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?
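
For illustration, a T-SQL sketch of the alternative of keeping the composite key but declaring it nonclustered (whether the resulting heap actually performs better under this workload would need measuring):

```sql
CREATE TABLE Collection_Items (
    collection_pk int NOT NULL,
    item_pk       int NOT NULL,
    -- A nonclustered PK leaves the table a heap, so inserts and
    -- deletes no longer maintain a physical sort order.
    CONSTRAINT PK_Collection_Items
        PRIMARY KEY NONCLUSTERED (collection_pk, item_pk)
);
```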


  • Error using IIF in MS Access query

    - by naveen
I am trying to run this query in MS Access:

```sql
SELECT file_number,
       IIF(invoice_type='Spent on Coding', SUM(CINT(invoice_amount)), 0) AS CodingExpense
FROM invoice
GROUP BY file_number
```

I am getting this error:

```
Error in list of function arguments: '=' not recognized.
Unable to parse query text.
```

I tried replacing IIF with SWITCH, to no avail. What's wrong with my query, and how can I correct it?
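
A sketch of the usual rearrangement for conditional aggregates, assuming the intent is to total only the 'Spent on Coding' invoices per file: the IIF has to go inside the SUM, because invoice_type is neither aggregated nor in the GROUP BY:

```sql
SELECT file_number,
       SUM(IIF(invoice_type = 'Spent on Coding',
               CINT(invoice_amount), 0)) AS CodingExpense
FROM invoice
GROUP BY file_number;
```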


  • Query crashes MS Access

    - by user284651
THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take the 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

THE PROBLEM: Access seems unable to perform a query this complex, as it crashes any time I run the query.

ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

1. Export each table to CSV, import the CSVs into SQLite, and then write a query there to do what Access fails to do (merge 64 tables).
2. Export each table to CSV and write a script to read each one and merge the CSVs into a single CSV.
3. Somehow connect to the MS Access DB (API) and write a script to pull the data from each table and merge it into a CSV file.

QUESTION: What do you recommend?
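
For option 1, once the tables are in SQLite the merge itself reduces to a UNION ALL, sketched here with placeholder table and column names (extend the pattern to all 64 tables):

```sql
SELECT col1, col2, col3 FROM table01
UNION ALL
SELECT col1, col2, col3 FROM table02
UNION ALL
SELECT col1, col2, col3 FROM table03;
-- ...and so on for the remaining tables.
```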


  • join select from multiple row values?

    - by user1869132
Two tables:

1) product

```
id | Name | price
1  | p1   | 10
2  | p2   | 20
3  | p3   | 30
```

2) product_attributes

```
id | product_id | attribute_name | attribute_value
1  | 1          | size           | 10
2  | 1          | colour         | red
3  | 2          | size           | 20
```

I need to join these two tables, and in the WHERE clause I need to match attribute values from two different rows. Is it possible to get the result based on the values of two rows? Here, if size = 10 and colour = red, the output should be

```
1 | p1 | 10
```

It would be greatly helpful to get a query for this.
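
A sketch of the standard approach for this attribute-table shape: join product_attributes once per required attribute (table and column names are from the question):

```sql
SELECT p.id, p.Name, p.price
FROM product p
JOIN product_attributes size_attr          -- one join per attribute
  ON size_attr.product_id = p.id
 AND size_attr.attribute_name = 'size'
 AND size_attr.attribute_value = '10'
JOIN product_attributes colour_attr
  ON colour_attr.product_id = p.id
 AND colour_attr.attribute_name = 'colour'
 AND colour_attr.attribute_value = 'red';
```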


  • Selecting More Than 1 Table in A Single Query

    - by Kamran
I have 5 tables in MS Access: Logs [Title, ID, Date, Author], Tape [Title, ID, Date, Author], Maps [Title, ID, Date, Author], VCDs [Title, ID, Date, Author], Book [Title, ID, Date, Author]. I tried my level best with this code:

```sql
SELECT Logs.[Author], Tape.[Author], Maps.[Author], VCDs.[Author], Book.[Author]
FROM Logs, Tape, Maps, VCDs, Book
WHERE ((([Author] & " " & [Author] & " " & [Author] & " " & [Author] & " " & [Author])
        Like "*" & [Type the Title or Any Part of the Title and Press Ok] & "*"));
```

I want to search all of these tables in a single query. Suppose Adam has works in all the tables; when I put Adam in the search box, it should return results from all the tables. I know this could be done by using a single table or by renaming the fields, but that's not what's required. Please help.
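
A sketch of the UNION approach that usually fits this shape of problem in Access: one parameterized SELECT per table, glued together with UNION ALL. Access prompts for the bracketed parameter once and reuses it; this version searches Author (the parameter name is illustrative), and a Source column records which table each row came from:

```sql
SELECT 'Logs' AS Source, Title, ID, [Date], Author FROM Logs
WHERE Author Like "*" & [Type the Author and Press Ok] & "*"
UNION ALL
SELECT 'Tape', Title, ID, [Date], Author FROM Tape
WHERE Author Like "*" & [Type the Author and Press Ok] & "*"
UNION ALL
SELECT 'Maps', Title, ID, [Date], Author FROM Maps
WHERE Author Like "*" & [Type the Author and Press Ok] & "*"
UNION ALL
SELECT 'VCDs', Title, ID, [Date], Author FROM VCDs
WHERE Author Like "*" & [Type the Author and Press Ok] & "*"
UNION ALL
SELECT 'Book', Title, ID, [Date], Author FROM Book
WHERE Author Like "*" & [Type the Author and Press Ok] & "*";
```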


  • get n records at a time from a temporary table

    - by Claudiu
I have a temporary table with about 1 million entries; it stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up the queries so that I get the first 1000 rows, then the next 1000, and so on? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like:

```sql
CREATE TEMP TABLE tmptmp AS
SELECT ##autonumber somehow##, id
FROM .... --complicated query
```

Then I can do:

```sql
SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000
```

etc. How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
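
A sketch of the numbering idea in PostgreSQL, assuming version 8.4+ for row_number(); big_query is a placeholder standing in for the complicated query elided above:

```sql
-- Number the rows once, while materializing the temp table.
CREATE TEMP TABLE tmptmp AS
SELECT row_number() OVER (ORDER BY id) - 1 AS autonumber, id
FROM big_query;                     -- placeholder for the complicated query

-- Then fetch one 1000-row window at a time.
SELECT * FROM tmptmp
WHERE autonumber >= 0 AND autonumber < 1000;
```

(row_number() starts at 1; subtracting 1 gives the zero-based windows used in the question.)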


  • Transaction within IF THEN ELSE doesn't commit

    - by boris callens
In my T-SQL script I have an IF/ELSE structure that checks whether a column already exists; if not, it creates the column and then updates it.

```sql
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'tableName'
                 AND COLUMN_NAME = 'columnName')
BEGIN
    BEGIN TRANSACTION
    ALTER TABLE tableName ADD columnName int NULL
    COMMIT

    BEGIN TRANSACTION
    UPDATE tableName SET columnName = [something] FROM [subquery]
    COMMIT
END
```

This doesn't work because the column doesn't exist after the commit. Why doesn't the COMMIT commit?
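
A sketch of the usual workaround: the whole batch is parsed and compiled before any of it runs, so the UPDATE has to be deferred to execution time with dynamic SQL (the [something]/[subquery] placeholders are kept from the question):

```sql
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'tableName'
                 AND COLUMN_NAME = 'columnName')
BEGIN
    ALTER TABLE tableName ADD columnName int NULL;

    -- Compiled only when EXEC runs, by which time the column exists.
    EXEC('UPDATE tableName SET columnName = [something] FROM [subquery]');
END
```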


  • Why is SQLite3 using covering indices instead of the indices I created?

    - by Geoff
I have an extremely large database (contacts has ~3 billion entries, people has ~280 million entries, and the other tables have a negligible number of entries). Most other queries I've run are really fast. However, I've encountered a more complicated query that's really slow, and I'm wondering if there's any way to speed it up. First of all, here is my schema:

```sql
CREATE TABLE activities (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

CREATE TABLE contacts (
  id INTEGER PRIMARY KEY,
  person1_id INTEGER NOT NULL,
  person2_id INTEGER NOT NULL,
  duration REAL NOT NULL, -- hours
  activity_id INTEGER NOT NULL
  -- FOREIGN_KEY(person1_id) REFERENCES people(id),
  -- FOREIGN_KEY(person2_id) REFERENCES people(id)
);

CREATE TABLE people (
  id INTEGER PRIMARY KEY,
  state_id INTEGER NOT NULL,
  county_id INTEGER NOT NULL,
  age INTEGER NOT NULL,
  gender TEXT NOT NULL, -- M or F
  income INTEGER NOT NULL
  -- FOREIGN_KEY(state_id) REFERENCES states(id)
);

CREATE TABLE states (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  abbreviation TEXT NOT NULL
);

CREATE INDEX activities_name_index on activities(name);
CREATE INDEX contacts_activity_id_index on contacts(activity_id);
CREATE INDEX contacts_duration_index on contacts(duration);
CREATE INDEX contacts_person1_id_index on contacts(person1_id);
CREATE INDEX contacts_person2_id_index on contacts(person2_id);
CREATE INDEX people_age_index on people(age);
CREATE INDEX people_county_id_index on people(county_id);
CREATE INDEX people_gender_index on people(gender);
CREATE INDEX people_income_index on people(income);
CREATE INDEX people_state_id_index on people(state_id);
CREATE INDEX states_abbreviation_index on states(abbreviation);
CREATE INDEX states_name_index on states(name);
```

Note that I've created an index on every column in the database. I don't care about the size of the database; speed is all I care about.

Here's an example of a query that, as expected, runs almost instantly:

```sql
SELECT count(*) FROM people, states
WHERE people.state_id=states.id and states.abbreviation='IA';
```

Here's the troublesome query:

```sql
SELECT * FROM contacts
WHERE rowid IN (
  SELECT contacts.rowid FROM contacts, people, states
  WHERE contacts.person1_id=people.id
    AND people.state_id=states.id
    AND states.name='Kansas'
  INTERSECT
  SELECT contacts.rowid FROM contacts, people, states
  WHERE contacts.person2_id=people.id
    AND people.state_id=states.id
    AND states.name='Missouri');
```

Now, what I thought would happen is that each subquery would use each relevant index I've created to speed this up. However, when I show the query plan for that query, I see this:

```
0|0|0|SEARCH TABLE contacts USING INTEGER PRIMARY KEY (rowid=?) (~25 rows)
0|0|0|EXECUTE LIST SUBQUERY 1
2|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
2|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
2|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person1_id_index (person1_id=?) (~12 rows)
3|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
3|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
3|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person2_id_index (person2_id=?) (~12 rows)
1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (INTERSECT)
```

In fact, if I show the query plan for the first query I posted, I get this:

```
sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM people, states WHERE people.state_id=states.id and states.abbreviation='IA';
0|0|1|SEARCH TABLE states USING COVERING INDEX states_abbreviation_index (abbreviation=?) (~1 rows)
0|1|0|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
```

Why is SQLite using covering indices instead of the indices I created? Shouldn't the search in the people table be able to happen in log(n) time given state_id, which in turn is found in log(n) time?
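
For what it's worth, "covering" here just means each lookup was satisfied from the index alone, so the created indices are in fact being used; the expensive part is likely the INTERSECT's temp B-tree. A sketch of a rewrite that is sometimes tried for this shape of query, replacing the rowid-set INTERSECT with a single join pass (whether it is faster here would need measuring on this data):

```sql
SELECT c.*
FROM contacts AS c
JOIN people AS p1 ON c.person1_id = p1.id
JOIN states AS s1 ON p1.state_id = s1.id AND s1.name = 'Kansas'
JOIN people AS p2 ON c.person2_id = p2.id
JOIN states AS s2 ON p2.state_id = s2.id AND s2.name = 'Missouri';
```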


  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column that stores the action that was performed, e.g. Update, Delete or Insert.

So far I can think of three different places where the history-table work can be done:

1. Triggers on the main table for update, insert and delete. (Database)
2. Stored procedures. (Database)
3. Application layer. (Application)

My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
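
For illustration, a minimal T-SQL sketch of the trigger variant (the dbo.Customer / dbo.CustomerHistory tables and their columns are hypothetical stand-ins for a main table and its history copy):

```sql
CREATE TRIGGER trg_Customer_History
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows in `inserted` cover INSERTs and the new side of UPDATEs.
    INSERT INTO dbo.CustomerHistory (CustomerId, Name, ModifiedDate, Action)
    SELECT i.CustomerId, i.Name, GETDATE(),
           CASE WHEN EXISTS (SELECT 1 FROM deleted d
                             WHERE d.CustomerId = i.CustomerId)
                THEN 'Update' ELSE 'Insert' END
    FROM inserted i;

    -- Rows only in `deleted` are true DELETEs.
    INSERT INTO dbo.CustomerHistory (CustomerId, Name, ModifiedDate, Action)
    SELECT d.CustomerId, d.Name, GETDATE(), 'Delete'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i
                      WHERE i.CustomerId = d.CustomerId);
END
```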


  • How to reverse the order of words in a query or C#

    - by Ranjana
I have locations stored in the database as 'India,Tamilnadu,Chennai,Annanagar'. When I bind them to the grid view they are displayed in that same format, but I need them displayed as 'Annanagar,Chennai,Tamilnadu,India'. How do I reverse the order of the words, either in the query or in C#?


  • Which MySQL query is faster?

    - by Camran
I have a classified_id variable that matches one record in a MySQL table. I am currently fetching the information about that record like this:

```sql
SELECT * FROM table WHERE table.classified_id = $classified_id
```

I wonder if there is a faster approach, for example:

```sql
SELECT 1 FROM table WHERE table.classified_id = $classified_id
```

Won't the last one select only 1 record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching for records after one is found? Or am I dreaming this? Thanks
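
A sketch of the two things that actually bound the work here (SELECT 1 changes the column list that is returned, not how many rows are examined): an index on the lookup column and an explicit row limit. The literal 123 is a placeholder for $classified_id:

```sql
-- Without an index both forms scan; with one, both seek directly.
CREATE INDEX idx_classified_id ON `table` (classified_id);

-- LIMIT 1 is what lets the server stop after the first match.
SELECT *
FROM `table`
WHERE classified_id = 123
LIMIT 1;
```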


  • counting employee attendance

    - by jjj
I am trying to write a statement to count employee attendance and return each employee's ID, name, and the number of days he has worked in the last 3 months, by counting the duplicate IDs in NewTimeAttendance for months 1, 2 and 3.

First try - counting:

```sql
SELECT COUNT(employeeid)
FROM NewTimeAttendance
WHERE employeeid = 1 AND (month = 1 OR month = 2 OR month = 3)
```

This absolutely works, but just for one employee.

Second try:

```sql
SELECT COUNT(NewEmployee.EmployeeID)
FROM NewEmployee
INNER JOIN NewTimeAttendance
   ON NewEmployee.EmployeeID = NewTimeAttendance.EmployeeID
  AND (month = 1 OR month = 2 OR month = 3)
```

This works, but it counts all the employees together, and I want it to return each EmployeeId, EmployeeName and number of days as its own record.

Last try (before you see the code: it is wrong, but I am trying):

```
for i in 0..27 loop
  SELECT COUNT(NewEmployee.EmployeeID), NewEmployee.EmployeeId, EmployeeName
  FROM NewEmployee
  INNER JOIN NewTimeAttendance
     ON NewEmployee.EmployeeID(i) = NewTimeAttendance.EmployeeID
    AND (month = 1 OR month = 2 OR month = 3)
end loop
```

I really need help... thanks in advance.
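
A sketch of the usual shape for a per-employee count: GROUP BY replaces the loop entirely (table and column names are from the question):

```sql
SELECT e.EmployeeID,
       e.EmployeeName,
       COUNT(t.EmployeeID) AS DaysWorked
FROM NewEmployee e
INNER JOIN NewTimeAttendance t
   ON e.EmployeeID = t.EmployeeID
WHERE t.month IN (1, 2, 3)
GROUP BY e.EmployeeID, e.EmployeeName;
```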


  • Entity Framework EntityKey / Foreign Key problem.

    - by Ronny176
Hi, I keep getting the same error:

```
Entities in 'VlaamseOverheidMeterEntities.ObjectMeter' participate in the
'FK_ObjectMeter_Meter' relationship. 0 related 'Meter' were found.
1 'Meter' is expected.
```

I have the following table structure:

```
Meter 1 <- * ObjectMeter * - 1 VO_Object
```

It is always the same scenario: the first meter is added to the database, the second meter gives the error above. I have the following code in my manager:

```csharp
public List<string> addTemporary(string username, string meterNaam, string readingType, string parentID)
{
    Meter meter = new Meter();
    VO_Object voObject = objectManager.getObjectByID(parentID);
    ObjectMeter objMeter = new ObjectMeter();

    meter.readingType = (int)Enum.Parse(typeof(ReadingType), readingType);
    meter.isActive = true;
    meter.name = meterNaam;
    meter.startDate = DateTime.Now;
    meter.endDate = DateTime.Now.AddYears(6000);
    meter.uniqueIdentifier = "N/A";
    meter.meterType = (int)Enum.Parse(typeof(MeterType), "NA");
    meter.meterCategory = (int)Enum.Parse(typeof(MeterCategory), "NA");
    meter.energyType = (int)Enum.Parse(typeof(EnergyType), "NA");
    meter.utilityType = (int)Enum.Parse(typeof(UtilityType), "NA");
    meter.unitOfMeasure = (int)Enum.Parse(typeof(UnitOfMeasure), "NA");

    objMeter.valid_from = meter.startDate;
    objMeter.valid_until = meter.endDate;
    objMeter.Meter = meter;
    objMeter.VO_Object = voObject;

    createMeter(meter);

    List<String> str = new List<string>();
    str.Add("" + meter.meterID);
    str.Add(meter.name);
    return str;
}
```

and this in my DAO class, which links to the database:

```csharp
internal void CreateMeter(Meter _meter)
{
    _entities.AddToMeter(_meter);
    _entities.SaveChanges();
}
```

Can someone please explain this error? Ronald


  • Store latitudes and longitudes in database for proximity/radius search using Google Maps API, .NET a

    - by poojad
What is the approach for storing the latitudes and longitudes for multiple addresses as a one-time setup? I need to find nearby stores using Google Maps, and I have to get the latitudes and longitudes of all the available stores. As the data is huge and may increase or change in the future, can anyone suggest an approach that takes performance and maintenance into consideration? Thank you.
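
A sketch of the common storage-plus-query shape (T-SQL flavor; the Store table, the 37.33 / -122.03 center point and the 0.5-degree bounding box are placeholder values):

```sql
-- Store the geocoded coordinates once, indexed for the coarse filter.
CREATE TABLE Store (
    StoreId   int   PRIMARY KEY,
    Latitude  float NOT NULL,
    Longitude float NOT NULL
);
CREATE INDEX IX_Store_LatLng ON Store (Latitude, Longitude);

-- Spherical law-of-cosines distance in miles (3959 ~ Earth's radius).
SELECT StoreId,
       3959 * ACOS(COS(RADIANS(37.33)) * COS(RADIANS(Latitude))
                 * COS(RADIANS(Longitude) - RADIANS(-122.03))
                 + SIN(RADIANS(37.33)) * SIN(RADIANS(Latitude))) AS Miles
FROM Store
WHERE Latitude  BETWEEN 37.33 - 0.5   AND 37.33 + 0.5    -- cheap box first
  AND Longitude BETWEEN -122.03 - 0.5 AND -122.03 + 0.5
ORDER BY Miles;
```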


  • Update table with if statement PL/SQL

    - by Matt
I am trying to do something like this, but am having trouble putting it into Oracle code:

```sql
BEGIN
  IF ((SELECT complete_date FROM task_table WHERE task_id = 1) IS NULL) THEN
    UPDATE task_table SET complete_date = //somedate WHERE task_id = 1;
  ELSE
    UPDATE task_table SET complete_date = NULL;
  END IF;
END;
```

But this does not work. I also tried

```sql
IF EXISTS (SELECT complete_date FROM task_table WHERE task_id = 1)
```

with no luck.
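
A sketch of two common ways around the restriction that a PL/SQL IF cannot embed a SELECT directly (assumes task 1 exists; the DATE literal is a placeholder for //somedate):

```sql
-- Option 1: read the value into a variable first.
DECLARE
  v_complete_date task_table.complete_date%TYPE;
BEGIN
  SELECT complete_date INTO v_complete_date
  FROM task_table WHERE task_id = 1;

  IF v_complete_date IS NULL THEN
    UPDATE task_table SET complete_date = DATE '2010-01-01' WHERE task_id = 1;
  ELSE
    UPDATE task_table SET complete_date = NULL WHERE task_id = 1;
  END IF;
END;
/

-- Option 2: a single UPDATE with CASE, no PL/SQL block needed.
UPDATE task_table
SET complete_date = CASE WHEN complete_date IS NULL
                         THEN DATE '2010-01-01'
                         ELSE NULL END
WHERE task_id = 1;
```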


  • Updating Database From Dataset?

    - by Ases
I want to update my database from my dataset.

```csharp
mydataadapter = new MySqlDataAdapter(
    "SELECT * FROM table0; SELECT * FROM table1; SELECT * FROM table2;", con);
mydataadapter.Fill(dataset);
// ......
// for example, I make a change like this
dataset.Tables[2].Rows[1][3] = "S";
// then update the database
MySqlCommandBuilder com = new MySqlCommandBuilder(mydataadapter);
mydataadapter.Update(dataset, "table2");
```

Then it returns this error:

```
TableMapping['table2'] or DataTable 'table2' didn't find by Update.
```

Do you have any advice?


  • customizing rowsource query in combobox ACCESS

    - by every_answer_gets_a_point
I have 4 comboboxes, and each of them needs the same query in its RowSource, except with a slight variation in each query. If the base RowSource is somequery, I need the first one to be

```sql
select * from somequery where something like 'something1';
```

and the next one to be

```sql
select * from somequery where something like 'something2';
```

Is there a way to customize the RowSource property in this way?


  • Tree structured resource Authorization

    - by user323883
I have a portfolio table with portfolio_id and parent_portfolio_id columns, and I have a user table. Some users may have access to all portfolios, some to selected portfolios, and some, depending on their group, to everything under a portfolio subtree. Can someone suggest a good schema, or any existing framework?
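
A sketch of one plausible grant table for this shape of problem (all names are hypothetical; SUBTREE grants would be resolved by walking parent_portfolio_id at check time):

```sql
CREATE TABLE portfolio_access (
    user_id      INT NOT NULL,
    portfolio_id INT NOT NULL,
    -- 'SELF' limits the grant to the one portfolio,
    -- 'SUBTREE' extends it to every descendant portfolio.
    scope        VARCHAR(10) NOT NULL DEFAULT 'SELF',
    PRIMARY KEY (user_id, portfolio_id)
);
-- "Access to everything" can then be modeled as a SUBTREE grant
-- on the root portfolio.
```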


  • How do I select a random record efficiently in MySQL?

    - by user198729
```
mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;
+----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
|  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
```

The above doesn't qualify as efficient. How should I do it properly?
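
A sketch of the common constant-time workaround: jump to a random point in the id range instead of sorting every row (assumes an integer primary key named id; results are slightly biased where ids have gaps):

```sql
SELECT u.*
FROM urls u
JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM urls)) AS r) pick
  ON u.id >= pick.r
ORDER BY u.id
LIMIT 1;
```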


  • Rails 3 fields_for aggressive loading?

    - by Seth
Hi all, I'm trying to optimize (limit) the queries in a view. I am using the fields_for helper, and I need to reference various properties of the associated object, such as username, for display purposes. However, this is a rel (join) table, so I need to join with my users table. The result is N sub-queries, one for each field in fields_for. It's difficult to explain, but I think you'll understand what I'm asking if I paste my code:

```erb
<%= form_for @election do |f| %>
  <%= f.fields_for :voters do |voter| %>
    <%= voter.hidden_field :id %>
    <%= voter.object.user.preferred_name %>
  <% end %>
<% end %>
```

I have like 10,000 users, and many times an election will include all 10,000 users. That's 10,000 subqueries every time this view is loaded. I want fields_for to JOIN on users. Is this possible? I'd like to do something like:

```erb
...
<%= f.fields_for :voters, :joins => :users do |voter| %>
  ...
<% end %>
...
```

But that, of course, doesn't work :(

