Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.

Page 603/1123 | < Previous Page | 599 600 601 602 603 604 605 606 607 608 609 610  | Next Page >

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous stored procedure checks several conditions against the inserted data and updates other tables. It ran fine for the last month, with the procedure executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints holds far too many rows. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (around 60% CPU utilization). Where should I look? I could re-create the database without any problem, but I don't think that is a good way to resolve the problem.
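
    A hedged guess from the description: if the activation procedure never ends its conversations, rows pile up in sys.conversation_endpoints and slow Service Broker down. A minimal cleanup sketch (the handle variable and database name are placeholders, not taken from the question):

        -- End each dialog once the activation procedure is done with it
        END CONVERSATION @dialog_handle;

        -- Last-resort maintenance: drop every conversation in the database at once
        -- (all in-flight Service Broker messages are lost)
        ALTER DATABASE YourDatabase SET NEW_BROKER WITH ROLLBACK IMMEDIATE;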

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will cater to a large number of users (~1,000,000), and there are lots of child tables where user-specific data is stored (e.g. personal, health, forum contributions). What would be the best practice for archiving users and user-specific information? [a] Would it be wise to move the archived users and their data to matching archive tables within the same database (e.g. user_archive, user_forum_comments_archive, ...), or [b] would you just mark the entries with a flag in the original table(s) and query only non-archived entries? We have a unique constraint on User.loginid; how do you handle that requirement if users are archived via [a]? For example, if a user with loginid 'samuel' is moved into the archive table and a new user is added with the same loginid in the original table, how would you prevent this? What would be the best strategy for the unique key constraints? We also have a requirement to selectively archive records and bring them back if necessary; would you rely on database tools or handle this via the persistence API exposed by the JPA entity model?
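
    A minimal sketch of option [b], assuming MySQL and an assumed flag column name; because rows never leave the table, the existing unique constraint on loginid keeps working unchanged:

        ALTER TABLE user ADD COLUMN archived TINYINT(1) NOT NULL DEFAULT 0;

        -- application queries then filter on the flag
        SELECT * FROM user WHERE archived = 0 AND loginid = 'samuel';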

    Read the article

  • mysql result set joining existing table

    - by Yang
    Is there any way to avoid using a temporary table? I am using a query with an aggregate function (SUM) to total the quantity of each product; the result looks like this: product_name | sum(qty) product_1 | 100 product_2 | 200 product_5 | 300. Now I want to join that result to another table called products, so that I get a summary like this: product_name | sum(qty) product_1 | 100 product_2 | 200 product_3 | 0 product_4 | 0 product_5 | 300. I know one way of doing this is to dump the first query's result into a temporary table and then join it with the products table. Is there a better way?
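
    One common way to skip the temporary table is to start from the products table and LEFT JOIN the detail rows, letting COALESCE turn missing sums into 0. A sketch with the detail table and join column assumed:

        SELECT p.product_name,
               COALESCE(SUM(o.qty), 0) AS total_qty
        FROM products p
        LEFT JOIN order_items o ON o.product_id = p.product_id
        GROUP BY p.product_name;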

    Read the article

  • Aggregate Functions on subsets of data based on current row values with SQL

    - by aasukisuki
    Hopefully that title makes sense... Let's say I have an employee table: ID | Name | Title | Salary ---------------------------- 1 | Bob | Manager | 15285 2 | Joe | Worker | 10250 3 | Al | Worker | 11050 4 | Paul | Manager | 16025 5 | John | Worker | 10450 What I'd like to do is write a query that gives me the above table along with an averaged-salary column based on the employee's title: ID | Name | Title | Salary | Pos Avg -------------------------------------- 1 | Bob | Manager | 15285 | 15655 2 | Joe | Worker | 10250 | 10583 3 | Al | Worker | 11050 | 10583 4 | Paul | Manager | 16025 | 15655 5 | John | Worker | 10450 | 10583 I've tried doing this with a sub-query along the lines of: Select *, (select Avg(e2.salary) from employee e2 where e2.title = e.title) from employee e but I've come to realize that the sub-query is executed first and has no knowledge of the table aliased e. I'm sure I'm missing something REALLY obvious here; can anyone point me in the right direction?
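
    An equivalent formulation that sidesteps the aliasing concern joins against a derived table of per-title averages; a sketch using the columns shown:

        SELECT e.ID, e.Name, e.Title, e.Salary, t.PosAvg
        FROM employee e
        JOIN (SELECT Title, AVG(Salary) AS PosAvg
              FROM employee
              GROUP BY Title) t ON t.Title = e.Title;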

    Read the article

  • Subquery max sequence number

    - by Andy Levesque
    I'm hesitant to ask because I'm sure the answer is out there, but I just can't seem to come up with the keywords to find it. I'm stepping outside my comfort zone by starting with subqueries (I'm normally an Access user). I have a query that returns TECH_ID, SEQ_NBR, and PELL_FT_AWD_AMT: SELECT ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID, ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR, ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT, ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR FROM ISRS_V_NEED_ANAL_RESULT_PARENT GROUP BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID, ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR, ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT, ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR HAVING (((ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR)="2013")) ORDER BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID; What I want is to add a subquery that selects only the maximum SEQ_NBR for each TECH_ID, but I can't seem to get the syntax right. In the past I would cheat and have a separate query that first gave me the TECH_ID and max SEQ_NBR, and then a second query that joined the original table to the first query to get the rest. How can I do this in one query? Example: TECH_ID SEQ_NBR PELL 1 1 4000 1 2 4000 1 3 5000 Taking just the max of the sequence number still returns both 1; 2; 4000 and 1; 3; 5000, when I only want the latter.
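
    One single-query approach, sketched against the view named above, keeps only the row whose SEQ_NBR equals the per-TECH_ID maximum via a correlated subquery:

        SELECT p.TECH_ID, p.AWD_YR, p.PELL_FT_AWD_AMT, p.SEQ_NBR
        FROM ISRS_V_NEED_ANAL_RESULT_PARENT p
        WHERE p.AWD_YR = '2013'
          AND p.SEQ_NBR = (SELECT MAX(p2.SEQ_NBR)
                           FROM ISRS_V_NEED_ANAL_RESULT_PARENT p2
                           WHERE p2.TECH_ID = p.TECH_ID
                             AND p2.AWD_YR = p.AWD_YR)
        ORDER BY p.TECH_ID;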

    Read the article

  • C# SqlDataAdapter not populating DataSet

    - by Wesley
    I have searched the net but could not quite find the problem I am running into. I am currently having an issue getting a SqlDataAdapter to populate a DataSet. I am running Visual Studio 2008 and the query is being sent to a local instance of SQL Server 2008. If I run the query itself in SQL Server, I do get results. Code is as follows: string theQuery = "select Password from Employees where employee_ID = '@EmplID'"; SqlDataAdapter theDataAdapter = new SqlDataAdapter(); theDataAdapter.SelectCommand = new SqlCommand(theQuery, conn); theDataAdapter.SelectCommand.Parameters.Add("@EmplID", SqlDbType.VarChar).Value = "EmployeeName"; theDataAdapter.Fill(theSet); The code to read the DataSet: foreach (DataRow theRow in theSet.Tables[0].Rows) { //process row info } If there is any more info I can supply, please let me know.
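
    A likely culprit, offered as an assumption from the snippet: because @EmplID is wrapped in single quotes, the WHERE clause compares employee_ID to the literal string '@EmplID' and the bound parameter is never used. The query text would then need to be:

        select Password from Employees where employee_ID = @EmplID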

    Read the article

  • Merging sql queries to get different results by date

    - by pedalpete
    I am trying to build a 'recent events' feed and can't seem to get my query right, or figure out how to merge the results of two queries and sort them by date. One table holds games, and another table holds the actions taken in those games. The recent events should show users 1) the actions taken on games that are publicly visible (published), and 2) when a new game is created and published. My actions table has actionId, gameid, userid, actiontype, lastupdate. My games table has gameid, startDate, createdby, published, lastupdate. I currently have a query like this (simplified for easy understanding, I hope): SELECT actionId, actions.gameid, userid, actiontype, actions.lastupdate FROM actions JOIN ( SELECT games.gameid, startDate, createdby, published, games.lastupdate FROM games WHERE published=1 AND lastupdate>today-2 ) publishedGames on actions.gameid=games.gameid WHERE actions.type IN (0,4,5,6,7) AND actions.lastupdate>games.lastupdate and published=1 OR games.lastupdate>today-2 AND published=1 This query looks for actions from published games where the action took place after the game was published, which pretty much covers the first requirement. However, I also need the results of SELECT games.gameid, startDate, createdby, published, games.lastupdate FROM games WHERE published=1 AND startDate>today-2 so I can include in the feed when a new game has been published. When I run the query as written, I get all the actionids and their gameids, but I don't get a row showing the gameid when it was published. I understand that I may need to run two separate queries and then merge the results afterward with PHP, but I'm completely lost on where to start with that as well.
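
    One way to fold both event types into a single date-sortable feed is a UNION ALL of the action rows and a synthetic "game published" row per game. A rough sketch, assuming MySQL date syntax and an arbitrary -1 marker for the publish event:

        SELECT a.gameid, a.userid, a.actiontype, a.lastupdate AS event_date
        FROM actions a
        JOIN games g ON g.gameid = a.gameid
        WHERE g.published = 1
          AND a.actiontype IN (0,4,5,6,7)
          AND a.lastupdate > g.lastupdate
        UNION ALL
        SELECT g.gameid, g.createdby, -1 AS actiontype, g.lastupdate AS event_date
        FROM games g
        WHERE g.published = 1
          AND g.lastupdate > CURDATE() - INTERVAL 2 DAY
        ORDER BY event_date DESC;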

    Read the article

  • Round date to fiscal year

    - by Dave Jarvis
    The following database view rounds the date back to the closest fiscal year (April 1st): CREATE OR REPLACE VIEW FISCAL_YEAR_VW AS SELECT CASE WHEN to_number(to_char( SYSDATE, 'MM' )) < 4 THEN to_date('1-APR-'||to_char(add_months(SYSDATE, -12), 'YYYY'), 'dd-MON-yyyy') ELSE to_date('1-APR-'||to_char(SYSDATE, 'YYYY'), 'dd-MON-yyyy') END AS fiscal_year FROM dual; This allows us to calculate the current fiscal year based on today's date. How can this calculation be simplified or optimized?
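
    One shorter form, sketched for Oracle: shift the date back three months, truncate to January 1st, then shift forward three months to land on April 1st of the correct fiscal year.

        CREATE OR REPLACE VIEW FISCAL_YEAR_VW AS
        SELECT ADD_MONTHS(TRUNC(ADD_MONTHS(SYSDATE, -3), 'YYYY'), 3) AS fiscal_year
        FROM dual;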

    Read the article

  • Analysis Services with a non-normalized table

    - by Uwe
    I have a table with several million rows. Each row represents a user session. There is a column called user, which is not unique; there can be multiple sessions per user. I want to use Analysis Services to get additional properties per user, for example: how many (unique!) users had a session longer than x minutes? How is that possible without changing the database? Note: there is no lookup table and I cannot create one. What I am able to do at the moment is ask how many sessions were longer than x minutes.
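
    In plain SQL the question reduces to a distinct count over a filtered session set; in a cube the equivalent is a distinct-count measure on the user column. A sketch with the table and duration column names assumed:

        SELECT COUNT(DISTINCT s.[user]) AS users_over_threshold
        FROM sessions s
        WHERE s.duration_minutes > 30;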

    Read the article

  • MS Access: How can I count distinct values using an Access query?

    - by Sadat
    Here is the current complex query: SELECT DISTINCT Evaluation.ETCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner, Training.TVenue, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Answer.QCode, Answer.Answer, Count(Answer.Answer) AS [Count], Questions.SL, Questions.Question FROM ((Evaluation INNER JOIN Training ON Evaluation.ETCode=Training.TCode) INNER JOIN Answer ON Evaluation.ECode=Answer.ECode) INNER JOIN Questions ON Answer.QCode=Questions.QCode GROUP BY Evaluation.ETCode, Answer.QCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner, Training.Tvenue, Answer.Answer, Questions.Question, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Questions.SL ORDER BY Answer.QCode, Answer.Answer; There is another column, Training.TCode. I need to count distinct Training.TCode values; can anybody help me? If you need more information, please let me know.
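
    Access SQL has no COUNT(DISTINCT ...), so the usual workaround is to count over a derived table of distinct values. A minimal sketch, kept separate from the main query:

        SELECT COUNT(*) AS DistinctTCodes
        FROM (SELECT DISTINCT Training.TCode
              FROM Training INNER JOIN Evaluation
              ON Training.TCode = Evaluation.ETCode) AS T;

    Depending on the Access version, the inner SELECT may need to be saved as a separate query and referenced by name rather than written inline.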

    Read the article

  • What data structures and algorithms are applied within data warehouse cubes?

    - by Jeff Meatball Yang
    I understand that cubes are optimized data structures for aggregating and "slicing" large amounts of data. I just don't know how they are implemented. I can imagine a lot of this technology is proprietary, but are there any resources that I could use to start implementing my own cube technology? Set theory and lots of math are probably involved (and welcome as suggestions!), but I'm primarily interested in implementations: the data structures and query algorithms. Thanks!

    Read the article

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that holds the customer's ID, and there are about 10,000 different customer codes. I have a GUI that lets the user render a report from the transaction table, and the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers: SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...') I quickly ran into problems when I selected a few thousand customers; there is a limit on the IN operator. An alternative is to create a temporary table with a single CODE field, inject the selected customer codes into it with INSERT statements, and then use SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE) This works nicely for huge selections. However, there is performance overhead in creating the temporary table, running the INSERTs, and dropping the temporary table. Are you aware of a better solution for this situation?
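
    One pattern that avoids repeated temporary-table DDL is a permanent selection table keyed by a job (or session) id: the chosen codes are inserted once per report run and joined to the transaction table. A sketch with assumed names and sizes:

        CREATE TABLE REPORT_SELECTION (
          JOB_ID INTEGER     NOT NULL,
          CODE   VARCHAR(20) NOT NULL,
          PRIMARY KEY (JOB_ID, CODE)
        );

        INSERT INTO REPORT_SELECTION (JOB_ID, CODE) VALUES (42, 'C0001');
        INSERT INTO REPORT_SELECTION (JOB_ID, CODE) VALUES (42, 'C0002');

        SELECT A.*
        FROM TRANS_TABLE A
        INNER JOIN REPORT_SELECTION B ON (A.CODE = B.CODE)
        WHERE B.JOB_ID = 42;

        DELETE FROM REPORT_SELECTION WHERE JOB_ID = 42;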

    Read the article

  • SQLce Select query problem

    - by DieHard
    I wrote a truck show contest voting app (voting, financials, etc.) using SQLite, and decided to write a backup app for show day using SQL Server CE 3.5. I created the database, moved it to the data directory, created the tables, and configured the DataGridViews; all was well. I entered some test data, started Management Studio 2008, ran a SELECT query against the table, and got null results. I then started the app from Visual Studio and found that the test data was gone. I re-entered the data, ran the query in Management Studio, and the data was gone again. If I only use Visual Studio I can start the app, enter data, close and restart it, and the data is still there; the data seems to disappear only when an outside tool runs a SELECT query. I don't know CE that well, but this cannot be right. select * from votes = delete * from votes?

    Read the article

  • Insert data from one table into another

    - by Lee_McIntosh
    I have a table that lists the number of comments from particular sites, like the following: Date Site Comments Total --------------------------------------------------------------- 2010-04-01 00:00:00.000 1 5 5 2010-04-01 00:00:00.000 2 8 13 2010-04-01 00:00:00.000 4 2 7 2010-04-01 00:00:00.000 7 13 13 2010-04-01 00:00:00.000 9 1 2 I have another table that lists ALL sites, for example from 1 to 10: Site ----- 1 2 ... 9 10 Using the following code I can find out which sites are missing entries for the previous month: SELECT s.site from tbl_Sites s EXCEPT SELECT c.site from tbl_Comments c WHERE c.[Date] = DATEADD(mm, DATEDIFF(mm, 0, GetDate()) -1,0) Producing: site ----- 3 5 6 8 10 I would like to insert the missing sites returned by my query into the comments table with some default values, i.e. '0's: Date Site Comments Total --------------------------------------------------------------- 2010-04-01 00:00:00.000 3 0 0 2010-04-01 00:00:00.000 5 0 0 2010-04-01 00:00:00.000 6 0 0 2010-04-01 00:00:00.000 8 0 0 2010-04-01 00:00:00.000 10 0 0 The question is: how do I insert (or update) these values into the table? Cheers, Lee
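
    Building on the EXCEPT query, one sketch (T-SQL, reusing the previous-month expression from above) inserts a zero row for every missing site:

        INSERT INTO tbl_Comments ([Date], Site, Comments, Total)
        SELECT DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0), s.site, 0, 0
        FROM tbl_Sites s
        WHERE s.site NOT IN (SELECT c.site
                             FROM tbl_Comments c
                             WHERE c.[Date] = DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0));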

    Read the article

  • What are the disadvantages of VistaDB

    - by PhantomTypist
    I am looking into using a lightweight serverless database engine like SQLite, Firebird, or VistaDB in an upcoming project. Someone asked about the advantages of VistaDB; I would like to know the disadvantages of using VistaDB versus the other technologies.

    Read the article

  • To log in stored procedures?

    - by hgulyan
    If you have a long-running stored procedure, do you somehow log its actions, or do you just wait for the message "Command(s) completed successfully."? I assume there can be plenty of solutions on this subject, but is there a best practice - a simple approach that is frequently used?
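
    A common, simple pattern is a small log table that the procedure writes step markers into; a rough sketch with made-up table and procedure names:

        CREATE TABLE dbo.ProcStepLog (
          LogId    INT IDENTITY(1,1) PRIMARY KEY,
          ProcName SYSNAME      NOT NULL,
          StepInfo VARCHAR(400) NOT NULL,
          LoggedAt DATETIME     NOT NULL DEFAULT GETDATE()
        );

        -- inside the long-running procedure, after each major step:
        INSERT INTO dbo.ProcStepLog (ProcName, StepInfo)
        VALUES ('usp_NightlyLoad', 'staging load complete');

    Querying the log from a second connection then shows progress while the procedure runs, as long as the inserts are not held inside one big uncommitted transaction.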

    Read the article

  • C# : DBConnection.Open() timeout is too long.

    - by leo
    Hi, I'm trying to connect to a server that the user inputs. When the server doesn't exist, I'd like to give quick feedback to the end user so he can correct what he's typed. Is there any way to test whether a server exists before trying to connect? Thanks

    Read the article

  • Help with query

    - by hdoe123
    Hi, I'm trying to write a query against a single table that checks whether a student is in a team called CMHT and also in the Medic team - if they are in both, I don't want to see them in the result. I only want students who are in CMHT or Medic, not both. Would the right direction be a subquery to filter them out? I've searched on NOT IN, but how do you check whether a student is in two or more of the teams or not? Student Team ref: 1 CMHT 1; 1 Medic 2; 2 Medic 3 (this one should be in the result); 3 CMHT 5 (this one should be in the result). So far I've done the following; would I need to use a subquery, or do a self join and filter it that way? SELECT Table1.Student, Table1.Team, Table1.refnumber FROM Table1 WHERE Table1.Team In ('Medics','CMHT')
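
    One way to keep only students who appear in exactly one of the two teams is to exclude anyone who also has a row for the other team; a sketch in Access SQL (team spellings follow the sample data, column names follow the query above):

        SELECT a.Student, a.Team, a.refnumber
        FROM Table1 AS a
        WHERE a.Team IN ('Medic', 'CMHT')
          AND NOT EXISTS (SELECT 1
                          FROM Table1 AS b
                          WHERE b.Student = a.Student
                            AND b.Team IN ('Medic', 'CMHT')
                            AND b.Team <> a.Team);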

    Read the article

  • MySQL - Finding time overlaps

    - by Jude
    Hi, I have 2 tables in the database with the following attributes: Booking ======= booking_id booking_start booking_end resource_booked =============== booking_id resource_id The second table is an associative entity between "Booking" and "Resource" (i.e. one booking can contain many resources). The attributes booking_start and booking_end are timestamps holding date and time. How might I find out, for each resource_id in resource_booked, whether its date/time overlaps or clashes with other bookings of the same resource_id? I was doodling the answer on paper, pictorially, to see if it might help me visualize how to solve this, and I got this: 1) join the 2 tables (Booking, resource_booked) into one result set with the 4 attributes needed; 2) follow the answer suggested here: http://stackoverflow.com/questions/689458/find-overlapping-date-time-rows-within-one-table I did step 1, but step 2 is leaving me baffled! I would really appreciate any help on this. Thanks!
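
    A sketch of the overlap test for step 2, using the tables described: pair up bookings of the same resource and keep the pairs whose intervals intersect (each starts before the other ends).

        SELECT rb1.resource_id,
               b1.booking_id AS booking_a,
               b2.booking_id AS booking_b
        FROM resource_booked rb1
        JOIN resource_booked rb2
          ON rb1.resource_id = rb2.resource_id
         AND rb1.booking_id  < rb2.booking_id
        JOIN booking b1 ON b1.booking_id = rb1.booking_id
        JOIN booking b2 ON b2.booking_id = rb2.booking_id
        WHERE b1.booking_start < b2.booking_end
          AND b2.booking_start < b1.booking_end;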

    Read the article

  • ISBNdb Retrieving and Managing Info

    - by Pierre Sylvestre
    Given a set of databases, how can one look up a book and show first a single price (the average of a hidden list of prices coming from different web sites), followed by an optional second view that lists the different web sites with their pages? To take an example, a query for ISBN 9785554443331 returns "Chemistry: The Central Science, 11th edition": new: $50; used, good condition: $35; used, poor condition: $20. If the result does not match our product list, a "click here to visit our partner" option appears, which returns: Atextbook: $10; Btextbook: $10; Ctextbook: $9; Dtextbook: $8.50. I understand that the first search would be run simultaneously against the web and our database to determine whether or not we have the book, and against the web to get the average price across a given list of web sites. Thank you in advance for the help.

    Read the article

  • SQLite long to wide formats?

    - by Stephen
    Hi, I wonder if there is a canonical way to convert data from long to wide format in SQLite (is that operation usually in the domain of relational databases?). I tried to follow this example for MySQL but I guess SQLite does not have the same IF construct... Thanks!
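
    SQLite handles this with CASE expressions inside aggregates rather than IF; a minimal long-to-wide sketch with made-up table and column names:

        SELECT id,
               MAX(CASE WHEN attribute = 'height' THEN value END) AS height,
               MAX(CASE WHEN attribute = 'weight' THEN value END) AS weight
        FROM long_table
        GROUP BY id;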

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!
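
    Before reaching for GiST or GIN, one sketch worth trying: if the intent of the nested extract()/concatenation expressions is simply to bound taken by calendar year (this is an interpretation, not a verified rewrite), the filter can become a plain range test that an ordinary btree index on the column can serve:

        CREATE INDEX measurement_12_013_taken_idx
          ON climate.measurement_12_013 (taken);

        -- in the WHERE clause, instead of deriving the bounds from taken per row:
        --   AND m.taken >= DATE '1900-01-01'
        --   AND m.taken <  DATE '2010-01-01'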

    Read the article
