Search Results

  • Remove trailing slash with htaccess but preserve query string

    - by soundseller
    I am using the following directives in my .htaccess to remove trailing slashes from my URIs, to prevent duplicate content. However, these directives also remove any query string that might be present.

        RewriteCond %{HTTP_HOST} ^(www.)?mydomain\.com$ [NC]
        RewriteRule ^(.+)/$ http://www.mydomain.com/$1 [R=301,L]

    I'd like to know how to remove a potential trailing slash from my URI, but also preserve query strings.

  • SQL Query with computed column

    - by plotnick
    Please help me with a query. Assume that we have a table with these columns:

        Transaction
        StartTime
        EndTime

    Now, I need a query with a computed column (value = EndTime - StartTime). Actually, I need to group by user (Transaction has a FK to Users) and sort users by the average time spent per transaction.
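
    A possible shape for this (a sketch, assuming SQL Server and hypothetical names TransactionLog and UserId; swap in the real table and key columns):

        SELECT t.UserId,
               AVG(DATEDIFF(SECOND, t.StartTime, t.EndTime)) AS AvgSeconds
        FROM TransactionLog t
        GROUP BY t.UserId
        ORDER BY AvgSeconds DESC;

    DATEDIFF computes the duration per row, and the AVG aggregate plus GROUP BY gives the per-user average to sort on.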

  • Help with Query - Access

    - by yae
    Hi, I have 4 tables: machines, categories, users and usersMachines. machines is linked to categories and usersMachines.

        Machines          Categories        Users             UsersMachines
        --------          ----------        -----             -------------
        idMachine         idCat             idUser            idUserMachine
        machine           category          nameUser          idUser
        idCat                                                 idMachine

    To list machines, filtered by any field of the 3 previous tables, I have this query:

        select distinct machines.*, categories.category
        from ((machines left join UsersMachines
                 on machines.idMachine = UsersMachines.idMachine)
               left join categories
                 on machines.idcat = categories.idcat)

    OK, it runs fine. But using the same query, how can I filter only the machines that have linked users?
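
    One way to do that (a sketch against the schema above): keep the query and filter out the machines with no matching user row, since a LEFT JOIN leaves idUser NULL when there is no link:

        select distinct machines.*, categories.category
        from ((machines left join UsersMachines
                 on machines.idMachine = UsersMachines.idMachine)
               left join categories
                 on machines.idcat = categories.idcat)
        where UsersMachines.idUser is not null;

    Switching the first LEFT JOIN to an INNER JOIN achieves the same thing.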

  • Convert SQL query

    - by nisha
    I have used this query in SQL Server; please help me convert it to work with an Access database. The table structure is UserID, Username, LogDate, LogTime.

        WITH [TableWithRowId] as
            (SELECT ROW_NUMBER() OVER(ORDER BY UserId, LogDate, LogTime) RowId, *
             FROM @YourTable),
        [OddRows] as
            (SELECT * FROM [TableWithRowId] WHERE rowid % 2 = 1),
        [EvenRows] as
            (SELECT *, RowId - 1 As OddRowId
             FROM [TableWithRowId] WHERE rowid % 2 = 0)
        SELECT [OddRows].UserId, [OddRows].UserName,
               [OddRows].LogDate, [OddRows].LogTime,
               [EvenRows].LogDate, [EvenRows].LogTime
        FROM [OddRows]
        LEFT JOIN [EvenRows] ON [OddRows].RowId = [EvenRows].OddRowId
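
    Access SQL has no CTEs and no ROW_NUMBER(), so the usual workaround is to derive the row number with a correlated count (a sketch, assuming a table named LogTable and no duplicate UserID/LogDate/LogTime combinations):

        SELECT t.UserID, t.Username, t.LogDate, t.LogTime,
               (SELECT COUNT(*) FROM LogTable AS x
                WHERE x.UserID < t.UserID
                   OR (x.UserID = t.UserID AND x.LogDate < t.LogDate)
                   OR (x.UserID = t.UserID AND x.LogDate = t.LogDate
                       AND x.LogTime <= t.LogTime)) AS RowId
        FROM LogTable AS t
        ORDER BY t.UserID, t.LogDate, t.LogTime;

    Saving this as a query, the odd/even pairing can then be rebuilt by joining the saved query to itself on RowId = RowId + 1 and filtering with RowId MOD 2, mirroring the OddRows/EvenRows CTEs.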

  • SQL Server Query Question

    - by Lp1
    Running SQL Server 2008, and I am definitely a new SQL user. I have a table with 4 columns: EmpNum, User, Action, Updatetime. When a user logs into and out of a system, it is registered in the database. For example, if user1 logs into the system and then out 5 minutes later, a simple query (select * from update) would look like:

        EmpNum  User   Action  Updatetime
        1       User1  I       2010-01-01 23:00:00:000
        1       User1  O       2010-01-01 23:05:00:000

    I'm trying to query the EmpNum, User, Action, I (in time), O (out time), and the total time.
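
    One common shape for pairing the I and O rows (a sketch, assuming every login row has a later logout row and a hypothetical table name LoginLog):

        SELECT i.EmpNum,
               i.[User],
               i.Updatetime AS InTime,
               o.Updatetime AS OutTime,
               DATEDIFF(MINUTE, i.Updatetime, o.Updatetime) AS TotalMinutes
        FROM LoginLog i
        JOIN LoginLog o
          ON o.EmpNum = i.EmpNum
         AND o.Action = 'O'
         AND o.Updatetime = (SELECT MIN(x.Updatetime)
                             FROM LoginLog x
                             WHERE x.EmpNum = i.EmpNum
                               AND x.Action = 'O'
                               AND x.Updatetime > i.Updatetime)
        WHERE i.Action = 'I';

    The subquery matches each 'I' row to the first 'O' row that follows it for the same employee.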

  • MySQL query optimization

    - by vamsivanka
    I would need some help on how to optimize this query:

        select * from transaction where id < 7500001 order by id desc limit 16

    When I do an EXPLAIN on this, the type is "range" and rows is "7500000". According to some online references, this means the query had to scan 7,500,000 rows to get the data. Is there any way I can optimize it so it scans fewer rows? Also, id is the primary key column.
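
    Two points that may help (hedged, since only the EXPLAIN output is described): the rows value in EXPLAIN is the optimizer's estimate of the range size, not the number of rows actually read, and with ORDER BY id DESC LIMIT 16 on the primary key, InnoDB can walk the index backwards and stop after 16 rows. If SELECT * is still slow because the rows are wide, a late row lookup is a common rewrite (a sketch):

        SELECT t.*
        FROM (SELECT id
              FROM transaction
              WHERE id < 7500001
              ORDER BY id DESC
              LIMIT 16) AS q
        JOIN transaction AS t ON t.id = q.id
        ORDER BY t.id DESC;

    The inner query touches only the primary key index; the outer join then fetches just the 16 full rows.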

  • How to pass variables into a dynamic query

    - by Ranjana
    I am using a dynamic query and need to pass variables into it:

        select a.TableName, COUNT(a.columnvalue) as '+'count'+'
        from Settings.ColumnMapping a
        where a.ColumnValue in (' + @columnvalue + ')
          and a.Value in (' + @value + ')

    The variables are:

        @columnvalue = 'a','b','c'
        @value = 'comm(,)','con(:)'

    How can I pass these into the dynamic query? Any ideas?
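
    For scalar values, sp_executesql with real parameters is safer than string concatenation (a sketch; note that a single parameter cannot stand in for a comma-separated IN list, so true lists need a split function or a table-valued parameter):

        DECLARE @sql nvarchar(max) = N'
            select a.TableName, COUNT(a.columnvalue) as [count]
            from Settings.ColumnMapping a
            where a.ColumnValue = @col and a.Value = @val
            group by a.TableName';

        EXEC sp_executesql @sql,
             N'@col nvarchar(50), @val nvarchar(100)',
             @col = N'a',
             @val = N'comm(,)';

    The GROUP BY is added here because COUNT sits next to a non-aggregated column; without it the statement would not compile.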

  • Complex query with two tables and multiple data and price ranges

    - by TiuTalk
    Let's suppose that I have these tables:

        [ properties ]
        id (INT, PK)
        name (VARCHAR)

        [ properties_prices ]
        id (INT, PK)
        property_id (INT, FK)
        date_begin (DATE)
        date_end (DATE)
        price_per_day (DECIMAL)
        price_per_week (DECIMAL)
        price_per_month (DECIMAL)

    And my visitor runs a search like: list the first 10 (pagination) properties where the price per day (price_per_day field) is between 10 and 100, for the period from 1st May until 31st December. I know that's a huge query, and I need to paginate the results, so I must do all the calculation and logic in only one query... that's why I'm here! :)
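
    A starting point (a sketch, assuming MySQL, the year 2010, and that a property qualifies if at least one of its price rows overlaps the period):

        SELECT p.id, p.name
        FROM properties p
        JOIN properties_prices pp ON pp.property_id = p.id
        WHERE pp.price_per_day BETWEEN 10 AND 100
          AND pp.date_begin <= '2010-12-31'
          AND pp.date_end   >= '2010-05-01'
        GROUP BY p.id, p.name
        ORDER BY p.id
        LIMIT 0, 10;

    The overlap test (begin <= period end AND end >= period start) matches any price window that intersects the requested period; tighten it if every day of the period must be covered, and change the LIMIT offset for subsequent pages.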

  • Oracle Query for getting CURRENT CTC (Salary) of Each Employee

    - by reply2viveksshah
    I want the current CTC of each employee. The following is the design of my table:

        Ecode     Implemented Date  Salary
        7654323   2010-05-20        350000
        7654322   2010-05-17        250000
        7654321   2003-04-01        350000
        7654321   2004-04-01        450000
        7654321   2005-04-01        750000
        7654321   2007-04-01        650000

    I want an Oracle query for the following output:

        Ecode     Salary
        7654321   650000
        7654322   250000
        7654323   350000

    Thanks in advance. See also: Oracle Query for getting Maximum CTC (Salary) of Each Employee
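
    One Oracle idiom for "take the value from the latest row per group" (a sketch, assuming the table is named salaries and the date column is implemented_date):

        SELECT ecode,
               MAX(salary) KEEP (DENSE_RANK LAST ORDER BY implemented_date) AS salary
        FROM salaries
        GROUP BY ecode
        ORDER BY ecode;

    An equivalent formulation uses ROW_NUMBER() OVER (PARTITION BY ecode ORDER BY implemented_date DESC) in a subquery and keeps only row number 1.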

  • JPA query many to one association

    - by Random Joe
    I want to build the following pseudo query:

        Select a From APDU a where a.group.id = :id

    group is a field in the APDU class, of type APDUGroup. I just want to get a list of APDUs based on an APDUGroup's id. How do I do that using a standard JPA query?
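
    As far as I can tell, the pseudo query is already valid JPQL as written; an equivalent explicit-join form (a sketch, assuming APDU.group maps to APDUGroup with an id attribute) is:

        SELECT a FROM APDU a JOIN a.group g WHERE g.id = :id

    One caveat: GROUP is a reserved word in JPQL, so if a provider ever chokes on the a.group path, renaming the field on the entity is the safe fallback.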

  • Manipulate data in the DB query or in the code

    - by DrDro
    How do you decide on which side to perform your data manipulation when you can do it either in the code or in the query? For example, when you need to display a date in a specific format, do you retrieve the desired format directly in the SQL query, or do you retrieve the date and then format it in code? What helps you decide: performance, best practice, preference for SQL vs. the host language, complexity of the task...?

  • MySQL - Add Values In Query And Return As Single Value

    - by Sjwdavies
    I'm trying to query a database and return a set of values. Part of the data I'm trying to return is an insurance premium breakdown. Is it possible to run a query that selects multiple fields, then adds them and returns them as a single value? I've seen SUM(), but the examples I've seen show it adding up the results of an entire column, whereas I need it to add specific fields within each row returned. Any help is much appreciated.
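
    Adding columns within a row is just the + operator in the select list; SUM() only collapses many rows into one (a sketch, with a hypothetical policies table and premium-part columns):

        SELECT id,
               COALESCE(premium_base, 0)
             + COALESCE(premium_tax, 0)
             + COALESCE(premium_fees, 0) AS total_premium
        FROM policies;

    COALESCE guards against NULL parts, since NULL + anything yields NULL.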

  • MySQL query not returning correct results

    - by Pete Herbert Penito
    Hi! This query of mine is returning therapists whose therapistTable.activated is equal to false as well as those set to true, so it's basically selecting the whole table. Any advice would be appreciated!

        $query = "SELECT therapistTable.*
                  FROM therapistTable
                  WHERE therapistTable.activated = 'true'
                  ORDER BY therapistTable.distance";
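
    One thing worth checking (an assumption, since the column definition isn't shown): if activated is a numeric column such as TINYINT(1), MySQL casts the string 'true' to the number 0 for the comparison, so the filter does not mean what it reads. Comparing against the actual stored value avoids the surprise (a sketch):

        -- if activated is TINYINT(1) holding 0/1
        SELECT t.*
        FROM therapistTable t
        WHERE t.activated = 1
        ORDER BY t.distance;

    If the column is a VARCHAR or ENUM instead, SELECT DISTINCT activated FROM therapistTable will show the exact stored spellings to compare against.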

  • Issue with SQL query for activity stream/feed

    - by blabus
    I'm building an application that allows users to recommend music to each other, and am having trouble building a query that would return a 'stream' of recommendations involving both the user themselves and any of the user's friends. This is my table structure:

        Recommendations
        ID   Sender  Recipient  [other columns...]
        ---  ------  ---------  ------------------
        r1   u1      u3         ...
        r2   u3      u2         ...
        r3   u4      u3         ...

        Users
        ID   Email  First Name  Last Name  [other columns...]
        ---  -----  ----------  ---------  ------------------
        u1   ...    ...         ...        ...
        u2   ...    ...         ...        ...
        u3   ...    ...         ...        ...
        u4   ...    ...         ...        ...

        Relationships
        ID   Sender  Recipient  Status    [other columns...]
        ---  ------  ---------  --------  ------------------
        rl1  u1      u2         accepted  ...
        rl2  u3      u1         accepted  ...
        rl3  u1      u4         accepted  ...
        rl4  u3      u2         accepted  ...

    So for user u4 (who is friends with u1), I want to query for a stream of recommendations relevant to u4. This stream would include all recommendations in which either the sender or recipient is u4, as well as all recommendations in which the sender or recipient is u1 (the friend). This is what I have for the query so far:

        SELECT * FROM recommendations
        WHERE recommendations.sender IN (
                SELECT sender FROM relationships
                WHERE recipient = 'u4' AND status = 'accepted'
                UNION
                SELECT recipient FROM relationships
                WHERE sender = 'u4' AND status = 'accepted')
           OR recommendations.recipient IN (
                SELECT sender FROM relationships
                WHERE recipient = 'u4' AND status = 'accepted'
                UNION
                SELECT recipient FROM relationships
                WHERE sender = 'u4' AND status = 'accepted')
        UNION
        SELECT * FROM recommendations
        WHERE recommendations.sender = 'u4'
           OR recommendations.recipient = 'u4'
        GROUP BY recommendations.id
        ORDER BY datecreated DESC

    This seems to work, as far as I can see (I'm no SQL expert): it returns all of the records from the Recommendations table that are relevant to a given user. However, I'm now having trouble also getting data from the Users table. The Recommendations table has the sender's and recipient's IDs (foreign keys), but I'd also like to get the first and last name of each. I think I need some sort of JOIN, but I'm lost on how to proceed, and was looking for help on that. (Also, if anyone sees any areas for improvement in my current query, I'm all ears.) Thanks!
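
    For the names, joining the Users table twice, once per role, is the usual pattern (a sketch; the first_name/last_name column names are assumptions based on the table outline above):

        SELECT r.*,
               s.first_name  AS sender_first_name,
               s.last_name   AS sender_last_name,
               rc.first_name AS recipient_first_name,
               rc.last_name  AS recipient_last_name
        FROM recommendations r
        JOIN users s  ON s.id  = r.sender
        JOIN users rc ON rc.id = r.recipient
        WHERE r.sender = 'u4' OR r.recipient = 'u4'
        ORDER BY r.datecreated DESC;

    The same two JOINs can be wrapped around the full friend-stream query by treating it as a derived table and selecting the name columns from the two users aliases.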

  • PHP MSSQL: How to display output when a query returns no rows

    - by vamps
    I have a problem with my PHP-MSSQL query. I have a join table that needs to give a result something like this:

        Department | Group A             | Group B             | Total A+B
                   | WORKHOUR A OTHOUR A | WORKHOUR B OTHOUR B | WORKHOUR OTHOUR
        HR         | 10         15       | 25         0        | 35       15
        IT         | 5          5        |                     | 5        5
        Admin      |                     | 12         12       | 12       12

    The query counts how many employees worked per given date (admin enters the data, and once submitted, the query gives the above result). The problem is that the final output is a mess when there is no row to display for a group: the columns are shifted, i.e. when there is only Group A in IT and only Group B in Admin, IT's and Admin's values slide into the wrong columns instead of leaving the missing group blank.

    My question is: how do I prevent this from happening? I've tried everything with while... if else... but the result is still the same. How do I display the output "0" if there are no rows to return? echo "0";

    This is my query:

        select DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP,
               sum(DD.WORK_HOUR) AS WORK_HOUR,
               sum(DD.OT_HOUR)   AS OT_HOUR
        from DEPARTMENT_DETAIL DD
        left join DEPARTMENT DPT on (DD.DEPT_ID = DPT.DEPT_ID)
        left join TBL_USERS TU   on (TU.EMP_ID  = DD.EMP_ID)
        where DD_DATE >= '2012-01-01' and DD_DATE <= '2012-01-31'
          and TU.EMP_GROUP != 2
        group by DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP
        order by DPT.DEPARTMENT_NAME

    And this is one of the approaches I've used, but it doesn't return the result that I want:

        while ($row = mssql_fetch_array($displayResult)) {
            if ((!$row["WORK_HOUR"]) && (!$row["OT_HOUR"])) {
                echo "<td>empty&nbsp;</td>";
                echo "<td>empty&nbsp;</td>";
            } else {
                echo "<td>" . $row["WORK_HOUR"] . "&nbsp;</td>";
                echo "<td>" . $row["OT_HOUR"] . "&nbsp;</td>";
            }
        }

    Please help. I've been doing this for 2 days. @__@
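
    Pivoting the groups into fixed columns on the SQL side guarantees that every department row always carries both groups, so nothing can shift and empty groups come back as 0 (a sketch in the same schema; the EMP_GROUP codes 0 and 1 are assumptions, substitute whatever values identify Group A and Group B):

        SELECT DPT.DEPARTMENT_NAME,
               SUM(CASE WHEN TU.EMP_GROUP = 0 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_A,
               SUM(CASE WHEN TU.EMP_GROUP = 0 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_A,
               SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_B,
               SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_B,
               SUM(DD.WORK_HOUR) AS WORKHOUR_TOTAL,
               SUM(DD.OT_HOUR)   AS OTHOUR_TOTAL
        FROM DEPARTMENT_DETAIL DD
        LEFT JOIN DEPARTMENT DPT ON DD.DEPT_ID = DPT.DEPT_ID
        LEFT JOIN TBL_USERS TU   ON TU.EMP_ID  = DD.EMP_ID
        WHERE DD_DATE >= '2012-01-01' AND DD_DATE <= '2012-01-31'
          AND TU.EMP_GROUP != 2
        GROUP BY DPT.DEPARTMENT_NAME
        ORDER BY DPT.DEPARTMENT_NAME;

    With one row per department and a fixed column list, the PHP loop can print the cells directly without any empty-cell special-casing.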

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table for helping us to build aggregations; we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database and even regular data and aggregation processing can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

        CREATE TRIGGER [dbo].[SaveQueryLog] on [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later - so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered just restarting SSAS to get it logging again. However, this was a production server, it was in the middle of business hours, and there were active users connecting to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.

    If I need to get this running again in the future, I can just make a small change in the Application Name property again, save it, and even change it back again if I want to. The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler.

    To sum up:

    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.
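
    For the permissions step, something between db_datawriter and db_owner may be enough; an explicit grant like the following is a sketch (the database name OlapLogging and login DOMAIN\SSASService are assumptions, and this is not verified as the minimal set):

        USE OlapLogging;
        CREATE USER [DOMAIN\SSASService] FOR LOGIN [DOMAIN\SSASService];
        GRANT SELECT, INSERT ON dbo.OlapQueryLog TO [DOMAIN\SSASService];

    If the QueryLog table does not exist yet and SSAS is configured to create it, the account would also need CREATE TABLE rights in that database.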

  • Metro: Query Selectors

    - by Stephen.Walther
    The goal of this blog entry is to explain how to perform queries using selectors when using the WinJS library. In particular, you learn how to use the WinJS.Utilities.query() method and the QueryCollection class to retrieve and modify the elements of an HTML document.

    Introduction to Selectors

    When you are building a Web application, you need some way of easily retrieving elements from an HTML document. For example, you might want to retrieve all of the input elements which have a certain class. Or, you might want to retrieve the one and only element with an id of favoriteColor.

    The standard way of retrieving elements from an HTML document is by using a selector. Anyone who has ever created a Cascading Style Sheet has already used selectors. You use selectors in Cascading Style Sheets to apply formatting rules to elements in a document. For example, the following Cascading Style Sheet rule changes the background color of every INPUT element with a class of red:

        input.red { background-color: red }

    The "input.red" part is the selector which matches all INPUT elements with a class of red. The W3C standard for selectors (technically, their recommendation) is entitled "Selectors Level 3" and is located here: http://www.w3.org/TR/css3-selectors/

    Selectors are not only useful for adding formatting to the elements of a document. Selectors are also useful when you need to apply behavior to the elements of a document. For example, you might want to select a particular BUTTON element with a selector and add a click handler to it so that something happens whenever you click the button.

    Selectors are not specific to Cascading Style Sheets. You can use selectors in your JavaScript code to retrieve elements from an HTML document. jQuery is famous for its support for selectors: using jQuery, you can use a selector to retrieve matching elements from a document and modify them. The WinJS library enables you to perform the same types of queries as jQuery using the W3C selector syntax.

    Performing Queries with the WinJS.Utilities.query() Method

    When using the WinJS library, you perform a query using a selector with the WinJS.Utilities.query() method. The following HTML document contains a BUTTON and a DIV element:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <title>Application1</title>

            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
            <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
            <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

            <!-- Application1 references -->
            <link href="/css/default.css" rel="stylesheet">
            <script src="/js/default.js"></script>
        </head>
        <body>
            <button>Click Me!</button>
            <div style="display:none">
                <h1>Secret Message</h1>
            </div>
        </body>
        </html>

    The document contains a reference to the following JavaScript file named \js\default.js:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    WinJS.Utilities.query("button").listen("click", function () {
                        WinJS.Utilities.query("div").clearStyle("display");
                    });
                }
            };

            app.start();
        })();

    The default.js script uses the WinJS.Utilities.query() method to retrieve all of the BUTTON elements in the page. The listen() method is used to wire an event handler to the BUTTON click event. When you click the BUTTON, the secret message contained in the hidden DIV element is displayed. The clearStyle() method is used to remove the display:none style attribute from the DIV element.

    Under the covers, the WinJS.Utilities.query() method uses the standard querySelectorAll() method. This means that you can use any selector which is compatible with querySelectorAll() when using WinJS.Utilities.query(). The querySelectorAll() method is defined in the W3C Selectors API Level 1 standard located here: http://www.w3.org/TR/selectors-api/

    Unlike the querySelectorAll() method, the WinJS.Utilities.query() method returns a QueryCollection. We talk about the methods of the QueryCollection class below.

    Retrieving a Single Element with the WinJS.Utilities.id() Method

    If you want to retrieve a single element from a document, instead of matching a set of elements, then you can use the WinJS.Utilities.id() method. For example, the following line of code changes the background color of an element to the color red:

        WinJS.Utilities.id("message").setStyle("background-color", "red");

    The statement above matches the one and only element with an Id of message. For example, the statement matches the following DIV element:

        <div id="message">Hello!</div>

    Notice that you do not use a hash when matching a single element with the WinJS.Utilities.id() method. You would need to use a hash when using the WinJS.Utilities.query() method to do the same thing, like this:

        WinJS.Utilities.query("#message").setStyle("background-color", "red");

    Under the covers, the WinJS.Utilities.id() method calls the standard document.getElementById() method. The WinJS.Utilities.id() method returns the result as a QueryCollection. If no element matches the identifier passed to WinJS.Utilities.id(), then you do not get an error. Instead, you get a QueryCollection with no elements (length=0).

    Using the WinJS.Utilities.children() Method

    The WinJS.Utilities.children() method enables you to retrieve a QueryCollection which contains all of the children of a DOM element. For example, imagine that you have a DIV element which contains children DIV elements like this:

        <div id="discussContainer">
            <div>Message 1</div>
            <div>Message 2</div>
            <div>Message 3</div>
        </div>

    You can use the following code to add borders around all of the child DIV elements and not the container DIV element:

        var discussContainer = WinJS.Utilities.id("discussContainer").get(0);
        WinJS.Utilities.children(discussContainer).setStyle("border", "2px dashed red");

    It is important to understand that the WinJS.Utilities.children() method only works with a DOM element and not a QueryCollection. Notice that the get() method is used to retrieve the DOM element which represents the discussContainer.

    Working with the QueryCollection Class

    Both the WinJS.Utilities.query() method and the WinJS.Utilities.id() method return an instance of the QueryCollection class. The QueryCollection class derives from the base JavaScript Array class and adds several useful methods for working with HTML elements:

    - addClass(name) – Adds a class to every element in the QueryCollection.
    - clearStyle(name) – Removes a style from every element in the QueryCollection.
    - controls(ctor, options) – Enables you to create controls.
    - get(index) – Retrieves the element from the QueryCollection at the specified index.
    - getAttribute(name) – Retrieves the value of an attribute for the first element in the QueryCollection.
    - hasClass(name) – Returns true if the first element in the QueryCollection has a certain class.
    - include(items) – Includes a collection of items in the QueryCollection.
    - listen(eventType, listener, capture) – Adds an event listener to every element in the QueryCollection.
    - query(query) – Performs an additional query on the QueryCollection and returns a new QueryCollection.
    - removeClass(name) – Removes a class from every element in the QueryCollection.
    - removeEventListener(eventType, listener, capture) – Removes an event listener from every element in the QueryCollection.
    - setAttribute(name, value) – Adds an attribute to every element in the QueryCollection.
    - setStyle(name, value) – Adds a style attribute to every element in the QueryCollection.
    - template(templateElement, data, renderDonePromiseContract) – Renders a template using the supplied data.
    - toggleClass(name) – Toggles the specified class for every element in the QueryCollection.

    Because the QueryCollection class derives from the base Array class, it also contains all of the standard Array methods like forEach() and slice().

    Summary

    In this blog post, I've described how you can perform queries using selectors within a Windows Metro Style application written with JavaScript. You learned how to return an instance of the QueryCollection class by using the WinJS.Utilities.query(), WinJS.Utilities.id(), and WinJS.Utilities.children() methods. You also learned about the methods of the QueryCollection class.

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL: the precision of SMALLDATETIME is one concept that confuses a lot of people, proven by the many messages I receive every day relating to this subject.

    Let me start with the question: What is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. If you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER - Precision of SMALLDATETIME.

    A Social Media Question

    Since the increase of social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter or Google+. One of the very interesting questions yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and asking all of the questions he has asked me. Let us see if we can help him with his question:

        CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test = GETDATE()
        INSERT INTO #temp VALUES ('Value1', @test)
        INSERT INTO #temp VALUES ('Value2', @test)
        GO
        SELECT * FROM #temp ORDER BY registered DESC
        GO
        DROP TABLE #temp
        GO

    When the above script is run, the expectation was that the row inserted last would be returned as the first row in the result set, as the ORDER BY is descending. It is not.

    Side note: Because the requirement is to get the latest data, we can't use any column other than the smalldatetime column in the ORDER BY. If we use the name column in the ORDER BY, we will get an incorrect result, as it can be any name.

    My Initial Reaction

    My initial reaction was as follows:

    1) Datatype DATETIME2: If finer precision is expected from the column which stores date and time, it should not be smalldatetime. The precision of a smalldatetime column is one minute (read here); for finer precision use the DATETIME or DATETIME2 datatype. Here is the code which includes the above suggestion:

        CREATE TABLE #temp (name VARCHAR(100), registered datetime2)
        GO
        DECLARE @test datetime2
        SET @test = GETDATE()
        INSERT INTO #temp VALUES ('Value1', @test)
        INSERT INTO #temp VALUES ('Value2', @test)
        GO
        SELECT * FROM #temp ORDER BY registered DESC
        GO
        DROP TABLE #temp
        GO

    2) Tie Breaker Identity: There is always the possibility that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as the tie breaker as well.

        CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100), registered datetime2)
        GO
        DECLARE @test datetime2
        SET @test = GETDATE()
        INSERT INTO #temp VALUES ('Value1', @test)
        INSERT INTO #temp VALUES ('Value2', @test)
        GO
        SELECT * FROM #temp ORDER BY ID DESC
        GO
        DROP TABLE #temp
        GO

    Those two were the quick suggestions I provided. It is not necessary to use both pieces of advice: one can use only the DATETIME datatype, the identity column can have a datatype of BIGINT, or there can be another tie breaker.

    An Alternate NO Solution

    In the Facebook thread this was also discussed as one of the solutions:

        CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test = GETDATE()
        INSERT INTO #temp VALUES ('Value1', @test)
        INSERT INTO #temp VALUES ('Value2', @test)
        GO
        SELECT name, registered,
               ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        DROP TABLE #temp
        GO

    However, I believe it is not the solution and can be further misleading if used on a production server. Here is an example of why it is not a good solution:

        CREATE TABLE #temp (name VARCHAR(100) NOT NULL, registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test = GETDATE()
        INSERT INTO #temp VALUES ('Value1', @test)
        INSERT INTO #temp VALUES ('Value2', @test)
        GO
        -- Before Index
        SELECT name, registered,
               ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        -- Create Index
        ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC)
        GO
        -- After Index
        SELECT name, registered,
               ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        DROP TABLE #temp
        GO

    Now let us examine the result set. You will notice that an index created on the base table is indeed a schema change to the table, but it can also affect the result set. As you can see, an index can change the result set, so this method is not yet perfect for getting the latest inserted rows.

    No Schema Change Requirement

    After giving these two suggestions, I was waiting for the feedback of the asker. However, the requirement of the asker was that there can't be any schema change, because the application is used by many other applications. I validated again, and of course, the requirement is no schema change at all: no addition of a column, no change of datatype on any other column. There is no further help as well. This is indeed an interesting question, and I personally can't think of any solution which I could provide him given the requirement of no schema change. Can you think of any other solution to this?

    Need of Database Designer

    This question once again brings up another ancient question: "Do we need a database designer?" I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has tables with unnecessary columns and performance problems. While working as a Developer Lead in my earlier jobs, I saw developers adding columns to tables without anybody's consent and retrieving them as SELECT *. There is a lot to discuss on this subject in detail, but for now, let's discuss the question first. Do you have any suggestions for the above question?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: CodeProject, Developer Training, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
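
    To see the one-minute precision directly, a small check like the following (a sketch) shows how smalldatetime rounds to the nearest minute, which is why two inserts in the same minute compare as equal and the ORDER BY cannot tell them apart:

        SELECT CAST('2012-01-01 12:34:29.998' AS smalldatetime) AS rounded_down,
               CAST('2012-01-01 12:34:30.000' AS smalldatetime) AS rounded_up;
        -- rounded_down: 2012-01-01 12:34:00
        -- rounded_up:   2012-01-01 12:35:00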

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement worth blogging about: an effort with the MySQL Optimizer team that simplifies some common queries' query plans and dramatically shortens query time. I will describe the issue, our solution, and the end result, with some performance numbers to demonstrate our continuing enhancement of the Full-Text Search capability.

    The Issue

    As we have discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. The query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated to come up with the final result. So at the end of the query, we'll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank, with only the top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE
               FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In a real-life case, the user could have millions of rows, so under the old scheme MySQL would retrieve and sort millions of rows' rankings, even if our FTS had already found, say, only 3 matching rows. That million-ranking retrieval is done in vain: it should just ask for the 3 matched rows' rankings (all other rows' rankings are 0), and for the top ranking it can simply take the first record from our already sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles
               WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, in order to come up with a count later. In fact, there is no need for MySQL to fetch all rows: InnoDB already has all the matching records, and the only thing needed is to call an InnoDB API to retrieve the count.

    The difference can be huge. The following query output shows how big it can be:

        mysql> select count(*) from searchindex_inno
               where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how long it takes InnoDB itself to come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on innodb_ft_enable_diag_print; this will print extra query info to the error log:

        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 2 secs: row(s) 666877: error: 10
        ft_init()
        ft_init_ext()
        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on the unneeded row fetching.

    The Solution

    The solution is obvious: MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include:

    1) Avoid redundant sorting. Since InnoDB already sorted the result according to ranking, the MySQL query processing layer does not need to sort to get the top matching results.

    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; all those not in the result list have a ranking of 0 and do not need to be retrieved, and InnoDB has a count of the total matching records on hand. No need to recount.

    3) Covered index scan. InnoDB results always contain the matching records' Doc IDs and their rankings. So if only the Doc ID and ranking are needed, there is no need to go to the user table to fetch the record itself.

    4) Narrow the search result early, reducing user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs, and then fetch only the corresponding records with those Doc IDs.

    Performance Results and Comparison with MyISAM

    The result of this change is very obvious. I include six test results performed by Alexander Rubin, just to demonstrate how fast InnoDB queries have become compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows and an approximately 16 GB table. The tests were performed on a machine with 1 dual-core CPU, an SSD drive, 8 GB of RAM, and innodb_buffer_pool_size set to 8 GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel
               FROM si
               WHERE match(si_title, si_text) against('family')
               ORDER BY rel desc LIMIT 10;

                            InnoDB    MyISAM           Times Faster
        Time for the query  1.63 sec  3 min 26.31 sec  127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

                            InnoDB    MyISAM            Times Faster
        Time for the query  1.35 sec  28 min 59.59 sec  1289

    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour - about 1289 times faster!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel
               FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB    MyISAM     Times Faster
        family                                      0.5 sec   5.05 sec   10.1
        family film                                 0.95 sec  25.39 sec  26.7
        Pizza restaurant orange county California  0.93 sec  32.03 sec  34.4
        President united states of America         2.5 sec   36.98 sec  14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel
               FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB    MyISAM     Times Faster
        family                                      0.61 sec  41.65 sec  68.3
        family film                                 1.15 sec  47.17 sec  41.0
        Pizza restaurant orange county california   1.03 sec  48.2 sec   46.8
        President united states of america          2.49 sec  44.61 sec  17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel
               FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB    MyISAM     Times Faster
        family                                      0.5 sec   5.05 sec   10.1
        family film                                 0.95 sec  25.39 sec  26.7
        Pizza restaurant orange county california   0.93 sec  32.03 sec  34.4
        President united states of america          2.5 sec   36.98 sec  14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                        InnoDB    MyISAM   Times Faster
        family                                      0.47 sec  82 sec   174.5
        family film                                 0.83 sec  131 sec  157.8
        Pizza restaurant orange county california   0.74 sec  106 sec  143.2
        President united states of america          1.96 sec  220 sec  112.2

    Again, tables 3 through 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling large data searches.

    Summary

    These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result info has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and get more dramatic results for InnoDB full-text searches.

    Jimmy Yang, September 29, 2012
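
    To reproduce the diagnostic printout shown above, the switch is a global server variable (a sketch; innodb_ft_enable_diag_print is available in MySQL 5.6 and later, and the output goes to the error log):

        SET GLOBAL innodb_ft_enable_diag_print = ON;
        -- run the full-text query, then inspect the error log
        SET GLOBAL innodb_ft_enable_diag_print = OFF;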

  • How to insert countdown into a video file

    - by student
    Is there an easy way to insert a big countdown clock after a given time position in a video file on Linux? The countdown clock should count down in seconds from n to 0.

    Edit: I don't want to insert pictures showing the nth second manually; it is clear to me how to do that. What I want is an automatic way to do this:

    - Set the number of seconds for the countdown (e.g. 64 or 80 seconds).
    - Set the time where the countdown should be inserted in a given video.
    - Get the video with the countdown inserted at the specified position as a result.

    It would be nice if this were possible in a graphical video editor like OpenShot, but a command line solution would also be OK.

  • Byte Size Tips: How to Insert a YouTube Video Into a PowerPoint Presentation in Office 2013

    - by Taylor Gibb
    How many times have you needed to show a video during a presentation? Using YouTube and PowerPoint, it is now possible.

    Go ahead and open PowerPoint and switch over to the Insert tab. Then click on Video, and then Online Video. If this is your first time inserting a video from YouTube, you will need to add it as a provider from the bottom left hand side of the dialog. Once added, you will be able to enter a search term. You can then simply select a video and hit the Insert button. That's all there is to it. Remember, videos come with their own set of editing options, so be sure to take a look around.

  • SQL Server 2008 insert statement question

    - by user61752
    I am learning SQL Server 2008 T-SQL. To insert into a varchar column, I just need to insert a string ('abc'), but for the nvarchar type I need to add an N in front (N'abc'). I have a table employee with 2 fields, firstname and lastname, both nvarchar(20).

        insert into employee values('abc', 'def');

    I tested this and it works, so it seems the N is not required. Why do we need to add N in front for the nvarchar type, and what are the pros and cons of not using it?
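
    The N prefix matters once the literal contains characters outside the database's default code page: without it, the literal is a varchar first, and non-ASCII characters can be lost before the value ever reaches the nvarchar column (a sketch):

        insert into employee values('日本語', N'日本語');
        select firstname, lastname from employee;
        -- on a typical Latin1 collation: firstname comes back as '???',
        -- while lastname keeps the original characters

    Plain ASCII like 'abc' survives the implicit varchar-to-nvarchar conversion, which is why the test insert appears to work.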

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community,

    I need some help with TimesTen DB query optimization. I took some measurements with a Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange is that this query becomes expensive only for some specific input data.

    Here's an example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G
                 where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        < 1169655247309537280 >
        < 1169655249792565248 >
        < 1464837997699399681 >
        3 rows found.

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G
                 where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        < 1169655247309537280 >
        1 row found.

    This is what I have in the profiler:

        12:14:31.147 1  SQL 2L 6C 10825P Preparing: select ... CG.M_ID_OID = 1464837998949302272
        12:14:31.147 2  SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
        12:14:31.147 3  SQL 4L 6C 10825P Opening:   select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.147 4  SQL 4L 6C 10825P Fetching:  select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 5  SQL 4L 6C 10825P Fetching:  select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 6  SQL 4L 6C 10825P Fetching:  select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 7  SQL 4L 6C 10825P Fetching:  select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 8  SQL 4L 6C 10825P Closing:   select ... CG.M_ID_OID = 1464837998949302272;
        12:14:35.243 9  SQL 2L 6C 10825P Preparing: select ... CG.M_ID_OID = 1466585677823868928
        12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
        12:14:35.243 11 SQL 4L 6C 10825P Opening:   select ... CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 12 SQL 4L 6C 10825P Fetching:  select ... CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 13 SQL 4L 6C 10825P Fetching:  select ... CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 14 SQL 4L 6C 10825P Closing:   select ... CG.M_ID_OID = 1466585677823868928;

    (Each entry shows the full query text "select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and ..."; it is abbreviated here for readability.)

    It's clear that the first query took almost 100 ms, while the second was executed instantly. It's not about query precompilation: the first one is precompiled too, as the same queries happened earlier. We have DB indexes for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID.

    My questions are:

    - Why does querying the same set of tables yield such different performance for different parameters?
    - Which indexes are involved here?
    - Is there any way to improve this simple query and/or the DB to make it faster?

    UPDATE: to give a feeling of the sizes involved:

        Command> select count(*) from T_PROFILEGROUP;
        < 183840 >
        1 row found.
        Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
        < 2279104 >
        1 row found.
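
    One thing worth trying (a sketch; the index name is an assumption, and this presumes the existing indexes are single-column): a composite index covering the join in one step lets each lookup touch only the matching link rows, so the cost stays stable across different M_ID_OID values:

        CREATE INDEX IX_PCPG_OID_EID
            ON T_PROFILECONTEXT_PROFILEGROUPS (M_ID_OID, M_ID_EID);

    With this index, the optimizer can find all rows for a given M_ID_OID and read the matching M_ID_EID values directly from the index, instead of intersecting two single-column indexes or scanning a wider range.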
