Search Results


  • Does the order of conditions in a WHERE clause affect MySQL performance?

    - by Greg
    Say that I have a long, expensive query, packed with conditions, searching a large number of rows. I also have one particular condition, like a company id, that will limit the number of rows that need to be searched considerably, narrowing it down to dozens from hundreds of thousands. Does it make any difference to MySQL performance whether I do this: SELECT * FROM clients WHERE (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo) AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar) AND company = :ugh or this: SELECT * FROM clients WHERE company = :ugh AND (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo) AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar)
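
    One way to check is to compare the plans MySQL produces for both orderings; a minimal sketch, assuming the clients table above, with literal stand-ins for the :foo/:ugh parameters:

        -- plan for the ordering with company last
        EXPLAIN SELECT * FROM clients
        WHERE (firstname LIKE 'foo%' OR lastname LIKE 'foo%' OR phone LIKE 'foo%')
          AND company = 42;

        -- plan for the ordering with company first
        EXPLAIN SELECT * FROM clients
        WHERE company = 42
          AND (firstname LIKE 'foo%' OR lastname LIKE 'foo%' OR phone LIKE 'foo%');

    If the two EXPLAIN outputs match, the optimizer is reordering the predicates itself and the written order makes no difference.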

    Read the article

  • How can I do a 'where' clause in Linux shell?

    - by Hoa
    I have a CSV file and I would like to filter all the lines where the 19th column has two or more characters. I know the individual pieces but can't figure out how to glue them together. First I have to cat the file. The following prints the 19th column: awk -F "," '{print $19}' file.txt awk also has length and ifs. And I know it all has to be glued together using pipes. I'm just getting stuck at the exact syntax since I have not done much bash programming before.

    Read the article

  • How can I do a left outer join where both tables have a where clause?

    - by cdeszaq
    Here's the scenario: I have 2 tables: CREATE TABLE dbo.API_User ( id int NOT NULL, name nvarchar(255) NOT NULL, authorization_key varchar(255) NOT NULL, is_active bit NOT NULL ) ON [PRIMARY] CREATE TABLE dbo.Single_Sign_On_User ( id int NOT NULL IDENTITY (1, 1), API_User_id int NOT NULL, external_id varchar(255) NOT NULL, user_id int NULL ) ON [PRIMARY] What I am trying to return is the following:
    1. is_active for a given authorization_key
    2. The Single_Sign_On_User.id that matches the external_id/API_User_id pair if it exists, or NULL if there is no such pair
    When I try this query: SELECT Single_Sign_On_User.id, API_User.is_active FROM API_User LEFT OUTER JOIN Single_Sign_On_User ON Single_Sign_On_User.API_User_id = API_User.id WHERE Single_Sign_On_User.external_id = 'test_ext_id' AND API_User.authorization_key = 'test' where the "test" API_User record exists but the "test_ext_id" record does not, and with no other values in either table, I get no records returned. When I use: SELECT Single_Sign_On_User.id, API_User.is_active FROM API_User LEFT OUTER JOIN Single_Sign_On_User ON Single_Sign_On_User.API_User_id = API_User.id WHERE API_User.authorization_key = 'test' I get the results I expect (NULL, 1), but that query doesn't allow me to find the "test_ext_id" record if it exists but would give me all records associated with the "test" API_User record. How can I get the results I am after?
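
    One common approach is to move the filter on the outer table's column out of the WHERE clause and into the join condition, so it no longer converts the outer join into an inner one; a hedged sketch against the tables above:

        SELECT Single_Sign_On_User.id, API_User.is_active
        FROM API_User
        LEFT OUTER JOIN Single_Sign_On_User
            ON Single_Sign_On_User.API_User_id = API_User.id
           AND Single_Sign_On_User.external_id = 'test_ext_id'  -- filter applied during the join
        WHERE API_User.authorization_key = 'test';

    With the predicate in the ON clause, a missing "test_ext_id" row yields (NULL, 1) instead of eliminating the API_User row.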

    Read the article

  • Given a Date "03/13/2010", using that in a MYSQL Where Clause?

    - by nobosh
    I would like to pass a MYSQL query via Coldfusion the following date: 03/13/2010 So the query filters against it like so: SELECT * FROM myTable WHERE dateAdded before or on 03/13/2010 I'd also like to be able to take 2 dates as ranges, from: 01/11/2000, to: 03/13/2010 SELECT * FROM myTable WHERE dateAdded is ON or Between 01/11/2000 through 03/13/2010 thanks
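
    Since MySQL compares dates reliably in 'YYYY-MM-DD' form, one hedged approach is to convert the 'mm/dd/yyyy' strings with STR_TO_DATE (or convert them in ColdFusion before binding); a sketch against the myTable described above:

        -- on or before a single date
        SELECT * FROM myTable
        WHERE dateAdded <= STR_TO_DATE('03/13/2010', '%m/%d/%Y');

        -- on or between two dates
        SELECT * FROM myTable
        WHERE dateAdded BETWEEN STR_TO_DATE('01/11/2000', '%m/%d/%Y')
                            AND STR_TO_DATE('03/13/2010', '%m/%d/%Y');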

    Read the article

  • removing a case clause: bash expansion in sed regexp: X='a\.b' ; Y=';;' sed -n '/${X}/,/${Y}/d'

    - by ChrisSM
    I'm trying to remove a case clause from a bash script. The clause will vary, but will always have backslashes as part of the case-match string. I was trying sed but could use awk or a perl one-liner within the bash script. The target of the edit is straightforward and resembles: $cat t.sh case N in a\.b); #[..etc., varies] ;; esac I am running afoul of the variable expansion escaping backslashes, semicolons or both. If I 'eval' I strip my backslash escapes. If I don't, the semi-colons catch me up. So I tried subshell expansion within the sed. This fouls the interpreter as I've written it. More escaping of the semi-colons doesn't seem to help. X='a\.b' ; Y=';;' sed -i '/$(echo ${X} | sed -n 's/\\/\\\\/g')/,/$(echo ${Y} | sed -n s/\;/\\;/g')/d t.sh And this: perl -i.bak -ne 'print unless /${X}/ .. /{$Y}/' t.sh # which empties t.sh and eval perl -i.bak -ne \'print unless /${X}/ .. /{$Y}/' t.sh # which does nothing

    Read the article

  • Having trouble doing an Update with a Linq to Sql object

    - by Pure.Krome
    Hi folks, I've got a simple LINQ to SQL object. I grab it from the database and change a field then save. No rows have been updated. :( When I check the full SQL code that is sent over the wire, I notice that it does an update to the row, not via the primary key but on all the fields via the where clause. Is this normal? I would have thought that it would be easy to update the field(s) with the where clause linking on the Primary Key, instead of where'ing (is that a word :P) on each field. Here's the code... using (MyDatabase db = new MyDatabase()) { var boardPost = (from bp in db.BoardPosts where bp.BoardPostId == boardPostId select bp).SingleOrDefault(); if (boardPost != null && boardPost.BoardPostId > 0) { boardPost.ListId = listId; // This changes the value from 0 to 'x' db.SubmitChanges(); } } And here's some sample SQL... exec sp_executesql N'UPDATE [dbo].[BoardPost] SET [ListId] = @p6 WHERE ([BoardPostId] = @p0) AND .... <snip the other fields>',N'@p0 int,@p1 int,@p2 nvarchar(9),@p3 nvarchar(10),@p4 int,@p5 datetime,@p6 int',@p0=1276,@p1=212787,@p2=N'ttreterte',@p3=N'ttreterte3',@p4=1,@p5='2009-09-25 12:32:12.7200000',@p6=72 Now, I know there's a datetime field in this update .. and when I checked the DB its value was/is '2009-09-25 12:32:12.720' (fewer zeros than above) .. so I'm not sure if that is messing up the where clause condition... but still! Should it do a where clause on the PKs .. if anything .. for speed! Yes / no? UPDATE After reading nitzmahone's reply, I then tried playing around with the optimistic concurrency on some values, and it still didn't work :( So then I started some new stuff ... with the optimistic concurrency happening, it includes a where clause on the field it's trying to update. When that happens, it doesn't work. So.. in the above SQL, the where clause looks like this ... WHERE ([BoardPostId] = @p0) AND ([ListId] IS NULL) AND ... <rest snipped>) This doesn't sound right! The value in the DB is null before I do the update, but when I add the ListId value to the where clause (or more to the point, when L2S adds it because of the optimistic concurrency), it fails to find/match the row. Wtf?
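
    For reference, a hedged sketch of the statement LINQ to SQL would be expected to emit if the non-key columns were excluded from the concurrency check (for example by marking them UpdateCheck=Never in the mapping; the parameter names here are illustrative, not what L2S actually generates):

        UPDATE [dbo].[BoardPost]
        SET [ListId] = @p1
        WHERE [BoardPostId] = @p0  -- PK only; no optimistic-concurrency predicates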

    Read the article

  • Numbered list with subclauses

    - by Barry Clearwater
    I'm trying to create a legal document with decimal numbered subclauses, then alpha and roman subsub and subsubsub clauses. (whew!) 1. MAIN HEADING 1.1 This is an example of a sub-clause and you can see that even though the words continue on to the right, it would be best if it wrapped around and formed a block to the right of the decimal number 1.2 In doing so the normal second clause should also wrap around but the second subsequent clause should hang in from the left and be in a block. See below for the remaining clauses (a) this list is completely for demonstration and should not be construed as legal language in any way, nor should make sense in that (b) should the indentation take more than: i) this many lines it would be overly big ii) legal numbering continues in the sub-sub clauses with the use of lower roman lettering and should flow below in a block iii) and continue the formatting on to the next line but be underneath the body of the text and not begin directly below the number itself. In this example the text carries out to the right but I need it to wrap around underneath. Sorry it's so wordy, need this to show the example. 2. Second Clause Heading 2.1 and so it continues thus I've found the examples for decimal numbering but they do not create a block out to the right of the number, and they carry on with multiple decimals rather than alpha and roman sub clauses.

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone) Supriya Ananth, who is one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows has been a capability that was widely desired, but not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL to do the following:
    1. Logically partition and order the data using the PARTITION BY and ORDER BY clauses.
    2. Use regular expression syntax to define patterns of rows to seek using the PATTERN clause. These patterns are a powerful and expressive feature, applied to the pattern variables you define.
    3. Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    4. Define measures, which are expressions usable in the MEASURES clause of the SQL query.
    For more information and to register for this exciting webcast please visit the OLL Live website, see here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note - if the above link does not work then go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link or logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
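
    For a flavour of the syntax ahead of the webcast, here is a sketch along the lines of the classic stock-ticker example from the Oracle documentation, assuming a ticker(symbol, tstamp, price) table (an assumed schema); it reports V-shaped price dips:

        SELECT *
        FROM ticker MATCH_RECOGNIZE (
            PARTITION BY symbol            -- logically partition the data
            ORDER BY tstamp                -- and order it within each partition
            MEASURES STRT.tstamp AS start_tstamp,
                     LAST(DOWN.tstamp) AS bottom_tstamp,
                     LAST(UP.tstamp)   AS end_tstamp
            ONE ROW PER MATCH
            AFTER MATCH SKIP TO LAST UP
            PATTERN (STRT DOWN+ UP+)       -- regular-expression-style row pattern
            DEFINE
                DOWN AS DOWN.price < PREV(DOWN.price),
                UP   AS UP.price   > PREV(UP.price)
        ) MR
        ORDER BY MR.symbol, MR.start_tstamp;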

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors. Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada. We’ve collected much of that discussion in our Global Growth site. You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings. We also addressed these plans at the Board Q&A during the 2012 Summit. One of the biggest changes coming out of this process is around how we elect Board members. And that requires a change to the bylaws. We published the proposed bylaw changes as a red-lined document so you can clearly see the changes. Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document and address other minor issues in the document. There are numerous small wording changes throughout the document. For example, we replaced every reference to “The Corporation” with the word “PASS” so it now reads “PASS is organized…”.
    Board Composition
    The biggest change in these bylaw changes is how the Board is composed and elected. This discussion starts in section VI.2. This section now says that some elected directors will come from geographic regions. I think this is the best way to make sure we give all of our members a voice in the leadership of the organization. The key parts of this section are: The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors. Currently PASS has six At-Large Directors elected by the members. These changes allow for a Regional Director position that is elected by the members but must come from a particular region. It also stipulates that there must always be at least one Director-at-Large who can come from any region. We also understand that PASS is currently a very US-centric organization. Our Summit is held in America, roughly half our chapters are in the US and Canada and most of the Board members over the last ten years have come from America. We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats. These rules allow us to create Regional Director seats but don’t require it.
    We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be. In our discussions we’ve broadly discussed regions for:
    United States and Canada
    Europe, Middle East, and Africa (EMEA)
    Australia, New Zealand and Asia (also known as Asia Pacific or APAC)
    Mexico, South America, and Central America (LATAM)
    As you can see, our thinking is that there will be a few large regions. I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas. The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents. While the bylaws only require a single non-region specific At-large Director, I would expect we would always have two. That way we can have one in each election. I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest. The Board is required to have any regions defined prior to the start of the election process.
    Board Elections – Regional Seats
    We spent a lot of time discussing how the elections would work for these Regional Director seats. Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat. Section VIII.3 reads: Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board. We debated whether to have multiple ballots or one single ballot. Multiple ballot elections get complicated quickly. Let’s say we have a ballot for US/Canada and one for Region 2. After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat or have another election for the at-large position. We think the best way to do this is a single ballot and putting the highest vote getters into the most restrictive seats. Let’s look at an example. There are seats open for Region 1, Region 2 and at-large. The election results are as follows:
    Candidate A (eligible for Region 1) – 550 votes
    Candidate B (eligible for Region 1) – 525 votes
    Candidate C (eligible for Region 1) – 475 votes
    Candidate D (eligible for Region 2) – 125 votes
    Candidate E (eligible for Region 2) – 75 votes
    In this case, Candidate A is the winner for Region 1 and is assigned that seat. Candidate D is the winner for Region 2 and is assigned that seat. The at-large seat is filled by the highest remaining vote getter, which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded. This will be true whether we had multiple ballots or a single ballot.
    Board Elections – Vacant Seats
    The other change to the election process is for vacant Board seats.
    The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future. The most common case is when a Board member moves to an Officer role in the middle of their term. One of the key changes is to allow the number of votes members have to match the number of open seats. This allows each voter to express their preference on all open seats. This only applies when we know about the opening prior to the call for nominations. This all means that if a seat will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections. Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible. We discussed the option of changing the bylaws to just take the next highest vote-getter in all other cases. I think that’s wrong for the following reasons:
    1. All voters aren’t able to express an opinion on all candidates. If there are five people running for three seats, you can only vote for three. You have no way to express your preference between #4 and #5.
    2. Different candidates may have different information about the number of seats available. A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote getters will end up on the Board while the rest of the members believe there are only three openings. This may affect someone’s decision to run. I don’t think this creates a transparent, fair election.
    3. Board members may use their knowledge of the election results to decide whether to remain on the Board or not. Admittedly this one is unlikely but I don’t want to create a situation where this accusation can be leveled.
    I think the majority of vacancies in the future will be handled through elections. The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled.
    Removing Directors
    Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director. We also had a clause that allowed appointed directors to be removed. We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote. The updated text reads: Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws. The first paragraph is updated and the second and third are unchanged (except cleaning up language). If you scroll down and look at Addendum B of these bylaws you find the following: Cause for suspension or dismissal of a member of the Board of Directors may include:
    Inability to attend Board meetings on a regular basis.
    Inability or unwillingness to act in a capacity designated by the Board of Directors.
    Failure to fulfill the responsibilities of the office.
    Inability to represent the Region elected to represent.
    Failure to act in a manner consistent with PASS's Bylaws and/or policies.
    Misrepresentation of responsibility and/or authority.
    Misrepresentation of PASS.
    Unresolved conflict of interests with Board responsibilities.
    Breach of confidentiality.
    The line about the inability to represent your region (shown in bold in the original) is what we added to the bylaws in this revision. We also added a clause to section VII.3 allowing the Board to remove an officer. That clause is much less restrictive. It doesn’t require cause and only requires a simple majority: The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal.
    Other
    There are numerous other small changes throughout the document.
    Proxy voting. The laws around how members and Board members proxy votes are specific in Illinois law. PASS is an Illinois corporation and is subject to Illinois laws. We changed section IV.5 to come into compliance with those laws. Specifically this says you can only vote through a proxy if you have a written proxy through your authorized attorney.
    English language proficiency. As we increase our global footprint we come across more members who aren’t native English speakers. The business of PASS is conducted in English and it’s important that our Board members speak English. If we get big enough to afford translators, we may be able to relax this, but right now we need English language skills for effective Board members.
    Committees. The language around committees in section IX is old and dated. Our lawyers advised us to clean it up. This section specifically applies to any committees that the Board may form outside of portfolios. We removed the term limits, quorum and vacancies clause. We don’t currently have any committees that this would apply to. The Nominating Committee is covered elsewhere in the bylaws.
    Electronic Votes. The change allows the Board to vote via email but the results must be unanimous. This is to conform with Illinois state law.
    Immediate Past President. There was no mechanism to fill the IPP role if an outgoing President chose not to participate. We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote.
    Nominations Committee. We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee.
    Revocation of Charters. The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payments before any required return of items to PASS is considered.
    Bylaw notification. We’ve spent countless meetings working on these bylaws with the intent to not open them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I’m proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future even when all the current Board members have rolled off.
    I think that covers everything. I’d encourage you to look through the red-line document and see the changes. It’s helpful to look at the language that’s being removed and the language that’s being added. I’m happy to answer any questions here or you can email them to [email protected].

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from Phil's list that he has been putting together. Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev) This is a great example poking holes in the whole theory of "If it works it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example: CREATE TABLE [dbo].[Page]( [PageId] [int] IDENTITY(1,1) NOT NULL, [Title] [varchar](75) NOT NULL, [Sequence] [int] NOT NULL, [ThemeId] [int] NOT NULL, [CustomCss] [text] NOT NULL, [CustomScript] [text] NOT NULL, [PageGroupId] [int] NOT NULL ); CREATE PROCEDURE PageSelectBySequence ( @sequenceMin smallint , @sequenceMax smallint ) AS BEGIN SELECT [PageId] , [Title] , [Sequence] , [ThemeId] , [CustomCss] , [CustomScript] , [PageGroupId] FROM [CMS].[dbo].[Page] WHERE Sequence BETWEEN @sequenceMin AND @SequenceMax END Note that the Sequence column is defined as int while the sequence parameter is defined as a smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place. Using Correlated subqueries instead of a join (Dave_Levy/ Plamen Ratchev) There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic. You are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this: SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParam.Name , PageEffectParam.Value , ( SELECT EffectName FROM dbo.Effect WHERE EffectId = PageEffect.EffectId ) AS EffectName FROM Page INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId This can and should be written as: SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParam.Name , PageEffectParam.Value , EffectName FROM Page INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId INNER JOIN dbo.Effect ON dbo.Effect.EffectId = PageEffect.EffectId The correlated query may just as easily show up in the where clause. It's not a good idea in the select clause or the where clause. Few or No comments. This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good. Comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out.
    If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why are good. Comments that explain why the sql is needed are good. Comments that explain where the sql is used are good. Comments that explain how tables are related should not be needed if the sql is well written. If they are needed, you need to consider reworking the sql or simplifying your data model. Use of functions in a WHERE clause. (Anil Das) Calling a function in the where clause will often negate the indexing strategy. The function will be called for every record considered. This will often force a full table scan on the tables affected. Calling a function will not guarantee that there is a full table scan, but there is a good chance that it will. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
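
    As an illustration of that last smell, a hedged sketch against a hypothetical Orders table: the first query wraps the indexed column in a function, while the second expresses the same filter as a sargable range:

        -- smell: the function must be evaluated for every row, defeating an index on OrderDate
        SELECT OrderId, TotalDue FROM Orders
        WHERE YEAR(OrderDate) = 2010;

        -- rewrite: a range predicate lets the optimizer seek on an OrderDate index
        SELECT OrderId, TotalDue FROM Orders
        WHERE OrderDate >= '20100101' AND OrderDate < '20110101';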

    Read the article

  • What languages have a while-else type control structure, and how does it work?

    - by Dan
    A long time ago, I thought I saw a proposal to add an else clause to for or while loops in C or C++... or something like that. I don't remember how it was supposed to work -- did the else clause run if the loop exited normally but not via a break statement? Anyway, this is tough to search for, so I thought maybe I could get some CW answers here for various languages. What languages support adding an else clause to something other than an if statement? What is the meaning of that clause? One language per answer please.

    Read the article

  • Getting the following warning while compiling

    - by thetna
    warning: passing argument 1 of 'bsearch' makes pointer from integer without a cast and the corresponding code is Parent = bsearch((const size_t)ParentNum, ClauseVector, Size, sizeof(CLAUSE), pcheck_CompareNumberAndClause); The compiler is gcc. Here CLAUSE is defined as a pointer type (*CLAUSE).

    Read the article

  • how does MySQL implement the "group by"?

    - by user188916
    I read from the MySQL Reference Manual and find that when it can make use of an index, it just does an index scan; otherwise it will create tmp tables and do things like filesort. And I also read from another article that the "Group By" result will be sorted by the group by columns by default; if an "order by null" clause is added, it won't do the filesort. The difference can be seen in the "explain ..." output. So my problem is: what is the difference between a "group by" clause with "order by null" and one without? I tried to use profiling to see what MySQL does in the background, and only see results like:
    result for group clause without order by null:
    | preparing            | 0.000016 |
    | Creating tmp table   | 0.000048 |
    | executing            | 0.000009 |
    | Copying to tmp table | 0.000109 |
    | Sorting result       | 0.000023 |
    | Sending data         | 0.000027 |
    result for clause with "order by null":
    | preparing            | 0.000016 |
    | Creating tmp table   | 0.000052 |
    | executing            | 0.000009 |
    | Copying to tmp table | 0.000114 |
    | Sending data         | 0.000028 |
    So I guess that when "order by null" is added, it does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, and then uses the index to do the group by operation; when completed, it just reads the result from the table rows and does not sort the result. But my original opinion was that MySQL could use quicksort to sort the items and then do the group by, so the result would be sorted as well. Any opinion appreciated, thanks.
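
    A minimal sketch to reproduce the comparison, assuming a hypothetical orders table with a customer_id column:

        -- default: MySQL sorts the grouped result (EXPLAIN may show "Using filesort")
        SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id;

        -- ORDER BY NULL: same groups, but the final sort pass is skipped
        SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id ORDER BY NULL;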

    Read the article

  • Wordpress database query running slow - one of the columns doesn't exist!

    - by Pavel
    Hi there. I'm having some problems with the query that wordpress runs. This is the one: SELECT DISTINCT ID,post_title,post_date,post_content,MATCH(post_title,post_content) AGAINST ('S') AS score FROM wp_posts WHERE MATCH (post_title,post_content) AGAINST ('S') AND post_date <= 'S' AND post_status = 'S' AND id != N AND post_type = 'S' ORDER BY score DESC When I run this query in phpmyadmin it says that the N column doesn't exist, so the clause "AND id != N" is not making any sense. I ran the query again without this clause and the db behaved like a fully optimized one. Please can someone give me a hint on that? My questions are: What is this clause used for? What is wordpress trying to find by running this? And can I modify core wordpress files to get rid of this clause? Any response or help greatly appreciated!!

    Read the article

  • How can I keep the logic to translate a ViewModel's values to a Where clause to apply to a linq query out of My Controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a viewModel that doesn't have any persistent backing. It is just a ViewModel to generate a search input form. I want to build a large where clause from the values the user entered. If the Action accepts a SearchViewModel as a parameter, how do I do this without passing my viewModel to my service layer? Service shouldn't know about ViewModels, right? Oh, and if I serialize it, then it would be a big string and the key/values wouldn't be strongly typed. SearchViewModel (this is just a snippet): [Display(Name="Address")] public string AddressKeywords { get; set; } /// <summary> /// Gets or sets the census. /// </summary> public string Census { get; set; } /// <summary> /// Gets or sets the lot block sub. /// </summary> public string LotBlockSub { get; set; } /// <summary> /// Gets or sets the owner keywords. /// </summary> [Display(Name="Owner")] public string OwnerKeywords { get; set; } In my controller action I was thinking of something like this, but I would think all this logic doesn't belong in my Controller. ActionResult GetSearchResults(SearchViewModel model){ var query = service.GetAllParcels(); if(model.Census != null){ query = query.Where(x=>x.Census == model.Census); } if (model.OwnerKeywords != null){ query = query.Where(x=>x.Owners == model.OwnerKeywords); } return View(query.ToList()); }

    Read the article

  • Entity Framework 5 upgrade from 4

    - by user1714591
    I'm having an issue with the Where clause in a search. In my original EF4 version I could add a Where clause with 2 parameters: the where clause (a string predicate) and an ObjectParameter list, such as var query = context.entities.Where(WhereClause.ToString(), Params.ToArray()); Since my upgrade to EF5 I don't seem to have that option. Am I missing something? This was originally used to build a dynamic where clause such as "it.entity_id = @entity_id", holding the variable value in the ObjectParameter. I'm hoping I don't have to rewrite all the searches that have been built out this way, so any assistance would be greatly appreciated. Cheers

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options. CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation); And my actual query: WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)), ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326)) ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY ( SELECT TOP (1) * FROM Person.Address AS ad ORDER BY l.Geo.STDistance(ad.SpatialLocation) ) AS a JOIN Person.StateProvince AS s ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c ON c.CountryRegionCode = s.CountryRegionCode ; Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: It’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which essentially finds where the point is, and works outward through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want.
You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that’s going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error: Msg 8622, Level 16, State 1, Line 1 Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post. WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT     l.Name,     COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),     COALESCE(a1.City,a2.City,a3.City),     s.Name AS [State],     c.Name AS Country FROM MyLocations AS l OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a1 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000     AND a1.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a2 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000     AND a2.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a3 JOIN Person.StateProvince AS s     ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID) JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query... WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index: A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses. The TOP clause cannot contain a PERCENT statement. The WHERE clause must contain a STDistance() method. 
If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause. The first expression in the ORDER BY clause must use the STDistance() method. Sort order for the first STDistance() expression in the ORDER BY clause must be ASC. All the rows for which STDistance returns NULL must be filtered out. Let’s start from the top. 1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index. 2. No ‘PERCENT’. Yeah, I don’t have that. 3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine. 4. Yeah, I don’t have multiple predicates. 5. The first expression in the ORDER BY is my distance, that’s fine. 6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky. 7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either. ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley
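
    The corrected query isn't shown above, but from the description it would presumably just add the null filter inside the APPLY, something like:

        WITH MyLocations AS
           (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                  ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                   ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL  -- satisfies requirements 3 and 7
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a;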

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I’m a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012. This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I’ll make sure it’s sufficiently well described. But for this post – I’m going to skip that and assume you get it. The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I’m pulling these ones out is that they always produce the same result within their partitions (if you’re applying them to the whole partition). Consider the following query: SELECT YEAR(OrderDate), FIRST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), LAST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)), PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95% percentile, using both the continuous and discrete methods (‘discrete’ means it picks the closest one from the values available – ‘continuous’ means it will happily use something between, similar to what you would do for a traditional median of four values). I’m sure you can imagine the results – a different value for each field, but within each year, all the rows the same. Notice that I’m not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we’re considering all the rows available. If we don’t specify that, it assumes we only mean “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW”, which means that LAST_VALUE ends up being the row we’re looking at. At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST.
    We really should be able to do something like: SELECT YEAR(OrderDate), FIRST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM Sales.SalesOrderHeader GROUP BY YEAR(OrderDate) ; But you can’t. You get that age-old error: Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Hmm. You see, FIRST_VALUE isn’t an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can’t even surround it in a MAX. Then you get a different error, telling you that you can’t use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT. SELECT DISTINCT YEAR(OrderDate), FIRST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), LAST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)), PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; I’m sorry. It’s just the way it goes. Hopefully it’ll change in the future, but for now, it’s what you’ll have to do. If we look in the execution plan, we see that it’s incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it’s definitely convenient to use these functions, and in time, I’m sure we’ll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy’s T-SQL Tuesday this month.
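
    As a sketch of the TOP-based alternative alluded to above (my assumption about one shape it could take, not the post's own code):

        SELECT y.Yr,
               (SELECT TOP (1) TotalDue
                FROM Sales.SalesOrderHeader AS s
                WHERE YEAR(s.OrderDate) = y.Yr
                ORDER BY s.OrderDate, s.SalesOrderID) AS FirstValue
        FROM (SELECT DISTINCT YEAR(OrderDate) AS Yr FROM Sales.SalesOrderHeader) AS y;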

    Read the article

  • EJB Named Criteria - Apply bind variable in Backingbean

    - by Deepak Siddappa
    EJB Named criteria are predefined and reusable where-clause definitions that are dynamically applied to a ViewObject query. They are often used to filter the ViewObject SQL query based on Where clause conditions. Take a scenario where we need to filter the SQL query based on Where clause conditions: instead of playing with SQL statements, use the EJB Named Criteria, which is supported by default in ADF, and set the Bind Variable parameter at run time. You can download the sample workspace from here [Runs with Oracle JDeveloper 11.1.2.0.0 (11g R2) + HR Schema] Implementation Steps Create a Java EE Web Application with an entity based on the Employees table, then create a session bean and a data control for the session bean. Open the DataControls.dcx file and create a sparse xml as shown below. In the sparse xml navigate to the Named criteria tab -> Bind Variable section and create the binding variable deptId. Now create a named criteria and map the query attributes to the bind variable. In the ViewController create the index.jspx page; from the data control palette drop employeesFindAll->Named Criteria->EmployeesCriteria->Table as an ADF Read-Only Filtered Table and create the backingBean as "IndexBean". Open the index.jspx page and remove the "filterModel" binding from the table, add <af:inputText />, a command button, and bind them to the backingBean. For the command button create the actionListener as "applyEmpCriteria" and add the below code to the file. public void applyEmpCriteria(ActionEvent actionEvent) { DCIteratorBinding dc = (DCIteratorBinding)evaluateEL("#{bindings.employeesFindAllIterator}"); ViewObject vo = dc.getViewObject(); vo.applyViewCriteria(vo.getViewCriteriaManager().getViewCriteria("EmployeesCriteria")); vo.ensureVariableManager().setVariableValue("deptId", this.getDeptId().getValue()); vo.executeQuery(); } /** * Programmatic evaluation of EL * * @param el EL to evaluate * @return Result of the evaluation */ public Object evaluateEL(String el) { FacesContext fctx = FacesContext.getCurrentInstance(); ELContext elContext = fctx.getELContext(); Application app = fctx.getApplication(); ExpressionFactory expFactory = app.getExpressionFactory(); ValueExpression valExp = expFactory.createValueExpression(elContext, el, Object.class); return valExp.getValue(elContext); } Run the index.jspx page, enter the departmentId value as 90 and click the ApplyEmpCriteria button. Now the bind variable for the Named criteria will be applied at runtime in the backing bean, and it will re-execute the ViewObject query to filter based on the where clause condition.

    Read the article

  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business, and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL. Exception handlers Exception handlers are specified using a .try clause within a method definition. .try <TryStartLabel> to <TryEndLabel> <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel> As an example, a basic try/catch block would be specified like so:
    TryBlockStart:
        // ...
        leave.s CatchBlockEnd
    TryBlockEnd:
    CatchBlockStart:
        // at the start of a catch block, the exception thrown is on the stack
        callvirt instance string [mscorlib]System.Object::ToString()
        call void [mscorlib]System.Console::WriteLine(string)
        leave.s CatchBlockEnd
    CatchBlockEnd:
        // method code continues...
    .try TryBlockStart to TryBlockEnd catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd
    There are four different types of handler that can be specified:
    catch <TypeToken> - This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommend that only classes derived from System.Exception are thrown as exceptions.
    filter <FilterLabel> - A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.
    finally - A finally block executes when the try block exits, regardless of whether an exception was thrown or not.
    fault - This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.
    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts. Scoped exception handlers The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep separate the different exception handling sections. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:
    .try {
        // ...
        leave.s EndSEH
    }
    catch [mscorlib]System.Exception {
        callvirt instance string [mscorlib]System.Object::ToString()
        call void [mscorlib]System.Console::WriteLine(string)
        leave.s EndSEH
    }
    EndSEH:
    // method code continues...
    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and how the C# compiler generates exception handling clauses.

    Read the article

  • SQL SERVER Convert IN to EXISTS Performance Talk

    In recent training one of the attendees asked if I could show a simple method to convert an IN clause to an EXISTS clause. Here is the simple example. USE AdventureWorks GO -- use of = SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID = ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- use of exists SELECT * FROM HumanResources.Employee E WHERE EXISTS ( SELECT [...]
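
    Since the EXISTS query is truncated above, here is a hedged generic sketch of the same conversion pattern, using the tables from the snippet:

        -- use of IN
        SELECT * FROM HumanResources.Employee E
        WHERE E.EmployeeID IN (SELECT EA.EmployeeID
                               FROM HumanResources.EmployeeAddress EA);

        -- converted to EXISTS
        SELECT * FROM HumanResources.Employee E
        WHERE EXISTS (SELECT 1
                      FROM HumanResources.EmployeeAddress EA
                      WHERE EA.EmployeeID = E.EmployeeID);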

    Read the article

  • Numbered paragraphs in Word 2007

    - by Kit
    I have the following styles defined in Word 2007. They all have outline levels 1-6. They also correctly show up in the Table of Contents (not all, I only set the TOC up to Level 3). 1 Heading 1 1.1 Heading 2 1.1.1 Heading 3 1.1.1.1 Heading 4 1.1.1.1.1 Heading 5 1.1.1.1.1.1 Heading 6 This is what I want 1 Heading 1 1.1 Body text under Heading Level 1 1.2 Body text under Heading Level 1 2 Heading 1 2.1 Heading 2 2.1.1 Body text under Heading Level 2 2.1.2 Body text under Heading Level 2 2.1.3 Body text under Heading Level 2 2.2 Heading 2 2.2.1 Body text under Heading Level 2 2.2.2 Body text under Heading Level 2 How do I make two list sequences link to each other? Here's a {fill in the blanks} illustration: {section number} Heading 1 {section number}.{clause number} Body text under Heading Level 1 {section number}.{clause number} Body text under Heading Level 1 The example above should expand to: 1 Heading 1 1.1 Body text under Heading Level 1 1.2 Body text under Heading Level 1 Another example: {section number} Heading 1 {section number}.{subsection number} Heading 2 {section number}.{subsection number}.{clause number} Body text under Heading Level 2 {section number}.{subsection number}.{clause number} Body text under Heading Level 2 should expand to: 2 Heading 1 2.1 Heading 2 2.1.1 Body text under Heading Level 2 2.1.2 Body text under Heading Level 2 2.1.3 Body text under Heading Level 2 The numbered body text paragraphs shouldn't show up in the Table of Contents. I couldn't find the right way to do that, whether in multilevel lists, fields, styles, etc. How do I do it right?

    Read the article

  • Passing the CAML thru the EY of the NEEDL

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Passing the CAML thru the EY of the NEEDL

    Definitions: CAML (Collaborative Application Markup Language) is an XML-based markup language used in Microsoft SharePoint technologies.
    Anonymous: A camel is a horse designed by committee.
    Dov Trietsch: A CAML is a HORS designed by Microsoft.
    I was advised against putting any Camel and Sphinx rhymes in here. Look it up in Google!
    _____

    Now that we have dispensed with the dromedary jokes (BTW, I have many more, but they are not fit to print!), here is an interesting problem and its solution. We have built a list where the title must be kept unique, so I needed to verify the existence (or absence) of a list item with a particular title. Two methods came to mind:

    1: Span the list until the title is found (result = found) or until the list ends (result = not found). This is an algorithm of complexity O(N), and for long lists it is a performance sucker.
    2: Use a CAML query instead. Here, for short lists we'll encounter some overhead, but because the query results in an SQL query on the content database, it is of complexity O(log N), which is significantly better and scales perfectly.

    Obviously I decided to go with the latter, and this is where the CAML s--t hit the fan.

    A CAML query returns an SPListItemCollection, and I simply checked its Count. If it was 0, the item did not already exist and it was safe to add a new item with the given title. Otherwise I cancelled the operation and warned the user. The trouble was that I always got a positive. Most of the time a false positive. The count was greater than 0 regardless of the title I checked (except when the list was empty, which happens only once). This was very disturbing indeed. To solve my immediate problem, which was speedy delivery, I reverted to the "span the list" approach, but the problem bugged me, so I wrote a little console app by which I tested and tweaked and tested, time and again, until I found the solution. Yes, one can pass the proverbial CAML thru the ey of the needle (e's missing on purpose). So here are my conclusions.

    CAML that does not work (note: QT is my quote character):

        char QT = Convert.ToChar((int)34);
        string titleQuery = "<Query><Where><Eq>";
        titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
        titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where></Query>";
        titleQuery += "<ViewFields><FieldRef Name=" + QT + "Title" + QT + "/></ViewFields>";

    Why? Even though U2U generates it, the <Query> and </Query> tags do not belong in the query that you pass. Start your query with the <Where> clause. The <ViewFields> clause does not belong either; I used this clause to limit the returned collection to a single column, and I still wish to do it. I'll show how this is done a bit later.

    When you use the <Query> </Query> tags in your query, it's as if you did not specify the query at all. What you get is the all-inclusive default query for the list. It returns every column and every item. It is expensive for both server and network because it does all the extra processing and eats plenty of bandwidth.

    Now, here is the CAML that works:

        string titleQuery = "<Where><Eq>";
        titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
        titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where>";

    You'll also notice that inside the unusable <ViewFields> clause above, we have a <FieldRef> clause. This is what we pass to the SPQuery object.
    Here is how:

        SPQuery query = new SPQuery();
        query.Query = titleQuery;
        query.ViewFields = "<FieldRef Name=" + QT + "Title" + QT + "/>";
        query.RowLimit = 1;
        SPListItemCollection col = masterList.GetItems(query);

    Two things to note: we enter the view fields into the SPQuery object, and we also limit the number of rows that the query returns. The latter is not always done, but in an existence test there is no point in returning hundreds of rows. The query will now return one item or none, which is all we need in order to verify the existence (or non-existence) of items. Limiting the number of columns and the number of rows is a great performance enhancer (the pieces are pulled together in the sketch below). That's all folks!!
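    For reference, here is the whole existence test assembled into one method: a minimal sketch, not code from the original post. The method name TitleExists is mine, and note that CAML also accepts single quotes around attribute values, which avoids the QT character trick entirely:

        // Consolidated sketch of the existence test (requires Microsoft.SharePoint).
        // TitleExists is a hypothetical helper; masterList and uniqueID are the
        // same objects used in the post.
        private static bool TitleExists(SPList masterList, string uniqueID)
        {
            SPQuery query = new SPQuery();
            query.Query = "<Where><Eq><FieldRef Name='Title'/>" +
                          "<Value Type='Text'>" + uniqueID + "</Value></Eq></Where>";
            query.ViewFields = "<FieldRef Name='Title'/>";
            query.RowLimit = 1;    // one row is enough to prove existence
            return masterList.GetItems(query).Count > 0;
        }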

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.

    If you're not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).

    That inserted table I referenced – that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like "WHERE CURRENT OF …", and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient.

    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you're needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.
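    A hedged sketch of the pattern being described (the table, column and audit-table names here are assumed, not from the post): an upsert whose OUTPUT clause returns $action, columns from inserted and deleted, and a source-only src column, landing in an audit table.

        -- Sketch only: dbo.Accounts, dbo.AccountChanges and dbo.AccountAudit
        -- are assumed names; src exists only in the source table.
        MERGE dbo.Accounts AS t
        USING dbo.AccountChanges AS s
        ON t.AccountID = s.AccountID
        WHEN MATCHED THEN
            UPDATE SET t.Balance = s.Balance
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (AccountID, Balance) VALUES (s.AccountID, s.Balance)
        OUTPUT $action, inserted.AccountID, deleted.Balance, s.src
        INTO dbo.AccountAudit (ChangeType, AccountID, OldBalance, ChangeSource);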
    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you're American) and using a regular INSERT statement.

    This is also doable if you're using MERGE to just do INSERTs. In case you hadn't realised, you can use MERGE in place of an INSERT statement. It's just like the UPSERT-style statement we've just seen, except that we want nothing to match. That's easy to do, we just use ON 1=2.

    This is obviously more convoluted than a straight INSERT. And it's slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table's columns... Yes, of course. That's what deleted and inserted give you.
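    A hedged sketch of that MERGE-in-place-of-INSERT trick, with the same assumed names as above: the ON 1=2 predicate never matches, so every source row falls into the INSERT branch, while OUTPUT can still see the source columns.

        -- Sketch only: ON 1=2 guarantees no rows match, turning MERGE into
        -- an INSERT that can still OUTPUT source columns such as s.src.
        MERGE dbo.Accounts AS t
        USING dbo.AccountChanges AS s
        ON 1 = 2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (AccountID, Balance) VALUES (s.AccountID, s.Balance)
        OUTPUT inserted.AccountID, s.src;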

    Read the article

  • Access 2007 VBA & SQL - Update a Subform pointed at a dynamically created query

    - by Lucretius
    Abstract: I'm using VB to recreate a query each time a user selects one of three options from a drop-down menu, appending a WHERE clause if anything is selected in the combo boxes. I'm then attempting to get the information displayed on the form to refresh, thereby filtering what is displayed in the table based on user input.

    1) Dynamically created query using VB:

        Private Sub BuildQuery()
            ' This sub routine will redefine the subQryAllJobsQuery based on
            ' input from the user on the Management tab.
            Dim strQryName As String
            Dim strSql As String    ' Main SQL SELECT statement
            Dim strWhere As String  ' Optional WHERE clause
            Dim qryDef As DAO.QueryDef
            Dim dbs As DAO.Database

            strQryName = "qryAllOpenJobs"
            strSql = "SELECT * FROM tblOpenJobs"
            Set dbs = CurrentDb

            ' In case the query already exists we should delete it
            ' so that we can rebuild it. The ObjectExists() function
            ' calls a public function in the GlobalVariables module.
            If ObjectExists("Query", strQryName) Then
                DoCmd.DeleteObject acQuery, strQryName
            End If

            ' Check to see if anything was selected from the Shift
            ' drop-down menu. If so, begin the where clause.
            If Not IsNull(Me.cboShift.Value) Then
                strWhere = "WHERE tblOpenJobs.[Shift] = '" & Me.cboShift.Value & "'"
            End If

            ' Check to see if anything was selected from the Department
            ' drop-down menu. If so, append or begin the where clause.
            ' (A String variable is never Null in VBA, so test its length
            ' instead of using IsNull.)
            If Not IsNull(Me.cboDepartment.Value) Then
                If Len(strWhere) > 0 Then
                    strWhere = strWhere & " AND tblOpenJobs.[Department] = '" & Me.cboDepartment.Value & "'"
                Else
                    strWhere = "WHERE tblOpenJobs.[Department] = '" & Me.cboDepartment.Value & "'"
                End If
            End If

            ' Check to see if anything was entered in the Date
            ' field. If so, append or begin the where clause.
            If Not IsNull(Me.txtDate.Value) Then
                If Len(strWhere) > 0 Then
                    strWhere = strWhere & " AND tblOpenJobs.[Date] = '" & Me.txtDate.Value & "'"
                Else
                    strWhere = "WHERE tblOpenJobs.[Date] = '" & Me.txtDate.Value & "'"
                End If
            End If

            ' Concatenate the SELECT and the WHERE clause together,
            ' unless all three parameters are empty, in which case create
            ' just the plain SELECT statement.
            If Len(strWhere) = 0 Then
                Set qryDef = dbs.CreateQueryDef(strQryName, strSql)
            Else
                strSql = strSql & " " & strWhere
                Set qryDef = dbs.CreateQueryDef(strQryName, strSql)
            End If
        End Sub

    2) Main form where the user selects items from combo boxes. Picture of the main form and subform: http://i48.tinypic.com/25pjw2a.png

    3) Subform pointed at the query created in step 1.

    Chain of events:
    1) User selects an item from a drop-down list on the main form.
    2) The old query is deleted and a new query is generated (same name).
    3) The subform pointed at the query does not update, but if you open the query by itself the correct results are displayed.

    Name of the query: qryAllOpenJobs
    Name of the subform: subQryAllOpenJobs
    Also, the Row Source of subQryAllOpenJobs = qryAllOpenJobs
    Name of the main form: frmManagement
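    One likely cause of step 3 is that the subform is never rebound after the query is dropped and recreated. A minimal sketch of a fix, assuming the control and query names given above (the AfterUpdate handler name is illustrative, not from the post):

        ' Hypothetical event handler: after the user picks a shift, rebuild the
        ' query, re-point the subform at it, and requery so the new rows show.
        Private Sub cboShift_AfterUpdate()
            BuildQuery
            Me.subQryAllOpenJobs.Form.RecordSource = "qryAllOpenJobs"
            Me.subQryAllOpenJobs.Requery
        End Sub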

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >