Search Results

Search found 1369 results on 55 pages for 'clause'.

  • SQL SERVER – Introduction to PERCENT_RANK() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENT_RANK(). This function returns the relative standing of a value within a query result set or partition. It is difficult to explain this in words alone, so I’d like to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use it for experimentation. Now let’s have fun with the following query: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, RANK() OVER(ORDER BY SalesOrderID) Rnk, PERCENT_RANK() OVER(ORDER BY SalesOrderID) AS PctDist FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY PctDist DESC GO The above query gives us the following result: Now let us understand the result set. You will notice that I have also included the RANK() function in this query. The reason is that PERCENT_RANK() in fact uses RANK() to find the relative standing of each row. The formula for PERCENT_RANK() is as follows: PERCENT_RANK() = (RANK() – 1) / (Total Rows – 1) If you want to read more about this function, read here. Now let us attempt the same example with the PARTITION BY clause: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, RANK() OVER(PARTITION BY SalesOrderID ORDER BY ProductID ) Rnk, PERCENT_RANK() OVER(PARTITION BY SalesOrderID ORDER BY ProductID ) AS PctDist FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY PctDist DESC GO You will notice that the same logic is followed in the following result set. I now have a quick question for you – how many of you knew the logic/formula of PERCENT_RANK() before this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
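
    The formula above is easy to verify on a small, self-contained data set. Here is a minimal sketch; the temp table and its values are made up purely for illustration:

      -- Verify PERCENT_RANK() = (RANK() - 1) / (Total Rows - 1)
      CREATE TABLE #Scores (Score INT);
      INSERT INTO #Scores (Score) VALUES (10), (20), (20), (30), (40);

      SELECT Score,
             RANK()         OVER (ORDER BY Score) AS Rnk,
             PERCENT_RANK() OVER (ORDER BY Score) AS PctRank,
             -- the same value computed by hand from the formula
             (RANK() OVER (ORDER BY Score) - 1) * 1.0
                 / (COUNT(*) OVER () - 1)        AS ManualPctRank
      FROM #Scores;

      DROP TABLE #Scores;

    PctRank and ManualPctRank match row for row, including on the tied Score = 20 rows.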

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first examine the claim that there is no performance difference. Run the following two scripts together: USE AdventureWorks GO -- ColumnName (Recommended) SELECT * FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT * FROM HumanResources.Department ORDER BY 3,2 GO If you look at the results and the execution plans, you will see that both queries take the same amount of time. However, performance was not the point of this blog post, and it is not good enough to stop there. We need to understand the advantages and disadvantages of both methods. Case 1: When Not Using * and Columns are Re-ordered USE AdventureWorks GO -- ColumnName (Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY 3,2 GO Case 2: When someone changes the schema of the table, affecting column order I will let you recreate the example for this one. If the schema on your development server differs from the production server and you use ColumnNumber, you will get different results on the production server. Summary: When you first develop the query it may not be an issue, but as time passes, new columns are added to the SELECT statement or the original table is re-ordered; if you have used ColumnNumber, it is possible that your query will start giving unexpected results and an incorrect ORDER BY. Note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should be made not on performance but on usability and maintainability. It is always recommended to write the ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
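
    Case 2 is easy to reproduce on a throwaway table. A minimal sketch, with made-up table and data:

      CREATE TABLE #Demo (A INT, B INT);
      INSERT INTO #Demo (A, B) VALUES (1, 2), (2, 1);

      -- Ordinal 2 currently points at column B
      SELECT A, B FROM #Demo ORDER BY 2;

      -- Someone later reorders the SELECT list; the same ORDER BY now sorts by A
      SELECT B, A FROM #Demo ORDER BY 2;

      DROP TABLE #Demo;

    The ORDER BY clause did not change, yet the sort order silently did – which is exactly the risk of ColumnNumber.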

    Read the article

  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, the column order of a SQL statement does not matter. Select a,b,c from table will produce the same execution plan as Select c,b,a from table. However, sometimes it can make a difference. Consider this statement (maxdop is used to produce a simpler plan and has no impact on the main point): select SalesOrderID, CustomerID, OrderDate, ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc, ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc from sales.SalesOrderHeader order by CustomerID, OrderDate option(maxdop 1) If you look at the execution plan, you will see something similar to this: That is three sorts. One for RownAsc, one for RownDesc and the final one for the ‘Order by’ clause. Sorting is an expensive operation and one that should be avoided if possible. So with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change to swap the RownAsc and RownDesc columns produces this statement: select SalesOrderID, CustomerID, OrderDate, ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc , ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc from Sales.SalesOrderHeader order by CustomerID, OrderDate option(maxdop 1) This results in a different and more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, HAS taken advantage of the data ordering where it matches what is required. This is well worth taking advantage of if you have different sorting requirements in one statement. Try grouping the functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions Scalar User-Defined Function A scalar user-defined function returns one of the scalar data types. The text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to from other programming languages. Table-Valued User-Defined Function An inline table-valued user-defined function returns a table data type and is an excellent alternative to a view, as the user-defined function can accept parameters in its T-SQL SELECT command and in essence provide us with a parameterized, non-updateable view of the underlying tables. Multi-statement Table-Valued User-Defined Function A multi-statement table-valued user-defined function returns a table and is also an excellent alternative to a view, as the function can use multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. The ability to pass parameters in again gives us the capability to create, in essence, a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the structure of the table being returned. After creating this type of user-defined function, you can use it in the FROM clause of a T-SQL command – unlike a stored procedure, which can also return record sets but cannot appear in a FROM clause.
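
    A minimal sketch of an inline table-valued function used directly in a FROM clause; the table and column names here are made up for illustration:

      CREATE FUNCTION dbo.ufn_OrdersByCustomer (@CustomerID INT)
      RETURNS TABLE
      AS
      RETURN
      (
          SELECT OrderID, OrderDate, TotalDue
          FROM dbo.Orders
          WHERE CustomerID = @CustomerID   -- the parameterized, non-updateable "view"
      );
      GO

      -- Usable in FROM, and even joinable – something a stored procedure cannot do
      SELECT o.OrderID, o.TotalDue
      FROM dbo.ufn_OrdersByCustomer(42) AS o;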

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane. 2006 Find Stored Procedure Related to Table in Database – Search in All Stored Procedure In 2006 I wrote a small script which helps the user find all the Stored Procedures (SPs) related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies. 2007 SQL SERVER – Versions, CodeNames, Year of Release 1993 – SQL Server 4.21 for Windows NT 1995 – SQL Server 6.0, codenamed SQL95 1996 – SQL Server 6.5, codenamed Hydra 1999 – SQL Server 7.0, codenamed Sphinx 1999 – SQL Server 7.0 OLAP, codenamed Plato 2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0) 2003 – SQL Server 2000 64-bit, codenamed Liberty 2005 – SQL Server 2005, codenamed Yukon (version 9.0) 2008 – SQL Server 2008, codenamed Katmai (version 10.0) 2012 – SQL Server 2012, codenamed Denali (version 11.0) Search String in Stored Procedure Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view or any other detail. I have written a script to do this in SQL Server 2000 and SQL Server 2005. This blog post is worth bookmarking. There is an alternative way to do the same as well; here is the example. 2008 SQL SERVER – Refresh Database Using T-SQL NO! Some questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No! SQL SERVER – Delete Backup History – Cleanup Backup History SQL Server stores the history of every backup ever taken in the msdb database. Often the older history is no longer required. The following stored procedure can be executed with a parameter specifying how many days of history to keep. In the following example 30 is passed, to keep a month of history. 2009 Stored Procedure are Compiled on First Run – SP taking Longer to Run First Time Are stored procedures pre-compiled? Why does a stored procedure take a long time to run the first time? These are very common questions, often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been discussed forever. There is a misconception that stored procedures are pre-compiled. They are not pre-compiled, but compiled only during the first run; for every subsequent run, the compiled plan is reused. Read the entire article for an example and demonstration. Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes This is one of the most important performance tuning lessons on my blog. I suggest you spend time this weekend reading the four-part series and let me know what you think about the concepts demonstrated. Part 1 | Part 2 | Part 3 | Part 4 Seek Predicate is the operation that describes the b-tree portion of the Seek. Predicate is the operation that describes the additional filter using non-key columns. 
Based on the description, it is very clear that a Seek Predicate is better than a Predicate: it searches the index, whereas with a Predicate the search is on non-key columns – which implies that the search is on the data in the pages themselves. Policy Based Management – Create, Evaluate and Fix Policies This article covers one of the most spectacular features of SQL Server – policy-based management – and how configuring SQL Server with a policy-based management architecture can make a powerful difference. Policy-based management is loaded with advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administration assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise. 2010 Recycle Error Log – Create New Log file without Server Restart Once I observed a DBA restarting SQL Server when he needed a new error log file. This was both funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file: you can run sp_cycle_errorlog and achieve the same result. Get Database Backup History for a Single Database Simple but effective script! Reducing CXPACKET Wait Stats for High Transactional Database The subject is very complex and I have done my best to simplify the concept. In simpler words, when a parallel plan is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat: the threads which finish first have to wait for the slower thread, and the wait recorded by a specific completed thread is called the CXPACKET wait stat. Information Related to DATETIME and DATETIME2 There is quite a lot of confusion around DATETIME and DATETIME2. DATETIME2 is also one of the more underutilized datatypes in SQL Server. In this blog post I have written a follow-up to my earlier datetime series, where I clarify a few of the concepts related to datetime. Difference Between GETDATE and SYSDATETIME Difference Between DATETIME and DATETIME2 – WITH GETDATE Difference Between DATETIME and DATETIME2 2011 Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012 SQL Server 2012 introduces the new analytic function CUME_DIST(), which provides the cumulative distribution value. It is difficult to explain this in words alone, so I attempt a small example to explain the function. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experimentation. Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(), which return the first and the last value from a list. It is difficult to explain this in words alone, so I explain the functions through a brief example. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experimentation. OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING “Don’t you think there is a bug in your first example, where FIRST_VALUE remains the same but LAST_VALUE changes on every line. 
I think the LAST_VALUE should be the highest value in the window or set of results.” Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY You can see that rows 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. Now, if these functions worked on minimum and maximum values, they should have given the answers 77 and 80 respectively, instead of 78 and 77. Also, the FIRST_VALUE (78) is greater than the LAST_VALUE (77). Why? Explain in detail. Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It is difficult to explain this in words alone, so I attempt a small example to explain these functions. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experimentation. A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available Our book went out of stock within 48 hours of arriving in stock! We got a call from the online store with a request for more copies within 12 hours. But we had printed only as many as we had sent them; there were no extra copies. We finally talked to the printer to get more copies. However, due to festivals and holidays, the copies could not be shipped to the online retailer for two days. We knew for sure that they were going to be out of the book for 48 hours. This is the story of how we overcame that situation! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
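
    The backup-history cleanup mentioned under 2008 comes down to one system procedure in msdb. A minimal sketch, keeping the 30-day window from the original post:

      USE msdb;
      GO
      -- Delete backup history older than 30 days
      DECLARE @oldest DATETIME = DATEADD(DAY, -30, GETDATE());
      EXEC dbo.sp_delete_backuphistory @oldest_date = @oldest;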

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition. Getting the Contacts Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example: SELECT * FROM Customers Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE' Synchronizing the Contacts Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed. Pre-Built Demo Application The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for QuickBooks V3, and will expire in 2013. Source Code You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.
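
    As a sketch of the write side, the synchronization statements use the same SQL surface as the queries above. The Email column here is an assumption for illustration; the actual columns depend on the provider's schema:

      -- Hypothetical: push a changed Google contact into QuickBooks
      UPDATE Customers
      SET Email = 'james@example.com'
      WHERE Name = 'James Smith';

      -- Hypothetical: add a contact that exists only in Google
      INSERT INTO Customers (Name, Email)
      VALUES ('Jane Doe', 'jane@example.com');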

    Read the article

  • The one feature that would make me invest in SSIS 2012

    - by Peter Larsson
    This week I was invited by Microsoft to give two presentations in Slovenia. My presentations went well; I had good energy and the audience interacted with me. When I had some time left over from networking and partying, I attended a few other presentations – at least the ones that were held in English. One of these was "SQL Server Integration Services 2012 - All the News, and More", given by Davide Mauri, a fellow co-worker from SolidQ. We started to talk and soon got into the details of the new things in SSIS 2012. All of the official things Davide talked about are good stuff, but for me, the best thing is one he didn't cover in his presentation. In versions of SSIS earlier than 2012, it is possible to have a stored procedure act as a data source, as long as it doesn't have a temp table in it. If it does, you get an error message from SSIS that "Metadata could not be found". This is still true with SSIS 2012, so the thing I am talking about is not really an SSIS feature; it's a SQL Server 2012 feature: the EXECUTE ... WITH RESULT SETS syntax! With this, you can have a stored procedure that uses a temp table deliver its result set to SSIS, if you execute the stored procedure from SSIS and add the "WITH RESULT SETS" option. If you do this, SSIS is able to take the metadata from the code you write in SSIS and not from the stored procedure! And it's very fast too. In earlier versions, referencing such a stored procedure forced SSIS to actually call it (which can take hours) just to retrieve the metadata. Now, with RESULT SETS, SSIS 2012 can continue in milliseconds! This is because you provide the metadata in the RESULT SETS clause, and if the data from the stored procedure doesn't match the RESULT SETS definition, you get an error anyway, so it makes sense that Microsoft has provided this optimization for us.
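
    A minimal sketch of the syntax; the procedure name and columns are made up for illustration:

      -- The procedure builds its output in a temp table, which normally
      -- prevents SSIS from deriving metadata
      EXEC dbo.usp_GetOrderSummary
      WITH RESULT SETS
      (
          (OrderID    INT,
           CustomerID INT,
           OrderDate  DATETIME,
           TotalDue   MONEY)
      );

    SSIS reads the column names and types straight from the RESULT SETS definition, with no need to execute the procedure first.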

    Read the article

  • Cross Apply Ambiguity

    - by Dave Ballantyne
    Cross apply (and outer apply) are a very welcome addition to the TSQL language. However, today after a few hours of head scratching, I have found a simple issue which could cause big problems. What would you expect from this statement? select * from sys.objects b join sys.objects a on a.object_id = object_id No prizes for guessing: SQL Server errors with “Ambiguous column name 'object_id'”. What would you expect from this statement? Select * from sys.objects a cross apply( Select * from sys.objects b where b.object_id = object_id) as c Surprisingly, perhaps, the result is a cross join of sys.objects. Well, what happened there? If you look at the apply statement, within the where clause, only one of the conditions is qualified with a table name. This means it is interpreted as “b.object_id = b.object_id”, leaving the cross apply with no join to the parent sys.objects table and causing the cross join. The fix is, obviously, simple: Select * from sys.objects a cross apply( Select * from sys.objects b where b.object_id = a.object_id) as c So why no “Ambiguous column name” error? I’ve raised a connect item on this issue here.

    Read the article

  • MySQL – Grouping by Multiple Columns to Single Column as A String

    - by Pinal Dave
    In the post titled SQL SERVER – Grouping by Multiple Columns to Single Column as A String we saw how to group data from multiple columns into comma-separated values in a single row, grouping by another column, by using the FOR XML clause. In this post we will see how to produce the same result using the GROUP_CONCAT function in MySQL. Let us create the following table and data. CREATE TABLE TestTable (ID INT, Col VARCHAR(4)); INSERT INTO TestTable (ID, Col) SELECT 1, 'A' UNION ALL SELECT 1, 'B' UNION ALL SELECT 1, 'C' UNION ALL SELECT 2, 'A' UNION ALL SELECT 2, 'B' UNION ALL SELECT 2, 'C' UNION ALL SELECT 2, 'D' UNION ALL SELECT 2, 'E'; Now, to generate CSV values of the column col for each ID, use the following code: SELECT ID, GROUP_CONCAT(col) AS CSV FROM TestTable GROUP BY ID; The result is ID CSV 1 A,B,C 2 A,B,C,D,E You can also change the delimiter. For example, instead of a comma, if you want a pipe symbol (|), use the following: SELECT ID, REPLACE(GROUP_CONCAT(col),',','|') AS CSV FROM TestTable GROUP BY ID; The result is ID CSV 1 A|B|C 2 A|B|C|D|E MySQL makes this very simple with its support of the GROUP_CONCAT function. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
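
    As a side note, GROUP_CONCAT also accepts a SEPARATOR clause directly, which avoids the REPLACE call. A minimal sketch against the same table:

      SELECT ID,
             GROUP_CONCAT(col ORDER BY col SEPARATOR '|') AS CSV
      FROM TestTable
      GROUP BY ID;

    This produces the same pipe-delimited result, and the ORDER BY inside the function makes the concatenation order deterministic.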

    Read the article

  • Do programmers need a union? [closed]

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?) later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including: legal liability for others' use/misuse of our work, especially unintended uses; evaluating the quality of computer science and software engineering higher education programs – unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees; evangelism and outreach – especially to elementary school students; certification – not doing it ourselves, but working with companies like ISC(2) and others to make certifications meaningful and useful; continuing education – similar to the previous point; conferences – maintaining a go-to list of organizers and other resources our members can use. I would see it less as a traditional trade union, with little emphasis on: pay – we tend to command fairly good salaries; outsourcing and free trade – most of us tend to be pretty free-market oriented; working conditions – we're the only industry where Aeron chairs are considered anything like "standard".

    Read the article

  • .Net oracle parameter order

    - by jkrebsbach
    Using the ODAC (Oracle Data Access Components) downloaded from Oracle to talk to a handful of Oracle DBs - I was putting together my DAL to update the DB, and things weren't working as I hoped - UPDATE foo SET bar = :P_BAR WHERE bap = :P_BAP I assign my parameters - objCmd.Parameters.Add(objBap); objCmd.Parameters.Add(objBar); Execute the update command - int result = objCmd.ExecuteNonQuery() and result is zero! ... Is my filter incorrect? SELECT count(*) FROM foo WHERE bap = :P_BAP ...result is one... Is my new value incorrect? Am I using Char instead of Varchar somewhere and need an RTRIM? Is there a transaction getting involved? An error thrown and not caught? The answer: order of parameters. The order in which parameters are added to the Oracle command object must match the order in which the parameters are referenced in the SQL statement. I was adding the parameters for the WHERE clause before adding the SET value parameters, and for that reason, although no error was thrown, no value was updated either. Flip the parameter collection around to match the order of the params in the SQL statement, and ExecuteNonQuery() is back to returning the number of rows affected.

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license: Copyright (c) 2010 {copyright holder} All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. DISCLAIMER THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?

    Read the article

  • PerlRegEx vs RegularExpressionsCore Delphi Units

    - by Jan Goyvaerts
    The RegularExpressionsCore unit that is part of Delphi XE is based on the latest class-based PerlRegEx unit that I developed. Embarcadero only made a few changes to the unit. These changes are insignificant enough that code written for earlier versions of Delphi using the class-based PerlRegEx unit will work just the same with Delphi XE. The unit was renamed from PerlRegEx to RegularExpressionsCore. When migrating your code to Delphi XE, you can choose whether you want to use the new RegularExpressionsCore unit or continue using the PerlRegEx unit in your application. All you need to change is which unit you add to the uses clause in your own units. Indentation and line breaks in the code were changed to match the style used in the Delphi RTL and VCL code. This does not change the code, but makes it harder to diff the two units. Literal strings in the unit were separated into their own unit called RegularExpressionsConsts. These strings are only used for error messages that indicate bugs in your code. If your code uses TPerlRegEx correctly then the user should not see any of these strings. My code uses assertions to check for out of bounds parameters, while Embarcadero uses exceptions. Again, if you use TPerlRegEx correctly, you should never get any assertions or exceptions. The Compile method raises an exception if the regular expression is invalid in both my original TPerlRegEx component and Embarcadero’s version. If your code allows the user to provide the regular expression, you should explicitly call Compile and catch any exceptions it raises so you can tell the user there is a problem with the regular expression. Even with user-provided regular expressions, you shouldn’t get any other assertions or exceptions if your code is correct. Note that Embarcadero owns all the rights to their RegularExpressionsCore unit. Like all the other RTL and VCL units, this unit cannot be distributed by myself or anyone other than Embarcadero. I do retain the rights to my original PerlRegEx unit which I will continue to make available for those using older versions of Delphi.

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 1)

    - by Geertjan
    While setting up PostgreSQL from scratch, with the aim of using it in NetBeans IDE, I found the following resources helpful: http://railskey.wordpress.com/2012/05/19/postgresql-installation-in-ubuntu-12-04/ http://ohdevon.wordpress.com/2011/09/17/postgresql-to-netbeans-1/ http://ohdevon.wordpress.com/2011/09/19/postgresql-to-netbeans-2/ For quite a while I had problems relating to "/var/run/postgresql/.s.PGSQL.5432", which had something to do with "postmaster.pid", which I somehow solved via a link I can't find anymore, and which may not have been a problem to begin with. A key moment was this one, which was useful for setting the password of a new user I'd created: http://stackoverflow.com/questions/7695962/postgresql-password-authentication-failed-for-user-postgres This was useful for setting up a table in my database, which I did by pasting the below into NetBeans after making the connection there: http://use-the-index-luke.com/sql/example-schema/postgresql/where-clause Now I have a database set up with all permissions correct (which turned out to be the hard part): The next step will be to create a NetBeans Platform application based on this database. I'm assuming it shouldn't be any different to what's described in the NetBeans Platform CRUD Tutorial.

    Read the article

  • ArchBeat Link-o-Rama for October 24, 2013

    - by OTN ArchBeat
    Video: How To Embed Custom Content Into Fusion Applications Watch this video tutorial from the Fusion Applications Developer Relations YouTube Channel to learn how to embed reports, charts, twitter streams, web pages, news feeds, and other custom content into Fusion Applications. Oracle GoldenGate 12c - New Release, New Features | Michael Rainey Rittman Mead's Michael Rainey takes you on guided tour through the GoldenGate 12c features that "are relevant to data warehouse and data migration work we typically see in the business intelligence world." Reproducing WebLogic Stuck Threads with ADF CreateInsert Operation and ORDER BY Clause | Andrejus Baranovskis Another post from Oracle ACE Director Andrejus Baranovsikis on dealing with WebLogic Stuck Threads. This one includes a test case application you can download. Oracle WebLogic 12.1.2 Installation in VirtualBox with 0 MHz | Dr. Frank Munz Oracle ACE Director Frank Munz shares the results of some detective work to discover the cause of a strange problem in an Oracle WebLogic installation. The Impact of SaaS - The Times They Are A-Changin' | Floyd Teter Oracle ACE Director Floyd Teter shares some truly interesting insight gained in conversations with three Fortune 500 CIOs. Thought for the Day "All the mistakes I ever made were when I wanted to say 'No' and said 'Yes'." — Moss Hart, playwright, screenwriter (October 24, 1904 – December 20, 1961) Source: brainyquote.com

    Read the article

  • Making money from a custom built interpreter?

    - by annoying_squid
    I have been making considerable progress lately on building an interpreter. I am building it from NASM assembly code (for the core engine) and C (compiled with cl.exe, the Microsoft compiler, for the parser). I really don't have a lot of time, but I have a lot of good ideas on how to build this so it appeals to a certain niche market. I'd love to finish this, but I need to face reality here ... unless I can make a good monetary return on my investment, there is not a lot of time for me to invest. So I ask the following questions to anyone out there, especially those who have experience in monetizing their programs: 1) How easy is it for a programmer to make good money from one design? (I know this is vague, but it will be interesting to hear from those who have experience or know of others' experiences.) 2) What are the biggest obstacles to making money from a programming design? 3) For the parser, I am using the Microsoft compiler (no IDE) I got from Visual Express, so will this be an issue? Will I have to pay royalties or a license fee? 4) As far as I know, NASM is a 2-clause BSD licensed application, so this should allow me to use NASM for commercial development unless I am missing something? It's good to know these things before launching into the meat and potatoes of the project.

    Read the article

  • Dreaded SQLs

    - by lavanyadeepak
    Dreaded SQLs We used to think that only a SQL statement without a WHERE clause is dangerous, since running it against the server impacts the entire table, like waving a magic wand. For that reason we should cultivate the habit of first writing the statement as a SELECT and then modifying the SELECT portion into an UPDATE or DELETE. Within the T-SQL window, I would normally start with the following: select * from employee where empid in (4,5) and then, once I am satisfied with the results, I would go ahead with the following change: --select * delete from employee where empid in (4,5) Today I discovered another coding horror. It typically applies in stored procedures, with respect to variable nomenclature. It is always desirable to have a naming convention for parameters that is distinct from the column names and internal variables. This helps with quicker debugging of stored procedures, besides enhancing readability. Otherwise, in a quick bout of enthusiasm, a statement like if (@CustomerID = @CustomerID) [when the latter is intended to denote the column name and a superfluous @ has been prepended] makes zeroing in on the problem a little tricky. With stronger nomenclature rules, debugging would be more straightforward and simpler, right?
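
    A minimal sketch of the pitfall and of the naming convention that avoids it; the table and procedure names are made up for illustration:

      -- Bug: the WHERE clause compares the parameter to itself, so it is always true
      CREATE PROCEDURE dbo.usp_GetCustomer @CustomerID INT
      AS
      SELECT * FROM dbo.Customer WHERE @CustomerID = @CustomerID;
      GO

      -- A distinct parameter prefix makes that slip impossible to type by accident
      CREATE PROCEDURE dbo.usp_GetCustomer2 @p_CustomerID INT
      AS
      SELECT * FROM dbo.Customer WHERE CustomerID = @p_CustomerID;
      GO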

    Read the article

  • Are there legitimate reasons for returning exception objects instead of throwing them?

    - by stakx
    This question is intended to apply to any OO programming language that supports exception handling; I am using C# for illustrative purposes only. Exceptions are usually intended to be raised when a problem arises that the code cannot immediately handle, and then to be caught in a catch clause in a different location (usually an outer stack frame). Q: Are there any legitimate situations where exceptions are not thrown and caught, but simply returned from a method and then passed around as error objects? This question came up for me because .NET 4's System.IObserver<T>.OnError method suggests just that: exceptions being passed around as error objects. Let's look at another scenario, validation. Let's say I am following conventional wisdom, and that I am therefore distinguishing between an error object type IValidationError and a separate exception type ValidationException that is used to report unexpected errors: partial interface IValidationError { } abstract partial class ValidationException : System.Exception { public abstract IValidationError[] ValidationErrors { get; } } (The System.ComponentModel.DataAnnotations namespace does something quite similar.) These types could be employed as follows: partial interface IFoo { } // an immutable type partial interface IFooBuilder // mutable counterpart to prepare instances of above type { bool IsValid(out IValidationError[] validationErrors); // true if no validation error occurs IFoo Build(); // throws ValidationException if !IsValid(…) } Now I am wondering, could I not simplify the above to this: partial class ValidationError : System.Exception { } // = IValidationError + ValidationException partial interface IFoo { } // (unchanged) partial interface IFooBuilder { bool IsValid(out ValidationError[] validationErrors); IFoo Build(); // may throw ValidationError or sth. like AggregateException<ValidationError> } Q: What are the advantages and disadvantages of these two differing approaches?

    Read the article

  • Dynamic Query Generation : suggestion for better approaches

    - by Gaurav Parmar
    I am currently designing functionality in my web application where a verified user can execute queries he chooses from a predefined set, with the WHERE clause varying as per the user's choice. For example, table ABC contains the following template query called SecretReport: "Select def as FOO, ghi as BAR from MNO where " SecretReport can have the parameters XYZ and ILP. XYZ can have the values 1 and 2, and ILP can have 3 and 4, so if the user chooses ILP=3, he will get the result of the following query on his screen: "Select def as FOO, ghi as BAR from MNO where ILP=3" The user is also allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. But this approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and query based on his requirements. Of course we do not want the end user to take complete control of the DB – only tables and fields that are relevant to him. At present we define what is relevant in the code, but I want the admin to take over this functionality, such that he can decide what is relevant and expose the same to the user. On the user's side it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user.
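
    Whatever the front end looks like, the WHERE clause should be assembled with parameters rather than string concatenation. A sketch using the template from the question – sp_executesql is SQL Server's mechanism for this; other databases have equivalent parameterized APIs:

      DECLARE @sql NVARCHAR(MAX) =
          N'Select def as FOO, ghi as BAR from MNO where ILP = @ILP';

      -- The user-chosen value travels as a typed parameter, not as pasted-in text
      EXEC sp_executesql @sql, N'@ILP INT', @ILP = 3;

    This keeps the admin-defined template fixed while the user's choices stay out of the SQL string entirely.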

    Read the article

  • Nullable types and ?? operator C# [en-US]

    - by ruimachado
    Nullable types vs Non-nullable types While developing C# projects, we frequently compare values against null to avoid null reference exceptions. This simple check is typically coded as something like "x == null" inside an if clause. However, not all types of variables are nullable, which means that setting a variable to null is not allowed in all cases; it depends on what kind of type you are defining. But what if there were an extension to your non-nullable type that would convert it to a nullable one? This extension really exists. As I said before, in C# you have nullable types, which represent all the values of an underlying type plus an additional null value, and they can be declared easily using "T?", where T is the type of the variable. For example, the normal int type cannot be null, so it's a non-nullable type; however, if you define an "int?", your variable can be null. What you do is convert a non-nullable type to a nullable type. Example: int x = null;   Not allowed     int? x = null;   Allowed     While using nullable types you can check whether a variable is null the same way you do with reference types. But what about setting a default value when a certain variable is null? In these cases the C# .NET framework lets you set a default value when you try to assign a nullable type to a non-nullable type, using the ?? operator. If you don't use this operator, you can still catch the InvalidOperationException which is thrown in these cases. For example, without the ?? operator you have to guard or wrap the assignment; using the ?? operator – for example int y = x ?? 0; – your code becomes cleaner and easier to read, and as a bonus you can set a default value across multiple variables by chaining, as in int z = a ?? b ?? 0;. That's it. Thanks, Rui Machado rpmachado.wordpress.com

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I’m having some trouble using the texture() method inside a beginShape()/endShape() block. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object as follows: void display(){ Vec2 pos = level.getLevel().getBodyPixelCoord(body); float a = body.getAngle(); // needed for rotation pushMatrix(); translate(pos.x, pos.y); rotate(-a); fill(temp); // temp is a color defined in the constructor stroke(0); beginShape(); vertex(-w/2,-h/2); vertex(w/2,-h/2); vertex(w/2,h-h/2); vertex(-w/2,h-h/2); endShape(CLOSE); popMatrix(); } Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn but the bar isn’t filled, and I get the warning texture() is not available with this renderer What can I do in order to use textures anyway? I don’t even understand the error message, since I do not know much about the different renderers.

    Read the article

  • How to input data into user defined variables into MySql query

    - by user292791
    Simple shell script: echo "Enter 1 for month of March" echo "Enter 2 for month of April" echo "Enter 3 for month of May" read Month case "$Month" in 1) echo "enter establishment name" read a; mysql -u root -p $a < "March.sql";; 2) echo "enter establishment name" read b; mysql -u root -p $b < "April.sql";; 3) echo "enter establishment name" read c; mysql -u root -p $c < "May.sql";; esac In this I have three other query files: March.sql, April.sql, and May.sql. I'm linking these in the shell script. Example of a .sql file: SELECT DISTINCT substr( a.case_no, 3, 2 ), b.case_type, b.type_name, a.case_no INTO OUTFILE '/tmp/April.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n' FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c WHERE substr( a.case_no, 3, 2 ) = b.case_type AND a.date_of_decision BETWEEN '2014-04-01' AND '2014-04-30' AND a.case_no = c.case_no AND a.court_no = 1; I have to alter the .sql script every time. Is there any method to read the variables from the shell script and use them in MySQL? For example: echo "enter date" read a #input date Now I have read a "date" and I want to use it in the WHERE clause of the March.sql query. Is there any method of using this variable in the .sql query?
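
    One common approach (a sketch, not the only way): have the shell set MySQL session variables and then source the file in the same session, e.g. mysql -u root -p "$db" -e "SET @from='2014-04-01'; SET @to='2014-04-30'; SOURCE April.sql;". The .sql file then references the variables instead of hard-coded dates:

      -- April.sql, rewritten to read session variables (@from/@to are set by the caller)
      SELECT DISTINCT substr(a.case_no, 3, 2), b.case_type, b.type_name, a.case_no
      FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c
      WHERE substr(a.case_no, 3, 2) = b.case_type
        AND a.date_of_decision BETWEEN @from AND @to
        AND a.case_no = c.case_no
        AND a.court_no = 1;

    The INTO OUTFILE clause is omitted here for brevity; it works the same way.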

    Read the article

  • How to add new filters to CAML queries in SharePoint 2007

    - by uruit
    One flexibility SharePoint has is CAML (Collaborative Application Markup Language). CAML is a markup language, like HTML, that allows developers to run queries against SharePoint lists; its syntax is very easy to understand, and it allows you to add logical conditions like Where, Contains, And, Or, etc., just like a SQL query. For one of our projects we needed to filter SharePoint views. The problem is that a view is a list containing a CAML query with whatever filters the view already has, so in order to filter a view that has already been filtered, we need to append our filters to the existing CAML query. That is not a trivial task, because the Where statement in a CAML query looks like this: <Where>   <And>     <Filter1 />     <Filter2 />   </And> </Where> If we want to add a new logical operator, like an OR, it is not as simple as appending the OR expression, as in the following example: <Where>   <And>     <Filter1 />     <Filter2 />   </And>   <Or>     <Filter3 />   </Or> </Where> Instead, the correct query would be: <Where>   <Or>     <And>       <Filter1 />       <Filter2 />     </And>     <Filter3 />   </Or> </Where> Notice that the <Filter# /> tags are for explanation purposes only. To solve this problem we created a simple component. It has a method that receives the current query (which could also be an empty query) and appends the expression you want to that query. Example: string currentQuery = @“ <Where>    <And>     <Contains><FieldRef Name='Title' /><Value Type='Text'>A</Value></Contains>     <Contains><FieldRef Name='Title' /><Value Type='Text'>B</Value></Contains>   </And> </Where>”; currentQuery = CAMLQueryBuilder.AppendQuery(     currentQuery,     “<Contains><FieldRef Name='Title' /><Value Type='Text'>C</Value></Contains>”,     CAMLQueryBuilder.Operators.Or); The first parameter this function receives is the actual query, the second is the filter you want to add, and the third is the logical operator. So in this query we want all the items whose title contains the characters A and B, or the ones that contain the character C. The resulting query is: <Where>   <Or>      <And>       <Contains><FieldRef Name='Title' /><Value Type='Text'>A</Value></Contains>       <Contains><FieldRef Name='Title' /><Value Type='Text'>B</Value></Contains>     </And>     <Contains><FieldRef Name='Title' /><Value Type='Text'>C</Value></Contains>   </Or> </Where> The code: First of all, we have an enumerator inside the CAMLQueryBuilder class that has the two possible options, And and Or: public enum Operators { And, Or } Then we have the main method, the one that performs the appending of the filters.
public static string AppendQuery(string containerQuery, string logicalExpression, Operators logicalOperator) { The first thing we do in this method is create a new XmlDocument and wrap the current query (which may be empty) in a “<Query></Query>” tag, because the query that comes with the view doesn’t have a root element and the XmlDocument must contain well-formed XML. XmlDocument queryDoc = new XmlDocument(); queryDoc.LoadXml("<Query>" + containerQuery + "</Query>"); The next step is to create a new XmlDocument containing the logical expression that holds the needed filter. XmlDocument logicalExpressionDoc = new XmlDocument(); logicalExpressionDoc.LoadXml("<root>" + logicalExpression + "</root>"); In the next three lines we extract the expression from the recently created XmlDocument and create an XmlElement. XmlElement expressionElTemp = (XmlElement)logicalExpressionDoc.SelectSingleNode("/root/*"); XmlElement expressionEl = queryDoc.CreateElement(expressionElTemp.Name); expressionEl.InnerXml = expressionElTemp.InnerXml; Below are the main steps in the component logic. The first “if” checks whether the actual query lacks a “Where” clause; in that case we add one and append the expression. If there is already a “Where” clause, we take the entire statement inside the “Where” and reorder the query, removing and appending elements to form the correct query that will finally filter the list. XmlElement whereEl; if (!containerQuery.Contains("Where")) { queryDoc.FirstChild.AppendChild(queryDoc.CreateElement("Where")); queryDoc.SelectSingleNode("/Query/Where").AppendChild(expressionEl); } else { whereEl = (XmlElement)queryDoc.SelectSingleNode("/Query/Where"); if (!containerQuery.Contains("<And>") && !containerQuery.Contains("<Or>")) { XmlElement operatorEl = queryDoc.CreateElement(GetName(logicalOperator)); XmlElement existingExpression = (XmlElement)whereEl.SelectSingleNode("/Query/Where/*"); whereEl.RemoveChild(existingExpression); operatorEl.AppendChild(existingExpression); operatorEl.AppendChild(expressionEl); whereEl.AppendChild(operatorEl); } else { XmlElement operatorEl = queryDoc.CreateElement(GetName(logicalOperator)); XmlElement existingOperator = (XmlElement)whereEl.SelectSingleNode("/Query/Where/*"); whereEl.RemoveChild(existingOperator); operatorEl.AppendChild(existingOperator); operatorEl.AppendChild(expressionEl); whereEl.AppendChild(operatorEl); } } return queryDoc.FirstChild.InnerXml; } Finally, the GetName method converts the Enum option to its string equivalent. private static string GetName(Operators logicalOperator) { return Enum.GetName(typeof(Operators), logicalOperator); } This component helped our team a lot using SharePoint 2007 and modifying queries. In SharePoint 2010 it wouldn't be needed, thanks to the incorporation of LINQ to SharePoint: this new feature enables developers to run typed queries against SharePoint lists without writing any CAML code.
Post written by Sebastian Rodriguez - Portals and Collaboration Solutions @ UruIT

    Read the article

  • How to add new filters to CAML queries in SharePoint 2007

    - by uruit
    Normal 0 21 false false false ES-UY X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} One flexibility SharePoint has is CAML (Collaborative Application Markup Language). CAML it’s a markup language like html that allows developers to do queries against SharePoint lists, it’s syntax is very easy to understand and it allows to add logical conditions like Where, Contains, And, Or, etc, just like a SQL Query. For one of our projects we have the need to do a filter on SharePoint views, the problem here is that the view it’s a list containing a CAML Query with the filters the view may have, so in order to filter the view that’s already been filtered before, we need to append our filters to the existing CAML Query. That’s not a trivial task because the where statement in a CAML Query it’s like this: <Where>   <And>     <Filter1 />     <Filter2 />   </And> </Where> If we want to add a new logical operator, like an OR it’s not just as simple as to append the OR expression like the following example: <Where>   <And>     <Filter1 />     <Filter2 />   </And>   <Or>     <Filter3 />   </Or> </Where> But instead the correct query would be: <Where>   <Or>     <And>       <Filter1 />       <Filter2 />     </And>     <Filter3 />   </Or> </Where> Notice that the <Filter# /> tags are for explanation purpose only. In order to solve this problem we created a simple component, it has a method that receives the current query (could be an empty query also) and appends the expression you want to that query. Example: string currentQuery = @“ <Where>    <And>     <Contains><FieldRef Name='Title' /><Value Type='Text'>A</Value></Contains>     <Contains><FieldRef Name='Title' /><Value Type='Text'>B</Value></Contains>   </And> </Where>”; currentQuery = CAMLQueryBuilder.AppendQuery(     currentQuery,     “<Contains><FieldRef Name='Title' /><Value Type='Text'>C</Value></Contains>”,     CAMLQueryBuilder.Operators.Or); The fist parameter this function receives it’s the actual query, the second it’s the filter you want to add, and the third it’s the logical operator, so basically in this query we want all the items that the title contains: the character A and B or the ones that contains the character C. The result query is: <Where>   <Or>      <And>       <Contains><FieldRef Name='Title' /><Value Type='Text'>A</Value></Contains>       <Contains><FieldRef Name='Title' /><Value Type='Text'>B</Value></Contains>     </And>     <Contains><FieldRef Name='Title' /><Value Type='Text'>C</Value></Contains>   </Or> </Where>     The code:   First of all we have an enumerator inside the CAMLQueryBuilder class that has the two possible Options And, Or. public enum Operators { And, Or }   Then we have the main method that’s the one that performs the append of the filters. 
public static string AppendQuery(string containerQuery, string logicalExpression, Operators logicalOperator){   In this method the first we do is create a new XmlDocument and wrap the current query (that may be empty) with a “<Query></Query>” tag, because the query that comes with the view doesn’t have a root element and the XmlDocument must be a well formatted xml.   XmlDocument queryDoc = new XmlDocument(); queryDoc.LoadXml("<Query>" + containerQuery + "</Query>");   The next step is to create a new XmlDocument containing the logical expression that has the filter needed.   XmlDocument logicalExpressionDoc = new XmlDocument(); logicalExpressionDoc.LoadXml("<root>" + logicalExpression + "</root>"); In these next four lines we extract the expression from the recently created XmlDocument and create an XmlElement.                  XmlElement expressionElTemp = (XmlElement)logicalExpressionDoc.SelectSingleNode("/root/*"); XmlElement expressionEl = queryDoc.CreateElement(expressionElTemp.Name); expressionEl.InnerXml = expressionElTemp.InnerXml;   Below are the main steps in the component logic. The first “if” checks if the actual query doesn’t contains a “Where” clause. In case there’s no “Where” we add it and append the expression.   In case that there’s already a “Where” clause, we get the entire statement that’s inside the “Where” and reorder the query removing and appending elements to form the correct query, that will finally filter the list.   XmlElement whereEl; if (!containerQuery.Contains("Where")) { queryDoc.FirstChild.AppendChild(queryDoc.CreateElement("Where")); queryDoc.SelectSingleNode("/Query/Where").AppendChild(expressionEl); } else { whereEl = (XmlElement)queryDoc.SelectSingleNode("/Query/Where"); if (!containerQuery.Contains("<And>") &&                 !containerQuery.Contains("<Or>"))        {              XmlElement operatorEl = queryDoc.CreateElement(GetName(logicalOperator)); XmlElement existingExpression = (XmlElement)whereEl.SelectSingleNode("/Query/Where/*"); whereEl.RemoveChild(existingExpression);                 operatorEl.AppendChild(existingExpression);               operatorEl.AppendChild(expressionEl);                 whereEl.AppendChild(operatorEl);        }        else        {              XmlElement operatorEl = queryDoc.CreateElement(GetName(logicalOperator)); XmlElement existingOperator = (XmlElement)whereEl.SelectSingleNode("/Query/Where/*");                 whereEl.RemoveChild(existingOperator);               operatorEl.AppendChild(existingOperator);               operatorEl.AppendChild(expressionEl);                 whereEl.AppendChild(operatorEl);         }  }  return queryDoc.FirstChild.InnerXml }     Finally the GetName method converts the Enum option to his string equivalent.   
private static string GetName(Operators logicalOperator)
{
    return Enum.GetName(typeof(Operators), logicalOperator);
}

This component helped our team a lot with modifying view queries in SharePoint 2007. In SharePoint 2010 it wouldn't be needed, thanks to the introduction of LINQ to SharePoint: this new feature enables developers to write typed queries against SharePoint lists without writing any CAML code. But there is still plenty of development happening on the 2007 version, so I hope this information is useful to other members. (A consolidated sketch of the component, and a quick look at the LINQ-to-SharePoint equivalent, follow below.)

Post written by Sebastian Rodriguez - Portals and Collaboration Solutions @ UruIT
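For convenience, here is the component assembled into a single compilable file. This is a minimal sketch of the code described above, not the team's exact source: the null check on the Where element replaces the original string Contains() tests, and Operators.ToString() stands in for the GetName helper - both simplifications are my own.

using System.Xml;

public static class CAMLQueryBuilder
{
    public enum Operators { And, Or }

    // Appends logicalExpression to containerQuery under the given operator,
    // producing a correctly nested CAML Where clause.
    public static string AppendQuery(string containerQuery,
                                     string logicalExpression,
                                     Operators logicalOperator)
    {
        // Wrap the view query so the XmlDocument has a single root element.
        XmlDocument queryDoc = new XmlDocument();
        queryDoc.LoadXml("<Query>" + containerQuery + "</Query>");

        // Parse the new filter and import it into the query document.
        XmlDocument expressionDoc = new XmlDocument();
        expressionDoc.LoadXml("<root>" + logicalExpression + "</root>");
        XmlElement expressionEl =
            (XmlElement)queryDoc.ImportNode(expressionDoc.DocumentElement.FirstChild, true);

        XmlElement whereEl = (XmlElement)queryDoc.SelectSingleNode("/Query/Where");
        if (whereEl == null)
        {
            // No Where yet: create one and drop the expression straight in.
            whereEl = queryDoc.CreateElement("Where");
            queryDoc.DocumentElement.AppendChild(whereEl);
            whereEl.AppendChild(expressionEl);
        }
        else
        {
            // The Where already holds either a single filter or an And/Or
            // operator; wrap it together with the new expression in a new
            // outer operator, preserving CAML's binary nesting.
            XmlElement operatorEl = queryDoc.CreateElement(logicalOperator.ToString());
            XmlElement existing = (XmlElement)whereEl.SelectSingleNode("*");
            whereEl.RemoveChild(existing);
            operatorEl.AppendChild(existing);
            operatorEl.AppendChild(expressionEl);
            whereEl.AppendChild(operatorEl);
        }

        return queryDoc.DocumentElement.InnerXml;
    }
}

Called as in the usage example earlier, it returns the inner XML of the Query wrapper (that is, the new Where clause), ready to be assigned back to the view.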
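And since the closing note mentions LINQ to SharePoint, here is a rough idea of what the same filter could look like in SharePoint 2010. Treat it as a hedged sketch: AnnouncementItem stands for an entity class that the SPMetal tool would generate for a real site, and the list name and URL are placeholders.

using System;
using System.Linq;
using Microsoft.SharePoint.Linq;

class LinqToSharePointSketch
{
    static void Main()
    {
        // DataContext is the LINQ to SharePoint entry point; entity classes
        // such as AnnouncementItem are normally generated by SPMetal.
        using (DataContext ctx = new DataContext("http://server/sites/demo"))
        {
            EntityList<AnnouncementItem> items =
                ctx.GetList<AnnouncementItem>("Announcements");

            // (Title contains A AND Title contains B) OR Title contains C,
            // expressed as a typed query instead of hand-built CAML.
            var filtered = from i in items
                           where (i.Title.Contains("A") && i.Title.Contains("B"))
                                 || i.Title.Contains("C")
                           select i;

            foreach (var i in filtered)
                Console.WriteLine(i.Title);
        }
    }
}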


  • 12c - flashforward, flashback or see it as of now...

    - by noreply(at)blogger.com (Thomas Kyte)
Oracle 9i exposed flashback query to developers for the first time.  The ability to flashback query dates back to version 4, however (it just wasn't exposed).  Every time you run a query in Oracle it is in fact a flashback query - it is what multi-versioning is all about.

However, there was never a flashforward query (well, ok, the workspace manager has this capability - but with lots of extra baggage).  We've never been able to ask a table "what will you look like tomorrow" - but now we can.

The capability is called Temporal Validity.  If you have a table with data that is effective dated - that has a "start date" and "end date" column in it - we can now query it using flashback-query-like syntax.  The twist is that the date we "flash back" to can be in the future.  It works by rewriting the query to transparently add the necessary where clause and filter out the right rows for the right period of time - and since you can have records whose start date is in the future, you can query a table and see what it would look like at some future time.

Here is a quick example; we'll start with a table:

ops$tkyte%ORA12CR1> create table addresses
  2  ( empno       number,
  3    addr_data   varchar2(30),
  4    start_date  date,
  5    end_date    date,
  6    period for valid(start_date,end_date)
  7  )
  8  /

Table created.

The new bit is on line 6 (it can be altered into an existing table - so any table you have with a start/end date column will be a candidate).  The keyword is PERIOD; valid is an identifier I chose - it could have been foobar, valid just sounds nice in the query later.  You identify the columns in your table - or we can create them for you if they don't exist.  Then you just create some data:

ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
  2  values ( 1234, '123 Main Street', trunc(sysdate-5), trunc(sysdate-2) );

1 row created.

ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
  2  values ( 1234, '456 Fleet Street', trunc(sysdate-1), trunc(sysdate+1) );

1 row created.

ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
  2  values ( 1234, '789 1st Ave', trunc(sysdate+2), null );

1 row created.

and you can either see all of the data:

ops$tkyte%ORA12CR1> select * from addresses;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 123 Main Street                27-JUN-13 30-JUN-13
      1234 456 Fleet Street               01-JUL-13 03-JUL-13
      1234 789 1st Ave                    04-JUL-13

or query "as of" some point in time - as you can see in the predicate section, it is just doing a query rewrite to automate the "where" filters:

ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate-3;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 123 Main Street                27-JUN-13 30-JUN-13

ops$tkyte%ORA12CR1> @plan
ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  cthtvvm0dxvva, child number 0
-------------------------------------
select * from addresses as of period for valid sysdate-3

Plan hash value: 3184888728

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((("T"."START_DATE" IS NULL OR
              "T"."START_DATE"<=SYSDATE@!-3) AND ("T"."END_DATE" IS NULL OR
              "T"."END_DATE">SYSDATE@!-3)))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

24 rows selected.

ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 456 Fleet Street               01-JUL-13 03-JUL-13

ops$tkyte%ORA12CR1> @plan
ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  26ubyhw9hgk7z, child number 0
-------------------------------------
select * from addresses as of period for valid sysdate

Plan hash value: 3184888728

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((("T"."START_DATE" IS NULL OR
              "T"."START_DATE"<=SYSDATE@!) AND ("T"."END_DATE" IS NULL OR
              "T"."END_DATE">SYSDATE@!)))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

24 rows selected.

ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate+3;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 789 1st Ave                    04-JUL-13

ops$tkyte%ORA12CR1> @plan
ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  36bq7shnhc888, child number 0
-------------------------------------
select * from addresses as of period for valid sysdate+3

Plan hash value: 3184888728

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((("T"."START_DATE" IS NULL OR
              "T"."START_DATE"<=SYSDATE@!+3) AND ("T"."END_DATE" IS NULL OR
              "T"."END_DATE">SYSDATE@!+3)))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

24 rows selected.

All in all, a nice, easy way to query effective-dated information as of a point in time without a complex where clause.  You need to maintain the data - it isn't that a delete will turn into an update that end-dates a record or anything like that - but if you have tables with start/end dates, this will make it much easier to query them.
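To make the rewrite concrete: the predicate sections above spell out exactly the filter the feature generates, so the sysdate+3 query could have been written by hand as shown below. This is a sketch of the equivalent query, not taken from the post; the ALTER TABLE line follows the note that the period can be added to an existing table, and the exact syntax there is my assumption.

-- Hand-written equivalent of: select * from addresses as of period for valid sysdate+3
-- (this is the where clause Temporal Validity generates, per the plan output above)
select *
  from addresses
 where (start_date is null or start_date <= sysdate+3)
   and (end_date   is null or end_date   >  sysdate+3);

-- And, per the note that the period can be altered into an existing table,
-- something along these lines for a table that already has the columns
-- (assumed syntax, not shown in the post):
-- alter table existing_table add period for valid(start_date, end_date);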

