Search Results

Search found 415 results on 17 pages for 'predicate'.

  • c# Lambda Expression built with LinqKit does not compile

    - by Frank Michael Kraft
    This lambda does not compile, but I do not understand why. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Linq.Expressions; using LinqKit; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { var barModel = new BarModel(); string id = "some"; Console.WriteLine(barModel.subFor(id).ToString()); // output: m => (True AndAlso (m.key == value(ConsoleApplication2.Bar`1+<>c__DisplayClass0[ConsoleApplication2.Model]).id)) Console.ReadKey(); var subworkitems = barModel.list.Where(barModel.subFor(id).Compile()); // Exception {"variable 'm' of type 'ConsoleApplication2.Model' referenced from scope '', but it is not defined"} Console.WriteLine(subworkitems.ToString()); Console.ReadKey(); } } class Bar<TModel> { public Bar(Expression<Func<TModel, string>> foreignKeyExpression) { _foreignKeyExpression = foreignKeyExpression; } private Expression<Func<TModel, string>> _foreignKeyExpression { get; set; } public Expression<Func<TModel, bool>> subFor(string id) { var ex = forTargetId(id); return ex; } public Expression<Func<TModel, bool>> forTargetId(String id) { var fc = _foreignKeyExpression; Expression<Func<TModel, bool>> predicate = m => true; var result = predicate.And(m => fc.Invoke(m) == id).Expand(); return result; } } class Model { public string key; public string value; } class BarModel : Bar<Model> { public List<Model> list; public BarModel() : base(m => m.key) { list = new List<Model>(); } } }
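    For readers hitting the same exception: it is the standard symptom of an expression tree that references a ParameterExpression instance which no enclosing lambda binds, which is exactly what an unexpanded Invoke node leaves behind when the tree is compiled. A minimal stand-alone repro (no LinqKit; names are illustrative, and this shows the mechanism rather than the exact fix for the code above):

        using System;
        using System.Linq.Expressions;

        class ScopeRepro
        {
            static void Main()
            {
                Expression<Func<int, bool>> inner = m => m > 0;

                // A new parameter that happens to share the name "m" but is a
                // different ParameterExpression instance to the one inner.Body uses.
                var p = Expression.Parameter(typeof(int), "m");
                var broken = Expression.Lambda<Func<int, bool>>(inner.Body, p);

                // Throws InvalidOperationException: variable 'm' of type
                // 'System.Int32' referenced from scope '', but it is not defined.
                broken.Compile();
            }
        }

    The practical takeaway is to make sure Expand() has rewritten away every Invoke/foreign parameter before Compile() runs on the final tree.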

    Read the article

  • How to get properties from Cassandra with get_slice in Erlang?

    - by Zubair
    I am using Erlang to interface with Cassandra and I cannot get the get_slice command to return a list of all the columns of a row. I use: X = thrift_client:call( C, 'get_slice', [ "Keyspace1", K, #columnParent{column_family="KeyValue"}, #slicePredicate{}, 1 ] ), but I get back: invalidRequestException,<<"predicate column_names and slice_range may not both be null">> However, using the cassandra-cli interface this works fine. Any ideas?
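    The error message itself points at the likely fix: #slicePredicate{} leaves both column_names and slice_range unset (null). A hedged sketch, with record and field names assumed to match the generated cassandra_types.hrl used above (the Thrift SliceRange fields are start, finish, reversed, count):

        %% Ask for all columns of the row via an explicit slice range.
        Predicate = #slicePredicate{
            slice_range = #sliceRange{
                start    = <<>>,    %% empty binary = from the first column
                finish   = <<>>,    %% empty binary = to the last column
                reversed = false,
                count    = 100      %% upper bound on columns returned
            }
        },
        X = thrift_client:call(C, 'get_slice',
                               ["Keyspace1", K,
                                #columnParent{column_family = "KeyValue"},
                                Predicate, 1]),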

    Read the article

  • MySQL> Selecting from more tables (with same columns) without UNION

    - by Petr
    Hi, it is probably pretty simple but I cannot figure it out: say I have tables A and B, both with the same columns. I need to do SELECT * FROM A,B without having the results merged into one row, i.e. when each table has 2 rows, I need the result to have 4 rows. EDIT: I know about JOIN but don't know how to join the tables without a predicate. I need to merge them. Thanks
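    For the record, the standard tool for stacking rows from same-shaped tables is UNION ALL; unlike plain UNION it performs no duplicate-merging, so two 2-row tables yield exactly 4 rows:

        SELECT * FROM A
        UNION ALL          -- keeps every row; plain UNION would remove duplicates
        SELECT * FROM B;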

    Read the article

  • Is there an existing delegate in the .NET Framework for comparison?

    - by Neil Barnwell
    The .NET framework provides a few handy general-use delegates for common tasks, such as Predicate<T> and EventHandler<T>. Is there a built-in delegate for the equivalent of CompareTo()? The signature might be something like this: delegate int Comparison<T>(T x, T y); This is to implement sorting in such a way that I can provide a lambda expression for the actual sort routine (ListView.ListViewItemSorter, specifically), so any other approaches welcome.
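    As it happens, the framework already defines exactly this delegate: System.Comparison<T>, with the signature sketched above, and List<T>.Sort accepts it directly. A quick illustration:

        using System;
        using System.Collections.Generic;

        class ComparisonDemo
        {
            static void Main()
            {
                var items = new List<string> { "bb", "a", "ccc" };

                // A Comparison<string> supplied as a lambda: sort by length.
                items.Sort((x, y) => x.Length.CompareTo(y.Length));

                Console.WriteLine(string.Join(", ", items)); // a, bb, ccc
            }
        }

    Note that ListView.ListViewItemSorter is typed as the non-generic System.Collections.IComparer, so for that specific property you would still wrap the delegate in a small IComparer adapter class.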

    Read the article

  • prolog sets problem, stack overflow

    - by garm0nboz1a
    Hi. I'm going to show some code and ask: what could be optimized, and where am I stuck? sublist([], []). sublist([H | Tail1], [H | Tail2]) :- sublist(Tail1, Tail2). sublist(H, [_ | Tail]) :- sublist(H, Tail). less(X, X, _). less(X, Z, RelationList) :- member([X,Z], RelationList). less(X, Z, RelationList) :- member([X,Y], RelationList), less(Y, Z, RelationList), \+less(Z, X, RelationList). lessList(X, LessList, RelationList) :- findall(Y, less(X, Y, RelationList), List), list_to_set(List, L), sort(L, LessList), !. list_mltpl(List1, List2, List) :- findall(X, ( member(X, List1), member(X, List2)), List). chain([_], _). chain([H,T | Tail], RelationList) :- less(H, T, RelationList), chain([T|Tail], RelationList), !. have_inf(X1, X2, RelationList) :- lessList(X1, X1_cone, RelationList), lessList(X2, X2_cone, RelationList), list_mltpl(X1_cone, X2_cone, Cone), chain(Cone, RelationList), !. relations(List, E) :- findall([X1,X2], (member(X1, E), member(X2, E), X1 =\= X2), Relations), sublist(List, Relations). semilattice(List, E) :- forall( (member(X1, E), member(X2, E), X1 < X2), have_inf(X1, X2, List) ). main(E) :- relations(X, E), semilattice(X, E). I'm trying to model all possible graph sets of N elements. The predicate relations(List, E) connects the list of possible graphs (List) with the input set E. Then I describe the semilattice predicate to check the relations list for certain properties. So, here is what I have. 1) semilattice/2 works fast and cleanly ?- semilattice([[1,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]). true. ?- semilattice([[1,3],[1,4],[2,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]). false. 2) relations/2 does not work well ?- findall(X, relations(X,[1,2,3,4]), List), length(List, Len), writeln(Len),fail. 4096 false. ?- findall(X, relations(X,[1,2,3,4,5]), List), length(List, Len), writeln(Len),fail. ERROR: Out of global stack ^ Exception: (11) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G263, user:relations(_G263, [1, 2, 3, 4|...]), 17852886, _G268, []), _G835, '$bags':'$destroy_findall_bag'(17852886)) ? abort % Execution Aborted 3) Mixing them to find all possible semilattices does not work at all. ?- main([1,2]). ERROR: Out of local stack ^ Exception: (15) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G41, user:less(1, _G41, [[1, 2], [2, 1]]), 17852886, _G52, []), _G4767764, '$bags':'$destroy_findall_bag'(17852886)) ?
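    One concrete source of the blow-up above: less/3 re-derives paths through member/2 with no record of where it has already been, so a cyclic relation list such as [[1,2],[2,1]] sends it into unbounded recursion, and findall multiplies the cost. A hedged sketch of a terminating closure that threads a visited list (predicate names are mine):

        % less_acc(X, Z, Rel): Z is reachable from X through Rel without
        % revisiting a node, so it terminates even on cyclic relation lists.
        less_acc(X, Z, Rel) :- less_acc(X, Z, Rel, [X]).

        less_acc(X, Z, Rel, _Visited) :-
            member([X, Z], Rel).
        less_acc(X, Z, Rel, Visited) :-
            member([X, Y], Rel),
            \+ member(Y, Visited),
            less_acc(Y, Z, Rel, [Y|Visited]).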

    Read the article

  • Bulk update & occasional insert (coredata) - Too slow

    - by Andrew
    Update: Currently looking into NSSet's minusSet links: http://stackoverflow.com/questions/1475636/comparing-two-arrays Hi guys, I could benefit from your wisdom here. I'm using Core Data in my app; on first launch I download a data file and insert over 500 objects (each with 60 attributes) - fast, no problem. Each subsequent launch I download an updated version of the file, from which I need to update all existing objects' attributes (except maybe 5 attributes) and create new ones for items which have been added to the downloaded file. So, on first launch I get 500 objects.. say a week later my file now contains 507 items.. I create two arrays, one for existing and one for downloaded. NSArray *peopleArrayDownloaded = [CoreDataHelper getObjectsFromContext:@"person" :@"person_id" :YES :managedObjectContextPeopleTemp]; NSArray *peopleArrayExisting = [CoreDataHelper getObjectsFromContext:@"person" :@"person_id" :YES :managedObjectContextPeople]; If the count of each array is equal then I just do this: NSUInteger index = 0; if ([peopleArrayExisting count] == [peopleArrayDownloaded count]) { NSLog(@"Number of people downloaded is same as the number of people existing"); for (person *existingPerson in peopleArrayExisting) { person *tempPerson = [peopleArrayDownloaded objectAtIndex:index]; // NSLog(@"Updating id: %@ with id: %@",existingPerson.person_id,tempPerson.person_id); // I have 60 attributes which I need to update on each object; is there a quicker way other than overwriting the existing values? index++; } } else { NSLog(@"Number of people downloaded is different to number of players existing"); So now comes the slow part. I end up using this (which is tooooo slow): NSLog(@"Need people added to the league"); for (person *tempPerson in peopleArrayDownloaded) { NSPredicate *predicate = [NSPredicate predicateWithFormat:@"person_id = %@",tempPerson.person_id]; // NSLog(@"Searching for existing person, person_id: %@",existingPerson.person_id); NSArray *filteredArray = [peopleArrayExisting filteredArrayUsingPredicate:predicate]; if ([filteredArray count] == 0) { NSLog(@"Couldn't find an existing person in the downloaded file. Adding.."); person *newPerson = [NSEntityDescription insertNewObjectForEntityForName:@"person" inManagedObjectContext:managedObjectContextPeople]; Is there a way to generate a new array of index items referring to the additional items in my downloaded file? Incidentally, in my tableViews I'm using NSFetchedResultsController, so updating attributes will call [cell setNeedsDisplay]; .. about 60 times per cell, not a good thing, and it can crash the app. Thanks for reading :)
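    On the minusSet idea mentioned at the top: diffing the two sets of ids up front replaces the per-person NSPredicate scan (O(n*m)) with one set operation, which is essentially Apple's find-or-create pattern. A hedged sketch using the arrays already built above:

        // Collect the downloaded ids, subtract the existing ones; what is
        // left are the ids that genuinely need an insert.
        NSMutableSet *newIDs = [NSMutableSet setWithArray:
            [peopleArrayDownloaded valueForKey:@"person_id"]];
        [newIDs minusSet:
            [NSSet setWithArray:[peopleArrayExisting valueForKey:@"person_id"]]];

        for (person *tempPerson in peopleArrayDownloaded) {
            if ([newIDs containsObject:tempPerson.person_id]) {
                // insertNewObjectForEntityForName:... as in the code above
            }
        }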

    Read the article

  • Is there a neater way to get the first occurrence of something?

    - by Phil H
    I have a list which contains a number of things: lista = ['a', 'b', 'foo', 'c', 'd', 'e', 'bar'] I'd like to get the first item in the list that fulfils a predicate, say len(item) > 2. Is there a neater way to do it than itertools' dropwhile and next? first = next(itertools.dropwhile(lambda x: len(x) <= 2, lista)) I did use [item for item in lista if len(item)>2][0] at first, but that requires python to generate the entire list first.
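    For completeness, the usual idiom is next() over a generator expression, which stops at the first match without materialising a list and takes an optional default:

        lista = ['a', 'b', 'foo', 'c', 'd', 'e', 'bar']

        # Stops at the first match; no intermediate list is built.
        first = next(item for item in lista if len(item) > 2)           # 'foo'

        # Optional second argument avoids StopIteration when nothing matches.
        safe = next((item for item in lista if len(item) > 5), None)   # None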

    Read the article

  • If I never ever use HashSet, should I still implement GetHashCode?

    - by Dimitri C.
    I never need to store objects in a hash table. The reason is twofold: coming up with a good hash function is difficult and error-prone, and an AVL tree is almost always fast enough, merely requiring a strict order predicate, which is much easier to implement. The Equals() operation, on the other hand, is a very frequently used function. Therefore I wonder whether it is necessary to implement GetHashCode (which I never need) when implementing the Equals function (which I often need)?
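    Whatever the performance answer, overriding Equals without GetHashCode breaks the documented contract (equal objects must return equal hash codes) and earns compiler warning CS0659. A minimal sketch of keeping the contract cheaply, with a hypothetical type:

        class Point
        {
            public int X, Y;

            public override bool Equals(object obj)
            {
                Point other = obj as Point;
                return other != null && other.X == X && other.Y == Y;
            }

            // Combines exactly the fields Equals compares, so equal instances
            // always hash equally, even if no hash table is ever used.
            public override int GetHashCode()
            {
                return X ^ (Y << 16);
            }
        }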

    Read the article

  • SWI-Prolog: how to load rdf triples using semweb/rdf_db library?

    - by Li Li
    Hi, I have an RDF file (file.trp) in N-Triples format, where each line is: "subject predicate object ." I tried to use rdf_load in semweb/rdf_db to load it into memory, but failed. Here is what I tried: ?- rdf_load('file.trp'). ?- rdf_load('file.trp', [format(triples)]). The manual says it supports xml and triples, but it only loads RDF/XML files. How can I load such an RDF triples file? Thanks, Li
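    One gotcha worth flagging: in the semweb manual, "triples" is SWI-Prolog's own binary store format, not N-Triples. Since N-Triples is a syntactic subset of Turtle, loading through the Turtle reader is a reasonable route (a hedged sketch; option support varies across SWI-Prolog versions):

        ?- use_module(library(semweb/rdf_turtle)).
        ?- rdf_load('file.trp', [format(turtle)]).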

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
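    As a self-contained aside (no AdventureWorks required), the scalar-versus-vector behaviour over an empty set is easy to demonstrate:

        DECLARE @t table (id integer);

        SELECT COUNT_BIG(*) FROM @t;                -- scalar aggregate: one row, value 0
        SELECT COUNT_BIG(*) FROM @t GROUP BY id;    -- vector aggregate: no rows at all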
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to what sessions you have on your servers, what options you have selected and so forth, but if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2. For those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same situations – Where are the xel log files? What events, actions or predicates are set for the events on the server? What sessions are there on the server already? I got tired of this being a perpetual question and wrote some TSQL to save as a snippet in SQL Prompt so that these details are permanently only a couple of clicks away. First, some history. If you just came here for the code skip down a few paragraphs and it’s all there. If you want a little time to reminisce about SQL Server then stick with me through the next paragraph or two. We are in a bit of a cross-over period currently, there are many versions of SQL Server but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2 Extended Events made their first appearance and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed, and there is no reason not to as it will let you work with all versions of SQL Server, then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built in functionality to manage XE sessions – this functionality does not apply for 2008 or 2008 R2 instances though. This means you are somewhat restricted and have to use TSQL to manage XE sessions on older versions of SQL Server. OK, those of you that skipped ahead for the code, you need to start from here: So, you are working with SSMS 2012 but have a SQL Server that is an earlier version that needs an XE session created, or you think there is a session created but you aren’t sure, or you know it’s there but can’t remember if it is running and where the output is going. How do you find out? Well, none of the information is hidden as such, but it is a bit of a wrangle to locate, and it isn’t the sort of code that is likely to remain in your memory. I have created two pieces of code. The first examines the SYS.Server_Event_… management views in combination with the SYS.DM_XE_… management views to give the names of all sessions that exist on the server, regardless of whether they are running or not, along with two pieces of TSQL code for each session. One piece will alter the state of the session: if the session is running then the code will stop the session if executed, and vice versa. The other piece of code will drop the selected session. If the session is running then the code will stop it first. Do not execute the DROP code unless you are sure you have the Create code to hand. It will be dropped from the server without a second chance to change your mind. 
    /**************************************************************/ /***   To locate and describe event sessions on a server    ***/ /***                                                        ***/ /***   Generates TSQL to start/stop/drop sessions           ***/ /***                                                        ***/ /***        Jonathan Allen - @fatherjack                    ***/ /***                 June 2013                                ***/ /***                                                        ***/ /**************************************************************/ SELECT  [EES].[name] AS [Session Name - all sessions] ,         CASE WHEN [MXS].[name] IS NULL THEN ISNULL([MXS].[name], 'Stopped')              ELSE 'Running'         END AS SessionState ,         CASE WHEN [MXS].[name] IS NULL              THEN ISNULL([MXS].[name],                          'ALTER EVENT SESSION [' + [EES].[name]                          + '] ON SERVER STATE = START;')              ELSE 'ALTER EVENT SESSION [' + [EES].[name]                   + '] ON SERVER STATE = STOP;'         END AS ALTER_SessionState ,         CASE WHEN [MXS].[name] IS NULL              THEN ISNULL([MXS].[name],                          'DROP EVENT SESSION [' + [EES].[name]                          + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.')              ELSE 'ALTER EVENT SESSION [' + [EES].[name]                   + '] ON SERVER STATE = STOP; ' + CHAR(10)                   + '-- DROP EVENT SESSION [' + [EES].[name]                   + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'         END AS DROP_Session FROM    [sys].[server_event_sessions] AS EES         LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name] WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) ORDER BY SessionState GO I have excluded the system_health and AlwaysOn sessions as I don’t want to accidentally execute the drop script for these sessions that are created as part of the SQL Server installation. It is possible to recreate the sessions but that is a whole lot of aggravation I’d rather avoid. The second piece of code gathers details of running XE sessions only and provides information on the Events being collected, any predicates that are set on those events, the actions that are set to be collected, where the collected information is being logged and if that logging is to a file target, where that file is located. 
/**********************************************/ /***    Running Session summary                ***/ /***                                        ***/ /***    Details key values of XE sessions     ***/ /***    that are in a running state            ***/ /***                                        ***/ /***        Jonathan Allen - @fatherjack    ***/ /***        June 2013                        ***/ /***                                        ***/ /**********************************************/ SELECT  [EES].[name] AS [Session Name - running sessions] ,         [EESE].[name] AS [Event Name] ,         COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,         [EESA].[Action] AS [Event Action(s)] ,         [EEST].[Target] AS [Session Target(s)] ,         ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC] -- select * FROM    [sys].[server_event_sessions] AS EES         INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]         INNER JOIN [sys].[server_event_session_events] AS [EESE] ON [EES].[event_session_id] = [EESE].[event_session_id]         LEFT JOIN [sys].[server_event_session_fields] AS EESF ON ( [EES].[event_session_id] = [EESF].[event_session_id]                                                               AND [EESF].[name] = 'filename'                                                               )         CROSS APPLY ( SELECT    STUFF(( SELECT  ', ' + sest.name                                         FROM    [sys].[server_event_session_targets]                                                 AS SEST                                         WHERE   [EES].[event_session_id] = [SEST].[event_session_id]                                       FOR                                         XML PATH('')                                       ), 1, 2, '') AS [Target]                     ) AS EEST         CROSS APPLY ( SELECT    STUFF(( SELECT  ', ' + [sesa].NAME                                         FROM    [sys].[server_event_session_actions]                                                 AS sesa                                         WHERE   [sesa].[event_session_id] = [EES].[event_session_id]                                       FOR                                         XML PATH('')                                       ), 1, 2, '') AS [Action]                     ) AS EESA WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /*Optional to exclude 'out-of-the-box' traces*/ I hope that these scripts are useful to you and I would be obliged if you would keep my name in the script comments. I have no problem with you using it in production or personal circumstances, however it has no warranty or guarantee. Don’t use it unless you understand it and are happy with what it is going to do. I am not ever responsible for the consequences of executing this script on your servers.
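    For anyone who wants a session to point these scripts at, here is a hedged sketch of a minimal 2008-style session (the file path is illustrative; on 2008/2008 R2 the file target is named asynchronous_file_target):

        CREATE EVENT SESSION [DemoWaits] ON SERVER
        ADD EVENT sqlos.wait_info
            (ACTION (sqlserver.sql_text) WHERE (duration > 1000))
        ADD TARGET package0.asynchronous_file_target
            (SET filename = N'C:\XELogs\DemoWaits.xel');
        GO
        ALTER EVENT SESSION [DemoWaits] ON SERVER STATE = START;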

    Read the article

  • RIF PRD: Presentation syntax issues

    - by Charles Young
    Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammar) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard. The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows: ·      Parenthesis mismatch – the wrong number of parentheses are used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (‘the If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parentheses, this leads to the orphaning of UnknownStatusRule from the entire Document. ·      Invalid use of parenthesis in ‘Forall’ constructs. Parenthesis should not be used to enclose formulae. Removal of the invalid parenthesis gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parenthesis is not actually inconsistent in these two contexts, but in an If construct it ‘feels’ as if you are enclosing formulae in parenthesis in a LISP-like fashion. In reality, the parenthesis is simply being used to group subordinate syntax elements. The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency. ·      Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions: CURIE          ::= PNAME_LN                  | PNAME_NS PNAME_LN       ::= PNAME_NS PN_LOCAL PNAME_NS       ::= PN_PREFIX? ':' PN_LOCAL       ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)? PN_CHARS       ::= PN_CHARS_U                  | '-' | [0-9] | #x00B7                  | [#x0300-#x036F] | [#x203F-#x2040] PN_CHARS_U     ::= PN_CHARS_BASE                  | '_' PN_CHARS_BASE ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6]                  | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x037F-#x1FFF]                  | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF]                  | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD]                  | [#x10000-#xEFFFF] PN_PREFIX      ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)? The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! However, on (much) deeper inspection, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch! 
    This is so completely ambiguous that it surely calls into question the whole CURIE specification.   In any case, RIF does not allow local names without a prefix. ·      Missing ‘External’ specifiers for built-in functions and predicates.  The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once on the predicate in UnknownStatusRule. External() is required in several other places. ·      The List used on the LHS of UnknownStatusRule is comma-delimited. This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited. ·      Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant.   All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kifer stated in his paper “Rule Interchange Format: The Framework” that: “The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.” ·      The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar.   ·      The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’).   One way to fix this is to amend the PN_LOCAL production as follows: PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))? However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable. ·      I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive. ·      The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary. It seems inconsistent with the rest of the specification and can cause parsing issues. In addition, the Const production exhibits whitespace issues. The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE. Of course, I am nit-picking a little about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars.   I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc. A general observation, which repeats a point made above, is that the use of parenthesis in the presentation syntax can feel inconsistent and un-intuitive.   It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parenthesis, to delimit subordinate syntax elements in a similar way to so many programming languages. 
The familiarity of braces would communicate the structure of the syntax more clearly to people like me.  If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’) constructs in action blocks. This use of parenthesis feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax: Document(    Prefix( ex1 <http://example.com/2009/prd2> )    (* ex1:CheckoutRuleset *)  Group rif:forwardChaining (     (* ex1:GoldRule *)    Group 10 (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"Silver"])        (Forall ?shoppingCart such that ?customer[ex1:shoppingCart->?shoppingCart]           (If Exists ?value (And(?shoppingCart[ex1:value->?value]                                  External(pred:numeric-greater-than-or-equal(?value 2000))))            Then Do(Modify(?customer[ex1:status->"Gold"])))))      (* ex1:DiscountRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Or( ?customer[ex1:status->"Silver"]                ?customer[ex1:status->"Gold"])         Then Do ((?s ?customer[ex1:shoppingCart-> ?s])                  (?v ?s[ex1:value->?v])                  Modify(?s [ex1:value->External(func:numeric-multiply (?v 0.95))]))))      (* ex1:NewCustomerAndWidgetRule *)    Group (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"New"] )        (If Exists ?shoppingCart ?item                   (And(?customer[ex1:shoppingCart->?shoppingCart]                        ?shoppingCart[ex1:containsItem->?item]                        ?item # ex1:Widget ) )         Then Do( (?s ?customer[ex1:shoppingCart->?s])                  (?val ?s[ex1:value->?val])                  (?voucher ?customer[ex1:voucher->?voucher])                  Retract(?customer[ex1:voucher->?voucher])                  Retract(?voucher)                  Modify(?s[ex1:value->External(func:numeric-multiply(?val 0.90))]))))      (* ex1:UnknownStatusRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Not(Exists ?status                       (And(?customer[ex1:status->?status]                            External(pred:list-contains(List("New" "Bronze" "Silver" "Gold") ?status)) )))         Then Do( Execute(act:print(External(func:concat("New customer: " ?customer))))                  Assert(?customer[ex1:status->"New"]))))  ) )   I hope that helps someone out there :-)

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
    To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi
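    As a footnote, the INDEXPROPERTY route mentioned above needs the index name, so it reads most cleanly if the primary key is given one explicitly (a sketch with an assumed constraint name):

        -- Assumes: CREATE TABLE #Test (id integer CONSTRAINT PK_Test PRIMARY KEY CLUSTERED);
        SELECT INDEXPROPERTY(OBJECT_ID(N'tempdb..#Test', N'U'), N'PK_Test', 'IndexDepth');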

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article is as per a request from the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of a SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but still does not give the best performance. SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns. Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, a database backup in full recovery mode is taken as three different kinds of backup files. Full Database Backup Differential Database Backup Log Backup Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This also keeps the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and operational. Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When the Information Schema is used, we cannot discern between primary keys and foreign keys; we get both kinds of keys together. With the sys schema, we can query the data in our preferred way and join to other tables to retrieve additional data. Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table created by Oracle together with the data dictionary. It consists of exactly one column named “dummy” and one record, whose value is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual Identifying Statistics Used by Query Someone asked this question in my training class on query optimization and performance tuning.  
    “Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Error in SQL SERVER 2008? What is RAISEERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check that whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor? 
2012

Example of Width Sensitive and Width Insensitive Collation
Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) compare as not equal, the collation is width sensitive. In this example we have one table with two columns. One column has a width sensitive collation and the second column has a width insensitive collation.

Find Column Used in Stored Procedure – Search Stored Procedure for Column Name
A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, and both are having a conversation about how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2

SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage

Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video
In simple words: in many cases a database moves from one place to another. It is not always possible to back up and restore databases. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for schema and data and move it from one server to another.

INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1
I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of a column can be anywhere between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject?

Reference: Pinal Dave (http://blog.sqlauthority.com)
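
    To make the first 2007 entry concrete, here is a minimal sketch of the CASE-in-ORDER-BY technique; dbo.Customers and its columns are hypothetical stand-ins, not code from the original article.

    -- A minimal sketch, assuming SQL Server 2008+ syntax; table and columns are hypothetical.
    DECLARE @SortBy VARCHAR(10) = 'Name';

    SELECT CustomerID, Name, City
    FROM dbo.Customers
    ORDER BY
        CASE WHEN @SortBy = 'Name' THEN Name END,
        CASE WHEN @SortBy = 'City' THEN City END,
        -- Zero-pad the integer key so it still sorts correctly as a string.
        CASE WHEN @SortBy = 'Id'   THEN RIGHT(REPLICATE('0', 10) + CONVERT(VARCHAR(10), CustomerID), 10) END;

    One CASE expression per sortable column matters because a single CASE must return a single data type. And because the ORDER BY is fixed at compile time, the plan can be reused, avoiding the per-call recompilation that EXEC with a concatenated string causes.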

    Read the article

  • Getting a record from Core Data?

    - by Meko
    Hi. I am trying to add values to Core Data from a .plist, then fetch them from Core Data and show them in a UITableView. I managed to read the values from the .plist and send them to Core Data, but I can't fetch them back from the Core Data entity and put them into an NSMutableArray. Here is my code. Edit: SOLVED

NSString *path = [[NSBundle mainBundle] pathForResource:@"FakeData" ofType:@"plist"];
NSArray *myArray = [[NSArray alloc] initWithContentsOfFile:path];
FlickrFetcher *flickrfetcher = [FlickrFetcher sharedInstance];
managedObjectContext = [flickrfetcher managedObjectContext];
for (NSDictionary *dict in myArray) {
    Photo *newPhoto = (Photo *)[NSEntityDescription insertNewObjectForEntityForName:@"Photo" inManagedObjectContext:managedObjectContext];
    // set the attributes for the photo
    NSLog(@"Creating Photo: %@", [dict objectForKey:@"name"]);
    [newPhoto setName:[dict objectForKey:@"name"]];
    [newPhoto setPath:[dict objectForKey:@"path"]];
    NSLog(@"Person is: %@", [dict objectForKey:@"user"]);
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name ==%@", [dict objectForKey:@"user"]];
    NSMutableArray *peopleArray = (NSMutableArray *)[flickrfetcher fetchManagedObjectsForEntity:@"Person" withPredicate:predicate];
    NSEnumerator *enumerator = [peopleArray objectEnumerator];
    Person *person;
    BOOL exists = FALSE;
    while (person = [enumerator nextObject]) {
        NSLog(@" Person is: %@ ", person.name);
        if ([person.name isEqualToString:[dict objectForKey:@"user"]]) {
            exists = TRUE;
            NSLog(@"-- Person Exists : %@--", person.name);
            [newPhoto setPerson:person];
        }
    }
    // if the person does not already exist then add the person
    if (!exists) {
        Person *newPerson = (Person *)[NSEntityDescription insertNewObjectForEntityForName:@"Person" inManagedObjectContext:managedObjectContext];
        // create the person
        [newPerson setName:[dict objectForKey:@"user"]];
        NSLog(@"-- Person Created : %@--", newPerson.name);
        // associate the person with the photo
        [newPhoto setPerson:newPerson];
    }
    // This is the array I want to save the values in and then show in the UITableView
    personList = [[NSMutableArray alloc] init];
    NSLog(@"lllll %@", [personList objectAtIndex:0]);
    // save the data
    NSError *error;
    if (![managedObjectContext save:&error]) {
        // Handle the error.
        NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
        exit(-1);
    }
}
// To access core data objects you can try the following:
// setup our fetchedResultsController
fetchedResultsController = [flickrfetcher fetchedResultsControllerForEntity:@"Person" withPredicate:nil];
// execute the fetch
NSError *error;
BOOL success = [fetchedResultsController performFetch:&error];
if (!success) {
    // Handle the error.
    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
    exit(-1);
}
// save our data
// HERE IS THE PROBLEM: I DON'T KNOW WHERE IT IS SAVING
// [personList addObject:(NSMutableArray *)[fetchedResultsController fetchedObjects]];
[self setPersonList:(NSMutableArray *)[fetchedResultsController fetchedObjects]];
}

Here is my table view function:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    NSLog(@"ss %d", [personList count]);
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
    }
    cell.textLabel.text = [personList objectAtIndex:indexPath.row];
    // Set up the cell...
    return cell;
}

    Read the article

  • Objective-C woes: cellForRowAtIndexPath crashes.

    - by Mr. McPepperNuts
    I want the user to be able to search for a record in a DB. The fetch and the results returned work perfectly. I am having a hard time getting the UITableView to display the result, though. The application continually crashes at cellForRowAtIndexPath. Please, someone help before I have a heart attack over here. Thank you.

@implementation SearchViewController

@synthesize mySearchBar;
@synthesize textToSearchFor;
@synthesize myGlobalSearchObject;
@synthesize results;
@synthesize tableView;
@synthesize tempString;

#pragma mark -
#pragma mark View lifecycle

- (void)viewDidLoad {
    [super viewDidLoad];
}

#pragma mark -
#pragma mark Table View

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    //handle selection; push view
}

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    /*
    if (nullResulSearch == TRUE) {
        return 1;
    } else {
        return [results count];
    }
    */
    return [results count];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return 1; // Test hack to display multiple rows.
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Search Cell Identifier";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue2 reuseIdentifier:CellIdentifier] autorelease];
    }
    NSLog(@"TEMPSTRING %@", tempString);
    cell.textLabel.text = tempString;
    return cell;
}

#pragma mark -
#pragma mark Memory management

- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
}

- (void)viewDidUnload {
    self.tableView = nil;
}

- (void)dealloc {
    [results release];
    [mySearchBar release];
    [textToSearchFor release];
    [myGlobalSearchObject release];
    [super dealloc];
}

#pragma mark -
#pragma mark Search Function & Fetch Controller

- (NSManagedObject *)SearchDatabaseForText:(NSString *)passdTextToSearchFor {
    NSManagedObject *searchObj;
    UndergroundBaseballAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
    NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext;
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name == [c]%@", passdTextToSearchFor];
    NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry" inManagedObjectContext:managedObjectContext];
    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
    [request setSortDescriptors:sortDescriptors];
    [request setEntity:entity];
    [request setPredicate:predicate];
    NSError *error;
    results = [managedObjectContext executeFetchRequest:request error:&error];
    if ([results count] == 0) {
        NSLog(@"No results found");
        searchObj = nil;
        nullResulSearch == TRUE;
    } else {
        if ([[[results objectAtIndex:0] name] caseInsensitiveCompare:passdTextToSearchFor] == 0) {
            NSLog(@"results %@", [[results objectAtIndex:0] name]);
            searchObj = [results objectAtIndex:0];
            nullResulSearch == FALSE;
        } else {
            NSLog(@"No results found");
            searchObj = nil;
            nullResulSearch == TRUE;
        }
    }
    [tableView reloadData];
    [request release];
    [sortDescriptors release];
    return searchObj;
}

- (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar {
    textToSearchFor = mySearchBar.text;
    NSLog(@"textToSearchFor: %@", textToSearchFor);
    myGlobalSearchObject = [self SearchDatabaseForText:textToSearchFor];
    NSLog(@"myGlobalSearchObject: %@", myGlobalSearchObject);
    tempString = [myGlobalSearchObject valueForKey:@"name"];
    NSLog(@"tempString: %@", tempString);
}

@end

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UILongPressGestureRecognizer isEqualToString:]: unrecognized selector sent to instance 0x3d46c20'

    Read the article

  • NSFetchedResultsController: using NSManagedObjectContext during update causes a crash

    - by Kentzo
    Here is the interface of my controller class:

@interface ProjectListViewController : UITableViewController <NSFetchedResultsControllerDelegate> {
    NSFetchedResultsController *fetchedResultsController;
    NSManagedObjectContext *managedObjectContext;
}
@end

I use the following code to init fetchedResultsController:

if (fetchedResultsController != nil) {
    return fetchedResultsController;
}
// Create and configure a fetch request with the Project entity.
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Project" inManagedObjectContext:managedObjectContext];
[fetchRequest setEntity:entity];
// Create the sort descriptors array.
NSSortDescriptor *projectIdDescriptor = [[NSSortDescriptor alloc] initWithKey:@"projectId" ascending:YES];
NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:projectIdDescriptor, nil];
[fetchRequest setSortDescriptors:sortDescriptors];
// Create and initialize the fetch results controller.
NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:nil];
self.fetchedResultsController = aFetchedResultsController;
fetchedResultsController.delegate = self;

As you can see, I am using the same managedObjectContext as defined in my controller class. Here is my adoption of the NSFetchedResultsControllerDelegate protocol:

- (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
    // The fetch controller is about to start sending change notifications, so prepare the table view for updates.
    [self.tableView beginUpdates];
}

- (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath {
    UITableView *tableView = self.tableView;
    switch (type) {
        case NSFetchedResultsChangeInsert:
            [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade];
            break;
        case NSFetchedResultsChangeDelete:
            [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
            break;
        case NSFetchedResultsChangeUpdate:
            [self _configureCell:(TDBadgedCell *)[tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath];
            break;
        case NSFetchedResultsChangeMove:
            if (newIndexPath != nil) {
                [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
                [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade];
            } else {
                [tableView reloadSections:[NSIndexSet indexSetWithIndex:indexPath.section] withRowAnimation:UITableViewRowAnimationFade];
            }
            break;
    }
}

- (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type {
    switch (type) {
        case NSFetchedResultsChangeInsert:
            [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade];
            break;
        case NSFetchedResultsChangeDelete:
            [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade];
            break;
    }
}

- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
    [self.tableView endUpdates];
}

Inside the _configureCell:atIndexPath: method I have the following code:

NSFetchRequest *issuesNumberRequest = [NSFetchRequest new];
NSEntityDescription *issueEntity = [NSEntityDescription entityForName:@"Issue" inManagedObjectContext:managedObjectContext];
[issuesNumberRequest setEntity:issueEntity];
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"projectId == %@", project.projectId];
[issuesNumberRequest setPredicate:predicate];
NSUInteger issuesNumber = [managedObjectContext countForFetchRequest:issuesNumberRequest error:nil];
[issuesNumberRequest release];

I am using the managedObjectContext again. But when I try to insert a new Project, the app crashes with the following exception:

Assertion failure in -[UITableView _endCellAnimationsWithContext:], /SourceCache/UIKit_Sim/UIKit-984.38/UITableView.m:774
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (4) must be equal to the number of rows contained in that section before the update (4), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted).'

Fortunately, I've found a workaround: if I create and use a separate NSManagedObjectContext inside the _configureCell:atIndexPath: method, the app won't crash! I only want to know: is this behavior correct or not?

    Read the article

  • Objective-C: Scope problems in cellForRowAtIndexPath

    - by Mr. McPepperNuts
    How would I set each individual row in cellForRowAtIndexPath to the results of an array populated by a fetch request? (The fetch request is made when a button is pressed.)

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // ... set up cell code here ...
    cell.textLabel.text = [results objectAtIndex:indexPath valueForKey:@"name"];
}

warning: 'NSArray' may not respond to '-objectAtIndexPath:'

Edit:

- (NSArray *)SearchDatabaseForText:(NSString *)passdTextToSearchFor {
    NSManagedObject *searchObj;
    XYZAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
    NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext;
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name contains [cd] %@", passdTextToSearchFor];
    NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry" inManagedObjectContext:managedObjectContext];
    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
    [request setSortDescriptors:sortDescriptors];
    [request setEntity:entity];
    [request setPredicate:predicate];
    NSError *error;
    results = [managedObjectContext executeFetchRequest:request error:&error];
    // NSLog(@"results %@", results);
    if ([results count] == 0) {
        NSLog(@"No results found");
        searchObj = nil;
        self.tempString = @"No results found.";
    } else {
        if ([[[results objectAtIndex:0] name] caseInsensitiveCompare:passdTextToSearchFor] == 0) {
            NSLog(@"results %@", [[results objectAtIndex:0] name]);
            searchObj = [results objectAtIndex:0];
        } else {
            NSLog(@"No results found");
            self.tempString = @"No results found.";
            searchObj = nil;
        }
    }
    [tableView reloadData];
    [request release];
    [sortDescriptors release];
    return results;
}

- (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar {
    textToSearchFor = mySearchBar.text;
    results = [self SearchDatabaseForText:textToSearchFor];
    self.tempString = [myGlobalSearchObject valueForKey:@"name"];
    NSLog(@"results count: %d", [results count]);
    NSLog(@"results 0: %@", [[results objectAtIndex:0] name]);
    NSLog(@"results 1: %@", [[results objectAtIndex:1] name]);
}

@end

Console prints:

2010-06-10 16:11:18.581 XYZApp[10140:207] results count: 2
2010-06-10 16:11:18.581 XYZApp[10140:207] results 0: BB Bugs
2010-06-10 16:11:18.582 XYZApp[10140:207] results 1: BB Annie
Program received signal: “EXC_BAD_ACCESS”.
(gdb)

Edit 2: BT:

#0 0x95a91edb in objc_msgSend ()
#1 0x03b1fe20 in ?? ()
#2 0x0043cd2a in -[UITableViewRowData(UITableViewRowDataPrivate) _updateNumSections] ()
#3 0x0043ca9e in -[UITableViewRowData invalidateAllSections] ()
#4 0x002fc82f in -[UITableView(_UITableViewPrivate) _updateRowData] ()
#5 0x002f7313 in -[UITableView noteNumberOfRowsChanged] ()
#6 0x00301500 in -[UITableView reloadData] ()
#7 0x00008623 in -[SearchViewController SearchDatabaseForText:] (self=0x3d16190, _cmd=0xf02b, passdTextToSearchFor=0x3b29630)
#8 0x000086ad in -[SearchViewController searchBarSearchButtonClicked:] (self=0x3d16190, _cmd=0x16492cc, searchBar=0x3d2dc50)
#9 0x0047ac13 in -[UISearchBar(UISearchBarStatic) _searchFieldReturnPressed] ()
#10 0x0031094e in -[UIControl(Deprecated) sendAction:toTarget:forEvent:] ()
#11 0x00312f76 in -[UIControl(Internal) _sendActionsForEventMask:withEvent:] ()
#12 0x0032613b in -[UIFieldEditor webView:shouldInsertText:replacingDOMRange:givenAction:] ()
#13 0x01d5a72d in __invoking___ ()
#14 0x01d5a618 in -[NSInvocation invoke] ()
#15 0x0273fc0a in SendDelegateMessage ()
#16 0x033168bf in -[_WebSafeForwarder forwardInvocation:] ()
#17 0x01d7e6f4 in ___forwarding___ ()
#18 0x01d5a6c2 in __forwarding_prep_0___ ()
#19 0x03320fd4 in WebEditorClient::shouldInsertText ()
#20 0x0279dfed in WebCore::Editor::shouldInsertText ()
#21 0x027b67a5 in WebCore::Editor::insertParagraphSeparator ()
#22 0x0279d662 in WebCore::EventHandler::defaultTextInputEventHandler ()
#23 0x0276cee6 in WebCore::EventTargetNode::defaultEventHandler ()
#24 0x0276cb70 in WebCore::EventTargetNode::dispatchGenericEvent ()
#25 0x0276c611 in WebCore::EventTargetNode::dispatchEvent ()
#26 0x0279d327 in WebCore::EventHandler::handleTextInputEvent ()
#27 0x0279d229 in WebCore::Editor::insertText ()
#28 0x03320f4d in -[WebHTMLView(WebNSTextInputSupport) insertText:] ()
#29 0x0279d0b4 in -[WAKResponder tryToPerform:with:] ()
#30 0x03320a33 in -[WebView(WebViewEditingActions) _performResponderOperation:with:] ()
#31 0x03320990 in -[WebView(WebViewEditingActions) insertText:] ()
#32 0x00408231 in -[UIWebDocumentView insertText:] ()
#33 0x003ccd31 in -[UIKeyboardImpl acceptWord:firstDelete:addString:] ()
#34 0x003d2c8c in -[UIKeyboardImpl addInputString:fromVariantKey:] ()
#35 0x004d1a00 in -[UIKeyboardLayoutStar sendStringAction:forKey:] ()
#36 0x004d0285 in -[UIKeyboardLayoutStar handleHardwareKeyDownFromSimulator:] ()
#37 0x002b5bcb in -[UIApplication handleEvent:withNewEvent:] ()
#38 0x002b067f in -[UIApplication sendEvent:] ()
#39 0x002b7061 in _UIApplicationHandleEvent ()
#40 0x02542d59 in PurpleEventCallback ()
#41 0x01d55b80 in CFRunLoopRunSpecific ()
#42 0x01d54c48 in CFRunLoopRunInMode ()
#43 0x02541615 in GSEventRunModal ()
#44 0x025416da in GSEventRun ()
#45 0x002b7faf in UIApplicationMain ()
#46 0x00002578 in main (argc=1, argv=0xbfffef5c) at /Users/default/Documents/iPhone Projects/XYZApp/main.m:14

    Read the article

  • How do you remember programming-related stuff?

    - by dan leadgy
    How do you remember programming-related stuff? Have you ever had the feeling that you encountered your current error a few years ago, and you could swear you knew the cause then, but now you have forgotten it? Did you work with XSL's string parsing some time ago, but now you can't remember exactly which string functions XSL offers, and you have to start from scratch? Or perhaps you have forgotten some feature of Apache Commons, like "filtering a collection by some predicate", that you surely used in the past. So how do you do it? I tried having a blog, but when I develop apps I never find the time to update the blog or write about my experiences. Using a wiki is a nice thing too, but then I found it difficult to keep a clean separation between the two, since many times I needed to change a blog post to add new information about a topic. This made me think that I actually should have put that topic in the wiki instead of the blog. Do you have any system that helps you remember your programming experience? What's your setup?

    Read the article

  • Execution plan warnings – All that glitters is not gold

    - by Dave Ballantyne
    In a previous post, I showed you the new execution plan warnings related to implicit and explicit conversions. Pretty much as soon as I hit 'post', I noticed something rather odd happening. This statement:

select top(10) SalesOrderHeader.SalesOrderID, SalesOrderNumber
from Sales.SalesOrderHeader
join Sales.SalesOrderDetail on SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID

throws the “Type conversion may affect cardinality estimation” warning. I've done no such conversion in my statement, so why would that be? Well, SalesOrderNumber is a computed column, “(isnull(N'SO'+CONVERT([nvarchar](23),[SalesOrderID],0),N'*** ERROR ***'))”, so that's where the conversion is. Wait!!! Am I saying that every type conversion will throw the warning? Thankfully, no. It only appears for columns that are used in predicates, even if the predicate / join condition is fine, and the column is indexed (and/or, presumably, has statistics). Hopefully this won't lead to too many wild goose chases, but it is definitely something to bear in mind. If you want to see this fixed then upvote my connect item here.
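
    For readers without AdventureWorks at hand, here is a minimal sketch of the same effect; dbo.OrderDemo and its columns are hypothetical stand-ins modelled on the computed column quoted above, not Dave's original schema.

    -- A computed column that embeds a CONVERT, modelled on AdventureWorks' SalesOrderNumber.
    CREATE TABLE dbo.OrderDemo
    (
        OrderID     INT IDENTITY(1,1) PRIMARY KEY,
        OrderNumber AS ISNULL(N'SO' + CONVERT(NVARCHAR(23), OrderID, 0), N'*** ERROR ***')
    );

    -- Selecting the computed column while joining on the underlying indexed column
    -- should, per the behaviour described above, surface the "Type conversion ...
    -- may affect CardinalityEstimate" warning in the SQL Server 2012 plan, even
    -- though the query text itself contains no conversion.
    SELECT TOP (10) o.OrderID, o.OrderNumber
    FROM dbo.OrderDemo AS o
    JOIN dbo.OrderDemo AS d ON o.OrderID = d.OrderID;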

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming/design, dependency injection involves a dependent consumer, a declaration of a component's dependencies defined as interface contracts, and an injector that creates instances of classes that implement a given dependency interface on request. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller; e.g. the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the inverse direction, can dependency injection be compared to using some kind of higher-order function?

    Read the article

  • Origin of common list-processing function names

    - by Heatsink
    Some higher-order functions for operating on lists or arrays have been repeatedly adopted or reinvented. The functions map, fold[l|r], and filter are found together in several programming languages, such as Scheme, ML, and Python, that don't seem to have a common ancestor. I'm going with these three names to keep the question focused. To show that the names are not universal, here is a sampling of names for equivalent functionality in other languages. C++ has transform instead of map and remove_if instead of filter (reversing the meaning of the predicate). Lisp has mapcar instead of map, remove-if-not instead of filter, and reduce instead of fold (Some modern Lisp variants have map but this appears to be a derived form.) C# uses Select instead of map and Where instead of filter. C#'s names came from SQL via LINQ, and despite the name changes, their functionality was influenced by Haskell, which was itself influenced by ML. The names map, fold, and filter are widespread, but not universal. This suggests that they were borrowed from an influential source into other contemporary languages. Where did these function names come from?

    Read the article

  • Filtering dates in a data view web part when using a web services data source

    - by Patrick Olurotimi Ige
    I was working on a data view web part recently and I had to filter the data based on dates. Since the data source was web services, I couldn't use the Offset which I blogged about earlier. When using web services to pull data in SharePoint Designer, you have to use XPath. So, for example, this is the SOAP row selection that populates the rows:

<xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row"/>

But you would need to add a predicate [] to filter the date nodes. So you can do something like this (marked in red):

<xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row[ddwrt:FormatDateTime(string(@ows_Created),1033,'yyyyMMdd') &gt;= ddwrt:FormatDateTime(string(substring-after($fd,'#')),1033,'yyyyMMdd')]"/>

For the filtering to work, you need to have the date formatted as yyyyMMdd, as above. One more thing you must have noticed is the $fd variable. This variable comes from a calculated column I created in the list, something like this: [Created]-2. So basically what the XPath is doing is: get me the data only where the Created date is greater than or equal to the Created date minus 2, i.e. two days less than the created date. Also note that when using web services in SharePoint Designer and trying to use the default filtering, you won't see "greater than" or "less than" in the comparison option list. :( Hope this helps.

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17  | Next Page >