Search Results

Search found 415 results on 17 pages for 'predicate'.


  • Can't obtain reference to EKReminder array retrieved from fetchRemindersMatchingPredicate

    - by Scionwest
    When I create an NSPredicate via EKEventStore's predicateForRemindersInCalendars: and pass it to EKEventStore's fetchRemindersMatchingPredicate:completion:, I can loop through the reminders array provided by the completion block, but when I try to store a reference to the reminders array, or copy the array into a local variable or instance variable, both arrays remain empty. The reminders array is never copied to them. This is the method I am using. In it, I create a predicate, pass it to the event store, and then loop through all of the reminders, logging their titles via NSLog. I can see the reminder titles at runtime thanks to NSLog, but the local arrayOfReminders object is empty. I also try to add each reminder to an instance variable of type NSMutableArray, but once I leave the completion block, the instance variable remains empty. Am I missing something here? Can someone please tell me why I can't grab a reference to all of the reminders for use throughout the app? I am not having any issues at all accessing and storing EKEvents, but for some reason I can't do it with EKReminders.

    - (void)findAllReminders {
        NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];
        __block NSArray *arrayOfReminders = [[NSArray alloc] init];
        [self.eventStore fetchRemindersMatchingPredicate:predicate completion:^(NSArray *reminders) {
            arrayOfReminders = [reminders copy]; // Does not work.
            for (EKReminder *reminder in reminders) {
                [self.remindersForTheDay addObject:reminder];
                NSLog(@"%@", reminder.title);
            }
        }];
        // Always = 0;
        if ([self.remindersForTheDay count]) {
            NSLog(@"Instance Variable has reminders!");
        }
        // Always = 0;
        if ([arrayOfReminders count]) {
            NSLog(@"Local Variable has reminders!");
        }
    }

    The eventStore getter is where I perform my instantiation and get access to the event store.

    - (EKEventStore *)eventStore {
        if (!_eventStore) {
            _eventStore = [[EKEventStore alloc] init];
            // respondsToSelector indicates iOS 6 support.
            if ([_eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) {
                // Request access to user calendar
                [_eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) {
                    if (granted) {
                        NSLog(@"iOS 6+ Access to EventStore calendar granted.");
                    } else {
                        NSLog(@"Access to EventStore calendar denied.");
                    }
                }];
                // Request access to user Reminders
                [_eventStore requestAccessToEntityType:EKEntityTypeReminder completion:^(BOOL granted, NSError *error) {
                    if (granted) {
                        NSLog(@"iOS 6+ Access to EventStore Reminders granted.");
                    } else {
                        NSLog(@"Access to EventStore Reminders denied.");
                    }
                }];
            } else {
                // iOS 5.x and lower support if selector is not supported
                NSLog(@"iOS 5.x < Access to EventStore calendar granted.");
            }
            for (EKCalendar *cal in self.calendars) {
                NSLog(@"Calendar found: %@", cal.title);
            }
            [_eventStore reset];
        }
        return _eventStore;
    }

    Lastly, just to show that I am initializing my remindersForTheDay instance variable using lazy instantiation:

    - (NSMutableArray *)remindersForTheDay {
        if (!_remindersForTheDay) _remindersForTheDay = [[NSMutableArray alloc] init];
        return _remindersForTheDay;
    }

    I've read through the Apple documentation and it doesn't provide any explanation that I can find to answer this. I read through the Blocks Programming docs and they state that you can access local and instance variables without issue from within a block, but for some reason the above code does not work.
Any help would be greatly appreciated, I've scoured Google for answers but have yet to get this figured out. Thanks everyone! Johnathon.
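    A note on the likely cause, with a minimal sketch (not the poster's code): fetchRemindersMatchingPredicate:completion: is asynchronous, so the completion block runs after findAllReminders has already returned — both count checks execute before the block has populated anything. (The [_eventStore reset] call in the getter also discards in-flight state.) The sketch below assumes a hypothetical -didFetchReminders: callback as the continuation point:

    - (void)findAllReminders {
        NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];
        [self.eventStore fetchRemindersMatchingPredicate:predicate
                                              completion:^(NSArray *reminders) {
            // The block fires later, on a background queue; hop to the main
            // queue before touching state shared with the UI.
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.remindersForTheDay setArray:reminders];
                [self didFetchReminders:reminders]; // hypothetical continuation point
            });
        }];
        // Anything placed here runs BEFORE the completion block, so both
        // count checks still see empty arrays.
    }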

    Read the article

  • LINQ Generic Query with inherited base class?

    - by sah302
    I am trying to write some generic LINQ queries for my entities, but am having issues doing the more complex things. Right now I am using an EntityDao class that has all my generics, and each of my object class Daos (such as AccomplishmentDao) inherits it. An example:

    using LCFVB.ObjectsNS;
    using LCFVB.EntityNS;

    namespace AccomplishmentNS
    {
        public class AccomplishmentDao : EntityDao<Accomplishment> {}
    }

    Now my EntityDao has the following code:

    using LCFVB.ObjectsNS;
    using LCFVB.LinqDataContextNS;

    namespace EntityNS
    {
        public abstract class EntityDao<ImplementationType> where ImplementationType : Entity
        {
            public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
            {
                ImplementationType entity = null;
                if (getProperty != null && getValue != null)
                {
                    //Nhibernate Example:
                    //ImplementationType entity = default(ImplementationType);
                    //entity = Me.session.CreateCriteria(Of ImplementationType)().Add(Expression.Eq(getProperty, getValue)).UniqueResult(Of InterfaceType)()
                    LCFDataContext lcfdatacontext = new LCFDataContext();

                    //Generic LINQ Query Here
                    lcfdatacontext.GetTable<ImplementationType>();
                    lcfdatacontext.SubmitChanges();
                    lcfdatacontext.Dispose();
                }
                return entity;
            }

            public bool insertRow(ImplementationType entity)
            {
                if (entity != null)
                {
                    //Nhibernate Example:
                    //Me.session.Save(entity, entity.Id)
                    //Me.session.Flush()
                    LCFDataContext lcfdatacontext = new LCFDataContext();

                    //Generic LINQ Query Here
                    lcfdatacontext.GetTable<ImplementationType>().InsertOnSubmit(entity);
                    lcfdatacontext.SubmitChanges();
                    lcfdatacontext.Dispose();
                    return true;
                }
                else
                {
                    return false;
                }
            }
        }
    }

    I have gotten the insertRow function working; however, I am not even sure how to go about doing getOneByValueOfProperty. The closest thing I could find on this site was: http://stackoverflow.com/questions/2157560/generic-linq-to-sql-query How can I pass in the column name and the value I am checking against generically using my current set-up? It seems from that link that it's impossible using a where predicate, because the entity class doesn't know what any of the properties are until I pass them in. Lastly, I need some way of setting a new object with the return type set to the implementation type. In NHibernate (which I am trying to convert from) it was simply this line that did it:

    ImplementationType entity = default(ImplementationType);

    However, default is an NHibernate command; how would I do this for LINQ? EDIT: getOne doesn't seem to work even when just going off the base class (this is a partial class of the auto-generated LINQ classes). I even removed the generics.
    I tried:

    namespace ObjectsNS
    {
        public partial class Accomplishment
        {
            public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)
            {
                Accomplishment entity = new Accomplishment();
                if (singleOrDefaultClause != null)
                {
                    LCFDataContext lcfdatacontext = new LCFDataContext();

                    //Generic LINQ Query Here
                    entity = lcfdatacontext.Accomplishments.SingleOrDefault(singleOrDefaultClause);
                    lcfdatacontext.Dispose();
                }
                return entity;
            }
        }
    }

    I get the following error:

    Error 1 Overload resolution failed because no accessible 'SingleOrDefault' can be called with these arguments: Extension method 'Public Function SingleOrDefault(predicate As System.Linq.Expressions.Expression(Of System.Func(Of Accomplishment, Boolean))) As Accomplishment' defined in 'System.Linq.Queryable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))'. Extension method 'Public Function SingleOrDefault(predicate As System.Func(Of Accomplishment, Boolean)) As Accomplishment' defined in 'System.Linq.Enumerable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean)'. 14 LCF

    Okay, no problem. I changed:

    public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)

    to:

    public Accomplishment getOneByWhereClause(Expression<Func<Accomplishment, bool>> singleOrDefaultClause)

    and the error goes away. Alright, but now when I try to call the method via:

    Accomplishment accomplishment = new Accomplishment();
    var result = accomplishment.getOneByWhereClause(x => x.Id = 4)

    it doesn't work; it says x is not declared. I also tried getOne, and various other Expression overloads. =(
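    For the generic getOneByValueOfProperty question above, one common approach — offered as a sketch under the assumption that LCFDataContext is a LINQ to SQL DataContext whose GetTable<T>() is available — is to build the predicate expression from the property name at runtime, which sidesteps the fact that the entity class doesn't know its properties in advance:

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
    {
        if (getProperty == null || getValue == null) return null;

        using (LCFDataContext lcfdatacontext = new LCFDataContext())
        {
            // Build the expression  x => x.<getProperty> == <getValue>  at runtime.
            ParameterExpression x = Expression.Parameter(typeof(ImplementationType), "x");
            MemberExpression property = Expression.Property(x, getProperty);
            // Convert handles boxed values and nullable property types.
            Expression value = Expression.Convert(Expression.Constant(getValue), property.Type);
            var predicate = Expression.Lambda<Func<ImplementationType, bool>>(
                Expression.Equal(property, value), x);

            return lcfdatacontext.GetTable<ImplementationType>().SingleOrDefault(predicate);
        }
    }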

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Introduction to Extended Events

    - by extended_events
    For those fighting with all the Extended Events terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view through some of the elements in Extended Events. Before we start, let's review the first Extended Events objects that we are going to use:

    - Events: The SQL Server code is populated with event calls that, by default, are disabled. Adding events to a session enables those event calls. Once enabled, they will execute the set of functionality defined by the session.
    - Target: This is an Extended Events object that can be used to log event information.

    It is also important to understand the following Extended Events concept:

    - Session: A server object created by the user that defines functionality to be executed every time a set of events happen.

    It's time to write a small "Hello World" using Extended Events. This will help understand the above terms. We will use:

    - Event sqlserver.error_reported: This event gets fired every time an error happens in the server.
    - Target package0.asynchronous_file_target: This target stores the event data on disk.
    - Session: We will create a session that sends all the error_reported events to the file target.

    Before we get started, a quick note: don't run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers.

    -- TRIES TO ELIMINATE PREVIOUS SESSIONS
    BEGIN TRY
        DROP EVENT SESSION test_session ON SERVER
    END TRY
    BEGIN CATCH
    END CATCH
    GO

    -- CREATES THE SESSION
    CREATE EVENT SESSION test_session ON SERVER
    ADD EVENT sqlserver.error_reported
    ADD TARGET package0.asynchronous_file_target
    -- CONFIGURES THE FILE TARGET
    (set filename = 'c:\temp\data1.xel', metadatafile = 'c:\temp\data1.xem')
    GO

    -- STARTS THE SESSION
    ALTER EVENT SESSION test_session ON SERVER STATE = START
    GO

    -- GENERATES AN ERROR
    RAISERROR (N'HELLO WORLD', -- Message text.
               1, -- Severity,
               1, 7, 3, N'abcde'); -- Other parameters
    GO

    -- STOPS LISTENING FOR THE EVENT
    ALTER EVENT SESSION test_session ON SERVER STATE = STOP
    GO

    -- REMOVES THE EVENT SESSION FROM THE SERVER
    DROP EVENT SESSION test_session ON SERVER
    GO

    -- READS THE EVENT DATA FROM THE FILE TARGET
    select CAST(event_data as XML) as event_data
    from sys.fn_xe_file_target_read_file('c:\temp\data1*.xel', 'c:\temp\data1*.xem', null, null)

    This query will output the event data with our first hello world in the Extended Events format:

    <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event>

    More on parsing event data in this post: Reading event data 101

    Now let's move on to the other three Extended Events objects:

    - Actions: These get executed before events are published (stored in buffers to be transferred to the targets). Currently they are used to capture additional data (like the T-SQL statement related to an event, the session, or the user) or to generate a mini dump.
    - Predicates: Logical expressions that specify whether an event fires (e.g.
    only listen to errors with a severity greater than 16). These are composed of two Extended Events objects:

      - Predicate comparators: Define an operator for a pair of values. Examples:
          Severity > 16
          error_message = 'Hello World!!'
      - Predicate sources: Values that can also be used by the predicates. They are generic data that isn't usually provided in the event (similar to the actions). Example:
          Sqlserver.username = 'Tintin'

    As logical expressions they can be combined using logical operators (and, or, not). Note: the pair always has to be an event field or predicate source first, and then a value.

    Let's do another small example. We will trigger several errors, but publish only the ones that have severity = 2 and an error message != 'FILTER'. To verify this we will use the action sql_text, which attaches the SQL statement to the event data:

    -- TRIES TO ELIMINATE PREVIOUS SESSIONS
    BEGIN TRY
        DROP EVENT SESSION test_session ON SERVER
    END TRY
    BEGIN CATCH
    END CATCH
    GO

    -- CREATES THE SESSION
    CREATE EVENT SESSION test_session ON SERVER
    ADD EVENT sqlserver.error_reported
        (ACTION (sqlserver.sql_text) WHERE severity = 2 and (not (message = 'filter')))
    ADD TARGET package0.asynchronous_file_target
    -- CONFIGURES THE FILE TARGET
    (set filename = 'c:\temp\data2.xel', metadatafile = 'c:\temp\data2.xem')
    GO

    -- STARTS THE SESSION
    ALTER EVENT SESSION test_session ON SERVER STATE = START
    GO

    -- THIS EVENT WILL BE FILTERED BECAUSE SEVERITY != 2
    RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde');
    GO
    -- THIS EVENT WILL BE FILTERED BECAUSE MESSAGE = 'FILTER'
    RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde');
    GO
    -- THIS ERROR WILL BE PUBLISHED
    RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
    GO

    -- STOPS LISTENING FOR THE EVENT
    ALTER EVENT SESSION test_session ON SERVER STATE = STOP
    GO

    -- REMOVES THE EVENT SESSION FROM THE SERVER
    DROP EVENT SESSION test_session ON SERVER
    GO

    -- READS THE EVENT DATA FROM THE FILE TARGET
    select CAST(event_data as XML) as event_data
    from sys.fn_xe_file_target_read_file('c:\temp\data2*.xel', 'c:\temp\data2*.xem', null, null)

    This last statement will output one event with the following data:

    <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z">
      <data name="error"><value>50000</value><text /></data>
      <data name="severity"><value>2</value><text /></data>
      <data name="state"><value>1</value><text /></data>
      <data name="user_defined"><value>true</value><text /></data>
      <data name="message"><value>PUBLISH</value><text /></data>
      <action name="sql_text" package="sqlserver">
        <value>-- THIS ERROR WILL BE PUBLISHED RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde'); </value>
        <text />
      </action>
    </event>

    If you see more events, check whether previous event files are still present. If they are, run

    -- Deletes previous event files
    EXEC SP_CONFIGURE
    GO
    EXEC SP_CONFIGURE 'xp_cmdshell', 1
    GO
    RECONFIGURE
    GO
    XP_CMDSHELL 'del c:\temp\data*.xe*'
    GO

    or delete them manually.

    More Info on Events: Extended Event Events More Info on Targets: Extended Event Targets More Info on Sessions: Extended Event Sessions More Info on Actions: Extended Event Actions More Info on Predicates: Extended Event Predicates

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates.  The power of writing very reusable generic classes brought the art of programming to a brand new level.  Unfortunately, when .NET 1.0 came about, it didn't have a template equivalent.  With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET.  However, C# generics behave in some ways very differently from their C++ template cousins.  There is a handy clause, however, that helps you navigate these waters to make your generics more powerful.

    The Problem – C# Assumes Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods/fields/operations invoked are valid until you declare a realization of the type.  Let me illustrate with a C++ example:

    // compiles fine, C++ makes no assumptions as to T
    template <typename T>
    class ReverseComparer
    {
    public:
        int Compare(const T& lhs, const T& rhs)
        {
            return rhs.CompareTo(lhs);
        }
    };

    Notice that we are invoking a method CompareTo() off of template type T.  Because we don't know at this point what type T is, C++ makes no assumptions and there are no errors.  C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#:

    // this will NOT compile! C# assumes lowest common denominator.
    public class ReverseComparer<T>
    {
        public int Compare(T lhs, T rhs)
        {
            return lhs.CompareTo(rhs);
        }
    }

    So why does C# give us a compiler error even when we don't yet know what type T is?  This is because C# took a different path in how it implemented generics.  Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn't say T is an object).  That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available on the lowest common denominator type: object.  Now, while object has the broadest applicability, it also has the fewest specific members.  So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type With the Where Clause

    So how do we get around this in C#?  The answer is to constrain the generic type placeholder with the where clause.  Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support.  You might think that narrowing the scope of a generic means a weaker generic.  In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types.  In effect these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

    Constraining a Generic Type to an Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or be inherited from a certain base class.
    For example, you can't call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

    public class ReverseComparer<T>
        where T : IComparable<T>
    {
        public int Compare(T lhs, T rhs)
        {
            return lhs.CompareTo(rhs);
        }
    }

    Now that we've constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well.  This means that the call to CompareTo() is now legal.  Also, if you constrain your type, you will get compiler errors if you attempt to use a type that doesn't meet the constraint.  This is much better than the syntax error you would get within the C++ template code itself when you used a type not supported by a C++ template.

    Constraining a Generic Type to Only Reference Types

    Sometimes you want to assign an instance of a generic type to null, but you can't do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless.  Well, we can fix this by specifying the class constraint in the where clause.  By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

    public static class ObjectExtensions
    {
        public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
            where TOut : class
            where TIn : class
        {
            return (value != null) ? accessor(value) : null;
        }
    }

    In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line.  To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there's not a direct constraint for those).

    Constraining a Generic Type to Only Value Types

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type.  To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.).  Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or null if the instance is null:

    public static T? ConvertToNullable<T>(IConvertible value)
        where T : struct
    {
        T? result = null;

        if (value != null)
        {
            result = (T)Convert.ChangeType(value, typeof(T));
        }

        return result;
    }

    Because T was constrained to be a value type, we can use T? (System.Nullable<T>), where we could not do this if T was a reference type.

    Constraining a Generic Type to Require a Default Constructor

    You can also constrain a type to require the existence of a default constructor.  Because by default C# doesn't know what constructors a generic type placeholder does or does not have available, it can't typically allow you to call one.  That said, if you give it the new() constraint, it will mean that the type used to realize the generic type must have a default (no-argument) constructor.  Let's assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.
    Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

    // Given a set of Action<TFrom,TTo> mappings will map TFrom to TTo
    public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
        where TTo : class, new()
    {
        // The list of translations from TFrom to TTo
        public List<Action<TFrom, TTo>> Translations { get; private set; }

        // Construct with empty translation and reverse translation sets.
        public Adapter()
        {
            // did this instead of auto-properties to allow simple use of initializers
            Translations = new List<Action<TFrom, TTo>>();
        }

        // Add a translator to the collection, useful for initializer list
        public void Add(Action<TFrom, TTo> translation)
        {
            Translations.Add(translation);
        }

        // Add a translator that first checks a predicate to determine if the translation
        // should be performed, then translates if the predicate returns true
        public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
        {
            Translations.Add((from, to) =>
            {
                if (conditional(from))
                {
                    translation(from, to);
                }
            });
        }

        // Translates an object forward from TFrom object to TTo object.
        public TTo Adapt(TFrom sourceObject)
        {
            var resultObject = new TTo();

            // Process each translation
            Translations.ForEach(t => t(sourceObject, resultObject));

            return resultObject;
        }

        // Returns an enumerator that iterates through the collection.
        public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
        {
            return Translations.GetEnumerator();
        }

        // Returns an enumerator that iterates through a collection.
        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

    Notice, however, you can't specify any other constructor; you can only specify that the type has a default (no-argument) constructor.

    Summary

    The where clause is an excellent tool that gives your .NET generics even more power to perform tasks higher than just the base "object level" behavior.  There are a few things you cannot specify with constraints (currently) though:

    - Cannot specify the generic type must be an enum.
    - Cannot specify the generic type must have a certain property or method without specifying a base class or interface – that is, you can't say that the generic must have a Start() method.
    - Cannot specify that the generic type allows arithmetic operations.
    - Cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a template definition with different, opposing constraints.  For example you can't define a Adapter<T> where T : struct and Adapter<T> where T : class.  Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user friendly and more powerful!

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics

    Read the article

  • C# PredicateBuilder Entities: The parameter 'f' was not bound in the specified LINQ to Entities query

    - by Neothor
    I needed to build a dynamic filter and I wanted to keep using entities. For this reason I wanted to use the PredicateBuilder from Albahari. I created the following code:

    var invoerDatums = PredicateBuilder.True<OnderzoeksVragen>();
    var inner = PredicateBuilder.False<OnderzoeksVragen>();

    foreach (var filter in set.RapportInvoerFilter.ToList())
    {
        if (filter.IsDate)
        {
            var date = DateTime.Parse(filter.Waarde);
            invoerDatums = invoerDatums.Or(o => o.Van >= date && o.Tot <= date);
        }
        else
        {
            string temp = filter.Waarde;
            inner = inner.Or(o => o.OnderzoekType == temp);
        }
    }

    invoerDatums = invoerDatums.And(inner);

    var onderzoeksVragen = entities.OnderzoeksVragen
        .AsExpandable()
        .Where(invoerDatums)
        .ToList();

    When I ran the code there was only one filter, which wasn't a date filter, so only the inner predicate was filled. When the predicate was executed I got the following error:

    The parameter 'f' was not bound in the specified LINQ to Entities query expression.

    While searching for an answer I found the following page, but the fix it describes is already implemented in LINQKit. Has anyone else experienced this error, and do you know how to solve it?
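    A commonly suggested workaround for this error — offered as a sketch, not a verified fix — is to call LINQKit's Expand() on the composed predicates, so the Invoke() nodes that PredicateBuilder's And/Or insert are inlined before LINQ to Entities translates the expression:

    using LinqKit;

    // Expand() rewrites the Invoke() nodes introduced by And()/Or(),
    // so LINQ to Entities never sees an unbound lambda parameter.
    invoerDatums = invoerDatums.And(inner.Expand());

    var onderzoeksVragen = entities.OnderzoeksVragen
        .AsExpandable()
        .Where(invoerDatums.Expand())
        .ToList();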

    Read the article

  • Extracting URIs from RDF web page in Java using Jena Library

    - by Prannoy Mittal
    I have written the following code for extracting URIs from a web page with content type application/rdf+xml for a Linked Data application.

    public static void test(String url) {
        try {
            Model read = ModelFactory.createDefaultModel().read(url);
            System.out.println("to go");
            StmtIterator si;
            si = read.listStatements();
            System.out.println("to go");
            while (si.hasNext()) {
                Statement s = si.nextStatement();
                Resource r = s.getSubject();
                Property p = s.getPredicate();
                RDFNode o = s.getObject();
                System.out.println(r.getURI());
                System.out.println(p.getURI());
                System.out.println(o.asResource().getURI());
            }
        } catch (JenaException | NoSuchElementException c) {
        }
    }

    But the above code is not extracting all URIs; it provides only a few of them. Please guide me as to where I went wrong. For example, for this XML file:

    <?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:ex="http://example.org/stuff/1.0/">
      <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar"
                       dc:title="RDF/XML Syntax Specification (Revised)">
        <ex:editor>
          <rdf:Description ex:fullName="Dave Beckett">
            <ex:homePage rdf:resource="http://purl.org/net/dajobe/" />
          </rdf:Description>
        </ex:editor>
      </rdf:Description>
    </rdf:RDF>

    the output is:

    Subject URI is http://www.w3.org/TR/rdf-syntax-grammar
    Predicate URI is http://example.org/stuff/1.0/editor
    Object URI is null
    Subject URI is http://www.w3.org/TR/rdf-syntax-grammar
    Predicate URI is http://purl.org/dc/elements/1.1/title
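    One plausible explanation, given the empty catch block: o.asResource() throws a JenaException when the statement's object is a literal (such as the dc:title string), and since the exception is swallowed, the loop simply ends early — which would produce exactly the truncated output shown. A sketch using Jena's node-type checks:

    // Sketch: guard asResource() so literal objects don't abort the loop.
    while (si.hasNext()) {
        Statement s = si.nextStatement();
        System.out.println("Subject URI is " + s.getSubject().getURI());
        System.out.println("Predicate URI is " + s.getPredicate().getURI());
        RDFNode o = s.getObject();
        if (o.isResource()) {
            // Still null for blank nodes, which have no URI.
            System.out.println("Object URI is " + o.asResource().getURI());
        } else {
            System.out.println("Object literal is " + o.asLiteral().getString());
        }
    }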

    Read the article

  • Adding objects to a relationship does not work using MagicalRecord saveWithBlock

    - by yong ho
    The code that performs the save block:

    [MagicalRecord saveWithBlock:^(NSManagedObjectContext *localContext) {
        for (NSDictionary *stockDict in objects) {
            NSString *name = stockDict[@"name"];
            Stock *stock = [Stock MR_createInContext:localContext];
            stock.name = name;
            NSArray *categories = stockDict[@"categories"];
            if ([categories count] > 0) {
                for (NSDictionary *categoryObject in categories) {
                    NSString *categoryId = categoryObject[@"_id"];
                    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"categoryId == %@", categoryId];
                    NSArray *matches = [StockCategory MR_findAllWithPredicate:predicate inContext:localContext];
                    NSLog(@"%@", matches);
                    if ([matches count] > 0) {
                        StockCategory *cat = [matches objectAtIndex:0];
                        [stock addCategoriesObject:cat];
                    }
                }
            }
        }
    } completion:^(BOOL success, NSError *error) {
    }];

    The Stock model:

    @class StockCategory;

    @interface Stock : NSManagedObject
    @property (nonatomic, retain) NSString *name;
    @property (nonatomic, retain) NSSet *categories;
    @end

    @interface Stock (CoreDataGeneratedAccessors)
    - (void)addCategoriesObject:(StockCategory *)value;
    - (void)removeCategoriesObject:(StockCategory *)value;
    - (void)addCategories:(NSSet *)values;
    - (void)removeCategories:(NSSet *)values;
    @end

    The JSON looks like this:

    [ { "name": "iPad mini ", "categories": [ { "name": "iPad", "_id": "538c655fae9b3e1502fc5c9e", "__v": 0, "createdDate": "2014-06-02T11:51:59.433Z" } ], }, { "name": "iPad Air ", "categories": [ { "name": "iPad", "_id": "538c655fae9b3e1502fc5c9e", "__v": 0, "createdDate": "2014-06-02T11:51:59.433Z" } ], } ]

    Opening the Core Data store, you can see that only the stock with the name "iPad Air" has its categories saved. I just can't figure out why. You can see in the saveWithBlock part that I first search the context for the same _id as in the JSON, and then add the category object to the relationship. It works, but not for all of them. Why is that?

    Read the article

  • Examples of monoids/semigroups in programming

    - by jkff
    It is well-known that monoids are stunningly ubiquitous in programming. They are so ubiquitous and so useful that I, as a 'hobby project', am working on a system that is completely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :) I already know of these:

    - Numeric or matrix sum
    - Numeric or matrix product
    - Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category)
    - Set union
    - Map union where conflicting values are joined using a monoid
    - Intersection of subsets of a finite set (or just set intersection if we speak about semigroups)
    - Intersection of maps with a bounded key domain (same here)
    - Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup
    - Bounded merge of sorted lists (same as above, but we take the top N of the result)
    - Cartesian product of two monoids or semigroups
    - List concatenation
    - Endomorphism composition

    Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length or with identical contents up to permutation to be equivalent. Here are some quasi-monoids and quasi-commutative monoids and semigroups:

    - Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent)
    - Any satisfying predicate (a+b = the one of a and b that is non-null and satisfies some predicate P; if none does, then null; if we consider all elements satisfying P equivalent)
    - Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent)
    - Bounded mixture of weighted random samples

    Which others exist?

    Read the article

  • Parameter based bindings in ninject 2.0

    - by Przemaas
    I want to use conditional binding in Ninject, based on passed parameters. I have something like below:

    public class Subject { }
    public interface ITarget { }
    public class Target1 : ITarget { }
    public class Target2 : ITarget { }

    And now I need to instantiate the ITarget interface:

    public void MethodName(IKernel kernel)
    {
        ITarget target1 = kernel.Get<ITarget>(new Parameter("name", new Subject(), true)); // Should be instance of Target1
        ITarget target2 = kernel.Get<ITarget>(); // Should be instance of Target2
    }

    I am having problems defining proper bindings. I tried the following:

    kernel.Bind<ITarget>().To<Target1>().When(Predicate);
    kernel.Bind<ITarget>().To<Target2>();

    private bool Predicate(IRequest request)
    {
        IParameter parameter = request.Parameters.Count == 0 ? null : request.Parameters[0];
        if (parameter == null)
        {
            return false;
        }
        object parameterValue = parameter.GetValue( /*what to put here?*/);
        return parameterValue != null && parameterValue.GetType().IsAssignableFrom(typeof(Subject));
    }

    but I don't know how to get the value of the passed parameter. I need to pass an IContext instance to the GetValue method, but I don't know how to get a valid instance of IContext. Or maybe there is a better way to accomplish my task? Regards
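    For this particular scenario there is a simpler route that avoids reading parameter values inside the When() predicate altogether — named bindings, which Ninject supports directly. A sketch (naming both bindings so each Get call is unambiguous):

    kernel.Bind<ITarget>().To<Target1>().Named("withSubject");
    kernel.Bind<ITarget>().To<Target2>().Named("default");

    public void MethodName(IKernel kernel)
    {
        ITarget target1 = kernel.Get<ITarget>("withSubject"); // instance of Target1
        ITarget target2 = kernel.Get<ITarget>("default");     // instance of Target2
    }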

    Read the article

  • Update a tableView with a plist taken from another table

    - by Pheel
    Background: I have a tab bar application which has a tableView as the "heart" of the app. It loads data from a plist and, through a button that checks if there are any updates on the remote plist file, updates the local plist with the remote contents. Then I have another tableView, which should display only those plist items that have a bool value set to YES. Now I want to add a button to the second table that reloads the plist taken from the first table.

    Expected: When I update the local plist from the first table and press the button on the second table, the second table is supposed to update and show the cells with that bool value set to YES. (Note: I set YES as a default on some items in the plist.)

    What happens: The first table updates its content from remote. The second table shows the old items with the value set to YES. When I press the button to refresh data, it reads the plist fine (by logging it, it has the same contents as the first table - only those set to YES), but it doesn't update its data even though I have [self.tableView reloadData];. When I close the app and open it again, the second table is filled with the right items. :\

    Code I'm using:

    //Reading Plist
    {
        NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
        NSString *plistPath = [[documentPaths lastObject] stringByAppendingPathComponent:@"myPlist.plist"];
        NSFileManager *fMgr = [NSFileManager defaultManager];
        if (![fMgr fileExistsAtPath:plistPath]) {
            plistPath = [[NSBundle mainBundle] pathForResource:@"myPlist" ofType:@"plist"];
        }
        NSMutableArray *returnArr = [NSMutableArray arrayWithContentsOfFile:plistPath];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"isFav == YES"];
        for (NSDictionary *sect in returnArr) {
            NSArray *arr = [sect objectForKey:@"Rows"];
            [sect setValue:[arr filteredArrayUsingPredicate:predicate] forKey:@"Rows"];
        }
        [self.tableView reloadData];
    }

    //Refresh data button
    - (void)refreshTable:(id)sender {
        NSLog(@"plist read");
        [self readPlist];
        NSLog(@"refreshed plist:%@", [self readPlist]);
        [self.tableView reloadData];
    }

    Does anyone know why the table is not updating?
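    One thing to check, based only on the code shown: readPlist builds its filtered result in a local returnArr but never assigns it to whatever array the table's data-source methods read from — and reloadData only re-asks the data source, so nothing visible changes until the view is rebuilt on the next launch. A minimal sketch, where tableData is a hypothetical backing property that numberOfRowsInSection: and cellForRowAtIndexPath: are assumed to read:

    // Sketch: hand the freshly filtered sections to the data source
    // before reloading; 'tableData' is a hypothetical backing property.
    NSMutableArray *returnArr = [NSMutableArray arrayWithContentsOfFile:plistPath];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"isFav == YES"];
    for (NSMutableDictionary *sect in returnArr) {
        [sect setValue:[[sect objectForKey:@"Rows"] filteredArrayUsingPredicate:predicate]
                forKey:@"Rows"];
    }
    self.tableData = returnArr;   // point the data source at the new content
    [self.tableView reloadData];  // now the reload has something new to show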

    Read the article

  • Batch faulting in a to-many relationship for a collection of objects

    - by indragie
    Scenario: Let's say I have an entity called Author that has a to-many relationship called books to the Book entity (inverse relationship author). If I have an existing collection of Author objects, I want to fault in the books relationship for all of them in a single fetch request. Code This is what I've tried so far: NSArray *authors = ... // array of `Author` objects NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Book"]; fetchRequest.returnsObjectsAsFaults = NO; fetchRequest.predicate = [NSPredicate predicateWithFormat:@"author IN %@", authors]; Executing this fetch request does not result in the books relationship of the objects in the authors array being faulted in (inspected via logging). I've also tried doing the fetch request the other way around: NSArray *authors = ... // array of `Author` objects NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Author"]; fetchRequest.returnsObjectsAsFaults = NO; fetchRequest.predicate = [NSPredicate predicateWithFormat:@"SELF IN %@", authors]; fetchRequest.relationshipKeypathsForPrefetching = @[@"books"]; This doesn't fire the faults either. What's the appropriate way of going about doing this?
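    One small thing worth double-checking in the second attempt: the NSFetchRequest property is spelled relationshipKeyPathsForPrefetching (capital "P" in "KeyPaths"), and prefetching applies to the objects the fetch actually returns. A sketch of the second approach with that spelling ('context' below stands in for the managed object context):

    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Author"];
    fetchRequest.predicate = [NSPredicate predicateWithFormat:@"SELF IN %@", authors];
    fetchRequest.returnsObjectsAsFaults = NO;
    fetchRequest.relationshipKeyPathsForPrefetching = @[@"books"];

    NSError *error = nil;
    // The 'books' relationship of each returned Author should now be prefetched.
    NSArray *refetched = [context executeFetchRequest:fetchRequest error:&error];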

    Read the article

  • Why is my jQuery event not being triggered when the button is clicked?

    - by Ankur
    I am trying to call the .ajax() method when a button with id=go is clicked. This button is displayed when another jQuery event is triggered, namely when a button with a class called predicate is clicked. The button with id=go is displayed without any problems, but on clicking it there is no call to the alert() or getResults() method, which are supposed to occur. However, if I hard-code the button with id=go into the page, rather than have it generated by jQuery, then the thing works fine. Why does the generated code cause a problem?

    $(document).ready(function() {
        $(".predicate").click(function() {
            makeQuery(this.id);
            //alert(this.id);
        });

        $(".object").click(function() {
            //alert(this.id);
        });

        var variables = 0;
        var queryString = "";
        var noOfResults;
        var formValues = "";
        var goBegin = "";
        var goEnd = "<br/><input name='go' id='go' type='button' value='go'/>";

        function makeQuery(value) {
            queryString = queryString + "val" + variables + "=" + value + "&";
            formValues = formValues + "<input type='hidden' name='val" + variables + "' value='" + value + "' />";
            variables = variables + 1;
            $("#resultCount").html(goBegin + formValues + queryString + goEnd);
        }

        function getResults() {
            $.ajax({
                type: "GET",
                url: "ObjectCount",
                data: queryString + "count=" + variables,
                success: function(results) {
                    noOfResults = results;
                    $("#resultCount").html(results - 1 + " results");
                }
            });
        }

        $("#go").click(function() {
            alert("We have been alerted");
            getResults();
        });
    });
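    The usual explanation for this symptom: $("#go").click(...) runs once at document-ready, before any #go element exists, so the handler is bound to nothing. Delegating the handler binds it to whatever matches the selector at the moment the click happens. A sketch:

    // Sketch: .live() (the jQuery 1.3-1.8 idiom) fires for a #go button
    // injected after document-ready; newer jQuery would use
    // $(document).on('click', '#go', ...) instead.
    $("#go").live('click', function() {
        alert("We have been alerted");
        getResults();
    });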

    Read the article

  • I get NaN when I try to insert some HTML into a DIV element with jQuery

    - by Ankur
    I am trying to display a text box when an element of class numObj is clicked. For some reason I get NaNNaNaNaNNaNNaNaNaN where I expect to see the result of the searchForm variable in the code below. I know that NaN stands for Not a Number. What I don't understand is why JavaScript is expecting a number; I can't understand why it cares.

    $(".numObj").live('click', function() {
        var preId = $(this).attr('preId');
        var arrayPos = $(this).attr('numArrayPos');
        alert(preId + " " + arrayPos);
        var searchForm = "<table border='0' cellspacing='0' cellpadding='4' id='add-tag2'>"+
            +"<tr class='normal'><td bgcolor='#EEEEEE' valign='bottom' nowrap='nowrap'><span class='normal-small'>"+
            +"<input name='predicate-name2' type='text' class='normal' id='predicate-name2' size='4' />"+
            +"</span></td>"+
            +"<td bgcolor='#EEEEEE' valign='bottom' nowrap='nowrap'><span class='normal-small'>&lt;=</span></td>"+
            +"<td bgcolor='#EEEEEE' valign='bottom' nowrap='nowrap'>x</td>"+
            +"<td valign='bottom' bgcolor='#EEEEEE'>&lt;=</td>"+
            +"<td valign='bottom' bgcolor='#EEEEEE'><span class='normal-small'>"+
            +"<input type='text' name='object-object2' id='object-object2' class='normal' size='4' />"+
            +"</span></td>"+
            +"</tr>"+
            +"</table>";
        $(".numObj").filter("[preId='" + preId + "']").filter("[numArrayPos='" + arrayPos + "']").html(searchForm);
    });

    The generated code that has the numObj class looks something like this:

    <td><div class="numObj" preid="73" numarraypos="5"><span class="normal">585.0</span></div></td>
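    The culprit is in the string concatenation itself: each line of searchForm ends with + and the next line begins with +, so the second + acts as a unary plus that converts the following string to a number — NaN — before concatenating. A short illustration:

    // A trailing '+' followed by a leading '+' applies unary plus:
    var s = "<table>" + + "<tr>"; // +"<tr>" is NaN, so s is "<table>NaN"

    // Dropping the duplicated '+' concatenates normally:
    var ok = "<table>" + "<tr>";  // "<table><tr>"

    Removing the leading + from each continuation line should make searchForm render as intended.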

    Read the article

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store a lot (1000s) of items. A pair of properties on each item are used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of existing items before iterating over the list of (possible) new items. My theory is that on the first iteration, all the items will be faulted in, and assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly: Use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (Current approach) Use a predicate to filter the set of existing items against the properties of the "new" items. Use a predicate with Core Data to determine uniqueness of each "new" item (without retrieving the set of existing items). Is option 3 likely to be faster than my current approach? Do you know of a better way?
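    Option 3 lines up with the batched find-or-create pattern Apple describes for efficient importing: fetch only the existing items whose unique keys appear in the incoming batch, using a single IN predicate, then compare in memory with set lookups rather than O(N^2) string comparisons. A sketch, where 'Item', 'uid', and 'context' are hypothetical stand-ins (uid representing the pair of properties that defines uniqueness):

    // Sketch of batched uniquing: one fetch per incoming batch, not per item.
    NSArray *incomingIDs = [newItems valueForKey:@"uid"];
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
    request.predicate = [NSPredicate predicateWithFormat:@"uid IN %@", incomingIDs];

    NSError *error = nil;
    NSArray *existing = [context executeFetchRequest:request error:&error];
    NSSet *existingIDs = [NSSet setWithArray:[existing valueForKey:@"uid"]];

    for (id item in newItems) {
        if (![existingIDs containsObject:[item valueForKey:@"uid"]]) {
            // Insert a new managed object for this item.
        }
    }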

    Read the article

  • Core Data Predicates with Subclassed NSManagedObjects

    - by coneybeare
    I have an AUDIO class. This audio has a SOUND_A subclass and a SOUND_B subclass. This is all done correctly and is working fine. I have another model, let's call it PLAYLIST_JOIN, and this can contain (in the real world) SOUND_A's and SOUND_B's, so we give it a relationship of AUDIO and PLAYLIST. This all works in the app. The problem I am having now is querying the PLAYLIST_JOIN table with an NSPredicate. What I want to do is find an exact PLAYLIST_JOIN item by giving it two keys in the predicate:

    sound_a.sound_a_id = %@ && playlist.playlist_id = %@
    sound_b.sound_b_id = %@ && playlist.playlist_id = %@

    The main problem is that because the table does not store sound_a and sound_b, but audio, I cannot use this syntax. I do not have the option of reorganizing sound_a and sound_b to use the same _id attribute name, so how do I do this? Can I pass a method to the predicate? Something like this:

    [audio getID] = %@ && playlist_id = %@

    Read the article

  • Project Naming Convention Feedback Please

    - by Sam Striano
    I am creating an ASP.NET MVC 3 application using Entity Framework 4. I am using the Repository/Service pattern and was looking for feedback. I currently have the following:

    MVC Application (GTG.dll)
    - GTG
    - GTG.Controllers
    - GTG.ViewModels

    Business POCOs (GTG.Business.dll)
    - This contains all business objects (Customer, Order, Invoice, etc...)

    EF Model/Repositories (GTG.Data.dll)
    - GTG.Business (GTG.Context.tt) - I used the Entity POCO Generator Templates.
    - GTG.Data.Repositories

    Service Layer (GTG.Data.Services.dll)
    - GTG.Data.Services - Contains all of the service objects, one per aggregate root.

    The following is a little sample code:

    Controller

    Namespace Controllers
        Public Class HomeController
            Inherits System.Web.Mvc.Controller

            Function Index() As ActionResult
                Return View(New Models.HomeViewModel)
            End Function
        End Class
    End Namespace

    Model

    Namespace Models
        Public Class HomeViewModel
            Private _Service As CustomerService
            Public Property Customers As List(Of Customer)

            Public Sub New()
                _Service = New CustomerService
                _Customers = _Service.GetCustomersByBusinessName("Striano")
            End Sub
        End Class
    End Namespace

    Service

    Public Class CustomerService
        Private _Repository As ICustomerRepository

        Public Sub New()
            _Repository = New CustomerRepository
        End Sub

        Function GetCustomerByID(ByVal ID As Integer) As Customer
            Return _Repository.GetByID(ID)
        End Function

        Function GetCustomersByBusinessName(ByVal Name As String) As List(Of Customer)
            Return _Repository.Query(Function(x) x.CompanyName.StartsWith(Name)).ToList
        End Function
    End Class

    Repository

    Namespace Data.Repositories
        Public Class CustomerRepository
            Implements ICustomerRepository

            Public Sub Add(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Add

            End Sub

            Public Sub Delete(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Delete

            End Sub

            Public Function GetByID(ByVal ID As Integer) As Business.Customer Implements IRepository(Of Business.Customer).GetByID
                Using db As New GTGContainer
                    Return db.Customers.FirstOrDefault(Function(x) x.ID = ID)
                End Using
            End Function

            Public Function Query(ByVal Predicate As System.Linq.Expressions.Expression(Of System.Func(Of Business.Customer, Boolean))) As System.Linq.IQueryable(Of Business.Customer) Implements IRepository(Of Business.Customer).Query
                Using db As New GTGContainer
                    Return db.Customers.Where(Predicate)
                End Using
            End Function

            Public Sub Save(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Save

            End Sub
        End Class
    End Namespace
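    One hazard worth flagging in the repository above, as a hedged observation: Query returns an IQueryable built over a context that the Using block disposes before the caller enumerates it, so deferred execution (e.g. the ToList in GetCustomersByBusinessName) can hit a disposed GTGContainer. A sketch of one way around it — materializing inside the block:

    Public Function Query(ByVal Predicate As System.Linq.Expressions.Expression(Of System.Func(Of Business.Customer, Boolean))) As System.Linq.IQueryable(Of Business.Customer) Implements IRepository(Of Business.Customer).Query
        Using db As New GTGContainer
            ' Materialize before the context is disposed; enumerating the raw
            ' IQueryable after End Using would throw an ObjectDisposedException.
            Return db.Customers.Where(Predicate).ToList().AsQueryable()
        End Using
    End Function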

    Read the article

  • NSPredicate 'OR' filtering based on an NSArray of keys

    - by So Over It
    Consider the following NSArray:
        NSArray *dataSet = [[NSArray alloc] initWithObjects:
            [NSDictionary dictionaryWithObjectsAndKeys:@"abc", @"key1", @"def", @"key2", @"hij", @"key3", nil],
            [NSDictionary dictionaryWithObjectsAndKeys:@"klm", @"key1", @"nop", @"key2", nil],
            [NSDictionary dictionaryWithObjectsAndKeys:@"qrs", @"key2", @"tuv", @"key3", nil],
            [NSDictionary dictionaryWithObjectsAndKeys:@"wxy", @"key3", nil],
            nil];
    I am able to filter this array to find dictionary objects that contain the key 'key1':
        // Filter our dataSet to only contain dictionary objects with a key of 'key1'
        NSString *key = @"key1";
        NSPredicate *key1Predicate = [NSPredicate predicateWithFormat:@"%@ IN self.@allKeys", key];
        NSArray *filteretSet1 = [dataSet filteredArrayUsingPredicate:key1Predicate];
        NSLog(@"filteretSet1: %@", filteretSet1);
    Which appropriately returns:
        filteretSet1: ( { key1 = abc; key2 = def; key3 = hij; }, { key1 = klm; key2 = nop; } )
    Now, I want to filter the dataSet for dictionary objects containing ANY of the keys in an NSArray. For example, using the array:
        NSArray *keySet = [NSArray arrayWithObjects:@"key1", @"key3", nil];
    I want to create a predicate that returns an array of any dictionary objects that contain either 'key1' or 'key3' (i.e. in this example all dictionary objects would be returned except for the third object, as it does not contain either 'key1' or 'key3'). Any ideas on how I would achieve this? Would I have to use a compound predicate?
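    A compound predicate will do it. A minimal sketch, with variable names following the question: build one '%@ IN self.@allKeys' subpredicate per key, then OR them together with NSCompoundPredicate:
        // One subpredicate per key, combined with OR.
        NSMutableArray *subpredicates = [NSMutableArray array];
        for (NSString *key in keySet) {
            [subpredicates addObject:
                [NSPredicate predicateWithFormat:@"%@ IN self.@allKeys", key]];
        }
        NSPredicate *anyKeyPredicate =
            [NSCompoundPredicate orPredicateWithSubpredicates:subpredicates];
        // Returns every dictionary containing at least one key from keySet.
        NSArray *filteredSet = [dataSet filteredArrayUsingPredicate:anyKeyPredicate];
        NSLog(@"filteredSet: %@", filteredSet);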

    Read the article

  • Folding a list in F#

    - by bytebuster
    I have a pretty trivial task but I can't figure out how to make the solution prettier. The goal is to take a list and return the elements that pass a predicate, grouped into runs of consecutive hits. Here's a simplified example:
    Predicate: isEven
    Inp : [2; 4; 3; 7; 6; 10; 4; 5]
    Out: [[^^^^]......[^^^^^^^^]..]
    Here's the code I have so far:
        let f p ls =
            List.foldBack (fun el (xs, ys) ->
                if p el then (el::xs, ys) else ([], xs::ys)) ls ([], [])
            |> List.Cons                          // (1)
            |> List.filter (not << List.isEmpty)  // (2)
        let even x = x % 2 = 0
        let ret = [2; 4; 3; 7; 6; 10; 4; 5] |> f even // expected [[2; 4]; [6; 10; 4]]
    This code doesn't seem very readable. Also, I don't like lines (1) and (2). Is there a better solution?
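    One possible cleanup (a sketch, not the only idiomatic answer): keep the foldBack, but only close off the current run when it is non-empty, so neither the List.Cons at (1) nor the filter at (2) is needed:
        let f p ls =
            // Prepend the current run to the finished groups, unless it is empty.
            let closeRun current groups =
                if List.isEmpty current then groups else current :: groups
            let folder el (current, groups) =
                if p el then (el :: current, groups)      // extend the current run
                else ([], closeRun current groups)        // a miss ends the run
            let current, groups = List.foldBack folder ls ([], [])
            closeRun current groups

        let even x = x % 2 = 0
        [2; 4; 3; 7; 6; 10; 4; 5] |> f even  // [[2; 4]; [6; 10; 4]]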

    Read the article

  • Stored Procedures with SSRS? Hmm… not so much

    - by Rob Farley
    Little Bobby Tables’ mother says you should always sanitise your data input. Except that I think she’s wrong. The SQL Injection aspect is for another post, where I’ll show you why I think SQL Injection is the same kind of attack as many other attacks, such as the old buffer overflow, but here I want to have a bit of a whinge about the way that some people sanitise data input, and even have a whinge about people who insist on using stored procedures for SSRS reports. Let me say that again, in case you missed it the first time: I want to have a whinge about people who insist on using stored procedures for SSRS reports. Let’s look at the data input sanitisation aspect – except that I’m going to call it ‘parameter validation’. I’m talking about code that looks like this: create procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin /* First check that @eomdate is a valid date */ if isdate(@eomdate) != 1 begin select 'Please enter a valid date' as ErrorMessage; return; end /* Then check that time has passed since @eomdate */ if datediff(day,@eomdate,sysdatetime()) < 5 begin select 'Sorry - EOM is not complete yet' as ErrorMessage; return; end /* If those checks have succeeded, return the data */ select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales from Sales.SalesOrderHeader where OrderDate >= dateadd(month,-1,@eomdate) and OrderDate < @eomdate group by SalesPersonID order by SalesPersonID; end Notice that the code checks that a date has been entered. Seriously??!! This must only be to check for NULL values being passed in, because anything else would have to be a valid datetime to avoid an error. The other check is maybe fair enough, but I still don’t like it. The two problems I have with this stored procedure are the result sets and the small fact that the stored procedure even exists in the first place. But let’s consider the first one of these problems for starters. I’ll get to the second one in a moment. If you read Jes Borland (@grrl_geek)’s recent post about returning multiple result sets in Reporting Services, you’ll be aware that Reporting Services doesn’t support multiple result sets from a single query. And when it says ‘single query’, it includes ‘stored procedure call’. It’ll only handle the first result set that comes back. But that’s okay – we have RETURN statements, so our stored procedure will only ever return a single result set.  Sometimes that result set might contain a single field called ErrorMessage, but it’s still only one result set. Except that it’s not okay, because Reporting Services needs to know what fields to expect. Your report needs to hook into your fields, so SSRS needs to have a way to get that information. For stored procs, it uses an option called FMTONLY. When Reporting Services tries to figure out what fields are going to be returned by a query (or stored procedure call), it doesn’t want to have to run the whole thing. That could take ages. (Maybe it’s seen some of the stored procedures I’ve had to deal with over the years!) So it turns on FMTONLY before it makes the call (and turns it off again afterwards). FMTONLY is designed to be able to figure out the shape of the output, without actually running the contents. It’s very useful, you might think. 
set fmtonly on exec dbo.GetMonthSummaryPerSalesPerson '20030401'; set fmtonly off Without the FMTONLY lines, this stored procedure returns a result set that has three columns and fourteen rows. But with FMTONLY turned on, those rows don’t come back. But what I do get back hurts Reporting Services. It doesn’t run the stored procedure at all. It just looks for anything that could be returned and pushes out a result set in that shape. Despite the fact that I’ve made sure that the logic will only ever return a single result set, the FMTONLY option kills me by returning three of them. It would have been much better to push these checks down into the query itself. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Now if we run it with FMTONLY turned on, we get the single result set back. But let’s consider the execution plan when we pass in an invalid date. First let’s look at one that returns data. I’ve got a semi-useful index in place on OrderDate, which includes the SalesPersonID and TotalDue fields. It does the job, despite a hefty Sort operation. …compared to one that uses a future date: You might notice that the estimated costs are similar – the Index Seek is still 28%, the Sort is still 71%. But the size of that arrow coming out of the Index Seek is a whole bunch smaller. The coolest thing here is what’s going on with that Index Seek. Let’s look at some of the properties of it. Glance down it with me… Estimated CPU cost of 0.0005728, 387 estimated rows, estimated subtree cost of 0.0044385, ForceSeek false, Number of Executions 0. That’s right – it doesn’t run. So much for reading plans right-to-left... The key is the Filter on the left of it. It has a Startup Expression Predicate in it, which means that it doesn’t call anything further down the plan (to the right) if the predicate evaluates to false. Using this method, we can make sure that our stored procedure contains a single query, and therefore avoid any problems with multiple result sets. If we wanted, we could always use UNION ALL to make sure that we can return an appropriate error message. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, /*Placeholder: */ '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     /* Now include the error messages */     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5     order by SalesPersonID; end But still I don’t like it, because it’s now a stored procedure with a single query. 
    And I don’t like stored procedures that should be functions. That’s right – I think this should be a function, and SSRS should call the function. And I apologise to those of you who are now planning a bonfire for me. Guy Fawkes’ night has already passed this year, so I think you miss out. (And I’m not going to remind you about when the PASS Summit is in 2012.) create function dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) returns table as return ( select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, '' as ErrorMessage from Sales.SalesOrderHeader where /* Make sure that @eomdate is valid */ isdate(@eomdate) = 1 /* And that it's sufficiently past */ and datediff(day,@eomdate,sysdatetime()) >= 5 /* And now use it in the filter as appropriate */ and OrderDate >= dateadd(month,-1,@eomdate) and OrderDate < @eomdate group by SalesPersonID union all select 0, 0, 0, 'Please enter a valid date' as ErrorMessage where isdate(@eomdate) != 1 union all select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage where datediff(day,@eomdate,sysdatetime()) < 5 ); We’ve had to lose the ORDER BY – but that’s fine, as that’s a client thing anyway. We can have our reports leverage this stored query still, but we’re recognising that it’s a query, not a procedure. A procedure is designed to DO stuff, not just return data. We even get entries in sys.columns that confirm what the shape of the result set actually is, which makes sense, because a table-valued function is the right mechanism to return data. And we get so much more flexibility with this. If you haven’t seen the simplification stuff that I’ve preached on before, jump over to http://bit.ly/SimpleRob and watch the video of when I broke a microphone and nearly fell off the stage in Wales. You’ll see the impact of being able to have a simplifiable query. You can also read the procedural functions post I wrote recently, if you didn’t follow the link from a few paragraphs ago. So if we want the list of SalesPeople that made any kind of sales in a given month, we can do something like: select SalesPersonID from dbo.GetMonthSummaryPerSalesPerson(@eomonth) order by SalesPersonID; This doesn’t need to look up the TotalDue field, which makes a simpler plan. select * from dbo.GetMonthSummaryPerSalesPerson(@eomonth) where SalesPersonID is not null order by SalesPersonID; This one can avoid having to do the work on the rows that don’t have a SalesPersonID value, pushing the predicate into the Index Seek rather than filtering the results that come back to the report. If we had joins involved, we might see some of those being simplified out. We also get the ability to include query hints in individual reports. We shift from having a single-use stored procedure to having a reusable stored query – and isn’t that one of the main points of modularisation? Stored procedures in Reporting Services are just a bit limited for my liking. They’re useful in plenty of ways, but if you insist on using stored procedures all the time rather than queries that use functions – that’s rubbish. @rob_farley

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • SQL Server 2008 Copy Database Wizard: Fail

    - by Nai
    I am trying to use the SQL Server 2008 Copy Database Wizard to copy a SQL Server 2008 database. I am using the SQL Management Object method. However, the copy fails with the following error: ERROR : errorCode=-1073548784 description=Executing the query "/* '==============================================..." failed with the following error: "Cannot use a CONTAINS or FREETEXT predicate on table or indexed view 'Product' because it is not full-text indexed." Any ideas on how I can proceed with this would be super helpful. Kind regards, Nai
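    This error usually means the destination copy of 'Product' arrived without its full-text index, which the SMO transfer method does not always carry across. A hedged sketch of one workaround is to recreate the full-text index on the destination before the referencing objects run; the catalog name, column list, and key index name below are placeholders, so match them to the source table's actual full-text definition:
        -- Assumed names: adjust the catalog, column list, and unique key index
        -- (PK_Product here) to match the source table's full-text setup.
        CREATE FULLTEXT CATALOG ProductCatalog AS DEFAULT;

        CREATE FULLTEXT INDEX ON dbo.Product (Name)
            KEY INDEX PK_Product
            WITH CHANGE_TRACKING AUTO;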

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the “Sales Representative” job has the following two data security rules implemented on an “Opportunity” to restrict the list of Opportunities that are visible to a Sales Representative:
    Can view all the Opportunities where they are a member of the Opportunity Team
    Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team
    While the above conditions may represent the most common access requirements of an Opportunity, some customers may have additional access constraints. This blog post explains:
    How to discover the data security implemented in Fusion Applications
    How to customize data security
    An illustrative example
    a.) How to discover seeded data security definitions
    The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the “Sales Representative” job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of the data security policies on an Opportunity:
    Business Object: Opportunity
    Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team
    Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity
    Business Object: Opportunity
    Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access
    Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity
    Description of columns:
    Policy Description: explains the data filters that are implemented as a SQL WHERE clause in a data security grant
    Policy Store Implementation: provides the implementation details of the data security grant for this policy. In this example, the Opportunities listed for a “Sales Representative” job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.
    b.) How to customize data security
    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.
    Solution: Remove the role “Opportunity Territory Resource Duty” from the hierarchy of the “Sales Representative” job role.
    Best Practice: Do not modify the seeded role hierarchy. Create a custom “Sales Representative” job role and build the role hierarchy with the seeded duty roles.
    Requirement 2: Opportunities must be further restricted based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.
    Solution: Modify data security policy (2) in the example above as follows: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential.
    Implementation of this policy is more invasive. The seeded SQL WHERE clause of the data security grant on “Opportunity Territory Resource Duty” has to be modified, and the condition that checks for the confidential flag must be added. 
    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition, and end date the seeded grant.
    c.) Illustrative Example (Implementing Requirement 2)
    A data security policy contains the following components: Role, Object, Instance Set, and Action. Of these four components, the Role and Instance Set are the only ones that are customizable; the Object and the Actions for that object are seed data and cannot be modified. To customize a seeded policy such as “A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team”:
    1. Find the seeded policy
    2. Identify the Role, Object, Instance Set, and Action components of the policy
    3. Create a new custom instance set based on the seeded instance set
    4. End date the seeded policies
    5. Create a new data security policy with the custom instance set
    c-1: Find the seeded policy
    Step 1: Find the role, open it, and find its policies.
    Step 2: Click on the Data Security tab and sort by “Resource Name”. Find all the policies with the “Condition” set to “where they are a territory resource in the opportunity territory team”. In this example, we can see there are 5 policies for “Opportunity Territory Resource Duty” on the Opportunity object.
    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it, and click on the “Condition” tab. Sort by display name, find the instance set, edit it, and copy the “SQL Predicate” to a notepad. Create a new instance set with the modified SQL Predicate.
    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. Edit each of the 5 policies found in Step 2 and end date them. Create new custom policies with the same information as the seeded policies in the “General Information”, “Roles” and “Action” tabs. In the “Rules” tab, pick the new instance set that was created in Step 3.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week’s post focuses on how data is accessed within the IM column store. Let’s take a simple query: “What is the most expensive air-mail order we have received to date?” SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, “IN MEMORY”, in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let’s focus on how the data is accessed in the IM column store and why it’s faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where it will scan the data in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on the IMCU. The dictionary contains a list of the unique column values within the IMCU. 
    Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing:
    column display_name format a30
    SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%';
    If we check the session statistics after we execute our query, the results would be as follows:
    SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5;
    SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned');
    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week’s post I will describe how you can control which queries use the IM column store and which don’t. +Maria Colgan

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #002

    - by pinaldave
    Here is a list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2006 Query to Find ByteSize of All the Tables in Database This was my second blog post, and today I do not remember what business need made me build this query. It was built for SQL Server 2000 and will not directly run on SQL Server 2005 or later versions now. It measured the byte size of the tables in the database. This can be done in many different ways, for example with SP_HELPDB as well as SP_HELP. I wish to build a similar script for 2005 and later versions. 2007 This week I completed my first year (365 blog posts) and my very first 1 million views. I was pretty excited at the time with this new achievement. SQL SERVER Versions, CodeNames, Year of Release When I started with SQL Server I did not know all the names correctly for each version, and I often used to get confused. However, as time passed I started to remember all the code names as well. In this blog post I had not included SQL Server 2012's code name as it was not released at the time. SQL Server 2012's code name is Denali. Here is the question for you – does anyone know the internal name of SQL Server's next version? Searching String in Stored Procedure I had already started to work with 2005 by this time, and I was personally converting each of my stored procedures to be SQL Server 2005 compatible. As we were upgrading from SQL Server 2000 to SQL Server 2005, we had to search each of the stored procedures and make sure that we removed incompatible code. For example, syscolumns of SQL Server 2000 was now being replaced by sys.columns of SQL Server 2005. This stored procedure was pretty helpful at that time. Later on I built a few additional versions of the same stored procedure. Version 1: This version finds the Stored Procedures related to a Table. Version 2: This is a specific version which works with SQL Server 2005 and later versions. 2008 Clear Drop Down List of Recent Connection From SQL Server Management Studio It happens to all of us: we connect to some remote client server that we never have to connect to again, but it keeps on bothering us that the name shows up in the list all the time. In this blog post I covered a quick tip about how we can remove it. I also wrote a small article about How to Check Database Integrity for all Databases, and there was a funny question from a reader requesting T-SQL code to refresh databases. 2009 Stored Procedure are Compiled on First Run – SP is taking Longer to Run First Time A myth quite prevalent in the industry is that stored procedures are pre-compiled and should always run faster. It is not true. Stored procedures are compiled on their very first execution, and that is why the first execution takes longer. In this blog post I had a great time discussing the same concept. If you do not agree with it, you are welcome to read this blog post. Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes Performance tuning is an interesting concept and my personal favorite. In many blog posts I have described how to do performance tuning and how to improve the performance of queries. 
    In this quick tip I explained how one can remove the Key Lookup and improve performance. Here are very relevant articles on this subject: Article 1 | Article 2 | Article 3 2010 Recycle Error Log – Create New Log file without a Server Restart During one of my consulting assignments I noticed a DBA restarting the server to create a new log file. This is absolutely not necessary, and restarting the server might have many other negative impacts. There is a standard stored procedure, sp_cycle_errorlog, which can do the same task efficiently and properly. Have you ever used this SP or feature? Additionally, I had a great time presenting on SQL Server Best Practices at the SharePoint Conference. 2011 SSMS 2012 Reset Keyboard Shortcuts to Default It is very much possible that we mix up various SQL Server shortcuts, and at times we feel like resetting them to default. In SQL Server 2012 it is not easy to do; there is a process to follow, and I enjoyed blogging about it. Fundamentals of Columnstore Index The columnstore index was introduced in SQL Server 2012 and has been a very popular subject. It increases the speed of the server dramatically and can be an extremely useful feature for data warehousing. However, updating a columnstore index is not as simple as issuing an UPDATE statement. Read the detailed blog post about how UPDATE works with a columnstore index. Additionally, you can watch a Quick Video on this subject. SQL Server 2012 New Features I had decided to explore SQL Server 2012 features last year and went through pretty much every single concept introduced in separate blog posts. Here are a few blog posts where I describe how SQL Server 2012 functions work. Introduction to CUME_DIST – Analytic Functions Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions I indeed enjoyed writing about SQL Server 2012 functions last year. Have you gone through all the new features introduced in SQL Server 2012? If not, it is still not too late to go through them. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
