Search Results

Search found 15423 results on 617 pages for 'uses clause'.


  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a Row Level Security scheme using Views and Instead Of triggers on those views. Here is a simplified example structure: -- Table CREATE TABLE tblItem ( ItemId int identity(1,1) primary key, Name varchar(20) ) go -- View CREATE VIEW vwItem AS SELECT * FROM tblItem -- RLS Filtering Condition go -- Instead Of Insert Trigger CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT AS BEGIN -- RLS Security Checks on inserted Table -- Insert Records Into Table INSERT INTO tblItem (Name) SELECT Name FROM inserted; END go If I want to insert a record and get its identity, before implementing the RLS Instead Of trigger, I used: DECLARE @ItemId int; INSERT INTO tblItem (Name) VALUES ('MyName'); SELECT @ItemId = SCOPE_IDENTITY(); With the trigger, SCOPE_IDENTITY() no longer works - it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it. -- Nothing is added to @ItemIds DECLARE @ItemIds TABLE (ItemId int); INSERT INTO vwItem (Name) OUTPUT INSERTED.ItemId INTO @ItemIds VALUES ('MyName'); If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code; either by using an OUTPUT clause on that call or using a SELECT * FROM (). -- Modified Instead Of Insert Trigger w/ Output CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT AS BEGIN -- RLS Security Checks on inserted Table -- Insert Records Into Table INSERT INTO tblItem (Name) OUTPUT INSERTED.ItemId SELECT Name FROM inserted; END go -- Calling Code INSERT INTO vwItem (Name) VALUES ('MyName'); The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's an issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue? BEGIN TRANSACTION DECLARE @ItemId int; INSERT INTO tblItem (Name) VALUES ('MyName'); SELECT @ItemId = IDENT_CURRENT('tblItem'); COMMIT TRANSACTION Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".
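    One possible workaround, sketched here rather than taken from the question (the trigger body below is a hypothetical variant of the one above): because the INSERT INTO tblItem happens inside the trigger's own scope, the trigger can capture the generated identity values itself via OUTPUT ... INTO and hand them back to the caller as a result set, which client code then reads like any other query result. Whether returning result sets from triggers is acceptable in your environment is a separate question.
        -- Hypothetical variant of the Instead Of trigger that returns the new identities.
        CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table
            DECLARE @NewIds TABLE (ItemId int);

            INSERT INTO tblItem (Name)
            OUTPUT INSERTED.ItemId INTO @NewIds
            SELECT Name FROM inserted;

            -- Returned to the caller as a result set (e.g. read with ExecuteScalar/ExecuteReader).
            SELECT ItemId FROM @NewIds;
        END
        GO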

    Read the article

  • NHibernate Empty Set Clauses in SQL Update Statements

    - by Brian Wilkinson
    I'm getting SQL update statements from NHibernate that have an empty SET clause, i.e. UPDATE SET WHERE x == ? This problem occurs when a child entity is being updated: for some reason the SQL statement is run against the parent entity, NHibernate decides the parent has changed, but when the SET clause is built no changes are found. Can anyone help? The application I am constructing is 3-tier. The domain entities are lightweight, data-contract-only entities.

    Read the article

  • How to escape LIKE %$var% with Doctrine?

    - by Peter Smit
    I am making a Doctrine query and I have to do a wildcard match in the WHERE clause. How should I escape the variable that I want to insert? The query I want to get: SELECT u.* FROM User as u WHERE name LIKE %var% The PHP code so far: $query = Doctrine_Query::create() ->from('User u') ->where(); What should go in the where clause? The variable I want to match is $name

    Read the article

  • Created query is not supported by my DB

    - by stacker
    I created an application using seam-gen. The generated operation that searches the DB ends with an exception (syntax error). The query has a where clause like this: lower(barcode0_.barcode_ean) like lower((?||'%')) limit ? Does Hibernate or Seam create a where clause that my DB can't understand? Or is there a workaround for SQL statement related issues in Seam?
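    A note on the dialect mismatch (purely illustrative, not from the original question): the || concatenation operator and the LIMIT keyword in the generated clause are not understood by every database. If the target were SQL Server, for example, the same filter would have to be rendered roughly as below (the table name, parameter name and row count are made up), which is why the configured Hibernate dialect has to match the actual database.
        -- Hypothetical SQL Server rendering of the generated clause:
        -- '+' replaces the '||' string concatenation and TOP replaces LIMIT.
        SELECT TOP (20) *
        FROM item AS barcode0_
        WHERE LOWER(barcode0_.barcode_ean) LIKE LOWER(@barcode + '%');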

    Read the article

  • wifi provider onLocationChanged is never called

    - by Kelsie
    I have a weird problem. I use wifi to retrieve the user location. I have tested on several phones; on some of them, I could never get a location update. It seems that the method onLocationChanged is never called. My code is attached below. Any suggestions appreciated! @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); LocationManager locationManager; locationManager = (LocationManager)getSystemService(Context.LOCATION_SERVICE); String provider = locationManager.NETWORK_PROVIDER; Location l = locationManager.getLastKnownLocation(provider); updateWithNewLocation(l); locationManager.requestLocationUpdates(provider, 0, 0, locationListener); } private void updateWithNewLocation(Location location) { TextView myLocationText; myLocationText = (TextView)findViewById(R.id.myLocationText); String latLongString = "No location found"; if (location != null) { double lat = location.getLatitude(); double lng = location.getLongitude(); latLongString = "Lat:" + lat + "\nLong:" + lng; } myLocationText.setText("Your Current Position is:\n" + latLongString); } private final LocationListener locationListener = new LocationListener() { public void onLocationChanged(Location location) { updateWithNewLocation(location); Log.i("onLocationChanged", "onLocationChanged"); Toast.makeText(getApplicationContext(), "location changed", Toast.LENGTH_SHORT).show(); } public void onProviderDisabled(String provider) {} public void onProviderEnabled(String provider) {} public void onStatusChanged(String provider, int status, Bundle extras) {} }; Manifest <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/> <uses-permission android:name="android.permission.INTERNET"/>

    Read the article

  • Design by Contract with Microsoft .Net Code Contract

    - by Fredrik N
    I have given some talks at different events and summits about Defensive Programming and Design by Contract, last time at Cornerstone's Developer Summit 2010. Next time will be at SweNug (Sweden .Net User Group). I decided to write a blog post about some of the things I was talking about. Users are a terrible thing! Protect yourself from them. "Human users have a gift for doing the worst possible thing at the worst possible time." – Michael T. Nygard, Release It! The kind of users Michael T. Nygard is talking about is the users of a system. We also have users that use our code, and the users I'm going to focus on are the users of our code: me, you, and other developers. "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." – Martin Fowler Good programmers also write code that humans know how to use, and good programmers also make sure software behaves in a predictable manner despite inputs or user actions. Design by Contract Design by Contract (DbC) is a way for us to make a contract between us (the code writer) and the users of our code. It's about "If you give me this, I promise to give you this". It's not about business validations; that is something completely different that should be part of the domain model. DbC is there to make sure the users of our code use it in a correct way, and so that we can rely on the contract and write code knowing the users will follow it. It makes it much easier for us to write code once a contract is specified. Something like the following code is something we may see often: public void DoSomething(Object value) { value.DoIKnowThatICanDoThis(); } Here "value" can be used directly or passed to other methods and used later. What some of us can easily forget here is that "value" can be "null". We will probably not pass a null value, but someone else that uses our code may do it. I think most of you (including me) have passed "null" into a method because you don't know if the argument needs to be specified to a valid value etc. I bet most of you have also got the "Null reference exception". Sometimes this "Null reference exception" can be hard and take time to fix, because we need to search through our code to see where the "null" value was passed in etc. Wouldn't it be much better if we could, as early as possible, specify that the value can't be null, so the users of our code know it when they start to use our code, and before run time execution of the code? This is where DbC comes into the picture. We can use DbC to specify what we need, and by doing so we can rely on the contract when we write our code. So the code above can actually use the DoIKnowThatICanDoThis() method on the value object without worrying that "value" can be null.
    The contract between the users of the code and us writing the code says that "value" can't be null. Pre- and Postconditions When working with DbC we are specifying pre- and postconditions. A precondition is a condition that should be met before a query or command is executed. An example of a precondition is: "The value argument of the method can't be null", and we make sure "value" isn't null before the method is called. A postcondition is a condition that should be met when a command or query is completed; a postcondition makes sure the result is correct. An example of a postcondition is "The method will return a list with at least 1 item". Commands and Queries When using DbC, we need to know what a Command and a Query is, because some principles that are good to follow are based on commands and queries. A Command is something that will not return anything, like SQL's CREATE, UPDATE and DELETE. There are two kinds of Commands when using DbC, the Creation commands (for example a Constructor), and Others. Others can for example be a Command to add a value to a list, or to remove or update a value etc. //Creation commands public Stack(int size) //Other commands public void Push(object value); public void Remove(); A Query is something that will return something, for example an Attribute, Property or a Function, like SQL's SELECT. There are two kinds of Queries, the Basic Queries (queries that aren't based on other queries), and the Derived Queries, queries that are based on other queries. Here is an example of the queries of a Stack: //Basic Queries public int Count; public object this[int index] { get; } //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } To understand the principles that are good to follow when using DbC, we need to know about the Commands and the different Queries. The 6 Principles When working with DbC, it's advisable to follow some principles to make it easier to define and use contracts. The six DbC principles are: Separate commands and queries. Separate basic queries from derived queries. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries.
    For each command, write a postcondition that specifies the value of every basic query. For every query and command, decide on a suitable precondition. Write invariants to define unchanging properties of objects. Before I write about each of them I want you to know that I'm going to use .Net 4.0 Code Contracts. In the rest of the post I will use a simple Stack (yes I know, .Net already has a Stack class) to give you a basic understanding of using DbC. A Stack is a data structure where the last item in will be the first item out. Here is a basic implementation of a Stack where no contract is specified yet: public class Stack { private object[] _array; //Basic Queries public uint Count; public object this[uint index] { get { return _array[index]; } set { _array[index] = value; } } //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } //Is related to Count and this[] Query public object Top() { return this[Count]; } //Creation commands public Stack(uint size) { Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { this[++Count] = value; } public void Remove() { this[Count] = null; Count--; } } Note: The Stack is implemented in a way that demonstrates the use of Code Contracts in a simple way; it may not look like how you would implement it, so don't think this is the perfect Stack implementation, it is only used for demonstration. Before I go deeper into the principles I will briefly mention how we can use .Net Code Contracts. As I mentioned before, pre- and postconditions are about requiring something and ensuring something. When using Code Contracts, we use a static class called "Contract" which is located in the "System.Diagnostics.Contracts" namespace. The contract must be specified at the top of our member statement block. To specify a precondition with Code Contracts we use the Contract.Requires method, and to specify a postcondition we use the Contract.Ensures method.
    Here is an example where both a pre- and postcondition are used: public object Top() { Contract.Requires(Count > 0, "Stack is empty"); Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } The contract above requires that Count is greater than 0; if not, we can't get the item at the top of the Stack. We also ensure that the result of the Top query (by using the Contract.Result method we can specify a postcondition that checks whether the value returned from a method is correct) is equal to this[Count]. 1. Separate Commands and Queries When working with DbC, it's important to separate Commands and Queries. A method should either be a command that performs an action, or a query that returns information to the caller, not both. Asking a question shouldn't change the answer. The following is an example of a Command and a Query of a Stack: public void Push(object value) public object Top() Push is a command and will not return anything, it just adds a value to the Stack; Top is a query to get the item at the top of the stack. 2. Separate basic queries from derived queries There are two different kinds of queries, the basic queries that don't rely on other queries, and derived queries that use a basic query. The "Separate basic queries from derived queries" principle is about recognizing that derived queries can be specified in terms of basic queries. So this principle is more about recognizing whether a query is a derived query or a basic query. That will then make it much easier to follow the other principles.
    The following code shows a basic query and a derived query: //Basic Queries public uint Count; //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } We can see that IsEmpty uses the Count query, and that makes IsEmpty a derived query. 3. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries. When a derived query is recognized we can follow the 3rd principle. For each derived query, we can create a postcondition that specifies what result our derived query will return in terms of one or more basic queries. Remember that DbC is about contracts between the users of the code and us writing the code. So we can't just demand that the users pass in a valid value; we must also ensure that we give the users what they want when they follow our contract. The IsEmpty query of the Stack uses the Count query and that makes IsEmpty a derived query, so we should now write a postcondition that specifies what result will be returned, in terms of a basic query, in this case the Count query: //Basic Queries public uint Count; //Derived Queries public bool IsEmpty() { Contract.Ensures(Contract.Result<bool>() == (Count == 0)); return Count == 0; } The Contract.Ensures is used to create a postcondition. The above code makes sure that the result of IsEmpty (by using Contract.Result to get the result of the IsEmpty method) is correct, that is, IsEmpty will be either true or false based on whether Count is equal to 0 or not. The postcondition uses a basic query, so IsEmpty now follows the 3rd principle. We also have another derived query, the Top query; it will also need a postcondition and it uses all the basic queries. The result of the Top method must be the same value as the this[] query returns. //Basic Queries public uint Count; public object this[uint index] { get { return _array[index]; } set { _array[index] = value; } } //Derived Queries //Is related to Count and this[] Query public object Top() { Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } 4. For each command, write a postcondition that specifies the value of every basic query.
    For each command we will create a postcondition that specifies the value of the basic queries. If we look at the Stack implementation we have three commands, one creation command, the constructor, and two other commands, Push and Remove. Those commands need postconditions, and they should be expressed in terms of the basic queries to follow the 4th principle. //Creation commands public Stack(uint size) { Contract.Ensures(Count == 0); Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1); Contract.Ensures(this[Count] == value); this[++Count] = value; } public void Remove() { Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } As you can see the creation command ensures that Count will be 0 when the Stack is created; a newly created Stack shouldn't contain any items. The Push command takes a value and puts it onto the Stack; when an item is pushed onto the Stack, Count needs to be increased so we know the number of items added to the Stack, and we must also make sure the item really is added to the Stack. The postcondition of the Push method makes sure that the old value of Count (by using Contract.OldValue we can get the value a query had before the method was called) plus 1 is equal to the Count query; this is how we ensure that Push increases Count by one. We also make sure the this[] query now contains the item we pushed onto the Stack. The Remove method must make sure Count is decreased by one when the top item is removed from the Stack. The commands now follow the 4th principle, where each command has a postcondition that uses the value of basic queries. Note: The principle says every basic query, but Remove only uses one query, Count; that is because this command can't use the this[] query once an item is removed, so the only way to make sure an item was removed is to use the Count query, and so Remove still follows the principle. 5. For every query and command, decide on a suitable precondition. We have so far focused only on postconditions, now it's time for some preconditions. The 5th principle is about deciding on a suitable precondition for every query and command. If we start to look at one of our basic queries (I will not go through all queries and commands here, just some of them), the this[] query: we can't pass an index that is lower than 1 (.Net arrays and lists are zero based, but not the stack in this blog post ;)) and the index can't be greater than the number of items in the stack. So here we need a precondition.
    public object this[uint index] { get { Contract.Requires(index >= 1); Contract.Requires(index <= Count); return _array[index]; } } Think about the contract as documentation on how to use the code in a correct way; if the contract could be specified elsewhere (not as part of the method body), we could simply write "return _array[index]" and there would be no need to check whether index is greater or lesser than Count, because that is specified in a "contract". The implementation of Code Contracts requires that the contract is specified in the code. As a developer I would rather have this contract elsewhere (like Spec#) or implemented the way Eiffel does it, as part of the language. Now that we have looked at one query, we can also look at one command, the Remove command (you can see the whole implementation of the Stack at the end of this blog post, where preconditions are added to more queries and commands than what I'm going to show in this section). We can only Remove an item if Count is greater than 0. So we can write a precondition that requires that Count must be greater than 0. public void Remove() { Contract.Requires(Count > 0); Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } 6. Write invariants to define unchanging properties of objects. The last principle is about making sure the object is feeling great! This is done by using invariants. When using Code Contracts we can specify invariants by adding a method with the attribute ContractInvariantMethod; the method must be private or public and can only contain calls to Contract.Invariant. To make sure the Stack feels great, the Stack must have 0 or more items; Count can never be a negative value, so that each command and query of the Stack can be used.
    Here is our invariant for the Stack object: [ContractInvariantMethod] private void ObjectInvariant() { Contract.Invariant(Count >= 0); } Note: The ObjectInvariant method will be called every time after a query or command is called. Here is the full example using Code Contracts: public class Stack { private object[] _array; //Basic Queries public uint Count; public object this[uint index] { get { Contract.Requires(index >= 1); Contract.Requires(index <= Count); return _array[index]; } set { Contract.Requires(index >= 1); Contract.Requires(index <= Count); _array[index] = value; } } //Derived Queries //Is related to Count Query public bool IsEmpty() { Contract.Ensures(Contract.Result<bool>() == (Count == 0)); return Count == 0; } //Is related to Count and this[] Query public object Top() { Contract.Requires(Count > 0, "Stack is empty"); Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } //Creation commands public Stack(uint size) { Contract.Requires(size > 0); Contract.Ensures(Count == 0); Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { Contract.Requires(value != null); Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1); Contract.Ensures(this[Count] == value); this[++Count] = value; } public void Remove() { Contract.Requires(Count > 0); Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } [ContractInvariantMethod] private void ObjectInvariant() { Contract.Invariant(Count >= 0); } } Summary By using Design by Contract we can make sure the users are using our code in a correct way, and we must also make sure the users get the expected results when they use our code. This can be done by specifying contracts. To make it easy to use Design by Contract, some principles may be good to follow, like the separation of commands and queries. With .Net 4.0 we can use the Code Contracts feature to specify contracts.

    Read the article

  • SQL SERVER – ColumnStore Index – Batch Mode vs Row Mode

    - by pinaldave
    What do you do when you are in a hurry and hear someone say things you do not agree with or that are wrong? Well, let me tell you what I do, or what I recently did. I was walking by and heard someone mention "Columnstore Indexes are really great as they are using Batch Mode, which makes them seriously fast." As I passed by and heard this statement my first reaction was that Columnstore Indexes can use both – Batch Mode and Row Mode. I stopped, even though I was in a hurry, and asked the person if he meant that Columnstore Indexes are seriously fast because they use Batch Mode all the time, or that Batch Mode is one of the reasons for Columnstore Indexes to be faster. He responded that Columnstore Indexes can run only in Batch Mode. However, I do not like to confront anybody without hearing their complete story. Honestly, I like to share information and avoid confrontation as much as possible. There are always ways to communicate the same thing positively. Well, this is what I did: I quickly pulled up my earlier article on Columnstore Indexes and copied the script to SQL Server Management Studio. I created two versions of the script: 1) Very Large Table 2) Reasonably Small Table. I ran a query which uses the columnstore index on both of the versions. I found a very interesting result in my tests. I saved my tests and sent them to the person who claimed that Columnstore Indexes use Batch Mode only. He immediately acknowledged that he was indeed incorrect in saying that Columnstore Indexes use only Batch Mode. What really caught my attention is that he also thanked me for sending him a detailed email instead of just having an argument where he and I both stood in the corridor with no way to prove any theory. Here are the screenshots of both scenarios. 1) Columnstore Index using Batch Mode 2) Columnstore Index using Row Mode Here is the logic behind when a Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput. Batch mode processing spreads metadata access costs and overhead over all the rows in a batch. Batch mode processing operates on compressed data when possible, leading to superior performance. Here is one last point – a Columnstore Index can use Batch Mode or Row Mode, but Batch Mode processing is only available with a Columnstore Index. I hope this statement truly sums up the whole concept. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
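    To try this yourself, a minimal sketch along the lines of the scripts described above (the table, column and index names here are made up, not the author's originals): create a nonclustered columnstore index, run an aggregate query with the actual execution plan enabled, and check the "Actual Execution Mode" property of the Columnstore Index Scan operator. On a very large table it typically reports Batch; on a small table it often falls back to Row.
        -- Hypothetical fact table used only for illustration.
        CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
            ON dbo.FactSales (ProductID, OrderDateKey, SalesAmount);
        GO
        -- Run with "Include Actual Execution Plan" turned on and inspect the
        -- Actual Execution Mode of the Columnstore Index Scan operator.
        SELECT ProductID, SUM(SalesAmount) AS TotalSales
        FROM dbo.FactSales
        GROUP BY ProductID;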

    Read the article

  • Following the Thread in OSB

    - by Antony Reynolds
    Threading in OSB The Scenario I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic: 1. Receive Request 2. Send Request to External System 3. If Response has a particular value 3.1 Modify Request 3.2 Resend Request to External System 4. Send Response back to Requestor All looks very straightforward and no nasty wrinkles along the way. The flow was implemented in OSB as follows (see diagram for more details): Proxy Service to Receive Request and Send Response Request Pipeline Copies Original Request for use in step 3 Route Node Sends Request to External System exposed as a Business Service Response Pipeline Checks Response to See If Request Needs to Be Resubmitted Modify Request Callout to External System (same Business Service as Route Node) The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool. The Surprise Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads. The reason for the lock up lies in some subtleties of the OSB thread model, which is the topic of this post. Basic Thread Model OSB goes to great lengths to avoid holding on to threads. Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node. Most Business Services are implemented by OSB in two parts. The first part uses the request thread to send the request to the target. In the diagram this is represented by the thread T1. After sending the request to the target (the Business Service in our diagram) the request thread is released back to whatever pool it came from. A multiplexor (muxer) is used to wait for the response. When the response is received the muxer hands off the response to a new thread that is used to execute the response pipeline; this is represented in the diagram by T2. OSB allows you to assign different Work Managers and hence different thread pools to each Proxy Service and Business Service. In our example we have the "Proxy Service Work Manager" assigned to the Proxy Service and the "Business Service Work Manager" assigned to the Business Service. Note that the Business Service Work Manager is only used to assign the thread to process the response; it is never used to process the request. This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage. First Wrinkle Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation. For example: Request Pipeline makes a blocking callout, say to perform a database read. Business Service response tries to allocate a thread from the thread pool but all threads are blocked in the database read. New requests arrive and contend with arriving responses for the available threads. Similar problems can occur if the response pipeline blocks for some reason, maybe a database update for example. Solution The solution to this is to make sure that the Proxy and Business Service use different Work Managers so that they do not contend with each other for threads. Do Nothing Route Thread Model So what happens if there is no route node? In this case OSB just echoes the Request message as a Response message, but what happens to the threads?
OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager. So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.   Proxy Chaining Thread Model So what happens when the route node is actually calling a Proxy Service rather than a Business Service, does the second Proxy Service use its own Thread or does it re-use the thread of the original Request Pipeline? Well as you can see from the diagram when a route node calls another proxy service then the original Work Manager is used for both request pipelines.  Similarly the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node.  This actually fits in with the earlier description I gave about Business Services and by extension Route Nodes they “… uses the request thread to send the request to the target”. Call Out Threading Model So what happens when you make a Service Callout to a Business Service from within a pipeline.  The documentation says that “The pipeline processor will block the thread until the response arrives asynchronously” when using a Service Callout.  What this means is that the target Business Service is called using the pipeline thread but the response is also handled by the pipeline thread.  This implies that the pipeline thread blocks waiting for a response.  It is the handling of this response that behaves in an unexpected way. When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released, it waits for the response.  The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available.  The original pipeline thread can then continue to process the response. Second Wrinkle This leads to an unfortunate wrinkle.  If the Business Service is using the same Work Manager as the Pipeline then it is possible for starvation or a deadlock to occur.  The scenario is as follows: Pipeline makes a Callout and the thread is suspended but still allocated Multiple Pipeline instances using the same Work Manager are in this state (common for a system under load) Response comes back but all Work Manager threads are allocated to blocked pipelines. Response cannot be processed and so pipeline threads never unblock – deadlock! Solution The solution to this is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself. The Solution to My Problem Looking back at my original workflow we see that the same Business Service is called twice, once in a Routing Node and once in a Response Pipeline Callout.  This was what was causing my problem because the response pipeline was using the Business Service Work Manager, but the Service Callout wanted to use the same Work Manager to handle the responses and so eventually my Response Pipeline hogged all the available threads so no responses could be processed. The solution was to create a second Business Service pointing to the same location as the original Business Service, the only difference was to assign a different Work Manager to this Business Service.  
This ensured that when the Service Callout completed there were always threads available to process the response because the response processing from the Service Callout had its own dedicated Work Manager. Summary Request Pipeline Executes on Proxy Work Manager (WM) Thread so limited by setting of that WM.  If no WM specified then uses WLS default WM. Route Node Request sent using Proxy WM Thread Proxy WM Thread is released before getting response Muxer is used to handle response Muxer hands off response to Business Service (BS) WM Response Pipeline Executes on Routed Business Service WM Thread so limited by setting of that WM.  If no WM specified then uses WLS default WM. No Route Node (Echo functionality) Proxy WM thread released New thread from the default WM used for response pipeline Service Callout Request sent using proxy pipeline thread Proxy thread is suspended (not released) until the response comes back Notification of response handled by BS WM thread so limited by setting of that WM.  If no WM specified then uses WLS default WM. Note this is a very short lived use of the thread After notification by callout BS WM thread that thread is released and execution continues on the original pipeline thread. Route/Callout to Proxy Service Request Pipeline of callee executes on requestor thread Response Pipeline of caller executes on response thread of requested proxy Throttling Request message may be queued if limit reached. Requesting thread is released (route node) or suspended (callout) So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and the business service in a callout from the response pipeline because the callout will need a notification thread from the same thread pool as the response pipeline.  This was the problem we were having. You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline. It also means you may want to have different work managers for the proxy and business service in the route node. Basically you need to think carefully about how threading impacts your proxy services. References Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me.  Any errors are my own and not theirs.  Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me. OSB Thread Model Great Blog Post on Thread Usage in OSB

    Read the article

  • How do I get squid peers to talk SSL to each other?

    - by Marcelo Cantos
    How would I set up a pair of squid proxies so that one uses the other as a parent and all traffic between them is encrypted using SSL? I've read the cache_peer documentation, but it's all very fuzzy to me which certs I need to create (and how), which server uses which cert, and so on. Is there a straightforward HOW-TO for this somewhere? Just to be clear, I don't want to know how to setup squid to proxy https requests, or as a reverse proxy for a web server that uses https.

    Read the article

  • how to rotate one squid user among multiple IPs based on number of requests processed by each IP

    - by Arvind
    I want to set up a Squid ACL in the following manner-- For example, my Squid Proxy Server has 10 IP addresses- now I have a user 'demouser'. I want that for the very first request sent to 'demouser' this user uses IP address #1, for the second request it uses IP address #2, for the 3rd request of the day it uses IP address #3 and so on till it uses up all IPs. One more level of control I would like is that once the user has used up all available IP addresses once per address, then it does not allow the proxy request to go through. How do I set up such a configuration on Squid Proxy server ACL? Even a document or how to would be very helpful. The official wiki talks about one 'weird' case- choosing an IP address based on time of day the request was made to the proxy server. The other cases are all regular use cases which are not even remotely near my requirement as specified above.

    Read the article

  • PerlRegEx vs RegularExpressionsCore Delphi Units

    - by Jan Goyvaerts
    The RegularExpressionsCore unit that is part of Delphi XE is based on the latest class-based PerlRegEx unit that I developed. Embarcadero only made a few changes to the unit. These changes are insignificant enough that code written for earlier versions of Delphi using the class-based PerlRegEx unit will work just the same with Delphi XE. The unit was renamed from PerlRegEx to RegularExpressionsCore. When migrating your code to Delphi XE, you can choose whether you want to use the new RegularExpressionsCore unit or continue using the PerlRegEx unit in your application. All you need to change is which unit you add to the uses clause in your own units. Indentation and line breaks in the code were changed to match the style used in the Delphi RTL and VCL code. This does not change the code, but makes it harder to diff the two units. Literal strings in the unit were separated into their own unit called RegularExpressionsConsts. These strings are only used for error messages that indicate bugs in your code. If your code uses TPerlRegEx correctly then the user should not see any of these strings. My code uses assertions to check for out of bounds parameters, while Embarcadero uses exceptions. Again, if you use TPerlRegEx correctly, you should never get any assertions or exceptions. The Compile method raises an exception if the regular expression is invalid in both my original TPerlRegEx component and Embarcadero’s version. If your code allows the user to provide the regular expression, you should explicitly call Compile and catch any exceptions it raises so you can tell the user there is a problem with the regular expression. Even with user-provided regular expressions, you shouldn’t get any other assertions or exceptions if your code is correct. Note that Embarcadero owns all the rights to their RegularExpressionsCore unit. Like all the other RTL and VCL units, this unit cannot be distributed by myself or anyone other than Embarcadero. I do retain the rights to my original PerlRegEx unit which I will continue to make available for those using older versions of Delphi.

    Read the article

  • route to vpn based on destination

    - by inquam
    I have a VPN connection on a Windows 7 machine. It's set up to connect to a server in the US. Is it possible, and if so how, to set it up so that .com destinations use the VPN interface and .se destinations use the "normal" connection? Edit (clarification): This is for outbound connections. I.e. the machine connects to a server on foo.com and uses the VPN, and the machine connects to bar.se and uses the "normal" interface. Let's say foo.com has an IP filter that ensures users are located in the USA; if I go through the VPN I get a US IP and everything is fine. But if all traffic goes this way, the bar.se server, which has an IP filter ensuring users are in Sweden, will complain. So I want to route the traffic depending on server location: US servers through the VPN and others through the normal interface.

    Read the article

  • OWA web access calendar view missing items

    - by Marrie
    I have a user syncing calendar items - uses Outlook 2003, uses iPhone and uses OWA web access to view calendar....some recurring all day events are not showing up in OWA web access view, they are viewable in both iPhone and Outlook (client)....any ideas? ...Using Outlook Web Access version: 8.1.240.5 with Exchange Server 2007 Version: 08.01.0240.006

    Read the article

  • Dependency Injection: where to store dependencies used by only one method?

    - by simoneL
    I am developing a project integrated with Dependency Injection (just for reference, I'm using Unity). The problem is that I have some Manager classes with several methods, and in many cases I have dependencies used only in one method. public class UserManager : IUserManager { private UserRepository UserRepository {get;set;} private TeamRepository TeamRepository {get;set;} private CacheRepository CacheRepository {get;set;} private WorkgroupRepository WorkgroupRepository {get;set;} public UserManager(UserRepository userRepository, TeamRepository teamRepository, CacheRepository cacheRepository, WorkgroupRepository workgroupRepository, ... // Many other dependencies ) { UserRepository = userRepository; TeamRepository = teamRepository; CacheRepository = cacheRepository; WorkgroupRepository = workgroupRepository; ... // Setting the remaining dependencies } public void GetLatestUser(){ // Uses only UserRepository } public void GetUsersByTeam(int teamID){ // Uses only TeamRepository } public void GetUserHistory(){ // Uses only CacheRepository } public void GetUsersByWorkgroup(int workgroupID){ // Uses only workgroupRepository } } The UserManager is instantiated in this way: DependencyFactory<IUserManager>(); Note: DependencyFactory is just a wrapper to simplify access to the UnityContainer. Is it OK to have classes like the one in the example? Is there a better way to implement this that avoids instantiating all the unnecessary dependencies?

    Read the article

  • invalid context 0x0 under iOS 7.0 and system degradation

    - by Alex
    I've read as many search results as I could find on this dreaded problem; unfortunately, each one seems to focus on a specific function call. My problem is that I get the same error from multiple functions, which I am guessing are being called back from functions that I use. To make matters worse, the actual code is within a custom private framework which is being imported in another project, and as such, debugging isn't as simple. Can anyone point me in the right direction? I have a feeling I'm calling certain methods wrongly or with a bad context, but the output from Xcode is not very helpful at this point. : CGContextSetFillColorWithColor: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. : CGContextSetStrokeColorWithColor: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. CGContextSaveGState: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. : CGContextSetFlatness: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. : CGContextAddPath: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. : CGContextDrawPath: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. : CGContextRestoreGState: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. : CGContextGetBlendMode: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update. Those errors may occur when a custom view, or one of its inherited classes, is presented, at which point they spawn multiple times, until the keyboard won't provide any input.
    Touch events are still registered, but the system slows down, and eventually this may lead to unallocated object errors.

    Read the article

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
     Microsoft SQL Server 2008 is a new version released in the first half of 2008, introducing new properties and capabilities to the SQL Server product family. All of these new and enhanced capabilities can be summed up with the classic words: secure, reliable, scalable and manageable. SQL Server 2008 is secure. It is reliable. It is scalable and more manageable when compared to previous releases. Now we will have a look, in detail, at the features that make MS SQL Server 2008 more secure, more reliable, more scalable, and so on. Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information.

Table-Valued Parameters
- Inserts into structures with 1-N cardinality are problematic: one order -> N order line items, where "N" is variable and can be large.
- You don't want to force a new order for every 20 line items, and one database round-trip per line item slows things down.
- There is no ARRAY data type in SQL Server; XML composition/decomposition has been used as an alternative.
- Table-valued parameters solve this problem.
- SQL Server already has table variables: DECLARE @t TABLE (id int);
- SQL Server 2008 adds strongly typed table variables: CREATE TYPE mytab AS TABLE (id int); DECLARE @t mytab;
- Parameters must use strongly typed table variables.

Table Variables are Input Only
- Declare and initialize the TABLE variable: DECLARE @t mytab; INSERT @t VALUES (1), (2), (3); EXEC myproc @t;
- The procedure must declare the parameter READONLY: CREATE PROCEDURE usetable (@t mytab READONLY, ...) AS INSERT INTO lineitems SELECT * FROM @t; UPDATE @t SET ... -- no!

T-SQL Syntax Enhancements
- Single-statement declare and initialize: DECLARE @i int = 4;
- Compound assignment operators: SET @i += 1;
- Row constructors: DECLARE @t TABLE (id int, name varchar(20)); INSERT INTO @t VALUES (1, 'Fred'), (2, 'Jim'), (3, 'Sue');

Grouping Sets
- Grouping Sets allow multiple GROUP BY clauses in a single SQL statement: multiple, arbitrary sets of subtotals.
- A single read pass for performance; nested subtotals provide even better performance.
- Grouping Sets are an ANSI standard; COMPUTE BY is deprecated.

GROUPING SETS, ROLLUP, CUBE
- SQL Server 2008 supports ANSI-syntax ROLLUP and CUBE; the pre-2008 non-ANSI syntax is deprecated.
- WITH ROLLUP produces n+1 different groupings of data, where n is the number of columns in the GROUP BY.
- WITH CUBE produces 2^n different groupings, where n is the number of columns in the GROUP BY.
- GROUPING SETS provide a "halfway measure": just the number of different groupings you need.
- Grouping Sets are visible in the query plan.

GROUPING_ID and GROUPING
- Grouping Sets can produce non-homogeneous sets: a grouping set includes NULL values for group members, so you need to distinguish grouping NULLs from data NULLs.
- GROUPING (column expression) returns 0 or 1: is this a group based on the column expression, or on a NULL value?
- GROUPING_ID (a,b,c) is a bitmask whose bits are set based on column expressions a, b, and c.

MERGE Statement
- Multiple set operations in a single SQL statement, using multiple sets as input: MERGE target USING source ON ...
- Operations can be INSERT, UPDATE, DELETE.
- Operations are based on WHEN MATCHED, WHEN NOT MATCHED [BY TARGET], and WHEN NOT MATCHED [BY SOURCE].

More on MERGE
- The MERGE statement can reference a $action column, used when MERGE is combined with the OUTPUT clause.
- Multiple WHEN clauses are possible for MATCHED and NOT MATCHED BY SOURCE; only one WHEN clause for NOT MATCHED BY TARGET.
- MERGE can be used with any table source.
- A MERGE statement causes triggers to be fired once; rows affected includes the total rows affected by all clauses.

MERGE Performance
- The MERGE statement is transactional; no explicit transaction is required.
- One pass through the tables: at most a full outer join. Matching rows = when matched; left-outer join rows = when not matched by target; right-outer join rows = when not matched by source.

MERGE and Determinism
- UPDATE using a JOIN is non-deterministic: if more than one row in the source matches the ON clause, either/any row can be used for the UPDATE.
- MERGE is deterministic: if more than one row in the source matches the ON clause, it is an error.

Keeping Track of Dependencies
- New dependency views replace sp_depends; the views are kept in sync as changes occur.
- sys.dm_sql_referenced_entities lists all named entities that an object references (for example: which objects does this stored procedure use?).
- sys.dm_sql_referencing_entities
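
As a concrete illustration of the MERGE and OUTPUT $action points above, here is a small hedged sketch; the dbo.Orders and dbo.OrdersStaging tables and their columns are assumptions made for the example, not objects from the article.

-- Minimal MERGE sketch (SQL Server 2008), assuming hypothetical tables
-- dbo.Orders (target) and dbo.OrdersStaging (source), both keyed on OrderID.
MERGE dbo.Orders AS target
USING dbo.OrdersStaging AS source
    ON target.OrderID = source.OrderID
WHEN MATCHED THEN
    UPDATE SET target.CustomerID = source.CustomerID,
               target.Amount     = source.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderID, CustomerID, Amount)
    VALUES (source.OrderID, source.CustomerID, source.Amount)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action, inserted.OrderID AS InsertedID, deleted.OrderID AS DeletedID;
-- $action reports INSERT, UPDATE or DELETE for every row the statement touched.

Because all three WHEN branches run in the single pass described above, the OUTPUT rows are also the easiest way to verify which branch handled which row.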

    Read the article

  • SQL SERVER – Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was an extreme success: lots of people participated and there were lots of giveaways. I have received lots of questions about whether we are doing something similar this month. Absolutely - instead of running a month-long contest, we are doing something more interesting. We are giving away gifts worth USD 198 every day this week: the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, for a total of 2 giveaways per day (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

How to Win:
- Read the question
- Read the hints
- Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India residents only)
- 2 winners will be randomly selected and announced on August 20th

Question of the Day: Which of the following statements is incorrect?
a) Identity value can be negative.
b) Identity value can have negative interval.
c) Identity value can be of datatype VARCHAR
d) Identity value can have increment interval larger than 1

Query Hints: BIG HINT POST
A simple way to determine if a table contains an identity field is to use the SSMS Object Explorer design interface. Navigate to the table, then right-click it and choose Design from the pop-up window. When your design tab opens, select the first field in the table to view its list of properties in the lower pane of the tab (in this case the field is ProductID). Look to see whether the Identity Specification property in the lower pane is set to yes or no. SQL Server will allow you to utilize IDENTITY_INSERT with just one table at a time. After you've completed the needed work, it's very important to reset IDENTITY_INSERT back to OFF.

Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 2:
- SQL Joes 2 Pros Development Series – Output Clause in Simple Examples
- SQL Joes 2 Pros Development Series – Ranking Functions – Advanced NTILE in Detail
- SQL Joes 2 Pros Development Series – Ranking Functions – RANK( ), DENSE_RANK( ), and ROW_NUMBER( )
- SQL Joes 2 Pros Development Series – Advanced Aggregates with the Over Clause
- SQL Joes 2 Pros Development Series – Aggregates with the Over Clause
- SQL Joes 2 Pros Development Series – Overriding Identity Fields – Tricks and Tips of Identity Fields
- SQL Joes 2 Pros Development Series – Many to Many Relationships

Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India).

Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked that particular blog post, and I will announce its winner along with the main contest.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
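
Since the hint above leans on IDENTITY_INSERT, here is a short hedged sketch of the ON/OFF pattern it describes; the dbo.Products table is a made-up example, not one from the post.

-- Hypothetical table with an identity column.
CREATE TABLE dbo.Products
(
    ProductID   int IDENTITY(1,1) PRIMARY KEY,
    ProductName varchar(50) NOT NULL
);

-- Normally SQL Server assigns ProductID automatically:
INSERT INTO dbo.Products (ProductName) VALUES ('Widget');

-- To supply an explicit identity value, turn IDENTITY_INSERT on
-- (SQL Server allows it for only one table at a time):
SET IDENTITY_INSERT dbo.Products ON;
INSERT INTO dbo.Products (ProductID, ProductName) VALUES (100, 'Gadget');
SET IDENTITY_INSERT dbo.Products OFF;  -- important: reset it back to OFF when done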

    Read the article

  • How to call SQL Function with multiple parameters from C# web page

    - by Marshall
    I have an MS SQL function that is called with the following syntax:

SELECT Field1, COUNT(*) AS RecordCount
FROM GetDecileTable('WHERE ClientID = 7 AND LocationName = ''Default'' ', 10)

The first parameter passes a specific WHERE clause that is used by the function for one of its internal queries. When I call this function from the front-end C# page, I need to send parameter values for the individual fields inside of the WHERE clause (in this example, both the ClientID and LocationName fields). The current C# code looks like this:

String SQLText = "SELECT Field1, COUNT(*) AS RecordCount FROM GetDecileTable('WHERE ClientID = @ClientID AND LocationName = @LocationName ',10)";
SqlCommand Cmd = new SqlCommand(SQLText, SqlConnection);
Cmd.Parameters.Add("@ClientID", SqlDbType.Int).Value = 7; // Insert real ClientID
Cmd.Parameters.Add("@LocationName", SqlDbType.NVarChar, 20).Value = "Default"; // Real code uses Location Name from user input
SqlDataReader reader = Cmd.ExecuteReader();

When I do this, I see the following in SQL Profiler:

exec sp_executesql N'SELECT Field1, COUNT(*) as RecordCount FROM GetDecileTable (''WHERE ClientID = @ClientID AND LocationName = @LocationName '',10)', N'@ClientID int,@LocationID nvarchar(20)', @ClientID=7,@LocationName=N'Default'

When this executes, SQL throws an error that it cannot parse past the first mention of @ClientID, stating that the scalar variable @ClientID must be defined. If I modify the code to declare the variables first (see below), then I receive an error at the second mention of @ClientID that the variable already exists.

exec sp_executesql N'DECLARE @ClientID int; DECLARE @LocationName nvarchar(20); SELECT Field1, COUNT(*) as RecordCount FROM GetDecileTable (''WHERE ClientID = @ClientID AND LocationName = @LocationName '',10)', N'@ClientID int,@LocationName nvarchar(20)', @ClientID=7,@LocationName=N'Default'

I know that this method of adding parameters and calling SQL code from C# works well when I am selecting data from tables, but I am not sure how to embed parameters inside of the quote marks for the embedded WHERE clause being passed to the function. Any ideas?
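
Since the @ClientID and @LocationName markers sit inside a string literal that the function evaluates later, sp_executesql never binds them in that inner scope. One hedged workaround is to keep them as ordinary parameters and build the WHERE string in T-SQL before it reaches the function; the sketch below is only an illustration (GetDecileTable is the function named in the question, but the length choices and the quote-doubling are assumptions, not a complete injection defense).

-- Hedged sketch: splice the parameter values into the WHERE-clause string
-- that the table-valued function expects, then pass the finished string.
DECLARE @ClientID int, @LocationName nvarchar(20), @Where nvarchar(500);
SET @ClientID = 7;                 -- in practice these arrive as the command's parameters
SET @LocationName = N'Default';

SET @Where = N'WHERE ClientID = ' + CAST(@ClientID AS nvarchar(12))
           + N' AND LocationName = ''' + REPLACE(@LocationName, '''', '''''') + N''' ';

SELECT Field1, COUNT(*) AS RecordCount
FROM GetDecileTable(@Where, 10)
GROUP BY Field1;                   -- grouping added so the aggregate is well-formed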

    Read the article

  • Parameterized Queries /Without/ using queries.

    - by Aren B
    I've got a bit of a poor situation here. I'm stuck working with Commerce Server, which doesn't do a whole lot of sanitization/parameterization. I'm trying to build up my queries to prevent SQL injection; however, some things, like the searches / where clause on the search object, need to be built up as strings, and there's no parameterized interface. Basically, I cannot parameterize, but I was hoping to be able to use the same engine to BUILD my query text if possible. Is there a way to do this, aside from writing my own parameterizing engine, which will probably still not be as good as parameterized queries?

Update: Example. The where clause has to be built up essentially as a SQL where clause:

CatalogSearch search = /// Create Search object from commerce server
search.WhereClause = string.Format("[cy_list_price] > {0} AND [Hide] is not NULL AND [DateOfIntroduction] BETWEEN '{1}' AND '{2}'", 12.99m, DateTime.Now.AddDays(-2), DateTime.Now);

The above example is how you refine the search; however, we've done some testing and this string is NOT SANITIZED. This is where my problem lies, because any of those inputs in the .Format could be user input, and while I can clean up my input from text-boxes easily, I'm going to miss edge cases - it's just the nature of things. I do not have the option here to use a parameterized query, because Commerce Server has some insane backwards logic in how it handles the extensible set of fields (schema), and the free-text search words are pre-compiled somewhere. This means I cannot go directly to the SQL tables. What I'd /love/ to see is something along the lines of:

SqlCommand cmd = new SqlCommand("[cy_list_price] > @MinPrice AND [DateOfIntroduction] BETWEEN @StartDate AND @EndDate");
cmd.Parameters.AddWithValue("@MinPrice", 12.99m);
cmd.Parameters.AddWithValue("@StartDate", DateTime.Now.AddDays(-2));
cmd.Parameters.AddWithValue("@EndDate", DateTime.Now);
CatalogSearch search = /// constructor
search.WhereClause = cmd.ToSqlString();

    Read the article

  • ibatis problem using <isNull> whilst iterating over a List

    - by onoma
    Hi, I'm new to iBatis and I'm struggling with the <iterate> and <isNull>/<isNotNull> elements. I want to iterate over a List of Book instances (say) that are passed in as a HashMap: MyParameters. The list will be called listOfBooks. The <iterate> clause of the overall SQL statement will therefore look like this:

<iterate prepend="AND" property="MyParameters.listOfBooks" conjunction="AND" open="(" close=")">
  ...
</iterate>

I also need to produce different SQL within the iterate elements depending on whether a property of each Book instance in the 'listOfBooks' List is null or not. So, I need a statement something like this:

<iterate prepend="AND" property="MyParameters.listOfBooks" conjunction="AND" open="(" close=")">
  <isNull property="MyParameter.listOfBooks.title">
    <!-- SQL clause #1 here -->
  </isNull>
  <isNotNull property="MyParameter.listOfBooks.title">
    <!-- SQL clause #2 here -->
  </isNotNull>
</iterate>

When I do this I get an error message stating that there is no "READABLE" property named 'title' in my Book class. However, each Book instance does contain a title property, so I'm confused! I can only assume that I have mangled the syntax in trying to pinpoint the title of a particular Book instance in listOfBooks. I'm struggling to find the correct technique for achieving this. If anyone can advise a way forward I'd be grateful. Thanks

    Read the article

  • Joining tables in Oracle

    - by Deven
    Hi friends, I am having a problem joining two tables in Oracle. My two tables are shown below.

table1 looks like:

id      Name    Jan
7001    Deven   22
7002    Clause  55
7004    Monish  11

table2 looks like:

id      Name    Feb
7001    Deven   12
7002    Clause  15
7003    Nimesh  20
7004    Monish  21
7005    Ritesh  22

I want to combine these two tables and get an answer like the one below:

id      Name    Jan   Feb
7001    Deven   22    12
7002    Clause  55    15
7003    Nimesh  -     20
7004    Monish  11    21
7005    Ritesh  -     22
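
A hedged sketch of one way to get that combined result in Oracle: a FULL OUTER JOIN on id, with NVL picking whichever side is present. The table and column names follow the question; printing '-' for a missing month is an assumption about how the gaps should be displayed.

-- Minimal sketch, assuming table1(id, name, jan) and table2(id, name, feb)
SELECT NVL(t1.id, t2.id)          AS id,
       NVL(t1.name, t2.name)      AS name,
       NVL(TO_CHAR(t1.jan), '-')  AS jan,
       NVL(TO_CHAR(t2.feb), '-')  AS feb
FROM   table1 t1
FULL OUTER JOIN table2 t2 ON t1.id = t2.id
ORDER  BY NVL(t1.id, t2.id);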

    Read the article

  • Entity Framework and differences between Contains between SQL and objects using ToLower

    - by John Ptacek
    I have run into an "issue" I am not quite sure I understand with Entity Framework. I am using Entity Framework 4 and have tried to utilize a TDD approach. As a result, I recently implemented a search feature using a Repository pattern. For my test project, I am implementing my repository interface and have a set of "fake" object data I am using for test purposes. I ran into an issue trying to get the Contains clause to work for a case-invariant search. My code snippet, for both my test and the repository class used against the database, is as follows:

if (!string.IsNullOrEmpty(Description))
{
    items = items.Where(r => r.Description.ToLower().Contains(Description.ToLower()));
}

However, when I ran my test cases the results were not populated if my case did not match the underlying data. I tried looking into what I thought was the issue for a while. To clear my mind, I went for a run and wondered if the same code with EF would work against a SQL back-end database, since SQL explicitly supports the LIKE command - and it executed as I expected, using the same logic. I understand why EF against the database back end supports the Contains clause; however, I was surprised that my unit tests did not. Any ideas, other than SQL Server's support of the LIKE clause, why the behavior differs when I use objects I populate in a collection instead of going against the database server? Thanks! John
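
For context on the database side of that difference, the sketch below shows roughly what the comparison looks like in SQL Server (a hedged illustration with a made-up @Items table variable, not the SQL EF actually generates). On a typical case-insensitive collation the LIKE match succeeds regardless of case, which is one reason the database-backed query can match even when an in-memory comparison does not.

-- Hedged illustration; @Items and its data are assumptions.
DECLARE @Items TABLE (Description nvarchar(100));
INSERT INTO @Items VALUES (N'Red Widget'), (N'blue widget');

DECLARE @Description nvarchar(50);
SET @Description = N'red widget';

-- With the usual case-insensitive collation this matches 'Red Widget':
SELECT * FROM @Items
WHERE Description LIKE '%' + @Description + '%';

-- Forcing a case-sensitive collation shows what a strict comparison would return:
SELECT * FROM @Items
WHERE Description COLLATE Latin1_General_CS_AS LIKE '%' + @Description + '%';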

    Read the article

  • SQLAlchemy & Complex Queries

    - by user356594
    I have to implement ACL for an existing application, so I added user, group and groupmembers tables to the database. I defined a ManyToMany relationship between user and group via the association table groupmembers. In order to protect some resources of the app (i.e. item) I added an additional association table, auth_items, which should be used as the association table for the ManyToMany relationship between groups/users and the specific item. auth_items has the following columns:

user_id -- user table
group_id -- group table
item_id -- item table

At least one of the user_id and group_id columns is set, so it's possible to define access to a specific item for a group or for a user. I have used the AssociationProxy to define the relationship between users/groups and items. I now want to display all items which the user has access to, and I am having a really hard time doing that. The following criteria are used:

1. All items which are owned by the user should be shown (item.owner_id = user.id).
2. All public items should be shown (item.access = public).
3. All items which the user has access to should be shown (auth_item.user_id = user.id).
4. All items which a group of the user has access to should be shown.

The first two criteria are quite straightforward, but I have a hard time doing the 3rd one. Here is my approach:

clause = and_(item.access == 'public')
if user is not None:
    clause = or_(clause, item.owner == user, item.users.contains(user), item.groups.contains(group for group in user.groups))

The third criterion produces an error: item.groups.contains(group for group in user.groups). I am actually not sure if this is a good approach at all. What is the best approach when filtering many-to-many relationships? How can I filter a many-to-many relationship based on another list/relationship? By the way, I am using the latest SQLAlchemy (0.6) and Elixir versions. Thanks for any insights.

    Read the article

  • Partial slideDown with jQuery

    - by Jon
    I have some buttons that have drawer menus and what I'd like to do is add a state so that when the user hovers over the button, the drawer bumps out slightly (maybe with a little wiggle/rubber-banding) so that they know that there's a drawer with more information. I have the sliding working and a hover function set up, but I don't know how to implement a partial slideDown. Any ideas? FIDDLE. Code: <div class="clause"> <div class="icon"><img src="http://i.imgur.com/rTu40.png"/></div> <div class="label">Location</div> <div class="info">Midwest > Indiana, Kentucky; Southwest > Texas, Nevada, Utah; Pacific > California, Hawaii</div> </div> <div class="drawer">Info 1<br/><br/></div> <div class="drawerBottom"></div> $(".clause").click(function() { var tmp = $(this).next("div.drawer"); if(tmp.is(":hidden")) tmp.slideDown('3s'); else tmp.slideUp('3s'); }); $(".clause").hover(function() { });

    Read the article

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello,

Specifications: MySQL 4.1+

I have a situation that requires a particular result set from a MySQL query; let's see the current query first and then I'll ask my question:

SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
FROM thread AS thread
LEFT JOIN post AS post ON (thread.threadid = post.threadid)
LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
WHERE post.postid != thread.firstpostid
AND thread.open = 1
AND thread.visible = 1
AND thread.replycount >= 1
AND post.visible = 1
AND (forum.options & 1)
AND (forum.options & 2)
AND (forum.options & 4)
AND forum.forumid IN (1,2,3)
GROUP BY post.threadid
ORDER BY tdateline DESC, pdateline ASC

As you can see, I mainly need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query will return only one post's dateline with its related unique thread. My questions are:

1. How can I limit the returned threads per forum? Suppose I need only 5 threads - as a maximum - to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved?
2. Are there any recommendations for optimizing this query (of course, after solving the first point)?

Notes: I prefer not to use sub-queries, but if that's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. Advice appreciated in advance :)
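
A hedged sketch of the usual per-group-limit trick on older MySQL (4.1/5.x, no window functions): a correlated subquery counts how many newer threads exist in the same forum and keeps only the top 5. It uses the thread table and filters from the question, but ranking by thread.dateline and the way this folds back into the post/forum joins above are assumptions, so treat it as a starting point; it also relies on a sub-query, which the question would prefer to avoid.

-- Minimal sketch: at most 5 of the newest qualifying threads per forum.
SELECT t.*
FROM thread AS t
WHERE t.forumid IN (1,2,3)
  AND t.open = 1
  AND t.visible = 1
  AND t.replycount >= 1
  AND (
        SELECT COUNT(*)
        FROM thread AS newer
        WHERE newer.forumid = t.forumid
          AND newer.open = 1
          AND newer.visible = 1
          AND newer.replycount >= 1
          AND newer.dateline > t.dateline
      ) < 5
ORDER BY t.forumid, t.dateline DESC;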

    Read the article
