Search Results

Search found 2949 results on 118 pages for 'msdn'.


  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment (Cti) in StreamInsight is definitely one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in the three-part "Time in StreamInsight" series by Ciprian Gerea, a member of the StreamInsight team at Microsoft: http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx

    A lot of the problems I see with unresponsive or stuck streams on the MSDN forums are to do with how Ctis are enqueued or, in a lot of cases, not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work, before showing the way I solved the problem.

    The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events, and it doesn't matter whether those events are being pushed into the engine by a source system or are being read from something like a flat file in a directory somewhere; you can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important: we could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had.

        CREATE TABLE [dbo].[t]
        (
            [c1] [int] PRIMARY KEY,
            [c2] [datetime] NULL
        )
        INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

    Column c2 defines the StartTime of the event on the source, and as you can see the value in all 3 rows of data is the same. If we read Ciprian's articles we know that we can define how Ctis get injected into the stream in 3 different places:

    - The Stream Definition
    - The Input Factory
    - The Input Adapter

    I personally have always been a fan of enqueuing Ctis through the factory.
    Below is code typical of what I would use to do this. On the class itself I do some inheriting:

        public class SimpleInputFactory :
            ITypedInputAdapterFactory<SimpleInputConfig>,
            ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

    And then I implement the following function:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected, which in turn will flush through the stream of data; I usually pass a value of 1 for this setting. The second parameter determines the Cti timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e. ahead of the event. The problem with this method, though, is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events:

        currEvent.StartTime = (DateTimeOffset)dt.c2;

    If I go ahead and run my StreamInsight process with this configuration I can see on the output adapter that two events have been removed. To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives with a StartTime before that Cti, and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this, so the event is dropped. The same happens for the third event as well (the second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter, and our result now makes sense.

    The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
                AdvanceTimePolicy.Adjust);
        }

    What I am doing here is declaring a policy that injects a Cti together with every event and stamps it with a StartTime equal to the StartTime of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The downside is that because the Cti is declared with the StartTime of the event itself, it does not actually flush that particular event: in the StreamInsight algebra, a Cti commits only those events that occurred strictly before it. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration: all we got through was the Cti, and none of the events. The debugger output shows the stamps on the Cti and the events themselves.
    Because the Cti issued has the same timestamp (StartTime) as the events, none of the events get flushed. I was nearly there, but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds, and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense when I explored it some more:

    - Where multiple events have the same StartTime, add 1 tick to the first event, 2 to the second, 3 to the third and so on, thereby giving them unique StartTime values.
    - Add a timer to manually inject Ctis.

    The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems: if I want to define windows over the stream then some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong. And what would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick; it is now too far in the past as far as my stream is concerned, and it would be dropped. Not what I would want to do at all. I decided then to look at the timer-based solution. I created a timer on my input adapter that elapsed every 200ms (ts, dtCtiIssued and TimerIssuedCti are fields declared elsewhere on the adapter):

        private Timer tmr;

        public SimpleInputAdapter(SimpleInputConfig configInfo)
        {
            ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
            this.configInfo = configInfo;
            tmr = new Timer(200);
            tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
            tmr.Enabled = true;
        }

        void t_Elapsed(object sender, ElapsedEventArgs e)
        {
            ts = DateTime.Now - dtCtiIssued;
            if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
            {
                EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
                TimerIssuedCti = true;
            }
        }

    In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check to see whether that is greater than or equal to 200ms and whether the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check then I would enqueue a Cti every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200ms and the last Cti was enqueued by a real event, then I issue a Cti through the timer to flush the event queue; otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti:

        currEvent = CreateInsertEvent();
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

    If I go ahead and run this configuration I see the following in my output: the first Cti gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine.

    Conclusion: hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is, for me, one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.

    Read the article

  • What's wrong with See[Mike]Code? (no relation)

    - by mbcrump
    I have been hearing a lot about the website See[Mike]Code. Basically, the site creates an interviewer URL and a job candidate URL and lets you see the potential programmer's code (specifically, a .NET developer's). Below is the candidate's URL, and below that the interviewer's URL (shown as screenshots in the original post). So you might think: ah, this is a good thing, we can screen candidates more cheaply and more efficiently. In reality, this is only a good thing if you want your programmer to develop using Notepad. I use the most efficient tools that exist to do my job. I would simply fire up VS2010, type "for" and hit the Tab key twice to get the template shown below. I have no problem keeping MSDN/Google in one of my monitors. I spend time learning VS macros and using Aurora XAML/Expression to produce my XAML for WPF. Sure, I can write a for loop without using the VS snippet, but the real question is, "Why should I?". My point being: if you really want to test a .NET programmer's knowledge, then fire up his native working environment and let him use the features of the IDE to develop the simple 10-line program. For a more sophisticated program, give him 20 minutes and allow access to MSDN/Google. If the programmer cannot find the right path, then give him the boot.
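
    For reference, this is roughly what the built-in C# "for" snippet expands to in VS2010 (a sketch; the placeholder identifiers i and length come from the default snippet and are filled in by the editor):

        for (int i = 0; i < length; i++)
        {

        }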

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you've ever done spatial work with SQL Server, I hope you've come across the 'nearest' problem. You have five thousand stores around the world, and you want to identify the one that's closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don't want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let's do this (but I'm going to use addresses in AdventureWorks2012, as I don't have a list of stores). Oh, and before I do, let's make sure we have a spatial index in place. I'm going to use the default options.

        CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation);

    And my actual query:

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY
            (SELECT TOP (1) *
             FROM Person.Address AS ad
             ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    Great! This is definitely working. I know both those City locations, even if the AddressLine1s don't quite ring a bell. I'm sure I'll be able to find them next time I'm in the area. But of course what I'm concerned about from a querying perspective is what's happened behind the scenes - the execution plan. This isn't pretty. It's not using my index. It's sucking every row out of the Address table TWICE (which sucks), and then it's sorting them by the distance to find the smallest one. It's not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details - that's pretty neat. But yeah - users of my nifty website aren't going to like how long that query takes. The frustrating thing is that I know I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the 'nearest' problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, the first example on this page says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I'm not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it's split the data up into 4 threads, but it's still slow, and not using my index. It's disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or to use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it's a thing of beauty! Part of the plan is here: it's massive, and it's ugly, and it uses a TVF... but it's quick. The way it works is to hook into the GeodeticTessellation function, which essentially finds where the point is and works out which spatial index cells surround it. This then provides a framework for being able to see into the spatial index for the items we want.
    You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation - including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but only because of the good that's going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn't using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error:

        Msg 8622, Level 16, State 1, Line 1
        Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.

    And I'm almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post.

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT
            l.Name,
            COALESCE(a1.AddressLine1, a2.AddressLine1, a3.AddressLine1),
            COALESCE(a1.City, a2.City, a3.City),
            s.Name AS [State],
            c.Name AS Country
        FROM MyLocations AS l
        OUTER APPLY
            (SELECT TOP (1) *
             FROM Person.Address AS ad
             WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000
             ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a1
        OUTER APPLY
            (SELECT TOP (1) *
             FROM Person.Address AS ad
             WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000
             AND a1.AddressID IS NULL
             ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a2
        OUTER APPLY
            (SELECT TOP (1) *
             FROM Person.Address AS ad
             WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000
             AND a2.AddressID IS NULL
             ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a3
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = COALESCE(a1.StateProvinceID, a2.StateProvinceID, a3.StateProvinceID)
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    But this isn't friendly-looking at all, and even then I'd rather use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I'm dealing with SQL 2012 (and later) versions. So why isn't my query doing what it's supposed to? Remember the query...

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY
            (SELECT TOP (1) *
             FROM Person.Address AS ad
             ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    Well, I just wasn't reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index:

    - A spatial index must be present on one of the spatial columns, and the STDistance() method must use that column in the WHERE and ORDER BY clauses.
    - The TOP clause cannot contain a PERCENT statement.
    - The WHERE clause must contain a STDistance() method.
    - If there are multiple predicates in the WHERE clause, then the predicate containing the STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause.
    - The first expression in the ORDER BY clause must use the STDistance() method.
    - Sort order for the first STDistance() expression in the ORDER BY clause must be ASC.
    - All the rows for which STDistance returns NULL must be filtered out.

    Let's start from the top.
    1. Needs a spatial index on one of the columns that's in the STDistance call. Yup, got the index.
    2. No 'PERCENT'. Yeah, I don't have that.
    3. The WHERE clause needs to use STDistance(). Ok, but I'm not filtering, so that should be fine.
    4. Yeah, I don't have multiple predicates.
    5. The first expression in the ORDER BY is my distance; that's fine.
    6. Sort order is ASC, because otherwise we'd be starting with the ones that are furthest away, and that's tricky.
    7. All the rows for which STDistance returns NULL must be filtered out. But I don't have any NULL values, so that shouldn't affect me either.

    ...but something's wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx - "STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match." So if I simply make sure that I'm filtering out the rows that return NULL (the final form of the query is sketched below)... then it's blindingly fast, I get the right results, and I've got the complex-but-brilliant plan that I wanted. It just wasn't overly intuitive, despite being documented.

    @rob_farley
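
    For completeness: the post describes the fix but this excerpt doesn't reproduce the final statement. Based on the description, the corrected query presumably just adds the NULL filter, along these lines (the IS NOT NULL predicate is the assumption here):

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY
            (SELECT TOP (1) *
             FROM Person.Address AS ad
             WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL   -- satisfies requirements 3 and 7
             ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;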

    Read the article

  • XP Mode (Windows Virtual PC for Windows 7) no longer requires hardware virtualisation - hurrah!

    - by Liam Westley
    Windows Virtual PC (aka XP Mode): when XP Mode was released, it insisted on hardware virtualisation being present on your CPU and enabled in the BIOS. Given that Windows Virtual PC was based on an improved Virtual PC 2007, which provided hardware virtualisation as a user-selectable option, I did wonder why on earth Microsoft thought this was a good idea. Not only do many people not have a CPU with hardware virtualisation support, some manufacturers don't provide a BIOS option to enable this setting, especially on laptops - yes Sony, Toshiba and Acer, I'm looking at you.

    Dumb and dumber: this issue became a double whammy; not only was Microsoft a bit dumb in not supporting Windows Virtual PC without hardware virtualisation, your hardware manufacturer was also dumb in not supporting the option in the BIOS.

    Microsoft update to Windows Virtual PC: belatedly, Microsoft has seen the problem with this hardware virtualisation requirement and has now released a new version of Windows Virtual PC that works without hardware virtualisation. This is really good news for those with older (or limited) CPUs and rubbish BIOS firmware. You can find details of how to download the new version of XP Mode here: http://blogs.msdn.com/virtual_pc_guy/archive/2010/03/18/windows-virtual-pc-no-hardware-virtualization-update-now-available-for-download.aspx And there is also an explanation of why the hardware virtualisation requirement was in place for previous releases: http://blogs.msdn.com/virtual_pc_guy/archive/2010/03/18/windows-virtual-pc-now-without-the-need-for-hardware-virtualization.aspx

    Read the article

  • Start Debugging in Visual Studio

    - by Daniel Moth
    Every developer is familiar with hitting F5 and debugging their application, which starts their app with the Visual Studio debugger attached from the start (instead of attaching later). This is one way to achieve step 1 of the Live Debugging process. Hitting F5, F11, Ctrl+F10 and the other ways to start the process under the debugger are covered in this MSDN "How To". The way you configure the debugging experience before you hit F5 is by selecting the "Project" and then the "Properties" menu (Alt+F7 on my keyboard bindings). Depending on your project type there are different options, but if you browse to the Debug (or Debugging) node in the properties page you'll have a way to select local or remote machine debugging, what debug engines to use, command line arguments to use during debugging, etc. Currently the .NET and C++ project systems are different, but one would hope that one day they will be unified to use the same mechanism and UI (I don't work on that product team, so I have no knowledge of whether that is a goal or if it will ever happen). Personally I like the C++ one better; here is what it looks like (and it is described on this MSDN page). If you were following along in the "Attach to Process" blog post, the equivalent to the "Select Code Type" dialog is the "Debugger Type" dropdown: that is how you change the debug engine. Some of the debugger properties options appear on the standard toolbar in VS. With Visual Studio 11, the Debug Type option has been added to the toolbar. If you don't see that in your installation, customize the toolbar to show it - VS 11 tends to be conservative in what you see by default, especially for the non-C++ Visual Studio profiles. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Visual Studio 2010 Service Pack 1, now available for download

    - by Harish Ranganathan
    Visual Studio 2010 Service Pack 1 (SP1) has now been available for general download for almost a week. The Beta of SP1 came a couple of months back, and it brought a lot of performance enhancements, added support for HTML5 tags, and a few other things related to web development. Now the final release of SP1 is available. The good part is that if you had installed the SP1 Beta, you don't have to remove the Beta and start all over again; you can apply the final release on top of the Beta and it works like a charm. So, in simplified terms, what is new in Visual Studio 2010 SP1? Before I start listing it down, I was checking whether there was an MSDN article available on this and found http://msdn.microsoft.com/en-us/library/gg442059.aspx While it reads (Beta), the same holds good for the final release as well. Unlike VS 2008 SP1 and .NET 3.5 SP1 (which came together), this release doesn't add any new project templates/item templates. However, there are lots of enhancements related to Web Deployment, Debugging and Unit Testing for .NET 3.5 applications. So, how does one find out if you are running the correct version of the SP1 final release? While the SP1 Beta (Help - About Visual Studio) reads Microsoft Visual Studio 2010 Version 10.0.3118.1 SP1 Rel, once you install the SP1 RTM release it should read as below. The download link for SP1 is here. Cheers!!!

    Read the article

  • A message to Denis Pitcher

    - by guybarrette
    Denis Pitcher, you posted this comment on my blog and some other blogs:

    "Devteach's promotion for a one year MSDN subscription was not honoured and attempts to contact them result in a 'we sent attendee info to MS, it's not our problem' response, while attempts to contact Microsoft result in the suggestion that any queries should be redirected to Devteach. Hopefully not all attendees were cheated, though if you're considering attending a future Devteach it is recommended that you don't hold any expectation that they'll honour their promotions."

    I spoke to Jean-René Roy, the DevTeach organizer, and also to the MSDN Canada folks. It looks like the email you used to register for the conference is now bouncing (maybe a typo when you registered?). That's why you haven't received any news about the offer. The fact that you're leaving the same comment on various blogs without your email address doesn't help at all. They want to contact you! Also, it looks like they never received your emails; maybe you used the wrong email addresses. Anyway, please contact Jean-René Roy at [email protected] ASAP.

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx

    For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is also more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

        vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

        C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a PowerShell script:

        & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command will consume a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn.

    It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx
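
    As a minimal sketch of that command-line-parameter variant (the server, database, source path and property value here are invented placeholders, not from the original script):

        & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
            /Action:Publish `
            /SourceFile:"C:\Drops\Database.dacpac" `
            /TargetServerName:"MYSERVER" `
            /TargetDatabaseName:"MyDatabase" `
            /p:BlockOnPossibleDataLoss=True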

    Read the article

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
    So I recently thought I'd try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of my app if one of our URLs changed. Using the app.config allows a user to just open up the .config file that sits beside their .exe file, edit the URLs right there, and then re-run the app; no recompiling, no redeployment necessary. I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly. Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.

    Simple and Easy

    The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler. For example, if we just wanted to add a list of database server URLs to use in my app, we could do this in the app.config file like so:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" />
          </configSections>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
          </startup>
          <ConnectionManagerDatabaseServers>
            <add key="localhost" value="localhost" />
            <add key="Dev" value="Dev.MyDomain.local" />
            <add key="Test" value="Test.MyDomain.local" />
            <add key="Live" value="Prod.MyDomain.com" />
          </ConnectionManagerDatabaseServers>
        </configuration>

    And then you can access these values in code like so:

        string devUrl = string.Empty;
        var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
        if (connectionManagerDatabaseServers != null)
        {
            devUrl = connectionManagerDatabaseServers["Dev"].ToString();
        }

    Sometimes, though, you don't know what the keys are going to be and you just want to grab all of the values in that ConnectionManagerDatabaseServers section. In that case you can get them all like this:

        // Grab the Environments listed in the App.config and add them to our list.
        var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
        if (connectionManagerDatabaseServers != null)
        {
            foreach (var serverKey in connectionManagerDatabaseServers.AllKeys)
            {
                string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault();
                AddDatabaseServer(serverValue);
            }
        }

    And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings. So this works great, but what about when we want to bring in more information than just a single string per object in the section? (Technically you could use this to bring in 2 strings, where the "key" could be the other string you want to store; for example, we could have stored the value of the Key as the user-friendly name of the URL.)

    More Advanced (and more complicated)

    If you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us. Instead you have to build your own types. Here let's assume that we again want to configure a set of addresses (i.e. URLs), but we want to specify some extra info with them, such as the user-friendly name, whether they require SSL or not, and a list of security groups that are allowed to save changes made to these endpoints. So let's start by looking at the app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" />
          </configSections>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
          </startup>
          <ConnectionManagerDataSection>
            <ConnectionManagerEndpoints>
              <add name="Development" address="Dev.MyDomain.local" useSSL="false" />
              <add name="Test" address="Test.MyDomain.local" useSSL="true" />
              <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" />
            </ConnectionManagerEndpoints>
          </ConnectionManagerDataSection>
        </configuration>

    The first thing to notice here is that my section is now using the type "ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection" (the fully qualified path to the new class I created), followed by ", ConnectionManagerUpdater" (the name of the assembly my new class is in). Next, you will also notice an extra layer down in the <ConnectionManagerDataSection>, which is the <ConnectionManagerEndpoints> element. This is a new collection class that I created to hold each of the Endpoint entries that are defined. Let's look at that code now:

        using System;
        using System.Configuration;

        namespace ConnectionManagerUpdater.Data.Configuration
        {
            public class ConnectionManagerDataSection : ConfigurationSection
            {
                /// <summary>
                /// The name of this section in the app.config.
                /// </summary>
                public const string SectionName = "ConnectionManagerDataSection";

                private const string EndpointCollectionName = "ConnectionManagerEndpoints";

                [ConfigurationProperty(EndpointCollectionName)]
                [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")]
                public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints
                {
                    get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; }
                }
            }

            public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection
            {
                protected override ConfigurationElement CreateNewElement()
                {
                    return new ConnectionManagerEndpointElement();
                }

                protected override object GetElementKey(ConfigurationElement element)
                {
                    return ((ConnectionManagerEndpointElement)element).Name;
                }
            }

            public class ConnectionManagerEndpointElement : ConfigurationElement
            {
                [ConfigurationProperty("name", IsRequired = true)]
                public string Name
                {
                    get { return (string)this["name"]; }
                    set { this["name"] = value; }
                }

                [ConfigurationProperty("address", IsRequired = true)]
                public string Address
                {
                    get { return (string)this["address"]; }
                    set { this["address"] = value; }
                }

                [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)]
                public bool UseSSL
                {
                    get { return (bool)this["useSSL"]; }
                    set { this["useSSL"] = value; }
                }

                [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)]
                public string SecurityGroupsAllowedToSaveChanges
                {
                    get { return (string)this["securityGroupsAllowedToSaveChanges"]; }
                    set { this["securityGroupsAllowedToSaveChanges"] = value; }
                }
            }
        }

    The first class we declare is the one that appears in the <configSections> element of the app.config. It is ConnectionManagerDataSection and it inherits from the necessary System.Configuration.ConfigurationSection class. This class just has one property (other than the expected section name) that basically says: I have a collection property, which is actually a ConnectionManagerEndpointsCollection, the next class defined. The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the required members. The first tells it what type of element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and the second is a function specifying which property on our ConnectionManagerEndpointElement class is the unique key, which I've specified to be the Name field.

    The last class defined is the actual meat of our elements. It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the XML of the app.config). The "ConfigurationProperty" attribute on each of the properties tells what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as whether that property is required and what its default value should be.

    Finally, the code to actually access these values would look like this:

        // Grab the Environments listed in the App.config and add them to our list.
        var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection;
        if (connectionManagerDataSection != null)
        {
            foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints)
            {
                var endpoint = new ConnectionManagerEndpoint()
                {
                    Name = endpointElement.Name,
                    ServerInfo = new ConnectionManagerServerInfo()
                    {
                        Address = endpointElement.Address,
                        UseSSL = endpointElement.UseSSL,
                        SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges
                            .Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList()
                    }
                };
                AddEndpoint(endpoint);
            }
        }

    This looks very similar to what we had before in the "simple" example. The main points of interest are that we cast the section as ConnectionManagerDataSection (the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.

    Some other helpful resources around using app.config that I found (and for parts that I didn't really explain in this article):

    - How do you use sections in C# 4.0 app.config? (Stack Overflow) <== Shows how to use Section Groups as well, which is something that I did not cover here, but might be of interest to you.
    - How to: Create Custom Configuration Sections Using ConfigurationSection (MSDN)
    - ConfigurationSection Class (MSDN)
    - ConfigurationCollectionAttribute Class (MSDN)
    - ConfigurationElementCollection Class (MSDN)

    I hope you find this helpful. Feel free to leave a comment. Happy Coding!

    Read the article

  • Hot fix published for TFS2010 upgrade issues

    - by jehan
    Microsoft has released a hot fix for issues that were identified after migrating TFS2005/TFS2008 servers to TFS2010. The issues are related to merging and labels:

    - Labels that were created before the upgrade are entirely empty, or could have incorrect contents.
    - The merge wizard in Visual Studio does not display all valid merge targets for a given source path/branch.
    - During merging, merge candidates are shown for changes that were already merged prior to the upgrade.

    If you have not yet upgraded to TFS 2010, the hotfix is now available, and it is highly recommended that you apply it before configuring your team project collections. Because this hotfix applies to the upgrade of version control content, it must be applied after TFS 2010 setup is complete, but before configuration is started. At the end of the setup experience, the Success screen is shown, indicating the completion of the installation. Normally, users will continue on to the configuration part, but in this case the user needs to cancel the configuration part by un-checking the "Launch Team Foundation Server Configuration Tool" box, which will enable the Cancel button. After exiting setup, the hotfix executable can be run to update the upgrade steps. Once the hotfix is installed, the TFS Configuration Wizard will need to be re-launched from the Start Menu to complete the upgrade process.

    The hotfix has been published on MSDN Code Gallery - you can find it here: http://code.msdn.microsoft.com/KB2135068

    If you have upgraded to TFS2010 and are facing any of the above issues, then check out this KB for the resolution: http://support.microsoft.com/kb/2193796/en-us

    Read the article

  • Visual Studio 2010 Is Here!

    - by Bill Evjen
    I think back to the days of the first versions of Visual Studio (when it was called Visual Studio .NET, remember?) and I think about how far Microsoft has come with this IDE. It really is the best IDE on the market. There is so much to this IDE it is amazing. It can now really handle managing your complete software application development lifecycle. For me, it is (besides Windows 7) the best and most successful product Microsoft has developed. You can obviously get this now, and it is available on MSDN and some other places:

    - MSDN
    - Visual Studio Trial Editions
    - Visual Studio 2010 Express Editions (free)

    You will also find great info at the Visual Studio Developer Center. Some other interesting tidbits of info:

    - JetBrains' ReSharper 5.0 has been released for VS2010
    - Oracle will have the new Oracle Dev Tools for VS2010 within one month - http://bit.ly/9gC9NE
    - Visual Studio 64-bit - why there is no 64-bit version of VS - http://bit.ly/dhhwAj

    In installing this version of Visual Studio, if you have been working on the previous RC builds, then you are going to want to uninstall those previous editions of the 2010 product. You can do this through the Add/Remove Programs dialog, and you are going to want to select the appropriate item from the long list of Visual Studio items. You are then going to want to step through the Visual Studio dialog (it will seem as if you are installing it again), and you will then come to a point where you can select the option to uninstall the entire application. If you have installed the Silverlight 4 RC bits, then you are also going to want to uninstall them, and you are also going to want to uninstall the "Update for Visual Studio 2010 (KB976272)" before installing Silverlight RC2 - which you can find on www.silverlight.net. Technorati Tags: vs2010, .net, visualstudio, microsoft

    Read the article

  • Windows CE and the Compact Framework are dead?

    - by Valter Minute
    This is one of the questions I've been asked more and more frequently at my public speeches and each time I meet customers. The announcement of the new Windows Phone 7 platform and the release of Visual Studio 2010 generated a bit of confusion around Windows CE and some of the technologies it supports. Windows CE is still alive, and a lot of good programmers are working on the new releases (I had a chance to meet some of them during the MVP summit in February). Here's a blog post from Olivier Bloch that describes the situation and provides some good news about the OS: http://blogs.msdn.com/obloch/archive/2010/05/03/windows-ce-is-not-dead.aspx As you can read there, Windows Phone 7 keeps its "roots" inside Windows CE. Regarding the .NET Compact Framework, this article from the excellent "I know the answer (it's 42)" blog by Abhinaba (it seems that we share a passion for photography, Douglas Adams and embedded development) explains that the .NET CF is the foundation of the XNA and Silverlight implementations on the WP7 platform: http://blogs.msdn.com/abhinaba/archive/2010/03/18/what-is-netcf.aspx So Windows CE is here to stay, powering one of the most interesting smartphone platforms and ready to power your devices too. Add those blogs to your RSS reader list and stay tuned for more good news about CE and the Compact Framework!

    Read the article

  • Is Intellisense faster in Visual Studio 2012 compared to Visual Studio 2010 for C++ projects?

    - by syplex
    We switched to VS2010 from VS2003 a few months ago, and there are many, many improvements. But the speed of IntelliSense is not one of them (although it does generate higher-quality results, which is great). I read that IntelliSense and the MSDN help system were being improved in VS2012, so I'm curious whether it's actually faster. The only data I could find were graphs for an early release (VS2011). For the record, I am using a vanilla install of VS2010 with SP1 on Windows 7 SP1 (x64), with no plugins or add-ins running. What I'm looking for specifically:

    - Has the speed of IntelliSense autocomplete improved?
    - Has the speed of F12 (go to definition) improved?

    The answers to these questions will help in determining whether VS2012 is worth the money to upgrade at this time, as the IntelliSense slowness would be the only major reason for upgrading. I'd also be interested in knowing if the help system has improved. I'm currently using the MSDN help from VS2008 SP1 because it has filtering and is faster.

    Read the article

  • Custom Session Management using HashTable

    - by kaleidoscope
    ASP.NET session state lets you associate a server-side string or object dictionary containing state data with a particular HTTP client session. A session is defined as a series of requests issued by the same client within a certain period of time, and is managed by associating a session ID with each unique client. The ID is supplied by the client on each request, either in a cookie or as a special fragment of the request URL. The session data is stored on the server side in one of the supported session state stores, which include in-process memory, a SQL Server™ database, and the ASP.NET State Server service. The latter two modes enable session state to be shared among multiple web servers in a web farm and do not require server affinity.

    To implement a custom session handler you need to follow this process:

    1. Create a class library containing a class which inherits from the SessionStateStoreProviderBase abstract class.
    2. Implement all the abstract methods of the base class.
    3. Change the session mode to "Custom" in the web.config file, and register your class as the provider:

        <sessionState mode="Custom" customProvider="MyCustomProvider">
          <providers>
            <add name="MyCustomProvider" type="Namespace.ClassName" />
          </providers>
        </sessionState>

    For more details please refer to the following links (a sketch of such a provider follows below):

    - http://msdn.microsoft.com/en-us/magazine/cc163730.aspx
    - http://msdn.microsoft.com/en-us/library/system.web.sessionstate.sessionstatestoreproviderbase.aspx

    - Chandraprakash, S

    Technorati Tags: Chandraprakash, Session state management
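
    Since the entry only names the abstract class, here is a minimal, illustrative skeleton of such a provider backed by an in-memory Hashtable, matching the article's title. All class and member names are invented for the example, and a real provider would also need proper locking, expiry and error handling:

        using System;
        using System.Collections;
        using System.Web;
        using System.Web.SessionState;

        namespace MyNamespace
        {
            // Toy session store: keeps each session's items in a static,
            // synchronized Hashtable keyed by session id.
            public class HashtableSessionStateProvider : SessionStateStoreProviderBase
            {
                private static readonly Hashtable Store = Hashtable.Synchronized(new Hashtable());
                private int timeoutMinutes = 20;

                public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
                {
                    return new SessionStateStoreData(new SessionStateItemCollection(),
                        SessionStateUtility.GetSessionStaticObjects(context), timeout);
                }

                public override void CreateUninitializedItem(HttpContext context, string id, int timeout)
                {
                    Store[id] = new SessionStateItemCollection();
                    timeoutMinutes = timeout;
                }

                public override SessionStateStoreData GetItem(HttpContext context, string id,
                    out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
                {
                    locked = false; lockAge = TimeSpan.Zero; lockId = null; actions = SessionStateActions.None;
                    var items = (SessionStateItemCollection)Store[id] ?? new SessionStateItemCollection();
                    return new SessionStateStoreData(items,
                        SessionStateUtility.GetSessionStaticObjects(context), timeoutMinutes);
                }

                public override SessionStateStoreData GetItemExclusive(HttpContext context, string id,
                    out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
                {
                    // No real locking in this sketch; just reuse GetItem.
                    return GetItem(context, id, out locked, out lockAge, out lockId, out actions);
                }

                public override void SetAndReleaseItemExclusive(HttpContext context, string id,
                    SessionStateStoreData item, object lockId, bool newItem)
                {
                    Store[id] = item.Items;   // persist the session's item collection
                }

                public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item)
                {
                    Store.Remove(id);
                }

                // Remaining members are no-ops for this in-memory sketch.
                public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }
                public override void ResetItemTimeout(HttpContext context, string id) { }
                public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback) { return false; }
                public override void InitializeRequest(HttpContext context) { }
                public override void EndRequest(HttpContext context) { }
                public override void Dispose() { }
            }
        }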

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position at which to place the sprite's foot so it is no longer inside the slope? The way the data is stored as a 1D array in the example is a bit confusing; should I try to store it as a 2D array instead? For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right-foot collision point and set that as the new height of the sprite. I'm also a little unclear on when I should make these adjustments. Collisions are checked on every Game.Update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically - why is that exactly? Open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES-Mario-style pure box platforming!
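
    For what it's worth, the upward scan described in the question can stay in the 1D layout the tutorial uses, since the color data is row-major (index = y * width + x). A sketch, where slopeTexture, footX and footY are assumed names for the slope texture and the colliding foot sensor in texture space:

        // Pull the mask once; GetData is expensive, so cache it rather
        // than calling it every frame.
        Color[] mask = new Color[slopeTexture.Width * slopeTexture.Height];
        slopeTexture.GetData(mask);

        // Step upward from the foot sensor while the pixel is still opaque.
        int surfaceY = footY;
        while (surfaceY > 0 && mask[surfaceY * slopeTexture.Width + footX].A != 0)
        {
            surfaceY--;
        }
        // surfaceY is now the first transparent pixel above the slope at footX;
        // reposition the sprite's foot there (converting back to world space).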

    Read the article

  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (an MSDN blog) and to the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups, and with the stupid definitions from Google's dictionary, so I decided to scratch my own itch. I randomly stumbled upon a site called Wordnik, and it provides examples plus definitions plus lots more for words - and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):

        <?xml version="1.0" encoding="utf-8"?>
        <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
          <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
          <os:display>
            <os:name>View on Wordnik</os:name>
            <os:description>Looking up words on an awesome word site called Wordnik</os:description>
            <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
          </os:display>
          <os:activity category="Define">
            <os:activityAction context="selection">
              <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
            </os:activityAction>
          </os:activity>
        </os:openServiceDescription>

    That's it. To get it, go here. Enjoy!

    Read the article

  • Mohsen Agsen on C++

    - by raccoon_tim
    As I already blogged a while back, native code has been on the lips of many since TechEd 2011. Microsoft seems very committed to actually putting the language to use again after all these years of radio silence. On that note, I urge you all to watch this Channel 9 video interview with Mohsen Agsen about C++ today and tomorrow: http://channel9.msdn.com/Shows/Going+Deep/Mohsen-Agsen-C-Today-and-Tomorrow What I find very inspiring about this interview is that Microsoft has a number of internal projects where they are using C++, and they really understand the value of C++ as a highly performant programming language. He also talks about combining managed code, scripted code and native code to get the most out of each of them. This is something we do a lot in the game industry, since we recognize the need for performant platform code with an easy-to-write scripting layer on top of it. It is something I intend to blog about in the near future, so stay tuned! Another great thing that I bumped into recently is C++ AMP, which was announced at this year's AMD Fusion Developer Summit. I would recommend watching Herb Sutter's keynote on the subject at http://channel9.msdn.com/posts/AFDS-Keynote-Herb-Sutter-Heterogeneous-Computing-and-C-AMP.

    Read the article

  • Windows Store now open to ALL developers

    - by CSharpZealot
    A little late, but it should be announced here too...

    "Today's an especially great day to be a developer. We're very excited to announce the last significant milestone in the rollout of the Windows Store before the general availability of Windows 8 on October 26. The Store is now open for app submissions from all developers - individuals and companies - in our supported markets, and we've added 82 more app submission markets! Now, developers from 120 markets can publish Windows Store apps. Ted Dworkin, Partner Program Manager for the Store, authored this post. --Antoine"

    Source: http://blogs.msdn.com/b/windowsstore/archive/2012/09/11/windows-store-now-open-to-all-developers.aspx

    The Windows Store was opened about two weeks ago, and with the upcoming general availability of Windows 8 in October, the timing seems right. In addition to the store being opened, Microsoft also announced that MSDN, BizSpark and DreamSpark subscribers will get a 1-year Windows Store developer account. That's a different tack than what we saw for Windows Phone 7, where that subscription wasn't included. We're already seeing new apps showing up faster and faster, so with the addition of 82 more markets we're only going to see more apps than ever available. Since I'm now back on a Windows 8 platform (I was out for about a month), I'm going to start blogging more content around the Windows 8 developer experience. Next stop for me... get my hands on a Windows 8 Surface device as quickly as possible :) Keep coding!

    Read the article

  • "System.Data.OracleClient requires Oracle client software version 8.1.7 or greater." Error Message

    - by Jandost Khoso
    Quick resolution: give full permission to AUTHENTICATED USERS on the following folders:

    a) ORACLE_HOME
    b) Program Files\ORACLE

    Also check your PATH. You might have installed different clients on your system, and your .NET application may be pointing to a home with an inappropriate client. What your .NET application should load is an OCI.DLL with a file version of 8.1.7 or greater. According to the MSDN document "Oracle and ADO.NET":

    "The .NET Framework Data Provider for Oracle provides access to an Oracle database using the Oracle Call Interface (OCI) as provided by Oracle Client software. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for SQL Server, OLE DB, and ODBC."

    The MSDN document "System Requirements (Oracle)" says:

    "The .NET Framework Data Provider for Oracle requires Microsoft Data Access Components (MDAC) version 2.6 or later. MDAC 2.8 SP1 is recommended. You must also have Oracle 8i Release 3 (8.1.7) Client or later installed."

    Both the .NET Framework Data Provider for Oracle and the Oracle Data Provider for .NET are data providers for accessing an Oracle database. The former ships with the .NET Framework and requires Oracle client version 8.1.7 or above. The latter is provided by Oracle and requires Oracle client version 9.2 or later. The Oracle Data Provider for .NET (ODP.NET) features optimized ADO.NET data access to the Oracle database. ODP.NET allows developers to take advantage of advanced Oracle database functionality, including Real Application Clusters, XML DB, and advanced security. See the document "Comparing the Microsoft .NET Framework 1.1 Data Provider for Oracle and the Oracle Data Provider for .NET" for more information about the difference.

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of having no option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to. A minimal example of setting up a server audit is sketched below.
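    For illustration, here is a minimal sketch of creating a server audit that captures failed logins to a file, driven from C# via SqlClient. The audit names, file path, and connection string are hypothetical, and you would need sysadmin-level rights; this shows the shape of the feature, not a production configuration.

    using System.Data.SqlClient;

    class CreateServerAudit
    {
        static void Main()
        {
            // Hypothetical connection string; the audit DDL runs in master.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";

            string[] statements =
            {
                // Define where the audit output goes.
                @"CREATE SERVER AUDIT LoginAudit TO FILE (FILEPATH = N'C:\Audits\')",
                // Audits are created in a disabled state; turn this one on.
                @"ALTER SERVER AUDIT LoginAudit WITH (STATE = ON)",
                // Say what to capture - here, failed login attempts.
                @"CREATE SERVER AUDIT SPECIFICATION FailedLogins
                      FOR SERVER AUDIT LoginAudit
                      ADD (FAILED_LOGIN_GROUP)
                      WITH (STATE = ON)"
            };

            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                foreach (string sql in statements)
                {
                    using (SqlCommand cmd = new SqlCommand(sql, conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }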

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling improve performance, and by how much? It would be good to have a default at the application config level, or at the DBML level, to specify whether all select queries should be compiled. And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author: ricom, 6 Jul 2007 3:08 AM: "Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs." Also from 10 Tips to Improve your LINQ to SQL Application Performance: "If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query." However, I feel that many developers are not informed enough about the benefits of Compile; a sketch of the pattern follows. I think that tools like FxCop and ReSharper should check queries and suggest when compiling is recommended. Related Articles for LINQ to SQL: MSDN How to: Store and Reuse Queries (LINQ to SQL) 10 Tips to Improve your LINQ to SQL Application Performance Related Articles for Entity Framework: MSDN: CompiledQuery Class Exploring the Performance of the ADO.NET Entity Framework - Part 1 Exploring the Performance of the ADO.NET Entity Framework – Part 2 ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
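    To make the pattern concrete, here is a minimal sketch of CompiledQuery.Compile. The Customer entity and NorthwindDataContext are hypothetical stand-ins for the classes the DBML designer would generate; the point is the compile-once-into-a-static-field, reuse-many-times shape.

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Hypothetical stand-ins for designer-generated classes.
    [Table(Name = "Customers")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)] public string CustomerID;
        [Column] public string City;
        [Column] public string CompanyName;
    }

    public class NorthwindDataContext : DataContext
    {
        public NorthwindDataContext(string connection) : base(connection) { }
        public Table<Customer> Customers { get { return GetTable<Customer>(); } }
    }

    static class CustomerQueries
    {
        // Compiled once, stored in a static field, and reused on every call;
        // the up-front compilation cost only pays off if the query runs many times.
        public static readonly Func<NorthwindDataContext, string, IQueryable<Customer>>
            ByCity = CompiledQuery.Compile(
                (NorthwindDataContext db, string city) =>
                    db.Customers.Where(c => c.City == city));
    }

    Usage would then be something like CustomerQueries.ByCity(db, "London") inside a using block around the DataContext.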

    Read the article

  • Limitations of the SharePoint join using CAML

    - by ybbest
    Limitation One In SharePoint 2010, you can join the primary list to a foreign list and include more than one field from the foreign list. However, the limitation is that the included fields from the foreign list have to be one of the following types: Calculated (treated as plain text) ContentTypeId Counter Currency DateTime Guid Integer Note (one-line only) Number Text The above limitation also explains why you cannot include some types of fields from the remote list when creating a lookup. Limitation Two When using a CAML query to join SharePoint lists, there can be joins to multiple lists, multiple joins to the same list, and chains of joins. However, only inner and left outer joins are permitted, and the field in the primary list must be a Lookup type field that looks up to the field in the foreign list. Limitation Three The support for writing the JOIN query in CAML is very limited. I had to hand-code the CAML query to join the lists, which is not fun at all. Some blog posts mention using LINQ to SharePoint to get the CAML code from there, but I never got it to work. You can check this blog post for that approach; let me know if it works for you. A hand-coded example of the join syntax is sketched below. References: http://msdn.microsoft.com/en-us/library/ee535502.aspx http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.joins.aspx
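    To show what the hand-coded join looks like, here is a minimal sketch using the server object model. The Orders list, its Customer lookup field, and the Customers list are hypothetical names for illustration.

    using System;
    using Microsoft.SharePoint;

    class CamlJoinExample
    {
        static void QueryOrders(SPWeb web)
        {
            SPList orders = web.Lists["Orders"];
            SPQuery query = new SPQuery();

            // Left outer join from the primary list's Lookup field to the foreign list.
            query.Joins =
                "<Join Type='LEFT' ListAlias='customers'>" +
                  "<Eq>" +
                    "<FieldRef Name='Customer' RefType='Id' />" +
                    "<FieldRef List='customers' Name='ID' />" +
                  "</Eq>" +
                "</Join>";

            // Projected fields surface foreign-list columns in the result set.
            query.ProjectedFields =
                "<Field Name='CustomerCity' Type='Lookup' List='customers' ShowField='City' />";
            query.ViewFields =
                "<FieldRef Name='Title' /><FieldRef Name='CustomerCity' />";

            foreach (SPListItem item in orders.GetItems(query))
            {
                // Projected values come back in lookup format (e.g. "1;#London").
                Console.WriteLine("{0}: {1}", item.Title, item["CustomerCity"]);
            }
        }
    }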

    Read the article

  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (some MSDN blog) and read the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups and the stupid definitions of Google's dictionary, so I decided to scratch my own itch. I randomly stumbled on a site called Wordnik, which provides examples, definitions, and lots more for words, and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):

    <?xml version="1.0" encoding="utf-8"?>
    <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
      <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
      <os:display>
        <os:name>View on Wordnik</os:name>
        <os:description>Looking up words on an awesome word site called Wordnik</os:description>
        <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
      </os:display>
      <os:activity category="Define">
        <os:activityAction context="selection">
          <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
        </os:activityAction>
      </os:activity>
    </os:openServiceDescription>

    That’s it. To get it, go here. Enjoy!

    Read the article

  • Controlling access to site folders if you cannot use Roles

    - by DavidMadden
    I find myself on an assignment where I could not use System.Web.Security.Roles. That meant that I could not use Visual Studio's Website | ASP.NET Configuration. I had to go about things another way. The clues were in these two websites: http://www.csharpaspnetarticles.com/2009/02/formsauthentication-ticket-roles-aspnet.html and http://msdn.microsoft.com/en-us/library/b6x6shw7(v=VS.71).aspx You can set the restrictions on folders in your main web.config without having to set them in each folder through its own web.config file. In the main default.aspx of the protected subfolder off my main site, I used the following code, since MultiFormAuthentication (MFA) provides the security up to this point:

    // Map the application's own user level onto a role name for the ticket.
    string role = string.Empty;
    if (((Login)Session["Login"]).UserLevelID > 3)
    {
        role = "PowerUser";
    }
    else
    {
        role = "Newbie";
    }

    // Store the role in the ticket's UserData so it travels with the cookie.
    FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
        1,
        ((Login)Session["Login"]).UserID,
        DateTime.Now,
        DateTime.Now.AddMinutes(20),
        false,
        role,
        FormsAuthentication.FormsCookiePath);

    string hashCookies = FormsAuthentication.Encrypt(ticket);
    HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, hashCookies);
    Response.Cookies.Add(cookie);

    This gave me the ability to change folder restrictions without restarting the website or doing any hard coding.
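    The ticket above carries the role in its UserData, but something still has to turn the cookie back into a principal on each request so that web.config authorization rules and IsInRole checks can see it. The following is a minimal sketch of that companion piece, along the lines of the first article linked above; it is my assumption of the setup, not code from the original post.

    using System;
    using System.Security.Principal;
    using System.Web;
    using System.Web.Security;

    public partial class Global : HttpApplication
    {
        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
            if (cookie == null) return;

            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
            if (ticket == null || ticket.Expired) return;

            // UserData holds the role written when the ticket was issued,
            // e.g. "PowerUser" or "Newbie".
            string[] roles = ticket.UserData.Split(',');
            Context.User = new GenericPrincipal(new FormsIdentity(ticket), roles);
        }
    }

    With this in place, role-based allow/deny entries in the main web.config work just as they would with the Roles provider.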

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >