Search Results

Search found 19953 results on 799 pages for 'post get'.

  • SQLAuthority News – Three Posts on Reporting – T-SQL Tuesday #005

    - by pinaldave
    If you are following my blog, you already know that I am more of a “T-SQL and Performance Tuning” type of person. I do have a good understanding of the Business Intelligence suite and I also do certain training sessions on the same subject. When I was writing the blog post for T-SQL Tuesday #005 – Reporting, I realized that I have already written posts that clearly explain how to generate reports using SQL Server Management Studio. Here is a quick recap on how one can use SSMS and its out-of-the-box reports, which can help many developers. Please note that they can be resource-intensive as well, so please use SSMS carefully. SQL SERVER – Generate Report for Index Physical Statistics – SSMS SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005 Junior developers and DBAs can use these reports right away and can also start learning about and exploring most database performance issues with the help of Sr. DBAs. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
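
    The Index Physical Statistics report mentioned above is driven by the sys.dm_db_index_physical_stats DMV. As a rough, hedged illustration (my own sketch, not taken from the linked posts), a query like the following returns similar fragmentation data directly; like the SSMS reports, it can be resource-intensive on large databases:

        -- Fragmentation summary for the current database
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.index_type_desc,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
            ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;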

  • Consolidation Strategy References

    - by BuckWoody
    I have a presentation that I give on SQL Server Consolidation Strategies, and in that presentation I talk about a few links that are useful. Here are some that I’ve found – feel free to comment on more, or if these links go stale:   Consolidation using SQL Server: http://msdn.microsoft.com/en-us/library/ee692366.aspx SQL Server Consolidation Guidance:  http://msdn.microsoft.com/en-us/library/ee819082.aspx   More references for SQL Server and Hyper-V: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Virtualization-with-SQL-Server.aspx Quick overview of Virtual Server licensing implications: http://www.microsoft.com/uk/licensing/morethan250/learn/virtualisation.mspx SQL Server and Hyper-V best practices: http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx High-Availability and Hyper-V: http://technet.microsoft.com/en-us/magazine/2008.10.higha.aspx Virtualization Calculator: http://www.microsoft.com/Windowsserver2008/en/us/hyperv-calculators.aspx   May not be current, but here’s a whitepaper from VMWare for SQL Server: http://www.vmware.com/files/pdf/SQLServerWorkloads.pdf More information on SQL Server and VMWare: http://blogs.msdn.com/cindygross/archive/2009/10/23/considerations-for-installing-sql-server-on-vmware.aspx   Server Virtualization Validation Program: http://www.windowsservercatalog.com/svvp.aspx?svvppage=svvp.htm

  • SQL SERVER – What is AdventureWorks?

    - by pinaldave
    NOTE: If you know the answer to this question, then I request you to stop reading this post right now. Please do not leave a comment saying this blog post was not useful to you if you already knew the answer. A few days ago, I received a DM asking: what is the AdventureWorks database, and why do I use it in all my examples instead of any other database (e.g. Pubs or Northwind)? As a matter of fact, when I went back to my question list, which I have not yet answered, there were a few more variations of this same question. AdventureWorks is a sample database shipped with SQL Server and it can be downloaded from the http://codeplex.com site. AdventureWorks replaced Northwind and Pubs as the sample database starting with SQL Server 2005. The Microsoft team keeps updating the sample database as they release new versions. Here are some quick links: AdventureWorks SQL Server 2008 SR4 AdventureWorks 2008R2 November CTP AdventureWorks for SQL Azure (December CTP) AdventureWorks for SQL Server 2005 SP2A SQL SERVER – 2008 – Download and Install Samples Database AdventureWorks 2005 – Detail Tutorial I have previously written a few other articles on the same subject; you can find them easily here: [email protected] Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
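
    As a quick, hedged aside (my own illustration, not part of the original post), once a sample database has been restored you can confirm which AdventureWorks versions are present on an instance with a simple catalog query:

        -- List any AdventureWorks sample databases installed on this instance
        SELECT name, create_date, compatibility_level
        FROM sys.databases
        WHERE name LIKE N'AdventureWorks%'
        ORDER BY name;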

  • Can't start mysql - mysql respawning too fast, stopped

    - by Tom
    Today I did a fresh install of ubuntu 12.04 and went about setting up my local development environment. I installed mysql and edited /etc/mysql/my.cnf to optimise InnoDB, but when I try to restart mysql, it fails with an error:

    [20:53][tom@Pochama:/var/www/website] (master) $ sudo service mysql restart
    start: Job failed to start

    The syslog reveals there is a problem with the init script:

    > tail -f /var/log/syslog
    Apr 28 21:17:46 Pochama kernel: [11840.884524] type=1400 audit(1335644266.033:184): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=760 comm="apparmor_parser"
    Apr 28 21:17:47 Pochama kernel: [11842.603773] init: mysql main process (764) terminated with status 7
    Apr 28 21:17:47 Pochama kernel: [11842.603841] init: mysql main process ended, respawning
    Apr 28 21:17:48 Pochama kernel: [11842.932462] init: mysql post-start process (765) terminated with status 1
    Apr 28 21:17:48 Pochama kernel: [11842.950393] type=1400 audit(1335644268.101:185): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=811 comm="apparmor_parser"
    Apr 28 21:17:49 Pochama kernel: [11844.656598] init: mysql main process (815) terminated with status 7
    Apr 28 21:17:49 Pochama kernel: [11844.656665] init: mysql main process ended, respawning
    Apr 28 21:17:50 Pochama kernel: [11845.004435] init: mysql post-start process (816) terminated with status 1
    Apr 28 21:17:50 Pochama kernel: [11845.021777] type=1400 audit(1335644270.173:186): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=865 comm="apparmor_parser"
    Apr 28 21:17:51 Pochama kernel: [11846.721982] init: mysql main process (871) terminated with status 7
    Apr 28 21:17:51 Pochama kernel: [11846.722001] init: mysql respawning too fast, stopped

    Any ideas? Things I tried already: I googled and found an Ubuntu bug with apparmor (https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/970366), so I changed apparmor from enforce mode to complain mode:

    sudo apt-get install apparmor-utils
    sudo aa-complain /usr/sbin/mysqld
    sudo /etc/init.d/apparmor reload

    but it didn't help. I still can't start mysql. I also thought the issue might be because the InnoDB logfiles were a different size than mysql was expecting. I removed the innodb log files before restarting using: sudo mv /var/lib/mysql/ib_logfile* /tmp. No luck though. Workaround: I re-installed 12.04 and made sure not to touch /etc/mysql/my.cnf in any way. Mysql is working so I can get on with what I need to do, but I will need to edit it at some point - hopefully I'll have figured out a solution, or this question will have been answered, by then...

  • Creating a dynamic proxy generator with c# – Part 4 – Calling the base method

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors   The plan for calling the base methods from the proxy is to create a private method for each overridden proxy method, this will allow the proxy to use a delegate to simply invoke the private method when required. Quite a few helper classes have been created to make this possible so as usual I would suggest download or viewing the code at http://rapidioc.codeplex.com/. In this post I’m just going to cover the main points for when creating methods. Getting the methods to override The first two notable methods are for getting the methods. private static MethodInfo[] GetMethodsToOverride<TBase>() where TBase : class {     return typeof(TBase).GetMethods().Where(x =>         !methodsToIgnore.Contains(x.Name) &&                              (x.Attributes & MethodAttributes.Final) == 0)         .ToArray(); } private static StringCollection GetMethodsToIgnore() {     return new StringCollection()     {         "ToString",         "GetHashCode",         "Equals",         "GetType"     }; } The GetMethodsToIgnore method string collection contains an array of methods that I don’t want to override. In the GetMethodsToOverride method, you’ll notice a binary AND which is basically saying not to include any methods marked final i.e. not virtual. Creating the MethodInfo for calling the base method This method should hopefully be fairly easy to follow, it’s only function is to create a MethodInfo which points to the correct base method, and with the correct parameters. private static MethodInfo CreateCallBaseMethodInfo<TBase>(MethodInfo method) where TBase : class {     Type[] baseMethodParameterTypes = ParameterHelper.GetParameterTypes(method, method.GetParameters());       return typeof(TBase).GetMethod(        method.Name,        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,        null,        baseMethodParameterTypes,        null     ); }   /// <summary> /// Get the parameter types. /// </summary> /// <param name="method">The method.</param> /// <param name="parameters">The parameters.</param> public static Type[] GetParameterTypes(MethodInfo method, ParameterInfo[] parameters) {     Type[] parameterTypesList = Type.EmptyTypes;       if (parameters.Length > 0)     {         parameterTypesList = CreateParametersList(parameters);     }     return parameterTypesList; }   Creating the new private methods for calling the base method The following method outline how I’ve created the private methods for calling the base class method. 
private static MethodBuilder CreateCallBaseMethodBuilder(TypeBuilder typeBuilder, MethodInfo method) {     string callBaseSuffix = "GetBaseMethod";       if (method.IsGenericMethod || method.IsGenericMethodDefinition)     {                         return MethodHelper.SetUpGenericMethod             (                 typeBuilder,                 method,                 method.Name + callBaseSuffix,                 MethodAttributes.Private | MethodAttributes.HideBySig             );     }     else     {         return MethodHelper.SetupNonGenericMethod             (                 typeBuilder,                 method,                 method.Name + callBaseSuffix,                 MethodAttributes.Private | MethodAttributes.HideBySig             );     } } The CreateCallBaseMethodBuilder is the entry point method for creating the call base method. I’ve added a suffix to the base classes method name to keep it unique. Non Generic Methods Creating a non generic method is fairly simple public static MethodBuilder SetupNonGenericMethod(     TypeBuilder typeBuilder,     MethodInfo method,     string methodName,     MethodAttributes methodAttributes) {     ParameterInfo[] parameters = method.GetParameters();       Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);       Type returnType = method.ReturnType;       MethodBuilder methodBuilder = CreateMethodBuilder         (             typeBuilder,             method,             methodName,             methodAttributes,             parameterTypes,             returnType         );       ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);       return methodBuilder; }   private static MethodBuilder CreateMethodBuilder (     TypeBuilder typeBuilder,     MethodInfo method,     string methodName,     MethodAttributes methodAttributes,     Type[] parameterTypes,     Type returnType ) { MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, methodAttributes, returnType, parameterTypes); return methodBuilder; } As you can see, you simply have to declare a method builder, get the parameter types, and set the method attributes you want.   Generic Methods Creating generic methods takes a little bit more work. /// <summary> /// Sets up generic method. 
/// </summary> /// <param name="typeBuilder">The type builder.</param> /// <param name="method">The method.</param> /// <param name="methodName">Name of the method.</param> /// <param name="methodAttributes">The method attributes.</param> public static MethodBuilder SetUpGenericMethod     (         TypeBuilder typeBuilder,         MethodInfo method,         string methodName,         MethodAttributes methodAttributes     ) {     ParameterInfo[] parameters = method.GetParameters();       Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);       MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName,         methodAttributes);       Type[] genericArguments = method.GetGenericArguments();       GenericTypeParameterBuilder[] genericTypeParameters =         GetGenericTypeParameters(methodBuilder, genericArguments);       ParameterHelper.SetUpParameterConstraints(parameterTypes, genericTypeParameters);       SetUpReturnType(method, methodBuilder, genericTypeParameters);       if (method.IsGenericMethod)     {         methodBuilder.MakeGenericMethod(genericArguments);     }       ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);       return methodBuilder; }   private static GenericTypeParameterBuilder[] GetGenericTypeParameters     (         MethodBuilder methodBuilder,         Type[] genericArguments     ) {     return methodBuilder.DefineGenericParameters(GenericsHelper.GetArgumentNames(genericArguments)); }   private static void SetUpReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters) {     if (method.IsGenericMethodDefinition)     {         SetUpGenericDefinitionReturnType(method, methodBuilder, genericTypeParameters);     }     else     {         methodBuilder.SetReturnType(method.ReturnType);     } }   private static void SetUpGenericDefinitionReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters) {     if (method.ReturnType == null)     {         methodBuilder.SetReturnType(typeof(void));     }     else if (method.ReturnType.IsGenericType)     {         methodBuilder.SetReturnType(genericTypeParameters.Where             (x => x.Name == method.ReturnType.Name).First());     }     else     {         methodBuilder.SetReturnType(method.ReturnType);     }             } Ok, there are a few helper methods missing, basically there is way to much code to put in this post, take a look at the code at http://rapidioc.codeplex.com/ to follow it through completely. Basically though, when dealing with generics there is extra work to do in terms of getting the generic argument types setting up any generic parameter constraints setting up the return type setting up the method as a generic All of the information is easy to get via reflection from the MethodInfo.   Emitting the new private method Emitting the new private method is relatively simple as it’s only function is calling the base method and returning a result if the return type is not void. 
ILGenerator il = privateMethodBuilder.GetILGenerator();   EmitCallBaseMethod(method, callBaseMethod, il);   private static void EmitCallBaseMethod(MethodInfo method, MethodInfo callBaseMethod, ILGenerator il) {     int privateParameterCount = method.GetParameters().Length;       il.Emit(OpCodes.Ldarg_0);       if (privateParameterCount > 0)     {         for (int arg = 0; arg < privateParameterCount; arg++)         {             il.Emit(OpCodes.Ldarg_S, arg + 1);         }     }       il.Emit(OpCodes.Call, callBaseMethod);       il.Emit(OpCodes.Ret); } So in the main method building method, an ILGenerator is created from the method builder. The ILGenerator performs the following actions: Load the class (this) onto the stack using the hidden argument Ldarg_0. Create an argument on the stack for each of the method parameters (starting at 1 because 0 is the hidden argument) Call the base method using the Opcodes.Call code and the MethodInfo we created earlier. Call return on the method   Conclusion Now we have the private methods prepared for calling the base method, we have reached the last of the relatively easy part of the proxy building. Hopefully, it hasn’t been too hard to follow so far, there is a lot of code so I haven’t been able to post it all so please check it out at http://rapidioc.codeplex.com/. The next section should be up fairly soon, it’s going to cover creating the delegates for calling the private methods created in this post.   Kind Regards, Sean.

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 – Reporting. The three most important components of any computer and server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using the Management Data Collection. Management Data Collection generates the reports for the three said components by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports generated for CPU, Memory and IO. You can use DMVs, Extended Events as well as Perfmon to trace the data. In keeping with the T-SQL Tuesday subject of reporting, this post was created as a visual tutorial to quickly configure Data Collection and generate reports. From Books Online: The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace. Let us go over the visual tutorial on how quickly Data Collection can be configured. Expand the management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the reports. Now let us see more additional screenshots of the reports. The reports are very self-explanatory but can be drilled down into to get further details. Click on the image to make it larger. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
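
    The post mentions DMVs as one alternative to the Data Collection reports; as a minimal, hedged sketch (my own example, not part of the original tutorial), the I/O side of the picture can be sampled directly with sys.dm_io_virtual_file_stats:

        -- Cumulative I/O and I/O stalls per database file since the last restart
        SELECT DB_NAME(vfs.database_id) AS database_name,
               vfs.file_id,
               vfs.num_of_reads,
               vfs.num_of_writes,
               vfs.io_stall_read_ms,
               vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;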

  • SQL SERVER – The Difference between Dual Core vs. Core 2 Duo

    - by pinaldave
    I have decided that I would not write on this subject until I have received a total of 25 questions on this subject. Here are a few questions from the list: Questions: What is the difference between Dual Core and Core 2 Duo? Which one is recommended for SQL Server: Core 2 Duo or Dual Core? Can I upgrade my Dual Core to Core 2 Duo? If Dual Core has 2 CPUs, how many CPUs does Core 2 Duo have? Is it true that Core 2 Duo and Dual Core mean the same thing? Well, let us see the answer. Optimistically, I would be directing everybody to this blog post if I receive a question of the same kind sometime in the future. To verify the information that I provide, visit Intel’s site. For additional information regarding the subject, visit Wikipedia. My Answer: Any computer that has two CPUs or two “cores” is known as Dual Core. Core Duo is a brand name of Intel for Dual Core. Core 2 Duo is simply a higher version of Core Duo (e.g. for the Pentium brand, it’s like Pentium I, Pentium II, etc.). The computer I am using now has a Core 2 Duo. Intel has launched new brands, which they call i3, i5, and i7. Here, the numbers are not related to the number of cores; rather, they show the range of the CPU: i3 is the low range and i7 is the high range. Feel free to add more details by adding valuable comments here. And if you still want to ask why I created this blog post, well, I mentioned that I was waiting for the 25-question threshold to be hit before I wrote about this subject, which I didn’t really plan to write about. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
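
    As a small aside from me (not part of the original answer), you can check how many logical CPUs SQL Server itself sees, whatever the marketing name of the processor, with a quick DMV query:

        -- Logical CPU count and hyperthread ratio as reported by SQL Server
        SELECT cpu_count,
               hyperthread_ratio,
               cpu_count / hyperthread_ratio AS approx_physical_sockets
        FROM sys.dm_os_sys_info;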

  • Extended JMS Support

    - by ACShorten
    In a previous post I discussed the real time JMS integration we added in FW4.1 and also as patches for FW2.2. There are some additional aspects of this integration I did not mention which may be of interest: JMS Topic Support - In that post I concentrated on JMS Queue support but failed to mention that the MDB and the outgoing real time JMS also support JMS Topics. JMS Queues are typically used for point to point decoupled integration and JMS Topics are used for hub integration that uses Publish and Subscribe. JMS Selector Support - By default the MDB will process every message from a JMS resource (Queue or Topic). If you want to alter this behaviour then you can use JMS Selectors to specify the conditions under which the MDB selectively processes JMS messages. JMS Selectors allow filters to be specified on elements in the JMS Header and JMS Message Properties using SQL-like syntax. Note: JMS Selectors do not support filters on the body elements. JMS Header Support - It is possible to place custom information in the JMS Header and JMS Message Properties for outgoing messages (so that other applications can use JMS selectors if necessary as well). This is only available when installing Patches 11888040 (FW4.1) and 11850795 (FW2.2). These facilities, coupled with the JMS facilities described in the previous posts, give the product integration capabilities in JMS which can be used with configuration rather than coding. Of course, the JMS facility I have described can also be used in conjunction with SOA Suite to provide greater levels of traceability and management.

  • Quartz.Net Writing your first Hello World Job

    - by Tarun Arora
    In this blog post I’ll be covering, 01: A few things to consider before you should schedule a Job using Quartz.Net 02: Setting up your solution to use Quartz.Net API 03: Quartz.Net configuration 04: Writing & scheduling a hello world job with Quartz.Net If you are new to Quartz.Net I would recommend going through, A brief introduction to Quartz.net Walkthrough of Installing & Testing Quartz.Net as a Windows Service A few things to consider before you should schedule a Job using Quartz.Net - An instance of the scheduler service - A trigger - And last but not the least a job For example, if I wanted to schedule a script to run on the server, I should be jotting down answers to the below questions, a. Considering there are multiple machines set up with Quartz.Net windows service, how can I choose the instance of Quartz.Net where I want my script to be run b. What will trigger the execution of the job c. How often do I want the job to run d. Do I want the job to run right away or start after a delay or may be have the job start at a specific time e. What will happen to my job if Quartz.Net windows service is reset f. Do I want multiple instances of this job to run concurrently g. Can I pass parameters to the job being executed by Quartz.Net windows service Setting up your solution to use Quartz.Net API 1. Create a new C# Console Application project and call it “HelloWorldQuartzDotNet” and add a reference to Quartz.Net.dll. I use the NuGet Package Manager to add the reference. This can be done by right clicking references and choosing Manage NuGet packages, from the Nuget Package Manager choose Online from the left panel and in the search box on the right search for Quartz.Net. Click Install on the package “Quartz” (Screen shot below). 2. Right click the project and choose Add New Item. Add a new Interface and call it ‘IScheduledJob.cs’. Mark the Interface public and add the signature for Run. Your interface should look like below. namespace HelloWorldQuartzDotNet { public interface IScheduledJob { void Run(); } }   3. Right click the project and choose Add new Item. Add a class and call it ‘Scheduled Job’. Use this class to implement the interface ‘IscheduledJob.cs’. Look at the pseudo code in the implementation of the Run method. using System; namespace HelloWorldQuartzDotNet { class ScheduledJob : IScheduledJob { public void Run() { // Get an instance of the Quartz.Net scheduler // Define the Job to be scheduled // Associate a trigger with the Job // Assign the Job to the scheduler throw new NotImplementedException(); } } }   I’ll get into the implementation in more detail, but let’s look at the minimal configuration a sample configuration file for Quartz.Net service to work. 
Quartz.Net configuration In the App.Config file copy the below configuration <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> </configSections> <quartz> <add key="quartz.scheduler.instanceName" value="ServerScheduler" /> <add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" /> <add key="quartz.threadPool.threadCount" value="10" /> <add key="quartz.threadPool.threadPriority" value="2" /> <add key="quartz.jobStore.misfireThreshold" value="60000" /> <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" /> </quartz> </configuration>   As you can see in the configuration above, I have included the instance name of the quartz scheduler, the thread pool type, count and priority, the job store type has been defined as RAM. You have the option of configuring that to ADO.NET JOB store. More details here. Writing & scheduling a hello world job with Quartz.Net Once fully implemented the ScheduleJob.cs class should look like below. I’ll walk you through the details of the implementation… - GetScheduler() uses the name of the quartz.net and listens on localhost port 555 to try and connect to the quartz.net windows service. - Run() an attempt is made to start the scheduler in case it is in standby mode - I have defined a job “WriteHelloToConsole” (that’s the name of the job), this job belongs to the group “IT”. Think of group as a logical grouping feature. It helps you bucket jobs into groups. Quartz.Net gives you the ability to pause or delete all jobs in a group (We’ll look at that in some of the future posts). I have requested for recovery of this job in case the quartz.net service fails over to the other node in the cluster. The jobType is “HelloWorldJob”. This is the class that would be called to execute the job. More details on this below… - I have defined a trigger for my job. I have called the trigger “WriteHelloToConsole”. The Trigger works on the cron schedule “0 0/1 * 1/1 * ? *” which means fire the job once every minute. I would recommend that you look at www.cronmaker.com a free and great website to build and parse cron expressions. The trigger has a priority 1. So, if two jobs are run at the same time, this trigger will have high priority and will be run first. - Use the Job and Trigger to schedule the job. This method returns a datetime offeset. It is possible to see the next fire time for the job from this variable. using System.Collections.Specialized; using System.Configuration; using Quartz; using System; using Quartz.Impl; namespace HelloWorldQuartzDotNet { class ScheduledJob : IScheduledJob { public void Run() { // Get an instance of the Quartz.Net scheduler var schd = GetScheduler(); // Start the scheduler if its in standby if (!schd.IsStarted) schd.Start(); // Define the Job to be scheduled var job = JobBuilder.Create<HelloWorldJob>() .WithIdentity("WriteHelloToConsole", "IT") .RequestRecovery() .Build(); // Associate a trigger with the Job var trigger = (ICronTrigger)TriggerBuilder.Create() .WithIdentity("WriteHelloToConsole", "IT") .WithCronSchedule("0 0/1 * 1/1 * ? 
*") // visit http://www.cronmaker.com/ Queues the job every minute .WithPriority(1) .Build(); // Assign the Job to the scheduler var schedule = schd.ScheduleJob(job, trigger); Console.WriteLine("Job '{0}' scheduled for '{1}'", "", schedule.ToString("r")); } // Get an instance of the Quartz.Net scheduler private static IScheduler GetScheduler() { try { var properties = new NameValueCollection(); properties["quartz.scheduler.instanceName"] = "ServerScheduler"; // set remoting expoter properties["quartz.scheduler.proxy"] = "true"; properties["quartz.scheduler.proxy.address"] = string.Format("tcp://{0}:{1}/{2}", "localhost", "555", "QuartzScheduler"); // Get a reference to the scheduler var sf = new StdSchedulerFactory(properties); return sf.GetScheduler(); } catch (Exception ex) { Console.WriteLine("Scheduler not available: '{0}'", ex.Message); throw; } } } }   The above highlighted values have been taken from the Quartz.config file, this file is available in the Quartz.net server installation directory. Implementation of my HelloWorldJob Class below. The HelloWorldJob class gets called to execute the job “WriteHelloToConsole” using the once every minute trigger set up for this job. The HelloWorldJob is a class that implements the interface IJob. I’ll walk you through the details of the implementation… - context is passed to the method execute by the quartz.net scheduler service. This has everything you need to pull out the job, trigger specific information. - for example. I have pulled out the value of the jobKey name, the fire time and next fire time. using Quartz; using System; namespace HelloWorldQuartzDotNet { class HelloWorldJob : IJob { public void Execute(IJobExecutionContext context) { try { Console.WriteLine("Job {0} fired @ {1} next scheduled for {2}", context.JobDetail.Key, context.FireTimeUtc.Value.ToString("r"), context.NextFireTimeUtc.Value.ToString("r")); Console.WriteLine("Hello World!"); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   I’ll add a call to call the scheduler in the Main method in Program.cs using System; using System.Threading; namespace HelloWorldQuartzDotNet { class Program { static void Main(string[] args) { try { var sj = new ScheduledJob(); sj.Run(); Thread.Sleep(10000 * 10000); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   This was third in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering how to pass parameters to the scheduled task scheduled on Quartz.net windows service. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

  • SQL SERVER – Index Created on View not Used Often – Limitation of the View 12

    - by pinaldave
    I have previously written on the subject SQL SERVER – The Limitations of the Views – Eleven and more…. This was indeed a very popular series and I had received lots of feedback on that topic. Today we are going to discuss something very interesting as well. During my recent performance tuning seminar in Hyderabad, I presented on the subject of Views. During the seminar, one of the attendees asked a question: We create a table and create a View on top of it. If we create an index on the same View, will that index be used when querying the View? The answer is NOT always! (There is only one specific condition when it will be used. We will write about that later in the next post.) Let us see the test case for the same. In our script we will do the following:

    USE tempdb
    GO
    IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
    DROP VIEW [dbo].[SampleView]
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
    DROP TABLE [dbo].[mySampleTable]
    GO
    -- Create SampleTable
    CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
    INSERT INTO mySampleTable (ID1,ID2,SomeData)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name), ROW_NUMBER() OVER (ORDER BY o2.name), o2.name
    FROM sys.all_objects o1
    CROSS JOIN sys.all_objects o2
    GO
    -- Create View
    CREATE VIEW SampleView WITH SCHEMABINDING AS
    SELECT ID1,ID2,SomeData
    FROM dbo.mySampleTable
    GO
    -- Create Index on View
    CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
    (
    ID2 ASC
    )
    GO
    -- Select from view
    SELECT ID1,ID2,SomeData
    FROM SampleView
    GO

    Let us check the execution plan for the last SELECT statement. You can see from the execution plan that even though we are querying the View, and the View has an index, it is not really using that index. In the next post, we will see the significance of this View and where it can be helpful. Meanwhile, I encourage you to read my View series: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQL View, T SQL, Technology
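
    As a hedged note of my own (not taken from the original post), on editions that do not perform automatic indexed-view matching you can make the optimizer consider the view's clustered index by referencing the view with the NOEXPAND table hint:

        -- Force the optimizer to use the indexed view rather than expanding it
        SELECT ID1, ID2, SomeData
        FROM SampleView WITH (NOEXPAND)
        GO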

  • Introducing SSIS Reporting Pack for SQL Server code-named Denali

    - by jamiet
    In recent blog posts I have introduced the new SSIS Catalog that is forthcoming in SQL Server Code-named Denali: What's new in SSIS in Denali Introduction to SSIS Projects in Denali Parameters in SSIS In Denali SSIS Server, Catalogs, Environments and Environment Variables in SSIS in Denali The SSIS Catalog is responsible for executing SSIS packages and also for capturing the metadata from those executions. However, at the time of writing there is no mechanism provided to view analyse and drill into that metadata and that is the reason that I am, in this blog post, introducing a suite of SSIS Catalog reports called the SSIS Reporting Pack which you can download from my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. In this first release the SSIS Reporting Pack includes five reports: Catalog – A high-level summary of all activity in the Catalog Folders – A summary of activity in each Catalog Folder Folder – Project-level activity per single Folder Executions – A visualisation of all executions per Folder/Project/Package/Environment or subset thereof Execution – Information about an individual execution Here is a screenshot of the Executions report: Notice that the SSIS Reporting Pack provides a visual overview of all executions in the Catalog. Each execution is represented as a bar on the bar chart, the success or otherwise of each execution is indicated by the colour of the bar and the execution time is indicated by the bar height. I have recorded a video that gives an overview of the SSIS Reporting which I have embedded below. If you are having any trouble viewing the video go see it at http://vimeo.com/17617974 I must stress that this is a very early version of the SSIS Reporting Pack and I am expecting it to change a lot over the coming year. I am very keen to get some feedback about this, specifically: let me know if anything does not work as you expect give me your feature requests The easiest way to get hold of of me for now is within the comments section of this blog post. That’s all for now. I hope the SSIS Reporting Pack proves useful and I look forward to hearing your feedback. Lastly, that download link again: http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. @jamiet
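
    For readers who prefer raw T-SQL to the reports, the pack is built over the catalog views in the SSIS Catalog database. As a rough sketch (my assumption based on the Denali SSIS Catalog; the view and column names may differ between CTP builds), recent executions can be listed like this:

        -- Recent package executions recorded in the SSIS Catalog
        SELECT e.execution_id,
               e.folder_name,
               e.project_name,
               e.package_name,
               e.status,      -- e.g. 7 = succeeded, 4 = failed
               e.start_time,
               e.end_time
        FROM SSISDB.catalog.executions AS e
        ORDER BY e.start_time DESC;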

  • Building and Deploying Windows Azure Web Sites using Git and GitHub for Windows

    - by shiju
    The Microsoft Windows Azure team has released a new version of Windows Azure which provides many excellent features. The new Windows Azure provides Web Sites, which allows you to deploy up to 10 web sites for free in a multitenant shared environment, and you can easily upgrade a web site to a private, dedicated virtual server when the traffic grows. The Meet Windows Azure Fact Sheet provides the following information about a Windows Azure Web Site: Windows Azure Web Sites enable developers to easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure’s global scale without having to worry about operations, servers or infrastructure. It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances. Windows Azure Web Sites includes support for the following:

    - Multiple frameworks including ASP.NET, PHP and Node.js
    - Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke
    - Windows Azure SQL Database and MySQL databases
    - Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix

    Sign up to Windows Azure and Enable Azure Web Sites
    You can sign up for a 90-day free trial account in Windows Azure from here. After creating an account in Windows Azure, go to https://account.windowsazure.com/ and select preview features to view the available previews. In the Web Sites section of the preview features, click “try it now”, which will enable the web sites feature.

    Create a Web Site in Windows Azure
    To create a web site, log in to the Windows Azure portal and select Web Sites, then click the New icon in the left corner. Click WEB SITE, QUICK CREATE and put in values for the URL and REGION dropdown. You can see all the web sites from the dashboard of the Windows Azure portal.

    Set up Git Publishing
    Select your web site from the dashboard, and select Set up Git publishing. To enable Git publishing, you must give a user name and password, which will initialize a Git repository.

    Clone the Git Repository
    We can use GitHub for Windows to publish apps to non-GitHub repositories, which is well explained by Phil Haack on his blog post. Here we are going to deploy the web site using GitHub for Windows. Let's clone a Git repository using the Git URL which we get from the Windows Azure portal. Let's copy the Git URL and execute “git clone” with it. You can use the Git Shell provided by GitHub for Windows. To get it, right-click on GitHub for Windows and select “open shell here”, as shown in the picture below. When executing the git clone command, it will ask for a password; give the password specified in the Windows Azure portal. After cloning the Git repository, you can drag and drop the local Git repository folder onto the GitHub for Windows GUI. This will automatically add the Windows Azure Web Site repository to GitHub for Windows, where you can commit your changes and publish your web sites to Windows Azure.

    Publish the Web Site using GitHub for Windows
    We can add files for multiple frameworks, including ASP.NET, PHP and Node.js, to the local repository folder and easily publish them to Windows Azure from the GitHub for Windows GUI.
    For this demo, let me just add a simple Node.js file named Server.js which handles a few requests.

    var http = require('http');
    var port = process.env.PORT;
    var querystring = require('querystring');
    var utils = require('util');
    var url = require("url");

    var server = http.createServer(function(req, res) {
        switch (req.url) { //checking the request url
            case '/':
                homePageHandler(req, res); //handler for home page
                break;
            case '/register':
                registerFormHandler(req, res); //handler for register
                break;
            default:
                nofoundHandler(req, res); // handler for 404 not found
                break;
        }
    });
    server.listen(port);

    //function to display the html form
    function homePageHandler(req, res) {
        console.log('Request handler home was called.');
        res.writeHead(200, {'Content-Type': 'text/html'});
        var body = '<html>'+
            '<head>'+
            '<meta http-equiv="Content-Type" content="text/html; '+
            'charset=UTF-8" />'+
            '</head>'+
            '<body>'+
            '<form action="/register" method="post">'+
            'Name:<input type=text value="" name="name" size=15></br>'+
            'Email:<input type=text value="" name="email" size=15></br>'+
            '<input type="submit" value="Submit" />'+
            '</form>'+
            '</body>'+
            '</html>';
        //response content
        res.end(body);
    }

    //handler for Post request
    function registerFormHandler(req, res) {
        console.log('Request handler register was called.');
        var pathname = url.parse(req.url).pathname;
        console.log("Request for " + pathname + " received.");
        var postData = "";
        req.on('data', function(chunk) {
            // append the current chunk of data to the postData variable
            postData += chunk.toString();
        });
        req.on('end', function() {
            // doing something with the posted data
            res.writeHead(200, "OK", {'Content-Type': 'text/html'});
            // parse the posted data
            var decodedBody = querystring.parse(postData);
            // output the decoded data to the HTTP response
            res.write('<html><head><title>Post data</title></head><body><pre>');
            res.write(utils.inspect(decodedBody));
            res.write('</pre></body></html>');
            res.end();
        });
    }

    //Error handler for 404 not found
    function nofoundHandler(req, res) {
        console.log('Request handler nofound was called.');
        res.writeHead(404, {'Content-Type': 'text/plain'});
        res.end('404 Error - Request handler not found');
    }

    If there is any change in the local repository folder, GitHub for Windows will automatically detect the changes. In the step above, we have just added a Server.js file, so GitHub for Windows will detect the change. Let's commit the changes to the local repository before publishing the web site to Windows Azure. After committing all the changes, you can click the publish button, which will publish all the changes to the Windows Azure repository.
    The following screen shot shows the deployment history from the Windows Azure portal. GitHub for Windows provides a sync button which can be used to synchronize the local repository with the Windows Azure repository after committing any changes locally. Our web site is running after the deployment using Git.

    Summary
    Windows Azure Web Sites lets developers easily build and deploy websites with support for multiple frameworks including ASP.NET, PHP and Node.js, and the Web Sites can easily be deployed using Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix. In this demo, we have deployed a Node.js Web Site to Windows Azure using Git. We can use GitHub for Windows to publish apps to non-GitHub repositories and can use it to publish Web Sites to Windows Azure.

  • How to Bridge Two Ethernet Ports on Mac OS X

    - by Rabarberski
    How can I bridge two wired ethernet interfaces on Mac OS X (e.g. the current Mac Pro comes with two ethernet ports)? Googling turned up (e.g. this Apple forum post and this openvpn post) that this is fairly easy on Linux (using the brctl command) and under Windows (via Network Connections > right-click > Bridge Connections), but how is it done under Mac OS X? BTW: there also doesn't seem to be a macport for brctl ('port search brctl' didn't turn up any results). Note: I don't want 'internet sharing', which creates a new network (by handing out network addresses in a new range). I want to really 'bridge' the two interfaces so as to keep the same network subnet.

  • Google Translation API Integration in .NET

    - by Jalpesh P. Vadgama
    This blog has been quiet for some time because I was very busy on the professional front, but now I have decided to post on this blog too. I constantly post my articles on my personal blog at http://jalpesh.blogspot.com, but now this blog will also carry the same posts so I can reach more of the community. Language localization is one of the important aspects of a site or application nowadays. If you want your site or application to be more popular than others, then it should support more than one language. Sometimes it is difficult to translate an entire site into other languages, so I have found a great solution: you can use the Google Translation API to translate your site or application dynamically. Here are the steps required to integrate the Google Translation API into Microsoft.NET applications. First you need to download the class library DLLs from the following site: http://code.google.com/p/google-language-api-for-dotnet/ Go to this site and download GoogleTranslateAPI_0.1.zip. Once you have done that, you need to add a reference to GoogleTranslateAPI.dll as shown below. Now you are ready to use the translation API from Google. Here is the code for that:

    string Text = "This is a string to translate";
    Console.WriteLine("Before Translation:{0}", Text);
    Text = Google.API.Translate.Translator.Translate(Text, Google.API.Translate.Language.English, Google.API.Translate.Language.French);
    Console.WriteLine("After Translation:{0}", Text);

    That's it; it will return the string translated from English to French. But make sure you are connected to the internet :)… Happy Programming Technorati Tags: GoogleAPI,Translate

  • Calling Web Service Functions Asynchronously from a Web Page

    - by SGWellens
    Over on the Asp.Net forums where I moderate, a user had a problem calling a Web Service from a web page asynchronously. I tried his code on my machine and was able to reproduce the problem. I was able to solve his problem, but only after taking the long scenic route through some of the more perplexing nuances of Web Services and Proxies. Here is the fascinating story of that journey. Start with a simple Web Service     public class Service1 : System.Web.Services.WebService    {        [WebMethod]        public string HelloWorld()        {            // sleep 10 seconds            System.Threading.Thread.Sleep(10 * 1000);            return "Hello World";        }    } The 10 second delay is added to make calling an asynchronous function more apparent. If you don't call the function asynchronously, it takes about 10 seconds for the page to be rendered back to the client. If the call is made from a Windows Forms application, the application freezes for about 10 seconds. Add the web service to a web site. Right-click the project and select "Add Web Reference…" Next, create a web page to call the Web Service. Note: An asp.net web page that calls an 'Async' method must have the Async property set to true in the page's header: <%@ Page Language="C#"          AutoEventWireup="true"          CodeFile="Default.aspx.cs"          Inherits="_Default"           Async='true'  %> Here is the code to create the Web Service proxy and connect the event handler. Shrewdly, we make the proxy object a member of the Page class so it remains instantiated between the various events. public partial class _Default : System.Web.UI.Page {    localhost.Service1 MyService;  // web service proxy     // ---- Page_Load ---------------------------------     protected void Page_Load(object sender, EventArgs e)    {        MyService = new localhost.Service1();        MyService.HelloWorldCompleted += EventHandler;          } Here is the code to invoke the web service and handle the event:     // ---- Async and EventHandler (delayed render) --------------------------     protected void ButtonHelloWorldAsync_Click(object sender, EventArgs e)    {        // blocks        ODS("Pre HelloWorldAsync...");        MyService.HelloWorldAsync();        ODS("Post HelloWorldAsync");    }    public void EventHandler(object sender, localhost.HelloWorldCompletedEventArgs e)    {        ODS("EventHandler");        ODS("    " + e.Result);    }     // ---- ODS ------------------------------------------------    //    // Helper function: Output Debug String     public static void ODS(string Msg)    {        String Out = String.Format("{0}  {1}", DateTime.Now.ToString("hh:mm:ss.ff"), Msg);        System.Diagnostics.Debug.WriteLine(Out);    } I added a utility function I use a lot: ODS (Output Debug String). Rather than include the library it is part of, I included it in the source file to keep this example simple. Fire up the project, open up a debug output window, press the button and we get this in the debug output window: 11:29:37.94 Pre HelloWorldAsync... 11:29:37.94 Post HelloWorldAsync 11:29:48.94 EventHandler 11:29:48.94 Hello World   Sweet. The asynchronous call was made and returned immediately. About 10 seconds later, the event handler fires and we get the result. Perfect….right? Not so fast cowboy. Watch the browser during the call: What the heck? The page is waiting for 10 seconds. Even though the asynchronous call returned immediately, Asp.Net is waiting for the event to fire before it renders the page. This is NOT what we wanted. 
I experimented with several techniques to work around this issue. Some may erroneously describe my behavior as 'hacking' but, since no ingesting of Twinkies was involved, I do not believe hacking is the appropriate term. If you examine the proxy that was automatically created, you will find a synchronous call to HelloWorld along with an additional set of methods to make asynchronous calls. I tried the other asynchronous method supplied in the proxy:     // ---- Begin and CallBack ----------------------------------     protected void ButtonBeginHelloWorld_Click(object sender, EventArgs e)    {        ODS("Pre BeginHelloWorld...");        MyService.BeginHelloWorld(AsyncCallback, null);        ODS("Post BeginHelloWorld");    }    public void AsyncCallback(IAsyncResult ar)    {        String Result = MyService.EndHelloWorld(ar);         ODS("AsyncCallback");        ODS("    " + Result);    } The BeginHelloWorld function in the proxy requires a callback function as a parameter. I tested it and the debug output window looked like this: 04:40:58.57 Pre BeginHelloWorld... 04:40:58.57 Post BeginHelloWorld 04:41:08.58 AsyncCallback 04:41:08.58 Hello World It works the same as before except for one critical difference: The page rendered immediately after the function call. I was worried the page object would be disposed after rendering the page but the system was smart enough to keep the page object in memory to handle the callback. Both techniques have a use: Delayed Render: Say you want to verify a credit card, look up shipping costs and confirm if an item is in stock. You could have three web service calls running in parallel and not render the page until all were finished. Nice. You can send information back to the client as part of the rendered page when all the services are finished. Immediate Render: Say you just want to start a service running and return to the client. You can do that too. However, the page gets sent to the client before the service has finished running so you will not be able to update parts of the page when the service finishes running. Summary: YourFunctionAsync() and an EventHandler will not render the page until the handler fires. BeginYourFunction() and a CallBack function will render the page as soon as possible. I found all this to be quite interesting and did a lot of searching and researching for documentation on this subject….but there isn't a lot out there. The biggest clues are the parameters that can be sent to the WSDL.exe program: http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.100).aspx Two parameters are oldAsync and newAsync. OldAsync will create the Begin/End functions; newAsync will create the Async/Event functions. Caveat: I haven't tried this but it was stated in this article. I'll leave confirming this as an exercise for the student J. Included Code: I'm including the complete test project I created to verify the findings. The project was created with VS 2008 SP1. There is a solution file with 3 projects, the 3 projects are: Web Service Asp.Net Application Windows Forms Application To decide which program runs, you right-click a project and select "Set as Startup Project". I created and played with the Windows Forms application to see if it would reveal any secrets. I found that in the Windows Forms application, the generated proxy did NOT include the Begin/Callback functions. Those functions are only generated for Asp.Net pages. Probably for the reasons discussed earlier. Maybe those Microsoft boys and girls know what they are doing. 
I hope someone finds this useful. Steve Wellens

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes clear what I am going to write about in this post. This is an age-old problem and I want to compile a list stating the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or Both (the usual case). Let me start the list by suggesting one advantage and one disadvantage in each case. INT Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions. Numeric values are easier to understand for application users if they are displayed. Disadvantage: If your table is large, it is quite possible it will run out of values; after some numeric value there will be no additional identity value to use. GUID Advantage: Unique across the server. Disadvantage: String values are not as optimal as integer values for performance when used in joins, indexes and conditions. More storage space is required than for INT. Please note that I am looking to create a list of all the generic comparisons. There can be special cases where the stated information is incorrect; feel free to comment on the same. Please leave your opinion and advice in the comment section. I will combine a final list and update this blog after a week. By listing your name in the post, I will also give due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
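
    To make the comparison concrete, here is a minimal sketch (my own illustration; the table names are made up) of the two key strategies being compared, including NEWSEQUENTIALID(), which mitigates some of the insert-fragmentation concerns with random GUIDs:

        -- INT identity as the clustered primary key
        CREATE TABLE dbo.OrdersInt
        (
            OrderID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );
        GO
        -- GUID as the clustered primary key; NEWSEQUENTIALID() avoids the
        -- random insert pattern caused by NEWID()
        CREATE TABLE dbo.OrdersGuid
        (
            OrderID UNIQUEIDENTIFIER NOT NULL
                CONSTRAINT DF_OrdersGuid_OrderID DEFAULT NEWSEQUENTIALID()
                PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );
        GO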

  • Sql Server 2008 - Cannot connect to local default instance

    - by Tone
    I recently rebuilt my machine with Windows 7 x64 and installed SQL Server 2008 Enterprise. I can connect fine to other remote instances via Management Studio (be they 2000, 2005 or 2008), but I cannot find my local default instance.

    - I have verified that a directory was created for the default instance: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER
    - I can connect to the SQLEXPRESS instance fine
    - I have rerun setup to ensure I have everything installed
    - I have verified that the SQLSERVER(MSSQLSERVER) service is running
    - I have tried browsing for the instance and see all the others available except my local one
    - I have tried using this for the server name: TSOUTHERLANDPC\MSSQLSERVER, the first part being my local machine name

    My issue is not the same as this post or this post. Any ideas?
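
    A hedged side note from me (not part of the original question): a default instance is normally addressed by the machine name alone (TSOUTHERLANDPC, a dot, or (local)) rather than MachineName\MSSQLSERVER. Once connected, you can confirm you are on the default instance because the instance name property comes back NULL:

        -- Returns NULL for instance_name when connected to a default instance
        SELECT @@SERVERNAME AS server_name,
               SERVERPROPERTY('InstanceName') AS instance_name;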

  • SQL SERVER – Find Most Expensive Queries Using DMV

    - by pinaldave
    The title of this post says it all for this quick blog post. In a recent query tuning consultation project, I was asked if I could share the script I use to figure out which are the most expensive queries running on SQL Server. This script is very basic and very simple; there are many different versions available online. This basic script does do the job which I expect it to do – find the most expensive queries on a SQL Server box.

    SELECT TOP 10
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
    ((CASE qs.statement_end_offset
    WHEN -1 THEN DATALENGTH(qt.TEXT)
    ELSE qs.statement_end_offset
    END - qs.statement_start_offset)/2)+1),
    qs.execution_count,
    qs.total_logical_reads, qs.last_logical_reads,
    qs.total_logical_writes, qs.last_logical_writes,
    qs.total_worker_time, qs.last_worker_time,
    qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
    qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
    qs.last_execution_time,
    qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_logical_reads DESC -- logical reads
    -- ORDER BY qs.total_logical_writes DESC -- logical writes
    -- ORDER BY qs.total_worker_time DESC -- CPU time

    You can change the ORDER BY clause to order the results by different measures. I invite my readers to share their scripts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology Tagged: SQL DMV

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

    The Example

    We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below:

    CREATE TABLE #Custs
    (
        CustomerID INTEGER NOT NULL,
        TerritoryID INTEGER NULL,
        CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
    );
    GO
    CREATE TABLE #Prods
    (
        ProductMainID INTEGER NOT NULL,
        ProductSubID INTEGER NOT NULL,
        ProductSubSubID INTEGER NOT NULL,
        Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
    );
    GO
    CREATE TABLE #OrdHeader
    (
        SalesOrderID INTEGER NOT NULL,
        OrderDate DATETIME NOT NULL,
        SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        CustomerID INTEGER NOT NULL
    );
    GO
    CREATE TABLE #OrdDetail
    (
        SalesOrderID INTEGER NOT NULL,
        OrderQty SMALLINT NOT NULL,
        LineTotal NUMERIC(38,6) NOT NULL,
        ProductMainID INTEGER NOT NULL,
        ProductSubID INTEGER NOT NULL,
        ProductSubSubID INTEGER NOT NULL
    );
    GO
    INSERT #Custs (CustomerID, TerritoryID, CustomerType)
    SELECT C.CustomerID, C.TerritoryID, C.CustomerType
    FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
    GO
    INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
    SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
    FROM AdventureWorks.Production.Product P WITH (TABLOCK);
    GO
    INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
    SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
    FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
    GO
    INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
    SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
    FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

    SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
           H.SalesOrderNumber, H.OrderDate, C.TerritoryID
    FROM #Prods P
    JOIN #OrdDetail D
        ON P.ProductMainID = D.ProductMainID
        AND P.ProductSubID = D.ProductSubID
        AND P.ProductSubSubID = D.ProductSubSubID
    JOIN #OrdHeader H
        ON D.SalesOrderID = H.SalesOrderID
    JOIN #Custs C
        ON H.CustomerID = C.CustomerID
    ORDER BY P.ProductMainID ASC
    OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this:

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
    Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so:

    SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
           H.SalesOrderNumber, H.OrderDate, C.TerritoryID
    FROM #OrdDetail D
    JOIN #OrdHeader H
        ON D.SalesOrderID >= H.SalesOrderID
        AND D.SalesOrderID <= H.SalesOrderID
    JOIN #Custs C
        ON H.CustomerID >= C.CustomerID
        AND H.CustomerID <= C.CustomerID
    JOIN #Prods P
        ON P.ProductMainID >= D.ProductMainID
        AND P.ProductMainID <= D.ProductMainID
        AND P.ProductSubID = D.ProductSubID
        AND P.ProductSubSubID = D.ProductSubSubID
    ORDER BY D.ProductMainID
    OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course.) The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. OK, so now that we understand the problems, what can we do to fix them?
    We can address the hash spilling by forcing a different order for the joins:

    SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
           H.SalesOrderNumber, H.OrderDate, C.TerritoryID
    FROM #Prods P
    JOIN #Custs C
        JOIN #OrdHeader H
            ON H.CustomerID = C.CustomerID
        JOIN #OrdDetail D
            ON D.SalesOrderID = H.SalesOrderID
        ON P.ProductMainID = D.ProductMainID
        AND P.ProductSubID = D.ProductSubID
        AND P.ProductSubSubID = D.ProductSubSubID
    ORDER BY D.ProductMainID
    OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
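    As an editorial aside to the spill discussion above (this is not part of Paul’s original post): one way to watch the memory-grant behaviour he describes is to query sys.dm_exec_query_memory_grants from a second session while the test query runs. This is only a sketch and assumes SQL Server 2008 or later:

    SELECT session_id,
           requested_memory_kb,
           granted_memory_kb,
           used_memory_kb,
           max_used_memory_kb
    FROM sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC;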

    Read the article

  • Best of “The Moth” 2013

    - by Daniel Moth
    As previously (2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012), the time has come again to look back over the year’s activities on this blog, and as predicted there were 3 themes.
    1. It has been just 15 months since I changed role from what at Microsoft we call an “Individual Contributor” (IC) to a managerial role where ICs report to me. Part of being a manager entails sharing career tips with your team, and some of those I have put up on my blog over the last year (and hope to continue next year): Effectiveness and Efficiency, Lead, Follow, or Get out of the way, and Perfect is the enemy of “Good Enough”.
    2. It has also been 15 months since I joined the Visual Studio Diagnostics team, and we have shipped many capabilities in Visual Studio 2013. I helped the members of my team blog about every single one and create videos of many, and then I created a table of contents pointing to all of their blog posts, so if you are interested in what I have been working on over the last year please follow the links from the master blog post here: Visual Studio 2013 Diagnostics Investments. We are busy working on future Visual Studio releases/updates and I will link to those when we are ready…
    3. Finally, I used some of my free time (which is becoming ever so scarce) to do some device development, and as part of that I shared a few thoughts and code: Debug.Assert replacement for Phone and Store apps, asynchrony is viral, and MyMessageBox for Phone and Store apps.
    To see what 2014 will bring to this blog, please subscribe using the link on the left… Happy New Year! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Visual Studio RTM, Silverlight 4 RTM and WCF RIA Services download links

    - by Harish Ranganathan
    It’s been a long time since I blogged, primarily due to Tech Ed India, the ongoing Great Indian Developer Summit (GIDS 2010) and the related travels.  However, here is a quick post with a few updates.  Visual Studio 2010 RTMed in India during Tech Ed.  We had the privilege of having Soma, our Senior VP, launch VS 2010 RTM in Bangalore, India, during Tech Ed India 2010.  With that, we also had Silverlight 4 getting RTMed during the same week. Earlier I had written posts about using the VS 2010 Beta and RC and the corresponding Silverlight and WCF RIA bits, and getting them all to work together.  Now that both VS 2010 and Silverlight 4 have RTMed, I wanted to post a quick update on the necessary downloads.
    Visual Studio 2010 RTM can be downloaded from the MSDN Visual Studio site.
    If you are doing Silverlight 4 development with Visual Studio, you can download the Silverlight 4 Tools RC2 for Visual Studio.
    If you are developing with WCF RIA Services, you can download the WCF RIA Services RC 2 for SL4 and VS 2010.
    And finally, if you want to use WCF RIA Services in ASP.NET, you will require the Domain DataSource control.  Also, to use some of the additional Service Utility tools, you will require the WCF RIA Services Toolkit.  You can download the same from WCF RIA Services Toolkit April 2010.
    Once you have installed all the above, you should be able to see the following in Add/Remove Programs:
    WCF RIA Services v1.0 for Visual Studio 2010 (Version 4.0.50401.0)
    WCF RIA Services Toolkit (Version 4.0.50401.0)
    Microsoft Silverlight (Version 4.0.50401.0)
    Microsoft Silverlight 4 SDK (Version 4.0.50401.0)
    Also, you would need Expression Blend 4 for designing apps for Silverlight 4.  You can download the release candidate from here. That’s it.  You are all set for development with Visual Studio 2010, Silverlight 4 and WCF RIA Services. Cheers !!!

    Read the article

  • SQLAuthority News – Happy Deepavali and Happy New Year

    - by pinaldave
    Diwali or Deepavali is popularly known as the festival of lights. It literally means “array of light” or “row of lamps”. Today we build small clay lamps, fill them with oil, and light them up. The significance of lighting the lamps is the triumph of good over evil. I work every single day of the year, but today I am spending my time with my family and my little one. I make sure that my daughter is aware of our culture and that she learns to celebrate the festival with the same passion and values that I have. Every year on this day, I do not write a long blog post but rather a small post with various SQL Tips and Tricks. After reading them you should quickly get back to your friends and family – it is the most important festival day. Here are a few tips and tricks:
    Take regular full backups of your database
    Avoid cursors if they can be replaced by set-based processing
    Keep your index maintenance scripts handy and execute them at intervals
    Consider a Solid State Drive (SSD) for crucial database and tempdb placement
    Update statistics for OLTP transactions at intervals
    I guess that’s it for today. If you still have more time to learn, here are a few things you should consider:
    Get FREE books by signing up for tomorrow’s webcast by Rick Morelan
    Watch the SQL in Sixty Seconds Series – FREE SQL Learning
    Read my earlier 2300+ articles
    Well, I am sure that will keep you busy for the rest of the day! Happy Diwali to All of You! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
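    For the backup and statistics tips in the list above, here is a minimal illustrative sketch; the database name and backup path are hypothetical:

    BACKUP DATABASE AdventureWorks
    TO DISK = 'D:\Backup\AdventureWorks_Full.bak'
    WITH CHECKSUM, STATS = 10;

    EXEC sp_updatestats;  -- refresh statistics across the current database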

    Read the article

  • Obama’s win and the geeky stuff behind his success

    - by Gopinath
    Mr. Obama has been elected President of the United States for a second term with a great majority. Here are a few geeky bytes associated with Mr. Obama’s success and the world records he set:
    Obama Creates New Records on Twitter and Facebook – Just after winning the race for president, Barack Obama set world records on Twitter & Facebook. His tweet “Four more years” was re-tweeted more than 770K times and his Facebook post received 3.8 million likes.
    Twitter Kills the Fail Whale, One Tweet at a Time – Twitter handled insane user loads with millions of tweets related to the election and proved that its platform is now more robust than ever. In a post on the company’s engineering blog, Twitter said people sent 31 million election-related tweets on Tuesday alone. From 8:11 p.m. to 9:11 p.m. P.S.T., Twitter processed an average of 9,965 tweets per second, with a one-second peak of 15,107 tweets per second at 8:20 p.m., the company said.
    Obama’s win a big vindication for Nate Silver, king of the quants – Nate Silver, a statistician and blogger, was spot on in predicting Obama’s win many weeks in advance. He did not depend on astrology or surveys to predict Obama’s success; he used big data and statistical analysis to project votes. CNET says “Despite some incredulous political pundits, the FiveThirtyEight statistician appears to have correctly predicted the winner in all 50 states in the presidential election”.
    Inside the Secret World of the Data Crunchers Who Helped Obama Win – Team Obama used big data and analytical systems to win the election!
    Barack Obama Goes On Reddit For Last Campaign Stop Of Political Career – Reddit is getting a lot of Obama’s attention these days. He is quite often stopping by Reddit to say hi to the nerds & geeks hanging out there. His chat with nerds on Reddit has paid rich dividends.

    Read the article

  • You guys are harsh.

    - by Ratman21
    Tough crowd around here, it seems.  Let’s get down to the issues. First: spelling. I do not understand how there can be misspelled words, as I use spell check (MS Word) and cut and paste my post into the blog. As to being defensive or complaining: hmm, as I said, this is a vent for my frustrations as well as a way of letting others know they are not alone in jobless land. Warning, warning, warning, complaint coming. I have been out of work for 18 months now. I have gone in person to sites, emailed and phoned places for work (I am thinking of getting a sign with my resume and walking up and down the main drag until I get a job). So forgive me if I seem a little frustrated in my post. Now, one thing someone pointed out really bugs me. The person called me a Holy Roller and made a comment that this is keeping me from a job.  What! I am a born-again Christian and not a Holy Roller. What I have put on my web sites about my faith is staying!  Oh, my web site is http://beingscottnewman.webs.com/ and my resume is on the home page (and has been since I started the site).

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of "special cases" that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

    There is a reason it's called Default

    The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms, this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.
    Aside: What's up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.
    Using this configuration, the Extended Events engine can "keep up" with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it's more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.
    I know what you're going to say: "My server is exceptional! My workload is so massive it defies categorization!" OK, maybe you weren't going to say that exactly, but you were probably thinking it. The point is that there are situations that won't be covered by the Default, but that's a good place to start, and this post assumes you've started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let's get to the special cases…

    What event just fired?! How about now?! Now?!

    If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you're one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you're comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
    This option only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you're using one of the synchronous targets this option isn't relevant.

    Avoid forgotten events by increasing your memory

    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn't block the server, the engine will drop events that can't be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.
    Aside: Should you care if you're dropping events? Maybe not – think about why you're collecting data in the first place and whether you're really going to miss a few dropped events. For example, if you're collecting query duration stats over thousands of executions of a query it won't make a huge difference to miss a couple executions. Use your best judgment.
    If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you've specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you've collected.
    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You'll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you're worried that you're losing events you should be increasing your event buffer memory as described in this section.

    Some events are too big to fail

    A less common reason for dropping an event is when an event is so large that it can't fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size field. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you're dropping many normal events then you should increase your normal event buffer size.)
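    Pulling these options together, here is a minimal sketch of a session definition and the dropped-event check. The event, action, and option values shown are illustrative assumptions rather than recommendations:

    CREATE EVENT SESSION [buffer_options_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text))
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 8 MB,                     -- raise this if dropped_event_count keeps climbing
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
        MAX_DISPATCH_LATENCY = 5 SECONDS,      -- process buffers at least every 5 seconds
        MAX_EVENT_SIZE = 0 KB                  -- raise only if largest_event_dropped_size says so
    );
    GO
    ALTER EVENT SESSION [buffer_options_demo] ON SERVER STATE = START;
    GO
    -- Check for dropped events while the session is running
    SELECT name, dropped_event_count, largest_event_dropped_size
    FROM sys.dm_xe_sessions
    WHERE name = 'buffer_options_demo';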
    Partitioning: moving your events to a sub-division

    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…
    You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:
    None: 3 buffers (fixed)
    Node: 3 * number_of_nodes
    CPU: 2.5 * number_of_cpus
    Here are some examples of what this means for different Node/CPU counts:
    Configuration       None        Node         CPU
    2 CPUs, 1 Node      3 buffers   3 buffers    5 buffers
    6 CPUs, 2 Nodes     3 buffers   6 buffers    15 buffers
    40 CPUs, 5 Nodes    3 buffers   15 buffers   100 buffers
    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you'll either see events being dropped or you'll get an error when you create your session because you're below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.
    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:
    Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It's always worth considering the end-to-end picture – if you're writing events to a file you can be impacted by I/O, network; all the usual stuff.
    Assuming you've ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it's possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that's where the art comes in. This is largely a matter of experimentation. On the bright side you probably won't need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
    You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the "reasonably expected impact" category.

    Hey buddy – I think you forgot something

    OK, there are two options I didn't cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn't cover.) I'm going to leave causality for another post since it's not really related to session behavior, it's more about event analysis. - Mike Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
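    As an illustrative footnote to the partitioning and startup discussion above, here is a hypothetical session definition showing those options in context; the session name, event, and values are assumptions for the sketch, not guidance:

    CREATE EVENT SESSION [partitioned_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 16 MB,
        MEMORY_PARTITION_MODE = PERNODE,   -- one buffer set per NUMA node (PERCPU is the other choice)
        STARTUP_STATE = ON,                -- start automatically when SQL Server starts
        TRACK_CAUSALITY = OFF
    );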

    Read the article

< Previous Page | 159 160 161 162 163 164 165 166 167 168 169 170  | Next Page >