Search Results

Search found 10459 results on 419 pages for 'elaine blog'.

  • Connect Digest : 2011-06-27

    - by AaronBertrand
    Sorry I have fallen off the Connect Digest wagon for the past few weeks; been a little swamped since returning from SQLCruise Alaska. Not sure I'll be able to assemble a digest every week, but I'll certainly try to keep a steady pace. This week I wanted to highlight a few suggestions around indexed views. With the coming of SQL Server code-named "Denali" we will be pushed toward the new columnstore index as an alternative to indexed views. But this won't be for all cases, and it likely won't be available...(read more)
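
    For context, here is a minimal sketch of the kind of indexed view these suggestions concern; the table and column names are hypothetical, not taken from the Connect items:

        -- An indexed view must be schema-bound, and a view with GROUP BY
        -- must include COUNT_BIG(*) before it can be materialized.
        CREATE VIEW dbo.SalesByProduct
        WITH SCHEMABINDING
        AS
        SELECT ProductID,
               SUM(Quantity) AS TotalQuantity,
               COUNT_BIG(*)  AS RowCnt
        FROM dbo.OrderLines
        GROUP BY ProductID;
        GO
        -- The unique clustered index is what actually materializes the view.
        CREATE UNIQUE CLUSTERED INDEX IX_SalesByProduct
            ON dbo.SalesByProduct (ProductID);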

  • [Speaking] Presenting at AITP Region 18 IT Con

    - by AllenMWhite
    The IT Con event this Saturday, June 18, is an event similar to SQL Saturday. It has three tracks: an IT Professional track, a Development track and a Career Development track. I'll be presenting two sessions in the IT Professional track.

    Gather SQL Server Performance Data with PowerShell

    We all know how important it is to keep a baseline of performance metrics that allow us to know when something is wrong and help us to track it down and fix the problem. We don't always know how to do this easily...(read more)
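
    The session itself uses PowerShell for collection; purely as an illustration of the kind of baseline being discussed, here is a hedged T-SQL sketch against sys.dm_os_performance_counters (the baseline table name is made up):

        -- Capture a timestamped snapshot of a few commonly baselined counters.
        SELECT GETDATE()  AS CapturedAt,
               [object_name],
               counter_name,
               instance_name,
               cntr_value
        INTO   dbo.PerfBaseline   -- hypothetical baseline table
        FROM   sys.dm_os_performance_counters
        WHERE  counter_name IN (N'Batch Requests/sec', N'Page life expectancy');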

  • 24 Hours of PASS next week, pre-con preview style

    - by drsql
    I will be doing my Characteristics of a Great Relational Database, a session that I haven’t done since the last PASS Summit. When I was asked about doing this Summit Preview version of 24 Hours of PASS, I decided that I would do this session, largely because it is kind of light and fun, but also because it is either going to be the basis of the end section of my pre-con at the Summit, or it is going to be the section of the pre-con we don’t get to because we are so involved in working out designs that...(read more)

  • New Upgrade Technical Reference for SQL Server 2008 R2

    - by Greg Low
    Hi Folks, A year or two back, I was involved in a project with my colleagues from SolidQ (led by Ron Talmage) to construct an Upgrade Technical Reference for SQL Server 2008. It seemed to be well received. We've updated it now to SQL Server 2008 R2 and it's just been published. You'll find it on this web site: http://www.microsoft.com/sqlserver/en/us/product-info/why-upgrade.aspx You'll need to click on the Upgrade Guide link towards the middle of the RHS under the "Why Upgrade" whitepaper. Enjoy!...(read more)

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC).

        var wins = xs
            .HoppingWindow(TimeSpan.FromSeconds(5),
                           TimeSpan.FromSeconds(2),
                           new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc));

    Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up?

        var query = wins.Select(win => win.Count());

    A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.)

    That was the good news. Now the bad news. Events also have duration. Consider the following simple input:

        var xs = this.Application
            .DefineEnumerable(() => new[]
                { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })
            .ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

    Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming.

    The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2, so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher.

    Figure 1: Timeline (diagram not reproduced here)

    The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function, which rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n, will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks:

        static long SnapToBoundary(long t, long a, long p)
        {
            return t - ((t - a) % p) - (t > a ? 0L : p);
        }

    How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. duration >= interval, a limitation of this solution). To find the end time of that antecedent window, we need to know the alignment of window ends:

        long e = a + (d % p);

    Using the window end alignment, we are finally ready to describe the start time selector:

        static long AdjustStartTime(long t, long e, long p)
        {
            return SnapToBoundary(t, e, p) + p;
        }

    To find the latest window including the end time for an event, we look for the last window start time (non-inclusive):

        public static long AdjustEndTime(long t, long a, long d, long p)
        {
            return SnapToBoundary(t - 1, a, p) + p + d;
        }

    Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1:

        public static IQStreamable<T> SnapToWindowIntervals<T>(
            IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment)
        {
            if (source == null) throw new ArgumentNullException("source");

            // reason about DateTime and TimeSpan in ticks
            long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);
            long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));

            // set alignment to earliest possible window
            var a = alignment.ToUniversalTime().Ticks % p;

            // verify constraints of this solution
            if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }
            if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }

            // find the alignment of window ends
            long e = a + (d % p);

            return source.AlterEventLifetime(
                evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),
                evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -
                    ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)));
        }

        public static DateTime ToDateTime(long ticks)
        {
            // just snap to min or max value rather than under/overflowing
            return ticks < DateTime.MinValue.Ticks
                ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)
                : ticks > DateTime.MaxValue.Ticks
                ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)
                : new DateTime(ticks, DateTimeKind.Utc);
        }

    Finally, we can describe our custom hopping window operator:

        public static IQWindowedStreamable<T> HoppingWindow2<T>(
            IQStreamable<T> source,
            TimeSpan duration,
            TimeSpan interval,
            DateTime alignment)
        {
            if (source == null) { throw new ArgumentNullException("source"); }
            return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow();
        }

    By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!

        public void Main()
        {
            var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);
            var duration = TimeSpan.FromSeconds(5);
            var interval = TimeSpan.FromSeconds(2);
            var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);

            var events = this.Application.DefineEnumerable(() => new[]
            {
                EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),
                EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),
                EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),
                EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),
                EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),
                EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),
                EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),
            }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

            var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);

            var query = from win in HoppingWindow2(events, duration, interval, alignment)
                        select win.Count();

            DisplayResults(adjustedEvents, "Adjusted Events");
            DisplayResults(query, "Query");
        }

    As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

        Adjusted Events
        StartTime              EndTime                 Payload
        6/28/2012 12:00:01 AM  12/31/9999 11:59:59 PM  e0
        6/28/2012 12:00:03 AM  6/28/2012 12:00:07 AM   e1
        6/28/2012 12:00:05 AM  6/28/2012 12:00:15 AM   e2
        6/28/2012 12:00:11 AM  6/28/2012 12:00:15 AM   e3

        Query
        StartTime              EndTime                 Payload
        6/28/2012 12:00:01 AM  6/28/2012 12:00:03 AM   1
        6/28/2012 12:00:03 AM  6/28/2012 12:00:05 AM   2
        6/28/2012 12:00:05 AM  6/28/2012 12:00:07 AM   3
        6/28/2012 12:00:07 AM  6/28/2012 12:00:11 AM   2
        6/28/2012 12:00:11 AM  6/28/2012 12:00:15 AM   3
        6/28/2012 12:00:15 AM  12/31/9999 11:59:59 PM  1

    Regards,
    The StreamInsight Team

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 3)

    - by Hugo Kornelis
    I showed why T-SQL scalar user-defined functions are bad for performance in two previous posts. In this post, I will show that CLR scalar user-defined functions are bad as well (though not always quite as bad as T-SQL scalar user-defined functions). I will admit that I had not really planned to cover CLR in this series. But shortly after publishing the first part , I received an email from Adam Machanic , which basically said that I should make clear that the information in that post does not apply...(read more)
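
    To make the pattern under discussion concrete, here is a minimal sketch (function and table names are hypothetical; see the series itself for the measured comparisons):

        -- A scalar UDF is invoked row by row and hides its cost from the optimizer.
        CREATE FUNCTION dbo.Triple (@i INT)
        RETURNS INT
        AS
        BEGIN
            RETURN 3 * @i;
        END;
        GO
        -- Called once per row:
        SELECT dbo.Triple(DataVal) FROM dbo.LargeTable;
        -- The set-based rewrite the series recommends:
        SELECT 3 * DataVal FROM dbo.LargeTable;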

  • AngularJS: structuring a web application with multiple ng-apps

    - by mg1075
    The blogosphere has a number of articles on the topic of AngularJS app structuring guidelines, such as these (and others):

        http://www.johnpapa.net/angular-app-structuring-guidelines/
        http://codingsmackdown.tv/blog/2013/04/19/angularjs-modules-for-great-justice/
        http://danorlando.com/angularjs-architecture-understanding-modules/
        http://henriquat.re/modularizing-angularjs/modularizing-angular-applications/modularizing-angular-applications.html

    However, one scenario I have yet to come across guidelines and best practices for is the case where you have a large web application containing multiple "mini-spa" apps, and the mini-spa apps all share a certain amount of code. I am not referring to the case of trying to have multiple ng-app declarations on the same page; rather, I mean different sections of a large site that have their own, unique ng-app declaration. As Scott Allen writes in his OdeToCode blog: One scenario I haven't found addressed very well is the scenario where multiple apps exist in the same greater web application and require some shared code on the client. Are there any recommended approaches to take, pitfalls to avoid, or good sample structures of this scenario that you can point to?

  • Thinking in DAX (#powerpivot and #bism)

    - by Marco Russo (SQLBI)
    Last week Alberto published an interesting post, Counting Products in the Current Status with PowerPivot. Starting from a question raised by a reader, Alberto described how to solve a common problem (determining the “current status” of each item at a given point in time, starting from a transactions table) by using a single DAX formula. I suggest you read his post to understand the technical details. What is inspiring about this example is that we can look at Vertipaq and DAX from several...(read more)
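
    For readers who think in SQL rather than DAX, the same "current status as of a point in time" question looks roughly like this in T-SQL; the table and column names are hypothetical, and this is not Alberto's DAX formula:

        -- Latest status per item as of a point in time, via a windowed ranking.
        WITH Ranked AS
        (
            SELECT ItemID,
                   Status,
                   StatusDate,
                   ROW_NUMBER() OVER (PARTITION BY ItemID
                                      ORDER BY StatusDate DESC) AS rn
            FROM dbo.StatusTransactions
            WHERE StatusDate <= '20110301'  -- the "given point in time"
        )
        SELECT ItemID, Status
        FROM Ranked
        WHERE rn = 1;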

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following:

        - Used a new strategy for version numbers; they now follow the pattern Major.Minor.TargetSQLServerVersion.Revision
        - Added support for Auto Configurations
        - Fixed a bug that reported an incorrect number of errors and warnings to Log Providers
        - Fixed a bug that prevented correct casting of values when using the /Set and /Param options
        - Errors and warnings are now counted more precisely
        - Updated database and log import scripts to categorize logs by projects and sections, e.g. Project: MyBIProject; Sections: Staging, Datawarehouse
        - Removed unused report stored procedures from the database
        - Updated samples: 12 samples are now available to show ALL DTLoggedExec features
        - From this version on, only SSIS 2008 is supported

    http://dtloggedexec.codeplex.com/releases/view/62218

    It is useful to say something more about a couple of specific points:

    From this version on, only SSIS 2008 is supported. Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2.

    Updated database and log import scripts to categorize logs by projects and sections. When you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows you to store logged data of packages belonging to different projects in the same database.

    Updated samples. A complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy!

    Added support for Auto Configurations. This point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words: you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)

    In the next days I'll write the mentioned post on Auto Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

  • Where's the source code?

    - by Kyle Burns
    I've been contacted by several people through this blog asking about the missing source code for the "Beginning Windows 8 Application Development - XAML Edition" book (available at http://www.amazon.com/gp/product/1430245662/) and wanted to share this with others who may have come to this blog looking for it but may not have communicated with me. The publisher (Apress) does know that the source code is not posted on the book's product page and will be correcting it. Apress is located in New York City and things were slowed down a little bit last week due to the storm, but I've been assured they will correct the product page as soon as they can. Thanks to everyone who has bought the book, and I especially appreciate your patience.

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are the “DBA” role. That means we often have more to do than the time we need to do it. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and…well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked “where do I start”? Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance tuning gig on yourself! If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster.

  • DAX editor for SQL Server

    - by Davide Mauri
    One of the major criticisms of DAX is the lack of a decent editor and, more generally, of a dedicated IDE, like the one we have for T-SQL or MDX. Well, this is no longer true. On Codeplex, a very interesting and promising Visual Studio 2010 extension was released at the beginning of November 2011: http://daxeditor.codeplex.com/ Intellisense, syntax highlighting and all the typical features offered by Visual Studio are available for DAX as well. Right now you have to download the source code and compile it, and that’s it!

  • Connect Digest : 2012-07-06

    - by AaronBertrand
    I've filed a few Connect items recently that I think are important. In #752210, I complain that the documentation for DDL triggers suggests that they can prevent certain DDL from being run, which is not the case at all. http://connect.microsoft.com/SQLServer/feedback/details/752210/doc-ddl-trigger-topic-suggests-that-rollbacks-run-before-action In #745796, I complain that scripting datetime data in Management Studio yields output that contains a binary representation instead of a human-readable...(read more)
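
    The behavior behind #752210, sketched in T-SQL: a DDL trigger fires after the statement has run, so ROLLBACK undoes the DDL rather than preventing it.

        -- The DROP actually executes; the trigger then rolls it back.
        CREATE TRIGGER trg_NoTableDrops
        ON DATABASE
        FOR DROP_TABLE
        AS
        BEGIN
            PRINT 'DROP TABLE was executed and is now being rolled back.';
            ROLLBACK TRANSACTION;
        END;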

  • SQL Saturday #156 : Providence, RI

    - by AaronBertrand
    Well, East Greenwich, RI. Another successful event, this one put on by John Miner, Brandon Leach, Steve Simon, Scott Abrants and a host of other folks. Several #SQLFamily friends in attendance as well: Grant Fritchey, Mike Walsh, Jack Corbett, Wayne Sheffield and others. I gave a session in the morning and then a session to cap off the day. Thanks to everyone who attended! The downloads are here:

        T-SQL : Bad Habits & Best Practices
        The Ins & Outs of Contained Databases...(read more)

  • Rules of Holes #5: Seek Help to Get Out of the Hole

    - by ArnieRowland
    You are moving along, doing good work, maintaining a steady pace. All seems to be going well for you. Then BAM! A Hole just grabbed you. How the heck did that happen? What went wrong? How did you fall into a Hole? Definitely, you will want to do a post-mortem and try to tease out what missteps led you into the Hole. Certainly you will want to use this opportunity to enhance your Hole avoidance skills. But your first priority is to get out of this Hole right NOW. Consider the Fifth Rule of Holes...(read more)

  • Genetic Considerations in User Interface Design

    - by John Paul Cook
    There are several different genetic factors that are highly relevant to good user interface design. Color blindness is probably the best known. But did you know about motion sickness and epilepsy? We’ve been discussing how genetic factors should be considered in user interface design in one of my classes at Vanderbilt University School of Nursing. According to the National Library of Medicine, approximately 8% of males and 0.5% of females have red-green color discrimination problems with the most...(read more)

  • DevWeek & SQL Social @ London

    - by Davide Mauri
    Yesterday I had my “SQL Server best practices for developers” session at DevWeek and I really enjoyed it a lot. For all those who asked, I’ll put slides and demos online as soon as possible. I’m just waiting to find out where I can put them (on my website or somewhere else), so it should be just a matter of a few days. If you attended my session and would like to rate it, please use SpeakerRate here: http://speakerrate.com/talks/2857-sql-server-best-practices-for-developers I also have to thank Simon Sabin for the very nice event he organized for SQLSocial: http://sqlblogcasts.com/blogs/simons/archive/2010/02/16/SQLSocial-presents-Itzik-Ben-gan--Greg-Low-and-Davide-Mauri.aspx A lot of people attended and we really had interesting discussions. It was my first time doing a session at a pub, and I must say it's *really* fun and enjoyable, especially when you have free beer :-) Now back to Italy and to the “usual” work!

  • Analyzing the errorlog

    - by TiborKaraszi
    How often do you do this? Look over each message (type) in the errorlog file and determine whether it is something you want to act on. Sure, some (but not all) of you have some monitoring solution in place, but are you 100% confident that it really will notify you of all messages that you might find interesting? That there isn't even one little message hiding in there that you would find valuable to know about? Or how about messages that you typically don't care about, but knowing that you have a high...(read more)
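
    One way to make such a review systematic is to load the log into a table and summarize it; here is a sketch using the undocumented xp_readerrorlog procedure (its parameters vary by version, so treat this as illustrative):

        -- Load the current errorlog into a temp table and summarize message types.
        CREATE TABLE #errlog
        (
            LogDate     DATETIME,
            ProcessInfo NVARCHAR(50),
            LogText     NVARCHAR(4000)
        );

        INSERT #errlog
        EXEC master.dbo.xp_readerrorlog 0, 1;  -- 0 = current log, 1 = SQL Server errorlog

        SELECT COUNT(*)     AS Occurrences,
               MIN(LogDate) AS FirstSeen,
               LogText
        FROM   #errlog
        GROUP  BY LogText
        ORDER  BY Occurrences DESC;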

  • Messages do not always appear in [catalog].[event_messages] in the order that they occur [SSIS]

    - by jamiet
    This is a simple heads up for anyone doing SQL Server Integration Services (SSIS) development using SSIS 2012. Be aware that messages do not always appear in [catalog].[event_messages] in the order that they occur, observe…

    In the following query I am looking at a subset of messages in [catalog].[event_messages] and ordering them by [event_message_id]:

        SELECT [event_message_id], [event_name], [message_time], [message_source_name]
        FROM   [catalog].[event_messages] em
        WHERE  [event_message_id] BETWEEN 290972 AND 290982
        ORDER  BY [event_message_id] ASC
        --ORDER BY [message_time] ASC

    Take a look at the two rows that I have highlighted (in a screenshot not reproduced here); note how the OnPostExecute event for “Utility GetTargetLoadDatesPerETLIfcName” appears after the OnPreExecute event for “FELC Loop over TargetLoadDates”. I happen to know that this is incorrect, because “Utility GetTargetLoadDatesPerETLIfcName” is a package that gets executed by an Execute Package Task prior to the For Each Loop “FELC Loop over TargetLoadDates”. If we order instead by [message_time] then we see something that makes more sense:

        SELECT [event_message_id], [event_name], [message_time], [message_source_name]
        FROM   [catalog].[event_messages] em
        WHERE  [event_message_id] BETWEEN 290972 AND 290982
        --ORDER BY [event_message_id] ASC
        ORDER  BY [message_time] ASC

    We can see that the OnPostExecute for “Utility GetTargetLoadDatesPerETLIfcName” did indeed occur before the OnPreExecute event for “FELC Loop over TargetLoadDates”; they just did not get assigned an [event_message_id] in chronological order. We can speculate as to why that might be (I suspect the explanation is something to do with the two executables appearing in different packages) but the reason is not the important thing here. Just be aware that you should be ordering by [message_time] rather than [event_message_id] if you want to get 100% accurate insights into your executions.

    @Jamiet
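
    A small corollary, not from the original post: [message_time] values can tie, so adding [event_message_id] as a secondary sort key gives a deterministic ordering:

        SELECT [event_message_id], [event_name], [message_time], [message_source_name]
        FROM   [catalog].[event_messages] em
        ORDER  BY [message_time] ASC,
                  [event_message_id] ASC;  -- tie-break within identical times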

  • Updatable columnstore index, sp_spaceused and sys.partitions

    - by Michael Zilberstein
    The columnstore index in SQL Server 2014 brings two important new features: it can be clustered, and it is updateable. So I decided to play with both. As a “control group” I’ve taken my old columnstore index demo from one of the ISUG (Israeli SQL Server Usergroup) sessions. The script itself isn’t important – it creates a partition function with 7 partitions (actually 8, but one remains empty), a table on it, and populates the table with 63 million rows – 9 million in each partition. So I used the same script...(read more)
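
    For reference, the two features in their simplest form; a hedged sketch with hypothetical names, not Michael's partitioned demo script:

        -- SQL Server 2014: the columnstore index can now be clustered
        -- (the index *is* the table, no column list needed)...
        CREATE TABLE dbo.FactSales
        (
            SaleDate DATE  NOT NULL,
            Amount   MONEY NOT NULL
        );

        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

        -- ...and updateable: DML against it now succeeds.
        INSERT dbo.FactSales VALUES ('20140101', 100);

        -- Inspect space and row distribution, as the post goes on to do:
        EXEC sp_spaceused 'dbo.FactSales';

        SELECT partition_number, rows
        FROM   sys.partitions
        WHERE  object_id = OBJECT_ID('dbo.FactSales');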

  • Free tools for SQL Server - Automating Execution Plan Analysis

    - by jchang
    Since this topic is being discussed, I will plug my own tools: SQL Exec Stats and its (a little dated) documentation. The main capability is cross-referencing index usage with specific execution plans. Another feature is generating execution plans for all stored procedures in a database, along with the index usage cross-reference. There are several sources of execution plans or plan handles; this could be a live trace, a previously saved trace, previously saved sqlplan files, dm_exec_cached_plans,...(read more)
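
    Of the plan sources listed, the plan cache is the easiest to sample directly; a sketch of pulling plans from it for this kind of analysis:

        -- Pull the most reused cached plans with their statement text and XML plan.
        SELECT TOP (50)
               cp.usecounts,
               cp.objtype,
               st.[text]     AS statement_text,
               qp.query_plan
        FROM   sys.dm_exec_cached_plans AS cp
        CROSS  APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
        CROSS  APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        ORDER  BY cp.usecounts DESC;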

  • How much to charge for Wordpress installation?

    - by Jack Duluoz
    I know this isn't strictly a technical question, but I hope it's OK here. The question is simple: how much should I charge a customer for a Wordpress installation & configuration? Configuration simply means I have to install a theme for him (which is not provided by me), various plugins, and maybe edit some lines of code here and there to make the whole thing work fine. MORE INFO: I don't do this for a living; I'm just doing this for this single customer. He told me he wants to customize some features of the blog, which I think will require a bit of code editing, but these will be small modifications, because I already told him that more substantial modifications will be billed separately. I don't know exactly how long this will take, but probably just one day for the setup and some more days to adapt the blog to the customer's requests, which will eventually come up later.

  • Best way to redirect users back to the pretty URL who land on the _escaped_fragment_ one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has 2 URLs:

        pretty: example.com#!blog
        ugly:   example.com?_escaped_fragment_=blog

    However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the head, but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?

  • PowerPivot, Stocks Exchange and the Moving Average

    - by AlbertoFerrari
    In this post I want to analyze a data model that performs some basic computations over stocks and uses one of the most commonly needed formulas there (and in many other business scenarios): a moving average over a specified period in time. Moving averages can be modeled very easily in PowerPivot, but some care is needed, and in this post I try to provide a solid background on how to compute moving averages. In general, a moving average over period T1..T2 is the average of all the values...(read more)
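
    In symbols, the simple moving average the post builds toward is the standard definition (stated generally here, not copied from Alberto's DAX):

        \mathrm{MA}_t = \frac{1}{n} \sum_{i=0}^{n-1} x_{t-i}

    that is, the average of the n most recent values ending at time t; the care the post describes goes into selecting the right set of dates for that window.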
