Search Results

Search found 10594 results on 424 pages for 'vizioz umbraco blog'.

  • Placeholder images for testing reports

    - by Greg Low
    Lorem Ipsum has long been used to provide placeholder text for testing report and document layouts. Programs such as Microsoft Word have also included options for generating sample text. (For example, type =rand() anywhere in a blank area of a Microsoft Word document and hit Enter.) Matthew Roche and Donald Farmer both sent me a link the other day to an online service that provides placeholder images. This could be quite useful when testing report layouts in SQL Server Reporting Services. You'll find it here: http://lorempixel.com/ Nice! As an example, here's a random sports image. Of course I have no idea what you'll see on this page :-)
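
    A minimal sketch of pulling one of these placeholder images down for a report mock-up (C#; the width/height/category URL pattern is an assumption about the service, so adjust it to whatever lorempixel.com documents):

        using System;
        using System.Net;

        class PlaceholderImageFetch
        {
            static void Main()
            {
                // The width/height/category path segments below are an assumption
                // about the service's URL scheme; verify against its documentation.
                var url = "http://lorempixel.com/400/200/sports";
                using (var client = new WebClient())
                {
                    // Downloads the image bytes and saves them locally for use
                    // as a mock image in a report layout.
                    client.DownloadFile(url, "placeholder.jpg");
                }
                Console.WriteLine("Saved placeholder.jpg");
            }
        }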

    Read the article

  • Parameterize Charts using Excel Slicers in PowerPivot

    - by Marco Russo (SQLBI)
    One nice new feature of Excel 2010 is the Slicer. Usually, slicers are used to filter data in a PivotTable, but they can also be useful to parameterize an algorithm or a chart! We discussed this technique in our book, but Alberto Ferrari wrote a post that shows how to use this technique to allow the user to select two stocks to be compared in an Excel chart – and, as you might imagine, this will also work when you publish the workbook on SharePoint! This is the result: Nice to see that...(read more)

    Read the article

  • PASS Summit Location Redux

    - by andyleonard
    Introduction: To quote Ronald Reagan, "There you go again." The Professional Association for SQL Server (PASS) is considering locations for future PASS Summits. The apparent answer is: You Can Have The Summit Anywhere You Want... ...as long as it's in Seattle. PASS conducted a survey on this about a year ago, and I commented on the results and PASS' (mis-)interpretation of said results in a post entitled On PASS Summit Locations, Time Will Tell. "It's About Community" I think every member of the...(read more)

    Read the article

  • New SQL Server 2012 per core licensing – Thank you Microsoft

    - by jchang
    Many of us have probably seen the new SQL Server 2012 per-core licensing: Enterprise Edition at $6,874 per core supersedes the $27,495 per socket of SQL Server 2008 R2 (discounted to $19,188 for 4-way and $23,370 for 2-way in TPC benchmark reports) with Software Assurance at $6,874 per processor. Datacenter was $57,498 per processor, so the new per-core licensing puts 2012 EE on par with 2008 R2 DC at 8 cores per socket. This is a significant increase for EE licensing on the 2-way Xeon 5600...(read more)
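
    (A quick back-of-the-envelope check of that parity claim, using the figures above: 8 cores × $6,874 per core = $54,992 per socket, just under the $57,498 per-processor Datacenter price it replaces.)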

    Read the article

  • What are the most effective learning methods?

    - by BuckWoody
    After I got done speaking at the SQL Server 2008 R2 Launch Event yesterday, I came back to the hotel room for a web meeting with some of the other teachers at the University of Washington. As teachers we are always looking to improve the knowledge transfer to our students – and the Program Director found an interesting study that I thought I might share here. Below is an unlabeled chart showing the effectiveness of learning methods according to a recent study. At the top are the labels. ("Teaching" here means students teaching each other.) Try the experiment we did: place the labels where you think they'll go. I'll post the completed chart tomorrow.

    Read the article

  • Help Me Help You Fix That

    - by BuckWoody
    If you've been redirected here because you posted on a forum, or asked a question in an e-mail, the person wanted you to know how to get help quickly from a group of folks who are willing to do so - but whose time is valuable. You need to put a little effort into the question first to get others to assist. This is how to do that. It will only take you a moment to read...

    1. State the problem succinctly in the title

    When an e-mail thread starts, or a forum post is the "head" of the conversation, you'll attract more helpers by using a descriptive headline than a vague one.

    This: "Driver for Epson Line Printer Not Installing on Operating System XYZ"
    Not this: "Can't print - PLEASE HELP"

    2. Explain the error completely

    Make sure you include all pertinent information in the request. More information is better; there's almost no way to add too much data to the discussion. What you were doing, what happened, what you saw, the error message, visuals, screen shots - whatever you can include.

    This: "I'm getting error '5203 - Driver not compatible with Operating System since about 25 years ago' in a message box on the screen when I tried to run the SETUP.COM file from my older computer. It was a 1995 Compaq Proliant and worked correctly there."
    Not this: "I get an error message in a box. It won't install."

    3. Explain what you have done to research the problem

    If the first thing you do is ask a question without doing any research, you're lazy, and no one wants to help you. Using one of the many fine search engines you can almost always find the answer to your problem. Sometimes you can't. Do yourself a favor - open a notepad app, and paste in the URLs as you look them up. If you get your answer, don't save the note. If you don't get an answer, send the list along with the problem. It will show that you've tried, and also keep people from sending you links that you've already checked.

    This: "I read the fine manual, and it doesn't mention Operating System XYZ for some reason. Also, I checked the following links, but the instructions there didn't fix the problem: "
    Not this: <NULL>

    4. Say "Please" and "Thank You"

    Remember, you're asking for help. No one owes you their valuable time. Ask politely, don't pester, endure the people who are rude to you, and when your question is answered, respond back to the thread or e-mail with a thank you to close it out. It helps others that have your same problem know that this is the correct answer.

    This: "I could really use some help here - if you have any pointers or things to try, I'd appreciate it."
    Not this: "I really need this done right now - why are there no responses?"

    This: "Thanks for those responses - that last one did the trick. Turns out I needed a new printer anyway, didn't realize they were so inexpensive now."
    Not this: <NULL>

    There are a lot of motivated people that will help you. Help them do that.

    Read the article

  • Holiday Stress

    - by andyleonard
    Photo by Brian J. Matis. Ever have one of these days? I have. According to studies like this one, I am not alone. This is a time of year when vacations loom right alongside project deadlines. There are parties to attend, additional expenses and work around the house, decisions about what to do for whom, and more. If you celebrate by decorating a house, tree, or lawn with lights, you may find yourself fighting them like the young lady pictured here! Stress at work, stress at home – stress everywhere!...(read more)

    Read the article

  • Printing PowerPoint slides in black and white

    - by John Paul Cook
    When I do SQL Server training, sometimes students want to print all of the PowerPoint slides and use them for note taking during class. For such purposes, the background is usually better off being suppressed. This is most efficiently done by changing Print Settings as shown below: Personally I recommend that people take notes directly in the slides instead of printing them. PowerPoint has a notes area. If you do want to print slides and notes, once again use the Print Settings to specify this:...(read more)

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say "If a man tells you that his car squirts milk in his eye when you lift the hood, don't bet against that. You'll end up with milk in your eye." My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant "bet against the impossible" the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn't exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said "well, must be something else." She just couldn't admit she was wrong. So don't go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, a friend of mine recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn't change the connection string information, and that's provided by the target system, so this is impossible. But I won't tell him that. Not until I look a little more. :)

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using either xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize.

    A first approach is to look in the DataDir of Analysis Services for the folder containing the database, then look for the biggest files in all subfolders; the file name will contain the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows to understand which are the most expensive columns in your tabular model. The interesting columns are:

    TABLE_ID: the name of the object - it can also be a dictionary or an index
    COLUMN_ID: the column the object belongs to - you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    RECORDS_COUNT: the number of rows in the column
    USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do to optimize your tabular model. Your options are:

    Remove the column. Yes - if it contains data you will never use in a query, simply remove the column from the tabular model.
    Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the decimal precision that is relevant to you.
    Split the column. Create two or more columns that have to be combined together to produce the original value. This technique is described in the VertiPaq optimization article.
    Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and moving to higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
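
    A minimal sketch of running that DMV from code rather than SSMS, assuming the ADOMD.NET client library (Microsoft.AnalysisServices.AdomdClient); the instance and catalog names are placeholders, and the bytes-per-row figure is just the USED_SIZE/RECORDS_COUNT ratio discussed above:

        using System;
        using Microsoft.AnalysisServices.AdomdClient;

        class ColumnSizeReport
        {
            static void Main()
            {
                // Placeholder connection string; point it at your Tabular instance.
                using (var conn = new AdomdConnection(
                    "Data Source=localhost\\TABULAR;Catalog=MyTabularDb"))
                {
                    conn.Open();
                    var cmd = new AdomdCommand(
                        "SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS " +
                        "ORDER BY used_size DESC", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            long records = Convert.ToInt64(reader["RECORDS_COUNT"]);
                            long usedSize = Convert.ToInt64(reader["USED_SIZE"]);
                            // Bytes per row: high values flag columns worth optimizing.
                            double bytesPerRow =
                                records == 0 ? 0 : (double)usedSize / records;
                            Console.WriteLine("{0} / {1}: {2:F2} bytes/row",
                                reader["TABLE_ID"], reader["COLUMN_ID"], bytesPerRow);
                        }
                    }
                }
            }
        }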

    Read the article

  • SSIS 2012 Moving and Resizing

    - by andyleonard
    Occasionally someone will email or post a comment about how all MVPs are Microsoft shills. I’m sure I enjoy those comments about as much as the Developer Teams at Microsoft. Let me start by saying that’s simply not true. In fact, it is completely off the mark – at least when it comes to the MVPs I know and with whom I interact. If anything, we are Microsoft’s harshest critics. Just last week, I sent someone at Microsoft an email with the following complaint (paraphrased): I do not like chasing down...(read more)

    Read the article

  • PowerPivot: editing measures when you reach 45

    - by AlbertoFerrari
    I have always been used to small fonts but now, as I am getting older, I had better admit that a larger font is much more relaxing. Editing PowerPivot measures has always been a pain, since all you have available is a small text box, and I hate to admit that I got used to leveraging ZoomIt to edit measures for a long time. Today I ran into a great Windows feature that I did not know about: Ctrl+mouse wheel inside a textbox increases the font size of the text box. It seems to work with most textboxes...(read more)

    Read the article

  • SQL Saturday 27 (Portland, Oregon)

    - by BuckWoody
    I’m sitting in the Seattle airport, waiting for my flight to Silicon Valley, California for the SQL Server 2008 R2 Launch Event. By some quirk of nature, they are asking me to emcee the event – but that’s another post entirely.   I’m reflecting on the SQL Saturday 27 event that was just held in Portland, Oregon this last Saturday. These are not Microsoft-sponsored events – it’s truly the community at work. Think of a big user-group meeting – I mean REALLY big – held in a central location, like at a college (as ours was) or some larger, inexpensive venue like that. Everyone there is volunteering – it’s my own money and time to drive several hours to a hotel for the night, feed myself and present. It’s their own time and money for the folks that organize the event – unless a vendor or two steps in to help. It’s their own time and money for the attendees to drive a long way, spend the night and their Saturday to listen to the speakers. Why do all this?   Because everybody benefits. Every speaker learns something new, meets new people, and reaches a new audience. Every volunteer does the same. And the attendees? Well, it’s pretty obvious what they get. A 7 AM to 10 PM extravaganza of knowledge from every corner of the product. In fact, this year the Portland group hooked up with the CodeCamp folks and held a combined event. We had over 850 people, and I had everyone from data professionals to developers in my sessions.   So I’ll take this opportunity to do two things: to say “thank you” to all of the folks who attended, from those who spoke to those who worked and those who came to listen, and to challenge you to attend the next SQL Saturday anywhere near you. You can find the list here: http://www.sqlsaturday.com/. Don’t see anything in your area? Start one! The PASS folks have a package that will show you how. Sure, it’s a big job, but the key is to get as many people helping you as possible. Even if you have only a few dozen folks show up the first time, no worries. The first events I presented at had about 20 in the room. But not this week.   See you at the Launch Event if you’re near the San Francisco area tomorrow, and see you at the Redmond SQL Saturday and TechEd if not.

    Read the article

  • Windows 8 client virtualization

    - by John Paul Cook
    Hyper-V is coming to Windows 8, but you must have a processor that supports SLAT. Virtual machines created with Virtual PC aren’t easily transferred to Windows 2008 Hyper-V and vice versa. With Windows 8, it will be easy to move VHDs from Windows 8 on your laptop or desktop to Windows 8 Server and back again. To find out if your processor supports SLAT, run coreinfo -v from a command window running as administrator. Download coreinfo from here. My MacBook Pro supports SLAT as this output shows:...(read more)

    Read the article

  • Microsoft BI Conference 2011 in Lisbon

    - by AlbertoFerrari
    Anyone interested in BI from Portugal or Spain should not miss the Microsoft BI Conference 2011 in Lisbon: one full day (March 25, 2011) with three tracks on Business Intelligence: Decision Makers, BI Pros, and Intro to BI. I am going to present two sessions on PowerPivot: one is a nice deep dive into DAX for BI pros, the other is about self-service BI for decision makers. Titles and the complete agenda will be published in the coming days, but I suggest saving the date. The full event is free and it...(read more)

    Read the article

  • SQL Excursions

    - by Grant Fritchey
    Not everyone likes boats. Some people like hanging out, on dry land, maybe sipping a good wine. Interested in doing that AND learning some new stuff about SQL Server? Then you might want to check out SQL Excursions. Denny Cherry (blog|twitter), MVP and terribly sharp individual, has organized this new venture to supply small group training in combination with travel to interesting, and let's face it, comfortable, places. The first trip is already set for Napa Valley. The training will be by Denny and MVP, author, and all round great guy, Thomas LaRock (blog|twitter). Seats for this unique event are going fast. I'd suggest signing up soon. Oh, and did I forget to mention that Red Gate is sponsoring dinner at a fine restaurant? Well, consider it mentioned. Seriously, nice wine, great training, beautiful scenery, networking, all in one place. What are you waiting for?

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=LoginName;Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=[LoginName]@[serverName];Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don’t list them here) the server name parameter isn’t sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind, the string limit for that field is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx

    Upshot? Include the @servername in your connection string just to be safe. And plan for that extra space…
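
    A minimal sketch of building that second connection string with the stock .NET provider (the server, database, and credentials below are placeholders; SqlConnectionStringBuilder simply emits the same keywords shown above, using Data Source/Initial Catalog as synonyms for Server/Database):

        using System;
        using System.Data.SqlClient;

        class AzureConnectionDemo
        {
            static void Main()
            {
                // Placeholder values - substitute your own server and credentials.
                string serverName = "myserver";
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "tcp:" + serverName + ".database.windows.net",
                    InitialCatalog = "myDataBase",
                    UserID = "LoginName@" + serverName, // the @servername suffix discussed above
                    Password = "myPassword",
                    Encrypt = true,                     // SQL Azure requires encryption
                    TrustServerCertificate = false
                };

                using (var conn = new SqlConnection(builder.ConnectionString))
                {
                    conn.Open();
                    Console.WriteLine("Connected to " + conn.Database);
                }
            }
        }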

    Read the article

  • How to Create Effective Error Reports

    - by John Paul Cook
    This post demonstrates some generic problem-reporting steps that I encourage all users, whether developers or nontechnical end users, to follow. SQL Server has a feature that can help. So does Windows in some cases. More on those in Step 3. Step 1: Is the problem caused by a particular action undertaken in a GUI? If so, you should get a screen capture. But if it is caused by executing some T-SQL code in a query window, just copy/paste the offending code as text. There are several ways to get a screen...(read more)

    Read the article

  • Do you know your DNS server?

    - by John Paul Cook
    If you don’t know whether your DNS server is valid, you need to find out before July 9. The FBI found rogue DNS servers and replaced them with clean, safe DNS servers to protect the public. These safe, clean servers will be turned off on July 9, 2012. If your computer was compromised to use the rogue servers, it will stop resolving DNS queries on July 9 when the clean servers are turned off. The FBI has provided full technical details at http://www.fbi.gov/news/stories/2011/november/malware_110911/DNS-changer-malware.pdf...(read more)

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ “magic” in my hearing. I leapt to LINQ’s defense immediately. Turns out some people don’t realize “magic” can be a pejorative term. I thought LINQ needed demystification. Here’s your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won’t repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let’s tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., “an object structure that represents the structure of the anonymous function itself” (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")),
                    prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));

    If the “products” expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we’ve reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I’m using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(thisMethod, // a MethodInfo for this Where method
                    source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we’re constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree. The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent. The provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the “products” collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a “leaf” expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as “type irritation” because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let’s consider the behavior of an alternative LINQ provider. If “products” is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the “leaf” expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a “process” in StreamInsight 2.1, what is a leaf? To StreamInsight, a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

    Example 1: defining a source and composing a query

    Let’s look in more detail at what’s happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can’t serialize ‘local’!

    Example 2: error referencing unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The “define” methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you’ve defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using “bridge” method overloads (ToEnumerable, ToObservable and To*Streamable).

    Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (this example depends on Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(
                () => new NorthwindDataContext(),
                context => from p in context.Products
                           where p.ProductID == id
                           select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it ‘owns’ the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let’s use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));
        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;
        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It’s incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team
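
    A minimal self-contained sketch of the compile-time distinction the story above hinges on (plain C#, no StreamInsight required; the type and values are illustrative):

        using System;
        using System.Linq.Expressions;

        class ExpressionTreeDemo
        {
            static void Main()
            {
                // Identical lambda syntax, two very different compiler outputs:
                Func<int, bool> asDelegate = i => i % 2 == 0;         // compiled to IL
                Expression<Func<int, bool>> asTree = i => i % 2 == 0; // data describing the code

                Console.WriteLine(asDelegate(4));  // True - a delegate can only be invoked
                Console.WriteLine(asTree.Body);    // ((i % 2) == 0) - a tree can be inspected,
                                                   // rewritten, or translated by a provider
                Console.WriteLine(asTree.Compile()(4)); // True - or compiled and then invoked
            }
        }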

    Read the article

  • SIMD Extensions for the Database Storage Engine

    - by jchang
    For the last 15 years, Intel and AMD have been progressively adding special-purpose extensions to their processor architectures. The extensions mostly pertain to vector operations based on the Single Instruction, Multiple Data (SIMD) concept. The reasoning was that achieving significant performance improvement over each successive generation for the general-purpose elements had become extraordinarily difficult. On the other hand, SIMD performance could be significantly improved with special-purpose registers...(read more)

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I’d like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was “BI From A-Z”, and its main objective was to introduce people to the BI world not only from a technical point of view but also with a strong emphasis on a methodological, “engineered” approach. The desire to put more engineering into IT (and especially into the BI field) is something that has been growing stronger in me every day for the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines “just” by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is now limited to BI.

    So that’s the main point of my workshop: an entry-level approach to BI (the level was 200) that allows attendees to learn the basics, to understand which tools they should use for which purpose and, above all, to pick up a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All of this is done not by focusing only on practice but by explaining the theory behind it, to show how much it can help to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays, with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there’s a lot of “architectural” stuff in IT but really nothing on “engineering”. So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more “engineering” is something only I feel, or something more broadly shared. I’m very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (of nearly 200)! Here are the detailed evaluations (answer: number of responses):

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
    (3: 1, 4: 12, 5: 42)

    How would you rate the Speaker's presentation skills? 4.80
    (3: 1, 4: 9, 5: 45)

    How would you rate the Speaker's knowledge of the subject? 4.95
    (4: 3, 5: 52)

    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
    (3: 2, 4: 10, 5: 43)

    How would you rate the amount of time allocated to cover the topic/session? 4.44
    (3: 7, 4: 17, 5: 31)

    How would you rate the quality of the presentation materials? 4.62
    (4: 21, 5: 34)

    The comments were all very positive. Many of them asked for more time on the subject (or to shorten the very last topics). I’ll treasure these comments and will review the content accordingly. We’ll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply. I’d just like to answer a comment that asks how much of what I showed is “universally applicable”. I can tell you that all of our BI projects follow these rules; they’ve been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they allowed us to be adaptive toward the customer. The more the rules are well defined, and the more there are tools that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car: how come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • SQL Saturday #146 : Nashua, NH

    - by AaronBertrand
    Today was SQL Saturday #146, put on by Mike Walsh, Jack Corbett, and a host of other volunteers and organizers. Scott and I missed the speaker dinner last night, but we headed up from Rhode Island at 6:00 AM and made a good day of it. We had lots of great conversations with both existing friends and potential customers. After lunch I participated in a panel discussion with Joey D'Antoni and Andrew Kelly, led by Mike. We basically talked about various things DBAs are responsible for - and ultimately...(read more)

    Read the article

  • Did You Know? I'm doing 3 more online seminars with SSWUG!

    - by Kalen Delaney
    As I told you in April, I recorded two more seminars with Stephen Wynkoop on aspects of Query Processing. The first one will be broadcast on June 30 and the second on August 27. In between, we'll broadcast my Index Internals seminar on July 23. Workshops can be replayed for up to a week after the broadcast, and you can even buy a DVD of the workshop. You can get more details by clicking on the workshop name below, or check out the announcement on the SSWUG site at http://www.sswug.org/editorials/default.aspx?id=1948...(read more)

    Read the article
