Search Results

Search found 43978 results on 1760 pages for 'select case'.

Page 25/1760 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Outsource SEO - A Strong Business Case

    Outsourcing became quite popular in the 1990s as companies raced to reduce costs by moving non-essential functions out of the corporate cost structure. One of the main methods for doing this was to outsource. The basic business case for moving any function to a subcontractor was quite simple: subcontractors that focus on only one thing have probably developed a deeper technical understanding of the process and are more effective, and economies of scale allow the outsourcer to provide the same (or higher-quality) service at a lower price.

    Read the article

  • Google I/O Sandbox Case Study: DayZipping

    Google I/O Sandbox Case Study: DayZipping We interviewed DayZipping at the Google I/O Sandbox on May 10, 2011. They explained to us the benefits of integrating with Google Maps. DayZipping is a website where users can find and share day trips for a variety of popular destinations. For more information about developing on Google Maps, visit: code.google.com For more information on DayZipping, visit: www.dayzipping.com From: GoogleDevelopers Views: 33 0 ratings Time: 02:09 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: GoAnimate

    Google I/O Sandbox Case Study: GoAnimate We interviewed GoAnimate at the Google I/O Sandbox on May 11, 2011 and they explained to us the benefits of partnering with YouTube. GoAnimate is a video creation platform that lets users easily create animated videos and publish them on YouTube. For more information on developing on YouTube, visit: code.google.com For more information on GoAnimate, visit: goanimate.com From: GoogleDevelopers Views: 33 0 ratings Time: 02:17 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: Evite

    Google I/O Sandbox Case Study: Evite We interviewed Evite at the Google I/O Sandbox on May 10, 2011 and they explained to us the benefits of using App Engine to build their website. Evite is the world's largest online personal invitations platform, allowing users to create customized invitations for any type of gathering. For more information on App Engine Developers, visit: code.google.com For more information on Evite, visit: www.evite.com From: GoogleDevelopers Views: 29 0 ratings Time: 01:51 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: VectorUnit

    Google I/O Sandbox Case Study: VectorUnit We interviewed VectorUnit at the Google I/O Sandbox on May 11, 2011 and they explained to us the benefits of building for the Android platform. VectorUnit creates console-quality video games for Android. For more information for Android developers, visit: developers.android.com For more information on VectorUnit, visit: vectorunit.com From: GoogleDevelopers Views: 13 0 ratings Time: 01:33 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: Assistly

    Google I/O Sandbox Case Study: Assistly We interviewed Assistly at the Google I/O Sandbox on May 11, 2011. They explained to us the benefits of building on Google Apps. Assistly is a customer management system that helps companies deliver top-quality customer service. For more information about developing with Google Apps, visit: code.google.com For more information on Assistly, visit: www.assistly.com From: GoogleDevelopers Views: 21 0 ratings Time: 01:29 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: CloudSherpas

    Google I/O Sandbox Case Study: CloudSherpas We interviewed CloudSherpas at the Google I/O Sandbox on May 10, 2011. They explained to us the benefits of integrating with Google Apps. CloudSherpas helps companies migrate to Google Apps and offers SherpaTools as an additional contact management solution for companies' administrators. For more information about developing with Google Apps, visit: code.google.com For more information on CloudSherpas, visit: www.cloudsherpas.com From: GoogleDevelopers Views: 406 13 ratings Time: 02:29 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: MOVL

    Google I/O Sandbox Case Study: MOVL We interviewed MOVL at the Google I/O Sandbox on May 10, 2011 and they explained to us the benefits of developing on the Google TV platform. MOVL develops gaming applications that people can play on their Google TVs, using their mobile phones as the controllers. For more information on developing on Google TV, visit: code.google.com For more information on MOVL, visit: movl.com From: GoogleDevelopers Views: 19 0 ratings Time: 02:03 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: MobileASL

    Google I/O Sandbox Case Study: MobileASL We interviewed MobileASL at the Google I/O Sandbox on May 11, 2011 and they explained to us the benefits of developing their accessibility applications on the Android platform. MobileASL is a video compression project that aims to make sign language communication on mobile phones a reality. For more information on Accessibility Developers, visit: google.com For more information on MobileASL, visit: mobileasl.cs.washington.edu From: GoogleDevelopers Views: 14 0 ratings Time: 01:57 More in Science & Technology

    Read the article

  • Google I/O Sandbox Case Study: Storify

    Google I/O Sandbox Case Study: Storify We interviewed Storify at the Google I/O Sandbox on May 10, 2011 and they explained to us the benefits of integrating their product with YouTube. Storify is a platform that enables users to build stories from news published on social media and on YouTube. To learn more about YouTube Developers, visit: code.google.com To learn more about Storify, visit: www.storify.com From: GoogleDevelopers Views: 326 15 ratings Time: 01:59 More in Science & Technology

    Read the article

  • Is there a more efficient way to run enum values through a switch-case statement in C# than this?

    - by C Patton
    I was wondering if there was a more efficient (as in simpler/cleaner code) way of writing a switch statement like the one below. I have a dictionary whose key type is an enum and whose value type is a bool. If the boolean is true, I want to change the color of a label on a form. The variable names were changed for the example.

      Dictionary<String, CustomType> testDict = new Dictionary<String, CustomType>();
      // populate testDict here...

      // GetEnumInfo is a function that iterates through a Dictionary<String, CustomType>
      // and returns a Dictionary<MyEnum, bool>
      Dictionary<MyEnum, bool> enumInfo = testDict[someString].GetEnumInfo();

      foreach (KeyValuePair<MyEnum, bool> kvp in enumInfo)
      {
          switch (kvp.Key)
          {
              case MyEnum.Enum1:
                  if (kvp.Value) { Label1.ForeColor = Color.LimeGreen; }
                  else           { Label1.ForeColor = Color.Red; }
                  break;
              case MyEnum.Enum2:
                  if (kvp.Value) { Label2.ForeColor = Color.LimeGreen; }
                  else           { Label2.ForeColor = Color.Red; }
                  break;
          }
      }

    So far, MyEnum has 8 different values, which means I have 8 different case statements. I know there must be an easier way to do this, I just can't conceptualize it in my head. If anyone could help, I'd greatly appreciate it. I love C# and I learn new things every day; I absorb it like a sponge :) -CP

    Read the article

  • Case Order by using Null

    - by molgan
    Hello, I have the following test code:

      CREATE TABLE #Foo (Foo int)

      INSERT INTO #Foo SELECT 4
      INSERT INTO #Foo SELECT NULL
      INSERT INTO #Foo SELECT 2
      INSERT INTO #Foo SELECT 5
      INSERT INTO #Foo SELECT 1

      SELECT *
      FROM #Foo
      ORDER BY CASE WHEN Foo IS NULL THEN Foo DESC ELSE Foo END

      DROP TABLE #Foo

    I'm trying to produce the following output: 1, 2, 4, 5, NULL. In other words, "if null then put it last". How is that done in SQL Server 2005? /M
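
    A minimal sketch of one common fix for the query above: DESC is part of the ORDER BY syntax, not something a CASE expression can return, so instead sort first on a computed "is null" flag and then on the value itself. NULLs end up last because the flag 1 sorts after 0.

      -- Hedged sketch against the temp table above (SQL Server 2005).
      SELECT *
      FROM #Foo
      ORDER BY
          CASE WHEN Foo IS NULL THEN 1 ELSE 0 END,  -- non-NULL rows first
          Foo                                       -- then by value, ascending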

    Read the article

  • How to do a case sensitive GROUP BY?

    - by Abe Miessler
    If I execute the code below:

      WITH temp AS
      (
          SELECT 'Test' AS name UNION ALL
          SELECT 'TEST' UNION ALL
          SELECT 'test' UNION ALL
          SELECT 'tester' UNION ALL
          SELECT 'tester'
      )
      SELECT name, COUNT(name)
      FROM temp
      GROUP BY name

    it returns the results:

      TEST    3
      tester  2

    Is there a way to have the GROUP BY be case sensitive so that the results would be:

      Test    1
      TEST    1
      test    1
      tester  2
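
    A minimal sketch of one way to do this in SQL Server, assuming a case-sensitive collation such as Latin1_General_CS_AS is available on the server: apply the collation in both the GROUP BY and the select list so the grouping comparison and the output agree.

      -- Hedged sketch: force a case-sensitive collation for the grouping.
      WITH temp AS
      (
          SELECT 'Test' AS name UNION ALL
          SELECT 'TEST' UNION ALL
          SELECT 'test' UNION ALL
          SELECT 'tester' UNION ALL
          SELECT 'tester'
      )
      SELECT name COLLATE Latin1_General_CS_AS AS name,
             COUNT(*) AS cnt
      FROM temp
      GROUP BY name COLLATE Latin1_General_CS_AS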

    Read the article

  • Lucene case sensitive & insensitive search

    - by zvikico
    I have a Lucene index which is currently case sensitive. I want to add the option of a case-insensitive search as a fallback. This means that results that match the case will get more weight and will appear first. For example, if the number of results is limited to 10, and there are 10 matches which match my case, this is enough. If I only found 7 results, I can add 3 more results from the case-insensitive search. My case is actually more complex, since I have items with different weights. Ideally, a match with the "wrong" case would still add some weight. Needless to say, I do not want duplicate results. One possible approach is to have two indexes, one case-sensitive and one case-insensitive, and search both. Naturally, there's some redundancy here, since I need to index twice. Is there a better solution? Ideas?

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc.). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case, because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources.
Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

Mix and Match

So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face.

Implementation

As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows:

Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

    public class CustomerTools
    {
        [Verb]
        public void UpdateName(int customerId, string firstName, string lastName)
        {
            // invoke internal services/domain objects/whatever to perform update
        }
    }

This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

    Parser.Run(args, new CustomerTools());

Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

    SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around:

Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would.

Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some JavaScript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • mysql: Can I use two "where"s? Like, "SELECT * FROM table WHERE something and something"?

    - by KeriLynn
    I have a table with my products and I'm trying to write a page that would pull bracelets with certain colors from the database. So here's what I have right now (in PHP):

      $query = "SELECT * FROM products WHERE (products.colors LIKE '%black%')";

    But I only want to select rows where the value of the column "category" equals "bracelet". I've tried a few different things, but I keep getting warnings and errors. I appreciate any help you can give, thank you!
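
    A minimal sketch of the SQL itself, assuming the column is literally named category and holds the string 'bracelet': multiple conditions go into a single WHERE clause, combined with AND.

      -- Hedged sketch (MySQL): one WHERE clause, two conditions joined with AND.
      SELECT *
      FROM products
      WHERE colors LIKE '%black%'
        AND category = 'bracelet';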

    Read the article

  • check for null date in CASE statement, where have I gone wrong?

    - by James.Elsey
    Hello, my source table looks like this:

      Id   StartDate
      1    (null)
      2    12/12/2009
      3    10/10/2009

    I want to create a select statement that selects the above, but also has an additional column to display a varchar if the date is not null, such as:

      Id   StartDate    StartDateStatus
      1    (null)       Awaiting
      2    12/12/2009   Approved
      3    10/10/2009   Approved

    I have the following in my select, but it doesn't seem to be working. All of the statuses are set to Approved even though the dates have some nulls:

      select id, StartDate,
          CASE StartDate WHEN null THEN 'Awaiting' ELSE 'Approved' END AS StartDateStatus
      FROM myTable

    The results of my query look like:

      Id   StartDate    StartDateStatus
      1    (null)       Approved
      2    12/12/2009   Approved
      3    10/10/2009   Approved
      4    (null)       Approved
      5    (null)       Approved

    StartDate is a smalldatetime; is there some exception to how this should be treated? Thanks
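
    A minimal sketch of the usual fix: the simple CASE form compares with =, and NULL is never equal to anything (including NULL), so the WHEN null branch never matches. A searched CASE with IS NULL does what the question asks for.

      -- Hedged sketch: use a searched CASE with IS NULL instead of "WHEN null".
      SELECT id,
             StartDate,
             CASE WHEN StartDate IS NULL THEN 'Awaiting' ELSE 'Approved' END AS StartDateStatus
      FROM myTable;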

    Read the article

  • Super-silent (mid tower) case and fan combo

    - by Dennis G.
    I want to build an HTPC for music/video/blu-ray playback (no gaming). I don't need an expensive HTPC case but just want to go with a standard medium tower case. However, I want it to be super silent so it doesn't make any annoying fan/disk noises when I watch movies. Ideally, it shouldn't make any noticeable noise at all. I understand that choosing a board, CPU and graphic card that run cool and don't consume a lot of power is important for designing a quiet machine, and I think I got that covered. However, there are so many choices in regards to cases, fans and power supplies that it's hard to get started. What are your recommendations for a case/fan (CPU+case)/power supply combination that runs absolutely silently and can cool a standard Intel system with a low-power (possibly passively cooled) graphic card? I'm usually a fan of Antec cases; would an Antec Mini P180 be a good starting point? If so, which case fans, CPU fan and power supply would you recommend?

    Read the article

  • Windows Azure Use Case: Web Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Many applications have a requirement to be located outside of the organization’s internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may want to post not only static but interactive content to be available to their external customers, and not want the customers to have access inside the organization’s firewall. There are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data, and other advantages. Some firms choose to host these web servers internally, others choose to contract out the infrastructure to an “ASP” (Application Service Provider) or an Infrastructure as a Service (IaaS) company. In any case, the design of these applications often resembles the following: a server (or perhaps more than one) hosts the presentation function (http or https) for access to the application, and this same system may hold the computational aspects of the program. Authorization and access is controlled programmatically, or is more open if this is a customer-facing application. Storage is either placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application. High availability within this scenario is often the responsibility of the architects of the application, achieved by purchasing more hosting resources which must be built, licensed and configured, and manually added as demand requires, although some IaaS providers have a partially automatic method to add nodes for scale-out, if the architecture of the application supports it. Disaster recovery is the responsibility of the system architect as well.

    Implementation: In a Windows Azure Platform as a Service (PaaS) environment, many of these architectural considerations are designed into the system. The Azure “Fabric” (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request level of the Fabric automatically route http and https requests. The Fabric also provides high availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery. In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smartphones, tablets and PCs, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or distributed computing architecture. It is true that you could code up a similar set of functionality in a traditional web farm, but the difference here is that the components are built into the very design of the architecture. The APIs and DLLs you call in a Windows Azure code base contain components as first-class citizens.
For instance, if you need storage, it is simply called within the application as an object. Computation has multiple options and the ability to scale linearly. You also gain another component that you would either have to write or bolt into a typical web farm: the Application Fabric. This Windows Azure component provides communication between applications or even to on-premise systems. It provides authorization from either a person-based or a claims-based perspective. SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually.

Resources:
Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx
Physical Tiers and Deployment - http://msdn.microsoft.com/en-us/library/ee658120.aspx

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.

    Implementation: When an organization designs code, it is essentially becoming a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements; Analysis; Design and Development; Implementation; Testing; Deployment to Production; Maintenance.

    In an on-premise environment, this often equates to the following process map:

    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to the organization’s chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, standard budgeting and procurement process followed, then systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to the organization’s chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited.
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”. In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time.

Resources:
Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble: This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations. At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.

    The Win: Compared to classic indexing, which resulted in the expected query plan selection including partition elimination, the SQL Server 2012 nonclustered columnstore increased query performance significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details: The table provides the raw data & summarizes the performance deltas.

                                      Logical Reads (8K pages)   CPU (ms)    Duration (ms)
      Columnstore                     160,323                    20,360      9,786
      Conventional Table & Indexes    9,053,423                  549,608     193,903
      Δ                               x56                        x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data; note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first in a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent entries in this series document performance enhancements that are even more significant.
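
    As a rough illustration of the kind of change involved (a minimal sketch only; the table and column names below are hypothetical, not the actual MSIT SONAR schema), a nonclustered columnstore index in SQL Server 2012 is created over the columns the aggregation queries touch:

      -- Hedged sketch with made-up names; the syntax is SQL Server 2012 nonclustered columnstore.
      CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_PerfFacts
      ON dbo.PerfFacts (CollectionDate, CounterId, ServerId, SampleValue);

      -- Aggregation queries like this one can then benefit from columnstore scans:
      SELECT CounterId, AVG(SampleValue) AS AvgValue
      FROM dbo.PerfFacts
      WHERE CollectionDate >= '20120101'
      GROUP BY CounterId;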

    Read the article

  • How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

    - by Ruud
    “This white paper demonstrates the architected latency hiding features of Oracle’s UltraSPARC T2+ and SPARC T4 processors.” That is the first sentence from this technical white paper, but what exactly does it mean? Let's consider a very simple example, the computation of a = b + c. This boils down to the following (pseudo-assembler) instructions that need to be executed:

      load  @b, r1
      load  @c, r2
      add   r1, r2, r3
      store r3, @a

    The first two instructions load variables b and c from an address in memory (here symbolized by @b and @c respectively). These values go into registers r1 and r2. The third instruction adds the values in r1 and r2; the result goes into register r3. The fourth instruction stores the contents of r3 into the memory address symbolized by @a.

    If we're lucky, both b and c are in a nearby cache and the load instructions only take a few processor cycles to execute. That is the good case, but what if b or c, or both, have to come from very far away? Perhaps both of them are in main memory, and then it easily takes hundreds of cycles for the values to arrive in the registers. Meanwhile the processor is doing nothing and simply waits for the data to arrive. Actually, it does do something: it burns cycles while waiting. That is a waste of time and energy. Why not use these cycles to execute instructions from another application, or from another thread in the case of a parallel program? That is exactly what latency hiding on the SPARC T-Series processors does. It is a hardware feature totally transparent to the user and application. As soon as there is a delay in the execution, the hardware uses these otherwise idle cycles to execute instructions from another process. As a result, the throughput capacity of the system improves because idle cycles are no longer wasted, and therefore more jobs can be run per unit of time.

    This feature has been in the SPARC T-Series from the beginning, so why this paper? The difference from previous publications on this topic is in the amount of detail given. How this all works under the hood is fully explained using two example programs. Starting from the assembly language instructions, it is demonstrated how these programs execute. To really see what is happening we go down to the processor pipeline level, where the gaps in the execution are, and show how these idle cycles are filled by other copies of the same program running simultaneously. Both the SPARC T4 and the older UltraSPARC T2+ processors are covered. You may wonder why the UltraSPARC T2+ is included: the focus of this work is on the SPARC T4 processor, but to explain the basic concept of latency hiding at this very low level, we start with the UltraSPARC T2+ because it is architecturally a much simpler design. From the single-issue, in-order pipelines of this processor we then shift gears and cover how this all works on the much more advanced dual-issue, out-of-order architecture of the T4. The analysis and performance experiments have been conducted on both processors. The results depend on the processor, but in all cases the theoretical estimates are confirmed by the experiments. If you're interested in reading a lot more about this and finding out how things really work under the hood, you can download a copy of the paper here.

    A paper like this could not have been produced without the help of several other people. I want to thank the co-author of this paper, Jared Smolens, for his very valuable contributions and our highly inspiring discussions.
I'm also indebted to Thomas Nau (Ulm University, Germany), Shane Sigler and Mark Woodyard (both at Oracle) for their feedback on earlier versions of this paper. Karen Perkins (Perkins Technical Writing and Editing) and Rick Ramsey at Oracle were very helpful in providing editorial and publishing assistance.

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >