Search Results

Search found 3329 results on 134 pages for 'adam sales'.

Page 31/134

  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from...The base class uses pure virtual functions. My base class is only going to be used as a vector that'll have the inherited level classes pushed onto it...This is what my code looks like at the moment, I've tried various things and get the same result (segmentation fault). //level.h class Level { protected: Mix_Music *music; SDL_Surface *background; SDL_Surface *background2; vector<Enemy> enemy; bool loaded; int time; public: Level(); virtual ~Level(); int bgX, bgY; int bg2X, bg2Y; int width, height; virtual void load(); virtual void unload(); virtual void update(); virtual void draw(); }; //level.cpp Level::Level() { bgX = 0; bgY = 0; bg2X = 0; bg2Y = 0; width = 2048; height = 480; loaded = false; time = 0; } Level::~Level() { } //virtual functions are empty... I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment... //level1.h class Level1: public Level { public: Level1(); ~Level1(); void load(); void unload(); void update(); void draw(); }; //level1.cpp Level1::Level1() { } Level1::~Level1() { enemy.clear(); Mix_FreeMusic(music); SDL_FreeSurface(background); SDL_FreeSurface(background2); music = NULL; background = NULL; background2 = NULL; Mix_CloseAudio(); } void Level1::load() { music = Mix_LoadMUS("music/song1.xm"); background = loadImage("image/background.png"); background2 = loadImage("image/background2.png"); Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096); Mix_PlayMusic(music, -1); } void Level1::unload() { } //functions have level-specific code in them... Right now for testing purposes, I just have the main loop call Level1 level1; and use the functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.
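
    Two things in the posted code are worth double-checking before anything else: the base constructor never sets music, background or background2 to NULL, so ~Level1() frees garbage pointers if load() was never called, and Mix_LoadMUS() is called before Mix_OpenAudio(), so the music pointer may come back NULL. Beyond that, the usual pattern for "a vector that'll have the inherited level classes pushed onto it" is to hold the levels through base-class pointers; a vector<Level> slices every Level1 back down to a plain Level. A minimal sketch of that pattern (SDL details stripped out, names simplified):

        #include <iostream>
        #include <memory>
        #include <vector>

        class Level {
        protected:
            bool loaded = false;              // initialise everything the destructor may touch
        public:
            virtual ~Level() = default;
            virtual void load() = 0;
            virtual void update() = 0;
        };

        class Level1 : public Level {
        public:
            void load() override   { loaded = true; std::cout << "Level1 loaded\n"; }
            void update() override { std::cout << "Level1 update\n"; }
        };

        int main() {
            // A vector<Level> would slice Level1 objects; store pointers to the base instead.
            std::vector<std::unique_ptr<Level>> levels;
            levels.push_back(std::make_unique<Level1>());

            for (auto& level : levels) {
                level->load();     // virtual dispatch picks Level1::load()
                level->update();
            }
        }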

    Read the article

  • Double lock screen Ubuntu 14.04

    - by Adam
    So I've got a brand-new System76 laptop running Ubuntu 14.04, and if I close the lid, putting it to sleep, and reopen it, I am presented with the very nice new Unity lock screen. However, when I enter my password and it succeeds, it then presents me with a second lock screen, where I have to enter my password again before finally being let into the desktop. Has anyone else seen this sort of behavior?

    Read the article

  • Why would I want to use CRaSH?

    - by Adam Tannon
    Just stumbled across CRaSH, and although it looks mighty interesting, I'm wondering why a Java developer should invest time and energy into learning (yet another) shell language. What sorts of standard (and cool) applications can CRaSH be used for? Is this like a Groovy version of Jython? I guess, when the dust settles, I'm looking for CRaSH's "wow" factors. I'm sure they're there, but after spending ~20 minutes perusing the documentation I'm just not seeing them. Thanks in advance!

    Read the article

  • Java Phones: How to set up JTAPI?

    - by Adam Tannon
    I want to use JTAPI (1.4 - the latest) to create an app that will call my phone whenever I need it to. I downloaded JSR 043 (the JTAPI specification) and have been reading the API docs, and it seems pretty straightforward with respect to how to code against JTAPI. However, I seriously doubt that a Java app on my laptop can call my cell phone without some 3rd-party or middleman service provider, and possibly other entities/configurations as well. So I ask: besides using JTAPI in my Java code, what do I need to install or set up to have a Java app that calls my phone? Thanks in advance!

    Read the article

  • Your Transaction is in Jeopardy -- and You Can't Even Know It!

    - by Adam Machanic
    If you're reading this, please take one minute out of your day and vote for the following Connect item : https://connect.microsoft.com/SQLServer/feedback/details/444030/sys-dm-tran-active-transactions-transaction-state-not-updated-when-an-attention-event-occurs If you're really interested, take three minutes: run the steps to reproduce the issue, and then check the box that says that you were able to reproduce the issue. Why? Imagine that ten hours ago you started a big transaction. You're sitting...(read more)

    Read the article

  • Best programming language for a web-based game?

    - by Adam Geisweit
    What programming language would be best for making a web-based game to be played in a browser, and where would I be able to find tutorials on how to use the language? I have looked up Silverlight in XNA (because that was what I was most fluent with), but it made my projects unusable for a month until I got all of Silverlight off my computer. I have looked at Java and JavaScript, but I have found no suitable places where I can learn to create games in either of these, just the basics of the language. Does anyone have any advice on this?

    Read the article

  • Permanently sync a Wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them temporarily, or if it claims it can permanently sync them, it doesn't actually do it. It gets tiresome to keep reconnecting every time I want to save battery life. How can I sync my Wiimote with my computer so that if I turn the Wiimote off, I can just hit any button and it will automatically reconnect?

    Read the article

  • Partitioning to install Ubuntu, but I already have 4 primary partitions

    - by Adam
    I want to install Ubuntu alongside Windows, but there's the limit of 4 primary partitions. The disk currently has: BIOS_RVY (10GB), System (100MB) (Windows), (C:) Windows, (D:) Data. I'm not sure what to do in this situation. This is my girlfriend's laptop and she doesn't want to remove MSI's pre-installed recovery partition, even though I'm pretty sure she's never used it. What is it exactly? Also, does GRUB render Windows's "System" partition redundant?

    Read the article

  • Browser support for internal corporate tools

    - by adam
    We are on the verge of a conversion. For years, our company supported only IE for its internal (intranet) home-built tools. Since a few of our users are still on XP, which means IE only goes up to 8, a heavily JS/jQuery site won't even load! We have been in the process of converting to Chrome instead, to make use of its JavaScript performance. But it has now been suggested that we support all common browsers internally for these tools, which means more development time to scale back some of these new applications, more time to test in all browsers, and we are already understaffed. Are there any good informational sites/posts out there that already make this argument?

    Read the article

  • Problem with Update(GameTime) Methods and Pause implementation

    - by Adam
    I have the pause function implemented and it works correctly in that it dims the player screen and stops updating the gameplay. The problem is that GameTime continues to increase while it is paused, so my method that checks gameTime versus previousSpawnTime before spawning another enemy gets messed up and if the game is paused too long it is noticeable that the next enemy draws far too early. Here is my code for the enemy update. private void UpdateEnemies(GameTime gameTime) { // Spawn a new enemy every 1.5 seconds if (gameTime.TotalGameTime - previousSpawnTime > enemySpawnTime) { previousSpawnTime = gameTime.TotalGameTime; // Add an Enemy AddEnemy(); } ... I also have other methods that depend on gameTime. I've tried getting the total pause time and subtracting that from the total game time, but I can't seem to get it to work correctly if that is the way I should go about solving this. If you need to see any other code let me know. Thank you.
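
    A rough sketch of the "subtract the total pause time" idea described above; GameTime.TotalGameTime is the real XNA property, but the pause-tracking fields and the OnPaused/OnResumed hooks are assumed names you would wire up to your own pause toggle:

        // Track how long the game has spent paused, and measure spawn intervals
        // against game time with the paused interval removed.
        private TimeSpan totalPausedTime = TimeSpan.Zero;
        private TimeSpan pauseStartedAt;

        private void OnPaused(GameTime gameTime)    // call when the game enters pause
        {
            pauseStartedAt = gameTime.TotalGameTime;
        }

        private void OnResumed(GameTime gameTime)   // call when the game leaves pause
        {
            totalPausedTime += gameTime.TotalGameTime - pauseStartedAt;
        }

        private void UpdateEnemies(GameTime gameTime)
        {
            TimeSpan effectiveTime = gameTime.TotalGameTime - totalPausedTime;

            // Spawn a new enemy every 1.5 seconds of unpaused play.
            if (effectiveTime - previousSpawnTime > enemySpawnTime)
            {
                previousSpawnTime = effectiveTime;
                AddEnemy();
            }
        }

    Any other gameTime-driven checks would compare against the same adjusted value, so everything that should freeze during a pause stays in step.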

    Read the article

  • Ubuntu Surround Sound Help?

    - by Adam
    I am running Ubuntu 12.10 on a Sony Vaio VGC-RB34G ( http://reviews.cnet.com/desktops/sony-vaio-vgc-rb34g/4505-3118_7-31289053.html ) with an integrated Realtek High Definition Audio sound card. On Windows, I can set up the sound card to provide sound to my surround sound system from the Line Out, Line In, and Mic ports, so that all of these ports produce audio. I have tried to use ALSA to achieve this result with no luck. Is there any possible way to do this on Ubuntu? Thanks! Here is a picture of the manager I have on Windows and the result I want to achieve: https://picasaweb.google.com/106733704390489891165/November102012#5809378929755566722

    Read the article

  • Ubuntu 12.10 slow application load upon restart

    - by Adam
    New Ubuntu user here. After boot (boot is quite fast), opening applications such as Firefox, LibreOffice, or System Settings takes close to 10 seconds. Once an application is loaded in RAM it opens fine, and the system feels snappy and quick. When I have two Firefox windows open, Unity is snappy in showing me the two windows side by side. But upon bootup, loading an application takes 10 seconds, and there is not much HDD activity. It's a fresh install. It's the same for System Settings and browsing within it: when System Settings is opened for the first time (10 seconds) and I click, for example, Color, it takes quite a while to open the color settings. Hardware: i3, 4 GB RAM, Radeon 5770, 250 GB SATA, Gigabyte H55M motherboard. I am using Ubuntu straight out of the box, 64-bit, no proprietary drivers. I have reinstalled the OS, and it's still the same.

    Read the article

  • How to make apt autocompletion work in a minimal system (in an LXC container)?

    - by Adam Ryczkowski
    When I work inside a thin LXC container on 12.04 I have only a very basic system. In particular, /etc/bash_completion.d is missing entries such as the one for apt, which I find particularly useful. Is there a standard package that installs the autocompletion for apt, or should I copy the file manually? Just copying the files into /etc/bash_completion.d by hand doesn't seem to work. I use bash as my command interpreter. What am I missing here?

    Read the article

  • External monitor on old Dell 700m?

    - by Adam Henne
    Okay, Linux noob here. I installed 10.04 on an old Dell 700m in hopes of using it as a media center device. That seems to work great, except that it won't output to the external monitor. The VGA cable works fine and the TV recognizes other computers fine; it's the laptop that won't output. It turns out that the 700m, unlike every other computer out there, doesn't have an Fn hotkey to switch monitors. The only way to do it in Windows was to go through the Control Panel. In Ubuntu, though, the monitor setup won't detect or recognize any monitors. The laptop display works fine, but I can't adjust or add anything. I did a little research and found suggestions for older Ubuntu builds involving xorg.conf. I tried one out, in spite of my unfamiliarity, and it screwed up the whole display big time. I can fix that with a clean reinstall, that's no problem, but I'm cautious about editing xorg again. Any suggestions? Thank you!

    Read the article

  • How to download and install from the internet: Skype and RealPlayer

    - by ADAM
    I had Windows 7 corrupted by a blue screen of death; it stopped rebooting, installing, or entering safe mode - just dead. I installed Linux (Ubuntu), but I am still unable to install my wireless stick modem for surfing the internet wirelessly from my provider, Vividwireless. I don't know where or how to download programs and apps from the net, like RealPlayer and Skype and others, or how to install them; I can't even find them after downloading. Can I revive, reboot, or fix my dead Windows 7 OS from Linux? I can connect to the internet from a wireless network in the neighborhood called "linksys"!! What is Linksys anyway? How do I use Linux Ubuntu the same way as Windows, i.e. installing, using the CD-ROM for installation, downloading, cleaning history and cache, and doing a system restore? Please email me at [email protected] with any help, step by step. Many thanks.

    Read the article

  • Subdomains vs. URL Path in shareable links

    - by Adam Matan
    I am building a web application for questions and answers. Each question/answer page has all the required metadata for Facebook and Twitter, and we encourage users to share these pages. I have a dilemma regarding the shared link structure: Option 1 - subdomains Use questions.example.com and answers.example.com, followed by an ID and optional text. The text is ignored by the request, which only takes the ID into account. http://questions.example.com/<question_id>/<question_text> http://questions.example.com/12345/how-long-is-the-queue # Example http://q.example.com/12345 # Example Option 2 - URL path This is the format used by stackoverflow.com and trello.com: http://example.com/questions/<question_id>/<question_text> http://example.com/questions/12345/how-long-is-the-queue # Example http://example.com/q/12345 # Example Server-wise, I can easily do both - I have a wildcard SSL certificate and the Apache/Nginx configuration is pretty straightforward. Which option - subdomains or URL path - is preferred for shareable links?

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data? For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view: Product | City | # Sales Apples, San Francisco, 400 Apples, Boston, 700 Apples, Seattle, 600 Oranges, San Francisco, 550 Oranges, Boston, 500 Oranges, Seattle, 600 While the following is a more dimensional point-of-view: Product | San Francisco | Boston | Seattle Apples, 400, 700, 600 Oranges, 550, 500, 600 But it seems like both points of view would nonetheless be implemented in an identical star schema: Fact table: Product ID, Region ID, # Sales Product dimension: Product ID, Product Name City dimension: City ID, City Name And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized: City dimension: City ID, City Name, Region ID Region dimension: Region ID, Region Name, Region Manager, # Regional Stores While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data: City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores Is this correct?
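
    For reference, a tiny T-SQL sketch of the star schema both viewpoints end up sharing (column names follow the question; the data types are assumptions):

        -- Dimension tables give context to the numbers in the fact table.
        CREATE TABLE DimProduct (
            ProductID     int PRIMARY KEY,
            ProductName   varchar(50)
        );

        CREATE TABLE DimCity (
            CityID        int PRIMARY KEY,
            CityName      varchar(50),
            -- Dimensional (denormalised) style: region attributes live right here.
            RegionName    varchar(50),
            RegionManager varchar(50)
            -- A more relational (normalised) style would replace these two columns
            -- with a RegionID pointing at a separate Region table.
        );

        -- The fact table holds the measure plus foreign keys to the dimensions.
        CREATE TABLE FactSales (
            ProductID int REFERENCES DimProduct (ProductID),
            CityID    int REFERENCES DimCity (CityID),
            Sales     int
        );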

    Read the article

  • Obtaining Current Fiscal Year from Hierarchy with MDX

    - by Robert Iver
    I'm building a report in Reporting Services 2005 based on an SSAS 2005 cube. The basic idea of the report is that they want to see sales for this fiscal year to date vs. last year's sales year to date. It sounds simple, but being that this is my first "real" report based on SSAS, I'm having a hell of a time. First, how would one calculate the current fiscal year, quarter, or month? I have a Fiscal Date Hierarchy with all that information in it, but I can't figure out how to say: "Based on today's date, find the current fiscal year, quarter, and month." My second, but slightly smaller, problem is getting last year's sales vs. this year's sales. I have seen MANY examples of how to do this, but they all assume that you select the date manually. Since this is a report and will run pretty much on its own, I need a way to insert the "current" fiscal year, quarter, and month into the PERIODSTODATE or PARALLELPERIOD functions to get what I want. So, I'm begging for your help on this one. I have to have this report done by tomorrow morning and I'm beginning to freak out a bit. Thanks in advance.
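
    One commonly suggested shape for this is to build the "current" member from today's date with STRTOMEMBER and then feed it to PERIODSTODATE and PARALLELPERIOD. Treat the sketch below purely as an illustration: every hierarchy name, member-key format, and measure name is an assumption about the cube and will need adjusting to the actual Fiscal Date hierarchy.

        WITH
          MEMBER [Measures].[Sales FYTD] AS
            SUM(
              PERIODSTODATE(
                [Fiscal Date].[Fiscal Hierarchy].[Fiscal Year],
                STRTOMEMBER("[Fiscal Date].[Fiscal Hierarchy].[Date].&[" + FORMAT(NOW(), "yyyyMMdd") + "]")
              ),
              [Measures].[Sales Amount]
            )
          MEMBER [Measures].[Sales FYTD Last Year] AS
            SUM(
              PERIODSTODATE(
                [Fiscal Date].[Fiscal Hierarchy].[Fiscal Year],
                PARALLELPERIOD(
                  [Fiscal Date].[Fiscal Hierarchy].[Fiscal Year], 1,
                  STRTOMEMBER("[Fiscal Date].[Fiscal Hierarchy].[Date].&[" + FORMAT(NOW(), "yyyyMMdd") + "]")
                )
              ),
              [Measures].[Sales Amount]
            )
        SELECT { [Measures].[Sales FYTD], [Measures].[Sales FYTD Last Year] } ON COLUMNS
        FROM [Sales]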

    Read the article

  • How do I build a class that shares a table with multiple columns in MVC3?

    - by Barrett Kuethen
    I have a Job table public class Job { public int JobId { get; set; } public int SalesManagerId { get; set; } public int SalesRepId { get; set; } } and a Person table public class Person { public int PersonId { get; set; } public string FirstName { get; set; } public string LastName { get; set; } } My question is, how do I link the SalesManagerId to the Person (or PersonId) as well as the SalesRepId to the Person (PersonId)? The Sales Manager and Sales Rep are independent of each other. I just don't want to make 2 different lists to support the Sales Manager and Sales Rep roles. I'm new to MVC3, but it seems public virtual Person Person { get; set; } would be the way to go, but that doesn't work. Any help would be appreciated! Thanks!
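
    A minimal sketch of one common way to express this with Entity Framework code-first (the data annotation shown ships with EF 4.1+; exact namespaces vary by EF version, so treat this as an illustration rather than the definitive answer):

        public class Job
        {
            public int JobId { get; set; }

            // Two independent foreign keys, both pointing at Person.
            public int SalesManagerId { get; set; }
            [ForeignKey("SalesManagerId")]
            public virtual Person SalesManager { get; set; }

            public int SalesRepId { get; set; }
            [ForeignKey("SalesRepId")]
            public virtual Person SalesRep { get; set; }
        }

    If SQL Server then complains about multiple cascade paths, the usual workaround is to switch cascade delete off for one of the two relationships in OnModelCreating (WillCascadeOnDelete(false)).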

    Read the article

  • Display and Use Advanced Find for Product Subscriptions by Account in Microsoft CRM 4.0

    - by Chris
    By default, when viewing an account in edit mode you have access to Opportunities, Invoices, and Quotes, which contain the products being shopped by the account and/or the sales department. I'm trying to determine where to store, display, and use the products that an account has a subscription to. I may not understand the implementation, but it seems that there should be a "Products" option directly off the root Account management window that will show the user all the products the account has purchased. We are trying to integrate this with our production tracking system, where product sales can originate from other channels that will not flow through CRM first. This product subscription does not fit into the Opportunity, Quote, or Invoice model because these are confirmed recurring sales that were purchased automatically via tools like a public website, portal, etc. By enabling this tracking in CRM we can use the Advanced Find feature to facilitate follow-up sales and marketing efforts. Example: find everyone who is subscribed to model A, so we can notify them of a new holiday campaign where they can get 10% off on all add-ons. It's my assumption that this is a common scenario; however, I'd like to better understand how to approach this within the world of Microsoft CRM. Thank you in advance.

    Read the article

  • SQL Server 2005 check constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code: --Create 2 Sales tables with constraints based on the saledate create table Sales1(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010')) GO create table Sales2(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010')) GO --Insert some data into Sales1 insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50) insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60) GO --Insert some data into Sales2 insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10) insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20) GO --Create a view that combines these 2 tables create VIEW [dbo].[Sales] AS SELECT SaleDate, Amount FROM Sales1 UNION ALL SELECT SaleDate, Amount FROM Sales2 GO --Get the results --Query 1 select * from Sales where SaleDate < '31 Mar 2010' -- if you look at the execution plan this query only looks at Sales2 (Which is good) --Query 2 DECLARE @SaleDate datetime SET @SaleDate = '31 Mar 2010' select * from Sales where SaleDate < @SaleDate -- if you look at the execution plan this query looks at Sales1 and Sales2 (Which is NOT good) Looking at the execution plan you will see that the two queries are different. Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried to add indexes for the SaleDate column and that does not seem to help.

    Read the article

  • Finding and marking the largest of three values in a two-dimensional array

    - by DavidYell
    I am working on a display screen for our office, and I can't seem to think of a good way to find the largest numerical value in a set of data in a two-dimensional array. I've looked at using max() and also asort() but they don't seem to cope with a two-dimensional array. I'm returning my data through our mysql class, so the rows are returned in a two-dimensional array. Array( [0] => Array( [am] => 12, [sales] => 981), [1] => Array( [am] => 43, [sales] => 1012), [2] => Array( [am] => 17, [sales] => 876) ) I need to output a class when foreaching the data in my table for the AM with the highest sales value, short of comparing them all in if statements. I have tried to get max() on the array, but it returns an array, as it's looking within the dimension. When pointing it at a specific dimension it returns the key, not the value. I figured that I could asort() the array and pop the top value off, store it in a variable and then compare against that in my foreach() loop, but that seems to have trouble sorting across two dimensions. Lastly, I figured that I could foreach() the values, comparing them against the previous one each time, until I found the largest. This approach however means storing every value, luckily only three, but then comparing against them all again. Surely there must be a simpler way to achieve this, short of converting it into a single-dimension array, then doing an asort() on that?
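
    One way to do this in PHP, assuming the row structure shown above (array_column() needs PHP 5.5+; on older versions you can collect the sales figures with a short foreach instead):

        <?php
        // Rows shaped like the ones coming back from the MySQL class in the question.
        $rows = array(
            array('am' => 12, 'sales' => 981),
            array('am' => 43, 'sales' => 1012),
            array('am' => 17, 'sales' => 876),
        );

        // Flatten the 'sales' column into a one-dimensional array so max() works directly.
        $maxSales = max(array_column($rows, 'sales'));

        foreach ($rows as $row) {
            // Tag the row holding the highest figure; the class name is just an example.
            $class = ($row['sales'] == $maxSales) ? ' class="top"' : '';
            echo "<tr{$class}><td>{$row['am']}</td><td>{$row['sales']}</td></tr>\n";
        }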

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I’m a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012. This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I’ll make sure it’s sufficiently well described. But for this post – I’m going to skip that and assume you get it. The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I’m pulling this ones out is that they always produce the same result within their partitions (if you’re applying them to the whole partition). Consider the following query: SELECT     YEAR(OrderDate),     FIRST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING),     LAST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING),     PERCENTILE_CONT(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)),     PERCENTILE_DISC(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95% percentile, using both the continuous and discrete methods (‘discrete’ means it picks the closest one from the values available – ‘continuous’ means it will happily use something between, similar to what you would do for a traditional median of four values). I’m sure you can imagine the results – a different value for each field, but within each year, all the rows the same. Notice that I’m not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we’re considering all the rows available. If we don’t specify that, it assumes we only mean “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW”, which means that LAST_VALUE ends up being the row we’re looking at. At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST. 
We really should be able to do something like: SELECT     YEAR(OrderDate),     FIRST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING) FROM Sales.SalesOrderHeader GROUP BY YEAR(OrderDate) ; But you can’t. You get that age-old error: Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Hmm. You see, FIRST_VALUE isn’t an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can’t even surround it in a MAX. Then you get a different error, telling you that you can’t use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT. SELECT DISTINCT     YEAR(OrderDate),         FIRST_VALUE(TotalDue)              OVER (PARTITION BY YEAR(OrderDate)                   ORDER BY OrderDate, SalesOrderID                   RANGE BETWEEN UNBOUNDED PRECEDING                             AND UNBOUNDED FOLLOWING),         LAST_VALUE(TotalDue)             OVER (PARTITION BY YEAR(OrderDate)                   ORDER BY OrderDate, SalesOrderID                   RANGE BETWEEN UNBOUNDED PRECEDING                             AND UNBOUNDED FOLLOWING),     PERCENTILE_CONT(0.95)          WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)),     PERCENTILE_DISC(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; I’m sorry. It’s just the way it goes. Hopefully it’ll change the future, but for now, it’s what you’ll have to do. If we look in the execution plan, we see that it’s incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it’s definitely convenient to use these functions, and in time, I’m sure we’ll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy’s T-SQL Tuesday this month.

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years Niraj enjoys working on – IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received Microsoft MVP award for ASP.NET, Connected Systems and most recently on Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred as transactional database and reporting database. Though there are tools / techniques which can allow you to create snapshot of your transactional database for reporting purpose, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an Information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where a need for data warehouse or an OLAP system arises. A Key point to remember is a data warehouse is mostly a relational database. It’s built on top of same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured let’s understand key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) OLTP system should be capable of tracking its changes as all these changes should go back to data warehouse for historical recording. For e.g. if an OLTP transaction moves a customer from silver to gold category, OLTP system needs to ensure that this change is tracked and send to data warehouse for reporting purpose. A report in context could be how many customers divided by geographies moved from sliver to gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of box features provided by some databases e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes we need to provision a batch process that can run periodically and takes these changes from OLTP system and dump them into data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, we have jobs that run periodically to move these changes to warehouse. The question though remains is how warehouse will record these changes? This structural change in data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
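
    As a small illustration of the Change Data Capture feature mentioned above, this is roughly what switching it on for one source table looks like (the database and table names here are placeholders, not from the post):

        -- Enable CDC for the source database, then for the table whose changes
        -- should flow to the warehouse. SQL Server then logs inserts, updates and
        -- deletes into change tables that the periodic ETL batch (e.g. an SSIS job) can read.
        USE SalesOLTP;   -- hypothetical OLTP database
        GO
        EXEC sys.sp_cdc_enable_db;
        GO
        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Customer',   -- hypothetical tracked table
            @role_name     = NULL;          -- no gating role for this sketch
        GO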

    Read the article
