Search Results

Search found 2476 results on 100 pages for 'aaron james'.


  • OpenJDK DIO Project Now Live! Java SE Embedded API Accessing Peripherals

    - by hinkmond
    The DIO project on OpenJDK is now live! For those who grew up in the 1970s and 1980s, you might remember Ronnie James Dio, lead singer of Black Sabbath after Ozzy was fired, and lead singer of his own band, Dio. Well, this DIO is not that Dio. This DIO is the OpenJDK Device I/O project, which provides a Java-level API for accessing generic device peripherals on embedded devices, like your Raspberry Pi running Java SE Embedded software. See: OpenJDK DIO Project. Here's a quote:
    + General Purpose Input/Output (GPIO)
    + Inter-Integrated Circuit Bus (I2C)
    + Universal Asynchronous Receiver/Transmitter (UART)
    + Serial Peripheral Interface (SPI)
    If you're familiar with Pi4J, then you're going to like DIO. And, if you liked Ozzy, you probably liked Ronnie James Dio. This will probably make Robert Savage happy too. The part about DIO being live now, that is, not the part about Dio replacing Ozzy, because everyone likes Ozzy. Hinkmond

    Read the article

  • Preview - Profit, May 2010

    - by Aaron Lazenby
    Whew! Last Friday, we put the finishing touches on the May 2010 edition of Profit, Oracle's quarterly business and technology journal. The issue will be back from the printer and live on the website in mid-April. Here's a preview:
    Turning Crisis into Opportunity
    During the depths of the financial crisis, San Francisco, California-based Wells Fargo & Company launched a bold acquisition of Wachovia Bank--one of the largest financial services mergers in history. Learn how Oracle software helped Wells Fargo CFO Howard Atkins prepare his office for the merger--and assisted with the integration of the companies once the deal was done.
    Building on Success
    Global construction firm Hill International takes project management to new heights with Oracle's Primavera solutions.
    Product Management, In Black and White
    Catch up with Zebra Technologies to see how Oracle's Agile applications connect with an existing Oracle E-Business Suite system.
    A Perfect Match
    Learn how technology makes good medicine in this interview with National Marrow Donor Program CIO Michael Jones.
    The IT Ties That Bind
    How information systems are helping manage knowledge workers in a post-9-to-5 work world.
    I'll post a link to the new edition once it's live. Hope you enjoy!

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, whether it was different from SQL Server, whether it was the same as EF proper, and whether there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)
    Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen:
    1. The team, games and mascot will be deleted, and
    2. The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team.
    So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.
    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach
    First up, let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:
    Players.Team_Id (NULL) –> Teams.Id
    Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
    Games.HomeTeam_Id (NOT NULL) –> Teams.Id
    Games.AwayTeam_Id (NOT NULL) –> Teams.Id
    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE" – MSDN
    When we create an entity data model from the above database, we get the corresponding entity data model. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now, we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.
    Building The Same Sample with EF Code First
    Next, we're going to build up the model with the code first approach.
    EF Code First is defined on the ADO.NET team blog as such: "Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."
    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:
    If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
    If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
    The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).
    Our DbContext consists of 4 DbSets:
        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }
    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.
    Data Annotations:
        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }
            [Required]
            public virtual Team Team { get; set; }
        }
    Fluent API:
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }
    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.
    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."
    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);
            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);
            base.OnModelCreating(modelBuilder);
        }
    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team.
        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in jets.HomeGames.ToArray())
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();
    Overriding the Defaults – When and How To
    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity (a sketch of this approach appears at the end of this post). More information is available on MSDN.
    Going Further
    These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:
    Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
    Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
    You can use code-based migrations to manage database upgrades as your model continues to evolve (MSDN)
    Next Steps
    There's no time like the present to start the learning, so here's what you need to do:
    Get up to date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
    Build yourself a project to try these concepts out (or download the sample project)
    Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).
    Good luck!
    About the Authors
    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences. David also serves as the President of the Calgary .NET User Group.
    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.
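    For reference, here is a sketch of the "separate model override files" approach mentioned under "Overriding the Defaults" above, using EF's EntityTypeConfiguration<T> (entity and property names as in the post; this is an illustrative sketch, not code from the original article):
        using System.Data.Entity;
        using System.Data.Entity.ModelConfiguration;

        // One configuration class per entity keeps relationship overrides
        // out of OnModelCreating.
        public class GameConfiguration : EntityTypeConfiguration<Game>
        {
            public GameConfiguration()
            {
                HasRequired(g => g.HomeTeam)
                    .WithMany(t => t.HomeGames)
                    .WillCascadeOnDelete(false);
                HasRequired(g => g.AwayTeam)
                    .WithMany(t => t.AwayGames)
                    .WillCascadeOnDelete(false);
            }
        }

        // Registered from the DbContext:
        // protected override void OnModelCreating(DbModelBuilder modelBuilder)
        // {
        //     modelBuilder.Configurations.Add(new GameConfiguration());
        // }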

    Read the article

  • Oracle's CFO Summit: Live Updates Tomorrow

    - by Aaron Lazenby
    Leaving tonight for Oracle's CFO Summit in Atlanta, GA. Will be sending live tweets out over @OracleProfit with updates of the proceedings. Economist Martin Neil Baily will be presenting information about the state of the economy, as will prominent Oracle executives and members of the financial services sector. Should be an informative day--look for updates here and on Twitter. 

    Read the article

  • Podcast Show Notes: The Role of the Cloud Architect

    - by Bob Rhubart
    If you want to understand what a cloud architect does, what better way than to talk to people in that role? In this program that's exactly what we'll do. Joining me for this conversation are cloud architects Ron Batra and Dr. James Baty. Ron is an Oracle ACE Director and product director for cloud computing at AT&T, and Jim is Vice President of Oracle's Global Enterprise Architecture Program. This interview was recorded on June 12, 2012.
    The Conversation
    Listen to Part 1: How cloud computing is driving the supply-chaining of IT and the democratization of the activity of architecture.
    Listen to Part 2 (July 12): A discussion of DevOps, cloud computing, and the increasing velocity of IT.
    Listen to Part 3 (July 19): Why architects need to up their game to thrive and succeed in a cloud-driven world.
    Coming Soon
    A conversation about the International SOA, Cloud & Service Technology Symposium with a panel that features Thomas Erl and several Oracle community members who will be presenting at that event.

    Read the article

  • Architecture advice for converting biz app from old school to new school?

    - by Aaron Anodide
    I've got a WinForms business application that evolved over the past few years. It's forms-over-data with a number of custom UI experiences tailored to the business, so I don't think it's a candidate to port to something like SharePoint or re-write in LightSwitch (at least not without significant investment). When I started it in 2009, I was new to this type of development (coming from more low-level programming, and my RDBMS knowledge was just slightly greater than what I got from school). Thus, when I was confronted with a business model that operates on a strict monthly accounting cycle, I made the unfortunate decision to create a separate database for each accounting period. Also, when I started I knew DataSets, then I learned Linq2Sql, then I learned Entity Framework. The screens are a mix and match of those. Now, after a few years developing this thing by myself, I've finally got a small team. Ultimately, I want a web front end (for remote access to more straight-up screens with grids of data) and a thick client (for the highly customized interfaces). My question is: can you offer me some broad-strokes architecture advice that will help me formulate a battle plan to convert over to a single database and lay the foundations for my future goals at the same time? Here's a screenshot showing how an older screen uses DataSets and a newer screen uses EF (I'm thinking this might make it more real for someone reading the question - I'm willing to add any amount of detail if someone is willing to help).

    Read the article

  • Best way to go about sorting 2D sprites in a "RPG Maker" styled RPG

    - by Aaron Stewart
    I am trying to come up with the best way to draw overlapping sprites without any visual issues. I was thinking of having a SortedDictionary and setting each entity's key to its Y position relative to the max bound of the simulation, aka the Z value. I'd update the "Z" value in the update method each frame, if the entity's position has changed at all. For those who don't know what I mean: I want characters standing closer to the front (lower on the screen) than another character to be drawn on top, and characters behind to be drawn behind. I'm leery of using SpriteBatch's back-to-front or front-to-back sort modes; I've been doing some searching and have been under the impression that they are a bad idea, and I want to know exactly how other people are dealing with their depth sorting. Ultimately I'm just trying to come up with the best sorting method, as good practice, before I get too far in and have to refactor the system.
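    One simple way to implement this, sketched below assuming an XNA-style SpriteBatch and a hypothetical Sprite type (names invented for illustration): sort by Y once per frame and rely on the default SpriteSortMode.Deferred, where draw order equals call order.
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical sprite type for illustration.
        class Sprite
        {
            public Texture2D Texture;
            public Vector2 Position;
            public int Id; // tie-breaker so the sort stays deterministic
        }

        static class DepthSortedDraw
        {
            // With SpriteSortMode.Deferred (the default), draw order equals
            // call order, so sorting by Y before drawing puts sprites lower
            // on the screen (closer to the viewer) on top.
            public static void Draw(SpriteBatch batch, List<Sprite> sprites)
            {
                // List<T>.Sort is not stable; break Y ties by Id so sprites
                // at the same depth don't flicker past each other.
                sprites.Sort((a, b) => a.Position.Y != b.Position.Y
                    ? a.Position.Y.CompareTo(b.Position.Y)
                    : a.Id.CompareTo(b.Id));

                batch.Begin();
                foreach (var s in sprites)
                    batch.Draw(s.Texture, s.Position, Color.White);
                batch.End();
            }
        }
    Since the list stays nearly sorted between frames, the per-frame sort is cheap in practice, which is one reason many 2D games prefer this over per-sprite layerDepth values.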

    Read the article

  • Imaginet Resources acquires Notion Solutions

    - by Aaron Kowall
    Huge news for my company and me especially. http://www.imaginets.com/news--events/imaginet_acquisition_notion.html With the acquisition we become a very significant player in the Microsoft ALM space. This increases our scale significantly and also our knowledge base. We now have 2 Regional Directors and a pile of MS MVPs. The timing couldn't be more perfect since the launch of Visual Studio 2010 and Team Foundation Server 2010 is TODAY!! Oh, and we aren't done with announcements today... More later. Technorati Tags: VS 2010,TFS 2010,Notion,Imaginet

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs:
    Any record generated must be able to be connected to any other record in any other user table (excluding itself... the record, not the table).
    These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others.
    The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended.
    A record's field can also include aggregate information from its connections (like obtaining average, sum, etc.) that must be updated on change from another record it's connected to.
    To conserve memory, only relevant information must be loaded at any one time (can't load the entire database into memory at load and go from there).
    I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db.
    Neither the user tables, connections nor records are known at design time, as they are user-generated.
    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e., a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is "aware" of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system, so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cyclic recursive updates, etc.
    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:
    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.
    2. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are "spread out" over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.
    3. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one "connection" table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all its connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection (see the sketch below).
    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
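    A minimal sketch of that caching/factory idea in C# (the post doesn't name a language, so the names and types here are hypothetical; C# 9 record syntax is used for brevity): one shared dictionary keyed by the connection's identifying fields guarantees a single Connection instance per logical connection, no matter which of the two per-table stores materializes it first.
        using System.Collections.Generic;

        // Hypothetical key: the fields that identify one logical connection.
        record ConnectionKey(int FromTableId, int FromRecordId,
                             int ToTableId, int ToRecordId);

        class Connection
        {
            public ConnectionKey Key { get; }
            public int FromOrder { get; set; } // user ordering on the "from" side
            public int ToOrder { get; set; }   // user ordering on the "to" side
            public Connection(ConnectionKey key) => Key = key;
        }

        class ConnectionFactory
        {
            private readonly Dictionary<ConnectionKey, Connection> _cache = new();

            // Both the outgoing and incoming table loaders call this, so the
            // second caller gets the instance the first caller created.
            public Connection GetOrCreate(ConnectionKey key)
            {
                if (!_cache.TryGetValue(key, out var conn))
                {
                    conn = new Connection(key);
                    _cache[key] = conn;
                }
                return conn;
            }
        }
    In a real faulting design the cache might hold weak references instead, so connections that no loaded record points at can be evicted.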

    Read the article

  • Best S.E.O. practice for backlinking etc

    - by Aaron Lee
    I'm currently working on a website that I am really looking to optimise in terms of search engines. I've been submitting between 5-20 directory submissions daily, I've validated and optimised my code, and I've joined a lot of forums etc. to speak of the website in question. However, I don't seem to be making much of an impact in terms of Google. I know that S.E.O. takes a while to start making an impact, and that Google prefers sites that are regularly updated and aged, but are there any more practices that can really help with organic results in search engines? I have looked on Google itself, and a few other search engines, but nobody is willing to talk about extensive S.E.O. practices, as they normally don't want people knowing their formulas for S.E.O. Also, does anyone know of a decent piece of software that really looks into the ins and outs of your page and provides feedback? I usually use http://www.woorank.com, but using only one program doesn't show if it's exactly correct in what it's saying. If anyone could help it would be much appreciated, thank you very much.

    Read the article

  • Simulating an object floating on water

    - by Aaron M
    I'm working on a top-down fishing game. I want to implement some physics and collision detection for the boat moving around the lake. I would like to be able to implement thrust from either the main motor or the trolling motor, the effect of wind on the boat, and the drag of the water on it. I've been looking at the Farseer physics engine, but not having any experience using a physics engine, I am not quite sure that Farseer is suitable for this type of thing (most of the demos seem to be the application of gravity to a vertical, side-view type model rather than a top-down one). Would the Farseer engine be suitable, or would a different engine be more suitable?
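    For what it's worth, a top-down world is straightforward in a 2D engine like Farseer precisely because gravity is just a vector you can set to zero. Below is a rough sketch using Farseer 3.x-style API calls (the Boat type, dimensions, and damping values are invented for illustration; treat it as an assumption-laden starting point, not a recommendation):
        using Microsoft.Xna.Framework;
        using FarseerPhysics.Dynamics;
        using FarseerPhysics.Factories;

        class Boat
        {
            private readonly Body _body;

            public Boat(World world)
            {
                // Top-down view: construct the world with World(Vector2.Zero)
                // so no gravity acts in the screen plane.
                _body = BodyFactory.CreateRectangle(world, 2f, 5f, 1f); // ~2x5 m hull
                _body.BodyType = BodyType.Dynamic;
                _body.LinearDamping = 0.8f;  // crude stand-in for water drag
                _body.AngularDamping = 1.5f; // resists spinning in the water
            }

            // thrust: main or trolling motor force; wind: world-space force.
            public void Update(float thrust, Vector2 wind)
            {
                // Assuming rotation 0 means "facing up", with y increasing downward.
                var heading = new Vector2((float)System.Math.Sin(_body.Rotation),
                                          -(float)System.Math.Cos(_body.Rotation));
                _body.ApplyForce(heading * thrust + wind);
            }
        }
    LinearDamping is only a crude drag model; a closer approximation would apply, each step, a force proportional to the square of the velocity, opposite the velocity vector.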

    Read the article

  • "Yes, but that's niche."

    - by Geertjan
    JavaOne 2012 has come to an end, though it feels like it hasn't even started yet! What happened? Time is a weird thing. Too many things to report on. James Gosling's appearance at the JavaOne community keynote was seen by everyone I talked to (which is quite a lot of people) as the highlight of the conference. It was interesting that the software for the Duke's Choice Award winning Liquid Robotics, which James Gosling is now part of and came to talk about, is a Swing application that uses the WorldWind libraries. It was also interesting that James Gosling pointed out to the conference: "There are things you can't do using HTML." That brings me to the wonderful counter-argument to the above, which I spend my time running into a lot: "Yes, but that's niche." It's a killer argument, i.e., it kills all discussions completely in one fell swoop. Kind of like when you're talking about someone and then this sentence drops into the conversation: "Yes, but she's got cancer now." Here's one implementation of "Yes, but that's niche":
    Person A: All applications are moving to the web, tablet, and mobile phone. That's especially true now with HTML5, which is going to wipe away everything everywhere, and all applications are going to be browser based.
    Person B: What about air traffic control applications? Will they run on mobile phones too? And do you see defence applications running in a browser? Don't you agree that there are multiple scenarios imaginable where the Java desktop is the optimal platform for running applications?
    Person A: Yes, but that's niche.
    Here's another implementation, though it contradicts the above [despite often being used by the same people], since JavaFX is a Java desktop technology:
    Person A: Swing is dead. Everyone is going to be using purely JavaFX and nothing else.
    Person B: Does JavaFX have a docking framework and a module system? Does it have a plugin system? These are some of the absolutely basic requirements of Java desktop software once you get to high-end systems, e.g., banks, defence forces, oil/gas services. Those kinds of applications need a web browser, and so they love the JavaFX WebView component, and they also love the animated JavaFX charting components. But they need so much more than that, i.e., an application framework. Aren't there requirements that JavaFX isn't meeting, since it is a UI toolkit, just like Swing is a UI toolkit, and what they have in common is their lack, natively, of any kind of application framework? Don't people need more than a single window and a monolithic application structure?
    Person A: Yes, but that's niche.
    In other words, anything that doesn't fit within the currently dominant philosophy is "niche", for no other reason than that it doesn't fit within the currently dominant philosophy... regardless of the actual needs of real developers. Saying "Yes, but that's niche" kills the discussion completely, because it relegates one side of the conversation to the arcane and irrelevant corners of the universe. You're kind of like Cobol now, as soon as "Yes, but that's niche" is said. What's worst about "Yes, but that's niche" is that it doesn't enter into any discussion about user requirements, i.e., there are supposedly so few people who need this particular solution that we don't even need to talk about them anymore. Note, of course, that I'm not referring specifically or generically to anyone or anything in particular.
    Just picking up on conversations I overheard as I was scurrying around the Hilton's corridors looking for the location of my next presentation over the past few days. It does, however, mean that there were people thinking "Yes, but that's niche" while listening to James Gosling pointing out that HTML is not the be-all and end-all of absolutely everything. And so this all leaves me wondering: how many applications must be part of a niche for the niche to no longer be a niche? And what if there are multiple small niches that have the same requirements? Don't all those small niches together form a larger whole, one that should be taken seriously, i.e., a whole that is not a niche?

    Read the article

  • Good resources for language design

    - by Aaron Digulla
    There are lots of books about good web design, UI design, etc. With the advent of Xtext, it's very simple to write your own language. What are good books and resources about language design? I'm not looking for a book about compiler building (like the dragon book) but something that answers:
    How do I create a grammar that is forgiving (like allowing optional trailing commas)?
    Which grammar patterns cause problems for users of a language?
    How do I create a compact grammar without introducing ambiguities?

    Read the article

  • Can't remove libreoffice3.5-dict-de package

    - by Aaron Digulla
    I just installed the wrong version of LibreOffice (x86 instead of 64-bit). So I tried to remove it, but I get this error:
    > aptitude remove libreoffice3.5-dict-de
    Couldn't find package "libreoffice3.5-dict-de". However, the following packages contain "libreoffice3.5-dict-de" in their name: libreoffice3.5-dict-de
    Couldn't find package "libreoffice3.5-dict-de". However, the following packages contain "libreoffice3.5-dict-de" in their name: libreoffice3.5-dict-de
    No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 2 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used.
    The same happens when I try to use "Software Management". What's going on?
    aptitude: 0.6.4-1ubuntu2
    Ubuntu 11.10 oneiric

    Read the article

  • Links for PrDC10 Session Visual Studio 2010 Testing Tools

    - by Aaron Kowall
    Here are the links I promised to post from my session on Visual Studio 2010 Testing Tools. To download and configure the TFS 2010 Virtual Machine, the best instructions are here: http://blogs.msdn.com/b/briankel/archive/2010/03/18/now-available-visual-studio-2010-release-candidate-virtual-machines-with-sample-data-and-hands-on-labs.aspx To download and configure the Lab Management Virtual Machine, the best instructions are here: http://blogs.msdn.com/b/lab_management/archive/2010/02/12/one-box-lab-management-walkthrough.aspx Thanks to all who attended my presentation! Hope you learned a bit. Technorati Tags: PrDC10,TFS 2010,VHD,Lab Management

    Read the article

  • Not Drowning, Being Saved By a Dog

    - by Aaron Lazenby
    Really, there's no dog in this story. Just a week without travel to get some actual work done. I had plans to blog ambitiously from Collaborate 10 (Wi-Fi was limited; iPad is still untested), but it's a much busier week than your agenda suggests. Scheduling sessions is one thing: you can count on those chunks of time being lost to the universe. It's the bumping into people in the hall and dropping in on an impromptu lunch that really knocks things out of whack. Good thing too: I met with some great folks from

    Read the article

  • SDL 1.2 reports wrong screen size

    - by Aaron Digulla
    I have a multi-monitor setup with two displays, both 1920x1200. In games, I can only select resolutions larger than 1920x1200 (like 2560x1200), which makes games unusable. Full screen doesn't work either, because it switches one display to 800x600, which means I can't reach the close button... I have to kill the game, and then I have to restore my desktop because all windows are moved/resized. How can I force SDL to use any resolution that I want?

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read also the comments, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last 2 years I worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular if compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when:
    the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn; not as easy as it is sold by MS, but definitely easier than MDX)
    you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    there are important calculations based on distinct count measures
    there are complex calculations based on many-to-many relationships
    Until now, I never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular, as they don't get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd also like to have Excel Power View enabled for this scenario (this should be just a question of time).
    Another scenario in which I'm seeing a growing adoption of Tabular is in companies that create models for their product/service and do that by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers.
    I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!

    Read the article

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma-correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut:
        import Foreign.Marshal.Alloc(alloca)
        import Foreign.Ptr(Ptr)
        import Foreign.Storable(Storable, peek)
        import Graphics.Rendering.OpenGL.Raw
        import qualified Graphics.UI.GLUT as GLUT
        import Graphics.UI.GLUT(($=))

        main :: IO ()
        main = do
            (_progName, _args) <- GLUT.getArgsAndInitialize
            GLUT.initialDisplayMode $= [GLUT.SRGBMode]
            _window <- GLUT.createWindow "sRGB Test"
            -- To prove that I actually have freeglut working correctly.
            -- This will fail at runtime under classic GLUT.
            GLUT.closeCallback $= Just (return ())
            glEnable gl_FRAMEBUFFER_SRGB
            colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv
                gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING
            print colorEncoding

        allocaOut :: Storable a => (Ptr a -> IO b) -> IO a
        allocaOut f = alloca $ \ptr -> do
            f ptr
            peek ptr
    On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

    Read the article

  • How to install drivers for NVIDIA GeForce FX 5200 on Precise

    - by Aaron Simmons
    Ok, I'm a total Linux noob. I was able to install 12.04 without issue, except that it says in the system settings that the graphics card/driver is "unknown". I have an NVIDIA GeForce FX 5200 and have not been able to get it installed. I've found the driver at NVIDIA but couldn't figure out how to actually install it. I found instructions that used apt-get to automatically find the current driver and install it, and that came close. At least then it showed up in the 3rd-party drivers list. It said it was installed but not being used, and I was unable to find out why that might be, or how to get the system to use it. Two questions:
    1. CAN/SHOULD my graphics card work with 12.04?
    2. If so, HOW? I'm running a 100% fresh install, so what's the step-by-step from there?

    Read the article

  • Kiosk Mode Coding in Chromium

    - by Aaron
    I don't know how easy this would be, since I don't know anything about it, but I need an Ubuntu setup where the machine boots up, displays the login for a few seconds allowing a chance to log in as an admin, and then proceeds to automatically log in to a user account which directly opens Chromium (any other browser is acceptable) in a kiosk mode where only the web content is visible, all Chromium keyboard shortcuts are disabled, and all but a select few websites are blocked, redirecting back to the home page after an "Unauthorized web page" warning comes up if the URL constraint is violated. Is it possible to code a kiosk setup like this, or am I asking for too much? If I'm simply uninformed and there is already much documentation on anything like this, please redirect me to an appropriate page. If you can code or set up something like my description, please reply with step-by-step instructions, and instructions on how to modify the elements of the kiosk mode. Thank you in advance for any help. (Note: I'm currently using Ubuntu 10.04, but any distribution would work.)

    Read the article

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from 640 to 768 at x512:
        glDisable(GL_TEXTURE_2D);
        glBegin(GL_LINES);
        glColor3f((double)R/255, (double)G/255, (double)B/255);
        glVertex3f(SX, -SPosY, 0); // origin of the line
        glVertex3f(SX, -EPosY, 0); // ending point of the line
        glEnd();
        glEnable(GL_TEXTURE_2D);
    This works, but after having a problem where it wouldn't draw the line in white (or in any colour passed), I discovered that disabling GL_TEXTURE_2D before drawing the line, and re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take, or is it highly inefficient? I don't want to be causing any slow downs due to a mistake =) Thanks

    Read the article

  • Can I use Ubuntu Server to replace our Windows environment?

    - by Aaron English
    I have recently been put in charge of a network overhaul for our company. I have done plenty with Ubuntu in school, but it has been a few years. I would like to replace our current servers with Ubuntu, although I am unaware if it will work. Our current environment runs a domain, Exchange, and a VPN. I know there are solutions capable of all this. I guess my main worry is: will Windows 7 and Windows XP machines be able to use Ubuntu as a domain controller? If anyone has had success with this I would love some input. I have a meeting in a couple of months where I am supposed to explain our plan. Thank you.

    Read the article

  • Logic in Entity Components Systems

    - by aaron
    I'm making a game that uses an Entity/Component architecture, basically a port of Artemis's framework to C++. The problem arises when I try to make a PlayerControllerComponent. My original idea was this:
        class PlayerControllerComponent : public Component {
        public:
            virtual void update() = 0;
        };

        class FpsPlayerControllerComponent : public PlayerControllerComponent {
        public:
            void update() {
                // handle input
            }
        };
    and have a system that updates PlayerControllerComponents, but I found out that the Artemis framework does not look at sub-classes the way I thought it would. So, all in all, my question here is: should I make the framework aware of subclasses, or should I add a new Component-like object that is used for logic?

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you're in charge of your company or team's Team Foundation Server. Wish it was easier to manage, administer, extend? Well, here are a few utilities that I highly recommend looking at.
    I've recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1. This gave me cause to enumerate some of the utilities I like to have on hand.
    One of the reasons I love to use TFS on projects is that it's basically a complete ALM toolkit. Everything from task management, version control, build management, test management, metrics and reporting is there 'in the box'. However, no matter how complete a product set is, there are always ways to make it better. Here is a list of utilities and libraries that are pretty generally useful. This is not intended to be an exhaustive list of TFS extensions, but rather a set that I recommend you look at. There are many more out there that may be applicable in one scenario or another. This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.
    General
    TFS Power Tools – This is 'the' collection of utilities and extensions delivered by the Product Group. Highly recommended from here are the Best Practice Analyzer for ensuring your TFS implementation is healthy, and the Team Foundation Server Backups to ensure your TFS databases are backed up correctly.
    TFS Administrators Toolkit – Helps make updates to work item types and reports across many team projects. Also provides visibility of disk usage by finding large files in version control or test attachments, to assist in managing storage utilization.
    Version Control
    Git-TF – A set of cross-platform command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository and configure it to share changes with a TFS server. Great for all Git lovers who must integrate with a TFS repository.
    Testing
    TFS 2012 Tester Power Tool – A utility for bulk copying test cases, which assists in an approach for managing test cases across multiple releases. A little plug that this utility was written and maintained by Anna Russo of Imaginet, where I also work.
    Test Scribe – A documentation power tool designed to construct documents directly from TFS test plan and test run artifacts for the purpose of discussion, reporting, etc.
    Reporting
    Community TFS Report Extensions – A single repository of SQL Server Reporting Services reports for Team Foundation 2010 (and above). Check out the Test Plan Status report by Imaginet's Steve St. Jean. Very valuable for your test managers.
    Builds
    TFS Build Manager – A great utility if you are the build manager over a complex build environment with many TFS build definitions.
    Community TFS Build Extensions – Contains many custom build activities. Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012.
    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012, because of the great work by the TFS team addressing many gaps since the 2010 release.
    Are there any utilities you depend on that I've missed? I'd love to hear about them in the comments!

    Read the article
