Search Results


  • Legality of similar games

    - by Jamie Taylor
    This is my first question on GD.SE, and I hope it's in the right place. A little background: I'm an amateur game developer (read: not explicitly employed to develop games, but employed as a software developer) and took a ComSci with Games Development degree.

    My question: what is the legal situation/standpoint of creating a copycat title? I know that there are only N number of ways of solving a problem, and N number of ways to design a piece of software. Say that an independent developer designed a copycat game (a Tetris clone in this example) and decided to use that game to generate income for themselves, as well as interest in their other products. Say the developer adds a disclaimer into the software along the lines of "based on , originally released c. by ." Are there any legal problems or grey areas with the developer in this example releasing the game commercially? Would they run into legal problems? Should the developer in this example expect cease-and-desist orders or lawsuit claims from the original publishers? Have original publishers been known to, effectively, kill independent projects because they are a little too close to older titles?

    I know that there was at least one attempt by a group of independent developers to remake Sonic the Hedgehog 2, and Sega shut them down. I also know of Sega shutting down development of the independent Streets of Rage Remake. I know that "but it's an old game, your honour" isn't a great legal standpoint when it comes to defending yourself. But could an independent developer have a lawsuit filed against them for re-implementing an older title in a new way? I know that there are a lot of copycat versions of older titles like Tetris available on app stores (and similar services), and that it would be very difficult for a major publisher to shut them all down. Regardless of this, is making a Tetris (or other game) copycat/clone illegal?

    We were taught lots of different things at university, but we never covered copyright law. I presume the thinking behind that was: "If these students get jobs in games development, they won't need to know anything about the legal side of it, because their employers will have legal departments... presumably."

    tl;dr: Is it illegal to create a clone or copycat of an old title, and make money from it?

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding; I have been coding (seriously) for over 15 years now, and I have always had some testing for my code. However, over the last few months I have been learning test-driven development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, at this point the level of frustration mixed with the disappointing lack of results is beyond unacceptable.

    So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files - which is not much more useful, and maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. So I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world?

    I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit-testing suite (I think that comment was made in 2008).

    So, after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?
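
    [Editor's note: the post above is about RSpec, but the test-first rhythm it describes is language-agnostic. As a neutral illustration only - the class, figures and names below are invented, not taken from the post - here is the same red-green cycle in C# with xUnit. Note the test is already longer than the code it exercises, which is the asymmetry the poster complains about.]

        using Xunit;

        public class PriceCalculatorTests
        {
            [Fact]
            public void Orders_over_100_get_a_ten_percent_discount()
            {
                // Written first: this fails to compile until PriceCalculator
                // exists, then fails (red) until the discount logic is
                // implemented, then passes (green).
                var calculator = new PriceCalculator();
                Assert.Equal(108m, calculator.ApplyDiscount(120m));
            }
        }

        public class PriceCalculator
        {
            // The production code ends up shorter than the test that drove it.
            public decimal ApplyDiscount(decimal total) =>
                total > 100m ? total * 0.9m : total;
        }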

    Read the article

  • A sample Memento pattern: Is it correct?

    - by TheSilverBullet
    Following this query on the memento pattern, I have tried to put my understanding to the test. The memento pattern stands for three things: saving the state of the "memento" object for its successful retrieval; carefully saving each valid "state" of the memento; and encapsulating the saved states away from the change inducer so that each state remains unaltered. Have I achieved these three with my design?

    Problem: This is a zero-player game where the program is initialized with a particular set-up of chess pawns - the knights and queens. The program then needs to keep adding pairs of knights and queens so that each pawn is "safe" for the next one move of every other pawn. The condition is that either both pawns are placed, or neither is. The chessboard with the most non-conflicting knights and queens should be returned.

    Implementation: I have 4 classes for this.

    ChessBoard (the memento):

        private int[][] ChessBoard;
        public void ChessBoard();
        protected void SetChessBoard();
        protected void GetChessBoard(int);

    Pawn (not related to the memento; it holds info about the pawns):

        public enum PawnType : int
        {
            Empty = 0,
            Queen = 1,
            Knight = 2,
        }
        // Returns a value showing whether the pawn can be placed safely
        public bool IsSafeToAddPawn(PawnType);

    CareTaker (corresponds to the caretaker of the memento): a double-dimensional integer array keeps track of all states. The reason for having a 2D array is to keep track of how many states are stored and which state is currently active. An example:

        0 -2
        1 -1
        2  0   <- this is the current state (second index 0)
        3  1   <- this state has been saved, but has been undone

        private int[][] State;
        private ChessBoard[] MChessBoard;
        // Gets the chessboard at the requested position and assigns it to the originator
        public ChessBoard GetChessBoard(int);
        // Overwrites the chessboard at the given position
        public void SetChessBoard(ChessBoard, int);

    PlayGame (the originator):

        private bool status;
        private ChessBoard oChessBoard;
        // Sets the state of the chessboard at the position specified
        public SetChessBoard(ChessBoard, int);
        // Gets the state of the chessboard at the position specified
        public ChessBoard GetChessBoard(int);
        // Tries to place both pawns and returns the status of the attempt
        public bool PlacePawns(Pawn);
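
    [Editor's note: for comparison, here is a minimal, textbook-style memento in C#. It is a sketch only - all names are illustrative and not taken from the design above. The key property is that the caretaker stores snapshots without ever inspecting or altering them, which is the third requirement the poster lists.]

        using System.Collections.Generic;

        // Memento: an opaque snapshot of the board
        public class BoardMemento
        {
            private readonly int[,] squares;
            internal BoardMemento(int[,] squares) { this.squares = (int[,])squares.Clone(); }
            // Defensive copies keep every saved state unaltered
            internal int[,] Squares => (int[,])squares.Clone();
        }

        // Originator: the live board that produces and restores snapshots
        public class Board
        {
            private int[,] squares = new int[8, 8];
            public void Place(int row, int col, int pawnType) => squares[row, col] = pawnType;
            public BoardMemento Save() => new BoardMemento(squares);
            public void Restore(BoardMemento m) => squares = m.Squares;
        }

        // Caretaker: stores mementos but never looks inside them
        public class History
        {
            private readonly Stack<BoardMemento> saved = new Stack<BoardMemento>();
            public void Push(BoardMemento m) => saved.Push(m);
            public BoardMemento Pop() => saved.Pop();
        }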

    Read the article

  • The State of the Internet -- Retail Edition

    - by David Dorf
    Over at Business Insider, there's a great presentation on the State of the Internet done in the Mary Meeker style. It's 138 slides, so I took the liberty of condensing it down to the 15 slides that directly apply to the retail industry. However, I strongly recommend looking at the entire deck when you have time. And while you're at it, Business Insider just launched a portal that's dedicated to retail industry content - please check it out as well. My take-aways are below. [Source: Business Insider]

    Here are a few things I took away from the statistics:

    Facebook and Twitter are in their infancy. While all retailers should have social programs, search is still the driver and therefore should receive the lion's share of investment. Facebook referrals are up 92% year-over-year, but Google still does 80% of the referrals.

    E-commerce continues to grow at breakneck speed, but in-store commerce is still king. Stores are not showrooms yet. And social commerce pure-plays like Gilt and Groupon are tiny but worthy of some attention.

    There are more smartphones than PCs on the internet, and the disparity will continue to grow. PC growth will be flat and tablet use will continue to grow. Mobile accounts for 12% of all internet traffic.

    A quarter of smartphone sales come from China, so anyone with a presence there had better have a strong mobile strategy.

    38% of people have used their smartphone to make a purchase, and many use their smartphones inside stores. Smartphones are a critical consumer tool for shopping.

    Mobile is starting to drive significant traffic to e-commerce sites, especially tablets. Tablet strategies are crucial for retailers.

    Mobile payments from the likes of PayPal and Square are growing quickly. It will be interesting to see how NFC plays in this area.

    Mobile operating systems other than iOS and Android are losing market share. I wonder if Microsoft can finally make a dent?

    The internet is being dominated by mobile devices, and retailers had better have a strong mobile strategy to meet consumer demand.

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as every commit requires us to run new procs. I have used some basic ORMs in the past, and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else).

    The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files. The classes work something like this:

        class Foo
        {
            public bool LoadFoo()
            {
                bool blnResult = false;
                if (this.FooID == 0)
                {
                    throw new Exception("FooID must be set before calling this method.");
                }
                DataSet ds = // ... call to sproc
                if (ds.Tables[0].Rows.Count > 0)
                {
                    foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                    // other properties set
                    blnResult = true;
                }
                return blnResult;
            }
        }

        // Consumer
        Foo foo = new Foo();
        foo.FooID = 1234;
        foo.LoadFoo();
        // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application.

    Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code ("it works"), so we cannot change the existing classes to use an ORM. But I don't know how often we add brand-new modules instead of adding to or fixing current modules, so I'm not sure an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head, the only benefits I can think of are cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones still use DataSets) and less hassle remembering which procedure scripts have been run and which still need to be run. I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.
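
    [Editor's note: for comparison, this is roughly what the same load looks like with a lightweight micro-ORM such as Dapper. It is a hedged sketch, not a recommendation from the post - the connection string is a placeholder, and the table and column names are guesses based on the snippet above.]

        using System.Data.SqlClient;
        using Dapper;

        public class Foo
        {
            public int FooID { get; set; }
            public string FooName { get; set; }
        }

        public static class FooRepository
        {
            public static Foo Load(int fooId)
            {
                using (var conn = new SqlConnection("<connection string here>"))
                {
                    // Dapper maps result columns onto Foo's properties by name,
                    // and returns null when no row matches (instead of a bool flag).
                    return conn.QuerySingleOrDefault<Foo>(
                        "SELECT FooID, FooName FROM Foo WHERE FooID = @fooId",
                        new { fooId });
                }
            }
        }

        // Consumer: var foo = FooRepository.Load(1234);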

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be considerably simple to do most things. But since this editor is meant for third parties, I have no clue how big the worlds people will want to make, and I need to keep in mind that eventually it will become simply too much to check, handle and compare things that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data; that way a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I'd hoped to get answered for some hints about the topic.

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks of equal size, or would you let the user subdivide areas to taste with irregularly sized rectangles?

    2. In the case of an even grid, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all the blocks in a simple array?

    3. A region being a different data structure than its owning "game world": when streaming a region, would you deliver its objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?

    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just keep the region logic outside the graph completely (compatible with the first suggestion in question 3)?

    5. When I say virtually infinite worlds, I mean it under the constraints of the variable sizes and so on. Using float positions, a huge world can already be made. Do you think it's sane to think beyond that? I think it's OK to stick to this limit, since it will never be reached so easily.

    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is that I will need some kind of warps/teleports built in for my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region which can be teleported to by a warp within a radius of the player? (A sketch of the grid-of-chunks idea follows below.)

    Sorry for the huge question; any answers are helpful!
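
    [Editor's note: a minimal sketch of the even-grid option from question 1, assuming square chunks keyed by integer coordinates - all names and the chunk size are invented for illustration. The teleport case from question 6 can reuse the same ensure-loaded routine for the warp destination before the jump completes.]

        using System;
        using System.Collections.Generic;

        public class Chunk
        {
            // Per-chunk contents (objects, tiles, layers) would live here.
        }

        public class ChunkedWorld
        {
            private const float ChunkSize = 256f;   // world units per chunk (arbitrary)
            private readonly Dictionary<(int x, int y), Chunk> loaded
                = new Dictionary<(int x, int y), Chunk>();

            private static (int x, int y) ToChunkCoords(float worldX, float worldY)
                => ((int)Math.Floor(worldX / ChunkSize), (int)Math.Floor(worldY / ChunkSize));

            // Keep a square of chunks around each watcher (player, camera, or a
            // warp target) loaded; chunks outside every watcher's radius become
            // candidates to unload.
            public void EnsureLoadedAround(float worldX, float worldY, int radiusInChunks)
            {
                var (cx, cy) = ToChunkCoords(worldX, worldY);
                for (int x = cx - radiusInChunks; x <= cx + radiusInChunks; x++)
                    for (int y = cy - radiusInChunks; y <= cy + radiusInChunks; y++)
                        if (!loaded.ContainsKey((x, y)))
                            loaded[(x, y)] = LoadChunk(x, y);  // stream from disk here
            }

            private Chunk LoadChunk(int x, int y) => new Chunk();  // placeholder
        }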

    Read the article

  • Writing or extending existing Emacs packages: is it worth it, or should I move to NetBeans/Eclipse?

    - by Andrea
    I'm finishing my master's degree course in CS, and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most, or create new ones, but in practice I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly featured or poorly maintained packages with either overlapping functionality or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well; there's CUPS for Lisp in Eclipse and LambdaBeans built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts - Hibachi, for example, was archived last year. What's your opinion on this? Which editor should I write extensions for?

    EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient for both me and the computer I'm using. It's fast, and the textual interface (i.e. the minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage: if I need to correct or remove something, I usually just have to change a row in my .emacs or an Elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that has screwed up my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages; if I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have - in other words, that it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table updates on file writing. There are a few projects on this, but they lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule can be applied.

    EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up Elisp programming, but the extensions I have in mind don't exist yet, despite their relative simplicity and the effort more knowledgeable people have poured into related projects. This makes me wonder whether it is so because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or because of its implementation, its extension language not being able to keep up with the growing complexity.

    Read the article

  • With AMD-style modules in JavaScript, is there any benefit to namespaces?

    - by gman
    Coming from C++ originally, and seeing lots of Java programmers do the same, we brought namespaces to JavaScript. See Google's Closure Library as an example, where they have a main namespace, goog, and under that many more namespaces like goog.async and goog.graphics. But now, having learned the AMD style of requiring modules, it seems like namespaces are kind of pointless in JavaScript - not only pointless, but arguably an anti-pattern.

    What is AMD? It's a way of defining and including modules that removes all direct dependencies. Effectively you do this:

        // some/module.js
        define([
          'name/of/needed/module',
          'name/of/someother/needed/module',
        ], function(RefToNeededModule, RefToSomeOtherNeededModule) {
          /* ...code... */
          return /* object or function */;
        });

    This format lets the AMD support code know that this module needs name/of/needed/module.js and name/of/someother/needed/module.js loaded. The AMD code can load all the modules and then, assuming no circular dependencies, call the define function on each module in the correct order, record the object/function returned by each module as it calls them, and then call any other module's define function with references to those modules.

    This seems to remove any need for namespaces. In your own code you can call the reference to any other module anything you want. For example, if you had two string libraries, even if they define similar functions, as long as they follow the AMD pattern you can easily use both in the same module. No need for namespaces to solve that. It also means there are no hard-coded dependencies. For example, in Google's Closure any module can directly reference another module with something like var value = goog.math.someMathFunc(otherValue), and if you're unlucky it will magically work, whereas with the AMD style you'd have to explicitly include the math library, otherwise the module wouldn't have a reference to it, since there are no globals with AMD. On top of that, dependency injection for testing becomes easy: none of the code in an AMD module references things by namespace, so there are no hard-coded namespace paths, and you can easily mock classes at testing time.

    Is there any other point to namespaces, or is this something that C++/Java programmers are bringing to JavaScript that arguably doesn't really belong?

    Read the article

  • Alfa AWUS036H USB wireless adapter not recognized

    - by GFiasco
    The Alfa AWUS036H USB wireless adapter will not be recognized by my netbook (Ubuntu 14.04, Asus X201E). As I understand it, the drivers should already be built in to this version of Linux, but I tried a make/make install of the latest Realtek drivers (as mentioned on "How do I install drivers for the Alfa AWUS036H USB wireless adapter?") and it didn't work. I then followed the advice of another thread (ALFA AWUS036NH driver) and did a make/make install of the most up-to-date backport of the drivers, but that didn't work either. At that point I tried a series of commands from a third thread (http://ubuntuforums.org/showthread.php?t=2187780) in an attempt to identify the problem, but at no point could I get the laptop to recognize the USB adapter. I have also troubleshot the USB cable itself, tried both the USB 2.0 and 3.0 ports on the laptop, have never received an error message about needing to update the firmware, and have seemingly successfully installed all manner of Realtek driver variants which were supposed to make the adapter work. (I also tried to delete/clean up after each install, in the hope I wasn't making things worse.) Not sure what I should do next. Please let me know if I need to post any more information. Thanks very much for your help.

    EDIT: Before inserting the Alfa USB adapter:

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    After inserting the Alfa USB adapter (USB 3.0 port, no change):

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    I ran tail -f /var/log/syslog, inserted the device, and got no recognition (the last entry is dated 16:17:01, an hour ago). I'm going to check on another Ubuntu 14.04 laptop and a Windows XP desktop; I'll update after. Thanks for your help to this point.

    Read the article

  • 5 ways to stop code thrashing…

    - by MarkPearl
    A few days ago I was programming on a personal project and hit a roadblock. I was applying the MVVM pattern, and for some reason my view model was not updating the view when the state changed. I had applied this pattern many times before and had never had this problem. It just didn't make sense. So what did I do? I did what anyone would have done in my situation and looked to pass the blame to someone or something else. I tried to blame one of the inherited base classes, but it looked fine; then Visual Studio, but it seemed to be fine; and eventually any random segment of code I came across. My elementary problem had now mushroomed into one that had lost any logical basis, and I was in thrashing mode! So, what to do when you begin to thrash?

    1) Do a general code cleanup - There is a difference between cleaning code and changing code. When you thrash, you change code, and you want to avoid this. What you really want to do is things like renaming variables to have better meaning and going over your comments.

    2) Do a proof of concept - If cleaning code doesn't help, then you want to isolate the problem and identify the key concepts. When you isolate code, you ideally want it in a totally separate project with as little complexity as possible. Make the building blocks and try to replicate the functionality that you are getting in your current application.

    3) Phone a friend - I have found that speaking to someone else about the problem generally helps me solve any thrashing issue I am having. Usually they don't even have to say anything to solve the problem; just talking them through it helps you get clarity of mind.

    4) Let the dust settle - Sometimes time is the best solution. I have had a few problems that, no matter who I discussed them with and no matter how much code cleaning I had done, I just couldn't seem to fix. My brain just seemed to be going in circles. A good night's rest has always helped, and often just the break away from the problem has helped me find a solution.

    5) Stack Overflow it - Similar to phone a friend. I am really surprised to see what a melting pot Stack Overflow has been and what a help it has been in solving technology-specific problems. Just be considerate to those using the site and explain clearly exactly what problem you are having and the technologies you are using, or else you will probably not get any useful help.

    Read the article

  • Them and us

    - by Plip
    As much as we try to create inclusive societies throughout the globe, time and time again we revert to our tribal and clan origins from the distant past, be the lines split across the obvious, like nationality, religion or even the football teams we follow. Microsoft, to me, has always been a "them". I was always on the outside looking in, free to say as I wished and to hold an external, objective viewpoint. Now, after my first week (well, four days, but who's counting), Microsoft is an "us" for me. So when I look up in the atrium of Building 1 at Microsoft's UK headquarters and see banners like the one above, I already genuinely feel a part of this much bigger community. I looked up at that and felt a sense of pride to be part of something bigger, something which is out there touching people's lives everywhere (for the good and the bad).

    My objectivity has made me who I am today. I'm open to other ideas and concepts; I've worked hard to be understanding across the board, from technology through to cultural differences; and it's vital to me that I preserve that. So I now have to learn how to balance the "them" of Microsoft with the "us" of Microsoft and maintain the objectivity. It's my job to advise people on the best way to do things, which won't always mean "use Microsoft technology X"; sometimes it'll be my responsibility to say "don't use Microsoft technology X". My first and foremost responsibility is to the customer, to give them the best advice that I can, and I want to maintain that. Yeah, I'm sure I'll be tarred by some as a Microsoft guy - for many years I've had just that - but I think those out there in the non-Microsoft communities I've engaged in know that I'm the first to say when something is a bit naff.

    So, here's my ask to you, 'the community': keep me honest. If I start to sound like a fanboi, I want you to find me and give me a slap. It's all too easy to forget reality sometimes, and I want to make sure I stay well and truly rooted in that reality. Also, no matter how much I embed myself within Microsoft, I fear I will never understand Microsoft's marketing team. In the gents just under the WP7 banner shown above, I was faced with this. Draw your own conclusions on what its message is.

    Read the article

  • Big Companies Influence Retail in 2010

    - by David Dorf
    From a retail industry perspective, 2010 will go down as the year mobile went mainstream, the economy recovered from the crash, and Facebook surpassed Google as the most influential online property. While the economy certainly had the biggest impact on the retail industry, a few big companies also exerted influence. Here's a rundown and a look back at 2010:

    Apple -- Steve Jobs and company continued to lead the mobile pack. Consumers are using their iPhones to shop, retailers are using the iPod Touch for mobile checkout, and both are embracing the iPad as the next wave of technology. (The Next Technology from Apple; Mobile Platforms in Retail; Apple Stores, Touch2Systems, and the iPad)

    Google -- Not to be outdone, Google's Android platform grew faster than Apple's, plus they support QRCodes natively and will probably beat Apple to NFC. Google Checkout, Product Search, and Boutiques.com continue to impact the e-commerce scene. (Google Leverages Like.com)

    Facebook -- While the movie The Social Network certainly made Facebook a household name, Connect, Places, and seeing the "like" button all over the Web really pushed Facebook everywhere. 2010 set the foundations for f-commerce. (Facebook Participatory Promotions; Crowd Savers; What's the value of a Facebook fan?; Step Aside Google; Leveraging Social Networks for Retail; Social Shopping at Nine West)

    Groupon -- This newcomer executed a simple concept flawlessly, making them the fastest company to reach $1B in revenue. (See the chart from Silicon Alley Insider.) Google's offer of $5-6B wasn't enough, so now they are raising an additional $1B in funding, presumably to buy up all the copycats across the globe. (Changing the Way We Shop)

    Amazon -- As if leading the e-commerce charge wasn't enough, Amazon shook things up with their purchase of Woot and the release of their Price Checker mobile app. They continue to push boundaries with Kindle, and don't seem worried about the iPad at all. (You Can't Win on Price; Amazon Looks at Your Social Graph)

    eBay -- Acquiring Skype didn't exactly work out, but eBay's purchases of PayPal and RedLaser are driving the company forward. They are still a major force. (Bump the Bill)

    Oracle, SAP, HP, IBM, and Cisco left their marks on the retail industry as well, with various acquisitions and CxO shake-ups. We'll just have to wait and see what 2011 brings next.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably NServiceBus) and to use some basic pub-sub to achieve command-query separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        "As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above." -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as often happens with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate from it when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
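
    [Editor's note: one common reading of Dahan's quote is that commands should be task-based rather than field-based - sized to user intent, not to attributes. A hedged sketch of the contrast in C#; all type names are invented for illustration.]

        using System;

        // A coarse, CRUD-style command carries the whole entity and invites
        // concurrency conflicts, since any actor may have touched any field:
        public class UpdateCustomer
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public bool IsPreferred { get; set; }
            // ...dozens more fields, any of which may have changed
        }

        // Task-based commands capture intent; each is its own unit of work --
        // finer than "update the entity" but far coarser than one command
        // per field:
        public class MakeCustomerPreferred
        {
            public Guid CustomerId { get; set; }
        }

        public class RecordCustomerMove
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }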

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the East Coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the Summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event.

    Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially, the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well-laid-out conference.

    What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs etc. at the conference - only questions about Microsoft speakers. I know survey writing makes it very difficult to avoid biasing the answers one way or another. There was also no choice to show people's preference: would people rather have Microsoft speakers, or the Summit held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers to the SQLCAT and CSS teams; surely that indicates another issue, a lack of understanding of what these teams do.

    All in all, it is clear that people showed they want an event outside of Seattle and don't want PASS to be putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting against certain questions, which has prioritised them over the huge desire for a PASS Summit outside of Seattle. Let's see where we will be in 2013 - or maybe they will rethink 2012, who knows.

    Read the article

  • eSTEP Newsletter for October 2013 now available

    - by uwes
    Dear Partners,

    We would like to let you know that the October '13 issue of our newsletter is now available. The issue contains information on the following topics:

    - Oracle Open World Summary
    - Oracle Cloud: Oracle Engineered Systems
    - Oracle Database and Middleware
    - Oracle Applications and Software as a Service
    - Oracle Industries
    - Oracle Partners and the "Internet of Things"
    - JavaOne News
    - MySQL News
    - Corporate News
    - Create Your HR Strategic Vision at Oracle HCM World
    - Oracle Database Protection Redefined
    - A Preview: Oracle Database Backup Logging Recovery Appliance
    - Oracle closed Tekelec acquisition
    - Congratulations to ORACLE TEAM USA!

    Tech section:
    - Oracle's SPARC M6
    - Oracle SuperCluster M6-32 - Oracle's Most Scalable Engineered System
    - Oracle Multitenant on SPARC Servers and Oracle Solaris
    - Oracle Database 12c Enterprise Edition: Plug into the Cloud
    - Oracle In-Memory Database Cache
    - Oracle Virtual Compute Appliance
    - New Benchmark Results published (Sept. 2013)
    - Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today

    Tech blog:
    - Announcing New Sun Storage 2500-M2 Drives
    - SPARC Product Line Update
    - ZFS RAID Calculator v6
    - What ships with ODA X3-2?
    - Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris
    - New release of Sun Rack II capacity calculator available
    - Announcing: Oracle Solaris Cluster Product Bulletin, September 2013

    Learning & events:
    - Planned TechCasts: Quarterly Partner Update
    - Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates
    - Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence
    - Learn from OOW 2013 what is going on in Virtualization
    - How to: Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview
    - Multi-Factor Authentication in Oracle WebLogic: using multi-factor authentication to protect web applications deployed on Oracle WebLogic
    - If Virtualization Is Free, It Can't Be Any Good - Right?
    - Looking beyond System/HW: SOA and User Interfaces - overcoming the challenges to developing user interfaces in a service-oriented architecture

    References:
    - Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing
    - Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability
    - Home Credit and Finance Bank Accelerates Getting New Banking Products to Market

    Extra:
    - A Conversation with Java Champion Johan Vos

    You can find the newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

        URL: http://launch.oracle.com/
        PIN: eSTEP_2011

    Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months, SharePoint traffic (consulting, training and speaking) has picked up, and with that also the requests for deployments. There are good, great, bad and really bad things around this, but that is for another topic. Part of the good and great has been the fact of organizations wanting to do a proof-of-concept deployment (even when WSS or MOSS has already been deployed). We can go through a session (Microsoft has the SDPS concept: SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform, and then proceed to model the solution that fits their needs. But it should not stop there: the next step should be a POC (as many have requested) to test things out.

    Now, on to the meat of this post: how do I deploy? While it is a good process to watch and see all of it take place, not many have the time to sit through that - even more so when that has been part of the description of deploying the platform in the sessions mentioned above. I will break it into a deployment for development purposes and a deployment of a farm: two tools (or scripts) for those two different types of deployment.

    First, let me address the development environment. Around the last week of October, Chris Johnson (SharePoint product team) announced a SharePoint Easy Setup for Developers. The kit will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris' post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx

    The other scenario is the use of a script to assist you through the deployment of a farm. This is not meant to override planning; it should highlight the need for planning even more. How? By having your service accounts planned, along with the structure of the sites and the scale of your deployment. Enter AutoSPInstaller. This is a CodePlex project, and the intent behind it is not only to automate the installation but to give some meaning to, and get some sense out of, what goes on during a SharePoint deployment. How? Take, for example, the creation of the databases: when we do the initial out-of-the-box deployment using the wizard, more times than not we leave the names as they are. How is that a "bad thing"? Let's make it a better practice to rename those databases and have them take on a name that is not "GUID-ized". Having a better naming convention will not hurt; on the other hand, it will allow for consistency. Here is the link to AutoSPInstaller's site on CodePlex: http://autospinstaller.codeplex.com/

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4-based systems, so it's probably a good time to review what I consider to be the best practices.

    - Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release; (b) every release we improve the generated code, so you will see things get better; and (c) old releases cannot know about new hardware.

    - Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this adds within-file inlining.

    - Always generate debug information, using -g. This allows the tools to attribute information to lines of source, which is particularly important when profiling an application.

    - The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on those architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.

    - Crossfile optimisation (-xipo) can be very useful, particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as a favourite compiler optimisation, then this is mine!

    - Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. What makes this optimisation really useful is that codes dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback.

    - The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best-performing binary, but with a few caveats: it assumes the build platform is also the deployment platform, it enables floating-point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.

    - The SPARC64 processor, T3, and T4 implement floating-point multiply-accumulate instructions, which can substantially improve floating-point performance. To generate them, the compiler needs the flag -fma=fused and also an architecture that supports the instruction (at least -xarch=sparcfmaf).

    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database.

    Config:

        Start_Range | End_Range | Config_id
        10          | 15        | 1

    Available_UserIDs:

        ID | UserID | Used_YN
        1  | 10     | t
        1  | 11     | f
        1  | 12     | f
        1  | 13     | f
        1  | 14     | f
        1  | 15     | f

    Users:

        UserId | FName | LName
        10     | John  | Doe

    This is used in a reservation system of sorts, which lets an administrator specify a range of numbers that will be assigned to users in the Config table. Once the range has been defined, the system populates the Available_UserIDs table with all the numbers in the range and sets the Used_YN flag to false. As users sign up, they grab the next user-ID number that's not in use and reserve it; then the system adds a record to the Users table.

    Once the admin has specified a range, it is possible that they can change it. For example, they can start with 10-15, and then when the range is used up they should be able to specify another range like 16-99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table, to ensure that user IDs can't be duplicated. My questions are as follows:

    1. What's the best way to prevent the admins from using a range that's already in use? I thought of the following options:
       - Check the Users table to see if the start-range or end-range numbers are being used. If they are, assume that all the numbers in between are in use too, and reject the range.
       - Let them specify whatever they want, and try to populate the Available_UserIDs table. If there are duplicates, just ignore that specific error message from the database and continue on.

    2. How do I find gaps in the number ranges? For example, if they specify 10-15 and then 20-25, it'd be nice to be able to somehow suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql - but it only seems to return the first available number, so in my example above it would only return the number 16.

    I'm sure there's a simpler way to do things that I'm overlooking!
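
    [Editor's note: a sketch of one way to return every gap rather than just the first number, adapting the running-counter approach from the linked article to report ranges. This is a hedged example (C# with ADO.NET): the connection string is a placeholder, and the table/column names follow the post.]

        using System;
        using System.Data.SqlClient;

        public static class RangeGaps
        {
            // For each UserID whose successor is missing, the gap runs from
            // UserID + 1 up to (next existing UserID) - 1.
            private const string GapQuery = @"
                SELECT a.UserID + 1                                AS GapStart,
                       (SELECT MIN(b.UserID) - 1
                          FROM Available_UserIDs b
                         WHERE b.UserID > a.UserID)                AS GapEnd
                  FROM Available_UserIDs a
                 WHERE NOT EXISTS (SELECT 1 FROM Available_UserIDs c
                                    WHERE c.UserID = a.UserID + 1)
                   AND a.UserID < (SELECT MAX(UserID) FROM Available_UserIDs);";

            public static void PrintGaps(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(GapQuery, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            // e.g. with ranges 10-15 and 20-25 defined,
                            // prints "16-19 is currently available"
                            Console.WriteLine("{0}-{1} is currently available",
                                reader["GapStart"], reader["GapEnd"]);
                }
            }
        }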

    Read the article

  • What kind of position matches my skills, experience and interests? [closed]

    - by Ryan
    I work in a large firm, and my current job covers a variety of different duties. Due to several factors (hours, a pay cut, limited career growth) I am seriously considering finding a new job. I have worked for the company nearly 4 years, and almost 2 years ago I transitioned into more of a business analyst role (previously I was working in a client-facing role for our audit group). In this role I have overseen all aspects of the development of a large-scale re-platforming of our firm's key tool for analyzing investment portfolios. I gathered requirements, wrote specs, designed the UI and functionality, worked closely with developers (onshore and offshore) to see to it that the implementation was correct, managed schedules, and was the lead tester. This is a large-scale system used by thousands of people around the world. I've also written Excel macros in VBA (several of my macros are now used across the entire firm, so there are maybe around 1,000 people using them), used SQL on a daily basis - both on the SQL compact files the program relies on, on our SQL Server data, and on any Access databases I create - given trainings, written technical manuals, interfaced with senior managers and partners, and provided key technical support.

    I've been on a couple of interviews sporadically, most of which were for higher-level management consulting positions dealing with strategy, overall process management, project management, etc. What really interests me is the technical stuff and overseeing a project from beginning to end (although I would rather not have to do so many of the tasks on my own). I genuinely like a lot of what I do, but the company culture, the attitude towards overworking people, and my recent pay cut (my overtime was cut due to a promotion to a higher level) have led me to want to seek work elsewhere.

    The problem is: what type of work could I realistically do? I feel like traditional business analysis is too much business and not enough tech, and I've really taken a shine lately to beefing up my programming abilities and creating small programs to automate things around work. I also feel that my actual years of experience as a business analyst (1.5 years, realistically) put me at a junior level doing a lot of grunt requirements gathering, when the work I have been doing with my current company is more in line with what a program manager does (depending on your definition, I guess).

    So in reality, when I'm job hunting I get a bit perplexed, because I feel the traditional BA stuff wouldn't really suit me, and even if it did, the work that is similar to what I've done usually asks for 5-10 years of experience (and I've also found most BA jobs to be contract-only, which at the moment I'm not too keen on). Program manager is something that interests me, but again I feel the experience is lacking, because that's a much more senior position. Am I in some kind of career no-man's land? Any idea what would best suit me given my experience and abilities, as well as my interests? I plan to keep learning programming on the side, but I don't expect to get a job as a straight programmer given my relative inexperience with programming.

    Read the article

  • Is code like this a "train wreck" (in violation of Law of Demeter)?

    - by Michael Kjörling
    Browsing through some code I've written, I came across the following construct, which got me thinking. At first glance, it seems clean enough. (Yes, in the actual code the getLocation() method has a slightly more specific name which better describes exactly which location it gets.)

        service.setLocation(this.configuration.getLocation().toString());

    In this case, service is an instance variable of a known type, declared within the method. this.configuration comes from being passed in to the class constructor, and is an instance of a class implementing a specific interface (which mandates a public getLocation() method). Hence, the return type of the expression this.configuration.getLocation() is known; specifically in this case, it is a java.net.URL, whereas service.setLocation() wants a String. Since the two types String and URL are not directly compatible, some sort of conversion is required to fit the square peg into the round hole.

    However, according to the Law of Demeter as cited in Clean Code, a method f in class C should only call methods on C, on objects created by or passed as arguments to f, and on objects held in instance variables of C. Anything beyond that (the final toString() in my particular case above, unless you count a temporary object created as a result of the method invocation itself, in which case the whole Law seems to be moot) is disallowed.

    Is there a valid reason why a call like the above, given the constraints listed, should be discouraged or even disallowed? Or am I just being overly nitpicky?

    If I were to implement a method URLToString() which simply calls toString() on a URL object (such as that returned by getLocation()) passed to it as a parameter, and returns the result, I could wrap the getLocation() call in it to achieve exactly the same result; effectively, I would just move the conversion one step outward. Would that somehow make it acceptable? (It seems to me, intuitively, that it should not make any difference either way, since all it does is move things around a little. However, going by the letter of the Law of Demeter as cited, it would be acceptable, since I would then be operating directly on a parameter to a function.)

    Would it make any difference if this were about something slightly more exotic than calling toString() on a standard type? When answering, do keep in mind that altering the behavior or API of the type of the service variable is not practical. Also, for the sake of argument, let's say that altering the return type of getLocation() is also impractical.

    Read the article

  • Am I deluding myself? Business analyst transition to programmer

    - by Ryan
    Current job: working as the lead business analyst for a Big 4 firm, leading a team of developers and testers on a large-scale re-platforming project (4 onshore devs, 4 offshore devs, several onshore/offshore testers). I also work in a similar capacity on other smaller-scale projects.

    Extent of my role: gathering and writing out requirements, creating functional specifications, designing the UI (basically mapping out all front-end aspects of the system), working closely with devs to communicate/clarify requirements and come up with solutions when we hit roadblocks, writing test cases (and doing much of the testing), working with senior management and key stakeholders, managing beta testers, creating user guides and leading training sessions, and providing key technical support. I also write quite a few Excel macros using VBA (several of my macros are now used across the entire firm, so there are maybe around 1,000 people using them) and use SQL on a daily basis - both on the SQL compact files the program relies on, on our SQL Server data, and on any Access databases I create. The developers feel that I am quite good in this role because I understand a lot about programming, inherent system limitations, the structure of the databases, etc., so it's easier for me to communicate ideas and come up with suggestions when we face problems.

    What really interests me is developing software. I do a fair amount of programming in VBA and have been wanting to learn C# for a while (the dev team uses C#; I review code occasionally for my own sake but have not had any practical experience using it). I'm interested not just in the business process but also in the technical side of things, so the traditional BA role doesn't really whet my appetite for the kind of stuff I want to do. Right now I have a few small projects that managers have given me, and I'm finding new ways to do them (like building custom Access applications), so there's a bit here and there to keep me interested.

    My question is this: what I would like to do is create custom Excel or Access applications for small businesses as a freelance business (working as a one-man shop, maybe with an occasional contractor depending on a project's complexity). This would obviously start out as a part-time venture while I have a day job, but eventually become a full-time job. Am I deluding myself in thinking I can go from BA/part-time VBA programmer to making a full-time go of a freelance business (where I would be starting out just writing custom Excel/Access apps in VBA)? Or is this type of thing not usually attempted until someone gains years of full-time programming experience? And is there even a market for these types of applications amongst small (and maybe medium-sized) businesses?

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations, and I come across all kinds of strange settings. I've been thinking about a way to aggregate people's configurations and see what's common and what's unique. I used to do that with polls on SQLTeam.com, but I think we can find out more interesting things if we look at combinations of settings in relation to size and volume.

    I've been working on an application for another project that is similar, and it will be fairly easy to use that code for this. I can have something up and running in a few days - if people are interested in it. I admit that I often come up with ideas that just don't make sense; this may be one of them. One of your biggest concerns has to be how secure your data is. My solution is not to store anything identifying: the instance name and database names can both be "anonymized", and I don't store the machine name, the IP address, or anything to do with logins.

    Some of the questions I'm curious about are:

    - At what size of database does the Enterprise Edition become prevalent?
    - Given the total size of the databases, how much RAM is common?
    - How many people have multiple data files? At what size does that become prevalent?
    - How common is database mirroring? Replication? Log shipping?
    - How common is full recovery mode? At what data size does it become prevalent?

    I think those are all questions that are easy to answer - with the right data. The big question is whether or not people will share their SQL Server configurations. I understand that organizations in regulated or high-security environments can't participate, but I think that leaves many, many people that can. Are you willing to share your configuration and learn about others? I have a simple sign-up form here. It's actually a mailing-list signup that also captures your edition, number of servers and largest database; the list will only be used for this project. Is your SQL Server configured correctly? Do you wonder what the next step is as your data grows? Take a second and sign up.

    Read the article

  • Disk drive for / not ready on boot after upgrade from 10.04 to 12.04

    - by Mathieu M-Gosselin
    After upgrading (using the Upgrade button in the update manager) from 10.04.4 to 12.04.1, I cannot boot anymore. Upon booting, I am greeted with the Ubuntu logo and the error "The disk drive for / is not ready yet or not present". I have the option to wait, to skip, or to access a basic shell. Waiting overnight did nothing; skipping just gives me the same error for /tmp, /home, and then for a UUID, and finally it goes to a black screen with a white "_" in the top left corner.

    My setup is a dual-boot one with XP on a single hard drive, and I use separate partitions for / and /home. Back in the day I installed 8.04 directly from the CD while leaving a partition for XP, which I installed after. This setup had never caused any such issues, even when upgrading from 8.04 to 10.04.

    I have done plenty of research on this issue, as many others seem to have had similar problems after the same upgrade. For most of them the fix was remounting / read-write and then running:

        apt-get -f install

    but that didn't do it for me; I got dependency errors (see here), which I also investigated. I found https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/990740, where most people say the solution that worked (prior to running the above command) is:

        apt-get install -o APT::Immediate-Configure=false -f apt python-minimal

    but that also produced a lot of dependency errors as output (see here), similar to #34 in the above thread. I also read that running:

        dpkg --configure -a

    could help. At first it wouldn't run because it had trouble parsing /var/lib/dpkg/status, since there was an extra blank line in a package description (see https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/916799), but I removed that line using vim and reran the command. It still gives me output that looks like an error, though; here it is: http://paste.ubuntu.com/1338074/. I also tried re-running the above apt-get commands after that, to no avail.

    I'm running out of things to try in the hope of getting this fixed; your help would be very much appreciated. Thank you in advance.
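
    Consolidated, the recovery sequence I have attempted looks like this sketch (run from the boot-time root shell, which I assume leaves / mounted read-only, as is typical for that prompt):

        # Sketch of the recovery steps described above, in the order tried.
        mount -o remount,rw /        # make the root filesystem writable first
        dpkg --configure -a          # finish configuring half-installed packages
        apt-get install -o APT::Immediate-Configure=false -f apt python-minimal
        apt-get -f install           # attempt to repair remaining broken dependencies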

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx

    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this, I've created a list of user stories that I think should be included and are sometimes just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add or change in my list.

    · As a DBA I am working with a normalized data model that reflects an agreed-upon logical model for the system
    · As a DBA I am using consistent names for my fields which match the naming standards of my organization
    · As a DBA my model supports simple CRUD operations against all the entities
    · As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
    · As an Application Architect the database model has been validated against the UI
    · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
    · As an Application Architect we have a deployment diagram that describes how the application components will be deployed
    · As an Application Architect we have a navigation diagram that describes the typical application flow
    · As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
    · As an Application Architect we have identified external systems which may now or in the future use the data of this application, and have adapted the logical model to include these interactions
    · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
    · As a Project Manager all team members understand the goals of each release and iteration as they are planned
    · As a Project Manager all team members understand their role and the roles of others
    · As a Project Manager we have the support of the business to do the right thing even if it is not the expedient thing
    · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
    · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices and other interfaces used to work with the application
    · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create it
    · As a Test/QA Analyst we have access to a test environment that is isolated from the production and staging environments
    · As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
    · As a Test/QA Analyst I am able to automate portions of the test process

    Thoughts? -mike

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 to render some simple things (a movie) as the backmost layer, then some text messages on top of that, and now wanted to add some buttons. Before adding the buttons everything seemed to work fine, and I was already using an ID3DXSprite for the text as well (via ID3DXFont). Now I am loading some graphics for the buttons, but they appear scaled to roughly 1.2 times their original size. In my test window I centered the graphic, but being too big it just doesn't fit well: for example, the client area is 640x360 and the graphic is 440 pixels wide, so I expect 100 pixels on the left and right. The left side is fine (I took a screenshot and "counted" the pixels in Photoshop), but on the right there are only about 20 pixels.

    My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

        // initially the viewport was set to the width/height of the client area

        // clear device
        m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER,
                            D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

        // begin scene
        m_d3dDevice->BeginScene();

        // render movie surface (just two triangles to which the movie is rendered)
        m_d3dDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, false );
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); // ignored
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
        m_d3dDevice->SetTexture( 0, m_movieTexture );
        m_d3dDevice->SetStreamSource( 0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex) );
        m_d3dDevice->SetFVF( Vertex::FVF_Flags );
        m_d3dDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 2 );

        // render sprites
        m_sprite->Begin( D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE );

        // text drop shadow
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );

        // text
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

        // control object (draws a few objects like this)
        m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF );

        m_sprite->End();

        // end scene
        m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" about 440 pixels wide) everything seems fine; the objects are positioned where I expect them, just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc. works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axes to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position). I don't know why I would need to scale anything: my rendering code is so simple, and I just wanted to draw my 2D objects at 1:1 the size they came from the file. I am really very inexperienced in D3D, so the answer might be very simple...
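
    One frequent cause of exactly this symptom, offered here only as a guess since the post does not show the device creation code, is a backbuffer whose dimensions do not match the window's client area: ID3DXSprite draws in backbuffer pixels, and Present() then stretches the whole backbuffer into the client rect (a 432-row backbuffer shown in a 360-row client area is precisely a 1.2x stretch). A minimal sketch of sizing the backbuffer from the client rect follows; hWnd, d3d9 and m_d3dDevice are illustrative names:

        // Sketch: create the device with a backbuffer that matches the client
        // area exactly, so sprites are presented 1:1 with no stretch.
        RECT rc;
        GetClientRect( hWnd, &rc );  // drawable area only; excludes borders/caption

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed               = TRUE;
        pp.SwapEffect             = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat       = D3DFMT_UNKNOWN;      // take the current desktop format
        pp.BackBufferWidth        = rc.right - rc.left;  // e.g. 640
        pp.BackBufferHeight       = rc.bottom - rc.top;  // e.g. 360
        pp.hDeviceWindow          = hWnd;
        pp.EnableAutoDepthStencil = TRUE;                // matches the Z/stencil Clear above
        pp.AutoDepthStencilFormat = D3DFMT_D24S8;

        d3d9->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                            D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &m_d3dDevice );

    If the window can be resized, the same width/height would need to be refreshed in the D3DPRESENT_PARAMETERS passed to Reset(), otherwise the stretch reappears at the new size.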

    Read the article
