Search Results

Search found 15866 results on 635 pages for 'css practice'.


  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture. It has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies, including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles, like the separation of application and business logic, are maintained.

    Orchestration or integration? A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however, it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it. Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN: For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.

    Design for performance: Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

        TABLE Event
            EventId    uniqueidentifier
            EventName  varchar

        TABLE User
            UserId     uniqueidentifier
            Name       varchar

        TABLE EventSubscription
            EventUserId
            EventId
            UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So, the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created. At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?

    Option 1: Store the EventName in the program as a fixed constant, and look it up by name.

    Option 2: Store the EventId in the program as a static Guid, and look it up by ID.

    Extra credit: In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

        class Event
        {
            public Guid Id { get; }
            public string EventName { get; }
            public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

            public void NotifySubscribers()
            {
                foreach (var eventSubscription in EventSubscriptions)
                {
                    eventSubscription.Notify();
                }
                this.OnSubscribersNotified();
            }

            public virtual void OnSubscribersNotified() { }
        }

        class WakingEvent : Event
        {
            private readonly IWaker waker;

            public WakingEvent(IWaker waker)
            {
                if (waker == null) throw new ArgumentNullException("waker");
                this.waker = waker;
            }

            public override void OnSubscribersNotified()
            {
                this.waker.Wake();
                base.OnSubscribersNotified();
            }
        }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent like this:

        public T GetEvent<T>() where T : Event
        {
            ... // what goes here? ...
        }

    I can't be the first one to tackle this. What's the best practice?
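
    One hedged way to answer "where does the Type-to-ID mapping live?" - a sketch under the assumption that event rows are seeded at install time, not an established best practice - is to keep the mapping in a single registry next to the repository, so neither the entities nor the callers carry magic GUIDs:

        using System;
        using System.Collections.Generic;

        // Assumes the Event and WakingEvent classes from the question above.
        // The GUID here is made up for illustration; in practice it must match
        // the row seeded into the Event table during installation.
        static class EventTypeRegistry
        {
            private static readonly Dictionary<Type, Guid> Ids = new Dictionary<Type, Guid>
            {
                { typeof(WakingEvent), new Guid("7f1c2a44-0000-0000-0000-000000000001") }
            };

            public static Guid IdFor<T>() where T : Event
            {
                return Ids[typeof(T)];
            }
        }

        class EventRepository
        {
            public T GetEvent<T>() where T : Event
            {
                Guid id = EventTypeRegistry.IdFor<T>();
                return (T)LoadById(id); // hydrate subscriptions etc. from the database
            }

            private Event LoadById(Guid id)
            {
                /* data access elided */
                return null;
            }
        }

    With this shape, the GetEvent<T>() call from the question becomes var waking = repository.GetEvent<WakingEvent>();, and the ID-to-subclass knowledge lives in exactly one place.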

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM.

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper)

    Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution.

    With the Part 11 certification, Oracle's WebCenter now provides regulated life science organizations the ability to manage regulatory content in WebCenter, as well as the ability to take advantage of all of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Render, Meta Data History, Audit Trail Report, and Access Reporting.

    Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.

    USDM provides:

      • Expertise in Life Science ECM Business Processes
      • Prebuilt Life Science Configuration in WebCenter
      • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle's expanding commitment to Life Sciences. For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

  • Professional immigration

    - by etranger
    Hello all,

    Does anyone here have practical advice on professional relocation from Russia to Europe? The reasons behind making such a decision are far beyond the subject, perhaps, so I'll stick to the practical part. Having done some of the "common stuff" for finding a job, I am now facing two serious problems:

      1. I am a "dual-class" person, with a university degree in marketing and multiple years of self-studied computer competence (hence my writing here). I have professional experience in both areas.
      2. I don't currently hold a European work permit.

    From what I can see, this results in a normal HR person throwing out my CV as either being "overqualified" or "too much trouble with making the permit". I do have the skills and character to start my own business, but it requires start-up capital that I don't have; over the last years I had to pay high bills for the medical treatment of a family member, who has since passed away. Now, I'm almost out of debt. As you can probably guess, English is not a problem, and I'm open to new languages, but the first steps of entering the market, or the society, are the problematic part. I live close to Norway, and am trying to get some professional contacts there, but it hasn't given me any practical perspective so far. Any advice is greatly appreciated.

    EDIT: I am currently making my living off web site development and occasional consulting services, both in IT and marketing. For purely geographic reasons I'm dealing with clients that reside in the same city where I live, pop. 350 000. Being quite local, market requirements for web sites are simple and stable — clients need to control navigation, write articles in a word-like editor, upload illustrations and place ad banners, all with no additional programming. As many web developers do, I'm using my own content management system that fits these expectations. I have also started developing a newer version of this system that has better support for international environments, but I'm too distant from the real market demand in Europe to speak of the right track here. Technically it's based on PHP/MySQL and uses XSLT for templating. It allows for quick website deployment, and has architectural neatness, the lack of which made me abandon similar open-source solutions (Joomla and the like). Deployment time from rasterized design proofs is normally under 6-8 working hours; I don't know how that compares to world practice.

    EDIT 2: Can anyone share what the Norwegian (Scandinavian) web solutions market currently demands?

  • Criteria for selecting timeout value?

    - by stijn
    Situation: a piece of software reads frames of data from a file in a separate thread and puts them on a queue, emptied by another thread. That second thread periodically checks on the queue and fails rather gracefully, by showing an error message stating the read timed out, if no data is available within a certain amount of time. Initially this timeout was set to 200 msec. There was no real reasoning behind that constant, but it worked fine. We measured on a couple of machines, and for large data frames, larger than what would be used by customers, a read took about 20 msec with no other load on the machine.

    However, one customer now gets timeout errors now and then (on the second try all is fine; probably the file is in cache or the virus scanner leaves it alone). The programmers are like "well, yeah, but that customer's machine is full of cruft, virus scanners, tons of unneeded background processes, etc." Of course the customer is like "hey, this should just work, shouldn't it?" While the programmers have a point, since the software is heavy enough to validate the need for a dedicated machine, that does not make the customer happy. Increasing the timeout to 2 seconds, for example, solves the problem. But I'd like to make a proper decision now instead of just randomly picking some magic constant that is probably OK in 99% of cases. What criteria should be used for that?

    We could just pick a large number, but that feels wrong (and then we end up with a program that has the horrible behaviour of hanging when trying to read from a disconnected drive, for instance, whereas we'd rather make it show an error right away). Or we could make the timeout value a user setting, but then we need to document it clearly, and even then not all customers are tech-savvy enough to really understand what it does. Or we could try and wait until another customer reports timeouts and increase the value again. And again. Until we find something OK for 99.99% of the cases. Is there any good practice for this type of situation?
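
    One defensible alternative to a magic constant - a sketch that assumes you can record read durations during a warm-up phase or from field telemetry - is to derive the timeout from a high percentile of measured latencies times a safety factor, clamped to sane bounds:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class TimeoutPolicy
        {
            // The percentile, factor, and bounds below are illustrative
            // choices, not measured values.
            public static TimeSpan FromSamples(IList<TimeSpan> samples,
                                               double percentile = 0.99,
                                               double safetyFactor = 5.0)
            {
                if (samples == null || samples.Count == 0)
                    return TimeSpan.FromSeconds(2); // fallback before any data exists

                var sorted = samples.OrderBy(s => s).ToList();
                int index = Math.Max(0, (int)Math.Ceiling(percentile * sorted.Count) - 1);
                double ms = sorted[index].TotalMilliseconds * safetyFactor;

                // Never shorter than the old constant, never so long that a
                // disconnected drive looks like a hang.
                return TimeSpan.FromMilliseconds(Math.Min(Math.Max(ms, 200), 10000));
            }
        }

    The point is less the exact numbers than that the constant becomes traceable to data: when a customer's machine is slower, the recorded samples (and therefore the timeout) follow it.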

  • Segmentation and Targeting: Your Tools for Personalizing the Online Customer Experience

    - by Christie Flanagan
    In order to deliver the kind of personalized and engaging online experiences that customers expect today, look to segmentation and targeting. Segmentation is the practice of dividing your site visitors into distinct groups based on shared characteristics or behavior – for example, a segment may consist of site visitors who have visited pages related to a certain product type, or of visitors within the same age group or geographic area. The idea is that those within a segment are more likely to have common needs, problems or interests that can be served by your business. Targeting is the process by which the most relevant content, whether an article, promotion or other piece of content, is delivered to your visitors based on their segment membership.

    Segmentation and targeting are used to drive greater engagement on your web presence by delivering content to your site visitors that is tailored to their interests, behavior or other attributes. You may have a number of different goals for your segmentation and targeting efforts:

      • Up-sell or cross-sell to your customers
      • Conduct A/B testing on your offers and creative
      • Offer discounts, promotions or other incentives for the time and duration that you specify
      • Make it easier to find relevant information about products and services
      • Create a premium content model

    There are two different approaches you can take toward segmentation and targeting for your online customer experience initiatives. The first is more of a manual process, in which marketers manage the process of determining which segments to create and which content to target to those segments. The benefit of this approach is that it gives marketers a high level of control over the whole process, which works well when you have a thorough understanding of your segments and which content is most likely to serve their needs. Tools for marketer-managed segmentation and targeting are often built right in to your WEM platform, as they are with Oracle WebCenter Sites. The downside is that the more segments and content you have, the more time consuming and complicated it can be to manage manually.

    The second approach relies on predictive intelligence to automate the segmentation and targeting process. This allows optimization of the process to occur in real time. This approach helps reduce the burden of manual segmentation and targeting and can result in new insights into segments that you may never have thought of on your own. It also provides you with the capability to quickly test new offers and promotions on your site. Predictive segmentation and targeting can be achieved by using Oracle WebCenter Sites and Oracle Real-Time Decisions together.

    Get a taste for how Oracle WebCenter Sites and Oracle Real-Time Decisions combine to deliver powerful capabilities for predictive segmentation and targeting by watching this on-demand webcast introducing Oracle WebCenter Sites 11g or by reading IDC's take on the latest release of Oracle's web experience management solution. Be sure to return to the Oracle WebCenter blog on Thursday for a closer look at how to optimize the online customer experience using these two products together.

  • Why do old programming languages continue to be revised?

    - by SunAvatar
    This question is not, "Why do people still use old programming languages?" I understand that quite well. In fact the two programming languages I know best are C and Scheme, both of which date back to the 70s.

    Recently I was reading about the changes in C99 and C11 versus C89 (which seems to still be the most-used version of C in practice, and the version I learned from K&R). Looking around, it seems like every programming language in heavy use gets a new specification at least once per decade or so. Even Fortran is still getting new revisions, despite the fact that most people using it are still using FORTRAN 77.

    Contrast this with the approach of, say, the typesetting system TeX. In 1989, with the release of TeX 3.0, Donald Knuth declared that TeX was feature-complete and future releases would contain only bug fixes. Even beyond this, he has stated that upon his death, "all remaining bugs will become features" and absolutely no further updates will be made. Others are free to fork TeX and have done so, but the resulting systems are renamed to indicate that they are different from the official TeX. This is not because Knuth thinks TeX is perfect, but because he understands the value of a stable, predictable system that will do the same thing in fifty years that it does now.

    Why do most programming language designers not follow the same principle? Of course, when a language is relatively new, it makes sense that it will go through a period of rapid change before settling down. And no one can really object to minor changes that don't do much more than codify existing pseudo-standards or correct unintended readings. But when a language still seems to need improvement after ten or twenty years, why not just fork it or start over, rather than try to change what is already in use? If some people really want to do object-oriented programming in Fortran, why not create "Objective Fortran" for that purpose, and leave Fortran itself alone?

    I suppose one could say that, regardless of future revisions, C89 is already a standard and nothing stops people from continuing to use it. This is sort of true, but connotations do have consequences. GCC will, in pedantic mode, warn about syntax that is either deprecated or has a subtly different meaning in C99, which means C89 programmers can't just totally ignore the new standard. So there must be some benefit in C99 that is sufficient to impose this overhead on everyone who uses the language.

    This is a real question, not an invitation to argue. Obviously I do have an opinion on this, but at the moment I'm just trying to understand why this isn't just how things are done already. I suppose the question is: What are the (real or perceived) advantages of updating a language standard, as opposed to creating a new language based on the old?

  • Self-Executing Anonymous Function vs Prototype

    - by Robotsushi
    In JavaScript there are a few clearly prominent techniques for creating and managing classes/namespaces. I am curious what situations warrant using one technique vs. the other. I want to pick one and stick with it moving forward. I write enterprise code that is maintained and shared across multiple teams, and I want to know what the best practice is when writing maintainable JavaScript. I tend to prefer the self-executing anonymous function; however, I am curious what the community's vote is on these techniques.

    Prototype:

        function obj() {
        }

        obj.prototype.test = function() {
            alert('Hello?');
        };

        var obj2 = new obj();
        obj2.test();

    Self-Executing Anonymous Function:

        //Self-Executing Anonymous Function
        (function( skillet, $, undefined ) {
            //Private Property
            var isHot = true;

            //Public Property
            skillet.ingredient = "Bacon Strips";

            //Public Method
            skillet.fry = function() {
                var oliveOil;

                addItem( "\t\n Butter \n\t" );
                addItem( oliveOil );
                console.log( "Frying " + skillet.ingredient );
            };

            //Private Method
            function addItem( item ) {
                if ( item !== undefined ) {
                    console.log( "Adding " + $.trim(item) );
                }
            }
        }( window.skillet = window.skillet || {}, jQuery ));

        //Public Properties
        console.log( skillet.ingredient ); //Bacon Strips

        //Public Methods
        skillet.fry(); //Adding Butter & Frying Bacon Strips

        //Adding a Public Property
        skillet.quantity = "12";
        console.log( skillet.quantity ); //12

        //Adding New Functionality to the Skillet
        (function( skillet, $, undefined ) {
            //Private Property
            var amountOfGrease = "1 Cup";

            //Public Method
            skillet.toString = function() {
                console.log( skillet.quantity + " " +
                             skillet.ingredient + " & " +
                             amountOfGrease + " of Grease" );
                console.log( isHot ? "Hot" : "Cold" );
            };
        }( window.skillet = window.skillet || {}, jQuery ));
        //end of skillet definition

        try {
            //12 Bacon Strips & 1 Cup of Grease
            skillet.toString(); //Throws Exception
        } catch( e ) {
            console.log( e.message ); //isHot is not defined
        }

    I feel that I should mention that the self-executing anonymous function is the pattern used by the jQuery team.

    Update: When I asked this question I didn't truly see the importance of what I was trying to understand. The real issue at hand is whether or not to use new to create instances of your objects, or to use patterns which do not require constructors or the use of the new keyword. I added my own answer, because in my opinion we should make use of patterns which don't use the new keyword. For more information please see my answer.

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions, etc.

    Making the assumption that the software used to produce the data listed on the certificate of calibration must have gone through a "test/release" process and must be considered "released" software - does this also mean that the software used for adjustment must also be released? I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within the specifications.

    The real question I'm hoping to get answered: Is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...) The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually become released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition of calibration from the BIPM International Vocabulary of Metrology, 2.39:

        Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

    Followed by NOTE 2 (emphasis in original text): Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration.

    As a side note, I'm not sure why this got downvoted. It's regarding software and its use before and after release. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion based.

  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by a vendor of something with which one's code must interact is deficient in some way, in what cases is it better to:

      1. Work around the header's deficiencies in the main code
      2. Copy the header file to the local project and fix it
      3. Fix the header file in the spot where it's stored as a vendor-supplied tool
      4. Fix the header file in the central spot, but also make a local copy and try to always have the two match
      5. Do something else

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
                 uint16_t RESERVED0;
            ....
            __IO uint16_t BSRRL;    /* BSRR register is split to 2 * 16-bit fields BSRRL */
            __IO uint16_t BSRRH;    /* BSRR register is split to 2 * 16-bit fields BSRRH */
            ....
        } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one will only be interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRH and BSRRL as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
                 uint16_t RESERVED0;
            ....
            union   // Allow BSRR access as 32-bit register or two 16-bit registers
            {
                __IO uint32_t BSRR;                      // 32-bit BSRR register as a whole
                struct { __IO uint16_t BSRRL, BSRRH; };  // Two 16-bit parts
            };
            ....
        } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is, but apply an icky typecast in the main code, writing what would idiomatically be thePort->BSRR = 0x12345678; as *((uint32_t *)&(thePort->BSRRL)) = 0x12345678;, or would it be better to use a patched header file? If the latter, where should the patched file be stored and how should it be managed?

  • Speaker Notes...

    - by wulfers
    At a .NET User Group meeting this week, I experienced two poorly prepared speakers floundering through presentations. As a Lead Technologist at the company I work for, I have experience training technical staff and also giving presentations at code camps. Here are a few guidelines for aspiring speakers that you might find helpful:

      1. Do not stand in front of your audience and read your slides. This is offensive to your audience and not what they came for. Your slides are there to reinforce the information you are presenting, to give the audience a little clarification on some terms you may use, and to act as a visual aid for some complicated issues.

      2. Have someone review your presentation (slides, notes, ...) who speaks the language you will be presenting in fluently. Also record at least ten minutes of your presentation and have that same person review that. One of the speakers this week used the word "basically" fifty times in less than thirty minutes... I started to flinch every time he used the term.

      3. Be prepared - before the presentation begins. Don't make any last-minute changes to your presentation or demo code the night before. Don't patch your laptop or demo servers the night before. If possible, create a virtual image that you only use for presentations and use that (refreshed before every presentation).

      4. Know the level of expertise of your audience. Speaking above or below their abilities will make or break your presentation.

      5. Deliver what you promise. The presentation this week was supposed to be on BDD (Behavior Driven Development). The presenter completely ran off track; 90% of the discussion was about how his team mistakenly used TDD (Test Driven Development) and was unhappy with the results. Because of his loss of focus we only heard a rushed 10-minute presentation on BDD, which was a disservice to the audience.

      6. Practice your presentation with your own small team before you try this on a room full of people you don't know. A side benefit of doing this with your own team is that you can get candid feedback from your team and also get kudos for training your own team. I find I can also turn my presentations into technical white papers and get a third benefit from the work I've put into a presentation.

      7. Sharpen your own saw. Pick a topic that is fairly current, something you would like to learn about that would benefit your current career path.

      8. Have fun doing it.

  • Turn-Based RPG Battle Instance Layout For Larger Groups

    - by SoulBeaver
    What a title, eh? I'm currently designing a video game; a turn-based RPG like Final Fantasy (because everybody knows Final Fantasy). It's a 2D sprite game. These are my ideas for combat:

      • The player has a group of 15 members (main character included).
      • During battle, five of the group are designated as active, and appear in the battle.
      • These five may be switched out at leisure, or when one of the five dies.
      • At any time, the waiting members can cast buffs, be healed by the active members, or perform special attacks.
      • Battles should contain 10+ monsters at least. I'm aiming for 20, but I'm not sure if that's possible yet.
      • Battles should feel larger than normal due to the interaction of waiting members, active members and the increased number of monsters per battle.
      • The player has two rows in which to put the active members: front and back.
      • Depending on the implementation, I might allow comboing of player attacks and skills.

    These are just design ideas, so beware! I have not been able to test this out yet - I have no idea if any of these ideas bunched together will make for a compelling game. What sounds good on paper doesn't necessarily have to be good in practice!

    What I'm asking now is how to create the layout for this. My starting point are the battles in Final Fantasy VI, with up to 5-6 monsters on the left and the characters on the right - monsters on both sides if it's a pincer attack. However, this view would not be feasible with my goal of 20 monsters and 5 characters. All the monsters on the left would appear cluttered unless I scale them far, far back. If I create a pincer-like map, then no real pincer attack would be possible. If I space the monsters out, I force the player to scroll the screen - a game mechanic I've come across and not enjoyed, imho.

    My question is: does anybody have any layouts or guides for designing battle maps in turn-based RPGs, especially with a larger number of enemies taken into consideration? How should it look? I am not asking for specific combat mechanics, just the layout for the moment.
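
    Not an answer on aesthetics, but as a purely illustrative sketch (the screen size, the 60% enemy area, and the staggering are assumptions, not from the question), one way to fit ~20 monsters without scrolling is to compute positions in a roughly square grid on the enemy side and shrink spacing as the count grows:

        using System;
        using System.Collections.Generic;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
        }

        static class BattleLayout
        {
            // Arrange `count` monsters in columns on the left 60% of the screen.
            public static List<Vec2> EnemyPositions(int count, float screenW, float screenH)
            {
                var positions = new List<Vec2>();
                int rows = (int)Math.Ceiling(Math.Sqrt(count));      // roughly square grid
                int cols = (int)Math.Ceiling(count / (double)rows);
                float areaW = screenW * 0.6f, areaH = screenH * 0.8f;
                for (int i = 0; i < count; i++)
                {
                    int r = i % rows, c = i / rows;
                    // Stagger alternate columns so the group reads as a mob,
                    // not a lattice.
                    float stagger = (c % 2 == 0) ? 0f : areaH / rows / 2f;
                    positions.Add(new Vec2(
                        areaW * (c + 0.5f) / cols,
                        screenH * 0.1f + areaH * r / rows + stagger));
                }
                return positions;
            }
        }

    Sprite scale would then be derived from areaH / rows, which is what keeps 20 enemies from looking cluttered at Final Fantasy VI-style sizes.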

  • How do I know if I am using Scrum methodologies?

    - by Jake
    When I first started at my current job, my purpose was to rewrite a massive Excel-VBA workbook application in C# WinForms, because it was thought that the new C# app would fix all existing problems and have all the new features for a perfect world. If it were a direct port, in theory it would be easy, as I would just need to go through all the formulas, conditional formatting, validations, VBA, etc. to understand it. However, that was not the case. Many of the new features are tightly dependent on business logic which I am unfamiliar with. As a solo programmer, the first year was spent solely on deciphering the Excel workbook and writing the C# app. In theory, I had the business people to "help" me specify requirements, how the GUI looks and works, and to test the app; but in practice it is like a constant tsunami of feature creep.

    At the beginning of the second year I managed to convince management that this was not going anywhere. I made them start from scratch with the Excel-VBA. I have this "issue log" saved on the network; each time they find something they don't like about the Excel-VBA app, they write it in there. I check the log daily and consolidate issues (in my mind), mainly into 2 groups: (1) requires massive change, and (2) can be fixed in the current version. For massive-change issues, I make a copy of the latest Excel-VBA and give it a new version number, then work on it whenever I can. For current-version fixes, I make the changes in a few days to a week, and then immediately release it. I also ensure I apply the same change to any in-progress massive-change future versions. This has gone on for about 4 months and I feel it works great. I have made many releases and solved many real issues, and also understood the business logic more and more.

    However, my boss (non-IT trained) thinks what I am doing are just ad hoc changes and that I am not looking at the "bigger picture". I am struggling to convince my boss that this works. So I hope to formalise my approach and maybe borrow a buzzword to confuse him. Incidentally, I read about Agile and Scrum, about backlogs and sprints. But it's all very vague to me still.

    QUESTION (finally): I want to tell him that this is Scrum! But I want to hold my breath first and ask whether my current approach is considered Scrum or Scrum-like? How can I make it more Scrum-like? Note that I have only myself; there's no project leader or team.

  • Oracle Linux Hands-on Lab from your Home? Yes You Can Do That!

    - by Zeynep Koch
    We're taking the very popular OTN Sysadmin Days and going virtual! We have two days to choose from:

      • Americas - Tuesday January 15th, 2013 - 9:00 a.m. – 1:00 p.m. PT / 12:00 p.m. – 4:00 p.m. ET / 1:00 p.m. – 5:00 p.m. BRT
      • EMEA - Tuesday January 29th, 2013 - 9:00–13:00 GMT / 10:00–14:00 CET / 12:00–16:00 AST / 13:00–17:00 MSK / 14:30–18:30 IST

    You'll be able to perform real-world tasks with Oracle Linux, and if you have questions you can ask for help from the Oracle experts through the chat window. There's one caveat: you'll have to do a little homework ahead of time. Load the virtual images onto your laptop, find the instructions, and make sure everything is working properly. This wiki explains what you need to do: https://wikis.oracle.com/display/virtualsysadminday/Home. If you have questions, ask them as comments on the wiki.

    Oracle Linux Track:

      1. Oracle Linux Technology Overview - In this session we will go over the latest Oracle Linux features, including tools for Linux administration such as the Unbreakable Linux Network (ULN) and public yum. We will also show you a demo of Ksplice zero-downtime kernel updates, only available to Oracle Linux customers. You will see how easy it is to switch from Red Hat support to Oracle Linux support by using ULN. Last but not least, we'll introduce the 3 hands-on labs that will follow this session in the Linux track.

      2. HOL: Package Management - In this lab session you will use package management on Oracle Linux using RPM and yum. Some of the tasks that you will experience include listing installed packages, obtaining additional information about packages, searching for packages and installing/updating them, as well as verifying package integrity and removing software. We'll also review Linux services and run levels: how to start and stop them, checking the status of a particular service, and enabling a service to be started automatically at system boot.

      3. HOL: Storage Management - In this hands-on lab session, you will learn about storage management with LVM2, the Linux Logical Volume Manager: preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices and configuring encrypted block devices.

      4. HOL: Btrfs File System - In this hands-on lab session, we will introduce you to the Btrfs file system. You will be able to create and mount a Btrfs file system and learn to set up a mirrored/striped file system across multiple block devices. You'll also learn how to add and remove block devices, and create file system snapshots.

    Register for this FREE event.

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this:

      1. I create a new branch (with Git, in my case)
      2. I write a few unit tests (with JUnit, for example)
      3. I write my code until it passes my tests

    So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant so I cannot have my preferred language/platform/toolkit. I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little, so I'm forced to "learn by doing" (actually, programming by "trial and error"), and this makes me anxious.

    Please note that, at some point in the learning process, usually I have already:

      • read one or more five-star books
      • followed one or more web tutorials (writing working code a line at a time)
      • created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans and others as well)

    Despite all my efforts, at this point usually I have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean.

    Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for that small tool. In the same way, I cannot use an independent small project for my experiments any more, because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage.

    How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?

  • Creating an object that is ready to be used & unset properties - with IoC

    - by GetFuzzy
    I have a question regarding the specifics of object creation and the usage of properties. A best practice is to put all the properties into a state such that the object is useful when it's created. Object constructors help ensure that required dependencies are created. I've found myself following a pattern lately, and then questioning its appropriateness. The pattern looks like this...

        public class ThingProcesser
        {
            public List<Thing> CalculatedThings { get; set; }

            public ThingProcesser()
            {
                CalculatedThings = new List<Thing>();
            }

            public double FindCertainThing()
            {
                CheckForException();
                foreach (var thing in CalculatedThings)
                {
                    //do some stuff with things...
                }
                return 0.0; // result elided in the original example
            }

            public double FindOtherThing()
            {
                CheckForException();
                foreach (var thing in CalculatedThings)
                {
                    //do some stuff with things...
                }
                return 0.0; // result elided in the original example
            }

            private void CheckForException()
            {
                if (CalculatedThings.Count < 2)
                    throw new InvalidOperationException("Calculated things must have at least 2 items");
            }
        }

    The list of items is not being changed, just looked through by the methods. There are several methods on the class, and to avoid having to pass the list of things to each function as a method parameter, I set it once on the class. While this works, does it violate the principle of least astonishment?

    Since starting to use IoC I find myself not sticking things into the constructor, to avoid having to use a factory pattern. For example, I can argue with myself and say that the ThingProcesser really needs a List to work, so the object should be constructed like this:

        public class ThingProcesser
        {
            public List<Thing> CalculatedThings { get; set; }

            public ThingProcesser(List<Thing> calculatedThings)
            {
                CalculatedThings = calculatedThings;
            }
        }

    However, if I did this, it would complicate things for IoC, and this scenario hardly seems appropriate for something like the factory pattern.

    So in summary, are there some good guidelines for when something should be part of the object state, vs. passed as a method parameter? When using IoC, is the factory pattern the best way to deal with objects that need to be created with state? If something has to be passed to multiple methods in a class, does that render it a good candidate to be part of the object's state?
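
    For comparison, here is a sketch of the pattern often suggested for this tension (an injected factory; the names are mine, and this is one option rather than the answer): the container supplies the factory as a dependency, while the caller supplies the runtime state.

        using System;
        using System.Collections.Generic;

        // Assumes the constructor-taking ThingProcesser from the second
        // snippet above.
        public interface IThingProcesserFactory
        {
            ThingProcesser Create(List<Thing> calculatedThings);
        }

        public class ThingProcesserFactory : IThingProcesserFactory
        {
            public ThingProcesser Create(List<Thing> calculatedThings)
            {
                if (calculatedThings == null)
                    throw new ArgumentNullException("calculatedThings");
                return new ThingProcesser(calculatedThings);
            }
        }

        // A consumer takes the factory from the container and builds the
        // processer only once the runtime data is available.
        public class ThingConsumer
        {
            private readonly IThingProcesserFactory factory;

            public ThingConsumer(IThingProcesserFactory factory)
            {
                this.factory = factory;
            }

            public double Process(List<Thing> things)
            {
                return factory.Create(things).FindCertainThing();
            }
        }

    This keeps the "object is useful when created" rule intact without forcing the IoC container to know about data that only exists at runtime.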

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I've received a bunch of emails from PASS Summit "First Timers" who are also somehow new to SQL Server (by "somehow" I mean people with less than 6 months' experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to:

      • have a broad view of the entire SQL Server platform
      • have some insight into some specific areas of SQL Server

    Given that I'm on (semi-)vacation and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them... but let's pretend it to be true), I've decided to write a post to answer these common questions. Of course this is my personal point of view, and given that the number and quality of sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or - even better - answer with another post. I'm sure it will be very helpful to all the SQL Server beginners out there.

    I've imposed on myself a maximum of 6 sessions per track. Why 6? Because it's the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: "If I have one day to spend in training, which sessions should I watch?". Of course a Summit is not like a course, so a lot of very basic concepts of well-established technologies won't be found here. Analysis Services, Integration Services and MDX are not part of the Summit this time (at least not the basic part of them).

    Enough with that; let's start with the session list ideal for a good overview of the whole SQL Server platform:

      • Geospatial Data Types in SQL Server 2012
      • Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search
      • XQuery and XML in SQL Server: Common Problems and Best Practice Solutions
      • Microsoft's Big Play for Big Data
      • Dashboards: When to Choose Which MSBI Tool
      • Microsoft BI End-User Tools 360°

    For what concerns Database Development, I recommend the following sessions:

      • Understanding Transaction Isolation Levels
      • What to Look for in Execution Plans
      • Improve Query Performance by Fixing Bad Parameter Sniffing
      • A Window into Your Data: Using SQL Window Functions
      • Practical Uses and Optimization of New T-SQL Features in SQL Server 2012
      • Taking MERGE Beyond the Basics

    For Business Intelligence Information Delivery:

      • Analyzing SSAS Data with Excel
      • Building Compelling Power View Reports
      • Managed Self-Service BI
      • PowerPivot 101
      • SharePoint for Business Intelligence
      • The Best Microsoft BI Tools You've Never Heard Of

    And for Business Intelligence Architecture & Development:

      • BI Power Hour
      • Building a Tabular Model Database
      • Enterprise Information Management: Bringing Together SSIS, DQS, and MDS
      • SSIS Design Patterns
      • Storing Columnstore Indexes
      • Hadoop and Its Ecosystem Components in Action

    Besides the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx

    See you at PASS Summit!

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering: Now I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known theory. However, I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need to. Could this be fast enough for real-time effects?

    Rendered physics: I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK, traditional OpenGL "picking" is a similar method.

    This could be extended in a few ways:

      • For larger (non-bullet) objects you render a larger portion of the screen.
      • If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works like the traditional slow-moving iterative physics test as well.
      • You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
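
    To make the color-picking idea concrete - a sketch that stops at the framebuffer bytes, since the render and read-back calls depend on your graphics API - the core of the technique is just an invertible mapping between object IDs and flat colors:

        using System;

        static class ColorPicking
        {
            // Encode an object ID as an opaque RGB color (~16 million IDs).
            // ID 0 is reserved for the background (cleared to black).
            public static byte[] IdToColor(int id)
            {
                return new byte[]
                {
                    (byte)((id >> 16) & 0xFF),
                    (byte)((id >> 8) & 0xFF),
                    (byte)(id & 0xFF)
                };
            }

            // Decode a color read back from the framebuffer into an ID.
            public static int ColorToId(byte r, byte g, byte b)
            {
                return (r << 16) | (g << 8) | b;
            }

            // Given the RGB bytes of the single center pixel (scene rendered
            // from the gun's point of view, each object drawn flat in its ID
            // color), report what the bullet hit, or -1 for nothing.
            public static int HitTest(byte[] centerPixelRgb)
            {
                if (centerPixelRgb == null || centerPixelRgb.Length < 3)
                    throw new ArgumentException("expected at least 3 bytes (RGB)");
                int id = ColorToId(centerPixelRgb[0], centerPixelRgb[1], centerPixelRgb[2]);
                return id == 0 ? -1 : id;
            }
        }

    The catch, and the reason this pass must be rendered separately: lighting, fog, anti-aliasing and texture filtering all have to be off, or the read-back color no longer round-trips to an ID.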

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence of my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this:

      • SceneHandler - Contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate() messages to SceneListeners.
      • InputHandler - Polls the input once per tick, and sends a simple onKeyPressed message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera() events based on what the input is.
      • AgentHandler - Calls default actions on any Agents (AI) once per tick, and will check a stack for any new events that are registered, dispatching them to specific Agents as needed.

    So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it a good practice to have many small event handlers? I imagine, going the way I am, that I'd have to implement some kind of CollisionHandler. Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event-handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers?

    I realize in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene. Thus I feel my isolation of those systems is justified. However, I'm asking this question specifically approaching game logic type events.
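
    As a sketch of the "many small handlers" route (the names are invented for illustration, not a prescription): a minimal typed event bus lets a CollisionHandler stay tiny while still reaching the AgentHandler and friends without direct coupling.

        using System;
        using System.Collections.Generic;

        // Minimal typed publish/subscribe hub: each handler subscribes to the
        // event types it cares about and stays ignorant of the others.
        class EventBus
        {
            private readonly Dictionary<Type, List<Action<object>>> subscribers =
                new Dictionary<Type, List<Action<object>>>();

            public void Subscribe<T>(Action<T> handler)
            {
                List<Action<object>> list;
                if (!subscribers.TryGetValue(typeof(T), out list))
                {
                    list = new List<Action<object>>();
                    subscribers[typeof(T)] = list;
                }
                list.Add(e => handler((T)e));
            }

            public void Publish<T>(T evt)
            {
                List<Action<object>> list;
                if (subscribers.TryGetValue(typeof(T), out list))
                    foreach (var notify in list)
                        notify(evt);
            }
        }

        // Hypothetical event type mirroring the design above.
        class CollisionEvent { public object A, B; }

        class CollisionHandler
        {
            public CollisionHandler(EventBus bus)
            {
                // The physics tick would publish CollisionEvents; an
                // AgentHandler could subscribe to the same type without
                // knowing this class exists.
                bus.Subscribe<CollisionEvent>(e =>
                    Console.WriteLine("collision: " + e.A + " / " + e.B));
            }
        }

    Whether this beats one consolidated EntityHandler mostly comes down to how many event types end up crossing subsystem boundaries.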

  • Adding Facebook Open Graph Tags to an MVC Application

    - by amaniar
    If you have any kind of share functionality within your application, it's a good practice to add the basic Facebook Open Graph tags to the header of all pages. For an MVC application this can be as simple as adding these tags to the head section of the Layout file:

        <head>
            <title>@ViewBag.Title</title>
            <meta property="og:title" content="@ViewBag.FacebookTitle" />
            <meta property="og:type" content="website"/>
            <meta property="og:url" content="@ViewBag.FacebookUrl"/>
            <meta property="og:image" content="@ViewBag.FacebookImage"/>
            <meta property="og:site_name" content="Site Name"/>
            <meta property="og:description" content="@ViewBag.FacebookDescription"/>
        </head>

    These ViewBag properties can then be populated from any action:

        private ActionResult MyAction()
        {
            ViewBag.FacebookDescription = "My Actions Description";
            ViewBag.FacebookUrl = "My Full Url";
            ViewBag.FacebookTitle = "My Actions Title";
            ViewBag.FacebookImage = "My Actions Social Image";
            ....
        }

    You might want to populate these ViewBag properties with default values when the actions don't populate them. This can be done in two places:

    1. In the Layout itself (check the ViewBag properties and set them if they are empty):

        @{
            ViewBag.FacebookTitle = ViewBag.FacebookTitle ?? "My Default Title";
            ViewBag.FacebookUrl = ViewBag.FacebookUrl ?? HttpContext.Current.Request.RawUrl;
            ViewBag.FacebookImage = ViewBag.FacebookImage ?? "http://www.mysite.com/images/logo_main.png";
            ViewBag.FacebookDescription = ViewBag.FacebookDescription ?? "My Default Description";
        }

    2. Create an action filter and add it to all controllers or your base controller:

        public class FacebookActionFilterAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var viewBag = filterContext.Controller.ViewBag;
                viewBag.FacebookDescription = "My Actions Description";
                viewBag.FacebookUrl = "My Full Url";
                viewBag.FacebookTitle = "My Actions Title";
                viewBag.FacebookImage = "My Actions Social Image";

                base.OnActionExecuting(filterContext);
            }
        }

    Add the attribute to your BaseController:

        [FacebookActionFilter]
        public class HomeController : Controller
        {
            ....
        }

  • How can you tell whether to use Composite Pattern or a Tree Structure, or a third implementation?

    - by Aske B.
    I have two client types, an "Observer"-type and a "Subject"-type. They're both associated with a hierarchy of groups. The Observer will receive (calendar) data from the groups it is associated with throughout the different hierarchies. This data is calculated by combining data from 'parent' groups of the group trying to collect data (each group can have only one parent). The Subject will be able to create the data (that the Observers will receive) in the groups they're associated with. When data is created in a group, all 'children' of the group will have the data as well, and they will be able to make their own version of a specific area of the data, but still linked to the original data created (in my specific implementation, the original data will contain time-period(s) and headline, while the subgroups specify the rest of the data for the receivers directly linked to their respective groups). However, when the Subject creates data, it has to check if all affected Observers have any data that conflicts with this, which means a huge recursive function, as far as I can understand.

    So I think this can be summed up to the fact that I need to be able to have a hierarchy that you can go up and down in, and in some places be able to treat groups as a whole (recursion, basically). Also, I'm not just aiming at a solution that works. I'm hoping to find a solution that is relatively easy to understand (architecture-wise, at least) and also flexible enough to be able to easily receive additional functionality in the future.

    Is there a design pattern, or a good practice to go by, to solve this problem or similar hierarchy problems?

    EDIT: Here's the design I have. The "Phoenix" class is named that way because I didn't think of an appropriate name yet. Besides this, I need to be able to hide specific activities for specific observers, even though they are attached to them through the groups.

    A little off-topic: personally, I feel that I should be able to chop this problem down to smaller problems, but it escapes me how. I think it's because it involves multiple recursive functionalities that aren't associated with each other and different client types that need to get information in different ways. I can't really wrap my head around it. If anyone can guide me in a direction of how to become better at encapsulating hierarchy problems, I'd be very glad to receive that as well.
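
    A minimal Composite-style sketch of the group hierarchy (the string payload and method names are placeholders for illustration): data is combined by walking up the single-parent chain, and conflict checks walk down the children.

        using System;
        using System.Collections.Generic;

        class Group
        {
            public Group Parent { get; private set; }   // at most one parent
            public List<Group> Children = new List<Group>();
            public List<string> OwnData = new List<string>();

            public Group(Group parent)
            {
                Parent = parent;
                if (parent != null)
                    parent.Children.Add(this);
            }

            // Observer side: this group's data plus everything inherited
            // from ancestors.
            public IEnumerable<string> CollectData()
            {
                for (var g = this; g != null; g = g.Parent)
                    foreach (var d in g.OwnData)
                        yield return d;
            }

            // Subject side: visit this group and all descendants, e.g. to
            // check every affected observer for conflicts before creating
            // new data.
            public void VisitSubtree(Action<Group> visit)
            {
                visit(this);
                foreach (var child in Children)
                    child.VisitSubtree(visit);
            }
        }

    The two traversals being separate methods is the point: the "huge recursive function" decomposes into one cheap upward walk per reader and one downward walk per write.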

  • Notes - Part I - Say Hello from Java

    - by Silviu Turuga
    Sometimes we need to take small notes to remember things. One way to do this is to use sticky notes and have them all around our desktop. But what happens if you have a lot of notes and a small office? You'll need a piece of software that will sort things for you and also provide a quick way to retrieve a note when needed. Did I mention that this will keep your desktop clean and also reduce paper waste? During the next days we're going to create an application that will let you manage your notes, put them in different categories, etc. I'll show you step by step what you need to do, and finally you'll have the application running on multiple systems, such as Mac, Windows, Linux, etc. The only prerequisite for this lesson is to have JDK 7 with JavaFX installed and an IDE, preferably NetBeans. I'll call this application Notes...

    Part I - Say Hello from Java

      1. From NetBeans go to File->New Project.
      2. Choose JavaFX->JavaFX FXML Application.
      3. Project Name: Notes; FXML name: NotesUI; check Create Application Class and name it Main. After this the project is created and you'll see its structure.
      4. As a best practice I advise you to have your code in your own package instead of the default one. Right click on Source Packages and choose New->Java Package. Name it something like this: com.turuga.notes and click Next.
      5. After the package is created, select all 3 files from step #3 and drag them over the new package. Choose Refactor, as this will make sure all the references are correctly moved inside the new package.
      6. If you try to run the project now, you'll get an error: Unable to find class: Main. Right click on the project name Notes and click Properties. Go to Run and you'll see Application Class set to Main; because we have defined our own package, this location has changed, so click on Browse and the correct one appears: com.turuga.notes.Main.
      7. The last modification before running the project is to right click on NotesUI.fxml and choose Edit (if you double click, it will open in JavaFX Scene Builder). Look around line 9 and change fx:controller="NotesUIController" to fx:controller="com.turuga.notes.NotesUIController".
      8. Now you are ready to run it.

    In the next lesson we'll continue to play with NetBeans and start working on the interface of our project.

    Read the article

  • Imperative vs. component based programming [closed]

    - by AlexW
    I've been thinking about how programming, and more specifically the teaching of programming, is advocated amongst the (online) community. Often I've heard that Ruby and RoR form an ideal platform for learning to program. I completely disagree. RoR and Ruby are based on the component-based paradigm, which makes them ideal for rapid application development, much like the MVC model in PHP and ASP.NET. But learning a proper imperative language like Java or C/C++ (or even Perl and PHP) is the only way for a new programmer to explore logic itself without getting bogged down in architectural concerns like the need for separation of concerns and the preference for components.

    Maybe it's a personal preference thing. I rather think the most interesting aspects of programming are the procedural bits of code I write that actually do stuff, rather than the project planning and modelling that come out of fully object-oriented engineering or simply using the MVC model. I know this may sound confused to some of you, but I feel strongly that programming is best taught through imperative and procedural methods; architectural (component) methods come later, if at all. After all, none of the amazing algorithms that exist were based on OOP practice! It's all procedural code when it comes to the 'magic'. OOP is useful for creating products and utilities; algorithms are what make things happen and move data around, so imperative (and/or procedural) code is what matters most.

    When I see programmers recommending Ruby on Rails to newbie developers, I think it's just so wrong. Just because you write less code with Ruby does not make it easier! It's the opposite: you have to know loads more to appreciate its succinct nature. New coders who really want to understand the nuts and bolts of coding need to go away and learn to write methods/functions (i.e. imperative programming) and work in a procedural style, in order to grasp the fundamentals first, before looking into architectural ways of working.

    So, my question is: should Ruby ever be recommended as a first language? I think no (obviously)... what arguments are there for it?

    Read the article

  • Should Item Grouping/Filter be in the ViewModel or View layer?

    - by ronag
    I'm in a situation where I have a list of items that need to be displayed depending on their properties. What I'm unsure of is where the best place is to put the filtering/grouping logic for the view-model state. Currently I have it in my view using converters, but I'm unsure whether I should have the logic in the view model instead. E.g.:

    ViewModel layer:

    ```csharp
    class ItemViewModel
    {
        DateTime LastAccessed { get; set; }
        bool IsActive { get; set; }
    }

    class ContainerViewModel
    {
        ObservableCollection<Item> Items { get; set; }
    }
    ```

    View layer:

    ```xml
    <TextView Text="Active Items"/>
    <List ItemsSource="{Binding Items, Converter=GroupActiveItemsByDay}"/>
    <TextView Text="Inactive Items"/>
    <List ItemsSource="{Binding Items, Converter=GroupInActiveItemsByDay}"/>
    ```

    Or should I build it like this?

    ViewModel layer:

    ```csharp
    class ContainerViewModel
    {
        ObservableCollection<IGrouping<string, Item>> ActiveItemsByGroup { get; set; }
        ObservableCollection<IGrouping<string, Item>> InActiveItemsByGroup { get; set; }
    }
    ```

    View layer:

    ```xml
    <TextView Text="Active Items"/>
    <List ItemsSource="{Binding ActiveItemsByGroup}"/>
    <TextView Text="Inactive Items"/>
    <List ItemsSource="{Binding InActiveItemsByGroup}"/>
    ```

    Or maybe something in between?

    ViewModel layer:

    ```csharp
    class ContainerViewModel
    {
        ObservableCollection<Item> ActiveItems { get; set; }
        ObservableCollection<Item> InActiveItems { get; set; }
    }
    ```

    View layer:

    ```xml
    <TextView Text="Active Items"/>
    <List ItemsSource="{Binding ActiveItems, Converter=GroupByDate}"/>
    <TextView Text="Inactive Items"/>
    <List ItemsSource="{Binding InActiveItems, Converter=GroupByDate}"/>
    ```

    I guess my question is: what is good practice in terms of which logic to put into the ViewModel and which to put into the binding in the View? They seem to overlap a bit.

    Read the article

  • How do I keep user input and rendering independent of the implementation environment?

    - by alex
    I'm writing a Tetris clone in JavaScript. I have a fair amount of experience in programming in general, but am rather new to game development. I want to separate the core game code from the code that ties it to one environment, such as the browser. My quick thoughts led me to keeping the rendering and input functions external to my main game object: I could pass the current game state to the rendering method, which could render using canvas, elements, text, etc., and I could map raw input to game input events such as "move piece left" or "rotate piece clockwise". I am having trouble designing how this should be implemented in my object, though. Should I pass references to functions that the main object will use to render and process user input? For example:

    ```javascript
    var TetrisClone = function(renderer, inputUpdate) {
        this.renderer = renderer || function() {};
        this.inputUpdate = inputUpdate || function() {};
        this.state = {};
    };

    TetrisClone.prototype = {
        update: function() {
            // Get user input via the function passed to the constructor.
            var inputEvents = this.inputUpdate();

            // Update game state.

            // Render the current game state via the function passed to the constructor.
            this.renderer(this.state);
        }
    };

    var renderer = function(state) {
        // Render blocks to the browser page.
    };

    var inputEvents = {};
    var charCodesToEvents = { 37: "move-piece-left" /* ... */ };

    document.addEventListener("keypress", function(event) {
        inputEvents[event.which] = true;
    });

    var inputUpdate = function() {
        var translatedEvents = [], event, translatedEvent;
        for (event in inputEvents) {
            if (inputEvents.hasOwnProperty(event)) {
                translatedEvent = charCodesToEvents[event];
                translatedEvents.push(translatedEvent);
            }
        }
        inputEvents = {};
        return translatedEvents;
    };

    var game = new TetrisClone(renderer, inputUpdate);
    ```

    Is this a good game design? How would you modify this to suit best practice with regard to making a game as platform- and input-independent as possible?

    Read the article
