Search Results

Search found 464 results on 19 pages for 'the consequences of cheat'.


  • The Grenelle II Act In France: A Milestone Towards Integrated Reporting

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle. In July 2010, France took a significant step towards mandating integrated sustainability and financial reporting for all large companies with a new law called Grenelle II. Article 225 of Grenelle II requires that many companies listed on the French stock exchanges incorporate information on the social and environmental consequences of their activities into their annual reports, along with their societal commitments to sustainable development. The decree implementing Article 225 of Grenelle II was passed in April 2012. Grenelle II is the strongest governmental mandate yet in support of sustainability reporting. The law defines the phase-in process, with large listed companies expected to comply in their 2012 reports and smaller companies expected to comply in their 2014 annual reports. This extra-financial information will have to be embedded in the annual management report, approved by the Board of Directors, verified by a third-party body, and presented to the annual general meeting. The subjects that must be reported on are grouped into Environmental, Social, and Governance categories. Oracle solutions can help organizations integrate financial and sustainability reporting and provide a more accurate and auditable approach to collecting, consolidating, and reporting such environmental, social, and economic metrics. Through Oracle Environmental Accounting and Reporting and the Oracle Hyperion Financial Management Sustainability Starter Kit, organizations can collect environmental, social, and governance data and consolidate corporate sustainability reporting data from multiple systems and business units. For more information about these solutions, please contact [email protected].

    Read the article

  • Problem with WCF-SQL Adapter

    - by Paul Petrov
    When using the WCF receive adapter with SQL binding in polling mode, please be aware of the following problem. Problem: At regular but seemingly random intervals the application stops processing new requests, places a lock on the database, and prevents other applications from accessing it. Initially it looked like a DTC issue, as it was a distributed transaction that stalled most of the time. Symptoms: Orchestration instances in Dehydrated state, the receive location not picking up new messages, exclusive locks on database tables, errors in the DTC trace. Cause: Microsoft has confirmed that there is a bug in the WCF-SQL adapter. In the receive adapter binding configuration there is a receiveTimeout property, set to 10 minutes by default. If no data is found in the table during this period, the adapter starts a new thread and allocates more memory without releasing the old resources. Thus, if there is no new data in the table for a long time, a new thread is created in the host instance every 10 minutes until the thread count reaches the threshold (1000), at which point the host instance has no threads left and cannot start or complete any tasks. If other artifacts are hosted in the instance they suffer the consequences as well. Solution: - Set receiveTimeout to the maximum value, 24.20:31:23.6470000. - Place WCF-SQL receive locations in a separate host to give them their own thread pool and eliminate the impact on other processes. - Ensure the dedicated WCF-SQL host instances are restarted at an interval less than or equal to receiveTimeout to flush threads and memory. - Monitor the performance counters Process/Thread Count/BTSNTSvc{n} for the thread count trend and respond to alerts by restarting the host instance. If you use the WCF-SQL adapter in notification mode, make sure to remove sqlAdapterInboundTransactionBehavior, otherwise the location will exhibit the same issue. In that case, however, setting receiveTimeout doesn't help and a new thread will be created at the default interval (10 minutes), ignoring the maximum setting.

    Read the article

  • What is the Sarbanes-Oxley (SOX) Act?

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; instead, it outlines which pieces of data are to be stored and for how long. The SOX act targets the financial side of companies, but its impact can be seen in the technology arena as well, because IT is responsible for storing all of a company’s electronic records regardless of file type. According to SearchCIO, the act specifies that all records and electronic messages must be saved for no less than five years, and the consequences for non-compliance are fines, imprisonment, or both. The rules of the Sarbanes-Oxley Act that affect the management of electronic records, again according to SearchCIO, cover: allowed practices regarding the destruction, alteration, or falsification of records; the retention period for records storage (best practices indicate that corporations securely store all business records using the same guidelines set for public accountants); and the types of business records that need to be stored, namely business records and business communications, including electronic communications. References: SOXLaw: The Sarbanes-Oxley Act 2002, retrieved May 2011 from http://www.soxlaw.com/; SearchCIO: What is the Sarbanes-Oxley Act (SOX)?, retrieved May 2011 from http://searchcio.techtarget.com/definition/Sarbanes-Oxley-Act

    Read the article

  • In the Aggregate: How Will We Maintain Legacy Systems?

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. July 2007 Story About a Burst Steam Pipe in Manhattan We've heard about software rot and technical debt. And we've heard from the likes of: "Uncle Bob" Martin - who warned us about "the consequences of making a mess". Michael C. Feathers - who gave us guidance in 'Working Effectively With Legacy Code'. So the software engineering community is certainly aware of these issues. But I feel like our aggregate society does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes: ...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it. If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My Question: Do you share my concern? And if so, is there a way that we can avoid the software equivalent of NYC and the steam pipes?

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 introduced what Microsoft calls ‘Dynamic Memory’. This means that the memory assigned to guest virtual machines, beyond the start-up RAM, can vary according to demand, changing dynamically while the VM is running based on the workload of the applications running inside. If demand outstrips supply, then memory can be rationed according to the ‘memory weight’ assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware’s Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware’s Memory Overcommit is well-proven in a number of different configurations, Hyper-V’s ‘Dynamic Memory’ is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article

  • Is there a modified LGPL license that allows static linking?

    - by Petr Pudlák
    The LGPL requires that if a program uses an LGPL-ed library, users must be able to re-link the program with a different version of the library: ... d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. ... However, in some cases this can pose considerable difficulties. In particular, Haskell programs are almost always statically compiled. Moreover, the compiler performs cross-module optimizations, so it is very hard to satisfy this condition. (See this link at the Haskell Wiki.) Therefore, I'm looking for a standard LGPL-like license that wouldn't require the possibility of re-linking. Some projects use their own modification of the LGPL, for example wxWidgets. But I'd rather use a standard license that is somewhat more official, perhaps checked by some law experts, and (L)GPL compatible. Is there one like that? (I'd also be interested to know whether there are any unforeseen consequences of such a modification of the LGPL.)

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post, Never Use a Warning When You Mean Undo, by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo. However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared, part of a flow elsewhere, or undoing something committed to the database (a rollback, I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may risk that data, not always within the application itself (at browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When no undo option is possible, use warning messages to communicate potentially dangerous or irrecoverable actions or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. A great article that got me thinking. Let's see more like that. Let's not forget there are more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article

  • Carriers Holding Your OS Updates Hostage

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/10/10/carriers-holding-your-os-updates-hostage.aspx Just a small rant here. Today the Windows Phone 8 GDR2 update finally became available for Nokia handset users. Now I’m not sure that it is entirely AT&T's fault that Samsung and HTC users got their updates two months ago and we are only now seeing it. It may have something to do with the Nokia Amber update. But every Windows Phone update on AT&T from 7.1 on seems to have been delayed. How is it that the premier Windows Phone carrier is always the last one to release updates? Smart phone ecosystems are a partnership between the OS provider, the hardware manufacturer and the carriers. If any one of those partners does not hold up its responsibilities then everyone gets a black eye. The goal for all involved should be to release updates as early as possible with reasonable assurance of stability. This ensures the satisfaction of consumers and increases the likelihood of future sales. From what I have seen so far, AT&T has been the one breaking the consumer’s trust in the Windows Phone ecosystem. Aside from voicing our dissatisfaction, we may need to start voting with our feet until they realize that being a poor citizen has consequences. Technorati Tags: ATT,Windows Phone,Windows Phone 8,Microsoft,Nokia,GDR2

    Read the article

  • Simultaneous AI in turn based games

    - by Eduard Strehlau
    I want to hack together a roguelike. Now I've thought about entity and world representation and run into a fairly big problem. If you want all the AI to act simultaneously, you would normally (in cellular automata, for example) just copy the cell buffer and let the actions of individual cells depend on the copy. Actions which are no longer valid because some cell earlier in the order changed the original environment (blocking the path, say) are just ignored or reapplied against the "current" (between turns) environment. After all cells have acted, you copy the current map to the buffer again. Now, for an environment with complex AI and big (data-wise) entities, the copying would take too long. So I thought you could put every action an entity makes into a queue (making no changes to the environment) and execute the whole queue after everyone has taken their move. Every interaction in this queue is between really interacting entities, so if an entity tries to attack another entity it sends a message to it, and the consequences of the attack become visible next turn, either by just examining the entity or by asking the entity for data. This would remove problems like what happens if an entity dies in the middle of the queue but still has queued actions or is messaged later on (all messages to it would go to null, and the messages from the entity would either just be sent or deleted; I haven't decided yet). But what would happen if a monster spawns a fireball which by itself tracks the player (in the same turn)? Should I add the fireball to the environment beforehand, i.e. make a change to the environment before executing the action list, or just add the ball to the "needs update" list as a special case, so it doesn't exist in the environment yet but still operates on it, spawning after the action list is evaluated? Are there any solutions or papers on this subject which I can take a look at? EDIT: I don't need information on writing a roguelike; I need information on turn-based AI with respect to a complex environment.
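
    For illustration, here is a minimal Python sketch of the queued-action approach described above. The names (Attack, World, the entity fields) are invented for the example; the point is only that entities queue intents during the turn and nothing touches the world until the whole queue is resolved.

        import collections

        # Hypothetical action type for the sketch.
        Attack = collections.namedtuple("Attack", ["attacker_id", "target_id", "damage"])

        class World:
            def __init__(self):
                self.entities = {}     # id -> entity state, e.g. {"hp": 10, "alive": True}
                self.pending = []      # actions queued during the current turn

            def queue(self, action):
                # Entities call this during their turn; the world is not modified yet.
                self.pending.append(action)

            def resolve_turn(self):
                # Apply every queued action against the state as it stood at end of turn.
                for action in self.pending:
                    target = self.entities.get(action.target_id)
                    if target is None or not target["alive"]:
                        continue       # messages to dead/missing entities "go to null"
                    target["hp"] -= action.damage
                    if target["hp"] <= 0:
                        target["alive"] = False
                self.pending.clear()

        world = World()
        world.entities = {1: {"hp": 5, "alive": True}, 2: {"hp": 8, "alive": True}}
        world.queue(Attack(attacker_id=2, target_id=1, damage=3))   # queued, not applied yet
        world.queue(Attack(attacker_id=1, target_id=2, damage=4))
        world.resolve_turn()
        print(world.entities)

    A spawned object such as the fireball could be queued as a special "spawn" action that the resolver applies before any action that targets it, which is one way of keeping it out of the environment until the queue is evaluated.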

    Read the article

  • MVVM - child windows and data contexts

    - by GlenH7
    Should a child window have its own data context (View-Model) or use the data context of the parent? More broadly, should each View have its own View-Model? Are there any rules to guide making that decision? What if the various View-Models will be accessing the same Model? I haven't been able to find any consistent guidance on my question. The MS definition of MVVM appears to be silent on child windows. For one example, I have created a warning message notification View. It really didn't need a data context since it was passed the message to display. But if I needed to fancy it up a bit, I would have tapped the parent's data context. I have run into another scenario that needs a child window and is more complicated than the notification box. The parent's View-Model is already getting cluttered, so I had planned on generating a dedicated VM for the child window. But I can't find any guidance on whether this is a good idea or what the potential consequences may be. FWIW, I happen to be working in Silverlight, but I don't know that this question is strictly a Silverlight issue.

    Read the article

  • My Big Break - this is my story and I am sticking to it ;)

    - by dbasnett
    Undertaking new and difficult tasks can have many wonderful consequences, don't you agree? Here is the story of my big break. Remember yours? During the mid 70's I was in the Navy and worked as a computer operator on the CNO's Command and Control computer system (WWMCCS) in the Washington Navy Yard. I was a tape ape, but knew that I wanted to be a systems programmer. One day the Lieutenant in charge of the OS group was running a test that required the development system to be re-booted, and I was politely hinting that I wanted out of computer operations. As he watched the accounting tape rewind to BOT and then search for where it had just been (several minutes), he told me that if I would fix "that" he would have me transferred. I couldn't say "Deal" fast enough. Up until then my programming experience had been on Edsger Dijkstra's favorite computer (sic), an IBM 1620. It took almost 6 months of learning the assembler for the Honeywell 6000 and finding the code responsible for rewinding the tape and then forwarding it. After much trial and error at o’dark thirty, I succeeded. The tape barely moved and my “patch” was later adopted by many other sites. Lieutenant Jack Cowan kept his promise and I have gone on to have a varied and enjoyable career. To Jack, and the rest of the crew (Ken, Stu, Neil, Tom, Silent W, Mr. Jacobs, Roy, Rocco, etc.), I’d like to thank you all.

    Read the article

  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong—the risk of delays and schedule and cost overruns is ever present—introduces the potential for huge financial losses. The resulting consequences can be significant, and directly impact both a company’s profit outlook and its share price performance—which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public and private sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM). Stock Shock reviews several high-profile recent project failures and, combined with other research, reviews the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization’s project portfolio and compromise long-range capital planning goals. Failure to Deliver—In Search of ROI The report also cites figures from an Economist Intelligence Unit survey which found that more organizations (12 percent) expect to deliver planned ROI less than half the time than claim to deliver it 90 percent or more of the time (11 percent). This is linked to a recent report from Booz & Co. that shows how the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. “Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success,” according to Stock Shock author Phil Thornton. “Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform.” Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html

    Read the article

  • any online service and/or application to develop a story line for an adventure game?

    - by Gajet
    A bunch of friends and I were talking about an adventure game. There will be many possibilities in the game, and the player can pick from a wide variety of choices at each stage to do something. There will be consequences for each decision, and they may or may not end the story. The result would be something like the picture from the FlashForward series (S01E17), or, if any of you watched Heroes season 1, the similar timelines represented as strings in Isaac Mendez's workshop. Sorry for the bad-quality examples, but right now I can't think of any better ones. Do you know any website or application which we can use to create the timeline? These are the minimum required features: the ability to represent events as boxes; the ability to connect distant events to each other; the ability to move events around a scene freely; the ability to expand the scene easily; some color options for the lines representing connections between events; and easy sharing of the idea with one another. It would be much better to have a WYSIWYG editor and to be able to explore the large scene of events easily. In the end, if you know any application which could let me create a board just like the one in my sample picture and share it with other friends, it could help us a lot.

    Read the article

  • How to explain a layperson why a developer should not be interrupted while neck-deep in coding?

    - by András Szepesházi
    If you just consider the second part of my question, "Why a developer should not be interrupted while neck-deep in coding", that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone" and why it takes an average of 15 minutes to achieve productivity when participating in complex, software-development-related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (khmm I mean software development). How to tell the wife, or the funny guy from accounting at the workplace, or the long-time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that all the interruptions have a much deeper impact on your work than the obvious 30 seconds they take from your time. Obviously you can't explain it with sentences like "I have to juggle a lot of variable names in my short-term memory" unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all that to non-developers in a way that will make them clearly understand - without being offensive, elitist or too technical. EDIT: Thanks to everyone for their great insights. I've accepted EpsilonVector's answer as his analogy was the closest one to my original needs. The "falling asleep" explanation is neither offensive nor technical, almost anyone can relate to it, and the consequences of getting disturbed while falling asleep or while being in the zone are very similar: you experience frustration and you "lose" 15-20 minutes of time.

    Read the article

  • In the Aggregate: How Will We Maintain Legacy Systems? [closed]

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. July 2007 Story About a Burst Steam Pipe in Manhattan We've heard about software rot and technical debt. And we've heard from the likes of: "Uncle Bob" Martin - who warned us about "the consequences of making a mess". Michael C. Feathers - who gave us guidance in 'Working Effectively With Legacy Code'. So the software engineering community is certainly aware of these issues. But I feel like our aggregate society does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes: ...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it. If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My Question: Is there a way that we can avoid the software equivalent of NYC and the steam pipes?

    Read the article

  • 11.10 liveCD black screen

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu 11.10 on my new laptop, using a liveCD (I also tried USB). I get the purple screen (with the man/keyboard at the bottom) and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including using nomodeset, xforcevesa, or i915.modeset=0 in the boot options (separately): varying consequences, but either I end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again. Tried booting from VirtualBox - it crashes at the same place the screen would go blank when using a CD/USB. Tried 11.04: I don't have this problem BUT when trying to install, I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed their own separate partition for HP Tools...). Model: HP Pavilion DV6 6B08SA. Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHz, 4 MB L2 cache). Chipset: AMD RS880M. Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics card related, although I have no confirmation of that. I have caught a glimpse of some output to do with pulseaudio and [fail], but I can't imagine why that would cause a screen problem, and the sound definitely works anyway.

    Read the article

  • Google suddenly only indexes https and not http

    - by spender
    All of a sudden, searches for our site "radiotuna" return the result as an HTTPS link. https://www.google.com/?q=radiotuna#hl=en&safe=off&output=search&sclient=psy-ab&q=radiotuna&oq=radiotuna&gs_l=hp.12...0.0.0.3499.0.0.0.0.0.0.0.0..0.0.les%3B..0.0...1c.LnOvBvgDOBk&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=177c7ff705652ec3&biw=1366&bih=602 We only use HTTPS for the download of two specific files (these URLs are resources used for the auto-update functionality of an app we distribute). All other parts of the site should be served over HTTP. We wouldn't like to see any other traffic over HTTPS, nor any of our site links appearing in search engines as HTTPS. I'd like to address this issue. It seems that the following solutions are available: hand out an HTTPS-specific robots.txt as such: User-agent: * Disallow: / and/or, at app level, 301 permanent redirect all requests (except the two above) to HTTP if they come in as HTTPS. My concern with the robots method is that if, for some reason, Google decided not to index the HTTP pages, disallowing the HTTPS pages might mean that Google has nothing left to index, with disastrous consequences for our ranking. This means I'm inclined to go with a 301 redirect. Any thoughts?
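
    As a rough illustration of the second option, here is a minimal sketch of an app-level "301 back to HTTP" rule. The site's actual stack isn't stated, so this assumes a Python/WSGI application purely for the sake of example, and the two allowed download paths are made up; query-string handling is omitted for brevity.

        # Hypothetical WSGI middleware: 301-redirect HTTPS requests back to HTTP,
        # except for the two URLs that must stay on HTTPS.
        ALLOWED_HTTPS_PATHS = {"/downloads/update-manifest.xml", "/downloads/app-latest.zip"}

        def force_http_middleware(app):
            def middleware(environ, start_response):
                scheme = environ.get("wsgi.url_scheme", "http")
                path = environ.get("PATH_INFO", "/")
                if scheme == "https" and path not in ALLOWED_HTTPS_PATHS:
                    host = environ.get("HTTP_HOST", "www.example.com")
                    start_response("301 Moved Permanently",
                                   [("Location", "http://" + host + path)])
                    return [b""]
                return app(environ, start_response)
            return middleware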

    Read the article

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers living or dead, blah blah blah, etc. [Allegedly] As part of a website I've built, I've now been provided the Terms and Conditions of site usage to display on the site. These terms--which must be agreed to in order to access the site--require my (or any visitor to the site's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of these clauses refer to things I have previously had to do as a legitimate part of my job and would expect to have to do again. When I've raised similar issues previously, my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do. So, what do I do? Abiding by the terms would mean that I could no longer work on the project and would cause issues with my employer and the owner of the business the site is being created for. Ignoring them could lead to possible future issues with the business owner and is not something I'm necessarily happy with (the deliberate breaking of a legal contract). Neither option is one I'd choose, and either could have major consequences. Any thoughts?

    Read the article

  • Clean MVC design when there is viewer latency

    - by Tony Suffolk 66
    It isn't clear whether this question has already been answered, so apologies in advance if this is a duplicate. I am implementing a game and trying to design around a clean MVC pattern - so my control plane will implement the rules of the game (but not how the game is displayed), and the view plane implements how the game is displayed and user interaction - i.e. what game items or controls the user has activated. The challenge that I have is this: in my game the control plane can move game items more or less instantaneously (the decision about what item to place where - and some of the initial consequences of that placement - is reasonably trivial to calculate), but I want to design the control plane so that the view plane can display these movements either instantaneously or using movement animations. The other complication is that player interaction must be locked out while those game items are moving (similar to chess - you can't attack an opposing piece as it moves past one of your pieces). So do I: 1. Implement all the logic in the control plane asynchronously, and separate the decision making from the actions - so the control plane decides piece 'A' needs to move to a given place, tells the view plane, but does not apply the move to its data until the view plane informs the control plane that the move/animation is complete. A lot of interlock points between the two layers. 2. Implement all the control plane logic in one place - decisions and movement (keeping track of what moved where) - and pass all the movements in one go to the view plane to do with what it will. The control plane is almost fire-and-forget here. 3. A hybrid of 1 & 2 - the control plane implements all the moves in a temporary data store, but maintains a second store which reflects what is actually visible to the viewer, based on calls and feedback from the view plane. All three are relatively easy to implement (target language is Python), but having never done a clean MVC pattern with view latency before, I am not sure which design is best.
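
    To make option 2 concrete, here is a minimal Python sketch (the class and method names are invented for the example): the control plane resolves the whole turn against its own data and hands the view a list of move events, which the view may render instantly or animate while input stays locked.

        # Minimal sketch of option 2: the control plane is "fire and forget",
        # emitting a list of move events the view can animate at its own pace.
        class ControlPlane:
            def __init__(self):
                self.positions = {}          # piece_id -> (x, y)

            def take_turn(self):
                """Decide and apply all moves, returning what happened for the view."""
                events = []
                for piece_id, (x, y) in list(self.positions.items()):
                    new_pos = (x + 1, y)     # placeholder decision logic
                    self.positions[piece_id] = new_pos
                    events.append({"piece": piece_id, "frm": (x, y), "to": new_pos})
                return events

        class ViewPlane:
            def __init__(self):
                self.input_locked = False

            def play(self, events):
                self.input_locked = True     # lock interaction while movements play out
                for ev in events:
                    self.animate(ev)         # could be instantaneous or tweened
                self.input_locked = False

            def animate(self, ev):
                print("piece {piece} moves {frm} -> {to}".format(**ev))

        control, view = ControlPlane(), ViewPlane()
        control.positions = {"A": (0, 0), "B": (3, 2)}
        view.play(control.take_turn())

    The appeal of this shape is that the control plane never waits on animation; the interlocking in option 1 would instead keep the event list implicit and make the view plane acknowledge each move before the data changes.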

    Read the article

  • What can be the cause of new bugs appearing somewhere else when a known bug is solved?

    - by MainMa
    During a discussion, one of my colleagues said that he has some difficulties with his current project while trying to solve bugs. "When I solve one bug, something else stops working elsewhere," he said. I started to think about how this could happen, but can't figure it out. I sometimes have similar problems when I am too tired or sleepy to do the work correctly and to keep an overall view of the part of the code I am working on. Here, the problem seems to have persisted for a few days or weeks, and is not related to my colleague's focus. I can also imagine this problem arising on a very large, very badly managed project, where teammates have no idea of who does what, or what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. It could also be an issue with an old, badly maintained and never documented codebase, where the only developers who could really imagine the consequences of a change left the company years ago. Here, the project has just started, and the developer isn't working on anyone else's codebase. So what can be the cause of such an issue in a fresh, small codebase written by a single developer who stays focused on his work? What may help? Unit tests (there are none)? Proper architecture (I'm pretty sure the codebase has no architecture at all and was written with no preliminary thinking), requiring a complete refactoring? Pair programming? Something else?
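
    On the unit-test point, even a handful of regression tests pinned to previously fixed bugs can flag the "something else stopped working" cases automatically. A minimal Python sketch, where parse_price is an invented stand-in for real project code:

        import unittest

        def parse_price(text):
            """Toy function standing in for real project code."""
            return round(float(text.replace(",", "").lstrip("$")), 2)

        class PriceRegressionTests(unittest.TestCase):
            # Each test pins down behaviour that a past bug fix was supposed to preserve.
            def test_plain_number(self):
                self.assertEqual(parse_price("12.5"), 12.5)

            def test_thousands_separator(self):
                self.assertEqual(parse_price("1,250.75"), 1250.75)

            def test_currency_symbol(self):
                self.assertEqual(parse_price("$99.90"), 99.9)

        if __name__ == "__main__":
            unittest.main()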

    Read the article

  • two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based on the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh in dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer the chance for players to evolve their characters into something unique, right from creation. So, here's the mechanic: players roll 2d12 against the two statistics for their character (each is a number from 2-11 and may need to be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the success of that roll, they add dice to the pool rolled for a modified G/E-style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows: 1-4 means the attempted goal fails, or the danger comes true; 5-8 is a partial success at the goal, or partially avoiding the danger; 9-12 means the goal is achieved, or the danger avoided. My concerns are twofold. Firstly, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen. Secondly, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                   2 dice    (up to) 3 dice    (up to) 4 dice
        failure     ~33%          ~25%              ~20%
        partial     ~33%          ~35%              ~35%
        success     ~33%          ~40%              ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
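
    For what it's worth, here is a small Monte Carlo sketch in Python of the two-stage roll. It models the stat comparison directly rather than as sign(1d12-1d12), and it assumes one particular set of choices purely for illustration: one bonus die per success on the stat rolls, and the single best die in the pool assigned to the goal.

        import random
        from collections import Counter

        def outcome(die):
            # 1-4 failure, 5-8 partial, 9-12 success
            return "failure" if die <= 4 else "partial" if die <= 8 else "success"

        def simulate(stat_a=7, stat_b=7, roll_over=True, trials=100_000):
            """Two-stage roll: 2d12 vs. two stats, then a pool whose best die resolves the goal."""
            results = Counter()
            for _ in range(trials):
                successes = 0
                for stat in (stat_a, stat_b):
                    roll = random.randint(1, 12)
                    if roll == stat:
                        continue                      # rolling the stat exactly always fails
                    if (roll > stat) == roll_over:    # direction depends on the action attempted
                        successes += 1
                pool = 2 + successes                  # one bonus die per success (alternative: +1 for both)
                goal_die = max(random.randint(1, 12) for _ in range(pool))
                results[outcome(goal_die)] += 1
            return {k: v / trials for k, v in results.items()}

        print(simulate())

    Swapping in the "single bonus die for both successes" variant, or a different die-assignment strategy, is a one-line change, which makes it easy to compare how much each option softens the game.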

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck". So that's my question: what would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • Asus A8V overcurrent

    - by user139710
    This is not so much a question as it is a note to those out there who upgrade their motherboards with better processors and the like. Here's my story. Recently I upgraded my processor. System specifications: Asus A8V Deluxe, 4GB RAM, ATI Radeon 3870 AGP graphics card (I believe that's it). Anyway, I decided to put a dual-core Opteron 180 in this rig, but the problem was that I needed to update the BIOS to V-1017. Not knowing the consequences, I went up to the Asus site and got the newest, the latest and the greatest, 1018.002, thinking that it was the best for this board; however, it wasn't. I used the Asus EZ Flash, which makes life a lot easier, flashed the BIOS, and all of a sudden I started getting this message: USB overcurrent protection, system shutting down in 15 seconds to protect your system. WELL SHIT... This is a new one on me... I read the blogs, all the posts on this thing, and did all that everyone else did to correct the problem, but nothing helped. So I decided to start from square one, went back to Asus and looked at the BIOS downloads... OMG... IT WAS A BETA. So, I downloaded the suggested update (1017), installed it, and wouldn't you know, it took care of the problem: no more USB overcurrent protection, no more crashing. I write this today to let you all know about this, just in case you have an issue such as this. Well there you all go. Fly safe and eat your vegetables.

    Read the article

  • How can I better manage far-reaching changes in my code?

    - by neuviemeporte
    In my work (writing scientific software in C++), I often get asked by the people who use the software to get their work done to add some functionality or change the way things are done and organized right now. Most of the time this is just a matter of adding a new class or a function and applying some glue to do the job, but from time to time a seemingly simple change turns out to have far-reaching consequences that require me to redesign a substantial amount of existing code, which takes a lot of time and effort and is difficult to estimate in terms of the time required. I don't think it has as much to do with the inter-dependence of modules as with changing requirements (admittedly, on a smaller scale). To provide an example, I was thinking about the recently added multi-user functionality in Android. I don't know whether they planned to introduce it from the very beginning, but assuming they didn't, it seems hard to predict all the areas that will be affected by the change (app preferences, themes, the need to store account info somehow, etc...?), even though the concept seems simple enough and the code is well organized. How do you deal with such situations? Do you just jump into the code and then sort out the cruft later, like I do? Or do you do a detailed analysis beforehand of what will be affected, what needs to be updated and how, and what has to be rewritten? If so, what tools (if any) and approaches do you use?

    Read the article

  • Should sanity be a property of a programmer or a program?

    - by toplel32
    I design and implement languages that can range from object notations to markup languages. In many cases I have considered restrictions in favor of sanity (common knowledge), as in the case of control characters in identifiers. There are two consequences to consider before doing this: it takes extra computation, and it narrows liberty. I'm interested to learn how developers think about decisions like this. As you may know, Microsoft's C# is, on the contrary, very open. If you really want to suffix your integer literal with 'l' instead of 'L' to make it a long, and so risk other developers confusing '1' and 'l', no problem. If you want to name your variables in a non-Latin script so they will contrast with C#'s Latin keywords, no problem. Or if you want to distribute a string over multiple lines and so break a series of indentation, no problem. It is cheap to ensure consistency with restrictions, and this makes them tempting to implement. But in the case of disallowing non-Latin characters (concerning the second example), it amounts to discrediting Unicode, because one would not take full advantage of its capacity.
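
    As a toy illustration of the "extra computation" cost, here is a small Python sketch of the kind of identifier check being discussed: it rejects control and format characters while still allowing non-Latin scripts. The function name and the sample identifiers are made up for the example.

        import unicodedata

        def check_identifier(name):
            """Reject control/format characters but keep non-Latin scripts allowed."""
            if not name:
                raise ValueError("identifier is empty")
            for ch in name:
                category = unicodedata.category(ch)   # e.g. 'Lu', 'Ll', 'Nd', 'Cc', 'Cf'
                if category in ("Cc", "Cf"):          # control and format characters
                    raise ValueError("control/format character U+%04X not allowed" % ord(ch))
            return name

        check_identifier("переменная")        # fine: non-Latin letters pass
        try:
            check_identifier("total\u200b")   # U+200B ZERO WIDTH SPACE is category Cf
        except ValueError as err:
            print(err)

    Per-character checks like this are cheap in absolute terms, but they do have to run on every identifier the tokenizer sees, which is exactly the trade-off between sanity enforced by the language and sanity left to the programmer.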

    Read the article
