Search Results

Search found 11513 results on 461 pages for 'level 2'.


  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations that occasionally comes up when a developer goes off sick for a few days mid-project. There were questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending, so we couldn't wait for him to return. One of the other developers logged on as him and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one).

    Obviously this is a pain in the neck, but the alternative (which would seem to be strict standards for how each developer works on their own machine, so that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency at an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control.

    So how do you handle situations like this? Is this one of those things that just happens and you have to deal with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area - use of specific directories, naming standards, notes on a wiki or whatever? And if so, what do your standards cover, how strict are they, how do you police them, and so on? Or is there another solution I'm missing?

    [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing here - even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted. I'd like a solution which covers all eventualities.]

    Read the article

  • How to fetch only the sprites in the player's range of motion for collision testing? (2D, axis aligned sprites)

    - by Twodordan
    I am working on a 2D sprite game for educational purposes. (In case you want to know, it uses WebGL and JavaScript.) I've implemented movement using the Euler method (and delta time) to keep things simple, and now I'm trying to tackle collisions. The way I wrote things, my game only has rectangular sprites (axis-aligned, never rotated) of various/variable sizes. So I need to figure out what I hit and which side of the target sprite I hit (and I'm probably going to use these intersection tests).

    The old-fashioned method seems to be a tile-based grid, so that you only target a few tiles at a time, but that sounds impractical for my game: splitting the whole level into blocks, with each sprite's bounding box spanning multiple blocks, I might abide; but if the sprites change size and move around, you have to keep changing which tiles they belong to, every frame, and that doesn't sound right. In Flash you can test collision under a single point, but it's not efficient to iterate through all the elements on stage each frame (hence why people use the tile method).

    Bottom line: I'm trying to figure out how to test only the elements within the player's range of motion. (I know how to get the range of motion, and I have a good idea of how to write a collisionCheck(playerSprite, targetSprite) function. But how do I know which sprites are currently in the player's vicinity, so I can fetch only them?) Please discuss. Cheers!
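
    One common answer (a sketch, not the only way; every name below is illustrative) is a uniform grid kept as a spatial hash: each frame you clear it, re-insert every sprite's bounding box into the coarse cells it overlaps, then query only the cells covered by the player's range of motion. Rebuilding per frame sidesteps the "sprites change size and move" objection, because nothing has to be kept in sync:

    ```ts
    // Minimal spatial-hash sketch; every name is illustrative. Sprites are
    // axis-aligned boxes, re-bucketed into coarse cells every frame, so moving
    // or resizing sprites never needs bookkeeping.
    interface Box { x: number; y: number; w: number; h: number; }

    const CELL = 64; // cell size: roughly the largest common sprite size

    function cellsFor(b: Box): string[] {
      const keys: string[] = [];
      for (let cx = Math.floor(b.x / CELL); cx <= Math.floor((b.x + b.w) / CELL); cx++)
        for (let cy = Math.floor(b.y / CELL); cy <= Math.floor((b.y + b.h) / CELL); cy++)
          keys.push(cx + "," + cy);
      return keys;
    }

    class SpatialHash {
      private buckets = new Map<string, Box[]>();

      clear(): void { this.buckets.clear(); } // call once per frame, then re-insert

      insert(b: Box): void {
        for (const k of cellsFor(b)) {
          const bucket = this.buckets.get(k);
          if (bucket) bucket.push(b); else this.buckets.set(k, [b]);
        }
      }

      // Candidates overlapping the player's range of motion this frame.
      query(range: Box): Box[] {
        const seen = new Set<Box>();
        for (const k of cellsFor(range))
          for (const b of this.buckets.get(k) ?? []) seen.add(b);
        return Array.from(seen);
      }
    }
    ```

    Each frame: clear(), insert every sprite, then run collisionCheck(playerSprite, s) only for the handful of candidates returned by query(playerRange), instead of every sprite on stage.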

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database-backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited for dependency injection. Some specific questions:

    - Do you create one access object per table (given that a table represents an entity collection)? One interface per table? All of these would need the low-level data access object to be injected, right? And if there are dozens of tables, wouldn't that make the composition root a nightmare?
    - Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), etc.?
    - If I take the example of Entity Framework, I would have one container that exposes an object for each table, but that container doesn't implement any interface itself, so it doesn't seem compatible with DI.

    What we do now, in case it helps: we normally manage data access through a generic data layer which exposes CRUD/transaction capabilities and has provider-specific subclasses which handle the creation of IDbConnection, IDbCommand, etc. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return domain objects that the rest of the system deals with. These table classes expose only static methods, and use a static DataAccess singleton instantiated from a config file.
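
    One middle road between "interface per table" and "one giant interface" can be sketched as a generic repository contract per entity, all backed by a single injected low-level gateway, so the composition root wires up one object rather than dozens. The sketch below is TypeScript for brevity rather than .NET, and every name in it is hypothetical:

    ```ts
    // Hypothetical sketch: one generic repository contract per entity,
    // all backed by a single injected low-level gateway.
    interface DataGateway {
      query(sql: string, args: unknown[]): Promise<unknown[]>;
    }

    interface Customer { id: number; name: string; }

    interface Repository<T> {
      getById(id: number): Promise<T | null>;
      save(entity: T): Promise<void>;
    }

    class CustomerRepository implements Repository<Customer> {
      constructor(private readonly db: DataGateway) {} // constructor injection

      async getById(id: number): Promise<Customer | null> {
        const rows = await this.db.query("SELECT * FROM customers WHERE id = ?", [id]);
        return rows.length ? (rows[0] as Customer) : null;
      }

      async save(c: Customer): Promise<void> {
        await this.db.query("UPDATE customers SET name = ? WHERE id = ?", [c.name, c.id]);
      }
    }
    ```

    The existing static Table classes map naturally onto such repositories: the static methods become instance methods, and the DataAccess singleton becomes the constructor-injected gateway, so consumers depend on small mockable interfaces rather than on statics.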

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what the databases aren't, where I'm more interested in what the databases are. The label really encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make):

    - Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed.
    - Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database).
    - Assume that each database has the best support possible for free.
    - Assume you have 100% buy-in from management.
    - Assume you have an infinite amount of money to throw at the problem.

    Now, I realize that the above assumptions eliminate a lot of valid considerations involved in choosing a database, but my focus is on figuring out which database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs are each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • Why are we as an industry not more technically critical of our peers? [closed]

    - by Jarrod Roberson
    For example: I still see people in 2011 writing blog posts and tutorials that promote setting the Java CLASSPATH at the OS environment level. I see people writing C and C++ tutorials dated 2009 and newer in which the first line of code is void main(). These are examples; I am not looking for specific answers to the above, but for why the culture of accepting sub-par knowledge in the industry is so rampant.

    I see people posting these same kinds of empirically wrong suggestions as answers on www.stackoverflow.com, and they get lots of up votes and practically no down votes! The answers that do get lots of down votes are usually those answering a question that wasn't asked (for lack of reading comprehension), not incorrect answers per se.

    Is our industry that ignorant as a whole? I can understand the internet in general being lazy, apathetic and uninformed, but our industry should be more on top of things like this and far more critical of people who promote bad habits, out-dated techniques and wrong information. If we are really an engineering discipline, why aren't people held to a higher standard as they are in other engineering disciplines? I want to know why people accept bad advice and poor practices as the norm and are not more critical of their peers in the software industry.

    Read the article

  • Inheriting projects - General Rules?

    - by pspahn
    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason they fired their previous developer, and it's now up to you to save the day.

    I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are plenty of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it or throw it in the trash."

    What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What extra steps might you take to help manage client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • Calculating the "power" of a player in a "Defend Your Castle" type game

    - by Jesse Emond
    I'm making a "Defend Your Castle" type game, where each player has a castle and must send units to destroy the opponent's castle. It looks like this (and yeah, this is the actual game, not a quick paint drawing..). Now I'm trying to implement the AI of the opponent, and I'd like to create 4 different AI levels: Easy, Normal, Hard and Hardcore. I've never made any "serious" AI before and I'd like to create a fairly complete one this time.

    My idea is to calculate a player's "power" score, based on the current health of its castle and the individual "power" scores of its units. The AI would then just try to keep a score close to the player's (Easy would stay below it, Normal would stay near it and Hard would try to get above it). But I just don't know how to calculate a player's power score. There are too many variables to take into account and I don't know how to properly combine them into one significant number (the power level). Could anyone help me out on this one?

    Here are the variables that should influence a player's power score: current castle health, plus each unit's total health, damage, speed and attack range. Also, the player can have increased income (the money bag), damage (the + Damage) and speed (the + speed)... How could I include them in the score? I'm really stuck here... Or is there another way I could implement AI for this type of game? Thanks for your precious time.
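
    One hedged starting point is a weighted linear score: each unit contributes roughly its damage-per-second times its staying power, castle health adds directly, and the economy upgrades are folded in as future power. Every weight below is an invented placeholder meant to be tuned by playtesting, not a recommendation:

    ```ts
    // A hedged sketch: all weights are invented placeholders, to be tuned.
    interface Unit {
      health: number; damage: number; attacksPerSec: number;
      speed: number; range: number;
    }

    interface Player {
      castleHealth: number;
      units: Unit[];
      incomeBonus: number;  // 1.0 = no money-bag upgrades
      damageBonus: number;  // 1.0 = no "+ Damage" upgrades
      speedBonus: number;   // 1.0 = no "+ speed" upgrades
    }

    // A unit's power ~ damage-per-second times staying power, scaled by reach/mobility.
    function unitPower(u: Unit, p: Player): number {
      const dps = u.damage * p.damageBonus * u.attacksPerSec;
      const reach = 1 + 0.01 * u.range + 0.05 * u.speed * p.speedBonus;
      return dps * u.health * reach;
    }

    function playerPower(p: Player): number {
      const army = p.units.reduce((sum, u) => sum + unitPower(u, p), 0);
      const economy = 100 * (p.incomeBonus - 1); // income is future power, weighted low
      return p.castleHealth + army + economy;
    }
    ```

    The AI then compares playerPower(ai) against playerPower(human) every time it decides to spend money: Easy buys units until it reaches, say, 0.8x the player's score, Normal aims for 1.0x, and Hard for 1.2x.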

    Read the article

  • ORA-600 Troubleshooting

    - by [email protected]
    Have you observed an ORA-600 or ORA-7445 reported in your alert log? The ORA-600 error is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The ORA-600 error statement includes a list of arguments in square brackets:

    ORA-600 "internal error code, arguments: [%s], [%s], [%s], [%s], [%s]"

    The first argument is the internal message number or character string. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. The remaining arguments in the ORA-600 error text supply further information (e.g. values of internal variables).

    Looking for the best way to diagnose? There is an ORA-600 Troubleshooter Tool available in My Oracle Support. This tool will lead you to applicable content in My Oracle Support on the problem. You can investigate the problem with argument data from the error message, or you can pull out the first 10 or 15 stack pointers from the associated trace file to match up against known bugs.

    Note 153788.1 ORA-600/ORA-7445 Troubleshooter
    Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Also, take a quick look at the Master Note for Diagnosing ORA-600 (MasterNoteORA600.docx) for some tips on diagnosing.

    Read the article

  • Announcing Oracle Environmental Accounting and Reporting

    - by Theresa Hickman
    Oracle just launched a new product called Oracle Environmental Accounting and Reporting, designed to help companies track and report their greenhouse emissions at the operational level. Companies around the world are facing increasing pressure to improve their energy efficiency and reduce waste in their operations. New worldwide greenhouse gas legislation is also putting pressure on companies to report the impact of their emissions and energy usage on the environment.

    Today, companies undergo extensive and expensive data audits to maintain a ledger of up-to-date emissions factors and compare figures on an annual basis. Existing "ad hoc" approaches using manual or niche solutions have a high operational cost and weak data security and auditability. The ideal solution is to embed environmental usage within mainstream business operations - for example, recording energy usage at the time of invoice entry - and then report on those results. This is precisely what Oracle Environmental Accounting and Reporting is designed to do. You can now capture environmental data either electronically or manually; convert it to greenhouse gas emissions; comply with mandatory and voluntary greenhouse gas reporting schemes; and identify opportunities for CO2 emissions and cost reductions.

    Oracle recently acquired the intellectual property for this solution, which works with both Oracle E-Business Suite Release 12 and JD Edwards EnterpriseOne Release 9.0. For more information, visit Oracle Environmental Accounting and Reporting.

    Read the article

  • Attend my Tech Ed 2014 session: Debugging Tips and Tricks

    - by Daniel Moth
    Just a week away, at Tech Ed 2014 NA in Houston, Texas, I will be giving a demo presentation that you will not want to miss (assuming you code in Visual Studio). Add it to your calendar now:

    DEV-B352 Debugging Tips and Tricks in Visual Studio 2013 (link)
    Monday, May 12, 1:15-2:30 PM, Room: General Assembly C
    As a developer, regardless of your programming language or the platform that you target, you use the debugger on a daily basis. Come to this all-demo session to learn how to make the most of the Visual Studio debugger, and hence be more productive and effective in your everyday development. We tour almost all of the debugger surface and many of its commands, throwing in tips and tricks as we go along, and also calling out what is brand new in the latest version of the debugger in Microsoft Visual Studio 2013. Whatever your experience level, you are guaranteed to leave with new knowledge of debugger features that you will want to use immediately when you are back at your computer!

    I am also co-presenting another session later in the week.

    DEV-B313 Diagnosing Issues in Windows Phone 8.1 XAML Applications Using Visual Studio 2013 (link)
    Thursday, May 15, 10:15-11:30 AM, Room: 340
    Come to this demo-driven session to learn how to use the latest diagnostic tools in Visual Studio 2013 to make your Windows Phone 8.1 XAML apps reliable, fast, and efficient. Learn how to make the most of existing capabilities in the debugger as well as new debugging features for diagnosing correctness issues. Also, see the Visual Studio Performance and Diagnostics hub in action with its performance analysis tools for diagnosing CPU usage, memory usage, and energy consumption. The techniques covered in this session apply equally well for Windows Store apps as well as Windows Phone Store apps, so all your device development needs will be covered.

    Links to both sessions are on my Tech Ed speaker page. See you there!

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.NET MVC application, and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test." I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to find out a half-hour later when my integration tests break; I want to know immediately.

    Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned.

    What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified.

    What I'm stuck doing: to test the view, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified.

    I realize there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback?)
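
    For what it's worth, the wish in the "What I'd like to do" paragraph can be expressed independently of any particular view engine. A minimal sketch follows, in plain TypeScript with a stand-in template function rather than the ASP.NET MVC view engine; all names are illustrative:

    ```ts
    // Illustrative only: the view is reduced to a pure model-to-markup function,
    // so "render and assert" needs no browser, database or login.
    interface Customer { name: string; address: string; }

    // Stand-in for the view template.
    function customerView(c: Customer): string {
      return `<div class="name">${c.name}</div><div class="address">${c.address}</div>`;
    }

    // The unit test: stub the business layer, render, verify both fields appear.
    function testCustomerViewShowsNameAndAddress(): void {
      const stub: Customer = { name: "Ada Lovelace", address: "12 St James Square" };
      const html = customerView(stub);
      console.assert(html.includes(stub.name), "view must render the customer name");
      console.assert(html.includes(stub.address), "view must render the customer address");
    }

    testCustomerViewShowsNameAndAddress(); // fails fast if someone drops the address field
    ```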

    Read the article

  • How or why would this mechanic (not) work to bring game balance to a singleplayer RPG? [closed]

    - by 0xFFF1
    Mechanic details: the player, the monsters, and the merchants act as three separate parties. The player needs to beat up monsters for experience points and resources to sell, and to buy potions from merchants to continue to fight. The monsters need healing and reviving to survive (also bought from merchants), and the merchants need potion ingredients from the player and the monsters to make potions to sell. These potions can only be processed in such bulk by merchants, so their potions are cheaper than making them yourself. Only the monsters can farm ingredients in bulk. Only the player is, or has to be, overly aggressive (in bulk). Monsters can farm and produce "level-up candies" that do the work of experience; they are eaten right away after they are made and are never stockpiled or held, for fear of the player and of the merchants who want to sell to the player. The monsters will defend themselves. Reviving is very expensive. The merchants can be found either with a concerned expression or a grinning expression, based on how much profit they are making relative to their moral standing. The economies of each monster town and merchant city are distinct but interconnected. Magic swords are worth a lot.

    So what I need to know is: what concerns would there be in designing a game around this mechanic, and/or designing this mechanic around a developing game - which would fare better? Is game balance an issue here (how strong the monsters get, or how quickly they die off, based on the player's input into the system)? Or is game balance solely in the hands of the player (he decides whether he overkills monsters or gets underleveled)? What do I need to think about to make sure it isn't too easy or too hard to swing the amount/strength of monsters compared to the player, and the amount of profit the merchants get vs. the player? Would indicating in-game how out of whack things are getting help with this?
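
    One cheap way to probe that last question offline is to simulate the three-party loop with guessed rates and watch whether any party's stock runs away or collapses. This is a hypothetical sketch; every constant in it is invented and exists only to be tuned:

    ```ts
    // Hypothetical balance probe: run the three-party economy forward with
    // invented rates and watch the totals drift. Every constant is a tuning knob.
    interface Economy { playerGold: number; merchantStock: number; monsterPop: number; }

    function tick(e: Economy): Economy {
      const kills = Math.min(3, e.monsterPop);      // player aggression per tick
      const ingredients = kills * 2;                // loot the player sells on
      return {
        playerGold: e.playerGold + ingredients * 5 - 8,   // sell loot, buy potions
        merchantStock: e.merchantStock + ingredients - 4, // brew from loot, sell potions
        monsterPop: e.monsterPop - kills + 2,             // deaths vs. farmed level-up candies
      };
    }

    let world: Economy = { playerGold: 50, merchantStock: 20, monsterPop: 30 };
    for (let day = 1; day <= 100; day++) {
      world = tick(world);
      if (day % 25 === 0) console.log(`day ${day}:`, world); // watch for runaway or collapse
    }
    ```

    An in-game "out of whack" indicator would just surface the same ratios this loop prints, so the player can see a collapse coming instead of discovering it.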

    Read the article

  • BI&EPM in Focus - November 2011

    - by Mike.Hallett(at)Oracle-BI&EPM
    Enterprise Performance Management

    - A Thing of Beauty, by Alison Weiss: Avon's enterprise performance management system delivers accurate information and critical insight to managers at every level of the organization.
    - Oracle Crystal Ball Helps Managers Guard Against Volatility, by Alison Weiss
    - The Insight Game, by Aaron Lazenby: Enterprise performance management can deliver insights crucial to navigating the volatility of the global economy—and that's no game of checkers.
    - KPI vs. the Bottom Line, by Edward Roske: For managers, is tracking the key metrics for their departments enough to ensure success for the entire business? The CEO of Oracle partner interRel shares his opinion.
    - Deep Integration, by Aaron Lazenby: The synthesis of Oracle Hyperion applications and core Oracle technologies can deliver deep benefits to analytics-driven businesses.
    - Oracle Crystal Ball: Oracle's #1 solution for risk management
    - Follow EPM documentation at Hyperion EPM Info for news about EPM documentation releases and updates (Twitter | Facebook | LinkedIn)
    - Whitepaper: Integrating XBRL Into Your Financial Reporting Process (Oracle Hyperion Disclosure Management)
    - Customer story: StealthGas Inc. Saves 12 Accountant Days Yearly, Validates XBRL-Compliant Financial Filing Data in One Day
    - Customer story: Sherwin-Williams Argentina I.C.S.A. Accelerates Budget Preparation Process by 75%
    - Customer story: BBDO Germany GmbH Consolidates Financial and Planning Processes for More Than 50 Agencies

    Business Intelligence

    - Webcast replay: Oracle Data Mining & BI EE - Predictive Analytics (Part 2)
    - Innovation Award winners - BI/EPM: HealthSouth, State of MD, Clorox Company, Telenor and Dunkin Brands
    - Leeds Teaching Hospitals National Health Service Trust Builds Budget Reports Six Times Faster, Achieves 100% ROI in 12 Months with Oracle Business Intelligence
    - Home Credit Group Consolidates Reporting and Saves Time across All Business Units with Oracle Essbase & OBIEE
    - Autoglass Improves Business Visibility and Services to Customers and Partners with Oracle Business Intelligence

    Events

    - Download Oracle OpenWorld October 2011 presentations (select Middleware - BI, or Applications - Hyperion)
    - Oracle Business Analytics Summits: learn about the latest trends, best practices, and innovations in business intelligence, analytics applications, and data warehousing
    - Webcast, Nov 15, 9am PST: Running the Last Mile, Beyond Financial Consolidations - Streamlining the Close and Addressing the SEC's XBRL Mandate
    - Webcast, Dec 13, 1pm PST: Defining Your Mobile BI Strategy (BICG)
    - New training available: Oracle BI Publisher 11g R1: Fundamentals
    - Webcast replay: How to Expand the Usage of Analytics in Your Organization While Driving Down IT Spend
    - Webcast replay: Real-Time Decisions (RTD) Updated Use Cases for Ecommerce Personalization in Financial Services & Retail

    Read the article

  • ArchBeat Link-o-Rama for November 20, 2012

    - by Bob Rhubart
    - Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten: Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release.
    - Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills: "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions.
    - Oracle BPM Process Accelerators and process excellence | Andrew Richards: "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed, engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario."
    - Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema: An extensive, detailed technical post from Oracle ACE Director Lucas Jellema.
    - Check Box Support in ADF Tree Table Different Levels | Andrejus Baranovskis: Oracle ACE Director Andrejus Baranovskis updates last year's "ADF Tree - How to Autoselect/Deselect Checkbox" post with new information.
    - As Boom Lures App Creators, Tough Part Is Making a Living: A great New York Times article about mobile app development that also touches on other significant IT issues.
    - Thought for the Day: "Building large applications is still really difficult. Making them serve an organisation well for many years is almost impossible." — Malcolm P. Atkinson (Source: SoftwareQuotes.com)

    Read the article

  • Javascript: Machine Constants Applicable?

    - by DavidB2013
    I write numerical routines for students of science and engineering (although they are freely available for use by anybody else as well) and am wondering how to properly use machine constants in a JavaScript program, or if they are even applicable. For example, say I am writing a program in C++ that numerically computes the roots of the following equation:

    exp(-0.7x) + sin(3x) - 1.2x + 0.3546 = 0

    A root-finding routine should be able to compute roots to within the machine epsilon. In C++, this value is specified by the language: DBL_EPSILON. C++ also specifies the smallest and largest values that can be held by a float or double variable. However, how does this carry over to JavaScript? Since a JavaScript program runs in a web browser, I don't know what kind of computer will run the program, and JavaScript does not have corresponding predefined values for these quantities. How can I implement my own version of these constants so that my programs compute results to as much accuracy as is allowed on the computer running the web browser?

    My first draft is to simply copy over the literal constants from C++:

    FLT_MIN: 1.17549435082229e-038
    FLT_MAX: 3.40282346638529e+038
    DBL_EPSILON: 2.2204460492503131e-16

    I am also willing to write small code blocks that compute these values on each machine the program runs on. That way, a supercomputer might compute results to a higher accuracy than an old, low-end PC. BUT, I don't know whether such a routine would actually reflect the machine's real precision, in which case I would be wasting my time. Does anybody here know how to compute and use (in JavaScript) values that correspond to machine constants in a compiled language? Is it worth my time to write small programs in JavaScript that compute DBL_EPSILON, FLT_MIN, FLT_MAX, etc. for use in numerical routines? Or am I better off simply assigning literal constants that come straight from C++ on a standard Windows PC?
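
    In JavaScript's case the answer is pinned down by the language itself: ECMAScript defines every number as an IEEE 754 binary64 ("double") regardless of the hardware, so the double-precision constants are identical in every conforming browser and there is nothing machine-specific to probe. A short sketch that derives epsilon at runtime and shows the built-in equivalents:

    ```ts
    // JavaScript numbers are IEEE 754 binary64 by specification, so these
    // constants are the same on every conforming engine.
    function computeDblEpsilon(): number {
      let eps = 1.0;
      while (1.0 + eps / 2 > 1.0) eps /= 2; // halve until 1 + eps/2 rounds back to 1
      return eps;
    }

    console.log(computeDblEpsilon()); // 2.220446049250313e-16, i.e. 2 ** -52
    console.log(Number.MAX_VALUE);    // 1.7976931348623157e+308 (the DBL_MAX analogue)
    console.log(Number.MIN_VALUE);    // 5e-324 (smallest denormal; not quite DBL_MIN)
    // Newer engines (ES2015 and later) expose Number.EPSILON === 2 ** -52 directly.
    ```

    The FLT_* constants have no role here: JavaScript has no single-precision number type (typed arrays aside), so the double-precision values above are the ones a root-finder should use.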

    Read the article

  • Efficient mapping layout in 2D side-scroller, and collisions between character and the world

    - by Jack
    I haven't touched Visual Studio for a couple of months now, but I was playing a game from the '90s today and had an epiphany: I was looking for something I didn't need, and I wasn't using what I knew correctly. One of those realizations concerned collision, so let me tell you a bit about the project I was working on.

    The project's graphics look like Mario or Dangerous Dave, etc. - you get the idea: old-school pixels. Anyway, I remember trying to think of something other than an AABB for the character's shape, but I couldn't come up with anything. Perhaps I could get a suggestion for this? Another thing is the world: I don't want it to be just a linear world; I want mountains, etc. My idea is to use triangles, and I have no idea yet what to do if I want just part of a tile, say 3/4 or 2/4 or whatever. Hard-coding such things seems inefficient.

    P.S. I am not looking for the precision level offered by Box2D. Actually, I remember trying to implement it at first, but I failed, as my understanding of C++ wasn't advanced enough, as mentioned below.

    P.P.S. I am programming in C++, and I haven't done it for a couple of months now. I have no means of testing either, as my PC is broken down, and this one can barely run games from the late '90s, let alone a compiler or a program with inefficient resource management... I am also not an expert (obviously); I don't even know if I can consider myself an average programmer. In short, I am simply curious about my thoughts and my past experience from programming the game. I may come back to it when my PC is fixed; I'm already keeping a note about these things.
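
    For the narrow phase, the axis-aligned overlap test is a few comparisons, and ramps or partial tiles don't have to be hard-coded as stacked sub-blocks: a tile can carry a height function instead. A rough sketch, in TypeScript rather than C++ purely for illustration; the idea ports directly:

    ```ts
    // Rough narrow-phase sketch. A ramp tile carries a height function
    // instead of being hard-coded as stacked sub-blocks.
    interface AABB { x: number; y: number; w: number; h: number; }

    function overlaps(a: AABB, b: AABB): boolean {
      return a.x < b.x + b.w && b.x < a.x + a.w &&
             a.y < b.y + b.h && b.y < a.y + a.h;
    }

    // Example: a 45-degree ramp rising left-to-right inside one tile, with
    // screen y growing downward. Partial tiles (3/4, 2/4, ...) are just
    // other height functions over the same tile.
    function rampSurfaceY(tile: AABB, x: number): number {
      const t = Math.min(Math.max((x - tile.x) / tile.w, 0), 1); // 0..1 across the tile
      return tile.y + tile.h * (1 - t); // surface height at that x
    }
    ```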

    Read the article

  • Sample Browser Visual Studio Extension is localized and introduced to Japan

    - by Jialiang
    http://blogs.msdn.com/b/codefx/archive/2012/10/14/sample-browser-visual-studio-extension-is-localized-and-introduced-to-japan.aspx

    From a Japan MVP: "The Sample Browser is very easy to use thanks to the refined interface. The categorized menu enables faster search. Highly acclaimed. But it needs localization. It may not be a problem for those who can understand English, but I think localizing Sample Browser into Japanese will promote its use in Japan further."

    This is a prominent piece of feedback collected from the Japan MVP community since we released the last version of Sample Browser, which was only available in English. Japan developers like the Sample Browser, but they want localized code samples, a localized Sample Browser UI, and a localized search experience. The Japan MVP lead, Satoru Kitabata, observed these needs and expectations. He started to engage with all local developer MVPs to translate the UI elements in the Sample Browser, and lots of MVPs signed up to participate. They had roundtables and newsletters to track the progress. In just three weeks, every control, every tooltip, every font on every label was beautifully tuned for Japanese. The sample search experience was also optimized for Japan developers - they can type queries in Japanese to search for code samples. Together with Microsoft Japan MVPs, the sample-use experience has been localized and improved to a new level!

    The Japan MVP lead further worked with the MSDN Japan site manager and Japan DPE to introduce the good news of the localized Sample Browser to Japan:

    Sample Browser: http://msdn.microsoft.com/ja-jp/jj730399
    Sample Browser (Japanese): http://msdn.microsoft.com/ja-jp/jj730398

    Thanks to this joint effort and Japan MVPs' feedback and contributions, the Sample Browser gets the chance to benefit the broader Japan developer audience.

    Read the article

  • Is your TRY worth catching?

    - by Maria Zakourdaev
    The very useful TRY/CATCH construct is widely used to catch all execution errors that do not close the database connection. Its biggest downside is that in the case of multiple errors the TRY/CATCH mechanism will only catch the last error.

    An example of this can be seen during a standard restore operation. In this example I attempt to perform a restore from a file that no longer exists. Two errors are fired: 3201 and 3013. Assuming that we are using the TRY/CATCH construct, the ERROR_MESSAGE() function will catch the last message only.

    To work around this problem you can prepare a temporary table that will receive the statement output: execute the statement through the xp_cmdshell stored procedure, connecting back to SQL Server with the command-line utility sqlcmd, and redirect its output into the previously created temp table. After receiving the output, you will need to parse it to understand whether the statement finished successfully or failed. That's quite easy to accomplish as long as you know which statement was executed; for generic executions you can query the output table and search for patterns like "Msg%Level%State%" that are usually part of an error message. Furthermore, you don't need TRY/CATCH in this workaround, since the xp_cmdshell procedure always finishes successfully and you can decide yourself whether or not to fire the RAISERROR statement.

    Yours, Maria

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green, but the production application didn't work properly. Here's what happened:

    - I had a filter class that set my application to ignore data that was not in certain specified time windows.
    - The unit test, which seemed thorough to me, turned green.
    - Additionally, my integration tests also produced results as expected.
    - Production, however, did not work.

    As a result of the first two bullets, this problem was very difficult to find. It turned out that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic of the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects.)

    Where did my workflow break down? Did I fail to produce a spec stating that the logic needed to handle dates in any time zone? Did I fail to thoroughly consider all cases at the unit-test level? Did I fail to ensure the integration test was sufficiently similar to production? Something else? What changes can I make to my workflow to better prevent this sort of mistake in the future? And how can I more effectively debug a problem when there is an issue in production but not in testing?
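
    One concrete prevention that falls out of this story is to pin every test fixture to an explicit zone or offset instead of the machine default. The original code was Java with Joda-Time; the analogous sketch below uses plain JavaScript Dates, where Date.UTC makes the offset explicit:

    ```ts
    // Fixtures with an explicit offset (UTC here) see the same instants
    // production sends; fixtures built in the machine's local zone do not.
    function inWindow(t: Date, start: Date, end: Date): boolean {
      return t >= start && t <= end;
    }

    // Fixture interpreted in the machine's local zone (America/Chicago for the author):
    const localNoon = new Date(2014, 0, 15, 12, 0, 0);

    // Fixture pinned to UTC, like the production feed:
    const utcNoon = new Date(Date.UTC(2014, 0, 15, 12, 0, 0));

    // The two differ by the machine's UTC offset, so a window check that passes
    // with one fixture can fail with the other - the production-only bug described.
    console.log(localNoon.getTime() - utcNoon.getTime()); // offset in ms; 0 only on a UTC machine
    ```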

    Read the article

  • Grow Your Oracle Exadata and Manageability Business: Engage With Us to Find Out How

    - by swalker
    Don't miss out on the first EMEA Partner Community Cast! If you are a business decision maker, project leader, technical leader or business development manager, you will gain incredible value from these events, and we believe that this introduction to Oracle Partner Communities will bring you a wealth of new opportunities. Join us on December 7th at 10:00 GMT (11:00 CET) for the first broadcast, covering the Exadata and Manageability solution areas. In just 30 minutes, you will find out more about Oracle's Exadata, Manageability and Oracle Enterprise Manager 12c solutions, and the value they can generate for you and your customers. See the full agenda here.

    Hosted by Paul Thompson, Senior Director, Alliances and Solutions Partner Programs, Oracle EMEA, and Javier Puerta, Director, Core Technology Partner Programs, Oracle EMEA, our special guests include: Steve McNickle, Vice President Europe, cVidya; Dave Sanderson, Associate Partner, Technology Reply; and Patrick Rood, Lead for Indirect Manageability Business, Oracle EMEA. Register Now.

    Partner Community Casts are a new series of interactive broadcasts designed to help you truly engage with Oracle on an individual level, build expertise around your specialist solution area and make valuable new contacts within Oracle and among other Oracle partners. Community Casts can be viewed live from our online platform, and audience members have the opportunity to submit questions during the show via chat or social media outlets, many of which are answered on-air. Learn more about EMEA Partner Community Casts. Register Now to learn how participation in the Exadata and Manageability Partner Communities will help your business flourish!

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
    Eliminate the Guesswork in Your Customer's Sales Organization

    Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales; in today's business environment, companies need to re-evaluate the way they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some of the features incorporated.

    Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospects and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice to help you make the most of your CRM implementation. Register now to reserve your spot for this exclusive, live-streamed event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here.

    Read the article

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here, and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly, and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did without mipmapping: the lines are much crisper nearby, but in the distance I start to see terrible moire patterns.

    My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):

    ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription {
        AddressU = TextureAddressMode.Clamp,
        AddressV = TextureAddressMode.Clamp,
        AddressW = TextureAddressMode.Clamp,
        Filter = Filter.Anisotropic,
        MaximumAnisotropy = anisotropicLevel,
        MinimumLod = 0,
        MaximumLod = float.MaxValue
    });

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine which can actually maintain its user data in LDAP, perhaps via mods. The core point is the ability to maintain the data, i.e. all user profile settings - nickname, password, email, avatar, birthday and others (preferably configurable). One example of good LDAP integration, of the level I'm expecting, is Drupal's LDAP integration, which allows mapping any user attribute into LDAP and keeps it in sync with the database.

    A year ago I did a small research pass over existing free/FOSS engines and found a few forum engines with LDAP integration, namely SMF, phpBB and some others. The most maintained solution was phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made in the LDAP server by other software. Actually, it wasn't even propagating changes back, let alone able to map additional attributes (other than name/password/email). Also, I haven't found any forum whose architecture has a proper abstraction over user settings, so I doubt these engines (including phpBB) could be modded to add such functionality without dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory and map additional attributes. In other words, all the support I've seen so far amounts to simple user creation upon first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of collisions). LDAP integration is required because many other services (FTP, email, Jabber, a Drupal site) use the same user base. Currently we have a forum embedded into the Drupal site, but we are unsatisfied with its features. BTW, we are using Linux, and this is not a duplicate of this question, as its author seems to be satisfied with the behaviour described above.

    So, my question is: are there any (preferably FOSS and free) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?

    Read the article

  • Interpreting Others' Source Code

    - by Maxpm
    Note: I am aware of this question. This question is a bit more specific and in-depth, however, focusing on reading the actual code rather than debugging it or asking the author.

    As a student in an introductory-level computer science class, I'm occasionally asked by friends to help them with their assignments. Programming is something I'm very proud of, so I'm always happy to oblige. However, I usually have difficulty interpreting their source code. Sometimes this is due to a strange or inconsistent style, sometimes it's due to strange design requirements specified in the assignment, and sometimes it's just due to my stupidity. In any case, I end up looking like an idiot, staring at the screen for several minutes saying "Uh..."

    I usually check for the common errors first - missing semicolons or parentheses, using commas instead of extraction operators, etc. The trouble comes when that fails. I often can't step through with a debugger, because the problem is a syntax error, and I often can't ask the author, because he or she doesn't understand the design decisions either. How do you typically read the source code of others? Do you read through the code top-down, or do you follow each function as it's called? And how do you know when to say "It's time to refactor"?

    Read the article

  • Discount Multilingual Day in the Life of User Experience

    - by ultan o'broin
    A super article from the Wikimedia Foundation engineering folks about Designing for the Multilingual Web, using the Wikipedia Universal Language Selector user interface as an example. It offers great ideas about the tools that are available, and covers the basics of wireframing (mockups), prototyping, and user testing. There is lots of inspiration there for developers and builders of apps who want to ensure their user experience (UX) really delivers for a global audience. Check out the use of the Firefox-based Pencil, how to translate your mockups, and how to perform remote user testing using Google+ Hangouts. Paul Giner demonstrates how to translate mockups.

    The approach is a little clunky and homespun in parts (I would prefer if tools such as Pencil or Balsamiq Mockups could round-trip directly from SVG to XLIFF, for example, and Pencil doesn't yet work with the latest versions of Firefox), and I am not sure how well it scales to enterprise-level use. However, the UX methodology is basically sound, and it reinforces the importance of designing and testing in more than one language. The most powerful message for me is that you do not need special resources, training or expensive tools to deliver great-looking, usable apps as a developer. Definitely worth considering if you're building apps out there in the community.

    Read the article
