Search Results

Search found 19002 results on 761 pages for 'oracle b2b 11g practice'.

  • Which algorithm is used in Advance Wars-style turn-based games?

    - by Jan de Lange
    Has anyone tried to develop, or know of, an algorithm such as is used in a typical turn-based game like Advance Wars, where the number of objects and the number of moves per object may be too large to search to a reasonable depth, as one would in a game with a smaller search space like chess? Some path-finding is needed to engage in combat, harvest, or move toward an object, so that such actions become possible on the next move. With this you can build a search tree for each item, resulting in a large tree for all items. With a cost function one can determine the best moves. Then the board flips over to the player role (min/max) and the computer searches for the best player move, flips back, and so on, up to some number of cycles deep. Finally it has found the best move and now it's the player's turn. But he may be asleep by now... So how is this done in practice? I have found several good sources on A*, DFS, BFS, evaluation/cost functions, etc. But as of yet I do not see how to put it all together.
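
    As a rough illustration of the scheme described above, here is a minimal depth-limited minimax sketch in C#. The IGameState interface and its members are hypothetical placeholders, not from the question; a real Advance Wars-style AI would also prune aggressively, for example by treating a whole turn's unit orders as one "macro move" rather than expanding every unit's every move.

        using System;
        using System.Collections.Generic;

        // Hypothetical minimal game-state contract.
        interface IGameState
        {
            bool IsTerminal { get; }
            int Evaluate();                        // the heuristic cost function
            IEnumerable<IGameState> GenerateMoves();
        }

        static class Search
        {
            // Searches "up to a number of cycles deep", flipping between the
            // computer (maximizing) and the player (minimizing) each level.
            public static int Minimax(IGameState state, int depth, bool maximizing)
            {
                if (depth == 0 || state.IsTerminal)
                    return state.Evaluate();

                int best = maximizing ? int.MinValue : int.MaxValue;
                foreach (IGameState next in state.GenerateMoves())
                {
                    int score = Minimax(next, depth - 1, !maximizing);
                    best = maximizing ? Math.Max(best, score) : Math.Min(best, score);
                }
                return best;
            }
        }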

    Read the article

  • What are the consequences of immutable classes with references to mutable classes?

    - by glenviewjeff
    I've recently begun adopting the best practice of designing my classes to be immutable, per Effective Java [Bloch2008]. I have a series of interrelated questions about degrees of mutability and their consequences. I have run into situations where a (Java) class I implemented is only "internally immutable" because it uses references to other mutable classes. In this case, the class under development appears from the external environment to have state. Do any of the benefits (see below) of immutable classes hold true even for such "internally immutable" classes? Is there an accepted term for the aforementioned "internal mutability"? Wikipedia's immutable object page uses the unsourced term "deep immutability" to describe an object whose references are also immutable. Is the distinction between mutability and side-effects/state important? Josh Bloch lists the following benefits of immutable classes. They:
    - are simple to construct, test, and use
    - are automatically thread-safe and have no synchronization issues
    - do not need a copy constructor
    - do not need an implementation of clone
    - allow hashCode to use lazy initialization, and to cache its return value
    - do not need to be copied defensively when used as a field
    - make good Map keys and Set elements (these objects must not change state while in the collection)
    - have their class invariant established once upon construction, and it never needs to be checked again
    - always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state
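
    A small C# sketch of the distinction being asked about (the question is about Java, but the idea is identical; the Schedule class is a hypothetical example): the object's own fields never change, yet its observable state can, because a caller retains a reference to the mutable list it wraps.

        using System;
        using System.Collections.Generic;

        // Only "internally immutable" (shallowly immutable): the field is
        // readonly, but the List it points to can still be mutated.
        public sealed class Schedule
        {
            private readonly List<DateTime> slots;

            public Schedule(List<DateTime> slots)
            {
                this.slots = slots;
                // Deep immutability would defensively copy instead:
                // this.slots = new List<DateTime>(slots);
            }

            public IReadOnlyList<DateTime> Slots
            {
                get { return slots; }
            }
        }

        // var list = new List<DateTime> { DateTime.Now };
        // var s = new Schedule(list);
        // list.Add(DateTime.MaxValue);   // s.Slots now reports two entries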

    Read the article

  • unit level testing, agile, and refactoring

    - by dsollen
    I'm working on a very agile development system: a small number of people, with me doing the vast majority of the programming myself. I've gotten to the testing phase and find myself writing mostly functional-level tests, which in theory I should be leaving to our tester (in practice I don't entirely... trust our tester to detect and identify defects enough to leave him the sole writer of functional tests). In theory what I should be writing is unit-level tests. However, I'm not sure it's worth the expense. Unit testing takes some time to do, more than functional testing, since I have to set up mocks and plug into smaller units that weren't designed to run in isolation. More importantly, I refactor and redesign heavily. Part of this is due to my inheriting code that needed heavy redesign and is still being cleaned up, but even once I've finished removing the parts that need work, I'm sure that in the act of expanding the code I'll still do a decent amount of refactoring and redesign. It feels as if I will break my unit tests, forcing wasted time refactoring them as well, because unit tests, by definition, have to be coupled so closely to the code structure. So: is it worth all the wasted time when functional tests, which will never break when I refactor/redesign, should find most defects? Do unit tests really provide that much extra defect detection over thorough functional tests? And how does one create good unit tests that work with very quick and agile code that is modified rapidly? P.S. I would be fine/happy with links to anything one considers an excellent resource for how to 'do' unit testing in a highly changing environment. Edit: to clarify, I am doing a bit of very unofficial TDD; I just seem to be writing tests at what would be considered a functional level rather than the unit level. I think part of this is because, since I own nearly all of the project, I don't feel I need to limit the scope as much; and part of it is that it's daunting to think of trying to go back and retroactively add the unit tests needed to cover enough code that I could feel comfortable testing only a unit, without the full functionality, and trust that the unit still works with the rest of the units.
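
    One commonly suggested compromise is to aim unit tests at stable public behaviour rather than internal structure, so that refactoring underneath does not break them. A minimal sketch (NUnit; the class and names are hypothetical):

        using NUnit.Framework;

        public class PriceCalculator
        {
            public decimal Total(decimal subtotal, decimal taxRate)
            {
                return subtotal + subtotal * taxRate;
            }
        }

        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void Total_AppliesTaxRate()
            {
                // Pins down observable behaviour only; the arithmetic inside
                // Total can be restructured freely without touching this test.
                var calc = new PriceCalculator();
                Assert.AreEqual(110m, calc.Total(100m, 0.10m));
            }
        }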

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he over-described some details but under-described corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions; I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it; remove these rows under these conditions. So the challenge is different from spec'ing a web app.

    Read the article

  • Optimizing MySQL

    - by Josh
    I've been researching how to optimize MySQL a bit, but I still have a few questions. MySQL Primer results: http://pastie.org/private/lzjukl8wacxfjbjhge6vw Based on this, the first problem seems to be that the max_connections limit is too low. I had a similar problem with Apache initially: the max connection limit was set to 100, and the web server would frequently lock up and take an excruciatingly long time to deliver pages. Raising the connection limit to 512 fixed this issue, and I read that raising the connection limit on MySQL to match is considered good practice. Given that MySQL has actually been "locking up" recently as well (connections have been refused entirely for a few minutes at a time, at random intervals), I'm assuming this is the main cause of the issue. However, as far as the table cache goes, I'm not sure what I should set it to. I've read that setting it too high can hinder performance further, so should I raise it to right around 551, 560, 600, or do something else? Lastly, as far as raising the join_buffer_size value goes, this doesn't even seem to be included in Debian's my.cnf file by default. Assuming there's not much I can do about adding indexes, should I look into raising this? Any suggested values? Any suggestions in general would be appreciated as well. Edit: Here's the number of open tables the MySQL server is reporting; I believe this value is related to my question (Opened_tables: 22574).
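
    For reference, these variables live in the [mysqld] section of my.cnf. The values below are illustrative placeholders drawn from the discussion above, not recommendations; size them against SHOW GLOBAL STATUS output for your own workload:

        [mysqld]
        max_connections  = 512     # matching the web server's limit, per above
        table_cache      = 600     # renamed table_open_cache in MySQL 5.5+
        join_buffer_size = 256K    # allocated per join; raise cautiously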

    Read the article

  • Programmer-friendly non-voxel art styles?

    - by Overv
    Like many other programmers I've always wanted to make a game, but I simply lack the skills to do any production-quality graphics. I am, however, sure that I want to do the models and textures myself, because I need a lot of different objects and I am sure I wouldn't be able to find good matching models on 3D sites. That means I'll have to pick an art style that is "simple", programmer-friendly. An extreme example of this is of course Minecraft, but I don't want to go that basic; I'm absolutely against creating a voxel game. What kinds of art styles are out there that are relatively simple, i.e. things made out of basic shapes and textures, but are still good enough to form a believable and detailed world? An example of what I mean is The Wind Waker: the objects are formed of relatively simple shapes, but still provide enough detail to create a nice, living world. The environment my game is set in is a city. What I'm really asking for here are good examples of "simple" art styles applied in practice, so I can choose one that fits my skills.

    Read the article

  • Proper way to let user enter password for a bash script using only the GUI (with the terminal hidden)

    - by MountainX
    I have made a bash script that uses kdialog exclusively for interacting with the user. It is launched from a ".desktop" file so the user never sees the terminal. It looks 100% like a GUI app (even though it is just a bash script). It runs in KDE only (Kubuntu 12.04). My only problem is handling password input securely and conveniently. I can't find a satisfactory solution. The script was designed to be run as a normal user and to prompt for the password when a sudo command is first needed. In this way, most commands, those not requiring sudo rights, are run as the normal user. What happens (when the script is run from the terminal) is that the user is prompted for their password once and the default sudo timeout allows the script to finish, including any additional sudo commands, without prompting the user again. This is how I want it to work when run behind the GUI too. The main problem is that using kdesudo to launch my script, which is the standard GUI way, means that the entire script is executed by the root user. So file ownerships get assigned to the root user, I can't rely upon ~/ in paths, and many other things are less than ideal. Running the entire script as the root user is just a very unsatisfactory solution and I think it is a bad practice. I appreciate any ideas for letting a user enter the sudo password just once via GUI while not running the whole script as root. Thanks.
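
    One possible pattern, sketched below and untested: prompt once with kdialog, prime sudo's cached credentials with sudo -S -v, and let the rest of the script run unprivileged, calling sudo only where needed. Whether later sudo calls reuse the cached timestamp depends on the sudoers settings (e.g. tty_tickets), so treat this as a starting point rather than a guaranteed recipe:

        #!/bin/bash
        # Ask for the password graphically; abort if the user cancels.
        passwd=$(kdialog --password "Administrative rights are required:") || exit 1

        # -S reads the password from stdin; -v only validates and caches it.
        if ! echo "$passwd" | sudo -S -v 2>/dev/null; then
            kdialog --error "Sorry, wrong password."
            exit 1
        fi
        unset passwd

        sudo cp myapp.conf /etc/   # privileged: reuses the cached credentials
        touch "$HOME/myapp.log"    # unprivileged: still owned by the normal user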

    Read the article

  • C# : When to go Fluent

    - by ach
    In many respects I really like the idea of fluent interfaces, but with all of the modern features of C# (initializers, lambdas, named parameters) I find myself thinking, "is it worth it?", and "is this the right pattern to use?". Could anyone give me, if not an accepted practice, at least their own experience or decision matrix for when to use the fluent pattern? Conclusion: some good rules of thumb from the answers so far:
    - Fluent interfaces help greatly when you have more actions than setters, since calls benefit more from the context pass-through.
    - Fluent interfaces should be thought of as a layer over top of an API, not the sole means of use.
    - Modern features such as lambdas, initializers, and named parameters can work hand-in-hand to make a fluent interface even more friendly.
    ...
    Edit: here is an example of what I mean by the modern features making it feel less needed. Take for example a (perhaps poor example) fluent interface that allows me to create an Employee like:

        Employees.CreateNew().WithFirstName("Peter")
                 .WithLastName("Gibbons")
                 .WithManager()
                     .WithFirstName("Bill")
                     .WithLastName("Lumbergh")
                     .WithTitle("Manager")
                     .WithDepartment("Y2K");

    This could easily be written with initializers like:

        Employees.Add(new Employee()
        {
            FirstName = "Peter",
            LastName = "Gibbons",
            Manager = new Employee()
            {
                FirstName = "Bill",
                LastName = "Lumbergh",
                Title = "Manager",
                Department = "Y2K"
            }
        });

    I could also have used named parameters in a constructor in this example.

    Read the article

  • 24 Hours of PASS scheduling

    - by Rob Farley
    I have a new appreciation for Tom LaRock (@sqlrockstar), who is doing a tremendous job leading the organising committee for the 24 Hours of PASS event (Twitter: #24hop). We've just been going through the list of speakers and their preferences for time slots, and hopefully we've kept everyone fairly happy. All the submitted sessions (59 of them) were put up for a vote, with over a thousand of you picking your favourites. The top 28 sessions as voted were all included (24 sessions plus 4 reserves), and duplicates (when a single presenter had two sessions in the top 28) were swapped out for others. For example, both sessions submitted by Cindy Gross were in the top 28. These swaps were chosen by the committee to get a good balance of topics. Amazingly, some big names missed out, and even the top ten included some surprises. T-SQL, indexes and Reporting featured well in the top ten, and in the end, the mix between BI, Dev and DBA ended up quite nicely too. The ten most voted-for sessions were (in order):
    - Jennifer McCown - T-SQL Code Sins: The Worst Things We Do to Code and Why
    - Michelle Ufford - Index Internals for Mere Mortals
    - Audrey Hammonds - T-SQL Awesomeness: 3 Ways to Write Cool SQL
    - Cindy Gross - SQL Server Performance Tools
    - Jes Borland - Reporting Services 201: the Next Level
    - Isabel de la Barra - SQL Server Performance
    - Karen Lopez - Five Physical Database Design Blunders and How to Avoid Them
    - Julie Smith - Cool Tricks to Pull From Your SSIS Hat
    - Kim Tessereau - Indexes and Execution Plans
    - Jen Stirrup - Dashboards Design and Practice using SSRS
    I think you'll all agree this is shaping up to be an excellent event.

    Read the article

  • throw new exception- C#

    - by Jalpesh P. Vadgama
    This post is in response to a comment on my older post, throw vs. throw(ex) best practice and difference - C#, asking that I also cover throw new Exception. What's wrong with throw new Exception: it is even worse. It creates a new exception and erases all the earlier exception data, so it erases the stack trace too. Please go through the following code. It's the same as the earlier post; the only difference is the throw new Exception.

        using System;

        namespace Oops
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        DevideByZero(10);
                    }
                    catch (Exception exception)
                    {
                        throw new Exception(string.Format(
                            "Brand new Exception-Old Message:{0}",
                            exception.Message));
                    }
                }

                public static void DevideByZero(int i)
                {
                    int j = 0;
                    int k = i / j;
                    Console.WriteLine(k);
                }
            }
        }

    Now once you run this example, you will get the following output as expected. Hope you like it. Stay tuned for more...

    Read the article

  • Strict Pomodoro and other time management Chrome extensions

    - by kerry
    I have recently begun using the Pomodoro Technique to increase my productivity. However, I still find myself getting sucked into the vortex of useless information that is the internet. With that in mind I began searching for a useful Chrome extension to replace the Android Pomodoro app I have been using to manage my 'doros. I even considered writing it myself. Luckily, I stumbled on one that had a similar feature set to what I was looking for. Strict Pomodoro is an excellent Chrome extension for practicing Pomodoro. Though lacking a few key features, such as the ability to set the duration of your pomodoros and breaks, it still has a key feature that helps me stay on task: it blocks time-sucking websites. You can set filter lists and it will keep you from accessing them during a Pomodoro, effectively reminding you to stay on task. Also, the author readily admits that it was quickly put together and new features may be added down the road. For now, it is still an excellent option. For those of you who do not practice Pomodoro but are trying to stay on task, the StayFocusd extension will effectively manage the amount of time you spend on useless (non-productive) sites. It also has a rich feature set that may be better for your work habits. OK, break's over. Time to get back to work. 25 minutes at a time.

    Read the article

  • Database Schema Usage

    - by CrazyHorse
    I have a question regarding the appropriate use of SQL Server database schemas and was hoping that some database gurus might be able to offer some guidance around best practice. Just to give a bit of background: my team has recently shrunk to 2 people and we have just been merged with another 6-person team. My team had set up a SQL Server environment running off a desktop, backing up to another desktop (and nightly to the network), whilst the new team has a formal SQL Server environment, running on a dedicated server, with backups and maintenance all handled by a dedicated team. So far it's good news for my team. Now to the query. My team designed all our tables to belong to a 3-letter schema name (e.g. User = USR, General = GEN, Account = ACC) which broadly speaking relates to specific applications, although there is a lot of overlap. My new team has come from an Access background and has implemented their tables within dbo with a 3-letter prefix followed by "_tbl", so the examples above would be dbo.USR_tblTableName, dbo.GEN_tblTableName and dbo.ACC_tblTableName. Further to this, neither my old team nor my new team has gone live with their SQL Servers yet (we're both, coincidentally, migrating away from Access environments) and the new team have said they're willing to consider adopting our approach if we can explain how this would be beneficial. We are not anticipating handling table updates at schema level, as we will be using application-level logins. Also, with regards to the unwieldiness of the 7-character prefix, I'm not overly concerned myself, as we're using LINQ almost exclusively so the tables can simply be renamed in the DBML (although I know that presents some challenges when we update the DBML). So therefore, given that both teams need to be aligned with one another, can anyone offer any convincing arguments either way?

    Read the article

  • different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario. I have a website where I redirect my users based on the device they are using. Say a user is visiting from an iPad: I take him directly to the page of iPad wallpapers. The user selects the iPad version, and I take the user to the gallery of wallpapers, where the user can select and download any wallpaper. Every wallpaper is at the required resolution; I have my reasons for doing this. Now the thing is, there are different resolution versions of an image appearing in 5 different sections of my website, each having their own view page. There is only one record in the db table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when 5 different pages are generated in the 5 categorized sections of the website, due to the shared DB record, the keywords, the titles, and every single detail of the 5 pages are the same, apart from the resolution of the image and the section-specific details each page has. The pages also have different paths, like:
    wallpapers.com\ipad-1\cars\Ferrari-dino.html
    wallpapers.com\ipad-2\cars\Ferrari-dino.html
    wallpapers.com\ipad-3\cars\Ferrari-dino.html
    wallpapers.com\ipad-4\cars\Ferrari-dino.html
    wallpapers.com\ipad-5\cars\Ferrari-dino.html
    Now, this being my scenario: how do search engines see it, and how do they rank it? Is it a good, normal, or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat:
    - Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.
    - Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or are those in a separate model that is used by every project?
    - Don't split at all; one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, compile time, and possibly run time.
    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
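
    As a sketch of the "split by intent" option in code-first terms (EF 4.1+; the entity and context names are hypothetical): the entity classes themselves can be shared between contexts, so only the DbSet registrations repeat, which softens the DRY concern for cross-cutting tables like User.

        using System.Data.Entity;

        public class User     { public int Id { get; set; } public string Name { get; set; } }
        public class Invoice  { public int Id { get; set; } public int UserId { get; set; } }
        public class Employee { public int Id { get; set; } public int UserId { get; set; } }

        // Each module sees only a narrow slice of the database.
        public class BillingContext : DbContext
        {
            public DbSet<Invoice> Invoices { get; set; }
            public DbSet<User> Users { get; set; }    // shared table, mapped again
        }

        public class HrContext : DbContext
        {
            public DbSet<Employee> Employees { get; set; }
            public DbSet<User> Users { get; set; }    // same class, no duplication
        }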

    Read the article

  • Math questions at a programmer interview?

    - by anon
    So I went to an interview at Samsung here in Dallas, Texas. The way the recruiter described the job, he didn't make it sound like it was too math-oriented. The job basically involved graphics programming and C++. Yes, math is implied in graphics programming, especially shaders, but I still wasn't expecting this... The whole interview lasted about an hour and a half and they asked me nothing but math-related questions. They didn't ask me a single programming question, which I found odd. About all they did was ask me how to write certain math routines as a C++ function, but that's about it. What about programming philosophy questions? Design patterns? Code-correctness? Constness? Exception safety? Thread safety? There are a zillion topics that they could have covered. But they didn't. The main concern I have is that they didn't ask any programming questions. This basically implies to me that any programmer who is good at math can get a job here, but they might put out terrible code. Of course, I think I bombed the interview because I haven't used any sort of linear algebra in about a year and I forget math easily if I haven't used it in practice for a while. Are any of my other fellow programmers out there this way? I'm a game programmer too, so this seems especially odd. The more I learn, the more old knowledge that gets "popped" out of my "stack" (memory). My question is: Does this interview seem suspicious? Is this a typical interview that large corporations have? During the interview they told me that Google's interview process is similar. They have multiple, consecutive interviews where the math problems get more advanced.

    Read the article

  • Handling Errors In PHP When Using MVC

    - by James Jeffery
    I've been using CodeIgniter a lot recently, but one thing that gets on my nerves is handling errors and displaying them to the user. I've never been good at handling errors without it getting messy. My main concern is when returning errors to the user. Is it good practice to throw and catch exceptions, rather than returning 0 or 1 from functions and then using if/else to handle the errors, thus making it easier to inform the user about the issue? I tend to shy away from exceptions. My Java tutor at university some years ago told me "exceptions shouldn't be used in production code; they're more for debugging". I get the feeling he was lying. As an example, I have code that adds a user to a database. During the process more than one thing could go wrong, such as a database issue, a duplicate entry, a server issue, etc. When an issue happens during registration, the user needs to know about it. What's the best way to handle errors in PHP, keeping in mind that I'm using an MVC framework?
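
    Since the question is PHP-specific, here is a hedged PHP sketch of the exception approach (the function names, the DuplicateUserException class, and $db are hypothetical placeholders, not CodeIgniter APIs): throw a distinct exception per failure mode, catch in the controller, and map each one to a user-facing message.

        <?php
        class DuplicateUserException extends Exception {}

        function userExists($db, $email) {
            return false;   // hypothetical stub; imagine a query on the users table
        }

        function registerUser($db, $email) {
            if (userExists($db, $email)) {
                throw new DuplicateUserException("An account already uses $email.");
            }
            // ...insert the row; the DB layer may throw its own exceptions
        }

        // Controller action: one place decides what the user gets told.
        try {
            registerUser($db, $_POST['email']);
            $message = 'Welcome aboard!';
        } catch (DuplicateUserException $e) {
            $message = $e->getMessage();        // safe to show the user
        } catch (Exception $e) {
            error_log($e->getMessage());        // or CodeIgniter's log_message()
            $message = 'Something went wrong; please try again.';
        }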

    Read the article

  • Why does TDD work?

    - by CesarGon
    Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here on Programmers SE and in other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons:
    - The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, for example, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well-thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcomes, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does.
    - Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome produces in many people has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine" when it really means "the system is as good as our testing strategy". Often, the testing strategy is not checked. Or, who tests the tests?
    I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with Backbone.js, with most content being passed as JSON and loaded via Backbone. I just need some advice, or opinions on the likelihood of my website being penalised, for the method of serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users. This is my basic plan for the site: the first request to any page returns HTML that gives only about 1/4 of the page, and thereafter the remaining 3/4 loads via Backbone, so non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected to have JS, a cookie is saved on their machine, and requests thereafter are AJAX-only. Example:

        If (AJAX || HasJSCookie) {
            // Pass JSON
        }

    Search-engine-served content: that entire experience of loading via AJAX is stripped if, for example, a Google bot is detected; the same content is served, but all HTML. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about internal links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by checking a list of user agents to know whether it's a bot or not:

        If (Bot) {
            // serve plain HTML
        }

    In addition I plan to make clean URLs for the entire website despite the full-AJAX approach, so providing AJAX content at www.example.com/#/page and normal HTML at www.example.com/page is kind of out of the question; I would rather avoid the practice of using # when technologies such as HTML5 pushState are around. So my question is really just asking the opinion of the masses: is it likely that my website will be penalised? And do you suggest an alternative that avoids the 'noscript' method?
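
    A minimal C# sketch of the user-agent check described above (the token list is illustrative and incomplete; note that user-agent strings are trivially spoofed, and engines like Google verify their crawlers via reverse DNS, which a check like this cannot):

        using System;
        using System.Linq;

        static class BotDetector
        {
            private static readonly string[] BotTokens =
                { "googlebot", "bingbot", "slurp", "duckduckbot", "baiduspider" };

            public static bool IsBot(string userAgent)
            {
                if (string.IsNullOrEmpty(userAgent)) return false;
                string ua = userAgent.ToLowerInvariant();
                return BotTokens.Any(token => ua.Contains(token));
            }
        }

        // usage: if (BotDetector.IsBot(Request.UserAgent)) { /* render full HTML */ }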

    Read the article

  • .Net search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server to index and search content ranging from simple text files to MS Office documents to PDFs. MIS is also used to crawl file servers, and MIS in tandem with Index Server Companion crawls content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server; thus Solr, and effectively SolrNet, is ruled out. With this being the context, I have a couple of questions.
    1. Technology choice: I did my initial investigation and looked at Lucene.Net. There seem to be two issues with using Lucene.Net. First, it can't crawl external content, and there doesn't seem to be a direct port of Nutch to .NET. Second, since it is just an indexer, it can't parse various document types; the parsing is left to the developer. So, what would be the best technology choice on the .NET platform to achieve indexing and crawling? Are there any .NET open-source libraries available for document parsing?
    2. Architectural pattern: is there any general architectural pattern or best practice that needs to be followed in designing such a search engine?
    Thanks in advance.
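
    On the indexing half, the Lucene.Net side is fairly small once text has been extracted. A minimal sketch against the Lucene.Net 3.x API (the paths and field names are arbitrary; extracting text from PDFs or Office documents would happen beforehand, via a separate library or IFilter):

        using System.IO;
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Store;
        using Version = Lucene.Net.Util.Version;

        class IndexerSketch
        {
            static void Main()
            {
                var dir = FSDirectory.Open(new DirectoryInfo("index"));
                var analyzer = new StandardAnalyzer(Version.LUCENE_30);

                // true = create a fresh index rather than appending.
                using (var writer = new IndexWriter(dir, analyzer, true,
                                                    IndexWriter.MaxFieldLength.UNLIMITED))
                {
                    var doc = new Document();
                    doc.Add(new Field("path", @"\\fileserver\docs\a.txt",
                                      Field.Store.YES, Field.Index.NOT_ANALYZED));
                    doc.Add(new Field("content", "plain text extracted elsewhere",
                                      Field.Store.NO, Field.Index.ANALYZED));
                    writer.AddDocument(doc);
                }
            }
        }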

    Read the article

  • How to refactor while keeping accuracy and redundancy?

    - by jluzwick
    Before I ask this question I will preface it with our environment. My team consists of 3 programmers working on multiple different projects. Due to this, our use of testing is mostly limited to very general black-box testing. Take the following assumptions also: unit tests will eventually be written, but I'm under strict orders to refactor first; ignore common test-driven development techniques; given this environment, my time is limited. I understand that if this were done correctly, our team would actually save money in the long term by building unit tests beforehand. I'm about to refactor a fairly large portion of the code, which is critical. While I believe my code will work correctly when done and after our black-box testing, I realize that there will be new data that the new code might not be able to handle. What I wanted to know is how to keep the old code, which functions 98% of the time, around so that we can call those subroutines in case the new code doesn't work properly. Right now I'm thinking of separating the old code into its own class file and adding a variable to our config that tells the program which code to use, as in the sketch below. Is there a better way to handle this? NOTE: We do use revision control and we have archived builds, so the client could always revert to a previous build, but I would like to see if there is a decent way of doing this besides reverting. I want this so they can use the other new functionality delivered in the new build. Edit: While I agree I will need to write unit tests for this, I don't believe I will capture everything with them. I'm looking for ways to easily revert to the old, functional code should anything happen. While I know this is a poor practice, I'm planning on removing this code after our team can guarantee that the new code works to the same standards as the old.
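
    A sketch of that config-flag idea in C# (all names hypothetical): both versions implement one interface, and an appSettings switch picks which one runs, so a client can fall back to the 98%-reliable path without reverting the build.

        using System.Configuration;

        public interface IReportProcessor
        {
            string Process(string input);
        }

        public class LegacyReportProcessor : IReportProcessor
        {
            public string Process(string input) { return "legacy:" + input; }
        }

        public class RefactoredReportProcessor : IReportProcessor
        {
            public string Process(string input) { return "new:" + input; }
        }

        public static class ProcessorFactory
        {
            // app.config:
            //   <appSettings>
            //     <add key="UseRefactoredProcessor" value="true"/>
            //   </appSettings>
            public static IReportProcessor Create()
            {
                bool useNew;
                bool.TryParse(ConfigurationManager.AppSettings["UseRefactoredProcessor"],
                              out useNew);
                return useNew ? (IReportProcessor)new RefactoredReportProcessor()
                              : new LegacyReportProcessor();
            }
        }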

    Read the article

  • Why is Quicksort called "Quicksort"?

    - by Darrel Hoffman
    The point of this question is not to debate the merits of this over any other sorting algorithm - certainly there are many other questions that do this. This question is about the name. Why is Quicksort called "Quicksort"? Sure, it's "quick", most of the time, but not always. The possibility of degenerating to O(N^2) is well known. There are various modifications to Quicksort that mitigate this problem, but the ones which bring the worst case down to a guaranteed O(n log n) aren't generally called Quicksort anymore. (e.g. Introsort). I just wonder why of all the well-known sorting algorithms, this is the only one deserving of the name "quick", which describes not how the algorithm works, but how fast it (usually) is. Mergesort is called that because it merges the data. Heapsort is called that because it uses a heap. Introsort gets its name from "Introspective", since it monitors its own performance to decide when to switch from Quicksort to Heapsort. Similarly for all the slower ones - Bubblesort, Insertion sort, Selection sort, etc. They're all named for how they work. The only other exception I can think of is "Bogosort", which is really just a joke that nobody ever actually uses in practice. Why isn't Quicksort called something more descriptive, like "Partition sort" or "Pivot sort", which describe what it actually does? It's not even a case of "got here first". Mergesort was developed 15 years before Quicksort. (1945 and 1960 respectively according to Wikipedia) I guess this is really more of a history question than a programming one. I'm just curious how it got the name - was it just good marketing?

    Read the article

  • Are you ready for SharePoint 2010?

    - by Michael Van Cleave
    With SharePoint's next release on the horizon (May 12th), many of my clients and colleagues are starting to ramp up for the upcoming tidal wave of functionality. Microsoft has been doing a terrific job of getting as much information out into the public limelight as possible over the last few months, and I think that will definitely pay off with regard to acceptance of the new version of SharePoint. However, there are still some aspects of the new platform that are a little murky, such as: "Should we upgrade?" "Will my current installation upgrade without issues?" "What benefits will I see by upgrading?" "What are the best practices for upgrading, or best practices in general relating to 2010?" "How should we plan to deploy SharePoint 2010 in our organization?" There is a ton of information out there, but how do you go about getting some of these questions answered? Well, I am glad you asked. :) ShareSquared will be delivering a FREE SharePoint 2010 Readiness Webinar that will cover preparation, strategies, and best practices for the upcoming version of SharePoint. The webinar will be presented by 2 of ShareSquared's outstanding SharePoint MVPs, Gary Lapointe and Paul Stork. As all those TV commercials say... "Space is limited, so sign up now!" Just kidding; well, kind of, but not really. I am sure that signups will be huge and space really is limited, so the sooner you sign up the better. I would hate for any of you to miss out. If you have any questions please don't hesitate to shoot me an e-mail through my blog or contact ShareSquared directly. See you at the webinar! Michael

    Read the article

  • Management Reporter Installation – Lessons Learned

    - by Ryan McBee
    After successfully completing several installations of Management Reporter this year, I wanted to share a few lessons learned that should help you. First, you will want to make sure that you install Management Reporter under a domain account, as opposed to a local system or network service account. Management Reporter gives you the option to install under these accounts, but it is a best-practice approach to use a domain account. Second, upon installation of Management Reporter, you will want to make sure that Directory Browsing is enabled within the IIS server for your site, or you will have problems when you go to use Management Reporter. By default, it will be disabled in Server 2008 R2, and you will need to make the setting change in the Actions pane. Lastly, you will want to make sure that SQL Server is running under a domain account. I have had multiple situations where reports have been stuck in the Queued status rather than the Processing status in Management Reporter. After reviewing resolution 5 of KB 2298248, it was determined that running SQL Server under a domain account is the way to go.

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have a configuration stored in a configuration object, MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel, and I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print; MainConfig also needs to know how much to print. So the solutions available are:
    - I pass the whole MainConfig to AlgoA and AlgoB. This way, AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained.
    - I copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the same printLevel information repeated three times.
    - I create a third configuration class, PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig.
    Have you ever encountered this situation? How did you solve it? Which one is stylistically clearer to a new reader of the code?
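
    A minimal C# sketch of the third option (all names hypothetical): the shared settings get their own class, so each subalgorithm stays self-contained and never sees its sibling's configuration.

        public class PrintingConfig { public int PrintLevel { get; set; } }
        public class AlgoAConfig   { public int Iterations { get; set; } }
        public class AlgoBConfig   { public double Tolerance { get; set; } }

        public class MainConfig
        {
            public readonly PrintingConfig Printing = new PrintingConfig();
            public readonly AlgoAConfig AlgoA = new AlgoAConfig();
            public readonly AlgoBConfig AlgoB = new AlgoBConfig();
        }

        public class AlgoA
        {
            private readonly AlgoAConfig config;
            private readonly PrintingConfig printing;

            // AlgoA receives only its own parameters plus the shared
            // printing settings; AlgoB's configuration stays invisible.
            public AlgoA(AlgoAConfig config, PrintingConfig printing)
            {
                this.config = config;
                this.printing = printing;
            }
        }

    Construction then reads AlgoA(mainConfig.AlgoA, mainConfig.Printing), mirroring the AlgoA(MainConfig.AlgoAConfig) call in the question.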

    Read the article
