Search Results

Search found 385 results on 16 pages for 'reasoning'.

Page 7/16

  • Command-Query-Separation and multithreading safe interfaces

    - by Tobias Langner
    I like the command query separation pattern (from OOSC / Eiffel - basically you either return a value or you change the state of the class - but not both). This makes reasoning about the class easier and it is easier to write exception safe classes. Now, with multi threading, I run into a major problem: the separation of the query and the command basically invalidates the result of the query, as anything can happen between the two calls. So my question is: how do you handle command query separation in a multi-threaded environment? Clarification example: A stack with command query separation would have the following methods: push (command), pop (command - but does not return a value), top (query - returns the value), empty (query). The problem here is - I can get empty as a status, but then I cannot rely on top really retrieving an element, since between the call of empty and the call of top the stack might have been emptied. The same goes for pop & top. If I get an item using top, I cannot be sure that the item I pop is the same one. This can be solved using external locks - but that's not exactly what I call thread-safe design.
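
    One common way to reconcile the two concerns, sketched below purely as an illustration (the class and method names are invented, not from the question), is to keep the CQS-style queries but add a single combined operation whose check and removal happen under the same lock, so a caller never acts on a stale answer from empty() or top():

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.Optional;

        // Sketch of a lock-per-instance stack: the classic queries stay, but the
        // multi-threaded caller uses tryPop(), whose check and removal are atomic.
        public class ConcurrentStack<T> {
            private final Deque<T> items = new ArrayDeque<>();

            public synchronized void push(T item) { items.push(item); }        // command
            public synchronized boolean empty()   { return items.isEmpty(); }  // query

            public synchronized Optional<T> top() {                            // query
                return items.isEmpty() ? Optional.empty() : Optional.of(items.peek());
            }

            // The compromise: check and remove in one critical section.
            public synchronized Optional<T> tryPop() {
                return items.isEmpty() ? Optional.empty() : Optional.of(items.pop());
            }
        }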

    Read the article

  • Are programmers a bunch of heartless robots who lack empathy? [closed]

    - by Graviton
    OK, the provocative title got your attention. My experience as a programmer and dealing with my fellow programmers is that a programmer is usually someone so consumed by his programming work, so absorbed in his algorithmic construction, that he has little passion/time left for anything else, which includes empathy for other people and love and care for the people he loves or should love (such as his spouse, parents, kids, colleagues, etc.). The better a person is in terms of his programming powers, the more defective he is in terms of love/care, because both honing programming skills and caring for the people around you take time, and one has only so much time to allocate among so many different things. Also, programming (especially an INTERESTING programming job, like writing an AI to predict the future search trend) is a highly consuming job; it doesn't just consume you from 9 to 5, it also consumes you after 5 and practically every second of your waking hours, because a good programmer can't just magically switch off his thinking hat after the office lights go off (if you can, then I don't really think you are a passionate programmer, and the prerequisite of a good programmer is passion). So a good programmer is necessarily someone who can't love as much as others do, because the very nature of the programming job prevents him from loving others as much as he wants to. Do you concur with my observation/reasoning?

    Read the article

  • The value of an updated specification

    - by Mr. Jefferson
    I'm at the tail end of a large project (around 5 months of my time, 60,000 lines of code) on which I was the only developer. The specification documents for the project were designed well early on, but as always happens during development, some things have changed. For example: bugs were discovered and fixed in ways that don't correspond well with the spec; additional needs were discovered and dealt with; we found areas where the spec hadn't thought far enough ahead and had to change some implementation; etc. Through all this, the spec was not kept updated, because we were lean on resources (I know that's a questionable excuse). I'm hoping to get the spec updated soon when we have time, but it's a matter of convincing management to spend the effort. I believe it's a good idea, but I need some good reasoning; I can't speak from very much experience because I'm only a year and a half out of school. So here are my questions: What value is there in keeping an updated spec? How often should the spec be updated? With every change made that might affect it, just at the end of development, or something in between? EDIT to clarify: this project was internal, so no external client was involved. It's our product.

    Read the article

  • How do you avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time, this can work, but in an interview it's been pretty unsuccessful, if not downright counter-productive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist or are there techniques to recognize when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution with the hope of reasoning out a better solution later? A related follow-up question might be, "How do you ensure that you are solving the right problem?" Or "When is it most productive to think and design more vs. code some experiments and figure out the design later?" EDIT: One close vote already, but I'm not sure why. I wrote this in the first person, but I doubt I'm the only programmer to ever choke in an interview. Here is a list of techniques for taking a math test and another for taking an oral exam. Maybe I'm not expressing myself well, but I'm asking if there is a similar list of techniques for handling a programming problem under pressure?

    Read the article

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics what I mean is that unit and integration tests should not be included in the testing reports and systems testers should not be concerned about them. My reasoning is based on two things. Unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts you still test what e.g. a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes. But you always test a limited part of the whole system. Systems testing on the other hand is planned and performed against the system specifications. The spec determines when the test passes. I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kind of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging what should and shouldn't be tested from that is a false conclusion because the system may still not function the way the specs require even though all unit and integration tests pass. This might seem like useless academic discussion but if you work in a strictly formal environment as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to the use of Factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you - the Factory design pattern. My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate the appropriate class; I don't need to ask a Factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out if I should adopt the Jena philosophy in my library which uses Jena, or if I should disregard it, pick my own design philosophy, and stick with it.
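
    For readers who have not used Jena, a minimal sketch of the factory style in question looks roughly like this (package names differ between Jena 2.x and later Apache Jena releases, and the URIs are made up for illustration):

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;

        public class JenaFactoryExample {
            public static void main(String[] args) {
                // Model, Resource and Property are all interfaces; the factory picks
                // the concrete, memory-backed implementations for you.
                Model model = ModelFactory.createDefaultModel();

                Resource tree = model.createResource("http://example.org/pearltree/1");
                Property title = model.createProperty("http://example.org/ns#", "title");
                tree.addProperty(title, "My first pearltree");

                model.write(System.out, "RDF/XML");
            }
        }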

    Read the article

  • Project Codenames - Yea or Nay?

    - by rmx
    Where I work, most of our projects have (or at least attempt) descriptive, useful names. However we have a few with names that make no sense: I found an assembly named WiFi which actually has nothing whatsoever to do with wi-fi, but is a codename. When I asked why, I was told that it's to protect company secrets in case some intern has a few too many at the pub on Friday and starts chatting about the brand new 'WiFi' project he's been working on. It's clear that some people find enjoyment in finding silly / amusing codenames for their projects (like in this question). My question is: is it really a good idea to use codenames for your projects, or are you better off spending the time to decide upon a descriptive name? My opinion is that in the long run it's better to give your projects relevant names. My reasoning is that if you can't think of a decent name, perhaps you don't really know the requirements well enough. I think there are better ways to 'protect company secrets' and I find it quite confusing when the name does not correlate at all with the content. It's just common sense, surely?! So do you use codenames, and what are your reasons for or against this seemingly common, yet annoying (to me at least) practice?

    Read the article

  • ArchBeat Link-o-Rama for 10-17-2012

    - by Bob Rhubart
    This is your brain on IT architecture. Oracle Technology Network Architect Day in Los Angeles, Oct 25 Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. Register now. Panel: On the Impact of Software | InfoQ Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012. OTN APAC Tour 2012: Bangkok, Thailand - Oct 22, 2012 Mike Dietrich shares information on the upcoming OTN APAC Tour stop in Bangkok. Registration is open. Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster | Giri Mandalika Giri Mandalika shares an overview of a new Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3. As Giri explains, "This solution was centered around the engineered system, SPARC SuperCluster T4-4." The Oldest Big Data Problem: Parsing Human Language | The Data Warehouse Insider Dan McClary offers up a new whitepaper "which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance." Mobile Apps for EBS | Capgemini Oracle Blog Capgemini solution architect Satish Iyer briefly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS. Ease the Chaos with Automated Patching: Enterprise Manager Cloud Control 12c | Porus Homi Havewala This new OTN article is excerpted from Porus Homi Havewala's latest book, Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos (2012, Packt Publishing). Thought for the Day "Never make a technical decision based upon the politics of the situation, and never make a political decision based upon technical issues." — Geoffrey James Source: softwarequotes.com

    Read the article

  • You Can't Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, to which I got the following error. As you can see from the error message, you aren't allowed to upload a file that is empty. Fast forward to this week when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered that error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprise, I got a similar error. Conclusion: Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue a few friends replied that they had also found this error recently. I don't know the internal reasoning why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don't have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated - just leave a comment. -Frog Out

    Read the article

  • Why is a small fixed vocabulary seen as an advantage to RESTful services?

    - by Matt Esch
    So, a RESTful service has a fixed set of verbs in its vocabulary. A RESTful web service takes these from the HTTP methods. There are some supposed advantages to defining a fixed vocabulary, but I don't really grasp the point. Maybe someone can explain it. Why is a fixed vocabulary as outlined by REST better than dynamically defining a vocabulary for each state? For example, object oriented programming is a popular paradigm. RPC is described as defining fixed interfaces, but I don't know why people assume that RPC is limited by these constraints. We could dynamically specify the interface just as a RESTful service dynamically describes its content structure. REST is supposed to be advantageous in that it can grow without extending the vocabulary. RESTful services grow dynamically by adding more resources. What's so wrong about extending a service by dynamically specifying a per-object vocabulary? Why don't we just use the methods that are defined on our objects as the vocabulary and have our services describe to the client what these methods are and whether or not they have side effects? Essentially I get the feeling that the description of a server-side resource structure is equivalent to the definition of a vocabulary, but we are then forced to use the limited vocabulary in which to interact with these resources. Does a fixed vocabulary really decouple the concerns of the client from the concerns of the server? I surely have to be concerned with some configuration of the server; this is normally resource location in RESTful services. To complain about the use of a dynamic vocabulary seems unfair, because we have to dynamically reason about how to understand this configuration in some way anyway. A RESTful service describes the transitions you are able to make by identifying object structure through hypermedia. I just don't understand what makes a fixed vocabulary any better than a self-describing dynamic vocabulary, which could easily work very well in an RPC-like service. Is this just poor reasoning for the limiting vocabulary of the HTTP protocol?
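
    To make the contrast concrete, here is a rough sketch (illustrative names only, not taken from any real service) of the same operation expressed in the two styles being compared:

        // Illustration only: "approve an order" expressed both ways.

        // RPC style: the vocabulary is whatever methods each service defines.
        interface OrderService {
            Order getOrder(long id);
            void approveOrder(long id);
            void cancelOrder(long id, String reason);
        }

        // REST style: the verb vocabulary stays fixed and growth happens by
        // adding resources, e.g.
        //   GET  /orders/42
        //   POST /orders/42/approval
        //   POST /orders/42/cancellation      (body: {"reason": "..."})
        class Order { /* fields omitted */ }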

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page speed display times, and one of the methods is to gzip content from the webserver. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes. My reply: What is the reason(s) for why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for facebook? (see below) Google recommends gzipping more aggressively. And that seems appropriate on our site where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them. So I'm here for some fact checking. Is the 860 byte limit due to packet size the end of this reasoning? Why would high traffic sites push this lower/closer to the 150 byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?
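
    The overhead claim is easy to measure for yourself; a rough sketch (Java 11+ for String.repeat, payloads made up for illustration) that gzips a small and a larger payload in memory and prints both sizes:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPOutputStream;

        public class GzipOverheadCheck {
            // Gzip a byte array in memory and return the compressed size,
            // including the fixed gzip header and trailer overhead.
            static int gzippedSize(byte[] input) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
                    gzip.write(input);
                }
                return out.size();
            }

            public static void main(String[] args) throws IOException {
                String small = "{\"ok\":true,\"count\":3}";   // a tiny AJAX-style payload
                String large = small.repeat(200);              // the same content, but bigger

                for (String s : new String[] {small, large}) {
                    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
                    System.out.printf("original %6d bytes -> gzipped %6d bytes%n",
                            bytes.length, gzippedSize(bytes));
                }
            }
        }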

    Read the article

  • Can anyone recommend an AI sandbox?

    - by user19433
    I'm a passionate person who has been around AI for a long time [1] but never gone deep enough. Now it's time! I've been really looking for some way to concentrate on AI coding but couldn't succeed in finding an AI environment I can focus on. I just want to use an AI sandbox environment which would give me tools like: visibility information; a character controller; the ability to easily define a level, with obstacles of course; physics; collider management; trigger management. It doesn't need to be a shiny, eye-candy graphical renderer: this is about pathfinding, tactical reasoning, etc. I have tried: Unreal Dev Kit: while the new release announcement is about C++ coding, this is about external tools and will be released in 2013. Cry Engine: really interesting as AI is present here, but coding with it appears to be hell: did I get it wrong? Half Life source, C4, Torque, Dx Studio: either quite old, not very useful, or costly; these imply digging into the documentation (when provided) to code everything, graphics included. Unity 3D: the most promising platform. While you also need to create your own environment, there are lots of examples. The disadvantage, in addition to the time spent getting this environment working, is the choice of languages: C#, Javascript or Boo. C# is not that hard, but this implies you'll always have to convert papers (I love those from Lars Linden), book code, or anything you can find on Aigamedev, which are most often in C++. This is extra work. I've looked at "Simple Path", the very good Arong Greenberg work (but no source provided), and AngryAnt's work. AI Sandbox: this seems to be exactly what, as an AI coder, I want to use. I saw some previews, but since 2009 we still don't know what it will be about precisely; will it be open source or free (I strongly doubt it), will I be able to buy it, will it really provide the tools I need to focus on AI? That being said, what is the best environment to be able to focus on AI coding only? Is it even possible?

    Read the article

  • Dealing with institutionalized programmers.

    - by Singleton
    Sometimes programmers who work on a project for a long time tend to get institutionalized. It is difficult to convince them with reasoning. Even if we manage to convince them, they will be reluctant to take the suggestion on board. How do we handle the situation without developing friction in the team? Institutionalized in terms of practices: I recently joined a project where the build & release process was made very complicated, with unnecessary roadblocks. My suggestion was that we could get rid of some of the development overheads (like filling in a few spreadsheets) just by integrating the defect management and version control tools (both are IBM Rational tools, so integration can be very easy and a one-off effort). Also, by using tools like Maven & Ant (the project involves Java and some COTS products), build & release can be simplified and manual errors & intervention reduced. I managed to convince them and am ready to put in the effort to develop a proof of concept. But the 'Senior' developer is not willing to take it on board. One reason could be that the current process makes him valuable to the team.

    Read the article

  • Why, in WPF, do we set an object to Stretch via its Alignment properties instead of Width/Height?

    - by Jonathan Hobbs
    In WPF's XAML, we can tell an element to fill its container like this: <Button HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> Why is it that when we set an element to Stretch, we do it via the HorizontalAlignment and VerticalAlignment properties? Why did the WPF design team decide to take this approach over having Width="Stretch" and Height="Stretch"? I presume it was a calculated decision, and I'm curious about the reasoning. CSS, among other technologies, follows the convention that stretching is done via the width and height properties, and that alignment affects positioning exclusively. This seems intuitive enough: stretching the element is manipulating its width and height, after all! Using the corresponding alignment property to stretch an element seems counter-intuitive and unusual in comparison. This makes me think they didn't just pick this option for no reason: they made a calculated decision and had reasons behind it. Width and Height use the double data type, which would ordinarily mean assigning it a string would be silly. However, WPF's Window objects can take Width="Auto", which gets treated as double.NaN. Couldn't Width="Stretch" be stored as double.PositiveInfinity or some other value?

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what is the right thing to do. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An Error has occurred, please submit the below information to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get a hold of the client's systems administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... Who's right?
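
    The kind of message described above could be assembled with a small helper like this (a sketch only; the class name and wording are invented, not taken from the application under discussion):

        import java.io.PrintWriter;
        import java.io.StringWriter;

        public class ErrorPageHelper {
            // Build the error text shown to the user: a friendly preamble
            // followed by the full stack trace of the exception.
            public static String buildUserMessage(Throwable error) {
                StringWriter trace = new StringWriter();
                error.printStackTrace(new PrintWriter(trace));
                return "An error has occurred. Please submit the information below "
                        + "to the developers.\n\n" + trace;
            }
        }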

    Read the article

  • Antenna Aligner part 2: Finding the right direction

    - by Chris George
    Last time I managed to get "my first app(tm)" built, published and running on my iPhone. This was really cool, a piece of my code running on my very own device. Ok, so I'm easily pleased! The next challenge was actually trying to determine what it was I wanted this app to do, and how to do it. Reverting back to good old paper and pen, I started sketching out designs for the app. I knew I wanted it to get a list of transmitters, then clicking on a transmitter would display a compass type view, with an arrow pointing the right way. I figured there would not be much point in continuing until I know I could do the graphical part of the project, i.e. the rotating compass, so armed with that reasoning (plus the fact I just wanted to get on and code!), I once again dived into visual studio. Using my friend (google) I found some example code for getting the compass data from the phone using the PhoneGap framework. // onSuccess: Get the current heading // function onSuccess(heading) {    alert('Heading: ' + heading); } navigator.compass.getCurrentHeading(onSuccess, onError); Using the ripple mobile emulator this showed that it was successfully getting the compass heading. But it didn't work when uploaded to my phone. It turns out that the examples I had been looking at were for PhoneGap 1.0, and Nomad uses PhoneGap 1.4.1. In 1.4.1, getCurrentHeading provides a compass object to onSuccess, not just a numeric value, so the code now looks like // onSuccess: Get the current magnetic heading // function onSuccess(heading) {    alert('Heading: ' + heading.magneticHeading); }; navigator.compass.getCurrentHeading(onSuccess, onError); So the lesson learnt from this... read the documentation for the version you are actually using! This does, however, lead to compatibility problems with ripple as it only supports 1.0 which is a real pain. I hope that the ripple system is updated sometime soon.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990's to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh. The existing system: a Microsoft Access project performs dashboard duties; unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project; it uses 3rd party libraries referenced by Access to directly interface with a control system (read: motors, conveyors, and counters); individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility. The issue: This system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction. The original sponsor prescribed the desired deliverables that are being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Where is Oracle Utilities Application Framework V3?

    - by Anthony Shorten
    You may have noticed that the latest version of the Oracle Utilities Application Framework is V4.0.1. The last release of the Framework was V2.2. So what happened to V3? The short answer is that there is no V3 of the framework. The long answer is that the Oracle Utilities Application Framework has long been associated with Oracle Utilities Customer Care And Billing and Oracle Enterprise Taxation Management only. As more and more of the Oracle Tax And Utilities products are migrated onto the framework, the association between the framework and the original products is less appropriate. Therefore it was decided to pick a version number to emphasize the decoupling of the releases of the Framework from any particular product. To illustrate this, the Oracle Mobile Workforce Management (MWM) V2.0.0 product uses Oracle Utilities Application Framework V4.0.1. If we used the old numbering scheme then MWM would be V4.0.1, which makes no sense, given the last release of MWM was V1.x. The framework has its own development team and product management. It basically has its own schedule (though it is still influenced by the products that use it - which makes sense). So that is the reasoning behind the version numbering change for the framework.

    Read the article

  • Views : ViewControllers, many to one, or one to one?

    - by conor
    I have developed an Android application where, typically, each view (layout.xml) displayed on the screen has its own corresponding fragment (for the purpose of this question I may refer to this as a ViewController). These views and Fragments/ViewControllers are appropriately named to reflect what they display. So this has the effect of allowing the programmer to easily pinpoint the files associated with what they see on any given screen. The above refers to the one-to-one part of my question. Please note that with the above there are a few exceptions where very similar content is displayed on two views, so the ViewController is used for two views. (Using a simple switch (type) to determine which layout.xml file to load.) On the flip side, I am currently working on the iOS version of the same app, which I didn't develop. It seems that they are adopting more of a one-to-many (ViewController:View) approach. There appears to be one ViewController that handles the display logic for many different types of views. In the ViewController is an assortment of boolean flags and arrays of data (to be displayed) that are used to determine which view to load and how to display it. This seems very cumbersome to me, and coupled with no comments and ambiguous variable names, I am finding it very difficult to implement changes in the project. What do you guys think of the two approaches? Which one would you prefer? I'm really considering putting in some extra time at work to refactor the iOS app into a more 1:1 oriented approach. My reasoning for 1:1 over M:1 is that of modularity and legibility. After all, don't some people measure the quality of code based on how easy it is for another developer to pick up the reins, or how easy it is to pull a piece of code and use it somewhere else?

    Read the article

  • Is error suppression acceptable in role of logic mechanism?

    - by Rarst
    This came up in code review at work in the context of PHP and the @ operator. However, I want to try to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field which is not set results in an error message, and is commonly handled by the following logic (pseudo code): if field value is set, output field value. The code in question was doing it like this: start ignoring errors; output field value; stop ignoring errors. The reasoning for the latter was that it's more compact and readable code in this specific case. I feel that those benefits do not justify misuse (IMO) of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice over explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for programming operators to cross the boundaries of intended use (like in this case using error handling for controlling output)? Edit: I wanted to keep it more generic, but the specific code being discussed was like this: if ( isset($array['field']) ) { echo '<li>' . $array['field'] . '</li>'; } vs the following example: echo '<li>' . @$array['field'] . '</li>';

    Read the article

  • How to avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time, this can work, but in an interview it's been pretty unsuccessful, if not downright counter-productive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist or are there techniques to recognize when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution with the hope of reasoning out a better solution later? When is it most productive to think and design more vs. code some experiments and figure out the design later? Here is a list of techniques for taking a math test and another for taking an oral exam. Is there a similar list of techniques for handling a programming problem under pressure? ANSWERS: I think this is a valid answer: How To Solve It. I found the link as an answer to Steps to solve or approach towards a solution. There were also some really good tips at Is thinking out loud during an interview really the best strategy?. A great and concise argument for TDD is the first answer to TDD Writing code vs Figuring out the answer to a problem?. My question may be a near-duplicate of that one.

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning: The original code might not work properly in the first place. And writing tests for it makes future fixes and modifications harder since devs have to understand and modify the tests too. If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble since the code is too trivial to begin with. Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet, I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility. Should I avoid extracting out testable parts and writing tests if we don't have time for a complete refactoring? Is there any disadvantage to this that I should consider?
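
    As a purely hypothetical illustration of the extraction being discussed (the class, field and rule are invented; @VisibleForTesting here is the Guava annotation), the GUI handler stays thin and the small block of logic becomes reachable from a plain unit test without constructing any widgets:

        import javax.swing.JTextField;

        import com.google.common.annotations.VisibleForTesting;

        public class InvoicePanel {
            private final JTextField totalField = new JTextField();

            // GUI event handler: stays thin and just delegates to the extracted logic.
            void onTotalChanged(double net, double vatRate) {
                totalField.setText(String.valueOf(grossTotal(net, vatRate)));
            }

            // The small block of if/else logic pulled out of the handler so a unit
            // test can call it directly.
            @VisibleForTesting
            static double grossTotal(double net, double vatRate) {
                return vatRate <= 0 ? net : net * (1 + vatRate);
            }
        }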

    Read the article

  • Are specific types still necessary?

    - by MKO
    One thing that occurred to me the other day: are specific types still necessary, or are they a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint, etc.? I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated, and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle "adaptive types", i.e. if something is only ever allocated in the shortint range it uses fewer bytes, and if something is suddenly assigned a very big number the memory is allocated accordingly for that particular instance? Floats, reals and doubles are a bit trickier since the type depends on what precision you need. Strings should however be able to take up less memory in many instances (in .Net) where mostly ASCII is used, but strings always take up double the memory because of unicode encoding. One argument for specific types might be that it's part of the specification, i.e. for example a variable should not be able to be bigger than a certain value, so we set it to shortint. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties). I realize the immense problem in revamping the type architecture since it's so tightly integrated with the underlying hardware, and things like serialization might become tricky indeed. But from a programming perspective it should be great, no?
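
    Some of this already exists at the library level rather than the type-system level; a small Java illustration (chosen here only as an example language) of fixed-width vs. grow-as-needed behaviour:

        import java.math.BigInteger;

        public class AdaptiveWidthDemo {
            public static void main(String[] args) {
                // Fixed-width type: silently wraps around past 2^31 - 1.
                int fixed = Integer.MAX_VALUE;
                System.out.println(fixed + 1);                      // -2147483648

                // Arbitrary-precision type: storage grows with the value, roughly
                // the "adaptive" behaviour the post asks about.
                BigInteger adaptive = BigInteger.valueOf(Integer.MAX_VALUE);
                System.out.println(adaptive.add(BigInteger.ONE));   // 2147483648
            }
        }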

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management tool. I thought using SVN for development at the same time was OK since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person who develops high level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article
