Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

  • Benefits and features of different requirements-management systems and tools available?

    - by Gnark
    I am looking for a good comparison of the professional requirement management tools that are available. I am especially interested in the features offered by the different software solutions. In addition to the "obvious" features, I am looking for a professional requirement management system that supports: multi-lingual use; customizable generation of documentation and history (graphs); search features (e.g. full-text search of comments), ordering, and priorities; version history; and bi-directional traceability of changes, artefacts, requirements, changes in requirements, etc. Any kind of integration of the V-Model XT would be a really-nice-to-have feature. Besides that, I'd like to hear any personally motivated recommendations and/or experiences with different requirement management systems. Any input is highly appreciated. Content consulted: a similar question about a ReqM tool with the V-Model (nice, but too old), a paper (PDF), and Tools Journal.

  • Is Haskell's type system an obstacle to understanding functional programming?

    - by FarmBoy
    I'm studying Haskell for the purpose of understanding functional programming, with the expectation that I'll apply the insight I gain in other languages (Groovy, Python, and JavaScript, mainly). I chose Haskell because I had the impression that it is very purely functional and wouldn't allow any reliance on state. I did not choose to learn Haskell because I was interested in navigating an extremely rigid type system. My question is this: is a strong type system a necessary by-product of an extremely pure functional language, or is it an unrelated design choice particular to Haskell? If it is the latter, I'm curious what the most purely functional dynamically typed language would be. I'm not particularly opposed to strong typing (it has its place), but I'm having a hard time seeing how it benefits me in this educational endeavor.

  • How relevant is UTF-7 when it comes to parsing emails?

    - by J. Pablo Fernández
    I recently implemented incoming email for an application and boy, did I open the gates of hell. Since then, every other day an email arrives that makes the app fail in a different way. One of those things is emails encoded as UTF-7. Most emails come as ASCII, one of the Latin encodings, or, thankfully, UTF-8. Hotmail error messages (such as "email address doesn't exist" or "quota exceeded") seem to come as UTF-7. Unfortunately, UTF-7 is not an encoding Ruby understands: > "hello world".encode("utf-8", "utf-7") raises Encoding::ConverterNotFoundError: code converter not found (UTF-7 to UTF-8), and > Encoding::UTF_7 => #<Encoding:UTF-7 (dummy)>. My application doesn't crash (it actually handles the email quite well), but it does send me a notification about the potential error. I spent some time googling and I can't find anyone who has implemented the conversion, at least not as a Ruby 1.9.3 Encoding::Converter. So, my question is: since I have never received an email with actual content, from an actual person, in UTF-7, how relevant is that encoding? Can I safely ignore it?

  • How to make a license apply to a whole library?

    - by Yannbane
    I'm creating a standard library for a programming language, and I'd like to license each and every single class or function in there under the MIT license, so they're completely FOSS. All of the files reside in a single directory. Would it be enough to put a LICENSE.txt file in the same directory, containing the MIT license? Do I need to say that the following license applies to all features of the library, or is the library itself considered to be a program?

  • Why is learning new things not important on a job hunt? [closed]

    - by IAdapter
    I have just finished my job hunt. It involved about 40 job interviews; I like to travel and get to know many companies. One thing I did not like is that they don't care about new technologies. I think only two people asked me about new things in the Java world. Most of them only care whether I know Java (a certification and many years of experience are not enough for them; they need to test me). For example, at IBM they only cared about which IBM products I know. Have I ever used any custom extensions of WebSphere? I don't understand those questions. If I learn new frameworks every day, then I can pick up whatever technology they have very fast. So why does it matter whether I have ever used those "great" custom extensions of WebSphere? After those 40 interviews I have no reason to learn any new framework, because I see that they don't care. Why don't those "developers" ask questions about new technologies? Have they been at those companies so long that they don't care about new things?

  • What is 'work' in the Pomodoro Technique?

    - by Sachin Kainth
    I have just started using the Pomodoro Technique today and I am trying to work out what I should and should not do during my 25-minute work time. During my 25-minute work stint I started to write some code and realised that I had done something similar in a related project, so I opened that solution to copy and paste the existing code. Question is, is this allowed? Also, if during my 25 minutes I realise that there is an important work-related email that I need to send, can I do that, or should it wait for the next 25 minutes or the break? I am writing this question during my, now extended, 5-minute break. Is this work or is it a break? I would really appreciate some guidance, as I really want to use Pomodoro to focus better on my work. Another thing that happened to me was that an Adobe AIR update alert came up on my desktop during the 25 minutes. Should I ignore such things until the break? Sachin

  • Is there a best practice / standard approach to a free trial for a web app?

    - by wobbily_col
    I have an idea for a web app and would be interested in implementing it, offering a free trial of, say, 5 uses before asking people to sign up. I can think of numerous ways of doing this (using cookies, logging IP addresses, or limiting functionality, off the top of my head). Is there a standard approach to this? Are there best practices? Are there any good tutorials on this? (I would prefer not to go the limited-functionality route, as it will not show what the app is capable of.)
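
    One common pattern, sketched below in Go purely for illustration (the cookie name, routes, and use limit are invented), is to count trial uses in a token the request carries and redirect to sign-up once the limit is hit. A plain cookie is the simplest variant, though it is trivially resettable, which is why real implementations usually sign the value or track usage server-side.

        package main

        import (
        	"fmt"
        	"net/http"
        	"strconv"
        )

        const trialLimit = 5 // hypothetical limit, matching the question's example

        // trialGate is a sketch of a middleware that counts uses in a plain cookie.
        // A real implementation would sign the cookie (or track usage server-side),
        // since a plain cookie is trivial for users to reset.
        func trialGate(next http.HandlerFunc) http.HandlerFunc {
        	return func(w http.ResponseWriter, r *http.Request) {
        		used := 0
        		if c, err := r.Cookie("trial_uses"); err == nil {
        			used, _ = strconv.Atoi(c.Value)
        		}
        		if used >= trialLimit {
        			http.Redirect(w, r, "/signup", http.StatusSeeOther)
        			return
        		}
        		http.SetCookie(w, &http.Cookie{Name: "trial_uses", Value: strconv.Itoa(used + 1), Path: "/"})
        		next(w, r)
        	}
        }

        func main() {
        	http.HandleFunc("/app", trialGate(func(w http.ResponseWriter, r *http.Request) {
        		fmt.Fprintln(w, "full-featured app view")
        	}))
        	http.ListenAndServe(":8080", nil)
        }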

  • What programming languages have you taught your children?

    - by Dubmun
    I'm a C# developer by trade but have had exposure to many languages (including Java, C++, and multiple scripting languages) over the course of my education and career. Since I code in the MS world for work, I am most familiar with their stack, and so I was excited when Small Basic was announced. I immediately started teaching my oldest to program in it but felt that something was missing from the experience. Being able to look up every command with the IDE's IntelliSense seemed to take something away from the experience. Sure, it was easy to grasp, but I found myself thinking that a little more challenge might be in order. I'm looking for something better, and I would like to hear your experiences with teaching your children to program in whatever language you chose. What did you like and dislike? How fast did they pick it up? Were they challenged? Frustrated? Thank you very much!

  • A big flat text file or an HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have nice (but not very "wordy") documentation. So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged high-end graphical desktop environments. However, while I have tried to keep things nicely formatted (as much as is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that sometimes I'm not writing as much as I want in order to avoid expanding the text too much. So I thought I could use another project of mine, QuHelp, which is basically a help site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With this I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation. However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to functions (I'll need to put more detail into them than what currently exists in the readme.txt file). It also loses the wide availability that the documentation currently has: even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable everywhere, including from popular editors such as Vim and Emacs (someone once told me that he likes LIL's documentation because it is readable without leaving his editor). So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site mentioned above? There is also the option to do both, but I'm not sure if I'll be able to keep them in sync, or if it is worth the effort. After asking around on IRC I've got mixed answers: some liked the wide availability of the single text file; others said that it looks as bad as a man page (personally I don't mind that, since I can read man pages just fine, but other people might have issues reading them). What do you think?

  • Unit-Testing functions which have parameters of classes where source code is not accessible

    - by McMannus
    Relating to this question, I have another question regarding unit testing functions in utility classes. Assume you have a function signature like this: public void doSomething(InternalClass obj, InternalElement element), where InternalClass and InternalElement are both classes whose source code is not available, because they are hidden in the API. Additionally, doSomething only operates on obj and element. I thought about mocking those classes away, but this is not possible because they do not implement any interface that I could use for my mock classes. However, I need to fill obj with defined data to test doSomething. How can this problem be solved?
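
    One common workaround, sketched below in Go with invented names (the question itself is language-agnostic), is to define a small interface that you own, covering only the members doSomething actually uses, wrap the closed API type behind a thin adapter implementing that interface (adapter not shown), and let tests substitute a fake implementation they can fill with defined data.

        package mocking

        // InternalElement stands in for the second closed API type; only its identity matters here.
        type InternalElement struct{ ID string }

        // internalAPI is an interface we own, exposing only what doSomething needs.
        // A thin adapter wrapping the real InternalClass would implement it in production;
        // tests use fakeInternal instead.
        type internalAPI interface {
        	Store(e InternalElement)
        	Count() int
        }

        // fakeInternal is the test double: it records calls so a test can make assertions.
        type fakeInternal struct {
        	stored []InternalElement
        }

        func (f *fakeInternal) Store(e InternalElement) { f.stored = append(f.stored, e) }
        func (f *fakeInternal) Count() int              { return len(f.stored) }

        // doSomething is written against the interface, not the concrete API type,
        // which is what makes it testable without the API's source.
        func doSomething(obj internalAPI, element InternalElement) {
        	if element.ID != "" {
        		obj.Store(element)
        	}
        }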

  • Meaningful concise method naming guidelines

    - by Sam
    Recently I started releasing an open-source project. While I was the only user of the library I did not care about the names, but now I want to assign clear names to each method to make the library easier to learn, and I also need to use concise names so they are easy to write. I was thinking about some guidelines for the naming. I am aware of lots of guidelines that only cover letter casing or make a few simple points; here, I am looking for guidelines for meaningful, concise naming. For example, this could be part of the guidelines I am looking for: use Add when an existing item is going to be added to a target; use Create when a new item is being created and added to a target. Use Remove when an existing item is going to be removed from a target; use Delete when an item is going to be removed permanently. Pair AddXXX methods with RemoveXXX, and pair CreateXXX methods with DeleteXXX, but do not mix them. The above guidance may be intuitive for native English speakers, but since English is my second language, I need to be told about things like this.
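
    Purely as an illustration of that pairing rule (the types and method names below are invented, not from any published guideline), such a guideline might read like this in a Go API:

        package catalog

        // Item is a placeholder element type for the examples below.
        type Item struct{ Name string }

        // Collection illustrates the paired naming guideline:
        // Add/Remove attach and detach existing items, while
        // Create/Delete make a brand-new item and destroy one permanently.
        type Collection interface {
        	// AddItem attaches an existing item to the collection.
        	AddItem(item Item)
        	// RemoveItem detaches the item but leaves it intact for reuse.
        	RemoveItem(item Item)

        	// CreateItem builds a new item and adds it to the collection.
        	CreateItem(name string) Item
        	// DeleteItem removes the item and destroys it permanently.
        	DeleteItem(item Item)
        }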

  • Moving from WPF to HTML5

    - by HighCore
    I don't even know if this is the right StackExchange site to post this question. If it isn't, please excuse me and let me know which would be the right one. I am an experienced WPF developer, and I seriously love the technology. I feel pretty good when working with XAML, bindings, templates, triggers, MVVM, and all the WPF world of goodness. Now I have received a job offer which surpasses my current salary by 50%. It is a position working as a C# developer on an ASP.NET MVC 4 + HTML5 project. I have never, ever in my whole life worked with ASP.NET or HTML, and I have never built a web page or web application before. I certainly find myself worried that I will lose all the comfort and joy I experience every day coding in WPF. On the other hand, I understand, and have seen in these three or four months of job hunting, that there's a LOT of ASP.NET and really very little or no WPF in the job market (at least here), so I somehow feel forced towards it. So, my question is: can anybody who has gone through this type of change tell me the pros and cons of working with these technologies from a developer's perspective? I don't care about open-source / non-Microsoft / non-desktop arguments; I care about REAL, everyday development experience with these technologies, and whether ASP.NET MVC 4 + HTML + JS is as crappy as I think it is compared to WPF.

  • How to avoid a big and clumsy UITableViewController on iOS?

    - by Johan Karlsson
    I have a problem when implementing the MVC pattern on iOS. I have searched the Internet but cannot seem to find any nice solution to this problem. Many UITableViewController implementations seem to be rather big. Most examples I have seen let the UITableViewController implement UITableViewDelegate and UITableViewDataSource. These implementations are a big reason why UITableViewController is getting big. One solution would be to create separate classes that implement UITableViewDelegate and UITableViewDataSource. Of course these classes would have to have a reference to the UITableViewController. Are there any drawbacks to using this solution? In general I think you should delegate the functionality to other "helper" classes or similar, using the delegate pattern. Are there any well-established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC pattern. But the big question is: how should you divide the controller of an MVC implementation into smaller, manageable pieces? (This applies to MVC on iOS in this case.) There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue, along with an argument for why that solution works well.

  • Equivalent of Ruby's #map in Go

    - by Oct
    I'm playing with Go and have run into something I'm unable to find in Google, although there is certainly something that exists. I'm using the following structs: type Syntax struct { name string; extensions *regexp.Regexp } and type Scanner struct { classifier *bayesian.Classifier; save_file string; name_to_syntax map[string]*Syntax; extensions_to_syntax map[*regexp.Regexp]*Syntax }. I'd like to perform the following in Go, quoted here in Ruby because that's how I'd do it there: test_regexpes = my_scanner.extensions_to_syntax.keys. My goal is to get an array of *regexp.Regexp. Any idea how to do that in a simple way? Thank you!
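
    A minimal sketch of the usual Go answer, using trimmed-down versions of the structs above: Go has no built-in equivalent of Ruby's Hash#keys, so you range over the map and collect the keys into a slice.

        package main

        import (
        	"fmt"
        	"regexp"
        )

        // Syntax and Scanner are trimmed-down stand-ins for the structs in the question.
        type Syntax struct {
        	name       string
        	extensions *regexp.Regexp
        }

        type Scanner struct {
        	extensionsToSyntax map[*regexp.Regexp]*Syntax
        }

        // regexpKeys collects the keys of the map into a slice,
        // the way Ruby's hash.keys would.
        func (s *Scanner) regexpKeys() []*regexp.Regexp {
        	keys := make([]*regexp.Regexp, 0, len(s.extensionsToSyntax))
        	for re := range s.extensionsToSyntax {
        		keys = append(keys, re)
        	}
        	return keys
        }

        func main() {
        	myScanner := &Scanner{
        		extensionsToSyntax: map[*regexp.Regexp]*Syntax{
        			regexp.MustCompile(`\.go$`): {name: "Go"},
        			regexp.MustCompile(`\.rb$`): {name: "Ruby"},
        		},
        	}
        	testRegexps := myScanner.regexpKeys()
        	fmt.Println(len(testRegexps)) // 2
        }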

  • Class Design and Structure for an Online Web Store

    - by Phorce
    I hope I have asked this in the right forum. Basically, we're designing an online store, and I am designing the class structure for ordering a product and want some clarification on what I have so far. A customer comes, selects their product, chooses the quantity, and selects 'Purchase' (I am using the Facade pattern, so subsystems execute when this action is performed). My class structure: Order, Product, Customer. There is no inheritance, more association: an Order has a Product, and a Customer has an Order. Does this structure look OK? I've noticed that I don't handle the "quantity" separately; I was just going to add it to the "Product" class, but do you think it should be a class of its own? Hope someone can help.
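
    One common refinement, sketched below in Go with invented field names, is to keep quantity out of Product and introduce an order-line type: Product stays a catalogue entry, and each line of an Order pairs a product with the quantity purchased.

        package store

        // Product is a catalogue entry; it knows nothing about quantities.
        type Product struct {
        	SKU   string
        	Name  string
        	Price float64
        }

        // OrderLine pairs one product with the quantity the customer chose.
        type OrderLine struct {
        	Product  Product
        	Quantity int
        }

        // Order is an association: a customer has orders, an order has lines.
        type Order struct {
        	Customer string // could be a full Customer type in the complete design
        	Lines    []OrderLine
        }

        // Total sums price times quantity across the order's lines.
        func (o Order) Total() float64 {
        	sum := 0.0
        	for _, l := range o.Lines {
        		sum += l.Product.Price * float64(l.Quantity)
        	}
        	return sum
        }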

  • How do you avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time this can work, but in an interview it's been pretty unsuccessful, if not downright counter-productive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist, or are there techniques, to recognize when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution with the hope of reasoning out a better solution later? A related follow-up question might be, "How do you ensure that you are solving the right problem?" Or, "When is it more productive to think and design up front versus coding some experiments and figuring out the design later?" EDIT: One close vote already, but I'm not sure why. I wrote this in the first person, but I doubt I'm the only programmer ever to choke in an interview. There are lists of techniques for taking a math test and for taking an oral exam. Maybe I'm not expressing myself well, but I'm asking whether there is a similar list of techniques for handling a programming problem under pressure.

  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some Background: Our company is growing very quickly - in 3 years we've tripled in size and there are no signs of stopping any time soon. Our marketing department has expanded and our IT requirements have as well. When I first arrived everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized. The Problem at Hand: Management would like to track, per developer, who is generating the most issues at the QA stage (post-unit-testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per-URL or per-"page"), yet that's how Management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue. [UPDATED] The Actual Questions: How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of individuals and instead focus on the team? Some other questions: Is this reporting too granular? Is this considered 'blame culture'? If you were the developer working in this environment, what would you define as a measurable goal for this year to track your progress, with a bonus as the reward for achieving it?

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First a bit of background. I'm coding a lookup from Age - Rate. There are 7 age brackets so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly - by looking at a hard copy) the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago and in the past things that "never change" have had a habit of changing - so for me to defensively program this I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is, as a professional have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing, and justify it by saying YAGNI, or I can take a more defensive, high-effort approach, that even in the best case ultimately has low benefits. As a professional programmer does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
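
    For scale, the hard-coded version is tiny either way; a sketch in Go (the bands and rates below are placeholders, not the legislated figures from the question) shows what the whole lookup amounts to:

        package rates

        // rateBand mirrors the From|To|Rate rows of the lookup table.
        type rateBand struct {
        	from, to int
        	rate     float64
        }

        // ageBands is the hard-coded table. The figures below are placeholders,
        // not the real legislated rates from the question.
        var ageBands = []rateBand{
        	{60, 64, 0.04},
        	{65, 69, 0.05},
        	{70, 74, 0.06},
        }

        // rateForAge returns the rate for an age, or ok=false if the age
        // falls outside every band.
        func rateForAge(age int) (rate float64, ok bool) {
        	for _, b := range ageBands {
        		if age >= b.from && age <= b.to {
        			return b.rate, true
        		}
        	}
        	return 0, false
        }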

  • Are More Comments Better in High-Turnover Environments?

    - by joshin4colours
    I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that it helps her remember where things are and what things do, since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, adding them only in places with an unobvious workaround or bug. However, I have a better understanding of my code overall, and have more direct control over it. My opinion is that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? Commenting heavily may clutter the code, but it ultimately could be quite helpful if there are many people working on it in the short to medium term.

  • Resources for understanding iOS architecture [closed]

    - by BlackJack
    I recently finished reading Randall Hyde's excellent book Write Great Code: Volume 1: Understanding the Machine, and I have a much better knowledge of what's going on under the hood now. I want to start making iPhone apps, and there are lots of guides for that. Embracing my inner Hyde, however, I want to first learn about the iOS system architecture. Apple has a really good overview here: iOS Technology Overview Before I start, I wanted to know if there were any other good resources for understanding iOS architecture and using that knowledge for iPhone programming. Thanks.

  • Online betting system design [closed]

    - by Rafal
    I am a Computer Science student preparing for my exam in software engineering. I am struggling to answer one of the sample questions about the scenario below. My understanding is that the system design approach should probably be a mixture of agile and plan-driven elements, but since I have no practical experience, it's hard for me to decide on the balance and the tools that should be used. I will appreciate any hints from experienced business analysts who have been involved in similar projects. Ray Sing is the owner of "Last Betz", a bookmaker with 7 outlets across Louth and Meath. With the advent of smartphones, Ray would now like to allow his clients to place their bets online using their mobile devices. Clients would register for an account and password and would log their credit card details via the Last Betz website. To begin using the facility, customers must 'load' their accounts with 100 euros. Any winnings, minus commission, will be placed in the account, whilst any losses will be automatically deducted from the account. Assuming you have been selected to develop the above system: how would you approach the design of this system? Discuss the design methods and models you would use.

  • Nicest way to map RGB colors from HTML to LED

    - by back_ache
    I have attached an RGB LED to a color picker on a web page and have hit the obvious problem: though the LED is 8-bit-per-channel like HTML, its color rendition is very different, so for the more subtle shades the LED values are wildly different from the HTML values. The brute-force method would be to keep a lookup table on the web server to map the two sets of values, but I would ideally like to do it more elegantly. Before I start listing all my 101 ideas for doing this, I wondered if anyone else had come across the issue. The end game would be to abstract the color rendition of different LEDs and make it available as a web service (HTML value and device ID in, LED value out).
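
    Much of the "subtle shades look wrong" effect typically comes from the LED's roughly linear brightness response versus the gamma-encoded values HTML colors use, so before building a full lookup table it may be worth trying per-channel gamma correction. A sketch in Go (the gamma of 2.2 is a conventional starting point, not a value from the question):

        package ledmap

        import "math"

        // gammaCorrect maps one 8-bit sRGB-style channel value to an 8-bit
        // PWM duty value for the LED, assuming the LED's brightness is roughly
        // linear in duty cycle. Gamma values around 2.2-2.8 are the usual range to try.
        func gammaCorrect(channel uint8, gamma float64) uint8 {
        	normalized := float64(channel) / 255.0
        	corrected := math.Pow(normalized, gamma)
        	return uint8(math.Round(corrected * 255.0))
        }

        // htmlToLED converts an HTML #RRGGBB-style triple to LED channel values.
        func htmlToLED(r, g, b uint8) (lr, lg, lb uint8) {
        	const gamma = 2.2 // placeholder; tune per LED, possibly per channel
        	return gammaCorrect(r, gamma), gammaCorrect(g, gamma), gammaCorrect(b, gamma)
        }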

  • What is the best way to go about testing that we handle failures appropriately?

    - by Earlz
    We're working on error handling in an application. We try to have fairly good automated test coverage. One big problem, though, is that we don't really know of a way to test some of our error handling. For instance, we need to test that whenever there is an uncaught exception, a message is sent to our server with the exception information. The big problem with this is that we strive never to have an uncaught exception (and instead show descriptive error messages). So, how do we test something that we never want to actually happen?
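
    One way to test a path you hope never fires is to split "an uncaught error escaped" from "a report was sent", inject a fake reporter, and trigger a deliberate failure in the test. A sketch in Go (names invented), using panic/recover as the stand-in for an uncaught exception:

        package errorreport

        import "testing"

        // Reporter is whatever sends crash details to the server.
        type Reporter interface {
        	Report(err any)
        }

        // fakeReporter records reports so the test can assert one was sent.
        type fakeReporter struct{ got []any }

        func (f *fakeReporter) Report(err any) { f.got = append(f.got, err) }

        // withCrashReporting wraps a unit of work and reports anything that escapes it.
        func withCrashReporting(r Reporter, work func()) {
        	defer func() {
        		if err := recover(); err != nil {
        			r.Report(err)
        		}
        	}()
        	work()
        }

        // The test forces the failure we never expect to see in production.
        func TestUncaughtPanicIsReported(t *testing.T) {
        	fake := &fakeReporter{}
        	withCrashReporting(fake, func() { panic("boom") })
        	if len(fake.got) != 1 {
        		t.Fatalf("expected 1 crash report, got %d", len(fake.got))
        	}
        }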

  • How can I quantify the amount of technical debt that exists in a project?

    - by Erik Dietrich
    Does anyone know if there is some kind of tool to put a number on technical debt of a code base, as a kind of code metric? If not, is anyone aware of an algorithm or set of heuristics for it? If neither of those things exists so far, I'd be interested in ideas for how to get started with such a thing. That is, how can I quantify the technical debt incurred by a method, a class, a namespace, an assembly, etc. I'm most interested in analyzing and assessing a C# code base, but please feel free to chime in for other languages as well, particularly if the concepts are language transcendent.
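
    In the absence of a dedicated tool, one starting point is a crude scoring heuristic. The Go sketch below (the formula and weightings are invented for illustration, not an established metric) combines complexity, churn, and test coverage per unit and rolls the scores up to a class, namespace, or assembly:

        package debt

        // UnitMetrics holds per-method (or per-class) numbers that most
        // static-analysis and version-control tools can already produce.
        type UnitMetrics struct {
        	CyclomaticComplexity int
        	ChangesLastYear      int     // churn from version control
        	TestCoverage         float64 // 0.0 - 1.0
        }

        // Score is a toy debt estimate: complex code that changes often and
        // is poorly covered accumulates the most debt. The formula is only
        // an illustration of the idea, not a calibrated metric.
        func Score(m UnitMetrics) float64 {
        	risk := float64(m.CyclomaticComplexity) * float64(1+m.ChangesLastYear)
        	return risk * (1.0 - m.TestCoverage)
        }

        // Total rolls per-unit scores up to a class, namespace, or assembly.
        func Total(units []UnitMetrics) float64 {
        	sum := 0.0
        	for _, u := range units {
        		sum += Score(u)
        	}
        	return sum
        }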

  • "Oracle Certified Expert, Java Platform, Enterprise Edition 6 Java Persistence API Developer" Preparation

    - by Matt
    I have been working with Hibernate for a few years now, and I want to solidify and demonstrate my knowledge by taking the Oracle JPA certification, also known as "Oracle Certified Expert, Java Platform, Enterprise Edition 6 Java Persistence API Developer (CX-310-094)". There is a training course provided by Oracle, "Building Database Driven Applications with JPA (SL-370-EE6)", but it costs $1800 and I think it would be overkill for my needs. Ideally, I would like a self-study guide that covers everything in the exam. I have looked for books and these seem like possibilities: Pro JPA 2: Mastering the Java Persistence API (Expert's Voice in Java Technology) and Beginning Java EE 6 with GlassFish 3, 2nd Edition (Expert's Voice in Java Technology). But these aren't checklist-type study guides, as far as I am aware. I found the official SCJP study guide very useful, but I think the equivalent text for the JPA exam isn't out yet. If anyone has taken this exam, I would be grateful to hear how you prepared for it. Thanks!
