Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 239/563

  • How do I explain the importance of NUnit test cases to my colleagues? [duplicate]

    - by JNL
    This question already has an answer here: How to explain the value of unit testing (6 answers). I am currently working in software development on applications that involve a lot of mathematical calculations. As a result, there are a lot of test cases we need to consider. We do not have any NUnit test suite, and I am wondering how I should present the advantages of introducing NUnit testing to my colleagues and my boss. I am pretty sure it would be of great help to our team. Any help regarding this will be highly appreciated.
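
    A concrete test is often the most persuasive argument. Below is a minimal sketch of the kind of test meant here, written with JUnit to match the other Java snippets on this page (NUnit offers the same assertion style on .NET); the InterestCalculator class and its numbers are invented purely for illustration.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class InterestCalculatorTest {

            // Minimal class under test, included here so the sketch is self-contained.
            static class InterestCalculator {
                double compound(double principal, double rate, int periods) {
                    return principal * Math.pow(1.0 + rate, periods);
                }
            }

            @Test
            public void compoundsOverThreePeriods() {
                // 1000 at 5% over 3 periods: 1000 * 1.05^3 = 1157.625
                assertEquals(1157.625, new InterestCalculator().compound(1000.0, 0.05, 3), 1e-6);
            }

            @Test
            public void zeroPeriodsReturnsThePrincipalUnchanged() {
                assertEquals(1000.0, new InterestCalculator().compound(1000.0, 0.05, 0), 1e-6);
            }
        }

    Once a calculation bug is caught by a test like this before it reaches a customer, the case for the tooling tends to make itself.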

    Read the article

  • Verification of requirements question

    - by user970696
    Doing a lot of reading about V&V, I need to clarify the following. Many definitions (the less formal ones found in books) define verification like this: Verification: the software should conform to its specification. But then they speak about requirements verification, design verification, etc. If I say that these items are "software" for the purpose of applying the definition, what should I check them against? What specification should the requirements, which are the most basic information, conform to? And one more thing: shouldn't requirements also be validated, to make sure they meet the customer's needs? All the texts I have speak only about software validation at the end of the development process.

    Read the article

  • What's the name of this pattern?

    - by Wes
    I see this a lot in frameworks. You have a master class which other classes register with. The master class then decides which of the registered classes to delegate the request to. An example, based on a passed-in class, might look something like this:

        public interface Processor {
            boolean canHandle(Object objectToHandle);
            void handle(Object objectToHandle);
        }

        public class EvenNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isEven(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        public class OddNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isOdd(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        // Can optionally implement the Processor interface itself
        public class ProcessorDelegator {
            private final List<Processor> processors = new ArrayList<>();

            public void addProcessor(Processor processor) {
                processors.add(processor);
            }

            public void process(Object objectToProcess) {
                // Look up the relevant processor, either by keeping a list of what each can process
                // or by querying each one to see if it can process the object.
                Processor chosenProcessor = chooseProcessor(objectToProcess);
                chosenProcessor.handle(objectToProcess);
            }
        }

    Note there are a few variations I see on this. In one variation the subclasses provide a list of things they can process, which the ProcessorDelegator understands. The other variation, shown above in pseudo-code, is where each is queried in turn. This is similar to Chain of Command, but I don't think it's the same, because Chain of Command means the processor itself needs to pass the request on to other processors. The other variation is where the ProcessorDelegator itself implements the interface, which means you can get trees of ProcessorDelegators which specialise further. In the above example you could have a numeric ProcessorDelegator which delegates to an even/odd processor, and a string ProcessorDelegator which delegates to different string processors. My question is: does this pattern have a name?
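
    For comparison against catalogued patterns, here is a compact, self-contained and runnable variant of the same register-and-dispatch idea; the class and method names are chosen here purely for illustration.

        import java.util.ArrayList;
        import java.util.List;

        public class DispatchDemo {

            interface Handler {
                boolean canHandle(Object o);
                void handle(Object o);
            }

            static class EvenHandler implements Handler {
                public boolean canHandle(Object o) { return o instanceof Integer && ((Integer) o) % 2 == 0; }
                public void handle(Object o) { System.out.println(o + " handled as even"); }
            }

            static class OddHandler implements Handler {
                public boolean canHandle(Object o) { return o instanceof Integer && ((Integer) o) % 2 != 0; }
                public void handle(Object o) { System.out.println(o + " handled as odd"); }
            }

            // The "master" object: handlers register with it, and it picks the first one that claims the input.
            static class Dispatcher {
                private final List<Handler> handlers = new ArrayList<>();

                void register(Handler h) { handlers.add(h); }

                void dispatch(Object o) {
                    for (Handler h : handlers) {
                        if (h.canHandle(o)) {
                            h.handle(o);
                            return;
                        }
                    }
                    System.out.println("no handler for " + o);
                }
            }

            public static void main(String[] args) {
                Dispatcher dispatcher = new Dispatcher();
                dispatcher.register(new EvenHandler());
                dispatcher.register(new OddHandler());
                dispatcher.dispatch(42); // EvenHandler claims it
                dispatcher.dispatch(7);  // OddHandler claims it
            }
        }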

    Read the article

  • JavaScript evolution -- weeding out the confusion [closed]

    - by good_computer
    There was JavaScript v1.3 (I guess) that we all started with. Then there was JavaScript 2.0, which Adobe implemented (ActionScript) but which was abandoned later. Then came E4X. Then ES5. There is also ES Harmony. I am really confused about which version is the latest and where the standards body is going. Can someone describe the whole chronology of JavaScript / ECMAScript evolution and the important differences between those versions?

    Read the article

  • My first development job working at a company: what should I look out for?

    - by Kim Jong Woo
    So I've worked on my own all this time, selling software and creating a few web applications by myself. I have an arts background and am self-taught. It was a bit difficult to find a development position, but after endless trying I finally landed a LAMP position. What I realized was that it was all a confidence issue: before, when I didn't know a few things, I panicked, but after spending such a long time working on my own projects and solving various problems, I felt confident enough that I could fulfill requirements on my own. I hope this helps other people applying for jobs. This is the first time I will be developing with other team members in an office. Is there anything I should prepare for my first day at work next week? Any tips and pointers for working as a developer at a company? I'm kinda nervous but excited.

    Read the article

  • Should data structures be integrated into the language (as in Python) or be provided in the standard library (as in Java)?

    - by Anto
    In Python, and most likely many other programming languages, common data structures can be found as an integrated part of the core language with their own dedicated syntax. If we put Lisp's integrated list syntax aside, I can't think of any other language I know that provides some kind of data structure above the array as an integrated part of its syntax, though all of them (except C, I guess) seem to provide them in the standard library. From a language-design perspective, what are your opinions on having specific syntax for data structures in the core language? Is it a good idea, and does the purpose of the language (etc.) change how good a choice this could be? Edit: I'm sorry for (apparently) causing some confusion about which data structures I mean. I'm talking about the basic and commonly used ones, but still not the most basic ones. This excludes trees (too complex, uncommon), stacks (too seldom used) and arrays (too simple), but includes e.g. sets, lists and hashmaps.
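
    To make the contrast concrete, this is roughly what the library-provided route looks like in Java (using the java.util factory methods available since Java 9); a Python programmer would write the same structures as literals such as [1, 2, 3], {1, 2, 3} and {"a": 1}.

        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        public class CollectionsFromTheLibrary {
            public static void main(String[] args) {
                // Everything here comes from java.util; the language itself has no dedicated syntax for it.
                List<Integer> numbers = List.of(1, 2, 3);
                Set<Integer> unique = Set.of(1, 2, 3);
                Map<String, Integer> counts = Map.of("a", 1, "b", 2);
                System.out.println(numbers + " " + unique + " " + counts);
            }
        }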

    Read the article

  • How do you track third-party software licenses?

    - by emddudley
    How do you track licenses for third-party libraries that you use in your software? How did you vet the licenses? Sometimes licenses change or libraries switch licenses; how do you stay up to date? At the moment I've got an Excel spreadsheet with worksheets for third-party software, licenses, and the projects we use them on (organized like a relational database). It seems to work OK, but I think it will go out of date pretty quickly.

    Read the article

  • How would I go about measuring the impact an article has on the internet?

    - by Jimbo Mombasa
    For an application of mine, I analyze the sentiment of articles, using NLTK, to display sentiment trends. But right now all articles are weighted equally. This does not give a very accurate picture, because some articles have a higher impact on the internet than others. For example, a blog post from some unknown blog should not carry the same weight as an article from the New York Times. How can I determine their impact?
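
    One straightforward way to fold impact into the trend is a weighted average of the sentiment scores, where the weight is whatever impact proxy you choose (domain authority, inbound links, social shares). A minimal sketch, with the weights entirely made up:

        import java.util.List;

        public class WeightedSentiment {

            record ScoredArticle(String source, double sentiment, double impactWeight) {}

            // Weighted mean: sum(sentiment * weight) / sum(weight).
            static double weightedSentiment(List<ScoredArticle> articles) {
                double weightedSum = 0.0, totalWeight = 0.0;
                for (ScoredArticle a : articles) {
                    weightedSum += a.sentiment() * a.impactWeight();
                    totalWeight += a.impactWeight();
                }
                return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
            }

            public static void main(String[] args) {
                List<ScoredArticle> sample = List.of(
                        new ScoredArticle("unknown blog", -0.8, 1.0),
                        new ScoredArticle("New York Times", 0.4, 50.0));
                System.out.println(weightedSentiment(sample)); // dominated by the higher-impact article
            }
        }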

    Read the article

  • Documenting mathematical logic in code

    - by Kiril Raychev
    Sometimes, although not often, I have to include mathematical logic in my code. The concepts used are mostly very simple, but the resulting code is not: a lot of variables with unclear purpose, and some operations with not-so-obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's way harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them: text does not have the expressive power of math. I am looking for a more efficient and easy-to-understand way of explaining the logic behind some of the complex code, preferably in the code itself. I have considered TeX: writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notation, equations and diagrams written on paper or a whiteboard, and including it in the Javadoc. Is there a simpler and clearer way? P.S. Giving descriptive names (timeOfFirstEvent instead of t1) to the variables actually makes the code more verbose and even harder to read.
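
    One lightweight middle ground is to keep the canonical formula in a Javadoc comment right next to the code, in plain notation (or LaTeX, if you ever decide to post-process the docs), so the terse variable names stay but the equation is one glance away. A minimal sketch with an invented kinematics example:

        public class Kinematics {

            /**
             * Position under constant acceleration.
             *
             * Formula: s(t) = s0 + v0 * t + (1/2) * a * t^2
             * where s0 = initial position, v0 = initial velocity, a = acceleration, t = elapsed time.
             */
            public static double position(double s0, double v0, double a, double t) {
                return s0 + v0 * t + 0.5 * a * t * t;
            }

            public static void main(String[] args) {
                System.out.println(position(0.0, 2.0, 9.81, 3.0)); // 2*3 + 0.5*9.81*9 = 50.145
            }
        }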

    Read the article

  • Should extension scripts be run in a sandbox?

    - by Cubic
    In particular, this is about game extensions written in Lua (LuaJIT 2.0). I was contemplating whether I should restrict what these scripts can do, and arrived at the conclusion that I probably shouldn't: It's hard to get right. Sounds silly, but chances are my sandbox is gonna end up leaky anyway. The only benefit I could think of would be giving users some sense of security when running third-party scripts. The disadvantage would be that it's just incredibly annoying for extension writers. That is, for now, myself (game content will be mostly scripted). The reason I'm asking this now, before I actually have anything presentable, is that adding a sandbox early on is easy, but it would impose said annoying restrictions on myself too. However, if I first go on with it and then later decide I do need a sandbox after all, I'm gonna run into problems (I'd either have to rewrite the scripts that are already there, or introduce some form of trust management system, which seems to be more trouble than it's worth).

    Read the article

  • Will dolphins die if I use REST "as CRUD"?

    - by l0l0l0l0l
    Recently I moved to Laravel and I was surprised at how good setting the controllers up as RESTful is; it made my routes and my code cleaner. I'm kinda new to web development and never used REST before, since all my clients' projects are basically CRUD operations. Is there any cool buzzword for this "approach", or am I just stupid for doing it? I don't plan to follow any REST patterns, just to make my life easier and my code cleaner. Basically just GET/POST; the other verbs are not native anyway, so they are emulated with a hidden form value.

    Read the article

  • Information Spilling Across Object Boundaries

    - by Winston Ewert
    Many times my business objects have situations where information needs to cross object boundaries too often. When doing OO, we want information to live in one object, and as much as possible all code dealing with that information should be in that object. However, business rules do not follow this principle, which gives me trouble. As an example, suppose that we have an Order which has a number of OrderItems, each of which refers to an InventoryItem, which has a price. I invoke Order.GetTotal(), which sums the results of OrderItem.GetPrice(), which multiplies a quantity by InventoryItem.GetPrice(). So far so good. But then we find out that some items are sold with a two-for-one deal. We can handle this by having OrderItem.GetPrice() call something like InventoryItem.GetPrice(quantity) and letting InventoryItem deal with it. However, then we find out that the two-for-one deal only lasts for a particular time period, and this time period needs to be based on the date of the order. Now we change OrderItem.GetPrice() to call InventoryItem.GetPrice(quantity, order.GetDate()). But then we need to support different prices depending on how long the customer has been in the system: InventoryItem.GetPrice(quantity, order.GetDate(), order.GetCustomer()). But then it turns out that the two-for-one deals apply not just to buying multiples of the same inventory item, but to multiples of any item in an InventoryCategory. At this point we throw up our hands, just give the InventoryItem the order item, and allow it to travel over the object reference graph via accessors to get the information it needs: InventoryItem.GetPrice(this). TL;DR: I want to keep information and the code that uses it together in one object, but business rules often force me to pull information from all over the place in order to make particular decisions. Are there good techniques for dealing with this? Do others run into the same problem?
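
    One way to stop the signature from growing argument by argument is to hand the pricing code a single context object assembled by the Order. A minimal sketch; PricingContext, TwoForOneRule and every value below are invented for illustration, not a claim about how this should be modelled:

        import java.time.LocalDate;

        public class PricingSketch {

            // Everything the pricing rules might need, gathered once by the Order.
            record PricingContext(int quantity, LocalDate orderDate, String customerId, String inventoryCategory) {}

            interface PricingRule {
                boolean appliesTo(PricingContext ctx);
                double priceFor(double unitPrice, PricingContext ctx);
            }

            // Example rule: two-for-one within a date range, decided purely from the context.
            static class TwoForOneRule implements PricingRule {
                public boolean appliesTo(PricingContext ctx) {
                    return !ctx.orderDate().isBefore(LocalDate.of(2024, 1, 1))
                            && ctx.orderDate().isBefore(LocalDate.of(2024, 2, 1));
                }
                public double priceFor(double unitPrice, PricingContext ctx) {
                    int paidUnits = ctx.quantity() / 2 + ctx.quantity() % 2; // pay for every second item
                    return unitPrice * paidUnits;
                }
            }

            public static void main(String[] args) {
                PricingContext ctx = new PricingContext(3, LocalDate.of(2024, 1, 15), "cust-42", "widgets");
                PricingRule rule = new TwoForOneRule();
                double price = rule.appliesTo(ctx) ? rule.priceFor(10.0, ctx) : 10.0 * ctx.quantity();
                System.out.println(price); // 3 items at 10.0 under two-for-one: pay for 2 -> 20.0
            }
        }

    New rules then only change what goes into the context, not every method signature between Order and InventoryItem.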

    Read the article

  • Simple Architecture Verification

    - by Jean Carlos Suárez Marranzini
    I just made an architecture for an application whose function is scoring, saving and loading tennis games. The architecture has two kinds of elements: components and layers.
    Components: standalone elements that can be consumed by other components or by layers. They might also consume functionality from the model/bottom layer.
    Layers: software components whose functionality rests on the previous layers (except for the model layer).
    Layers:
    - Models: data and its behavior.
    - Controllers: a layer that allows interaction between the views and the models.
    - Views: the presentation layer for interacting with the user.
    Components:
    - Persistence: makes sure the game data can be stored away for later retrieval.
    - Time Machine: records changes in the game through time so it's possible to navigate the game back and forth.
    - Settings: contains the settings that determine how some of the game logic will apply.
    - Game Engine: contains all the game logic, which it applies to the game data to determine the path the game should take.
    This is an image of the architecture (I don't have enough rep to post images): http://i49.tinypic.com/35lt5a9.png
    The requirements this architecture should satisfy are the following:
    - Save and load games.
    - Move through game history and see how the scoreboard changes as the game evolves.
    - Tie-breaks must be properly managed.
    - Games must be classified by hit type.
    - Every point can be modified.
    - Match name and player names must be stored.
    - Game logic must be configurable by the user.
    I would really appreciate any kind of advice or comments on this architecture, to see if it is well built and makes sense as a whole. I took the idea from this link: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
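
    If it helps to make the component boundaries concrete, here is a minimal sketch of what they might look like as Java interfaces (the method names are my own guesses, not part of the original design):

        import java.util.List;

        public interface Components {

            // Placeholder for whatever the model layer calls a game; assumed here.
            interface Game {}

            interface Persistence {
                void save(Game game, String name);
                Game load(String name);
                List<String> savedGames();
            }

            interface TimeMachine {
                void record(Game game);   // snapshot after every point
                Game stepBack();
                Game stepForward();
            }

            interface Settings {
                int setsToWin();
                boolean tieBreakInFinalSet();
            }

            interface GameEngine {
                Game applyPoint(Game game, String scoringPlayer, String hitType);
            }
        }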

    Read the article

  • Is it bad practice for a module to contain more information than it needs?

    - by gekod
    I just wanted to ask for your opinion on a situation that occurs sometimes and for which I don't know the most elegant solution. Here it goes: we have module A, which reads an entry from a database and sends a request to module B containing ONLY the information from the entry that module B needs to accomplish its job (to keep things modular I just give it the information it needs; module B has nothing to do with the rest of the information from the DB entry). Now, after finishing its job, module B has to reply to module C, telling it whether it succeeded or failed. To do this, module B replies with the information it got from module A plus some variable meaning success or failure. Now here comes the problem: module C needs to find that entry again, BUT the information it got from module B is not enough to uniquely find the exact same entry. I don't think it would be good practice for module A to give module B extra information that B doesn't need for its job, just so that B can hand it back to module C, because that would mean giving a module information it doesn't really need. What do you think?
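
    To make the situation easier to picture, here is a minimal sketch of the flow exactly as described, with every field name invented; the point is only that what reaches module C no longer identifies the original entry.

        public class ModuleFlowSketch {

            // What module A reads from the database (invented fields).
            record DbEntry(long id, String customer, String payload, String region) {}

            // What A hands to B: only what B needs for its job (no id).
            record WorkRequest(String payload) {}

            // What B reports towards C: what it was given, plus the outcome.
            record WorkResult(String payload, boolean success) {}

            public static void main(String[] args) {
                DbEntry entry = new DbEntry(42L, "ACME", "recalculate", "EU");
                WorkRequest toB = new WorkRequest(entry.payload());
                WorkResult toC = new WorkResult(toB.payload(), true);
                // C's problem: "recalculate" alone does not identify entry 42 uniquely.
                System.out.println(toC);
            }
        }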

    Read the article

  • Analyzing a programming language

    - by Matt Fenwick
    In SICP, the authors state (Section 1.1) that there are three basic "mechanisms" of programming languages:
    - primitive expressions, which represent the simplest entities the language is concerned with;
    - means of combination, by which compound elements are built from simpler ones;
    - means of abstraction, by which compound elements can be named and manipulated as units.
    How can I analyze a mainstream programming language (Java, for example) in terms of these elements or mechanisms?
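
    As a starting point, one informal way to label those three mechanisms in Java (my own mapping, not SICP's):

        public class SicpMechanisms {
            public static void main(String[] args) {
                int x = 3;                  // primitive expressions: literals, variables
                int combined = x * 4 + 2;   // means of combination: operators, method calls
                System.out.println(area(combined));
            }

            // Means of abstraction: naming a compound computation so it can be reused as a unit.
            static int area(int side) {
                return side * side;
            }
        }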

    Read the article

  • Searching for a network device on the LAN from my C++ application and changing its IP address

    - by Arun Kumar K S
    I am developing an application in C++ to communicate with my network device. I used UDP to search for the device on the network: the application sends a broadcast message to the local network, the device responds to the broadcast, and the application gets the device's IP address from that response. After establishing communication, I send a message to the device to change its IP address. That works fine when the device's IP address is correct. But when I set a wrong IP address and subnet mask on the device, my application never gets any messages from it, so I can't communicate with the device, can't find it, and can't change its IP address. For example, the device might have:
    - IP address: 20.1.1.1, subnet mask: 255.0.0.0
    while the system running the application has:
    - IP address: 192.168.1.23, subnet mask: 255.255.255.0
    I tried the Lantronix device installation software with a Lantronix device on the network. It listed the device and I was able to change its IP address from their software. Does anyone know how this is done in this type of software? How can I search the network to find the device and change its IP address when its address is not in my subnet's range? Which protocol do they use to find the device?
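
    The discovery step described above is typically a UDP broadcast, which a device can receive even when its configured address is on a different subnet; one common approach is for the device to send its reply as a broadcast too, so the answer gets back even when the two ends disagree about subnets. A minimal sketch of the probe, in Java for consistency with the other snippets here (the question's code is C++, and the port number and payload below are made up):

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.nio.charset.StandardCharsets;

        public class DiscoverySketch {
            public static void main(String[] args) throws Exception {
                byte[] probe = "DISCOVER".getBytes(StandardCharsets.UTF_8); // made-up payload
                try (DatagramSocket socket = new DatagramSocket()) {
                    socket.setBroadcast(true);
                    socket.setSoTimeout(2000); // wait up to 2 s for replies
                    socket.send(new DatagramPacket(probe, probe.length,
                            InetAddress.getByName("255.255.255.255"), 30718)); // made-up port

                    byte[] buf = new byte[512];
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply); // throws SocketTimeoutException if nothing answers
                    System.out.println("device answered from " + reply.getAddress());
                }
            }
        }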

    Read the article

  • Modelling work-flow plus interaction with a database - quick and accessible options

    - by cjmUK
    I want to model a (proposed) manufacturing line, with specific emphasis on its interaction with a traceability database. That is, various process engineers have already mapped the manufacturing process; I'm only interested in the stations along the line that have to talk to the DB. The intended audience is a mixture of project managers, engineers and IT people, and the purpose is to identify:
    - points at which the line interacts with the DB (perhaps going so far as indicating the stored procedures called at each point, perhaps even which parameters are passed)
    - the communication source (PC / handheld device / PLC)
    - the communication medium (wireless / fibre / copper)
    - control flow (if a leak test fails, the unit is diverted to a repair station)
    Basically, the model will be used as a focus for the different groups on outstanding tasks; for example, I'm interested in the DB and any front-end app needed, the process engineers need to be thinking about the workflow and liaising with the PLC suppliers, and the other IT guys need to make sure we have the hardware and comms in place. Obviously I could just improvise in Visio, but I was wondering if there is a particular modelling technique that might particularly suit my needs or my audience. I'm thinking of a visual model with supporting documentation (as little as possible, as much as is necessary). Clearly, I don't want something that will take me ages to (effectively) learn, nor one that will alienate non-technical members of the project team. So far I've had brief looks at BPMN, EPC diagrams and standard flow diagrams... and I've forgotten most of what I used to know about UML... And I'm not against picking and mixing... as long as it is quick, clear and effective.
    Conclusion: In the end, I opted for a quasi-workflow/dataflow diagram. I mapped out the parts of the manufacturing process that interact with the traceability DB and indicated, in a significantly simplified form, the data flows and DB activity. Alongside that, I have a supporting document which outlines each process, the data being transacted for each process (a 'data dictionary' of sorts) and details of the hardware and connectivity required. I can't decide whether it is a product of genius or a crime against established software development practices, but I do think it will hit the mark for this particular audience.

    Read the article

  • When should I increment the version number?

    - by ahmed
    I didn't learn programming at school and I do not work as a (professional) developer, hence a lot of the basics are not quite clear to me. This question tries to clarify one of them. Let's suppose that I have issues #1, #2 and #3 in my issue tracker, set to be corrected/enhanced for version 1.0.0, and that the last (stable) version is 0.9.0. When should I increment to version 1.0.0: a) when just one of the issues listed above is closed, or b) when all the issues related to version 1.0 are closed? Which one is the right way to do it? And by the right way, I mean what is currently used in the industry. Thanks.

    Read the article

  • Why isn't reflection on the SCJP / OCJP?

    - by Nick Rosencrantz
    I read through Kathy Sierra's SCJP study guide and I will read it again more thoroughly to improve myself as a Java programmer and to be able to take the certification, either for Java 6 or waiting for the Java 7 exam (I'm already employed as a Java developer, so I'm in no hurry to take the exam). Now I wonder why reflection is not on the exam. The book seems to cover everything that should be on the exam, and AFAIK reflection is at least as important as threads, if not more used in practice, since many frameworks rely on it. Do you know why reflection is not part of the SCJP? Do you agree that knowing reflection is at least as important as knowing threads? Thanks for any answer.
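
    For anyone wondering why frameworks lean on it so heavily, here is a minimal sketch of the kind of reflection they rely on: loading a class by name and invoking a method chosen at runtime.

        import java.lang.reflect.Method;

        public class ReflectionSketch {
            public static void main(String[] args) throws Exception {
                // A framework typically gets this name from configuration, not a literal.
                Class<?> clazz = Class.forName("java.util.ArrayList");
                Object instance = clazz.getDeclaredConstructor().newInstance();

                Method add = clazz.getMethod("add", Object.class);
                add.invoke(instance, "hello");

                System.out.println(instance); // [hello]
            }
        }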

    Read the article

  • How much effort should you put into a junior developer?

    - by Crazy Eddie
    At what point should one give up? I've tried helping them out by having them shadow me. We agree to take a minute's break, and then they go missing in action for a while... then just go back to their desk. Even when I know they've done this, part of me feels like I shouldn't have to go get them, and that they should be showing interest in learning. Frankly, it's a lot of time I don't have, explaining things as I go when I could just do it myself. Am I expecting too much to expect that if they want to learn they'll make sure I know they're ready and willing? They go to meetings that they were not told they had to attend (good), but then sit in the corner and sleep (bad). I don't even know what to do with that. Sometimes I give them something small to do and they do it great, so I give them something just a touch harder and they totally fail, hard. They check in things without testing them. Part of me thinks that maybe I should be spending more time with them, but at the same time I don't see a lot of interest, and I really, honestly don't have time to teach the same things over and over. Sometimes I get asked questions that are really, really easy to answer if you just do a little bit of your own work trying to find out. Other times I'm not asked anything. I'm sure I could be doing better, but honestly... I don't really want to anymore.

    Read the article

  • What defines a language as a scripting language? [closed]

    - by Mathew Foscarini
    Possible duplicate: What is the main difference between Scripting Languages and Programming Languages? I'd like to know what defines a language as a scripting language compared with other programming languages. Some possible scripting languages might include AutoCAD LISP, Linux Bash, DOS Batch, JavaScript or ActionScript in Flash. Where is the distinction drawn that makes a language a scripting language? Is there a set of clearly defined rules to classify it as such?

    Read the article

  • Company wants to write a custom project management tool rather than use a third-party product

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For what it would cost us in development time to build our own project management module, we could buy numerous licences for a third-party product. One concern is that, whilst we are writing code for a client and using our custom intranet project management module, we find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third-party tools are so feature-rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about.
    EDIT: Thanks for the great responses, much appreciated. To summarize some of them:
    Money: Naturally my boss does want to save money by not forking out a few hundred pounds for licences. However, for us to write a custom tool will take x number of days, multiplied by approx £500, which is our cost. I don't see the business value in this. Management have mentioned that they want to sell the intranet as a product in the future, but it's so custom to our needs (and downright basic) that in order to give it to another client, I can see us having to fork the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse.
    Features: Having our own custom module means no feature bloat; only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again, it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser: why? What's the point? Although management are asking for this tool, that does not mean I'm going to please them and do it just because they asked for it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, and it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :)
    Hmm, I've not been able to make this question a wiki for some reason, so I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • What layer should introduce human-readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have never really been able to resolve, is exactly at which tier in an application human-readable error information should be produced for display to a user. A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }    // implements IResult
            public string Message { get; private set; }  // implements IResult, human-readable
            // Constructor implemented here to set the read-only properties...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized. However, this idea of returning human-readable information from the business tier has always seemed suspect to me, partly because what constitutes the public surface of the API may change over time, and it may be the case that the API will need to be reused by other API components in the future that do not need the human-readable string messages (and looking them up from a database would be an expensive waste). I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself, but I have two problems here:
    1) The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.
    2) Often there are multiple legitimate modes of failure. If the business tier becomes too vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e. if a method has three modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be. I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns rather than just a boolean, but it is not so good for serialization scenarios.
    Is there a well-worn pattern for this? What do people think? Thanks.
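
    The failure-enum idea mentioned above, with the human-readable text resolved only at the UI edge, might look roughly like this. It is sketched in Java for consistency with the other snippets on this page even though the post itself is .NET, and every name below is illustrative rather than part of the original API.

        import java.util.EnumMap;
        import java.util.Map;

        public class FailureEnumSketch {

            // The business tier exposes only a machine-readable failure code.
            enum ClearAccountsFailure { NONE, ACCOUNT_LOCKED, INSUFFICIENT_RIGHTS, ALREADY_CLEARED }

            record ClearAccountsResult(boolean success, ClearAccountsFailure failure) {}

            // The UI layer owns the mapping from code to (localizable) text.
            static final Map<ClearAccountsFailure, String> MESSAGES = new EnumMap<>(Map.of(
                    ClearAccountsFailure.ACCOUNT_LOCKED, "The account is locked.",
                    ClearAccountsFailure.INSUFFICIENT_RIGHTS, "You do not have permission to clear accounts.",
                    ClearAccountsFailure.ALREADY_CLEARED, "These accounts have already been cleared."));

            public static void main(String[] args) {
                ClearAccountsResult result = new ClearAccountsResult(false, ClearAccountsFailure.ACCOUNT_LOCKED);
                if (!result.success()) {
                    System.out.println(MESSAGES.get(result.failure()));
                }
            }
        }

    The enum still serializes cleanly as a name or number over the wire, which is one way around the serialization concern, at the cost of every client needing its own message table.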

    Read the article

  • What effects do various drugs have on coding style / productivity? [closed]

    - by codecraft
    Can anyone tell me what the effects of various drugs are on coding style, and whether coding on drugs can be more productive, or more fun? Are some types of drugs better suited to certain tasks and phases of software development? And which programming languages are best suited to coding on drugs? It would be great if you could back up your answers with data, perhaps even code snippets showcasing the effect of the drug experience.

    Read the article

  • How to detect if an app was already installed before

    - by Dante
    How do software applications keep track of whether the user has already installed the application before on their Windows system? Say you install app X (trial version), remove it, then reinstall it, and when you run it again it detects that you had already installed it before. If you uninstall and clean all registry information, it shouldn't know you had already installed it before... Disclaimer: I'm not trying to "hack" any application, just thinking about how this is implemented.

    Read the article
