
  • Equation solver project

    - by Victor Barbu
    I would like to start a project intended for students. My application has to solve any kind of equation passed by the user as a string, exactly like MATLAB's solve function. How should I do this? What programming language is best for this purpose? Thanks in advance. P.S. This is a screenshot made in MATLAB; this is how I would like the user to enter an equation and receive the answer. Another example follows.

    Read the article

  • Is it OK to use REST for CRUD operations?

    - by l0l0l0l0l
    Recently I moved to Laravel and was surprised by how well RESTful controllers work; they made my routes and my code cleaner. I'm fairly new to web development and never used REST before, since all my clients' projects are basically CRUD operations. Is there a buzzword for this "approach", or am I just stupid for doing it? I don't plan to follow any strict REST patterns, just to make my life easier and my code cleaner. Basically just GET/POST; the other verbs aren't native in browsers anyway (they're emulated through a hidden form value).

    Read the article

  • Do you think code is self-documenting?

    - by Desolate Planet
    This is a question that was put to me many years ago, as a graduate, in a job interview, and it has picked at my brain now and again since; I've never really found a good answer that satisfies me. The interviewer in question was looking for a black-and-white answer; there was no middle ground. I never got the chance to ask about the rationale behind the question, but I'm curious why that question would be put to a developer and what you would learn from a yes or no answer. From my own point of view, I can read Java, Python, Delphi, etc., but if my manager comes up to me and asks how far along in a project I am and I say "The code is 80% complete" (and before you start shooting me down, I've heard this uttered in a couple of offices by developers), how exactly is that self-documenting? Apologies if this question seems strange, but I'd rather ask and get some opinions on it to gain a better understanding of why it would be put to someone in an interview.

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance in our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is great not just for capital-A Agile but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the right way to do it?

    Read the article

  • Who should write the test plan?

    - by Cheng Kiang
    Hi, I am in the in-house development team of my company, and we develop our company's web sites according to the requirements of the marketing team. Before releasing a site to them for acceptance testing, we are asked to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, they would have the best knowledge of what to test, what to look out for, how things should behave, etc., and that a test plan is thus not required. We are always arguing over this, and developers find it a waste of time to write down things like: click on button A; key in XYZ in the form field and click button B; you should see behaviour C. We have to repeat this for each requirement/feature requested, and it is basically rephrasing what's already in the requirements document. We are moving towards an Agile approach for managing our projects, and this is also requested at the end of each iteration. Unit and integration testing aside, who should come up with the end-user acceptance test plan? Should it be the requestors or the developers? Many thanks in advance. Regards, CK

    Read the article

  • Create software without programming

    - by Hafizul Amri
    Is there any system or software that lets you create a system or software without having to do programming? UPDATE: Answering Kennethvr's question: the software I mean is something like a web-based tool used to create a simple CRUD system, such as a contact management system, where you can choose how the data will be displayed. UPDATE: I have found the PHPRunner software. It can create a simple CRUD system without you needing to touch any code. Take a look at it!

    Read the article

  • What is the best way to keep track of the median?

    - by Steven Mou
    I read a question in a book: numbers are randomly generated and stored in an (expanding) array; how would you keep track of the median? There are two data structures that can solve the problem. One is a balanced binary tree; the other is a pair of heaps that keep track of the largest half and the smallest half of the elements. I think these two solutions have the same running time of O(n lg n), but I am not sure of my judgement. In your opinion, what is the best way to keep track of the median?
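
    For illustration, here is a minimal sketch of the two-heap idea mentioned above (the class and method names are just for this example): a max-heap holds the smaller half, a min-heap holds the larger half, each insert rebalances so the sizes differ by at most one, and the median is read from the heap tops.

        import java.util.Collections;
        import java.util.PriorityQueue;

        class RunningMedian {
            // Max-heap for the smaller half, min-heap for the larger half.
            private final PriorityQueue<Integer> lower = new PriorityQueue<>(Collections.reverseOrder());
            private final PriorityQueue<Integer> upper = new PriorityQueue<>();

            void add(int value) {
                if (lower.isEmpty() || value <= lower.peek()) {
                    lower.offer(value);
                } else {
                    upper.offer(value);
                }
                // Rebalance so the heap sizes never differ by more than one.
                if (lower.size() > upper.size() + 1) {
                    upper.offer(lower.poll());
                } else if (upper.size() > lower.size() + 1) {
                    lower.offer(upper.poll());
                }
            }

            double median() {
                if (lower.size() == upper.size()) {
                    return (lower.peek() + upper.peek()) / 2.0;
                }
                return lower.size() > upper.size() ? lower.peek() : upper.peek();
            }
        }

    Each insert costs O(lg n), so n inserts cost O(n lg n) overall, matching the balanced-tree approach, but reading the current median is constant time.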

    Read the article

  • Is it common practice to hire third parties to do code reviews for contractors?

    - by blueberryfields
    I recently observed some contract offers that included a "code review by third party" clause: the contract would not pay out fully until the code review was completed and passed. I was surprised, especially considering that these were fairly simple, small-scale contracts (churning out vanity apps for the iPhone). Is this kind of third-party code review a common thing to run into when contracting out as a programmer?

    Read the article

  • Is an MSc or experience worth more in industrial environments?

    - by Abimaran
    I'm a fresh graduate in the Electronic & Telecommunication field, and in our university we can have major and minor fields in the relevant subjects. So I majored in telecommunication and minored in Software Engineering. Since I learned programming long before, I'm now passionate about SE and programming, and I want to move into the SE field. I have come to understand that industry mostly expects candidates to have experience, or an MSc in the related field. [I'm referring to my surrounding environment, not all industries.] My question: how does industry regard MSc holders versus experienced candidates? Thanks!

    Read the article

  • Zooming options terminology

    - by Mark
    I've come up with 4 different ways to fit an image inside a viewing region, but I'm having trouble coming up with names for them. Perhaps someone can suggest some? 1. Fit the image in the viewing region, but do not enlarge it if the image is smaller. 2. Size the image so it fits snugly inside the viewing region (enlarging if necessary); the image is as large as possible while still fitting within the viewing region. 3. Size the image so that it fills the entire viewing region; the image will be the same size as or bigger than the viewing region. 4. 1:1 ratio; one pixel in the image corresponds to one pixel on screen. All zooming options maintain aspect ratio. Stretching is just ugly, so it's not an option :)
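
    As a rough illustration of how these four modes differ, here is a minimal sketch (the mode names are placeholders, not established terminology) that computes the scale factor for each while preserving aspect ratio:

        // "fit" keeps the whole image visible; "fill" covers the whole viewport and may crop.
        static double zoomScale(String mode, double imgW, double imgH, double viewW, double viewH) {
            double fit  = Math.min(viewW / imgW, viewH / imgH);
            double fill = Math.max(viewW / imgW, viewH / imgH);
            switch (mode) {
                case "shrink-to-fit": return Math.min(fit, 1.0); // option 1: never enlarge
                case "fit":           return fit;                // option 2: enlarge or shrink to fit
                case "fill":          return fill;               // option 3: cover the viewing region
                case "actual-size":   return 1.0;                // option 4: 1 image pixel = 1 screen pixel
                default: throw new IllegalArgumentException(mode);
            }
        }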

    Read the article

  • How could a human factors degree help a computer scientist?

    - by Bob Dole
    I'm wrapping up a master's in CS and already have half the credit hours needed for a degree in Human Factors. I just recently discovered how much an understanding of cognition can help someone who creates user interfaces, and I am thirsty for more knowledge in the area. It seems to me that having both a master's in Human Factors and one in CS would be very marketable, but would there be jobs out there that would allow me to apply both? What I would really like to do is take the requirements for some application, apply different Human Factors theories (GOMS, CE+) to developing the interface, maybe do cognitive walkthroughs with users to optimize the UI, and then develop the application. Do jobs like this exist? The reason I ask is that I'm wondering if most places just want you to be either a Human Factors expert or a developer, but not both.

    Read the article

  • How do I tell when it's worth using LINQ?

    - by Lijo
    Many things in LINQ can be accomplished without the library, but for some scenarios LINQ is most appropriate. Examples are: SELECT: http://stackoverflow.com/questions/11883262/wrapping-list-items-inside-div-in-a-repeater ; SelectMany, Contains: http://stackoverflow.com/questions/11778979/better-code-pattern-for-checking-existence-of-value ; Enumerable.Range: http://stackoverflow.com/questions/11780128/scalable-c-sharp-code-for-creating-array-from-config-file ; WHERE: http://stackoverflow.com/questions/13171850/trim-string-if-a-string-ends-with-a-specific-word . What factors should I take into account when deciding between LINQ and regular .NET language constructs?

    Read the article

  • Today VS 2010 SP1 comes out; any news on the roadmap for Visual Studio 2012?

    - by Abel
    Today Visual Studio 2010 SP1 comes out as a general-availability release. This made me wonder about the upcoming release of Visual Studio 2012: What are Microsoft's plans for Visual Studio 2012? I heard they'll come out with a new version every two years. Are there any open fora or discussions? When will a preview be publicly available? But most importantly: what are the new highlights and improvements in .NET and C#/F#/VB (and C++ of course, a request from Stijn)?

    Read the article

  • How do I make the jump from developing for Android to Windows Phone 7?

    - by Rob S.
    I'm planning on making the jump from developing apps for Android to developing apps for Windows Phone 7 as well. For starters, I figured I would port over my simplest app. The code itself isn't much of a problem, as the transition from Java to C# isn't that bad; if anything, it is actually easier than I expected. What is troublesome is switching SDKs. I've already compiled some basic Windows Phone 7 apps and run through some tutorials, but I'm still feeling a bit lost. For example, I'm not sure what the equivalent of an Android ScrollView would be on Windows Phone 7. So does anyone have any advice or resources they can offer to help me make this transition? Additionally, any comments on the Windows Phone 7 app market (especially in comparison to the Android Market) would also be greatly appreciated. Thank you very much in advance for your time.

    Read the article

  • What would you do if your client required you not to use object-oriented programming?

    - by gunbuster363
    Would you try to persuade your client that using object-oriented programming is much cleaner, or would you try to follow what he required and give him crappy code? I am writing a program to simulate the activity of ants in a grid. An ant can move around, pick up things, and drop things. The problem is that, while the actions of the ants and the positions of each ant can easily be tracked by class attributes (and we can easily create many instances of such ants), my client said that since he has a background in functional programming, he would like the simulation to be written using functional programming. What would you do?

    Read the article

  • (PHP vs Python vs Perl) vs Ruby [closed]

    - by Dr.Kameleon
    OK, here's the situation: I've programmed in over 20 different languages, and now, because of a large project I'm currently working on for Mac OS X (in Objective-C/Cocoa), I need to make a final decision on which language to use for my background scripting and plugin functionality. One factor that will definitely influence my decision is which one I'm most familiar with, which is PHP (one of the ugliest languages around, which I however adore... lol), then Python / Perl (the "proven values"...), and then Ruby (which, to me, is almost confusing, and I've only played with it for some time). Now, here are my considerations: (1) being familiar with it (though if X is better in my case, I really don't mind studying it from scratch); (2) speed; (3) good interaction with the shell and ease of integration with my Cocoa application. By the way, some of the reasons that made me wonder whether Ruby would be a good choice are: the hype around it (although I still don't get why, but that's probably just me), and the fact that my major competitor (we're actually talking about the same type of software here) uses Ruby almost exclusively for its backend scripting (OK, along with some Bash). Isn't Ruby considered slower than, e.g., Perl? Why did he choose it? Simply a matter of personal taste? So... your thoughts?

    Read the article

  • When not to use Spring to instantiate a bean?

    - by Rishabh
    I am trying to understand the correct usage of Spring, not syntactically but in terms of its purpose. If one is using Spring, should Spring code replace all bean instantiation code? When should Spring be used, and when not, to instantiate a bean? Maybe the following code sample will help you understand my dilemma:

        List<ClassA> caList = new ArrayList<ClassA>();
        for (String name : nameList) {
            ClassA ca = new ClassA();
            ca.setName(name);
            caList.add(ca);
        }

    If I configure Spring, it becomes something like:

        List<ClassA> caList = new ArrayList<ClassA>();
        for (String name : nameList) {
            ClassA ca = (ClassA) SomeContext.getBean(BeanLookupConstants.CLASS_A);
            ca.setName(name);
            caList.add(ca);
        }

    I personally think using Spring here is unnecessary overhead, because: (1) the plain code is simpler to read and understand, and (2) it isn't really a good place for dependency injection, as I am not expecting multiple or varied implementations of ClassA that I would want the freedom to swap via Spring configuration at a later point in time. Am I thinking correctly? If not, where am I going wrong?

    Read the article

  • Application QoS involving priority and bandwidth

    - by Steve Peng
    Our manager wants us to implement application QoS, which is quite different from the well-known system QoS. We have many services of three types; they have priorities, and the manager wants to suspend low-priority service requests when there is not enough bandwidth for the high-priority services. But if high-priority service requests decrease, the bandwidth for low-priority services should increase and low-priority requests should be allowed again. There should be an algorithm involving priority and bandwidth. I don't know how to design the algorithm; is there any example on the internet? Can somebody give a suggestion? Thanks. UPDATE: All these services are within the same process. We are setting the maximum bandwidth for the three types of services on their ports via tc (the Linux QoS tool, short for traffic control).
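
    As one possible starting point (only a sketch under assumed requirements, not a proven design), the allocation could walk the service classes from highest to lowest priority, grant each class its current demand while capacity lasts, and suspend any class that cannot be given a minimum share:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // demandByClass must be ordered from highest to lowest priority;
        // capacity and minShare are in the same units (e.g. kbit/s).
        static Map<String, Long> allocate(Map<String, Long> demandByClass, long capacity, long minShare) {
            Map<String, Long> granted = new LinkedHashMap<>();
            long remaining = capacity;
            for (Map.Entry<String, Long> e : demandByClass.entrySet()) {
                long grant = Math.min(e.getValue(), remaining);
                if (grant < minShare) {
                    grant = 0; // suspend this class until higher-priority demand drops
                }
                granted.put(e.getKey(), grant);
                remaining -= grant;
            }
            return granted;
        }

    Re-running the allocation whenever demand changes gives the behaviour described above: as high-priority demand falls, the remaining capacity flows back to the lower-priority classes.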

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching the CEO of a new "cloud computing" company describe his company on a finance TV program today, he said something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network-services model, such that I do not own or maintain the physical hardware; the "cloud" is all the back-end stuff. But I might still have an application that communicates with that "cloud" environment. And if I run a web site that presents a form a user fills out, pushes a button on the page, and returns some report generated by the web server, isn't that the same as "cloud" computing? And wouldn't you consider my web browser the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the closest one in the Stack universe, and this is my first time here. I'm an old-timer, programming since mainframe days in the late '70s.

    Read the article

  • How did Microsoft market .NET?

    - by Fendy
    I just read one of Joel's articles about Microsoft's breaking change (non-backwards compatibility) with the introduction of .NET. It is interesting and clearly reflected the conditions at the time, but almost 10 years have passed since. The breaking change: the article is mainly about how bad it was for Microsoft to introduce non-backwards-compatible development tools such as .NET instead of improving the already widely used ASP Classic or VB6. As many know, .NET is not natively embedded in Windows XP (it is in Vista and 7), so in order to use .NET apps you needed to install the .NET Framework at over 300 MB (big in those days). However, nowadays many businesses use .NET as their main development tool, with ASP.NET or MVC for their web applications, and C# is now one of the top programming languages (among the most questions on Stack Overflow). The more interesting part is that the Win32 API is still alive (and still widely used) even though newer technology is out there. Imagine if Microsoft had not introduced the breaking change: many corporations would still be using ASP Classic or VB-based applications (some still are, but not that many). Many corporations also use additional services such as Azure or SharePoint (despite how expensive they are). Please note that I also know there are many flagship applications (maybe Adobe's and Blizzard's) that still use C-based or older languages and are not being ported to newer high-level languages. The question: how did Microsoft persuade users to migrate their old applications to .NET? As we know, rewriting applications is very hard, gives no immediate value (the Netscape story), and is very risky. I am more interested in Microsoft's approach than in opinions such as "because .NET is OOP" or ".NET is DLL-embeddable". This question may be constructive, as technology has been changing rapidly lately. As we can see, Microsoft moved ASP.NET WebForms to MVC, WinForms is legacy now, distribution is starting to shift to the Windows Store rather than classic installers, touchscreens are here, and later on we will have see-through applications such as Google Glass. Those will be breaking changes too. Nowadays we need to account for portability as an issue; we need more than a mere technology choice, we also need migration plans, and maybe even something as critical as a multiplatform language compiler, as approached by Joel's Wasabi. (Hey, I read his articles too much!)

    Read the article

  • How should I determine my rates for writing custom software?

    - by Carson Myers
    For custom software that will likely take a year or more to develop, how would I go about determining what to charge as a consultant? I'm having a hard time coming up with a number, and searches online give vastly different figures (between $55/hr and $300/hr). I don't want to shoot too low, because the project is going to take me so much time (and I'm deferring my education for it). I also don't want to shoot too high and get unpleasant looks and demands for justification. FWIW I live in Canada and have approximately 10 years of development experience. I've read the "take your salary and divide it by 1000" rule of thumb, but the thing is I don't have a salary. Currently I'm just doing fairly small programming tasks for a friend who is starting a marketing company, pricing each task fairly arbitrarily. I don't know what I would make over the course of a year doing that, but it would be incredibly low. My responsibilities on the project would be the architecture, programming, database, server, and, to some degree, UX. It's going to be a public-facing web service, so I will also need to put a lot of effort into security and scalability. Any advice or experience?

    Read the article

  • OOP-oriented PHP app source code samples and advice

    - by abel
    The day I have been dreading has arrived. I never felt OOP or good software design was important (I knew they were important, but I thought I could manage without them). However, having read otherwise almost everywhere on the interwebs, I started dreading the day when my client would ask me for new features in an existing app. The day has come and the pain is unbearable! I have never coded my PHP websites "properly" (PHP is my primary language and the bulk of my work; I am learning Python, using web2py). I take care that the website doesn't fall apart in a daily-use scenario, but I code pages as if I were creating a list of static HTML files with bits of "magic code" in each of them (this bugs me a lot). How do I make the whole app more or less a single object? For example, how do I design the object model for an invoicing app? I use a lot of functions for doing particular things in the same fashion throughout the app (e.g. validation, generating IDs, calculating taxes, etc.). I know the basics of OOP in general. Can anyone point me to source code samples of functional apps written in PHP? Or can someone provide pointers so I can recode my existing apps in a more modular way?

    Read the article

  • Current iOS version/device statistics?

    - by hotpaw2
    The answer to this SO question has become stale: "iOS version/device statistics - where can I find them?" Time currency wasn't part of that question, and iOS version updates have been released since it was asked. Is there a web site or other publicly available source which keeps a current or frequently updated list of the percentages of iOS devices and OS versions in use, perhaps through continual monitoring of app analytics, web site logs, or other means? And what device or OS information are iOS app analytics currently allowed to report, if any? (...assuming an appropriate privacy policy and adherence to it, of course.)

    Read the article

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring, I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, and where another solution is more appropriate (context variable, singleton, etc.). I have found that if the scope lifecycle is not well defined, it is very difficult and often failure-prone to manage injections in this way. I have searched on this topic but have found little discussion of the pattern. Are there good articles discussing where to use scoping and what the required/suggested prerequisites for scoping are? I am interested in both references to such discussions and your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request- or session-scoped web variables, conversation scopes, and others. Edit: some simple background on custom scopes: Google Guice custom scope. Some definitions relevant to the above: “scoping” - a set of requirements that define which objects get injected at what time. A simple example of this is thread scope, based on a ThreadLocal; this scope would inject a variable based on which thread instantiated the class (a sketch of this appears at the end of this entry). “context variable” - a repository passed from one object to another holding relevant variables. Much like scoping, this is a more brute-force way of accessing variables based on the calling code. Example:

        methodOne(Context context) {
            methodTwo(context);
        }

        methodTwo(Context context) {
            ... // same context as in methodOne, if called from methodOne
        }

    “globally scoped singleton” - following the singleton pattern, there is one object per application instance. This relates to scopes because there is a basic lifecycle to this object: only one of these objects is ever instantiated. Here's an example of a JSR-330 Singleton-scoped object:

        @Singleton
        public class SingletonExample {
            ...
        }

    Usage:

        public class One {
            @Inject SingletonExample example1;
        }

        public class Two {
            @Inject SingletonExample example2;
        }

    After instantiation: one.example1 == two.example2 // true
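
    Here is the promised minimal sketch of a thread scope (the helper class is hypothetical, not the Guice or Spring API): each thread that asks for the value gets its own lazily created instance, so the instance's lifetime is tied to the thread that requested it.

        import java.util.function.Supplier;

        class ThreadScoped<T> {
            private final ThreadLocal<T> holder;

            ThreadScoped(Supplier<T> factory) {
                this.holder = ThreadLocal.withInitial(factory); // one instance per thread
            }

            T get() {
                return holder.get(); // lazily created on first access from this thread
            }
        }

        // Hypothetical usage: every thread calling scoped.get() sees its own StringBuilder.
        // ThreadScoped<StringBuilder> scoped = new ThreadScoped<>(StringBuilder::new);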

    Read the article

  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration: set-up, re-training, awareness activities, stoppages to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, and so on. At what point does continuous integration pay for itself?

    EDIT: These were my findings. The set-up was CruiseControl.NET with NAnt, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the set-up itself:

    Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, a data-quality issue, or another source such as an infrastructure problem (a network issue, a timeout reading from source control, a third-party server being down, etc.).

    Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server; red tape got in the way and there was no server replacement.

    Cost of fixing unit tests: a red light due to a data-quality issue can indicate a badly written unit test, so data-dependent unit tests were rewritten to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment so its unit tests could run accurately. It makes sense to say that by making the data more robust, the test becomes more robust if it depends on that data. Of course, this worked well!

    Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit-test coverage. Thousands of methods had no unit tests, so a sizeable number of man-days would have been needed to create them. As this would be too difficult to justify in a business case, it was decided that unit tests would be written for any new public method going forward; methods without a unit test were termed "potentially infra-red". An interesting point here was static methods, and the question of how one could uniquely determine that a specific static method had failed.

    Cost of bespoke releases: NAnt scripts only go so far. They are not that useful for, say, CMS-dependent builds for EPiServer or for any UI-oriented database deployment. These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I maintain that these are unnecessary, as a build master can perform these tasks manually at release time, especially for a one-man band and a small build.

    So single-step builds have not justified the use of CI in my experience. What about more complex, multi-step builds? These can be a pain to set up, especially without an NAnt script, and even after creating one they were no more successful. The cost of fixing the red-light issues outweighed the benefits, and eventually developers lost interest and questioned the validity of the red light. Having given it a fair try, I believe that CI is expensive and involves a lot of working around the edges instead of just getting the job done. It's more cost-effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This holds even if those developers leave: it doesn't matter if a good developer leaves, because the processes he follows ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable, and all of this is reviewed.
    If that is not happening, then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizeable systems. The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid to fix as part of a client service agreement once the warranty period expires anyway.

    Read the article
