Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. I mention this to show that I have experience in programming, but not in the professional development of business applications. I see a lot of talk on this site about performance. People often debate what the most efficient algorithm in C# would be to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems which other programs rely on, such as OSes and VMs. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter decades ago: computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference for the end users of this typical business app? When you press a GUI button in a typical business app and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing, do you actually feel the inefficiency? My question is split in two: (1) In practice, does performance matter today in a typical, normal business program? (2) If it does, please give me real-world examples of places in such an application where performance and optimization are important.
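
    Since the question hinges on what O(n^2) feels like in practice, here is a tiny illustrative sketch (TypeScript, chosen just for brevity; the point is language-independent). Both functions do the same job; with a few thousand records either feels instant, but with a million records pulled from a database the quadratic one can freeze the UI while the linear one stays fast:

        function hasDuplicatesQuadratic(ids: string[]): boolean {
          for (let i = 0; i < ids.length; i++) {
            for (let j = i + 1; j < ids.length; j++) {
              if (ids[i] === ids[j]) return true;   // ~n^2/2 comparisons
            }
          }
          return false;
        }

        function hasDuplicatesLinear(ids: string[]): boolean {
          const seen = new Set<string>();
          for (const id of ids) {
            if (seen.has(id)) return true;          // one hash lookup per record
            seen.add(id);
          }
          return false;
        }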

  • What is a recent programming language of choice for AI?

    - by Eduard Florinescu
    For a few decades the programming language of choice for AI was either Prolog or LISP, plus a few others that are not so well known, most of them designed before the 70s. Domain-specific languages in many other areas have changed a great deal, but the AI domain has not seen as much of this as, say, web-specific or scripting languages. Are there recent programming languages that were intended to change the game in AI and learn from the shortcomings of earlier languages?

  • Techniques for getting off the ground in any language

    - by AndyBursh
    When I start learning a new language, I have a couple of simple implementations that I like to complete to familiarise myself with the language. Currently, I write: Fibonacci and/or factorial, to get the hang of writing and calling methods and basic recursion; and Dijkstra's shortest path (with a node type), to get to grips with making classes (or whatever the language equivalent is) with methods and properties, and also using them in slightly more complex code. I was wondering: does anybody else have any techniques or tools they like to use when getting off the ground in a new language? I'm always looking for new things to add to my "start-up routine".
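
    For what it's worth, a minimal version of that first warm-up (here in TypeScript, purely as an illustration of the routine the question describes):

        // Naive recursion on purpose: the goal is to exercise the
        // language's function syntax, not to be efficient.
        function factorial(n: number): number {
          return n <= 1 ? 1 : n * factorial(n - 1);
        }

        function fibonacci(n: number): number {
          return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
        }

        console.log(factorial(5));   // 120
        console.log(fibonacci(10));  // 55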

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-as-a-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data as well as legacy PC data through established connectors; SAP, for example, provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing channel partners and consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a cloud-based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a cloud service which the client can configure with a VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc.). I can also provide a framework that allows people to build their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public-facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut says this isn't as hard as it sounds. Almost all of the 30+ companies (with more popping up almost weekly) have come into existence in the last 18 months or so, which tells me either the root technology, or the skill set to create the technology, is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. See: The Register: WTF is BaaS; and Kony's one-minute video on their BaaS.
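
    To make the extensibility idea concrete, here is a hedged sketch of the seam such a proxy might expose (TypeScript; every name here is hypothetical, not any vendor's actual API): a Connector interface that partners implement, plus a registry the proxy uses to route public requests to whichever data source the client exposed.

        interface Connector {
          readonly name: string;                  // e.g. "sap", "oracle"
          authenticate(serviceAccount: string): Promise<void>;
          query(resource: string,
                params: Record<string, string>): Promise<unknown>;
        }

        class ConnectorRegistry {
          private connectors = new Map<string, Connector>();

          register(c: Connector): void {
            this.connectors.set(c.name, c);
          }

          // The public-facing web service resolves a connector by name
          // and returns JSON-serializable data to the HTML5 client.
          async proxyRequest(name: string, resource: string,
                             params: Record<string, string>): Promise<unknown> {
            const c = this.connectors.get(name);
            if (!c) throw new Error(`No connector registered for '${name}'`);
            return c.query(resource, params);
          }
        }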

  • Any Practical Alternative to the Signals + Slots model for GUI Programming?

    - by IntermediateHacker
    The majority of GUI toolkits nowadays use the Signals + Slots model. It was Qt and GTK+, if I am not wrong, who pioneered it. You know: the widgets or graphical objects (sometimes even ones that aren't displayed) send signals to the main-loop handler, and the main-loop handler then calls the events, callbacks or slots assigned to that widget / graphical object. There are usually default (and in most cases virtual) event handlers already provided by the toolkit for handling all pre-defined signals; therefore, unlike previous designs where the developer had to write the entire main loop and the handler for each and every message himself (think WINAPI), the developer only has to worry about the signals he needs to implement new functionality for. This design is used in most modern toolkits as far as I know: Qt, GTK+, FLTK, etc. There is Java Swing. C# even has a language feature for it (events and delegates), and Windows Forms has been developed on this design. In fact, over the last decade, this design for GUI programming has become a kind of unwritten standard, since it increases productivity and provides greater abstraction. However, my question is: is there any alternative design that is comparable or practical for modern GUI programming? i.e., is the Signals + Slots design the only practical one in town? Is it feasible to do GUI programming with any other design? Are any modern (preferably successful and popular) GUI toolkits built on an alternative design?
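
    The mechanism itself is small; here is a minimal sketch of the signal/slot idea (TypeScript, illustrative only): a signal keeps a list of slots (callbacks) and emit() invokes each one, which is essentially what the toolkits' dispatch loops do on the widget's behalf.

        type Slot<T> = (payload: T) => void;

        class Signal<T> {
          private slots: Slot<T>[] = [];

          connect(slot: Slot<T>): void {
            this.slots.push(slot);
          }

          emit(payload: T): void {
            for (const slot of this.slots) slot(payload);
          }
        }

        // Usage: a hypothetical button exposing a "clicked" signal.
        const clicked = new Signal<{ x: number; y: number }>();
        clicked.connect(p => console.log(`clicked at ${p.x},${p.y}`));
        clicked.emit({ x: 10, y: 20 });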

  • Documentation of TypeScript code

    - by Max Beikirch
    My question is rather short: how do I document TypeScript code properly? I have found that as projects become bigger and bigger, it is important to be able to look at a function and immediately know its parameters, what it returns, its side effects, etc. It is tiring to have just a bunch of comments before a function; most of the time these 'blocks' don't even look consistent in style. What I am looking for is a documentation tool like Javadoc or Doxygen for TypeScript. Is there anything out there? Or is it possible to 'abuse' an existing documentation tool and get it to work with TypeScript?
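
    Such tools do exist: TypeDoc, for example, generates documentation from JSDoc-style comment blocks in TypeScript source. A sketch of the comment shape these tools understand (the function itself is a made-up example):

        /**
         * Transfers funds between two accounts.
         *
         * @param from - account id to debit
         * @param to - account id to credit
         * @param amount - amount in cents; must be positive
         * @returns a confirmation id for the transfer
         * @throws Error if `amount` is not positive
         */
        function transfer(from: string, to: string, amount: number): string {
          if (amount <= 0) throw new Error("amount must be positive");
          return `${from}->${to}:${amount}`;   // placeholder implementation
        }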

  • Improve the business logic

    - by Victor
    In my application I have a feature like this: the user wants to add a new address to the database. Before adding the address, he needs to perform a search (using input parameters like country, city, street, etc.), and when the list comes up, he manually checks whether the address he wants to add is present. If it is present, he does not add the address. Is there a way to make this process better, maybe by eliminating a step or avoiding the need for manual verification?
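
    One common way to eliminate the manual step, sketched here under the assumption that addresses can be canonicalized (TypeScript, illustrative): normalize the address fields into one canonical string and let a UNIQUE constraint on that column do the verification, e.g. via INSERT ... ON CONFLICT DO NOTHING in PostgreSQL.

        function normalizeAddress(a: { country: string; city: string;
                                       street: string }): string {
          const clean = (s: string) =>
            s.trim().toLowerCase().replace(/\s+/g, " ");
          // The canonical form becomes the UNIQUE key in the database.
          return [clean(a.country), clean(a.city), clean(a.street)].join("|");
        }

        console.log(normalizeAddress(
          { country: " USA", city: "New  York", street: "5th Ave " }));
        // "usa|new york|5th ave"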

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions, and we have two kinds: axioms and lemmas. Axioms are assumptions; lemmas are easily derived from them. In C++, axioms are pure virtual functions. Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too for methods or functions: we have a desired set which must be defined, typically each one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined and, if there is an overspecification, consistent. Let me give an example (in Felix; Haskell code would be similar):

        class Eq[t] {
          virtual fun == (x:t, y:t): bool => eq(x,y);
          virtual fun eq (x:t, y:t) => x == y;
          virtual fun != (x:t, y:t): bool => not (x == y);
          axiom reflex(x:t): x == x;
          axiom sym(x:t, y:t): (x == y) == (y == x);
          axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z);
        }

    Here it is clear: the programmer must define either == or eq or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error; it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either; it is just inconsistent. Note that the axioms specified constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also be defined in terms of !=, but we didn't do that. A characterisation of a partial or total order is more complex, and much more demanding, since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience. So here there are several questions. One is how to check that the semantics are obeyed, and I am not looking for an answer to that here (way too hard!). The other question is: how can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
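
    As a point of comparison, Haskell type classes attack exactly the "at least a basis" question with the MINIMAL pragma, which lets the class author declare the acceptable bases and makes the compiler warn when an instance provides none of them. The mutual-default pattern itself transliterates to most OO languages; a sketch in TypeScript (illustrative, not the Felix semantics):

        // eq and notEq each default to the negation of the other, so an
        // implementor must override at least one of them -- a "basis" --
        // or every call recurses forever, mirroring the run-time loop
        // described above.
        abstract class Eq<T> {
          eq(x: T, y: T): boolean { return !this.notEq(x, y); }
          notEq(x: T, y: T): boolean { return !this.eq(x, y); }
        }

        class NumberEq extends Eq<number> {
          override eq(x: number, y: number): boolean {  // basis: { eq }
            return x === y;
          }
        }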

  • How should I update Ajax Control Toolkit in VS 2010?

    - by Soham
    Suppose a new version of the Ajax Control Toolkit is available; how should I install/update it in my Visual Studio 2010, which already has an older version of the same toolkit installed? I would like to install the new one and have the older toolkit totally uninstalled, because the new toolkit usually has all the controls that were in the old toolkit plus some new ones. So: 1) Should I remove the .dll file in my toolkit installation folder and place the new .dll file there? If I do so, will VS 2010 automatically delete the older entries for the toolkit controls in the .NET Components list and place the new controls there? Or 2) Should I uninstall the old toolkit manually, i.e. delete the .dll file and uncheck all entries in the .NET Components list, and after that install the new one from scratch?

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:

      - Fail randomly 10% of the time
      - Succeed 10 times, fail 4 times, repeat
      - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
      - Any policy the tester wants to define

    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions
                consistent with their failure policy. */
            public boolean attempt();
        }

    And each failure policy would be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90s era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither AFAIK has this function, or something similar with a different syntax. Hence, the question: do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
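
    For concreteness, the second policy from the list above takes only a few lines; a sketch (in TypeScript rather than Java, purely for brevity) of a PatternFailureFaker that succeeds N times, fails M times, and repeats:

        interface FailureFaker {
          attempt(): boolean;   // true iff the mocked operation succeeded
        }

        class PatternFailureFaker implements FailureFaker {
          private count = 0;
          constructor(private succeedN: number, private failM: number) {}

          attempt(): boolean {
            const pos = this.count % (this.succeedN + this.failM);
            this.count++;
            return pos < this.succeedN;   // first N of each cycle succeed
          }
        }

        const faker = new PatternFailureFaker(10, 4);  // succeed 10, fail 4, repeat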

  • Using my own code in freelance projects.

    - by Witchunter
    I have been in the freelance business for more than 2 years. While doing projects for other people, I've built a collection of common tasks that I implement in projects and have put them into code. It's a kind of library with functions that I can reuse without having to rewrite the same thing a dozen times. I'm talking about accessing Access databases, downloading information from FTP and similar stuff. Is this acceptable from a legal point of view? What's the difference between reusing the old code and rewriting it from scratch (using your own brain again, and therefore the exact same logic)? I do not hold any copyright to it, of course, and I provide the source code for these classes to my clients.

  • Not getting paid for hours you've worked?

    - by Mercfh
    So I was reading a previous thread about app vs. game development, Here, which brought me to this site: Clicky. A lot of it talked about devs working something like 85 hours a week and not getting paid overtime - just getting paid for the 40 hours. Is this normal for most software companies? Where I work I'm only an entry-level guy, but I get overtime, and anything over 40 hours is considered that. It got me thinking "Holy crap, I could never do that." My FREE time is important to me. But is this commonplace in most software companies, or more a rarity in certain types (game development, etc.)? Because it got me scared! I understand having to put some extra hours in for a project, but like 80? That's ridiculous.

  • Hadoop and Object Reuse, Why?

    - by Andrew White
    In Hadoop, objects passed to reducers are reused. This is extremely surprising and hard to track down if you're not expecting it. Furthermore, the original tracker for this "feature" doesn't offer any evidence that this change actually improved performance (unless I missed it). It would speed up the system substantially if we reused the keys and values [...] but I think it is worth doing. This seems completely counter to this very popular answer. Is there some credence to the Hadoop developer's claim? Is there something "special" about Hadoop that would invalidate the notion of object creation being cheap?

  • How to choose the right web development language for my app without much programming experience?

    - by twinbornJoint
    I have my own idea for a web application, and I am not a programmer. The application will work similar to Facebook and Twitter, profiles and feeds. I have learned some computer science theory, all the way up to OOP, but have no practical experience. Without any experience, is there a way I can evaluate the different language and platform choices available to me? What kind of things should I be looking at? Ease of setup? How many followers it has? How can I evaluate whether a language will have the capabilities I need?

  • How to explain OOP concepts to a non-technical person?

    - by John
    I often try to avoid telling people I'm a programmer, because most of the time I end up explaining to them what that really means. When I tell them I'm programming in Java, they often ask general questions about the language and how it differs from x and y. I'm also not good at explaining things, because 1) I don't have that much experience in the field and 2) I really hate explaining things to non-technical people. They say that you truly understand something once you have explained it to someone else; in that case, how would you explain OOP terminology and concepts to a non-technical person?

  • What is the convention for the star location in reference variables?

    - by Brett Ryan
    I have been learning Objective-C, and different books and examples use differing conventions for the location of the star (*) when naming reference variables:

        MyType* x;
        MyType *y;
        MyType*z;   // this also works

    Personally I prefer the first option, as it illustrates that x is a "pointer type of MyType". I see the first two used interchangeably, and sometimes differing uses of both in the same code. I want to know which is the most common convention. It's been a very long time since I've programmed in C (15 years), so I can't remember whether all variants are legal in C as well or whether this is Objective-C specific. I'd prefer answers which state why one is better than the other, much as I explained how I read it above.

  • Sort algorithms that work on large amounts of data

    - by Giorgio
    I am looking for sorting algorithms that can work on a large amount of data, i.e. that work even when the whole data set cannot be held in main memory at once. The only candidate that I have found so far is merge sort: you can implement the algorithm so that it scans the data set at each merge without holding all the data in main memory at once. The variation of merge sort I have in mind is described in this article in the section Use with tape drives. I think this is a good solution (with complexity O(n log n)), but I am curious to know whether there are other (possibly faster) sorting algorithms that can work on large data sets that do not fit in main memory. EDIT Here are some more details, as required by the answers: The data needs to be sorted periodically, e.g. once a month; I do not need to insert a few records and have the data sorted incrementally. My example text file is about 1 GB of UTF-8 text, but I wanted to solve the problem in general, even if the file were, say, 20 GB. It is not in a database and, due to other constraints, it cannot be; the data is dumped by others as a text file, and I have my own code to read this text file. The format of the data is a text file in which newline characters are record separators. One possible improvement I had in mind was to split the file into files that are small enough to be sorted in memory, and finally merge all these files using the algorithm I have described above.
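
    To picture the final merge step from that last sentence, here is a minimal sketch (TypeScript, with in-memory arrays standing in for the sorted temporary files; a priority queue would cut the pick-the-smallest step from O(k) to O(log k) per record):

        function mergeRuns(runs: string[][]): string[] {
          const out: string[] = [];
          const idx = runs.map(() => 0);          // read position per run
          for (;;) {
            let best = -1;
            for (let r = 0; r < runs.length; r++) {
              if (idx[r] < runs[r].length &&
                  (best === -1 || runs[r][idx[r]] < runs[best][idx[best]])) {
                best = r;                         // smallest head among the runs
              }
            }
            if (best === -1) return out;          // every run is exhausted
            out.push(runs[best][idx[best]++]);
          }
        }

        console.log(mergeRuns([["a", "c"], ["b", "d"]]));  // ["a","b","c","d"]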

  • How to handle multiple API calls using JavaScript/jQuery

    - by James Privett
    I need to build a service that will call multiple APIs at the same time and then output the results on the page (think of how a price comparison site works, for example). The idea is that as each API call completes, the results are sent to the browser immediately and the page gets progressively bigger until all processes are complete. Because these API calls may take several seconds each to return, I would like to do this via JavaScript/jQuery in order to create a better user experience. I have never done anything like this before using JavaScript/jQuery, so I was wondering if there are any frameworks or advice that anyone would be willing to share.
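
    The pattern itself is straightforward; a sketch (TypeScript with the browser fetch API; the endpoint URLs and render functions are placeholders): fire all the requests at once and render each result the moment it arrives, so the page grows progressively instead of waiting for the slowest provider.

        const endpoints = ["/api/providerA", "/api/providerB", "/api/providerC"];

        function renderResult(data: unknown): void {
          console.log("got result:", data);       // a real app appends to the DOM
        }

        function renderError(url: string, err: unknown): void {
          console.log(`call to ${url} failed:`, err);
        }

        for (const url of endpoints) {
          fetch(url)
            .then(res => res.json())
            .then(renderResult)                   // fires as each call completes
            .catch(err => renderError(url, err));
        }

    With jQuery the same shape falls out of issuing independent $.ajax calls and rendering in each success callback.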

  • Computer science curriculum for non-CS major?

    - by Daniel
    Hi all, I would like some ideas for building up my foundational CS skills. I started programming computers 10 years ago and have made a pretty good career out of it. However, I cannot stop thinking that the path that brought me here was very particular, and if something goes wrong (e.g. I get laid off) it would be harder to find a job here in the US at the same salary level, OR in a top company. The reason I say that is that I am a self-learner; my degree is not in Computer Science, so although I have mastered C/C++/Java, I do not have the formal CS and mathematical background that many other software developers (esp. here in the US) have. When I look at job interview questions from Apple, Google, and Amazon, I have the impression that I'd flunk those technical interviews at some point. Don't get me wrong, I know my algorithms and data structures, but when things dive too deeply into the CS realm I am in trouble. What can I do to close the gap? I was thinking about an MSc in CS, but will I even UNDERSTAND what's going on there if I'm not a CS undergrad? Should I go back to basics and get a BSc in CS instead? I always tend to go into self-study mode when I want to learn new stuff, but I have the impression that I will need more formal education in CS if I want to have a shot at working at those kinds of companies. Thank you!

  • Runtime analysis

    - by Joe Smith
    Can someone please help me with the analysis of the following function (for inputs of size n)? The part that confuses me the most is the inner for loop.

        def prefix_sums(L):                # Total cost = ?
            pSum = []                      # cost = 1
            for a in range(len(L)+1):      # range + body = (n+1) + (n+1)*(body)?
                s = 0                      # cost = 1
                for b in range(a):         # cost = ?
                    s = s + L[b]           # cost = operation + list access = 2
                pSum.append(s)             # cost = 1
            return pSum                    # cost = 1

    What I need to do is figure out the cost of each statement.
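
    For reference, the count the inner loop is hiding is a plain summation: on outer iteration a the inner body runs a times, so across the whole call it runs 0 + 1 + ... + n = n(n+1)/2 times, which makes prefix_sums O(n^2) overall regardless of the exact per-statement cost model.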

  • Help me select a "Simpler" target to create a new language: .NET, LLVM, Go, Own VM

    - by mamcx
    Lets define "Simple". This is my first language. I have no previous experience I will not dedicate +4 years to learn it properly. I'm a professional software [developer], but as an amateur in this area, I want instant gratification. If the idea shows a future, I could rewrite it. I don't want to do everything from scratch. In fact, if there exists a way to get GO (for example), change its syntax, add some sugar, give some extra functions and leave intact everything else, that would be perfect! From the example of coffescript/scala I think is better to build on top of some rich runtime like .NET/GO so I don't need to rewrite everything. HOWEVER, if is better other way, no problem for the first try! I want it in a week. I need it in a week so it will really take a month. Then it truly takes 3 months. But I don't want to put more that 3 months on this. I could reduce the scope of my language, but I hope the tools will help me a lot... I want to build a new language. Similar to python, but typed. I wonder what to build it on top of. I like the idea of building on top of GO. To get their sane (IMHO) OO paradigm (I plan to do the same, using interfaces, not inheritance), get goroutines and some other stuff. In my naive thinking I imagine that spit another language could help me to debug it more easily. However, look like everyone is building on top of something like .NET (don't like Java), LLVM or make it own VM. I read http://createyourproglang.com/ (great!) and the part of the VM look "easy" to me. So, what I need is the proper criteria and question I need to know in advance to have a fair shot at make this.

  • So we've got a code review tool, now what can we use for software documents?

    - by Tini
    We're using Subversion as a full CM for code and also for related project documents. We have JIRA and Fisheye. When we wanted to add a peer review tool, we looked at and tested several candidates. Our weighted requirements included both code and document review, but ultimately the integration with JIRA slanted the scores in Crucible's favor. Atlassian has slammed the door on ever supporting Word or PDF in Crucible. I've tested several workaround methods to make Crucible work for documents, without success. (The Confluence/Crucible plugin was deprecated by Atlassian, so that option is out, too.) I haven't found a plugin for Crucible that adds this functionality, so short of writing my own plugin, Crucible for documents is unworkable. Word's Track Changes doesn't provide a method for true collaboration and commenting. Adobe PDF Comment and Markup is interesting, but doesn't provide a great way to keep a permanent quality record of the conversation. We can't go cloud-based; our documents must be locally hosted on our own server only. And we're only on SharePoint 2007. Help! Anyone have a suggestion?

  • Is Silverlight only for eye-candy, or does it have a use in business?

    - by Cyberherbalist
    Granted that Silverlight may make eye-popping websites of great beauty, is there any justification for using it to make practical web applications that have serious business purposes? I'd like to use it (to learn it) for a new assignment I have, which is to build a web-based application that keeps track of the data interfaces used in our organization, but I'm not sure how to justify it, even to myself. Any thoughts on this? If I can't justify it then I will have to build the app using the same old tired straight ASP.NET approach I've used (it seems) a hundred times already.

  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:

        Pete      Welder, Carpenter
        Melissa   Carpenter

    Assume they both work 40 hours/week and have not yet been assigned work. We need to report the availability of these competencies, expressed in hours. As far as I can see, we can report this in two ways.

    Method A: when someone has multiple competencies, count their full hours for each:

        Welder     40 hours
        Carpenter  80 hours

    Method B: when someone has multiple competencies, count an equal division of their hours for each:

        Welder     20 hours
        Carpenter  60 hours

    Method A has our preference: a good planner will know to plan the least available competency first (if 30 hours of welding is planned, we will be left with 10 hours welder and 50 hours carpenter), while Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required. However, if we report this, we would like to give an estimate of the reliability of the numbers for each competency, i.e. how much they are over-reported. In my example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? How would I calculate this for large numbers of competencies? I'm sure we are not the first ones dealing with this; is there a 'usual' way of doing this in planning? Additionally: would there be an even better method than A or B? Optionally, we also have a preference order of competencies (like: use him/her in this order); Pete could be 1. welder, 2. carpenter. Does this introduce new options?
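
    For concreteness, a small sketch (TypeScript, purely illustrative) that computes both methods from the example above:

        type Worker = { hours: number; competencies: string[] };

        const staff: Worker[] = [
          { hours: 40, competencies: ["welder", "carpenter"] },   // Pete
          { hours: 40, competencies: ["carpenter"] },             // Melissa
        ];

        // divide=false counts full hours per competency (Method A);
        // divide=true splits hours equally across competencies (Method B).
        function availability(staff: Worker[], divide: boolean): Map<string, number> {
          const out = new Map<string, number>();
          for (const w of staff) {
            const share = divide ? w.hours / w.competencies.length : w.hours;
            for (const c of w.competencies) {
              out.set(c, (out.get(c) ?? 0) + share);
            }
          }
          return out;
        }

        console.log(availability(staff, false)); // Method A: welder 40, carpenter 80
        console.log(availability(staff, true));  // Method B: welder 20, carpenter 60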

  • Online & Offline in Web Chat Application

    - by Mohammed Safeer
    I am stuck while developing a chat web application using PHP for the client side. I used Comet for the chat, along with the technique of updating the database when someone logs out, so the user on the other side sees them displayed as offline. My problem is: if someone closes the browser without logging out, how does the user on the other side know the person has gone offline? How can I set the online and offline icons in a PHP web chat application when someone closes the chat window without logging out? Would WebSockets in PHP solve this problem? All suggestions are welcome.
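
    The usual answer is a heartbeat rather than WebSockets per se: the client pings the server while the chat window is open, and the server marks a user offline once no ping has arrived for some timeout, so closing the browser ends the session even without an explicit logout. A sketch of the client side (TypeScript; '/heartbeat.php' and the intervals are placeholder choices):

        setInterval(() => {
          fetch("/heartbeat.php", { method: "POST" })
            .catch(() => { /* ignore transient network errors */ });
        }, 30_000);   // ping every 30 s; the server might time out at ~90 s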
