Search Results

Search found 17191 results on 688 pages for 'programming logic'.

Page 104 of 688

  • A more concise example that illustrates that type inference can be very costly?

    - by mrrusof
    It was brought to my attention that the cost of type inference in a functional language like OCaml can be very high. The claim is that there is a sequence of expressions such that for each expression the length of the corresponding type is exponential in the length of the expression. I devised the sequence below. My question is: do you know of a sequence of more concise expressions that achieves the same types?

      # fun a -> a;;
      - : 'a -> 'a = <fun>
      # fun b a -> b a;;
      - : ('a -> 'b) -> 'a -> 'b = <fun>
      # fun c b a -> c b (b a);;
      - : (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'c = <fun>
      # fun d c b a -> d c b (c b (b a));;
      - : ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'd = <fun>
      # fun e d c b a -> e d c b (d c b (c b (b a)));;
      - : (((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'd -> 'e) -> ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'e = <fun>
      # fun f e d c b a -> f e d c b (e d c b (d c b (c b (b a))));;
      - : ((((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'd -> 'e) -> ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'e -> 'f) -> (((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'd -> 'e) -> ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'f = <fun>
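
    A shorter family that is often cited for this is the let-polymorphism blow-up: each definition applies the previous one twice, so the printed type roughly squares in size at every step. It does not reproduce exactly the same types as above, but the expressions are much shorter and the growth is at least as bad. A sketch, with the types written out as I expect the toplevel to print them:

      # let d = fun x -> (x, x);;
      val d : 'a -> 'a * 'a = <fun>
      # let d2 = fun x -> d (d x);;
      val d2 : 'a -> ('a * 'a) * ('a * 'a) = <fun>
      # let d3 = fun x -> d2 (d2 x);;
      val d3 : 'a -> ((('a * 'a) * ('a * 'a)) * (('a * 'a) * ('a * 'a))) * ((('a * 'a) * ('a * 'a)) * (('a * 'a) * ('a * 'a))) = <fun>

    The type of d4 already contains 256 occurrences of 'a, so the size is doubly exponential in the number of (short) lines; the classic worst-case constructions for ML type inference are built from exactly this kind of nested let. Worth running in the toplevel to confirm the exact printing.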

    Read the article

  • How to bring an application from Sublime Text to a web IDE for sharing?

    - by Kyle Pennell
    I generally work on my projects locally in Sublime Text, but sometimes I need to share them with others using tools like JSFiddle, CodePen, or Plunker, usually so I can get unstuck. Is there an easier way to share code that doesn't involve pure copy-pasting and the hassle of getting all the dependencies right in a new environment? It's taking me hours to get some of my Angular apps working in Plunker and I'm wondering if there's a better way.

    Read the article

  • What problems can arise from emulating concepts from other languages?

    - by Vandell
    I've read many times on the web that if your language doesn't support some concept (object orientation, say, or even function calls) that is considered good practice in another language, you should emulate it. The only problem I can see is that other programmers may find your code too different from what they are used to, making it hard for them to work with. What other problems do you think may arise from this?

    Read the article

  • Do any OO languages support a mechanism to guarantee an overridden method will call the base?

    - by Aaron Anodide
    I think this might be a useful language feature and was wondering if any languages already support it. The idea is that if you have:

      class C
          virtual F
              statement1
              statement2

    and

      class D inherits C
          override F
              statement1
              statement2
              C.F()

    there would be a keyword applied to C.F such that removing the last line of D's override (the call to C.F()) would cause a compiler error, because the keyword says "this method can be overridden, but the implementation here needs to run no matter what".
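
    I am not aware of a keyword that does exactly this in the mainstream OO languages; the usual workaround is the template-method shape, where the base class keeps control and exposes a hook. A rough Java sketch (class and method names are invented for illustration):

      // The base behaviour always runs because f() is final and cannot be
      // replaced; subclasses can only plug into the onF() hook.
      abstract class C {
          public final void f() {
              onF();            // the part subclasses are allowed to change
              // statement1, statement2: the code that must run no matter what
          }
          protected void onF() { }
      }

      class D extends C {
          @Override
          protected void onF() {
              // subclass-specific statements
          }
      }

    This inverts the question: the compiler guarantees the base code runs because the override can no longer replace f() at all, rather than verifying that an override remembers to call super, which as far as I know no mainstream compiler enforces.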

    Read the article

  • How can a developer realize the full value of his work [closed]

    - by Jubbat
    I honestly don't want to work as a developer in a company anymore after all I have seen. I want to continue developing software, yes, but not in the way I see it done all around me. And I'm in London, a city that gathers great developers from the whole world, so it shouldn't be a problem of location. So, what are my concerns? First, the best case scenario: you are paying your manager's salary out of yours. You are consistently underpaid, making up for the average manager's negative net return plus his whole salary. The typical scenario: I am a reasonably good developer with common sense who cares about readable code and attention to basic principles, and way too often I have found overconfident and arrogant developers with a severe lack of common sense. Personally, I don't want to follow TDD or Agile practices like all the cool kids nowadays; I would read about them, form my own opinion, and take what I feel is useful, rather than follow them sheepishly. I want to work with people who understand that you have to design good interfaces, that you absolutely have to document your code, and that readability is at the top of your priorities, and also people who don't have a cargo-cult mentality. For instance, the same person who asked me about design patterns in a job interview later told me that something like a List of Map of Vector of Map of Set (in Java) is very readable. Why would someone ask me about design patterns if they can't even grasp encapsulation? These kinds of things are the norm; I've seen many examples, and worse than that too, from very well paid senior devs, by the way. Every second you spend working with people with such a lack of common sense and clear thinking, you are effectively losing money by being terribly inefficient with your time. Yet, with all these inefficiencies, the average developer earns a high salary. So I tried working on my own, although I don't like the idea; I prefer a healthy exchange of opinions and ideas, and dividing up tasks. I did a bit of online freelancing for a while, but I think working in a sweatshop might be more enjoyable. Also, I studied computer engineering, yet as a freelancer you are in an environment where your client will presume you have no formal education, because there is no way to prove it. Again, you are undervalued. You could try building a product, yes, but of course luck is a big factor. I wonder if there is a way to work at something you can do well, software development, be valued for the quality of your work, be paid accordingly, and have you and only you get fairly paid for the value you generate. I know that what I have written seems somewhat unlikely, but I strongly feel this way. Hopefully someone will understand me and has already figured this out. I don't think I'm alone in this kind of feeling.

    Read the article

  • What software do you use to help plan your team work, and why?

    - by Alex Feinman
    Planning is very difficult. We are not naturally good at estimating our own future, and many cognitive biases exacerbate the problem. Group planning is even harder. Incomplete information, inconsistent views of a situation, and communication problems compound the difficulty. Agile methods provide one framework for organizing group planning--making planning visible to everyone (user stories), breaking it into smaller chunks (sprints), and providing retrospective analysis so you get better at planning. But finding good tools to support these practices is proving tricky. What software tools do you use to achieve these goals? Why are you using that tool? What successes have you had with a particular tool?

    Read the article

  • What is the most efficient way to study multiple languages, frameworks, and APIs as a developer?

    - by Akromyk
    I know there are people out there who have read a slew of books on one specific technology and only code in that one particular language, but this question is aimed at those who need to bounce around between multiple technologies and still manage to be productive. What is the most efficient way to study multiple languages, frameworks, and APIs as a developer without becoming a cheap Swiss Army knife? And how much time should one dedicate to a particular subject before moving on to another?

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    "Sucking Less Every Year" describes a train of thought that has been on my mind for a while. Quoting directly from the post: "I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers." How often does this happen, or not happen, to you? How long did it take before you saw an actual improvement in your coding? A month, a year? Do you ever revisit your old code? How often does your old code plague you, or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written to meet a deadline quickly, and in some cases those quick fixes mean we have to rewrite most of the application or code. No arguments about that. Some of the developers I have come across argued that they were already at an evolved stage where their coding doesn't need improvement and can't be improved any more. Does this happen? If so, how many years of coding in a particular language would one expect before it does?

    Read the article

  • Oculus Rift with Antichamber

    - by Scott Hainline
    Antichamber runs great on Linux (Steam version), but it is not playable with the Oculus Rift at this point. The issues are: 1) no head tracking, and 2) the graphics are not being split and distorted by the Oculus SDK. My current plan is to use LD_PRELOAD to add the functionality; this seems to be the Linux equivalent of DLL injection. Antichamber appears to be using SDL, and I'm hoping this can be configured to use the head-tracking data as a joystick and to apply the graphics distortion, but I am not sure which functions I should be looking for. Is there a simpler way of getting these issues resolved? Is SDL the right choice here? I would also appreciate any information on how Unreal Engine 3 works under Linux, and on library injection.

    Read the article

  • How often is seq used in Haskell production code?

    - by Giorgio
    I have some experience writing small tools in Haskell and I find it very intuitive to use, especially for writing filters (using interact) that process their standard input and pipe it to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual and I got a stack space overflow error. After doing some reading (e.g. here and here) I have identified two guidelines to save stack space (experienced Haskellers, please correct me if I write something that is not correct): avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization), and introduce seq to force early evaluation of sub-expressions so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation). After introducing five or six seq calls in my code my tool runs smoothly again (also on the larger data). However, I find the original code was a bit more readable. Since I am not an experienced Haskell programmer I wanted to ask whether introducing seq in this way is a common practice, and how often one will normally see seq in Haskell production code. Or are there techniques that allow you to avoid using seq so often and still use little stack space?
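
    For what it's worth, the textbook illustration of the second guideline is an accumulator that is forced with seq on every step. A minimal sketch, assuming no compiler optimization rescues the lazy version on its own:

      -- Plain accumulator recursion: the additions pile up as unevaluated
      -- thunks (0 + 1, (0 + 1) + 2, ...) and are only forced at the very end,
      -- which is what blows the stack on large inputs.
      sumLazy :: [Integer] -> Integer
      sumLazy = go 0
        where
          go acc []       = acc
          go acc (x : xs) = go (acc + x) xs

      -- Forcing the accumulator with seq keeps it evaluated at every step.
      sumStrict :: [Integer] -> Integer
      sumStrict = go 0
        where
          go acc []       = acc
          go acc (x : xs) = let acc' = acc + x in acc' `seq` go acc' xs

      main :: IO ()
      main = print (sumStrict [1 .. 10000000])

    In practice most production code reaches for Data.List.foldl' or bang patterns rather than sprinkling seq by hand, so bare seq calls tend to be less common than those idioms.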

    Read the article

  • Is committing/checking in code every day a good practice?

    - by ArtB
    I've been reading Martin Fowler's note on Continuous Integration and he lists as a must "Everyone Commits To the Mainline Every Day". I do not like to commit code unless the section I'm working on is complete, and in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. Question: is committing every day such a good practice that I should change my workflow to accommodate it, or is it not that advisable? Edit: I guess I should have clarified that I meant "commit" in the CVS sense of it (aka "push"), since that is likely what Fowler meant in 2006 when he wrote this. ^ The order is somewhat arbitrary and depends on the task; my point was to illustrate the time span and activities, not the exact sequence.

    Read the article

  • Can a language support something like "Retry/Fix"?

    - by Aaron Anodide
    I was just wondering if a language could support something like a Retry/Fix block. The answer to this question is probably the reason it's a bad idea, or that it is equivalent to something else, but the idea keeps popping into my head.

      void F()
      {
          try { G(); }
          fix (WrongNumber wn, out int x) { x = 1; }
      }

      void G()
      {
          int x = 0;
          retry<int>
          {
              if (x != 1) throw new WrongNumber(x);
          }
      }

    After the fix block ran, the retry block would run again...
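
    Today this can only be approximated with a loop, and the approximation loses the most interesting part of the proposal, namely that the fix lives in the caller while the retry block lives in the callee. A rough Java sketch of the collapsed, single-function version (WrongNumber is the exception from the question, written out as an ordinary class):

      class WrongNumber extends RuntimeException {
          final int value;
          WrongNumber(int value) { this.value = value; }
      }

      class RetryFixDemo {
          static void g() {
              int x = 0;
              while (true) {
                  try {
                      // the body of the proposed retry block
                      if (x != 1) throw new WrongNumber(x);
                      return;                 // the retry block finally succeeded
                  } catch (WrongNumber wn) {
                      x = 1;                  // the "fix": repair state, then go around again
                  }
              }
          }

          public static void main(String[] args) {
              g();
              System.out.println("g() completed after a fix");
          }
      }

    Letting the caller supply the fix while the callee owns the retry is exactly what a plain loop cannot express, which is presumably why language support is being asked for.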

    Read the article

  • Can a candidate be judged by asking them to write a complex program on "paper"?

    - by iammilind
    Some time back in an interview, I was asked to write the following program: on a mobile phone keypad there is a mapping between numbers and characters, e.g. 0 and 1 correspond to nothing; 2 corresponds to 'a','b','c'; 3 corresponds to 'd','e','f'; ...; 9 corresponds to 'w','x','y','z'. The user can input any number (e.g. 23, 389423, 927348923747293) and I should store all the combinations of these character mappings in some data structure. For example, if the user enters "23" then the possible character combinations are: ad, ae, af, bd, be, bf, cd, ce, cf; or if the user enters "4676972" then it can be gmpmwpa, gmpmwpb, ..., hnroxrc, ..., iosozrc. The interviewer told me that people have written code for this within 20-30 minutes! He also insisted that I write it on paper. If I am writing code, my tendency is to write it as if it were production code, even though that may not be expected of me, so I always try to think about all the aspects: optimization, readability, maintainability, extensibility, and so on. Considering all this, I felt that I should be writing it on a PC and that it needed a good two hours. Finally, after 25 minutes, I was able to come up with just the concept and some scattered pieces of code (not to mention my rejection). My question is not about the answer to the above program. I want to know: is this a right way to judge the caliber of a person? Am I wrong or too slow in my estimates? Am I too idealistic?
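
    The asker explicitly isn't looking for the solution, but for context, the expansion itself is a short backtracking walk over the digits. A rough sketch in Java (the class and method names are mine):

      import java.util.ArrayList;
      import java.util.List;

      public class KeypadCombinations {
          // digit -> letters; 0 and 1 contribute nothing, as in the question
          private static final String[] LETTERS = {
              "", "", "abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"
          };

          public static List<String> combinations(String digits) {
              List<String> result = new ArrayList<>();
              build(digits, 0, new StringBuilder(), result);
              return result;
          }

          private static void build(String digits, int pos, StringBuilder current, List<String> out) {
              if (pos == digits.length()) {
                  out.add(current.toString());
                  return;
              }
              String letters = LETTERS[digits.charAt(pos) - '0'];
              if (letters.isEmpty()) {                        // skip 0 and 1
                  build(digits, pos + 1, current, out);
                  return;
              }
              for (char c : letters.toCharArray()) {
                  current.append(c);
                  build(digits, pos + 1, current, out);
                  current.deleteCharAt(current.length() - 1); // backtrack
              }
          }

          public static void main(String[] args) {
              System.out.println(combinations("23"));
              // prints [ad, ae, af, bd, be, bf, cd, ce, cf]
          }
      }

    Whether a candidate can produce this on paper in 25 minutes is a separate question, but the core of it is the dozen lines of build().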

    Read the article

  • Do I have to write a lot of boilerplate code if I keep working in Java?

    - by edem
    I'm working for a company writing ERP applications. My problem is that I have to write tons of boilerplate code. I came up with ideas to automate or prevent the drudgery, but only some of them were accepted. I have been told by the lead developer that my ideas tend to go far afield and that I should write code everyone can understand. I had a discussion about this lately, and it seems to me that this amount of boilerplate is considered part of Java's philosophy here: I have to write lots of code to achieve simple things, not because it is necessary, but because this is the way most of the people at the company think. Is this true of most companies out there using Java, or is it just my company's view? Do I have to get used to the drudgery if I keep working for Java-based firms?

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to migrate only some of the widgets from the legacy system, based on the date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside the widget development context, want to search all widgets. The search results screen has three pieces of data: a GUID, a human-readable id that is searchable, and a brief description (which may need to be searchable in the future). In the widget details there will be multiple screens, and these screens align very well along SOA / bounded-context lines: a screen for marketing data, a screen for sales history, etc. The current solution, which is not in production yet, is roughly the following (I sketched it in UML, probably with the wrong kinds of arrows, so please forgive me): both systems are queried and the controller merges the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire: 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system, and one of these events contains all the data necessary for the search screen. I therefore proposed an alternative: subscribe to those events and maintain a small, dedicated search database that the search screen queries directly. However, I am being told that "adding another database" will create more maintenance down the road. I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code. I want to get a feel for which approach is more maintainable in the long run; I personally have not had the burden of maintaining any large system, and I want something more than my gut. Specifically, I'd like to know whether having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.

    Read the article

  • Is "watermarking" code with random trailing whitespace a good way to detect plagiarism?

    - by paperjam
    Consider this:

      int f(int x) { return 2 * x * x; }

    and this:

      int squareAndDouble(int y) { return 2*y*y; }

    If you found these in independent bodies of code, you might give the two programmers the benefit of the doubt and assume they came up with more-or-less the same function independently. But look at the whitespace at the end of each line of code: the same pattern in both. Surely evidence of copying. On a larger piece of code, correlation of random whitespace at line ends would be irrefutable evidence of a shared origin. Now, aside from the obvious weaknesses (e.g. it is visible or obvious in some editors, and easily removed), I was wondering if it is worth deploying something like this in my open source project. My industry has a history of companies ripping off open source projects.
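
    On the detection side, the correlation the question relies on is easy to check mechanically. A rough Java sketch of the idea; the file handling and the matching rule are deliberately simplified for illustration:

      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.ArrayList;
      import java.util.List;

      // Pulls out each line's trailing-whitespace run and counts how many
      // lines agree between two files.
      public class TrailingWhitespaceCheck {
          static List<String> fingerprint(Path file) throws Exception {
              List<String> runs = new ArrayList<>();
              for (String line : Files.readAllLines(file)) {
                  int i = line.length();
                  while (i > 0 && (line.charAt(i - 1) == ' ' || line.charAt(i - 1) == '\t')) {
                      i--;
                  }
                  runs.add(line.substring(i));   // "" if the line has no trailing whitespace
              }
              return runs;
          }

          public static void main(String[] args) throws Exception {
              List<String> a = fingerprint(Path.of(args[0]));
              List<String> b = fingerprint(Path.of(args[1]));
              int shared = 0;
              int n = Math.min(a.size(), b.size());
              for (int i = 0; i < n; i++) {
                  if (!a.get(i).isEmpty() && a.get(i).equals(b.get(i))) {
                      shared++;
                  }
              }
              System.out.println(shared + " lines share a non-empty trailing-whitespace run");
          }
      }

    Anything beyond a raw count, such as estimating how unlikely the overlap is to be coincidental, would need a model of how often editors insert trailing whitespace on their own.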

    Read the article

  • Update fails when I mark all the check boxes in Settings

    - by user67966
    "W:Failed to fetch cdrom://Ubuntu 12.04 LTS Precise Pangolin - Release i386 (20120423)/dists/precise/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs , W:Failed to fetch cdrom://Ubuntu 12.04 LTS Precise Pangolin - Release i386 (20120423)/dists/precise/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs , E:Some index files failed to download. They have been ignored, or old ones used instead." the above information is a crash info.It happens when i manually update ubuntu.I have marked all check-boxes in settings which was by default not marked. What is the main reason for this problem?

    Read the article

  • Final Year Project Advice: what impact on my CV [closed]

    - by Devon Smith
    I am being offered, as a final year project, the chance to build a company website. This is basically an external (out-of-house) project and I am not completely sure whether I should take it. The requirements are: company information, user registration, and order placement. The technologies I should use are PHP, JavaScript, HTML, CSS, and maybe Java Servlets. This appears to me to be a very basic project, and I need an opinion on what effect it might have on my CV. Is it worth doing? Or should I go for a research project, or something that has not been done before?

    Read the article

  • Is perfectionism a newbie's friend or enemy? [closed]

    - by Akromyk
    Possible Duplicate: Where do you draw the line for your perfectionism? I see that the development community is very focused on doing things the right way, and personally I would like to do the same. However, is it a good or bad idea for a newbie to focus on design principles, design patterns, and commenting code when getting started? Or is it better to let creativity run wild and potentially write sloppy code? Where should a newbie draw the line?

    Read the article

  • Shader inputs in a general purpose engine

    - by dreta
    I'm not that familiar with SDKs like Unity or UDK, so I can't check this offhand: do general purpose engines allow users to create custom uniform variables? The way I see it, and the way I have implemented it in an engine I'm writing to learn 3D, is that there is a "set" of uniforms provided by the engine, and if you want to write a custom shader you use whichever of those uniforms you need to create the desired effect. Now, the thing is, first of all I'm not an artist, and second of all, I haven't had a chance to create complex scenes yet. So my question is: is it common practice to define the variables the engine provides and only allow the user to work with what they're given? Allowing users to add custom programs and use them where they want is not hard, but I have trouble imagining how you'd go about doing the same for uniforms.

    Read the article

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles on the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics, for example, points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002, however, and I'm wondering if things are different now. There has been some research on performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6); would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games: what algorithms are used and why? In particular, is view-dependent continuous simplification used, or does the runtime overhead make using discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm (e.g. vertex clustering) used to generate them offline, do artists manually create the models, or is a combination of both methods used?

    Read the article

  • Single responsibility principle - am I overusing it?

    - by Tarun
    For reference - http://en.wikipedia.org/wiki/Single_responsibility_principle. I have a test scenario in which one module of the application is responsible for creating ledger entries. There are three basic tasks that can be carried out: view existing ledger entries in table format; create a new ledger entry using the create button; and click on a ledger entry in the table (mentioned in the first point) to view its details on the next page, where a ledger entry can also be nullified. (There are a couple more operations/validations on each page, but for the sake of brevity I will limit it to these.) So I decided to create three different classes: LedgerLandingPage, CreateNewLedgerEntryPage, and ViewLedgerEntryPage. These classes offer the services that can be carried out on those pages, and the Selenium tests use them to bring the application to a state where I can make certain assertions. When I had it reviewed with one of my colleagues, he was overwhelmed and asked me to make one single class for all of it. Though I still feel my design is much cleaner, I am doubtful whether I am overusing the Single Responsibility Principle.
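
    To make the split concrete, each class wraps the actions available on one page and hands back the page object you land on next. A rough sketch in Java with Selenium WebDriver; the locators and method names are invented for illustration:

      import org.openqa.selenium.By;
      import org.openqa.selenium.WebDriver;

      class LedgerLandingPage {
          private final WebDriver driver;
          LedgerLandingPage(WebDriver driver) { this.driver = driver; }

          int visibleEntryCount() {
              return driver.findElements(By.cssSelector("table#ledger tbody tr")).size();
          }

          CreateNewLedgerEntryPage clickCreate() {
              driver.findElement(By.id("createEntry")).click();
              return new CreateNewLedgerEntryPage(driver);
          }

          ViewLedgerEntryPage openEntry(int row) {
              driver.findElements(By.cssSelector("table#ledger tbody tr")).get(row).click();
              return new ViewLedgerEntryPage(driver);
          }
      }

      class CreateNewLedgerEntryPage {
          private final WebDriver driver;
          CreateNewLedgerEntryPage(WebDriver driver) { this.driver = driver; }
          // fill-in and save actions for the create form go here
      }

      class ViewLedgerEntryPage {
          private final WebDriver driver;
          ViewLedgerEntryPage(WebDriver driver) { this.driver = driver; }
          // details assertions and the nullify action go here
      }

    Each class stays small because it only knows one page; the test that exercises the whole flow is the thing that strings them together.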

    Read the article

  • How were some language communities (e.g., Ruby and Python) able to prevent fragmentation while others (e.g., Lisp or ML) were not?

    - by chrisaycock
    The term "Lisp" (or "Lisp-like") is an umbrella for lots of different languages, such as Common Lisp, Scheme, and Arc. There is similar fragmentation is other language communities, like in ML. However, Ruby and Python have both managed to avoid this fate, where innovation occurred more on the implementation (like PyPy or YARV) instead of making changes to the language itself. Did the Ruby and Python communities do something special to prevent language fragmentation?

    Read the article

  • Are two database trips reasonable for a login system?

    - by Randolph Potter
    I am designing a login system for a project and have a concern about it requiring two trips to the database when a user logs in: 1) the user types in a username and password; 2) the database is polled and the password hash is retrieved for comparison (first trip); 3) the code tests the hash against the entered password (and salt) and, if it verifies, resets the session ID; 4) the new session ID and username are sent back to the database to write a row to the login table and generate a login ID for that session (second trip). EDIT: I am using a random salt. Does this design make sense? Am I missing something? Is my concern about two trips unfounded? Comments and suggestions are welcome.

    Read the article
