Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Dependently typed language best suited to “real world” programming?

    - by Kim
    Which dependently typed programming languages could be used for real-world application development? I will mostly be writing toy applications at first, and after that maybe web development or a simple DBMS. These are some points that I think are important: documentation, example programs, a good/big standard library, an easy-to-use foreign function interface, a community of people using the language for real-world tasks, and tool support.

    Read the article

  • Mac OS X Assembly Language Esoteria

    - by veryfoolish
    I've been playing around with assembly and object files in general on Mac OS X and was wondering if somebody could provide some edification. Specifically, I'm wondering what the extra code GCC generates when compiling the C file in the following example does. I have a toy C program so I can comprehend the assembly output.

        int main() {
            int a = 5;
            int b = 5;
            int c = a + b;
        }

    Running this through gcc -S creates the following assembly:

            .text
        .globl _main
        _main:
        LFB2:
            pushq   %rbp
        LCFI0:
            movq    %rsp, %rbp
        LCFI1:
            movl    $5, -4(%rbp)
            movl    $5, -8(%rbp)
            movl    -8(%rbp), %eax
            addl    -4(%rbp), %eax
            movl    %eax, -12(%rbp)
            leave
            ret
        LFE2:
            .section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support
        EH_frame1:
            .set L$set$0,LECIE1-LSCIE1
            .long L$set$0
        LSCIE1:
            .long 0x0
            .byte 0x1
            .ascii "zR\0"
            .byte 0x1
            .byte 0x78
            .byte 0x10
            .byte 0x1
            .byte 0x10
            .byte 0xc
            .byte 0x7
            .byte 0x8
            .byte 0x90
            .byte 0x1
            .align 3
        LECIE1:
        .globl _main.eh
        _main.eh:
        LSFDE1:
            .set L$set$1,LEFDE1-LASFDE1
            .long L$set$1
        LASFDE1:
            .long LASFDE1-EH_frame1
            .quad LFB2-.
            .set L$set$2,LFE2-LFB2
            .quad L$set$2
            .byte 0x0
            .byte 0x4
            .set L$set$3,LCFI0-LFB2
            .long L$set$3
            .byte 0xe
            .byte 0x10
            .byte 0x86
            .byte 0x2
            .byte 0x4
            .set L$set$4,LCFI1-LCFI0
            .long L$set$4
            .byte 0xd
            .byte 0x6
            .align 3
        LEFDE1:
            .subsections_via_symbols

    The LCFI1 section seems to contain the actual logic for the program, but I'm not sure what the misc. other stuff is for... also, is there any scheme these labels are following? I'm sorry this is such a vague question. I'd appreciate anything, including being pointed to a resource where I can find out more about this. Thanks!

    Read the article

  • Bundled Software Installers

    - by Volomike
    I have two unrelated Windows programs that come with their own setup.exe files. Using a third-party tool, how do I bundle the two in a single installer? The setup wizard functionality would work like this:

    1. They run my setup.exe. Its primary goal is to install Windows program A.
    2. On page 2 of the installer, it has a checkbox to install Windows program B.
    3. When they click Next on page 2, my installer window vanishes and Windows program A's installer runs.
    4. When that installer finishes, my installer detects this. If Windows program B was selected to also be installed, its installer runs next, again with my installer window not shown.
    5. When step 4 is done, my installer detects one or both installs are finished and shows a Finish page from my installer. The user clicks that and my installer closes because it has concluded its purpose.
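
    Off-the-shelf bundlers (Inno Setup, NSIS, WiX bootstrappers and the like) are the usual way to build this, but the "run a child installer, wait, detect that it finished" mechanic in steps 3-5 is plain child-process management. A hedged Java sketch of just that mechanic; the installer paths and the /SILENT flag are placeholders that depend on the actual installers being bundled:

        import java.io.IOException;

        // Sketch of the chaining mechanic: run installer A, wait for it to finish,
        // then optionally run installer B. Paths and flags below are placeholders.
        public class BundleRunner {

            static int runAndWait(String... command) throws IOException, InterruptedException {
                Process process = new ProcessBuilder(command)
                        .inheritIO()              // let the child installer own the UI/console
                        .start();
                return process.waitFor();         // block until that installer exits
            }

            public static void main(String[] args) throws Exception {
                boolean installProgramB = args.length > 0 && args[0].equals("--with-b");

                int exitA = runAndWait("installers\\programA-setup.exe", "/SILENT");
                if (exitA == 0 && installProgramB) {
                    runAndWait("installers\\programB-setup.exe", "/SILENT");
                }
                // At this point a real bundler would show its own Finish page.
            }
        }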

    Read the article

  • What kinds of low level knowledge matter?

    - by Peter Smith
    I realize that this question is similar to "Low level programming - what's in it for me?", but the answers didn't really address my question well. Apart from just an understanding, how exactly does your low-level knowledge translate into faster and better programs? There's the obvious lack of garbage collection, but what else is an advantage? Do you really outperform your optimizing compiler? Do you pack your data structures as tightly as possible and worry about alignment? There's extra freedom naturally, but does that really translate into a faster program?
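
    One place where the low-level picture shows through even in a managed language is memory layout and cache behaviour. A small Java illustration, offered as a sketch rather than a rigorous benchmark (timings are machine-dependent, and a real measurement would use a harness like JMH): both loops do the same arithmetic, but the row-major walk matches how int[][] is laid out in memory and is typically several times faster on large arrays.

        // Same work, different memory access pattern: row-major matches the layout of
        // Java's int[][] (an array of row arrays); column-major hops between rows and
        // misses the cache far more often. Run several times and ignore the first pass.
        public class CacheWalk {

            static long rowMajorSum(int[][] m) {
                long sum = 0;
                for (int i = 0; i < m.length; i++)
                    for (int j = 0; j < m[i].length; j++)
                        sum += m[i][j];
                return sum;
            }

            static long columnMajorSum(int[][] m) {
                long sum = 0;
                for (int j = 0; j < m[0].length; j++)
                    for (int i = 0; i < m.length; i++)
                        sum += m[i][j];
                return sum;
            }

            public static void main(String[] args) {
                int n = 4_000;
                int[][] m = new int[n][n];

                long t0 = System.nanoTime();
                long a = rowMajorSum(m);
                long t1 = System.nanoTime();
                long b = columnMajorSum(m);
                long t2 = System.nanoTime();

                System.out.printf("row-major:    %d ms (sum=%d)%n", (t1 - t0) / 1_000_000, a);
                System.out.printf("column-major: %d ms (sum=%d)%n", (t2 - t1) / 1_000_000, b);
            }
        }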

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
    I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out." No matter who the original author was, what if we always made some effort, no matter how small, to improve the module? What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small part.

    I am also a great believer in the related idea of Opportunistic Refactoring: "Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes."

    Particularly note the following excerpt from the refactoring article: "I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out, be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective."

    Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment", I'm rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all.

    I claim that the benefits are worth it, and that 3 reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth:

    (1) Most of the time you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: "As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done."

    (2) Nobody is going to look favorably at you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to do, this issue is minimized.

    The issue is exacerbated by ReSharper, as each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code will not only fail to improve my standing, but will actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand watching it, let alone call it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • MVC: Does code to save data in cache or session belong in the controller?

    - by newbie
    I'm a bit confused about whether the code below that saves the information to session belongs in the controller action as shown, or should be part of my Model. I would add that I have other controller methods that will read this session value later.

        public ActionResult AddFriend(FriendsContext viewModel)
        {
            if (!ModelState.IsValid)
            {
                return View(viewModel);
            }
            // Start - Confused if the code block below belongs in Controller?
            Friend friend = new Friend();
            friend.FirstName = viewModel.FirstName;
            friend.LastName = viewModel.LastName;
            friend.Email = viewModel.UserEmail;
            HttpContext.Session["latest-friend"] = friend;
            // End Confusion
            return RedirectToAction("Home");
        }

    I thought about adding a static utility class in my Model which does something like below, but it just seems stupid to add 2 lines of code in another file.

        public static void SaveLatestFriend(Friend friend, HttpContextBase httpContext)
        {
            httpContext.Session["latest-friend"] = friend;
        }

        public static Friend GetLatestFriend(HttpContextBase httpContext)
        {
            return httpContext.Session["latest-friend"] as Friend;
        }
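
    For what it's worth, the same question comes up in the Java servlet world, and one common shape for the helper-class idea is the following hedged sketch (LatestFriendStore is a made-up name, and Friend stands in for the question's own class): the controller stays thin, and the session key plus the cast live in one small class that can be treated as infrastructure rather than domain model.

        import javax.servlet.http.HttpSession;

        // Hypothetical Java servlet analogue of the static helper from the question:
        // the controller never touches the session key or the cast directly.
        public class LatestFriendStore {

            private static final String KEY = "latest-friend";

            private final HttpSession session;

            public LatestFriendStore(HttpSession session) {
                this.session = session;
            }

            public void save(Friend friend) {
                session.setAttribute(KEY, friend);
            }

            public Friend get() {
                return (Friend) session.getAttribute(KEY);
            }
        }

    A controller would then just call new LatestFriendStore(request.getSession()).save(friend), and tests can hand the store a mock HttpSession.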

    Read the article

  • Can my ikmnet test results say something about the career choice I should make?

    - by Nicke
    I took 2 tests via ikmnet and scored 70% on SQL and 65% on Java. While not bad, it can be improved. The subskills I need to improve according to the test are interfaces and inheritance, compilation and deployment, flow control, the java.lang package and "Java Program Construction", and these topics seem rather broad to me. Rather than just learning by programming, would you advise me to take a certification, follow a course, or otherwise improve my skills? By the way, I enjoy Python more than Java, so should I market myself more as a Python programmer, or even aim for a role that some companies search for which seems like a system developer with more technical writing, where the title is systems analyst (evaluating systems in cooperation with management rather than programming)? Thank you for any comment and/or answer.

    Read the article

  • Which adjustable ergo keyboard do you recommend to a fellow coder and why?

    - by thefonso
    My beloved Goldtouch has died (a moment of silence)... the hinge at the middle which allows you to adjust the board finally broke, and the manufacturer does not do repairs for free. I'd have to buy a new board. So... since then I'm in the market for a new keyboard, and I'm looking for an adjustable one comparable to or better than the Goldtouch. I post this question to all you coders out there: which adjustable ergo keyboard do you recommend to a fellow coder, and why?

    Read the article

  • Scheme vs Common Lisp: war stories

    - by SuperElectric
    There is no shortage of vague "Scheme vs Common Lisp" questions on both StackOverflow and on this site, so I want to make this one more focused. The question is for people who have coded in both languages: While coding in Scheme, what specific elements of your Common Lisp coding experience did you miss most? Or, inversely, while coding in Common Lisp, what did you miss from coding in Scheme? I don't necessarily mean just language features. The following are all valid things to miss, as far as the question is concerned: Specific libraries. Specific features of development environments like SLIME, DrRacket, etc. Features of particular implementations, like Gambit's ability to write blocks of C code directly into your Scheme source. And of course, language features. Examples of the sort of answers I'm hoping for: "I was trying to implement X in Common Lisp, and if I had Scheme's first-class continuations, I totally would've just done Y, but instead I had to do Z, which was more of a pain." "Scripting the build process in my Scheme project got increasingly painful as my source tree grew and I linked in more and more C libraries. For my next project, I moved back to Common Lisp." "I have a large existing C++ codebase, and for me, being able to embed C++ calls directly in my Gambit Scheme code was totally worth any shortcomings that Scheme may have vs Common Lisp, even including lack of SWIG support." So, I'm hoping for war stories, rather than general sentiments like "Scheme is a simpler language" etc.

    Read the article

  • pros and cons of taking an ABAP job

    - by sJhonny
    I'm a programmer with 3 years of .NET experience under my belt, and am currently looking for a new job. One of the options I'm considering is an OO ABAP developer position with SAP. However, I have several concerns about taking an ABAP job: as ABAP is used exclusively by SAP, any experience in ABAP that I have would be irrelevant in the outside world. I'm also worried that I wouldn't be exposed to new technologies while working in ABAP, and ultimately I would lose touch with what's going on in the world. This is a real sore point, since I really enjoy exploring and learning new & cool stuff. (Note: yes, I could experiment with other technologies & trends on my own time, but this is much harder to do, and isn't really the same as working full-time with them.) One of the nicest things about programming, for me, is finding a great OO architecture / design (I'm really into object-oriented :)). I know that ABAP is a procedural language, and I'm not certain how 'OO' its OO version is. This leads me to the conclusion that, unless I stay with SAP to the end of my career, any time spent there would be professionally unbeneficial. Is there anyone who can shed some light on these opinions? Are my concerns founded? Are there any advantages (career and technology-wise) to ABAP that I'm missing?

    Read the article

  • quick approach to migrate classic asp project to asp.net

    - by Buzz
    Recently we got a requirement to convert a classic ASP project to ASP.NET. This is really a very old project, created around 2002/2003, and it consists of around 50 ASP pages. I found very little documentation for this project - FSD and design documents for only a few modules. Just giving this project a quick look made my head start to hurt; it is really a mess. I checked the records and found that none of the developers who worked on this project work for the company anymore. My real pain is that this is an urgent requirement and I have to provide an estimated deadline to my supervisor. I found a similar question, classic-asp-to-asp-net, but I need some more insight on how to convert this classic ASP project to ASP.NET in the quickest possible way.

    Read the article

  • Where can I find fellow developers who are looking for help to create a startup?

    - by Jeremy Child
    Where can I find fellow developers who are looking for help to create a startup? Are there any collaboration websites where people pitch ideas and a group of people 'join' the project in an effort to create some kind of prototype? Edit: @Vitor Braga - Seriously, I'm not looking to make money, just to find developers who are interested in making a trivial little app with someone else. Edit: It may be worth explaining that I live in a remote area of Australia.

    Read the article

  • Fastest way to document software architecture and design

    - by Karsten
    We are a small team of 5 developers and I'm looking for some great advice about how to document the software architecture and design. I'm going for the sweet spot, where the time invested pays off; I don't want to spend more time documenting than necessary. I'll quickly give you my thoughts. What are the diagrams I should make? I'm thinking of an overall diagram showing the various applications and services, and then some sequence diagrams showing the most important or complicated processes. About the code itself, I really don't see much value in describing or making diagrams for the code outside the .cs files themselves. About text documents, I'm a bit uncertain about when to put things down on paper. Most developers don't like either writing or reading long documents.

    Read the article

  • Distinguishing repetitive code with the same implementation

    - by KyelJmD
    Given this sample code:

        import java.util.ArrayList;
        import blackjack.model.items.Card;

        public class BlackJackPlayer extends Player {
            private double bet;
            private Hand hand01 = new Hand();
            private Hand hand02 = new Hand();

            public void addCardToHand01(Card c) { hand01.addCard(c); }
            public void addCardToHand02(Card c) { hand02.addCard(c); }

            public void bustHand01() { hand01.setBust(true); }
            public void bustHand02() { hand02.setBust(true); }

            public void standHand01() { hand01.setStand(true); }
            public void standHand02() { hand02.setStand(true); }

            public boolean isHand01Bust() { return hand01.isBust(); }
            public boolean isHand02Bust() { return hand02.isBust(); }

            public boolean isHand01Standing() { return hand01.isStanding(); }
            public boolean isHand02Standing() { return hand02.isStanding(); }

            public int getHand01Score() { return hand01.getCardScore(); }
            public int getHand02Score() { return hand02.getCardScore(); }
        }

    Is this considered repetitive code, given that each method operates on a separate field but has the same implementation? Note that hand01 and hand02 should be distinct. If this is considered repetitive code, how would I address it, given that each hand is a separate entity?
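
    One possible way to collapse the duplication, shown here as a sketch that assumes the Hand, Card and Player types from the question: keep the hands in a list and let callers address them by index, so each operation exists once and the two hands stay distinct objects.

        import java.util.Arrays;
        import java.util.List;
        import blackjack.model.items.Card;

        // Sketch: one method per operation, parameterized by hand index (0 or 1)
        // instead of duplicating every method with an 01/02 suffix.
        public class BlackJackPlayer extends Player {
            private double bet;
            private final List<Hand> hands = Arrays.asList(new Hand(), new Hand());

            private Hand hand(int index) {
                return hands.get(index);
            }

            public void addCardToHand(int index, Card c) { hand(index).addCard(c); }
            public void bustHand(int index)              { hand(index).setBust(true); }
            public void standHand(int index)             { hand(index).setStand(true); }
            public boolean isHandBust(int index)         { return hand(index).isBust(); }
            public boolean isHandStanding(int index)     { return hand(index).isStanding(); }
            public int getHandScore(int index)           { return hand(index).getCardScore(); }
        }

    Whether this is worth doing depends on the callers: if game logic treats the split hands symmetrically, the index form removes a dozen near-identical methods; if the two hands follow different rules, the explicit pair may be the clearer design.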

    Read the article

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put that in another class and expose it." While sometimes that's the case and we have a hiding concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, implementation detail. Especially in TDD, when you refactor a class with public methods out of a previously tested class, that class is now part of your interface, but has no tests to it (since you refactored it, and it is an implementation detail). Now, I may not be seeing an obvious better answer, but if my answer is the "correct" one, that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to specific cases I may have written. Rather, try to explain more of the generic case and theoretical approach. And this is not language specific. Thanks in advance.

    EDIT: The answer given by Matthew Flynn was really insightful, but didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are another concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) that it gives (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hashes by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worth a topic of its own) I think private method testing is the best way to handle it. Now, I'm not sure if I should ask another question, or ask it here, but is there any better way to test such complex output (input)?

    OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
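
    For the hashing example specifically, a common alternative to testing private internals is to test the one public method against known test vectors. A minimal JUnit sketch, assuming a hypothetical MyHash class with a public static long hash(String) method; the expected values below are placeholders standing in for vectors taken from the algorithm's specification or a reference implementation:

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        // Hypothetical: MyHash is the class under test; the expected values are
        // placeholders that would normally come from published test vectors.
        class MyHashTest {

            @Test
            void matchesKnownTestVectors() {
                assertEquals(0x811c9dc5L, MyHash.hash(""));      // placeholder expected value
                assertEquals(0x050c5d1fL, MyHash.hash("a"));     // placeholder expected value
                assertEquals(0x23f3accfL, MyHash.hash("abc"));   // placeholder expected value
            }

            @Test
            void equalInputsHashEqually() {
                assertEquals(MyHash.hash("same input"), MyHash.hash("same input"));
            }
        }

    This keeps the test on the public boundary: the vectors pin the externally observable behaviour, and the private helpers can be refactored freely underneath them.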

    Read the article

  • Programmatically finding the Landau notation (Big O or Theta notation) of an algorithm?

    - by Julien L
    I'm used to searching for the Landau (Big O, Theta...) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions are getting really big and complex, it takes way too much time to do it by hand. It's also prone to human error. I spent some time on Codility (coding/algo exercises) and noticed they will give you the Landau notation for your submitted solution (in both time and memory usage). I was wondering how they do that... How would you do it? Is there another way besides lexical analysis or parsing of the code? PS: This question concerns mainly PHP and/or JavaScript, but I'm open to any language and theory.
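
    One rough alternative to analyzing the code itself is to measure it: time the function at a few doubling input sizes and estimate the exponent from the growth ratio. A Java sketch of the idea, offered with the usual caveats (it assumes a dominant polynomial term, and JIT warm-up and noise make single runs unreliable); the quadratic workload in main is just a made-up demonstration:

        import java.util.function.IntConsumer;

        // Rough empirical estimate of the exponent k in O(n^k): time the workload at
        // doubling input sizes and average log2 of the ratio between successive timings.
        public class ComplexityProbe {

            static double estimateExponent(IntConsumer algorithm, int startSize, int doublings) {
                double lastMillis = 0, ratioSum = 0;
                int samples = 0;
                for (int i = 0, n = startSize; i <= doublings; i++, n *= 2) {
                    long begin = System.nanoTime();
                    algorithm.accept(n);
                    double millis = (System.nanoTime() - begin) / 1e6;
                    if (i > 0 && lastMillis > 0) {
                        ratioSum += Math.log(millis / lastMillis) / Math.log(2);
                        samples++;
                    }
                    lastMillis = millis;
                }
                return ratioSum / samples;   // ~1 suggests O(n), ~2 suggests O(n^2), ...
            }

            public static void main(String[] args) {
                // Made-up demonstration workload: a deliberately quadratic double loop.
                IntConsumer quadratic = n -> {
                    long acc = 0;
                    for (int i = 0; i < n; i++)
                        for (int j = 0; j < n; j++)
                            acc += i ^ j;
                    if (acc == -1) System.out.println("unreachable");  // keep the loop live
                };
                System.out.printf("estimated exponent: %.2f%n", estimateExponent(quadratic, 500, 4));
            }
        }

    The limits of this kind of measurement are also what make static analysis attractive: the empirical estimate says nothing about worst-case versus average-case inputs, and it only sees the sizes you feed it.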

    Read the article

  • how to evaluate own project

    - by gruszczy
    I am working on an open source project in pure C that I started some time ago, but only recently found time to add some features to. I can clearly see some weaknesses in my old design, so I am trying to refactor my old code. However, I have no idea how to properly evaluate my new code. Do you know of any techniques or tools for code evaluation? I am pretty good with object-oriented design, but for about three years I have had no contact with purely structural design. Therefore I don't have enough experience to be able to discern between good and bad design choices.

    Read the article

  • I need a step-by-step Sample Programming Tutorial (book or website)

    - by Albert Y.
    Can anyone recommend a step-by-step programming tutorial (either book or website) where they walk you through designing a complex program and explain what they are coding & why? Language doesn't matter, but preferably something like Java, Python, C++, or C and not web based. I am a new programmer and I am looking for good examples that will teach me how to program something more complex than simple programs given in programming textbooks.

    Read the article

  • Where should I start reading AngularJS's source code?

    - by Abaco
    After reading this article I realized that I hadn't really read any "serious" source code during my 3 years as a professional developer. Recently I started a new web project which makes heavy use of AngularJS, so I decided to start my reading - or, better, decoding [as the blogger wrote] - activity with something that is both challenging and professionally useful. Now I just need to be pointed in the right direction. Should I just start at the beginning of the source code, or is there a better starting point?

    Read the article

  • How to become a Kernel/Systems/Device driver programmer?

    - by accordionfolder
    Hello all! I currently work in a professional capacity as a software engineer working with the Android OS. We work at integrating our platform as a native daemon, among other facets of the project. I primarily work in Java developing the SDK and Android applications, but get to help with the platform in C/C++. Anywho, I have a great interest in working professionally on low-level development for Linux. I am not unhappy in my current position and will hang around as long as the company lets me (as a matter of fact I quite enjoy working there!), but I would like to work my way in that direction. I've been working through Linux Kernel Development (Robert Love) and The Linux Programming Interface (Michael Kerrisk) (in addition to strengthening my C skills at every chance I get) and casually browsing Monster and similar sites. The problem I see is that there are no entry-level positions. How does one break into this field? Any time I see "Linux Systems Programmer" or "Linux Device Driver Programmer", they all require at minimum 5-7 years of relevant experience. They want someone who knows the ropes, not a junior-level programmer (I've been working for 7 months now...). So I'm assuming that some of you on StackOverflow work in a professional capacity doing just what I would like to do. How did you get there? What platforms did you use to work your way there? Am I going to have a more difficult time because I have my bachelor's in CSC as opposed to computer engineering (where they would get a bit more exposure to embedded work, asm, etc.)?

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says: "PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used." Love it or hate it, PHP re-casts variables on the fly. So, the following code is valid:

        $var = "10";
        $value = 10 + $var;
        var_dump($value); // int(20)

    PHP also allows you to explicitly cast a variable, like so:

        $var = "10";
        $value = 10 + $var;
        $value = (string)$value;
        var_dump($value); // string(2) "20"

    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of (and fully understand the usefulness of) type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at will, it can do so even after you force-cast a type; and it can do so on the fly when you need a certain type in an operation. That makes the following valid:

        $var = "10";
        $value = (int)$var;
        $value = $value . ' TaDa!';
        var_dump($value); // string(8) "10 TaDa!"

    So what's the point? Can anyone show me a practical application or example of type casting - one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective.

    Edit in response to Chris' comment: Take this theoretical example of a world where user-defined type casting makes sense in PHP:

    1. You force cast variable $foo as int -- (int)$foo.
    2. You attempt to store a string value in the variable $foo.
    3. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists!

    The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:

        // example 1
        $foo = 0;
        $foo = (string)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

        // example 2
        $foo = 0;
        $foo = (int)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

    Read the article

  • Minimal set of critical database operations

    - by Juan Carlos Coto
    In designing the data layer code for an application, I'm trying to determine if there is a minimal set of database operations (both single and combined) that are essential for proper application function (i.e. the database is left in an expected state after every data access call). Is there a way to determine the minimal set of database operations (functions, transactions, etc.) that are critical for an application to function correctly? How do I find it? Thanks very much!
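
    There probably isn't a general algorithm for finding that minimal set, but one way to make it concrete is to let the application define it: write the data layer as a small interface in which every public method is one atomic unit of work, and treat anything the application cannot express through that interface as non-critical. A hypothetical JDBC-flavored sketch (the class, table, and column names are made up):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import javax.sql.DataSource;

        // Hypothetical minimal data layer: each public method is one atomic unit of work,
        // so the database is always left in a consistent state after a call returns.
        public class OrderStore {

            private final DataSource dataSource;

            public OrderStore(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public void placeOrder(long customerId, long productId, int quantity) throws SQLException {
                try (Connection conn = dataSource.getConnection()) {
                    conn.setAutoCommit(false);   // both statements commit or roll back together
                    try (PreparedStatement insert = conn.prepareStatement(
                             "INSERT INTO orders (customer_id, product_id, quantity) VALUES (?, ?, ?)");
                         PreparedStatement stock = conn.prepareStatement(
                             "UPDATE products SET stock = stock - ? WHERE id = ?")) {
                        insert.setLong(1, customerId);
                        insert.setLong(2, productId);
                        insert.setInt(3, quantity);
                        insert.executeUpdate();
                        stock.setInt(1, quantity);
                        stock.setLong(2, productId);
                        stock.executeUpdate();
                        conn.commit();
                    } catch (SQLException e) {
                        conn.rollback();         // leave the database in its previous state
                        throw e;
                    }
                }
            }
        }

    The interesting design work is then deciding which combinations (like the insert-plus-stock-update above) must be a single transaction for the database to be "left in an expected state" - which is essentially the question being asked.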

    Read the article

  • How to use the TCP/IP Nagle algorithm with Apple Push Notifications

    - by Mahbubur R Aaman
    From Apple's Developer Library: "The binary interface employs a plain TCP socket for binary content that is streaming in nature. For optimum performance, you should batch multiple notifications in a single transmission over the interface, either explicitly or using a TCP/IP Nagle algorithm." How do I use the TCP/IP Nagle algorithm in the case of Apple Push Notifications? How do I batch multiple notifications in a single transmission over the interface? Additionally: in the Apple Push Notification space, Urban Airship is a familiar name for sending a large number of push notifications within several minutes. Do they use the TCP/IP Nagle algorithm?
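
    A sketch of the explicit-batching half in Java, with several caveats: the host and port below are the legacy binary-interface gateway from the same era as the quoted documentation, real APNs connections require a client certificate (omitted here), and each notification is assumed to be already serialized to its binary frame (byte[]). The Nagle algorithm itself is simply TCP's default behaviour - it is what setTcpNoDelay(false) leaves enabled - while the explicit alternative is to accumulate many frames in one buffered stream and flush once.

        import java.io.BufferedOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.List;
        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;

        // Sketch: send many pre-serialized notification frames over one TLS connection,
        // batching them in a single buffered write instead of one write per notification.
        public class ApnsBatchSender {

            public void send(List<byte[]> notificationFrames) throws IOException {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                try (SSLSocket socket = (SSLSocket) factory.createSocket("gateway.push.apple.com", 2195)) {
                    // Nagle is enabled by default; setTcpNoDelay(true) would disable it.
                    socket.setTcpNoDelay(false);
                    OutputStream out = new BufferedOutputStream(socket.getOutputStream(), 64 * 1024);
                    for (byte[] frame : notificationFrames) {
                        out.write(frame);    // frames accumulate in the buffer
                    }
                    out.flush();             // one large transmission instead of many small ones
                }
            }
        }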

    Read the article

  • TDD - A question about the approach

    - by k25
    I have a question about TDD. I have always seen the recommendation that we should first write unit tests and then start writing code, but I feel that going the other way is much more comfortable (for me) - write code and then the unit tests - because I feel we have much more clarity after we have written the actual code. If I write the code and then the tests, I may have to change my code a little bit to make it testable, even if I concentrate hard on creating a testable design. On the other hand, if I write the tests and then the code, the tests will change pretty frequently as the code shapes up. My questions are: 1) Since I see a lot of recommendations to start writing tests and then move on to coding, what are the disadvantages if I do it the other way - write code and then the unit tests? 2) Could you please point me to some links that discuss this, or recommend some books on TDD?
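
    For reference, here is what the test-first step looks like at its smallest, as a JUnit sketch with made-up names (FizzBuzz and render are not from any particular book): the test is written before the production class exists, fails, and then just enough code is written to make it pass before refactoring. The usual argument for this order is that the test documents the intended behaviour before the implementation biases it.

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        // Step 1: write the failing test first; FizzBuzz does not exist yet at this point.
        class FizzBuzzTest {
            @Test
            void multiplesOfThreeBecomeFizz() {
                assertEquals("Fizz", FizzBuzz.render(9));
            }

            @Test
            void otherNumbersAreRenderedAsIs() {
                assertEquals("7", FizzBuzz.render(7));
            }
        }

        // Step 2: the simplest production code that makes the tests pass; refactor afterwards.
        class FizzBuzz {
            static String render(int n) {
                return n % 3 == 0 ? "Fizz" : Integer.toString(n);
            }
        }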

    Read the article
