Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 105 of 563

  • Difference between extensible programming and extendible programming?

    - by loudandclear
    What exactly is the difference between "extensible programming" and "extendible programming"? Wikipedia states the following:

        The Lisp language community remained separate from the extensible language community, apparently because, as one researcher observed, any programming language in which programs and data are essentially interchangeable can be regarded as an extendible [sic] language. ... this can be seen very easily from the fact that Lisp has been used as an extendible language for years.

    If I'm understanding this correctly, it says "Lisp is extendible" implies "Lisp is not extensible". So what do these two terms mean, and how do they differ?

    Read the article

  • What is the value in hiding the details through abstractions? Isn't there value in transparency?

    - by user606723
    Background

    I am not a big fan of abstraction. I will admit that one can benefit from the adaptability, portability and re-usability of interfaces, etc. There is real benefit there, and I don't wish to question that, so let's ignore it.

    The other major "benefit" of abstraction is to hide implementation logic and details from users of the abstraction. The argument is that you don't need to know the details, and that one should concentrate on one's own logic at that point. It makes sense in theory. However, whenever I've been maintaining large enterprise applications, I always need to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e. having to do "open declaration" about 12 times before finding the stored procedure used. This "hide the details" mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high-level source code and know what it does, but I'll never know how it does it, when how it does it is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective at least)?

    My philosophy

    When I develop software, I try to follow a philosophy I feel is closely related to the Arch Linux philosophy:

        Arch Linux retains the inherent complexities of a GNU/Linux system, while keeping them well organized and transparent. Arch Linux developers and users believe that trying to hide the complexities of a system actually results in an even more complex system, and is therefore to be avoided.

    Therefore, I never try to hide the complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it.

    Question at heart

    Is there real value in hiding the details? Aren't we sacrificing transparency? Isn't this transparency valuable?

    Read the article

  • What is the origin of the name string? [closed]

    - by Andrej M.
    Possible Duplicate: Etymology of “String”

    Every programmer knows the meaning of the name string: in programming, it is traditionally a sequence of characters. But historically, who decided that a sequence of characters would be called a string? Has there ever been an attempt to name a sequence of characters differently, one that was ultimately abandoned due to the rising popularity of the name string?

    Read the article

  • Array Multiplication and Division

    - by Narfanator
    I came across a question that (eventually) landed me wondering about array arithmetic. I'm thinking specifically in Ruby, but I think the concepts are language-independent. Addition and subtraction are defined in Ruby as:

        [1,6,8,3,6] + [5,6,7] == [1,6,8,3,6,5,6,7]  # all the elements of the first, then all the elements of the second
        [1,6,8,3,6] - [5,6,7] == [1,8,3]            # from the first, remove anything found in the second

    and array * scalar is defined:

        [1,2,3] * 2 == [1,2,3,1,2,3]

    But what, conceptually, should the following be? None of these are (as far as I can find) defined:

        [1,2,3] * [1,2,3]    #=> ?  (array * array)
        [1,2,3,4,5] / 2      #=> ?  (array / scalar)
        [1,2,3,4,5] % 2      #=> ?  (array % scalar)
        [1,2,3,4,5] / [1,2]  #=> ?  (array / array)
        [1,2,3,4,5] % [1,2]  #=> ?  (array % array)

    I've found some mathematical descriptions of these operations in set theory, but I couldn't really follow them, and sets don't have duplicates (arrays do).

    Edit: Note, I do not mean vector (matrix) arithmetic, which is completely defined.

    Edit 2: If this is the wrong Stack Exchange, tell me which is the right one and I'll move it.

    Edit 3: Added the mod operators to the list.

    Edit 4: I figure array / scalar is derivable from array * scalar:

        a * b = c  =>  a = c / b
        [1,2,3] * 3 = [1,2,3]+[1,2,3]+[1,2,3] = [1,2,3,1,2,3,1,2,3]
        =>  [1,2,3] = [1,2,3,1,2,3,1,2,3] / 3

    which, given that programmers' division ignores the remainder and has a modulus, suggests:

        [1,2,3,4,5] / 2 = [[1,2], [3,4]]
        [1,2,3,4,5] % 2 = [5]

    except that these are pretty clearly non-reversible operations (not that modulus ever is), which is non-ideal.

    Edit 5: I asked a question over on Math that led me to multisets. I think maybe extensible arrays are "multisets", but I'm not sure yet.
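
    The semantics proposed above can be prototyped directly. A hedged sketch (mine, not the poster's; Python stands in for Ruby, and the names arr_mul, arr_div, arr_mod are invented for illustration, with array * array read as a Cartesian product by analogy with Ruby's Array#product):

        from itertools import product

        def arr_mul(a, b):
            # One plausible array * array: the Cartesian product of the two arrays.
            return [list(pair) for pair in product(a, b)]

        def arr_div(a, n):
            # Array / scalar as proposed in Edit 4: n-length chunks, remainder dropped.
            whole = len(a) - len(a) % n
            return [a[i:i + n] for i in range(0, whole, n)]

        def arr_mod(a, n):
            # The matching array % scalar: exactly what arr_div discarded.
            return a[len(a) - len(a) % n:]

        assert arr_div([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4]]
        assert arr_mod([1, 2, 3, 4, 5], 2) == [5]
        assert len(arr_mul([1, 2, 3], [1, 2, 3])) == 9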

    Read the article

  • Newbie worried about CASE tool

    - by Jason Evans
    Hi there. I'm looking for some guidance on CASE tools and whether my concerns are valid.

    Recently I was in a meeting between my employer and an external software company which has a CASE tool currently in beta. They demonstrated this tool to us, showing how you build a UML model in Enterprise Architect (or something like it) and then, through their tool, that UML model is transformed into a Visual Studio project, with C# files, stored procedures for SQL Server, code for the data layer, WCF stuff, logging code and all sorts. Now, admittedly, I don't see the point in this: I'm not convinced it will save that much time (plus it feels like overkill). The tool authors said that a trial of the tool at another company had saved a team there 5 weeks of development time (from 6 weeks down to about 1 week). I find the accuracy of that estimate hard to believe.

    My main concern is whether using this tool will slow down my productivity. For example, say I have a UML model from which I built a VS solution. Now I want to rename a class method to something else; will this mean having to update the UML model first and then rebuilding the code? Is this how CASE tools normally work?

    Something I will need to check with the authors is the structure of the generated VS solution. I like the Domain Driven Design way of project structure: Infrastructure, Services, Model, etc. I doubt very much this tool will do that. Also, I've been playing around with Entity Framework Code First and think it's a great way to build the data model. I have nice repositories, unit of work classes and other design patterns that work well with EF, and data annotations and the like working great. Since the CASE tool uses its own data layer code instead of EF, I'm concerned that its data layer might not be as easy to integrate with the UoW pattern, repositories, etc. This I will need to verify when I get a closer look at the generated code.

    What are other people's experiences with CASE tools? Am I being paranoid about nothing? Am I being unfair; are my negative assumptions unfounded?

    Edit: I like to use TDD/BDD for building my code, and using a CASE tool looks like it will make this difficult. Again, any feedback on this would be great. Cheers. Jas.

    Read the article

  • Are books on programming hard to understand?

    - by DarkEnergy
    I've been reading books that are extremely daunting. Accelerated C++ is by far one of those books, and one that I haven't finished. I plan to, but that's another story. When reading a programming book, do you find yourself re-reading a lot of the paragraphs? Sometimes it takes me an hour to read 20 pages of a book. Sometimes a book becomes so daunting that it takes me all day to finish a single chapter. I think having these as e-books makes them even harder to read sometimes, since I'm so used to looking down to read a book, or just looking at tangible paper. I just want to know whether reading these books becomes extremely hard for you too, and whether you find yourself re-reading the simplest paragraphs 2-3 times just to get the meaning, because the previous paragraph left your brain hurting. Here is an article I read on something similar: http://www.it-career-coach.net/2007/03/04/are-computer-programming-books-hard-to-study/

    Edit: Sometimes I find myself reading a whole page, then I look up and say "wth did I just read?" I could finish a chapter in 30 minutes to an hour and feel this way too.

    Read the article

  • Annotate source code with diagrams as comments

    - by Steven Lu
    I write a lot of (primarily C++ and JavaScript) code that touches upon computational geometry and graphics and those kinds of topics, so I have found that visual diagrams have been an indispensable part of the process of solving problems. I have determined just now that "oh, wouldn't it just be fantastic if I could somehow attach a hand-drawn diagram to a piece of code as a comment"; this would allow me to come back to something I worked on days, weeks, months earlier and far more quickly re-grok my algorithms. As a visual learner, I feel like this has the potential to improve my productivity with almost every type of programming, because simple diagrams can help with understanding and reasoning about any type of non-trivial data structure. Graphs, for example: during graph theory class at university I was only ever able to truly comprehend the graph relationships that I could actually draw diagrammatic representations of.

    So: no IDE to my knowledge lets you save a picture as a comment to code. My thinking was that I or someone else could come up with some reasonably easy-to-use tool that converts an image into a base64 string which I could then insert into my code. If the conversion/insertion process can be streamlined enough, it would allow a far better connection between the diagram and the actual code, so I no longer need to search chronologically through my notebooks. Even more awesome: plugins for IDEs to automatically parse out and display the image. There is absolutely nothing difficult about this from a theoretical point of view. My guess is that it would take some extra time for me to actually figure out how to extend my favorite IDEs and maintain those plugins, so I'd be totally happy with a sort of code post-processor that does the same parsing out and rendering of the images and shows them side by side with the code, inside a browser or something, since I'm a JavaScript programmer by trade.

    What do people think? Would anyone pay for this? I would.
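
    A minimal sketch of the encoding half of this idea (mine, not the poster's; Python for brevity, and the "diagram-base64" header convention is invented here):

        import base64
        import textwrap

        def image_to_comment(path, prefix="// ", width=76):
            # Read the image, base64-encode it, and wrap it in line comments
            # so it can be pasted directly next to the code it illustrates.
            with open(path, "rb") as f:
                encoded = base64.b64encode(f.read()).decode("ascii")
            header = prefix + "diagram-base64: " + path
            body = "\n".join(prefix + line for line in textwrap.wrap(encoded, width))
            return header + "\n" + body

    An IDE plugin (or the post-processor described above) would do the reverse: strip the comment prefix, base64-decode the payload, and render the image inline.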

    Read the article

  • Logical progressions through the job market

    - by Philluminati
    I'm 5 years out of an unrecognised university where I did Software Engineering. My first job was VB.NET; another was Python, Linux and web development. I feel cast as a web developer. I'd love a role doing C, but no one is interested in juniors unless the applicant already has 3 years of C development experience. I've done some C and a drop of open-source coding, but I'll never have the confidence to convince someone I know absolutely what I'm doing. Do I just spend more and more time letting life pass me by as I sit in my room on a Friday night writing C "for the sake of learning more C"?

    Basically, I'm just not sure I want to continue my career if it's going to involve nothing but high-level, machine-abstracted business logic. As interested as I am in low-level development, and as much as I enjoy reading books by Tanenbaum, I struggle to see how I can make the jump, and I just feel life would be easier if I got a job in a cafe in Amsterdam rolling spliffs for customers. My ideal job, being a paid member of the Fedora development team, seems so far away, with no one to pay me to learn the skills to get there, and the only way there would be to literally spend weeks and weeks of my life contributing code for free, without recognition and without any guarantees at the end. Not that I've contributed anything at all so far.

    Are there any career paths that are logically set out, so that jumping between roles is "correctly" incremental, and where hard work and learning do eventually lead to the kind of places I might want to go? [And also getting paid at the same time?]

    Read the article

  • When do you call yourself a programmer?

    - by benhowdle89
    "A programmer, computer programmer or coder is someone who writes computer software" from Wikipedia If you do frontend development using jQuery/CSS/HTML do you call yourself a programmer? If you develop PHP applications that deal with databases, do you call yourself a programmer? Are you only a programmer if you write applications for desktops and mobiles? Is the web a place where the line between developer and programmer stops?

    Read the article

  • What should we tell our unsupported IE6 users?

    - by Dan Fabulich
    In the upcoming version of our web app, we've broken IE6, and we don't intend to fix it. We've had a clear warning posted for IE6 users for some months; we've decided it's time not to support it. My question is: how should we communicate this to our users? Some people here feel that we should block IE6 users who would try to access the web app, because it's not going to work for them. Others feel that we should just leave up a warning, saying "This doesn't work in IE6," but not block them; instead, if they click to dismiss the warning, just let them in to the broken site to see for themselves that it doesn't work. Who is right? Is there a better way?

    Read the article

  • Multithreaded UI desktop application issues

    - by igor
    I am involved in developing a rich UI project: a desktop Windows application. The application uses asynchronous invocations and, in turn, must be ready to process external messages (events). The problem is clear: it was first built as a simple prototype, it was never stress-tested, and all was fine. Then the application grew: the number of calls to the server and the number of events from the server became high, and performance became low. What's more, users noticed that performance is sometimes extremely low. The asynchronous invocations are based on the thread pool (BeginInvoke, EndInvoke); the external events come from a WCF service (.NET 3.5). My goal is to synchronize all the tasks and assign a priority to every execution in the desktop application. My question is: are there any established practices for reaching this goal: patterns, a prioritized task list, something else? What should I do first, second, and after that? Thanks
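
    One pattern that matches "synchronize all tasks and assign priorities" is a single dispatcher thread draining a priority queue. A sketch of the idea (mine, not the asker's .NET code; Python stands in for brevity):

        import itertools
        import queue
        import threading

        tasks = queue.PriorityQueue()
        counter = itertools.count()  # tie-breaker: FIFO order within one priority level

        def submit(priority, fn, *args):
            # Lower number = more urgent, e.g. 0 for user actions, 5 for server events.
            tasks.put((priority, next(counter), fn, args))

        def worker():
            while True:
                _, _, fn, args = tasks.get()
                try:
                    fn(*args)  # results would be marshalled back to the UI thread
                finally:
                    tasks.task_done()

        threading.Thread(target=worker, daemon=True).start()

    Server events can then be enqueued at a lower priority than user-initiated calls, so a burst of events can no longer starve the UI.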

    Read the article

  • Can You Have "Empty" Abstract/Classes?

    - by ShrimpCrackers
    Of course you can; I'm just wondering if it's rational to design in such a way. I'm making a Breakout clone and was doing some class design. I wanted to use inheritance, even though I don't have to, to apply what I've learned in C++. I was thinking about class design and came up with something like this:

        GameObject - base class (consists of data members like x and y offsets, and a vector of SDL_Surface*)
        MovableObject : GameObject - abstract derived class (one method: void move() = 0;)
        NonMovableObject : GameObject - empty class... no methods or data members other than constructor and destructor (at least for now?)

    Later I was planning to derive a class from NonMovableObject, like Tileset : NonMovableObject. I was just wondering if "empty" abstract classes, or just empty classes, are often used. I notice that the way I'm doing this, I'm creating the class NonMovableObject just for the sake of categorization. I know I'm overthinking things just to make a Breakout clone, but my focus is less on the game and more on using inheritance and designing some sort of game framework.
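
    For what it's worth, here is the same shape rendered as a quick sketch (the question is C++, but the design issue is language-neutral, so this uses Python; the fields come from the description above):

        from abc import ABC, abstractmethod

        class GameObject(ABC):
            def __init__(self, x, y):
                self.x, self.y = x, y  # shared offsets; surfaces omitted

        class MovableObject(GameObject):
            @abstractmethod
            def move(self):
                ...  # every movable object must say how it moves

        class NonMovableObject(GameObject):
            pass  # a pure marker class: no new state or behavior, only a category

        class Tileset(NonMovableObject):
            pass

    Written out like this, the trade-off is visible: NonMovableObject carries no behavior, so its only job is to let code ask "is this the kind of thing that moves?"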

    Read the article

  • Defensive Programming Techniques

    - by Pemdas
    I was attempting to identify an element of software engineering that I think is overlooked, not emphasized, or not taught in typical undergraduate coursework for CS or SE. What I came up with is the concept of defensive programming. I would like to hear the community's opinions on defensive programming and/or specific techniques that you use on a regular basis. Also, I would like to know if there are any language-specific techniques.
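
    As one concrete illustration of the concept (my example, not the asker's): defensive code validates its inputs at the boundary and fails fast with a clear error, rather than letting bad state propagate.

        def withdraw(balance: float, amount: float) -> float:
            # Guard clauses: reject bad input immediately, with a precise message.
            if not isinstance(amount, (int, float)):
                raise TypeError(f"amount must be a number, got {type(amount).__name__}")
            if amount <= 0:
                raise ValueError("amount must be positive")
            if amount > balance:
                raise ValueError("insufficient funds")
            return balance - amount

    Other commonly cited techniques in the same spirit: assertions on internal invariants, treating compiler warnings as errors, and never trusting data that crosses a process or network boundary.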

    Read the article

  • Beyond Syntax Highlighting - What other code representations are possible today?

    - by Mathieu Hélie
    Despite GUI applications having been around for 30-ish years, software is still written as lines of text instructions, for various valid reasons. But we've also found that manipulating these text instructions is mind-blowingly difficult unless we apply a layer of coloring to different words to represent their syntax, thus allowing us to quickly scan through these text files without having to read the words in full. But besides the Sublime Text minimap feature, I've yet to see any innovation in the visual representation of code since colors came around on CRT monitors.

    I can think of one obviously essential representation that modern graphics technology allows: visual hierarchies for nested structures. If we make nested text slightly smaller than its outer context, and zoom in on it when the cursor is focused on the line, then we will be able to browse huge files of nested statements very quickly. This becomes even more essential as languages based on closures and anonymous functions fill up with deeply nested statements.

    Has anyone attempted to implement this in a text editor? Do you know of any other useful improvements in representing code text graphically?

    Read the article

  • Securing Back End API for Mobile Applications

    - by El Guapo
    I have an application that I am writing for both iOS and Android; this application will be served by a RESTful API running on a cluster of servers on "the internets". I am curious how the rest of the world goes about securing their APIs so that only specific applications running on iOS or Android can use them. I could go the same route as other OAuth providers by providing a key/secret combination (2-legged OAuth); however, what do I do if I ever have to change those keys? Do I create a new key/secret pair for every person who downloads the app?

    The application is a social-based game that will allow the user to interact with other "participants" in the game based on location, achievements, etc. The API will provide the following functions:

        - Questions, quests, etc.
        - Profile management
        - User interaction
        - Possible social interaction

    Once the app gains traction I plan on opening up the API ala Facebook, Twitter, etc., which is easy enough; I plan on implementing an OAuth server and whatnot. However, I want to make sure, during this phase, that only people who are using the application can access and use the API.
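
    A common shape for the key/secret approach is HMAC request signing. A hedged sketch (mine; the header names are invented, and any server-side convention works), where rotating keys means issuing a new key ID rather than shipping a new binary:

        import hashlib
        import hmac
        import time

        def sign_request(key_id: str, secret: bytes, method: str, path: str, body: bytes = b"") -> dict:
            # The server looks up `secret` by key_id and recomputes the same HMAC.
            timestamp = str(int(time.time()))  # lets the server reject replayed requests
            message = "\n".join([method, path, timestamp]).encode() + body
            signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
            return {"X-Key-Id": key_id, "X-Timestamp": timestamp, "X-Signature": signature}

    The hard part the question hints at remains, though: any secret embedded in a shipped mobile binary can be extracted, so this raises the bar rather than guaranteeing that only your app can call the API.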

    Read the article

  • Can an issue tracking system be distributed?

    - by Klaim
    I was thinking about issue-tracking software like Redmine, Trac, or even the tracker built into Fossil, and something hit me: is there a reason why Redmine and Trac cannot be distributed? Or maybe it's possible and I just don't know how? If it's not possible, why? By distributed I mean like Facebook or Google or other applications that effectively run on multiple machines at the same time while sharing data.

    Read the article

  • Is hashing of just "username + password" as safe as salted hashing

    - by randomA
    I want to hash "user + password".

    Edit: pre-hashing "user" would be an improvement, so my question also covers hashing "hash(user) + password". If the same user across different sites is a problem, then change it to hashing "hash(serviceName + user) + password".

    From what I have read about salted hashes, using "user + password" as the input to the hash function should protect against reverse-hash-table attacks. The same can be said about rainbow tables. Is there any reason why this is not as good as salted hashing?
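
    For contrast, a minimal sketch of conventional salted hashing (my illustration, not from the question). The salt is random per user, so it never repeats across users or services, which a username-derived salt only approximates:

        import hashlib
        import hmac
        import os

        def hash_password(password: str) -> tuple:
            salt = os.urandom(16)  # random per-user salt, stored beside the hash
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)

    The practical differences from hash(user + password): a random salt is unpredictable before the attack and changes when the password is reset, and a slow KDF such as PBKDF2 raises the cost of brute force in a way a single fast hash does not.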

    Read the article

  • Where and how to reference composite MVP components?

    - by Lea Hayes
    I am learning about the MVP (Model-View-Presenter) Passive View flavour of MVC. I intend to expose events from view interfaces rather than using the observer pattern, to remove explicit coupling with the presenter. Context: Windows Forms / client-side JavaScript.

    I am led to believe that the MVP (or indeed MVC in general) pattern can be applied at various levels of a user interface, ranging from the main "Window" to an embedded "Text Field". For instance, the model of the text field is probably just a string, whereas the model of the "Window" contains application-specific view state (like a person's name, which resides within the contained text field).

    Given a more complex scenario, a documentation viewer which contains:

        - TOC navigation pane
        - Document view
        - Search pane

    Since each of these user interface items is complex and can be reused elsewhere, it makes sense to design them using MVP. Given that each of these user interface items comprises 3 components: which component should be nested? Where? Who instantiates them?

    Idea #1 - Embed View inside View from Parent View

        public class DocumentationViewer : Form, IDocumentationViewerView
        {
            public DocumentationViewer()
            {
                ...
                // Unclear as to how model and presenter are injected...
                TocPane = new TocPaneView();
            }

            protected ITocPaneView TocPane { get; private set; }
        }

    Idea #2 - Embed Presenter inside View from Parent View

        public class DocumentationViewer : Form, IDocumentationViewerView
        {
            public DocumentationViewer()
            {
                ...
                // This doesn't seem like view logic...
                var tocPaneModel = new TocPaneModel();
                var tocPaneView = new TocPaneView();
                TocPane = new TocPanePresenter(tocPaneModel, tocPaneView);
            }

            protected TocPanePresenter TocPane { get; private set; }
        }

    Idea #3 - Embed View inside View from Parent Presenter

        public class DocumentationViewer : Form, IDocumentationViewerView
        {
            ...
            // Part of IDocumentationViewerView:
            public ITocPaneView TocPane { get; set; }
        }

        public class DocumentationViewerPresenter
        {
            public DocumentationViewerPresenter(DocumentationViewerModel model, IDocumentationViewerView view)
            {
                ...
                var tocPaneView = new TocPaneView();
                var tocPaneModel = new TocPaneModel(model.Toc);
                var tocPanePresenter = new TocPanePresenter(tocPaneModel, tocPaneView);
                view.TocPane = tocPaneView;
            }
        }

    Or some better idea?

    Read the article

  • How to become a better programmer in 2011?

    - by Anish Patel
    Not strictly a Stack Overflow thing, but I thought I'd get it out there and ask the question: what are you, as a programmer, going to do to improve in 2011? The things I am planning to do are as follows:

        - Learn functional programming
        - Write 100 blog posts
        - Take a bunch of Microsoft exams (70-433, 70-511, 70-513, 70-515, 70-516, 70-518, 70-519)
        - Contribute to an open source project

    Let's hope the motivation lasts all year!

    Read the article

  • Load Balancer impact on web development

    - by confusedGeek
    This question has its roots in a SharePoint site that I am helping with. Background on the issue I dealt with: the dev box and integration server are not set up behind a load balancer, and links were being built using the HttpRequest.Url value from the current context. Note that the links weren't relative links but full URIs. Once we deployed to testing (which has an LB, amongst other things), we received errors on the links being built, since the server had an address of "http://some.site.org:999" while the address at the LB was "https://site.org" (SSL was off-loaded at the LB). The fix was easy enough: use relative URIs.

    The question: since this is the first site I've worked with that sits behind a load balancer, I'm wondering if there are other gotchas I need to consider when developing a site behind one.
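
    One related gotcha, with a sketch of the usual fix (mine, not from the question): sometimes a fully qualified URL is unavoidable, e.g. in outgoing email, and then the de facto standard X-Forwarded-* headers that most balancers can set are the way to recover the public-facing origin:

        def public_base_url(headers: dict, fallback_scheme: str, fallback_host: str) -> str:
            # Trust these headers only when the LB is known to set or overwrite them.
            scheme = headers.get("X-Forwarded-Proto", fallback_scheme)
            host = headers.get("X-Forwarded-Host", fallback_host)
            return f"{scheme}://{host}"

        # Behind the LB described above:
        # public_base_url({"X-Forwarded-Proto": "https", "X-Forwarded-Host": "site.org"},
        #                 "http", "some.site.org:999")  ->  "https://site.org"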

    Read the article

  • Best practice with branching source code and application lifecycle

    - by Toni Frankola
    We are a small ISV shop and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware a lot of people are advocating Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from these, though I might be wrong.

    Our main problem is how to keep branches and the main trunk in sync. Here is how we do things today:

        1. Release a new version (automatically creating a tag in Subversion).
        2. Continue working on the main trunk, to be released next month.

    The cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to go out. We cannot release it from the main trunk (2), as it is under heavy development and not stable enough to be released urgently. In that case we do the following:

        1. Create a branch from the tag we created in step (1).
        2. Fix the bug.
        3. Test and release.
        4. Push the change back to the main trunk (if applicable).

    Our biggest problem is merging these two (branch with main). In most cases we cannot rely on automatic merging, because, for example:

        - a lot of changes have been made to the main trunk;
        - merging complex files (like Visual Studio XML files, etc.) does not work very well;
        - another developer or team made changes you do not understand, and you cannot just merge them.

    So what do you think is the best practice for keeping these two different versions (branch and main) in sync? What do you do?

    Read the article

  • Beta Testing iOS Application

    - by dbramhall
    I was wondering if it is advisable to get a small team of beta testers for an iOS application that will be released to the App Store. I am developing an iOS application and I have set up a beta application form; however, I am wondering whether beta testing is even worthwhile, considering that I am actively testing and using the application on all of my own iOS devices (an iPad 2, 2 iPod Touches and an iPhone 4, plus, of course, the iOS Simulator), all running various versions of iOS 4. My question is: would you advise someone to get beta testers for an iOS application and, if so, how would you advise them to go about getting testers? For those interested, my application is at http://affogato.visioa.com/

    Read the article

  • Is Google Closure a true compiler?

    - by James Allardice
    This question is inspired by the debate in the comments on this Stack Overflow question. The Google Closure Compiler documentation states the following (emphasis added): The Closure Compiler is a tool for making JavaScript download and run faster. It is a true compiler for JavaScript. Instead of compiling from a source language to machine code, it compiles from JavaScript to better JavaScript. However, Wikipedia gives the following definition of a "compiler": A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language... A language rewriter is usually a program that translates the form of expressions without a change of language. Based on that, I would say that Google Closure is not a compiler. But the fact that Google explicitly state that it is in fact a "true compiler" makes me wonder if there's more to it. Is Google Closure really a JavaScript compiler?

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I've realized that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine.

    I could describe myself as a "perfectionist" (though often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I really realized that it's not possible to refactor that application's modules in a reasonable time (say, 24-26 hours, compared to the time available for the task, which is 160 hours).

    I'm talking about (I am a bit ashamed to say): name collisions; large and frequent cut-and-paste code; horrible and misleading naming; makefiles without dependencies (!); application logic spread randomly across many different sources; dead code; variable aliasing; no assertions; no documentation; very long source files; bad/incomplete include file definitions; (this is emblematic!) very frequent extern declarations of variables and functions... and I'm sure I could continue: buffer overflows because of sprintf, indentation (!), spacing, nonexistent use of the const modifier. I would say that every source line was written quite randomly when needed, without keeping any design in mind (at least, the obvious one). (Am I in hell?)

    The problem arises because the application was developed by a colleague of mine. I felt very frustrated, so I decided to describe the "situation" to my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant design."

    Am I asking too much of my colleague in wanting readable code that is maintained and documented? Do I expect too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • Useful certifications for a young programmer

    - by Alain
    As @Paddyslacker elegantly stated in "Are certifications worth it?": the main purpose of certifications is to make money for the certifying body. I am a fairly young developer with only an undergraduate degree, and my employer is (graciously) offering to sponsor some professional development of my choice (provided it can be argued that it will contribute to the quality of the work I do for them). A search online offers a slew of (mostly worthless) certifications one can attain. I'm wondering if there are any that are actually recognized as an asset in the (North American) industry.

    My local university promoted CIPS (I.S.P., ITCP) at the time I was graduating, but for all I can tell it's just the one that happened to get its foot in the door. It's certainly money-grubbing, with a $205-a-year fee. So are there any such certifications that provide useful credentials? To better define "useful": would they benefit full-time developers, or are they only worthwhile for the self-employed? Would any certifications lead to my being considered for higher wages, or can that only be achieved with more experience and a higher-level degree?

    Read the article
