Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Should I be worried if I solve a lot of my problems the same way?

    - by Bryan Harrington
    I really enjoy programming games and puzzle creators/games. I find myself engineering a lot of these problems the same way, and ultimately using a similar technique to program them that I'm really comfortable with. To give you brief insight: I like to create graphs where nodes are represented by objects. These objects hold data such as coordinates and positions, and of course references to neighboring objects. I place them all in a data structure and make decisions on this information in a "game loop". While this is a brief example, it isn't exact in all situations; it's just one approach I feel really comfortable with. Is this bad?
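
    A minimal Java sketch of the approach described above, assuming nodes are objects holding coordinates and neighbour references, collected in a structure that a "game loop" inspects each tick (all names are illustrative, not taken from the question):

        import java.util.ArrayList;
        import java.util.List;

        class Node {
            final int x, y;                                  // position on the board
            final List<Node> neighbors = new ArrayList<>();  // adjacent nodes

            Node(int x, int y) { this.x = x; this.y = y; }
        }

        class Board {
            final List<Node> nodes = new ArrayList<>();

            // One pass of the "game loop": make decisions from node data.
            void update() {
                for (Node n : nodes) {
                    if (n.neighbors.isEmpty()) {
                        System.out.println("Isolated node at (" + n.x + ", " + n.y + ")");
                    }
                }
            }
        }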

    Read the article

  • Licenses that I can use for my works: web apps, desktop apps, WordPress themes, etc.

    - by jiewmeng
    I originally thought of Creative Commons, but while reading a book about WordPress (Professional WordPress) I learned that I should also specify that the product is provided ... WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, and the authors recommend the GNU GPL. How do I write a license or select one? By the way, what does 'MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE' actually mean? Isn't 'without warranty' enough?

    Read the article

  • Acceptable placement of the composition root using dependency injection and inversion of control containers

    - by Lumirris
    I've read in several sources, including Mark Seemann's 'Ploeh' blog, that the appropriate placement of the composition root of an IoC container is as close as possible to the entry point of the application. In the .NET world, these applications are commonly thought of as web projects, WPF projects, console applications: things with a typical UI (read: not library projects).

    Is it really going against this sage advice to place the composition root at the entry point of a library project, when that project represents the logical entry point of a group of library projects, and the client of the group is someone else's work, whose author can't or won't add the composition root to their own project (a UI project, or yet another library project)?

    I'm familiar with Ninject as an IoC container implementation, but I imagine many others work the same way, in that they can scan for a module containing all the necessary binding configurations. This means I could put a binding module in its own library project to compile with my main library project's output, and if the client wanted to change the configuration (an unlikely scenario in my case), they could drop in a replacement DLL for the library with the binding module. This would spare the most common clients from dealing with dependency injection and composition roots at all, and would make for the cleanest API for the library project group.

    Yet this seems to fly in the face of conventional wisdom on the issue. Is it just that most of the advice out there assumes the developer has some coordination with the development of the UI project(s) as well, rather than my case, in which I'm just developing libraries for others to use?
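
    For illustration, a hedged sketch of the arrangement the question describes, in Java rather than .NET, with hand wiring standing in for a container such as Ninject (all type names are hypothetical): the library's facade acts as the composition root, so client projects never touch the container at all.

        interface Repository { String load(String key); }

        class SqlRepository implements Repository {
            public String load(String key) { return "data for " + key; }
        }

        class ReportService {
            private final Repository repo;
            ReportService(Repository repo) { this.repo = repo; }
            String render(String name) { return repo.load(name); }
        }

        // The library's logical entry point doubles as the composition root:
        // the only place that knows which concrete types are wired together.
        public final class SdkFacade {
            private final ReportService reports;

            private SdkFacade(ReportService reports) { this.reports = reports; }

            public static SdkFacade create() {
                return new SdkFacade(new ReportService(new SqlRepository()));
            }

            public String runReport(String name) { return reports.render(name); }
        }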

    Read the article

  • Package organisation with the MVC design pattern

    - by Oltarus
    I have been programming quite a lot now and still can't decide which of these package hierarchies is best:

        package1
            Class1Controller
            Class1Model
            Class1View
        package2
            Class2Controller
            Class2Model
            Class2View

    or

        controller
            Class1Controller
            Class2Controller
        model
            Class1Model
            Class2Model
        view
            Class1View
            Class2View

    In other words, is it better to apply the MVC design pattern to classes or to packages? Is there any reason to choose one over the other? My question is language-agnostic, but I'm mostly a Java programmer, if that makes any difference.

    Read the article

  • How do we grasp new programming concepts?

    - by Sarang
    In the IT world, new technologies appear daily. Every programmer needs to learn them and understand them conceptually before implementing anything. Each new technology is built on some basic concepts, but it has its own area of development, and a developer is expected to grasp it from the very basics. That feels like starting from the very beginning just to catch up with the present. What is the best and fastest way to learn and grasp a newly developed technology?

    Read the article

  • Looking for suggestions: becoming a hireable young programmer [closed]

    - by Dan
    I am a 17-year-old Java programmer who has spent the last year learning the ins and outs of Java. Using Eclipse, and with the help of a friend of the family (a Java programming architect at some company), I have learned everything from serializing objects, basic networking, generics, reflection, multi-threading, code optimization and efficiency to some concurrency safety; I've built my own proxy class, and nowadays I answer questions on Project Euler.

    I am seeking some suggestions on where I go next, or where I go from here, to get a job in programming. I dedicate at least an hour every day to coding, sometimes literally the entire day, and I have really come to love the process. I just started reading Effective Java (2nd edition) and learning Scala (which, as I often see written, is possibly the Java replacement).

    I will be going to college for Computer Science next year, and I am taking AP Computer Science this year (however, I took a practice exam and got an 87, and I only need a 60 to 70 to pass, so there is no need to study for it too much). I was wondering whether getting the SE 7 OCA and OCP would help me in trying to get a programming job. I looked around, and most people online have said the OCA/OCP are practically useless, but at my age do they make me any more credible?

    More or less, what would you recommend to get a job in programming these days, or to distinguish yourself from the crowd? I have enough time and dedication to learn another language, or anything really. Thank you very much.

    Read the article

  • How does one handle sensitive data when using GitHub and Heroku?

    - by Jonas
    I am not yet accustomed to the way Git works (and I wonder if anyone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you are likely to share this repo on GitHub or another Git host. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc. But these things still need to be part of the Git repo, since you use it to push your code to Heroku. How do you work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. My question is more about how to "hide" parts of the code.
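
    A minimal sketch of the usual answer, assuming a Heroku-style setup: keep secrets out of the repository entirely (locally, for example, in a git-ignored file) and read them from environment variables at runtime, which Heroku populates from config vars set with "heroku config:set". Shown in Java; the variable name is illustrative:

        public class DbConfig {
            // Read the secret from the environment instead of from checked-in code.
            public static String databaseUrl() {
                String url = System.getenv("DATABASE_URL");
                if (url == null) {
                    throw new IllegalStateException(
                            "DATABASE_URL is not set; configure it in the environment");
                }
                return url;
            }
        }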

    Read the article

  • How do you achieve a numeric versioning scheme with Git?

    - by Erlend
    My organization is considering moving from SVN to Git. One argument against the move is the question of versioning. We have an SDK distribution based on the NetBeans Platform, and since SVN revisions are simple numbers, we can use them to extend the version numbers of our plugins and SDK builds. How do we handle this when we move to Git? Possible solutions: using the build number from Hudson (problem: you have to check Hudson to correlate it to an actual Git version), or manually bumping the version for nightly and stable builds (problem: learning curve, human error). If someone else has encountered a similar problem and solved it, we'd love to hear how.
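
    One common workaround, sketched here under the assumption that a number that only grows along a single branch is good enough: derive it at build time from the commit count ("git rev-list --count HEAD") and bake it into the version string. Note that it is not unique across branches; "git describe --tags" is a richer alternative.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class BuildNumber {
            public static void main(String[] args) throws Exception {
                // Count the commits reachable from HEAD.
                Process p = new ProcessBuilder("git", "rev-list", "--count", "HEAD").start();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    System.out.println("1.2." + r.readLine()); // e.g. 1.2.1734
                }
            }
        }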

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, the computer or operating system normally represents it in terms of powers of 1024: a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or gigabytes, it has to be converted into decimal first, and in places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes).

    Was it just a little aesthetic novelty that computer makers decided to adopt but storage medium vendors decided to disregard? Whenever you buy a hard drive nowadays, there's always a disclaimer that says "one gigabyte means one billion bytes". Using the binary definition of "gigabyte" would inflate the number of bytes required for a given label, forcing drive makers either to pack about 1.1 terabytes into a drive for it to show up as "1 TB", or to pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter).

    Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
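
    The arithmetic behind the "1 TB shows up as 931 GB" observation, as a quick check:

        public class DiskUnits {
            public static void main(String[] args) {
                long vendorTerabyte = 1_000_000_000_000L;       // 10^12 bytes on the box
                double gib = vendorTerabyte / Math.pow(2, 30);  // the OS divides by 1024^3
                System.out.printf("1 TB = %.0f GiB%n", gib);    // prints: 1 TB = 931 GiB
            }
        }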

    Read the article

  • Is There a Real Advantage to a Generic Repository?

    - by Sam
    I was reading through some articles on the advantages of creating generic repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once:

        IRepository repo = new EfRepository(); // Would normally pass through IoC into constructor
        var c1 = new Country() { Name = "United States", CountryCode = "US" };
        var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
        var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
        var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
        var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
        var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };
        repo.Add<Country>(c1);
        repo.Add<Country>(c2);
        repo.Add<Country>(c3);
        repo.Add<Province>(p1);
        repo.Add<Province>(p2);
        repo.Add<Province>(p3);
        repo.Save();

    However, the rest of the repository implementation relies heavily on LINQ:

        IQueryable<T> Query();
        IList<T> Find(Expression<Func<T,bool>> predicate);
        T Get(Expression<Func<T,bool>> predicate);
        T First(Expression<Func<T,bool>> predicate);
        //... and so on

    This repository pattern worked fantastically for Entity Framework and offered pretty much a one-to-one mapping of the methods available on DbContext/DbSet. But given the slow uptake of LINQ by data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext?

    I attempted to write a PetaPoco version of the repository, but PetaPoco doesn't support LINQ expressions, which makes a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and treat it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate.

    Is the generic repository pattern useful for anything outside of Entity Framework? If not, why would anyone use it at all instead of working directly with Entity Framework?

    Edit: The original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).
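
    For comparison, a sketch of the same interface translated to Java, which also shows where the pattern leans on LINQ: java.util.function.Predicate can only filter objects that are already in memory, whereas .NET's Expression<Func<T,bool>> can be translated into SQL, and that is exactly the capability PetaPoco lacks.

        import java.util.List;
        import java.util.function.Predicate;

        interface Repository<T, ID> {
            void add(T entity);
            T byId(ID id);
            List<T> find(Predicate<T> predicate); // in-memory only; no SQL translation
            void save();
        }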

    Read the article

  • Working with a company as a Junior Developer [closed]

    - by user1601973
    We all start our careers in some way or other. I am a college student based in North America, and I am doing my second internship with the same company I did my first internship with. I came back because the people here were always helpful and supportive. But something happened today that I wanted to share. Since I started, I have been doing documentation and that kind of work only, whereas in my first internship I actually worked alongside the developers and learned so many things. I was in a conversation with my team lead, and he asked me whether I had completed a particular piece of work. That work had slipped my mind. He was indeed kind of annoyed, and said, "You don't have to worry about it, I will figure it out". I felt so bad I was about to literally cry. I stopped eating my lunch and went on to complete the work. I always ask for work in the office, and I always try to be an asset to whoever I am working with, but this was the first time this happened. What are your thoughts on this, and should I apologise or not? I think I should.

    Read the article

  • How to Implement a Parallel Workflow

    - by Paul
    I'm trying to implement a parallel split task using a workflow system. I'm using .NET, but my process is very simple and I don't want to use WF or anything heavy like that. I've tried using Stateless. So far it was easy to set up and run, but I may be using the wrong tool for the job, because I'm not sure how you're supposed to model parallel split workflows, where multiple sub-tasks are required before you can advance to the next state but the steps don't have to be performed in any particular order. I can easily use the dynamic configuration options to check my data model manually to see whether it is in the correct state (all sub-tasks completed) and can transition to the next state, but this seems to completely break the workflow paradigm. What is the proper, orthodox way to implement a parallel split process? Thanks
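
    A minimal sketch, independent of any workflow library, of the usual AND-join bookkeeping: record completed sub-tasks in any order and permit the transition only once the required set is covered (the sub-task names are made up):

        import java.util.EnumSet;
        import java.util.Set;

        enum SubTask { REVIEW_DOCS, VERIFY_PAYMENT, CHECK_INVENTORY }

        class ParallelSplit {
            private final Set<SubTask> done = EnumSet.noneOf(SubTask.class);

            void complete(SubTask t) { done.add(t); }  // order does not matter

            boolean canAdvance() {                     // the AND-join condition
                return done.containsAll(EnumSet.allOf(SubTask.class));
            }
        }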

    Read the article

  • Using public domain source code from the JDK in my application

    - by user2941369
    Can I use source code from ThreadPoolExecutor.java, taken from JDK 1.7, considering that the following clause is at the beginning of ThreadPoolExecutor.java:

        /*
         * Written by Doug Lea with assistance from members of JCP JSR-166
         * Expert Group and released to the public domain, as explained at
         * http://creativecommons.org/licenses/publicdomain
         */

    And just before that there is also:

        /*
         * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
         */

    Read the article

  • What is the best way to deliver an OS (Linux) and a web application to a client?

    - by Fernando Costa
    After a year programming a web-based business management system, my idea has split into two different ways of doing what I'm doing. I will try to explain in the following lines.

    First I will describe my environment. Web server: Apache, nginx. Programming languages: PHP, shell script, JavaScript, SQL. Database: MySQL. Operating system: Linux, UNIX (all distros; works on Windows if configured manually). Authentication server: FreeRADIUS.

    First situation: I have my application running in the environment I just described. As my application is a SaaS app, I have my own server to run it all, and customers pay to use it as a service accessed through a web browser.

    Second situation: the same as before, but with one big difference: everything (the environment) is installed at the customer's site, so I would need to encrypt or obfuscate all my code (including the PHP and shell scripts). I think this situation is the more difficult one, but I would like to hear it from different points of view.

    Read the article

  • Consultant - Google Cloud Endpoints

    - by Marc M.
    How does one go about finding a consultant to hire for a few hours for particular technologies? Let's say Google Cloud Endpoints: basically a professional in that library who can answer anything I need to know about it. How much might this cost? Can you hire them for only a couple of hours at a time, or even for a 20-minute phone call, like a lawyer? P.S. I could use someone on call for Objectify questions too.

    Read the article

  • The cost of a longer delay between development and QA

    - by Neil N
    At my current position, QA has become a bottleneck. We have had the unfortunate occurrence of features being held out of the current build so that QA could finish testing. This means features that are done being developed may not get tested for two to three weeks after the developer has already moved on. With dev moving faster than QA, this time gap is only going to get bigger. I keep flipping through my copy of Code Complete, looking for a "hard data" snippet showing that the cost of fixing a defect grows exponentially the longer it exists. Can someone point me to some studies that back up this concept? I am trying to convince the powers that be that the QA bottleneck is a lot more costly than they think.

    Read the article

  • Would you use (a dialect of) LISP for a real-world application? Where and why?

    - by Anto
    LISP (and dialects such as Scheme, Common Lisp, and Clojure) hasn't gained much industry support, even though its dialects are quite decent programming languages. (At the moment, though, it seems they are gaining some traction.) But that is not directly related to the question, which is: would you use a LISP dialect for a production program? What kind of program, and why? Uses of the kind where LISP is integrated into some other code (e.g. C) count as well, but please note in your answer if that is what you mean. Broad concepts are preferred, but specific applications are okay as well.

    Read the article

  • How does the "Fourth Dimension" work with arrays?

    - by Questionmark
    Abstract: As I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically: the 1st would be represented by a line, the 2nd by a square, and the 3rd by a cube. Simple enough until we get to the 4th: it is kind of hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time.

    The question: Now, that is all fine with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: how does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, and higher-dimensional arrays in many different programming languages, but I want to know how that works.
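
    In memory there is no geometry at all: an N-dimensional array is either nested references or one flat block indexed by arithmetic, with one stride per dimension. A small Java illustration:

        public class FourD {
            public static void main(String[] args) {
                int a = 2, b = 3, c = 4, d = 5;

                // Nested view: an array of arrays of arrays of arrays.
                int[][][][] nested = new int[a][b][c][d];
                nested[1][2][3][4] = 42;

                // Flat view: the same 120 elements in one block, found by strides.
                int[] flat = new int[a * b * c * d];
                int i = 1, j = 2, k = 3, l = 4;
                flat[((i * b + j) * c + k) * d + l] = 42;
            }
        }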

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I've gathered so far:

    - One can already do syntax highlighting with only the list of tokens: numbers and operators get coloured accordingly, and keywords too.

    - Auto-formatting (indenting) should also be possible (see the sketch below). How? Specify for each token type how many white spaces or newline characters should follow it. Also, while printing tokens, maintain an alignment variable: when the code printer reads "{" it increments the variable by 1, and it decrements it by 1 for "}"; whenever it starts printing on a new line, it aligns according to this variable.

    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).

    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").

    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls which: if one can identify the body of a function, then one can also search it for mentions of other functions' names.

    - Gathering statistics about the code, like the number of lines, instructions, or subroutines.

    EDIT: I clarified why I think some of these processes are possible. As I read comments and responses, I realise that the answer depends very much on the language I'm parsing.
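
    A sketch of the auto-indent idea from the list above, assuming tokens arrive as plain strings and printing one token per line for simplicity: a depth counter driven by the brace tokens decides each line's indentation (this is the "alignment variable").

        import java.util.List;

        class CodePrinter {
            static void print(List<String> tokens) {
                int depth = 0;
                for (String tok : tokens) {
                    if (tok.equals("}")) depth--;  // a close brace dedents itself
                    System.out.println("    ".repeat(Math.max(0, depth)) + tok);
                    if (tok.equals("{")) depth++;  // an open brace indents what follows
                }
            }
        }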

    Read the article

  • Are all languages basically the same?

    - by Anirudh
    Recently, I had to understand the design of a small program written in a language I had no idea about (ABAP, if you must know). I could figure it out without too much difficulty. I realize that mastering a new language is a completely different ball game, but purely understanding the intent of code (specifically production-standard code, which is not necessarily complex) in any language is straightforward if you already know a couple of languages (preferably one procedural/OO and one functional). Is this generally true? Are all programming languages made up of similar constructs, like loops, conditional statements, and message passing between functions? Are there non-esoteric languages that a typical Java/Ruby/Haskell programmer would not be able to make sense of? Do all languages have a common origin?

    Read the article

  • Real-time chat in Ruby on Rails

    - by Skydreamer
    First, I'm sorry because I know this question has been asked many times, but I'm still looking for the answer to my problem. I want to implement real-time chat for my Rails app, but I can't really host the server that handles the sockets. I've tried Faye, but it needs a server. I've also heard of Pusher, but it's limited to 20 users at a time on the chat, and I can't guarantee there won't be more. I've thought of IRC, but I don't think I can really embed it into a Rails app; maybe it needs sockets too... So here's my problem: can I implement real-time chat without owning a server? What can you advise? Thank you for your answers.

    Read the article

  • Help us with our git workflow

    - by Brandon Cordell
    We have a web application that gets deployed to multiple regions around our state, with one instance of the application per region. We maintain staging and production (master) branches in our repository, but we were wondering what the best way of maintaining each instance's codebase is. The core is similar, but we have to give each region the ability to make specific requests that may not make it into the core of the application. Right now we have branches for each region, like region_one_staging and region_one_production. At the rate we're growing, we'll have hundreds of branches within the next few years. Is there a better way to do this?

    Read the article

  • Help migrating from VB style programming to OO programming [closed]

    - by Agent47DarkSoul
    Being a hobbyist Java developer, I quickly took to OO programming and understood its advantages over the procedural C code that I wrote in college. But I couldn't grasp VB's event-based code (weird, right?). The bottom line is that OOP came naturally to me. Currently I work in a small development firm writing C# applications. My peers here are a bit attached to the VB style of programming: most of the C# code written is VB6 event-handling code in C#'s skin. I tried explaining OOP and its advantages to them, but it wasn't clear to them, maybe because I have never been much of a VB programmer myself. So can anybody suggest resources (books, web articles) on how to migrate from VB-style to OO-style programming?

    Read the article

  • Start with open source desktop application and move to iPhone/Android app

    - by user92356
    I'm a high schooler and I am competing in an open source software development competition. The entry must be a desktop application that runs on either Windows or Linux. I have a great idea for the open source desktop app, and I wanted to know whether I could take it further, port it to the iPhone or Android platform, and make money from it (preferably through a $0.99 price, not ads). I read somewhere that certain open source licenses allow me to do this... am I correct?

    Read the article

  • Is it a good pattern that no object should know more than it needs to know?

    - by Jim Thio
    I am implementing a view controller class. The view controller gets an NSNotification when the grabbing class starts or finishes updating. I have two choices: I can make the grabbing class provide a public read-only property, so all other classes can check whether it is still updating, or I can have the view controller listen for two different events, one for updating starting and one for updating finishing. The truth is, the view controller doesn't need to know whether the grabbing class is still updating at any other time, so I am thinking that creating two events would be the better way to go. What do you think?
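
    A sketch of the two-events option in Java terms, with a listener interface standing in for NSNotification (names are illustrative): the grabber broadcasts start and finish and keeps its internal state to itself.

        import java.util.ArrayList;
        import java.util.List;

        interface UpdateListener {
            void updateStarted();
            void updateFinished();
        }

        class Grabber {
            private final List<UpdateListener> listeners = new ArrayList<>();

            void addListener(UpdateListener l) { listeners.add(l); }

            void update() {
                listeners.forEach(UpdateListener::updateStarted);
                // ... fetch and process data here ...
                listeners.forEach(UpdateListener::updateFinished);
            }
        }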

    Read the article
