Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 206/563

  • How to start competitive programming?

    - by Vaibhav Agarwal
    I have been practicing coding for a while, but the problem is that it takes me a lot of time to write a solution to a problem. Can competitive programming help me improve this? If yes, how should I start, and on which site, such as TopCoder? Obviously I won't be able to solve very hard problems for now, so what should I do? If no, what else should I do? I have another problem too: I want to learn coding, but I feel that I am not very good at it, and it keeps bugging me from the inside. What should I do? I know some people may not find this question informative, but please at least allow me to get an answer.

    Read the article

  • Side effect-free interface on top of a stateful library

    - by beta
    In an interview where John Hughes talks about Erlang and Haskell, he has the following to say about using stateful libraries in Erlang: "If I want to use a stateful library, I usually build a side effect-free interface on top of it so that I can then use it safely in the rest of my code." What does he mean by this? I am trying to think of an example of how this would look, but my imagination and/or knowledge is failing me.
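
    For illustration only (this example is not from the interview; the Accumulator class below is a made-up stand-in for whatever stateful library is being wrapped), a minimal C++ sketch of the idea might look like this: the stateful object is created, driven, and discarded entirely inside one function, so callers only ever see a value and never observe the mutation.

        #include <vector>

        class Accumulator {                      // hypothetical stateful library class
        public:
            void add(int x) { total_ += x; }     // mutates internal state
            int total() const { return total_; }
        private:
            int total_ = 0;
        };

        // Side effect-free facade: the same input always yields the same output,
        // and no state escapes the function.
        int sumWithLibrary(const std::vector<int>& values) {
            Accumulator acc;                     // state lives and dies here
            for (int v : values) acc.add(v);
            return acc.total();
        }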

    Read the article

  • Avoid GPL violation by moving library out of process

    - by Andrey
    Assume there is a library that is licensed under the GPL, and I want to use it in a closed-source project. I would do the following:
    1. Create a small wrapper application around the GPL library that listens on a socket, parses messages, calls the GPL library, and returns the results.
    2. Release the wrapper's sources (to comply with the GPL).
    3. Create a client for this wrapper in my main application and don't release its sources.
    I know that this adds huge overhead compared to static/dynamic linking, but I am interested in the theoretical side.
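
    A minimal sketch of the wrapper process described above (assumptions: POSIX sockets, a one-number-per-request protocol, and gpl_compute() as a hypothetical stand-in for the GPL library's entry point - none of this is from the question):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdlib>
        #include <string>

        // Stand-in for the real GPL entry point so the sketch compiles on its own.
        static int gpl_compute(int input) { return input * 2; }

        int main() {
            int server = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            addr.sin_port = htons(5555);
            bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
            listen(server, 1);

            for (;;) {
                int client = accept(server, nullptr, nullptr);
                char buf[64] = {};
                if (read(client, buf, sizeof(buf) - 1) > 0) {
                    // The GPL code is touched only inside this separate process.
                    std::string reply = std::to_string(gpl_compute(std::atoi(buf))) + "\n";
                    write(client, reply.c_str(), reply.size());
                }
                close(client);
            }
        }

    The closed-source application would then open a TCP connection to this process and exchange messages instead of linking against the GPL code.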

    Read the article

  • Just interviewed, turned down, now got an email asking to chat with recruiter. No response. What should I do? [closed]

    - by Lambert
    I was turned down for an internship after two interviews with a prominent company, and only a couple of days later I was asked when I had 10-15 minutes to chat today. Of course I would love to, so I replied within ten minutes of their email, let them know what times I was available, and asked when would be best and whether I should go somewhere or expect a phone call. There has been no reply since yesterday afternoon, even though the recruiter wanted to talk to me today. I don't want to lose this opportunity, but I have no way to contact the recruiter other than by email, and she hasn't responded to my emails from yesterday. What's the best thing I can do (preferably within the next few hours!) to get the job? Is the job even the likely reason she emailed me, or is a different reason more likely? Any ideas?

    Read the article

  • What does "general purpose system" mean for Java SE Embedded?

    - by Majid Azimi
    The Oracle website says this about the Java SE Embedded license: development is free, but royalties are required upon deployment on anything other than general purpose systems. What does "general purpose system" mean here? We have a sensor network around the country. In each box we have installed, there is a microcontroller-based board that gets data from the environment and sends it over a serial port to an ARM-based embedded board. On that board there is a Java process which reads the data and submits it to our central server using JMS. Is this categorized as a general purpose system? Sorry for asking this here; we are in Iran and there is no Oracle office here to ask.

    Read the article

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very widely, so I would love to hear any and all opinions on it. To be honest, though, I have been thinking of it mainly in terms of which platform is most profitable, and that is one of my main sticking points. I realize that I'm not guaranteed a single cent and that success is never guaranteed, but I'm going into this hoping to make something out of it both financially and for my own interest.
    I know that iOS gets a lot of attention on this front, but Android commands much more market share. However, I know there are drawbacks to Android too, whether in the actual development process and programming (though I've heard conflicting reports on this, such as how easy or difficult it is to handle different screen resolutions across devices) or in an app ecosystem that is flooded. iOS's app ecosystem, on the other hand, has been described as too saturated and therefore harder to compete in.
    Since Windows Phone has fewer apps than either of those two, might that be the best place to start in order to be closer to the ground floor of the store and get noticed more? Less saturation = better chances of sales or of differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly, but at least in concept)? Would fewer users on the platform be offset by more exposure due to less competition, translating into better sales? Plus, I know Microsoft is in it for the long haul, so I'm not too fearful of something like WebOS going away. Obviously RIM isn't very popular nowadays, but I read a recent article saying that BlackBerry actually has the apps that make the most money; any thoughts on that? http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/
    Again, this is all I've heard or known about, so if there's anything to add or correct here, please do. In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for, and I'm open to trying an Android, but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy; I'm so close to just getting an iPhone 5. The reason this affects my phone choice is that I'd want to carry the apps I write with me, so that I can pull my phone out and show people without carrying a second device. That's why I'd like my personal phone to match the main platform I'm developing for. Of course, I will likely expand to other platforms if I have any decent success, but the one I target now would double as the personal phone I carry around, serving in a sense as a marketing tool for showing people my apps when the opportunity presents itself.
    So what's the best mobile platform to choose, especially with regard to profitability? As I said, this would greatly influence my choice of personal phone. Thanks in advance, and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out, but making some revenue is key among them.
    For some background, I have done software development and know programming-language concepts, so I'm not entirely new to this, and I understand the idea of being familiar with the fundamentals so that the skill translates across languages. I'm just having difficulty choosing my first main mobile platform based on the criteria outlined above.

    Read the article

  • Distributed Transaction Framework across webservices

    - by John Petrak
    I am designing a new system that has one central web service and several site web services, spread across the country and some overseas. Some data must be identical on all sites, so my plan is to maintain that data in the central web service and then "sync" it to the sites. This includes inserts, edits and deletes. I see a problem when deleting: if one site has used the record, then I need to undo the delete that has already happened on the other servers. This led me to the idea that I need some sort of transaction system that can work across different web servers. Before I design one from scratch, I would like to know if anyone has come across this sort of problem, and whether there are any frameworks or even design patterns that might aid me.
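
    As an illustration only (not from the question - the record IDs, site names, and the send_delete / send_undelete transport calls below are all made-up stand-ins), a minimal sketch of the compensating-delete idea described above might look like this: the central service pushes the delete to each site, and if any site reports that the record is still in use, it re-creates the record on the sites that already applied the delete.

        #include <iostream>
        #include <string>
        #include <vector>

        // Hypothetical transport calls; a real system would make web-service requests here.
        static bool send_delete(const std::string& site, const std::string& recordId) {
            std::cout << "delete " << recordId << " on " << site << "\n";
            return site != "site-overseas";          // pretend one site still uses the record
        }
        static void send_undelete(const std::string& site, const std::string& recordId) {
            std::cout << "undo delete of " << recordId << " on " << site << "\n";
        }

        // Push a delete to every site; if any site refuses, compensate on the ones
        // that already applied it so all sites stay identical.
        static bool propagate_delete(const std::string& recordId,
                                     const std::vector<std::string>& sites) {
            std::vector<std::string> applied;
            for (const auto& site : sites) {
                if (send_delete(site, recordId)) {
                    applied.push_back(site);
                } else {
                    for (const auto& done : applied) send_undelete(done, recordId);
                    return false;                    // delete rolled back everywhere
                }
            }
            return true;                             // delete committed everywhere
        }

        int main() {
            propagate_delete("customer-42", {"site-north", "site-south", "site-overseas"});
        }

    This is a compensating-action (saga-style) approach rather than a true distributed transaction, which is often enough when the operations have natural undo steps.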

    Read the article

  • Which of these algorithms is best for my goal?

    - by JonathonG
    I have created a program that restricts the mouse to a certain region based on a black/white bitmap. The program is 100% functional as-is, but it uses an inaccurate, albeit fast, algorithm for repositioning the mouse when it strays outside the area. Currently, when the mouse moves outside the area, what happens is basically this:
    1. A line is drawn between a pre-defined static point inside the region and the mouse's new position.
    2. The point where that line intersects the edge of the allowed area is found.
    3. The mouse is moved to that point.
    This works, but only works perfectly for a perfect circle with the pre-defined point set in the exact center. Unfortunately, that will never be the case: the application will be used with a variety of rectangles and irregular, amorphous shapes, and on such shapes the point where the drawn line intersects the edge will usually not be the closest point on the shape to the mouse. I need a new algorithm that finds the point on the edge of the allowed area closest to the mouse's new position. I have several ideas about this, but I am not sure of their validity, in that they may carry far too much overhead. While I am not asking for code, it might help to know that I am using Objective-C / Cocoa, developing for OS X, as the language being used might affect the efficiency of potential methods. My ideas are:
    1. Using a bit of trigonometry to project lines would work, but it would require some kind of intense algorithm to test every point on every line until it found the edge of the region. That seems too resource intensive, since there could be something like 200 lines, each of which would have as many as 200 pixels to check for black/white.
    2. Using something like an A* pathing algorithm to find the shortest path to a black pixel. However, A* seems resource intensive, even though I could probably restrict it to checking roughly in one direction. It also seems like it would take more time and effort than I have available for this small portion of the much larger project I am working on; correct me if I am wrong and it would not be a significant amount of code (100 lines or thereabouts).
    3. Mapping the border of the region before the application begins running the event tap loop. I think I could accomplish this by using my current line-based algorithm to find an edge point, then checking the 8 pixels around that pixel, finding the next border pixel in one direction, and continuing until it comes back to the starting pixel. I could store that data in an array for the entire duration of the program, and have the mouse re-positioning method check the array for the border pixel closest to the mouse's target position. That last method would presumably execute its initial border mapping fairly quickly (it would only have to map between 2,000 and 8,000 pixels, which means 8,000 to 64,000 checked, and I could even store the data permanently to make launching faster). However, I am uncertain how much overhead it would take to scan through that array for the shortest distance on every single mouse-move event. I suppose I could restrict the number of array elements that are checked, starting with the intersection point from my original line-based algorithm, and raise or lower that number to experiment with the overhead/accuracy trade-off.
    Please let me know if I am overthinking this and there is an easier way that will work just fine, or which of these methods would be able to execute something like 30 times per second to keep mouse movement smooth, or if you have a better/faster method. I've posted relevant parts of my code below for reference, and included an example of what the area might look like. (I check the color value against a loaded bitmap that is black/white.)

        // This part of my code runs every single time the mouse moves.
        CGPoint point = CGEventGetLocation(event);
        float tX = point.x;
        float tY = point.y;

        if (is_in_area(tX, tY, mouse_mask)) {
            // target is inside the O.K. area, do nothing
        } else {
            CGPoint target;
            // point inside restricted region:
            float iX = 600; // inside x
            float iY = 500; // inside y
            // delta to midpoint between iX,iY and tX,tY
            float dX;
            float dY;
            float accuracy = .5; // accuracy to loop until reached
            do {
                dX = (tX - iX) / 2;
                dY = (tY - iY) / 2;
                if (is_in_area((tX - dX), (tY - dY), mouse_mask)) {
                    iX += dX;
                    iY += dY;
                } else {
                    tX -= dX;
                    tY -= dY;
                }
            } while (abs(dX) > accuracy || abs(dY) > accuracy);
            target = CGPointMake(roundf(tX), roundf(tY));
            CGDisplayMoveCursorToPoint(CGMainDisplayID(), target);
        }

    Here is is_in_area:

        bool is_in_area(NSInteger x, NSInteger y, NSBitmapImageRep *mouse_mask) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSUInteger pixel[4];
            [mouse_mask getPixel:pixel atX:x y:y];
            if (pixel[0] != 0) {
                [pool release];
                return false;
            }
            [pool release];
            return true;
        }
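
    As a rough illustration of the third idea above (this is a C++ sketch rather than the poster's Objective-C, and the allowed() callback is a made-up stand-in for the bitmap test), the border could be collected in one pass over the mask and then searched linearly for the nearest border pixel on each mouse move:

        #include <utility>
        #include <vector>

        // allowed(x, y) is assumed to return false for coordinates outside the bitmap.
        std::vector<std::pair<int, int>> map_border(int width, int height,
                                                    bool (*allowed)(int, int)) {
            std::vector<std::pair<int, int>> border;
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    if (!allowed(x, y)) continue;
                    // a pixel is on the border if any 4-neighbour is outside the region
                    if (!allowed(x - 1, y) || !allowed(x + 1, y) ||
                        !allowed(x, y - 1) || !allowed(x, y + 1))
                        border.emplace_back(x, y);
                }
            return border;
        }

        // Linear scan for the stored border pixel closest to the target position.
        // Assumes border is non-empty.
        std::pair<int, int> nearest_border_pixel(float tX, float tY,
                                                 const std::vector<std::pair<int, int>>& border) {
            std::pair<int, int> best = border.front();
            float bestDist = 1e30f;
            for (const auto& p : border) {
                float dx = p.first - tX, dy = p.second - tY;
                float d = dx * dx + dy * dy;   // squared distance is enough for comparison
                if (d < bestDist) { bestDist = d; best = p; }
            }
            return best;
        }

    Even at 8,000 border pixels, a scan like this is only a few multiplications and comparisons per pixel, so running it 30 times per second should be comfortably within budget; a spatial index (for example a grid of buckets) could cut it further if profiling shows otherwise.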

    Read the article

  • Liskov substitution and abstract classes / strategy pattern

    - by Kolyunya
    I'm trying to follow the LSP in practical programming, and I wonder whether different constructors in subclasses violate it. It would be great to hear an explanation instead of just yes/no. Thanks much! P.S. If the answer is no, how do I make different strategies with different inputs without violating the LSP?

        class IStrategy {
        public:
            virtual void use() = 0;
        };

        class FooStrategy : public IStrategy {
        public:
            FooStrategy(A a, B b) { c = /* some operations with a, b */; }
            virtual void use() { std::cout << c; }
        private:
            C c;
        };

        class BarStrategy : public IStrategy {
        public:
            BarStrategy(D d, E e) { f = /* some operations with d, e */; }
            virtual void use() { std::cout << f; }
        private:
            F f;
        };

    Read the article

  • C# or .Net features to cut off assuming no backward compatibility needed?

    - by Gulshan
    Any product or framework evolves, mainly to catch up with the needs of its users, leverage new computing power, and simply get better. Sometimes the primary design goal also changes with the product. C# and the .NET Framework are no exception: the present-day 4th version is very different from the first one. But one thing stands as a barricade to this evolution - backward compatibility. In most frameworks/products there are features that would have been cut if there were no need to support backward compatibility. In your view, what are these features in C#/.NET? Please mention one feature per answer.

    Read the article

  • Should I use a Class or Dictionary to Store Form Values

    - by Shamim Hafiz
    I am working on a C# .NET application where I have a Form with lots of controls, and I need to perform computations depending on the values of those controls. Therefore, I need to pass the Form values to a function; inside that function, several helper functions will be called depending on the control element. I can think of two ways to pass all the Form values: (i) save everything in a Dictionary and pass the Dictionary to the function, or (ii) have a class with attributes corresponding to each of the Form elements. Which of these two approaches, or any other, is better?
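
    For illustration only (the question is about C#, so this C++ sketch simply uses std::map where the question says Dictionary, and the field names are made up), the two options side by side might look like this. The class/struct version gives the helper functions typed, compiler-checked fields, while the map version is more flexible but pushes key names and conversions to runtime:

        #include <map>
        #include <string>

        // Option (i): everything in one dictionary; keys and types are checked only at runtime.
        using FormDictionary = std::map<std::string, std::string>;

        double totalFromDictionary(const FormDictionary& form) {
            return std::stod(form.at("quantity")) * std::stod(form.at("unitPrice"));
        }

        // Option (ii): one attribute per form control, checked at compile time.
        struct FormValues {
            int         quantity;
            double      unitPrice;
            std::string customerName;
        };

        double totalFromValues(const FormValues& form) {
            return form.quantity * form.unitPrice;
        }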

    Read the article

  • Where can I find programming work online?

    - by explorest
    I have set up an ideal, quiet, interruption-free environment at home, and I am extremely productive here. I don't want to leave my home, not my room, not even my couch. How and where do I find work online so that I don't have to travel to it? Kindly post about your own personal experiences. Have you done it full time from home? Where and how? I am outside the United States, in a third-world country, so lower pay is not an issue; the issue is the work environment.

    Read the article

  • What to do when the programming activity becomes a problem?

    - by gablin
    I once saw a program (can't remember which) that talked about people "experiencing flow" when they are doing something they are passionate about. When "in flow", they tend to lose track of time and their surroundings, concentrating only on the activity at hand. This happens a lot for me when I program, particularly when I face a problem: I refuse to give up until it's solved. Hours just rush by, I forget to eat lunch, dinner gets pushed far into the evening, and when I finally look at the clock it's way into the wee hours of the night and I will only get a few hours of sleep before having to rise early in the morning. (This is not to say that I'm in flow only when facing a problem - but I find it particularly hard to stop programming and step back when there's something I can't solve immediately.) I love programming, but I hate it when it disrupts my normal routines (most importantly eating and sleeping patterns). And sitting still for so many hours, staring at a screen, is not healthy. Please, any ideas on how I can get my rampant programming activity under control?

    Read the article

  • Evolution in coding standards, how do you deal with it?

    - by WardB
    How do you deal with evolution of the coding standards / style guide on a project with an existing code base? Let's say someone on your team discovers a better way of doing object instantiation in the programming language. It's not that the old way is bad or buggy; it's just that the new way is less verbose and feels much more elegant, and all team members really like it. Would you change all existing code? What if your codebase is 500,000+ lines of code - would you still want to change all of it? Or would you only let new code adhere to the new standard, and basically lose consistency? How do you deal with an evolution of the coding standards on your project?

    Read the article

  • How do I handle a Controller that's not controlling a specific Model?

    - by Ben Brocka
    I've got a nice MVC setup going, but my website requires some views that don't map directly to a model. Specifically, I've got some generic Reports users need to run, and now I'm creating a utility for comparing some system configurations. Right now the logic is crammed into a Reports Controller, and I'm starting a Comparison Controller, but this feels like a big abuse of the system. Both controllers pull data from an assortment of different Models, and they're only related based on what the user is doing. Reports are run from the Reports Controller, and their views are all grouped together in the file system/URL structure. Is this an acceptable use of the Controller paradigm? I can't think of a better way to structure my Controllers, and making a Controller for each model I use for reports, etc. doesn't seem like a good idea; I'd end up with one Controller/Model/View per report or comparison, vastly complicating the apparent structure of my site.

    Read the article

  • Just started a job with Scrum. Something seems to be missing. I am new to Scrum

    - by punkouter
    The code is a complete mess: a combination of classic ASP and ASP.NET. The scrum consists of us patching up the big mess or making additions to it. We are all too busy doing that to start a rewrite, so I am wondering: where is the part in Scrum where the developers have the power to say that enough is enough and demand time to start the big rewrite? We seem to be in an endless loop of just patching old code with 'stories'. Things are being run by the non-technical people, who seem to have no desire to push for a rewrite because they don't understand how bad the code base has gotten. So who is in charge of making this big rewrite happen? The devs? The scrum master? The current strategy is to just find time and do it ourselves without the higher-ups involved, since they are mostly to blame for the current mess we are in. <insert rant about non-tech people telling tech people what to do here>

    Read the article

  • How can we have a person allot and track tasks in agile development?

    - by vignesh
    I understand that an Agile team should be self-organized and self-driven, but is there a provision for having someone who will allot tasks to developers and ensure that all user stories are completed on time? For example, if there are two people on an Agile team who are not self-motivated to take up tasks and will work only when a task is assigned to them with a deadline, how can we deal with this in Agile? The problem I face is that no one is fixing deadlines for the tasks, and the team has been under-delivering for the last two sprints. It would be better if we could have someone who can fix deadlines. Is there a provision for this in Agile?

    Read the article

  • Should a Parent with Children have a DefaultChild, or should a Child have a Default property?

    - by Stijn
    Which of the following two models makes more sense? I'm leaning towards the first one because there can only be one default child. The examples are in C# but I think it can apply to other languages too. Here DefaultChild holds one of the items in Children:

        class Parent
        {
            int ID { get; set; }
            Child DefaultChild { get; set; }
            IEnumerable<Child> Children { get; set; }
        }

        class Child
        {
            int ID { get; set; }
        }

    Here one of the items in Children has Default set to true while the others have it set to false:

        class Parent
        {
            int ID { get; set; }
            IEnumerable<Child> Children { get; set; }
        }

        class Child
        {
            int ID { get; set; }
            bool Default { get; set; }
        }

    A concrete situation: a User in our system has one or more Customers attached. When logging in, if said User has a default Customer, they are immediately working under this Customer. If they don't, they have to select a Customer to work under. While logged in, they can switch between Customers.

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me, this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related stuff and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of those things. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: How broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • Finish feature reverted commits from develop

    - by marco-fiset
    I am using git as a version control system, with git-flow as the branching model. I started a feature branch some weeks ago in order to keep the system in a clean state while developing that feature. The main development continued on the develop branch, and changes from develop were merged periodically into the feature to keep it as up to date as possible. However, when the feature was finished, I used git-flow's finish feature to merge it back into develop. The merge completed successfully, but then I found out that some of the commits I had made in develop were reverted by the merge commit! Nowhere in develop or in the feature branch were these changes reverted; I can't see any commit that overwrote them. I just can't find anything. The only theory I have for the moment is that git is failing on me, but that would be extremely unlikely. Maybe I made some kind of wrong manipulation that produced this situation? I can trace back in the history to when the commit was made, and I can see that its changes were reverted by the merge commit. Nowhere in the branch do I see a commit that reverts those changes, yet they were reverted. How is this even possible?

    Read the article

  • RESTful Java-based web services in JSON + HTML5 and JavaScript, no templates (JSP/JSF/FreeMarker), aka fat/thick client

    - by Ismail Marmoush
    I have this idea of building a website that serves JSON data through a RESTful services framework, without using any template engine like JSP/JSF/FreeMarker - just pure HTML5 and JavaScript libraries. What do you think of the pros and cons of such a design? For elaboration and brainstorming, a friend of mine raised the following concerns:
    - It sounds like GWT.
    - This way you won't have any control over your service API; for example, say you want to charge the user per request - how will you handle it?
    - How will you control your design and themes?
    - What about the first request the browser makes? That is not easy with this approach.
    - All of the user's requests will come with the "Accept" header "application/json", so how will you separate a browser from an abuser? This way all of your public APIs can be used abusively by third-party apps, and you won't be able to lock them down, since you can't block a normal user's browser.
    - We won't use compiled HTML anyway, but maybe something like FreeMarker; in that case you won't expose any of your JSON resources to unauthorized users, only the HTML, since any browser can access that.
    - All the well-known first-class services do this - can you send me links to what you've read?
    - Keep in mind DOM-based XSS; it will be a nightmare, of course, if what you say is applicable.

    Read the article

  • Quantifying the value of refactoring in commercial terms

    - by Myles McDonnell
    Here is the classic scenario: the dev team builds a prototype, business management likes it and puts it into production, and the dev team now has to continue delivering new features while at the same time paying off the technical debt accrued when the code base was a prototype. My question is this (forgive me, it's rather open-ended): how can the value of the refactoring work be quantified in commercial terms? As developers we can clearly understand and communicate the value in technical terms, such as the removal of code duplication, the simplification of an object model, and so on. But this means little to an executive focused on the commercial elements. What will mean something to such an executive is the dev team being able to deliver requirements at a faster velocity. Just making that statement without any metrics that clearly quantify the return on investment (increased velocity in return for the resources allocated to refactoring) carries little weight. I'm interested to hear from anyone who has had experience, positive or negative, in relation to the above.
    EDIT: Thanks for the responses so far, all of which I think are good. What I want to develop is a metric that proves (or disproves!) these statements - a report that ties velocity to refactoring and shows a positive effect.

    Read the article

  • Prefer algorithms to hand-written loops?

    - by FredOverflow
    Which of the following do you find more readable? The hand-written loop:

        for (std::vector<Foo>::const_iterator it = vec.begin(); it != vec.end(); ++it)
        {
            bar.process(*it);
        }

    Or the algorithm invocation:

        #include <algorithm>
        #include <functional>

        std::for_each(vec.begin(), vec.end(),
                      std::bind1st(std::mem_fun_ref(&Bar::process), bar));

    I wonder if std::for_each is really worth it, given that such a simple example already requires so much code. What are your thoughts on this matter?
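
    For comparison only (not part of the question - the Foo and Bar definitions below are stubs so the snippet compiles on its own), if a C++11 compiler is available the same call can be written with a lambda or a range-based for, which removes most of the binder boilerplate:

        #include <algorithm>
        #include <vector>

        struct Foo {};
        struct Bar { void process(const Foo&) {} };   // stand-ins for the question's types

        int main() {
            std::vector<Foo> vec(3);
            Bar bar;

            // C++11 lambda: the intent stays visible without bind1st/mem_fun_ref
            std::for_each(vec.begin(), vec.end(),
                          [&bar](const Foo& foo) { bar.process(foo); });

            // Range-based for: often the most readable of the three
            for (const Foo& foo : vec)
                bar.process(foo);
        }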

    Read the article

  • Does one's native spoken language affect quality of code?

    - by Xepoch
    There is a school of thought in linguistics that problem solving is very much tied to the syntax, semantics, grammar, and flexibility of one's native spoken language. Working with various international development teams, I can clearly see a mental culture (if you will) in the codebase. Programming language aside, the coding of my German colleagues is quite different from that of my colleagues in India. Likewise, code from Middle America is distinctly different from code from Coastal America (actually, IBM noticed this years ago). Do you notice with your international colleagues (from ANY country) that coding style and problem solving are in line with native tongues?

    Read the article

  • Can't understand example using continuations

    - by Matt Fenwick
    I'm reading the R6RS Scheme report and am confused by the explanation of continuations (I find it too dense and lacking in examples for a beginner). What is this code doing, and how does it evaluate to 4? Why does call/cc want an argument that's a function of one argument? How is call/cc's argument used?

        (+ 1 (call-with-current-continuation
               (lambda (escape)
                 (+ 2 (escape 3)))))   => 4

    This example is from section 1.11 - Continuations.

    Read the article
