Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 221 of 563

  • Does MyEclipse set an implicit breakpoint in URLClassPath during debugging? [migrated]

    - by MJM
    I am a beginner with the MyEclipse IDE, version 8.6.1. My issue: when I execute my program in debug mode, MyEclipse suspends inside the sun.misc.URLClassPath class and I have to resume (by pressing F8) before my program continues. MyEclipse stops in URLClassPath with the following call stack:
        1. URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) line: 581
        2. URLClassPath$JarLoader.ensureOpen() line: 631
        3. URLClassPath$JarLoader.getJarFile(URL) line: 641
        4. URLClassPath$JarLoader.ensureOpen() line: 631
    Note: this only happens when some jar files are on my project's build path; when my application is simple the problem does not occur and execution stops first at my own first breakpoint. Why does this happen?


  • How can I approach creating an efficient algorithm for maximizing value with these specific constraints?

    - by sway
    I'm having trouble coming up with an approach that isn't n^2 for this problem. Here's a contrived, simplified version I've come up with: let's say you're a company that needs 4 employees to launch in a new city: a manager, two salespeople, and a customer support rep, and you magically know how much impact every candidate will have and how much salary they require to take the job. Your table of potential employees looks something like this:
        Name            Position       Salary    Impact
        Adam Smith      Manager        60,000    11
        Allison Brown   Salesperson    40,000     9
        Brad Stewart    Manager        55,000     9
        ...etc (thousands of records)
    What algorithmic approach can be taken to find the maximum "impact" while still filling all the positions and remaining under, say, a 200,000 budget? Thanks!
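    A knapsack-style dynamic program avoids brute force here: for each position, compute the best impact obtainable by hiring exactly the required number of candidates at every budget level, then combine the positions with one more pass over how the budget is split. The sketch below (in Java) is only an illustration of that idea; it assumes salaries are rounded to whole thousands so the budget axis stays small, and the class, method and candidate names are made up for the example.

        import java.util.*;

        public class TeamPicker {

            // One row of the candidates table; salaryK is salary in thousands.
            record Candidate(String name, String position, int salaryK, int impact) {}

            static final int NEG = Integer.MIN_VALUE / 4;   // marks "not achievable"

            // best[b] = max impact picking exactly `slots` people from `group` with total salary <= b.
            static int[] bestForGroup(List<Candidate> group, int slots, int budgetK) {
                int[][] dp = new int[slots + 1][budgetK + 1];      // dp[0][*] = 0 by default
                for (int j = 1; j <= slots; j++) Arrays.fill(dp[j], NEG);
                for (Candidate c : group) {
                    for (int j = slots; j >= 1; j--) {
                        for (int b = budgetK; b >= c.salaryK(); b--) {
                            int with = dp[j - 1][b - c.salaryK()];
                            if (with > NEG) dp[j][b] = Math.max(dp[j][b], with + c.impact());
                        }
                    }
                }
                return dp[slots];
            }

            // Combine the per-position tables: best impact filling every position within budgetK.
            static int bestTeam(Map<String, List<Candidate>> byPosition,
                                Map<String, Integer> slotsNeeded, int budgetK) {
                int[] combined = new int[budgetK + 1];             // nothing hired yet: impact 0
                for (var e : slotsNeeded.entrySet()) {
                    int[] group = bestForGroup(byPosition.getOrDefault(e.getKey(), List.of()),
                                               e.getValue(), budgetK);
                    int[] next = new int[budgetK + 1];
                    Arrays.fill(next, NEG);
                    for (int b = 0; b <= budgetK; b++)
                        for (int s = 0; s <= b; s++)
                            if (combined[s] > NEG && group[b - s] > NEG)
                                next[b] = Math.max(next[b], combined[s] + group[b - s]);
                    combined = next;
                }
                return combined[budgetK];                          // NEG means the team can't be filled
            }

            public static void main(String[] args) {
                List<Candidate> pool = List.of(
                        new Candidate("Adam Smith", "Manager", 60, 11),
                        new Candidate("Allison Brown", "Salesperson", 40, 9),
                        new Candidate("Brad Stewart", "Manager", 55, 9),
                        new Candidate("Carla Diaz", "Salesperson", 45, 8),
                        new Candidate("Dan Lee", "Support", 35, 6));
                Map<String, List<Candidate>> byPosition = new HashMap<>();
                pool.forEach(c -> byPosition.computeIfAbsent(c.position(), k -> new ArrayList<>()).add(c));
                Map<String, Integer> slots = Map.of("Manager", 1, "Salesperson", 2, "Support", 1);
                System.out.println("Best impact under 200k: " + bestTeam(byPosition, slots, 200));
            }
        }

    For a fixed budget resolution this is linear in the number of candidates per position (times the small slot count), plus a cheap budget-splitting pass per position, so it scales to thousands of records far better than comparing pairs of candidates.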


  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that follows Semantic Versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head down this path further: I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as a separate file, or inside the one bundled build, as before. If I go down this path of extracting the libraries and providing a bundled version of the final code, does this require a major version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought is no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API of the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump the first, second, or third number? Or something else?


  • WCF Keep Alive: Whether to disable keepAliveEnabled

    - by Lijo
    I have a WCF web service hosted in a load-balanced environment. I do not need any WCF session-related functionality in the service.
    Question: in which scenarios will performance be best with keepAliveEnabled = false, and in which with keepAliveEnabled = true?
    Reference (from the "Load Balancing" documentation): By default, the BasicHttpBinding sends a connection HTTP header in messages with a Keep-Alive value, which enables clients to establish persistent connections to the services that support them. This configuration offers enhanced throughput because previously established connections can be reused to send subsequent messages to the same server. However, connection reuse may cause clients to become strongly associated to a specific server within the load-balanced farm, which reduces the effectiveness of round-robin load balancing. If this behavior is undesirable, HTTP Keep-Alive can be disabled on the server using the KeepAliveEnabled property with a CustomBinding or user-defined Binding.


  • Fair 2-combinations

    - by Tometzky
    I need to fairly assign 2 experts out of x experts (x is rather small, less than 50) to each of n applications, so that:
      - each expert has the same number of applications (±1);
      - each pair of experts (2-combination of x) has the same number of applications (±1).
    It is simple to generate all 2-combinations:
        for (i = 0; i < x; i++) {
            for (j = i + 1; j < x; j++) {
                combinations.append(tuple(i, j));
            }
        }
    But to assign experts fairly I need to assign the combinations to applications in the right order. For example, with experts 0 1 2 3 4, a fair order of combinations, with the per-expert counts after each assignment, is:
        pair   counts (experts 0 1 2 3 4)
        01     1 1 0 0 0
        23     1 1 1 1 0
        04     2 1 1 1 1
        12     2 2 2 1 1
        34     2 2 2 2 2
        02     3 2 3 2 2
        13     3 3 3 3 2
        14     3 4 3 3 3
        03     4 4 3 4 3
        24     4 4 4 4 4
    I'm unable to come up with a good algorithm for this (the best I came up with is rather complicated and has O(x^4) complexity). Could you help me?
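    One standard construction that matches this kind of example is round-robin tournament scheduling (the "circle method"): it splits all x*(x-1)/2 pairs into rounds in which no expert appears twice, so walking the rounds in order keeps every pair's count within one of the others at all times and keeps expert loads within one of each other at round boundaries. The sketch below is only an illustration of that idea, not a proven optimal assignment for every prefix; it assumes a dummy expert is added when x is odd, and the variable names are made up.

        import java.util.*;

        public class FairPairs {

            // All 2-combinations of x experts, grouped into rounds where no expert repeats.
            static List<List<int[]>> circleMethod(int x) {
                int m = (x % 2 == 0) ? x : x + 1;            // add a dummy expert if x is odd
                List<List<int[]>> rounds = new ArrayList<>();
                for (int r = 0; r < m - 1; r++) {
                    List<int[]> round = new ArrayList<>();
                    for (int i = 0; i < m / 2; i++) {
                        int a = (r + i) % (m - 1);
                        int b = (i == 0) ? m - 1 : (r + m - 1 - i) % (m - 1);
                        if (a < x && b < x) round.add(new int[]{a, b});   // drop pairs with the dummy
                    }
                    rounds.add(round);
                }
                return rounds;
            }

            public static void main(String[] args) {
                int x = 5, n = 12;                           // 5 experts, 12 applications
                List<int[]> order = new ArrayList<>();
                while (order.size() < n)                     // repeat full cycles as needed
                    for (List<int[]> round : circleMethod(x))
                        for (int[] pair : round)
                            if (order.size() < n) order.add(pair);
                int[] load = new int[x];
                for (int[] p : order) {
                    load[p[0]]++;
                    load[p[1]]++;
                    System.out.println(Arrays.toString(p) + "   loads " + Arrays.toString(load));
                }
            }
        }

    Generating the schedule is O(x^2) per cycle, and because every full cycle contains each pair exactly once, pair counts can never drift apart by more than one.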


  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the Unit of Work (UoW) pattern, since the consumer of the repository may perform several operations that I want to commit at once. After reading several articles on the matter, I still don't get how to relate these two elements; depending on the article, it is done one way or another. Sometimes the UoW is something internal to the repository:

        public class Repository
        {
            UnitOfWork _uow;

            public Repository()
            {
                _uow = IoC.Get<UnitOfWork>();
            }

            public void Save(Entity e)
            {
                _uow.Track(e);
            }

            public void SubmittChanges()
            {
                SaveInStorage(_uow.GetChanges());
            }
        }

    And sometimes it is external:

        public class Repository
        {
            public void Save(Entity e, UnitOfWork uow)
            {
                uow.Track(e);
            }

            public void SubmittChanges(UnitOfWork uow)
            {
                SaveInStorage(uow.GetChanges());
            }
        }

    Other times, it is the UoW that references the repository:

        public class UnitOfWork
        {
            Repository _repository;

            public UnitOfWork(Repository repository)
            {
                _repository = repository;
            }

            public void Save(Entity e)
            {
                this.Track(e);
            }

            public void SubmittChanges()
            {
                _repository.Save(this.GetChanges());
            }
        }

    How are these two elements related? The UoW tracks the elements that need to be changed and the repository contains the logic to persist those changes, but... who calls whom? Does the last option make more sense? Also, who manages the connection? If several operations have to be done through the repository, I think using the same connection, and even the same transaction, is sounder, so maybe putting the connection object inside the UoW, and the UoW inside the repository, makes sense as well. Cheers


  • Where can you find your first customers as a freelancer?

    - by Adam Smith
    I want to start doing freelance work, but no matter how I look at it, it seems like the best way to get customers and have steady work is to already be in the freelancing game. Most freelancers I've talked to have had the same customers over the years or got new customers because their satisfied clients referred them. What I'd like to know from the successful freelancers here is: how do you start doing business when you haven't yet set foot in freelancing? I want to start small, creating websites that won't require me to hire other people, other than maybe a designer I already know. (I'd like to create desktop applications as well, but I think I should keep that for later when I'm more experienced.) I thought about localized Google ads or visiting companies and meeting the people in charge there, but I wouldn't know which kind of businesses to look for or whether that's even a good way to approach this. Anyone care to share their personal startup experiences / advice that can help future freelancers?


  • What data is available regarding cowboy coding?

    - by Christine
    I'm not a programmer; I'm a freelance writer and researcher. I have a client who is looking for stats on certain "threats" to the apps market in general (not any specific app store). One of them is cowboy coding: specifically, he wants to see numbers on how many apps have failed to function as intended, crashed, or been removed because of errors made by, in essence, sloppy coding. Note that I'm not here to debate the merits of cowboy coding, and whether or not it is sloppy. Is there any data about this type of development?


  • Advice needed: Software Development [closed]

    - by Hunter McMillen
    I recently graduated from college with a B.S. in Computer Science, and am now attending the same college to get an M.S. in Computer Science. I know a lot about computer science and programming, but throughout all of my coursework I never had to develop a single complete application; the projects were always relatively small (~300-500 lines of code). Basically, I am about to have these two degrees and I feel like I don't know anything about software development or design, which doesn't make a whole lot of sense. I am looking for ways to fill in the gaps in my knowledge, and I would love people's advice on these questions: 1) How do you design good software? Where do you start? 2) What makes a good software developer? Sorry for the convoluted question, but in my mind it is a convoluted situation. Thanks. Edit: Thanks everyone for your advice.


  • Why are people using C instead of C++? [closed]

    - by Darth
    Possible Duplicate: When to use C over C++, and C++ over C? Many times I've stumbled upon people saying that C++ is not always better than C. A great example would be the Linux kernel, where they simply decided to use C instead of C++ because C had better compilers at the time. But that was many years ago and a lot has changed. So the question is, why are people still using C over C++? I guess there are probably some cases (like embedded devices) where there simply isn't a good C++ compiler, or am I wrong here? What are the other cases when it is better to go with C instead of C++?


  • Is it worth learning experimental languages?

    - by Xander Lamkins
    I'm a young programmer who wants to work in the field someday. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know it is valuable to extend what I know and to learn languages that make you think differently). I took a look online to see which languages were common. Everybody knows C and C++ (even those muggles who know very little about computers in general), so I thought maybe I should push for C. C and C++ are nice, but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well, and is slow because it runs on the JVM rather than being compiled to native code. I've been a Windows developer for quite a while. I recently started using Java, but only because it was more versatile and portable. The problem is that it doesn't look like a very usable language, for these reasons: its most common use is for web applications and cellphone apps (specifically Android); as far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for - it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and inconsistent as far as computer spec support goes. Other than that it's used for servers, but heck - I don't only want to make/configure servers. The .NET languages are nice, however: people laugh if I even mention VB.NET or C# in a serious conversation; it isn't cross-platform unless you use Mono (which is still in development and has some improvements to be made); and it lacks low-level capability because, like Java with the JVM, it is run/managed by the CLR. My first thought was to learn something like C and then use it as a springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute.
    What I've looked into: Fantom looks nice. It's like a nice middleman between my two favorite platforms and even lets me publish to either one interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of compiling all the way to native code. D also looks nice. It seems like a very usable language, and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success, because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focus on other things, such as Opa for web development and Go by Google.
    My question: is it worth learning these "experimental" languages? I've read other questions saying that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this, and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be old) programming languages. I know that many people see this as important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future.
    Disclaimer: yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING! I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.


  • How do you handle measuring Code Coverage in JavaScript?

    - by Dancrumb
    In order to measure code coverage for JavaScript unit tests, one needs to instrument the code, run the tests, and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question: how do you handle this? One thought I had was to run the unit tests against the production code and use that for my pass/fail. I would then create a shadow copy of my production code, with instrumentation, and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this?
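    For what it's worth, here is a toy illustration of what the instrumentation step conceptually adds. It's written in Java only to keep the examples on this page in one language, and the probe counters are invented for the illustration, but JavaScript coverage tools rewrite the source in a broadly similar spirit, which is exactly why the instrumented artifact is no longer byte-for-byte the code you ship:

        import java.util.concurrent.atomic.AtomicLongArray;

        public class CoverageIllustration {

            // One hit counter per probe site the (imaginary) instrumenter inserted.
            static final AtomicLongArray HITS = new AtomicLongArray(3);

            // Production version: what actually ships.
            static int clampProduction(int value, int max) {
                if (value > max) return max;
                return value;
            }

            // Instrumented version: same logic, plus coverage probes.
            static int clampInstrumented(int value, int max) {
                HITS.incrementAndGet(0);                     // probe 0: function entered
                if (value > max) {
                    HITS.incrementAndGet(1);                 // probe 1: then-branch taken
                    return max;
                }
                HITS.incrementAndGet(2);                     // probe 2: else-branch taken
                return value;
            }

            public static void main(String[] args) {
                clampInstrumented(5, 3);
                clampInstrumented(1, 3);
                System.out.println("entry=" + HITS.get(0)
                        + " then=" + HITS.get(1)
                        + " else=" + HITS.get(2));
            }
        }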


  • Should I pay for my training? [closed]

    - by user65883
    I work at a company that made me sign a contract agreeing to pay for my training should I decide to leave within my 3-month probation. I never went on a formal course and didn't get any proof of undergoing one. They stated that if I didn't sign the contract I wouldn't receive training and wouldn't be able to do my job. If their training is to watch what other employees do, then I took the course. I'm wondering whether it's allowed for them to ask me for money when I didn't receive any proof of training?


  • Thread-safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and always accepted that GUI interactions had to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions had to happen on the thread running gtk_main. Similarly, I looked at SWT to see in what ways it differs from Swing and whether it was worth using, and again found the UI-thread idea, and I am sure that these three are not the only toolkits to use this model. I was wondering if there is a reason for this design, i.e. what is the reason for keeping UI modifications isolated to a single thread? I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns pass on to the user of the API. Is there a limit imposed by an operating system? Is there a good reason these concerns are not 'hidden' (i.e. some form of synchronization that is invisible to the user)? Is there any (even purely conceptual) way of creating a thread-safe graphics library, or is such a thing actually impossible? I found this http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness which seems to describe GTK differently from how I understood it (although my understanding was the same as many people's). How does this differ from other toolkits? Is it possible to implement this in Swing (since the EDT model does not actually prevent access from other threads, it just often leads to exceptions)?
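    For reference, the Swing flavour of this single-UI-thread rule looks like the sketch below: long-running work happens on a background thread and only the final UI mutation is handed back to the Event Dispatch Thread via SwingUtilities.invokeLater. The frame, label text and sleep are placeholders for the example.

        import javax.swing.*;

        public class EdtExample {
            public static void main(String[] args) {
                // Even building the UI is supposed to happen on the EDT.
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("EDT demo");
                    JLabel label = new JLabel("working...");
                    frame.add(label);
                    frame.setSize(240, 100);
                    frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
                    frame.setVisible(true);

                    // Long-running work goes off the EDT...
                    new Thread(() -> {
                        try {
                            Thread.sleep(2000);              // stand-in for real work
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        // ...and only the UI update is marshalled back onto the EDT.
                        SwingUtilities.invokeLater(() -> label.setText("done"));
                    }).start();
                });
            }
        }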


  • Sending an email with attachment from server side

    - by SaravananArumugam
    I have to create a Word document in a specific format and send it as an attachment to some email addresses. I have a preview screen for the report, which on approval has to be sent by email. This is an ASP.NET MVC 3 application. I am left with a few options here. I am creating the preview using HTML; I could convert this HTML into a .doc and send it, which would be a straightforward solution, but capturing the Response object's output is proving to be a tough job. I thought of using the mail-merge functionality of MS Word, where I'd fill in the placeholders of a doc template, but the problem is that conceptually this isn't really mail merge. I have also found someone suggesting using the RTF format and replacing the placeholders with database values. Which is the right thing to do? What's the best solution here? Is there any other option than the three listed above?


  • Design patterns and best practices

    - by insane-36
    I am an iPhone developer. I am quite confident developing iPhone applications with minimal features; I would consider myself a fair application developer, but the code I write is not very well structured. I make very little use of MVC, because I don't seem to find places to apply it; most of the time I create applications with view controllers and very few models. How can I improve my skills so that my code is more reusable, standard, easy to follow and maintainable? I have looked at a few books on design patterns and tried a few chapters myself, but I can't seem to shake my habits. I know a few of the patterns, but I am not able to apply them in my apps. What is the best way to learn design patterns and good coding habits? Any kind of suggestion is warmly welcomed.


  • Do you develop with localization in mind?

    - by Jimmy C
    When working on a software project or a website, do you develop with localization in mind? By this I mean e.g.:
      - Externalizing all strings, including error messages.
      - Not using images that contain text.
      - Designing your UI with text expansion in mind.
      - Using pseudo-translation to test your UIs early in the process.
      - etc.
    On projects you work on, are these in the 'nice to have' category and you let the L10N team worry about the rest, or do you have localization readiness built into your development process? I'm interested to hear how developers view localization in general.
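    On the first point, externalizing strings, here is a minimal Java sketch of what that looks like in practice; the bundle name, keys and locales are made up for the example, and it assumes messages.properties and messages_de.properties files exist on the classpath.

        import java.text.MessageFormat;
        import java.util.Locale;
        import java.util.ResourceBundle;

        public class Greeter {

            private final ResourceBundle bundle;

            public Greeter(Locale locale) {
                // Picks messages_de.properties for German, messages.properties otherwise.
                this.bundle = ResourceBundle.getBundle("messages", locale);
            }

            public String greeting(String userName) {
                // messages.properties might contain:  greeting.hello=Hello, {0}!
                // No user-visible text lives in the code itself, only lookup keys.
                return MessageFormat.format(bundle.getString("greeting.hello"), userName);
            }

            public static void main(String[] args) {
                System.out.println(new Greeter(Locale.GERMANY).greeting("Ada"));
            }
        }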


  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive: it tells me what the databases aren't, whereas I'm more interested in what the databases are. I think this label really encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make):
      - Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed.
      - Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database).
      - Assume that each database has the best support possible for free.
      - Assume you have 100% buy-in from management.
      - Assume you have an infinite amount of money to throw at the problem.
    Now, I realize that the above assumptions eliminate a lot of valid considerations involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs is each database (including both SQL and NoSQL) the best tool for, and why?


  • Why are we as an industry not more technically critical of our peers? [closed]

    - by Jarrod Roberson
    For example: I still see people in 2011 writing blog posts and tutorials that promote setting the Java CLASSPATH at the OS environment level. I see people writing C and C++ tutorials dated 2009 and newer whose first lines of code are void main(). These are just examples; I am not looking for specific answers to the above questions, but asking why the culture of accepting sub-par knowledge is so rampant in the industry. I see people posting these same kinds of empirically wrong suggestions as answers on www.stackoverflow.com, and they get lots of up votes and practically no down votes! The answers that do get lots of down votes are usually ones that answer a question that wasn't asked, because of a lack of reading comprehension, not answers that are incorrect per se. Is our industry that ignorant as a whole? I can understand the internet in general being lazy, apathetic and uninformed, but our industry should be more on top of things like this and far more critical of people who promote bad habits and outdated techniques and information. If we are really an engineering discipline, why aren't people held to a higher standard as they are in other engineering disciplines? I want to know why people accept bad advice and poor practices as the norm and are not more critical of their peers in the software industry.


  • Easy to understand and interesting book on algorithms

    - by gasan
    Please recommend a book on algorithms that is easier to read and understand than Cormen's book [1]. It doesn't need to be as big or as deep in its explanations; in fact I'd prefer it not to be that big, but it shouldn't contain misconceptions, errors or inaccuracies. It should be a kind of pre-Cormen book that will help me later understand more sophisticated concepts: a beginner's book, but still worth reading.
    [1] Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein


  • .NET Libraries Cost More Than Windows?

    - by Kevin Mark
    When looking into libraries to make my programming life a little easier, I've (almost) always been disappointed by the prices. For instance, Actipro's WPF Studio is $650. I suppose that's worth it if you plan to make money from the use of those controls. But take a look at, say, Windows: Windows 7 Ultimate is about $220, and I consider Windows to be a far more complex and "worth it" product/purchase than a library that runs on it. Why the significant difference in pricing? Do libraries really need to be so expensive, or do they need to charge more in order to make a decent sum of money?


  • How does throwing an ArgumentNullException help?

    - by Scott Whitlock
    Let's say I have a method:

        public void DoSomething(ISomeInterface someObject)
        {
            if (someObject == null)
                throw new ArgumentNullException("someObject");

            someObject.DoThisOrThat();
        }

    I've been trained to believe that throwing the ArgumentNullException is "correct", but that an "Object reference not set to an instance of an object" error means I have a bug. Why? I know that if I were caching the reference to someObject and using it later, then it would be better to check for nullity when it is passed in, and fail early. However, if I'm dereferencing it on the very next line, why are we supposed to do the check? It's going to throw an exception one way or the other. Edit: It just occurred to me... does the fear of the dereferenced null come from a language like C++ that doesn't check for you (i.e. it just tries to execute some method at memory location zero + method offset)?
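    For comparison, Java ships the same fail-fast idiom as java.util.Objects.requireNonNull, which throws immediately with a message naming the offending argument instead of a bare "object reference not set" later on. A rough analogue of the snippet above (the interface and names are just for the example):

        import java.util.Objects;

        public class Worker {

            interface SomeInterface {
                void doThisOrThat();
            }

            public void doSomething(SomeInterface someObject) {
                // Fails at the boundary with a descriptive message rather than at the later dereference.
                Objects.requireNonNull(someObject, "someObject must not be null");
                someObject.doThisOrThat();
            }

            public static void main(String[] args) {
                new Worker().doSomething(null);   // throws NullPointerException: someObject must not be null
            }
        }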


  • What are some recommended video lectures for a non-CS student to prepare for the GRE CS subject test?

    - by aristos
    Well, the title kinda explains all there is to explain. I'm a non-CS student and was preparing to apply to PhD programs in applied mathematics, but for my senior thesis I've been reading a lot of machine learning and pattern recognition literature and enjoying it a lot. I've taken lots of courses with statistics and stochastics content, which I think would help me if I get accepted to a program with an ML focus, but there are only two CS courses (introduction to programming) on my transcript, and therefore I decided to take the CS subject test to increase my chances. Which courses do you think would be most essential for a good result on the CS subject test? I'm thinking of watching video lectures for them, so do you have any recommendations?


  • How can I refactor client side functionality to create a product line-like generic design?

    - by Nupul
    Assume a situation similar to that of Stack Overflow: I have a system with a front-end that can perform various manipulations on the data (by sending messages to a REST back-end):
      - Posting
      - Editing and deleting
      - Adding labels and tags
    In the first version we built it reasonably well modularized, but the need now is to 'evolve' the system in a way similar to Stack Overflow's family of sites. My question is how best to separate the commonality and how to incorporate the variability with respect to the following.
    Commonality:
      - The above 'functionalities' and sending/receiving the data from the server
      - Look and feel (also a variability, as explained below)
      - HTTP verbs associated with the above actions
    Variability:
      - The RESTful URLs the requests are sent to
      - The text/style of the UI (the commonality is analogous to Stack Overflow: the functionality of upvotes and posting a question remains the same, but the words, the icons, the look and feel still differ across sites)
    I think this is entirely a client-side code organization/refactoring issue. I'm heavily using jQuery, JavaScript and Backbone for front-end development. My question is how best I should isolate these parts so that I can create multiple such variants of the tool we are currently working on.


  • Is what someone publishes on the Internet fair game when considering them for employment as a programmer?

    - by Jon Hopkins
    (Originally posted on Stack Overflow but closed there and more relevant for here) So we first interviewed a guy for a technical role and he was pretty good. Before the second interview we googled him and found his MySpace page which could, to put it mildly, be regarded as inappropriate. Just to be clear there was no doubt that it was his page (name, photos, matching biographical information and so on). The content was entirely personal and in no way related to his professional abilities or attitude. Is it fair to consider this when thinking about whether to offer them a job? In most situations my response would be what goes on in someone's private life is their own doing. However for anyone technical who professes (implicitly or explicitly) to understand the Internet and the possibilities it offers, is posting things in a way which can so obviously be discovered a significant error of judgement? EDIT: Clarification - essentially it was a fairly graphic commentary on porn (but of, shall we say, a non-academic nature). I'm actually more interested in the general concept than the specific incident as it's something we're likely to see more in the future as people put more and more of themselves on-line. My concerns are not primarily about him and how he feels about such things (he's white, straight, male and about the last possible victim of discrimination on the planet in that sense), more how it reflects on the company that a very simple search (basically his name) returns these things and that clients may also do it. We work in a relatively conservative industry.

