Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Rationale behind freeware projects

    - by VexXtreme
    I've seen some freeware projects in the past where the author(s) invested a significant amount of their personal time and resources and never even considered charging for the software. A lot of these projects were donation-based, and from what I've heard, donationware can never be a viable business model (even to simply cover development costs), because most people choose not to donate if given the option. A lot of these projects eventually shut down because their authors could not sustain them any further. Granted, some people simply like making the community happy (or something), but if you're struggling to keep your project alive, why not charge some small amount, such as $10, simply to stay operational? If people find your software useful (and a lot of people found those projects VERY useful), they won't have a problem paying such a small amount. The question is: if you have a popular app that people like and download in great numbers, why not put a price tag on it? Why do it for free?

    Read the article

  • Are flag variables an absolute evil?

    - by dukeofgaming
    I remember doing a couple of projects where I totally neglected using flags and ended up with better architecture/code; however, it is a common practice in other projects I work on, and as the code grows and flags are added, IMHO code spaghetti grows with them. Would you say there are any cases where using flags is good practice or even necessary, or would you agree that flags in code are... red flags and should be avoided/refactored? Personally, I just get by with functions/methods that check for state in real time instead. Edit: I'm not talking about compiler flags.
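
    As an illustration of the "check for state in real time" alternative, here is a minimal, hypothetical C# sketch (the Order type and its members are invented for the example): instead of maintaining an IsPaid flag that every code path must remember to update, the state is derived from the data whenever it is needed.

        // Flag-based version: the flag must be kept in sync manually.
        public class OrderWithFlag
        {
            public decimal Total { get; set; }
            public decimal AmountReceived { get; set; }
            public bool IsPaid { get; set; }   // easy to forget to update

            public void ReceivePayment(decimal amount)
            {
                AmountReceived += amount;
                IsPaid = AmountReceived >= Total; // must not be forgotten anywhere
            }
        }

        // Flag-free version: the state is computed from the data on demand.
        public class Order
        {
            public decimal Total { get; set; }
            public decimal AmountReceived { get; set; }

            public bool IsPaid
            {
                get { return AmountReceived >= Total; } // always consistent
            }

            public void ReceivePayment(decimal amount)
            {
                AmountReceived += amount;
            }
        }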

    Read the article

  • How can a beginner understand the Linux kernel source code?

    - by Amit Chavan
    I am a student interested in working on memory management, particularly the page replacement component of the Linux kernel. What guides can help me begin to understand the kernel source? I have tried to read Understanding the Linux Virtual Memory Manager by Mel Gorman and Understanding the Linux Kernel by Cesati and Bovet, but they do not explain the flow of control through the code. They only end up explaining the various data structures used and the work various functions perform, which makes the code more confusing. My project deals with tweaking the page replacement algorithm in a mainstream kernel and analysing its performance for a set of workloads. Is there a flavor of the Linux kernel that would be easier to understand (if not the linux-2.6.xx kernel)?

    Read the article

  • What is a good stopword in full text indexation?

    - by Benoit
    Appendix D of the Oracle Text Reference provides the lists of stopwords that Oracle Text uses when indexing table contents. When I look at the English list, nothing puzzles me. But it is unclear to me why, for example, the French list includes moyennant (French for in view of which). Oracle has probably thought it through more than once before including it. How would you constitute a list of appropriate stopwords if you were to design an indexer?
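
    One way to constitute such a list, rather than hand-picking words, is to derive candidates from the corpus itself: terms that appear in nearly every document carry little discriminating power for search. A rough C# sketch of that idea follows; the tiny corpus and the 80% document-frequency threshold are invented for illustration and are not how Oracle Text builds its lists.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class StopwordCandidates
        {
            static void Main()
            {
                var documents = new[]
                {
                    "the cat sat on the mat",
                    "the dog chased the cat",
                    "a bird sang in the garden"
                };

                // For each term, count the number of documents it appears in.
                var documentFrequency = documents
                    .SelectMany(d => d.Split(' ').Distinct(), (d, term) => term)
                    .GroupBy(term => term)
                    .ToDictionary(g => g.Key, g => g.Count());

                // Terms present in more than 80% of documents are stopword candidates.
                double threshold = 0.8 * documents.Length;
                var candidates = documentFrequency
                    .Where(kv => kv.Value > threshold)
                    .Select(kv => kv.Key);

                Console.WriteLine(string.Join(", ", candidates)); // prints: the
            }
        }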

    Read the article

  • Thread-safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and have always accepted that GUI interactions have to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions have to be called on gtk_main. Similarly, I looked at SWT to see how it differs from Swing and whether it is worth using, and again found the UI-thread idea; I am sure these three are not the only toolkits to use this model. I was wondering whether there is a reason for this design, i.e. what the reason is for keeping UI modifications isolated to a single thread. I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns pass on to the user of the API. Is there a limit imposed by an operating system? Is there a good reason these concerns are not 'hidden' (i.e. some form of synchronization that is invisible to the user)? Is there any (even purely conceptual) way of creating a thread-safe graphics library, or is such a thing actually impossible? I found http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness, which seems to describe GTK differently from how I understood it (although my understanding was the same as many people's). How does this differ in other toolkits? Is it possible to implement this in Swing (the EDT model does not actually prevent access from other threads; it just often leads to exceptions)?
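
    For comparison, Windows Forms follows the same single-UI-thread rule as the toolkits above, and like Swing it does not hide the concern from the API user; it only provides a way to marshal a call back onto the UI thread. A minimal, hypothetical C# sketch of that pattern (the form, label and background work are invented for the example):

        using System;
        using System.Threading;
        using System.Windows.Forms;

        public class StatusForm : Form
        {
            private readonly Label statusLabel = new Label { Dock = DockStyle.Fill };

            public StatusForm()
            {
                Controls.Add(statusLabel);

                // Simulate a background computation that wants to update the UI.
                new Thread(() =>
                {
                    Thread.Sleep(2000);
                    ReportStatus("Done");
                }) { IsBackground = true }.Start();
            }

            private void ReportStatus(string text)
            {
                if (statusLabel.InvokeRequired)
                {
                    // Not on the UI thread: marshal the call onto it.
                    statusLabel.BeginInvoke(new Action<string>(ReportStatus), text);
                    return;
                }
                statusLabel.Text = text; // safe: we are on the UI thread here
            }

            [STAThread]
            public static void Main()
            {
                Application.Run(new StatusForm());
            }
        }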

    Read the article

  • What is bootstrap listener in the context of Spring framework?

    - by jillionbug2fix
    I am studying the Spring framework. In web.xml I added the following, which is a bootstrap listener. Can anyone give me a proper idea of what a bootstrap listener is?

        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    You can see the doc here: ContextLoaderListener. "Bootstrap listener to start up and shut down Spring's root WebApplicationContext. Simply delegates to ContextLoader as well as to ContextCleanupListener. This listener should be registered after Log4jConfigListener in web.xml, if the latter is used. As of Spring 3.1, ContextLoaderListener supports injecting the root web application context via the ContextLoaderListener(WebApplicationContext) constructor, allowing for programmatic configuration in Servlet 3.0+ environments. See WebApplicationInitializer for usage examples..."

    Read the article

  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that follows Semantic Versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head further down this path: I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as a separate file, or inside the one bundled build, as before. If I go down this path of extracting the libraries and providing a bundled version of the final code, does this require a major version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought is no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API of the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump the first, second or third number? Or something else?

    Read the article

  • Online medical image processing grand challenges

    - by taltos
    I moved my question here from Stack Overflow in the hope that I will be luckier. I'm currently working on my thesis, and I'm looking for one or more online medical image processing grand challenges. I already know this site, but I need a challenge with a microscopic image dataset (cells, chromosomes, bacteria, viruses, etc.) and a classification or recognition objective, like karyotyping. Maybe someone is working in this field, or their university runs a challenge like the one I'm looking for, and can help me. Thank you!

    Read the article

  • Best arguments for/against introducing ORM technology into a company's dev process

    - by james
    I have started using ORM technology in the last few years. My first exposure was NHibernate; I then moved on to LINQ to SQL and Entity Framework. The issue I have, however, is that in some organisations I have found strong opposition to introducing ORM tools. They usually have a number of reasons:

      - They have a lot of built-up SQL skill in the team and are worried about the underlying SQL that ORMs generate.
      - They have DBAs who like to be able to see the SQL an app uses so that they can review it for best practice.
      - They are worried about performance (some people have "heard" that ORMs aren't as performant, but have no real proof themselves - there may well be some truth in this! :).

    So, I'm looking for the best or most convincing arguments that you have put forward FOR the use of ORM tools. Equally, I would be interested in the arguments against. Note: this is NOT a discussion over which ORM I should use.
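
    One concrete counterpoint to the "DBAs can't see the SQL" concern is that most ORMs can log the statements they generate for review. A minimal sketch of that idea, assuming Entity Framework 6 (which exposes a Database.Log hook) and an invented Employee model; other ORMs have their own logging or profiling hooks:

        using System;
        using System.Data.Entity;
        using System.Linq;

        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Department { get; set; }
        }

        public class CompanyContext : DbContext
        {
            public DbSet<Employee> Employees { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var context = new CompanyContext())
                {
                    // Every SQL statement EF generates is written to the console,
                    // so it can be captured and handed to a DBA for review.
                    context.Database.Log = Console.Write;

                    var names = context.Employees
                        .Where(e => e.Department == "Engineering")
                        .Select(e => e.Name)
                        .ToList();
                }
            }
        }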

    Read the article

  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the unit of work (UoW) pattern, since the consumer of the repository may perform several operations and I want to commit them at once. After reading several articles on the matter, I still don't get how to relate the two elements; depending on the article, it is done one way or the other. Sometimes the UoW is something internal to the repository:

        public class Repository
        {
            UnitOfWork _uow;

            public Repository()
            {
                _uow = IoC.Get<UnitOfWork>();
            }

            public void Save(Entity e)
            {
                _uow.Track(e);
            }

            public void SubmittChanges()
            {
                SaveInStorage(_uow.GetChanges());
            }
        }

    And sometimes it is external:

        public class Repository
        {
            public void Save(Entity e, UnitOfWork uow)
            {
                uow.Track(e);
            }

            public void SubmittChanges(UnitOfWork uow)
            {
                SaveInStorage(uow.GetChanges());
            }
        }

    Other times it is the UoW that references the repository:

        public class UnitOfWork
        {
            Repository _repository;

            public UnitOfWork(Repository repository)
            {
                _repository = repository;
            }

            public void Save(Entity e)
            {
                this.Track(e);
            }

            public void SubmittChanges()
            {
                _repository.Save(this.GetChanges());
            }
        }

    How are these two elements related? The UoW tracks the elements that need to be changed, and the repository contains the logic to persist those changes, but... who calls whom? Does the last option make more sense? Also, who manages the connection? If several operations have to be performed through the repository, I think using the same connection, and even the same transaction, is more sound, so maybe putting the connection object inside the UoW, and the UoW inside the repository, makes sense as well. Cheers

    Read the article

  • Why do people think SOAP is deprecated?

    - by user98q37479
    While browsing SO today I found this question, which starts with: "Sure, you're gonna tell me that SOAP is depracated and all, well i'm forced to use it". I've found lots of statements like this one on SO; this one just triggered me to ask this question. REST has its uses and SOAP has its uses; in some places their functionality overlaps, but they are not interchangeable. So I wonder, why do people think SOAP is "deprecated"? Is it ignorance? The complexity of SOAP and the WS-* specs? REST hype? What? If you think SOAP is deprecated, please tell me why. I'm curious!

    Read the article

  • Mobile cross-platform SDK for computationally intensive apps

    - by K.Steff
    I am aware of the PhoneGap toolkit for creating mobile applications for virtually all mobile platforms with a significant market share. However, the code in PhoneGap that is shared between the different platforms is written in JavaScript. While I like JS, I think it's hardly appropriate for computationally intensive tasks. The situation with Titanium is pretty much the same. So, is there any way I can create a cross-platform mobile app that shares the computationally intensive code between platforms? Some context: obviously, I don't want to implement the time-consuming algorithm in many different languages, since this violates DRY, increases the chance of bugs slipping into at least one version, and requires boilerplate code to work. I've looked at Xamarin's MonoTouch and Mono for Android tools, but while they cover iOS and Android, they're not nearly as versatile for deployment as PhoneGap. On the other hand, (IMO) the statically typed nature of C# is better suited to intense computation than JS. Are there any other SDKs/tools appropriate for the task that I don't know about, or a point about those mentioned above that I've missed? Also, uploading data to a web service for processing is not an option, because of the traffic required.

    Read the article

  • Is it worth learning experimental languages?

    - by Xander Lamkins
    I'm a young programmer who wants to work in the field someday. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know it is valuable to extend what I know: to learn languages that make you think differently). I looked online to see which languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought maybe I should push for C. C and C++ are nice, but they are old. Things like Haskell and Forth (etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for that same reason. Java is pretty old as well and is slow because it runs on the JVM rather than being compiled to native code. I've been a Windows developer for quite a while. I recently started using Java, but only because it was more versatile and spreadable to other places. The problem is that it doesn't look like a very usable language, for these reasons: its most-used purposes are web applications and phone apps (specifically Android); as far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for; it's like making a web page for writing HTML/CSS/JavaScript) and Minecraft, which happens to be fun but laggy and bipolar as far as computer spec support goes; other than that it's used for servers, but heck, I don't only want to make and configure servers. The .NET languages are nice, however: people laugh if I even mention VB.NET or C# in a serious conversation; they aren't cross-platform unless you use Mono (which is still in development and has some improvements to be made); and they lack low-level stuff because, like Java with the JVM, they are run/managed by the CLR. My first thought was to learn something like C and then use it as a springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older by the minute.
    What I've Looked Into
    Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being completely compiled. D also looks nice. It seems like a very usable language, and multiple sources say it is actually better than C/C++. I would jump right in, but I'm still unsure of its success, because it obviously isn't very mainstream at this point. A couple of others looked pretty nice and focus on other things, such as Opa for web development and Go by Google.
    My Question
    Is it worth learning these "experimental" languages? I've read other questions saying that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this, and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be-old) programming languages. I know that many people see this as important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future.
    Disclaimer
    Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING! I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.

    Read the article

  • How do you learn a new programming language?

    - by Naveen
    I am a C++ developer with some good experience in it. When I try to learn a new language (I have tried Java, C#, Python and Perl so far), I usually pick up a book and try to read it. The problem is that these books typically start with very basic programming concepts such as loops, operators, etc., and it soon gets very boring. I also feel I would get only theoretical knowledge without any practical knowledge of writing the code. So my question is: how do you tackle these situations? Do you just skip the chapters that explain something basic? Also, do you have a standard set of programs that you try to write in every new programming language you learn?

    Read the article

  • Listing Unix experience on a resume

    - by beacon
    I have been using Linux for quite a long time, and I am familiar with many Unix commands and tools. However, my only experience with Unix is through various Linux distributions. How should I communicate on my resume that I am familiar with the Unix command line even though I have never used a UNIX(R) system? It seems strange to me to list Unix when I've never used UNIX(R). Some people refer to Unix clones as *nix, but I'm afraid that might fly over the heads of some HR people.

    Read the article

  • Why are we as an industry not more technically critical of our peers? [closed]

    - by Jarrod Roberson
    For example: I still see people in 2011 writing blog posts and tutorials that promote setting the Java CLASSPATH at the OS environment level. I see people writing C and C++ tutorials dated 2009 and newer whose first lines of code are void main(). These are just examples; I am not looking for specific answers to them, but for why the culture of accepting sub-par knowledge is so rampant in the industry. I see people posting the same kinds of empirically wrong suggestions as answers on www.stackoverflow.com, and they get lots of up votes and practically no down votes! The answers that get lots of down votes are usually those answering a question that wasn't asked, because of a lack of reading comprehension, rather than incorrect answers per se. Is our industry that ignorant as a whole? I can understand the internet in general being lazy, apathetic and uninformed, but our industry should be more on top of things like this and far more critical of people who promote bad habits, outdated techniques and outdated information. If we are really an engineering discipline, why aren't people held to a higher standard, as they are in other engineering disciplines? I want to know why people in the software industry accept bad advice and poor practices as the norm and are not more critical of their peers.

    Read the article

  • .NET Libraries Cost More Than Windows?

    - by Kevin Mark
    When looking into libraries to make my programming life a little easier, I've (almost) always been disappointed by the prices offered. For instance, Actipro's WPF Studio is $650. I suppose that's worth it if you plan to make money from the use of those controls. But take a look at, say, Windows: Windows 7 Ultimate is just about $220. I consider Windows to be a far more complex and "worth-it" product/purchase than a library that runs on it. Why the significant difference in pricing? Do libraries really need to be so expensive, or do they need to charge more in order to make a decent sum of money?

    Read the article

  • If I implement a web-service, how do I respond to POST requests with JSON?

    - by Vova Stajilov
    I have to build a rather complex system for my diploma work. Logically it will consist of the following components:

      - Database
      - Web service
      - Management with a web interface
      - Client iOS application that will consume the web service

    I decided to implement the first three components on .NET. First I will create the database, depending on the information load - this is clear. Then I need a web service that returns data in JSON format for the iOS clients to consume - that's obvious and not that hard to implement. For this I will use WCF. Now I have a question: if I implement the web service, how will I be able to respond to POST requests with JSON? It probably involves WCF's JSON support or something related. But I also need some web pages as an admin part, so will that web application be able to consume my centralized web services as well, or should I develop it separately? I just want my web service to act like a set of controllers. There is a related question here, but it doesn't quite reflect my question.
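
    For the JSON part specifically, WCF's web programming model (System.ServiceModel.Web) can serialize responses to JSON for you. A minimal sketch follows; the service name, URI template and data contract are invented for the example, and the real project would define its own contracts and host configuration:

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class Report
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Title { get; set; }
        }

        [ServiceContract]
        public interface IReportService
        {
            // Accepts a JSON body via POST and returns JSON.
            [OperationContract]
            [WebInvoke(Method = "POST",
                       UriTemplate = "reports",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json,
                       BodyStyle = WebMessageBodyStyle.Bare)]
            Report CreateReport(Report report);
        }

        public class ReportService : IReportService
        {
            public Report CreateReport(Report report)
            {
                // Persist the report here; echo it back as JSON for the client.
                report.Id = 1;
                return report;
            }
        }

    Hosted with webHttpBinding and the webHttp endpoint behavior (or via WebServiceHostFactory in an .svc file), a POST to .../reports with a JSON body then comes back as JSON, which an iOS client can parse directly.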

    Read the article

  • Easy to understand and interesting book on algorithms

    - by gasan
    Please recommend a book on algorithms that is easier to read and understand than Cormen's book [1]. It doesn't need to be as big or as deep in its explanations; in fact, I'd prefer it not to be that big, but it shouldn't contain misconceptions, errors or inaccuracies. It should be some kind of pre-Cormen book that will help me later understand more sophisticated concepts: a beginner book, but still worth reading.

    [1] Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein

    Read the article

  • How can I refactor client side functionality to create a product line-like generic design?

    - by Nupul
    Assume a situation similar to that of Stack Overflow: I have a system with a front-end that can perform various manipulations on the data (by sending messages to a REST back-end):

      - Posting
      - Editing and deleting
      - Adding labels and tags

    In the first version we created it well modularized, but there is now a need to 'evolve' the system in a way similar to Stack Overflow. My question is how best to separate the commonality and how to incorporate the variability with respect to the following.

    Commonality:

      - The above 'functionalities' and sending/receiving the data from the server
      - Look and feel (also a variability, as explained below)
      - HTTP verbs associated with the above actions

    Variability:

      - The RESTful URLs where the requests are sent
      - The text/style of the UI (the commonality is analogous to Stack Overflow: the functionality of upvotes and posting a question remains the same, but the words, the icons, the look and feel still differ across sites)

    I think this is entirely a client-side code organization/refactoring issue. I'm heavily using JavaScript, jQuery and Backbone for front-end development. My question is how best to isolate these concerns so that I can create multiple such variants of the tool we are currently working on.

    Read the article

  • Sending an email with an attachment from the server side

    - by SaravananArumugam
    I have to create a Word document in a specific format and send it as an attachment to some email addresses. I have a preview screen for the report which, on approval, has to be sent by email. This is an ASP.NET MVC 3 application. I am left with a few options here. I am creating the preview using HTML. I can convert this HTML into a doc and send it, which would be a straightforward solution, but capturing the Response object's output is proving to be a tough job. I thought of using the mail merge functionality of MS Word, where I would fill in the placeholders of a doc template, but conceptually it doesn't appear to be a mail merge. I have found someone suggesting using the RTF format and replacing the placeholders with database values. Which is the right thing to do? What's the best solution here? Is there any other option besides the three listed above?
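
    Whichever way the document itself gets generated, the sending part is straightforward with System.Net.Mail. A minimal sketch; the file path, addresses and SMTP host are invented for the example:

        using System.Net.Mail;

        public static class ReportMailer
        {
            public static void SendReport(string attachmentPath)
            {
                using (var message = new MailMessage("reports@example.com", "manager@example.com"))
                {
                    message.Subject = "Approved report";
                    message.Body = "Please find the approved report attached.";
                    message.Attachments.Add(new Attachment(attachmentPath));

                    // SMTP settings would normally come from web.config.
                    using (var client = new SmtpClient("smtp.example.com"))
                    {
                        client.Send(message);
                    }
                }
            }
        }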

    Read the article

  • What are some recommended video lectures for a non-CS student to prepare for the GRE CS subject test?

    - by aristos
    Well, the title kinda explains all there is to explain. I'm a non-CS student preparing to apply to PhD programs in applied mathematics, but for my senior thesis I've been reading a lot of machine learning and pattern recognition literature and enjoying it a lot. I've taken plenty of courses with statistics and stochastics content, which I think would help me if I get accepted to a program with an ML focus, but there are only two CS courses (introduction to programming) on my transcript, so I decided to take the CS subject test to increase my chances. Which courses do you think are most essential for getting a good result on the CS subject test? I'm thinking of watching video lectures for them, so do you have any recommendations?

    Read the article

  • Should I pay for my training? [closed]

    - by user65883
    I work at a company that made me sign a contract agreeing to pay for my training should I decide to leave within my 3-month probation. I never went on a formal course and didn't get any proof of undergoing one. They stated that if I didn't sign the contract I wouldn't receive training and wouldn't be able to do my job. If their training amounts to watching what other employees do, then I took the course. I'm wondering whether they are allowed to ask me for money when I didn't receive any proof of training.

    Read the article

  • How does throwing an ArgumentNullException help?

    - by Scott Whitlock
    Let's say I have a method:

        public void DoSomething(ISomeInterface someObject)
        {
            if (someObject == null)
                throw new ArgumentNullException("someObject");

            someObject.DoThisOrThat();
        }

    I've been trained to believe that throwing the ArgumentNullException is "correct" but an "Object reference not set to an instance of an object" error means I have a bug. Why? I know that if I was caching the reference to someObject and using it later, then it's better to check for nullity when passed in, and fail early. However, if I'm dereferencing it on the next line, why are we supposed to do the check? It's going to throw an exception one way or the other. Edit: It just occurred to me... does the fear of the dereferenced null come from a language like C++ that doesn't check for you (i.e. it just tries to execute some method at memory location zero + method offset)?

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching the CEO of a new "cloud computing" company describe his company on a finance TV program today, I heard him say something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware; the "cloud" is all the back-end stuff. But I might still have an application that communicates with that "cloud" environment. And if I run a web site that presents a form, which a user fills out and submits by pushing a button on the page, and the site returns some report generated by the web server, isn't that the same as "cloud" computing? And wouldn't you consider my web browser to be the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the closest one in the Stack universe and this is my first time here. I'm an old-timer, programming since the mainframe days in the late '70s.

    Read the article
