Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data with poor quality. The reasons for this are only partially related to our software's quality; they lie rather in the history of the data: most of it was migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and our software allowed (by error) accidental misentries on the customer's side. The most important counter-arguments from my managers are:
    - faulty data may turn into even worse data,
    - the data troubles may wake some managers at the customer, and
    - some processes on the customer's side may not work anymore, because they have adapted to our system's behavior.
    Personally, I consider data migration an integral part of software development: data migration is to data what refactoring is to code. I think data migration is essential for creating software that evolves; without it, we would have to build painful software that works around a bad data structure. I am asking you:
    - What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective?
    - Do you have any arguments against my managers' opinions?
    - How does your company deal with data migrations and the difficulties they cause?
    - Any other interesting thoughts that belong to this topic?

    Read the article

  • Video documentary on open source culture?

    - by explorest
    Hello, I'm looking for some videos on these subjects:
    - A movie/documentary detailing the origin, history, and current state of open source culture.
    - A movie/documentary on how open source software actually gets developed: what the technical workflows are, and how people create projects, recruit contributors, build a community, assign roles, track issues, assimilate newcomers, etc.
    Could someone suggest a title?

    Read the article

  • Significance and role of Node.js in Web development

    - by Pankaj Upadhyay
    I have read that Node.js is a server-side JavaScript environment. This has planted a few thoughts and questions in my mind:
    - Can we develop a complete data-driven web application using just JavaScript (along with Node.js), HTML5, and CSS?
    - Do we still need to use some server-side scripting language (e.g. C#, PHP)?
    - If we still need other languages, what is Node.js actually good for?
    NOTE: Pardon my limited knowledge of Node.js.

    Read the article

  • Data Aggregation of CSV files java

    - by royB
    I have k CSV files (5 CSV files, for example). Each file has m fields which produce a key and n values. I need to produce a single CSV file with aggregated data. I'm looking for the most efficient solution to this problem, speed mainly; I don't think we will have memory issues. Also, I would like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

      file 1:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,50,60,70,80
      a3,b2,c4,60,60,80,90

      file 2:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,30,50,90,40
      a3,b2,c4,30,70,50,90

      result:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,80,110,160,120
      a3,b2,c4,90,130,130,180

    Algorithms we have thought of so far:
    - hashing (using a ConcurrentHashMap)
    - merge-sorting the files
    - a DB: using MySQL, Hadoop, or Redis
    The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

      file 1:
      country,city,peopleNum
      england,london,1000000
      england,coventry,500000

      file 2:
      country,city,peopleNum
      england,london,500000
      england,coventry,500000
      england,manchester,500000

      merged file:
      country,city,peopleNum
      england,london,1500000
      england,coventry,1000000
      england,manchester,500000

    The key is: country,city. This is just an example; my real key is of size 6 and the data columns are of size 8, for a total of 14 columns. We would like the solution to be the fastest in regard to data processing.
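
    For what it's worth, if the distinct keys fit in memory there is no need to hash at all: a map keyed on the exact key string cannot produce false merges, so the 1% collision worry disappears. A minimal sketch in Java, assuming comma-separated files with a header row, integer value columns, and no quoted fields:

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.Arrays;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      public class CsvAggregator {
          // Sums the value columns for every distinct key (the first keyCols
          // fields), reading each file once. Exact string keys: no collisions.
          public static Map<String, long[]> aggregate(List<Path> files,
                  int keyCols, int valueCols) throws IOException {
              Map<String, long[]> totals = new HashMap<>();
              for (Path file : files) {
                  try (BufferedReader in = Files.newBufferedReader(file)) {
                      in.readLine(); // skip the header row
                      String line;
                      while ((line = in.readLine()) != null) {
                          String[] fields = line.split(",");
                          String key = String.join(",",
                                  Arrays.copyOfRange(fields, 0, keyCols));
                          long[] sums = totals.computeIfAbsent(key,
                                  k -> new long[valueCols]);
                          for (int i = 0; i < valueCols; i++) {
                              sums[i] += Long.parseLong(fields[keyCols + i]);
                          }
                      }
                  }
              }
              return totals;
          }
      }

    With ~30M rows this is one linear pass per file, and the map holds one entry per distinct key rather than per row.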

    Read the article

  • Why don't computers store decimal numbers as a second whole number?

    - by SomeKittens
    Computers have trouble storing fractional numbers whose denominator is not a power of 2. This is because the first digit after the binary point is worth 1/2, the second 1/4 (i.e. 1/(2^1) and 1/(2^2)), etc. Why deal with all sorts of rounding errors when the computer could have just stored the decimal part of the number as another whole number (which is therefore accurate)? The only problem I can think of is dealing with repeating decimals (in base 10), but there could have been a special-case solution for that (like we currently have with infinity).
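
    As it happens, this scheme does exist: decimal types store the digits as a whole number plus a scale saying where the point goes. A small illustration in Java, whose BigDecimal works exactly this way:

      import java.math.BigDecimal;

      public class DecimalDemo {
          public static void main(String[] args) {
              // Binary floating point cannot represent 0.1 exactly:
              System.out.println(0.1 + 0.2); // prints 0.30000000000000004

              // BigDecimal stores an unscaled whole number plus a scale,
              // essentially the scheme the question proposes:
              BigDecimal a = new BigDecimal("0.1");
              BigDecimal b = new BigDecimal("0.2");
              System.out.println(a.add(b)); // prints 0.3, exactly
              System.out.println(a.unscaledValue() + " / 10^" + a.scale()); // 1 / 10^1
          }
      }

    The trade-off is speed: hardware floating point is orders of magnitude faster than arbitrary-precision decimal arithmetic, which is why the binary format is the default.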

    Read the article

  • How do I mashup Google Maps with geolocated photos from one or more social networks?

    - by PureCognition
    I'm working on a proof of concept for a project, and I need to pin random photos to a Google Map. These photos can come from another social network, but need to be non-porn. I've done some research so far: Google's Image Search API is deprecated, so one has to use the Custom Search API; a lot of those images aren't photos, and I'm not sure how well it handles geolocation yet. Twitter seems a little better suited, except for the fact that people can post pictures of pretty much anything. I was also going to look into the APIs for other networks such as Flickr, Picasa, Pinterest, and Instagram. I know there are some aggregate services out there that might have done some of this mashup work for me as well. If there is anyone out there who has a handle on social APIs and where I should look for this type of solution, I would really appreciate the help. Also, in cases where server-side implementation matters, I'm a .NET developer by experience.

    Read the article

  • In WPF, should I base my converters on types or use-cases?

    - by user1013159
    I'm looking for some advice on how to write my WPF value converters. The way I'm currently writing them, they are very specific, like (bool?, bool) -> Brush; i.e., I'm writing each converter for a specific use case. In this case, the Brush is bound to an indicator showing equality information between the bool? and the bool. This obviously makes re-use very hard, and I end up with quite a large list of converters. Should I strive to write my converters in a more general way? Can I?

    Read the article

  • What are the advantages of Maven when it comes to single-man, educational projects?

    - by Leron
    I've spent a few hours playing around with Maven, plus reading some material on the official Apache site and a few randomly googled articles. By this I mean that I really tried to find the answers myself, both by reading and by doing things on my own. It's also worth mentioning that I installed the m2e plugin, so most of the time I tried things out from Eclipse, not using the command line much. However, aside from the generated project layout (which, for example, prevents me from using the default package), I didn't see much of a difference from the standard way I created my projects before trying Maven. In fact, I had almost decided to skip Maven for now and move on to the other technology I wanted to learn in more depth, Hibernate, but when I opened its official page the first thing I read was the recommendation to use Hibernate with Maven. That confused me and made me take a step back and try once more to find what I'm obviously missing. As the maven.apache.org site says, the true strength of Maven shows when you work on large projects with other people, but I have no way to see how Maven is really used in that scenario. Still, I think there may be advantages even for small solo projects, but I have real difficulty pointing them out. So what do you think are the advantages of Maven when it's used for small projects written by a single person? What Maven features should I be aware of and try to exploit in this situation?

    Read the article

  • Spring MVC vs raw servlets and template engine?

    - by Gigatron
    I've read numerous articles about the Spring MVC framework, and I still can't see the benefits of using it. It looks like writing even a simple application with it requires creating a big hodgepodge of XML files, annotations, and other reams of code to conform to what the framework wants; a whole bunch of moving parts to accomplish a single simple task. Any time I look at a Spring example, I can see how I could write something with the same functionality using a simple servlet and a template engine (e.g. FreeMarker, StringTemplate) in half the lines of code, with few or no XML files and other artifacts. Just grab the data from the session and request, call the application domain objects if necessary, pass the results to the template engine to generate the resulting web page, and you're done. What am I missing? Can you describe even one example of something that is actually made simpler with Spring than with a combination of raw servlets and a template engine? Or is Spring MVC just one of those overly complicated things that people use only because their boss tells them to?
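
    For concreteness, here is roughly what the raw-servlet-plus-template-engine baseline looks like; a minimal sketch using FreeMarker (template name and model contents are illustrative, and the version constant should match whatever FreeMarker release you use):

      import freemarker.template.Configuration;
      import freemarker.template.Template;
      import freemarker.template.TemplateException;
      import java.io.IOException;
      import java.util.HashMap;
      import java.util.Map;
      import javax.servlet.ServletException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class GreetingServlet extends HttpServlet {
          private final Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);

          @Override
          public void init() {
              // Load .ftl templates from the classpath
              cfg.setClassForTemplateLoading(getClass(), "/templates");
          }

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              // Grab data from the session, hand it to the template, done.
              Map<String, Object> model = new HashMap<>();
              model.put("user", req.getSession().getAttribute("user"));
              resp.setContentType("text/html");
              try {
                  Template page = cfg.getTemplate("greeting.ftl");
                  page.process(model, resp.getWriter());
              } catch (TemplateException e) {
                  throw new ServletException(e);
              }
          }
      }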

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to frequent use of factories: since you can't instantiate a Property or a Model, you must have something else do it for you, i.e. the Factory design pattern. My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate that class; I shouldn't need to ask a Factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be Page, Reference, Alias, or Root pearls. My question comes from trying to figure out whether I should adopt the Jena philosophy in my library (which uses Jena), or disregard it, pick my own design philosophy, and stick with that.
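
    For reference, this is what the factory-centric style looks like in use; a small sketch (Jena 3 package names, illustrative URIs):

      import org.apache.jena.rdf.model.Model;
      import org.apache.jena.rdf.model.ModelFactory;
      import org.apache.jena.rdf.model.Property;
      import org.apache.jena.rdf.model.Resource;

      public class JenaFactoryDemo {
          public static void main(String[] args) {
              // The caller asks a factory for a memory-backed Model; the
              // concrete class behind the interface is never named here.
              Model model = ModelFactory.createDefaultModel();

              Resource tree = model.createResource("http://example.org/tree/1");
              Property title = model.createProperty("http://example.org/ns#", "title");
              tree.addProperty(title, "My Pearltree");

              model.write(System.out, "RDF/XML");
          }
      }

    The usual argument for this design is visible above: the calling code never mentions a concrete class, so a memory-backed model can later be swapped for, say, a persistent one by changing only the factory call.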

    Read the article

  • Concerns on first ASP.NET cloud application

    - by RPK
    I am writing a small ASP.NET web application. My worry is that I want to keep the architecture the same, giving me the option to install it on an intranet or on a cloud platform. I am not using MVC, but I lately learned that Azure only supports ASP.NET MVC applications. I want to know whether ASP.NET Web Forms applications work on Azure/AppHarbor or not. Do I need to convert this application to MVC if Web Forms is not supported? Will the same application run on an intranet as well?

    Read the article

  • Which open source PHP project has the 'perfect' OOP design I can learn from?

    - by aditya menon
    I am a newbie to OOP, and I learn best by example. You could say this question is similar to 'Which Scala open source projects should I study to learn best coding practices', but for PHP. I have heard tell that Symfony has the best 'architecture' (I will not pretend I know exactly what that means), as does the Doctrine ORM. Is it worth spending many months reading the source code of these projects, trying to deduce the patterns used and learning new tricks? I have seen an equal number of web pages dissing and praising Zend's codebase (I will provide links if deemed necessary). Do you know of any other project that would make any veteran OOP developer shed tears of joy? Let me add that practicality and scope of use are not a concern at all here. I just want to:
    - pick a project with a codebase deemed awesome by devs way better and greater than me,
    - write code that achieves what the project does, and
    - compare results and try to learn what I don't know.
    Basically, a codebase of academic interest. Any recommendations?

    Read the article

  • Should Maven generate JAXB Java code or just use Java code from source control?

    - by Peter Turner
    We're trying to plan how to mash together a build server for our shiny new Java backend. We use a lot of JAXB XSD code generation, and I was getting into a heated argument with whoever cared that the build server should:
    - delete the JAXB-created structures that were checked in,
    - generate the code from the XSDs, and
    - use the code generated from those XSDs.
    Everyone else thought that it made more sense to just use the code they checked in (we check in the code generated from the XSDs because Eclipse pretty much forces you to do this, as far as I can tell). My only stale argument, from my reading of the Joel test, is that making the build in one step means generating from the source code, and the source code is not the Java source but the XSDs, because if you're messing around with the generated code you're gonna get pinched eventually. So, given that we all agree (you may not agree) that we should probably be checking in our generated Java files, should we use them to build our product, or should we regenerate the code from the XSDs?
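
    For what it's worth, the generate-on-build camp usually wires this up with a code-generation plugin so the XSDs are the only source of truth. A sketch of one common setup; the plugin coordinates and options below are from memory and should be checked against the jaxb2-maven-plugin docs:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>jaxb2-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals><goal>xjc</goal></goals>
          </execution>
        </executions>
        <configuration>
          <!-- XSDs live in source control; generated .java goes to target/ -->
          <sources>
            <source>src/main/xsd</source>
          </sources>
        </configuration>
      </plugin>

    With this setup the generated classes land under target/generated-sources and never enter source control; the catch is that Eclipse/m2e typically needs a lifecycle mapping before it will run the plugin itself, which is likely why checking the generated code in felt forced.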

    Read the article

  • CSS naming guidelines for elements with multiple classes

    - by ryanzec
    It seems like there are two ways someone can handle naming classes for elements that are designed to have multiple classes. One way would be:

      <span class="btn btn-success"></span>

    This is what Twitter Bootstrap uses. Another possibility would be:

      <span class="btn success"></span>

    It seems like Zurb Foundation uses this method. Now, the benefit of the first that I can see is that there is less chance of outside CSS interfering with the styling, as the class name btn-success is not as common as the class name success. The benefit of the second, as I see it, is that there is less typing and potentially better style reuse. Are there any other benefits/disadvantages of either option, and is one of them more popular than the other?

    Read the article

  • Is NaN suitable for communicating that an invalid parameter was involved in a calculation?

    - by Arman
    I am currently working on a numerical processing system that will be deployed in a performance-critical environment. It takes inputs in the form of numerical arrays (these use the Eigen library, but for the purpose of this question that's perhaps immaterial) and performs a range of numerical computations (matrix products, concatenations, etc.) to produce outputs. All arrays are allocated statically and their sizes are known at compile time. However, some of the inputs may be invalid. In these exceptional cases, we still want the computation to run, and we still want to use the outputs that are not "polluted" by invalid values. To give a trivial example (this is pseudo-code):

      Matrix a = {1, 2, NAN, 4}; // this is the "input" matrix
      Scalar b = 2;
      Matrix output = b * a;     // this results in {2, 4, NAN, 8}

    The idea here is that 2, 4, and 8 are usable values, but the NaN should signal to the recipient of the data that that entry was involved in an operation with an invalid value and should be discarded (this will be detected via a std::isfinite(value) check before the value is used). Is this a sound way of communicating and propagating unusable values, given that performance is critical and heap allocation is not an option (and neither are other resource-consuming constructs such as boost::optional or pointers)? Are there better ways of doing this? At this point I'm quite happy with the current setup, but I was hoping to get some fresh ideas or productive criticism of the current implementation.
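
    The propagation being relied on here is an IEEE 754 guarantee, not an Eigen feature, so it can be sketched in any language; here it is in Java for concreteness, with Double.isFinite playing the role of std::isfinite:

      public class NanPropagation {
          public static void main(String[] args) {
              double[] input = {1, 2, Double.NaN, 4};
              double b = 2;
              double[] output = new double[input.length];
              for (int i = 0; i < input.length; i++) {
                  // IEEE 754 makes NaN sticky: anything * NaN == NaN
                  output[i] = b * input[i];
              }
              for (double v : output) {
                  if (Double.isFinite(v)) {
                      System.out.println("usable: " + v);
                  } else {
                      System.out.println("discard: touched an invalid input");
                  }
              }
          }
      }

    One caveat worth checking: NaN compares unequal to everything, including itself, so comparisons, sorts, or min/max reductions in the pipeline may silently drop or misplace poisoned entries instead of propagating them.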

    Read the article

  • What electronic user-story-mapping tools can you recommend?

    - by azheglov
    Agile software development relies heavily on a work item type called user stories. For example, you have a backlog full of user stories, and you select a few of them to work on during the next sprint. But where and how do you find user stories to put into the backlog? There is a popular technique for doing that called story mapping; Jeff Patton invented it and wrote the definitive guide on how to do it. The question is: what electronic tools are out there that support Patton's story-mapping technique? I've done a bit of research, found Pivotal and Rally plug-ins (but I'm not a customer of either), and I'm currently experimenting with SilverStories. What other tools are out there? What have you used? What do you (not) recommend? Why? UPDATE: Some people who wrote comments seem to lean towards the answer that applying this technique with an electronic tool is simply impossible and that we should just accept that. Can't someone write that up as an answer?

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago, when the variety of projects started getting unwieldy.

    Scenario 1: I now have a process of reasonable complexity that can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file, a constants file, shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk (controller classes and whatnot will all be the same) but with different content in each. The problem I am running across is that the shared file from the first phase lives in one repository with the application that started it, while the app framework is in a second, separate repository.

    Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops).

    The problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging the two projects into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can.

    The question: I'm wondering if anyone has tips on how to manage either of these situations, as well as other complex SCM scenarios where files from various repositories are linked together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.
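
    One Subversion feature worth knowing about for Scenario 1 is the svn:externals property, which makes a checkout of one repository also pull a directory (or, since SVN 1.6, a single file) from another; a sketch with hypothetical URLs and paths:

      # In the working copy of the framework project, declare that ./Shared
      # should come from the Cocoa app's repository:
      svn propset svn:externals "Shared https://svn.example.com/cocoa-app/trunk/Shared" .
      svn commit -m "Pull shared constants from the Cocoa app repository"

      # From now on, 'svn update' also checks out and updates ./Shared
      svn update

    This keeps the constants file in a single home while both projects see it, which may be enough to avoid merging the repositories outright.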

    Read the article

  • What programming languages/technology stack to use when building a Facebook-like website?

    - by Blaze Boy
    I'm developing a website idea that will offer the same functionality as Facebook, without the application extensibility. The site will:
    - have a client application that performs a task similar to dropbox.com,
    - be a social network with a professional slant,
    - highlight code in almost all languages, and
    - have a high-speed backend database.
    Now, what languages/techniques do I need to use to achieve that?

    Read the article

  • Dealing with a developer continuously ignoring edge cases in his work

    - by Alex N.
    I have an interesting, and I guess fairly common, issue with one of the developers in my team. The guy is a great developer: he works fast and productively and produces fairly good quality code. A good engineer. But there is a problem with him: very often he fails to address edge cases in his code. We have spoken with him about it many times, and he is trying, but I guess he just doesn't think that way. So what ends up happening is that QA finds plenty of issues with his code and returns it for development again and again, ultimately resulting in missed deadlines and everyone in the team being unhappy. I don't know what to do with him or how to help him overcome this problem. Perhaps someone with more experience could advise? Thank you!

    Read the article

  • Finding an A* heuristic for a directed graph

    - by Janis Peisenieks
    In a previous question, I asked about finding a route (or path, if you will) in a city. That is all dandy. The solution I chose was the A* algorithm, which really seems to suit my needs. What I find puzzling is the heuristic. How do I find one in an environment without a constant distance between nodes? Meaning, not every pair of nodes has the same distance between them. What I have is nodes (junctures), streets with weights (which may also be one-way), a start/finish node (since the start and end are always in the same place), and a goal node. In an ordinary case, I would just take the same way back that I took to the goal, but since one of the streets could be one-way, that may not be possible. The main question: how do I find a heuristic in a directed graph?
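
    One standard answer for road networks: if every juncture has coordinates, straight-line (Euclidean) distance is an admissible heuristic even with one-way streets, because no route can be shorter than the straight line to the goal. A sketch, with a hypothetical Node type, in Java for concreteness:

      public class AStarHeuristic {
          // Hypothetical node type carrying planar coordinates of a juncture.
          static final class Node {
              final double x, y;
              Node(double x, double y) { this.x = x; this.y = y; }
          }

          // Straight-line distance: admissible on a directed graph, because
          // no sequence of streets (one-way or not) can be shorter than it.
          static double estimate(Node current, Node goal) {
              return Math.hypot(current.x - goal.x, current.y - goal.y);
          }
      }

    If the edge weights are travel times rather than distances, divide the straight-line distance by the maximum speed in the network so the heuristic stays admissible.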

    Read the article

  • Is LINQ to objects a collection of combinators?

    - by Jimmy Hoffa
    I was just trying to explain the usefulness of combinators to a colleague, and I told him LINQ to Objects is like combinators, as it exhibits the same value: the ability to combine small pieces to create a single large piece. Though I don't know that I can call LINQ to Objects combinators. I've seen two levels of definition for 'combinator', which I generalize as follows:
    - A combinator is a function which only uses things passed to it.
    - A combinator is a function which only uses things passed to it and other standard atomic functions, but not state.
    The first is very rigid and can be seen in the combinatory calculus systems; in Haskell, things like $ and . and various similar functions meet this rule. The second is less rigid and would allow something like sum, as it uses the + function, which was not passed in but is standard and not stateful. The LINQ extensions in C#, though, use state in their iteration models, so I feel I can't say they're combinators. Can someone who understands the definition of a combinator more thoroughly, and with more experience in these realms, give a distinct ruling on this? Are my definitions of 'combinator' wrong to begin with?
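
    To make the first, strict definition concrete, here is a sketch of such a combinator, written in Java as a neutral language: the body of compose refers to nothing but its own parameters, which is exactly what definition 1 demands:

      import java.util.function.Function;

      public class Combinators {
          // A combinator in the strict sense: the body uses only the
          // parameters f and g, never outside state or named functions.
          static <A, B, C> Function<A, C> compose(Function<B, C> f, Function<A, B> g) {
              return a -> f.apply(g.apply(a));
          }

          public static void main(String[] args) {
              Function<Integer, Integer> addOne = x -> x + 1;
              Function<Integer, Integer> doubleIt = x -> x * 2;
              System.out.println(compose(addOne, doubleIt).apply(5)); // (5*2)+1 = 11
          }
      }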

    Read the article

  • How to plot a scatterplot and histogram using R [migrated]

    - by Wee Wang Wang
    I have a dataset of maximum wind speeds (cm), as below:

      Year  Jan  Feb  Mar  Apr  May  June July Aug  Sept Oct  Nov  Dec
      2011  4.5  5.6  5.0  5.4  5.0  5.0  5.2  5.3  4.8  5.4  5.4  3.8
      2010  4.6  5.0  5.8  5.0  5.2  4.5  4.4  4.3  4.9  5.2  5.2  4.6
      2009  4.5  5.3  4.3  3.9  4.7  5.0  4.8  4.7  4.9  5.6  4.9  4.1
      2008  3.8  1.9  5.6  4.7  4.7  4.3  5.9  4.9  4.9  5.6  5.2  4.4
      2007  4.6  4.6  4.6  5.6  4.2  3.6  2.5  2.5  2.5  3.3  5.6  1.5
      2006  4.3  4.8  5.0  5.2  4.7  4.6  3.2  3.4  3.6  3.9  5.9  4.4
      2005  2.7  4.3  5.7  4.7  4.6  5.0  5.6  5.0  4.9  5.9  5.6  1.8

    How do I create a scatterplot of monthly max wind speed (month on the x-axis, wind speed on the y-axis), and also a histogram of the monthly max wind speeds, using R?

    Read the article

  • Detect frameworks and/or CMS utilized on websites in Firefox

    - by jkneip
    I'm redesigning the website for my academic library and am examining other sites to identify the technologies they use, things like:
    - web frameworks
    - JavaScript frameworks
    - server-side technology
    - content management systems
    Now, I've had some real success in Firefox using plugins like Wappalyzer, Firebug, and the DOM Inspector. But some sites just don't display any of the info I'm looking for with these tools, especially, it seems, when an enterprise-level CMS is being used. Does anyone know of any other tools to detect this kind of data? Also, with Firebug and the DOM Inspector there is a lot of info displayed, and I wondered if there was a way to infer the presence of server-side technologies, CMSs, etc. from certain elements of a web page. Also, if this question is more relevant to another Stack Exchange site, please let me know and I'll post it there instead. Much thanks, Jason

    Read the article

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention across a site with multiple web applications. Historically, I've been an advocate of 'lowercase all the letters!' when creating URLs:

      http://example.com/mysystem/account/view/1551

    However, within the last year or two, specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs, I've become a fan of capitalizing the first letter of each section/word within the URL, as it makes it easier to read (imho):

      http://example.com/MySystem/Account/View/1551

    We're not in a situation where people need to read or understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare one way better than the other, or issues we may run into on (at least reasonably modern) setups that would make one preferable? What is the general consensus in this debate currently?

    Read the article

  • Branching strategy for frequent releases

    - by Technext
    We have very frequent releases, and we use Git for version control. When I mention frequency, please take it to include bug fixes and feature releases too. All releases are eventually merged into 'mainline'. When a release is deployed to production and a bug is identified, people start fixing the bug on the same branch from which the latest release was deployed. They do not create a new bug-fix branch for it. I feel that's not the right way to go. There are several components, and each component has a different owner and, thus, a different perspective. Though I have not initiated talks with them, I am sure there will be a lot of resistance. The main issue they might cite would be: "There's a lot of work involved in creating and tracking branches, especially with such frequent deployments to production. This will consume a lot of dev effort." Do you think that fixing a bug on the same branch the release was cut from is a good idea? If yes, how do you manage it? Using tags? I know that best practices may not always be applicable due to several factors, but I would still like to know what a good approach to branching might be in a scenario where releases and bug fixes happen almost daily.
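
    On the "using tags?" point, one low-overhead pattern is to tag every production deploy and cut a short-lived fix branch from the tag only when a bug actually appears; a sketch (tag and branch names are illustrative):

      # Tag each production deploy:
      git tag -a v2014.05.21 -m "Production deploy"

      # When a bug is found in that deploy, branch from the tag:
      git checkout -b hotfix/v2014.05.21 v2014.05.21

      # Fix and deploy from the hotfix branch, then merge it back into
      # mainline so the fix is not lost in the next release:
      git checkout mainline
      git merge hotfix/v2014.05.21
      git branch -d hotfix/v2014.05.21

    Since branches are created only on demand, the per-release cost is a single tag, which may blunt the "too much branch bookkeeping" objection.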

    Read the article
