Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#):

        public interface ITranslationService
        {
            string GetTranslation(string key, CultureInfo targetLanguage);

            // some other methods...
        }

    A first simple implementation of this interface already exists and simply goes to the database for every method call. For a UI that is translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for that language and cache them; all translation requests are then served from the cache.

    I thought about implementing this new behavior as a decorator, because all other methods of the interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use GetTranslation of the decorated instance at all to get all translations of a certain language; it would fire its own query against the database. This breaks the decorator pattern, because every piece of functionality provided by the decorated instance is simply skipped, which becomes a real problem if other decorators are involved.

    My understanding is that a decorator should be additive. In this case, however, the decorator replaces the behavior of the decorated instance. I can't really think of a nice solution for this - how would you solve it? Everything is allowed, even a complete redesign of ITranslationService itself.
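
    One way out is to accept that the caching layer replaces, rather than decorates, the per-key lookup, and let it depend on a narrower bulk-query abstraction instead of wrapping the original service. A minimal sketch, assuming a hypothetical ITranslationRepository with a bulk GetAllTranslations query (neither appears in the original post):

        using System.Collections.Generic;
        using System.Globalization;

        // Hypothetical bulk-query abstraction (not in the original post).
        public interface ITranslationRepository
        {
            Dictionary<string, string> GetAllTranslations(CultureInfo targetLanguage);
        }

        // Not a decorator over ITranslationService: it depends on the bulk-query
        // abstraction, so no decorated behavior is silently skipped.
        public class CachingTranslationService : ITranslationService
        {
            private readonly ITranslationRepository _repository;
            private readonly Dictionary<CultureInfo, Dictionary<string, string>> _cache =
                new Dictionary<CultureInfo, Dictionary<string, string>>();

            public CachingTranslationService(ITranslationRepository repository)
            {
                _repository = repository;
            }

            public string GetTranslation(string key, CultureInfo targetLanguage)
            {
                Dictionary<string, string> translations;
                if (!_cache.TryGetValue(targetLanguage, out translations))
                {
                    // one database round trip per language instead of one per control
                    translations = _repository.GetAllTranslations(targetLanguage);
                    _cache[targetLanguage] = translations;
                }
                return translations[key];
            }
        }

    Other decorators can then still wrap ITranslationService as usual, since nothing in this class pretends to delegate.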

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments I have with other people on the project is about how to call the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets is the heaviest part, since we don't store tweets in our database, so viewing tweets means calling the API every time. One of the people on the project suggested that we fetch tweets through Javascript instead, to lessen the load on the server. We used GET search which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests as well. As said here:

        We don't support or recommend performing OAuth directly through Javascript --
        it's insecure and puts your application at risk. The only acceptable way to
        perform it is if you kept all keys and secrets server-side, computed the
        OAuth signatures and parameters server side, then issued the request
        client-side from the server-generated OAuth values.

    If we do exactly what Twitter suggests, the only difference from doing everything server-side is that our server won't have to contact the Twitter API every time the user wants to view tweets. If we do it through Javascript, it would be harder on my part, because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to Javascript?
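
    For reference, the server-side step Twitter describes (compute the signature, hand the finished OAuth values to the client) boils down to standard OAuth 1.0a HMAC-SHA1 signing. A minimal sketch in C# (OAuthSigner is a made-up helper, not part of any Twitter library; the asker's project would do the equivalent in Ruby):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static class OAuthSigner
        {
            // RFC 3986 percent-encoding, as OAuth 1.0a requires
            static string Enc(string s) => Uri.EscapeDataString(s);

            public static string Sign(string httpMethod, string url,
                                      SortedDictionary<string, string> parameters,
                                      string consumerSecret, string tokenSecret)
            {
                // 1. Parameter string: sorted, encoded key=value pairs joined by '&'.
                var paramString = string.Join("&",
                    parameters.Select(p => Enc(p.Key) + "=" + Enc(p.Value)));

                // 2. Signature base string: METHOD&encoded-url&encoded-params.
                var baseString = httpMethod.ToUpperInvariant() + "&" +
                                 Enc(url) + "&" + Enc(paramString);

                // 3. Signing key is consumerSecret&tokenSecret; the signature is
                //    base64(HMAC-SHA1(baseString)).
                var key = Encoding.ASCII.GetBytes(Enc(consumerSecret) + "&" + Enc(tokenSecret));
                using (var hmac = new HMACSHA1(key))
                    return Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
            }
        }

    The server would send the resulting oauth_signature (plus nonce, timestamp, etc.) to the browser, which then issues the request itself.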

  • How to "back track"?

    - by esqew
    I find that I start projects and, due to my lack of experience, only discover later that the old database structures and huge blocks of code are inefficient and memory-hungry. However, by the time I realize a redesign of the entire project is needed, the project has grown to such a size that it is simply too late to go back and modify it in its current state; it requires a completely new project file and the whole shebang. How can I prevent ruts like this one, where it is too late to go back and modify the current project to fit specifications that were changed far down the road from the project's creation? (Apologies in advance for confusing grammar, it's been a long day here... as you can probably tell.)

  • How do I classify using SVM Classifier in Matlab?

    - by Gomathi
    I'm on a project of liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation respectively. Then I used the gray-level co-occurrence matrix (GLCM) for texture feature extraction. I have to use a support vector machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program this in Matlab?

    To the GLCM program, I gave the tumor-segmented image as input. Was that correct? If so, I think my output will also be correct. I gave the parameters exactly as in the example provided in the documentation. The output I obtained was:

        stats =
            autoc: [1.857855266614132e+000 1.857955341199538e+000]
            contr: [5.103143332457753e-002 5.030548650257343e-002]
            corrm: [9.512661919561399e-001 9.519459060378332e-001]
            corrp: [9.512661919561385e-001 9.519459060378338e-001]
            cprom: [7.885631654779597e+001 7.905268525471267e+001]
            cshad: [1.219440700252286e+001 1.220659371449108e+001]
            dissi: [2.037387269065756e-002 1.935418927908687e-002]
            energ: [8.987753042491253e-001 8.988459843719526e-001]
            entro: [2.759187341212805e-001 2.743152140681436e-001]
            homom: [9.930016927881388e-001 9.935307908219834e-001]
            homop: [9.925660617240367e-001 9.930960070222014e-001]
            maxpr: [9.474275457490587e-001 9.474466930429607e-001]
            sosvh: [1.847174384255155e+000 1.846913030238459e+000]
            savgh: [2.332207337361002e+000 2.332108469591401e+000]
            svarh: [6.311174784234007e+000 6.314794324825067e+000]
            senth: [2.663144677055123e-001 2.653725436772341e-001]
            dvarh: [5.103143332457753e-002 5.030548650257344e-002]
            denth: [7.573115918713391e-002 7.073380266499811e-002]
            inf1h: [-8.199645492654247e-001 -8.265514568489666e-001]
            inf2h: [5.643539051044213e-001 5.661543271625117e-001]
            indnc: [9.980238521073823e-001 9.981394883569174e-001]
            idmnc: [9.993275086521848e-001 9.993404634013308e-001]

    The thing is, I ran the program for three images, but all three gave me the same output. When I used graycoprops() I got:

        stat =
            Contrast: 4.721877658740964e+005
            Correlation: -3.282870417955449e-003
            Energy: 8.647689474127760e-006
            Homogeneity: 8.194621855726478e-003

        stat =
            Contrast: 2.817160447307697e+004
            Correlation: 2.113032196952781e-005
            Energy: 4.124904827799189e-004
            Homogeneity: 2.513567163994905e-002

        stat =
            Contrast: 7.086638436309059e+004
            Correlation: 2.459637878221028e-002
            Energy: 4.640677159445994e-004
            Homogeneity: 1.158305728309460e-002
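
    On the normalization part of the question: the usual approach (not specific to Matlab) is to rescale each feature dimension independently over the training set, for example with min-max scaling or z-scoring. Sketched in LaTeX, with $x_{ij}$ the $j$-th feature of sample $i$:

        x'_{ij} = \frac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}
        \qquad \text{or} \qquad
        x'_{ij} = \frac{x_{ij} - \mu_j}{\sigma_j}

    where $\mu_j$ and $\sigma_j$ are the mean and standard deviation of feature $j$ over the training samples; the same transformation, using the training-set statistics, is then applied to the test samples.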

  • Creating a blog for software changes

    - by Dave
    I work for a small company where I maintain a number of projects all at once. I would like to create a blog and note software changes/updates there so that I can keep track of things. It would also serve as a help tool for others if they need it. I would like to install something locally on my machine or network; either ASP or PHP is fine. Which software would you recommend? Is it a good idea or a bad idea? Has anyone done it? I have worked with WordPress and I like it, but I am afraid it is not best for code snippets. Any thoughts? I do use source control, though I am not an expert with it. I use three different development environments:

    1. Visual Studio
    2. Eclipse
    3. SQL Server Management Studio

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines:

    It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance, and if any requirements were missed there too, the original tender should be referenced.

    Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them), but it actually supersedes them as the basis for system acceptance.

    Is one of the above arguments more valid than the other?

  • Is there a sequence to read through the Android Developer Site for a user new to Android?

    - by Paul
    I keep seeing that I don't need to buy an Android development book, that I should just read the Android Developer Site because it has everything I need to know. I see it more as drinking from a fire hose. But I'm one of those people who likes to be walked through the basics. I like to build up my knowledge rather than being dropped into reference documentation, and I like to make sure I have seen all or most of the topics covered. I'd hate to develop the wrong thing because I don't know about Fragments, Content Providers, or whatever. So, since it's such a great resource, better than any book (we don't need no stinkin' books), how do I traverse the site to get the information in the same way a book would lay it out?

  • How can I find a good open source project to join?

    - by Lord Torgamus
    I just started working a year ago, and I want to join an open source project for the same reasons as anyone else: help create something useful and develop my skills further. My problem is, I don't know how to find a project where I'll fit in. How can I find a beginner-friendly project? What attributes should I be searching for? What are warning signs that a project might not be the right fit? Are there any tools out there to help match people with open source projects? There's a similar question here, but that question has to do with employment and is limited to PHP/Drupal.

  • What are approaches for analyzing the cost-benefits of a development methodology?

    - by Garrett Hall
    There are many development practices (TDD, continuous integration, cowboy-coding), principles (SOLID, layers of abstraction, KISS), and processes (RUP, Scrum, XP, Waterfall). I have learned you can't follow any of these blindly, but have to consider context and ROI (return on investment). My question is: How do you know whether you are getting a good ROI by following a particular methodology? Metrics, guesstimation, experience? Do analytical methods exist? Or is this just the million-dollar question in software engineering that has no answer?

  • Can I release complementary Windows 8 and WP8 apps on their respective stores?

    - by Clay Shannon
    I am creating a pair of apps, one to run preferably on tablets, but also laptops and PCs, and the other for WP8. These apps are complementary - having one is of no use without the other. I know there is a Windows Store, and a Windows Phone store, so one would be released on one, and one on the other. My question is: as these apps are useless by themselves (although in most cases it won't be the same people running both apps), will there be a problem with offering these useless-when-used-alone apps? IOW: Person A will use the Windows 8 app to interact with some people that have the WP8 app installed; those with the WP8 app will interact with a person or people who have the Windows 8 app installed. What I'm worried about is if these apps go through a certification process where they must be useful "standalone" - is that the case?

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    The title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, item x of y processed). I also need to be able to pause and stop the operation. (Stopping doesn't roll back the items already processed.) The thing is, it's not that each item takes a long time to process; it's that there are usually a lot of items. What I've read so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting belongs in the application layer, but it didn't go into the details. So, having said all this, what I have in mind is the following:

    1. A user request with a list of items enters the application layer.
    2. The application layer gets some information from the domain needed to process the items.
    3. The application layer passes the items and the information off to some domain service (should the implementation of this service live in the infrastructure layer?).
    4. This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping. The worker thread processes each item in its own UoW, which means the domain information from earlier needs to be stored in some DTO.
    5. Since nothing is really persisted, the service should be a thread-safe singleton.
    6. Whenever a user requests a progress report or wants to pause/stop the operation, the application layer asks the service.

    Would this be a correct solution, or am I at least on the right track? Especially the singleton and thread-safe part makes the whole thing feel icky.
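
    As a rough illustration of steps 4-6, here is a minimal C# sketch of such a service built on standard .NET threading primitives (BatchProcessingService, WorkItem and ContextDto are made-up names; this is one possible shape, not a prescribed DDD pattern):

        using System.Collections.Generic;
        using System.Threading;
        using System.Threading.Tasks;

        // Stand-ins for the asker's item type and domain-information DTO.
        public class WorkItem { }
        public class ContextDto { }

        // Hypothetical singleton service: runs items on a background task and
        // exposes progress, pause and stop to the application layer.
        public class BatchProcessingService
        {
            private readonly ManualResetEventSlim _pauseGate = new ManualResetEventSlim(true);
            private readonly CancellationTokenSource _cts = new CancellationTokenSource();
            private int _processed;

            public int ItemsProcessed => _processed; // polled for "item x of y"

            public void Pause()  => _pauseGate.Reset();
            public void Resume() => _pauseGate.Set();
            public void Stop()   => _cts.Cancel();   // finished items stay committed

            public Task RunAsync(IReadOnlyList<WorkItem> items, ContextDto context)
            {
                return Task.Run(() =>
                {
                    foreach (var item in items)
                    {
                        _cts.Token.ThrowIfCancellationRequested(); // stop requested?
                        _pauseGate.Wait(_cts.Token);               // block while paused
                        ProcessInOwnUnitOfWork(item, context);
                        Interlocked.Increment(ref _processed);
                    }
                });
            }

            private void ProcessInOwnUnitOfWork(WorkItem item, ContextDto context)
            {
                // each item is committed in its own UoW, so stopping never
                // rolls back items that were already processed
            }
        }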

  • How to estimate freight / shipping costs?

    - by Vani
    Hi, I am working on a PHP web application and want to know the best way to estimate freight costs in the USA. The site deals with construction materials that ship as LTL or full truckloads. I found a few sites like www.freightCenter.com that provide quotes via a web service. Two drawbacks: it's a paid service, and my site's response time is slow when I use the web service. Is there an open source tool or known logic available for estimating shipping / freight costs? Or a way to determine the rate per mile per pound for different freight classes? Thank you, Vani

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host running some sort of data collection and analysis software. I have come to develop these two guidelines when I work with my colleagues:

    1. Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap.
    2. Don't assume your colleagues know everything about their responsibilities. I assume there is some technology that I will need to be competent in to properly interface with the work of my colleagues.

    The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product; for that to happen, one guy has to hand off work to another guy who will vet it and integrate it himself.

    The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have - at least not until they've demonstrated it to me a couple of times. So whenever I develop a piece of firmware that interfaces with some technology I don't know, I try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUSBdotNET for USB access. So I've been learning C# and the .NET USB library he uses, and I built a little console app to help me understand how that USB library works. Now, all this takes extra time and energy, but I feel it's justified, as it gives me a foothold for confronting integration problems. I also like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem.

    So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as teams get bigger and more diverse?

  • Rewriting GPL code to change license

    - by I_like_traffic_lights
    I have found a GPL library (no dual license), which does exactly what I need. Unfortunately, the GPL license on the library is incompatible with the license of a different library I use. I have therefore decided to rewrite the GPL library, so the license can be changed. My question is: How extensive do the changes need to be to the library in order to be able to change the license? In other words, what is the cheapest way to do this?

  • Which is more important in a web application code promotion hierarchy: production environment to repo equivalence, or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are created/altered/deleted in the production environment need to go into the repository. Which of the following two practices / approaches do you find best?

    1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible.
    2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they are absolutely necessary in other environments for development.

    So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production environment to repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.

  • Should I expect my team to have more than a basic proficiency with our source control system?

    - by Joshua Smith
    My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability. Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?

  • Programming Language, Turing Completeness and Turing Machine

    - by Amumu
    A programming language is said to be Turing complete if it can successfully simulate a universal TM. Let's take a functional programming language as an example.

    In functional programming, functions have the highest priority over anything: you can pass functions around like any primitives or objects (first-class functions). A function written purely in functional style does not produce side effects, i.e. it doesn't output strings onto the screen or change the state of variables outside its scope. Each function has a copy of its own objects if the objects are passed from outside, and the copied objects are returned once the function finishes its job; each such function is completely independent of anything outside it, which reduces the complexity of the overall system (referential transparency). A function can also keep the values of its local variables even after it exits (this is done by the garbage collector), and those values can be reused the next time the function is called (memoization). A function should usually solve only one thing; it should model only one algorithm to answer a problem.

    Do you think that a function in a functional language with the above properties simulates a Turing Machine?

    - Functions (= algorithms = Turing Machines) can be passed around as input and returned as output; a TM also accepts and simulates other TMs.
    - Memoization models the set of states of a Turing Machine. The memoized variables can be used to determine the state of a TM (i.e. which lines to execute, what behavior it should take in a given state...). You can also use memoization to simulate your internal tape storage. In a language like C/C++, when a function exits, you lose all of its internal data (unless you store it elsewhere outside its scope).
    - The set of symbols is the set of all strings in a programming language, which is the higher-level, human-readable version of machine code (opcode).
    - The start state is the beginning of the function. However, with memoization, the start state can be determined by memoization or, if you want, a switch/if-else statement in an imperative programming language. But then, you can't...
    - The final accepting state is when the function returns a value; it rejects if an exception happens. Thus, the function (= algorithm = TM) is decidable; otherwise, it's undecidable. I'm not sure about this.

    What do you think? Is my thinking true on all of this? The reason I bring up functions in functional programming is that I think they're closer to the idea of a TM. What experience with other programming languages do you have that shapes how you think about TMs and the ideas of Computer Science in general? Can you specify how you think?
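
    Since memoization carries much of the weight in the argument above, here is a small C# illustration (a hypothetical generic helper, not tied to any library) of a function that retains state between calls:

        using System;
        using System.Collections.Generic;

        static class Memoizer
        {
            // Wraps f so each distinct input is computed only once; the
            // dictionary is the state that survives between calls, loosely
            // analogous to a machine's persistent tape/state.
            public static Func<TIn, TOut> Memoize<TIn, TOut>(Func<TIn, TOut> f)
            {
                var cache = new Dictionary<TIn, TOut>();
                return x =>
                {
                    TOut result;
                    if (!cache.TryGetValue(x, out result))
                    {
                        result = f(x);
                        cache[x] = result;
                    }
                    return result;
                };
            }
        }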

  • Anonymous function vs. separate named function for initialization in jquery

    - by Martin N.
    We just had a controversial discussion and I would like to hear your opinions on the issue. Let's say we have some code that is used to initialize things when a page is loaded, and it looks like this:

        function initStuff() { ... }
        ...
        $(document).ready(initStuff);

    The initStuff function is only called from the third line of the snippet, never again. Now, I would say people usually put this into an anonymous callback like this:

        $(document).ready(function() {
            // body of initStuff
        });

    because having the function in a dedicated location in the code doesn't really help readability, and the call to ready() makes it obvious that this code is initialization stuff. Would you agree or disagree with that decision? And why? Thank you very much for your opinion!

  • Windows Server, SQL Server [on hold]

    - by user136329
    I will give the high-level details of my requirement. We have one web application which accesses the database through Sybase, using the following technologies: Visual Studio 2010, .NET Framework 4.5, and Crystal Reports for reporting. These are housed on Windows Server 2008, and we use different servers for the database. We are thinking of moving to SQL Server to be able to utilize its reporting features. My question is: can SQL Server be part of Windows Server 2008 R2, or do we need an additional server for SQL?

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I recently completed a language library that acts as a giant filter for food items, and flows a bit like this: Products -> Recipes -> MenuItems -> Meals and, finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information for each class, and it seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there is one back-end user who edits and manipulates data, and multiple front-end users who select their Meal(s) to create an Order. Ideally, all of the front-end users would have all of this information stored locally within the library, and would update the library on startup from a database.

    How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe, and so on and so forth, every time I relaunch the program, which seems like a lot of wasteful loading. Here is a general flow of my architecture:

    Products:

        public class Product : IPortionable
        {
            public Product(string n, uint pNumber = 0)
            {
                name = n;
                productNumber = pNumber;
            }

            public string name { get; set; }
            public uint productNumber { get; set; }
        }

    Recipes:

        public class Recipe
        {
            public Recipe(string n, decimal yieldAmt, Volume.Unit unit)
            {
                name = n;
                yield = new Volume(yieldAmt, unit);
                yield.ConvertUnit();
            }

            /// <summary>
            /// Creates a new ingredient object
            /// </summary>
            /// <param name="n">Name</param>
            /// <param name="yieldAmt">Recipe Yield</param>
            /// <param name="unit">Unit of Yield</param>
            public Recipe(string n, decimal yieldAmt, Weight.Unit unit)
            {
                name = n;
                yield = new Weight(yieldAmt, unit);
            }

            public Recipe(Recipe r)
            {
                name = r.name;
                yield = r.yield;
                ingredients = r.ingredients;
            }

            public string name { get; set; }
            public IMeasure yield;
            public Dictionary<IPortionable, IMeasure> ingredients =
                new Dictionary<IPortionable, IMeasure>();
        }

    MenuItems:

        public abstract class MenuItem : IScalable
        {
            public static string title = null;
            public string name { get; set; }
            public decimal maxPortionSize { get; set; }
            public decimal minPortionSize { get; set; }
            public Dictionary<IPortionable, IMeasure> ingredients =
                new Dictionary<IPortionable, IMeasure>();
        }

    and Meal:

        public class Meal
        {
            public Meal(int guests)
            {
                guestCount = guests;
            }

            public int guestCount { get; private set; }

            //TODO: Make a new MainCourse class that holds pasta and Entree
            public Dictionary<string, int> counts = new Dictionary<string, int>()
            {
                { MainCourse.title, 0 },
                { Side.title, 0 },
                { Appetizer.title, 0 }
            };

            public List<MenuItem> items = new List<MenuItem>();
        }

    The database just stores and links each of these basic names and amounts together using IDs (RecipeID, ProductID and MenuItemID).
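
    On the repeated-foreach concern: a common way to avoid rescanning the list on every link-up is to index the loaded rows by their IDs once, then resolve each reference with an O(1) lookup. A minimal sketch against the classes above (the helper name is made up):

        using System.Collections.Generic;
        using System.Linq;

        public static class ProductIndex
        {
            // Build the index once at startup; each Recipe ingredient row can
            // then be resolved by productNumber without iterating the list.
            public static Dictionary<uint, Product> IndexByNumber(IEnumerable<Product> loaded)
            {
                return loaded.ToDictionary(p => p.productNumber);
            }
        }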

  • Problems hiring someone on elance or similar sites

    - by akim
    I have been assigned to hire some developers/designers on these kinds of sites, and I'm looking for war stories about problems with this strategy. All I have found so far are tips about getting more jobs as a freelancer, but nothing about being on the other side. We need a web designer, and maybe a web programmer, for an internal, small, but really important site with a very short deadline. Any thoughts? Is it worth it to hire someone, or should we work extra hours and get this done ourselves?

  • Registering as developer on Google Play store

    - by ChosenOne
    I am registering as a developer to sell paid applications on the Google Play store and have run into a slight issue. After I paid, I clicked on the "Setup merchant details" link. I filled out the business address section, but in the "Public contact" section, Google says this:

        How can your customers get in touch with you? This information will be
        made available to your customers when they make a purchase.

    I work from home. I do not want customers knowing my home address, nor do I want it displayed anywhere online or even accessible by anyone. Should I just enter NA in each of those fields? Surely Google understands that we have a right to keep such things private? How can I get around this without getting my account suspended or risking not being approved?

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example:

    - If the team is not used to using source control, training problems can arise.
    - If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc.

    Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways across ~100 files. Merging would take not only great knowledge of the project, but is also well beyond what can be completed in the given time. Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it.

    - Developers argue that using source control is wasteful, because merging is error-prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error-prone indeed. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying code on the server saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time-consuming, across ~10 projects.
    - Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

  • Why is Javascript used in MongoDB and CouchDB instead of other languages such as Java, C++?

    - by startup007
    I asked this question on SO but was advised to try here, so here it goes. My understanding of Javascript so far has been that it is a client-side language that captures events and makes a web page dynamic. But on reading the comparison between MongoDB and CouchDB, I noticed that both use Javascript. This makes me wonder about the reason behind the choice of JavaScript over other conventional languages. I guess I am trying to understand the role of JavaScript and its advantages over other languages.

    Update: I am not asking about the languages / drivers supported by the two databases. The comparison says:

        Both CouchDB and MongoDB make use of Javascript. CouchDB uses Javascript
        extensively, including in the building of views. MongoDB also supports
        running arbitrary javascript functions server-side and uses javascript
        for map/reduce operations.

    My lack of understanding pertains to why Javascript is being used at all for the back-end work. Why is it preferred for building views in CouchDB, or for map/reduce operations? Why weren't C/C++ or Java used? What are the advantages of using Javascript for such back-end work?

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices that used to make me roll my eyes in despair. I saw the software industry through pink glasses and laughed or cried at it depending on my mood. I was so convinced everything could be done better.

    Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain through many years, it is usually more important to produce quickly and cheaply, just to see if the project can succeed at all. Before that point, it doesn't matter much whether the design is cheap to maintain; after it, it might be too late to improve things. Clients need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for some tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients and ourselves.

    Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economic success of software projects? (And not the software architecture alone.)
