Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.

  • How should compound words be handled when coding? Is there a definitive list of compound words? [closed]

    - by Ray
    QUESTION: How should you handle compound words when programming? Are there any good lists available online of generally accepted technology-related compound words for developers? I can see how this is highly ambiguous, so should I just use common sense? EXAMPLE: I would be inclined to write filename NOT FileName, or login NOT LogIn. However, the Microsoft documentation indicates that filename is not a compound word. So I wonder, is there a more definitive source? See also this english.stackexchange discussion on filename. The section "Capitalization Rules for Compound Words and Common Terms" in Microsoft's .NET Capitalization Conventions offers only a limited introduction to the topic, and leaves it up to the developer to use their intuition for the rest.
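
    If it helps to see the convention in code, here is a minimal C# sketch assuming the Framework Design Guidelines' rulings ("file name" and "user name" are treated as two words, "callback" as one); the class itself is made up for illustration:

      // Sketch only: identifier casing per the .NET Framework Design Guidelines.
      using System;

      public class LogFile
      {
          // "file name" is two words, so PascalCase gives FileName (not Filename)
          public string FileName { get; private set; }

          // "user name" is likewise two words: UserName, not Username
          public string UserName { get; set; }

          private Action _callback;

          // "callback" is a closed compound: Callback, never CallBack
          public void RegisterCallback(Action callback) => _callback = callback;

          // camelCase for parameters: fileName (not filename or FileName)
          public LogFile(string fileName)
          {
              FileName = fileName;
          }
      }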

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management, and their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus about what is needed for the path ahead.

    They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin.

    Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("code smells", etc.). I also hope to introduce a number of agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: what would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

  • How do I know which file a program is trying to access?

    - by user9069
    I have a program which I am trying to run; however, when I run it, it just complains that it can't find a particular file, and I have no idea which folder it is trying to find this file in. I have a copy of the required file; I just need to know which folder to copy it to. Is there any way to show in real time which files are being accessed, or which file accesses are failing? I am using the Ext4 filesystem, if that helps. Thanks
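
    On Linux, one common way to watch this in real time is strace (assuming it is installed); a sketch, where ./yourprogram stands in for the actual binary:

      strace -f -e trace=open,openat -o trace.log ./yourprogram
      grep ENOENT trace.log

    Failed opens are reported with ENOENT, so the log shows the exact paths the program tried, including the directories it searched in.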

  • How to explain that writing universally cross-platform C++ code and shipping products for all OSes is not that easy?

    - by sharptooth
    Our company ships a range of desktop products for Windows, and lots of Linux users complain on forums that we should have written versions of our products for Linux years ago, and that the reason we don't is that:

      - we're a greedy corporation
      - all our technical specialists are underqualified idiots

    Our average product is something like 3 million lines of C++ code. My colleagues' and my analysis is the following:

      - writing cross-platform C++ code is not that easy
      - preparing a lot of distribution packages and maintaining them for all widespread versions of Linux takes time
      - our estimate is that the Linux market is something like 5-15% of all users, and those users will likely not want to pay for our effort

    When this is brought up, the response is again that we're greedy underqualified idiots, and that when everything is done right, all this is easy and painless. How reasonable is our estimate that writing cross-platform code and maintaining numerous distribution packages takes a lot of effort? Where can we find some easy yet detailed analysis, with real-life stories, that shows beyond the shadow of a doubt how much effort exactly it takes?

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android, and Windows 8 Mobile. All three platforms are technically in different programming languages, and the only 'reuse' I can see is the boxes-and-lines (UML :) charts, and nothing else. So how do companies/programmers manage variations of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world, IMO, given the plethora of languages and cross-platform libraries that make your life easier. Not so in the mobile world. Moreover, product-line management principles don't seem all that applicable: what is common and what is variant doesn't really matter here, as the application is the same (conceptually) while the implementation varies. Some difficulties that come to mind:

      - Bug fixing: Applications may be designed in a similar manner, but bug identification and fixing can be radically different. A bug on iOS may or may not exist on Android, and a bug-fix approach on one platform may not be the same on another (unless it's a semantic bug like a!=b instead of a==b, which would require essentially the same 'approach' to fixing).
      - Enhancements: Making a change on one platform can be radically different than on another.
      - Code-design divergence: The way the code is written/organized, the class structures, etc., can be very different given the different implementation environments, further reducing reuse of the (above) UML models.

    There are of course many others -- just keeping development in sync and making sure all applications are at the same version with the same set of features, etc. The effort seems to be 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts:

      - Split the application into client/server to confine the variation to the client side only (not always doable).
      - Use frameworks like Unity-3D that take care of the cross-platform problem (mostly applicable to games, and probably not to other applications).

    Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

  • How can I improve my skills while working on actual projects, in the absence of more experienced developers?

    - by LolCoder
    I'm the lead developer at a small company, working with C# and ASP.NET. Our team is small, 2-3 people, without much experience in development and design. I don't have the opportunity to learn from more senior developers; there is no one on my team to guide me and help me choose the best approaches, as I take care of most of the projects myself. How can I improve my software development skills while working on actual projects, in the absence of more experienced developers?

  • Reasons for and against letting developers test their own work

    - by pyvi
    I want to gather some arguments as to why letting a developer test his/her own work as the last step before the product goes into production is a bad idea, because unfortunately my place of work sometimes does this. (The last time this came up, the argument boiled down to most people being too busy with other things and not having the time to get another person familiar with that part of the program -- it's very specialised software.) There are test plans in this case (though not always), but I am very much in favor of having someone who didn't make the changes under test actually do the final testing. So I am asking if you could provide me with a good and solid list of arguments I can bring up the next time this is discussed. Or provide counter-arguments, in case you think this is perfectly fine, especially when there are formal test cases to test against.

  • What should you include in a development approach document?

    - by Liggy
    I'm in the middle of co-producing a "development approach" document for off-shore resources as they ramp up onto our project. The most recent (similar) document our company has used is over 80 pages, and that does not include coding standards/conventions documents. My concern is that this document will not be consumable and will therefore fail. What should be in a development approach document? Are there any decent guidelines on this topic? EDIT: The development approach document should detail the practices and techniques that will be used by software developers while software is designed, built, and tested.

  • Master Data Management

    - by Logicalj
    I am looking for a very flexible, easy-to-integrate, dynamic application with as many features as possible for Master Data Management. Since Master Data Management is used to manage operational data, analytical data, and master data, I want guidance on what exactly is expected from Master Data Management, and what basic and challenging scenarios it needs to cover or resolve. Please guide me through all the possible aspects of Master Data Management, such as data cleansing, data management, data analysis, etc.

  • Why can't the IT industry deliver large, faultless projects as quickly as other industries?

    - by MainMa
    After watching National Geographic's MegaStructures series, I was surprised how fast large projects are completed. Once the preliminary work (design, specifications, etc.) is done on paper, the realization of huge projects takes just a few years, or sometimes a few months. For example, the Airbus A380 was "formally launched on Dec. 19, 2000", and by early March 2005 the aircraft was already being tested. The same goes for huge oil tankers, skyscrapers, etc. Comparing this to the delays in the software industry, I can't help wondering why most IT projects are so slow -- or, more precisely, why they cannot be as fast and faultless at the same scale, given enough people.

    Projects such as the Airbus A380 present both:

      - Major unforeseen risks: while this is not the first aircraft built, it still pushes the limits of the technology, and things which worked well for smaller airliners may not work for the larger one due to physical constraints; in the same way, new technologies are used which were not used before, for example because they were not available in 1969 when the Boeing 747 was designed.
      - Risks related to human resources and management in general: people quitting in the middle of the project, inability to reach a person because she's on vacation, ordinary human errors, etc.

    With those risks, people still complete projects like those large airliners in a very short period of time, and despite the delivery delays, those projects are still hugely successful and of high quality. When it comes to software development, projects are hardly as large and complicated as an airliner (both technically and in terms of management), and they carry slightly fewer unforeseen risks from the real world. Still, most IT projects are slow and late, and adding more developers to the project is not a solution (going from a team of ten developers to two thousand will sometimes make it possible to deliver the project faster, sometimes not, and sometimes will only harm the project and increase the risk of not finishing it at all). Those which are delivered often contain a lot of bugs, requiring consecutive service packs and regular updates (imagine "installing updates" on every Airbus A380 twice per week to patch the bugs in the original product and prevent the aircraft from crashing).

    How can such differences be explained? Is it due exclusively to the fact that the software development industry is too young to manage thousands of people on a single project in order to deliver large-scale, nearly faultless products very fast?

  • What does an individual need to be aware of when signing an NDA with a client?

    - by doNotCheckMyBlog
    I am very new to the IT industry and have no prior experience. However, I have come into contact with a party that is gearing up to build a mobile application, and they want me to sign an NDA (Non-Disclosure Agreement). The definition seems vague:

      The following definitions apply in this Agreement: Confidential Information means information relating to the online and mobile application concepts discussed and that:
      (a) is disclosed to the Recipient by or on behalf of XYZ;
      (b) is acquired by the Recipient directly or indirectly from XYZ;
      (c) is generated by the Recipient (whether alone or with others); or
      (d) otherwise comes to the knowledge of the Recipient.

    When they say "otherwise comes to the knowledge of the Recipient", does that mean that if I think of an idea out of my own creative mind which happens to be similar to their idea, it would be a breach of this agreement? Also, is it okay to ask them to include the application's name in the definition? As it currently stands -- "Confidential Information means information relating to the online and mobile application concepts discussed" -- it sounds like I may not disclose any online or mobile application concept they think of to anybody. I am more concerned about this part:

      Without limiting XYZ's rights at law, the Recipient agrees to indemnify XYZ in respect of all claims, losses, liabilities, costs or expenses of any kind incurred directly or indirectly as a result of or in connection with a breach by it or any of its officers, employees, or consultants of this Agreement.

    Is it really common in the IT industry to sign such an agreement between client and developer? Is there anything in particular I should be concerned about?

  • Consultant in a firm that doesn't understand the tech!

    - by techsjs2012
    I got a job as a consultant in a firm that has 3 other programmers. My job is to rewrite all the old systems in Java, Spring, etc., but the staff programmers only know Perl, and the manager does not know any programming. I am trying to get them to understand that I have 6 projects to rewrite here, but no one has design docs or specs; the staff programmers have never had to write any documents. Plus, I can't get the manager to understand the new Java tech stuff: he keeps asking some of the staff for views on things, but the staff don't know it or understand it. Where do I go from here to make the manager understand that the staff programmers, or someone, have to write design documents so I know what to build? Or, if I have to write the documents myself, how do I get the information?

  • Looking for software to make an animated cartoon to present a new application/scenario idea [closed]

    - by Skarab
    I have an idea for an application (plus a usage scenario), and I would like to create an animated cartoon that shows a use case for this application and its novelty. My company is rather big, so I am looking for an interesting way to let people know my idea and get feedback / a green light to develop it further. I am therefore looking for an application (free or commercial) that I could use to create such an animated cartoon. I posted this question on stackoverflow before, but I think this might be a better community to ask it in.

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Now our main product's development team is shifting to feature-based development, and we need to support several DBs with non-identical data schemes (the scheme changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches, and are discussing this one: one working DB (WDB), with one DB for each feature branch (FDB). The approved data is moved from the WDB to the FDBs, so we need to support only one script for each branch, which will extract data from the corresponding FDB. Nevertheless, we would have to code the differences between the FDBs and the WDB manually. Maybe some automatic mapping tools exist? I also wish to know whether classic solutions to problems like this already exist. Can anyone share best practices for this case?
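
    For concreteness, one shape the manual mapping could take is a per-branch translation table consulted by the extraction script; a C# sketch, where the branch and column names are entirely hypothetical:

      using System.Collections.Generic;

      // Sketch: each feature branch's DB (FDB) may rename columns relative to
      // the working DB (WDB); unchanged columns pass through untouched.
      static class SchemaMap
      {
          // branch -> (WDB column -> FDB column)
          static readonly Dictionary<string, Dictionary<string, string>> Map =
              new Dictionary<string, Dictionary<string, string>>
              {
                  ["feature/pricing"] = new Dictionary<string, string>
                  {
                      ["ItemCost"] = "ItemBaseCost",   // renamed in this branch
                  },
              };

          public static string Translate(string branch, string wdbColumn) =>
              Map.TryGetValue(branch, out var cols) &&
              cols.TryGetValue(wdbColumn, out var fdbColumn)
                  ? fdbColumn
                  : wdbColumn;
      }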

  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious and boring and take forever (a day, but it feels like a lot longer). The developers complain about them and dread upcoming plannings. Our routine is pretty standard (a user story is inserted into the sprint backlog by priority; the story is broken down into tasks; tasks are estimated in hours; repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable?

    Some more details, in response to requests for more information:

    Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is that we've been getting story estimates terribly wrong -- but I guess that's the subject for an altogether different question.

    Why are developers complaining? Meetings are long. Meetings are monotonous: story after story, task after task, struggling (yes, struggling) to estimate how long each will take and what it involves. Estimating tasks makes user-story estimation seem pointless. The longer the meeting, the less focus in the room; the less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting over two days in order to keep people focused, but the developers wouldn't hear of it: one day of planning is bad enough; now we'll have two?! Part of our problem is that we go into very fine detail (in order to get more accurate estimations), but when we estimate roughly, we go way off the mark!

    To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?

  • Which Continuous Integration framework do you use and why?

    - by Richard Warburton
    There are quite a few different Continuous Integration (CI) frameworks out there, and I'm wondering which is the most popular. Which frameworks have you used at the firms where you've worked? Is there any reason one CI framework is more popular than another? Perhaps it's to do with the features it offers, the things that integrate with it, or maybe it's just marketing? It seems like continuous integration is used more in the Java and .NET worlds than in, say, Ruby or Python. Why is this?

  • How do you take into account usability and user requirements for your application?

    - by voroninp
    Our team supports a BackOffice application: a mix of WinForms and WPF windows (about 80, including dialogs) -- really a kind of Swiss Army knife. It is used by developers, tech writers, security developers, and testers. Requirements for new features come quite often, and sometimes we play Wizard of Oz to decide which GUI our users like most. And it usually happens (I admit it may be just my subjective interpretation of reality) that one tiny detail which gives the app the flavor of good usability requires a lot of time. This time is spent 'fighting' the GUI framework to make it act the way we need, and it is very difficult to make estimations for this type of task (at least for me and most members of our team); scrum poker is no help either.

    Management often considers this usability perfectionism a waste of time. On the other hand, the accumulated effect of features which each have some little usability flaw frustrates users. But the same users want frequent releases and instant bug fixes. Hence there is no way to get positive feedback: there is always somebody who is snuffy. I constantly feel as if we are competing with ourselves: more features mean more bugs/tasks/architecture. We are trying to outrun the cart we are pushing. New technologies arrive, and some of them could potentially help improve the design or decrease task implementation time, but these technologies require learning, prototyping, and so on.

    Well, that was the story. And now the question: How do you balance time pressure, product quality, and user and management satisfaction? When and how do you decide to leave a problem with a not perfect but to some extent acceptable solution, and how often do you make these decisions? How do you deal with your own satisfaction? What are your priorities?

    P.S. Please keep in mind, we are a BackOffice team; we have neither a dedicated technical writer nor a GUI designer. A tester joined us only recently. We have much work to do and much freedom concerning 'how'. I like it because it fosters creativity, but I don't want to become a too-nerdy perfectionist.

  • Any empirical evidence on the efficacy of CMMI?

    - by mehaase
    I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations? Edit for clarification: CMMI stands for "Capability Maturity Model Integration". It's developed by the Software Engineering Institute at Carnegie Mellon University (SEI-CMU). It's not a certification, but there are various companies that will "appraise" your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe an organization can be appraised for CMMI levels within different scopes of work: service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well. I originally asked the question because I've seen various empirical studies of software development (e.g. the essays in The Mythical Man-Month refer to numerous such studies, as does McConnell's Code Complete), so I know that there are organizations performing them.

  • Making Agile and DevOps methodology compatible with PCI requirements

    - by kenchew
    I would like to hear from those who work in a PCI-compliance environment and practice agile development and DevOps methodology: how do you maintain compliance with PCI requirements? Specifically, what do you do to address:

      - separation of duties between development/test and production
      - alignment of continuous integration / deployment with change control
      - alignment of agile stories with requirement documentation

  • In which fields does the quality of the software product matter as much as the completion time?

    - by Nav
    Someone told me that if a software product meets the client's expectations, it is good quality. But I've worked with interaction designers (the same kind of people who made Gmail's interface and usability so cool!), and I've loved working with them, because even though they came up with hundreds of changes in requirements and emphasised many, many subtle details, when the software was complete I could look at the product and say WOW! At the place I currently work, the only thing that matters is completing the project on time. As long as it works and the client says it's OK, nobody bothers to improve it. I'm not talking about gold-plating, but I believe that for a programmer to enjoy his (well, maybe her too ;) ) job, they should be able to proudly say "Hey, I made that software", and that comes only when the product is of good quality. Apart from your opinions on this, I'd also like to know in which fields (e.g. aerospace, finance, etc.) I could find companies (you could also mention company names) where the quality of a product is as important as completing the project on time.

  • Stop myself from over-complicating applications

    - by stuartmclark
    Recently I worked on a fairly large project involving C# and MVVM. This application had around 160 projects in the solution, each separated into its own layer. Having worked on that application for almost a year, building it from scratch as part of a team, I am now coming off that project and onto smaller, more trivial projects. As I began to develop a small in-house tool, I found myself trying to mimic the larger application's structure and layering, but in the end I just had a simple application with several DLLs, which I know I wouldn't have created had I not worked on that larger application before. I am just wondering if there are any techniques I can use to stop myself from turning a trivial "code-behind"-style application into a full-blown MVVM application? Or should I continue developing as I am, and just try to keep the unnecessary fluff out of the project?
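
    For contrast, here is a deliberately plain sketch of the "code-behind" end of the spectrum, which is often all a small in-house tool needs -- no view models, no layered DLLs (the form and file names are made up):

      using System;
      using System.IO;
      using System.Windows.Forms;

      // A single-file WinForms tool: events are wired directly in the form.
      public class ReportForm : Form
      {
          private readonly TextBox input = new TextBox { Width = 300 };
          private readonly Button export = new Button { Text = "Export", Top = 30 };

          public ReportForm()
          {
              // Do the work inline; extract classes only when this grows.
              export.Click += (s, e) => File.WriteAllText("report.txt", input.Text);
              Controls.Add(input);
              Controls.Add(export);
          }

          [STAThread]
          static void Main() => Application.Run(new ReportForm());
      }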

  • Need to find a fast/multi-user database program

    - by user65961
    Our company is currently using Excel and has been encountering a series of issues; for starters, we have multiple users sharing this application. We use it to write schedules for our employees and to generate staffing levels. Could someone please tell me the pros and cons of this program, and suggest another database that allows multiple users to share it, along with its pros and cons? I need something that will hold massive amounts of data and allow sharing, with protection capabilities.

  • What "version naming convention" do you use?

    - by rjstelling
    Are different version naming conventions suited to different projects? What do you use, and why? Personally, I prefer a build number in hexadecimal (e.g. 11BCF), which should be incremented very regularly, and then for customers a simple 3-digit version number, i.e. 1.1.3:

      1.2.3 (11BCF) <- Build number, should correspond with a revision in source control
      ^ ^ ^
      | | |
      | | +--- Minor bugs, spelling mistakes, etc.
      | +----- Minor features, major bug fixes, etc.
      +------- Major version, UX changes, file format changes, etc.
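
    One way to read the scheme, as a small C# sketch (the numbers are taken from the example above; everything else is illustrative):

      using System;

      // Formats "1.2.3 (11BCF)": a 3-digit customer-facing version plus a
      // hexadecimal build number tied to a source-control revision.
      class VersionDemo
      {
          static string Format(int major, int minor, int patch, int build)
              => $"{major}.{minor}.{patch} ({build:X})";

          static void Main()
          {
              Console.WriteLine(Format(1, 2, 3, 0x11BCF)); // prints "1.2.3 (11BCF)"
          }
      }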

  • Can a plug-in access a database server?

    - by Black Panther
    At my workplace, we use an external vendor's web application to monitor and respond to our clients' support tickets. The problem with that application is that it does not have a field for entering the actual effort (hours worked on a particular ticket) so that it can be stored in the database. What is needed to write a plug-in for Internet Explorer that would be triggered by a button click on a certain webpage and save some data in an external database? That is, when support personnel close a ticket after resolving it, is it possible to invoke the plug-in, ask the person to enter the effort spent on that ticket, and store it in an external database? We can't modify the web application itself, as it is vendor-supplied and not an in-house product.
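
    Whatever form the browser add-on ends up taking, the database side of the question is the straightforward half; a minimal C# sketch of the saving step that the click handler could call (the connection string, table, and column names are all hypothetical):

      using System;
      using System.Data.SqlClient;

      // Sketch of the storage half only -- the IE add-on part is separate.
      static class EffortLogger
      {
          const string ConnectionString =
              "Server=dbserver;Database=Support;Integrated Security=true;";

          public static void SaveEffort(string ticketId, decimal hours)
          {
              using (var conn = new SqlConnection(ConnectionString))
              using (var cmd = new SqlCommand(
                  "INSERT INTO TicketEffort (TicketId, Hours, LoggedAt) " +
                  "VALUES (@ticket, @hours, @at)", conn))
              {
                  // Parameterized insert: one row per closed ticket.
                  cmd.Parameters.AddWithValue("@ticket", ticketId);
                  cmd.Parameters.AddWithValue("@hours", hours);
                  cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }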

  • What is "egoless programming"?

    - by Bob Murphy
    I first heard this term about fifteen years ago. My understanding is similar to that described in the Wikipedia article and a TechRepublic article: you work with your colleagues in a "friendly, collegiate way in which personal feelings are put aside". It includes things like doing peer reviews with mutual respect and a desire to learn, and not feeling like you "own" code, so if somebody has a suggestion or says there's a bug or needs to change it, you don't get defensive about it. I've also thought it was largely about having an attitude that makes for good relations with other programmers with the goal of improving the code. So I haven't seen it as being incompatible with taking pride in the quality of your work or feeling regret if something you did caused your customer a problem. However, an answer to a recent question makes me think some other programmers have different understandings about "egoless programming". So what is the correct definition? And what are its implications?
