Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 241/563

  • Software design methods for Java or any other programming language

    - by IkerB
    I'm a junior programmer and I would like to know how professionals write their code, or which steps they follow when creating new software: which methodology they use, how they approach software architecture and design, and so on. I would like to find a tutorial that explains, from the beginning, which steps to follow to get from the idea in my mind to the final version of the application, in any language. Or perhaps you could share the steps or rules that you follow yourself. I ask because every time I want to create an application, I spend little time on design and a lot of time coding (I know, that's not good).

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I was offered a job building a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, to avoid the high cost of development charged by a service company, and to have someone young he could trust on board to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I could build anything. I was calling the shots.

    After some research I settled on PHP, and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner and more organized. It looked very good. But again, the web application was huge, and the company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers.

    Because Laravel was fun to work with, it reminded me why I chose this industry in the first place, something I had forgotten while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practices. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY.

    But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code, a real pain to fix a bug, all the logic in the controllers, and little object-oriented design. I keep having the persistent thought that I have to rewrite the whole project. However, I can't do it: they keep asking when it is going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency or the web application's performance. On one hand, the company is waiting for the product and cannot wait any longer. On the other hand, I can't see myself going any further with the current code. I could finish it, wrap it up and deploy it, but God only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major / mainstream programming language has introduced them or plans to introduce them in an upcoming revision of the standard. Yet anonymous functions are a very old and very well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958; see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon? IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.

    Read the article

  • JavaScript evolution -- weeding out the confusion [closed]

    - by good_computer
    There was JavaScript v1.3 (I guess) that we all started with. Then there was JavaScript 2.0, which Adobe implemented (ActionScript) but which was abandoned later. Then came E4X. Then ES5. There is also ES Harmony. I am really confused about which version is the latest and where the standards body is going. Can someone describe the whole chronology of JavaScript / ECMAScript evolution and the important differences between those versions?

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
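
    To make the drive-label disclaimer concrete, here is the arithmetic behind "1 TB shows up as 931 GB", sketched in Python:

        # A drive sold as "1 TB" holds 10**12 bytes (the decimal definition).
        decimal_tb = 10**12
        # An OS that measures in powers of 1024 divides by 1024**3 for its "GB" (really GiB).
        binary_gb = decimal_tb / 1024**3
        print(round(binary_gb))  # 931, hence a "1 TB" drive displaying as "931 GB"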

    Read the article

  • Date calculation algorithm

    - by Julian Cuevas
    I'm working on a project to schedule a machine shop. Basically I've got everything covered BUT date calculations. I've got a method called schedule (working in PHP here):

        public function schedule($start_date, $duration_in_minutes)

    Now my problem is that currently I'm calculating the end time manually, because time calculations have the following rules: during weekdays, work with business hours (7:00 AM to 5:00 PM); work on Saturdays from 7:00 AM to 2:00 PM; ignore holidays (in Colombia we have A LOT of holidays). I already have a lookup table for holidays. I also have a Java version of this algorithm that I wrote for a previous version of the project, but that one's also manual. Is there any way to calculate an end time from a start time and a given duration? My problem is that I have to consider the above rules. I'm looking for a (maybe?) math-based solution, but I currently don't have the mind to devise such a solution myself. I'll be happy to provide code samples if necessary.
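
    The question asks about PHP, but the usual approach is language-agnostic: walk forward one day at a time, consuming only working minutes and skipping holidays. A rough sketch of that idea in Python (the hour rules come from the question; the empty holiday set is a placeholder, and this is not a tested implementation):

        from datetime import timedelta

        HOLIDAYS = set()  # stand-in for the holiday lookup table mentioned in the question

        def working_minutes(day):
            """Working capacity of a date: Sundays/holidays 0, Saturdays 7h, weekdays 10h."""
            if day.date() in HOLIDAYS or day.weekday() == 6:  # holiday or Sunday
                return 0
            if day.weekday() == 5:                            # Saturday, 7:00 AM-2:00 PM
                return 7 * 60
            return 10 * 60                                    # weekday, 7:00 AM-5:00 PM

        def schedule(start, duration_in_minutes):
            day, left = start, duration_in_minutes
            while True:
                capacity = working_minutes(day)
                # (Handling a start time that falls mid-shift is omitted: the first
                # day should only contribute the minutes remaining after `start`.)
                if left <= capacity:
                    opening = day.replace(hour=7, minute=0, second=0)
                    return opening + timedelta(minutes=left)
                left -= capacity
                day += timedelta(days=1)

    Walking day by day is linear in the number of days, which is harmless at shop-scheduling scale; a closed-form "math" solution is hard precisely because of the irregular holiday table.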

    Read the article

  • How do you update live web sites with code changes?

    - by Aaron Anodide
    I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful. I decided to post this because I am about to install SyncToy to remedy the issue below, and I feel a bit unprofessional using a "Toy", but I can't think of a better way. Many times when I am in this situation, I find I am missing some painfully obvious way to do things; this comes from being the only developer in the company. The setup:

    - ASP.NET web application developed on my computer at work
    - Solution has 2 projects: Website (files) and WebsiteLib (C#/DLL)
    - Using a Git repository
    - Deployed on a GoGrid 2008R2 web server

    Deployment:

    1. Make code changes.
    2. Push to Git.
    3. Remote desktop to the server.
    4. Pull from Git.
    5. Overwrite the live files by dragging/dropping with Windows Explorer.

    In step 5 I delete all the files from the website root... this can't be a good thing to do. That's why I am about to install SyncToy...

    UPDATE: Thanks for all the useful responses. I can't pick which one to mark as the answer; between them I got several useful suggestions about web deployment:

    - Web project = whole site packaged into a single DLL. Downside for me: I can't push simple updates. Being a lone developer in a company of 50, this remains something that is simpler at times.
    - Pulling straight from SCM into the web root of the site. I originally didn't do this out of fear that my SCM hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing to worry about forgetting to check over time).
    - Using a web farm and systematically deploying to nodes. This is the ideal solution for zero downtime, which is actually something I care about, since the site is essentially a real-time revenue source for my company. I might have a hard time convincing them to double the cost of the servers, though.

    Finally, the reinforcement of the basic principle that there needs to be single-click deployment for the site OR ELSE THERE'S SOMETHING WRONG is probably the most useful thing I got out of the answers.

    UPDATE 2: I thought I'd come back to this and update it with the actual solution that's been in place for many months now and is working perfectly (for my single-web-server setup). The process I use is:

    1. Make code changes.
    2. Push to Git.
    3. Remote desktop to the server.
    4. Pull from Git.
    5. Run the following batch script:

        cd C:\Users\Administrator
        %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site"
        robocopy Documents\code\da\1\work\Tree\LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config
        %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site"

    As you can see, this brings the site down, uses robocopy to intelligently copy only the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing 4 requests per site update is acceptable. Since I've gotten more proficient with Git, I've found that the first four steps above being a manual process is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to. The documentation for AppCmd.exe is here. The documentation for Robocopy is here.

    Read the article

  • What is the most effective order to learn SQL Server, LINQ, and Entity Framework?

    - by user1525474
    I am trying to get some advice on what order I should learn about SQL Server, LINQ, and Entity Framework to be able to better work with ASP.NET Webforms and MVC. From what I've been able to learn so far, many recommend learning LINQ or Entity Framework before learning SQL Server. It also appears that many companies are looking for people with knowledge in LINQ-to-SQL and Entity Framework without mentioning SQL Server. However, my understanding is that LINQ-to-SQL and Entity Framework translate code into SQL Server queries, making this a poor approach. Is there a correct or best order in which to learn these technologies?

    Read the article

  • Sprite Sheets in PyGame?

    - by Eamonn
    So, I've been doing some googling and haven't found a good solution to my problem. My problem is that I'm using PyGame, and I want to use a sprite sheet for my player. This would all be well and good, if I weren't using a sprite-sheet strip. Basically, if you don't understand: I have a strip of 32x32 'frames'. These frames are all in one image, alongside each other, so I have 3 frames in 1 image. I'd like to be able to use them as my sprite sheet, and not have to crop them up.

    I have used an awesome, popular and easy-to-use game framework for Lua called LÖVE. LÖVE has these things called "quads". They are similar to texture regions in libGDX, if you know what those are. Basically, quads allow you to get parts of an image: you define how large a quad is, and you define parts (or 'regions') of an image that way. I would like to do something similar in PyGame: use a "for" loop to go through the entire image width and height, mark each 32x32 area (or whatever the user defines as the frame width and height), and store it in a list for use later on. I'd define an animation speed and such too, but that's for later.

    I've been looking around on the web and I can't find anything that will do this. I found one script on the PyGame website, but it crashed PyGame when I tried to run it. I spent hours trying to fix it, but no luck. So, is there a way to do this? Is there a way to get regions of an image? Am I going about this the wrong way? Is there a simpler way to do this? Thanks! :-)
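
    What LÖVE calls quads exists in PyGame as subsurfaces: Surface.subsurface() returns a view onto a rectangular region of a loaded image, so no actual cropping is needed. A minimal sketch of slicing a horizontal strip into frames (the file name and sizes are examples):

        import pygame

        def load_strip(path, frame_width, frame_height):
            """Slice a horizontal sprite strip into a list of frame surfaces."""
            sheet = pygame.image.load(path).convert_alpha()  # needs a display mode set first
            frames = []
            for x in range(0, sheet.get_width(), frame_width):
                # subsurface() shares pixels with the sheet; call .copy() on it
                # if you need frames that can be modified independently.
                frames.append(sheet.subsurface(pygame.Rect(x, 0, frame_width, frame_height)))
            return frames

        # e.g., after pygame.display.set_mode(...): frames = load_strip("player.png", 32, 32)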

    Read the article

  • Do you use your personal laptop for work?

    - by davekaro
    We're trying to get our company to let us use our own personal laptops for client work. We've agreed that any code/data will be encrypted using something like TrueCrypt, in case a laptop is stolen or lost. However, the company is still skeptical and not sure they want to allow us to use our personal machines for development. They would rather buy us laptops... but we want to use MacBook Pros and they don't want to pay for them. Even if they did buy us laptops, we would still have the issue of needing to encrypt the code/data in case of theft/loss. Do you use your own laptop for work? What are the arguments for/against this?

    Read the article

  • Solutions for software using many calls to a server

    - by Val
    I am developing software that uses many calls to a server. On the client side it's a Silverlight application. Almost every time a user clicks a button, it sends 1-5 WCF calls to the server. There can be up to a dozen or so users at a time. The server is a database server that serves data to the clients. I am an intermediate-level developer and am thinking about caching some data and syncing my changes from time to time. Are there any established solutions or technologies for this, such as patterns?
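
    On the pattern side, the simplest starting point is a client-side cache with a time-to-live, so repeated clicks reuse a response until it goes stale. A language-agnostic sketch of the idea in Python (the fetch callback stands in for a WCF call):

        import time

        class TtlCache:
            """Cache server responses for `ttl` seconds before re-fetching."""
            def __init__(self, fetch, ttl=30):
                self.fetch = fetch    # stand-in for the WCF service call
                self.ttl = ttl
                self.store = {}       # key -> (expires_at, value)

            def get(self, key):
                expires_at, value = self.store.get(key, (0.0, None))
                if time.time() >= expires_at:  # missing or stale: one server round trip
                    value = self.fetch(key)
                    self.store[key] = (time.time() + self.ttl, value)
                return value

    Syncing local writes back to the server is the harder half; the usual search terms are "unit of work", "write-behind caching" and "optimistic concurrency".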

    Read the article

  • What are good reasons to use explicit interface implementation for the sole purpose of hiding members?

    - by Nathanus
    During one of my studies into the intricacies of C#, I came across an interesting passage concerning explicit interface implementation. While this syntax is quite helpful when you need to resolve name clashes, you can use explicit interface implementation simply to hide more "advanced" members from the object level. The difference between allowing the use of object.method() or requiring the casting of ((Interface)object).method() seems like mean-spirited obfuscation to my inexperienced eyes. The text noted that this will hide the method from Intellisense at the object level, but why would you want to do that if it was not necessary to avoid name conflicts?

    Read the article

  • Validating User Stories: How much change is too much?

    - by David Kaczynski
    While the core of requirements development and acceptance criteria would ideally take place during the planning meeting in order to create a better estimate, Scrum encourages continuous interaction with the product owner throughout the sprint to validate and refine user stories. What kind of criteria is used to judge if there is too much change being imposed on a user story mid-sprint? When is it appropriate to change the requirements of the user story? When is it appropriate to cancel the user story / sprint in order to re-evaluate and re-estimate a user story in question?

    Read the article

  • Using a wiki for requirements

    - by apollodude217
    Hi, I'm looking into ways of improving requirements management. Currently, we have a Word document published on a web site. Unfortunately, we cannot (to my knowledge) look at changes from one revision to the next. I would greatly prefer to be able to do so, much like with a wiki or VCS (or both, like the wikis on Bitbucket!). Also, each document describes changes devs are expected to meet by a given deadline. There is no collection of accumulated app features documented anywhere, so it's sometimes hard to distinguish between a bug and a (poorly-designed) feature when trying to make quick fixes to legacy apps. So I had an idea I wanted to get feedback on. What about:

    1. Using a wiki, so that we can track who changed what and when (mostly to even see whether any edits were made since the last time one looked).
    2. Having one wiki page per product rather than one per deadline, keeping up with all features of the product rather than the changes that should be implemented. This way, I can look at a particular revision of the page to see what the app should do at a given point in time, and I can look at changes to the page since the last release for the requirements to be implemented by the next deadline.

    Waddayathink?

    Read the article

  • Best industry to work for as a developer.

    - by The Elite Gentleman
    Hi guys, Hmmm, StackOverflow now warns me: "The question you're asking appears subjective and is likely to be closed." I've been an avid Java (J2SE, JEE) developer for over 5 years now (and I'm not complaining, even though I want to go back to Delphi & C++). My contract has just ended and I'm wondering which industry to tackle next. I've done the banking and insurance industries for all of my career (with 1 year at a Fortune 500 company), and banking is pretty much the slowest (and most boring) industry to work for (IMHO), since they're strict in their business practices (fair enough). The upside is that they pay. My question is: What is the best industry to work for, for a developer who tends to get bored relatively quickly? Is it also worthwhile for me to do consultancy (and if so, what type of consultancy)? PS There's no gaming industry in South Africa, so suggesting it requires that I travel to a country where gaming is alive! I don't see the Community Wiki checkbox, so I don't know how to make this a wiki.

    Read the article

  • How to detect if an app was already installed before

    - by Dante
    How do software applications keep track of whether the user has already installed the application on their Windows system? Say you install app X (trial version), remove it, then reinstall it, and when you run it again it detects that you had already installed it before. If you uninstall and clean all registry information, it shouldn't know you had already installed it before... Disclaimer: I'm not trying to "hack" any application, just thinking about how this is implemented.
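
    A common implementation is to leave a marker somewhere the uninstaller deliberately does not clean up: an obscure registry value, a file in a shared data directory, or a record on an activation server keyed to a machine fingerprint (which survives even a full local wipe). A toy sketch of the file-marker variant in Python (the path and file name are invented for illustration):

        import os

        # A hypothetical marker dropped outside the app's own install folder,
        # so an ordinary uninstall leaves it behind.
        MARKER = os.path.join(os.environ.get("PROGRAMDATA", "/tmp"), ".vendor_trial_stamp")

        def previously_installed():
            return os.path.exists(MARKER)

        def record_install():
            if not previously_installed():
                with open(MARKER, "w") as f:
                    f.write("1")  # real apps store an obfuscated timestamp or machine ID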

    Read the article

  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when doing so justifies the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later for other things that may or may not need that part?

    For example, suppose project X needs to do things A and B. A definitely needs to be written for re-use, because it just makes sense to do so. B is very project-specific at the moment, and I can hack it together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or we could spend a whole friggin' two weeks figuring out what projects Y and Z might need this thing for, and spend a load of extra time on part B because someday we might need it on project Y or Z (where the savings would be realized).

    I'd imagine the perfect-world situation would be a nicely crafted combination of project-specific and re-use-architected components, chosen per project. However, some code shops might feel it's a great idea to write everything with the intention of using it at some point down the road.

    Read the article

  • Can I release complementary Windows 8 and WP8 apps on their respective stores?

    - by Clay Shannon
    I am creating a pair of apps, one to run preferably on tablets, but also laptops and PCs, and the other for WP8. These apps are complementary - having one is of no use without the other. I know there is a Windows Store, and a Windows Phone store, so one would be released on one, and one on the other. My question is: as these apps are useless by themselves (although in most cases it won't be the same people running both apps), will there be a problem with offering these useless-when-used-alone apps? IOW: Person A will use the Windows 8 app to interact with some people that have the WP8 app installed; those with the WP8 app will interact with a person or people who have the Windows 8 app installed. What I'm worried about is if these apps go through a certification process where they must be useful "standalone" - is that the case?

    Read the article

  • Need suggestions for a multiple-window application design

    - by King Chan
    This was previously posted on Stack Overflow; I just moved it here. I am using VS2008, MVVM, WPF and Prism to make a multiple-window CRM application. I am using an MDI window in my MainWindow, and I want any ViewModel to be able to ask the MainWindow to create/add/close an MDI child window, a ChildWindow (from the WPF Toolkit), or a Window (the plain Window type). A ViewModel can get the DialogResult from the ChildWindow it executes, and MainWindow has control over all opened window types.

    Here is my current approach: I made a Dictionary for each of the window types and store them in the MainWindow class. For example, in a CustomerInformationView, its CustomerInformationViewModel can execute EditCommand and use the EventAggregator to tell MainWindow to open a new ChildWindow. In CustomerInformationViewModel:

        CustomerEditView ceView = new CustomerEditView();
        CustomerEditViewModel ceViewModel = new CustomerEditViewModel();
        ceView.DataContext = ceViewModel;
        ChildWindow cWindow = new ChildWindow();
        cWindow.Content = ceView;
        MainWindow.EvntAggregator.GetEvent<NewWindowEvent>().Publish(new WindowEventArgs(ceViewModel.ViewModeGUID, cWindow));
        cWindow.Show();

    Notice that every ViewModel generates a GUID to help identify its ChildWindow in MainWindow's dictionary, since I only ever use one View and one ViewModel per window. In CustomerInformationViewModel I can get the DialogResult from the ChildWindow's closing event, and CustomerEditViewModel can use the GUID to tell MainWindow to close the ChildWindow.

    Here are my questions and problems: Is it a good idea to use a GUID here, or should I use the ChildWindow's hash key? My MainWindow holds collections of window references, so whenever a window closes, MainWindow is notified by the closing event and removes it from the collection. But the windows themselves don't know their associated GUIDs, so to remove one I have to search through every KeyValuePair and compare. It also feels wrong to associate the ViewModel's GUID with the ChildWindow; it would make more sense for the ChildWindow to have its own ID that the ViewModel associates with. But most importantly, is there a better approach to this design? How can I improve it?
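
    On the reverse-lookup problem specifically: keeping a second map (or stamping the ID onto the window when it is registered) avoids scanning every KeyValuePair on close. A language-agnostic sketch of that registry shape, in Python with invented names:

        import uuid

        class WindowRegistry:
            """Tracks open windows by ID so a ViewModel can request a close by ID."""
            def __init__(self):
                self.by_id = {}   # window_id -> window
                self.ids = {}     # id(window) -> window_id (the reverse map)

            def register(self, window):
                window_id = uuid.uuid4()
                self.by_id[window_id] = window
                self.ids[id(window)] = window_id
                return window_id  # handed back to the requesting ViewModel

            def on_closed(self, window):
                # The closing event supplies the window, not the ID; the reverse
                # map makes removal a lookup instead of a scan over every pair.
                window_id = self.ids.pop(id(window), None)
                self.by_id.pop(window_id, None)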

    Read the article

  • Acceptable placement of the composition root using dependency injection and inversion of control containers

    - by Lumirris
    I've read in several sources, including Mark Seemann's 'Ploeh' blog, about how the appropriate placement of the composition root of an IoC container is as close as possible to the entry point of an application. In the .NET world, these applications seem to be commonly thought of as Web projects, WPF projects, console applications, things with a typical UI (read: not library projects).

    Is it really going against this sage advice to place the composition root at the entry point of a library project, when it represents the logical entry point of a group of library projects, and the client of a project group such as this is someone else's work, whose author can't or won't add the composition root to their project (a UI project or yet another library project, even)?

    I'm familiar with Ninject as an IoC container implementation, but I imagine many others work the same way in that they can scan for a module containing all the necessary binding configurations. This means I could put a binding module in its own library project to compile with my main library project's output, and if the client wanted to change the configuration (an unlikely scenario in my case), they could drop in a replacement dll to replace the library with the binding module. This seems to avoid the most common clients having to deal with dependency injection and composition roots at all, and would make for the cleanest API for the library project group.

    Yet this seems to fly in the face of conventional wisdom on the issue. Is it just that most of the advice out there makes the assumption that the developer has some coordination with the development of the UI project(s) as well, rather than my case, in which I'm just developing libraries for others to use?
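
    One shape the library-friendly compromise can take, sketched in Python with invented names: the library's public facade wires its own defaults, acting as a composition root for clients without a container, while still accepting injected replacements from clients who compose things themselves:

        class SmtpSender:
            """Default implementation the library wires up on its own."""
            def send(self, message):
                print("sending:", message)

        class NotificationService:
            def __init__(self, sender):
                self.sender = sender  # depends on an abstraction, not a concrete type

        def create_service(sender=None):
            # Facade as composition root: clients with no IoC container just call
            # create_service(); clients that do have one pass their own sender.
            return NotificationService(sender or SmtpSender())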

    Read the article

  • How to rotate different data across days of the week in PHP [migrated]

    - by shihon
    I am working on a project in which I have to distribute a different ad each day. The ads, each in the form of an array, look like:

        $ad = array(
            'attribute1_value' => "12",
            'attribute2_value' => "xyz",
            'attribute3_value' => 'http://example.com',
            'attribute4_value' => 'data');

    The logic I am using with a switch statement:

        $day = date('w', time());
        switch ($day) {
            case '0':
                if ($day == '0') {
                    $count = 0;
                    echo $ad;
                    $count++;
                } else {
                    $count = 7;
                    echo $ad;
                }
                break;
            case '1':
                if ($day == '1') {
                    $count = 1;
                    echo $ad;
                    $count++;
                } else {
                    $count = 8;
                    echo $ad;
                }
                break;

    The problem is that I have ~15 ads and want to distribute one ad per day. date('w') outputs the present day of the week, but after day 7 (i.e. Saturday), ad number 8 should start on Sunday. I have to implement this scenario using the date function. I also have to send ads to those users who have not seen the ad before. I am not an expert in PHP; I'm a beginner working with PHP/MySQL. Kindly help me improve this concept.
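
    The question asks for PHP, but the underlying fix is language-agnostic: index the ads by a running day counter modulo the number of ads, instead of by day-of-week. A sketch of the arithmetic in Python (the 15-item list is a stand-in):

        from datetime import date

        ads = [f"ad-{n}" for n in range(1, 16)]  # stand-in for the 15 ad arrays

        # Days elapsed since a fixed epoch never wrap at 7, so with 15 ads the
        # rotation runs a full 15 days before repeating: day 8 shows ad 8, etc.
        days_since_epoch = (date.today() - date(2024, 1, 1)).days
        todays_ad = ads[days_since_epoch % len(ads)]
        print(todays_ad)

    The equivalent index in PHP would be something like floor(time() / 86400) % count($ads).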

    Read the article

  • I need your help [closed]

    - by aisha khan
    Write a program that prompts the user to enter the size of the array, allocates memory using malloc(), asks the user to enter the elements of the array, and displays whether the puzzle is solvable. As part of this program, write a recursive function int Solvable(int start, int squares[]) that takes a starting position of the marker along with the array of squares. The function should return 1 if it is possible to solve the puzzle from the starting configuration, and 0 if it is impossible.
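
    Assuming the usual rules for this classic exercise (from each square the marker may move left or right by the number written on that square, and the goal is the last square), the recursion tries both moves while marking visited squares to avoid looping forever. A sketch of its shape in Python (the assignment itself wants C with malloc()):

        def solvable(start, squares, visited=None):
            """Return 1 if the marker can reach the last square, else 0."""
            if visited is None:
                visited = set()
            if start < 0 or start >= len(squares) or start in visited:
                return 0                      # off the board, or already tried here
            if start == len(squares) - 1:
                return 1                      # reached the goal square
            visited.add(start)
            step = squares[start]
            # Try moving right, then left, by the value on the current square.
            return solvable(start + step, squares, visited) or \
                   solvable(start - step, squares, visited)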

    Read the article

  • MIT and copyright

    - by Petah
    I am contributing to a library that is licensed under the MIT license. The license and each class file have a comment at the top saying: Copyright (c) 2011 Joe Bloggs <[email protected]> I assume that he owns the copyright to the file and can change the license of that file as he sees fit. If I contribute a new class to the library, written entirely by me, can I claim copyright of that file and put: Copyright (c) 2011 Petah Piper <[email protected]> at the top?

    Read the article

  • Get Info From Database, or Build Inferred Info?

    - by Zaemz
    Does it make more sense to store an item's properties directly in a database and retrieve them from there, or, in a case where a product's ID itself encodes information about the product, should that information be derived from the ID? Example: Item SKU -- 4HBU12

        4  - the number of motors
        H  - the voltage
        B  - the color, blue
        U  - the model
        12 - the length

    Should I store those individual attributes as well as the SKU, or should I store only the SKU and build the attributes from it?
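
    If the attributes really are derivable, the trade-off is one parser versus several columns that must be kept in sync. A quick sketch of the parsing side in Python, using the SKU layout from the question (the exact format rules are assumed):

        import re

        # Layout from the question: motors, voltage, color, model, length, e.g. "4HBU12".
        SKU_PATTERN = re.compile(
            r"^(?P<motors>\d)(?P<voltage>[A-Z])(?P<color>[A-Z])(?P<model>[A-Z])(?P<length>\d+)$")

        def parse_sku(sku):
            m = SKU_PATTERN.match(sku)
            return m.groupdict() if m else None

        print(parse_sku("4HBU12"))
        # {'motors': '4', 'voltage': 'H', 'color': 'B', 'model': 'U', 'length': '12'}

    The usual argument for storing the attributes explicitly is that a parser like this silently breaks the day the SKU format changes.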

    Read the article

  • Google Closure Compiler - what does the name mean?

    - by mikez302
    I am curious about the Google Closure Compiler. Why did they name it that? Does it have anything to do with lexical closures? EDIT: I tried researching it in the FAQ and documentation, as well as doing Google searches such as "closure compiler name". I couldn't find anything definite, hence the reason I am asking. I don't think I will get a profoundly helpful answer but I was hoping that I could at least satisfy my curiosity. I am not trying to solve a specific problem. I am just curious.

    Read the article
