Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 124/605

  • Newbie worried about CASE tool.

    - by Jason Evans
    Hi there. I'm looking for some guidance on CASE tools and whether my concerns are valid. Recently I was in a meeting between my employer and an external software company which has a CASE tool currently in beta. They demonstrated the tool to us, showing how you build a UML model in Enterprise Architect (or something like it) and then, through their tool, that UML model is transformed into a Visual Studio project, with C# files, stored procedures for SQL Server, code for the data layer, WCF stuff, logging code and all sorts. Now, admittedly, I don't see the point of this: I'm not convinced it will save that much time, and it feels like overkill. The tool authors said that a trial at another company had saved a team there five weeks of development time (from six weeks down to about one week). I find that estimate hard to believe.

    My main concern is whether using this tool will slow down my productivity. For example, say I have a UML model from which I built a VS solution, and now I want to rename a class method. Will that mean updating the UML model first and then regenerating the code? Is this how CASE tools normally work?

    Something I will need to check with the authors is the structure of the generated VS solution. I like the Domain Driven Design way of structuring a project - Infrastructure, Services, Model, etc. - and I doubt very much this tool will do that. Also, I've been playing around with Entity Framework Code First and think it's a great way to build the data model. I have nice repositories, unit of work classes and other design patterns that work well with EF, and data annotations and the like working great. Since the CASE tool uses its own data layer code instead of EF, I'm concerned that its generated code might not integrate nicely with the UoW pattern, repositories, etc. This I will need to verify when I get a closer look at the generated code.

    What are other people's experiences with CASE tools? Am I being paranoid about nothing? Am I being unfair - are my misgivings unfounded?

    EDIT: I like to use TDD/BDD for building my code, and using a CASE tool looks like it will make this difficult. Again, any feedback on this would be great. Cheers. Jas.
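
    For reference, a minimal sketch (the interface names are illustrative, not from EF or the CASE tool) of the repository/unit-of-work shape the asker wants a generated data layer to plug into:

        using System;
        using System.Linq;

        // Illustrative only: a generated data layer has to hide behind
        // abstractions this thin for a DDD-style structure (Infrastructure,
        // Services, Model) to survive regeneration.
        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Remove(T entity);
            IQueryable<T> Query();
        }

        public interface IUnitOfWork : IDisposable
        {
            IRepository<T> Repository<T>() where T : class;
            int Commit();   // maps to a single SaveChanges-style call in EF
        }

    If the generated code cannot be wrapped this way, the TDD/BDD worry in the EDIT is probably justified, since there is no seam left to mock.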

    Read the article

  • How to introduce versioning for questions on Stack*? [closed]

    - by András Szepesházi
    Today's best answer to any given question was not available yesterday and will be obsolete tomorrow - especially when we're talking about software development. Here is an example (there must be thousands; this one is entirely imaginary):

    Q: What is the best way to implement autocomplete in javascript?
    A: (2000) Whut?
    A: (2007) Write a custom ajax function, display the results after processing
    A: (2011) Use this plugin: http://jqueryui.com/demos/autocomplete/ (no, I'm not a jQuery affiliate; actually I prefer MooTools)

    What would be your recommendation for introducing versioning for Stack Exchange questions and answers? Is there a need for that at all?

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as in "orthogonal instruction set".

    Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system.

    C. Ross has defined software orthogonality as: the property that means "changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice versa.

    Now there is a hypothesis published in ACM Queue by Tim Bray - which some have called the Bánffy-Bray type system criteria - that he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability.

    Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing.

    My question is: what is the evidence that an API has exceeded its orthogonality in the context of types?

    Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API - the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, all 'different' from the web server API he was working on previously. Because you're not just dealing with strings but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective preference is actually objective depending on your context).

    Read the article

  • Good approach for hundreds of consumers and big files

    - by ????? ???????
    I have several files (nearly 1GB each) with data. The data consists of string lines. I need to process each of these files with several hundred consumers. Each consumer does some processing that differs from the others. The consumers do not write anywhere concurrently; they only need the input string, and after processing they update their local buffers. Consumers can easily be executed in parallel.

    Important: with one specific file, each consumer has to process all lines (without skipping) in the correct order (as they appear in the file). The order of processing different files doesn't matter. Processing of a single line by one consumer is comparably fast - I expect less than 50 microseconds on a Core i5.

    So now I'm looking for a good approach to this problem. This is going to be part of a .NET project, so please let's stick with .NET only (C# is preferable). I know about TPL and TPL Dataflow. I guess the most relevant block would be BroadcastBlock, but I think the problem there is that with each line I'll have to wait for all consumers to finish in order to post a new one, which would not be very efficient.

    I think the ideal situation would be something like this:

    - One thread reads from the file and writes to a buffer.
    - Each consumer, when it is ready, reads the line from the buffer concurrently and processes it.
    - An entry shouldn't be deleted from the buffer as one consumer reads it; it can be deleted only when all consumers have processed it.
    - TPL schedules the consumer threads itself.
    - If one consumer outperforms the others, it shouldn't wait and can read more recent entries from the buffer.

    Am I right with this kind of approach? Either way, how can I implement a good solution? A bit of this was already discussed on StackOverflow: link
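
    A minimal sketch of the wish-list above using TPL Dataflow (names are illustrative; one ActionBlock per consumer stands in for the literal shared buffer). Every consumer sees every line in file order, a fast consumer can run ahead of a slow one, and the reader blocks only while some consumer's queue is full:

        using System;
        using System.IO;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Threading.Tasks.Dataflow;   // NuGet package: System.Threading.Tasks.Dataflow

        static class FanOutPipeline
        {
            public static async Task ProcessFileAsync(string path, Action<string>[] consumers)
            {
                // One bounded queue per consumer. MaxDegreeOfParallelism defaults
                // to 1, so each consumer handles its lines strictly in file order.
                var workers = consumers
                    .Select(consume => new ActionBlock<string>(consume,
                        new ExecutionDataflowBlockOptions { BoundedCapacity = 10000 }))
                    .ToArray();

                foreach (var line in File.ReadLines(path))
                    foreach (var worker in workers)
                        await worker.SendAsync(line);   // awaits only while this queue is full

                foreach (var worker in workers)
                    worker.Complete();
                await Task.WhenAll(workers.Select(w => w.Completion));
            }
        }

    Each queue holds references to the same string instances, so memory stays bounded: the reader stalls on the fullest queue instead of loading the whole file.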

    Read the article

  • How do you safely work around broken code? [closed]

    - by burnt1ce
    Once in a while, a co-worker will check in bad code which blocks my application from initializing properly. To get around this problem quickly, I comment out the offending code - and then I forget to uncomment it at the time of my own check-in. I want to prevent this from happening again. Do you have any suggestions on how to:

    - disable bad code that stops you from working, and
    - prevent yourself from checking in unwanted changes?

    I'm currently using Visual Studio as my IDE and TFS as my source code repo.
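
    One low-tech sketch for the first point, assuming C# (TelemetryService is a made-up stand-in for the offending call): fence the broken code off behind a compilation symbol with a #warning, so every local build nags you and the check-in diff shows exactly one line to revert:

        #define SKIP_BROKEN_TELEMETRY_INIT   // delete this line before check-in

        public static class Startup
        {
            public static void Initialize()
            {
        #if SKIP_BROKEN_TELEMETRY_INIT
        #warning Broken init is disabled locally -- re-enable before check-in!
        #else
                TelemetryService.Initialize();   // the checked-in code that currently throws
        #endif
            }
        }

    For the second point, a TFS shelveset lets you park the workaround server-side without checking it in, and a custom check-in policy (or a gated build) can reject changesets that still contain the symbol.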

    Read the article

  • Bad programming style - am I expecting too much?

    - by Luca
    I've realized I work in an office with quite a bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. The projects developed in the office are very large. Fine. I could call myself a "perfectionist" (though often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I truly realized that it's not possible to refactor that application's modules in a reasonable time (say, 24-26 hours out of the 160 hours available for the task).

    I'm talking about (I am a bit ashamed to say it) name collisions; large and frequent cut-and-paste code; horrible and misleading naming; makefiles without dependencies (!); application logic spread randomly across many different source files; dead code; variable aliasing; no assertions; no documentation; very long source files; bad or incomplete include-file definitions; (this is emblematic!) very frequent extern declarations of variables and functions; ... and I could go on ... buffer overflows because of sprintf; indentation (!); spacing; nonexistent use of the const modifier. I would say that every source line was written quite randomly as needed, without keeping any design in mind (at least, not an obvious one). (Am I in hell?)

    The problem arises because the application was developed by a colleague of mine. I felt very frustrated, so I decided to raise the "situation" with him; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant design."

    Am I asking too much of my colleague in wanting readable, maintainable, documented code? Am I expecting too much in not wanting to read thousands of lines of code to understand one particular piece of logic?

    Read the article

  • Moving Forward with Code Iteration

    - by rcapote
    There are times, when working on my programming projects, when I get to a point where I'm ready to move on to the next part of my program. However, when I sit down to implement the new feature, I get stuck, in a sense. It's not that I don't know how to implement the feature; it's that I get stuck figuring out the best way to implement it. So I sit back for a day or two and let the ideas ferment until I am comfortable with a design. I get worried that I may not write something as well as it could be written, or that I might have to go back and rework the whole thing, so I put it off. This is a big reason why I've never really finished many personal projects. Does anyone else experience this, and how do you keep yourself moving forward in your project?

    Read the article

  • Why (not) logic programming?

    - by Anto
    I have not yet heard about any use of a logic programming language (such as Prolog) in the software industry, nor do I know of its use in hobby programming or open source projects. It (Prolog) is used as an academic language to some extent, though (why is it used in academia?). This makes me wonder: why should you use logic programming, and why not? Why is it not getting any detectable industry usage?

    Read the article

  • Architecture for interfacing multiple applications

    - by Erwin
    Let's say you have a Master Database and a few external/internal applications that use web services to exchange data. What would be your preferred architecture for moving data to and from those applications? Would you put some sort of Enterprise Service Bus in between, like BizTalk, or something cheaper? We don't want to block applications while they are exchanging data, but we do want to use return codes from the interfaces to determine whether we need to take action in the originating application.

    Read the article

  • Pros and cons of using Grails compared to pure Groovy

    - by shabunc
    Say you (by "you" I mean an abstract person - anyone on your team) have experience writing and building Java web apps, and know about filters, servlet mappings and so on. Also, let's assume you know any SQL database pretty well, no matter which one exactly - MySQL, Oracle or PostgreSQL. Finally, let's say we know Groovy and its standard libraries, for example all that JsonBuilder and XmlSlurper stuff, so we don't need Grails converters. The question is: what are the benefits of using Grails in this case? I'm not trying to start a flame war; I'm just asking for a comparison - the ups and downs of Grails development compared to pure Groovy. For instance, off the top of my head I can name two pluses: automatic DB mapping and custom GSP tags. But when I want to write a modest app that provides a small API for handling some well-defined set of data, I'm totally OK with Groovy's awesome SQL support. As for GSP, we don't use it at all, so custom tags don't interest us either.

    Read the article

  • Multithreaded UI desktop application issues

    - by igor
    I am involved in developing a rich-UI project: a desktop Windows application. The application uses asynchronous invocations and, in turn, must be ready to process external messages (events). The problem is clear: it was first built as a simple prototype, it was not stress-tested, and all was fine. Then the application grew: the number of calls to the server and the number of events from the server became high, and performance became low. What's more, users noticed that performance is sometimes extremely low. The asynchronous invocations are based on the thread pool (BeginInvoke, EndInvoke); the external events come from a WCF service (.NET 3.5). My goal is to synchronize all tasks and assign a priority to everything the desktop application executes. My question is: is there any established practice for reaching this goal - patterns, a task priority list, anything else? What should I do first, second and after that? Thanks
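
    One common shape for this (a sketch under stated assumptions - PrioritizedWorkQueue is not a framework class, and nothing here needs more than .NET 3.5): funnel all background work through a single dispatcher with priority lanes, so user-triggered server calls are never starved by the flood of incoming WCF events:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        sealed class PrioritizedWorkQueue
        {
            private readonly Queue<Action> _high = new Queue<Action>();  // user-triggered calls
            private readonly Queue<Action> _low = new Queue<Action>();   // incoming server events
            private readonly object _gate = new object();
            private bool _shutdown;

            public PrioritizedWorkQueue()
            {
                new Thread(Loop) { IsBackground = true }.Start();
            }

            public void Enqueue(Action job, bool highPriority)
            {
                lock (_gate)
                {
                    (highPriority ? _high : _low).Enqueue(job);
                    Monitor.Pulse(_gate);
                }
            }

            public void Shutdown()
            {
                lock (_gate) { _shutdown = true; Monitor.Pulse(_gate); }
            }

            private void Loop()
            {
                while (true)
                {
                    Action job;
                    lock (_gate)
                    {
                        while (_high.Count == 0 && _low.Count == 0)
                        {
                            if (_shutdown) return;
                            Monitor.Wait(_gate);
                        }
                        job = _high.Count > 0 ? _high.Dequeue() : _low.Dequeue();  // high lane drains first
                    }
                    job();   // run outside the lock
                }
            }
        }

    Completed jobs then marshal their results back to the UI thread via Control.BeginInvoke or SynchronizationContext.Post, so the UI thread itself never waits on the server.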

    Read the article

  • Social Analytics in your current data

    - by Dan McGrath
    By now everyone is aware of the massive boom in social networking (Twitter, Facebook, LinkedIn), and obviously a big part of the business model revolves around mining that data to create information someone can make money from. Gartner has identified 'Social Analytics' as one of the top 10 strategic technologies for 2011. Has anyone looked at their existing data structures to determine whether they could extract a social graph and then perform further data mining against it? How does it fit in with your other strategic development plans? What information are you trying to extract from the data?

    Take a bank, for example. It could conceivably determine a social graph through account relationships and transactions. Obviously there would be open edges on the graph where funds enter or leave the institution, but that shouldn't detract from the usefulness of the data.

    I'm looking for actual examples in the answers, as well as why and how they were done. References to other sites will be greatly appreciated. Note: I'm not at all referring to mining data out of actual social networks.
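
    As a sketch of the bank example (the Transaction record and its fields are hypothetical), extracting a weighted social graph from transaction data is essentially one edge-aggregation pass:

        using System.Collections.Generic;

        // Hypothetical source record.
        record Transaction(string FromAccount, string ToAccount, decimal Amount);

        static class SocialGraph
        {
            // Aggregate transactions into weighted, directed edges between accounts.
            public static Dictionary<(string From, string To), decimal> Build(
                IEnumerable<Transaction> transactions)
            {
                var edges = new Dictionary<(string, string), decimal>();
                foreach (var t in transactions)
                {
                    var key = (t.FromAccount, t.ToAccount);
                    edges[key] = edges.TryGetValue(key, out var total)
                        ? total + t.Amount
                        : t.Amount;
                }
                return edges;   // "open edges" (external counterparties) appear as ordinary nodes
            }
        }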

    Read the article

  • Python interview questions

    - by Andy
    I am going to interview in two weeks for an internship that would involve Python programming. Can anyone suggest which areas I should polish? I am looking for things commonly asked in interviews for Python openings. Beyond the fact that I have already been using the language for over a year, I fail to see what they could ask me. For a C or C++ interview there are lots of questions, ranging from reversing strings to building linked lists, but for a Python interview I am clueless. Personal experiences and/or suggestions are welcome.

    Read the article

  • Backend devs put down by user stories

    - by Szili
    I planned to slice backend development vertically into the user stories, but a backend guy on our team started to complain that this makes his work invisible. My answer was that we discuss backend tasks in front of stakeholders at the sprint planning and review meetings, which keeps the work visible, and that maintaining high quality throughout the project means a slower starting pace than other teams but a stable velocity for the rest of the project - and velocity is highly visible to stakeholders. He still insists on having stories like: "As a developer I need to have a domain layer so I can encapsulate business logic." How can I resolve this issue before it pollutes the team? The root of the problem is that our management systematically treats backend work as invisible and calls backend devs "miners", among other pejorative terms.

    Read the article

  • What kind of specific projects can I do to master bitwise operations in C++? Also is there a canonical book? [closed]

    - by Ford
    I don't use C++ or bitwise operations at my current job, but I'm thinking of applying to companies where fluency with them is a requirement (on their tests, anyway). So my question is: can anyone suggest a project that requires gaining fluency in bitwise operations to complete? On a side note, is there a canonical book on optimization techniques using bitwise operations, since that seems to be an important use of them?
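
    For a sense of the idioms such tests tend to cover, here is a short catalogue (written in C# here, but the operators are identical in C++; the standard book on this material is Hacker's Delight):

        static class BitTricks
        {
            // x is a power of two iff exactly one bit is set.
            public static bool IsPowerOfTwo(uint x) => x != 0 && (x & (x - 1)) == 0;

            public static uint SetBit(uint x, int n)    => x | (1u << n);
            public static uint ClearBit(uint x, int n)  => x & ~(1u << n);
            public static uint ToggleBit(uint x, int n) => x ^ (1u << n);
            public static bool TestBit(uint x, int n)   => (x & (1u << n)) != 0;

            // Kernighan's trick: each iteration clears the lowest set bit.
            public static int PopCount(uint x)
            {
                var count = 0;
                for (; x != 0; x &= x - 1) count++;
                return count;
            }

            // Smallest power of two >= x (returns 0 for x == 0 and on overflow).
            public static uint NextPowerOfTwo(uint x)
            {
                x--;
                x |= x >> 1; x |= x >> 2; x |= x >> 4; x |= x >> 8; x |= x >> 16;
                return x + 1;
            }
        }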

    Read the article

  • Can I remove all-caps and shorten the disclaimer on my License?

    - by stefano palazzo
    I am using the MIT License for a particular piece of code. Now, this license has a big disclaimer in all caps: THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF... ... I've seen a normally capitalised disclaimer in the zlib license (notice that it sits above the license text), and even software with no disclaimer at all (which implies, I take it, that there is indeed a guarantee?), but I'd like some sourced advice from a trusted party; I just haven't found any. GNU's license notice for other files comes with this disclaimer: "This file is offered as-is, without any warranty." Short and simple.

    My question, therefore: are there any trusted sources indicating that a short rather than long, and normally spelled rather than capitalised, disclaimer (or even just one or the other) is safely usable in all the jurisdictions I should be concerned with? If the answer turns out to be yes: why not simply use the short license notice the FSF proposes for readme files and short help documents instead of the MIT License? Is there any evidence suggesting this short 'license' would not hold up? For the purposes of this question, the software is released in the European Union, should that make any difference.

    Read the article

  • Outdoor Programming Jobs...

    - by Rodrick Chapman
    Are there any kinds of jobs that require programming (or at least programming competency) but take place outdoors for a significant portion of the time? As long as I'm fantasizing, an ideal job would involve programming in a high-level language like Haskell, F#, or Scala* for, say, 50% of the time, and doing something like digging an irrigation trench the rest of the time. My background: I triple-majored in mathematics, philosophy, and history (BS/BA) and have been working as a web developer for the past six years. I love hacking but I'm feeling a bit burned out. *I only chose these languages as examples since, ideally, I'd want to work among high-caliber people... but it really doesn't matter.

    Read the article

  • Correct term for PSD to HTML to CMS

    - by John Magnolia
    Hi, I have heard a lot of different terms used to describe the process of turning a website design into an editable CMS. Currently I take the design and "slice" it up into HTML and CSS, then "plug" it into a CMS. I would class this as front-end development, depending on the level of customisation required for the CMS. The reason I ask is that I am currently writing up my CV and have become stuck on the correct term for this. Kind regards

    Read the article

  • Avoid GPL violation by moving library out of process

    - by Andrey
    Assume there is a library licensed under the GPL, and I want to use it in a closed-source project. I would do the following:

    1. Create a small wrapper application around the GPL library that listens on a socket, parses messages and calls the GPL library, then returns the results.
    2. Release its sources (to comply with the GPL).
    3. Create a client for this wrapper in my main application and don't release its sources.

    I know that this adds huge overhead compared to static/dynamic linking, but I am interested in the theoretical side.
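
    A minimal sketch of the wrapper in step 1 (GplLibrary.Transform stands in for whatever the real GPL library exposes; the wire protocol is simply one request line, one response line):

        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        // The GPL side of the fence: this whole process ships with its source.
        class GplWrapperDaemon
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, 5500);
                listener.Start();
                while (true)
                {
                    using (var client = listener.AcceptTcpClient())
                    using (var reader = new StreamReader(client.GetStream()))
                    using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
                    {
                        string request;
                        while ((request = reader.ReadLine()) != null)
                            writer.WriteLine(GplLibrary.Transform(request));  // the GPL call
                    }
                }
            }
        }

    Whether process separation actually escapes the GPL is contested - the FSF's published position turns on how intimately the two programs communicate - so treat this as the mechanism the question describes, not as a settled loophole.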

    Read the article

  • Who is Configuration Manager?

    - by altern
    I would like to ask members of the community about the role of Configuration Manager, as you see it. I'm not asking what Configuration Management is, as that has been asked before. What I need to know is:

    - What tasks do you think a Configuration Manager should perform (or performs) in your team?
    - What is the primary responsibility of a Configuration Manager?
    - What are the secondary/auxiliary responsibilities of a Configuration Manager?
    - Does a Configuration Manager need to be in charge of the development processes of the project/company, or should he be told what to do?
    - What are the relations between the Configuration Manager, Build Manager, Release Manager, Deployment Engineer and CI Engineer roles? Aren't they all the same - Configuration Management? Maybe the term Configuration Management is redundant and a Technical/Team Lead should do all the related work instead?

    It would be really great if you could share your vision and experience.

    Read the article

  • How often to authenticate iOS app in web service

    - by jeraldov
    I am trying to build an iOS app that connects to a PHP+MySQL web service. My question is how often I should check the user's authentication when getting data from the web service. My app requires a login at start-up, but I am wondering how often I should check that the user can still validly get data from the service. Should I check the username and password every time the user views a table view that gets its data from the web service?

    Read the article

  • Resources for Test Driven Development in Web Applications?

    - by HorusKol
    I would like to try to implement some TDD in our web applications to reduce regressions and improve release quality, but I'm not convinced of how well automated testing can perform with something as fluffy as web applications. I've read about and tried TDD and unit testing, but the examples are 'solid' and rather simple pieces of functionality, like currency converters and so on. Are there any resources that can help with unit testing content management and publication systems? How about unit testing a shopping cart/store (physical and online products)? AJAX? Googling for "Web Test Driven Development" just gets me old articles from several years ago, either covering the same calculator-like examples or discussing why TDD is better than anything else (without any examples).
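
    For the shopping-cart case, the unit-testable core looks no different from the currency-converter examples once the domain rules are separated from the page. A sketch with NUnit (ShoppingCart and Product are hypothetical domain classes, not from any framework):

        using NUnit.Framework;

        [TestFixture]
        public class ShoppingCartTests
        {
            [Test]
            public void Total_Is_Sum_Of_Line_Prices()
            {
                var cart = new ShoppingCart();
                cart.Add(new Product("book", 12.50m), quantity: 2);
                cart.Add(new Product("pen", 1.20m), quantity: 1);

                Assert.AreEqual(26.20m, cart.Total);
            }

            [Test]
            public void Adding_The_Same_Product_Twice_Merges_Lines()
            {
                var cart = new ShoppingCart();
                cart.Add(new Product("book", 12.50m), quantity: 1);
                cart.Add(new Product("book", 12.50m), quantity: 1);

                Assert.AreEqual(1, cart.LineCount);
            }
        }

    The genuinely fluffy parts - rendered HTML, AJAX endpoints - are then exercised separately with browser-driving tools such as Selenium, leaving the domain rules to fast unit tests.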

    Read the article

  • Submitting Java Code with Junit unit test

    - by LivingThing
    I have mostly worked on simple Java programs, compiled and run with Eclipse on Windows. So I have no experience of using the command prompt to compile Java projects, and I don't have much idea of what actually happens beneath the play button in Eclipse. Now I have to submit a Java application that performs basic operations on XML. My project will also have (JUnit) unit tests. My question relates to the submission of this project: which files are necessary to submit so that the code executes properly? Does choosing Eclipse as an IDE, or JUnit as a unit-testing framework, produce any dependencies - i.e. would the person executing the program need Eclipse or extra libraries to run it on his machine?

    Read the article

  • Defensive Programming Techniques.

    - by Pemdas
    I was attempting to identify an element of software engineering that I think is overlooked, not emphasized, or not taught in typical undergraduate coursework for CS or SE. What I came up with is the concept of defensive programming. I would like to hear the community's opinions on defensive programming and/or specific techniques that you use on a regular basis. Also, I would like to know if there are any language-specific techniques.
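
    As one concrete illustration of the concept (a sketch; Account and AccountService are made-up names): guard clauses at public boundaries plus debug-only assertions on internal invariants are probably the two most commonly taught defensive techniques.

        using System;
        using System.Diagnostics;

        class Account { public decimal Balance { get; set; } }

        static class AccountService
        {
            // Guard clauses: validate inputs at the boundary and fail fast with
            // a precise error instead of corrupting state further in.
            public static void Withdraw(Account account, decimal amount)
            {
                if (account == null) throw new ArgumentNullException(nameof(account));
                if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount), "Amount must be positive.");
                if (amount > account.Balance) throw new InvalidOperationException("Insufficient funds.");

                account.Balance -= amount;

                // Internal invariant, checked only in debug builds.
                Debug.Assert(account.Balance >= 0, "Balance must never go negative.");
            }
        }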

    Read the article

  • Why are some seasoned ASP.NET developers defecting to Ruby on Rails?

    - by Tony_Henrich
    Once in a while I hear of some well-known ASP.NET developer declaring that they have quit developing in .NET and moved to Ruby on Rails. The problem is they never say exactly why; they use words like RoR is 'easier', 'better' and 'faster', which really doesn't tell me much. Would anyone care to make a faithful comparison, using code samples, case studies, etc., or from personal experience of using both? Try to convince me to throw away all my years of learning C# and the .NET Framework, with a powerful IDE (Visual Studio). Does RoR save you hours a week in development time? What are the major pain points in .NET that compel someone to move away from it? This question is NOT about a pure RoR vs ASP.NET (MVC) comparison. It's about the compelling technical reasons (getting bored does not count!) to switch over after using a platform for several years and start with a new language and platform. (I'd prefer this to be a wiki.)

    Read the article
