Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring our driver management. In my multi-tier architecture I have:

        DAL.Device          // base class, my entity
        BL.IDriver          // handles the data processing between application and device
        BL.IDriverCreator   // creates an IDriver from a Device
        BL.IDriverFactory   // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver requires (a) changing code within the DriverFactory and (b) referencing the new IDriver implementation/assembly. From the customers' point of view this means that every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy):

        BL.RestDriver
        BL.RestDriverCreator
        DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
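
    For illustration, a minimal sketch of such a convention-based lookup might look like the following C#. The interface shapes and names here are assumptions taken from the question, not the real code base:

        using System;
        using System.Linq;
        using System.Reflection;

        public interface IDriver { }
        public abstract class Device { }
        public interface IDriverCreator { IDriver Create(Device device); }

        public static class ConventionDriverFactory
        {
            public static IDriver CreateDriver(Device device, Assembly[] driverAssemblies)
            {
                // "RestDevice" -> "Rest" -> "RestDriverCreator"
                string prefix = device.GetType().Name.Replace("Device", "");
                string creatorName = prefix + "DriverCreator";

                // Scan the reflectively loaded driver assemblies for a matching creator.
                Type creatorType = driverAssemblies
                    .SelectMany(a => a.GetTypes())
                    .FirstOrDefault(t => t.Name == creatorName
                                         && typeof(IDriverCreator).IsAssignableFrom(t)
                                         && !t.IsAbstract);

                if (creatorType == null)
                    throw new InvalidOperationException(
                        "No driver creator found for " + device.GetType().Name);

                var creator = (IDriverCreator)Activator.CreateInstance(creatorType);
                return creator.Create(device);
            }
        }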

  • How can I justify software testing to management?

    - by Nate
    I work for a small company (less than 200 employees) whose software group only makes up a small part of our staff (4 employees, occasionally with a few contractors). The four of us have been making strides in transitioning to better practices, and one of the next logical steps is to improve our testing. As anyone who has done any meaningful tests knows, testing takes a lot of time - and at my company, it takes too much time to justify to management, so we generally do what little we do on the sly. I don't think this is serving us well, as we keep coming up against otherwise avoidable problems when we ship under-tested software. I would like to be able to come to management with a justification for hiring a dedicated software test engineer (someone who can both write automated tests and perform manual ones). Are there any good published studies that show the benefits of adding such a position to a small company? Where can I find information about costs associated with the position? I plan on doing a little number crunching on our own history, but having some external sources to point to would help bolster my case.

  • Algorithm to use for shop floor layout?

    - by jkohlhepp
    I ran into a classroom problem yesterday (business-oriented class, not computer science) and found it interesting from an algorithmic perspective. The problem goes something like this: assume there is a shop floor with N different rooms, and you have N different departments that need to go in those rooms. The departments and the rooms are all the same size, so any department can go in any room. There is a known travel distance from each room to each other room. There is also a known number of trips necessary from one department to another (trips are counted the same regardless of which room they originate from, so a trip from A to B is equivalent to a trip from B to A). Given those inputs, determine a layout of departments into rooms which minimizes travel time. What is the best way to approach this problem algorithmically? Is there already a particular algorithm or class of algorithms designed to solve this type of problem? Does this type of problem have a name in computer science? I am not looking for you to design an algorithm to solve this, although feel free to do so if you would like. I'm wondering if this is a problem space that has already been well defined and studied algorithmically, and if so, I'd like some links to research further. I can see a lot of different data structures and algorithms that might apply, and I'm curious which approach would be "best". And don't worry, you are not doing my homework for me. This is not a homework problem per se; this is a business course and we were simply discussing the concepts, not trying to solve the problem algorithmically.
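
    To make the objective concrete, here is a small brute-force sketch (in C#, with made-up matrices) of the cost being minimized: the sum, over department pairs, of trips multiplied by the distance between their assigned rooms. The quadratic shape of this cost is the hallmark of the quadratic assignment problem, the usual name for this formulation:

        using System;

        class ShopFloorLayout
        {
            const int N = 4;
            // trips[a, b]: trips between departments a and b (symmetric, example data)
            static readonly int[,] trips = { {0,5,2,0}, {5,0,3,1}, {2,3,0,4}, {0,1,4,0} };
            // dist[r, s]: travel distance between rooms r and s (example data)
            static readonly int[,] dist  = { {0,1,2,3}, {1,0,1,2}, {2,1,0,1}, {3,2,1,0} };

            static int best = int.MaxValue;

            // room[d] = the room assigned to department d
            static int Cost(int[] room)
            {
                int total = 0;
                for (int a = 0; a < N; a++)
                    for (int b = a + 1; b < N; b++)
                        total += trips[a, b] * dist[room[a], room[b]];
                return total;
            }

            // Enumerate all N! assignments -- only feasible for small N,
            // which is exactly why heuristics exist for this problem.
            static void Permute(int[] room, bool[] used, int d)
            {
                if (d == N) { best = Math.Min(best, Cost(room)); return; }
                for (int r = 0; r < N; r++)
                {
                    if (used[r]) continue;
                    used[r] = true; room[d] = r;
                    Permute(room, used, d + 1);
                    used[r] = false;
                }
            }

            static void Main()
            {
                Permute(new int[N], new bool[N], 0);
                Console.WriteLine("Minimum total travel: " + best);
            }
        }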

  • Unit/integration testing a web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work this library performs is to marshal calls to the web service and do a little reorganizing of the responses, to present a slightly more object-oriented interface to the underlying service. Since the library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service; the work performed by the code is very light, almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example - one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->DeleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However, this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.

  • C# String.Format extension method

    - by Paul Roe
    With the addition of extension methods to C#, we've seen a lot of them crop up in our group. One debate revolves around extension methods like this one:

        public static class StringExt
        {
            /// <summary>
            /// Shortcut for string.Format.
            /// </summary>
            /// <param name="str"></param>
            /// <param name="args"></param>
            /// <returns></returns>
            public static string Format(this string str, params object[] args)
            {
                if (str == null)
                    return null;

                return string.Format(str, args);
            }
        }

    Does this extension method break any programming best practices that you can name? Would you use it anyway, and if not, why? If I renamed the function to "F" but left the XML comments, would that be an epic fail or just a wonderful savings of keystrokes?
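
    For reference, a call site with the extension in scope reads left to right (values invented):

        // prints "Hello, Paul! You have 3 new messages."
        Console.WriteLine("Hello, {0}! You have {1} new messages.".Format("Paul", 3));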

  • What is Mozilla's new release management strategy?

    - by RonK
    I saw today that Firefox released a new version (5). I tried reading about what was added and ran into this link: http://arstechnica.com/open-source/news/2011/06/firefox-5-released-arrives-only-three-months-after-firefox-4.ars It states that:

        Mozilla has launched Firefox 5, a new version of the popular open source Web browser. This is the first update that Mozilla has issued since adopting a new release management strategy that has drastically shortened the Firefox development cycle.

    I find this very intriguing - any idea what this new strategy is?

  • How is the RIP loaded when an interrupt arrives in an IA-32e 64-bit IDT Gate Descriptor?

    - by Vern
    I need some help with programming an IA-32e interrupt descriptor, as I'm pretty new to it. I don't think I quite understand how the RIP is loaded when an interrupt arrives. There is a segment selector in Intel's 64-bit IDT gate descriptor. However, from my reading across the five-part Intel manuals, the linear address of the interrupt handler is loaded into RIP from the 64-bit offset specified in the IDT gate descriptor. The only uses of the segment selector seem to be to check:

        1. whether there is a change in privilege levels
        2. that the interrupt handler is truly pointing to a code segment

    My questions are:

        1. Is RIP taken from the 64-bit offset only? Or is RIP = offset (sign-extended to 64 bits) + segment selector base?
        2. Is the base address pointed to by the segment selector in the IDT gate descriptor ignored? Or does it have a use?
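
    As a reading aid, here is the 16-byte gate-descriptor layout the question refers to, transcribed as a struct. The field names are mine; the widths follow the Intel SDM's figure for the IA-32e 64-bit IDT gate descriptor:

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct IdtGateDescriptor64            // 16 bytes in total
        {
            public ushort OffsetLow;          // bits 15:0 of the handler offset
            public ushort SegmentSelector;    // target code-segment selector
            public byte   Ist;                // bits 2:0: interrupt stack table index
            public byte   TypeAttributes;     // gate type, DPL, present bit
            public ushort OffsetMiddle;       // bits 31:16 of the handler offset
            public uint   OffsetHigh;         // bits 63:32 of the handler offset
            public uint   Reserved;
        }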

  • What is the best way to design a table with an arbitrary id?

    - by P.Brian.Mackey
    I need to create a table with a unique ID as the primary key. The ID is a surrogate key. Originally I had a natural key, but requirement changes have undermined that idea. Then I considered adding an auto-incrementing identity, but this presents problems:

        A. I can't specify my own ID.
        B. The IDs are difficult to reset.

    Both of these together make it difficult to overwrite this table with new data or to move the table across domains, e.g. Dev to QA. I need to refer to these IDs from the front end (JavaScript...), so they must not change. The only way I am aware of to meet all these challenges is to make the ID a GUID. This way I can overwrite the IDs when I need to, or I can generate a new one without concern for order (an int-based ID would require knowing the last inserted ID). Is a GUID the best way to accomplish my goals? Considering that a GUID is a 16-byte value (or worse, a string, if stored as one) and joining on it is more expensive than joining on an int, is there a better way?
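
    A small sketch of the property that makes a GUID attractive here: the application can mint the key before any database round trip, so the same value can be carried across Dev and QA (the entity name is invented):

        using System;

        class Record   // hypothetical entity
        {
            // Assigned app-side rather than by the database, so the caller controls
            // the value and can reuse it when copying data between environments.
            public Guid Id { get; set; } = Guid.NewGuid();
            public string Name { get; set; }
        }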

  • Developing for the Windows CE platform?

    - by grmbl
    I'm looking into creating some applications for workers to use on the work floor. They'll be using Psion NEO devices running Windows CE 5.0. My skill set covers C#, PHP and ASP.NET (+ web services). Application requirements:

        - should connect to our ERP system running on IBM iSeries (AS400)
        - should run in fullscreen (effectively hiding the OS)
        - usability: touch functionality

    I have tried the following.

    Full WinForms application run through an RDP session:

        [+] easy deployment using a .rdp file
        [+] application can be run on a desktop environment too
        [+] RDP host can easily access DB2 using IBM drivers
        [+] GUI works OK on the small screen
        [-] environment = terminal server (which is already under heavy use)

    Full WinForms application running on the device OS:

        [+] environment = local
        [+] responsive
        [-] must use a web service to access DB2
        [-] deployment...
        [-] fixed platform (no desktop)

    Console application running on the device OS:

        [+] environment = local
        [+] very responsive
        [-] must use a web service to access DB2
        [-] no fullscreen or other window options?
        [-] deployment...
        [-] fixed platform (no desktop)

    I'm considering creating a web application, but it seems the OS comes with IE 5, and I don't want to alter the OS in any way (install other browsers, etc.). I would like an application that's responsive, easy to deploy, fullscreen and optionally multi-platform. I have seen handheld devices using terminal (emulation?) with a console-like interface. This seems to be native to the device, but I'm afraid it requires modest knowledge of C++. It seems that using RDP is the way to go, but I came here for advice and am looking for people who have been in the same situation and are willing to share their experience. There don't seem to be many "best practices" on the web that could help me decide the best way of working. Greetings

  • Why use link classes in OQL instead of classes that contain links?

    - by Isaac
    itop abstracts its very complex database design with an object query language (OQL). For this there are classes defined, like 'Ticket' and 'Server'. Now, a Ticket is usually linked to a Server. In my naive way I would give the Ticket class an attribute 'affected_server_list', where I could reference the affected servers. itop does it differently: neither Servers nor Tickets know of each other. Instead there is a class 'linkTicketToServer', which provides the link between the two. The first thing I noticed is that it makes OQL queries more complex, so I wondered why they designed it this way. One thing that occurred to me is that it allows for more flexibility, in that I can add links without modifying the original classes. Is that already why one would implement it this way, or are there other reasons for this kind of design?
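
    To make the two designs concrete, here is a small sketch in C# (the names mirror the question; this is not itop's actual API):

        using System.Collections.Generic;

        class Server { }

        // (a) Direct reference: Ticket knows about Server.
        class Ticket
        {
            public List<Server> AffectedServers { get; } = new List<Server>();
        }

        // (b) Association class: neither side knows the other; the link itself
        // is a first-class object and can carry its own data.
        class LinkTicketToServer
        {
            public Ticket Ticket { get; set; }
            public Server Server { get; set; }
            public string ImpactDescription { get; set; } // attribute of the relationship
        }

    One consequence of (b) is visible immediately: relationship-specific data and new kinds of links can be added without touching Ticket or Server at all.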

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern Web frameworks that claim to be MVC ones also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is kind of appealing, but I also understand that, as usual, it comes with a price. In particular:

        1. Live binding is quite heavy: either dirty-watching (high CPU consumption) or Object.observe() (high memory consumption, with high CPU load in some scenarios).
        2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for.

    The quite usual time split - 90% of features are made in 10% of project time, but the remaining 10% take 90% of project time - applies here too. I suspect (a.k.a. educated guess) that those MVC things are not helping to implement more functionality in less time... If so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual-list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself... So my question is: what benefits does that MVC stuff, or data binding, give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, what exactly?
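
    Stripped of any particular framework, the "live link" being discussed reduces to a change-notification contract. Here is a minimal C# analogue - this hand-written plumbing is what dirty-checking or Object.observe() automates in the JavaScript frameworks named above:

        using System.ComponentModel;

        class PersonModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            private string _name;
            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;   // skip no-op writes
                    _name = value;
                    // Whoever renders the model subscribes to this event.
                    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
                }
            }
        }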

  • Notifying a separate application of an event

    - by TomDestry
    I have an application that runs through various tasks as an automated process. My client would like me to create a file in a given folder for each task as a way to flag when each task completes. They prefer this to a database flag because they can be notified by the file system rather than continually polling a database table. I can do this but creating and deleting files as flags feels clunky. Is there a more elegant approach to notifying a third-party of an event?
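
    Since the client specifically wants to be "notified by the file system rather than continually polling", the consumer side of the file-flag scheme is at least cheap to build - for example, with .NET's FileSystemWatcher (folder and filter invented):

        using System;
        using System.IO;

        class TaskCompletionListener
        {
            static void Main()
            {
                // Raise an event whenever a new flag file appears in the folder.
                var watcher = new FileSystemWatcher(@"C:\flags", "*.done");
                watcher.Created += (s, e) =>
                    Console.WriteLine("Task finished: " +
                        Path.GetFileNameWithoutExtension(e.Name));
                watcher.EnableRaisingEvents = true;

                Console.WriteLine("Watching for task flags; press Enter to quit.");
                Console.ReadLine();
            }
        }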

  • Where does the term "Front End" come from?

    - by Richard JP Le Guen
    Where does the term "front-end" come from? Is there a particular presentation/talk/job-posting which is regarded as the first use of the term? Is someone credited with coining the term? The Merriam-Webster entry for "front-end" claims the first known use of the term was 1973 but it doesn't seem to provide details about that first known use. Likewise, the Wikipedia page about front and back ends is fairly low quality, and cites very few sources.

  • How do you keep cool when the production system goes down?

    - by Mag20
    This has happened to most of us... You come to work one day. Everything seems normal: the sun is shining, birds are chirping, but you notice a couple of weird things on your way to work, like déjà vu with the cat in The Matrix. You get into the office; there are a lot of phones ringing, but it could be that they are just doing a new sales promotion. You settle in, when you notice a dark cloud hovering over you. It takes you a couple of moments, but you recognize the cloud is your boss. Usually he checks on you every morning with his "Soooo Peeeeter, how about those TCP/IP reports?" routine, but today he forgot all about common manners and rudely invaded your personal space. No "Good Morning", just some drooling, grunts and curses. He reminds you a bit of a Neanderthal trying to get away from a cyber-tooth tiger: fear and panic compressed into a tight ball. You try to decipher the new language that he has created since yesterday, and you start understanding that something bad happened overnight - the production system went down. Now, your system is usually used by clients during regular working hours from 9 to 5, but for whatever reason you didn't get any alerts on your beeper (for people under 30 - a beeper was like a mobile phone that could only ring and tell you who beeped you). Need to remember to charge it next time. So it is 8:45am, and the system MUST be up at 9am. Every 10 seconds your boss lets out yet another curse, which communicates to you that another customer is having problems getting into the system. Also, several account managers are now hovering over your boss, trying to make him understand how clients are REALLY REALLY suffering. Everyone is depending on you to get the system up ASAP, while at the same time hindering your progress by constantly distracting you. How do you keep cool in a situation like this?

  • Is there a good book to grok C++?

    - by Paperflyer
    This question got me thinking. I would say I am a pretty experienced C++ programmer. I use it a lot at work, I had some courses on it at university, and I can understand most C++ code I find out there without problems. Other languages you can pretty much learn by using them. But every time I use a new C++ library or check out some new C++ code by someone I didn't know before, I discover a new set of idioms C++ has to offer. Basically, this has led me to believe that there is a lot of stuff in C++ that might be worth knowing but that is not easily discoverable. So, is there a good book for a somewhat experienced C++ programmer to step up the game? You know, to kind of 'get' that language the way you can 'get' Ruby or Objective-C, where everything just suddenly makes sense and you start instinctively knowing 'the C++ way of things'?

  • Understanding the problem when things break in production

    - by bitcycle
    Scenario:

        - You push to production.
        - The push breaks multiple things.
        - That same build did not break QA or dev.
        - As a developer, you don't have prod access.
        - There is lots of pressure from above to get things working again.

    Specifics: a PHP/MVC application that is API-driven in Zend, deployed to a few servers. My question: while investigating, let's say I have a hunch that something is wrong, but I don't know for sure - and, of course, I can't test things in production. If I have a suggested fix based on that hunch, would it be wise to try applying it and see if it works before understanding what the problem is?

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than two years of college left and I still don't know what to focus on. But this is not about me; this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life." Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... but projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related work, and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of them. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web-related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: how broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

  • Have unit test generators helped you when working with legacy code?

    - by Duncan Bayne
    I am looking at a small (~70kLOC including generated) C# (.NET 4.0, some Silverlight) code-base that has very low test coverage. The code itself works in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing & refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the code-base, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code.

  • Pointer initialization doubt

    - by Jestin Joy
    We can initialize a character pointer like this in C:

        char *c = "test";

    where c points to the first character (t). But when I write the code below, it gives a segmentation fault:

        #include <stdio.h>
        #include <stdlib.h>

        main()
        {
            int *i;
            *i = 0;
            printf("%d", *i);
        }

    But when I write:

        #include <stdio.h>
        #include <stdlib.h>

        main()
        {
            int *i;
            i = (int *) malloc(2);
            *i = 0;
            printf("%d", *i);
        }

    it works (gives output 0). Also, when I use malloc(0), it also works (gives output 0). Please tell me what is happening.

  • What's the best language combo for code generation?

    - by Peter Turner
    I read through Code Generation in Action but never bothered to make anything of it because Ruby just doesn't fit with my lifestyle at this juncture. The book came out more on the cusp of the C# revolution, and it said that C# "was a language designed to be generated", apparently using Ruby as the generator language. In your experience, what is the ideal combination of languages to generate the most useful code?
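
    As a toy illustration of the generator/target split being asked about (not a recommendation of any particular combo), here is a C# program emitting C# - the same pattern the book implements with Ruby as the generator language. The model and names are invented:

        using System;

        class CodeGen
        {
            static void Main()
            {
                // Tiny "model" driving the generation.
                var fields = new[] { ("Name", "string"), ("Age", "int") };

                Console.WriteLine("public partial class Person");
                Console.WriteLine("{");
                foreach (var (name, type) in fields)
                    Console.WriteLine($"    public {type} {name} {{ get; set; }}");
                Console.WriteLine("}");
            }
        }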

  • Worst coding standard you've ever had to follow?

    - by finnw
    Have you ever had to work to coding standards that:

        - greatly decreased your productivity?
        - were originally included for good reasons, but were kept long after the original concern became irrelevant?
        - were in a list so long that it was impossible to remember them all?
        - made you think the author was just trying to leave their mark rather than encouraging good coding practice?
        - you had no idea why they were included?

    If so, what is your least favourite rule, and why? Some examples here

  • Why doesn't Haskell have type-level lambda abstractions?

    - by Petr Pudlák
    Are there theoretical reasons for that (like type checking or type inference becoming undecidable), or practical reasons (too difficult to implement properly)? Currently, we can wrap things into a newtype, like

        newtype Pair a = Pair (a, a)

    and then have Pair :: * -> *, but we cannot write something like

        λ(a:*). (a,a)

    (There are languages that have type-level lambdas; for example, Scala does.)

  • Use cases for node.js and C#

    - by Chase Florell
    I do quite a bit of ASP.NET work (C#, MVC), but most of it is typical web development: RESTful architecture using CRUD repositories. Most of my clients don't have a lot of advanced requirements within their applications. I'm now looking at node.js and its performance implications (I'm addicted to speed), but I haven't delved into it all that much. I'm wondering whether:

        - node.js can realistically replace my typical web development in C# and ASP.NET MVC (not rewriting existing apps, but when working on new ones)
        - node.js can complement an ASP.NET MVC app by adding some async goodness to the existing architecture

    Are there use cases for/against C# and node.js?

    Edit: I love ASP.NET MVC and am super excited about where it's going. Just trying to see if there are special use cases that would favor node.js.

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a single backlog, which mostly contains functional topics but sometimes also technical ones. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some doubts.

    First, to me it seems more logical to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, ... With one backlog, developers always have to go through the product owner to get their topics added, which seems like additional, unnecessary work for the product owner.

    Second, if you have a product owner who focuses only on the purely functional items, the purely technical ones (missing technical documentation, code that erodes and should be refactored, classes that always cause problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the bottom of the list, because "they don't serve the customer directly". With a separate technical backlog, and time reserved in every sprint for these technical items, we can improve the applications functionally but also keep them healthy inside.

    What is the best approach? One backlog or two?

  • How to deal with cargo-cult programming attitude?

    - by Aivar
    I have some students (in an introductory programming course) who see a programming language as a set of magic spells that must be cast in order to achieve some effect (instead of seeing it as a flexible medium for expressing their ideas about a solution). They tend to copy-paste code from previous similar-sounding assignments without considering the essence of the problem. Can anyone recommend some exercises or analogies to make those students more confident that they can and should understand the structure and meaning of each piece of code they write?
