Search Results

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters as an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities. There was also a form containing a couple of dropdown lists. With the generic repository, the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepositories, plus a few others, to cover the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view there is only one DbContext per request, and the DI container will serve that same DbContext to all of the repositories. From a code standards/readability point of view, it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.
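    One common answer to the "too many repository parameters" problem is to inject a single object that can hand out repositories on demand, so the constructor takes one dependency instead of N. The sketch below is one possible shape, not taken from the question; IRepositoryProvider and RepositoryProvider are hypothetical names.

        public interface IRepositoryProvider
        {
            IRepository<T> Repository<T>() where T : class;
        }

        public class RepositoryProvider : IRepositoryProvider
        {
            private readonly DbContext _context;

            public RepositoryProvider(DbContext context)
            {
                _context = context;
            }

            // Every repository handed out shares the same request-scoped
            // DbContext, so the performance characteristics are unchanged.
            public IRepository<T> Repository<T>() where T : class
            {
                return new GenericRepository<T>(_context);
            }
        }

        public class HomeController
        {
            private readonly IRepositoryProvider _repositories;
            private readonly ILogError _logError;
            private readonly ICurrentUser _currentUser;

            public HomeController(IRepositoryProvider repositories,
                                  ILogError logError,
                                  ICurrentUser currentUser)
            {
                _repositories = repositories;
                _logError = logError;
                _currentUser = currentUser;
            }

            public void BuildDashboard()
            {
                var ones = _repositories.Repository<EntityOne>();
                var twos = _repositories.Repository<EntityTwo>();
                // ...query whatever the dashboard needs
            }
        }

    The trade-off is that the controller's dependencies are no longer visible in its signature, which some consider a service-locator smell; an alternative with the same effect is a dashboard-specific query service that encapsulates the multi-entity reads behind one interface.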

    Read the article

  • C# inherit from a class in a different DLL

    - by Onno
    I need to make an application that is highly modular and can easily be expanded with new functionality. I've thought up a design where I have a main window and a list of actions implemented using a strategy pattern. I'd like to implement the base classes/interfaces in a DLL and have the option of loading actions from DLLs which are loaded dynamically when the application starts. This way the main window can initiate actions without my having to recompile or redistribute a new version; I just have to (re)distribute new DLLs, which I can update dynamically at runtime. This should enable very easy modular updating from a central online source. The 'action' DLLs all inherit their structure from the code defined in the DLL which defines the main strategy pattern structure and its abstract factory. I'd like to know if C#/.NET will allow such a construction. I'd also like to know whether this construction has any major problems in terms of design.
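    C#/.NET does allow this; it is the usual shape of a plugin architecture. A minimal sketch of the loading side, assuming a shared contracts DLL that both the host and every action DLL reference (IAction and ActionLoader are invented names, not from the question):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // Lives in the shared contracts assembly.
        public interface IAction
        {
            string Name { get; }
            void Execute();
        }

        public static class ActionLoader
        {
            // Scan a plugins folder, load each assembly, and instantiate
            // every concrete type that implements IAction.
            public static IEnumerable<IAction> LoadActions(string pluginDirectory)
            {
                foreach (var dll in Directory.GetFiles(pluginDirectory, "*.dll"))
                {
                    var assembly = Assembly.LoadFrom(dll);
                    var actionTypes = assembly.GetTypes()
                        .Where(t => typeof(IAction).IsAssignableFrom(t)
                                    && !t.IsAbstract && !t.IsInterface);

                    foreach (var type in actionTypes)
                        yield return (IAction)Activator.CreateInstance(type);
                }
            }
        }

    One design caveat: an assembly loaded into an AppDomain cannot be unloaded from it, so "updating DLLs at runtime" in classic .NET means loading plugins into a separate AppDomain (or restarting the application) rather than swapping files in place.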

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have much fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this; only the unit-test libraries would be. What are your thoughts on this? As an example, in my old code base, which is private, I can initiate a WebDAV server with just this:

        var server = new WebDAVServer();

    This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.
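    For comparison, the wrapper alternative mentioned at the end usually looks something like the sketch below; IHttpListener and HttpListenerWrapper are invented names, and only the members the server actually touches need to be wrapped:

        using System;
        using System.Net;

        // The seam: WebDAVServer depends on this instead of the
        // sealed HttpListener, so tests can substitute a fake.
        public interface IHttpListener : IDisposable
        {
            void Start();
            void Stop();
        }

        // Thin production adapter over the real listener.
        public sealed class HttpListenerWrapper : IHttpListener
        {
            private readonly HttpListener _listener = new HttpListener();

            public void Start()
            {
                _listener.Start();
            }

            public void Stop()
            {
                _listener.Stop();
            }

            public void Dispose()
            {
                _listener.Close();
            }
        }

    A test can then hand WebDAVServer a fake IHttpListener, dispose the server, and assert whether Dispose was called on the fake; the ownership rule ("dispose what you created, leave what you were given") becomes directly testable without TypeMock.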

    Read the article

  • Java web UI framework like ASP.NET MVC?

    - by Ethel Evans
    I'm doing some web apps for personal projects that might be shared out with my friends. I'm trying to use skills that will help me at work, but I don't have $$ to spend on Visual Studio right now and don't want to try to cobble something together with the Express editions. Since I've been sort of wanting to bring my Java skills up to date, and the main skills I want to work on are design and architecture skills, this isn't a big deal, except that I have no idea how to track down the right UI framework. I know I want something based on MVC, to get more practice with frameworks for that design pattern (we're using ASP.NET MVC 2 at work). The UIs that I'll be making will be pretty simple: data entry, buttons, text, images. They will need AJAX. Any thoughts about which frameworks to look at? I'll be watching the comments, if anyone wants additional clarification on what I'm looking for.

    Read the article

  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:

    - Should an iterator into a custom container survive the container itself being destroyed?
    - Should it be possible to detect when an iterator becomes invalidated?
    - Are the above conditional on "debug builds" in practice?

    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have a horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL lets you easily blow your legs off by accidentally leaking an iterator. Is using shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this in "idiomatic C++11" avoids charges of subjectivity...)

    Read the article

  • How does the "Fourth Dimension" work with arrays?

    - by Questionmark
    Abstract: So, as I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically:

    - The 1st would be represented by a line.
    - The 2nd would be represented by a square.
    - The 3rd would be represented by a cube.

    Simple enough until we get to the 4th -- it is kinda hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time.

    The Question: Now, that is all great with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: how does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, etc. arrays in many different programming languages, but I want to know how that works.
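    At the machine level no geometry is involved: an N-dimensional array is a flat block of memory plus an indexing formula, so a 4th (or 40th) dimension costs nothing conceptually. A short C# illustration of the equivalence (the numbers are arbitrary):

        using System;

        public static class FourD
        {
            public static void Main()
            {
                // A 4D array: 2*3*4*5 = 120 ints in one contiguous block.
                int[,,,] a = new int[2, 3, 4, 5];
                a[1, 2, 3, 4] = 42;

                // The same storage flattened by hand. Row-major offset:
                // offset = ((w*D1 + x)*D2 + y)*D3 + z
                int[] flat = new int[120];
                int w = 1, x = 2, y = 3, z = 4;
                flat[((w * 3 + x) * 4 + y) * 5 + z] = 42;

                // Both ways name the same logical element.
                Console.WriteLine(a[1, 2, 3, 4]);                       // 42
                Console.WriteLine(flat[((w * 3 + x) * 4 + y) * 5 + z]); // 42
            }
        }

    Each extra dimension just adds one more multiply-and-add to the offset formula; nothing about the hardware needs to "be" four-dimensional.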

    Read the article

  • Composition vs. aggregation: which does this example show?

    - by meWantToLearn
    Composition and aggregation are both confusing to me. Does my code sample below indicate composition or aggregation?

        class A {
            public static function getData($id) {
                // something
            }

            public static function checkUrl($url) {
                // something
            }
        }

        class B {
            public function executePatch() {
                $data = A::getData(12);
            }

            public function readUrl() {
                $url = A::checkUrl('http/erere.com');
            }

            public function storeData() {
                // something not related to class A at all
            }
        }

    Is class B a composition of class A, or is it an aggregation of class A? Does composition purely mean that if class A is deleted, class B does not work at all, and aggregation that if class A is deleted, the methods in class B that do not use class A will still work?
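    Worth noting: calling another class's static methods, as B does here, is usually classified as a plain dependency ("uses") rather than either composition or aggregation, which both describe one object holding another. For contrast, here is how the two usually look in code; this is a C# sketch, since the terms aren't PHP-specific, and the class names are made up:

        using System.Collections.Generic;

        // Composition: Car owns its Engine. The engine is created
        // with the car and dies with it; no independent lifetime.
        public class Engine { }

        public class Car
        {
            private readonly Engine _engine = new Engine(); // owned
        }

        // Aggregation: School groups Students that exist on their
        // own. Destroying the school leaves the students intact.
        public class Student { }

        public class School
        {
            private readonly List<Student> _students;

            public School(List<Student> students) // borrowed, not owned
            {
                _students = students;
            }
        }

    So the lifetime intuition in the question is roughly right, but it applies to object lifetimes at runtime, not to deleting a class from the codebase (deleting class A would break compilation either way).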

    Read the article

  • Help migrating from VB style programming to OO programming [closed]

    - by Agent47DarkSoul
    Being a hobbyist Java developer, I quickly took to OO programming and understood its advantages over the procedural code from C that I did in college. But I couldn't grasp VB event-based code (weird, right?). Bottom line: OOP came naturally to me. Currently I work in a small development firm developing C# applications. My peers here are a bit attached to the VB style of programming. Most of the C# code written is VB6 event-handling code in C#'s skin. I tried explaining OOP to them, with its advantages, but it wasn't clear to them, maybe because I have never been much of a VB programmer. So can anybody provide any resources, books or web articles, on how to migrate from VB style to OO style programming?

    Read the article

  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box, and the API will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether more points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, then I split the bounding box into four quadrants and recurse. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results. So I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, then I eventually get to a point where a lot of requests return 0 results, which isn't so efficient. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of the nominal point density in the bounding box.)
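    A sketch of the recursive strategy with the split factor pulled out as a parameter, so quadrants (2 per axis) can be compared empirically against 3x3 or 4x4 grids. The API is stubbed as a delegate because the real service isn't named in the question:

        using System;
        using System.Collections.Generic;

        public record BoundingBox(double West, double South, double East, double North);
        public record MapPoint(double Lon, double Lat);

        public class PointFetcher
        {
            private const int ApiLimit = 10;
            private readonly Func<BoundingBox, IReadOnlyList<MapPoint>> _queryApi;

            public PointFetcher(Func<BoundingBox, IReadOnlyList<MapPoint>> queryApi)
            {
                _queryApi = queryApi;
            }

            // Subdivide until a box returns fewer than the cap, which is
            // the only proof that nothing was silently truncated.
            public IEnumerable<MapPoint> FetchAll(BoundingBox box, int splitPerAxis = 2)
            {
                var results = _queryApi(box);
                if (results.Count < ApiLimit)
                {
                    foreach (var p in results) yield return p;
                    yield break;
                }

                double w = (box.East - box.West) / splitPerAxis;
                double h = (box.North - box.South) / splitPerAxis;

                for (int i = 0; i < splitPerAxis; i++)
                for (int j = 0; j < splitPerAxis; j++)
                {
                    var cell = new BoundingBox(
                        box.West + i * w, box.South + j * h,
                        box.West + (i + 1) * w, box.South + (j + 1) * h);

                    // Points exactly on shared edges may come back twice;
                    // deduplicate downstream if the API treats edges as inclusive.
                    foreach (var p in FetchAll(cell, splitPerAxis)) yield return p;
                }
            }
        }

    The trade-off the question identifies is real: a larger split factor reaches the "fewer than 10" base case in fewer levels but wastes calls on empty cells, so the optimum depends on point density, which is exactly the unknown.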

    Read the article

  • Start with open source desktop application and move to iPhone/Android app

    - by user92356
    I'm a high schooler and I am competing in an open source software development competition. It must be a desktop application that runs on either Windows or Linux. I have a great idea for the open source desktop app, and I wanted to know if I could take it further and port it to the iPhone or Android platform and make money (preferably through a $0.99 price, not ads). I read somewhere that certain open source licenses allow me to do this... am I correct?

    Read the article

  • Is it a good pattern that no object should know more than it needs to know?

    - by Jim Thio
    I am implementing a view controller class. The view controller gets an NSNotification when the grabbing class starts or finishes updating. I have two choices. I can make the grabbing class provide a public read-only property, so all other classes can know whether it is still updating. Or I can have the view controller listen to two different events: a start-updating event and a finish-updating event. The truth is the view controller does need to know whether the grabbing class is still updating at other times as well. So I am thinking that creating the two events would be the better way to go. What do you think?
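    The two options side by side, as a sketch in C# (the notification mechanics differ in Objective-C, but the design question is the same; all names are invented):

        using System;

        public class Grabber
        {
            // Option 1: queryable state. Anyone can ask at any moment,
            // which covers "do I need to know right now?"
            public bool IsUpdating { get; private set; }

            // Option 2: notifications. Observers hear about the
            // transitions without polling.
            public event EventHandler UpdateStarted;
            public event EventHandler UpdateFinished;

            public void Update()
            {
                IsUpdating = true;
                if (UpdateStarted != null) UpdateStarted(this, EventArgs.Empty);

                // ...do the grabbing work...

                IsUpdating = false;
                if (UpdateFinished != null) UpdateFinished(this, EventArgs.Empty);
            }
        }

    They are not mutually exclusive: events answer "tell me when it changes", while the property answers "tell me where things stand right now". Since the view controller needs both kinds of knowledge, exposing both is a common and defensible choice.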

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I've gathered so far:

    - One can already do syntax highlighting with only the list of tokens. Numbers and operators get coloured accordingly, and keywords also.
    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many white spaces or newline characters should follow it. Also, maintain an alignment variable while printing tokens (when the code printer reads "{", increment the alignment variable by 1, and decrement it by 1 for "}"; whenever it starts printing on a new line, the code printer indents according to this alignment variable).
    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls which. If one can identify the body of a function, then one can also search it for mentions of other functions' names.
    - Gathering statistics about the code, like number of lines, instructions, subroutines.

    EDIT: I clarified why I think some processes are possible. As I read comments and responses, I realise that the answer depends very much on the language that I'm parsing.
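    The indenting idea in the second bullet takes surprisingly little code. A sketch in C#, treating tokens as plain strings printed one per line so the alignment logic stays visible (a real printer would also consult per-token spacing rules):

        using System;
        using System.Collections.Generic;
        using System.Text;

        public static class TokenPrinter
        {
            public static string Format(IEnumerable<string> tokens)
            {
                var sb = new StringBuilder();
                int alignment = 0; // the "alignment variable"

                foreach (var token in tokens)
                {
                    if (token == "}") alignment--;   // close: outdent first

                    sb.Append(new string(' ', 4 * Math.Max(alignment, 0)));
                    sb.AppendLine(token);

                    if (token == "{") alignment++;   // open: indent what follows
                }
                return sb.ToString();
            }
        }

    Feeding it the tokens "if", "(x)", "{", "y;", "}" prints the "}" back at the same depth as the "if", which is the whole trick: indentation is a property of the token stream, no parse tree required.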

    Read the article

  • The cost of a longer delay between development and QA

    - by Neil N
    At my current position, QA has become a bottleneck. We have had the unfortunate occurrence of features being held out of the current build so that QA could finish testing. This means features that are done being developed may not get tested for 2-3 weeks after the developer has already moved on. With dev moving faster than QA, this time gap is only going to get bigger. I keep flipping through my copy of Code Complete, looking for a "Hard Data" snippet showing that the cost of fixing defects grows exponentially the longer they exist. Can someone point me to some studies that back up this concept? I am trying to convince the powers that be that the QA bottleneck is a lot more costly than they think.

    Read the article

  • Would you use (a dialect of) LISP for a real-world application? Where and why?

    - by Anto
    LISP (and dialects such as Scheme, Common Lisp and Clojure) hasn't gained much industry support even though these are quite decent programming languages. (At the moment, though, it seems like they are gaining some traction.) Now, this is not directly related to the question, which is: would you use a LISP dialect for a production program? What kind of program, and why? Uses where the LISP code is integrated into some other code base (e.g. C) are included as well, but note in your answer if that is what you mean. Broad concepts are preferred, but specific applications are okay as well.

    Read the article

  • How do you achieve a numeric versioning scheme with Git?

    - by Erlend
    My organization is considering moving from SVN to Git. One argument against moving is: how do we do versioning? We have an SDK distribution based on the NetBeans Platform. As SVN revisions are simple numbers, we can use them to extend the version numbers of our plugins and SDK builds. How do we handle this when we move to Git? Possible solutions:

    - Using the build number from Hudson (problem: you have to check Hudson to correlate that to an actual Git version)
    - Manually upping the version for nightly and stable builds (problem: learning curve, human error)

    If someone else has encountered a similar problem and solved it, we'd love to hear how.

    Read the article

  • Visual Studio 2010 editor painfully slow

    - by Daniel Gehriger
    I'm running out of patience with MS Visual Studio 2010: I'm working on a solution containing ~50 C++ projects. When using the editor, I experience a lag of 1-2 seconds whenever I move the cursor to a different line, or when I move to a different window, or generally when the editor loses and regains focus. I went through a whole series of optimizations, to no avail:

    - installed all hotfixes for VS2010
    - disabled all add-ins and extensions
    - disabled IntelliSense
    - deleted all temporary files created by VS2010
    - disabled hardware acceleration
    - unloaded all but 15 projects
    - disabled tracking changes
    - closed all but one window

    and so on. This is on a dual-core machine with an SSD hard drive (verified throughput 100 MB/s), enough free space on the disk, and Windows 7 Pro 32-bit with 3 GB of RAM, most of it still free. Whenever I type a letter, CPU usage of devenv.exe goes to 50-90% in Process Monitor for 1-2 seconds before returning to 5%. I used Process Explorer to analyze registry and file system access, and I only notice frequent accesses to the .sln file (which is quite small) and a few registry reads, but nothing that would raise a red flag. I don't have this problem with solutions containing fewer projects, so I'm inclined to think it's related to the number of projects. For your information, the entire solution has been migrated over the years from VS2005 to VS2008 to now VS2010. Does anyone have any ideas what else I could do to resume work on this project, other than returning to VS2008?

    Read the article

  • How to deal with project managers who micromanage?

    - by entens
    Perhaps I'm just naive, but when I try to decipher the wall of tasks I'm expected to do over the course of a week, I just can't help but think whoever builds the project schedule needs to get some remedial training in basic project management. For example, I am assigned 13 tasks today, the shortest lasting .13 days (the default time metric in Microsoft Project) and the longest lasting .75 days. I can't help but think that scheduling projects in sub-10-minute intervals is blatant micromanagement. The effects of this management style are becoming evident in slipped tasks, resource assignments exceeding capacity by a factor of two at some points in time, and more time spent clearing tasks and figuring out what comes next than actually doing work. How can I convince the project manager to create tasks with larger durations and to see the larger picture?

    Read the article

  • Best way to land a new job while working full time?

    - by JerryC
    I work full time as a software engineer, but I would like to find another job. I'm a little worried about posting my resume directly to Monster or Dice, just in case someone at my company finds out about it. Recruiters occasionally call me (maybe once a month or so), but I'd like some more frequent correspondence so I can speed this process up. What are some good strategies for finding a new job while working?

    Read the article

  • Are all languages basically the same?

    - by Anirudh
    Recently, I had to understand the design of a small program written in a language I had no idea about (ABAP, if you must know). I could figure it out without too much difficulty. I realize that mastering a new language is a completely different ball game, but purely understanding the intent of code (specifically production-standard code, which is not necessarily complex) in any language is straightforward if you already know a couple of languages (preferably one procedural/OO and one functional). Is this generally true? Are all programming languages made up of similar constructs, like loops, conditional statements and message passing between functions? Are there non-esoteric languages that a typical Java/Ruby/Haskell programmer would not be able to make sense of? Do all languages have a common origin?

    Read the article

  • What is the best way to deliver an OS (Linux) and web application to a client?

    - by Fernando Costa
    After a year programming a web-based business management system, I find my idea split into two different ways of doing what I'm doing. I will try to explain in the following lines. First, I will describe my environment:

    - Web server: Apache, nginx
    - Programming languages: PHP, shell script, JavaScript, SQL
    - Database: MySQL
    - Operating system: Linux, UNIX (all distros) (works on Windows if manually configured)
    - Authentication server: FreeRADIUS

    First situation: I have my application running on the environment I just described. As my application is a SaaS app, I have my own server to run it all, and customers pay to use it as a service accessed by web browser.

    Second situation: The same as before, but with one big difference: everything (the environment) is installed at the customer's site, so I need to encrypt all my code (this includes the PHP and shell scripts). I think this situation is the more difficult one, but I would like to hear it from different points of view.

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
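    The mismatch the question describes is pure arithmetic, and easy to reproduce. A quick check of the drive-label math in C#:

        using System;

        public static class DiskUnits
        {
            public static void Main()
            {
                // A drive sold as "1 TB" holds 10^12 bytes.
                double bytes = 1e12;

                // The OS divides by binary units: 1 GiB = 2^30 bytes.
                double gib = bytes / Math.Pow(2, 30);

                Console.WriteLine("{0:F0} GiB", gib); // prints "931 GiB"
            }
        }

    Both numbers describe the same byte count, just divided by 10^9 versus 2^30, which is exactly why a "1 TB" drive shows up as 931 GB.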

    Read the article

  • How does one handle sensitive data when using Github and Heroku?

    - by Jonas
    I am not yet accustomed to the way Git works (and wonder if anyone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you are likely to share this repo on GitHub or another Git host. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc. But these things still need to be part of the Git repo, since you use it to push your code to Heroku. How do you work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. My question is more about how to "hide" parts of the code.

    Read the article

  • Looking for suggestions: becoming a hireable young programmer [closed]

    - by Dan
    I am a 17-year-old Java programmer who has filled the last year with learning all of the ins and outs of Java. Using Eclipse, and with the help of a friend of the family (a Java programming architect for some company), I have learned everything from serializing objects, basic networking, generics, reflection, multi-threading, code optimization and efficiency to some concurrency safety; I built my own proxy class, and nowadays I answer questions on Project Euler. I am seeking some suggestions on where I go next, or where I go from here to get a job in programming. I dedicate at least an hour every day to coding, sometimes literally the entire day, and I have really come to love the process. I just started reading Effective Java (2nd edition) and learning Scala (which, as I often see suggested, is possibly the Java replacement). I will be going to college for Computer Science next year and taking AP Computer Science this year (however, I took a practice exam and got an 87, and only need a 60 to 70 to pass, so no need to study for it too much). I was wondering if getting the SE 7 OCA and OCP would help me in trying to get a programming job. I looked around, and most people have said online that the OCA/OCP are practically useless, but at my age do they make me any more credible? More or less, what would you recommend to get a job in programming these days, or to distinguish yourself from the crowd? I have enough time and dedication to learn another language, or anything really. Thank you very much.

    Read the article

  • Help us with our git workflow

    - by Brandon Cordell
    We have a web application that gets deployed to multiple regions around our state, with an instance of the application for each region. We maintain a staging and a production (master) branch in our repository, but we were wondering what the best way is to maintain each instance's codebase. The core is similar everywhere, but we have to give each region the ability to request specific features that may not make it into the core of the application. Right now we have branches for each region, like region_one_staging and region_one_production. At the rate we're growing, we'll have hundreds of branches within the next few years. Is there a better way to do this?

    Read the article

  • Real-time chat in Ruby on Rails

    - by Skydreamer
    First, I'm sorry because I know this question has been asked many times, but I still haven't found the answer to my problem. I want to implement real-time chat for my Rails app, but I can't really host the server that handles the sockets. I've tried Faye, but it needs a server. I've also heard of Pusher, but it's limited to 20 users at a time on the chat, and I can't really be sure there won't be more. I've thought of IRC, but I don't think I can embed it into a Rails app; maybe it needs sockets too... So here's my problem: can I implement real-time chat without owning a server? What can you advise me? Thank you for your answers.

    Read the article
