Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 194 of 563

  • Small-scale database options for .NET

    - by raney
    I have a .NET 4.0/WPF based application I've developed and maintain for my company. It acts as a friendly GUI central point of information, combining information pulled from a couple of SQL databases with CSV exports from a few other applications.

    I would like to build out my own database to support the entirety of the information the application accesses, so that a service running on my server could read in the necessary remote SQL info and file exports. That would give the user's application a single database to connect to, and would remove all of the file handling currently involved in the program (copying new CSV resources from a network location, reading them into memory on each launch). I have complete control and flexibility here as long as the user's experience isn't affected, and this is as much a learning experience as it is tidying up. The caveat: I don't have much in the way of a budget.

    Right now I recognize my options to be:

    - SQL Express - I'm comfortable with the server setup, and I like ADO.NET and LINQ to SQL. I feel that I have the least to learn here, and it would let me focus on SQL in a familiar environment. Perhaps in conjunction with Entity Framework?
    - MongoDB - I don't know a whole lot about it, but I've heard the name enough to make me curious. Brief research makes it seem friendly enough, and there is .NET support. I like working with open source projects.

    My questions are:

    - What's popular and extensible right now? I'm not far from starting to job-hunt, and I'd like this project to be relevant going forward.
    - What am I missing? Pros, cons? Other options? What plays well with .NET?
    - What are the things I should be considering, and the questions I should be asking, when making a decision like this?

    Thanks for your time.
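
    For the SQL Express route, a minimal sketch of what the code could look like with Entity Framework's code-first DbContext on top of LINQ (the entity and context names here are hypothetical, and the EntityFramework package is assumed):

        using System;
        using System.Data.Entity;   // EntityFramework package (assumed)
        using System.Linq;

        // Hypothetical table backing one of the imported CSV feeds.
        public class PartRecord
        {
            public int Id { get; set; }
            public string Source { get; set; }
            public DateTime ImportedAt { get; set; }
        }

        public class WarehouseContext : DbContext
        {
            // Maps to a table by EF convention.
            public DbSet<PartRecord> Parts { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                DateTime cutoff = DateTime.Today.AddDays(-7);
                using (var db = new WarehouseContext())
                {
                    // Same LINQ style as LINQ to SQL, so little new to learn.
                    var recent = db.Parts.Where(p => p.ImportedAt > cutoff).ToList();
                    Console.WriteLine("{0} records imported this week", recent.Count);
                }
            }
        }

    The MongoDB C# driver gives a similar typed-collection feel, so either option keeps the data access idiomatic .NET.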

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development at a small startup, and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host running data collection and analysis software. I have come to develop two guidelines when I work with my colleagues:

    - Define a clear separation of responsibilities, and make sure each person's contribution to the final product doesn't overlap.
    - Don't assume your colleagues know everything about their responsibilities. I assume there is some technology I will need to be competent in to properly interface with the work of my colleagues.

    The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in the sense of two people's work being mixed into the final product; for that to happen, one guy has to hand off work to another guy, who will vet it and integrate it himself.

    The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have - at least not until they've demonstrated it to me a couple of times. So whenever I develop a piece of firmware that interfaces with some technology I don't know, I try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUsbDotNet for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works (see the sketch below).

    Now, all this takes extra time and energy, but I feel it's justified because it gives me a foothold for confronting integration problems. I also like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem.

    So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as teams get bigger and more diverse?
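
    For illustration, the kind of throwaway probe app described above might look like this with LibUsbDotNet. The vendor/product IDs are placeholders, and the finder/open calls are recalled from the library's samples, so treat this as a sketch rather than a reference:

        using System;
        using LibUsbDotNet;
        using LibUsbDotNet.Main;

        class UsbProbe
        {
            static void Main()
            {
                // Placeholder VID/PID: substitute the actual RF device's IDs.
                UsbDeviceFinder finder = new UsbDeviceFinder(0x1234, 0x5678);
                UsbDevice device = UsbDevice.OpenUsbDevice(finder);

                if (device == null)
                {
                    Console.WriteLine("Device not found");
                    return;
                }

                Console.WriteLine("Opened: " + device.Info.ProductString);
                device.Close();
            }
        }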

  • junior / professional / senior categorization

    - by oozoo
    Hey guys, is it just me, or is the categorization of developer levels highly subjective? I get the feeling that every company tries to hire experienced developers as juniors because they don't know $technology. For example, my own career: I switched technologies a couple of times while sticking with Java as a programming language. I first worked for 3 years using JavaSE technologies; the next company I worked for hired me as a junior because I didn't have JavaEE experience - while still selling me to customers at professional level (I work in consulting). The next company hired me again as a junior because I didn't have SAP experience - they mostly work with SAP Java technologies, which is definitely a niche. Still, they sell all their technology consultants at exactly the same rate while paying them significantly different wages. Now, switching jobs again, I feel like this whole thing is going to start all over because I don't have Spring experience or Oracle knowledge. tl;dr: is my observation totally off base, or are companies just using these categorizations as a means to keep wages down?

  • Experience of Python's “PEP-302 New Import Hooks”

    - by Koichi Sasada
    I'm one of the developers of Ruby (CRuby). We are working on the Ruby 2.0 release (planned for release in February 2012). Python has "PEP 302: New Import Hooks" (2003): "This PEP proposes to add a new set of import hooks that offer better customization of the Python import mechanism. Contrary to the current import hook, a new-style hook can be injected into the existing scheme, allowing for a finer grained control of how modules are found and how they are loaded."

    We are considering introducing a feature similar to PEP 302 into Ruby 2.0 (CRuby 2.0), and I want to make a proposal that can persuade Matz. Currently, CRuby can only load scripts from the file system in a standard way. If you have any experience with, or thoughts about, PEP 302, please share. Examples:

    - It's a great spec. No need to change it.
    - It is almost good, but it has this problem...
    - If I could go back to 2003, I would change the spec to...

    I'm sorry if such a question is not suitable here. I posted here because I'm not sure I can ask this question on python-dev (of course, that list is not for CRuby development). This post was moved from http://stackoverflow.com/questions/11188229/experience-of-pythons-pep-302-new-import-hooks.

  • How to implement a safe password history

    - by Lorenzo
    Passwords shouldn't be stored in plain text for obvious security reasons: you have to store hashes, and you should also generate the hashes carefully to avoid rainbow-table attacks. However, you usually have the requirement to store the last n passwords and to enforce minimal complexity and a minimal change between the different passwords (to prevent the user from using a sequence like Password_1, Password_2, ..., Password_n). This would be trivial with plain-text passwords, but how can you do it when you store only hashes? In other words: how is it possible to implement a safe password history mechanism?
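
    One common scheme, sketched below assuming PBKDF2 via .NET's Rfc2898DeriveBytes: keep a salted hash per historical password, and test a candidate by re-hashing it with each entry's salt. Note that this only catches exact reuse; the "minimal change" rule has to be checked at change time, when the user supplies both the old and the new password in plain text.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;

        public class PasswordHistory
        {
            const int Iterations = 10000, SaltSize = 16, HashSize = 32, Depth = 5;

            class Entry { public byte[] Salt; public byte[] Hash; }
            readonly List<Entry> entries = new List<Entry>();

            // Re-hash the candidate with each stored salt; a match means reuse.
            public bool WasUsedBefore(string candidate)
            {
                return entries.Any(e => Hash(candidate, e.Salt).SequenceEqual(e.Hash));
            }

            public void Accept(string newPassword)
            {
                var salt = new byte[SaltSize];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(salt);
                entries.Add(new Entry { Salt = salt, Hash = Hash(newPassword, salt) });
                if (entries.Count > Depth)
                    entries.RemoveAt(0);   // keep only the last n passwords
            }

            static byte[] Hash(string password, byte[] salt)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                    return kdf.GetBytes(HashSize);
            }
        }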

  • Licensing approach for a .NET library that might be used in desktop / web-service / cloud environments

    - by Bobrovsky
    I am looking for advice on how to architect licensing for a .NET library. I am not asking for tool/service recommendations or anything like that.

    My library can be used in a regular desktop application or in an ASP.NET solution, and now Azure services come into play. Currently, for desktop applications the library checks that the application and company names from the version info match the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems:

    - an Azure-enabled web application can run on different hardware each time (AFAIK)
    - sometimes the hardware ID for the same hardware changes unexpectedly
    - checking the hardware ID or version info might not be allowed in some circumstances (shared hosting, for example)

    So, I am thinking about what approach I can take to architect a licensing scheme that:

    - is friendly to customers (I am not trying to fight piracy, but I do want to warn a customer who uses the library on more servers than he paid for)
    - can be used when there is no internet connection
    - can be used on shared hosting

    What would you recommend?
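
    One approach that fits the offline and shared-hosting constraints is a signed license file: the customer gets a small text file stating what was bought (edition, server count, expiry), signed with your private key, and the library verifies the signature with an embedded public key - no hardware IDs or network access needed. A minimal sketch, assuming RSA via .NET's built-in classes (enforcing the server count then becomes the separate, softer warning problem described above):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class LicenseCheck
        {
            // The public key ships inside the library; the matching
            // private key stays with you and signs each license body.
            public static bool IsGenuine(string licenseBody, byte[] signature,
                                         string publicKeyXml)
            {
                using (var rsa = new RSACryptoServiceProvider())
                {
                    rsa.FromXmlString(publicKeyXml);
                    byte[] data = Encoding.UTF8.GetBytes(licenseBody);
                    return rsa.VerifyData(data,
                        CryptoConfig.MapNameToOID("SHA256"), signature);
                }
            }
        }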

  • What does this piece of code in C++ mean? [closed]

    - by user1838418
        #include <cstdlib>     // for rand() and RAND_MAX

        typedef double angle;  // assumption: "angle" must be defined elsewhere;
                               // an alias for double makes this compile

        const double pi = 3.141592653589793;
        const angle rightangle = pi / 2;   // a right angle, in radians

        // Converts degrees to radians: 90 degrees maps to pi/2.
        inline angle deg2rad(angle degree)
        {
            return degree * rightangle / 90.;
        }

        // Returns a uniformly distributed random angle in (-pi/4, +pi/4):
        // rand()/RAND_MAX lies in [0, 1]; subtracting .5 centers it on zero.
        angle function1()
        {
            return rightangle * (((double) rand()) / ((double) RAND_MAX) - .5);
        }

        // True if theta lies within +/- margin of phi.
        bool setmargin(angle theta, angle phi, angle margin)
        {
            return (theta > phi - margin && theta < phi + margin);
        }

    Please help me. I'm new to C++.

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part.

    Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to CakePHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc.

    Cake is now being updated to 2.0, and there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch, because unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed.

    And this is what gets me every time: so many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base.

    How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved).

    For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellating each of them into quads with 400 vertices, whose heights were determined by a noise function. This worked fine and looked great, but it easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible.

    So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" approach was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and I would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!
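
    For reference, here is roughly what the faster CPU-side half of that approach looks like: tessellate a patch into a vertex grid once on the CPU, leaving only the per-vertex height displacement (the noise lookup) to the vertex shader. This is a plain C# sketch with no GL calls; the buffer upload and the shader itself are omitted, and the names are hypothetical:

        // Tessellate one terrain patch into an (n+1) x (n+1) grid of xz positions.
        // Done once on the CPU; the vertex shader only displaces y by noise(x, z).
        static float[] BuildPatchGrid(int n, float patchSize)
        {
            var verts = new float[(n + 1) * (n + 1) * 3];
            int i = 0;
            for (int z = 0; z <= n; z++)
            {
                for (int x = 0; x <= n; x++)
                {
                    verts[i++] = x * patchSize / n;  // x position
                    verts[i++] = 0f;                 // y, displaced in the shader
                    verts[i++] = z * patchSize / n;  // z position
                }
            }
            return verts;
        }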

  • Can a new idea for a software project be intellectual property?

    - by Wesley Khan
    I have to do my final year project, and I am going to attempt something that no one has yet attempted. Completing the project involves some things that have already been done, but I am extending those ideas to do something new. In simple words, I have an idea that combines two existing ideas plus something of my own. Can I claim this idea as my intellectual property, so that no one else attempts it while I am doing the project? And if anybody does it after my project, will they need a license from me?

  • What is the need for a handler, in general?

    - by nish
    I have been searching for a definition of "handler". The basics I've understood: "A handler is a piece of code that is called when something happens, and usually takes some action, like generating a response" (from http://stackoverflow.com/questions/3246200/what-is-an-handler). But that could describe a trigger or a callback as well. Also, specifically, an event handler at a low level often works by polling a device and waiting for a hardware response. So, what is the specific role of a handler that makes it distinct from a trigger, a callback, or any other such function? Do all handlers play a similar role (event handler, file handler, exception handler, error handler)?
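
    A small C# illustration of the usual distinction: a callback is any piece of code you hand over to be invoked later, while a handler is that same mechanism playing a specific role - code registered against a named occurrence (an event, an exception, a request). The mechanism is the same; the name describes the role:

        using System;

        class Downloader
        {
            // Anything subscribed here is an "event handler":
            // code to be called when the "something happened" occurs.
            public event Action<string> Completed;

            public void Download(string url)
            {
                // ... do the actual work, then notify every registered handler:
                if (Completed != null)
                    Completed(url);
            }
        }

        class Program
        {
            static void Main()
            {
                var d = new Downloader();
                d.Completed += url => Console.WriteLine("Finished " + url);
                d.Download("http://example.com/file");
            }
        }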

  • Is extensive documentation a code smell?

    - by Griffin
    Every library, open-source project, and SDK/API I've ever come across has come packaged with a (usually large) documentation file, and this seems contradictory to the widespread belief that good code needs little to no comments. What separates documentation from this programming methodology? A one-to-two-page overview of a package seems reasonable, but elegant code combined with standard IntelliSense should, in my opinion, have deprecated the practice of extensive documentation by now. I feel like companies only create detailed documentation and tutorials because it's what they've always done. Why should developers constantly have to search through online documentation to learn how to do things, when such information should be intrinsic to the classes, methods, and namespaces?

  • Why is Java code indented in BSD KNF style and C/C++ code indented in Allman or BSD style?

    - by Caffeine
    I do understand that coding convention is a matter of preference, and that different coding conventions have different subtle advantages or shortcomings; depending on what one wants, one should choose one's style. But why is Java usually written with the opening brace on the same line as the function definition or control statement, while in C or C++ the curly braces get a line of their own?

    BSD KNF style:

        if (data != NULL && res > 0) {
                if (JS_DefineProperty(cx, o, "data",
                    STRING_TO_JSVAL(JS_NewStringCopyN(cx, data, res)),
                    NULL, NULL, JSPROP_ENUMERATE) != 0) {
                        QUEUE_EXCEPTION("Internal error!");
                        goto err;
                }
                PQfreemem(data);
        } else {
                if (JS_DefineProperty(cx, o, "data", OBJECT_TO_JSVAL(NULL),
                    NULL, NULL, JSPROP_ENUMERATE) != 0) {
                        QUEUE_EXCEPTION("Internal error!");
                        goto err;
                }
        }

    Allman or BSD style:

        if (x == y)
        {
            something();
            somethingelse();
        }

    Courtesy: http://en.wikipedia.org/wiki/Indent_style

  • What good books are out there on program execution models? [on hold]

    - by murungu
    Can anyone out there name a few books that address the topic of program execution models? I want a book that can answer questions such as:

    - What is the difference between interpreted and compiled languages, and what are the performance consequences at runtime?
    - What is the difference between lazy evaluation, eager evaluation, and short-circuit evaluation?
    - Why would one choose to use one evaluation strategy over another?
    - How do you simulate lazy evaluation in a language that favours eager evaluation? (See the sketch below.)
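
    To make the last two questions concrete, here is a small C# sketch: arguments are evaluated eagerly before a call, but passing a Func<T> thunk (or using Lazy<T> for call-by-need memoization) simulates lazy evaluation, and && short-circuits by language rule:

        using System;

        class EvaluationDemo
        {
            static int First(int a, int b) { return a; }            // eager: b evaluated anyway
            static int FirstLazy(int a, Func<int> b) { return a; }  // lazy: b never invoked

            static int Expensive()
            {
                Console.WriteLine("evaluating...");
                return 42;
            }

            static void Main()
            {
                First(1, Expensive());       // prints "evaluating..." although b is unused
                FirstLazy(1, Expensive);     // prints nothing: the thunk is never run

                var memo = new Lazy<int>(Expensive); // call-by-need: evaluated once, cached
                Console.WriteLine(memo.Value);       // "evaluating..." happens here, once
                Console.WriteLine(memo.Value);       // cached; no second evaluation

                bool guard = false;
                if (guard && Expensive() > 0) { }    // short-circuit: Expensive never runs
            }
        }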

  • Do certain corporations hold more weight on a resume?

    - by Ryan
    Would a developer/tester position at Google, Apple, Microsoft, etc. (any large tech company most people have heard of) be more valuable on a resume than working as a developer/tester somewhere tech isn't the main objective (a shipping company, restaurant chain, insurance company, etc.)? Let's say you have two offers, and you only plan to stay with either company for 5 years before trying to get a better position elsewhere: one at Google with a starting salary of $60,000, and one at some insurance company with a starting salary of $80,000. I guess what I'm trying to say is: with universities, if someone graduates from MIT or Carnegie Mellon, they can pretty much get a job anywhere. Does someone seem more valuable after having worked at a company like Google, Apple, or Microsoft? In other words, would taking the lower-paying job be better in the long run because it's at Google, or would it be better to take the higher-paying job at the insurance company?

  • Long term plan of attack to learn math?

    - by zhenka
    I am a web developer with a desire to expand my skill set into the mathematics relevant to programming. College being my second career, I am stuck doing some of the requirements while working. I was hoping my education would teach me the skills needed to apply math; however, I am quickly finding its easily-testable, breadth-based approach very inefficient for the time invested. For example, in my Calculus 2 class, the only remotely useful, mind-expanding experience I had was volumes and areas under the curve. The rest was just monotonous, glorified algebra, which, while it comes easily to me, could be done by software like Wolfram Alpha within seconds. This is not my idea of learning math. So here I am, a frustrated student looking for a way to improve my understanding of math in a way that focuses on application and understanding, with needless tedium maximally removed. However, I cannot find a good long-term study strategy with this approach in mind. So, for those of like mind: how would you go about learning the necessary math without worrying too much about the stuff a computer can do much better?

  • What is a recent programming language of choice for AI?

    - by Eduard Florinescu
    For a few decades, the programming language of choice for AI was either Prolog or Lisp, plus a few others that are not so well known - most of them designed before the 70's. Domain-specific languages in many other fields have changed a lot, but in the AI domain not as much has surfaced as in, say, web-specific or scripting languages. Are there recent programming languages that were intended to change the game in AI, learning from the shortcomings of the former languages?

  • How to properly diagram lambda expressions or traversals through them in Architecture Explorer?

    - by MainMa
    I'm exploring a piece of code in Architecture Explorer in Visual Studio 2010 to study the relations between methods, and I noticed a strange behavior. Take the following source code. It generates a hello message based on a template and a template engine, the template engine being a method (a sort of strategy pattern, simplified to the maximum for demo purposes).

        public string GenerateHelloMessage(string personName)
        {
            return this.ApplyTemplate(
                this.DefaultTemplateEngine,
                this.GenerateLocalizedHelloTemplate(),
                personName);
        }

        private string GenerateLocalizedHelloTemplate()
        {
            return "Hello {0}!";
        }

        public string ApplyTemplate(
            Func<string, string, string> templateEngine,
            string template,
            string personName)
        {
            return templateEngine(template, personName);
        }

        public string DefaultTemplateEngine(string template, string personName)
        {
            return string.Format(template, personName);
        }

    The graph generated from this code is this one: [graph image: GenerateHelloMessage linked to DefaultTemplateEngine]. Now change the first method from this:

        public string GenerateHelloMessage(string personName)
        {
            return this.ApplyTemplate(
                this.DefaultTemplateEngine,
                this.GenerateLocalizedHelloTemplate(),
                personName);
        }

    to this:

        public string GenerateHelloMessage(string personName)
        {
            return this.ApplyTemplate(
                (a, b) => this.DefaultTemplateEngine(a, b),
                this.GenerateLocalizedHelloTemplate(),
                personName);
        }

    and the graph becomes: [graph image: the link to DefaultTemplateEngine is gone]. While semantically identical, these two versions of the code produce different dependency graphs, and Architecture Explorer shows no trace of the lambda expression (while Visual Studio's code coverage, for example, shows lambdas, and Code Analysis seems able to understand that the link exists). How would it be possible, without changing the source code, to:

    - either force Architecture Explorer to display everything, including lambda expressions,
    - or make it traverse lambda expressions while drawing a dependency through them (so in this case, drawing the dependency from GenerateHelloMessage to DefaultTemplateEngine in the second example)?

  • Lending epub files to users for a limited time

    - by JP Hellemons
    I am looking for components to build a digital library that lends people EPUB files (ebooks) for about a week - a digital version of the old offline public library. Now, I have found several Flash (PDF) file-streaming solutions, but those require an active internet link, and like with the public library, you should be able to take your books to the beach or the pool on a holiday abroad where you have no connection. So streaming is not an option. The other file-restriction method I have found was DRM, but that would require a really expensive license for Adobe Content Server 4, which is not suitable for my little hobby project. It seems that Adobe Content Server and Adobe Digital Editions are the only option at the moment. Or are there open source alternatives?

  • Examples of Liskov Substitution

    - by james lewis
    I'm facilitating a session next week on the Liskov Substitution Principle, and I was wondering if anyone had any examples of violations "from the trenches"? I'm looking for something other than Uncle Bob's rectangle-square problem and the persistent set problem he talks about in A-PPP (although that is a great example). So far I'm using the example of a (very simple) List and an IndexedList as the "correct" use of inheritance, and the addition of a Set to this hierarchy as a violation (as a Set is distinct: it strengthens the precondition of the Add method). I've also taken a great example and its solution from this question. Both those examples are great, but I'm looking for something more subtle and harder to spot. So far I've come up with nothing, so if you've got a great, subtle example, post it up. Also, any metaphors you've come across that helped you understand LSP would be really useful too.
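
    For what it's worth, here is the Set violation mentioned above condensed into a compilable C# sketch; the subtlety is that nothing looks wrong until a client written (correctly) against the base contract receives the subtype:

        using System;
        using System.Collections.Generic;

        class ItemList
        {
            protected readonly List<string> Items = new List<string>();

            // Implicit contract: any item is accepted, duplicates included.
            public virtual void Add(string item)
            {
                Items.Add(item);
            }
        }

        class ItemSet : ItemList
        {
            // LSP violation: the precondition of Add is strengthened
            // ("item must not already be present").
            public override void Add(string item)
            {
                if (Items.Contains(item))
                    throw new ArgumentException("duplicate item");
                base.Add(item);
            }
        }

        class Client
        {
            // Perfectly valid against ItemList's contract...
            static void AddTwice(ItemList list)
            {
                list.Add("x");
                list.Add("x");   // ...but throws when handed an ItemSet
            }
        }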

  • Was it necessary to build this site in ASP.NET?

    - by Andrew M
    From what I'm told, the whole StackOverflow/StackExchange 'stack' is based on Microsoft's ASP.NET. SO and the SE sites are probably the most complex that I visit on a regular basis. There's a lot going on in every page - lots of different boxes, pulling data from different places and changing dynamically and responding to user interaction. And the sites work very smoothly, despite the high traffic. My question is, could this have been achieved using a different platform/framework? Does ASP.NET lend itself to more complex projects where other web frameworks would strain and falter? Or is the choice pretty incidental?

  • Going back to ASP.Net Webforms from ASP.Net MVC. Recommend patterns/architectures?

    - by jlnorsworthy
    To many of you this will sound like a ridiculous question, but I am asking because I have little to no experience with ASP.Net Webforms - I went straight to ASP.Net MVC. I am now working on a project where we are limited to .Net 2.0 and Visual Studio 2005. I liked the clean separation of concerns when working with ASP.Net MVC, and am looking for something to make webforms less unbearable. Are there any recommended patterns or practices for people who prefer asp.net MVC, but are stuck on .net 2.0 and visual studio 2005?
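
    The usual answer here is Model-View-Presenter (Passive View or Supervising Controller): the page implements a view interface, and a plain presenter class holds the logic, so it stays testable without the WebForms page lifecycle. A minimal sketch that works on .NET 2.0, with all names hypothetical:

        // The view contract the page implements.
        public interface ICustomerView
        {
            string CustomerName { set; }
        }

        // Plain class: unit-testable without an HTTP context.
        public class CustomerPresenter
        {
            private readonly ICustomerView view;

            public CustomerPresenter(ICustomerView view)
            {
                this.view = view;
            }

            public void Load()
            {
                // Would normally pull from the model/repository layer.
                view.CustomerName = "ACME";
            }
        }

        // The code-behind shrinks to a thin adapter.
        public partial class CustomerPage : System.Web.UI.Page, ICustomerView
        {
            protected void Page_Load(object sender, System.EventArgs e)
            {
                new CustomerPresenter(this).Load();
            }

            public string CustomerName
            {
                set { /* e.g. lblCustomerName.Text = value; */ }
            }
        }

    Microsoft's patterns & practices group shipped guidance along these lines (the Web Client Software Factory) in the .NET 2.0/3.x timeframe, which may also be worth a look.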

  • DDD and filtering

    - by tikhop
    I am developing an app in a DDD manner, so I have a complex domain model. Suppose I have a Fare object and an Airline object, where each Airline contains several or many Fares. My UI should represent the model (only a small part of the complex model) as a list of Airlines; when the user selects an Airline, I must show the list of its Fares. The user can filter the Fares (by travel time, cost, etc.). What is the appropriate place for filtering Fares and Airlines? I am assuming that I should do it in the ViewModel, like this: my domain model is wrapped with a service layer; the UI works with the ViewModel; and the ViewModel obtains data from the service layer, filters it, and creates DTO objects for the UI. Or am I wrong?
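
    That layering sounds reasonable: filtering that only narrows what the user currently sees is presentation logic and sits naturally in the ViewModel, whereas filtering that encodes business rules belongs in the domain (e.g. as a Specification). A sketch of the ViewModel side, with hypothetical names:

        using System.Collections.Generic;
        using System.Linq;

        // DTO handed to the UI; the domain Fare stays behind the service layer.
        public class FareDto
        {
            public decimal Cost { get; set; }
            public int TravelMinutes { get; set; }
        }

        public class FaresViewModel
        {
            private readonly List<FareDto> allFares;

            public FaresViewModel(IEnumerable<FareDto> faresFromService)
            {
                allFares = faresFromService.ToList();
            }

            // Bound to the filter controls in the view.
            public decimal? MaxCost { get; set; }
            public int? MaxTravelMinutes { get; set; }

            public IEnumerable<FareDto> VisibleFares
            {
                get
                {
                    IEnumerable<FareDto> result = allFares;
                    if (MaxCost.HasValue)
                        result = result.Where(f => f.Cost <= MaxCost.Value);
                    if (MaxTravelMinutes.HasValue)
                        result = result.Where(f => f.TravelMinutes <= MaxTravelMinutes.Value);
                    return result;
                }
            }
        }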

  • If your unit test code "smells", does it really matter?

    - by Buttons840
    Usually I just throw my unit tests together using copy and paste and all kinds of other bad practices. The unit tests usually end up looking quite ugly - they're full of "code smell" - but does this really matter? I always tell myself that as long as the "real" code is "good", that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?
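
    One reason smelly tests do bite is maintenance: duplicated setup makes every behavior change ripple through many tests. A common low-effort fix is extracting the shared fixture, sketched here with NUnit-style attributes (NUnit assumed; Cart is a stand-in class):

        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        public class Cart
        {
            readonly Dictionary<string, int> lines = new Dictionary<string, int>();

            public void Add(string sku, int qty)
            {
                int current;
                lines.TryGetValue(sku, out current);
                lines[sku] = current + qty;
            }

            public int Total { get { return lines.Values.Sum(); } }
        }

        [TestFixture]
        public class CartTests
        {
            Cart cart;

            [SetUp]
            public void CreateCart()
            {
                // One place to build the fixture instead of copy-pasting it.
                cart = new Cart();
                cart.Add("apple", 2);
            }

            [Test]
            public void Total_counts_quantities()
            {
                Assert.AreEqual(2, cart.Total);
            }

            [Test]
            public void Adding_the_same_item_accumulates()
            {
                cart.Add("apple", 1);
                Assert.AreEqual(3, cart.Total);
            }
        }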

  • What is the best retort to "premature optimization is the root of all evil"

    - by waffles
    Often I hear the sentiment: "Why worry about performance? Write slow code, get your product to market... don't worry about performance. You can sort that out later." The culmination of this sentiment is: "... premature optimization is the root of all evil ... #winning" I was wondering: does anybody have a good retort to this one-liner? Ideally an equally strong one-liner that encompasses the reverse of the sentiment.
