Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 207/563 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • Lack of ideas for startup equals slack career?

    - by Fanatic23
    After 12-15 years of working in the same industry, if a person has no new ideas for a startup, is it safe to say that his/her career has not reached its potential? We are not talking about implementation strategies or the insights needed to make the startup real -- just great ideas that can change things for the better. Not your source-code optimization: I mean a radical way of looking at things. If you lot disagree with this line of thinking, then please share some examples of how, despite such a long span, a person can end up without new ideas.

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background: I have a test fixture with a number of communication/data-acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors in the bench and the need to run the test procedure in near real time, I'm having a hard time structuring the program to be friendly to modify later on. For example: a National Instruments USB data-acquisition device controls an analog output (load) and monitors an analog input (current); a digital scale with a serial data interface measures position; an air pressure gauge reports over a different serial interface; and the product itself is accessed through a proprietary DLL that handles its own serial communication.

    The hard part: The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product takes to go from position 0 to position 10,000, to a tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose Producer/Consumer model: the Producer thread loops through reading the sensors and sets the outputs, the Consumer thread executes functions containing timed loops that poll the Producer for current data and execute movement commands as required, and the UI thread polls both threads to update some gauges indicating current test progress.

    Unsure where to start: Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors?
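
    A minimal sketch of that Producer/Consumer shape (in Java for illustration, since the structure maps one-to-one onto C# threads; the device wrappers readScale, readDaqCurrent, rampLoadUp and rampLoadDown are hypothetical stand-ins for the real I/O): the producer publishes an immutable snapshot of the latest readings, and the consumer's timed loop reads snapshots and fires each ramp exactly once as the position thresholds are crossed.

        import java.util.concurrent.atomic.AtomicReference;

        final class BenchSketch {
            // Immutable snapshot of the latest sensor readings.
            static final class Snapshot {
                final double position;   // digital scale
                final double current;    // DAQ analog input
                final long nanos;
                Snapshot(double p, double c, long t) { position = p; current = c; nanos = t; }
            }

            private final AtomicReference<Snapshot> latest =
                    new AtomicReference<>(new Snapshot(0, 0, System.nanoTime()));

            // Producer: poll the hardware as fast as the devices allow.
            void producerLoop() {
                while (!Thread.currentThread().isInterrupted()) {
                    latest.set(new Snapshot(readScale(), readDaqCurrent(), System.nanoTime()));
                }
            }

            // Consumer: the timed test sequence, decoupled from device polling.
            void measureTravel() throws InterruptedException {
                long start = System.nanoTime();
                boolean rampedUp = false, rampedDown = false;
                double pos;
                while ((pos = latest.get().position) < 10_000) {
                    if (!rampedUp && pos >= 6_000)   { rampLoadUp();   rampedUp = true; }
                    if (!rampedDown && pos >= 8_000) { rampLoadDown(); rampedDown = true; }
                    Thread.sleep(5);   // poll period well under the 0.1 s timing resolution
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("travel time: %.1f s%n", seconds);
            }

            // Hypothetical stand-ins for the real device I/O.
            private double readScale()      { return 0; }
            private double readDaqCurrent() { return 0; }
            private void rampLoadUp()       {}
            private void rampLoadDown()     {}
        }

    The snapshot object is what keeps the consumer's timing loop free of device latency: the slow polling lives entirely in the producer.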

    Read the article

  • What's the best way to manage list item sort order with Drag & Drop UI?

    - by Reddy S R
    I have a list of Students that I display to the user on a web page in tabular format. The items are stored in the DB along with SortOrder information. On the web page, the user can rearrange the list by dragging and dropping items into the desired order, similar to this post. Below is a screenshot of my test page. In the above example, each row has sort-order info attached to it. When I drop John Doe (Student Id 10) above the Student Id 1 row, the list order should become: 2, 10, 1, 8, 11. What's the optimal (least resource-hungry) way to store and update the sort-order information? My only idea so far is that for every change in the list's order, every row's SortOrder value gets updated, which in my opinion is very resource-hungry. Just FYI: I might have at most 25 rows in my table.
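
    One low-write approach worth considering (a hedged sketch; all names invented): store gapped sort keys and, on a drop, rewrite only the moved row's key as the midpoint of its new neighbours, renumbering everything only on the rare occasion the gap runs out.

        // Midpoint keying: rows carry gapped keys (1000, 2000, 3000, ...) so a
        // drag-and-drop normally rewrites a single row.
        final class SortKeys {
            static final int GAP = 1000;

            /** New key for a row dropped between prev and next (null at the list edges). */
            static int keyForDrop(Integer prev, Integer next) {
                if (prev == null) return next - GAP;   // dropped at the top
                if (next == null) return prev + GAP;   // dropped at the bottom
                return prev + (next - prev) / 2;       // midpoint of the neighbours
            }

            /** True when no midpoint is left; the caller renumbers all rows once. */
            static boolean gapExhausted(Integer prev, Integer next) {
                return prev != null && next != null && next - prev < 2;
            }
        }

    That said, with at most 25 rows, even the naive "update every row" is a single small UPDATE per row inside one transaction, so the simple scheme may be perfectly adequate here.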

    Read the article

  • Is the output of Eclipse's incremental Java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries is that Eclipse ships with its own Java compiler (ecj) for doing incremental builds. By default, Eclipse writes the incrementally built class files to the projRoot/bin folder. I've also noticed that many projects come with Ant files that invoke the system's Java compiler for the production builds. Coming from a Windows/Visual Studio world, where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler -- I'm used to the project being the make file. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e., its code completion, incremental building, etc.)? Is it typical that for the final "release" build of a project, Ant, Maven, or another tool does the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is it normal, accepted practice?

    Read the article

  • I don't understand why algorithms are so special

    - by Jessica
    I'm a computer science student trying to soak up as much information on the topic as I can during my free time. I keep returning to algorithms time and again in various formats (online courses, books, web tutorials), but the concept fails to sustain my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome, and the scientific applications of all of the above. Heck, I even love Newton's laws and conic sections. But when it comes to algorithms, I'm just not astounded. They offer no new insight into human cognition at all. I expected algorithms to shatter preconceptions and alter my thinking, but time and time again they fail to. What am I doing wrong in my approach? Can someone tell me why algorithms are so awesome?

    Read the article

  • Quantifying the value of refactoring in commercial terms

    - by Myles McDonnell
    Here is the classic scenario: the dev team builds a prototype; business management likes it and puts it into production; the dev team now has to keep delivering new features while at the same time paying the technical debt accrued when the code base was a prototype. My question is this (forgive me, it's rather open-ended): how can the value of refactoring work be quantified in commercial terms? As developers we can clearly understand and communicate the value in technical terms, such as the removal of code duplication or the simplification of an object model, but this means little to an executive focused on the commercial elements. What will mean something to that executive is the dev team being able to deliver requirements at a faster velocity. But just making this statement, without metrics that clearly quantify the return on investment (increased velocity in return for the resource allocated to refactoring), carries little weight. I'm interested to hear from anyone who has had experience, positive or negative, in relation to the above. EDIT: Thanks for the responses so far, all of which I think are good. What I want to develop is a metric that proves (or disproves!) all of these statements -- a report that ties velocity to refactoring and shows a positive effect.
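
    One illustrative way to put numbers on it (all figures invented for the example): measure velocity for a few sprints before and after the refactoring, price the refactoring itself in forgone delivery, and report the payback period.

        velocity before refactoring:   30 points/sprint
        velocity after refactoring:    36 points/sprint
        cost of refactoring:            2 sprints = 60 points of forgone delivery

        payback period = 60 / (36 - 30) = 10 sprints

    Every sprint beyond the tenth is then pure gain, which an executive can weigh directly against the product's expected lifetime. The weakness to acknowledge up front is attribution: velocity moves for other reasons too, so a baseline of several sprints on either side makes the report far more credible.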

    Read the article

  • How does dependency injection increase coupling?

    - by B?????
    Reading the wiki page on dependency injection, the disadvantages section says this: "Dependency injection increases coupling by requiring the user of a subsystem to provide for the needs of that subsystem" -- with a link to an article against DI. What DI does is make a class use an interface instead of a concrete implementation. That should mean decreased coupling, no? So what am I missing? How does dependency injection increase coupling between classes?
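
    A minimal sketch of both readings of the word "coupling" (all names hypothetical):

        // The class itself now couples only to an abstraction...
        interface MessageSender { void send(String to, String body); }

        final class SmtpSender implements MessageSender {
            public void send(String to, String body) { /* real SMTP call here */ }
        }

        final class OrderService {
            private final MessageSender sender;
            OrderService(MessageSender sender) { this.sender = sender; }   // injected
            void confirm(String email) { sender.send(email, "Order confirmed"); }
        }

        final class Checkout {
            // ...but every *user* of OrderService must now know about and supply a
            // MessageSender -- the coupling the wiki passage is pointing at. It has
            // moved from inside the class to its construction sites.
            OrderService service = new OrderService(new SmtpSender());
        }

    In practice a composition root or container absorbs that construction-site burden, which is why most people still experience DI as a net decrease in coupling.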

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts can call selected methods and access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script would no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (while not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have real experience of this problem. NB: our internal document's data structure is quite complex and could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages/APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to writing a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which seems to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
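
    For what it's worth, a bare-bones shape of the data-facade idea from the footnote (a sketch only; InternalDocument and InternalNode are stand-ins for whatever the real internal model is): scripts compile against the stable interfaces, and every internal rename or restructure is absorbed inside the wrapper's mapping code.

        import java.util.List;
        import java.util.stream.Collectors;

        // Stable, script-facing surface: the only types scripts may reference.
        interface ScriptDocument { ScriptNode root(); }
        interface ScriptNode {
            String name();
            List<ScriptNode> children();
        }

        // Wrappers over the internal model, which stays free to change shape.
        final class DocumentFacade implements ScriptDocument {
            private final InternalDocument doc;               // never exposed to scripts
            DocumentFacade(InternalDocument doc) { this.doc = doc; }
            public ScriptNode root() { return new NodeFacade(doc.rootNode()); }
        }

        final class NodeFacade implements ScriptNode {
            private final InternalNode node;
            NodeFacade(InternalNode node) { this.node = node; }
            public String name() { return node.displayName(); }   // version mapping lives here
            public List<ScriptNode> children() {
                return node.childList().stream()
                           .map(n -> (ScriptNode) new NodeFacade(n))
                           .collect(Collectors.toList());
            }
        }

        // Hypothetical stubs for the real internal model.
        final class InternalDocument { InternalNode rootNode() { return new InternalNode(); } }
        final class InternalNode {
            String displayName() { return "node"; }
            List<InternalNode> childList() { return List.of(); }
        }

    Wrapping lazily, as above, also means collections are only materialised when a script actually walks into them.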

    Read the article

  • Is "Interface inheritance" always safe?

    - by Software Engeneering Learner
    I'm reading "Effective Java" by Josh Bloch and in there is Item 16 where he tells how to use inheritance in a correct way and by inheritance he means only class inheritance, not implementing interfaces or extend interfaces by other interfaces. I didn't find any mention of interface inheritance in the entire book. Does this mean that interface inheritance is always safe? Or there are guidlines for interface inheritance?

    Read the article

  • Concurrency checking with Last Change Time

    - by Lijo
    I have the following three tables: Email (emailNumber, address), Recipients (reportNumber, emailNumber, lastChangeTime), Report (reportNumber, reportName). I have a C# application that uses inline queries for data selection. A select query selects all reports and their recipients; the recipients are selected as a comma-separated string. During updating, I need to check concurrency. Currently I select MAX(lastChangeTime) for each reportNumber as "maxTime", and before an update the code checks that lastChangeTime <= maxTime -- it works fine. One of my co-developers asked why not use GETDATE() as "maxTime" rather than a MAX operation. That also works. What we are checking in either case is that no records were updated after the record-selection time. Are there any pitfalls in using GETDATE() for this purpose?
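
    The usual pitfalls: GETDATE() is only trustworthy if every timestamp comes from the database server's clock (an application-server clock can drift), and neither variant sees a competing transaction that stamped its lastChangeTime before your SELECT but committed after it. A rowversion column sidesteps clocks entirely; a hedged JDBC-style sketch (the rowVer column and variable names are invented for illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.ConcurrentModificationException;

        final class RecipientDao {
            // rowVer is assumed to be a SQL Server rowversion column: the server
            // bumps it on every write, so no clock is involved, and the UPDATE's
            // WHERE clause performs the concurrency check atomically.
            void updateRecipient(Connection conn, int reportNumber, int emailNumber,
                                 byte[] rowVerReadAtSelect) throws SQLException {
                String sql = "UPDATE Recipients SET emailNumber = ? " +
                             "WHERE reportNumber = ? AND rowVer = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, emailNumber);
                    ps.setInt(2, reportNumber);
                    ps.setBytes(3, rowVerReadAtSelect);   // value captured by the SELECT
                    if (ps.executeUpdate() == 0) {
                        throw new ConcurrentModificationException("row changed since selection");
                    }
                }
            }
        }

    Checking the version inside the UPDATE itself also closes the window between "check" and "write" that a separate SELECT-then-UPDATE leaves open.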

    Read the article

  • Efficient Trie implementation for unicode strings

    - by U Mad
    I have been looking for an efficient string trie implementation. Mostly I have found code like the reference implementation in Java (per Wikipedia). I dislike these implementations for mostly two reasons. First, they support only 256 ASCII characters, and I need to cover things like Cyrillic. Second, they are extremely memory-inefficient: each node contains an array of 256 references, which is 4096 bytes on a 64-bit machine in Java, and each of those nodes can have up to 256 subnodes with 4096 bytes of references each. So a full trie for every two-character ASCII string would require a bit over 1 MB; three-character strings? 256 MB just for the arrays in nodes. And so on. Of course I don't intend to have all 16 million three-character strings in my trie, so a lot of space is simply wasted: most of these arrays are just null references, as their capacity far exceeds the actual number of inserted keys. And if I add Unicode, the arrays get even larger (char has 64k values instead of 256 in Java). Is there any hope of making an efficient trie for strings? I have considered a couple of improvements over these implementations: instead of an array of references, I could use an array of a primitive integer type that indexes into an array of references to nodes, sized close to the number of actual nodes; or I could break strings into 4-bit parts, which would allow node arrays of size 16 at the cost of a deeper tree.
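
    For comparison, a minimal map-per-node trie (a sketch, not a tuned implementation): memory scales with the edges actually present rather than with the alphabet size, and the key type is char, so Cyrillic or any other BMP character costs nothing extra.

        import java.util.HashMap;
        import java.util.Map;

        final class Trie {
            private static final class Node {
                final Map<Character, Node> children = new HashMap<>(2); // small initial capacity
                boolean terminal;                                       // true if a key ends here
            }

            private final Node root = new Node();

            void insert(String key) {
                Node n = root;
                for (int i = 0; i < key.length(); i++) {
                    n = n.children.computeIfAbsent(key.charAt(i), c -> new Node());
                }
                n.terminal = true;
            }

            boolean contains(String key) {
                Node n = root;
                for (int i = 0; i < key.length(); i++) {
                    n = n.children.get(key.charAt(i));
                    if (n == null) return false;
                }
                return n.terminal;
            }
        }

    HashMap entries carry their own per-entry overhead, so when memory pressure is real, the usual next steps are ternary search trees or a DAWG, which keep the sparse-alphabet benefit with far less bookkeeping per edge.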

    Read the article

  • How to explain to a layperson why a developer should not be interrupted while neck-deep in coding?

    - by András Szepesházi
    If you just consider the second part of my question -- why a developer should not be interrupted while neck-deep in coding -- that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone", and why it takes an average of 15 minutes to regain productivity when working on complex, software-development-related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (ahem, I mean software development). How do you tell the wife, or the funny guy from accounting at the workplace, or the long-time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that interruptions have a much deeper impact on your work than the obvious 30 seconds they take of your time? Obviously you can't explain it with sentences like "I have to juggle a lot of variable names in my short-term memory", unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all this to non-developers in a way that makes them clearly understand -- without being offensive, elitist or too technical.

    Read the article

  • Future of a ServiceStack based Solution in the Context of Licensing

    - by Harindaka
    I just want someone to clarify the following questions, as Demis Bellot announced a couple of weeks ago that ServiceStack will go commercial; see the link below.
    https://plus.google.com/app/basic/stream/z12tfvoackvnx1xzd04cfrirpvybu1nje54
    (Please note that when I say ServiceStack or SS, I refer to all associated SS libraries, such as ServiceStack.Text.)
    1. If I have a solution already developed using ServiceStack today, will I have to purchase a license once SS goes commercial, even if I don't upgrade the SS binaries to the commercial release version?
    2. Will previous versions of SS (prior to commercial licensing) always be open source and use the same license as before?
    3. If I fork SS today (prior to commercial licensing) on GitHub, would it be illegal to maintain that fork after SS goes commercial?
    4. If the answer to question 2 is yes, would I still be able to fork a previous version after SS goes commercial without worrying about the commercial license (all the while maintaining and releasing the source to the public)?

    Read the article

  • History of open source software

    - by Victor Sorokin
    I've always been interested, purely for my own amusement, in the history of the open-source software used today: who were the people who started it, and what were their reasons; what were the design decisions at the start; and how the software evolved over time. Specifically, I'm interested in the following software: GCC, X, the Linux kernel, and Java. Of course, there is plenty of information on the Internet to google for, but I thought it would be nice to have a list of interesting resources on this site. I hope some visitors of this site share this interest and can contribute a link or two they found particularly amusing or interesting. To make this entry more question-like, here's the straight question: what are the most interesting/amusing links about the history of open-source software?

    Read the article

  • Do I need a degree in Computer Science to get a junior programming job? [closed]

    - by t84
    As above, really: do I need to go to university to get a job as a junior C# coder? I'm 26 and have been working in games (production) for 6 years, and I am thinking of a change. I've had exposure to VB6, VBA, HTML, CSS, PHP and JavaScript over the past few years, and did a web-design NCFE at college, but other than that, nothing else! I'm teaching myself C# at the moment with books, and I was wondering how much I need to learn and how I can improve my chances of getting a programming job. Am I a late starter to coding? (I know many people who started at a very young age!) Thanks for the help :)

    Read the article

  • My Client wants to convert me from a Contractor to an Employee. I'd like part of the Headhunter's fee. Is this fair?

    - by Bob Kaufman
    I am happily working on a contract for a good, solid company. This client is very happy with my work and has asked me to consider converting to full-time employment. My problem is the headhunter. This firm has not been entirely upfront with me throughout this contract. A mistake was made by the firm that benched me for several days, costing me those days' pay. Inadequate healthcare coverage left me with a bill of several thousand dollars after my wife's brief hospital stay. My feeling is that I did the work that earned me the invitation to work full-time for this company. Asking for 1/3 of the commission I figure they're going to receive would nicely counterbalance the inequities that I perceive were dealt to me. Is this an (un)reasonable or (in)appropriate request?

    Read the article

  • Do I really need to learn Python? [closed]

    - by Pouya
    These days, I see the name "Python" a lot. Mostly when I'm doing some programming on Linux/Mac, I see traces of Python. I have a fair knowledge of C++ and I'm quite good at Java. I also know Delphi, which comes in handy sometimes. I've been fine with these languages; however, I was wondering whether learning Python would make me better. What does it offer that makes it worth learning? What are its key/unique advantages and features?

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control-flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four candidate solutions, each with difficulties:

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of the original tree's nodes as necessary. The conceptual complexity of this can become daunting.
    2. Each node of the original tree has a list of the dependent-tree nodes that specifically depend upon it; when the node changes, it sets a dirty flag on those dependent nodes and on their ancestors all the way up to the root. After each change we run an algorithm much like the one for constructing the dependent graph from scratch, except that it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node actually differs from the dirty one. This can also get tricky.
    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes, we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.
    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way, because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle seem unnecessary, so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
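
    As one concrete anchor, a stripped-down sketch of the second option above (dirty flags with upward propagation; all names invented). The invariant that a dirty node implies dirty ancestors is what lets both the marking and the later rebuild pass stop early:

        import java.util.ArrayList;
        import java.util.List;

        class DependentNode {
            DependentNode parent;    // null at the root
            boolean dirty;

            // Mark this node and its ancestors; stop at the first already-dirty
            // ancestor, whose own ancestors are dirty too by the invariant.
            void markDirty() {
                for (DependentNode n = this; n != null && !n.dirty; n = n.parent) {
                    n.dirty = true;
                }
            }
        }

        class SourceNode {
            final List<DependentNode> dependents = new ArrayList<>();

            // Called by whatever applies edits to the original tree.
            void onChanged() {
                for (DependentNode d : dependents) d.markDirty();
            }
        }

    The rebuild pass then walks the dependent tree from the root, skipping clean subtrees outright and reconstructing only dirty nodes -- which also yields exactly the change details that a from-scratch rebuild loses.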

    Read the article

  • Publish/Subscribe/Request for exchange of big, complex, and confidential data?

    - by Morten
    I am working on a project where a website needs to exchange complex and confidential (and thus encrypted) data with other systems. The data includes personal information, technical drawings, public documents, etc. We would prefer to avoid the Request-Reply pattern towards the dependent systems (and there are a LOT of them), as that would create an awful lot of empty traffic. On the other hand, I am not sure a pure Publish/Subscribe pattern would be appropriate, mainly because of the complex and bulky nature of the data to be exchanged. For that reason we have discussed the possibility of a "publish/subscribe/request" solution: the publish/subscribe part announces to the dependent systems that something is ready for pickup, and the actual content is then fetched by old-school Request-Reply. How does this sound to you? Regards, Morten
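
    For what it's worth, this notify-then-fetch arrangement is essentially the Claim Check pattern from Enterprise Integration Patterns, so it is well-trodden ground. A skeletal sketch (all names hypothetical):

        // The published event is tiny; the bulky encrypted payload stays put until
        // a subscriber presents the claim ticket over authenticated request/reply.
        final class DocumentReadyEvent {
            final String claimTicket;   // opaque id for later pickup
            final String docType;       // lets subscribers filter without fetching
            DocumentReadyEvent(String claimTicket, String docType) {
                this.claimTicket = claimTicket;
                this.docType = docType;
            }
        }

        interface PayloadStore { byte[] fetch(String claimTicket); }

        final class DrawingSubscriber {
            private final PayloadStore store;
            DrawingSubscriber(PayloadStore store) { this.store = store; }

            void onEvent(DocumentReadyEvent e) {
                if (!"technical-drawing".equals(e.docType)) return;  // cheap filter, no traffic
                byte[] encrypted = store.fetch(e.claimTicket);       // the request/reply leg
                // ... decrypt and process
            }
        }

    One design note: putting just enough metadata in the event for subscribers to decide whether to fetch is what keeps the empty traffic away; with confidential data, ticket expiry and access control on the pickup endpoint are the parts that need the most care.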

    Read the article

  • Why are JavaScript for/in loops so verbose?

    - by Matthew Scharley
    I'm trying to understand the reasoning behind why the language designers made for (.. in ..) loops so verbose. For example: for (var x in Drupal.settings.module.stuff) { alert("Index: " + x + "\nValue: " + Drupal.settings.module.stuff[x]); } This makes looping over anything semi-complex, like the above, a real pain: you either have to alias the value locally inside the loop yourself, or deal with long access chains. It is especially painful with two or three nested loops. I'm assuming there is a reason why things were done this way, but I'm struggling with the reasoning.

    Read the article

  • Purpose of "new" keyword

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?

    Read the article

  • Educational, well-written FOSS projects to read, study or discuss

    - by Godot
    Before you say it: yes, this "question" has been asked other times. However, I could not find many such questions, and not that easily, and those I found produced similar results. What I'm trying to say is that there are no comprehensive lists of well-written open-source projects, so I decided to set some requirements for the entries (one or possibly more):

    - Idiomatic use of the language in which they are written.
    - The project should be lightweight. Not as in "a few KBs" -- as in "clean", possibly following the UNIX philosophy, making efficient use of resources and performing its duty and nothing more. No code bloat, most importantly. Projects like Firefox and GNOME wouldn't qualify, for example.
    - Minimal reliance on external, non-standard libraries, with exceptions for some common FOSS libraries (curses, Xlib, OpenGL and possibly "usual suspects" like gtk+, WebKit and Boost). Reliance on well-written libraries is welcome.
    - No reliance on proprietary software, for obvious reasons (programs that rely on XNA, DirectX, Cocoa and similar, for example).
    - Well-documented code is welcome.
    - Include links to web interfaces to their repositories if possible.

    Here are some sample projects that often pop up in these threads:

    Operating systems:
    - Plan 9 from Bell Labs: more or less the official "sequel" to UNIX. Written in C by the same people who invented C!
    - NetBSD: the most portable BSD implementation, written in C and also a good example of portable, organized code.

    Network and databases:
    - SQLite: extremely lightweight and extremely efficient, one of the best pieces of C software I've seen. Count the lines yourself!
    - Lighttpd: a small but pretty reliable web server written in C.

    Programming languages and VMs:
    - Lua: extremely lightweight multi-paradigm programming language. Written in C.
    - Tiny C Compiler: a really tiny C compiler. Not really comparable to GCC or Clang, but it does its job.
    - PyPy: a Python implementation written in Python.
    - Pharo: OK, I admit it, I'm not really a Smalltalk expert, but Pharo is a fork of Squeak and looks rather interesting.
    - Stackless Python: an implementation of Python that doesn't rely on the C call stack, written in C (with some parts in Python).

    Games and 3D:
    - Angband: one of the most accessible roguelike codebases around, written in C.
    - Ogre3D: cross-platform 3D engine. It gets bloated if you don't skip the platform-specific implementation code; otherwise it is a pretty solid example of good C++ OO.
    - Simon Tatham's Portable Puzzle Collection: the title says it all.

    Other:
    - dwm: lightweight window manager. Written in C.

    Emulation and reverse engineering:
    - Bochs: x86 emulator, written in C++ and tiny enough.
    - MAME: if you want to see C at one of its lowest levels, MAME is for you. It may not be as clean as the other projects, but it can teach you A LOT.

    Before you ask: I didn't mention Linux because it has become quite bloated in the last few years -- Linus has confirmed as much. Nonetheless, it would be a great educational read all the same, if for other reasons. Same for GCC. Feel free to edit or wikify my post. I hope you won't lock my question; I'm only trying to organize a little community effort for the good of all those who want to enhance their coding skills.

    Read the article

  • How to deal with social login

    - by Matteo Pagliazzi
    In my new web app I'm going to allow social login through Twitter (maybe), Facebook and Google, and I'm searching for the best way to do it. Currently I'm using Rails with Devise + OmniAuth, and this is the problem: should I ask the user to choose a password so that he can log in without a social network? Or should the user be able to set a password if he wants (for example, when editing his account)? The second way seems the best, but since Twitter doesn't provide the user's email and Google doesn't provide a username, I'll probably have to ask the user for a username/email when he logs in, and in that case I may as well ask for the password too... What do you think?

    Read the article

  • Project Idea - Android

    - by Darren Young
    Hi, I am trying to come up with some project ideas for my final year at university, and I think I have one that would be a (massive) challenge and something I could potentially make money from. I just want to check something: is it possible, from a photograph, to detect somebody's face and the individual parts of that face -- eyes, ears, nose, etc.? This will probably be on Android. Thanks.
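
    On the feasibility question: Android has shipped a basic still-image face detector since API level 1 (android.media.FaceDetector), though it only reports the point between the eyes and the eye distance -- locating individual parts like ears and nose would need a fuller computer-vision library such as OpenCV. A hedged sketch (the file path is invented):

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.graphics.PointF;
        import android.media.FaceDetector;

        final class FaceCheck {
            static void detect(String path) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inPreferredConfig = Bitmap.Config.RGB_565;  // FaceDetector needs 565 bitmaps
                Bitmap photo = BitmapFactory.decodeFile(path, opts);

                int maxFaces = 4;
                FaceDetector detector =                           // bitmap width must be even
                        new FaceDetector(photo.getWidth(), photo.getHeight(), maxFaces);
                FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
                int found = detector.findFaces(photo, faces);

                for (int i = 0; i < found; i++) {
                    PointF mid = new PointF();
                    faces[i].getMidPoint(mid);                    // midpoint between the eyes
                    float eyes = faces[i].eyesDistance();
                    float conf = faces[i].confidence();
                    // a face at (mid.x, mid.y), eyes 'eyes' px apart, confidence 0..1
                }
            }
        }

    That gives a yes/no-plus-location answer cheaply; the "individual parts" half of the idea is where the real final-year challenge would be.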

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here except for context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

    Project has_many Nodes, has_many GlobalConditions
    Node has_one Parent, has_many Nodes, has_many WeightingFactors through NodeFactors, has_many Tags through NodeTags
    GlobalCondition has_many Nodes (referenced by id, rather than replicating the tree)
    WeightingFactor has_many Nodes through NodeFactors
    Tag has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but might improve the user's perception of it. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later; but given the ubiquity of complex trees in day-to-day programming life, I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.

    Read the article
