Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • What are basic programming directions? [closed]

    - by Goward Gerald
    What are the basic programming directions (i.e., domains)? Can you please list them and give a brief review of each? It would be nice to have, for each direction (web development / enterprise / standalone / mobile / etc.; correct me if I skipped something):
    1) The most popular languages in that direction (PHP for web, Objective-C for iOS mobile development, etc.)
    2) Its demand on the market (from 0 to 5, subjective)
    3) How much the tasks differ (do you always create the same kind of program, like clones of each other, or do projects change so that you often get to create something interesting, new and fresh?)
    4) Freelance demand (from 0 to 5)
    5) Fun factor (from 0 to 5, totally subjective, but please include it anyway)
    Thanks!

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background: I have a test fixture with a number of communication/data-acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench, and the need to run the test procedure in near real time, I'm having a hard time structuring the program so that it will be friendly to modify later on. For example: a National Instruments USB data-acquisition device is used to control an analog output (load) and monitor an analog input (current); a digital scale with a serial data interface measures position; an air-pressure gauge uses a different serial data interface; and the product itself is interfaced through a proprietary DLL that handles its own serial communication.
    The hard part: The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product takes to go from position 0 to position 10,000, to a tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy in NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose producer-consumer model: the producer thread loops through reading the sensors and setting the outputs, while the consumer thread executes functions containing timed loops that poll the producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress.
    Unsure where to start: Is there a more appropriate pattern for this type of application? Are there any good resources on writing control loops in software (non-LabVIEW) that interfaces with external sensors and whatnot?
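
    For concreteness, here is a minimal sketch of the producer-consumer split described above, written in Java for illustration (the same shape carries over to C# threads). SensorHub, pollScale, pollDaqInput, and setLoad are hypothetical placeholders for the real device calls; the positions and thresholds are the ones from the question:

        // Producer: polls the hardware and publishes the latest readings.
        class SensorHub implements Runnable {
            private volatile double position;   // latest scale reading
            private volatile double current;    // latest DAQ analog-in reading

            public double position() { return position; }
            public double current()  { return current; }

            @Override public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    position = pollScale();     // placeholder: serial scale read
                    current  = pollDaqInput();  // placeholder: NI DAQ read
                }
            }
            private double pollScale()    { /* real serial I/O here */ return 0; }
            private double pollDaqInput() { /* real DAQ call here */ return 0; }
        }

        // Consumer: one timed test step, polling the hub's latest values.
        class TravelTest {
            void run(SensorHub hub) throws InterruptedException {
                long start = System.nanoTime();
                boolean ramping = false;
                while (hub.position() < 10_000) {
                    double p = hub.position();
                    if (!ramping && p >= 6_000 && p < 8_000) { setLoad(5.0); ramping = true; }
                    if (ramping && p >= 8_000)               { setLoad(0.0); ramping = false; }
                    Thread.sleep(10);   // 10 ms poll, well inside the 0.1 s budget
                }
                System.out.printf("travel time: %.1f s%n", (System.nanoTime() - start) / 1e9);
            }
            private void setLoad(double volts) { /* placeholder: NI DAQ analog-out */ }
        }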

    Read the article

  • Pirate Problem In Interview Question

    - by Hafiz
    Someone asked me this question in an interview, so I want to know what technical, algorithmic, or strategic solution we could provide. Suppose I am the leader of a band of pirates who have looted 100 kg of gold. Every pirate has one bullet in his gun, and every pirate wants to take the others' shares. There are 5 of us in total, including me. What strategy should I use to kill the others while staying safe myself? Or, failing that, is there a way to decrease the probability of being shot?

    Read the article

  • Using an actor model versus a producer-consumer model?

    - by hewhocutsdown
    I'm doing some early-stage research towards architecting a new software application. Concurrency and multithreading will likely play a significant part, so I've been reading up on the various topics. The producer-consumer model, at least as it is expressed in Java, has some surface similarities to, but appears to be deeply dissimilar from, the actor model in use in languages such as Erlang and Scala. I'm having trouble finding any good comparative data, or specific reasons to use or avoid one or the other. Is the actor model even possible with Java or C#, or do you have to use one of the languages built for the purpose? Is there a third way?
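
    On the "is it even possible with Java" part: the core of an actor (a private mailbox drained by one thread, messages processed one at a time, no shared mutable state) can be sketched with nothing but java.util.concurrent. This is a toy illustration, not any framework's API; libraries such as Akka build a production version of the same shape:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.function.Consumer;

        class Actor<M> {
            private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();

            Actor(Consumer<M> behavior) {
                Thread loop = new Thread(() -> {
                    try {
                        while (true) behavior.accept(mailbox.take()); // one message at a time
                    } catch (InterruptedException e) { /* shut down */ }
                });
                loop.setDaemon(true);
                loop.start();
            }

            void send(M message) { mailbox.add(message); } // async: never blocks the sender
        }

        class Demo {
            public static void main(String[] args) throws InterruptedException {
                Actor<String> logger = new Actor<>(msg -> System.out.println("got: " + msg));
                logger.send("hello");
                logger.send("world");
                Thread.sleep(100); // let the daemon thread drain the mailbox
            }
        }

    The contrast with producer-consumer is visible in the types: here each actor owns its queue privately, whereas producer-consumer shares one queue between otherwise unrelated stages.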

    Read the article

  • Paying a developer in stock/fixed rate? [closed]

    - by user51648
    I have an idea for a cross-platform application. It will require knowledge of several different languages, web development, and system administration/IT. I don't code myself, but I want to pay professionals to do it, and I'm wondering how I should go about paying them. Yes, this will be a large project, but I want it done ASAP. Is it OK if I don't pay them by the hour? I really want it to be a set price. Also, is it reasonable to pay them in stock of the company, say, 20%? P.S. How do I estimate how big a project will be, so that I can give the devs themselves an idea?

    Read the article

  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API, and for a particular case I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start. The first approach simply declares getCurrentTime as follows:

        getCurrentTime():number

    where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event looks like the following:

        myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){
            console.log("current time = " + evt.getDispatcher().getCurrentTime());
        });

    The second approach declares getCurrentTime differently:

        getCurrentTime():CustomNumber

    where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event; just listen to the returned object for value changes! Listening looks like the following:

        myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){
            console.log("current time = " + evt.getDispatcher().valueOf());
        });

    Note that CustomNumber has a valueOf method which returns a native number, letting the returned CustomNumber object be used as a number, so:

        var result = myVideoPlayer.getCurrentTime() + 5;

    will work! So in the first approach, we listen to an object for a change in its property's value; in the second, we listen directly to the property for a change in its value. There are multiple pros and cons to each approach; I just want to know which one developers would prefer to use!

    Read the article

  • Why don't we just fix JavaScript?

    - by Jan Meyer
    JavaScript suffers from a few fatal flaws, well pointed out by Douglas Crockford. We talk a lot about them. But the point here is: why don't we fix them? CoffeeScript, of course, does that and a lot more. But the question here is different: if we provide a web service that can convert one version of JavaScript to the next, and so on, we can keep the language up to date. Such conversion allows old code to run, albeit with an ever-increasing startup delay, as newer browsers convert old code to the new syntax. To avoid that delay, a site only needs to take the output of the code transform and paste it in. The effort has immediate benefits for those businesses interested in the results; the rest can sleep tight, since their code will continue to run. If we provide backward code transformation as well, then older browsers can run any new code!
    Migration scripts should be created by those who make changes to a language. Today they don't, which is in itself a fundamental omission: it should be an obvious part of their job to provide them, as their job isn't really done without them. The onus of making it all work should be on them. With this system, any site would be able to run in any browser, but new code would run best on the newest browsers. This way we reap the benefit of an up-to-date and productive development environment, where today we suffer, supposedly because of yesterday. That is a misconception. We are all trapped in committee thinking, and we drag along things that only worsen our performance over time. We cause an ever-increasing complexity that is hard to overestimate. JavaScript is easily fixed; the fact is, we don't fix it.
    As an example, I have seen Patrick Michaud tackle the migration problem in PmWiki. It included forward-migration scripts: whenever syntax changes were made, a migration script was added to transform pages to the new syntax. As far as I know, all migrations have worked flawlessly. In other words, we don't tackle the migration problem, we just drag it along. We are incompetent! And why is that? Because technically incompetent people feel they must decide for us. Because they are incompetent, fear rules them. They are obnoxiously conservative, and we suffer the consequences of bad leadership. But the competent don't need to play by the same rules; they can (and must) change them. They are the path forward. It is about time to leave the past behind and pursue the leanest, meanest, eternal functionality. That would in and of itself revolutionize programming. So, why don't we stop whining and fix programming? Begin with JavaScript and change the world. Even if browsers don't hook into this system, coders could. So language updaters should take it upon themselves to provide migration scripts. Once they exist, browsers may take advantage of them.

    Read the article

  • OpenGL programming vs Blender Software, which is better for custom video creation?

    - by iammilind
    I am learning the OpenGL API bit by bit and also developing my own C++ framework library to use it effectively. Recently I came across the Blender software, which is used for graphics creation and in turn uses OpenGL itself. For my part-time hobby of graphics learning, I just want to create small movie or video segments, e.g. related to construction engineering, epic stories, and so on. There would be very minimal to no mouse/keyboard interaction in those videos, unlike video games, which are highly interactive. I was wondering: is learning OpenGL from scratch worth it for this, or should I invest my time in learning the Blender software instead? Quite a few good example movies created with Blender are shown on its website. Other open source, cross-platform alternatives that can serve this purpose are also welcome.

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that repeatedly pushes conflicting appointments down until there are no more conflicts:

        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]

    The constraints are that appointments cannot end after 15:00 and cannot start before 9:00. This is the naive algorithm:

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

    This results in the following resolution, which obviously breaks the constraints, so the last appointment would have to be pushed to the following day:

        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]

    This is clearly suboptimal: it performs 3 appointment moves when the conflict could have been resolved with one. If we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments to move in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should it be a breadth-first or depth-first search for a solution? When do I know that a solution is "good enough"?
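
    One observation that helps with the "good enough" question: a conflict-free day exists if and only if everything fits in the 9:00-15:00 window, and a two-pass sweep (forward to push conflicts down, backward to pull everything inside 15:00) always finds one such schedule, though not necessarily the one with the fewest moved appointments; minimizing moves would still need a search on top of this. A rough sketch, in Java for illustration, with a hypothetical Appt type holding minutes since midnight:

        import java.util.ArrayList;
        import java.util.List;

        record Appt(int start, int end) {
            int duration() { return end - start; }
        }

        class Scheduler {
            static final int OPEN = 9 * 60, CLOSE = 15 * 60;

            // Returns a conflict-free schedule keeping original starts where
            // possible, or null when the day cannot hold every appointment.
            static List<Appt> resolve(List<Appt> sorted) {
                int n = sorted.size();
                int[] start = new int[n], dur = new int[n];
                int cursor = OPEN;
                for (int i = 0; i < n; i++) {       // forward: push conflicts down
                    dur[i] = sorted.get(i).duration();
                    start[i] = Math.max(cursor, Math.max(OPEN, sorted.get(i).start()));
                    cursor = start[i] + dur[i];
                }
                int limit = CLOSE;
                for (int i = n - 1; i >= 0; i--) {  // backward: pull inside 15:00
                    start[i] = Math.min(start[i], limit - dur[i]);
                    limit = start[i];
                }
                if (n > 0 && start[0] < OPEN) return null; // does not fit in the day
                List<Appt> out = new ArrayList<>();
                for (int i = 0; i < n; i++) out.add(new Appt(start[i], start[i] + dur[i]));
                return out;
            }
        }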

    Read the article

  • Why do many designs ignore normalization in RDBMS?

    - by Yosi
    I have seen many designs in which normalization was not the first consideration in the decision-making phase. In many cases those designs included more than 30 columns, and the main approach was "to put everything in the same place". From what I remember, normalization is one of the first and most important things taught, so why is it sometimes dropped so easily? Edit: Is it true that good architects and experts choose a denormalized design while inexperienced developers choose the opposite? What are the arguments against starting your design with normalization in mind?

    Read the article

  • Converting ANTLR AST to Java bytecode using ASM

    - by Nick
    I am currently trying to write my own compiler, targeting the JVM. I have completed the parsing step using Java classes generated by ANTLR, and I have an AST of the source code to work from (an ANTLR "CommonTree", specifically). I am using ASM to simplify the generation of the bytecode. Could anyone give a broad overview of how to convert this AST to bytecode? My current strategy is to explore down the tree, generating different code depending on the current node (using "Tree.getType()"). The problem is that I can only recognise tokens from my lexer this way, rather than more complex patterns from the parser. Is there something I am missing, or am I simply approaching this wrong? Thanks in advance :)
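
    The top-down strategy does extend to parser-level constructs, because the tree itself encodes them: an operator node's children are its operands, so a post-order walk (emit the children, then one instruction for the node) leaves the operands on the JVM operand stack exactly where the operator instruction expects them. A sketch for integer expressions, where PLUS/MINUS/INT stand in for whatever token-type constants your generated parser actually defines (hypothetical here):

        import org.antlr.runtime.tree.CommonTree;
        import org.objectweb.asm.MethodVisitor;
        import org.objectweb.asm.Opcodes;

        class ExprGen {
            static final int PLUS = 1, MINUS = 2, INT = 3; // use your parser's constants

            static void emit(CommonTree node, MethodVisitor mv) {
                switch (node.getType()) {
                    case INT:                    // leaf: push the literal
                        mv.visitLdcInsn(Integer.parseInt(node.getText()));
                        break;
                    case PLUS:
                    case MINUS:                  // inner node: operands first
                        emit((CommonTree) node.getChild(0), mv);
                        emit((CommonTree) node.getChild(1), mv);
                        mv.visitInsn(node.getType() == PLUS ? Opcodes.IADD : Opcodes.ISUB);
                        break;
                    default:
                        throw new IllegalStateException("unhandled node: " + node.getText());
                }
            }
        }

    Statements, variables, and control flow follow the same pattern, with the switch growing one case per node type (ISTORE/ILOAD for locals, jump instructions for if/while).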

    Read the article

  • Combining Code Review with Trust Metrics

    - by DragonFax
    I don't get the chance to partake in it at work, but I love the idea of code review, especially online open-source code review like Gerrit Code Review. And I love what trust metrics have done for forums and collective-intelligence sites on the internet like Stack Exchange, Reddit, and Wikipedia. Would it be possible to combine the two and come up with an open source project management system, something that ends up being mostly community-driven? Perhaps a kind of Wikipedia of code for a project, where submitters become popular/trusted by having lots of patches reviewed favorably by others and accepted into the trunk, and popular/trusted submitters get their patches accepted faster and more easily. I'm looking for some opinions on the idea, or perhaps pointers to where it's been done before, if that's the case. This might leave the lead maintainer little more to do than: wrangling the direction of the project by fast-tracking or vetoing specific patches, and settling disputes when the CI tests break (or fixing them himself). Is design by community worse than design by committee?

    Read the article

  • What happened to Borland Delphi?

    - by Lucas
    I have the impression that Delphi isn't very popular anymore, but at work I recently had to make some changes to an old Delphi program that we are still using. I used Borland Developer Studio 2006, and it was very pleasant and intuitive to work with, even though I had practically no previous exposure to it. Is Delphi still widely used and I am simply not aware of it, or are there other reasons for its decline?

    Read the article

  • Any suggestions on how a small company can sell its software?

    - by Derfder
    OK, I know that if I were Red Hat or another giant offering support and so on, I could be profitable; in fact, Red Hat is doing quite well. But what about a small company where I create one small program, e.g. an instant messenger for Windows or Linux (just as an illustration), and I want to sell it? How can I sell it if it is free and everybody can download it? Any advice? I like the idea of the FSF and Richard Stallman, but I am missing the part about how to sell my software under the GNU GPL license. Any advice on how to solve this problem? Are there any profitable small-business software developers around who can share their opinion? Any links to, or names of, small companies whose business model I can look at and study?

    Read the article

  • How did Microsoft market .NET?

    - by Fendy
    I just read Joel's article about Microsoft's breaking change (non-backwards compatibility) with the introduction of .NET. It is interesting, and it explicitly reflects the conditions of that time; but now almost 10 years have passed.
    The breaking change: The complaint was mainly about how bad it was for Microsoft to introduce non-backwards-compatible development tools such as .NET instead of improving the already widely used ASP Classic or VB6. As is well known, .NET is not natively embedded in Windows XP (it is in Vista and 7), so in order to run .NET apps you needed to install the .NET Framework, over 300 MB (which was big in those days). Yet nowadays many businesses use .NET as their main development platform, with ASP.NET or MVC for their web applications, and C# is now one of the top programming languages (the most questions on Stack Overflow). The more interesting part is that the Win32 API is still alive even though newer technology is out there (and it is still widely used). Imagine if Microsoft had not introduced the breaking change: many corporations would still be using ASP Classic or VB-based applications (some still are, but not that many). Many corporations now use additional services such as Azure or SharePoint (never mind how expensive they are). Note that I also know there are many flagship applications (maybe Adobe's and Blizzard's) that still use C-based or older languages and have not been ported to newer high-level languages.
    The question: How did Microsoft persuade users to migrate their old applications to .NET? As we know, rewriting applications is very hard, gives no immediate value (the Netscape story), and is very risky. I am interested in Microsoft's approach, not opinions such as "because .NET is OOP" or ".NET is DLL-embeddable". This question may be constructive, since technology has been changing rapidly lately: Microsoft moved ASP.NET WebForms to MVC, WinForms is legacy now, distribution is shifting from classic installers to the Windows Store, touchscreens are everywhere, and later on we will have see-through applications such as Google Glass. Those will be breaking changes too. We need to treat portability as a first-class issue nowadays; we need more than a mere technology choice, we need migration plans. Maybe we even need something as drastic as a multiplatform language compiler, the approach Joel took with Wasabi. (Hey, I read his articles too much!)

    Read the article

  • Blurry lines between the web application context layer, service layer, and data access layer in Spring

    - by thenaglecode
    I originally asked this question on SO, but on advice I have moved it here. I'll admit I'm a Spring newbie, but correct me if I'm wrong: this one-liner looks kind of fishy in a best-practices sort of way:

        @RepositoryRestResource(collectionResourceRel="people"...)
        public interface PersonRepository extends PagingAndSortingRepository<Person, Long>

    For those who are unaware, this does many things. It is an interface definition that can be registered in an application context as a JPA repository, automagically hooking up all the default CRUD operations within a persistence context (that is externally configured). It also configures default controller/request-mapping/handler functionality at the namespace "/people", relative to your configured dispatcher servlet mapping. Here's my point: I just crossed three conceptual layers with one line of code! This feels against my separation-of-concerns instincts, but I wanted to hear your opinion. And for the sake of being on a question-and-answer site: is there a better way of separating these different layers (service, data, controllers) while maintaining as minimal a configuration as possible?
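
    For comparison, the conventional split of those three layers might look like the sketch below, assuming standard Spring MVC / Spring Data annotations and a Spring Data version where PagingAndSortingRepository still extends CrudRepository (so findById is inherited). Person is the entity from the question, assumed defined elsewhere; PersonService and the /people mapping are hypothetical names, not anything the starter generates. The trade-off is a little boilerplate in exchange for an obvious home for business rules and web concerns:

        import org.springframework.data.repository.PagingAndSortingRepository;
        import org.springframework.stereotype.Service;
        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RestController;

        // Data layer: persistence only.
        interface PersonRepository extends PagingAndSortingRepository<Person, Long> {}

        // Service layer: business rules live here.
        @Service
        class PersonService {
            private final PersonRepository repo;
            PersonService(PersonRepository repo) { this.repo = repo; }

            Person find(long id) {
                return repo.findById(id).orElseThrow(); // plus validation, auditing, ...
            }
        }

        // Web layer: request mapping and nothing else.
        @RestController
        class PersonController {
            private final PersonService service;
            PersonController(PersonService service) { this.service = service; }

            @GetMapping("/people/{id}")
            Person get(@PathVariable long id) { return service.find(id); }
        }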

    Read the article

  • What are the downsides of implementing a singleton with Java's enum?

    - by irreputable
    Traditionally, a singleton is usually implemented as:

        public class Foo1 {
            private static final Foo1 INSTANCE = new Foo1();
            public static Foo1 getInstance() { return INSTANCE; }
            private Foo1() {}
            public void doo() { ... }
        }

    With Java's enum, we can implement a singleton as:

        public enum Foo2 {
            INSTANCE;
            public void doo() { ... }
        }

    As awesome as the 2nd version is, are there any downsides to it? (I gave it some thought, and I'll answer my own question; hopefully you have better answers.)
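
    One pair of commonly cited downsides, sketched below for illustration (Base and the config parameter are hypothetical): an enum cannot extend another class, since it already extends java.lang.Enum, and its instance is created when the enum class initializes, so it cannot take runtime construction parameters. The class-based form has neither restriction:

        class Base { /* shared behavior */ }

        // public enum Foo3 extends Base { INSTANCE; }  // does not compile:
        //                                              // enums already extend Enum

        class Foo4 extends Base {
            private static Foo4 instance;
            private final String config;
            private Foo4(String config) { this.config = config; }

            // Lazy, parameterized construction: not possible with the enum idiom.
            static synchronized Foo4 init(String config) {
                if (instance == null) instance = new Foo4(config);
                return instance;
            }
        }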

    Read the article

  • Has any hobbyist attempted to make a simple VGA-graphics based operating system in machine code?

    - by Bigyellow Bastion
    I mean real bare-bones, bare-machine here: no Linux kernel, no pre-existing kernel, no bootloader. I mean honestly writing the boot-loading software in direct, microarchitecture-specific machine opcodes, and hosting the operating system, interrupts, I/O, services, graphical software and all hardware interaction, computation, and design entirely in binary. I know this is quite the leap, so I was thinking of practicing first in 16-bit x86 assembly (not raw binary). Any ideas?

    Read the article

  • Still about Perl vs Python, but (to me) slightly different from what has been asked [closed]

    - by B Chen
    Being a newbie to coding, I read on this site that Perl is still as viable as it has been, while Python, to quote someone else's post, is good but just "snake oil" (though I am not sure what exactly this refers to). From the responses in that post, I got the gist that Perl is good and worthy of learning. My question, pardon me for phrasing it in this non-programmer's way, is: which one should I learn FIRST? (I am currently learning R.) Here is the background info: (a) I will be using it mostly for data mining and statistical analysis. (b) Will there be a "first" and "later" issue with learning either Perl or Python? That is, after I become competent with one language, would there be a need to learn the second one (for a similar task)? (c) If there are circumstances where I must learn the second one, would learning Perl FIRST be better than learning Python first? I hope to learn as much as I can from exchanging info here, so please provide more than just "it depends" answers. Great many thanks to all who choose to respond to my query.

    Read the article

  • Scalable Architecture for modern Web Development [on hold]

    - by Jhilke Dai
    I am doing research on scalable architectures for web development. The goal is to support the modern web with a flexible architecture that can scale up or down according to need without losing any core functionality. By "modern web" I mean supporting all the devices used to access websites, with a different loading mechanism for each class of device. My quest for an architecture looks like this: For PC: access on a PC is faster, but it also depends on geo-location, so by default the application would check the capacity of the connection and browser and load the page accordingly. For mobile: most mobile designs these days either hide information or use a different version of the same application; e.g. Facebook uses m.facebook.com, which is completely different from the PC version. Hiding things from mobile using JavaScript or CSS is not a solution, as it still consumes bandwidth and makes the application slow. So my research is about serving one application that has different presentation stacks: when the application receives a request, it sends the packaged stack appropriate to that request. This way the load time would be faster for end users, and maintenance of the application would be easier for developers. I am researching a 4-tier (layered) architecture: a Presentation Layer; an Application Logic Layer (the main logic layer, which stores the presentation stacks); a Business Logic Layer; and a Data Layer. Main questions: Have you come across similar architectures? If so, can you list links here? I'm very interested in learning about those implementations, especially in real-world scenarios. Have you thought about similar architectures, or tried your own ideas? If you have any ideas regarding this, I urge you to share them. I am open to any discussion, so please feel free to comment/answer.

    Read the article

  • CMS vs Admin Panel?

    - by Bob
    Okay, so this probably seems like an unusual, more grammar-related question, but I was unsure what to call it. If you use software such as vBulletin or MyBB, or even Blogger, and you're the administrator (or in some other, lesser position, such as moderator) of the forum, or the publisher/author of the blog, you generally have access to something of an "admin panel". For example, vBulletin's admin panel looks like this, and Blogger's admin panel looks something like this. While they both look different and do different things, the goal is fundamentally the same: to provide the user with a method for adding, modifying, or deleting content, letting them control and administer their forum or blog. Also, they're both made specifically by the company for use in a specific product. Now, there are also options like Drupal. It seems to offer quite a bit more and to be quite a bit more generalized. How does something like this work? If you were freelancing, would you deploy a website with Drupal, or would it be something the client might already have installed on their own server? I've never really used Drupal, only heard about it, so please let me know. Also, there seem to be other options, like cPanel, a sort of global panel that lets you administer your entire website. How do those work in comparison to Drupal, or to the administrative panels that come with vBulletin? They seem to serve related but different purposes. Basically, what is the norm? If I'm developing a web application for a group that needs to be able to edit their website without going into the code or the database (or rather, wants an easy-to-use, graphical content-management/admin panel), would it be necessary to write my own miniature admin panel? Or could I send them off knowing that they have cPanel? Or could something like Drupal fill this void? Again, I'm a little new to web development, and I'm planning out my first real, large website, so I need a little advice on the standards and expectations for web development. Security and coding practices aside, what should I be looking for as far as usability and administration for the client (who will be running the site once I'm done creating it)? Any extra tips would be appreciated! Oh, and a little background: I'm writing the website in Ruby on the Sinatra framework (both Ruby and Sinatra are things I'm fairly comfortable with), and I'm not being paid to make the website. I will also be a user, and one of the three administrators, of the website; it's being built for a club I'm in.

    Read the article

  • Survival rate of open source projects

    - by Shogoot
    I'm trying to write a paper on why an open source project does or does not have good odds of survival. I've found very few articles on the topic on the internet, or perhaps I'm just searching with the wrong terms. I've tried:
    "open source" survival
    "open source" success failure
    "open source" determinants for success
    So far I've only found this, which says something on the topic. So I turn to you, my dear stackers! Help me find some arguments and articles that will throw some clarity on the subject.

    Read the article

  • Machine Learning Web Jobs

    - by gprime
    I always see job postings at web companies for machine learning; for example, Facebook always has this type of job opening. Anyway, I was curious what exactly web companies use machine learning for. Is it for showing people ads based on their browsing history, or something like that? I want to know because I have some experience with machine learning, and it sounds like a fun thing to work on, as long as I can convince the business guys to go ahead with it.

    Read the article

  • Is reference to bug/issue in commit message considered good practice?

    - by Christian P
    I'm working on a project where we have source control set up to automatically write notes in the bug tracker: we simply put the bug issue ID in the commit message, and the commit message is added as a note in the bug tracker. I can see only a few downsides to this practice: if sometime in the future the source code gets separated from the bug-tracking software (or the reported bugs/issues are somehow lost), or if someone looking through the commit history doesn't have access to our bug tracker. My question is: is having a bug/issue reference in the commit message considered good practice? Are there other downsides?
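
    For illustration, a message under this convention might look like the following (the issue number and wording are made up). Keeping a self-contained one-line summary next to the ID is what protects the history against exactly the downside mentioned above, the tracker going away:

        Fix off-by-one in invoice pagination (issue #1234)

        The last row of each page was dropped when the page size divided
        the row count evenly. Adds a regression test for page boundaries.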

    Read the article

  • Process arbitrarily large lists without explicit recursion or abstract list functions?

    - by Erica Xu
    This is one of the bonus questions in my assignment. The specific question is to treat the input list as a set and output a list of all of its subsets. We can only use cons, first, rest, empty?, empty, lambda, and cond, and we may use define exactly once. But after a night's thinking, I don't see how it is possible to go through an arbitrarily long list without map or foldr. Is there a way to perform recursion, or some alternative to recursion, with only these functions?

    Read the article
