Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 207/563 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • Project Idea - Android

    - by Darren Young
    Hi, I am trying to come up with some project ideas for my final year at University, and I think I have one that would be a (massive) challenge and something I could potentially make money from. I just want to check something. Is it possible (from a photograph) to detect somebody's face and the individual parts of that face - eyes, ears, nose, etc.? This will probably be via Android. Thanks.
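
    For what it's worth, Android ships with a basic android.media.FaceDetector class that can locate faces and eye positions in a still bitmap (it does not segment individual parts such as ears or nose). A minimal sketch, assuming the photo has already been decoded into a Bitmap:

        import android.graphics.Bitmap;
        import android.graphics.PointF;
        import android.media.FaceDetector;

        public class FaceFinder {

            // Counts the faces in a photo. FaceDetector requires an RGB_565
            // bitmap, so the input is converted first.
            public static int countFaces(Bitmap photo) {
                Bitmap rgb565 = photo.copy(Bitmap.Config.RGB_565, false);
                int maxFaces = 5;
                FaceDetector detector =
                        new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), maxFaces);
                FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
                int found = detector.findFaces(rgb565, faces);
                for (int i = 0; i < found; i++) {
                    PointF midPoint = new PointF();
                    faces[i].getMidPoint(midPoint);              // point between the eyes
                    float eyeDistance = faces[i].eyesDistance(); // distance between the eyes
                    // Face position and rough scale follow from these two values.
                }
                return found;
            }
        }

    Anything finer-grained than face position and eye distance would need something beyond this class, so treat it only as a starting point.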

    Read the article

  • Is "Interface inheritance" always safe?

    - by Software Engeneering Learner
    I'm reading "Effective Java" by Josh Bloch and in there is Item 16 where he tells how to use inheritance in a correct way and by inheritance he means only class inheritance, not implementing interfaces or extend interfaces by other interfaces. I didn't find any mention of interface inheritance in the entire book. Does this mean that interface inheritance is always safe? Or there are guidlines for interface inheritance?

    Read the article

  • Purpose of "new" keyword

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?

    Read the article

  • Why are Javascript for/in loops so verbose?

    - by Matthew Scharley
    I'm trying to understand the reasoning behind why the language designers would make the for (.. in ..) loops so verbose. For example: for (var x in Drupal.settings.module.stuff) { alert("Index: " + x + "\nValue: " + Drupal.settings.module.stuff[x]); } It makes trying to loop over anything semi-complex like the above a real pain as you either have to alias the value locally inside the loop yourself, or deal with long access calls. This is especially painful if you have two to three nested loops. I'm assuming there is a reason why they would do things this way, but I'm struggling with the reasoning.

    Read the article

  • My boss is feuding with his boss. My workload is expanding. What should I do?

    - by steve
    These two have always had a somewhat shaky relationship when they were on the same level. The other guy was recently promoted to director and now my boss reports to him. On the surface, they appear to get along when they get together, but my boss despises the man and badmouths him every chance that he gets (to peers, subordinates, etc). He believes that the director is setting him up to fail. The director and upper management are holding my boss responsible for the team's not-so-great performance as of late. He's been playing games to make my boss look bad. Due to layoffs, we don't have the manpower to deliver the results that we did before...but expectations have not lowered...and my boss is taking the heat for it. Now he's on the warpath and starting to micromanage. He's giving everyone more work. He's forcing us midlevel guys to take responsibility for the level one techs' performance. I'm spending less and less time coding....and more time babysitting vendors, techs, etc. I'm not so sure that's a bad thing because I'm sorta burnt out on coding, but I don't really care for the idea of having to be responsible for others' poor performance....isn't that the manager's job? Anyway, do you guys have any suggestions on dealing with the situation?

    Read the article

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was how Eclipse comes shipped with its own Java compiler (ecj) for doing incremental builds. Eclipse seems by default to output incrementally built class files to the projRoot/bin folder. I've noticed too that many projects come with ant files that build the project using the system's Java compiler for the production builds. Coming from a Windows/Visual Studio world where Visual Studio is invoking the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler. I'm used to the project being the make file. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its intellisense/incremental building/etc)? Is it typical that for the final "release" build of a project, ant, maven, or another tool is used to do the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is this normal/accepted practice?

    Read the article

  • What are the relative merits of implementing an Erlang-style "Continuation" pattern in C#?

    - by JoeGeeky
    What are the relative merits (or demerits) of implementing an Erlang-style "Continuation" pattern in C#? I'm working on a project that has a large number of lowest-priority threads and I'm wondering if my approach may be all wrong. It would seem there is a reasonable upper limit to the number of long-running threads that any one Process 'should' spawn. With that said, I'm not sure what would signal the tipping point for too many threads, or when alternate patterns such as "Continuation" would be more suitable. In this case, many of the threads do a small amount of work and then sleep until woken to go again (e.g. heartbeat, purge caches, etc...). This continues for the life of the Process.

    Read the article

  • History of open source software

    - by Victor Sorokin
    I've always been interested, purely for self-amusement, in the history of the open source software used today: who the people who started it were, what their reasons for starting were, what the design decisions were at the start, and how the software evolved over time. Specifically, I'm interested in the following software: GCC, X, the Linux kernel, and Java. Of course, there is plenty of information on the Internet to google for, but I thought it would be nice to have a list of interesting resources at this site. I hope some visitors of this site have similar interests and can share a link or two they found particularly amusing/interesting. To make this entry more question-like, here's a straight question: what are the most interesting/amusing links about the history of open source software?

    Read the article

  • Do I need a degree in Computer Science to get a Jr Programming job in the world? [closed]

    - by t84
    As above really, do I need to go to Uni to get a job as a Junior C# coder? I'm 26 and have been working in Games (Production) for 6 years and I am thinking of a change. I've had exposure to VB6, VBA, HTML, CSS, PHP, and JavaScript over the past few years and did a web design NCFE at College, but other than that, nothing else! I'm teaching myself C# at the moment with books and I was wondering 'how much' I need to learn and also how I can improve my chances of getting a programming job! Am I starting too late to learn coding? (I know many people who started at a very young age!) Thanks for the help :)

    Read the article

  • Quantifying the value of refactoring in commercial terms

    - by Myles McDonnell
    Here is the classic scenario: the dev team builds a prototype. Business management likes it and puts it into production. The dev team now has to continue to deliver new features whilst at the same time paying down the technical debt accrued when the code base was a prototype. My question is this (forgive me, it's rather open ended): how can the value of the refactoring work be quantified in commercial terms? As developers we can clearly understand and communicate the value in technical terms, such as the removal of code duplication, the simplification of an object model and so on. But this means little to an executive focussed on the commercial elements. What will mean something to this executive is the dev team being able to deliver requirements at a faster velocity. Just making this statement without any metrics that clearly quantify return on investment (increased velocity in return for resource allocated to refactoring) carries little weight. I'm interested to hear from anyone who has had experience, positive or negative, in relation to the above. ----------------- EDIT ---------------- Thanks for the responses so far, all of which I think are good. What I want to develop is a metric that proves (or disproves!) all of these statements. A report that ties velocity to refactoring and shows a positive effect.

    Read the article

  • How does dependency injection increase coupling?

    - by B?????
    Reading the wiki page on Dependency injection, the disadvantages section says this: "Dependency injection increases coupling by requiring the user of a subsystem to provide for the needs of that subsystem." with a link to an article against DI. What DI does is make a class use an interface instead of the concrete implementation. That should decrease coupling, no? So, what am I missing? How is dependency injection increasing coupling between classes?
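
    For reference, a minimal sketch (hypothetical names) of the kind of dependency injection I mean, where the class depends only on an interface and the caller supplies the concrete implementation:

        // The consumer depends only on the interface...
        interface MessageSender {
            void send(String message);
        }

        class SmtpSender implements MessageSender {
            public void send(String message) {
                // talk to an SMTP server here
            }
        }

        class Notifier {
            private final MessageSender sender;

            // ...and the concrete implementation is injected by whoever builds Notifier.
            Notifier(MessageSender sender) {
                this.sender = sender;
            }

            void greet(String user) {
                sender.send("Hello, " + user);
            }
        }

        // Usage: the wiring happens at the call site (or in a container):
        // Notifier notifier = new Notifier(new SmtpSender());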

    Read the article

  • Reasonable technological solutions to create a CRM using .NET (or eventually Java)

    - by user1825608
    My background (if it's too long, just skip it please ;) ): I am a Java programmer (because of demand): mostly a teacher for other students, and I worked on a few theses for others, but during my journey I discovered that .NET and Microsoft's tools are at least two levels higher than Java and its tools, so I want to learn more about them. I programmed a little bit on Windows Phone (NFC tags, TCP clients, a guitar tuner using the internal microphone, simple RSS), used WPF, integrated WPF with Windows Forms, used Apple Bonjour (.NET), and I have experience with IP cameras and with unusual problems. I'm learning Android, but I don't like it at all. Problem: I was asked by my friend to create a CRM for a small new company. There will be a maximum of 20 workers in the company working at computers in a few cities in the country (Poland). They just want to store contracts with the clients and clients' data. I am not sure what exactly they do, but they probably sell apartments, so there will be at most a few thousand contracts to store in the far future. Now I am totally new to CRM but I want to learn. I have a few questions: Should the data be stored on a server in the company's building running 24/7, or in the cloud? If the cloud, which one? Should I use ASPX or WPF? I read one topic about it, but as far as I know aspx sites can be viewed from every device with an internet browser: tablets, phones (Android, WP, iOS) and computers at the same time - so the job is done once and for all (am I right?); I don't know anything about aspx. Can WPF also be used in a manner that does not require porting it to other platforms?

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well, however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We've a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have some real experience of this problem. NB our internal document's data structure is quite complex and it could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages / APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to having to write a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
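
    To illustrate the footnoted idea, here is a minimal sketch (hypothetical names, and in Java rather than C#, but the shape is the same) of a facade data object that exposes a stable view of the document while wrapping the internal structure:

        // Internal structure: free to change between releases.
        class InternalParagraph {
            String rawText;
            int internalStyleId;
        }

        // Stable facade handed to user scripts: kept backwards compatible,
        // so internal renames and reshuffles only touch the mapping code below.
        public final class ParagraphFacade {
            private final InternalParagraph internal;

            ParagraphFacade(InternalParagraph internal) {
                this.internal = internal;
            }

            public String getText() {
                return internal.rawText;
            }

            public void setText(String text) {
                internal.rawText = text;
            }
        }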

    Read the article

  • My Client wants to convert me from a Contractor to an Employee. I'd like part of the Headhunter's fee. Is this fair?

    - by Bob Kaufman
    I am happily working on a contract for a good, solid company. This client is very happy with my work and has asked me to consider converting to full-time employment. My problem is the headhunter. This firm has not been entirely upfront with me throughout this contract. A mistake was made by the firm that benched me for several days, costing me those days' pay. Inadequate healthcare coverage left me with a bill of several thousand dollars after my wife's brief hospital stay. My feeling is that I did the work that earned me the invitation to work full-time for this company. Asking for 1/3 of the commission I figure they're going to receive would nicely counterbalance the inequities that I perceive were dealt to me. Is this an (un)reasonable or (in)appropriate request?

    Read the article

  • Getting started with ClojureScript and Google Closure

    - by Andrea
    I would like to investigate whether ClojureScript, with the associated Google Closure library, is a reasonable tool to build modern, in-browser, JavaScript applications. My current JavaScript stack consists of jQuery, Backbone and RequireJS, with the possible addition of some widget libraries like jQueryUI or KendoUI. So it will be quite a big leap (I already know how to work in Clojure, although I have little experience). What is a good roadmap to do so? Should I learn the Google Closure library first, or can I grasp it together with ClojureScript? One thing I am concerned about is the overall application structure. Backbone is rather opinionated on how to organize your application. I am not sure whether Google Closure also includes some components to help with the design of the application. And, if this is the case, I do not know how to tell whether this structure will port to ClojureScript, or whether a ClojureScript application will require a different organization anyway and only use - say - the widgets and DOM manipulation features of Closure.

    Read the article

  • Good resources for language design

    - by Aaron Digulla
    There are lots of books about good web design, UI design, etc. With the advent of Xtext, it's very simple to write your own language. What are good books and resources about language design? I'm not looking for a book about compiler building (like the dragon book) but something that answers: How do I create a grammar that is forgiving (like allowing optional trailing commas)? Which grammar patterns cause problems for users of a language? How do I create a compact grammar without introducing ambiguities?

    Read the article

  • Do I really need to learn Python? [closed]

    - by Pouya
    These days, I see the name "Python" a lot. Mostly when I'm doing some programming on Linux/Mac, I see a trace of Python. I have a fair knowledge of C++ and I'm quite good at Java. I also know Delphi, which comes in handy sometimes. I've been fine with these languages; however, I was wondering if learning Python could make things better. What does it offer that makes it worth learning? What are its key/unique advantages/features?

    Read the article

  • Is an 'if password == XXXXXXX' enough for minimum security?

    - by Morgan Herlocker
    If I create a login for an app that has middle to low security risk (in other words, it's not a banking app or anything), is it acceptable for me to verify a password entered by the user by just saying something like: if(enteredPassword == verifiedPassword) SendToRestrictedArea(); else DisplayPasswordUnknownMessage(); It seems too easy to be effective, but I certainly would not mind if that was all that was required. Is a simple check on the username/password combo enough? Update: The particular project happens to be a web service, the verification is entirely server side, and it is not open-source. Does the domain change how you would deal with this?

    Read the article

  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial, it already allows nested loops, string concatenation, etc. and it is practically sure that other constructs will be added as the project advances. I know by experience that writing a lexer/parser by hand - unless the grammar is trivial - is a time consuming and error prone process. So I was left with two options: a parser generator à la yacc or a combinator library like Parsec. The former was good as well but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes, the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you never programmed in anything other than java/c#, but then this would be true of anything not written in java/c#. At some point however, I've been literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I was taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to re-surface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution: you need a lot of ifs to handle special cases and things quickly spiral out of control; lots of hardcoded array indexes make maintenance painful; it's extremely difficult to handle things like a function call as a method argument (e.g. add((add a, b), c)); and it's very difficult to provide meaningful error messages in case of syntax errors (very likely to happen). I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copy-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project after all. (I won't use this argument as it will probably sound offensive, and starting a war is not going to help anybody.) What are your favorite arguments against parsing the Cthulhu way?* *Of course if you can convince me he's right I'll be perfectly happy as well.

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: We're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. * If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is it outsourcing that's the problem? Is it that the software designers were never made to understand the medical business? What are your experiences with projects that have gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project? EDIT *This bit seemed to get a lot of attention. What I mean is I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including stuff I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front end design. My point is the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?

    Read the article

  • Exclusive use of a Jini server during long-running call

    - by Matthew Flint
    I'm trying to use Jini in a "Masters/Workers" arrangement, but the Worker jobs may be long running. In addition, each worker needs to have exclusive access to a singleton resource on that machine. As it stands, if a Worker receives another request while a previous request is running, the new request is accepted and executed in a second thread. Are there any best practices to ensure that a Worker accepts no further jobs until the current job is complete? Things I've considered: (1) synchronize the job on the server, with a lock on the singleton resource; this would work (see the sketch below), but is far from ideal, since a call from a Master would block until the current Worker thread completes, even if other Workers become free in the meantime; (2) unregister the Worker from the registry while the job is running, then re-register when it completes; this might work OK, but something doesn't smell right with this idea... Of course, I'm quite happy to be told that there are more appropriate technologies than Jini... but I wouldn't want anything too heavyweight, like rolling out EJB containers all over the place.
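
    To make option (1) concrete, a rough sketch (hypothetical names; only the locking pattern matters) of synchronizing the job on the server with a lock on the singleton resource, which is exactly where the blocking behaviour comes from:

        public class WorkerService {

            // The machine-wide singleton resource that every job needs exclusively.
            private static final Object RESOURCE_LOCK = new Object();

            // Remote method invoked by a Master. A second call arriving while a
            // job is running blocks here until the lock is released, even if
            // other Workers elsewhere are idle - the drawback described above.
            public String runJob(String jobSpec) {
                synchronized (RESOURCE_LOCK) {
                    return doLongRunningWork(jobSpec);
                }
            }

            private String doLongRunningWork(String jobSpec) {
                // ...long-running, exclusive use of the singleton resource...
                return "done: " + jobSpec;
            }
        }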

    Read the article

  • Concurrency checking with Last Change Time

    - by Lijo
    I have the following three tables: Email (emailNumber, Address), Recipients (reportNumber, emailNumber, lastChangeTime), Report (reportNumber, reportName). I have a C# application that uses inline queries for data selection. I have a select query that selects all reports and their Recipients. Recipients are selected as a comma-separated string. During updating, I need to check concurrency. Currently I am using MAX(lastChangeTime) for each reportNumber. This is selected as maxTime. Before the update, it checks that lastChangeTime <= maxTime. -- It works fine. One of my co-developers asked why not use GETDATE() as "maxTime" rather than using a MAX operation. That is also working. What we are checking here is that the records have not been updated after the record selection time. Are there any pitfalls in using GETDATE() for this purpose?

    Read the article

  • Why does this static field always get initialized over-eagerly?

    - by TheSilverBullet
    I am looking at this excellent article from Jon Skeet. While executing the demo code, Jon Skeet says that we can expect three different kinds of behaviour. To quote that article: The runtime could decide to run the type initializer on loading the assembly to start with... Or perhaps it will run it when the static method is first run... Or even wait until the field is first accessed... When I try this out (on framework 4), I always get the first result. That is, the type initializer runs as soon as the assembly is loaded. I have tried running this multiple times and get the same result. (I tried both the debug and release versions.) Why is this so? Am I missing something?

    Read the article

  • Learning the GO programming language and its prospects [closed]

    - by SHOUBHIK BOSE
    Possible Duplicate: What are the chances of Google's Go becoming a mainstream language? Recently I've started experimenting with the Go programming language by Google. It's a programmer-friendly language with the simplicity of Python. I was wondering whether companies other than Google would also start using Go for development, and if they do, what would be the prospects of being a Go programmer?

    Read the article

  • What's the ethos of the programming profession?

    - by mac
    I am one of those people who became a professional programmer by chance, rather than by choice: I moved to a country whose main language I couldn't speak, I knew how to code... and here I am a few years later. Because of this I never really gave much thought to the ethos of being a programmer, and working as a freelancer I have not had many occasions to discuss this with fellow colleagues. Among others, Dictionary.com defines the word ethos as follows: The fundamental character or spirit of a culture; the underlying sentiment that informs the beliefs, customs, or practices of a group or society; dominant assumptions of a people or period. So my question is: How would you describe the ethos of being a programmer, and why would you say so? Please note that: my question is different from this and this other one (although you might have chosen to become a programmer because of the programmer's ethos, or you might think that part of the programmer ethos is about "programming being a meaningful profession"); besides the "how/what" part of the question, there is a "why" part too! :) I would appreciate it if the answer could be based not only on the idealised vision of the hero-programmer, but also on real working and life experience. Thank you in advance for your time and contributions!

    Read the article
