Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • How to explain OOP concepts to a non-technical person?

    - by John
    I often try to avoid telling people I'm a programmer because most of the time I end up explaining to them what that really means. When I tell them I'm programming in Java they often ask general questions about the language and how it differs from x and y. I'm also not good at explaining things because 1) I don't have that much experience in the field and 2) I really hate explaining things to non-technical people. They say that you truly understand something once you've explained it to someone else, so in this case: how would you explain OOP terminology and concepts to a non-technical person?
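
    One concrete prop that can help is a few lines of Java (the asker's language) annotated with an everyday analogy for each term. This is only a sketch of one way to frame the vocabulary, not a complete answer:

        // class = the blueprint; it describes a thing, it isn't the thing itself
        public class Car {
            private int speed = 0;          // field = a fact the object remembers

            public void accelerate() {      // method = something the object can do
                speed += 10;
            }

            public static void main(String[] args) {
                // object = one actual car built from the blueprint
                Car myCar = new Car();
                myCar.accelerate();         // calling a method = asking the object to act
            }
        }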

  • Should I start making connections even if I'm not ready for a job yet?

    - by James
    The first job is always the hardest to get and I'm no exception. I'm 23 years old and I have no college degree, but I plan on going to college this year if all goes well (CS, of course). I'm self-studying Java right now. I know most of the topics related to the language besides the more advanced ones, and I'm beginning to look at open source projects. I would like to find a job (at least a part-time job) after a year or two, when I'll have gained more experience and learned more about Java and other technologies that interest me. Finding a job will be a bit difficult because most people (or a lot of them, at least) at my current age already have two or more years of experience, so I will be somewhat disadvantaged. Should I start building connections and joining websites such as LinkedIn? I never bothered to look into it because I'm not much of a social-network person. If I contribute to open source projects and create personal projects for two years, could I apply for jobs that require 1-2 years of experience? Does this experience count?

  • Why not have a high-level language based OS? Are low-level languages more efficient?

    - by rtindru
    Without being presumptuous, I would like you to consider the possibility of this. Most OSes today are based on pretty low-level languages (mainly C/C++); even new ones such as Android use JNI, and the underlying implementation is in C. In fact (this is a personal observation), many programs written in C run a lot faster than their high-level counterparts (e.g. Transmission, a BitTorrent client on Ubuntu, is a whole lot faster than Vuze (Java) or Deluge (Python)). Even Python compilers are written in C, although PyPy is an exception. So is there a particular reason for this? Why is it that all our so-called "high-level languages" with the great "OOP" concepts can't be used in making a solid OS? I have two questions, basically. 1) Why are applications written in low-level languages more efficient than their HLL counterparts? Do low-level languages perform better for the simple reason that they are low-level and translate to machine code more easily? 2) Why do we not have a full-fledged OS based entirely on a high-level language?

  • PowerShell Active Directory account attribute to a variable

    - by Bill Garrett
    Sorry for the newbie question. I am using PowerShell 3 to get a list of all user accounts. I am trying to generate output that marks each account either "Enabled" or "Disabled". I am able to get the account status code from Active Directory using: $rc = $Rech.PropertiesToLoad.Add("userAccountControl"); That displays the correct account status code, but when I try to use an if statement on the value, I don't get any result. How do I put this value into a variable so I can apply some logic to it? In the end, my requirement is to output a CSV file that I can send to HR so they can examine it, and instead of a code I would like it to say "Enabled" or "Disabled". Thank you.
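
    A minimal sketch of one way to do the test, assuming $result is a single entry returned by the DirectorySearcher and that the attribute names shown are the ones loaded; the disabled bit in userAccountControl is 0x2 (ACCOUNTDISABLE):

        $rows = @()   # collect one object per account

        # Property names come back lower-cased from DirectorySearcher
        $uac = [int]$result.Properties["useraccountcontrol"][0]

        # Bit 0x2 set => the account is disabled
        if ($uac -band 0x2) { $status = "Disabled" } else { $status = "Enabled" }

        $rows += [PSCustomObject]@{
            Name   = [string]$result.Properties["samaccountname"][0]
            Status = $status
        }

        # Export the collected rows for HR
        $rows | Export-Csv -Path accounts.csv -NoTypeInformation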

  • In an agile environment, how are bug tracking and iteration tracking consolidated?

    - by DXM
    This topic stemmed from my other question about a management-imposed waterfall-like schedule. From the responses in the other thread, I gathered this much about what is generally advised: each story should be completed with no bugs, and a story is not closed until all bugs have been addressed. No news there, and I think we can all agree with it. If at a later date QA (or, worse yet, a customer) finds a bug, the report goes into a bug-tracking database and also becomes a story, which should be prioritized just like all other work. Does this sum up the general handling of bugs in an agile environment? If yes, the part I'm curious about is: how do teams handle tracking in two different systems (unless most teams don't have different systems)? I've read a lot of advice (including Joel's blog) on software development in general, and specifically on the importance of a good bug-tracking tool. At the same time, when you read books on agile methodology, none of them seem to cover this topic, because in "pure" agile you finish each iteration with no bugs. Feels like there's a hole there somewhere. So how do real teams operate? To track iterations you'd use one set of tools (a whiteboard, Rally, ...); to track bugs you'd use something from another set of products (if you are lucky enough, you might even get stuck with HP Quality Center). Should there be two separate systems? If they are separate, do teams spend time creating import/sync functionality between them? What have you done in your company? Is bug-tracking software even used, or do you just go straight to creating a story?

  • Is it worth learning experimental languages?

    - by Xander Lamkins
    I'm a young programmer who desires to work in the field someday. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know it is valuable to extend what I know - to learn languages that make you think differently). I took a look online to see what languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought maybe I should push for C.

    C and C++ are nice, but they are old. Things like Haskell and Forth (etc., etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well, and is slow because it's run by the JVM and not compiled to native code.

    I've been a Windows developer for quite a while. I recently started using Java - but only because it was more versatile and portable. The problem is that it doesn't look like a very usable language, for these reasons:

    - Its most-used purposes are web applications and cellphone apps (specifically Android).
    - As far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for - it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and inconsistent as far as computer-spec support goes.
    - Other than that it's used for servers, but heck - I don't only want to make/configure servers.

    The .NET languages are nice, however:

    - People laugh if I even mention VB.NET or C# in a serious conversation.
    - They aren't cross-platform unless you use Mono (which is still in development and has some improvements to be made).
    - They lack low-level stuff because, like Java with the JVM, they are run/managed by the CLR.

    My first thought was learning something like C and then using it to springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute.

    What I've looked into: Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being fully compiled. D also looks nice. It seems like a very usable language, and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success, because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focus on other things, such as Opa for web development and Go by Google.

    My question: is it worth learning these "experimental" languages? I've read other questions which say that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be-old) programming languages. I know that many people see this as important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current so that I'm successful in the future.

    Disclaimer: yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING! I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.

  • Do you develop with localization in mind?

    - by Jimmy C
    When working on a software project or a website, do you develop with localization in mind? By this I mean, e.g.:

    - Externalizing all strings, including error messages.
    - Not using images that contain text.
    - Designing your UI with text expansion in mind.
    - Using pseudo-translation to test your UIs early in the process.
    - etc.

    On the projects you work on, are these in the 'nice to have' category, with the L10N team left to worry about the rest, or do you have localization readiness built into your development process? I'm interested to hear how developers view localization in general.
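
    As a concrete illustration of the first point, here is a minimal sketch of string externalization using Java's ResourceBundle (the file names and keys are made up for the example):

        # messages_en.properties
        greeting=Hello, {0}!

        # messages_de.properties
        greeting=Hallo, {0}!

        import java.text.MessageFormat;
        import java.util.Locale;
        import java.util.ResourceBundle;

        public class Greeter {
            public static String greeting(Locale locale, String name) {
                // The code carries only the key; every translation lives in a resource file.
                ResourceBundle bundle = ResourceBundle.getBundle("messages", locale);
                return MessageFormat.format(bundle.getString("greeting"), name);
            }
        }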

  • Why are people using C instead of C++? [closed]

    - by Darth
    Possible Duplicate: When to use C over C++, and C++ over C? Many times I've stumbled upon people saying that C++ is not always better than C. A great example here would be the Linux kernel, where they simply decided to use C instead of C++ because C had better compilers at the time. But that was many years ago, and a lot has changed. So the question is: why are people still using C over C++? I guess there are probably some cases (like embedded devices) where there simply isn't a good C++ compiler - or am I wrong here? What are the other cases when it is better to go with C instead of C++?

  • Creating an expandable, cross-platform-compatible program "core"

    - by Thomas Clayson
    Hi there. Basically the brief is relatively simple. We need to create a program core: an engine that will power all sorts of programs, with a large number of distinct potential applications and deployments. The core will be an analytics and algorithmic processor which will essentially take user-specific input, output scenarios based on the information it gets, and record this information for reporting. It needs to be cross-platform compatible - something that can have platform-specific layers put on top which interface with the core. It also needs to be expandable - for instance, modular, with developers being able to write "add-ons" or "extensions" which can alter the function of the end program and can use the core to its full extent. (A good example of what I'm looking to create is a browser: it has its main core - the WebKit engine, for instance - and on top of this it has a platform-specific GUI, and it can also have add-ons and extensions which change the behavior of the program.) Our problem is that the extensions need to interface directly with the main core and expand/alter that functionality, rather than the platform-specific "layer". So, given that I have no experience in this whatsoever (I have a PHP background and recently Objective-C), where should I start, and is there any knowledge/wisdom you can impart on me, please? Thanks for all the help and advice you can give me. :) If you need any more explanation, just ask. At the moment it's in the very early stages of development, so we're just researching all possible routes. Thanks a lot.
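
    A minimal sketch of the kind of extension point this usually implies (C++, with hypothetical names): the core publishes one small interface, and every add-on is a shared library that receives that interface, so extensions talk to the core and never to the platform layer:

        // core_api.h -- the only header an extension ever sees
        class Core {
        public:
            virtual ~Core() = default;
            virtual void record(const char *event) = 0;            // reporting hook
            virtual double analyze(const double *in, int n) = 0;   // algorithmic processor
        };

        class Extension {
        public:
            virtual ~Extension() = default;
            // The core hands itself to the extension; the extension can then
            // alter or enrich core behaviour through that interface alone.
            virtual void onLoad(Core &core) = 0;
        };

        // Each add-on is built as a shared library exporting a C factory,
        // which the core loads with dlopen()/LoadLibrary() at runtime.
        extern "C" Extension *createExtension();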

  • C# Algorithms for * Operator

    - by Harsha
    I was reading up on algorithms and came across the Karatsuba multiplication algorithm, and a little wiki-ing led to the Schönhage-Strassen and Fürer algorithms for multiplication. I was wondering what algorithms are used for the * operator in C#. While multiplying a pair of integers or doubles, does it use a combination of algorithms, with some kind of strategy based on the size of the numbers? How could I find out the implementation details for C#?
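
    For built-in int and double operands, the * operator compiles to a single IL mul instruction that the JIT maps to a hardware multiply, so no algorithm selection is involved; algorithms like Karatsuba only matter for arbitrary-precision types. As a sketch of what the question refers to, here is the Karatsuba recurrence written against System.Numerics.BigInteger - an illustration of the algorithm, not how the BCL actually implements *:

        using System;
        using System.Numerics;

        static class Karatsuba
        {
            // Multiply two non-negative BigIntegers using Karatsuba's
            // three-multiplication recurrence instead of the schoolbook four.
            public static BigInteger Multiply(BigInteger x, BigInteger y)
            {
                if (x < 1000 || y < 1000)          // small inputs: plain multiply
                    return x * y;

                int m = Math.Min(BitLength(x), BitLength(y)) / 2;
                BigInteger high1 = x >> m, low1 = x - (high1 << m);
                BigInteger high2 = y >> m, low2 = y - (high2 << m);

                BigInteger z0 = Multiply(low1, low2);
                BigInteger z2 = Multiply(high1, high2);
                BigInteger z1 = Multiply(low1 + high1, low2 + high2) - z0 - z2;

                return (z2 << (2 * m)) + (z1 << m) + z0;
            }

            private static int BitLength(BigInteger n)
            {
                int bits = 0;
                while (n > 0) { bits++; n >>= 1; }
                return bits;
            }
        }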

  • Why do people think SOAP is deprecated?

    - by user98q37479
    While browsing SO today I found this question, which starts with: "Sure, you're gonna tell me that SOAP is deprecated and all, well I'm forced to use it." I've found lots of statements like this one on SO up till now; this one just triggered me to ask this question. REST has its uses and SOAP has its uses; in some places they intersect in functionality, but they are not replaceable with one another. So I wonder, why do people think SOAP is "deprecated"? Is it ignorance? The complexity of SOAP and the WS-* specs? REST hype? What? If you think SOAP is deprecated, please tell me why. I'm curious!

  • What's so bad about pointers in C++?

    - by Martin Beckett
    To continue the discussion in "Why are pointers not recommended when coding with C++": suppose you have a class that encapsulates objects which need some initialisation to be valid - like a network socket.

        // Blah manages some data and transmits it over a socket
        class TcpSocket;  // forward declaration, so nice weak linkage

        class blah {
            // ... stuff
            TcpSocket *socket;

            ~blah() {
                // TcpSocket dtor handles disconnect
                delete socket;  // or better, wrap it in a smart pointer
            }
        };

    The ctor ensures that socket is initialised to NULL; then, later in the code, when I have the information needed to initialise the object:

        // initialising blah
        if ( !socket ) {
            // I know socket hasn't been created/connected;
            // create it in a known initialised state and handle any errors.
            // RAII is a good thing!
            socket = new TcpSocket(ip, port);
        }

        // and when I actually need to use it
        if ( socket ) {
            // if socket exists then it must be connected and valid
        }

    This seems better than having the socket on the stack, created in some 'pending' state at program start, and then having to continually check some isOK() or isConnected() function before every use. Additionally, if the TcpSocket ctor throws an exception, it's a lot easier to handle at the point a TCP connection is made rather than at program start. Obviously the socket is just an example, but I'm having a hard time thinking of when an encapsulated object with any sort of internal state shouldn't be created and initialised with new.
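
    For comparison, here is a minimal sketch of the smart-pointer variant hinted at in the destructor comment (assuming C++14; TcpSocket and Data are the same hypothetical types as above):

        #include <memory>
        #include <string>

        class blah {
            std::unique_ptr<TcpSocket> socket;  // null until connected

        public:
            void connect(const std::string &ip, int port) {
                if (!socket)
                    socket = std::make_unique<TcpSocket>(ip, port);  // RAII: throws on failure
            }

            void send(const Data &d) {
                if (socket)       // non-null still means connected and valid
                    socket->send(d);
            }
        };  // no hand-written dtor: unique_ptr deletes the socket automatically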

  • What relationship do Scrum or Lean in software have to industrial-engineering concepts like the theory of constraints?

    - by DeveloperDon
    In Scrum, work is delivered to customers through a series of sprints in which project work is time-boxed to a fixed number of days or weeks, usually 30 days. In lean software development, the goal is to deliver as soon as possible, permitting early feedback for the next iteration. Both techniques stress the importance of workflow in which software work product does not accumulate in development awaiting release at some future date. Both permit new or refined requirements and feedback from QA and customers to be acted on with as little delay as possible, based on priority.

    A few years ago I heard a lecture where the speaker talked briefly about a family of concepts from industrial engineering called the theory of constraints. In the factory, they use an operations model based on three components: drum, buffer, and rope. The drum synchronizes work product as it flows through the system. Buffers protect the system by holding output from one stage as it waits to be consumed by the next. The rope pulls product from one work station to the next.

    Historically, are these ideas part of the heritage of Scrum and Lean, or are they on a separate track? If we wanted to think about Scrum and Lean in terms of drum-buffer-rope, what are the parts? Drum = {daily scrum meeting, monthly release}? Buffer = {burn-down list, source control system}? Rope = {daily meeting, continuous integration server, monthly releases}?

    Industrial engineers define workflow in terms of different kinds of factories: I-factories (a straight pipeline - one input, one output), A-factories (many inputs, one output), V-factories (one input, many output products), and T-plants (many inputs, many outputs). If it applies, what kind of factory is most like Scrum or Lean, and why?

  • Who is most likely to need this high-quality, measurable, reliable approach to software? [closed]

    - by Marek Cruz
    Software engineering is the application of the principles of engineering to software. Trouble is, most of those who like to flatter themselves with the title "software engineer" don't do that. They just keep writing code and patching it until it's stable enough to foist off on users. That's not software engineering. Who is most likely to need the practice of software engineering (with all the project planning, requirements engineering, software design, implementation based on the design, testing, deployment, awareness of IEEE standards, metrics, security, dependability, usability, etc.)?

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given for writing unit tests which mock out all dependencies, and are thus completely isolated, is to ensure that when a bug exists, only the unit tests for that unit will fail. (Obviously, integration tests may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use of ensuring that a test fails only due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.
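
    For reference, this is the style of test in question - a minimal sketch using JUnit 4 and Mockito, where InvoiceService and InvoiceRepository are hypothetical classes invented for the example:

        import static org.junit.Assert.assertEquals;
        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.when;

        import org.junit.Test;

        public class InvoiceServiceTest {

            // Hypothetical collaborators, defined inline to keep the sketch self-contained.
            interface InvoiceRepository { double[] lineTotals(int invoiceId); }

            static class InvoiceService {
                private final InvoiceRepository repo;
                InvoiceService(InvoiceRepository repo) { this.repo = repo; }
                double total(int invoiceId) {
                    double sum = 0;
                    for (double t : repo.lineTotals(invoiceId)) sum += t;
                    return sum;
                }
            }

            @Test
            public void totalIsSumOfLineItems() {
                // The repository is mocked, so only a bug in InvoiceService
                // itself can make this test fail.
                InvoiceRepository repo = mock(InvoiceRepository.class);
                when(repo.lineTotals(42)).thenReturn(new double[] { 10.0, 5.5 });

                assertEquals(15.5, new InvoiceService(repo).total(42), 0.001);
            }
        }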

  • Commercial product using a GPL OS

    - by pfried
    We are planning to create a commercial product. The product consists of some MCUs and a small computer (we are developing on a Raspberry Pi at the moment). The computer needs an operating system, as we would like to keep things like WLAN and booting as simple as possible, and we create some software running on this computer (a node.js application). Most operating systems, like Arch Linux, are licensed under the GPL. The product we would sell contains the computer with the preinstalled OS and software; this system operates as a central access point to the MCU devices and is able to control them. We use others' software in our product, but we do not modify their source code. The product (the computer part) consists of a computer, an OS, and software we create. How does the use of a GPL OS affect the licence of our own code? Is there a possibility of avoiding the GPL for our own code, e.g. by shipping the software separately? Are there any effects on other components of our product, e.g. the MCU part? The node.js application delivers a web app to the client, where it is executed - are there any effects there (as we would like to sell parts of the code as an additional app on the app stores)? I know we make use of the work of the community and I respect this. The problem is: the software alone is kind of useless without the MCU devices. I'm not expecting legal advice.

  • Are flag variables an absolute evil?

    - by dukeofgaming
    I remember doing a couple of projects where I totally neglected using flags and ended up with better architecture/code; however, it is a common practice in other projects I work on, and when code grows and flags are added, IMHO code-spaghetti grows with it. Would you say there are any cases where using flags is a good practice, or even necessary? Or would you agree that flags in code are... red flags, and should be avoided/refactored? Me, I just get by with functions/methods that check for states in real time instead. Edit: I'm not talking about compiler flags.
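
    To make the two styles concrete, here is a minimal sketch (Java, with invented names; Shipment is a placeholder type) of a stored flag versus deriving the same answer from state on demand:

        // Placeholder type for the example.
        class Shipment {}

        // Flag-based: a boolean every code path must remember to update,
        // so it can silently drift out of sync with reality.
        class FlaggedOrder {
            private boolean shipped = false;
            void markShipped() { shipped = true; }
            boolean isShipped() { return shipped; }
        }

        // Derived state: the answer is computed on demand from a fact
        // that cannot drift.
        class DerivedOrder {
            private Shipment shipment;                // null until a shipment exists
            void attach(Shipment s) { shipment = s; }
            boolean isShipped() { return shipment != null; }
        }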

  • REST Service and CQRS

    - by Paul Wade
    I am struggling with the architecture on a new project. I am using the following patterns/technologies: CQRS (anything going in goes through a command), REST (using Web API), ASP.NET MVC, Angular (building a SPA), and NHibernate. I believe this provides good separation and should keep a very complex domain from growing into a giant set of services that mix queries with other business logic. The problem: the REST services have become non-RESTful. They contain methods like "SearchByDate", "SearchByItem", etc. Service methods that execute commands are called with a "web" model class; a new command is built in the service and executed. I feel like there is a lot of extra code. I expected this to be much different, but I wasn't around to keep things on track. Finally, my questions are these: I would have liked to see PUT Person (CreatePersonCommand), but then I realized that isn't RESTful either, is it? The PUT should carry a person entity, not a command. Can I make CQRS and REST work together, or am I going about this all wrong? And how do I handle service methods that don't fit into a REST model, where I am not performing CRUD on the object but rather executing some business logic? E.g., I don't want the UI to be responsible for how a shipment is "unshipped"; I want the service layer to worry about that.
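
    One common compromise is to expose the business operation itself as a resource, so the controller translates a thin web request into a command and the UI never learns how "unship" works. A minimal sketch in ASP.NET Web API 2 style, assuming attribute routing is enabled; ICommandBus and UnshipShipmentCommand are hypothetical names:

        using System.Web.Http;

        public interface ICommandBus { void Execute(object command); }

        public class UnshipShipmentCommand
        {
            public int ShipmentId { get; private set; }
            public UnshipShipmentCommand(int id) { ShipmentId = id; }
        }

        public class ShipmentsController : ApiController
        {
            private readonly ICommandBus _bus;

            public ShipmentsController(ICommandBus bus) { _bus = bus; }

            // POST /api/shipments/42/unship
            // The route names the business intent; the controller builds and
            // dispatches the command, keeping "how to unship" out of the UI.
            [HttpPost]
            [Route("api/shipments/{id}/unship")]
            public IHttpActionResult Unship(int id)
            {
                _bus.Execute(new UnshipShipmentCommand(id));
                return Ok();
            }
        }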

  • What are the best Microsoft Certifications to start with?

    - by emragins
    Background I have a bachelors in math and a certification. in C++ from 2007. Since then I've spent a lot of time working with python, C#, and started going through the ASP.NET certification materials. I'm starting to realize that the certification is going to take longer than anticipated and I'm not sure I want to spend the next 4-5 months studying before I have it completed. Most of resume shows teaching/tutoring experience with some low-level administration thrown in. Question If I want to get any programming position, which certifications would be best to start with? What would be the quickest and easiest to obtain, yet represent value for my employer? Are certifications even the way to go? If not, what would you suggest? Update I have several programs that I show off when I can (mostly games), and I'm about 75% through a C# application I hope to have done in the next week. Since most employers simply ask for a resume and not samples, what would be the best way to present the work to them?

  • Is what someone publishes on the Internet fair game when considering them for employment as a programmer?

    - by Jon Hopkins
    (Originally posted on Stack Overflow, but closed there and more relevant here.) We first interviewed a guy for a technical role and he was pretty good. Before the second interview we googled him and found his MySpace page, which could, to put it mildly, be regarded as inappropriate. Just to be clear, there was no doubt that it was his page (name, photos, matching biographical information and so on). The content was entirely personal and in no way related to his professional abilities or attitude. Is it fair to consider this when thinking about whether to offer him a job? In most situations my response would be that what goes on in someone's private life is their own business. However, for anyone technical who professes (implicitly or explicitly) to understand the Internet and the possibilities it offers, is posting things in a way which can so obviously be discovered a significant error of judgement? EDIT: Clarification - essentially it was a fairly graphic commentary on porn (but of, shall we say, a non-academic nature). I'm actually more interested in the general concept than the specific incident, as it's something we're likely to see more of in the future as people put more and more of themselves online. My concerns are not primarily about him and how he feels about such things (he's white, straight, male and about the last possible victim of discrimination on the planet in that sense), but more about how it reflects on the company that a very simple search (basically his name) returns these things, and that clients may also do it. We work in a relatively conservative industry.

  • Where can you find your first customers as a freelancer?

    - by Adam Smith
    I want to start doing freelance work, but no matter how I look at it, it seems like the best way to get customers and have work most of the time is to already be in the freelancing game. Most freelancers I've talked to have had the same customers over the years, or got new customers because their satisfied clients referred them. What I'd like to know from the successful freelancers here is: how do you start doing business when you haven't yet set foot in freelancing? I want to start small, creating websites that won't require me to hire other people, other than maybe a designer I already know. (I'd like to create desktop applications as well, but I think I should keep that for later, when I'm more experienced.) I thought about localized Google ads, or visiting companies and meeting the people in charge, but I wouldn't know which kinds of businesses to look for, or whether that's even a good way to approach this. Anyone care to share personal startup experiences / advice that can help future freelancers?

  • How do I find a programming internship / practice?

    - by user828584
    I'm taking the SAT soon, and quickly heading toward the chaos of figuring out which colleges I will be able to attend and how on Earth I'll be able to afford it. I would like to gain some experience in programming or web development, but I don't know where to look. I've been trying my best to learn over the past year, and have been doing alright with C# and the web languages (HTML, PHP, CSS, JavaScript). I've asked similar questions and rummaged through old questions on here, and they all say nothing specific. The main two points are always "contribute to open source projects" and "find a company and ask to be a part of it" - but I don't know how to find either of the two. I've looked online at GitHub and SourceForge and the like, but all the projects are already so far along that I just don't have the experience needed to bring myself up to speed with their code. I don't have much experience in code management, and I don't know how to get it. I would be ecstatic to start a project with a group of more experienced members, but, like I said, I have no clue how to find these people.

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring, I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, and where another solution is more appropriate (a context variable, a singleton, etc.). I have found that if the scope lifecycle is not well defined, it is very difficult and often failure-prone to manage injections this way. I have searched on this topic but have found little discussion of the pattern. Are there some good articles discussing where to use scoping, and what the required/suggested prerequisites for scoping are? I'm interested in both reference discussion and your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request- or session-scoped web variables, conversation scopes, and others.

    Edit: some simple background on custom scopes: Google Guice custom scope. Some definitions relevant to the above:

    "scoping" - a set of requirements that define what objects get injected at what time. A simple example of this is thread scope, based on a ThreadLocal. This scope would inject a variable based on what thread instantiated the class (see the sketch at the end of this post).

    "context variable" - a repository passed from one object to another holding relevant variables. Much like scoping, this is a more brute-force way of accessing variables based on the calling code. Example:

        methodOne(Context context){
            methodTwo(context);
        }

        methodTwo(Context context){
            ... // same context as method one, if called from method one
        }

    "globally scoped singleton" - following the singleton pattern, there is one object per application instance. This applies to scopes because there is a basic lifecycle to this object: only one of them is ever instantiated. Here's an example of a JSR-330 Singleton-scoped object:

        @Singleton
        public class SingletonExample {
            ...
        }

    Usage:

        public class One {
            @Inject SingletonExample example1;
        }

        public class Two {
            @Inject SingletonExample example2;
        }

        // after instantiation:
        // one.example1 == two.example2  is true
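
    For the thread-scope case mentioned above, here is a minimal sketch built directly on ThreadLocal (hypothetical names; real frameworks such as Guice wrap the same idea in their own Scope interface):

        import java.util.function.Supplier;

        public class ThreadScope<T> {
            private final ThreadLocal<T> instance;

            public ThreadScope(Supplier<T> factory) {
                // Each thread that asks for the object gets, and keeps, its own copy.
                this.instance = ThreadLocal.withInitial(factory);
            }

            public T get() {
                return instance.get();
            }
        }

        // Usage: every thread calling scope.get() sees its own instance.
        // ThreadScope<StringBuilder> scope = new ThreadScope<>(StringBuilder::new);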

  • Current iOS version/device statistics?

    - by hotpaw2
    The answer to this SO question has become stale: "iOS version/device statistics - where can I find them?" - because time-currency wasn't part of that question, and iOS version updates have been released since it was asked. Is there a website or other publicly available source which keeps a current or frequently updated list of the percentages of iOS devices and OS versions in use, perhaps through continual monitoring of app analytics, web site logs, or other means? And what device or OS information are iOS app analytics currently allowed to report, if any? (...assuming an appropriate privacy policy and adherence to it, of course.)

  • How to decide on a price for a project as a freelancer

    - by Shekhar_Pro
    I have seen similar questions on this SE site, but none comes close to a sure-shot answer, and many are rather subjective. So I am taking a website as an example, to make it more objective for you to decide the development price I should quote for the complete work. I would like specific figures. In the past I developed many projects for my classmates (computer science, and a few in .NET) when I was in college, and there I just arbitrarily quoted the price I would take depending on my mood and the customer's ability to pay - usually ranging from Rs. 500 (about $10 USD) to Rs. 1500 (about $30 USD). I have also developed a few websites, but those were open source and free. But this time, impressed by my work, I have got a client that wants a website developed similar to this: [ http://www.jeetle.in/ ]. So, taking this website as an example, tell me how much I should charge for the complete work, from design to payment-gateway implementation (excluding the charge the payment-gateway provider will take). A few pieces of information you might like to consider: I am the only developer on this project, if that makes any difference; I would be using ASP.NET and MSSQL Express for server-side processing and jQuery on the client; and the development period offered is about 4 to 6 weeks. It's like I know my work, but not how much I'm worth.
