Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 43/563

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, say I have the following matrix: [a b; c d]. I want a data structure that allows a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind will store no value for d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very large matrices and memory will be an issue.
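
    A minimal sketch of one possible shape for this, written in Python for brevity (a MATLAB version would use a cell array mixing numbers with function handles, realized via icdf); the names and layout here are illustrative, not a definitive design:

        import random
        from statistics import NormalDist

        # An entry is either a plain number or a callable that maps a
        # uniform(0,1) draw to a value via an inverse CDF -- deterministic
        # entries cost nothing extra, so the structure stays lean.
        stochastic_matrix = [
            [1, -1],
            [2, NormalDist(mu=20, sigma=40).inv_cdf],  # d ~ Normal(20, SD 40)
        ]

        def realize(matrix):
            """Replace every callable entry with a simulated value."""
            # random.random() is uniform on [0, 1); the zero endpoint is a
            # measure-zero edge case ignored in this sketch.
            return [[e(random.random()) if callable(e) else e for e in row]
                    for row in matrix]

        print(realize(stochastic_matrix))  # e.g. [[1, -1], [2, 3.71...]]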

    Read the article

  • How do you quantify competency in terms of time (years)?

    - by o.k.w
    While looking for a job via agencies some time ago, I kept getting questions from the recruitment agents, or in the application forms, like: How many years of experience do you have in Oracle, ASP.NET, J2EE, etc.? At first I answered faithfully... 5 yrs, 7 yrs, 2 yrs, none, a few months, and so on. Then I thought: I could be doing something shallow for 7 years and not be competent at it, simply because I was just doing minor support for a legacy system running SQL 2000 that required 10 days of my time over the past 7 years. Eventually I declined to answer such questions, and I wonder why they are still asked. Anyone who just graduated with a computer science degree can claim 3 to 4 years of experience in anything they 'touched' in the curriculum, which to me can be equivalent to zero or 10 years depending on how you look at it. That might have held true decades ago, when programmers and IT skills were of a very different nature. I might be wrong, but I really doubt 'time' or 'years' is a good gauge of competency or experience anymore. Any opinion/rebuttal is welcome!

    Read the article

  • How to learn to program [on hold]

    - by user94914
    I went to a community college and got a degree in computer science, but I found I learned very little about programming. As a result I landed myself an office-assistant job (I've been in it for a year now). I want to study on my own and apply for an internship or a very entry-level development job. I am wondering how a person should learn to program now. I feel that I might not be doing it correctly; I understand everyone has a different approach, but I am really clueless about what to do, as it seems I am 5-10 years away. 1) Read the old college programming textbook cover to cover, learn every single concept, do all the practice problems and master them (1-2 times, until error-free). I'm currently reading this Java book. 2) Work on any project, and keep googling and reading tutorials (including books on that specific language). I have been doing 1, but the progress is really slow, about 2-5 pages per hour over a 1000+ page book, and I feel really discouraged. I have a few more of them to go through (data structures, algorithm analysis, computer theory, operating systems). I wonder, is this the right method? I know it is going to take time, but I am hoping to get some advice from current programmers.

    Read the article

  • Do I have to change my company to make sure I'm good enough? [closed]

    - by superM
    I have been working as a developer since my fourth year of university. I'm getting my master's degree next year (in math modeling). I've worked for the same company the whole time, first on .NET, then on Android, and now .NET again. It seems I'm doing quite well at my current company. Some of my coursemates have tried to work at my company, but they failed after some time. This (and not only this) makes me think that I'm really worth something. But we're working on a very specific project. I was wondering if I am good enough and whether I could make it at another company. I love my current job, but sometimes I have a feeling that I'm not moving on. So, is it possible to keep improving while working at the same company, with the same technology, and on similar tasks? I know that most programmers move from one place to another fairly frequently. Is that the only way?

    Read the article

  • How should a non-IT manager secure the long-term maintenance and development of essential legacy software?

    - by user105977
    I've been hunting for a place to ask this question for quite a while; maybe this is the place, although I'm afraid it's not the kind of "question with an answer" this site would prefer. We are a small, very specialized benefits-administration firm with an extremely useful, robust collection of software, some written in COBOL but most in BASIC. Two full-time consultants have ably maintained and improved this system over more than 30 years. Needless to say, they will soon retire. (One of them has been desperate to retire for several years but is loyal to a fault and so hangs on, despite her husband's insistence that golf should take priority.) We started down the path of converting to a system developed by one of only three firms in the country that offer the type of software we use. We now feel that although this firm is theoretically capable of completing the conversion process, they don't have the resources to do so in a timely manner, and we have come to believe that they will be unable to offer the kind of service we need to run our business. (There's nothing like being able to set one's own priorities and having the authority to allocate one's resources as one sees fit.) Hardware is not a problem - we are able to emulate very effectively on modern servers. If COBOL and BASIC were modern languages, we'd be willing to take the risk that we could find replacements for our current consultants going forward. It seems like there ought to be a business model for an IT support firm that concentrates on legacy platforms like this and provides the programming and software-development talent to support a system like ours, removing from our backs the risk of finding the right programming talent and the job of convincing younger programmers that they can have a productive, rewarding career, in part in an old, non-sexy language like BASIC. Where do I find such firms?

    Read the article

  • Suggestions on switching from LAMP-based web design/development to game design/development

    - by Sandeepan Nath
    I have around 2.5 years of experience as a web developer cum designer, working mainly on the LAMP platform. Now I want to try game development (of the likes of first-person shooters such as Call of Duty). It is one of my dreams to some day succeed in making a profitable, popular, commercial game of this type. However, I have never done any kind of business, nor even freelancing, in the web domain. Okay, first things first: I am just starting, and I don't yet have any idea about the technologies, languages, and (game) engines involved. I would like this question to be a complete guide for people with similar interests.

    Best resources for getting a hold really fast: What would be the best approach to get a basic hold of the domain quickly? Resources aimed at programmers coming from, or experienced in, other domains would be ideal for me. E.g., if anybody asked me for a good resource for quickly learning PHP/MySQL, I would suggest books like "How to Do Everything with PHP & MySQL" - because it introduces all the basics of the domain (not the advanced things, which can be learnt later by practice and by searching Stack Overflow questions), and it contains some very nice working projects at the end, which help in applying the skills learnt in the chapters of the book. This is the best way for self-learners, I feel. I would appreciate a similar resource that connects all the concepts together to give the bigger picture. I have read about C, C++, C#, and Java being used in game programming, but I am not sure which language to go for (I have previously learnt a little C and Java). I have also read about game engines, but there would be various other concepts.

    Commonly accepted ways of learning: Should 3D games like these be attempted only after 2D games? Are there commonly accepted ways of learning this kind of game development? Like in web development, where we go for frameworks after practising well with the basic language, and for AJAX after getting properly done with simple page-reload processing. Apart from these, any useful tips (like language choices) would be much appreciated. Just as it is highly recommended to contribute to open-source web projects for recognition, are there similar open-source game projects? Thanks, Sandeepan

    Read the article

  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeff
    I've been programming in C# professionally for a bit over 4 years now. For those 4 years I've worked for a few small/medium companies, ranging from web/ad agencies to small industry-specific software shops to a small startup. I've mainly been building "business apps" in high-level (garbage-collected) languages, and my overall experience is that all of the work I've done could have been more professional. A lot of things were done incorrectly (in a rush), mainly due to cost: people always wanted something "now" and with the smallest amount of money spent. I kept thinking that maybe if I worked for a bigger company, or a company better suited to programmers, or somewhere with the money and time to really build something longer-term and more maintainable, I might have enjoyed my career more. I've never had a "mentor" to guide me through my 4-year career; I am pretty much a blog-/Google-/self-taught programmer, apart from my bachelor's IT degree. I've also observed that most of the so-called "senior" programmers in my working environment are really not that senior skill-wise. They are "senior" only because they've been programmers for a long time, but the code they write and the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better, they just want to get paid and do what they're told to do - which makes sense, and most of us are like that. Maybe that's why they are where they are now. But I don't want to become like them; I want to be better. I've reached a mental state where I no longer intend to be a programmer for the rest of my career. I've started to think maybe there are better things out there to work on. The more blogs I read and the more "best practices" I try, the more I feel I am drifting away from "my reality". But I am not a great programmer either, or I don't think I would be where I am now. I think 4-5 years is a stage that can be a step forward career-wise, or a step out of where you are. I just wanted to hear what others have to say about what I've mentioned above, and whether you've experienced a similar situation in your past programming career and how you dealt with it. Thanks.

    Read the article

  • How to become a solid Python web developer [closed]

    - by Estarius
    Possible Duplicate: How do I learn Python from zero to web development? I started Python recently with the goal of becoming a solid developer and eventually building a web application. However, as time goes by I wonder whether I am being optimal about how I will achieve my goal. I would compare it to a game, for example: to get better you must spend time playing and trying new things; if you just log in and sit in the lobby chatting, you are most likely not progressing. So far, this is my plan (feel free to comment on or judge it): review basic programming concepts; start coding slowly in Python; once comfortable in Python, learn about web development in Python; learn about those things we hear about: SQLAlchemy, MVC, TDD, Git, Agile (group projects). To achieve these things, I started the Learn Python the Hard Way exercises, which I am doing at the rate of 5 per day. I also started to read Think Python at the same time, and I'm planning to move on to Dive Into Python. As far as my research goes, these books, along with the Python documentation, are what is usually recommended for learning Python. I consider this to cover points 1 and 2. While learning Python is really great, my goal remains to do quality web development. I know there are books about Django etc.; however, I would like to become comfortable with any Python web development - without a framework and with a framework... any framework - and then be able to choose the one which best fits our needs. For this I would like some suggestions. Should I just get a book on Django, and will it apply to everything? What would be the best method to go from Python to web Python and not end up creating crappy code which turns into nightmares for other programmers? Then, finally, those "things we hear about": while I understand basically what they all do, I am fairly sure that, like everything, there are right and wrong ways of using them. Should I go through at least a whole book on each before starting to use them, or stick to their respective online documentation? Is there documentation that links their use to Python? Also, from looking at Django and Pyramid, they seem to use something other than MVC; while the Django model looks similar, the Pyramid one seems to cut out a whole part of it... Is learning MVC still worth it? Sorry for the wall of text; thanks in advance!
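
    To make the "without a framework" step concrete, here is a minimal framework-free WSGI app using only the standard library - roughly the layer that Django and Pyramid build on. A learning sketch only (the host and port are arbitrary, and wsgiref is not a production server):

        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            # environ describes the request; start_response sets status/headers.
            body = f"Hello from {environ.get('PATH_INFO', '/')}".encode("utf-8")
            start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8"),
                                      ("Content-Length", str(len(body)))])
            return [body]

        if __name__ == "__main__":
            with make_server("localhost", 8000, app) as server:
                server.serve_forever()  # then visit http://localhost:8000/hello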

    Read the article

  • Computer / Software Engineering vs Other Engineering Disciplines [closed]

    - by Mohammad Yaseen
    Since this is a rather specific question, I have tried my best to present it in a format which fits the style of this site. Please comment if it can be improved further. I have to choose my engineering discipline on 6th Nov. My interests are robotics, hardware-level programming, artificial intelligence, and back-end programming. I am currently working as a freelance developer, using mainly PHP and occasionally GWT. I am somewhat familiar with C# and Python too. I am not super good at programming, but I do like it. I am thinking of choosing Computer and Information Systems Engineering, as this is what I love, but all the eggheads of my city are going into Mechanical Engineering, and when I ask one of them "Why are you choosing this?" they say "It's my interest", and for the job and the money. Basically I am confused between CIS and Mechanical Engineering, specifically the job market for both. Since this is a programmers' site, I think the following questions are relevant. I am asking because I want advice from professionals in this field before diving too deep.

    1. Are you happy with your job/work and pay?
    2. Are you satisfied with the work environment and career growth?
    3. Do you feel OK (or great?) about the near and/or distant future of your industry?
    4. Why should a person choose computing if he has other choices - i.e., what does this industry offer in particular that other fields of work don't?
    5. This industry is subject to rapid change, and you have to learn continuously throughout your entire career. Does this learning and constant hard work pay off?
    6. In my country there is no hardware manufacturing, so most CIS graduates (like software engineers) work in software houses. What is the scenario in your country?
    7. Is a degree titled 'Software' necessary, or will companies take computer engineers too if they have relevant experience? I am asking this because I plan to move abroad for work.

    This is going to be something I'll do for the rest of my life, so I am a bit confused about the right choice. You can view the course outlines for both programmes below. Computer and Information Systems Engineering. Mechanical Engineering

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself:

    - Can be used interactively - there is some environment where programmers can enter commands one by one
    - Requires no more than one file - neither project files nor makefiles are required for running in batch mode
    - Can easily split code across multiple files - files can reference each other, or there is some support for modules
    - Has good support for data structures - supports structures like arrays, lists, and especially classes
    - Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries

    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

        Feature          C#        Python    shell scripting
        ---------------  --------- --------- ---------------
        Interactive      poor      strong    strong
        One file         poor      strong    strong
        Multiple files   strong    strong    moderate
        Data structures  strong    strong    poor
        Features         strong    strong    strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:

    - Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
    - Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
    - Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features

    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Please help me give this principle a name

    - by Brent Arias
    As a designer, I like providing interfaces that cater to a power/simplicity balance. For example, I think the LINQ designers followed that principle because they offered both dot-notation and query-notation. The first is more powerful, but the second is easier to read and follow. If you disagree with my assessment of LINQ, please try to see my point anyway; LINQ was just an example, my post is not about LINQ. I call this principle "dial-able power", but I'd like to know what other people call it. Certainly some will say "KISS" is the common term, but I see KISS as a superset, or a "consumerism" practice. Using LINQ as my example again: in my view, a team of programmers who always try to use query-notation over dot-notation is practicing KISS. Thus the LINQ designers practiced "dial-able power", whereas the LINQ consumers practice KISS. The two make beautiful music together. I'll give another example. Imagine a C# logging tool that has two signatures allowing two uses:

        void Write(string message);
        void Write(Func<string> messageCallback);

    The purpose of the two signatures is to fulfill these needs:

        // Every-day "simple" usage, nothing special.
        myLogger.Write("Something Happened" + error.ToString());

        // This is performance critical; do not call ToString() if logging is disabled.
        myLogger.Write(() => "Something Happened" + error.ToString());

    Having these overloads represents "dial-able power", because the consumer has the choice of a simple interface or a powerful interface. A KISS-loving consumer will use the simpler signature most of the time, and will allow the "busy"-looking signature when the power is needed. This also helps self-documentation, because usage of the powerful signature tells the reader that the code is performance critical. If the logger had only the powerful signature, then there would be no "dial-able power". So this comes full circle. I'm happy to keep my own "dial-able power" coinage if none yet exists, but I can't help thinking I'm missing an obvious designation for this practice. P.S. Another example that is related, but is not the same as "dial-able power", is Scott Meyers' principle: "make interfaces easy to use correctly, and hard to use incorrectly."
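
    The same dial-able pattern can be sketched in Python with a single method accepting either a string or a callable; the Logger class below is hypothetical, not a real library:

        class Logger:
            def __init__(self, enabled=True):
                self.enabled = enabled

            def write(self, message):
                if not self.enabled:
                    return                      # a callable is never evaluated
                if callable(message):
                    message = message()         # pay the formatting cost only now
                print(message)

        error = ValueError("boom")
        log = Logger(enabled=False)
        log.write("something happened")                          # simple signature
        log.write(lambda: "something happened: " + repr(error))  # powerful signature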

    Read the article

  • Why learn Flash Builder 4 (Flex) when I can just use Flash Professional?

    - by Jason McKenna
    I want to learn Flash Builder 4 (Flex) because I see so many jobs requesting experience with it. I also just like knowing stuff. I am also very interested in focusing on RIA development now. BUT... can anyone tell me CLEARLY why the heck I would ever use FLEX over Flash Pro? It is a time investment, so is it worth it? All I read are misguided posts about how Flash Pro is for games and banner ads, and Flex is for programmers and RIAs blah blah... this simply isn't so from my 9 years of contracting experience. I'm 99.9% certain that I can build anything a Flex developer can build, but using Flash Pro. I can build powerful AS3-driven apps for the desktop, mobile device, or browser, and I can link to databases with XML and I can import text files and communicate with ColdFusion and everything. The advantage with Flash Pro is that I can also easily and clearly animate transitions and build custom elements that look the way I want/need them to look for my specific client. Why would I want to use a bunch of pre-built components that drive my file sizes to the moon? Who is happy with a drag-n-drop button? Is Flex just a thing made for programmer people with no artistic inclination? What is the advantage of using it? It takes me back to Visual Basic class. Seems like a pain to have to use multiple tools to import crap from Flash Pro into Flex and yada yada... why, when I can do it all nicely in Flash Pro to begin with? Am I clueless, or missing some major piece of the puzzle? Thanks for any clarity. PS, I couldn't care less about the code editors. It ain't that bad, people. They make it out like the thing doesn't even respond to keyboard input or something. It does everything I need it to do anyway. Please help out here. If I just don't need to learn it, I don't want to waste the time.

    Read the article

  • Best practices for logging and tracing in .NET

    - by Levidad
    I've been reading a lot about tracing and logging, trying to find some golden rule for best practices in the matter, but there isn't one. People say that good programmers produce good tracing, but put it that way and it has to come from experience. I've also read similar questions here and around the internet, and they are not really the same thing I am asking, or do not have a satisfying answer, maybe because the questions lack some detail. So, folks say that tracing should more or less replicate the experience of debugging the application in cases where you can't attach a debugger. It should provide enough context so that you can see which path is taken at each control point in the application. Going deeper, you can even distinguish between tracing and event logging, in that "event logging is different from tracing in that it captures major states rather than detailed flow of control". Now, say I want to do my tracing and logging using only the standard .NET classes, those in the System.Diagnostics namespace. I figured that the TraceSource class is better for the job than the static Trace class, because I want to differentiate among the trace levels, and using the TraceSource class I can pass in a parameter informing the event type, while using the Trace class I must use Trace.WriteLineIf and then verify things like SourceSwitch.TraceInformation and SourceSwitch.TraceErrors, and it doesn't even have properties like TraceVerbose or TraceStart. With all that in mind, would you consider it good practice to do as follows?

    - Trace a "Start" event when beginning a method which represents a single logical operation or a pipeline, along with a string representation of the parameter values passed into the method.
    - Trace an "Information" event when inserting an item into the database.
    - Trace an "Information" event when taking one path or another in an important if/else statement.
    - Trace a "Critical" or "Error" event in a catch block, depending on whether the error is recoverable.
    - Trace a "Stop" event when finishing the execution of the method.

    And also, please clarify when it is best to trace "Verbose" and "Warning" event types. If you have examples of code with nice tracing/logging and are willing to share, that would be excellent. Note: I've found some good information here, but still not what I am looking for: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx Thanks in advance!
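
    The Start/Information/Stop discipline itself is language-neutral. As an analogy only (the question is about System.Diagnostics, not Python), here is the same shape sketched with Python's stdlib logging; the source name and event names are made up:

        import logging
        from contextlib import contextmanager

        logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
        log = logging.getLogger("orders")

        @contextmanager
        def traced(operation, **params):
            log.debug("Start %s %r", operation, params)   # ~ TraceEventType.Start
            try:
                yield
                log.debug("Stop %s", operation)           # ~ TraceEventType.Stop
            except Exception:
                log.exception("Error in %s", operation)   # ~ Error/Critical
                raise

        with traced("insert_order", order_id=42):
            log.info("inserted row into orders")          # ~ Information event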

    Read the article

  • What are the software design essentials? [closed]

    - by Craig Schwarze
    I've decided to create a one-page "cheat sheet" of essential software design principles for my programmers. It doesn't explain the principles in any great depth, but is simply there as a reference and a reminder. Here's what I've come up with - I would welcome your comments. What have I left out? What have I explained poorly? What is there that shouldn't be?

    Basic Design Principles
    - The Principle of Least Surprise - your solution should be obvious, predictable and consistent.
    - Keep It Simple Stupid (KISS) - the simplest solution is usually the best one.
    - You Ain't Gonna Need It (YAGNI) - create a solution for the current problem rather than for what might happen in the future.
    - Don't Repeat Yourself (DRY) - rigorously remove duplication from your design and code.

    Advanced Design Principles
    - Program to an interface, not an implementation - don't declare variables to be of a particular concrete class. Rather, declare them to an interface, and instantiate them using a creational pattern.
    - Favour composition over inheritance - don't overuse inheritance. In most cases, rich behaviour is best added by instantiating objects rather than by inheriting from classes.
    - Strive for loosely coupled designs - minimise the interdependencies between objects. They should be able to interact with minimal knowledge of each other, via small, tightly defined interfaces.
    - Principle of Least Knowledge - also called the "Law of Demeter", and colloquially summarised as "only talk to your friends". Specifically, a method in an object should only invoke methods on the object itself, objects passed as parameters to the method, any object the method creates, or any components of the object.

    SOLID Design Principles
    - Single Responsibility Principle - each class should have one well-defined purpose, and only one reason to change. This reduces the fragility of your code and makes it much more maintainable.
    - Open/Closed Principle - a class should be open to extension, but closed to modification. In practice, this means extracting the code that is most likely to change into another class, and then injecting it as required via an appropriate pattern.
    - Liskov Substitution Principle - subtypes must be substitutable for their base types. Essentially, get your inheritance right. In the classic example (sketched below), type square should not inherit from type rectangle, as they have different properties (you can independently set the sides of a rectangle). Instead, both should inherit from type shape.
    - Interface Segregation Principle - clients should not be forced to depend upon methods they do not use. Don't have fat interfaces; rather, split them up into smaller, behaviour-centric interfaces.
    - Dependency Inversion Principle - there are two parts to this principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. In modern development, this is often handled by an IoC (Inversion of Control) container.
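
    A concrete illustration of the Liskov item above, sketched in Python (the class names are the classic textbook ones, not from any particular codebase):

        from abc import ABC, abstractmethod

        class Shape(ABC):
            @abstractmethod
            def area(self): ...

        class Rectangle(Shape):
            def __init__(self, width, height):
                self.width, self.height = width, height
            def area(self):
                return self.width * self.height

        class Square(Shape):           # a sibling of Rectangle, not a subclass:
            def __init__(self, side):  # Square cannot honor independent width
                self.side = side       # and height like Rectangle does
            def area(self):
                return self.side ** 2

        def total_area(shapes):
            # Works with any Shape; every subtype is safely substitutable.
            return sum(s.area() for s in shapes)

        print(total_area([Rectangle(2, 3), Square(4)]))  # 22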

    Read the article

  • How much freedom should a programmer have in choosing a language and framework?

    - by Spencer
    I started working at a company that is primarily C#-oriented. We have a few people who like Java and JRuby, but the majority of programmers here like C#. I was hired because I have a lot of experience building web applications and because I lean towards newer technologies like JRuby on Rails or Node.js. I have recently started on a project building a web application, with a focus on getting a lot of stuff done in a short amount of time. The software lead has dictated that I use ASP.NET MVC 4 instead of Rails. That might be OK, except I don't know MVC 4, I don't know C#, and I am the only one responsible for creating the web application server and front-end UI. Wouldn't it make sense to use a framework that I already know extremely well (Rails) instead of MVC 4? The two reasons behind the decision were that the tech lead doesn't know JRuby/Rails and that there would be no way to reuse the code. Counter-arguments:

    - He won't be contributing to the code and is, frankly, not needed on this project, so it doesn't really matter whether he knows JRuby/Rails or not.
    - We actually can reuse the code, since we have a lot of Java apps that JRuby can pull code from, and vice versa. In fact, he has dedicated some resources to converting a Java library to C# instead of just running the Java library in the JRuby on Rails app - all because he doesn't like Java or JRuby.

    I have built many web applications, but using something unfamiliar is causing some spin-up, and I am unable to build an awesome application in as short a time as I'm used to. That would be fine - learning new technologies is important in this field - but the problem is that, for this project, we need to get a lot done in a short period of time. At what point should a developer be allowed to choose his tools? Is this dependent on the company? Does my company suck, or is this considered normal? Do greener pastures exist? Am I looking at this the wrong way? Bonus: Should I just keep my head down and move along at a snail's pace, or defy orders and go with what I know in order to make this project more successful? Edit: I have actually created a fully functional Rails application (on my own time) and showed it to the team, and it did not seem to matter. I am currently porting it to MVC 4 (slowly).

    Read the article

  • How to recover from finite-state-machine breakdown?

    - by Earl Grey
    My question may seem very academic, but I think it's a common problem, and seasoned developers and programmers will hopefully have some advice for avoiding the problem I mention in the title. By the way, what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all costs. By finite state machine I mean this: I have a UI with a few buttons, several session states relevant to that UI and to what it represents, some data whose values are partly displayed in the UI, and some external triggers (represented by callbacks from sensors) that I receive and handle. I made state diagrams to better map the relevant scenarios that are desirable and allowable in that UI and application. As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but it was enough to run a few brute-force tests in the UI and I quickly realized that there were still gaps in the behavior... I patched them. However, as each component depends on and behaves based on input from some other component, a certain input from the user or some external source triggers a chain of events and state changes. I have several components, and each behaves like this: trigger received on input - trigger and its sender analyzed - something output (a message, a state change) based on the analysis. The problem is, this is not completely self-contained, and my components (a database item, a session state, some button's state) COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event chain or the desirable scenario (the phone crashes, the battery empties, the phone turns off suddenly). This introduces an invalid situation into the system, from which the system potentially CANNOT recover. I see this (although people do not realize this is the problem) in many of my competitors' apps on the App Store; customers write things like "I added three documents, and after going here and there, I cannot open them, even if I see them", or "I recorded videos every day, but after recording a too-long video, I cannot turn off captions on them, and the button for captions doesn't work". These are just shortened examples; customers often describe it in more detail. From the descriptions and the behavior described in them, I assume that the particular app has had an FSM breakdown. So the ultimate question is: how can I avoid this, and how can I protect the system from blocking itself? EDIT: I am talking in the context of one view controller's view on the phone - one part of the application. I understand the MVC pattern, and I have separate modules for distinct functionality. Everything I describe is relevant to one canvas of the UI.
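
    One defensive pattern that addresses exactly this - validate every persisted state and every transition, and fall back to a known-safe state instead of blocking - sketched minimally in Python, with hypothetical state names:

        SAFE_STATE = "idle"
        TRANSITIONS = {
            "idle":      {"recording"},
            "recording": {"saving", "idle"},
            "saving":    {"idle"},
        }

        class SessionFSM:
            def __init__(self, persisted_state):
                # Recover: a crash or external writer may have left nonsense here.
                self.state = persisted_state if persisted_state in TRANSITIONS else SAFE_STATE

            def transition(self, new_state):
                if new_state in TRANSITIONS[self.state]:
                    self.state = new_state
                else:
                    self.state = SAFE_STATE   # log the anomaly, then recover

        fsm = SessionFSM("garbage")  # e.g. the phone died mid-write
        print(fsm.state)             # "idle" -- recovered rather than blocked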

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as for, while, and do structured programming generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements. In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, one specific system for multithreaded programming on multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
            cout << "Hello World from thread " << th_id << '\n';
        }

    This example is an excerpt from: http://en.wikipedia.org/wiki/OpenMP

    Alternatively, similar protection of threads from each other using semaphores, with functions wait() and signal(), might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example, things are pretty simple, and a simple review is enough to show that the wait() and signal() calls are matched, so that even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated, using multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions that can be called by many threads. The consequences of creating deadlock, or of failing to make things thread-safe, can be hard to manage. Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
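
    The structured-versus-manual contrast the question draws is easy to see in Python, where "with lock:" plays the role of the OpenMP critical block: the release is tied to the scope, so it cannot be forgotten, while a raw semaphore leaves the pairing to the programmer. A small sketch:

        import threading

        sem = threading.Semaphore(1)     # classic binary semaphore

        def worker_manual(th_id):
            sem.acquire()                # wait(sem): the matching release is on you
            try:
                print(f"Hello World from thread {th_id}")
            finally:
                sem.release()            # signal(sem)

        mutex = threading.Lock()

        def worker_structured(th_id):
            with mutex:                  # scoped, like '#pragma omp critical':
                print(f"Hello World from thread {th_id}")  # released automatically

        workers = [worker_manual, worker_structured] * 2
        threads = [threading.Thread(target=fn, args=(i,)) for i, fn in enumerate(workers)]
        for t in threads: t.start()
        for t in threads: t.join()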

    Read the article

  • Viable part-time career in IT/programming?

    - by Rider
    Hi, I'd like to ask for some career advice. Is there a viable job/career that can be done part-time in programming/IT for the long term? Right now, I am thinking about the website (PHP?) developer path. My background: I have a degree in computer science and have been a programmer/system analyst for almost 10 years. Lately I took a big break from programming and studied for a B.Arch. degree (yes, architecture), only to discover that architecture has offered zero (0) jobs where I'm from for 3 years already (and no, I am not going to move, and the grass is not greener in other places). I have never been particularly interested in programming; in fact I was bored by it. But I was always quite good at both programming and system analysis, and very valued by practically all my employers. On the other hand, I have never been valued or offered a good job in any other field (although I can do many things, like design, architecture, translations, documentation, teaching, etc.). I guess the human component has always been more important for me in programming jobs - I value all the good people I worked with, but not the projects. However, I have about zero skills or desire to be a project manager, and close to zero skills for selling myself. I like it best when I can do "my thing", have my niche, have ownership of some project. Right now my career plan is to do part-time programming and part-time yoga teaching. I have already started the yoga-teaching part. Do you think part-time programming is viable? And what niche works best for it? I have considered web development, QA, or software development in a company like I worked at before. However, my fear is that when you do programming part-time, you get the most boring coding work, only to see your colleagues move on to more interesting projects and up their respective career ladders. I also fear that part-timers are not especially needed, either. And since I don't share much enthusiasm for programming, I'd rather not be around young programmers boiling with geeky enthusiasm about coding; a QA mindset, with people from different backgrounds and life paths, might work better for me. Thanks for any advice, --Rider

    Read the article

  • Is the Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been computer programmer for 22 years, but somehow I never heard about it before. I consider my best job so far to be this small investment managing company with 30 employees and only three people in the IT department. I am no longer with them, but I had being working there for five years – my longest streak with any given company. To my surprise they scored extremely poor on the Joel Test. The only two questions I would answer “yes” are #4: Do you have a bug database? And #9: Do you use the best tools money can buy? Everything else is either “sometimes” or straight “no”. Here is what I liked about the company however: Good pay. They bragged about it to my face, and I bragged about it to their face, so it was almost like a family environment. I always knew the big picture. When writing code to solve a particular problem there were no ambiguity about the business nature of that problem. Even though we did not always had written specifications we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like doing it: no appointment necessary. Immediate feedback. Once we implement a solution and make business users happy they immediately let us know that, we (programmers) become heroes of the moment. No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictates. Flexibility. If I had mid-day dental appointment that is near my house rather than near the office, I would send email to the company: "FYI: I work from home today". As long as one of three IT guys was on the floor (to help traders in case their monitors go dark) they did not care where two others were. So the question thus becomes: How valuable is the Joel Test? Why bother with it?

    Read the article

  • Introducing pointers into a large software project

    - by stefan
    I have a fairly large software project written in C++. In it, there is a class Foo which represents a structure (by which I don't mean the programmer's struct) in which Foo-objects can be part of a Foo-object. Here's class Foo in its simplest form:

        class Foo {
        private:
            std::vector<unsigned int> indices;
        public:
            void addFooIndex(unsigned int);
            std::vector<unsigned int> getFooIndices();
        };

    Every Foo-object is currently stored in an object of class Bar:

        class Bar {
        private:
            std::vector<Foo> foos;
        public:
            void addFoo(Foo);
            std::vector<Foo> getFoos();
        };

    So if a Foo-object should represent a structure with an "inner" Foo-object, I currently do:

        Foo foo;
        Foo innerFoo;
        bar.addFoo(innerFoo);
        foo.addFooIndex(bar.getFoos().size() - 1);

    (The inner Foo has to be added before its index is recorded, so that size() - 1 points at it.) And to get it, I obviously use:

        Foo foo;
        for (unsigned int i = 0; i < foo.getFooIndices().size(); ++i) {
            Foo inner_foo;
            assert(foo.getFooIndices().at(i) < bar.getFoos().size());
            inner_foo = bar.getFoos().at(foo.getFooIndices().at(i));
        }

    So this is not a problem. It just works. But it's not the most elegant solution. I now want the inner Foos to be "more connected" with the Foo-object. It would be obvious to change class Foo to:

        class Foo {
        private:
            std::vector<Foo*> foo_pointers;
        public:
            void addFooPointer(Foo*);
            std::vector<Foo*> getFooPointers();
        };

    So now, to my question: how do I gently change this basic class without messing up the whole code? Is there a "clean way"?

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know: lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that have to be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF, 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T, 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO, 𠂊 (U+2008A) Han Character. You may miss some, depending on what fonts you have installed. These characters are all outside the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests, and the results are quite bad:

    - Opera has problems editing them (deleting requires 2 presses of backspace)
    - Notepad can't deal with them correctly (deleting requires 2 presses of backspace)
    - File-name editing in Windows dialogs is broken (deleting requires 2 presses of backspace)
    - All QT3 applications can't deal with them - they show two empty squares instead of one symbol
    - Python encodes such characters incorrectly when used directly: u'X' != unicode('X','utf-16') on some platforms, when X is a character outside the BMP
    - Python 2.5 unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings
    - StackOverflow seems to remove these characters from the text if they are edited directly as Unicode characters (these characters are shown using HTML Unicode escapes)
    - WinForms TextBox may generate an invalid string when limited with MaxLength

    It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... do you think that UTF-16 should be considered harmful?
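
    The surrogate-pair arithmetic behind these bugs is easy to demonstrate in Python 3 (where, since PEP 393 in 3.3, str counts code points rather than UTF-16 code units):

        # U+1D11E (MUSICAL SYMBOL G CLEF) is one code point but two UTF-16 code units.
        clef = "\U0001D11E"
        utf16 = clef.encode("utf-16-be")

        print(len(clef))        # 1 -- one code point
        print(len(utf16) // 2)  # 2 -- UTF-16 needs a surrogate pair here
        print(utf16.hex())      # d834dd1e: high surrogate D834, low surrogate DD1E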

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company, I found that 90% of the apps were either developed in a desktop environment against production data sources or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigorously. However, we have a number of legacy apps that never got migrated, and a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience behind this idea. Are there any good resources/literature on the topic?

    Read the article

  • Throwing exception from a property when my object state is invalid

    - by Rumi P.
    Microsoft guidelines say: "Avoid throwing exceptions from property getters", and I normally follow that. But my application uses LINQ to SQL, and there is the case where my object can be in an invalid state because somebody or something wrote nonsense into the database. Consider this toy example:

        [Table(Name="Rectangle")]
        public class Rectangle
        {
            [Column(Name="ID", IsPrimaryKey = true, IsDbGenerated = true)]
            public int ID { get; set; }

            [Column(Name="firstSide")]
            public double firstSide { get; set; }

            [Column(Name="secondSide")]
            public double secondSide { get; set; }

            public double sideRatio
            {
                get { return firstSide / secondSide; }
            }
        }

    Here, I could write code which ensures that my application never writes a Rectangle with a zero-length side into the database. But no matter how bulletproof I make my own code, somebody could open the database with a different application and create an invalid Rectangle, especially one with a 0 for secondSide. (For this example, please forget that it is possible to design the database in such a way that writing a side length of zero into the rectangle table is impossible; my domain model is very complex, and there are constraints on model state which cannot be expressed in a relational database.) So, the solution I am gravitating towards is to change the getter to:

        get
        {
            if (firstSide > 0 && secondSide > 0)
                return firstSide / secondSide;
            else
                throw new System.InvalidOperationException("All rectangle sides should have a positive length");
        }

    The reasoning behind not throwing exceptions from properties is that programmers should be able to use them without having to take precautions about catching and handling exceptions. But in this case, I think that it is OK to continue to use this property without such precautions: if the exception is thrown because my application wrote a non-positive rectangle side into the database, then this is a serious bug. It cannot and shouldn't be handled in the application, but there should be code which prevents it. It is good that the exception is visibly thrown, because that way the bug is caught. If the exception is thrown because a different application changed the data in the database, then handling it is outside the scope of my application, so I can't do anything about it if I catch it. Is this a good enough reasoning to get over the "avoid" part of the guideline and throw the exception? Or should I turn it into a method after all? Note that in the real code, the properties which can have an invalid state feel less like the result of a calculation, so they are "natural" properties, not methods.
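
    For comparison, the same tension exists with Python properties; a minimal sketch (this Rectangle is a plain stand-in for the idea, not the LINQ to SQL entity):

        class Rectangle:
            def __init__(self, first_side, second_side):
                self.first_side = first_side
                self.second_side = second_side

            @property
            def side_ratio(self):
                # Validate state that arrived from outside (e.g. a database row
                # another application corrupted) before computing the derived value.
                if self.first_side > 0 and self.second_side > 0:
                    return self.first_side / self.second_side
                raise ValueError("All rectangle sides should have a positive length")

        r = Rectangle(4, 0)  # imagine this row came from a corrupted database
        try:
            print(r.side_ratio)
        except ValueError as e:
            print(f"invalid state surfaced loudly: {e}")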

    Read the article
