Search Results

Search found 12769 results on 511 pages for 'multicore programming'.


  • Solving quadratic programming using R

    - by user702846
    I would like to solve the following quadratic programming problem using the ipop function from kernlab:

        min 0.5*x'*H*x + f'*x
        subject to:  A*x <= b
                     Aeq*x = beq
                     LB <= x <= UB

    where, in our example, H is a 3x3 matrix, f is 3x1, A is 2x3, b is 2x1, and LB and UB are both 3x1.

    Edit 1: My R code is:

        library(kernlab)
        H <- rbind(c(1,0,0), c(0,1,0), c(0,0,1))
        f <- rbind(0,0,0)
        A <- rbind(c(1,1,1), c(-1,-1,-1))
        b <- rbind(4.26, -1.73)
        LB <- rbind(0,0,0)
        UB <- rbind(100,100,100)

        > ipop(f, H, A, b, LB, UB, 0)
        Error in crossprod(r, q) : non-conformable arguments

    I know from MATLAB that it is something like this:

        H = eye(3);
        f = [0,0,0];
        nsamples = 3;
        eps = (sqrt(nsamples)-1)/sqrt(nsamples);
        A = ones(1,nsamples);
        A(2,:) = -ones(1,nsamples);
        b = [nsamples*(eps+1); nsamples*(eps-1)];
        Aeq = [];
        beq = [];
        LB = zeros(nsamples,1);
        UB = ones(nsamples,1).*1000;
        [beta,FVAL,EXITFLAG] = quadprog(H,f,A,b,Aeq,beq,LB,UB);

    and the answer is a 3x1 vector equal to [0.57, 0.57, 0.57]. However, when I try it in R using the ipop function from the kernlab library, ipop(f, H, A, b, LB, UB, 0), I get:

        Error in crossprod(r, q) : non-conformable arguments

    I appreciate any comment.
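
    One note that may help in mapping between the two solvers (this is my reading of the kernlab documentation, so treat it as an assumption worth verifying): quadprog and ipop do not solve the same canonical form. In LaTeX:

        % quadprog (MATLAB):
        \min_x \; \tfrac{1}{2} x^\top H x + f^\top x
        \quad \text{s.t.} \quad A x \le b, \quad LB \le x \le UB

        % ipop (kernlab), called as ipop(c, H, A, b, l, u, r):
        \min_x \; c^\top x + \tfrac{1}{2} x^\top H x
        \quad \text{s.t.} \quad b \le A x \le b + r, \quad l \le x \le u

    So with r = 0 the ipop call above asks for the equality A*x = b rather than the intended A*x <= b; that mismatch is worth ruling out alongside the dimension error.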

    Read the article

  • Planning a programming project by example (C# or C++)

    - by Lunan
    I am in the last year of my undergraduate degree and I am stumped by the lack of examples of large C++ and C# projects at my university. All the mini projects and assignments are based on text-based databases, which is inefficient, and on console display and commands, which is frustrating. I want to develop a complete prototype of corporate software dealing with inventory, sales, marketing, etc. -- everything you would usually find in SAP. I would be grateful if any of you could direct me to books, articles, or sample programs. Some of my questions are:

    - How should I plan this kind of project? Should each object (such as inventory) have its own process and program, with an integrator sitting over all the programs, or should I integrate everything into one big program?
    - How do I build and address a database? I have a little knowledge of databases and I know SQL, but I have never addressed a database in a program before. A database is tables, so how are you supposed to represent a table in an OOP way? (See the sketch below.)
    - For development, which is better: PHP and C++, or C# and ASP.NET? I am planning to use a web interface for forms and information, but a background program to handle the computation. .NET is tightly integrated and coding should be much faster, but I really wonder about performance compared to a PHP and C++ package.

    Thank you for the info.
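
    One common answer to the table-as-OOP question is to map each table to a class and hide the SQL behind a small repository. A minimal C# sketch, assuming a hypothetical InventoryItem table and connection string (none of these names come from the question):

        using System.Collections.Generic;
        using System.Data.SqlClient;  // or Microsoft.Data.SqlClient on newer stacks

        // One row of the (hypothetical) InventoryItem table becomes one object.
        public record InventoryItem(int Id, string Name, int Quantity);

        // The repository owns all SQL for that table, so the rest of the
        // program works with objects and never sees the database directly.
        public class InventoryRepository
        {
            private readonly string _connectionString;
            public InventoryRepository(string connectionString) => _connectionString = connectionString;

            public List<InventoryItem> GetAll()
            {
                var items = new List<InventoryItem>();
                using var connection = new SqlConnection(_connectionString);
                using var command = new SqlCommand(
                    "SELECT Id, Name, Quantity FROM InventoryItem", connection);
                connection.Open();
                using var reader = command.ExecuteReader();
                while (reader.Read())
                    items.Add(new InventoryItem(
                        reader.GetInt32(0), reader.GetString(1), reader.GetInt32(2)));
                return items;
            }
        }

    An ORM such as Entity Framework automates exactly this table-to-class mapping, which is why it is usually worth evaluating before hand-rolling repositories.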

    Read the article

  • Safe way for getting/finding a vertex in a graph with custom properties -> good programming practice

    - by Shadow
    Hi, I am writing a Graph class using the Boost Graph Library. I use custom vertex and edge properties, and a map to store/find the vertices/edges for a given property. I'm satisfied with how it works so far. However, I have a small problem where I'm not sure how to solve it nicely. The class provides a method

        Vertex getVertex(Vertexproperties v_prop)

    and a method

        bool hasVertex(Vertexproperties v_prop)

    The question now is: would you judge this as good programming practice in C++? My opinion is that I first have to check whether something is available before I can get it. So, before getting a vertex with a desired property, one has to check whether hasVertex() would return true for those properties. However, I would like to make getVertex() a bit more robust. At the moment it will segfault when one calls getVertex() directly without first checking whether the graph has a corresponding vertex. A first idea was to return a null pointer, or a pointer that points past the last stored vertex; for the latter, I haven't found out how to do it. But even with this "robust" version, one would have to check for correctness after getting a vertex, or one would run into a segfault when dereferencing that vertex pointer, for example. Therefore I am wondering whether it is OK to let getVertex() segfault if one does not check for availability beforehand.
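
    An alternative to both "let it crash" and "check, then get" is to fold the check into the lookup so it cannot be forgotten. A minimal sketch of that try-get shape, written in C# for concreteness (a dictionary stands in for the BGL internals; all names are invented); the C++ equivalent would return a std::optional<Vertex> or an iterator compared against end():

        using System.Collections.Generic;

        public class Graph<TProps, TVertex> where TProps : notnull
        {
            // Stand-in for the property->vertex map the question describes.
            private readonly Dictionary<TProps, TVertex> _verticesByProps = new();

            // Check and lookup in one step: the caller cannot obtain a vertex
            // without also learning whether it exists.
            // Usage: if (graph.TryGetVertex(props, out var v)) { /* use v */ }
            public bool TryGetVertex(TProps props, out TVertex vertex) =>
                _verticesByProps.TryGetValue(props, out vertex!);
        }

    The design point is that the invalid case is represented in the return value rather than in undefined behaviour, so forgetting the check becomes a compile-visible mistake instead of a segfault.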

    Read the article

  • Programming Practice

    - by deepti
        public DataTable UserUpdateTempSettings(int install_id, int install_map_id,
                                                string Setting_value, string LogFile)
        {
            SqlConnection oConnection = new SqlConnection(sConnectionString);
            DataSet oDataset = new DataSet();
            DataTable oDatatable = new DataTable();
            SqlDataAdapter MyDataAdapter = new SqlDataAdapter();
            try
            {
                oConnection.Open();
                cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", oConnection);
                cmd.Parameters.Add(new SqlParameter("@INSTALL_ID", install_id));
                cmd.Parameters.Add(new SqlParameter("@INSTALL_MAP_ID", install_map_id));
                cmd.Parameters.Add(new SqlParameter("@SETTING_VALUE", Setting_value));
                if (LogFile != "")
                {
                    cmd.Parameters.Add(new SqlParameter("@LOGFILE", LogFile));
                }
                cmd.CommandType = CommandType.StoredProcedure;
                MyDataAdapter.SelectCommand = cmd;
                cmd.ExecuteNonQuery();
                MyDataAdapter.Fill(oDataset);
                oDatatable = oDataset.Tables[0];
                return oDatatable;
            }
            catch (Exception ex)
            {
                Utils.ShowError(ex.Message);
                return oDatatable;
            }
            finally
            {
                if ((oConnection.State != ConnectionState.Closed) &&
                    (oConnection.State != ConnectionState.Broken))
                {
                    oConnection.Close();
                }
                oDataset = null;
                oDatatable = null;
                oConnection.Dispose();
                oConnection = null;
            }
        }

    I have used ExecuteNonQuery, which is normally not used with a data adapter; if I don't use it, I get an error. Is it bad programming practice to use ExecuteNonQuery with a data adapter?
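
    For what it's worth, SqlDataAdapter.Fill opens the connection if needed and executes the SelectCommand itself, so the separate ExecuteNonQuery call is normally redundant (and runs the stored procedure twice); if removing it causes an error, that error is worth diagnosing on its own. A trimmed sketch of the adapter-only shape (the stored procedure name, sConnectionString, and parameters are from the code above; the rest is illustrative):

        public DataTable UserUpdateTempSettings(int install_id, int install_map_id,
                                                string Setting_value, string LogFile)
        {
            // using-blocks replace the manual finally/Dispose bookkeeping.
            using (var connection = new SqlConnection(sConnectionString))
            using (var cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", connection))
            using (var adapter = new SqlDataAdapter(cmd))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@INSTALL_ID", install_id);
                cmd.Parameters.AddWithValue("@INSTALL_MAP_ID", install_map_id);
                cmd.Parameters.AddWithValue("@SETTING_VALUE", Setting_value);
                if (LogFile != "")
                    cmd.Parameters.AddWithValue("@LOGFILE", LogFile);

                var table = new DataTable();
                adapter.Fill(table);   // Fill opens the connection and runs the command itself.
                return table;
            }
        }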

    Read the article

  • How to approach socket programming between C# -> Java (Android)

    - by Alex
    I've recently knocked up a server/client app for Windows and Android that allows one to send a file from Windows to an Android phone over a socket connection. It works great for a single file, but trying to send multiple files in a single stream is causing me problems. I've also realised that, aside from the binary data, I will need to send messages over the socket to indicate error states and other application messages. I have little experience with network programming and am wondering what the best way forward is. Basically, the C# server side of the app just goes into a listening state and uses Socket.SendFile to transmit the file. On Android I use the standard Java Socket.getInputStream() to receive the file. That works great for a single file transfer, but how should I handle multiple files and error/messaging information? Do I need to use a different socket for each file? Should I be using a higher-level framework to handle this, or can I send everything over the single socket? Any other suggestions for frameworks or learning materials?
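
    One common approach is to keep the single socket and add message framing on top of it: every transmission starts with a small fixed-shape header giving a message type and a payload length, so chat text, error states, and file contents can share one stream. A minimal C# sketch of the sending side (the MessageType values and Framing class are invented for illustration):

        using System;
        using System.Buffers.Binary;
        using System.Net.Sockets;

        public enum MessageType : byte { Chat = 1, File = 2, Error = 3 }  // hypothetical values

        public static class Framing
        {
            // Frame layout: 1 type byte + 4-byte big-endian payload length + payload.
            // The receiver reads exactly five header bytes, then exactly Length bytes,
            // so multiple files and control messages can share one stream.
            public static void WriteFrame(NetworkStream stream, MessageType type, byte[] payload)
            {
                Span<byte> header = stackalloc byte[5];
                header[0] = (byte)type;
                BinaryPrimitives.WriteInt32BigEndian(header.Slice(1), payload.Length);
                stream.Write(header);
                stream.Write(payload);
            }
        }

    On the Android side, the header can be read back with DataInputStream.readByte() and readInt() (Java's readInt is big-endian, matching the header above), followed by readFully() for the payload.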

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

    So, if we start at the start, how did you get started in programming?

    When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

    How did you learn to program? Was there someone helping you out?

    Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!”

    When did you think this might be something that you actually wanted to do as a career?

    For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

    Do you think you learnt much about programming at university?

    Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually being used all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would never have done on my own, so it was important. And then there are knowledge-based courses which are not about programming itself but about things that have been programmed, like TCP. Those are really important as examples of how to approach things.

    Did you do any internships while you were at university?

    Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then.

    Was there just not a culture of looking after your code?

    There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that, they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn.

    I was going to ask how you found working with people who were more experienced than you…

    When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them.

    Do you think the lack of code reviews was a bad thing?

    I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

    Have you had any significant projects you’ve worked on outside of a job?

    When I was a teenager I wrote all sorts of stuff. I used to write games; I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

    Could you explain a little more about NAct?

    It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it.
    If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.

    Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases?

    It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother, because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

    When you’re using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code.

    You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes, so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump.

    If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

    Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

    If we do get 1000-core desktop PCs, do you think actors will scale up?

    It’s hard. There are always in the order of twenty to fifty actors in my whole program, because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism.
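
    [Editor's aside: for readers who haven't met the pattern, here is a minimal sketch of an actor in C#, in the spirit of what Alex describes. This is not NAct's actual API; the names and the Channel-based mailbox are invented for illustration. The actor owns its state, processes one message at a time, and hands results back as Tasks to be awaited:]

        using System.Threading.Channels;
        using System.Threading.Tasks;

        // A counter whose state is owned by exactly one logical thread of execution.
        // Nobody reads _count directly; they send messages and await replies.
        public class CounterActor
        {
            private readonly Channel<(int Delta, TaskCompletionSource<int> Reply)> _mailbox =
                Channel.CreateUnbounded<(int, TaskCompletionSource<int>)>();
            private int _count;  // only ever touched by the processing loop below

            public CounterActor() => Task.Run(ProcessMailboxAsync);

            // "Method call" surface: enqueue a message, await the answer.
            public Task<int> AddAsync(int delta)
            {
                var reply = new TaskCompletionSource<int>();
                _mailbox.Writer.TryWrite((delta, reply));
                return reply.Task;
            }

            private async Task ProcessMailboxAsync()
            {
                // One message at a time: no locks, no data races on _count.
                await foreach (var (delta, reply) in _mailbox.Reader.ReadAllAsync())
                {
                    _count += delta;
                    reply.SetResult(_count);
                }
            }
        }

    [Usage would be int total = await counter.AddAsync(5); — matching the point above that actor methods cannot return plain values synchronously, only Tasks to await.]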

    Do you think it helps if actors get baked into the language, similarly to Erlang?

    Erlang is excellent in that it has thread-local garbage collection. C# doesn’t, so there’s a limit to how well C# actors can possibly scale, because there’s a single garbage-collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language.

    Speaking of language design, do you have a favourite programming language?

    I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much.

    When you say it doesn’t have some of the niggles?

    C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming.

    Pattern-matching, in other words.

    That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#.

    And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

    I’m quite a pragmatist. I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it.

    What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile?

    I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong.

    On a slightly different note, how do you like to debug code?

    I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat, rather than trying to guess anything. I suppose it’s quite top-down.

    Is most of the time then spent in the debugger?

    Absolutely. If at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and judge whether it looks right or wrong.

    We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase?

    I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim, so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase.

    If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

    Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to be more or less understandable in the one place where it is.

    And what about tests? Do you think they help at all?

    I’ve never had the opportunity to learn a codebase which has had tests. I don’t know what it’s like!

    What about when you’re actually developing? How useful do you find tests in finding bugs or regressions?

    Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
    Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are.

    Does that mean that when you write your code the first time, there are no tests?

    Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature, because I’m not good enough at writing tests to think of bugs that I would have written into the code.

    So not writing regression tests for all of your code hasn’t affected you too badly?

    There are different kinds of features. Some of them just always work, and are just not flaky; they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time, I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time.

    How do you think your programming style has changed over time?

    I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote at Red Gate was the memory profiler UI, and that was an actor; I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible.

    If you had to summarise the essence of programming in a pithy sentence, how would you do it?

    Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

    So you think it’s an art rather than a science?

    It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art.
    You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice, and you’re not going to have joy doing it. Architects who are just engineering minds are not going to enjoy their job.

    I suppose engineering is the foundation on which you build the art.

    Exactly.

    How do you think programming is going to change over the next ten years?

    There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

    What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

    It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript.

    If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

    The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience, rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write a tiny program, I’m still going to get value out of writing it in C# using ReSharper, because I’m experienced enough with C# and ReSharper to be able to write code five times faster if I have that help.

    Any other visions of the future?

    Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

    More Red Gate Coder interviews

    Read the article

  • development server?

    - by ajsie
    For a project there will be me and one more programmer developing a web service, and I wonder what the development environment should be like. We need central storage (documents, pictures, business materials, etc.), file version handling, LAMP (for testing the web service), and so on. I have never set up an environment for this before and would like suggestions from experienced people on which tools to use for effective collaboration. What crossed my mind:

    Separate applications:
    - Google Wave (for communication back and forth, setting up guidelines, other information)
    - TeamViewer (desktop sharing)
    - Skype (calling)

    VPS (Ubuntu server):
    - SVN (version tracking)
    - FTP (central storage)
    - LAMP (testing the web service)
    - SSH (managing the VPS)

    Is this an appropriate programming environment? And regarding the VPS, is it best practice to use ONE VPS for all the tasks listed above? All suggestions and feedback are welcome!

    Read the article

  • Eclipse And Linux: Keyboard unusable after gnome-screen-saver

    - by martijn-courteaux
    Hi, I know this is not programming related, but I can't find any topics on Google or Ubuntu Forums. The problem is: when gnome-screensaver starts at a moment when Eclipse has the focus, and I then wake my laptop up again, Eclipse doesn't listen to keyboard events. To solve this I have to change the focus to another program and then back to Eclipse; then it works again. This isn't a serious problem, but it would be nice if someone could solve it. Thanks

    Read the article

  • How would you rank these programming skills in order of learning them? [closed]

    - by mumtaz
    As a general-purpose programmer, what should you learn first and what should you learn later on? Here are some skills I wonder about:

    - SQL
    - Regular expressions
    - Multi-threading / concurrency
    - Functional programming
    - Graphics
    - Mastery of your primary ("mother tongue") programming language's syntax/semantics/feature set
    - Mastery of your base class framework libraries
    - Version control systems
    - Unit testing
    - XML

    Do you know other important ones? Please specify them. On which skills should I focus first?

    Read the article

  • What is a recent programming language of choice for AI?

    - by Eduard Florinescu
    For a few decades the programming language of choice for AI was either Prolog or Lisp, along with a few others that are not so well known, most of them designed before the '70s. Domain-specific languages in many other fields have changed a great deal, but the AI domain has not seen as much of this as, say, web-specific or scripting languages. Are there recent programming languages that were intended to change the game in AI and to learn from the shortcomings of the earlier languages?

    Read the article

  • Should you answer "programming" for "What are your interests?" interview questions?

    - by Morgan Herlocker
    For interview questions that ask about personal hobbies, should you mention a bunch of tech activities you enjoy, like "I love building Java applets in my free time", or should you focus on non-programming activities to show you are well rounded? Does it show passion to say programming is a hobby, or does it sound disingenuous? I could see it going either way, so please back up your answer with some sound reasoning.

    Read the article

  • Can a 20-year-old programmer who has been programming daily since age 10 get a job that pays for what he knows?

    - by Dokkat
    I'm a programmer who has been programming daily since I was 10 years old. Is it possible to get a job with a salary that reflects my programming knowledge, or do I have to start in the same place as someone starting just now, since I've never had an actual job? I am not sure whether this kind of question is allowed here and could not find out; if it is not, could you kindly suggest a place to ask it? Sorry for any inconvenience.

    Read the article

  • How can I deal with the cargo-cult programming attitude?

    - by Aivar
    I have some computer science students in a compulsory introductory programming course who see a programming language as a set of magic spells which must be cast in order to achieve some effect (instead of seeing it as a flexible medium for expressing their idea of a solution). They tend to copy-paste code from previous, similar-looking assignments without considering the essence of the problem. Can anyone recommend some exercises or analogies to make these students more confident that they can, and should, understand the structure and meaning of each piece of code they write?

    Read the article

  • A new free book from Microsoft - Programming Windows 8 Apps with HTML, CSS, and JavaScript (Second Preview)

    - by TATWORTH
    At http://borntolearn.mslearn.net/btl/b/weblog/archive/2012/09/12/turn-your-bright-ideas-into-applications-with-the-new-mcsd.aspx there is mention of a new free book from Microsoft Press: Programming Windows 8 Apps with HTML, CSS, and JavaScript (Second Preview).

    The actual download page is http://blogs.msdn.com/b/microsoft_press/archive/2012/08/20/free-ebook-programming-windows-8-apps-with-html-css-and-javascript-second-preview.aspx

    Read the article

  • Is it OK to write a programming language in Jython? [closed]

    - by Christopher
    I've been looking at the tools for writing a programming language, and I had a bizarre idea: what if I wrote a full-blown programming language in Jython? Is that a practical solution? Would distribution and bundling be a problem? Could that make or break the language's success? In other words, is a programming language written in Jython a practical solution for production environments?

    Read the article

  • Status of VB6 / Best desktop application language with native compilation

    - by Sandeep Jindal
    Hi, I was looking for a desktop application programming language with one big constraint: I need to output a native executable. I explored multiple options:

    a) Java is not a very good option for desktop programming, but you can still use it. However, Java to EXE is a problem; only GCJ and Excelsior JET provide this.
    b) The .NET platform does not support native compilation; only a very few expensive tools can do the job.
    c) Python is not an option for native compilation. Right?
    d) VB6 is the option I am left with.

    From the above list, if I am correct, VB6 is the only and probably the best option I have. But VB6 itself has issues:

    a) It has not been under development since 2003.
    b) There are questions about support for the VB6 IDE on Vista.

    Thus my questions are:

    a) From the list of programming language options, do you want to add any more?
    b) If VB6 is the good/best option, looking at its development status, would you suggest using VB6 in this era?

    Regards, Sandeep Jindal

    Read the article

  • What ever happened to APL?

    - by lkessler
    When I was at university 30 years ago, I used a programming language called APL. I believe the acronym stood for "A Programming Language". The language was interpretive and was especially useful for array and matrix operations, with powerful operators and library functions to help with that. Did you use APL? Is this language still in use anywhere? Is it still available, either commercially or open source? I remember the combinatorics assignment we had. It was complex. It took a week of work for people to program it in PL/1, and those programs ranged from 500 to 1000 lines long. I wrote it in APL in under an hour. I left it at 10 lines for readability, although I should have been a purist and worked another hour to get it into 1 line. The PL/1 programs took 1 or 2 minutes to run on the IBM mainframe and solve the problem. The computer charge was $20. My APL program took 2 hours to run and the charge was $1,500, which was paid for by our Computer Science Department's budget. That's when I realized that a week of my time is worth way more than saving some dollars in someone else's budget. I got an A+ in the course. P.S. Don't miss this presentation, entitled "APL one of the greatest programming languages ever".

    Read the article

  • Are there any prototype-based languages with a whole development cycle?

    - by Kaveh Shahbazian
    Are there any real-world prototype-based programming languages with a whole development cycle? By "a whole development cycle" I mean what Ruby and Python have: web frameworks, scripting/interacting with the system, tools for debugging, profiling, etc. Thank you.

    A brief note on PBPLs (let's call these languages PBPLs: prototype-based programming languages): There are some PBPLs out there. Some are widely used, like JavaScript (which Node.js may bring into this field, or may not!). Another is ActionScript, which is also a PBPL but tightly bound to the Flash VM (is it correct to say so?). Of the less known ones I can mention Lua, which has a strong reputation in game development (mostly spread by WoW) but never took off as a general-purpose language; Lua's table concept can provide some sort of prototype-based programming facility. There is also JScript (the Windows scripting tool), which has been made rather pointless by the newcomer PowerShell (I have used JScript to manipulate IIS, but I never understood what JScript is!). Others can be named, like Io (indeed very, very neat, you will fall in love with it; absolutely impossible to use) and REBOL (what is this all about, a proprietary scripting tool? You must be kidding!) and newLISP (which is actually a full language, but no one has ever heard of it). For sure there are many more that could be listed here, but either I do not remember them or I did not understand them as a real-world thing (like Self).

    Read the article

  • Which language should I use to program a GUI application?

    - by Roman
    I would like to write a GUI application for management of information (text documents). In more detail, it should be similar to TiddlyWiki. I would like to have some good visual effects (like a nice representation of tree structures, which you can rotate, and some sound). I would also like to include communication via the Internet (for sharing and collaboration). It should include some features of such applications as a web browser, a word processor, and Skype. Which programming language should I use? I like the idea of using JavaScript (like TiddlyWiki). The good thing about that is that users do not have to install anything: they open a file in a browser and it works! The bad thing is that JavaScript cannot communicate over the Internet with other applications. I think the choice of programming language, in my case, is conditioned by two things: what can be done with the language (which restrictions there are), and how easy it is to program in. I would like to have "blocks" which can do a lot of things (rather than programming them myself and, in this way, reinventing the wheel). ADDED: I would like to make it platform independent.

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:
    - Participation in software product definitions, including requirements analysis and specification
    - Development and refinement of simulations or prototypes to confirm requirements
    - Feasibility and cost-benefit analysis, including the choice of architecture and framework
    - Application and database design
    - Implementation (e.g. installation, configuration, customization, integration, data migration)
    - Authoring of documentation needed by users and partners
    - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
    - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
    - Maintenance

    Qualifications:
    - Bachelor's degree in computer science or software engineering
    - Several years of professional programming experience
    - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
    - Proficiency in the following principles, practices, and techniques: accessibility, interoperability, usability, security (especially prevention of SQL injection and cross-site scripting (XSS) attacks), object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism), relational database design (e.g. normalization, orthogonality), search engine optimization (SEO), Asynchronous JavaScript and XML (AJAX)
    - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET, ADO.NET (including ADO.NET Entity Framework), ASP.NET (including ASP.NET MVC Framework), Windows Presentation Foundation (WPF), Language Integrated Query (LINQ), Extensible Application Markup Language (XAML), jQuery, Transact-SQL (T-SQL), Microsoft Visual Studio, Microsoft Internet Information Services (IIS), Microsoft SQL Server, Adobe Photoshop

    Read the article

  • Kernighan & Ritchie word count example program in a functional language

    - by Frank
    I have been reading a little bit about functional programming on the web lately, and I think I have a basic idea of the concepts behind it. I'm curious how everyday programming problems which involve some kind of state are solved in a pure functional programming language. For example: how would the word count program from the book 'The C Programming Language' be implemented in a pure functional language? Any contributions are welcome as long as the solution is in a pure functional style. Here's the word count C code from the book:

        #include <stdio.h>

        #define IN  1  /* inside a word */
        #define OUT 0  /* outside a word */

        /* count lines, words, and characters in input */
        main()
        {
            int c, nl, nw, nc, state;

            state = OUT;
            nl = nw = nc = 0;
            while ((c = getchar()) != EOF) {
                ++nc;
                if (c == '\n')
                    ++nl;
                if (c == ' ' || c == '\n' || c == '\t')
                    state = OUT;
                else if (state == OUT) {
                    state = IN;
                    ++nw;
                }
            }
            printf("%d %d %d\n", nl, nw, nc);
        }
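
    As one illustration of the functional shape (written in C#, the language most of this page's snippets use, rather than a classic functional language; the names are my own): the while-loop's four mutable variables become a single immutable state value, and a fold (LINQ's Aggregate) maps each character to a new state.

        using System;
        using System.Linq;

        class WordCount
        {
            // State carried through the fold: line, word, char counts plus the in-word flag.
            readonly record struct Counts(int Lines, int Words, int Chars, bool InWord);

            static void Main()
            {
                string input = Console.In.ReadToEnd();

                // A pure fold replaces the while-loop: each character maps the
                // old state to a new one; nothing is mutated along the way.
                var result = input.Aggregate(
                    new Counts(0, 0, 0, false),
                    (s, c) => c switch
                    {
                        '\n'        => new Counts(s.Lines + 1, s.Words, s.Chars + 1, false),
                        ' ' or '\t' => s with { Chars = s.Chars + 1, InWord = false },
                        _           => new Counts(s.Lines,
                                                  s.InWord ? s.Words : s.Words + 1,
                                                  s.Chars + 1,
                                                  true),
                    });

                Console.WriteLine($"{result.Lines} {result.Words} {result.Chars}");
            }
        }

    A Haskell or Lisp answer would have the same skeleton: a fold over the character stream with an explicit state value instead of mutable counters.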

    Read the article

  • Should a programmer have mastery over C++

    - by Yogendra
    I was wondering whether it is necessary for programmers to have expertise in at least one programming language. Programming languages like C#, Java, VB.NET, etc. change every year or two. Should a programmer have mastery over C++, which is a stable language and rarely undergoes changes? I am a C# developer and have been using it for about 7 years now, and I still don't have mastery of it.

    EDIT: I think my question is being misunderstood. I am not against changes or evolution. I love the new features and abstraction provided by languages such as C#, VB, and Java, and I keep looking forward to new features if they make a programmer's life easier. But this also makes these languages very difficult to master: they are continuously evolving. Languages like C++ have a slow evolution cycle. So, given this scenario, is it helpful to be a master of C++? This is what my original question meant.

    Note: Based on the answers by friends below, I have understood that languages and frameworks are tools for expressing concepts, and that it might be a good idea to express the concepts in different programming languages.

    Read the article

  • Please suggest other ways of communicating between server & client.

    - by Geo
    I'm writing a TCP chat server (the programming language does not matter). It's a school project for my nephew, so it won't be released, and all the questions I'm asking are just for my knowledge. :) Some of the things it will support:

    - chatting between users (doh)
    - it will be multithreaded
    - sending each other files

    I know I could easily get away with all the stuff above if I went with serialization and just sent objects from client to server and back. But if I do that, it will be limited to a specific programming language (meaning clients written in other programming languages may not be able to deserialize the objects). What would be the way to go so that clients written in other languages could be supported? One way to go, off the top of my head, would be this: the server and the client communicate by sending messages and chunks (in lieu of better names). Here's what I mean: every time the client/server wants to send something (a text message or a file), it will first send a simple newline-terminated text message with the number of chunks it will send. Example:

        command 4,20,30,40,50

    where command would be something like instant-message or file, 4 would be the number of chunks to be sent, 20 would be the size in bytes of the first chunk, 30 of the second, and so forth. After the message is sent, the client/server will start sending chunks (of the sizes mentioned in the sent message). What do you think about implementing the client/server communication this way? What better options are there?
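
    For what it's worth, a text header like this is easy to parse in any language, which addresses the cross-language concern. A minimal C# sketch of the receiving side's header parsing (the Header record and Protocol class are invented for illustration; the line format is exactly the one proposed above):

        using System;
        using System.IO;

        public record Header(string Command, int[] ChunkSizes);

        public static class Protocol
        {
            // Parses a header line of the form "command 4,20,30,40,50":
            // a command word, then the chunk count, then one size per chunk.
            public static Header ParseHeader(string line)
            {
                string[] parts = line.Split(' ', 2);
                string command = parts[0];

                int[] numbers = Array.ConvertAll(parts[1].Split(','), int.Parse);
                int chunkCount = numbers[0];
                int[] sizes = numbers[1..];

                if (sizes.Length != chunkCount)
                    throw new InvalidDataException(
                        $"Header promised {chunkCount} chunks but listed {sizes.Length} sizes.");
                return new Header(command, sizes);
            }
        }

    After the header, the receiver reads exactly ChunkSizes[i] bytes per chunk from the stream, so message boundaries stay unambiguous no matter what language the client is written in.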

    Read the article

  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series of programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point. Right now I'm using Git to keep track of the code from lesson to lesson. Each lesson has its own branch:

        lesson1: A--B--C
                        \
        lesson2:         D--E--F
                                \
        lesson3:                 G--H--I

    However, suppose that now I want to make it easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3:

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K
                                \        \
        lesson3:                 G--H--I--L

    And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.)

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K--M
                                \        \  \
        lesson3:                 G--H--I--L--N

    Here are the problems I imagine having:

    - If I had many lessons and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off.
    - When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time?
    - If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way.
    - When I look at the history, I imagine becoming terribly confused about what I've done. Compare the above diagram to the hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems):

        lesson1: A--B--C--J
                           \
        lesson2:            D2--E2--F2--M
                                         \
        lesson3:                          G2--H2--I2

    Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use a centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?

    Read the article
