Search Results

Search found 12661 results on 507 pages for 'tacit programming'.


  • What programming language matches this description? [on hold]

    - by Benubird
    I am looking for a functional language that is basically dynamic programming - i.e. one where functions are first-class objects - but where all function calls are asynchronous by default; i.e. you define a function X(a,b) = (Y(a)+Z(b)), and when X() is called, it sees that it is waiting for the return values of two functions, runs one in the current thread, and spawns a new thread to run the other. The future is very much parallel processing - multiple cores, multiple machines, the internet of things, etc. - and I was wondering if there is a language specifically designed to make this kind of parallelization easy. I have so far only used imperative languages (C, PHP, Java, Ruby, etc.), so I don't know anything about what kinds of functional languages are available.
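
    Not a recommendation for a particular language, but here is a minimal sketch of the evaluation model being described, written in Java with CompletableFuture (the functions y and z are just stand-ins): one operand is evaluated on the current thread while the other runs on a separate one, and the caller blocks only when it needs both results.

        import java.util.concurrent.CompletableFuture;

        public class AsyncByDefault {
            // stand-ins for the asker's Y(a) and Z(b); assume they are slow
            static int y(int a) { return a * 2; }
            static int z(int b) { return b + 40; }

            // X(a, b) = Y(a) + Z(b): Z runs on another thread while Y runs here
            static int x(int a, int b) {
                CompletableFuture<Integer> zf = CompletableFuture.supplyAsync(() -> z(b));
                int yv = y(a);           // evaluated in the current thread
                return yv + zf.join();   // blocks only if Z has not finished yet
            }

            public static void main(String[] args) {
                System.out.println(x(1, 2)); // 44
            }
        }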

    Read the article

  • Is copy-paste programming bad?

    - by ring bearer
    With plain Google, as well as Google Code Search, it is easy to find out how to program against some resource or solve a certain problem (such as using a Java class, or an FTP block in Perl), so developers are tempted to simply copy and paste the code (a form of re-use) - is this incompetence? I have done this myself, though I think I am a better programmer than many others I have seen. Who has the time to RTFM? In this age of information abundance, I do not think that copy-paste programming is bad. Isn't that what sites like Stack Overflow encourage anyway? People ask, "here is my problem - how do I solve it?", someone posts complete code, and the person who asked the question simply copy-pastes the most voted answer, no matter how small the problem is. I am working with a bunch of young coders who rely heavily on the internet to get their job done. I see convenience in copy-pasting and modifying code to get the job done (for example, you may be quite good with algorithms but not know how to use a BufferedReader in Java - would you read the complete Javadoc for BufferedReader, or look up an example of using it somewhere?). What are the real dangers of copy-paste coding that can impact their competency?
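
    For what it's worth, the BufferedReader case mentioned above is exactly the kind of snippet that gets copied; a minimal sketch of the usual idiom (reading a hypothetical input.txt line by line), for reference:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class ReadLines {
            public static void main(String[] args) throws IOException {
                // try-with-resources closes the reader even if an exception is thrown
                try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }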

    Read the article

  • Functional programming and stateful algorithms

    - by bigstones
    I'm learning functional programming with Haskell. In the meantime I'm studying automata theory, and as the two seem to fit well together I'm writing a small library to play with automata. Here's the problem that made me ask the question. While studying a way to evaluate a state's reachability, I got the idea that a simple recursive algorithm would be quite inefficient, because some paths might share some states and I might end up evaluating them more than once. For example, when evaluating the reachability of g from a, I'd have to exclude f both while checking the path through d and while checking the path through c. So my idea is that an algorithm working in parallel on many paths and updating a shared record of excluded states might be great, but that's too much for me. I've seen that in some simple recursion cases one can pass state as an argument, and that's what I have to do here, because I pass forward the list of states I've gone through to avoid loops. But is there a way to pass that list backwards as well, like returning it in a tuple together with the boolean result of my canReach function? (Although this feels a bit forced.) Besides the validity of my example case, what other techniques are available to solve this kind of problem? I feel like these must be common enough that there are solutions along the lines of what happens with fold* or map. So far, reading learnyouahaskell.com, I haven't found any, but consider that I haven't touched monads yet. (If interested, I posted my code on Code Review.)
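
    The shape being asked about - passing the visited set forward and also handing it back alongside the boolean result, rather than mutating shared state - can be sketched in any language; here it is in rough Java with a hypothetical string-keyed adjacency map, just to show the pattern. In Haskell, this forward-and-backward threading is essentially what the State monad packages up.

        import java.util.*;

        public class Reachability {
            // result of a search: the answer plus the updated set of excluded states
            record Result(boolean found, Set<String> visited) {}

            // canReach threads the visited set through and hands it back,
            // so work done on one branch is not repeated on its siblings
            static Result canReach(Map<String, List<String>> graph, String from, String to, Set<String> visited) {
                if (from.equals(to)) return new Result(true, visited);
                visited.add(from);   // in Haskell this would be a new set, not a mutation
                for (String next : graph.getOrDefault(from, List.of())) {
                    if (visited.contains(next)) continue;
                    Result r = canReach(graph, next, to, visited);
                    visited = r.visited();          // carry the exclusions backwards
                    if (r.found()) return new Result(true, visited);
                }
                return new Result(false, visited);
            }

            public static void main(String[] args) {
                Map<String, List<String>> g = Map.of(
                        "a", List.of("b", "c"),
                        "b", List.of("d"),
                        "c", List.of("d"),
                        "d", List.of());
                System.out.println(canReach(g, "a", "d", new HashSet<>()).found()); // true
                System.out.println(canReach(g, "d", "a", new HashSet<>()).found()); // false
            }
        }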

    Read the article

  • Thread safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and have always accepted that GUI interactions have to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions have to be called on gtk_main. Similarly, I looked at SWT to see in what ways it differs from Swing and whether it is worth using, and again found the UI-thread idea, and I am sure these three are not the only toolkits to use this model. I was wondering if there is a reason for this design, i.e. what is the reason for keeping UI modifications isolated to a single thread? I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns pass on to the user of the API. Is there a limit imposed by an operating system? Is there a good reason these concerns are not 'hidden' (i.e. some form of synchronization that is invisible to the user)? Is there any (even purely conceptual) way of creating a thread-safe graphics library, or is such a thing actually impossible? I found http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness, which seems to describe GTK differently from how I understood it (although my understanding was the same as many people's). How does this differ from other toolkits? Is it possible to implement this in Swing (given that the EDT model does not actually prevent access from other threads, it just often leads to exceptions)?
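
    Whatever the underlying design reason, the practical consequence in Swing is that background work marshals its UI updates back onto the EDT - a minimal sketch using SwingUtilities.invokeLater:

        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        public class EdtDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("EDT demo");
                    JLabel label = new JLabel("working...");
                    frame.add(label);
                    frame.setSize(300, 100);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setVisible(true);

                    // long-running work happens off the EDT...
                    new Thread(() -> {
                        try { Thread.sleep(2000); } catch (InterruptedException e) { return; }
                        // ...but the component update is marshalled back onto it
                        SwingUtilities.invokeLater(() -> label.setText("done"));
                    }).start();
                });
            }
        }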

    Read the article

  • C# Threading Background Process - Programming - How to?

    - by Magic
    Hello... I have been given the horrible task of doing this: launch the website, take a screenshot, fill in the form details, click Next, take a screenshot... rinse, repeat. With various combinations, this comes to 300 screenshots, and I have to do it for 4 different browsers: Chrome, Firefox, IE 6 and IE 7. I cannot use tools that capture and store screenshots, such as SnagIt. I need to take a screenshot, copy it into a Word document, then take the next screenshot and copy it into the Word document as well. I thought I would write a tiny utility to help me do this. Here is the requirement spec I put together: an executable which, once launched, seats itself in the system tray; while it is active, every Print Scrn key press should append the captured screen to a Word document at a defined path (either a default or a user-defined one); the document is saved periodically. Now, my question is: if I am going to develop this in C# (a WinForms application), how do I go about it? I can do a fair bit of C# programming and I am willing to learn, but I am not able to locate references on how to make a process run in the background and, while it runs, capture the Print Scrn command. Can you folks point me to the right material where I can learn this? Theoretical references should suffice, but practical references would be even better. Thanks!
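
    Not the Word-document half of the task, but the screenshot half can be done programmatically instead of listening for Print Scrn - a rough Java sketch with java.awt.Robot, offered only as an illustration of the approach (I believe the C# analogue is Graphics.CopyFromScreen, and writing into Word would go through the Office interop API):

        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class Screenshot {
            public static void main(String[] args) throws Exception {
                // grab the whole primary screen and save it as a numbered PNG
                Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                BufferedImage capture = new Robot().createScreenCapture(screen);
                File out = new File("step-" + System.currentTimeMillis() + ".png");
                ImageIO.write(capture, "png", out);
                System.out.println("saved " + out.getName());
            }
        }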

    Read the article

  • How to get better at solving Dynamic programming problems

    - by newbie
    I recently came across this question: "You are given a boolean expression consisting of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of ways to parenthesize the expression such that it will evaluate to true. For example, there is only 1 way to parenthesize 'true and false xor true' such that it evaluates to true." I knew it is a dynamic programming problem, so I tried to come up with a solution on my own, which is as follows. Suppose we have an expression A.B.C.....D, where '.' represents any of the operations and, or, xor, and the capital letters represent true or false. Let's say the number of ways this expression of size K can produce true is N. When a new boolean value E is appended to the expression, there are 2 ways to parenthesize the new expression:
    1. ((A.B.C.....D).E), i.e. with every possible parenthesization of A.B.C.....D we add E at the end.
    2. (A.B.C.(D.E)), i.e. evaluate D.E first and then count the number of ways this expression of size K can produce true.
    Suppose T[K] is the number of ways the expression of size K produces true; then T[K] = val1 + val2 + val3, where val1, val2 and val3 are calculated as follows.
    1) When E is grouped with D: (i) it does not change the value of D, or (ii) it inverts the value of D. In the first case, val1 = T[K] = N (as this reduces to the initial A.B.C.....D expression). In the second case, re-evaluate T[K] with the value of D reversed, and that is val1.
    2) When E is grouped with the whole expression:
    // val2 counts the 'true' results E produces with expressions which gave 'true' among all parenthesized instances of A.B.C.....D
    (i) if true.E = true then val2 = N; (ii) if true.E = false then val2 = 0.
    // val3 counts the 'true' results E produces with expressions which gave 'false' among all parenthesized instances of A.B.C.....D
    (iii) if false.E = true then val3 = (2^(K-2) - N) = M, i.e. the number of ways the expression of size K produces false (2^(K-2) being the number of ways to parenthesize an expression of size K); (iv) if false.E = false then val3 = 0.
    This is the basic idea I had in mind, but when I checked the solution at http://people.csail.mit.edu/bdean/6.046/dp/dp_9.swf the approach there was completely different. Can someone tell me what I am doing wrong, and how I can get better at solving DP so that I can come up with solutions like the one given there myself? Thanks in advance.
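
    For comparison, here is a sketch in Java of a common interval formulation of this problem: for every sub-expression, count the parenthesizations that evaluate to true and to false, combining smaller intervals by the position of the last operator applied. The array names and the sample expression are my own; it runs in O(n^3) time for n operands.

        public class CountTrueWays {
            // operands: 't' or 'f'; operators: '&', '|', '^'
            static long countTrue(char[] ops, char[] vals) {
                int n = vals.length;
                long[][] T = new long[n][n]; // parenthesizations of vals[i..j] that are true
                long[][] F = new long[n][n]; // ...that are false
                for (int i = 0; i < n; i++) {
                    T[i][i] = vals[i] == 't' ? 1 : 0;
                    F[i][i] = vals[i] == 'f' ? 1 : 0;
                }
                for (int len = 2; len <= n; len++) {
                    for (int i = 0; i + len - 1 < n; i++) {
                        int j = i + len - 1;
                        for (int k = i; k < j; k++) {       // last operator applied is ops[k]
                            long lt = T[i][k], lf = F[i][k];
                            long rt = T[k + 1][j], rf = F[k + 1][j];
                            long all = (lt + lf) * (rt + rf);
                            switch (ops[k]) {
                                case '&' -> { T[i][j] += lt * rt;           F[i][j] += all - lt * rt; }
                                case '|' -> { F[i][j] += lf * rf;           T[i][j] += all - lf * rf; }
                                case '^' -> { T[i][j] += lt * rf + lf * rt; F[i][j] += lt * rt + lf * rf; }
                            }
                        }
                    }
                }
                return T[0][n - 1];
            }

            public static void main(String[] args) {
                // 't ^ f & t' has two true parenthesizations: (t^f)&t and t^(f&t)
                System.out.println(countTrue(new char[]{'^', '&'}, new char[]{'t', 'f', 't'})); // 2
            }
        }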

    Read the article

  • HTML5 game programming style

    - by fnx
    I am currently trying to learn JavaScript in the form of HTML5 games. What I've done so far isn't too fancy, since I'm still a beginner. My biggest concern is that I don't really know the best way to structure the code, since I don't know the pros and cons of the different methods, nor have I found any good explanations of them. So far I've been using what is probably the worst (and easiest) method of all, since I'm just starting out, for example like this:

        var canvas = document.getElementById("canvas");
        var ctx = canvas.getContext("2d");
        var width = 640;
        var height = 480;
        var player = new Player("pic.png", 100, 100, ...);
        // also some other global vars...

        function Player(imgSrc, x, y, ...) {
            this.sprite = new Image();
            this.sprite.src = imgSrc;
            this.x = x;
            this.y = y;
            ...
        }
        Player.prototype.update = function() {
            // blah blah...
        }
        Player.prototype.draw = function() {
            // yada yada...
        }

        function GameLoop() {
            player.update();
            player.draw();
            setTimeout(GameLoop, 1000/60);
        }

    However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples, but hopefully you'll get the point. 1:

        Game = {
            variables: { width: 640, height: 480, stuff: value },
            init: function(args) {
                // some stuff here
            },
            update: function(args) {
                // some stuff here
            },
            draw: function(args) {
                // some stuff here
            },
        };
        // from http://codeincomplete.com/posts/2011/5/14/javascript_pong/

    2:

        function Game() {
            this.Initialize = function () { }
            this.LoadContent = function () {
                this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval);
            }
            this.RunGameLoop = function (game) {
                this.Update();
                this.Draw();
            }
            this.Update = function () {
                // update
            }
            this.Draw = function () {
                // draw game frame
            }
        }
        // from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/

    3:

        var engine = {};
        engine.canvas = document.getElementById('canvas');
        engine.ctx = engine.canvas.getContext('2d');
        engine.map = {};
        engine.map.draw = function() {
            // draw map
        }
        engine.player = {};
        engine.player.draw = function() {
            // draw player
        }
        // from http://that-guy.net/articles/

    So I guess my questions are: Which is most CPU efficient - is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the safest, or at least hardest to hack? Are there any good websites where stuff like this is explained? Or does it all come down to personal preference? :)

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this: I create a new branch (with Git, in my case), I write a few unit tests (with JUnit, for example), and I write my code until it passes my tests. So far, so good. The problem is that very often I do not know what I can do with the toolkit, because it is brand new to me. I work as a consultant, so I cannot always have my preferred language/platform/toolkit; I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, to program by trial and error), and this makes me anxious. Please note that, at some point in the learning process, I usually have already: read one or more five-star books, followed one or more web tutorials (writing working code a line at a time), and created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans and others as well). Despite all my efforts, at this point I usually have only a coarse understanding of the platform/toolkit I have to use; I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and a non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse scrapbook, because the code I have to write is already too large and complex for that small tool. In the same way, I can no longer use an independent small project for my experiments, because I need to try the new code in place. I can only write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices, or something like that?

    Read the article

  • STL algorithms and concurrent programming

    - by Andrew
    Hello everyone. Can any of the STL algorithms/container operations, like std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or maybe there are other ways to make them concurrent? Thanks.

    Read the article

  • Are we in a functional programming fad?

    - by TraumaPony
    I use both functional and imperative languages daily, and it's rather amusing to see the surge of adoption of functional languages from both sides of the fence. It strikes me, however, that it looks rather like a fad. Do you think that it's a fad? I know the reasons for using functional languages at times and imperative languages at others, but do you really think this trend will continue thanks to the clichéd "many-core" revolution that has been only "18 months from now" since 2004 (sort of like communism's Radiant Future), or do you think it's only temporary - a fascination of the mainstream developer that will be quickly replaced by the next shiny idea, like Web 3.0 or GPGPU? Note that I'm not trying to start a flamewar or anything (sorry if it sounds bitter); I'm just curious whether people think functional or functional/imperative languages will become mainstream. Edit: By mainstream, I mean having an equal number of programmers to, say, Python, Java, C#, etc.

    Read the article

  • Is this possible in Java or any other programming language?

    - by drake
        public abstract class Master {
            public void printForAllMethodsInSubClass() {
                System.out.println("Printing before subclass method executes");
            }
        }

        public class Owner extends Master {
            public void printSomething() {
                System.out.println("This printed from Owner");
            }

            public int returnSomeCals() {
                return 5 + 5;
            }
        }

    Without messing with the methods of the subclass, is it possible to execute printForAllMethodsInSubClass() before a method of the subclass gets executed?
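
    One way to get a "run something before every subclass method" behaviour without touching the subclass methods is interception rather than inheritance - a minimal sketch using java.lang.reflect.Proxy (which requires calling through an interface; AOP tools such as AspectJ or Spring AOP do the same without that restriction). The interface and class names here just mirror the question:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Proxy;

        public class BeforeAdviceDemo {
            public interface Owner {
                void printSomething();
                int returnSomeCals();
            }

            static class OwnerImpl implements Owner {
                public void printSomething() { System.out.println("This printed from Owner"); }
                public int returnSomeCals() { return 5 + 5; }
            }

            // wraps any Owner so that the "master" hook runs before every call
            static Owner withHook(Owner target) {
                InvocationHandler handler = (proxy, method, args) -> {
                    System.out.println("Printing before subclass method executes");
                    return method.invoke(target, args);
                };
                return (Owner) Proxy.newProxyInstance(
                        Owner.class.getClassLoader(), new Class<?>[]{Owner.class}, handler);
            }

            public static void main(String[] args) {
                Owner owner = withHook(new OwnerImpl());
                owner.printSomething();                     // hook line, then the Owner line
                System.out.println(owner.returnSomeCals()); // hook line, then 10
            }
        }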

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) points out 24 best practices for scientific programming; it's worth a look. I would like to hear opinions on these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world? Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745
    Box 1. Summary of Best Practices:
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.
    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs in both my code and my data. Bugs and errors have always been around, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I decided I couldn't put up with this mess any longer; some action had to be taken. Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validation and assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admit, because all of these practices take some time and effort to introduce, it may be hard in general to persuade reluctant collaborators to comply with them. I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Fixing bugs may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point at which I should stop, in terms of productivity? I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b. I'm about to have a go at: 5b,c. Not yet: 2b,c, 4a, 7c, 8a,b,c. (I could not really see the advantage of using GNU make (2c) for my purposes. Could anyone tell me how it would help my work with MATLAB?)

    Read the article

  • What Functional features are worth a little OOP confusion for the benefits they bring?

    - by bonomo
    After learning functional programming in Haskell and F#, the OOP paradigm seems ass-backwards with its classes, interfaces and objects. Which aspects of FP can I bring to work that my co-workers can understand? Are any FP styles worth talking to my boss about, so that we can retrain the team and use them? Possible aspects of FP:
    - Immutability
    - Partial application and currying
    - First-class functions (function pointers / functional objects / the Strategy pattern)
    - Lazy evaluation (and monads)
    - Pure functions (no side effects)
    - Expressions vs. statements (each line of code produces a value instead of, or in addition to, causing side effects)
    - Recursion
    - Pattern matching
    Is it a free-for-all where we can do whatever the programming language supports, to the limit that the language supports it? Or is there a better guideline?
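
    Several of the items on this list travel into mainstream OOP languages with no retraining debate at all - a small Java sketch (the Price type is hypothetical) showing immutability, first-class functions, and expression-style, side-effect-free transformation:

        import java.util.List;
        import java.util.function.Function;

        public class FpFlavoredJava {
            // immutability: no setters, "modification" returns a new value
            record Price(String sku, double amount) {
                Price discounted(double rate) { return new Price(sku, amount * (1 - rate)); }
            }

            // first-class functions: behaviour passed around as a value (the Strategy pattern in one line)
            static Function<Price, Price> discount(double rate) {
                return p -> p.discounted(rate);
            }

            public static void main(String[] args) {
                List<Price> cart = List.of(new Price("a", 10.0), new Price("b", 20.0));

                // expressions over statements: the pipeline is a value, with no side effects on 'cart'
                double total = cart.stream()
                        .map(discount(0.10))
                        .mapToDouble(Price::amount)
                        .sum();

                System.out.println(total); // 27.0
            }
        }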

    Read the article

  • Software development magazines [closed]

    - by Sebastian
    I've spent the last hour or so browsing the web for professional development magazines. I am mostly interested in the Java platform, agile methods, "programming in general" (tutorials on languages, "hot new stuff", etc.) and software craftsmanship. My best findings yet are PragPub and maybe MSDN Magazine. I am willing to pay, and I have a Zinio account if anyone knows a magazine about programming that is distributed by them. I've already browsed a couple of related threads here on Stack Exchange. ACM and IEEE do not seem relevant, as I'm not interested in research articles. Maybe conferences like OOPSLA, as somebody mentioned in another thread. PS. I prefer magazines that are available as PDF or readable on a Kindle or tablet. DS. BR, Sebastian

    Read the article

  • Should a new programmer nowadays start with C/C++ or OOP language? [closed]

    - by deviDave
    I've been a programmer for 15+ years. In my time, we all started with C or C++ and then moved to C# or Java; at the time that was the usual path. Now my brother wants to follow in my footsteps, and I am not sure what advice to give him, so I am asking the community for an opinion. Should a new programmer with zero programming knowledge nowadays start with procedural languages (C, C++, etc.), or should he start directly with OOP languages (Java, C#, etc.)? The answer should be considered in the context of my brother's future assignments: he will mainly work on Java mobile applications as well as ASP.NET web apps. He will not have to touch desktop apps, low-level programming, drivers, etc., which is the reason I am not sure whether he will ever need to learn C or C++.

    Read the article

  • Scheme vs Haskell for an Introduction to Functional Programming?

    - by haziz
    I am comfortable programming in C and C#, and will explore C++ in the future. I may be interested in exploring functional programming as a different programming paradigm. I am doing this for fun - my job does not involve computer programming - and I am somewhat inspired by the use of functional programming, taught fairly early, in computer science courses in college. Lambda calculus is certainly beyond my mathematical abilities, but I think I can handle functional programming. Which of Haskell or Scheme would serve as a good intro to functional programming? I use Emacs as my text editor and would like to be able to configure it more easily in the future, which would entail learning Emacs Lisp. My understanding, however, is that Emacs Lisp is fairly different from Scheme and is also more procedural as opposed to functional. If I pursue Scheme, I would likely be using "The Little Schemer", which I have already bought (it seems a little weird from my limited leafing through it). If I pursue Haskell, I would use "Learn You a Haskell for Great Good", and would also watch the Intro to Haskell videos by Dr Erik Meijer on Channel 9. Any suggestions, feedback or input appreciated. Thanks. P.S. BTW I also have access to F#, since I have Visual Studio 2010 which I use for C# development, but I don't think that should be my main criterion for selecting a language.

    Read the article

  • Function-Local Static Const variable Initialization semantics.

    - by Hassan Syed
    The questions are in bold, for those who cannot be bothered to read the question in depth. This is a follow-up to this question. It has to do with the initialization semantics of static variables in functions. Static variables should be initialized once, and their internal state may be altered later - as I (currently) do in the linked question. However, the code in question does not require the ability to change the variable's state later. Let me clarify my position, since I don't require the string object's internal state to change. The code is for a trait class for metaprogramming, and as such would benefit from a const char * const pointer - thus, ideally, a local static const variable is needed. My educated guess is that in this case the string in question will be optimally placed in memory by the link-loader, and that the code is more secure and maps to the intended semantics. This leads to the semantics of such a variable. "The C++ Programming Language", Third Edition (Stroustrup), does not have anything (that I could find) to say about this matter. All that is said is that the variable is initialized once, when the flow of control of the thread first reaches the code. This leads me to ponder whether the following code is sensible, and if not, what the intended semantics are:

        #include <iostream>

        const char * const GetString(const char * x_in) {
            static const char * const x = x_in;
            return x;
        }

        int main() {
            const char * const temp = GetString("yahoo");
            std::cout << temp << std::endl;
            const char * const temp2 = GetString("yahoo2");
            std::cout << temp2 << std::endl;
        }

    This compiles on GCC and prints "yahoo" twice, which is what I want - however, it might not be standards-compliant (which is why I am posting this question). It might be more elegant to have two functions, "SetString" and "String", where the latter forwards to the first. If it is standards-compliant, does someone know of a template implementation in boost (or elsewhere)?

    Read the article

  • Microsoft .NET Web Programming: Web Sites versus Web Applications

    - by SAMIR BHOGAYTA
    In .NET 2.0, Microsoft introduced the Web Site. This was the default way to create a web project in Visual Studio 2005. In Visual Studio 2008, the Web Application has been restored as the default web project in Visual Studio/.NET 3.x. The Web Site is a file/folder-based project structure. It is designed so that pages are not compiled until they are requested ("on demand"). The advantages of the Web Site are:
    1) It is designed to accommodate non-.NET applications.
    2) Deployment is as simple as copying files to the target server.
    3) Any portion of the Web Site can be updated without requiring recompilation of the entire site.
    The Web Application is a .dll-based project structure. ASP.NET pages and supporting files are compiled into assemblies that are then deployed to the target server. Advantages of the Web Application are:
    1) Precompiled files do not expose code to an attacker.
    2) Precompiled files run faster because they are binary data (the Microsoft Intermediate Language, or MSIL) executed by the CLR (Common Language Runtime).
    3) References, assemblies, and other project dependencies are built into the compiled site and automatically managed. They do not need to be manually deployed and/or registered in the Global Assembly Cache: deployment does this for you.
    If you are planning on using automated build and deployment, such as the Team Foundation Server Team Build engine, you will need to have your code in the form of a Web Application. If you have a Web Site, it will not compile properly the way a Web Application would. However, all is not lost: it is possible to work around the issue by adding a Web Deployment Project to your Solution and then: a) configuring the Web Deployment Project to precompile your code; and b) configuring your Team Build definition to use the Web Deployment Project as its source for compilation. https://msevents.microsoft.com/cui/WebCastEventDetails.aspx?culture=en-US&EventID=1032380764&CountryCode=US

    Read the article

  • Programming practices: where to start?

    - by Tamim Ad Dari
    I have chosen computer science and engineering as my major, and I am really confused at the moment. My first course was about learning C and C++, and I learned the basics of both. Now I am really confused about what to do next. Some say I should practice algorithms and take part in contests like ACM-ICPC for now; others tell me to start software development. But as I started digging, it turned out to be a really vast topic with many aspects - web design, web development, iOS development, Android development, and so on - and I am really confused about what I should do right now. Any advice on where to start?

    Read the article

  • How do you research while pair programming?

    - by traffichazard
    I've recently started at a new job, and pairing has helped me become effective there very quickly. I am, however, having a hard time when we must do brief joint research during our workflow, covering API features, code examples, or command options. My team lead urges us to do all research on our pairing station, rather than on our individual laptops, and to synchronize our research by verbally negotiating the steps between different web resources. I research, read, and absorb information differently from my pairing partner, and I feel much more productive when I can follow a thread of research to the next web page exactly when I want to, rather than trying to keep exact pace and place with what my partner's reading. We're both smart and fast, but we can't help moving in different ways and at different instantaneous velocities when we're figuring stuff out. It seems so much easier to poke around individually for a minute until one of us says "I've got it," then get back together and code. When you pair program, how do you handle short research tasks? What works best for you, and how do you keep in sync with your partner?

    Read the article

  • Dangerous programming

    - by benhowdle89
    OK, I'm talking pure software/web here - I'm not talking about code that powers life-support machines or NASA rockets. In terms of software/web development, what is the most dangerous single piece of code someone could put into a program (say, if they had a grudge against a client/employee)? In PHP, the first thing that comes to mind is some sort of file deletion:

        function EmptyDir($dir) {
            $handle = opendir($dir);
            while (($file = readdir($handle)) !== false) {
                echo "$file <br>";
                @unlink($dir.'/'.$file);
            }
            closedir($handle);
        }
        EmptyDir('images');

    Or a PHP script that takes a user's sensitive input and posts it to a Google sitemap or something? I hope this doesn't get closed as subjective, as there surely must be a ranking order of dangerous code - so I'm asking for the No. 1 spot :) DISCLAIMER: I have no grudges against anyone, just curious about the answer!

    Read the article

  • Defensive Programming Techniques.

    - by Pemdas
    I was attempting to identify an element of software engineering that I think is overlooked, under-emphasized, or not taught in typical undergraduate coursework for CS or SE. What I came up with is the concept of defensive programming. I would like to hear the community's opinions on defensive programming and/or the specific techniques you use on a regular basis. Also, I would like to know whether there are any language-specific techniques.
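
    For concreteness, a minimal Java sketch of a few bread-and-butter defensive techniques: validate arguments at the public boundary, fail fast with informative messages, and never hand out mutable internal state (the Account class is just an illustration):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Objects;

        public class Account {
            private final String owner;
            private final List<String> transactions = new ArrayList<>();

            public Account(String owner) {
                // fail fast: bad input is rejected at the boundary, not deep in the call stack
                this.owner = Objects.requireNonNull(owner, "owner must not be null");
            }

            public void deposit(long cents) {
                if (cents <= 0) {
                    throw new IllegalArgumentException("deposit must be positive, got " + cents);
                }
                transactions.add("deposit " + cents);
            }

            // unmodifiable view: callers cannot corrupt internal state
            public List<String> history() {
                return Collections.unmodifiableList(transactions);
            }

            public String owner() { return owner; }
        }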

    Read the article

  • Is the C programming language still used?

    - by Pankaj Upadhyay
    I am a C# programmer, and most of my development is for websites, along with a few Windows applications. As far as C goes, I haven't used it in a long time, as there was no need to. So it came as a surprise when one of my friends said that she needs to learn C for testing jobs, while I was helping her learn C#. I figured that someone would only learn C for testing if there were development being done in C. To my knowledge, all the development related to COM and hardware design is done in C++, so learning C doesn't make sense if you need to use C++. I also don't believe in learning it just for its historic significance, so why spend time and money on learning C? Is C still used in any kind of new software development, or anything else?

    Read the article
