Search Results

Search found 12719 results on 509 pages for 'language translator'.

Page 72 of 509

  • Design pattern to use instead of multiple inheritance

    - by mizipzor
    Coming from a C++ background, I'm used to multiple inheritance. I like the feeling of a shotgun squarely aimed at my foot. Nowadays I work more in C# and Java, where you can only inherit one base class but implement any number of interfaces (did I get the terminology right?). For example, let's consider two classes that implement a common interface but different (yet required) base classes:

        public class TypeA : CustomButtonUserControl, IMagician {
            public void DoMagic() {
                // ...
            }
        }

        public class TypeB : CustomTextUserControl, IMagician {
            public void DoMagic() {
                // ...
            }
        }

    Both classes are UserControls, so I can't substitute the base class. Both need to implement the DoMagic function. My problem now is that both implementations of the function are identical, and I hate copy-and-paste code. The (possible) solutions:

    I naturally want TypeA and TypeB to share a common base class, where I can write that identical function definition just once. However, because of the limit of just one base class, I can't find a place along the hierarchy where it fits.

    One could also try to implement a sort of composite pattern, putting the DoMagic function in a separate helper class, but the function here needs (and modifies) quite a lot of internal variables/fields, and sending them all as (reference) parameters would just look bad.

    My gut tells me that the adapter pattern could have a place here, some class to convert between the two when necessary. But it also feels hacky.

    I tagged this with language-agnostic since it applies to all languages that use this one-base-class-many-interfaces approach. Also, please point out if I seem to have misunderstood any of the patterns I named. In C++ I would just make a class with the private fields and that function implementation and put it in the inheritance list. What's the proper approach in C#/Java and the like?
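    One common workaround is to move the shared implementation, together with the state it reads and modifies, into a small helper object that both controls hold and delegate to. A minimal sketch follows; CustomButtonUserControl, CustomTextUserControl, IMagician and DoMagic are the question's own names, while MagicHelper and its field are hypothetical:

        public class MagicHelper {
            // The fields DoMagic needs live here instead of in each control.
            private int charge;

            public void DoMagic() {
                charge++;
                // ... the shared implementation, written exactly once ...
            }
        }

        public class TypeA : CustomButtonUserControl, IMagician {
            private readonly MagicHelper magic = new MagicHelper();
            public void DoMagic() { magic.DoMagic(); }   // delegate to the shared helper
        }

        public class TypeB : CustomTextUserControl, IMagician {
            private readonly MagicHelper magic = new MagicHelper();
            public void DoMagic() { magic.DoMagic(); }
        }

    If the helper also needs data owned by the control, one option is to pass the control (or a narrow interface it implements) into the helper's constructor rather than passing every field as a parameter.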

    Read the article

  • How to test an application for correct encoding (e.g. UTF-8)

    - by Olaf
    Encoding issues are among the topics that have bitten me most often during development. Every platform insists on its own encoding, and most likely some non-UTF-8 defaults are in the game. (I'm usually working on Linux, defaulting to UTF-8; my colleagues mostly work on German Windows, defaulting to ISO-8859-1 or some similar Windows codepage.) I believe that UTF-8 is a suitable standard for developing an i18nable application. However, in my experience encoding bugs are usually discovered late (even though I'm located in Germany and we have some special characters that, along with ISO-8859-1, provide some detectable differences).

    I believe that those developers with a completely non-ASCII character set (or those who know a language that uses such a character set) get a head start in providing test data. But there must be a way to ease this for the rest of us as well. What [technique|tool|incentive] are people here using? How do you get your co-developers to care about these issues? How do you test for compliance? Are those tests conducted manually or automatically?

    Adding one possible answer upfront: I've recently discovered fliptitle.com (they provide an easy way to get weird characters written "uʍop ǝpısdn" *) and I'm planning on using them to provide easily verifiable UTF-8 character strings (as most of the characters used there are at some weird binary encoding position), but there surely must be more systematic tests, patterns or techniques for ensuring UTF-8 compatibility/usage.

    Note: Even though there's an accepted answer, I'd like to know of more techniques and patterns if there are some. Please add more answers if you have more ideas. And it has not been easy choosing only one answer for acceptance. I've chosen the regexp answer for the least expected angle to tackle the problem, although there would be reasons to choose other answers as well. Too bad only one answer can be accepted. Thank you for your input.

    *) that's "upside down" written "upside down", for those who cannot see those characters due to font problems
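    A minimal sketch of an automated check along these lines (not from the question; the sample strings are arbitrary): push strings through the I/O layer under test and assert that a UTF-8 round trip preserves them while a Latin-1 misread visibly mangles them, so a wrongly configured default shows up as a failing test rather than a late surprise:

        using System;
        using System.Text;

        static class EncodingSmokeTest {
            // Samples chosen so that any ISO-8859-1 misinterpretation mangles them.
            static readonly string[] Samples = { "äöüß€", "żółć", "日本語", "uʍop ǝpısdn" };

            static void Main() {
                foreach (var s in Samples) {
                    byte[] utf8 = Encoding.UTF8.GetBytes(s);

                    if (Encoding.UTF8.GetString(utf8) != s)
                        throw new Exception("UTF-8 round trip failed for: " + s);

                    // Decoding the UTF-8 bytes with the wrong codepage must NOT reproduce the
                    // original; if it does, the sample is too tame to catch encoding bugs.
                    if (Encoding.GetEncoding("ISO-8859-1").GetString(utf8) == s)
                        throw new Exception("Sample cannot detect a Latin-1 misconfiguration: " + s);
                }
                Console.WriteLine("All samples survive UTF-8 and break under ISO-8859-1, as intended.");
            }
        }

    In a real test suite, the same strings would be written through every file, database and network path the application has, read back, and compared.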

    Read the article

  • Algorithm to find the percentage of how much two texts are identical

    - by qster
    What algorithm would you suggest to identify, from 0 to 1 (float), how much two texts are identical? Note that I don't mean similar (i.e. they say the same thing but in a different way); I mean the exact same words, but one of the two texts could have extra words, or words slightly different, or extra new lines and stuff like that. A good example of the algorithm I want is the one Google uses to identify duplicate content in websites ("X search results very similar to the ones shown have been omitted, click here to see them").

    The reason I need it is that my website lets users post comments; similar but different pages currently have their own comments, so many users ended up copy-and-pasting their comments on all the similar pages. Now I want to merge them (all similar pages will "share" the comments, and if you post on page A it will appear on similar page B), and I would like to programmatically erase all those copy-and-pasted comments from the same user. I have quite a few million comments, but speed shouldn't be an issue since this is a one-time thing that will run in the background. The programming language doesn't really matter (as long as it can interface with a MySQL database), but I was thinking of doing it in C++.
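    One simple way to get such a 0-to-1 score (a hedged sketch, not from the question, and in C# rather than C++ purely for brevity) is an edit distance normalized by the length of the longer text; for millions of long comments, shingle- or token-based measures such as Jaccard over word n-grams scale better, but the scoring idea is the same:

        using System;

        static class TextSameness {
            // Levenshtein distance normalized to 1.0 = identical, 0.0 = nothing in common.
            public static float Similarity(string a, string b) {
                if (a.Length == 0 && b.Length == 0) return 1f;
                int[,] d = new int[a.Length + 1, b.Length + 1];
                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;
                for (int i = 1; i <= a.Length; i++)
                    for (int j = 1; j <= b.Length; j++) {
                        int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                        d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,    // deletion
                                                    d[i, j - 1] + 1),   // insertion
                                           d[i - 1, j - 1] + cost);     // substitution
                    }
                return 1f - (float)d[a.Length, b.Length] / Math.Max(a.Length, b.Length);
            }
        }

    Comparing lowercased, whitespace-normalized copies of the comments and treating anything above, say, 0.9 as a duplicate would be one reasonable starting point.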

    Read the article

  • What's your most controversial programming opinion?

    - by Jon Skeet
    This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately. The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently). So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions. Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

    Read the article

  • Linear feedback shift register?

    - by Mattia Gobbi
    Lately I have bumped repeatedly into the concept of the LFSR, which I find quite interesting because of its links with different fields and also fascinating in itself. It took me some effort to understand; the final help was this really good page, much better than the (at first) cryptic Wikipedia entry. So I wanted to write some small code for a program that worked like an LFSR, or more precisely, that somehow showed how an LFSR works. Here's the cleanest thing I could come up with after some lengthier attempts (Python):

        def lfsr(seed, taps):
            sr, xor = seed, 0
            while 1:
                for t in taps:
                    xor += int(sr[t-1])
                if xor%2 == 0.0:
                    xor = 0
                else:
                    xor = 1
                print xor
                sr, xor = str(xor) + sr[:-1], 0
                print sr
                if sr == seed:
                    break

        lfsr('11001001', (8,7,6,1)) # example

    I named the output of the XOR function "xor", which is not very correct. However, this is just meant to show how it cycles through its possible states; in fact, you noticed the register is represented by a string. Not much logical coherence. This can be easily turned into a nice toy you can watch for hours (at least I could :-)

        def lfsr(seed, taps):
            import time
            sr, xor = seed, 0
            while 1:
                for t in taps:
                    xor += int(sr[t-1])
                if xor%2 == 0.0:
                    xor = 0
                else:
                    xor = 1
                print xor
                print time.sleep(0.75)
                sr, xor = str(xor) + sr[:-1], 0
                print sr
                print time.sleep(0.75)

    Then it struck me: what use is this in writing software? I heard it can generate random numbers; is it true? How? So it would be nice if someone could:

    - explain how to use such a device in software development
    - come up with some code, to support the point above or, just like mine, to show different ways to do it, in any language

    Also, as there's not much didactic stuff around about this piece of logic and digital circuitry, it would be nice if this could be a place for newbies (like me) to get a better understanding of this thing, or better, to understand what it is and how it can be useful when writing software. Should I have made it a community wiki? That said, if someone feels like golfing... you're welcome.
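    On the "what use is this in software" question, a common answer is cheap, non-cryptographic pseudo-random bit generation (plus CRCs, scramblers and test-pattern generators). Below is a minimal C# sketch, not from the question, of a 16-bit Fibonacci LFSR using the well-known maximal-length taps 16, 14, 13, 11; each step yields one pseudo-random bit, and packing bits gives pseudo-random bytes:

        class Lfsr16 {
            // 16-bit Fibonacci LFSR, taps 16/14/13/11: cycles through all 65535 non-zero states.
            private ushort state;

            public Lfsr16(ushort seed) {
                state = seed == 0 ? (ushort)0xACE1 : seed;   // the state must never be all zeros
            }

            public int NextBit() {
                int bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1;
                state = (ushort)((state >> 1) | (bit << 15));   // shift right, feed the bit back in
                return bit;
            }

            public byte NextByte() {
                int b = 0;
                for (int i = 0; i < 8; i++) b = (b << 1) | NextBit();   // pack 8 output bits
                return (byte)b;
            }
        }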

    Read the article

  • Programming Concepts: What should be done when an exception is thrown?

    - by Dooms101
    This does not really apply to any language specifically, but if it matters I am using VB.NET in Visual Studio 2008. I can't seem to find anything really useful on Google about this topic, but I was wondering what common practice is when an exception is thrown and caught, but the application cannot continue operating once it has been thrown. For example, I have exceptions that are thrown by my FileLoader class when a file cannot be found or when a file is deemed corrupt. The exception is only thrown within the class and is not really handled there: if the error is detected, the exception is thrown and whatever function it was thrown in basically quits. So in the code trying to create that object or call one of its members, I use a Try...Catch statement. However, I was wondering: what should I even do when this exception is caught? My application needs these files to be intact, and if they are not, the application is almost useless. So far I just pop up a message box telling the user there is an error and to reinstall. What else can I do, or better, what's common practice in these situations?
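    One common pattern is to let these exceptions bubble up to a single top-level handler that knows what the missing file means for the whole application: report it in the user's terms, suggest the remedy, and exit with a failure code instead of limping on with bad state. A hedged C# sketch (FileLoader is the question's class name; Load, ShowError and the specific exception types are assumptions for illustration):

        using System;
        using System.IO;

        static class Program {
            static void Main() {
                try {
                    var loader = new FileLoader();
                    var data = loader.Load("settings.dat");   // may throw if missing or corrupt
                    RunApplication(data);
                } catch (FileNotFoundException ex) {
                    ShowError("A required file is missing: " + ex.FileName +
                              "\nPlease reinstall the application.");
                    Environment.Exit(1);   // non-zero exit code so installers/scripts see the failure
                } catch (InvalidDataException ex) {
                    ShowError("A required file is corrupt: " + ex.Message +
                              "\nPlease reinstall the application.");
                    Environment.Exit(1);
                }
            }

            static void RunApplication(object data) { /* ... */ }
            static void ShowError(string message) { Console.Error.WriteLine(message); }

            // Stand-in for the question's FileLoader class.
            class FileLoader {
                public object Load(string path) {
                    if (!File.Exists(path)) throw new FileNotFoundException("Missing file", path);
                    return File.ReadAllBytes(path);
                }
            }
        }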

    Read the article

  • Should the builder reset its build environment after delivering the product?

    - by Sudhi
    I am implementing a builder where the deliverable is retrieved by calling Builder::getProduct(). The director asks for the various parts to be built via Builder::buildPartA(), Builder::buildPartB(), etc., in order to completely build the product. My question is: once the product is delivered by calling Builder::getProduct(), should the builder reset its environment (Builder::partA = NULL;, Builder::partB = NULL;) so that it is ready to build another product (with the same or a different configuration)?

    I ask this as I am using PHP, where objects are by default passed by reference (no, I don't want to clone them, as one of their fields is a Resource). However, even if you think about it from a language-agnostic point of view, should the builder reset its build environment? If your answer is "it depends on the case", what use cases would justify resetting the environment (and the other way round)?

    For the sake of providing a code sample, here's my Builder::getProcessor(), which shows what I mean by resetting the environment:

        /**
         * @see IBuilder::getProcessor()
         */
        public function getProcessor() {
            if ($this->_processor == NULL) {
                throw new LogicException('Processor not yet built!');
            } else {
                $retval = $this->_processor;
                $this->_product = NULL;
                $this->_processor = NULL;
            }
            return $retval;
        }
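    For comparison, a minimal sketch (hypothetical names, not the question's code) of a builder whose getProduct equivalent hands the product over and immediately clears its own fields, so the builder is reusable and can never mutate a product it has already delivered:

        using System;

        class Report { public string Header; public string Body; }

        class ReportBuilder {
            private string header, body;

            public void BuildHeader(string h) { header = h; }
            public void BuildBody(string b)   { body = b; }

            public Report GetProduct() {
                if (header == null || body == null)
                    throw new InvalidOperationException("Product not fully built yet.");
                var product = new Report { Header = header, Body = body };
                header = null;   // reset the build environment: the builder keeps no
                body = null;     // reference to the product it just delivered
                return product;
            }
        }

    One argument for resetting by default is exactly the reference-sharing concern from the question; keeping the parts around mainly makes sense when you deliberately want to build several products that share configuration.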

    Read the article

  • Choosing a method for a webservice

    - by Wrikken
    I'm asked to set up a new webservice which should be easily usable from whatever language (PHP, .NET, Java, etc.) possible. Of course rolling my own can be done, accepting different content types (xml / x-www-form-urlencoded (normal post) / json / etc.), but an existing method or mechanism would of course be preferred, cutting down the time the consumers of the service spend on development. The webservice does accept modifications / sets (it is not only simple data retrieval), but those will most likely be quite a lot fewer than gets (we estimate about 2.5% sets, 97.5% gets). The term webservice here indicates that the protocol should go over HTTP; it is not possible to implement it entirely client-side (JavaScript in the end user's browser, etc.), as it needs specific user authentication. Both gets and sets are pretty light on the parameter count (usually 1 to 4). Methods like REST (which I'd prefer for gets only), XML-RPC and SOAP (might be a bit overkill, but has the advantage of explicitly defined methods and returns) are the usual suspects. What, in your opinion / experience, is the most widely "spoken" and most easily implementable protocol in different languages (seen from the consumers' viewpoint) which could fulfill this need?

    Read the article

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this? Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community"). I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.

    Read the article

  • Should client-server code be written in one "project" or two?

    - by Ricket
    I've been beginning a client-server application. At first I naturally created two projects in Eclipse, two source control repositories, etc. But I'm quickly seeing that there is a bit of shared code between the two that would probably benefit from being shared instead of copied. In addition, I've been learning and trying test-driven development, and it seems to me that it would be easier to test based on real client components rather than having to set up a huge amount of code just to mock something, when the code is probably mostly in the client. My biggest concern in merging the client and server is security: how do I ensure that the server pieces of the code do not reach a user's computer? So, especially if you write client-server applications yourself (and especially in Java, though this can turn into a language-agnostic question if you'd like to share your experience with this in other languages), what sort of separation do you keep between your client and server code? Are they just in different packages/namespaces, or completely different binaries using shared libraries, or something else entirely? How do you test the code together and yet ship it separately?

    Read the article

  • How to make the language binder take precedence over the dynamic binder

    - by rudimenter
    Hi, I have a class which implements IDynamicMetaObjectProvider, and I implement the BindGetMember method of DynamicMetaObject. Now, when I create a dynamic object and access a property, every call gets implicitly routed through the BindGetMember method. I want the language binder to get its chance first, before my code comes in. It seems to be doable with binder.FallbackGetMember, but I am not sure how the expression has to look. I call:

        dynamic com = CommandFactory.GetCommand();
        com.testprop; // expected: "test"; but "test2" comes back

        public class Command : System.Dynamic.IDynamicMetaObjectProvider
        {
            public string testprop
            {
                get { return "test"; }
            }

            public object GetValue(string name)
            {
                return "test2";
            }

            System.Dynamic.DynamicMetaObject System.Dynamic.IDynamicMetaObjectProvider.GetMetaObject(System.Linq.Expressions.Expression parameter)
            {
                return new MetaCommand(parameter, this);
            }

            private class MetaCommand : System.Dynamic.DynamicMetaObject
            {
                public MetaCommand(Expression expression, Command value)
                    : base(expression, System.Dynamic.BindingRestrictions.Empty, value)
                {
                }

                public override System.Dynamic.DynamicMetaObject BindGetMember(System.Dynamic.GetMemberBinder binder)
                {
                    var self = this.Expression;
                    var bag = (Command)base.Value;
                    Expression target = Expression.Call(
                        Expression.Convert(self, typeof(Command)),
                        typeof(Command).GetMethod("GetValue"),
                        Expression.Constant(binder.Name));
                    var restrictions = BindingRestrictions.GetInstanceRestriction(self, bag);
                    return new DynamicMetaObject(target, restrictions);
                }
            }
        }
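    A hedged suggestion, not from the question: if deriving from System.Dynamic.DynamicObject (rather than implementing IDynamicMetaObjectProvider by hand) is an option, its default meta-object is documented to let the language binder resolve statically declared members first and to call TryGetMember only for names it could not bind, which is the ordering being asked for:

        using System.Dynamic;

        public class Command : DynamicObject
        {
            // Statically declared member: the C# binder resolves this one itself.
            public string testprop { get { return "test"; } }

            public object GetValue(string name) { return "test2"; }

            // Called only for member names the language binder could not resolve.
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = GetValue(binder.Name);
                return true;
            }
        }

        // dynamic com = new Command();
        // com.testprop  -> "test"   (static property wins)
        // com.whatever  -> "test2"  (falls through to TryGetMember)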

    Read the article

  • Game engine deployment strategy for the Android?

    - by Jeremy Bell
    In college, my senior project was to create a simple 2D game engine, complete with a scripting language which compiled to bytecode that was then interpreted. For fun, I'd like to port the engine to Android. I'm new to Android development, so I'm not sure which way to go as far as deploying the engine on the phone. The easiest way, I suppose, would be to require the engine/interpreter to be bundled with every game that uses it. This solves any versioning issues. There are two problems with this. One: it makes each game app larger, and two: I originally released the engine under the LGPL license (unfortunately), but this deployment strategy makes it difficult to conform to the rules of that license, particularly with respect to allowing users to easily replace the lib with another version. So my other option is to have the engine stand alone as an Activity or service that somehow responds to intents raised by game apps, and to give the engine app permissions to read the scripts and other assets to "run" the game. The user could then replace the engine app with a different version (possibly one they made themselves). Is this even possible? What would you recommend? How could I handle it in a secure way?

    Read the article

  • How to keep confirmation messages after POST while doing a post-submit redirect?

    - by MicE
    Hello, I'm looking for advice on how to share certain bits of data (i.e. post-submit confirmation messages) between individual requests in a web application. Let me explain.

    Current approach: the user submits an add/edit form for a resource. If there were no errors, the user is shown a confirmation page with links to: submit a new resource (for the "add" form), view the submitted/edited resource, or view all resources (one step above in the hierarchy). The user then has to click on one of the three links to proceed (i.e. to the page "above"). Programmatically, the form and its confirmation page are one set of classes; the page above that is another. They can technically share code, but at the moment they are independent during the processing of individual requests.

    We would like to amend the above as follows: the user submits an add/edit form for a resource. If there were no errors, the user is redirected to the page with all resources (one step above in the hierarchy), with one or more confirmation messages displayed at the top of the page (i.e. a success message, to whom the request was assigned, etc.). This will save users one click (they have to go through a lot of these add/edit forms), and the post-submit redirect will address common problems with the browser's refresh and back buttons.

    What approach would you recommend for sharing the data needed for the confirmation messages between the two requests, please? I'm not sure if it helps: it's a PHP application backed by a RESTful API, but I think this is a language-agnostic question. A few simple solutions that come to mind are to share the data via cookies or in the session; this however breaks statelessness and would pose a significant problem for users who work in several tabs (the data could clash). Passing the data as GET parameters is not suitable, as we are talking about several messages which are dynamic (e.g. changing actors, dates). Thanks, M.
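    One widely used answer is the "flash message" variant of post/redirect/get: the POST handler stores its messages server-side under a one-time token, redirects with that token as a single GET parameter, and the next request consumes (and deletes) the messages. Keying by token rather than by session keeps several tabs from clashing. A minimal C# sketch of the idea (hypothetical class; a real implementation would live in the session store or a cache with expiry):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        static class FlashStore {
            private static readonly ConcurrentDictionary<string, List<string>> store =
                new ConcurrentDictionary<string, List<string>>();

            // Called by the POST handler; append the token to the redirect URL, e.g. ?flash=<token>.
            public static string Put(List<string> messages) {
                string token = Guid.NewGuid().ToString("N");
                store[token] = messages;
                return token;
            }

            // Called by the page the user is redirected to; messages are gone after one read.
            public static List<string> Take(string token) {
                List<string> messages;
                return store.TryRemove(token, out messages) ? messages : new List<string>();
            }
        }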

    Read the article

  • What are Code Smells? What is the best way to correct them?

    - by Rob Cooper
    OK, so I know what a code smell is, and the Wikipedia article is pretty clear in its definition: "In computer programming, code smell is any symptom in the source code of a computer program that indicates something may be wrong. It generally indicates that the code should be refactored or the overall design should be reexamined. The term appears to have been coined by Kent Beck on WardsWiki. Usage of the term increased after it was featured in Refactoring: Improving the Design of Existing Code." I know it also provides a list of common code smells. But I thought it would be great if we could get a clear list of not only what code smells there are, but also how to correct them.

    Some rules. Now, this is going to be a little subjective in that there are differences between languages, programming styles, etc. So let's lay down some ground rules:

    ONE SMELL PER ANSWER PLEASE, AND ADVISE ON HOW TO CORRECT IT! See this answer for a good display of what this thread should be.

    DO NOT downmod if a smell doesn't apply to your language or development methodology. We are all different.

    DO NOT just quickly smash in as many as you can think of. Think about the smells you want to list and get a good idea down on how to work around them.

    DO downmod answers that just look rushed, for example "dupe code - remove dupe code". Let's make it useful (e.g. Duplicate Code - refactor into separate methods or even classes, use these links for help on these common... etc. etc.).

    DO upmod answers that you would add yourself. If you wish to expand, then answer with your thoughts linking to the original answer (if it's detailed), or comment if it's a minor point.

    DO format your answers! Help others to be able to read them; use code snippets, headings and markup to make key points stand out!

    Read the article

  • How would I code a complex formula parser manually?

    - by StormianRootSolver
    Hm, this is language-agnostic; I would prefer doing it in C# or F#, but I'm more interested this time in the question "how would that work anyway". What I want to accomplish is:

    a) I want to LEARN it - it's about my ego this time; it's for a fun project where I want to show myself that I'm really good at this stuff.

    b) I know a tiny little bit about EBNF (although I don't know yet how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit ominous to me).

    c) My parser should be able to take this: 5 * (3 + (2 - 9 * (5 / 7)) + 9), for example, and give me the right result.

    d) To be quite frank, this seems to be the biggest problem in writing a compiler or even an interpreter for me. I would have no problem generating even 64-bit assembler code (I CAN write assembler manually), but the formula parser...

    e) Another thought: even simple computers (like my old Sharp 1246S with only about 2 kB of RAM) can do that... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation... BASIC is from 1964 and it could already calculate the kind of formula I presented as an example.

    f) A few ideas, a few inspirations would really be enough - I just have no clue how to do operator precedence and the parentheses. I DO, however, know that it involves an AST and that many people use a stack.

    So, what do you think?
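    One classic way to get precedence and parentheses with neither an explicit AST nor a stack of your own is recursive descent with one method per precedence level (the call stack is the stack). The sketch below is a minimal illustration, not the only approach; a shunting-yard/operator-stack evaluator is the other common route:

        // Grammar (informal EBNF):
        //   Expr   := Term  (('+' | '-') Term)*
        //   Term   := Factor (('*' | '/') Factor)*
        //   Factor := Number | '(' Expr ')'
        // No unary minus, no error handling - kept deliberately small.
        using System;
        using System.Globalization;

        class Calc {
            private readonly string s;
            private int pos;

            private Calc(string input) { s = input.Replace(" ", ""); }

            public static double Eval(string input) { return new Calc(input).Expr(); }

            private double Expr() {                    // lowest precedence: + and -
                double value = Term();
                while (pos < s.Length && (s[pos] == '+' || s[pos] == '-'))
                    value = s[pos++] == '+' ? value + Term() : value - Term();
                return value;
            }

            private double Term() {                    // binds tighter: * and /
                double value = Factor();
                while (pos < s.Length && (s[pos] == '*' || s[pos] == '/'))
                    value = s[pos++] == '*' ? value * Factor() : value / Factor();
                return value;
            }

            private double Factor() {                  // numbers and parenthesized sub-expressions
                if (s[pos] == '(') {
                    pos++;                             // consume '('
                    double value = Expr();
                    pos++;                             // consume ')'
                    return value;
                }
                int start = pos;
                while (pos < s.Length && (char.IsDigit(s[pos]) || s[pos] == '.')) pos++;
                return double.Parse(s.Substring(start, pos - start), CultureInfo.InvariantCulture);
            }

            static void Main() {
                Console.WriteLine(Eval("5 * (3 + (2 - 9 * (5 / 7)) + 9)"));   // about 37.857142857
            }
        }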

    Read the article

  • Will Learning C++ Help for Building Fast/No-Additional-Requirements Desktop Applications?

    - by vito
    Will learning C++ help me build native applications with good speed? Will it help me as a programmer, and what are the other benefits? The reason I want to learn C++ is that I'm disappointed with the UI performance of applications built on top of the JVM and .NET. They feel slow, and start slowly too. Of course, a really bad programmer can create a slower and more sluggish application using C++ too, but I'm not considering that case. One of my favorite Windows utility applications is Launchy, and in the Readme.pdf file the author of the program wrote this: "0.6 This is the first C++ release. As I became frustrated with C#'s large .NET framework requirements and users lack of desire to install it, I decided to switch back to the faster language." I totally agree with the author of Launchy about the .NET framework requirement, or even a JRE requirement, for desktop applications - let alone a specific version of them. And some of the best and my favorite desktop applications don't need .NET or Java to run; they just run after installing. Are they mostly built using C++? Is C++ the only option for good and fast GUI-based applications? I'm also very interested in hearing about the other benefits of learning C++.

    Read the article

  • Data encapsulation in Swift

    - by zpasternack
    I've read the entire Swift book, and watched all the WWDC videos (all of which I heartily recommend). One thing I'm worried about is data encapsulation. Consider the following (entirely contrived) example:

        class Stack<T> {
            var items : T[] = []

            func push( newItem: T ) {
                items.insert( newItem, atIndex: 0 )
            }

            func pop() -> T? {
                if items.count == 0 {
                    return nil;
                }
                return items.removeAtIndex( 0 );
            }
        }

    This class implements a stack, and implements it using an Array. Problem is, items (like all properties in Swift) is public, so nothing is preventing anyone from directly accessing (or even mutating) it separately from the public API. As a curmudgeonly old C++ guy, this makes me very grumpy. I see people bemoaning the lack of access modifiers, and while I agree they would directly address the issue (and I hear rumors that they might be implemented Soon (TM)), I wonder what some strategies for data hiding would be in their absence. Have I missed something, or is this simply an omission in the language?

    Read the article

  • Plotting an Arc in Discrete Steps

    - by phobos51594
    Good afternoon,

    Background: My question relates to the plotting of an arbitrary arc in space using discrete steps. It is unique, however, in that I am not drawing to a canvas in the typical sense. The firmware I am designing is for a G-code interpreter for a CNC mill that will translate commands into stepper motor movements. Now, I have already found a similar question on this very site, but the methodology suggested (Bresenham's algorithm) appears to be incompatible with moving an object in space, as it relies on calculating only one octant of a circle which is then mirrored about the remaining axes of symmetry. Furthermore, the prescribed method of calculating an arc between two arbitrary angles relies on trigonometry (I am implementing this on a microcontroller and would like to avoid costly trig functions, if possible) and on simply not taking the steps that are out of range. Finally, the algorithm is only designed to work in one rotational direction (e.g. counterclockwise).

    Question: So, on to the actual question: does anyone know of a general-purpose algorithm that can be used to "draw" an arbitrary arc in discrete steps while still respecting angular direction (CW / CCW)? The final implementation will be done in C, but the language for the purpose of the question is irrelevant. Thank you in advance.

    References:
    S.O. post on drawing a simple circle using Bresenham's algorithm: "Drawing" an arc in discrete x-y steps
    Wiki page describing Bresenham's algorithm for a circle: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm
    G-code instructions to be implemented (see G2 and G3): http://linuxcnc.org/docs/html/gcode.html
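    One general-purpose approach (a hedged sketch, in C# only for illustration since the question is language-agnostic) is incremental rotation: pick a step angle small enough that the chord error stays within tolerance, compute its sine and cosine once, and then rotate the offset vector step by step with two multiplies and two adds per point; the sign of the sweep handles CW versus CCW directly, and the handful of trig calls happen once per arc, never per step:

        using System;

        static class ArcStepper {
            // Emits points along an arc of the given radius around (cx, cy), starting at
            // startAngle and sweeping by 'sweep' radians (negative sweep = clockwise).
            public static void PlotArc(double cx, double cy, double radius,
                                       double startAngle, double sweep,
                                       double maxChordError, Action<double, double> emit) {
                // Largest step whose chord sags less than maxChordError from the true arc.
                double maxStep = 2.0 * Math.Acos(1.0 - maxChordError / radius);
                int n = Math.Max(1, (int)Math.Ceiling(Math.Abs(sweep) / maxStep));
                double delta = sweep / n;

                double cos = Math.Cos(delta), sin = Math.Sin(delta);   // trig once per arc
                double x = radius * Math.Cos(startAngle), y = radius * Math.Sin(startAngle);

                emit(cx + x, cy + y);
                for (int i = 0; i < n; i++) {
                    double nx = x * cos - y * sin;    // rotate the offset vector by delta
                    y = x * sin + y * cos;
                    x = nx;
                    emit(cx + x, cy + y);             // caller turns each point into stepper moves
                }
            }
        }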

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

    1. An invoice arrives, in the form of an email attachment (an Excel spreadsheet).
    2. Monkey opens the invoice and copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550.
    3. Monkey sends this new spreadsheet to the boss (by email if lucky, by printer otherwise), who sets the retail price.
    4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing.
    5. Monkey fires up HyperTerminal, types in "AT", disconnects.
    6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time.

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wish list:

    - Monkey asks Thunderbird (the mail server, perhaps?) for the attachment.
    - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something.
    - Monkey parses the output and does the complex calculations. // TODO: find a way to get the boss-generated prices with minimal manual labor involved
    - Monkey connects to the database and inserts the data.
    - Monkey spams customers.

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?

    Read the article

  • Most readable way to write simple conditional check

    - by JRL
    What would be the most readable/best way to write a multiple conditional check such as the one shown below? Two possibilities that I could think of (this is Java, but the language really doesn't matter here):

    Option 1:

        boolean c1 = passwordField.getPassword().length > 0;
        boolean c2 = !stationIDTextField.getText().trim().isEmpty();
        boolean c3 = !userNameTextField.getText().trim().isEmpty();
        if (c1 && c2 && c3) {
            okButton.setEnabled(true);
        }

    Option 2:

        if (passwordField.getPassword().length > 0
                && !stationIDTextField.getText().trim().isEmpty()
                && !userNameTextField.getText().trim().isEmpty()) {
            okButton.setEnabled(true);
        }

    What I don't like about option 2 is that the line wraps and then the indentation becomes a pain. What I don't like about option 1 is that it creates variables for nothing and requires looking at two places. So what do you think? Any other options?
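    A frequently suggested third option (a hedged sketch with the question's fields passed in as plain strings, shown in C# although the original is Java): name each condition after what it means and pull the whole test into one well-named method, so the call site reads as a sentence and the wrapping problem disappears:

        static bool CanSubmit(string password, string stationId, string userName) {
            bool hasPassword  = password.Length > 0;
            bool hasStationId = stationId.Trim().Length > 0;
            bool hasUserName  = userName.Trim().Length > 0;
            return hasPassword && hasStationId && hasUserName;
        }

        // The call site then simply becomes something like:
        // okButton.Enabled = CanSubmit(passwordText, stationIdText, userNameText);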

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all. First: this is not meant to be an ignorant "which is better" war thread... Rather, I need help in making an architecture decision / argument to put forward to my boss. Skipping the details: I would simply love to know and find the results of anyone who has done some performance comparisons of shell scripts vs. a general-purpose interpreted language such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in some number of loops doing different types of SQL (Oracle preferred, but MSSQL would do) queries, such as any of the CRUD ops, and also not hitting the database and just doing a regular 50k-loop type comparison with different types of calculations, and things of that nature?

    In particular, for right now, I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any GPPL that's interpreted would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations / instructions / etc... Before you ask "why not just write a quick test yourself?", the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question on here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in near-perpetual deadline crunch as it is ;). Thanks so much in advance.

    Read the article

  • Measuring how "heavily linked" a node is in a graph

    - by Eduardo León
    I have posted this question at MathOverflow.com as well. I am no mathematician and English is not my first language, so please excuse me if my question is too stupid, it is poorly phrased, or both. I am developing a program that creates timetables. My timetable-creating algorithm, besides creating the timetable, also creates a graph whose nodes represent each class I have already programmed, and whose arcs represent which pairs of classes should not be programmed at the same time, even if they have to be reprogrammed. The more "heavily linked" a node is, the more inflexible its associated class is with respect to being reprogrammed. Sometimes, in the middle of the process, there will be no option but to reprogram a class that has already been programmed. I want my program to be able to choose a class that, if reprogrammed, affects the least possible number of other already-programmed classes. That would mean choosing a node in the graph that is "not very heavily linked", subject to some constraints with respect to which nodes can be chosen. EDIT: The question was... Do you know any algorithm that measures how "heavily linked" a node is?
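    For the plain "how heavily linked is this node" question, the standard measure is the node's degree (the number of incident arcs), or a weighted degree if the arcs carry weights; more global notions such as betweenness or eigenvector centrality exist, but degree matches "affects the least possible number of other already-programmed classes" most directly. A minimal sketch (hypothetical types, not from the question):

        using System.Collections.Generic;
        using System.Linq;

        static class GraphMetrics {
            // adjacency: class id -> ids of the classes it must not share a time slot with
            public static int Degree(Dictionary<int, HashSet<int>> adjacency, int node) {
                return adjacency[node].Count;
            }

            // Among the classes that are allowed to be rescheduled, pick the one whose
            // rescheduling conflicts with the fewest already-programmed classes.
            public static int LeastLinked(Dictionary<int, HashSet<int>> adjacency,
                                          IEnumerable<int> eligible) {
                return eligible.OrderBy(n => Degree(adjacency, n)).First();
            }
        }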

    Read the article

  • Why aren't variables declared in "try" in scope in "catch" or "finally"?

    - by Jon Schneider
    In C# and in Java (and possibly other languages as well), variables declared in a "try" block are not in scope in the corresponding "catch" or "finally" blocks. For example, the following code does not compile:

        try {
            String s = "test";
            // (more code...)
        } catch {
            Console.Out.WriteLine(s);  // Java fans: think "System.out.println" here instead
        }

    In this code, a compile-time error occurs on the reference to s in the catch block, because s is only in scope in the try block. (In Java, the compile error is "s cannot be resolved"; in C#, it's "The name 's' does not exist in the current context".) The general solution to this issue seems to be to instead declare variables just before the try block, instead of within the try block:

        String s;
        try {
            s = "test";
            // (more code...)
        } catch {
            Console.Out.WriteLine(s);  // Java fans: think "System.out.println" here instead
        }

    However, at least to me, (1) this feels like a clunky solution, and (2) it results in the variables having a larger scope than the programmer intended (the entire remainder of the method, instead of only in the context of the try-catch-finally). My question is, what were/are the rationale(s) behind this language design decision (in Java, in C#, and/or in any other applicable languages)?

    Read the article

  • How do you handle huge if-conditions?

    - by Teifion
    It's something that's bugged me in every language I've used: I have an if statement, but the conditional part has so many checks that I have to split it over multiple lines, use a nested if statement, or just accept that it's ugly and move on with my life. Are there any other methods that you've found that might be of use to me and anybody else who's hit the same problem?

    Example, all on one line:

        if (var1 == true && var2 == true && var2 == true && var3 == true && var4 == true && var5 == true && var6 == true) {

    Example, multi-line:

        if (var1 == true && var2 == true && var2 == true && var3 == true
            && var4 == true && var5 == true && var6 == true) {

    Example, nested:

        if (var1 == true && var2 == true && var2 == true && var3 == true) {
            if (var4 == true && var5 == true && var6 == true)
            {
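    One option that often comes up (a hedged sketch with hypothetical names, in C#): hide the checks behind a single well-named predicate, so the if reads as a sentence and the line breaks live inside the method instead of in the caller:

        using System.Linq;

        class Example {
            bool var1, var2, var3, var4, var5, var6;   // the flags from the example

            void Update() {
                if (AllPreconditionsMet()) {
                    // ...
                }
            }

            bool AllPreconditionsMet() {
                // Grouping the flags in an array also makes it easy to add or remove checks.
                bool[] preconditions = { var1, var2, var3, var4, var5, var6 };
                return preconditions.All(p => p);
            }
        }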

    Read the article

  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki]

    I am sorry for such an offensive question, but here are my arguments.

    Most of the progress in "computing" has come from non-programming sources, i.e. people invented faster microprocessors, better routers and novel memory devices. I don't think that on average people are writing more efficient programs than those written 10 years ago, and the newer and popular languages are in fact slower than C (though speed is one of the lesser criteria).

    Most of the progress came from novel paradigms. The web, the Internet, cloud computing and social networking are novel paradigms and did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. Though it did face scalability issues (same with Twitter), I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability.

    Even things like MapReduce, column-oriented databases and probabilistic algorithms (e.g. Bloom filters) came from hardcore algorithms research rather than from some programming convention.

    Thus my final point is: why is programming skill so overstressed? To point to a recent example, reportedly only 10% of programmers can "write code" (binary search) without debugging. Isn't it a bit hypocritical, considering that your real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?

    Read the article
