Search Results

Search found 7380 results on 296 pages for 'scripting languages'.

Page 270 of 296

  • How do you read a file line by line in your language of choice?

    - by Jon Ericson
    I got inspired to try out Haskell again based on a recent answer. My big block is that reading a file line by line (a task made simple in languages such as Perl) seems complicated in a functional language. How do you read a file line by line in your favorite language? So that we are comparing apples to other types of apples, please write a program that numbers the lines of the input file. So if your input is:

        Line the first.
        Next line.
        End of communication.

    The output would look like:

        1 Line the first.
        2 Next line.
        3 End of communication.

    I will post my Haskell program as an example. Ken commented that this question does not specify how errors should be handled. I'm not overly concerned about it because:

    - Most answers did the obvious thing and read from stdin and wrote to stdout. The nice thing is that it puts the onus on the user to redirect those streams the way they want. So if stdin is redirected from a non-existent file, the shell will take care of reporting the error, for instance.
    - The question is more aimed at how a language does IO than how it handles exceptions.

    But if necessary error handling is missing in an answer, feel free to either edit the code to fix it or make a note in the comments.
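
    For illustration, a minimal Python sketch that numbers whatever arrives on stdin, in the spirit of the answers that read stdin and write stdout:

        import sys

        # Number each input line; enumerate starts at 1 and each line keeps
        # its trailing newline, so write() preserves the original spacing.
        for number, line in enumerate(sys.stdin, start=1):
            sys.stdout.write(f"{number} {line}")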


  • Getting my foot in the SCADA door, how?

    - by bibby
    I keep hearing that I should learn SCADA and its PLC language to upgrade my career. While I currently enjoy being a web and mobile development privateer, the prospect of working for a municipality or industrial entity has its appeal (since I am trying to grow a family). Over the years, I've taught myself to skillfully use PHP, JavaScript, Java, Perl, awk, and bash. Surely, these language skills can transfer somewhat to SCADA's logic controller language. Without any formal training in CS (music major!) other than at the workplace, I wouldn't have been able to pick up those languages and run with them had it not been for their open documentation and free-to-install or already-installed interpreters/compilers. I can't see that this is true of SCADA, and I'm hoping I'm wrong. Ideally, I'd like to be able to apply for a job that requires [A, B, C] and suggest that they hire me because I already know [A & B], so that they wouldn't have to do ground-up training with someone who has never programmed before. So, finally, the question: how do I "learn" SCADA? Are there sites and docs? What's going to help me get my foot in the door? Any insight is appreciated. Thanks!


  • Getting Started with Ruby & Ruby on Rails

    - by JakeTheSnake
    Some background: I'm a jack-of-all-trades, one of which is programming. I learned VB6 through Excel and PHP for creating websites, and so far it's worked out just fine for me. I'm not a CS major or even mathematically inclined - logic is what interests me.

    Current status: I'm willing to learn new and more powerful languages; my first foray into such a route is learning Ruby. I went to the main Ruby website and did the interactive intro. (By the way, I'm currently getting redirected to google.com when I try the link... it's happening with other websites as well... is my computer infected?) I liked what I learned and wanted to get started using Ruby to create websites. I downloaded InstantRails and installed it; everything so far has been fine - the program starts up just fine, and I can test some Ruby code in the console. However, my troubles begin when I try to view a web page with Ruby code present.

    Lastly, my problem: In PHP, I can browse to the .php file directly and, using PHP tags and some simple 'echo' statements, be on my way to making dynamic web pages. However, with the InstantRails app running, accessing a .rb or .rhtml page doesn't produce similar results. I made a simple text file named 'test.rb' and put basic HTML tags in there (html, head, body) and the Ruby tags <%= and %> with some Ruby code inside. The web page actually shows the tags and the code - as if it's all just plain HTML. I take it Ruby isn't parsing the page before it is displayed to the user, but this is where my lack of understanding of the Ruby environment stops me short. Where do I go from here?


  • Full Text Search in MSSQL2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with MSSQL2008. My task is to investigate an issue where FTS cannot find the right results for Thai. First, I have a table with FTS enabled on the column 'ItemName', which is nvarchar. The catalog is created with the Thai language. Note that Thai is one of the languages that doesn't separate words with spaces, so '????' '???' '????' are written like this in a sentence: '???????????'. In the table, there are many rows that include the word (????); for example, row #1 (ItemName: '???????????'). On the webpage, I try to search for '????' but SQL Server cannot find it. So I investigate by running the following query in SQL Server to see how the words are broken:

        select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0)

    The first parameter is the text to be broken; the second specifies that we're using Thai (for the word breaker, and so on). Here is the result:

        row #1 (display_item: '????', source_item: '???????????')
        row #2 (display_item: '????', source_item: '???????????')
        row #3 (display_item: '??', source_item: '???????????')

    Notice that the first and second rows display the wrong display_item: the '?' in '????' isn't even a Thai character, and the '?' in '????' is not a Thai character either. So the question is: where did those alien characters come from? I guess this is why I cannot search for '????' - the word breaker is broken and is keeping the wrong characters in the indexes. Please help!


  • Setting default language for iPhone app on first run

    - by RaYell
    I'm developing an application that should support two languages: English and French. However, because the English translation is not done yet, we want to deploy it in French only and add the English translation later. The problem is that I don't want to strip English out of my code, since some parts are already done, there are different NIBs for that language, etc. Instead, I'd just like English to be temporarily disabled in my app. What I did is put this code as the first instruction of - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions:

        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        [defaults setObject:[NSArray arrayWithObjects:@"fr", nil] forKey:@"AppleLanguages"];
        [defaults synchronize];

    It works fine except for one thing: when you launch the application for the first time after installation, it's still in English. That's probably because the AppleLanguages preference was not yet set for it. After I quit the application and start it again, it's displayed correctly in French. Does anyone know a fix so that French is applied on the first run as well?


  • Please help us non-C++ developers understand what RAII is

    - by Charlie Flowers
    Another question I thought for sure would have been asked before, but I don't see it in the "Related Questions" list. Could you C++ developers please give us a good description of what RAII is, why it is important, and whether or not it might have any relevance to other languages? I do know a little bit. I believe it stands for "Resource Acquisition Is Initialization". However, that name doesn't jibe with my (possibly incorrect) understanding of what RAII is: I get the impression that RAII is a way of initializing objects on the stack such that, when those variables go out of scope, the destructors will automatically be called, causing the resources to be cleaned up. So why isn't that called "using the stack to trigger cleanup" (UTSTTC)? How do you get from there to "RAII"? And how can you make something on the stack that will cause the cleanup of something that lives on the heap? Also, are there cases where you can't use RAII? Do you ever find yourself wishing for garbage collection? At least a garbage collector you could use for some objects while letting others be managed? Thanks.
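
    On the "relevance to other languages" part: the closest everyday analogue in a garbage-collected language such as Python is deterministic cleanup via context managers rather than destructors. A minimal sketch (the class here is purely illustrative):

        class TempResource:
            # Acquire in __enter__, release in __exit__; cleanup runs when the
            # with-block exits, normally or via an exception, much like a
            # destructor firing when a stack object goes out of scope in C++.
            def __enter__(self):
                print("acquired")
                return self

            def __exit__(self, exc_type, exc, tb):
                print("released")
                return False  # don't swallow exceptions

        with TempResource():
            print("using the resource")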


  • Backreferences syntax in replacement strings (why dollar sign?)

    - by polygenelubricants
    In Java, and it seems in a few other languages, backreferences in the pattern are preceded by a backslash (e.g. \1, \2, \3, etc.), but in a replacement string they are preceded by a dollar sign (e.g. $1, $2, $3, and also $0). Here's a snippet to illustrate:

        System.out.println(
            "left-right".replaceAll("(.*)-(.*)", "\\2-\\1") // WRONG!!!
        ); // prints "2-1"
        System.out.println(
            "left-right".replaceAll("(.*)-(.*)", "$2-$1") // CORRECT!
        ); // prints "right-left"
        System.out.println(
            "You want million dollar?!?".replaceAll("(\\w*) dollar", "US\\$ $1")
        ); // prints "You want US$ million?!?"
        System.out.println(
            "You want million dollar?!?".replaceAll("(\\w*) dollar", "US$ \\1")
        ); // throws IllegalArgumentException: Illegal group reference

    Questions:

    - Is the use of $ for backreferences in replacement strings unique to Java? If not, what language started it? What flavors use it and what don't?
    - Why is this a good idea? Why not stick to the same pattern syntax? Wouldn't that lead to a more cohesive and easier-to-learn language?
    - Wouldn't the syntax be more streamlined if statements 1 and 4 in the above were the "correct" ones instead of 2 and 3?
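
    As one point of comparison (an illustration, not a survey of flavors), Python's re module is an engine that keeps backslash-style group references in the replacement string, with \g<n> as an unambiguous alternative:

        import re

        # Python's replacement syntax uses \1 (or \g<1>), not $1.
        print(re.sub(r"(.*)-(.*)", r"\2-\1", "left-right"))        # right-left
        print(re.sub(r"(.*)-(.*)", r"\g<2>-\g<1>", "left-right"))  # right-left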


  • Space-saving character encoding for Japanese?

    - by Constantin
    In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character types, and even a lot of unused code points. So if I want to use them, I waste a lot of memory (not only for saving multi-byte text - I mean especially for gaps in my bitmap font) - and VRAM is mostly really valuable... So the only reasonable thing seems to be using a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: that effort seems to be the same as using my own proprietary character encoding (and so my own ordering of characters in the texture). In my specific case I have texture space for 4096 different characters and need to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK codepages). Has anybody ever had a similar problem (I'd be surprised if not)? Is there already an approach for this? Edit: The same problem is described here: http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to save these bitmap-font mappings to UTF-8 space-efficiently. So any further help is welcome!
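
    One way to picture the "custom mapping" idea: build a table from just the code points the application actually needs (Latin plus whatever kana and kanji the text uses) to dense glyph indices in the 4096-slot atlas, instead of reserving texture space per Unicode block. A rough Python sketch, with the character ranges chosen purely as an example:

        # Only the characters we actually render get a slot in the atlas.
        needed_chars = sorted(set(
            [chr(c) for c in range(0x20, 0x7F)] +        # basic Latin
            [chr(c) for c in range(0x3041, 0x30F7)] +    # hiragana + katakana
            list("日本語")                                # kanji actually used by the text
        ))
        assert len(needed_chars) <= 4096, "glyph atlas is full"

        char_to_glyph = {ch: i for i, ch in enumerate(needed_chars)}

        def glyph_indices(text, fallback=0):
            # Unknown characters fall back to glyph 0 (e.g. a placeholder box).
            return [char_to_glyph.get(ch, fallback) for ch in text]

        print(glyph_indices("ABC 日本語"))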


  • Is the Windows dev environment worth the cost?

    - by MCS
    I recently made the move from Linux development to Windows development. And as much of a Linux enthusiast as I am, I have to say - C# is a beautiful language, Visual Studio is terrific, and now that I've bought myself a trackball my wrist has stopped hurting from using the mouse so much. But there's one thing I can't get past: the cost. Windows 7, Visual Studio, SQL Server, Expression Blend, ViEmu, Telerik, MSDN - we're talking thousands for each developer on the project! You're definitely getting something for your money - my question is, is it worth it? [Not every developer needs all the aforementioned tools - but have you ever heard of anyone writing C# code without Visual Studio? I've worked on pretty large software projects in Linux without having to pay for any development tool whatsoever.] Now obviously, if you're already a Windows shop, it doesn't pay to retrain all your developers. And if you're looking to develop a Windows desktop app, you just can't do that in Linux. But if you were starting a new web application project and could hire developers who are experts in whatever languages you want, would you still choose Windows as your development platform despite the high cost? And if so, why?


  • Methods and properties in Scheme - is object-oriented programming possible in Scheme?

    - by incrediman
    I will use a simple example to illustrate my question. In Java, C++, or any other OOP language, I could create a pie class in a way similar to this:

        class Pie {
            public String flavor;
            public int pieces;
            private int tastiness;

            public int goodness() {
                return tastiness * pieces;
            }
        }

    What's the best way to do that with Scheme? I suppose I could do it with something like this:

        (define make-pie
          (lambda (flavor pieces tastiness)
            (list flavor pieces tastiness)))

        (define pie-goodness
          (lambda (pie)
            (* (list-ref pie 1) (list-ref pie 2))))

        (pie-goodness (make-pie 'cherry 2 5))
        ; output: 10

    ...where cherry is the flavor, 2 is the pieces, and 5 is the tastiness. However, then there's no type safety or visibility, and everything's just shoved into an unlabeled list. How can I improve on that? Sidenote: The make-pie procedure expects 3 arguments. If I want to make some of them optional (like I'd be able to in curly-brace languages like Java or C), is it good practice to just treat the arguments as a list (that is, not require one argument which is a list) and deal with them that way?


  • Help understanding .NET delegates, events, and event handlers

    - by Seth Spearman
    Hello, in the last couple of days I asked a couple of questions about delegates HERE and HERE. I confess... I don't really understand delegates. And I REALLY REALLY REALLY want to understand and master them. (I can define them - type-safe function pointers - but since I have little experience with C-type languages, that is not really helpful.) Can anyone recommend some online resource(s) that will explain delegates in a way that presumes nothing? This is one of those moments where I suspect that VB actually handicaps me because it does some wiring for me behind the scenes. The ideal resource would just explain what delegates are, without reference to anything else (like events and event handlers), would show me how everything is wired up, and would explain (as I just learned) that delegates are types and what makes them unique as a type (perhaps using a little ildasm magic). That foundation would then expand to explain how delegates are related to events and event handlers, which would need a pretty good explanation in their own right. Finally, this resource could tie it all together using real examples and explain what wiring DOES happen automatically by the compiler, how to use them, etc. And, oh yeah, when you should and should not use delegates - in other words, downsides and alternatives to using delegates. What say ye? Can any of you point me to resource(s) that can help me begin my journey to mastery? EDIT: One last thing. The ideal resource will explain how you can and cannot use delegates in an interface declaration. That is something that really tripped me up. Thanks for your help. Seth


  • Creating a Hello World library function in assembly and calling it from C#

    - by Filip Ekberg
    Let's say we use NASM as they do in this answer: how to write hello world in assembly under windows. I have a couple of thoughts and questions regarding assembly combined with C# or any other .NET language, for that matter. First of all, I want to be able to create a library that has the following function, HelloWorld, which takes this parameter: Name. In C# the method signature would look like this: void HelloWorld(string name), and it would print out something like "Hello World from name". I've searched around a bit but can't find much good, clean material on this to get me started. I know some basic assembly from before, mostly gas though. So any pointers in the right direction are very much appreciated. To sum it up:

    - Create a function in ASM (NASM) that takes one or more parameters
    - Compile and create a library of the above functionality
    - Include the library in any .NET language
    - Call the included library function

    Bonus features:

    - How does one handle returned values?
    - Is it possible to write the ASM method inline?
    - When creating libraries in assembly or C, you do follow a certain "pre-defined" way, the C calling convention, correct?


  • QuadTrees - how to update when internal items are moving

    - by egarcia
    I've implemented a working QuadTree. It subdivides 2-D space in order to accommodate items, identified by their bounding box (x, y, width, height), on the smallest possible quad (up to a minimum area). My code is based on this implementation (mine is in Lua instead of C#): http://www.codeproject.com/KB/recipes/QuadTree.aspx I've been able to implement insertions and deletions successfully. I've now turned my attention to the update() function, since my items' positions and dimensions change over time. My first implementation works, but it is quite naïve:

        function QuadTree:update(item)
          self:remove(item)
          return self.root:insert(item)
        end

    Yup, I basically remove and reinsert every item every time they move. This works, but I'd like to optimize it a bit more; after all, most of the time a moving item still remains in the same quadtree node from one iteration to the next. Is there any standard way to deal with this kind of update? In case it helps, my code is here: http://github.com/kikito/passion/blob/master/ai/QuadTree.lua I'm not looking for someone to implement it for me; pointers to an existing working implementation (even in other languages) would suffice.
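
    Since pointers in other languages are welcome: the usual optimization is to have each item remember the node that holds it, and on update only reinsert when its bounding box no longer fits that node. A small Python illustration of the containment test that drives that decision (the Rect type and the commented update logic are hypothetical, not part of the original code):

        from dataclasses import dataclass

        @dataclass
        class Rect:
            x: float
            y: float
            width: float
            height: float

        def fits_inside(outer: Rect, inner: Rect) -> bool:
            # True when `inner` lies entirely within `outer`.
            return (outer.x <= inner.x
                    and outer.y <= inner.y
                    and inner.x + inner.width <= outer.x + outer.width
                    and inner.y + inner.height <= outer.y + outer.height)

        # update(item) would then be roughly:
        #   if fits_inside(item.node.bounds, item.bounds): do nothing
        #   else: remove item from item.node and reinsert from the root
        print(fits_inside(Rect(0, 0, 100, 100), Rect(10, 10, 5, 5)))   # True
        print(fits_inside(Rect(0, 0, 100, 100), Rect(98, 98, 5, 5)))   # False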


  • Python 3.0 IDE - Komodo and Eclipse both flaky?

    - by victorhooi
    heya, I'm trying to find a decent IDE that supports Python 3.x and offers code completion, an in-built Pydocs viewer, Mercurial integration, and SSH/SFTP support. Anyhow, I'm trying Pydev, and when I open up a .py file in the Pydev perspective, Run As doesn't offer any options. It does when you start a Pydev project, but I don't want to start a project just to edit one single Python script, lol - I want to just open a .py file and have It Just Work... Plan 2: I try Komodo 6 Alpha 2. I actually quite like Komodo, and it's nice and snappy, with in-built Mercurial support as well as in-built SSH support (although it lacks SSH HTTP proxy support, which is slightly annoying). However, for some reason, it refuses to pick up Python 3. In Edit-Preferences-Languages there are two options, one for Python and one for Python3, but the Python3 one refuses to work with either the official Python.org binaries or ActiveState's own ActivePython 3. Of course, I can set the "Python" interpreter to the 3.1 binary, but that's an ugly hack and breaks Python 2.x support. So, does anybody who uses an IDE for Python have any suggestions on either of these accounts, or can you recommend an alternate IDE for Python 3.0 development? Cheers, Victor


  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster. My question now is, (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64

        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
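
    One way to make (1) robust, sketched in Python: pin the byte order explicitly and ship the dtype and shape alongside the payload, so the client never has to guess. The JSON envelope here is just an illustration, not a required format:

        import base64
        import json
        import numpy as np

        x = np.arange(100, dtype=np.float64)

        # Force little-endian float64 on the wire, regardless of the host.
        payload = {
            "dtype": "<f8",
            "shape": list(x.shape),
            "data": base64.b64encode(x.astype("<f8").tobytes()).decode("ascii"),
        }
        message = json.dumps(payload)

        # Client side (shown in Python for brevity): reverse the steps.
        received = json.loads(message)
        raw = base64.b64decode(received["data"])
        y = np.frombuffer(raw, dtype=received["dtype"]).reshape(received["shape"])
        assert np.array_equal(x, y)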


  • Are Fortran control characters (carriage control) still implemented in compilers?

    - by CmdrGuard
    In the book Fortran 95/2003 for Scientists and Engineers, there is much talk given to the importance of recognizing that the first column in a format statement is reserved for control characters. I've also seen control characters referred to as carriage control on the internet. To avoid confusion, by control characters I refer to the characters "1, a blank (i.e. \s), 0, and +" as having an effect on the vertical spacing of output when placed in the first column (character) of a FORMAT statement. Also, see this text-only web page written entirely in fixed-width typeface: Fortran carriage-control (because nothing screams accuracy and antiquity better than prose in monospaced font). I found this page and others like it to be not quite clear. According to Fortran 95/2003 for Scientists and Engineers, failure to recall that the first column is reserved for carriage control can lead to horrible unintended output. Paraphrasing Dave Barry, type the wrong character, and nuclear missiles get fired at Norway. However, when I attempt to adhere to this stern warning, I find that gfortran has no idea what I'm talking about. Allow me to illustrate my point with some example code. I am trying to print out the number Pi:

        PROGRAM test_format
          IMPLICIT NONE
          REAL :: PI = 2 * ACOS(0.0)

          WRITE (*, 100) PI
          WRITE (*, 200) PI
          WRITE (*, 300) PI

        100 FORMAT ('1', "New page: ", F11.9)
        200 FORMAT (' ', "Single Space: ", F11.9)
        300 FORMAT ('0', "Double Space: ", F11.9)
        END PROGRAM test_format

    This is the output:

        1New page: 3.141592741
        Single Space: 3.141592741
        0Double Space: 3.141592741

    The "1" and "0" are not typos. It appears that gfortran is completely ignoring the control character column. My question, then, is this: are control characters still implemented in standards-compliant compilers, or is gfortran simply not standards-compliant? For clarity, here is the output of my gfortran -v:

        Using built-in specs.
        Target: powerpc-apple-darwin9
        Configured with: ../gcc-4.4.0/configure --prefix=/sw --prefix=/sw/lib/gcc4.4 --mandir=/sw/share/man --infodir=/sw/share/info --enable-languages=c,c++,fortran,objc,java --with-gmp=/sw --with-libiconv-prefix=/sw --with-ppl=/sw --with-cloog=/sw --with-system-zlib --x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib --disable-libjava-multilib --build=powerpc-apple-darwin9 --host=powerpc-apple-darwin9 --target=powerpc-apple-darwin9
        Thread model: posix
        gcc version 4.4.0 (GCC)


  • HTML5 tags not working at all in Firefox 3.6.3

    - by William
    Okay, so I'm trying to get into this whole HTML 5 thing, and this tutorial (http://www.webreference.com/authoring/languages/html/HTML5/) says that these tags should move the content around without any kind of CSS at all, but all I'm getting is a line of text that looks like this:

        Header tag Nav tag Artical Section tags Aside tag footer tag

    Here is the code:

        <!DOCTYPE html>
        <html lang="en">
        <head>
          <title>HTML5 test1</title>
          <meta charset="utf-8" />
        </head>
        <body>
          <header>
            Header tag
          </header>
          <nav>
            Nav tag
          </nav>
          <article>
            <section>
              Artical Section tags
            </section>
          </article>
          <aside>
            Aside tag
          </aside>
          <footer>
            footer tag
          </footer>
        </body>
        </html>


  • Backdoor in OpenBSD: how is it that no developer saw it, and what about Linux? [closed]

    - by user310291
    It has been revealed that a backdoor was implanted in OpenBSD: http://www.infoworld.com/d/developer-world/software-security-honesty-the-best-policy-285 OpenBSD is open source, so how is it that nobody in the developer community spotted it in the source code? And how can one trust all the other "open source" systems such as Linux? Of course OpenBSD is only one case; the point is not about OpenBSD, it is about open source in general. My question is not about OpenBSD per se; it's about inspecting OS source code, especially C/C++, since most operating systems are written in those languages. Also, once the source is compiled, how can one be sure that the binary really reflects the source code? If a law required a backdoor to be implanted and obliged people to deny that kind of action under the guise of security, how could you be sure that the system has not been corrupted by some tool? As mentioned, there is a "nondisclosure agreement". My guess is that 99.99% of developers in the world are simply incapable of understanding OS source code and won't even bother to look at it. And above all, nobody wonders why the government wants such a massive backdoor, and of course they will pressure the media to deny it.


  • Magic Methods in Python

    - by dArignac
    Howdy, I'm kind of new to Python and I wonder if there is a way to create something like the magic methods in PHP (http://www.php.net/manual/en/language.oop5.overloading.php#language.oop5.overloading.methods). My aim is to ease access to the child classes in my model. I basically have a parent class that has n child classes. These child classes have three values: a language key, a translation key and a translation value. They describe a kind of generic translation handling. The parent class can have translations for different translation keys, each in different languages. E.g. the key "title" can be translated into German and English, and the key "description" too (and so on). I don't want to fetch the child classes and filter by the stored values explicitly; the concrete implementation behind the magic method would do this. I want to call

        parent_class.title['de']
        # or, also possible maybe:
        parent_class.title('de')

    to get the translation of title in German (de). So there has to be a magic method that takes the name of the called method and its params (as in PHP). As far as I've dug into Python, this is only possible for simple attributes (__getattr__, __setattr__) or for setting/getting items directly on the class (__getitem__, __setitem__), and neither fits my needs on its own. Maybe there is a solution for this? Please help! Thanks in advance!
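
    For what it's worth, __getattr__ can cover this pattern on its own, because it receives the attribute name and can return a small proxy object that is both subscriptable and callable. A minimal sketch, with a hard-coded dict standing in for the real child-class lookup:

        class _TranslationProxy:
            # Returned by Parent.__getattr__; supports proxy['de'] and proxy('de').
            def __init__(self, translations):
                self._translations = translations

            def __getitem__(self, language):
                return self._translations[language]

            __call__ = __getitem__

        class Parent:
            def __init__(self, translations):
                # e.g. {"title": {"de": "Titel", "en": "Title"}}
                self._translations = translations

            def __getattr__(self, key):
                # Only invoked when normal attribute lookup fails, so real
                # attributes and methods keep working as usual.
                try:
                    return _TranslationProxy(self._translations[key])
                except KeyError:
                    raise AttributeError(key)

        parent_class = Parent({"title": {"de": "Titel", "en": "Title"}})
        print(parent_class.title['de'])   # Titel
        print(parent_class.title('de'))   # Titel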


  • Handling learning curve for new developers

    - by pete the pagan-gerbil
    Our company likes to hire new developers with no experience. We have a core set of skills that we try to get them up to speed with, like ASP.NET and WinForms - to teach basic programming, the .NET languages, and the things they'll need to maintain and write. We also try to mentor them through early projects, so they can learn from someone more experienced. Recently, we've been seeing the benefits of new frameworks like MVC and ideas like unit testing and TDD (and by extension, dependency injection and IoC), and we'd like to start using these in the team. However, this increases the time a junior needs before they can get started on a new project, because doing something like unit tests wrong could cause major headaches months or years later in maintenance, especially if we believe the unit tests to be comprehensive. How do you handle the huge amount of things that a junior will need to take on, acknowledging that the business wants them working independently as soon as possible? Is it acceptable to tell them not to unit test until a while after they are independent (and give them small, simpler projects in the meantime) before taking them to 'level 2' of the core skills?


  • Web-app currency input/manipulation/calculation with javascript .. there has got to be a better (fra

    - by dreftymac
    BACKGROUND: I am of the "user-input-lockdown" school of thought. Whenever possible, I try to mistrust and sanitize user input, both client side and server side, and I try to take multiple opportunities to restrict possible inputs to a known subset of possibilities; usually this means providing a lot of checkboxes and select lists. (This is from the usability side of things; I know that, security-wise, malicious users can easily bypass fixed user-input GUI controls.)

    PROBLEM: Anyway, the problem always arises with non-fixed input of currency. Whenever I have to accept a freely specified dollar amount as user input, I always have to confront these problems/annoyances, and it is always painful:

    1) Make sure to give the user two input boxes for each currency_datapoint, one for the whole_dollar_part and another for the fractional_pennies_part.
    2) Whenever the user changes a currency_datapoint, provide keystroke-by-keystroke GUI feedback to let them know whether the currency_datapoint is well formed, with context-appropriate validation rules (e.g., no negatives? nonzero only? numeric only! no non-numeric punctuation! no symbols!).
    3) For display purposes, every user-provided currency_datapoint should be translated to human-readable currency formatting (dollar sign, period, commas provided by the app, where appropriate).
    4) For calculation purposes, every user-provided currency_datapoint has to be converted to an integer (all pennies, to avoid floating point errors) and summed into a grand total with zero or more subtotals.
    5) Every user-provided currency_datapoint should be displayed or displayable in a nice "tabular" format, which auto-updates as the user enters each currency_datapoint, including a balloon that warns when one or more currency_datapoints is not well formed.

    I seem to be re-inventing this wheel every time I have to work with currency in JavaScript on the client side (server side is a bit more flexible, since most programming languages have higher-level currency formatting logic).

    QUESTION: Has anyone out there solved the problem of dealing with the above issues, client side, in a way that is server-side-technology-stack agnostic (preferably plain JavaScript or jQuery)? This is getting old; there has to be a better way.
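
    On point 4 alone, the language-agnostic trick is to validate against a strict pattern and convert straight to whole pennies so that no float ever enters the arithmetic. Illustrated here in Python rather than JavaScript, with a pattern and rules that are only an example, not a complete ruleset:

        import re

        # Accepts "1,234.56", "1234.5", "0.07"; rejects anything else.
        CURRENCY_RE = re.compile(r"^\d{1,3}(,\d{3})*(\.\d{1,2})?$|^\d+(\.\d{1,2})?$")

        def to_pennies(text):
            text = text.strip()
            if not CURRENCY_RE.match(text):
                raise ValueError(f"not a well-formed amount: {text!r}")
            dollars, _, cents = text.replace(",", "").partition(".")
            return int(dollars) * 100 + int(cents.ljust(2, "0"))

        subtotals = [to_pennies(s) for s in ("1,234.56", "8.4", "0.07")]
        total = sum(subtotals)                            # 124303 pennies
        print(f"${total // 100:,}.{total % 100:02d}")     # $1,243.03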


  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. That's the reason behind this doubt, and thus this question: is targeting the JVM a good idea when creating a compiler for a new language, or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? The language has deterministic implicit memory management. How do I produce JIT-compatible bytecode, such that it will get the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 muops pattern on the Pentium? I can imagine some advantages (please correct me if I'm wrong):

    - JVM bytecode is easier than x86.
    - Like x86 communicates with Windows, the JVM communicates with the Java Foundation Classes, to provide I/O, threading, GUI, etc.
    - Implementing "lightweight" threads. I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/.
    - Most advantages of the Java Runtime (portability, etc.)

    The disadvantages, as I imagine them, are:

    - Less freedom? On x86 it'll be easier to create low-level constructs, while the JVM is a higher-level (more abstract) processor.
    - Most disadvantages of the Java Runtime (no native dynamic typing, etc.)


  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of:

    a) the String.Compare overload that accepts the culture and options, and
    b) for some bulk processes, caching the byte data (KeyData) from the CompareInfo.GetSortKey overload that accepts the options, and using a byte-by-byte comparison of the KeyData.

    This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload accepting an IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth", etc. I'm assuming that a StringComparer that ignores these other options could produce different hash codes for two strings that might be considered the same under my other comparison options. I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<string> that generates a hash code from the SortKey.KeyData and checks equality using the String.Compare overload. Any suggestions?


  • How do tools like HipHop for PHP deal with heterogeneous arrays?

    - by Derek Thurn
    I think HipHop for PHP is an interesting tool. It essentially converts PHP code into C++ code. Cross-compiling in this manner seems like a great idea, but I have to wonder: how do they overcome the fundamental differences between the two type systems? One specific example of my general question is heterogeneous data structures. Statically typed languages don't tend to let you put arbitrary types into an array or other container, because they need to be able to figure out the types on the other end. If I have a PHP array like this:

        $mixedBag = array("cat", 42, 8.5, false);

    how can this be represented in C++ code? One option would be to use void pointers (or the superior version, boost::any), but then you need to cast when you take stuff back out of the array... and I'm not at all convinced that the type inferencer can always figure out what to cast to at the other end. A better option, perhaps, would be something more like a union (or boost::variant), but then you need to enumerate all possible types at compile time... maybe possible, but certainly messy, since arrays can contain arbitrarily complex entities. Does anyone know how HipHop and similar tools, which go from a dynamic typing discipline to a static discipline, handle these types of problems?


  • MySQL and trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say that for some reason it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion]. Table definition:

        CREATE TABLE table1(x int NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
          IF (condition) THEN
            NEW.x = NULL;
          END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
          IF (condition) THEN
            ROLLBACK;
          END IF;
        END;

    But this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        ROLLBACK;

    You are guaranteed that:

    - Your DB will always be MySQL
    - Table type will always be InnoDB
    - That NOT NULL column will always stay the way it is

    Question: Do you see anything objectionable in the 1st method?

