Search Results

Search found 5206 results on 209 pages for 'spoken languages'.


  • how to lengthen the pause between the words with text-to-speech (pyTTS or SAPI5)

    - by Berry Tsakala
    Is it possible to extend the gap between spoken words when using text-to-speech with SAPI5? The problem is that, especially with some voices, the words run almost directly into each other, which makes the speech more difficult to understand. I'm using Python and the pyTTS module (on Windows, since it uses SAPI). I tried hooking the OnWord event and adding a time.sleep() or tts.Pause(), but apparently even though all the events are caught, they are only processed at the end of the spoken text, whether I use the sync or the async flag. In this non-working example, the sleep() call is executed only after the sentence is spoken:

        from time import sleep
        import pyTTS

        tts = pyTTS.Create()

        def f(x):
            tts.Pause()
            sleep(0.5)
            tts.Resume()

        tts.OnWord = f
        tts.Speak(text)
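    Since the OnWord events are only delivered once the utterance completes, one workaround that avoids events entirely is to pre-process the text: SAPI5 accepts inline XML markup, so the text can be padded with silence tags and spoken as XML. This is an untested sketch; the flag value 8 is SAPI5's SVSFIsXML, and whether your pyTTS build forwards a raw flag value to Speak() is an assumption to verify against the module's constants:

        import pyTTS

        SVSF_IS_XML = 8  # SAPI5 SpeechVoiceSpeakFlags.SVSFIsXML

        def speak_with_gaps(tts, text, gap_ms=300):
            # Insert a fixed silence after every word, then hand SAPI the XML.
            padding = ' <silence msec="%d"/> ' % gap_ms
            xml_text = padding.join(text.split())
            tts.Speak(xml_text, SVSF_IS_XML)

        tts = pyTTS.Create()
        speak_with_gaps(tts, "This sentence should be spoken with clearer gaps.")

    Lowering the voice's speaking rate (SAPI's Rate property) may also help, if your pyTTS build exposes it.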

    Read the article

  • Change the User Interface Language in Vista or Windows 7

    - by Matthew Guay
    Would you like to change the user interface language in any edition of Windows 7 or Vista on your computer? Here’s a free app that can help you do this quickly and easily. If your native language is not the one most spoken in your area, you’ve likely purchased a PC with Windows preinstalled in a language that is difficult or impossible for you to use. Windows 7 and Vista Ultimate include the ability to install multiple user interface languages and switch between them; however, all other editions are stuck with the language they shipped with. With the free Vistalizator app, you can add several different interface languages to any edition of Vista or Windows 7 and easily switch between them. Note: In this test, we used a US English copy of both Windows 7 Home Premium and Windows Vista Home Premium, and it works the same on any edition. The built-in language switching in the Ultimate editions lets you set a user interface language for each user account; Vistalizator, by contrast, switches the language for all users at once. Add a User Interface Language to Windows To add an interface language to any edition of Windows 7 and Vista, first download Vistalizator (link below). Then, from the same page, download the language pack of your choice. The language packs are specific to each service pack of Windows, so make sure to choose the correct version for the service pack you have installed. Once the downloads are finished, launch the Vistalizator program. You do not need to install it; simply run it and you’re ready to go. Click the Add languages button to add a language to Windows. Select the user interface language pack you downloaded, and click Open. Depending on the language you selected, it may not automatically update with Windows Update when a service pack is released. If so, you will have to remove the language pack and reinstall the new one for that service pack at that time. Click OK to continue. Make sure you’ve selected the correct language, and click Install language. Vistalizator will extract and install the language pack. This took around 5 to 10 minutes in our test. Once the language pack is installed, click Yes to make it the default display language. Now you have two languages installed in Windows. You may be prompted to check for updates to the language pack; if so, click Update languages and Vistalizator will automatically check for and install any updates. When finished, exit Vistalizator to finish switching the language. Click Yes to automatically reboot and apply the changes. When your computer reboots, it will show your new language, which in our test is Thai. Here’s our Windows 7 Home Premium machine with the Thai language pack installed and running. You can even add a right-to-left language, such as Arabic, to Windows; simply repeat the steps to add another language pack. Vistalizator was originally designed for Windows Vista, and it works great with Windows 7 too. The language packs for Vista are larger downloads than their Windows 7 counterparts. Here’s our Vista Home Premium in English… And here’s how it looks after installing the Simplified Chinese language pack with Vistalizator. Revert to Your Original Language If you wish to return to the language that your computer shipped with, or want to switch to another language you’ve installed, run Vistalizator again. Select the language you wish to use, and click Change language. When you close Vistalizator, you will again be asked to reboot. Once you’ve rebooted, you’ll see your new (or original) language ready to use.
Here’s our Windows 7 Home Premium desktop, back in its original English interface. Conclusion This is a great way to change your computer’s interface to your own native language, and it is especially useful for expatriates around the world. Also, if you’d like to simply change or add an input language instead of changing the language throughout your computer, check out our tutorial on How to Add Keyboard Languages to XP, Vista, and Windows 7. Download Vistalizator

    Read the article

  • What is the need of functional programming?

    - by Lazer
    I have read about functional programming, which is stateless and gives the same result invocation after invocation, and about closures and other related concepts. I still feel that I have very little idea what these things are about. Thinking about it right now, I feel complete in C, C++, and Java: give me any programming problem and I start thinking in one of these languages. So I have never felt or understood the need for functional languages. A good starting point, therefore, would be to try to understand some things that are not possible in imperative languages but are possible in functional languages. I feel that unless I understand where exactly functional languages fit inside my already complete world of C, C++, and Java, I will never be able to appreciate and understand them. So, can somebody help me understand the real need for functional programming? Where exactly does it fit in?
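    To make the contrast concrete, here is a small Python illustration (Python borrows these ideas from functional languages) of two things that are awkward in C and in pre-lambda Java: functions as first-class values, and closures that capture state without a class or a global:

        def make_counter():
            count = 0
            def step():
                nonlocal count   # the closure captures and updates `count`
                count += 1
                return count
            return step          # return the function itself, as a value

        c1, c2 = make_counter(), make_counter()
        print(c1(), c1(), c2())  # 1 2 1: each closure carries its own state

        # Higher-order functions replace hand-written loop boilerplate:
        squares = list(map(lambda x: x * x, range(5)))  # [0, 1, 4, 9, 16]

    In C you would need function pointers plus a hand-rolled struct to carry the captured state; functional languages make this pattern the default way of building programs.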

    Read the article

  • Are we in a functional programming fad?

    - by TraumaPony
    I use both functional and imperative languages daily, and it's rather amusing to see the surge of adoption of functional languages from both sides of the fence. It strikes me, however, that it looks rather like a fad. Do you think that it's a fad? I know the reasons for using functional languages at times and imperative languages at others, but do you really think that this trend will continue due to the clichéd "many-core" revolution that has been only "18 months from now" since 2004 (sort of like communism's Radiant Future), or do you think that it's only temporary, a fascination of the mainstream developer that will be quickly replaced by the next shiny idea, like Web 3.0 or GPGPU? Note that I'm not trying to start a flamewar or anything (sorry if it sounds bitter); I'm just curious whether people think functional or functional/imperative languages will become mainstream. Edit: By mainstream, I mean having an equal number of programmers to, say, Python, Java, C#, etc.

    Read the article

  • Font choices in International scenarios: multilingual vs unicode

    - by TravisO
    I have a website that will eventually display multiple languages. I notice that the common fonts used in web CSS (e.g. Arial, Verdana, Times New Roman, Tahoma) and even the newer Vista/Office 2007/VS2008 fonts (Calibri, Cambria, Candara, Corbel, etc.) are significantly larger (~350 KB) than your average (US-only?) TTF font (~50 KB), so these fonts contain most or all of the major character sets that common languages (Spanish, French, German, etc.) use. My question is, would somebody confirm that the fonts listed above are acceptable for international use of the major (let's say top 8) spoken languages? If so, then I'm guessing the only purpose of Unicode fonts such as "Arial Unicode" (a massive 22 MB) is dealing with extremely niche dialects, Eastern glyphs (Chinese, Japanese), and dead languages? I'm just looking for some confirmation from developers that have their desktop apps/web apps rendering multiple languages and have visual confirmation; I'm already in the 99% sure bin, but you know what they say about assumption.
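    Rather than assuming, you can check a font's coverage directly by reading its character map. A sketch using the fontTools library (pip install fonttools); the font path and the sample strings are placeholders to adapt:

        from fontTools.ttLib import TTFont

        samples = {
            "French":  "àâçéèêëîïôùûüœ",
            "German":  "äöüßÄÖÜ",
            "Spanish": "áéíñóúü¿¡",
            "Polish":  "ąćęłńóśźż",
        }

        font = TTFont("Arial.ttf")         # placeholder path to the font under test
        cmap = font["cmap"].getBestCmap()  # maps code points to glyph names

        for lang, chars in samples.items():
            missing = [c for c in chars if ord(c) not in cmap]
            print(lang, "OK" if not missing else "missing: " + "".join(missing))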

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn't hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn't have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn't have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn't the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn't be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don't know about, but they shouldn't consistently be the ones who know the most. It's not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I've been coding since I was tiny. At university there were people who were cleverer than me, but there weren't very many who were more experienced programmers than me. During my internship, I didn't find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate's not so big on the actual code review, at least it wasn't when I started. We did an entire product release and then somebody looked over all of the UI of that product which I'd written and said what they didn't like. By that point, it was way too late and I'd disagree with them. Do you think the lack of code reviews was a bad thing? I think if there's going to be any oversight of new people, then it should be continuous rather than chunky. For me I don't mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I'd be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you've worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, and I derived how to do isometric projections myself once. I didn't know what the word was so I couldn't Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don't need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other's toes. You could lock, but locks are really dangerous if you're using more than one of them. You get interactions like deadlocks, and that's just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it.
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it's only suitable for specific cases? It's suitable for most difficult parallel programming. If you've just got a hundred web requests which are all independent of each other, then I wouldn't bother because it's easier to just spin them up in separate threads and they can proceed independently of each other. But where you've got difficult parallel programming, where you've got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you're using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code. You can't use actors with any methods that have return types, because you're not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you've got to use tasks so that you can use "async" and "await" to wait for it asynchronously. But other than that, you can still stick things in classes so it's not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn't much of a jump. If I've already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It's hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won't scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it's like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn't be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection.
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there are contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really put much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see whether it looks right or wrong. We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests; I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced enough with C# and ReSharper to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
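    The actor pattern Davies describes, in which each piece of mutable state is owned by one thread of execution and all other access goes through messages, is language-neutral. NAct itself is a C# framework, so the following is only a minimal Python sketch of the idea, not NAct's API: a counter actor that owns its state and answers by message.

        import threading
        import queue

        class CounterActor:
            def __init__(self):
                self._mailbox = queue.Queue()
                self._count = 0  # owned exclusively by the actor's thread
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                # Only this thread ever touches self._count.
                while True:
                    msg, reply = self._mailbox.get()
                    if msg == "inc":
                        self._count += 1
                    elif msg == "get":
                        reply.put(self._count)  # data travels back by message too

            def send_inc(self):
                self._mailbox.put(("inc", None))

            def ask_count(self):
                reply = queue.Queue(maxsize=1)
                self._mailbox.put(("get", reply))
                return reply.get()

        actor = CounterActor()
        for _ in range(1000):
            actor.send_inc()
        print(actor.ask_count())  # 1000: no locks, no data races

    Because the mailbox serialises all access, there is nothing to lock, and no deadlock can arise between two actors as long as, per Davies's rule, an actor never blocks waiting on another actor.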

    Read the article

  • Missing ideas in programming language design

    - by meyka
    I wanted to try something new, so I designed some programming languages and wrote interpreters for them:

    1. A rather low-level, not very expressive language. (I didn't want to parse complex expressions right at the beginning.) It featured: variables (yay); subroutines, with a call stack; basic arithmetic functions; basic string manipulation; ... Code in the language looks like this:

        set i 0
        inc i
        print i

       Very, very basic, you see.

    2. A more high-level language. I decided to make it structured, so it featured things like if-else, while, functions, and so on: the stuff most programming languages have. It ended up like an unworthy Python clone; I hated that.

    3. A code-golf language, which ended up similar to J, GolfScript, APL, etc. Nothing special.

    As you can see, I don't lack the skills but the ideas. I can't figure out anything new, not even bad, unnecessary things, for my languages. Do you know of some weird things I could implement in my languages, which don't try to make programming harder (like most esoteric languages) but funnier or more different from other languages? It can't be possible that every weird thing has been tried out so far, can it?
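    For what it's worth, the low-level language above is small enough that a complete interpreter for the three instructions shown fits in a dozen lines. A Python sketch, assuming the set/inc/print semantics implied by the sample program:

        def run(source):
            variables = {}
            for line in source.strip().splitlines():
                op, *args = line.split()
                if op == "set":        # set <var> <int literal>
                    variables[args[0]] = int(args[1])
                elif op == "inc":      # inc <var>
                    variables[args[0]] += 1
                elif op == "print":    # print <var>
                    print(variables[args[0]])

        run("""
        set i 0
        inc i
        print i
        """)  # prints 1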

    Read the article

  • Natural language grammar and user-entered names

    - by Owen Blacker
    Some languages, particularly Slavic languages, change the endings of people’s names according to the grammatical context. (For those of you who know grammar or studied languages that do this to words, such as German or Russian, and to help with search keywords: I’m talking about noun declension.) This is probably easiest with a set of examples (in Polish, to save the whole different-alphabet problem):

    1. Dorothy saw the cat — Dorota zobaczyła kota
    2. The cat saw Dorothy — Kot zobaczył Dorotę
    3. It is Dorothy’s cat — To jest kot Doroty
    4. I gave the cat to Dorothy — Dałam kota Dorocie
    5. I went for a walk with Dorothy — Poszłam na spacer z Dorotą
    6. “Hello, Dorothy!” — “Witam, Doroto!”

    Now, if, in these examples, the name were user-entered, that introduces a world of grammar nightmares. Importantly, if I went for Katie (Kasia), the examples are not directly comparable — 3 and 4 are both Kasi, rather than *Kasy and *Kasie — and male names will be wholly different again. I’m guessing someone has dealt with this situation before, but my Google-fu appears to be weak today. I can find a lot of links about natural-language processing, but I don’t think that’s quite what I want. To be clear: I’m only ever gonna have one user-entered name per user and I’m gonna need to decline it into known configurations — I’ll have localised text with placeholders something like {name nominative} and {name dative}, for the sake of argument. I really don’t want to have to do lexical analysis of text to work stuff out; I’ll only ever need to decline that one user-entered name. Anyone have any recommendations on how to do this, or do I need to start calling round localisation agencies? ;o) Further reading (all on Wikipedia) for the interested: Declension, Grammatical case, Declension in Polish, Declension in Russian, Declension in Czech nouns and pronouns. Disclaimer: I know this happens in many other languages; highlighting Slavic languages is merely because I have a project that is going to be localised into some Slavic languages.
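    The placeholder scheme described above can stay very simple if each user record stores the name pre-declined into the handful of cases the UI needs (collected at registration, or produced by a per-language rules module). The localised strings then only ever do dictionary substitution. A Python sketch; the case keys and stored forms are illustrative:

        user_name_forms = {
            "nominative":   "Dorota",
            "accusative":   "Dorotę",
            "genitive":     "Doroty",
            "dative":       "Dorocie",
            "instrumental": "Dorotą",
            "vocative":     "Doroto",
        }

        # Localised template with a case-tagged placeholder, as suggested above:
        template = "To jest kot {name_genitive}"   # "It is {name}'s cat"
        print(template.format(name_genitive=user_name_forms["genitive"]))

    The hard part, as the Kasia example shows, is generating those forms from the nominative; that generation step is exactly what a localisation agency (or a library of per-language declension rules) would have to supply.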

    Read the article

  • Cookie not working after mod rewrite rule

    - by moonwalker
    Hi all, I have a simple cookie to set the chosen language:

        $lang = $_GET['lang'];
        $myLang = $_COOKIE["myLang"];

        // One year to expire. (Note: this must be defined before the first
        // setcookie() call; in my original snippet it was defined too late,
        // so the first cookie was set with an empty expiry.)
        $expire = time()+60*60*24*30*365;

        if (!isset($_COOKIE["myLang"])) {
            setcookie("myLang", "en", $expire);
            include "languages/en.php";
            $myLang = "en";
        } else {
            include "languages/$myLang.php";
        }

        // Put $languages in a common header file.
        $languages = array('en' => 1, 'fr' => 2, 'nl' => 3);
        if (array_key_exists($lang, $languages)) {
            include "languages/{$lang}.php";
            setcookie("myLang", $lang, $expire);
            $myLang = $lang;
        }

    After adding some rewrite rules, it just doesn't work anymore. I tried the following:

        setcookie("myLang", "en", $expire, "/", false);

    No luck at all. This is my .htaccess file:

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        Options +Indexes
        RewriteEngine On
        RewriteBase /
        RewriteRule ^sort/([^/]*)/([^/]*)$ /3arsi2/sort.php?mode=$1&cat=$2 [L]
        RewriteRule ^category/([^/]*)$ /3arsi2/category.php?cat=$1 [L]
        RewriteRule ^category/([^/]*)/([^/]*)$ /3arsi2/category.php?cat=$1&lang=$2 [L]
        RewriteRule ^search/([^/]*)$ /3arsi2/search.php?mode=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^u/([^/]+)/?$ 3arsi2/user.php?user=$1 [NC,L]
        RewriteRule ^u/([^/]+)/(images|videos|music)/?$ 3arsi2/user.php?user=$1&page=$2 [NC,L]
        RewriteRule ^([^\.]+)$ 3arsi2/$1.php [NC,L]
        </IfModule>

    Any idea how to solve this? I'm still new to mod_rewrite, so I don't really understand the logic behind it all yet. Thanks for any help you can provide.

    Read the article

  • What are the Open Source alternatives to WPF/XAML?

    - by Evan Plaice
    If we've learned anything from HTML/CSS, it's that declarative languages (like XML) work best to describe user interfaces, because: it's easy to build code preprocessors that can template the code effectively; the code is in a well-defined, (ideally) well-structured format, so it's easy to parse; the technology to effectively parse or crawl an XML-based source file already exists; the UI's scripted code becomes much simpler and easier to understand; and it's simple enough that designers are able to design the interface themselves. Programmers suck at creating UIs, so it should be made easy enough for designers. I recently took a look at the meat of a WPF application (i.e. the XAML) and it looks surprisingly similar to the declarative language style used in HTML. It's apparent to me that the current state of desktop UI development is largely fragmented; otherwise there wouldn't be so much duplicated effort in the domain of graphical user interface design (e.g. GTK+, XUL, Qt, WinForms, WPF, etc.). There are 45 GUI platforms for Python alone. It seems reasonable to me that there should be a general-purpose, open source, standardized, platform-independent markup language for designing desktop GUIs, much like what the W3C made HTML/CSS into. WPF, or more specifically XAML, seems like a pretty likely step in the right direction. Now that the 'browser wars' are over, should we look forward to a future of 'desktop GUI wars'? Note: This topic is relatively subjective in the attempt to be 'future-thinking.' I think that desktop GUI development in its current state sucks ((really) hard) and, even though WPF is still in its infancy, it presents a likely solution to the problem. Update: Thanks a lot for the info, keep it comin'. Here are the options I've gathered from the comments and answers.

    GladeXML
    - Editor: Glade Interface Designer
    - OS Platforms: All
    - GUI Platform: GTK+
    - Languages: C (libglade), C++, C# (Glade#), Python, Ada, Pike, Perl, PHP, Eiffel, Ruby

    XRC (XML Resource)
    - Editors: wxGlade, XRCed, wxDesigner, DialogBlocks (non-free)
    - OS Platforms: All
    - GUI Platform: wxWidgets
    - Languages: C++, Python (wxPython), Perl (wxPerl), .NET (wx.NET)

    XML-based formats that are either not free, not cross-platform, or language-specific:

    XUL
    - Editor: Any basic text editor
    - OS Platforms: Any OS running a browser that supports XUL
    - GUI Platform: Gecko engine?
    - Languages: C++, Python, Ruby as plugin languages, not base languages
    - Note: I'm not sure XUL deserves mentioning in this list because it's less of a desktop GUI language and more of a make-webapps-run-on-the-desktop language. Plus, it requires a browser to run. That is, it's 'DHTML for the desktop.'

    CookSwing
    - Editors: Eclipse via WindowBuilder, NetBeans 5.0 (non-free) via Swing GUI Builder aka Matisse
    - OS Platforms: All
    - GUI Platform: Java
    - Languages: Java only

    XAML (Moonlight)
    - Editor: MonoDevelop
    - OS Platforms: Linux and other Unix/X11-based OSes only
    - GUI Platform: GTK+
    - Languages: .NET
    - Note: XAML is not a pure open source format because Microsoft controls its terms of use, including the right to change them at any time. Moonlight cannot legally be made to run on Windows or Mac. In addition, the only platform exempt from legal action is Novell. See this for a full description of what I mean.
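    As a taste of how one of the open formats above is consumed in practice, here is a minimal wxPython sketch that loads an XRC resource; "gui.xrc" and the frame name "MainFrame" are placeholders for your own resource file:

        import wx
        import wx.xrc as xrc

        app = wx.App()
        res = xrc.XmlResource("gui.xrc")          # UI declared in XML
        frame = res.LoadFrame(None, "MainFrame")  # instantiate a frame by name
        frame.Show()
        app.MainLoop()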

    Read the article

  • How to become an expert web-developer?

    - by John Smith
    I am currently a junior PHP developer and I really LOVE it. I have loved the internet from the first time I got into it; I always loved smartly created websites, always wondered how it all works, always admired websites with good design and rich functionality, and finally I am creating websites on my own and it feels really great. My goals are to become an expert web developer (aiming at creating websites for small and medium businesses, not enterprise-sized systems), to have a great full-time job, to do freelance work, and to create my own startup in the future. General question: What do I do to become an expert, professional, in-demand web programmer? More concrete questions: 1) How do I choose the languages and technologies needed? I know that every web developer must know HTML+CSS+JS+AJAX+jQuery, and I am doing some design as well because I like it and I need it for freelancing too. But what about back-end languages? Currently I picked PHP because it's the most demanded in my area and most of the web uses it, but what happens in the future? Say, in 3 years, I am good at PHP and PHP frameworks by then, but what if some other language becomes more popular? Do I switch to it? I know that being a good programmer is not about languages and frameworks but about the ability to learn and to aim at goals, but still I think that learning the frameworks for a language can take quite some time. Am I wrong? 2) In general, what are the basic guidelines to be an expert web developer? What are the most important things I should focus on? Thank you!

    Read the article

  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. We spoke with distinguished Rock Star Charles Nutter. A JRuby Update from Charles Nutter: Charles Nutter of Red Hat is well known as a lead developer of JRuby, a Java implementation of Ruby that is tightly integrated with Java, allowing the interpreter to be embedded into any Java application with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year’s JavaOne: CON7257 – “JVM Bytecode for Dummies (and the Rest of Us Too)” CON7284 – “Implementing Ruby: The Long, Hard Road” CON7263 – “JVM JIT for Dummies” BOF6682 – “I’ve Got 99 Languages, but Java Ain’t One” CON6575 – “Polyglot for Dummies” (both with Thomas Enebo) I asked Nutter to give us the latest on JRuby. “JRuby seems to have hit a tipping point this past year,” he explained, “moving from ‘just another Ruby implementation’ to ‘the best Ruby implementation for X,’ where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better.” When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, “I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen.” And what is Nutter working on currently? “Aside from JRuby 1.7 wrap-up,” he explained, “I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages.” Originally published on blogs.oracle.com/javaone.

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming, in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: functional programming languages, e.g. Haskell, Scala, etc.; and the actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions: What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored, or are there well-established practices already? Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices? How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multi-core processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multi-core processor due to higher parallelism?
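    As a starting point for the performance question, the effect of spreading CPU-bound work across cores is easy to measure even without an actor framework. A minimal Python sketch using only the standard library (timings will vary by machine):

        import time
        from concurrent.futures import ProcessPoolExecutor

        def burn(n):
            # CPU-bound work: sum of squares.
            total = 0
            for i in range(n):
                total += i * i
            return total

        if __name__ == "__main__":
            jobs = [2_000_000] * 8

            t0 = time.perf_counter()
            serial = [burn(n) for n in jobs]
            t1 = time.perf_counter()

            with ProcessPoolExecutor() as pool:  # one process per core by default
                parallel = list(pool.map(burn, jobs))
            t2 = time.perf_counter()

            print(f"serial: {t1 - t0:.2f}s  parallel: {t2 - t1:.2f}s")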

    Read the article

  • Is learning how to use C (or C++) a requirement in order to be a good (excellent) programmer?

    - by blueberryfields
    When I first started to learn how to program, real programmers could write assembly in their sleep. Any serious schooling in computer science would include a hefty bit of training and practice in programming using assembly. That has since changed, to the point where I see Computer Science degrees in which assembly, if included at all, is relegated to one assignment and one chapter, for a total of two weeks' work out of four years' schooling. C/C++ programming seems to have followed a similar path. I'm no longer surprised to interview university graduates who have not spent more than two weeks programming in C++, and have only read of C in a book somewhere. While the most serious CS degrees still seem to include significant time learning and using one or both of these languages, the trend is clearly towards less enforced C/C++ in school. It's clearly possible to make a career producing good work without ever reading or writing a single line of C or C++ code. Given all of that, is learning the two languages worth the effort? Are they at all required to excel? (Beyond the obvious, non-language-specific advice, such as "a good selection of languages is probably important for a comprehensive education" and "it's probably a good idea to keep trying out and learning new languages throughout a programmer's career, just to stretch the gray cells.")

    Read the article

  • Save Actions in NetBeans IDE 7.3

    - by Geertjan
    Several developers, especially those familiar with equivalent functionality in Eclipse, have been asking for so-called "Save Actions", that is, support for actions that are automatically performed when a file is saved. Here's the related NetBeans issue: http://netbeans.org/bugzilla/show_bug.cgi?id=140719 In NetBeans IDE 7.3, the issue is resolved as follows: A new "On Save" tab is found in the "Editor" tab of the Options window. Defaults for all languages are set via the "All Languages" item in the drop-down. Here, for all languages, you can specify what kind (all, none, or only modified lines) of formatting and space removal will occur automatically when a file is saved. Via the drop-down, you see all the languages supported by the IDE. You can pick a language and then override the default On Save settings. Per language, there may be additional On Save settings. For example, for Java, you can specify that, when saving a Java file, unused import statements should be removed and/or the rules you've set for organizing import statements should be applied. There's also a set of new NetBeans IDE APIs for adding new On Save functionality via custom plugins. Via MIME type registration of OnSaveTask.Factory, you can register new On Save actions that will be run for files conforming to the relevant MIME type. There are also extensions via the Editor Options API for registering new panels (one per language) in the On Save panel of the Options window. I'll demonstrate some examples of the APIs in upcoming blog entries.

    Read the article

  • Experiences with learning Chinese

    - by Greg Low
    I've had a few friends asking me about learning Chinese and what I've found works and doesn't work. I was answering a question on a mailing list today and I thought I should post this info where it might be useful to many. The question that was initially asked was whether Rosetta Stone was useful, but I've provided much more info on learning the language here. I’ve used Rosetta Stone with Chinese but it’s really hard to know whether to recommend it or not. Rosetta Stone works the same way in all languages. They show you photos and then let you both see and hear the target language and get you to work out what they’re talking about. The thinking is that that’s how children learn. However, at first, I found it very frustrating. I’d be staring at photos trying to work out what they were really trying to get at. Sometimes it’s far from obvious. I could not have survived without Google Translate open at the same time. The other weird thing is that the photos are from a mixture of countries. While that’s good in a way, it also means that they are endlessly showing pictures of something that would never happen in the target language and culture. For any language, constant interaction with a speaker of the target language is needed. Rosetta Stone has a “Studio” option. That’s the best part of the program. In my case, it lets me connect around twice a week to a live online class from Beijing. Classes usually have the teacher plus two to four students. You get some Studio access with the initial packages but need to purchase it for ongoing use. I find it very inexpensive. It seems to work out to about $70 (AUD/USD) for six months. That’s a real bargain. The other downside to Rosetta Stone is that they tend to teach very formal language, but as with other languages, that’s not how the locals speak. It might have been correct at one point but no-one actually says that. As an example, Rosetta Stone teach Gōnggòng qìchē (pronounced roughly like “gong gong chee chure”) for bus. Most of my friends from areas like Taiwan would just say Gōngchē. Google Translate says Zǒngxiàn (pronounced somewhat like “dzong sheean”) instead. Mind you, the Rosetta Stone option isn't really as bad as "omnibus"; it's more like saying "public bus". If you say the option they provide, people would understand you. I also listen to ChinesePod in the car. They also have SpanishPod. Each podcast is about five minutes of spoken conversation. It is very good for providing current language. Another resource I use is local Meetup groups. Most cities have these, for a variety of languages. It’s way less structured (just standard conversation) but good for getting interaction. The obvious challenge for Asian languages is reading/writing. The input editors for Chinese that are part of Windows are excellent. Many of my Chinese friends speak fluently but cannot read or write. I was determined to learn to do both. For writing, I’m talking about on a computer, not with a pen. (Mind you, I can barely write English with a pen nowadays.) When using Rosetta Stone, you can choose to have the Chinese words displayed in pinyin (Wǒ xǐhuan xuéxí zhōngguó), in Chinese characters (我喜欢学习中国), or both. This year, I’ve been forcing myself to just use the Chinese characters. I use a pinyin input editor in Windows though, as it’s very fast. (The character recognition input in the iPad is also amazing.) Notice from the example that I provided above that the pronunciation of the pinyin isn’t that obvious to us at first either.
Since changing to only using characters, I find I can now read many more Chinese characters fluently. It’s a major challenge though. I can read about 300 now, and yet you need around 2,500 to be able to read a newspaper fairly well. Tones are a major issue for some Asian languages. Mandarin has four tones (plus a neutral tone) and there is a major difference in meaning between two words that are spelled the same in pinyin but with different tones. For example, Ma (3rd tone, 马) is a horse, Ma (1st tone, 妈) is like “mom”, and ma (neutral tone, 吗) is a question marker, and so on. Clearly you don’t want to mix these up. As in English, they also have words that do sound the same but mean different things in different contexts. What’s interesting is that even though we see two words that differ only by tone as very similar, to a native speaker, if you say the right words with the wrong tone, you might as well have said a completely different word. My wife’s dialect of Chinese has eight tones. It’s much worse. The reason I’m so keen to learn to read/write Chinese is that even though the different dialects are pronounced so differently that speakers of one dialect often cannot understand another dialect, the writing is generally the same. The only difference is that many years ago, the Chinese government created a simplified set of characters for some of the most commonly used ones. Older Chinese and most Cantonese speakers often struggle with the simplified characters. This is the simplified form of “three apples”: 三个苹果. This is the traditional form of the same words: 三個蘋果. Note that two of the characters are the same but the middle two are quite different. For most languages, the best thing is to watch current movies in the target language but to watch them with the target language as subtitles, not your native language. You want to know what they actually said, not what it roughly means (which is what the English subtitle would give you). The difficulty with Asian languages like Chinese is that you have the added challenge of understanding the subtitles when they are written in the target language. I wish there were Mandarin Chinese movies with pinyin subtitles. For learning to read characters, I also recommend HanCard on the iPad. It is targeted at the HSK language proficiency levels. (I’m intending to take the first HSK exam as soon as I’m ready.) Hope that info helps someone get started.

    Read the article

  • Learning to implement a dynamically typed language compiler

    - by TriArc
    I'm interested in learning how to create a compiler for a dynamically typed language. Most compiler books, college courses, and articles/tutorials I've come across are specifically about statically typed languages. I've thought of a few ways to do it, but I'd like to know how it's usually done. I know type inference is a pretty common strategy, but what about others? Where can I find out more about how to create a dynamically typed language? Edit 1: I meant dynamically typed. Sorry about the confusion. I've written toy compilers for statically typed languages and some interpreters for dynamically typed languages; now I'm interested in learning more about creating compilers for a dynamically typed language. I'm specifically experimenting with LLVM, and since it requires me to specify the type of every method and argument, I'm thinking of ways to implement a dynamically typed language on top of it.
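    One common approach (not the only one) is a uniform "boxed" or tagged representation: every runtime value carries a type tag, and compiled operations dispatch on the tags instead of on static types. A real LLVM backend would emit IR manipulating a struct of this shape; here is a minimal Python sketch of the runtime model, with all names hypothetical:

        # Minimal sketch of a tagged/boxed value runtime, the model many
        # dynamic-language compilers lower to. Names are hypothetical.
        INT, STR = "int", "str"  # runtime type tags

        def box(tag, payload):
            # every value carries its type alongside its data
            return (tag, payload)

        def dyn_add(a, b):
            # a compiled '+' becomes a call into a runtime helper like this,
            # which dispatches on the tags at runtime
            if a[0] == INT and b[0] == INT:
                return box(INT, a[1] + b[1])
            if a[0] == STR and b[0] == STR:
                return box(STR, a[1] + b[1])
            raise TypeError("cannot add %s and %s" % (a[0], b[0]))

        print(dyn_add(box(INT, 1), box(INT, 2)))      # ('int', 3)
        print(dyn_add(box(STR, "a"), box(STR, "b")))  # ('str', 'ab')

    With this model, the statically typed target (LLVM IR) only ever sees one type, the box, which sidesteps the need to know concrete types at compile time; optimizations like unboxing and inline caches can then remove the dispatch cost in hot paths.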

    Read the article

  • Why is Python slower than Java but faster than PHP

    - by good_computer
    I have many times seen benchmarks that show how a bunch of languages perform on a given task, and they always reveal that Python is slower than Java and faster than PHP. I wonder why that is the case.

    - Java, Python, and PHP all run inside a virtual machine.
    - All three convert their programs into custom bytecode that runs on top of the OS, so none of them runs natively.
    - Both Java and Python can be "compiled" (.pyc for Python), but Python's __main__ module is not cached as a .pyc.
    - Python and PHP are dynamically typed and Java is statically typed. Is this the reason Java is faster? If so, please explain how that affects speed.

    And even if the dynamic-vs-static argument is correct, it does not explain why PHP is slower than Python, because both are dynamic languages. You can see some benchmarks here and here, and here
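    To see the bytecode claim for yourself, Python's standard dis module disassembles a function into the instructions the CPython VM actually executes. A minimal illustration (the exact opcode names vary by Python version):

        import dis

        def add(a, b):
            return a + b

        # CPython compiles this function to bytecode; the add instruction
        # (BINARY_ADD, or BINARY_OP on 3.11+) resolves the operand types at
        # runtime -- part of the dynamic-typing cost the question asks about.
        dis.dis(add)

    A JIT-compiled VM like Java's HotSpot goes a step further and turns hot bytecode into native machine code with types resolved, which is one widely cited reason for the gap, though the benchmarks linked in the question are the place to check actual numbers.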

    Read the article

  • Game-oriented programming language features/objectives/paradigm?

    - by Klaim
    What are the features, language objectives (general problems to solve), or paradigms that a fictive programming language targeted at games (any kind of game) would require? For example, we would obviously need at least performance (in speed and memory), because a lot of games simply require it, but it has a price in the languages we currently use. Expressivity might be a feature required of all languages. I guess some concepts from paradigms not usually used for games, like actor-based languages or language-based message passing, might be useful too. So I ask you: what would be ideal for games? (Maybe one day someone will take these answers and build a language on them? :D) Please give one feature/objective/paradigm per answer. Note: maybe this question doesn't make sense to you. In that case, please explain why in an answer. It's a good thing to have answers to this question that might pop into your head sometimes.
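    To make the actor/message-passing idea concrete, here is a minimal sketch in Python using only the standard library; a game-oriented language might offer this as a first-class construct rather than a hand-rolled class. The Actor class and message format here are illustrative assumptions, not any particular engine's API:

        import queue
        import threading

        class Actor:
            """Minimal actor: owns its state, reacts only to messages."""
            def __init__(self):
                self.inbox = queue.Queue()
                self.hp = 100
                self._thread = threading.Thread(target=self._run)
                self._thread.start()

            def send(self, msg):
                self.inbox.put(msg)  # the only way to affect this actor

            def _run(self):
                while True:
                    msg = self.inbox.get()
                    if msg[0] == "damage":
                        self.hp -= msg[1]
                    elif msg[0] == "quit":
                        break

        enemy = Actor()
        enemy.send(("damage", 30))
        enemy.send(("quit",))
        enemy._thread.join()
        print(enemy.hp)  # 70

    The appeal for games is that each entity's state is touched by exactly one thread, so concurrency comes without shared-state locks; the cost is message-queue overhead, which is exactly the kind of trade-off a game-targeted language would need to optimize away.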

    Read the article

  • What functional language is most suited to create games with?

    - by Ricket
    I have had my eye on functional programming languages for a while, but am hesitant to actually get into them. I think it's about time I at least started glancing in that direction, to make sure I'm ready for anything. I've seen talk of Haskell, F#, Scala, and so on, but I have no clue about the differences between the languages and their communities, nor do I particularly care, except in the context of game development. So, from a game development standpoint, which functional programming language has the most features suited to game programming? For example, are there any functional game development libraries/engines/frameworks or graphics engines for functional languages? Is there a language that better handles the data structures commonly used in game development? Bottom line: which functional programming language is best for functional game programming, and why? I believe/hope this question will produce a clear best language, so I haven't marked it CW despite its subjective tendency.
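    For a taste of what the functional style looks like applied to a game loop, regardless of which language wins: state is immutable, and each frame is a pure function from the old state to a new one. A minimal sketch (in Python for consistency with the other examples on this page, though the question targets Haskell/F#/Scala):

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)  # immutable game state
        class World:
            player_x: float
            player_vx: float

        def step(world: World, dt: float) -> World:
            # a pure update: no mutation, old state in -> new state out
            return replace(world, player_x=world.player_x + world.player_vx * dt)

        w0 = World(player_x=0.0, player_vx=2.0)
        w1 = step(w0, dt=0.5)
        print(w0, w1)  # w0 is unchanged; w1 is the next frame's state

    Whether a language makes this style fast (persistent data structures, good garbage collection under frame-time budgets) is arguably the deciding factor the question is really asking about.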

    Read the article

  • Expressions that are idiomatic in one language but not used or impossible in another

    - by Tungsten
    I often find myself working in unfamiliar languages. I like to read code written by others and then jump in and write something myself, before going back and learning the corners of each language. To speed up this process, it really helps to know a few of the idioms you'll encounter ahead of time. Some of these, I've found, are fairly unique. In Python you might do something like this:

        '\n'.join(listOfThings)

    Not all languages allow you to call methods on string literals like this. In C, you can write a loop like this:

        int i = 50;
        while(i--) { /* do something 50 times */ }

    C lets you decrement inside the loop condition expression; most more modern languages disallow this. Do you have any other good examples? I'm interested in often-used constructions, not odd corner cases.
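    A couple more Python constructions in the same spirit, offered as illustrations of the kind of idiom the question is after:

        # parallel assignment: swap without a temporary variable,
        # which most C-family languages can't express directly
        a, b = 1, 2
        a, b = b, a
        print(a, b)  # 2 1

        # dict.get with a default collapses a whole if/else or
        # null-check that many languages would require
        count = {}
        for word in ["to", "be", "or", "not", "to", "be"]:
            count[word] = count.get(word, 0) + 1
        print(count)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}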

    Read the article

  • How many localizations are too many for a game?

    - by Krom Stern
    We are making an RTS game and we intend to add localizations for all the languages our players use. So far we have 16 locales, with about 3-4 more planned. Now some crazy ideas are popping up in our community: players are asking for "funny text" localizations, and we have already been offered such a pack for one of our languages. I was thinking: where should we draw the line between official localizations that we include in the game and unofficial mods that players have to install on their own? Obviously, overcrowding the locale selection menu with all sorts of funny locales (LOL-cat, redneck, welsh, medieval, simplified, etc.) for all the languages seems way too much. But is it really? What are the hidden pros and cons of having too many locales, and how many is too many?

    Read the article

  • Manual memory allocation and purity

    - by Eonil
    Languages like Haskell have a concept of purity: in a pure function, I can't mutate any global state. Haskell fully abstracts memory management, so memory allocation is not a problem there. But in languages that can handle memory directly, like C++, this becomes very ambiguous to me. In these languages, memory allocation is a visible mutation. If I treat creating a new object as an impure action, then almost nothing can be pure, and the purity concept becomes almost useless. How should I handle purity in languages where memory is a visible global object?
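    One common way out of this bind (a sketch of the reasoning, not a definitive answer) is to define purity as referential transparency rather than "no allocation": a function may allocate internally, as long as the fresh memory never becomes observable shared state and the same input always yields the same output. The contrast in Python terms:

        def normalized(xs):
            # allocates a brand-new list internally, yet is still "pure" in
            # the referential-transparency sense: same input -> same output,
            # and the allocation is never observable from outside
            total = sum(xs)
            return [x / total for x in xs]

        def impure_normalize(xs):
            # by contrast, this mutates its argument: callers observe the change
            total = sum(xs)
            for i, x in enumerate(xs):
                xs[i] = x / total
            return xs

        data = [1.0, 3.0]
        print(normalized(data), data)        # data unchanged
        print(impure_normalize(data), data)  # data mutated in place

    Under this reading, new/malloc inside a function is an implementation detail, like Haskell's hidden allocations, and only mutation of memory reachable by the caller breaks purity.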

    Read the article

  • Easy Server-Side Language

    - by Nizar
    Most programming languages (server-side languages for web development) have a learning curve and require time to learn. However, I'm sure there are differences between them; for example, you could master language 'X' in less time than language 'Y'. I'm a beginner in web development, meaning that I just know HTML and CSS, and I now want to choose the right tool for building dynamic sites. What I'm looking for is a language that is easier to master, in less time, than the others. So, is there a language that suits my needs? If so, please let me know what I should learn with it (for example, which frameworks, libraries, IDEs, databases, etc.). In the end, I don't want to regret my choice of language, and I want to learn solid basics in it and in programming in general.
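    For a sense of scale, here is roughly what a first dynamic page looks like in Python with Flask, one commonly recommended beginner-friendly combination; the framework choice is an illustration of how little code "dynamic" can require, not the only answer to the question:

        # pip install flask -- a minimal dynamic page
        from flask import Flask

        app = Flask(__name__)

        @app.route("/hello/<name>")
        def hello(name):
            # the URL parameter flows straight into the generated page
            return f"<h1>Hello, {name}!</h1>"

        if __name__ == "__main__":
            app.run(debug=True)

    Running this and visiting /hello/world serves a page built per request, which is the core idea every server-side language wraps in its own syntax.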

    Read the article
