Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 221 of 563

  • Embedded Model Designing -- top down or bottom up?

    - by Jeff
    I am trying to learn RoR and develop a web app. I have a few models in mind for this app, and they are fairly embedded. For example (please excuse my lack of RoR syntax):
    Model: textbook (title:string, type:string, has_many :chapters)
    Model: chapter (content:text, has_one :review_section)
    Model: review_section (title:string, has_many :questions, has_many :answers, through: :questions)
    Model: questions ...
    Model: answers ...
    My question is, with the example I gave, should I start at the top-most model (textbook) or the bottom-most (answers)?

    Read the article

  • Breadcrumbs in a modern web application, make sense? [on hold]

    - by Xtreme Biker
    I'm about to begin development of a new web application. The whole application is going to be bookmarkable, with all pages accessible via GET requests and URL parameters. Having said that, let's suppose I've got three entities in my application: Customer, Team and City. Each Customer and Team belongs to a city, and I've got a city-detail page which displays the details for a given city. So the following navigation cases are possible:
    Customers - Customer detail (id=2) - City detail (id=3)
    Football teams - Team detail (id=5) - City detail (id=3)
    Cities - City detail (id=3)
    There are three possible ways of ending up in the city-detail view. My question is: does it make sense to implement a breadcrumb that shows such a history, when the browser itself already provides it? Would it be more appropriate to show a breadcrumb for the last case only, no matter where we're coming from (a hierarchical breadcrumb)? That's what Jakob Nielsen points out here: "Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web's second-most-used feature. A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn't offer much help. Finally, a history trail is useless for users who arrive directly at a page deep within the site." Also, even if a history trail seems the most natural way to implement it, it requires extra effort to keep track of the whole trail, HTTP being a stateless protocol.

    Read the article

  • OpenGL programming vs Blender Software, which is better for custom video creation?

    - by iammilind
    I am learning the OpenGL API bit by bit and am also developing my own C++ framework library to use it effectively. I recently came across the Blender software, which is used for graphics creation and in turn uses OpenGL itself. For my part-time hobby of learning graphics, I just want to create small movie or video segments, e.g. related to construction engineering, epic stories and so on. There would be minimal to no mouse/keyboard interaction in those videos, unlike video games, which are highly interactive. I was wondering whether learning OpenGL from scratch is worth it for this, or whether I should invest my time in learning the Blender software instead? Quite a few good example movies created with Blender are shown on its website. Other open-source, cross-platform alternatives that can serve my aforementioned purpose are also welcome.

    Read the article

  • Multithreading: Communication from Parent thread to child thread

    - by Dennis Nowland
    I have a List of threads, normally 3. Each thread references a webbrowser control that communicates with the parent control to populate a datagridview. What I need to do is this: when the user clicks the button in a DataGridViewButtonCell, the corresponding data should be sent back to the webbrowser control within the child thread that originally communicated with the main thread. But when I try to do this I receive the following error message: 'COM object that has been separated from its underlying RCW cannot be used.' My problem is that I cannot figure out how to reference the relevant webbrowser control. I would appreciate any help anyone can give me. The language used is C# WinForms, targeting .NET 4.0.
    Code sample (the following code is executed when the user clicks the Start button in the main thread):

        private void StartSubmit(object idx)
        {
            /* Method used by the new thread to initialise a 'myBrowser' (inherited from the
               webbrowser control). Each submitters object is a custom control called 'myBrowser'
               which holds details about the function of the object, eg: */

            // index: an integer value which represents the thread's id
            int index = (int)idx;
            // submitters[index] is an instance of the 'myBrowser' control
            submitters[index] = new myBrowser();
            // the thread's integer id
            submitters[index]._ThreadNum = index;
            // naming convention used: 'browser' + the thread index
            submitters[index].Name = "browser" + index;
            // set the list in the 'myBrowser' class to hold a copy of the list found in the main thread
            submitters[index]._dirs = dirLists[index];
            // suppress any javascript errors that may occur in the 'myBrowser' control
            submitters[index].ScriptErrorsSuppressed = true;
            // attach the event handler
            submitters[index].DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(DocumentCompleted);
            // advance to the next un-opened address in the datagridview, then navigate to that address
            // in the 'myBrowser' control
            SetNextDir(submitters[index]);
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            // used to fill the List<string> for use in each thread
            fillDirs();
            // connections is the array of Thread objects that have been opened (1 to 10 maximum)
            for (int n = 0; n < (connections.Length); n++)
            {
                // initialise a new thread on the StartSubmit method, passing parameters
                connections[n] = new Thread(new ParameterizedThreadStart(StartSubmit));
                // naming convention used: conn + the thread index, ie: 'conn1' to 'conn10'
                connections[n].Name = "conn" + n.ToString();
                // the webbrowser control needs to be run in the single-threaded apartment state
                connections[n].SetApartmentState(ApartmentState.STA);
                // start the thread, passing the thread index
                connections[n].Start(n);
            }
        }

    Once the 'myBrowser' control is fully loaded I insert form data, entered into rows of the datagridview, into web forms found in the loaded pages. Once a user has entered the relevant details into the different areas of a row, they can click a DataGridViewButtonCell, which collects the data entered; that data then has to be sent back to the corresponding 'myBrowser' object that lives on a child thread. Thank you
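
    For reference, one direction that is often suggested for this kind of cross-thread error is to keep using the submitters[] references the main thread already holds and marshal the call onto the thread that owns the control, rather than touching it directly. The sketch below is not the asker's code: it assumes each child thread runs a message loop (e.g. via Application.Run) so that Control.Invoke has something to dispatch to, and GetThreadIndexForRow, CollectRowData and FillAndSubmitForm are hypothetical helpers.

        private void dgv_CellContentClick(object sender, DataGridViewCellEventArgs e)
        {
            int threadIndex = GetThreadIndexForRow(e.RowIndex);   // hypothetical row-to-thread lookup
            string rowData = CollectRowData(e.RowIndex);          // hypothetical: gather the user's input
            myBrowser target = submitters[threadIndex];

            if (target.InvokeRequired)
            {
                // Executes FillAndSubmitForm on the thread that created the control,
                // which is what avoids the cross-thread RCW error.
                target.Invoke(new Action(() => target.FillAndSubmitForm(rowData)));
            }
            else
            {
                target.FillAndSubmitForm(rowData);
            }
        }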

    Read the article

  • Version hash to solve Event Sourcing problems

    - by SystematicFrank
    The basic examples I have seen of Event Sourcing do not deal with out-of-order events, clock offsets between different systems, or late events from system partitions. I am wondering whether more polished Event Sourcing implementations rely on a version stamp on modified objects. For example, assume the system is rendering the entity Client with version id ABCD1234. If the user modifies the entity, the system will create an event with the modified fields AND a reference to the version id it applies to. Later, the event responder would detect out-of-order events and merge them.
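
    To make the idea concrete, here is a minimal, hypothetical sketch (in C#, matching the other code in this listing) of an event that carries such a version stamp. The class and property names are illustrative and not taken from any particular framework.

        using System;
        using System.Collections.Generic;

        // Hypothetical event carrying the version of the entity it was raised against.
        public sealed class ClientFieldsChanged
        {
            public Guid ClientId { get; private set; }
            public string BaseVersion { get; private set; }   // e.g. "ABCD1234", the version the user was editing
            public IDictionary<string, object> ChangedFields { get; private set; }
            public DateTimeOffset OccurredAt { get; private set; }

            public ClientFieldsChanged(Guid clientId, string baseVersion,
                                       IDictionary<string, object> changedFields, DateTimeOffset occurredAt)
            {
                ClientId = clientId;
                BaseVersion = baseVersion;
                ChangedFields = changedFields;
                OccurredAt = occurredAt;
            }
        }

        // A responder can then compare BaseVersion with the entity's current version:
        // if they differ, the event was raised against an older version, so it needs a
        // merge/conflict-resolution step instead of being applied blindly.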

    Read the article

  • Design for future changes or solve the problem at hand?

    - by Naveen
    While writing code, or during design, do you try to generalize the problem from the first instance, or do you solve just that very specific problem? I am asking because trying to generalize the problem tends to complicate things (which may not be necessary), while on the other hand it can be very difficult to extend a specific solution if the requirements change. I guess the answer is to find the middle path, which is easier said than done. How do you tackle this type of problem? If you start generalizing, at what point do you know that this much generalization is sufficient?

    Read the article

  • iOS and Server: OAuth strategy

    - by drekka
    I'm trying to work out how to handle authentication when I have iOS clients accessing a Node.js server, and I want to use services such as Google, Facebook etc. to provide basic authentication for my application. My current idea of a typical flow is this:
    1. The user taps a Facebook/Google button, which triggers the OAuth(2) dialogs and authenticates the user on the device. At this point the device has the user's access token. This token is saved so that it can be retrieved the next time the user uses the app.
    2. The access token is transmitted to my Node.js server, which stores it and tags it as unverified.
    3. The server verifies the token by making a call to Facebook/Google for the user's email address. If this works, the token is flagged as verified and the server knows it has a verified user. If Facebook/Google fail to authenticate the token, the server tells the iOS client to re-authenticate and present a new token.
    4. The iOS client can now access API calls on my Node.js server, passing the token each time. As long as the token matches the stored and verified token, the server accepts the call.
    Obviously the tokens have time limits. I suspect it's possible, but highly unlikely, that someone could sniff an access token and attempt to use it within its lifespan, but other than that I'm hoping this is a reasonably secure method for verifying users on iOS clients without having to roll my own security. Any opinions and advice welcome.
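
    Purely as an illustration of step 3 above (the actual server in the question is Node.js; C# is used only because it is the language of the other code in this listing), here is a hedged sketch of the verification call, with a placeholder token-info URL rather than any real provider API.

        using System;
        using System.Net;

        public class TokenVerifier
        {
            // Placeholder endpoint: each provider documents its own token-info/debug URL.
            private const string TokenInfoUrl = "https://provider.example.com/tokeninfo?access_token=";

            // Returns true if the provider recognises the token and it belongs to expectedEmail.
            public bool Verify(string accessToken, string expectedEmail)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        string body = client.DownloadString(TokenInfoUrl + Uri.EscapeDataString(accessToken));
                        // Parsing is provider-specific; a real implementation would parse the JSON
                        // and also check the audience/app id, not just the e-mail address.
                        return body.Contains(expectedEmail);
                    }
                }
                catch (WebException)
                {
                    // Provider rejected the token (or was unreachable): treat as unverified
                    // and ask the client to re-authenticate.
                    return false;
                }
            }
        }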

    Read the article

  • What is your most preferred method of site pagination?

    - by John Smith
    There seem to be quite a few implementations of this feature.
    Some sites like Stack Exchange have it laid out like this: [1][2][3][4][5] ... [954][Next]
    Other sites like game forums may have something like this: [1][2][3] ... [10] ... [50] ... [500] ... [954][Next]
    Some sites like webcomics (XKCD comes to mind) have it laid out like this: [Last][Prev][Random][Next][First]
    Reddit has a very simple pagination with only: [Prev][Next]
    Sites like Stack Exchange and Google also allow you to change how many results you want per page. Personally, I have never used this feature. Is it even worth including, or does it just further confuse the design with needless features? Personally, I have only ever seen the need for the webcomic style (without the random). If I need to go to a specific page (which is very, very rare) then I can just edit the address bar. Is it good design to make something more complex for the rare occasions where it might save the user some time? Is it bad design if navigating the site effectively sometimes requires editing the address bar?

    Read the article

  • Why is there a "new" in Go?

    - by dystroy
    I'm still puzzled as to why we have new in Go. When you want to instantiate a struct, you do t := Thing{} and, obviously, you can get a pointer to a new instance by doing t := &Thing{} But there's also this possibility: t := new(Thing) This last one seems a little alien to me. &Thing{} is as clear and concise as new(Thing), and it uses only constructs you often use elsewhere. It's also more extensible, as you might change it to &Thing{3} or &Thing{Feets: 7}. In my opinion, having a supplementary keyword is costly: it makes the language more complex and adds to what you must know. And it might hide from newcomers what's behind instantiating a struct. It also makes one more reserved word. So what's the reasoning behind new? Is it actually useful? Should we use it?

    Read the article

  • Should a model binder populate all of the model?

    - by Richard
    Should a model binder populate all of the model, or only the bits that are being posted? For example, I am adding a product in my system, and on the form I want the user to select which sites the new product will appear on. Therefore, in my model I want to populate a collection called "AllAvailableSites" to render the checkboxes the user chooses from. I also need to populate the model with any chosen sites on a post, in case the form does not validate and I need to re-present the form showing the initial selections. It would seem that I should let the model binder set the chosen sites on the model, and (once in the controller method) set "AllAvailableSites" on the model myself. Does that sound right? It seems more efficient to set everything in the model binder, but someone is suggesting it is not quite right. I am grateful for any advice; I have to say that all the MVC model-binding help online seems to cite really simple examples, nothing complicated. Do I really need a GET and a POST version of the method? Can't they just take the same view model? Then I could check in my model binder whether it is a GET or a POST, and populate the whole model accordingly.
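
    For reference, a minimal sketch of the split described above: the binder fills only what was posted, and the controller re-fills the display-only collection when validation fails. All the type and repository names here (ProductViewModel, ISiteRepository, Site) are hypothetical, not from the question.

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Site { public int Id { get; set; } public string Name { get; set; } }

        public class ProductViewModel
        {
            public IList<int> ChosenSiteIds { get; set; }       // bound from the posted checkboxes
            public IList<Site> AllAvailableSites { get; set; }  // display-only, filled by the controller
        }

        public interface ISiteRepository { IList<Site> GetAll(); }

        public class ProductController : Controller
        {
            private readonly ISiteRepository _sites;
            public ProductController(ISiteRepository sites) { _sites = sites; }

            [HttpGet]
            public ActionResult Create()
            {
                return View(new ProductViewModel { AllAvailableSites = _sites.GetAll() });
            }

            [HttpPost]
            public ActionResult Create(ProductViewModel model)
            {
                if (!ModelState.IsValid)
                {
                    // The default binder only set what was posted (ChosenSiteIds etc.),
                    // so re-fill the display-only list before re-rendering the form.
                    model.AllAvailableSites = _sites.GetAll();
                    return View(model);
                }
                // ... persist the product and its chosen sites ...
                return RedirectToAction("Index");
            }
        }

    Doing the same repopulation inside a custom model binder would also work; it mostly comes down to where you want that responsibility to live.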

    Read the article

  • Building general programming skills?

    - by toleero
    Hello :) I am currently quite new to programming. I've had exposure to a few languages (C#, PHP, JavaScript, VB, and some others) and I'm quite new to OOP. I was just wondering what the best way is to build up general programming/problem-solving skills without being language-specific? I was thinking maybe of something like Project Euler, but more geared towards newbies? Thanks! Edit: I am looking at getting into game scripting/programming; I'm already in games but in a different discipline :)

    Read the article

  • Does semi-normalization exist as a concept? Is it "normalized"?

    - by Gracchus
    If you don't mind, a tl;dr on my experience first.
    My experience (tl;dr)
    I have an application that's heavily dependent upon uncertainty, a bane to database design. I tried to normalize it as best I could according to the capabilities of my database of choice, but a "simple" query took 50ms to read. NoSQL appeals to me, but I can't trust myself with it, and besides, normalization has cut down my debugging time immensely, over and over. Instead of 100% normalization, I made semi-redundant 1:1 tables with very wide primary keys and equivalent foreign keys. Read times dropped to a few ms, and write times barely degraded.
    The semi-normalized point
    Given this reality, which anyone who's tried to rely upon views of fully normalized data is aware of, is this concept codified? Is it as simple as having wide unique and foreign keys, or are there hidden secrets to this technique? Or is uncertainty merely a special case that has extremely limited application and can be left on the ash heap?

    Read the article

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far, so I would love to hear any and all opinions on it. However, I'll be honest and say that I have been thinking of this mostly in terms of profitability. I know how that may sound, but it is one of my main sticking points. I realize that I'm not guaranteed a single cent and success is never guaranteed, but I'm going into this with the thought of making something out of it, both financially and for my own interest.
    I know that iOS gets a lot of attention on this front, but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether in the actual development process and programming (though I've heard conflicting reports on this, such as how easy/difficult it is to handle different screen resolutions across devices) or in the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason. Since Windows Phone has fewer apps than both of those two, might that be the best place to start, in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or of differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly, but at least in concept)? Would it be that, despite fewer users on the platform, there's more exposure due to less competition, so that may translate to better success at sales? Plus, I know MS is in it for the long haul, so I'm not too fearful of it going the way of webOS. Obviously RIM isn't very popular nowadays, but I read a recent article that says BlackBerry actually has the apps that make the most money; any thoughts on that? http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/
    Again, this is all I've heard or known about, so if there's anything to add or correct here, please do. In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for, and I'm open to trying an Android, but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy; I'm so close to just getting an iPhone 5. But when I say this has affected my phone choice: I'd want to be able to carry the apps I write with me, so that I can pull my phone out to show people without having to carry around a second device. So that's why I'd like my personal phone to match the main platform I'm developing for. Of course, I will likely expand to other platforms if I gain any decent success, but the one I target now would serve well as the personal phone I carry around, so that I can use it as a marketing tool of sorts, showing people my apps if the opportunity presents itself.
    So what's the best mobile platform to choose, especially in terms of being the most lucrative? As said previously, this will greatly influence my personal phone choice. Thanks in advance, and I hope this isn't taken the wrong way; I understand there are trade-offs and other factors that may balance this out, but making some revenue is key among them.
    For some background, I have done software development and know programming-language concepts, so I'm not entirely new to this, and I do get the notion of being familiar with these concepts so that I can carry the skill across a variety of languages. I'm just currently having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • How can calculus and linear algebra be useful to a system programmer?

    - by Victor
    I found a website saying that calculus and linear algebra are necessary for systems programming. Systems programming, as far as I know, is about OS development, drivers, utilities and so on. I just can't figure out how calculus and linear algebra would be helpful there. I know that calculus has several applications in science, but in this particular field of programming I just can't imagine how it can be so important. The information was on this site: http://www.wikihow.com/Become-a-Programmer Edit: Some answers here explain algorithm complexity and optimization. When I asked this question I was trying to be specific about the area of systems programming. Algorithm complexity and optimization can be applied to any area of programming, not just systems programming. That may be why I wasn't able to come up with that connection at the time of the question.

    Read the article

  • Why are people using C instead of C++? [closed]

    - by Darth
    Possible Duplicate: When to use C over C++, and C++ over C? Many times I've stumbled upon people saying that C++ is not always better than C. A great example here would be the Linux kernel, where they simply decided to use C instead of C++ because it had better compilers at the time. But that was many years ago and a lot has changed. So the question is: why are people still using C over C++? I guess there are probably some cases (like embedded devices) where there simply isn't a good C++ compiler, or am I wrong here? What are the other cases where it is better to go with C instead of C++?

    Read the article

  • Apply WCF For Large Projects

    - by svlytns
    We have a large project that has nearly 20 modules in it. We want to use WCF for the business layer. We see three ways to implement WCF in our project:
    1. Use only one data contract and one operation contract. Send ClassName and MethodName to the operation, create the class by reflection, then invoke the method on the WCF side.
    2. Put all modules in one WCF application, and create their data contracts and operation contracts.
    3. Create a separate WCF application for each module and host them separately.
    Which one is the best way? I need your ideas. TIA!
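
    To make the trade-off concrete, here is a hedged sketch (module and member names are invented) of what options 2 and 3 look like at the contract level: each module gets an explicitly typed contract, whereas option 1 collapses everything into a single reflection-driven operation and gives up compile-time type safety on the wire.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // One contract per module (a hypothetical "Customer" module shown).
        [DataContract]
        public class CustomerDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            CustomerDto GetCustomer(int id);

            [OperationContract]
            void SaveCustomer(CustomerDto customer);
        }

        // Option 1, by contrast, would look roughly like this single untyped contract:
        // [ServiceContract]
        // public interface IDispatcherService
        // {
        //     [OperationContract]
        //     string Invoke(string className, string methodName, string serializedArguments);
        // }

    Whether the per-module contracts are then hosted in one application (option 2) or one host per module (option 3) becomes a deployment and operations decision rather than a contract-design one.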

    Read the article

  • Using Sql Server Change Data Capture with a frequently changing schema

    - by Pete
    We are looking into enabling SQL Server Change Data Capture for a new subsystem we are building. It's not really because we need it, but we are being pushed to have complete history traceability, and CDC would nicely solve this requirement with minimum effort on our part. We follow an agile development process, which in this case means that we frequently make changes to the database schema, e.g. adding new columns, moving data to other columns, etc. We did a small test where we created a table, enabled CDC for that table, and then added a new column to the table. Changes to the new column are not registered in the CDC table. Is there a mechanism to update the CDC table to the new schema, and are there any best practices for how to deal with captured data when migrating the database schema?

    Read the article

  • Pro Google App Engine developer interview questions (with answers)

    - by WooYek
    What are good questions to determine whether an applicant is a professional Google App Engine developer? Questions that can distinguish someone who is not an ad-hoc GAE programmer, but is really doing professional GAE development, across all the areas concerned (e.g. performance, transactions, async/batch data processing). Please provide answers, so that an intermediate developer (such as myself :) can interview someone more experienced. Please avoid open questions. If possible, please provide a link to the part of the documentation that covers the topic in question. Please keep one interview question/answer per response, for better reading and easier interview preparation.

    Read the article

  • To implement registration page with Vaadin or not?

    - by JVerstry
    This is a tactical implementation question about whether or not to use Vaadin in one part of my application. Vaadin is a great framework for logging in users and implementing sophisticated web applications with many pages. However, I think it is not very well suited to designing the pages that register new users for my application. Am I right? Am I wrong? It seems to me that a simple HTML/CSS/JavaScript login + email registration + confirmation email with a confirmation link cannot be implemented easily with Vaadin. It seems like Vaadin would be overkill. Do you agree? Or am I missing something? I am looking for feedback from experienced Vaadin users.

    Read the article

  • Does it make sense to write a build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts, instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:
    to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
    C++ type safety (when using modern C++) makes everything easier to get "correct";
    it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code;
    a potential gain in execution speed (but I don't think it will really be perceptible).
    However, I think there might be some drawbacks, and I'm not sure of the real impact as I haven't tried it yet:
    it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write); compilation time shouldn't be a problem for this case;
    I must assume that all the text files I'll read as input are in UTF-8; I'm not sure that can be checked easily at runtime in C++, and the language will not check it for you;
    libraries in C++ are harder to manage than in scripting languages;
    I lack experience and foresight, so maybe I'm missing advantages and drawbacks.
    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages or disadvantages that might be important?

    Read the article

  • Software Architecture and MEF composition location

    - by Leonardo
    Introduction
    My software (a bunch of web APIs) consists of 4 projects: Core, FrontWebApi, Library and Administration.
    Library is a code-library project that consists only of interfaces and enumerators. All my classes in the other projects inherit from at least one interface, and that interface is in the Library. Generally speaking, my interfaces define either Entities, Repositories or Controllers. This project references no other project or any special dlls... just the regular .Net stuff.
    Core is a class-library project containing the concrete implementations of Entities and Repositories. In some cases I have more than one implementation for a Repository (e.g. one for Azure table storage and one for regular SQL). This project handles the intelligence (business rules, mostly) and persistence, and it references only the Library.
    FrontWebApi is an ASP.NET MVC 4 WebApi project that implements the controller interfaces to handle web requests (from a mobile native app). It references the Core and the Library.
    Administration is a code-library project that represents an "optional module": if it is present, it provides extra features (such as Access Control Lists) to the application, but if it's not, no problem. Administration also references only the Library, implementing concrete classes for a few interfaces such as "IAccessControlEntry". I intend to make this available with a "setup" that will create any required database tables or anything like that. But it is important to notice that the Core has no reference to this project.
    Development
    Now, in order to have decoupled code I decided to use IoC, and because this is a small project I decided to do it using MEF, especially because of its advertised "composition" capabilities. I arranged all the imports/exports and constructors and everything, but something is still not quite right in my mental visualisation:
    Main Question
    Where should I "compose" the objects? I mean: technically, the only place where access to the real implementations is required is in the Repositories, because in order to retrieve data from wherever, entity instances will be necessary. The repositories could also provide a public "GetCleanInstanceOf()", right? Then all other places would be just fine working with the interfaces instead of the concrete classes...
    Secondary Question
    Should "Administration" implement the concrete object for "IAccessControlGeneralRepository", or should the Core?
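
    On the main question, a common arrangement with MEF is to compose once, at a single composition root in the host project (FrontWebApi here), and let everything below it receive its dependencies through imports. The sketch below only illustrates that idea under the stated project layout; the path handling and the use of a directory catalog to pick up the optional Administration assembly are assumptions, not the asker's code.

        using System.ComponentModel.Composition.Hosting;

        // Composition root: built once at application start-up in FrontWebApi.
        public static class CompositionRoot
        {
            public static CompositionContainer Container { get; private set; }

            public static void Compose(string binPath)
            {
                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(typeof(CompositionRoot).Assembly), // FrontWebApi exports
                    new DirectoryCatalog(binPath));                        // Core (+ Administration, if its dll is deployed)
                Container = new CompositionContainer(catalog);
            }
        }

        // e.g. in Application_Start: CompositionRoot.Compose(HttpRuntime.BinDirectory);
        // Controllers and repositories then declare [ImportingConstructor]/[Import] members
        // and never new-up concrete classes or containers of their own deeper in the stack.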

    Read the article

  • We're Subversion Geeks and we want to know the benefits of Mercurial

    - by Matt
    Having read "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?", I have a related follow-up question. I read that question, read the recommended links and videos, and I see the benefits, but I don't see the overall mind shift people are talking about. Our team is 8-10 developers working on one large code base consisting of 60 projects. We use Subversion and have a main trunk. When a developer starts a new FogBugz case they create an svn branch, do the work on the branch, and when they're done they merge back to the trunk. Occasionally they may stay on the branch for an extended time and merge the trunk into the branch to pick up the changes. When I watched Linus talk about people creating a branch and never doing it again, that's not us at all. We create probably 50-100 branches a week without issue. The biggest challenge is the merging, but we've gotten pretty good at that as well. I tend to merge by FogBugz case and check-in rather than the entire root of the branch. We never work remotely and we never make branches off of branches. If you're the only one working in that section of the code base then the merge to the trunk goes smoothly. If someone else has modified the same section of code then the merge can get messy and you might need to do some surgery. Conflicts are conflicts; I don't see how any system could get it right most of the time unless it was smart enough to understand the code. After creating a branch, the subsequent checkout of 60k+ files takes some time, but that would be an issue with any source control system we'd use. Is there some benefit of any DVCS that we're not seeing that would be of great help to us?

    Read the article

  • How can test users access an unpublished iOS app?

    - by David
    I am considering outsourcing the development of an iOS app to various independent developers, and I will have various testers of the app. We all work for separate companies. Some of these testers will be customers, from whom I would like feedback. As there are multiple developers involved, I expect there to be a new release on a daily basis. How can this be done? Would each of the testers need to buy some sort of license to avoid having to go through the app approval process? Is there any smooth way to do this so that it will not be a hassle for our friendly customers who are willing to test our app?

    Read the article

  • When connecting to a server using the DRDA protocol, is it true that the first Client-To-Server command MUST be EXCSAT chained with ACCSEC?

    - by Alon Rew
    When connecting to a server using the DRDA protocol, is it true that the first Client-To-Server command MUST be EXCSAT chained with ACCSEC? I found 2 different answers when I googled it. If you look at The Open Group web site (https://collaboration.opengroup.org/dbiop/) it can be understood that the answer is NO. However, if you look at the IBM website (http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.ims11.doc.apr%2Fims_ddm_excsat.htm) you can understand the answer is YES. So which is it?

    Read the article

  • Is it worth to learn Experimental Languages?

    - by Xander Lamkins
    I'm a young programmer who wants to work in the field someday. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know it is valuable to extend what I know, to learn languages that make you think differently). I took a look online to see which languages were common. Everybody knows C and C++ (even those muggles who know very little about computers in general), so I thought maybe I should push for C. C and C++ are nice, but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well, and is slow because it's run by the JVM and not compiled to native code.
    I've been a Windows developer for quite a while. I recently started using Java, but only because it was more versatile and could spread to other places. The problem is that it doesn't look like a very usable language, for these reasons: its most-used purposes are web applications and cellphone apps (specifically Android); as far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for; it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and bipolar as far as computer-spec support goes. Other than that it's used for servers, but heck, I don't only want to make/configure servers.
    The .NET languages are nice, however: people laugh if I even mention VB.NET or C# in a serious conversation; they aren't cross-platform unless you use Mono (which is still in development and has some improvements to be made); and they lack low-level stuff because, like Java with the JVM, they are run/managed by the CLR. My first thought was learning something like C and then using it to springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute.
    What I've Looked Into
    Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish to the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being fully compiled. D also looks nice. It seems like a very usable language, and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success, because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focused on other things, such as Opa for web development and Go by Google.
    My Question
    Is it worth learning these "experimental" languages? I've read other questions which say that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this, and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be old) programming languages. I know that many people see this as important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future.
    Disclaimer
    Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING!
    I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused some incorrect statements or flaws in my thinking. Please leave any such feelings in the comments.

    Read the article
