Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Educational, well-written FOSS projects to read, study or discuss

    - by Godot
    Before you say it: yes, this "question" has been asked other times. However, I could not find many such questions, and not easily, and those I found had similar results. What I'm trying to say is that there are no comprehensive lists of well-written Open Source projects, so I decided to set some requirements for the entries (one or possibly more):
    - Idiomatic use of the language in which they are written.
    - The project should be lightweight. Not as in "a few kbs", as in "clean" and possibly following the UNIX philosophy, making efficient use of resources and performing its duty and nothing more. No code bloat, most importantly. Projects like Firefox and GNOME wouldn't qualify, for example.
    - Minimal reliance on external, non-standard libraries, with exceptions for some common FOSS libraries (curses, Xlib, OpenGL and possibly "usual suspects" like gtk+, webkit and Boost). Reliance on well-written libraries is welcome.
    - No reliance on proprietary software, for obvious reasons (programs that rely on XNA, DirectX, Cocoa and similar, for example).
    - Well-documented code is welcome.
    - Include a link to a web interface to the repository if possible.
    Here are some sample projects that often pop up in these threads:
    Operating Systems
    - Plan 9 from Bell Labs: more or less the official "sequel" to UNIX. Written in C by the same people who invented C!
    - NetBSD: the most portable BSD implementation, written in C and also a good example of portable and organized code.
    Network and Databases
    - SQLite: extremely lightweight and extremely efficient, one of the best pieces of C software I've seen. Count the lines yourself!
    - Lighttpd: a small but pretty reliable web server written in C.
    Programming Languages and VMs
    - Lua: extremely lightweight multi-paradigm programming language. Written in C.
    - Tiny C Compiler: really tiny C compiler. Not really comparable to GCC or Clang, but it does its job.
    - PyPy: a Python implementation written in Python.
    - Pharo: OK, I admit it, I'm not really a Smalltalk expert, but Pharo is a fork of Squeak and looked rather interesting.
    - Stackless Python: an implementation of Python that doesn't rely on the C call stack, written in C (with some parts in Python).
    Games and 3D
    - Angband: one of the most accessible roguelike codebases around, written in C.
    - Ogre3D: cross-platform 3D engine. It gets bloated if you don't skip the platform-specific implementation code; otherwise it is a pretty solid example of good C++ OO.
    - Simon Tatham's Portable Puzzle Collection: the title says it all.
    Other
    - dwm: lightweight window manager. Written in C.
    Emulation and Reverse Engineering
    - Bochs: x86 emulator, written in C++ and tiny enough.
    - MAME: if you want to see C at one of its lowest levels, MAME is for you. It may not be as clean as the other projects, but it can teach you A LOT.
    Before you ask: I didn't mention Linux because it has become quite bloated in the last few years; Linus himself has confirmed it. Nonetheless, it would still be a great educational read, even if for other reasons. Same for GCC. Feel free to edit or wikify my post. I hope you won't lock my question; I'm only trying to organize a little community effort for the good of all those people who want to enhance their coding skills.

  • On github (gist) what is the star-button used for?

    - by user unknown
    On github::gist, what is the star button used for? For example here: https://gist.github.com/1372818 I have 3 buttons in the headline: edit, download, (un)star, with the last one changing from star to unstar. My first idea was an upvote system. But I can star my own code, and I didn't notice any star appearing anywhere. Next idea: a favorite, a bookmark. I can select 'starred gists', but that page came up even before I starred anything. Reading the help didn't turn up anything about 'star', and the search box is for searching in code and in pages, not for searching the help pages of github::gist.

  • Evolution in coding standards, how do you deal with them?

    - by WardB
    How do you deal with evolution of the coding standards / style guide in a project, for the existing code base? Let's say someone on your team discovers a better way of object instantiation in the programming language. It's not that the old way is bad or buggy; it's just that the new way is less verbose and feels much more elegant. And all team members really like it. Would you change all existing code? Let's say your codebase is about 500,000+ lines of code. Would you still want to change all existing code? Or would you only let new code adhere to the new standard, and basically lose consistency? How do you deal with an evolution of the coding standards on your project?

  • if/else statements or exceptions

    - by Thaven
    I don't know whether this question fits better on this board or on Stack Overflow, but my question is connected to practices rather than a specific problem. So, consider an object that does something. And this something can (but should not!) go wrong. This situation can be resolved in two ways. First, with exceptions:

        DoSomethingClass exampleObject = new DoSomethingClass();
        try
        {
            exampleObject.DoSomething();
        }
        catch (ThisCanGoWrongException ex)
        {
            [...]
        }

    And second, with an if statement:

        DoSomethingClass exampleObject = new DoSomethingClass();
        if (!exampleObject.DoSomething())
        {
            [...]
        }

    The second case in a more sophisticated form:

        DoSomethingClass exampleObject = new DoSomethingClass();
        ErrorHandler error = exampleObject.DoSomething();
        if (error.HasError)
        {
            if (error.ErrorType == ErrorType.DivideByPotato)
            {
                [...]
            }
        }

    Which way is better? On one hand, I have heard that exceptions should be used only for truly unexpected situations, and that if the programmer knows something may happen, he should use if/else. On the other hand, Robert C. Martin wrote in his book Clean Code that exceptions are far more object oriented, and simpler to keep clean.

  • Recommended books on C++

    - by Mr Teeth
    Hi, I'm looking for a book that contains a CD-ROM with an IDE for readers to install and use as an environment to learn C++ in, like the "Objects First With Java - A Practical Introduction Using BlueJ" books, where Java is learnt in BlueJ. Is there a book like this teaching C++? If there aren't any books like this, I'll still appreciate a recommended book for a novice to learn C++ from. I know nothing about C++ and I want to learn in my private time.

  • Is there a SUPPORTED way to run .NET 4.0 applications natively on a Mac?

    - by Dan
    What, if any, are the (Microsoft-)supported options for running C#/.NET 4.0 code natively on the Mac? Yes, I know about Mono, but among other things, it lags Microsoft. And Silverlight only works in a web browser. A VMware-type solution won't cut it either. Here's the subjective part (which might get this closed): is there any semi-authoritative answer to why Microsoft just doesn't support .NET on the Mac itself? It would seem like they could build on Silverlight and/or buy Mono and quickly be there. No need for native Visual Studio; cross-compiling and remote debugging are fine. The reason I ask is that where I work there is a growing amount of uncertainty about the future, which is causing a lot more development to be done in C++ instead of C#; brand new projects are choosing to use C++. Nobody wants to tell management 18–24 months from now "sorry" should the Mac (or iPad) become a requirement. C++ is seen as the safer option, even if it (arguably) means a loss in productivity today.

  • Origins of code indentation

    - by Daniel Mahler
    I am interested in finding out who introduced code indentation, as well as when and where it was introduced. It seems so critical to code comprehension, but it was not universal. Most Fortran and Basic code was (is?) unindented, and the same goes for Cobol. I am pretty sure I have even seen old Lisp code written as continuous, line-wrapped text. You had to count brackets in your head just to parse it, never mind understanding it. So where did such a huge improvement come from? I have never seen any mention of its origin. Apart from original examples of its use, I am also looking for original discussions of indentation.

  • How to learn what the industry standards/expectations are, particularly with security?

    - by Aerovistae
    For instance, I was making my first mobile web application about a year ago, and halfway through, someone pointed me to jQuery Mobile. Obviously this induced a total revolution in my app. Rewrote everything. Now, if you're in the field long enough, maybe that seems like common knowledge, but I was totally new to it. But this set me wondering: there are so many libraries and extensions and frameworks. This seems particularly crucial in the category of security. I'm afraid I'm going to find myself doing something in a professional setting eventually (I'm still a student) and someone's going to walk over and say, "My god, you're trying to secure user data that way? Don't you know about the Gordon-Wokker crypto-magic-hash-algorithms library? Without it you may as well go plaintext." How do you know what the best ways are to maximize security? Especially if you're trying to develop something on your own...

  • The perfect crossfade

    - by epologee
    I find it hard to describe this problem in words, which is why I made a video (45 seconds) to illustrate it. Here's a preview of the questions; please have a look at it on Vimeo: http://vimeo.com/epologee/perfect-crossfade The issue of creating a flawless crossfade or dissolve of two images or shapes has kept recurring for me in a number of fields over the last decade: first in video editing, then in Flash animation and now in iOS programming. When you start googling it, there are many workarounds to be found, but I really want to solve this without a hack this time. The summary: What is the name of the technique or curve to apply in crossfading two semi-transparent, same-colored bitmaps, if you want the resulting transparency to match the original of either one? Is there a (mathematical) function to calculate the necessary partial transparency/alpha values during the fade? Are there programming languages that have these functions as a preset, similar to the ease in, ease out or ease in out functions found in ActionScript or Cocoa?
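
    For what it's worth, here is a minimal sketch (in Java, with assumed names; not taken from any particular framework) of one way to derive such a curve under standard Porter-Duff "over" compositing, assuming the incoming layer is drawn on top of the outgoing one. The combined alpha of two layers is in + out * (1 - in), so if the incoming alpha rises linearly, the outgoing alpha must follow a compensating, non-linear curve to keep the result constant:

        public class Crossfade {
            // Porter-Duff "over": combined alpha when `top` is drawn over `bottom`.
            static double over(double top, double bottom) {
                return top + bottom * (1.0 - top);
            }

            // The incoming layer fades in linearly toward the target alpha.
            static double incomingAlpha(double target, double t) {
                return target * t;
            }

            // Outgoing alpha chosen so that over(in, out) == target for every t.
            // Solving in + out * (1 - in) = target gives out = (target - in) / (1 - in).
            static double outgoingAlpha(double target, double t) {
                double in = incomingAlpha(target, t);
                return in >= 1.0 ? 0.0 : (target - in) / (1.0 - in);
            }

            public static void main(String[] args) {
                double target = 0.8; // both bitmaps are 80% opaque
                for (double t = 0.0; t <= 1.0; t += 0.25) {
                    double naiveOut = target * (1.0 - t); // plain linear fade-out
                    double in = incomingAlpha(target, t);
                    System.out.printf("t=%.2f naive=%.3f compensated=%.3f%n",
                            t,
                            over(in, naiveOut),                  // dips below 0.8 mid-fade
                            over(in, outgoingAlpha(target, t))); // stays at 0.8
                }
            }
        }

    This also shows why the naive linear crossfade looks wrong: at t = 0.5 with two 80%-opaque layers it dips to about 0.64 combined alpha, while the compensated curve holds 0.8 throughout.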

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.Net MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) removes the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback.)

  • History of open source software

    - by Victor Sorokin
    I've always been interested, out of pure self-amusement, in the history of the open software used today:
    - who were the people who started it, and what were the reasons for starting
    - what were the design decisions at the start
    - how the software evolved over time
    Specifically, I'm interested in the following software: GCC, X, the Linux kernel, and Java. Of course, there is plenty of information on the Internet to google for, but I thought it would be nice to have a list of interesting resources at this site. I hope some visitors of this site have a similar interest and can share a link or two they found particularly amusing/interesting. To make this entry more question-like, here's the straight question: what are the most interesting/amusing links about the history of open source software?

  • How should I update Ajax Control Toolkit in VS 2010?

    - by Soham
    Suppose there is a new version of the Ajax Control Toolkit available; how should I install/update it in my Visual Studio 2010, which already has an older version of the same toolkit installed? I would like to install the new one with the older toolkit totally uninstalled, because the new toolkit usually has all the controls that were in the old toolkit plus some new controls. So: 1) Should I remove the .dll file in my toolkit installation folder and place the new .dll file there? If I do so, will VS 2010 automatically delete the older entries of toolkit controls in the .NET Components list and place the new controls there? Or 2) Should I uninstall the old toolkit manually, i.e. delete the .dll file and uncheck all entries in the .NET Components list, and after that install the new one from scratch?

  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial; it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances. I know by experience that writing a lexer/parser by hand - unless the grammar is trivial - is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc, or a combinator library like Parsec. The former was good as well, but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes: the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you have never programmed in anything other than java/c#, but then this would be true of anything not written in java/c#. At some point, however, I was literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I was taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to re-surface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution:
    - you need lots of ifs to handle special cases, and things quickly spiral out of control
    - lots of hardcoded array indexes make maintenance painful
    - it is extremely difficult to handle things like a function call as a method argument (e.g. add( (add a, b), c); see the sketch below)
    - it is very difficult to provide meaningful error messages in case of syntax errors (which are very likely to happen)
    I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper could understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project, after all. (I won't use this argument, as it would probably sound offensive, and starting a war is not going to help anybody.) What are your favorite arguments against parsing the Cthulhu way?* *Of course, if you can convince me he's right, I'll be perfectly happy as well.
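
    To make the nesting argument concrete, here is a minimal sketch of the point, with a hypothetical toy grammar and names, written in Java (String.split being the Java counterpart of String.Split): a few lines of recursive descent handle nested calls naturally, while a flat split cannot.

        import java.util.*;

        // Toy grammar: expr := NAME | NAME '(' expr (',' expr)* ')'
        public class NestedCalls {
            private final List<String> tokens = new ArrayList<>();
            private int pos = 0;

            NestedCalls(String input) {
                // Surround punctuation with spaces, then split on whitespace.
                String spaced = input.replace("(", " ( ").replace(")", " ) ").replace(",", " , ");
                tokens.addAll(Arrays.asList(spaced.trim().split("\\s+")));
            }

            String parseExpr() {
                String name = tokens.get(pos++);
                if (pos < tokens.size() && tokens.get(pos).equals("(")) {
                    pos++; // consume '('
                    List<String> args = new ArrayList<>();
                    args.add(parseExpr()); // recursion is what handles the nesting
                    while (tokens.get(pos).equals(",")) {
                        pos++;
                        args.add(parseExpr());
                    }
                    pos++; // consume ')'
                    return name + args; // e.g. add[add[a, b], c]
                }
                return name;
            }

            public static void main(String[] args) {
                System.out.println(new NestedCalls("add(add(a,b),c)").parseExpr());
                // A flat split on "," cuts the nested call apart, because split
                // has no notion of matching parentheses:
                String[] pieces = "add(add(a,b),c)".split(",");
                System.out.println(pieces.length + " pieces: " + String.join(" | ", pieces));
                // -> 3 pieces: add(add(a | b) | c)
            }
        }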

  • What does Symfony Framework offer that Zend Framework does not?

    - by Fatmuemoo
    I have been working professionally with Zend Framework for about a year. No major complaints: with some modifications, it has done a good job. I'm beginning to work on a side project where I want to rely heavily on MongoDB and Doctrine. I thought it might be a good idea to broaden my horizons and learn another enterprise-level framework. There seems to be a lot of buzz about Symfony. After quickly looking over the site and documentation, I must say I came away pretty underwhelmed. I'm wondering what, if anything, Symfony has to offer that Zend doesn't. What would the advantage be in choosing Symfony?

  • In a SSL web application, what would be the vulnerabilities of using session based authentication?

    - by Thomas C. G. de Vilhena
    I'm not sure the term even exists, so let me explain what I mean by "session based authentication" through some pseudo-code:

        void PerformLogin(string userName, string password)
        {
            if (AreValidCredentials(userName, password))
            {
                Session.Set("IsAuthenticated", true);
            }
            else
            {
                Message.Show("Invalid credentials!");
            }
        }

    So the above method simply verifies that the provided credentials are valid and then sets a session flag to indicate that the session user is authenticated. Under plain HTTP that is obviously unsafe, because anyone could hijack the session cookie/querystring and breach security. However, under HTTPS the session cookie/querystring is protected because client-server communication is encrypted, so I believe this authentication approach would be safe, wouldn't it? I'm asking this because I want to know how authentication tickets can improve web application security. Thanks in advance!

  • How to "translate" interdependent object states in code?

    - by Earl Grey
    I have the following problem. My UI contains several buttons, labels, and other visual information. I am able to describe every possible workflow scenario that should be allowed on that UI. That means I can describe it like this: when button A is pressed, the following should happen. In the case of that A button, there are three independent factors that influence the possible result when pushing it: the state of the session (blank, single, multi, multi special), the actual work being done by the system at the moment of pressing the A button (nothing was happening, work was being done, work was paused), and a separate UI element that has two states (on, off). This gives me a three-dimensional cube with 24 possible outcomes. I could write code for this using if statements, switch statements, etc., but the problem is that I have another 7 buttons on that UI, I can enter this UI from different states, some buttons change the state, some change parameters... To sum up, the combinations are mind-boggling and I am not able to come up with a methodology that scales and is systematically reliable. I am able to describe EVERY workflow with words, and I am sure my description is complete and without logical errors. But I am not able to translate that into code. I was trying to draw flowcharts, but it soon became visually too complicated due to too many if branches. Can you advise how to proceed?
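
    One common way to tame a state cube like this (sketched here in Java with hypothetical state names, since the real ones aren't specified) is to make the triple an explicit, hashable key and dispatch through a table, so every combination is enumerated once, in one place, instead of being scattered across nested ifs:

        import java.util.*;

        public class ButtonDispatch {
            enum Session { BLANK, SINGLE, MULTI, MULTI_SPECIAL }
            enum Work { IDLE, RUNNING, PAUSED }
            enum Toggle { ON, OFF }

            // The full state triple, usable as a map key.
            record UiState(Session session, Work work, Toggle toggle) {}

            private final Map<UiState, Runnable> onButtonA = new HashMap<>();

            ButtonDispatch() {
                // One explicit, testable entry per combination (24 in total).
                onButtonA.put(new UiState(Session.BLANK, Work.IDLE, Toggle.OFF),
                        () -> System.out.println("start a new single session"));
                onButtonA.put(new UiState(Session.SINGLE, Work.RUNNING, Toggle.ON),
                        () -> System.out.println("pause the work, keep the session"));
                // ... the remaining combinations ...
            }

            void pressA(UiState state) {
                onButtonA.getOrDefault(state, () -> {
                    throw new IllegalStateException("unhandled combination: " + state);
                }).run();
            }
        }

    A missing entry then fails loudly the moment button A is pressed in that state, which also makes it straightforward to unit-test that all 24 combinations are covered.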

  • Trainings for Back-end Programmer [closed]

    - by Pius
    I am currently working as an Android developer, but I want to continue my career as a back-end developer. I consider myself to have a relatively good knowledge of networking, databases, writing low-level code, and the other things involved in back ends and mid tiers. What would be some good courses, trainings, or whatever to improve as a back-end developer? Not the basic ones, but rather more advanced ones (not too advanced, I'm self-taught). What are the main events in this area?

  • Git workflow for small teams

    - by janos
    I'm working on a git workflow to implement in a small team. The core ideas in the workflow:
    - there is a shared project master that all team members can write to
    - all development is done exclusively on feature branches
    - feature branches are code reviewed by a team member other than the branch author
    - the feature branch is eventually merged into the shared master and the cycle starts again
    The article explains the steps in this cycle in detail: https://github.com/janosgyerik/git-workflows-book/blob/small-team-workflow/chapter05.md Does this make sense or am I missing something?

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:
    - Fail randomly 10% of the time
    - Succeed 10 times, fail 4 times, repeat
    - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
    - Any policy the tester wants to define
    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /**
             * Return true if and only if the mocked operation succeeded.
             * Implementors should override this method with versions
             * consistent with their failure policy.
             */
            public boolean attempt();
        }

    And each failure policy would be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat (see the sketch below), and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither AFAIK has this function, or something similar with different syntax. Hence, the questions: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
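
    For what it's worth, the PatternFailureFaker described above is small enough to sketch directly against that interface (a hypothetical implementation, not from any existing library):

        // Succeeds N times, then fails M times, then repeats.
        class PatternFailureFaker implements FailureFaker {
            private final int successes;
            private final int failures;
            private long calls = 0;

            PatternFailureFaker(int successes, int failures) {
                this.successes = successes;
                this.failures = failures;
            }

            @Override
            public boolean attempt() {
                // Position within the repeating success/failure cycle.
                long phase = calls++ % (successes + failures);
                return phase < successes;
            }
        }

    A policy change at runtime is then just component.setFailurePolicy(new PatternFailureFaker(10, 4)).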

  • Improve the business logic

    - by Victor
    In my application, I have a feature like this: the user wants to add a new address to the database. Before adding the address, he needs to perform a search (using input parameters like country, city, street, etc.), and when the list comes up, he will manually check whether the address he wants to add is present or not. If it is present, he will not add the address. Is there a way to make this process better - maybe somehow eliminate a step, avoid the need for manual verification, etc.?
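
    One direction, sketched here in Java with hypothetical names (the real schema isn't given), is to replace the manual scan with an automated duplicate check: normalize the address into a canonical key and refuse the insert when the key already exists.

        import java.util.*;

        public class AddressDeduplicator {
            record Address(String country, String city, String street) {}

            // Stand-in for the database table, keyed by the normalized address.
            private final Map<String, Address> byKey = new HashMap<>();

            // Canonical form: lower-cased, trimmed, inner whitespace collapsed.
            static String normalize(Address a) {
                return String.join("|",
                        a.country().trim().toLowerCase(),
                        a.city().trim().toLowerCase(),
                        a.street().trim().toLowerCase().replaceAll("\\s+", " "));
            }

            boolean addIfAbsent(Address a) {
                // putIfAbsent returns null only when the key was not present yet.
                return byKey.putIfAbsent(normalize(a), a) == null;
            }

            public static void main(String[] args) {
                AddressDeduplicator db = new AddressDeduplicator();
                System.out.println(db.addIfAbsent(new Address("US", "Boston", "1 Main St")));   // true: inserted
                System.out.println(db.addIfAbsent(new Address("us", "Boston ", "1  Main St"))); // false: duplicate
            }
        }

    The same idea works against a real database with a unique index on the normalized key, which also removes the race between searching and inserting.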

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

        class Transformer {
            void Process(InputSource input) {
                try {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                } catch (Exception ex) {
                    var inner = new ProcessException(input, ex);
                    _logger.Error("Unable to parse source " + input.Name, inner);
                    throw inner;
                }
            }
        }

        class Pipeline {
            IEnumerable<Result> Transform(IEnumerable<Record> records) {
                // NOTE: no try/catch as I have no useful information to provide
                // at this point in the process
                var results = _algorithm.Process(records);
                // examine and do useful things with results
                return results;
            }
        }

        class Algorithm {
            IEnumerable<Result> Process(IEnumerable<Record> records) {
                var results = new List<Result>();
                foreach (var engine in Engines) {
                    foreach (var record in records) {
                        try {
                            engine.Process(record);
                        } catch (Exception ex) {
                            var inner = new EngineProcessingException(engine, record, ex);
                            _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                            throw inner;
                        }
                    }
                }
                return results;
            }
        }

        class Engine {
            void Process(Record record) {
                for (int i = 0; i < record.SubRecords.Count; ++i) {
                    try {
                        Validate(record.SubRecords[i]);
                    } catch (Exception ex) {
                        var inner = new RecordValidationException(record, i, ex);
                        _logger.Error("Validation of subrecord {0} failed for record {1}", i, record);
                        throw inner;
                    }
                }
            }
        }

    There are a few important things to notice:
    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all important and useful information
    - Logging only happens when failure to do so would cause loss of useful information at a lower level
    Thoughts and concerns:
    - I don't like having so many log entries for each error
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message
    - I can log at different levels (e.g., warning, informational)
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced)
    - The information available at higher levels should not be passed to the lower levels
    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

  • I need advice on laptop purchase for university [closed]

    - by Systemic33
    I'm currently at university studying Computer Science/IT/Information Technology, and this first year I've managed with the laptop I had: an ASUS Eee PC 1000H with a 10.1" screen. But this is getting way too underpowered and small for programming more than just quick programming introduction exercises, so I'm looking to buy a more suitable laptop. It's not supposed to be a desktop replacement, though, since I've got a pretty good desktop already with a 24" monitor. So the kind of laptop I want to buy is one suited for university. If this bears any significance, I'm working in Java at the moment, but I will likely work with lots of other things, incl. web development. I'm looking to spend about $1700 plus/minus, and it should be powerful/big enough for working on programming projects as well as the usual university stuff like MATLAB, Maple, etc. out "in the field", and sometimes for maybe a week when visiting my parents. What I'm looking at right now is the ASUS Zenbook UX31A with the 1920 x 1080 resolution on a 13.3" IPS display, but I'm kinda nervous that it will be too petite for programming. In essence I'm looking for a powerful computer that has good enough battery life and looks good. I would love suggestions or any type of feedback, either with maybe a better choice, or input on what it's like programming on 13" laptops. Very much thanks in advance to anyone who even went through all that! PS. I don't want a Mac, or my inner karma would commit seppuku xD But experiences from working on the 13" MacBook Air would be kinda equivalent to the Zenbook I'm considering, so I would love to hear those. tl;dr The quick brown fox jumps over the lazy dog ;)

  • Nested languages code smell

    - by l0b0
    Many projects combine languages, for example on the web with the ubiquitous SQL + server-side language + markup du jour + JavaScript + CSS mix (often in a single function). Bash and other shell code is mixed with Perl and Python on the server side, evaled and sometimes even passed through sed before execution. Many languages support runtime execution of arbitrary code strings, and in some it seems to be fairly common practice. In addition to advice about security and separation of concerns, what other issues are there with this type of programming, what can be done to minimize it, and is it ever defensible (except in the "PHB on the shoulder" situation)?

  • Resources on learning to program in machine code?

    - by AceofSpades
    I'm a student, fresh into programming and loving it, having gone from Java to C++ and down to C. I moved backwards to the bare bones and thought to go further down to Assembly. But, to my surprise, a lot of people said it's not as fast as C and that there is no use for it. They suggested learning either how to program a kernel or how to write a C compiler. My dream is to learn to program in binary (machine code), or maybe program bare metal (program a micro-controller physically), or write BIOSes or boot loaders or something of that nature. The only possible thing I heard after so much research is that a hex editor is the closest thing to machine language I could find in this day and age. Are there other things I'm unaware of? Are there any resources for learning to program in machine code? Preferably on an 8-bit micro-controller/microprocessor. This question is similar to mine, but I'm interested in practical learning first and then understanding the theory.
