Search Results

Search found 20211 results on 809 pages for 'language implementation'.


  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, which represents a message that is to be broadcast to all users of a particular group. It has a setter and getter method for the Subject and Body properties, and an addRecipientRole() method, which takes a given role and finds the contact token (e.g., an email address) for each user in the role and stores it. It then has a getContactTokens() method. BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of the BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. Now, depending on what BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code. The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object, for creating BroadcastInterface objects, depending on what BroadcasterInterface implementation is used, as creating the BroadcastInterface object is a different responsibility to broadcasting them. But the class used for creating BroadcastInterface objects depends on what implementation of BroadcasterInterface is used to broadcast them. I think, because the knowledge of what method is used to send the broadcasts should only be configured once, the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects. Does the responsibility of “creating and broadcasting objects that implement the BroadcastInterface interface” violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on the way it is broadcasted, I need different broadcast classes—though client code will not be able to tell the difference.)
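
    A minimal sketch (in C#, with hypothetical names) of the alternative the question leans towards, where each broadcaster acts as the factory for its matching broadcast type:

        using System.Collections.Generic;

        public interface IBroadcast
        {
            string Subject { get; set; }
            string Body { get; set; }
            void AddRecipientRole(string role);        // resolves the role's users to contact tokens
            IEnumerable<string> GetContactTokens();
        }

        public interface IBroadcaster
        {
            IBroadcast CreateBroadcast();              // factory method: the broadcaster supplies its own broadcast type
            void Broadcast(IBroadcast broadcast);
        }

        public class EmailBroadcast : IBroadcast
        {
            private readonly List<string> tokens = new List<string>();
            public string Subject { get; set; }
            public string Body { get; set; }
            public void AddRecipientRole(string role) => tokens.Add(role + "@example.com"); // stand-in for the real lookup
            public IEnumerable<string> GetContactTokens() => tokens;
        }

        public class EmailBroadcaster : IBroadcaster
        {
            public IBroadcast CreateBroadcast() => new EmailBroadcast();
            public void Broadcast(IBroadcast broadcast)
            {
                foreach (var address in broadcast.GetContactTokens())
                {
                    // hand the subject/body off to the mailer service here
                }
            }
        }

    Client code only ever sees the two interfaces, so swapping the configured broadcaster swaps the broadcast type with it; whether that second responsibility belongs in the broadcaster or in a separate factory is exactly the trade-off the question weighs.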

    Read the article

  • How do you visually represent programming skills?

    - by TomSchober
    I had a discussion with a recruiter recently that made me wish I could visually represent programming skills. In trying to explain how skills relate, what are the important properties of those skills? Would a tagging model work (e.g. "Design Pattern," "Programming Language," "IDE," or "VCS")? Are they really hierarchical? Clarification: The real problem I see is communicating the level of granularity among skill sets. For instance, saying someone "knows Java" is a uselessly broad term for describing what someone can DO. However, saying they know how to write web services with the Java programming language is a bit better. To go even further, saying they know Spring as a tool under all that is probably specific enough. What should we call those levels of granularity? What are the relationships between the terms we use, e.g. framework to language, tool to language, framework to solution (like web services), etc.?

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I'd like to respond to his questions with this article. There's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by doing a kata.

    ITDD for “To Roman Numerals”

    gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a Wikipedia article. They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication by a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that roman “digits” are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here's the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up. Finding, though, needs some calculation: find the highest remaining factor fitting in the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: first check if an arabic number equal to a factor is processed correctly (e.g. 1000=M); then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly; then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]); finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation of even the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test. It's as simple as possible. Just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it into two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests is what I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple, even simpler than the recursion I had briefly thought of during the problem-solving phase. And by the way, the acceptance tests also went green. Mission accomplished, at least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required, but maybe in a different way than you would expect. That's why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-“digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew the bottom of the solution: the leaves of the functional decomposition tree. Where things became fuzzy because the design did not cover any more detail, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, and first solve a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages. First, encapsulation of the implementation details was not compromised: private methods could naturally stay private, and I did not need to make them internal or public just to be able to test them. Second, I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API. The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times, but that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that, and there are no shortcuts during implementation because of that.
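
    For readers who want the code without the article's screenshots, here is a minimal sketch of the conversion the text describes (find the factors, translate them, compile the result). The class and method names follow the article; the bodies are my reconstruction under those assumptions, not the author's final listing.

        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // Factors ("digits") and their roman representations, as listed in the article.
            private static readonly (int Value, string Digit)[] Map =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public static string ToRoman(int arabic) => Compile(Translate(FindFactors(arabic)));

            // Find the highest remaining factor fitting in the value, subtract it, repeat with the rest.
            private static IEnumerable<int> FindFactors(int arabic)
            {
                foreach (var (value, _) in Map)
                    while (arabic >= value) { yield return value; arabic -= value; }
            }

            // Translation is just a look-up.
            private static IEnumerable<string> Translate(IEnumerable<int> factors) =>
                factors.Select(f => Map.First(m => m.Value == f).Digit);

            // Compilation just concatenates the translated "digits".
            private static string Compile(IEnumerable<string> digits) => string.Concat(digits);
        }

    With this sketch, ToRoman(1954) yields "MCMLIV", matching the example used in the article.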

    Read the article

  • How to set default hreflangs for some languages?

    - by user1721135
    I want to make a site with different versions for two countries which share the same language, and then I need to do the same for another language. Basically I want to have six versions of the site:
    - UK English
    - US English
    - Default English (??)
    - Austrian German
    - Germany German
    - Default German
    The question is, how do I define the "default" language versions for any country with this language which isn't defined already? I know there is x-default, but I think you can only use that once, and it applies to all languages and all countries.
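
    One common way to express that arrangement with hreflang annotations (the URLs below are placeholders; the bare language codes act as the per-language defaults for any country not listed, while x-default covers everything else):

        <link rel="alternate" hreflang="en-GB" href="https://example.com/uk/" />
        <link rel="alternate" hreflang="en-US" href="https://example.com/us/" />
        <link rel="alternate" hreflang="en" href="https://example.com/en/" />
        <link rel="alternate" hreflang="de-AT" href="https://example.com/at/" />
        <link rel="alternate" hreflang="de-DE" href="https://example.com/de/" />
        <link rel="alternate" hreflang="de" href="https://example.com/de/default/" />
        <link rel="alternate" hreflang="x-default" href="https://example.com/" />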

    Read the article

  • In Ubuntu 13.10, none of the hotkeys of LibreOffice works in non-English keyboard layout

    - by maqtanim
    In Ubuntu 13.10, the hotkeys/shortcut keys (Ctrl+B, Ctrl+S, etc.) in LibreOffice are language dependent and work with the English input language only. While writing in any other language (i.e. any Cyrillic language, Bengali, etc.) it's impossible to use hotkeys; they just don't do anything. Switching to the English input language enables the hotkeys once again. This is very frustrating, as the user needs to switch language to save the document or to make text bold or italic. This was not experienced in Ubuntu 13.04. Steps to reproduce:
    1. System Settings > Text Entry.
    2. Add another keyboard layout beside English (in my case Bengali (Probhat)).
    3. Launch Writer.
    4. Switch the keyboard layout from English (US) to Bengali (Probhat) by pressing Ctrl+Space.
    5. Press Ctrl+B to change the font weight to bold.
    Error: the font weight does not change. Expected: the font weight should change to bold. Note: no other system hotkeys work as expected either, e.g. Ctrl+S to save, Ctrl+B to subscript, or Ctrl+I for italic. Workaround: the only way is to change the keyboard layout to English, press the desired hotkey, then switch the keyboard layout back to Bengali. The issue is critical, as it makes Writer very slow for keyboard-only typing.

    Read the article

  • How do search engines segment against locale?

    - by Hope I Helped
    Assume I run a website with multiple language modes. If I had a Spanish section, it should be included in Spanish-segmented search engines such as Google Spain, Google Peru, Google El Salvador, etc., and excluded from the others. Likewise, even though the website would have content in Chinese, multilingual countries such as Singapore should feature content in their main language (English in this case). What is the best approach to ensure the appropriate language is associated with the various geographically segmented search engines?

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom-up development, or who waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:
    - An organisation will have a number of disparate IT systems
    - Some of these systems will have redundant data and functionality
    - Integration will give considerable ROI
    - Integration will already be under way
    - Services will already exist in the organisation
    - These services will be inconsistent in their implementation and in their governance

    So there are three goals here:
    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured gaggle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences that this will have on our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never – that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, and perhaps more than one model in use at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.

    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not-so-near future. To 'Succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains, and that the whole concept of SOA as it is described by clever salespeople today creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solution together in order to create a magic bullet, and then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies who, a short while ago, had one problem each and now have two problems each. This has happened because of a focus on 'Succeeding with SOA' rather than solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization, but only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.

    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and it requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this. The biggest one is the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made. The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
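
    A bare-bones illustration of that decoupling (C#, hypothetical names): a virtual contract shaped by the business model, with an adapter translating it onto whatever the technical implementation actually exposes.

        // The "virtual service" side: a contract shaped by the business architecture.
        public interface ICustomerService
        {
            CustomerSummary GetCustomer(string customerNumber);
        }

        public class CustomerSummary
        {
            public string Number { get; set; }
            public string Name { get; set; }
        }

        // The "adapter" side: a thin translation onto an existing technical implementation.
        public class CrmCustomerAdapter : ICustomerService
        {
            private readonly LegacyCrmClient crm;
            public CrmCustomerAdapter(LegacyCrmClient crm) { this.crm = crm; }

            public CustomerSummary GetCustomer(string customerNumber)
            {
                var record = crm.FetchAccount(customerNumber);   // the pre-delivered, proprietary-shaped call
                return new CustomerSummary { Number = record.Id, Name = record.DisplayName };
            }
        }

        // Stand-in for the proprietary system the adapter wraps.
        public class LegacyCrmClient
        {
            public (string Id, string DisplayName) FetchAccount(string id) => (id, "ACME Ltd");
        }

    The model side can change the contract, and the CRM side can change its services, with the cost of each change confined to the adapter's transformation, which is exactly the trade-off the article names.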

    Read the article

  • what's included in a typical computer architecture class? [closed]

    - by sq1020
    Does this description fit what's usually included in a computer architecture class? Computer Organization and Assembly Language: An introduction to the hardware organization and assembly language of the Intel processor. Topics include memory hierarchy and design, CPU design, pipelining, addressing modes, subroutine linkage, polled input/output, interrupts, high-level language interfacing, and macros.

    Read the article

  • How were the first compilers made?

    - by Sauron
    I always wonder this, and perhaps I need a good history lesson on programming languages. Since most compilers nowadays are made in C, how were the very first compilers made (i.e., before C)? Or were all the languages just interpreted? With that being said, I still don't understand how even the first assembly language was done. I understand what assembly language is, but I don't see how they got the VERY first assembly language working (for example, how did they make the first commands, like mov R21, map to their binary equivalents?).

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made? When faced with this type of decision it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it ultimately has to be made, because as the project progresses, requirements and other factors can alter which technology is best for the project. Important factors to consider when making technology decisions:
    - Time to implement and maintain
    - Total cost of technology (including implementation and maintenance)
    - Adaptability of technology
    - Implementation team's skill sets
    - Complexity of technology (including implementation and maintenance)
    - Forecasted Return On Investment (ROI)
    - Forecasted Profit On Investment (POI)
    Of these factors, ROI and POI weigh the heaviest because they take the other factors into consideration when calculating profitability and return on investment. For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows based web server or on a Linux based web server. Important factors for this example:
    Implementation team's skill sets:
    - Member 1: Classic ASP, ASP.Net, and MS SQL Server (10 years experience)
    - Member 2: PHP, MySQL, Photoshop and MS SQL Server (3 years experience)
    - Member 3: C++, VB6, ASP.Net, and MS SQL Server (12 years experience)
    Total cost of technology (including implementation and maintenance):
    - Linux: $5,000 initial year, $3,000 per additional year (random values)
    - Windows: $10,000 initial year, $3,000 per additional year (random values)
    Complexity of technology:
    - Linux: large learning curve with user-driven documentation; estimated learning cost $30,000
    - Windows: minimal, given the team's skills and Microsoft documentation; estimated learning cost $5,000
    ROI:
    - Linux: initial total cost $35,000, plus $3,000 per additional year
    - Windows: initial total cost $15,000, plus $3,000 per additional year
    Based on these hypothetical numbers it would make more sense to select the Windows based web server, because the initial investment in the technology is much lower compared to the Linux based web server.
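
    A throwaway sketch of that arithmetic (C#; the dollar figures are the random values above, and the three-year horizon is my own assumption):

        using System;

        class TechnologyCostSketch
        {
            static void Main()
            {
                int years = 3;
                int linuxTotal   =  5_000 + 30_000 + 3_000 * (years - 1);   // initial + learning + later years
                int windowsTotal = 10_000 +  5_000 + 3_000 * (years - 1);

                Console.WriteLine($"Linux over {years} years:   {linuxTotal:N0}");   // 41,000
                Console.WriteLine($"Windows over {years} years: {windowsTotal:N0}"); // 21,000
            }
        }

    Whatever horizon is chosen, the yearly costs are identical here, so the comparison is decided by the initial and learning costs, which is why the Windows option comes out ahead in this example.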

    Read the article

  • What php programmer should know?

    - by emchinee
    I've dug through the database here and didn't find an answer to my question. What is the standard for a PHP programmer to know? I mean, literally, what group of language functions, mechanisms, and variables should a person know to consider themselves a (good) PHP programmer? (I know 'being good' goes beyond language syntax; still, I'm considering the syntax of plain PHP only.) To give an example of what I mean: functions to control HTTP sessions and cookies, functions to control connections with databases, functions to control file handling, functions to control XML, etc. I omit phrases like 'security' or 'patterns' or 'framework' intentionally, as they apply to every programming language. Hope I made myself clear; any input appreciated :) Note: Michael J.V. is right in claiming that databases are independent of the language, so to put my question more precisely and emphasise the difference: practices or security are ideas to implement (there is no 'Pattern' object with a 'Decorator()' method, is there?), while using databases means knowing mysqli and its set of methods.

    Read the article

  • Differences between C# and Javascript for Unity [closed]

    - by vrinek
    Apart from the language differences (class-based vs prototypical, strong vs weak typing), what are the differences between using JavaScript and using C# when developing games in Unity3D? Is there a noticeable performance difference? Is the JavaScript code packaged as-is? And if yes, does this help the game's moddability? Is it possible to use libraries developed for one language while developing in the other one? Is it possible to mix the two languages in the same Unity project by coding some parts in C# and others in JavaScript? The next couple of questions are time-specific, so feel free to ignore or remove them: If libraries are not cross-compatible, which language has better library support from the game development perspective? Which language has better game-dev-specific resources available (books, websites, forums)?

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
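
    A tiny illustration of why that "text analysis task" is comfortable in a high-level language: a fragment (C#, with deliberately partial tables; see the Nand2Tetris specification for the full comp/dest/jump sets and symbol handling) that turns a couple of Hack mnemonics into their 16-bit binary form.

        using System;
        using System.Collections.Generic;

        class HackAssemblerFragment
        {
            // Partial Hack tables, just enough for the examples below.
            static readonly Dictionary<string, string> Comp = new Dictionary<string, string>
            {
                ["0"] = "0101010", ["D"] = "0001100", ["A"] = "0110000", ["D+A"] = "0000010", ["D+1"] = "0011111"
            };
            static readonly Dictionary<string, string> Dest = new Dictionary<string, string>
            {
                [""] = "000", ["M"] = "001", ["D"] = "010", ["MD"] = "011"
            };

            static string Translate(string line)
            {
                if (line.StartsWith("@"))                                  // A-instruction: @value
                    return Convert.ToString(int.Parse(line.Substring(1)), 2).PadLeft(16, '0');

                var dest = line.Contains("=") ? line.Split('=')[0] : "";   // C-instruction: dest=comp (jump part omitted)
                var comp = line.Contains("=") ? line.Split('=')[1] : line;
                return "111" + Comp[comp] + Dest[dest] + "000";
            }

            static void Main()
            {
                Console.WriteLine(Translate("@21"));    // 0000000000010101
                Console.WriteLine(Translate("D=D+A"));  // 1110000010010000
            }
        }

    The very first assemblers had no such luxury: the lookup tables and the parsing loop themselves had to be hand-translated into machine code once, after which that minimal assembler could be used to build better ones.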

    Read the article

  • De-index URL paremeters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary up front: I need to get Google to de-index URLs that have certain parameters appended. I have a website example.com with language translations. There used to be many translations but I deleted them all so that only English (default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default) and https://example.com/main?l=fr_FR (French). I added a robots.txt to stop Google from crawling any of the language translations:
        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=
    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements the previously crawled language-translation URLs remain indexed. The internet says to serve a 404 for the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to clean up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM. This URL should not exist; I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's only the parameter that needs to be de-indexed.

    Read the article

  • Class as first-class object

    - by mrpyo
    Could a class be a first-class object? If yes, how would the implementation look? I mean, what could the syntax for dynamically creating new classes look like? EDIT: I mean what example syntax could look like (I'm sorry, English is not my native language), but I still believe this question makes sense: how do you provide this functionality while keeping the language consistent? For example, how do you create a reference to a new type? Do you make references first-class objects too and then use something like this: Reference<newType> r = new Reference<newType>(); r.set(value); Well, this could get messy, so you may just force the user to use Object-typed references for dynamically created classes, but then you lose type-checking. I think creating a concise syntax for this is an interesting problem whose solution could lead to better language design, maybe a language which is a metalanguage for itself (I wonder if this is possible).
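
    For comparison, here is roughly what "classes as values" already looks like in a language with runtime type objects (C#; the example is purely illustrative): the class is passed around like any other object, at the price of the static type-checking the question worries about.

        using System;

        class FirstClassClassSketch
        {
            static void Main()
            {
                // A class chosen (or, with Reflection.Emit, even built) at runtime is just a Type value.
                Type chosen = DateTime.Now.Second % 2 == 0
                    ? typeof(System.Text.StringBuilder)
                    : typeof(object);

                object instance = Activator.CreateInstance(chosen);  // "new chosen()" without compile-time knowledge
                Console.WriteLine(instance.GetType().Name);          // StringBuilder or Object

                // The price: 'instance' is statically just an object, so checks move to run time,
                // exactly the Object-typed-reference trade-off described in the question.
            }
        }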

    Read the article

  • Suggestions for Future or On-The-Edge Languages (2011)

    - by Kurtis
    I'm just looking for some suggestions on newer languages and language implementations that are useful for string manipulation. It's now 2011 and a lot has changed over the years. Most of my work includes web development (which is mostly text-based) and command line scripting. I'm pretty language agnostic, although I've felt violated using PHP over the years. My only requirements are that the language be good at text manipulation, without a lot of 3rd party libraries (core libraries are okay, though), and that the language and/or standard implementation is very up to date or even "futuristic". For example, the two main languages I'm looking at right now are Python (Version 3.x) or Perl (Version 6.x). Research, Academic, and Experimental languages are okay with me. I don't mind functional languages although I'd like to have the option of programming in a procedural or even object oriented manner. Thanks!

    Read the article

  • What's New in Visual Studio 2010 Languages

    - by Aamir Hasan
    What's New in Visual Basic 2010: Describes new features in the Visual Basic language and Code Editor. The features include implicit line continuation, auto-implemented properties, collection initializers, and more.
    What's New in Visual C# 2010: Describes new features in the C# language and Code Editor. The features include the dynamic type, named and optional arguments, enhanced Office programmability, and variance.
    What's New in Visual C++ 2010: Describes new and revised features in Visual C++. The features include lambda expressions, the rvalue reference declarator, and the auto, decltype, and static_assert keywords.
    What's New in Visual F# 2010: Describes the F# language, which is a language that supports functional programming for the .NET Framework.
    Reference: http://msdn.microsoft.com/en-us/library/bb386063%28VS.100%29.aspx
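
    For instance, the named/optional arguments and the dynamic type listed for Visual C# 2010 look like this (a small illustrative snippet, not taken from the linked articles):

        using System;

        class CSharp4Features
        {
            // Optional parameters with defaults; callers can pass arguments by name.
            static void Export(string path, bool overwrite = false, int retries = 3)
            {
                Console.WriteLine(path + " overwrite=" + overwrite + " retries=" + retries);
            }

            static void Main()
            {
                Export("report.xlsx", retries: 1);   // named + optional arguments (C# 4.0)

                dynamic value = "hello";             // dynamic: member lookup deferred to run time
                Console.WriteLine(value.Length);     // resolved against string at run time -> 5
            }
        }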

    Read the article

  • Developing a computer system based on Nand2Tetris [on hold]

    - by Ryan
    I recently finished a book called Nand2Tetris (nand2tetris.org), in which I built my own computer system from scratch, with its own machine language, assembly code, and a high-level language called Jack that's translated to Hack binary. However, I feel like the "computer" I built throughout the course of this book (called the Hack computer) is a bit too simple, for various reasons:
    1) There are only two registers (D and A), whereas most computers have many more
    2) Peripheral devices like the mouse and keyboard have to be directly implemented
    3) Peripheral devices use a pre-planned shared memory map to communicate with the CPU instead of using interrupts (which aren't covered at all)
    4) Jack (the high-level language) code doesn't compile to assembly code directly; instead it compiles to an intermediate language, which in turn gets translated to assembly
    5) There is no ROM or permanent storage device; everything is stored in RAM
    6) There is no support for a color monitor, networking or sound
    I would like to build a more complicated computer system now, based on what I've learned from Nand2Tetris. Does anyone know of any good resources or books to get started on this? (BTW, by computer system I mean software that can emulate the hardware of a virtual computer with its own unique instruction set.)

    Read the article

  • JRuby and JVM Languages at JavaOne!

    - by Yolande Poirier
    "My goal with my talks at JavaOne is to teach what is happening at the JVM level and below so people understand better where we are going" explains Charles Nutter, Jruby project lead. In this interview, Charles shared the JRuby features he presented at the JVM Language Summit. They include foreign function interface (FFI), IO layer, character transcoding, regular expressions, compilers, coroutines, and more.  At JavaOne, he will be presenting:  Going Native: Bringing FFI to the JVM The Java Native Runtime (JNR) is a high-speed foreign function interface (FFI) for calling native code from Java without ever writing a line of C. Based on the success of JNR, JDK Enhancement Proposal (JEP) 191 will bring FFI to OpenJDK as an internal API.  The Emerging Languages Bowl: The Big League Challenge In this panel discussion, these emerging languages are portrayed by their respective champions, who explain how they may help your everyday life as a Java developer. Script Bowl 2014: The Battle Rages On In this contest, languages that run on the JVM, represented by their respective language experts, battle for most popular language status by showing off their new features. Audience members will also vote on a language that should not return in 2015. Returning from 2013 are language gurus representing Clojure, Groovy, JRuby, and Scala.

    Read the article

  • Any tips to learn how to program with severe ADHD?

    - by twinbornJoint
    I have a difficult time trying to learn how to program from straight text-books. Video training seems to work well for me in my past experiences with PHP. I am trying my hardest to stay focussed and push through. Specifically I am looking to start indie game development. Over the last two weeks I have been trying to pick the "right" language and framework to develop with. I started going through Python, but I am not really enjoying the language so far. I am constantly looking through this website to compare this language to that, and keep getting distracted. Aside from all of this, is it possible to become a programmer when you have trouble focussing? Has anyone been through this that can recommend some advice? edit - you guys can check my new question out with detailed information thanks to all of the responses from this thread. http://programmers.stackexchange.com/questions/15916/what-is-the-best-language-and-framework-for-my-situation

    Read the article

  • Defining a service layer: the text-based adventure

    - by Stacy Vicknair
    Applications these days have more options than ever for a user interface, and the number is only going to grow. A successful product might require native applications for mobile devices, a regular web implementation, or even a gaming console. These systems will often be centralized and data driven. The solution is one that's fairly solitary: a service layer! Simply put, take what's shared and put it behind a physical or abstract layer that defines the boundary between the specific user interface and the shared content.

    I know, I know, none of this is complicated. But sometimes it can be difficult to discern what belongs on which side of the line. For instance, say we're creating a service that will provide content for both an ASP.NET MVC application and a WP7 application. Although the content served to each application is the same, there are different paradigms and patterns for displaying that data in the different environments. In ASP.NET MVC, you may create a model specific to a page that combines necessary information. In the WP7 application you might require different sets of data that you will connect via MVVM with the view. The general rule of thumb is that any shared content, business rules, or data should exist separately. Any element that is specific to the current UI implementation should be included in a separate library or with the UI implementation itself. The WP7 application doesn't need my MVC-specific model classes. My MVC application doesn't require those INotifyPropertyChanged viewmodels that the WP7 application depends on. In both cases, there should be additional processing done above the service layer to massage the data to the application's specific needs.

    Service-ocalypse: the text-based adventure

    What helps me the most in deciding whether something belongs coupled to the UI implementation or in the shared implementation is thinking of the simplest implementation you could have: a console application. You might have played a game like Peasant's Quest: the console app is the text-based adventure game version of your application. If your service was consumed in its simplest form, you would simply have a console-based API for it that issues requests. Maybe those requests aren't SWIM TO BOAT, but they might be CREATE USER JOHN. If I issue a request, I expect that request to be issued to the service. If the service has any exceptions or issues with my input, that business logic should be encapsulated in that service, not implemented in the UI. The service layer should be your functional application in its entirety, and anything above that layer should only assist with the display of that information.
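
    A compact sketch of that boundary (C#, hypothetical names): one shared contract behind the line, with each UI shaping the result to its own paradigm above it.

        // Shared service layer: the "text-based adventure" API every client consumes.
        public interface IUserService
        {
            UserDto CreateUser(string name);   // the CREATE USER JOHN request
        }

        public class UserDto                   // shared, UI-agnostic content
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Above the line, the MVC site shapes it into a page-specific model...
        public class UserPageModel
        {
            public string Heading { get; set; }
            public UserDto User { get; set; }
        }

        // ...while the WP7 client would wrap it in an INotifyPropertyChanged viewmodel instead.
        // Neither UI-specific shape belongs below the service boundary.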

    Read the article

  • Google I/O 2012 - Meet the Go Team

    Speakers: Andrew Gerrand, Rob Pike. The Go programming language is an open source project to make programmers more productive. Go is expressive, concise, clean, and efficient. It's a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language. In this fireside chat, have your Go questions answered by the gophers themselves. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 01:00:29

    Read the article
