Search Results

Search found 504 results on 21 pages for 'abstraction'.

Page 7 of 21

  • Haskell web frameworks survey

    - by Phuc Nguyen
    There are several web frameworks for Haskell, like Happstack, Snap, and Yesod, and probably a few more. In what aspects do they differ from each other? For example: features (e.g. server only, or also client scripting; easy support for different kinds of database); maturity (e.g. stability, documentation quality); scalability (e.g. performance, handy abstraction); and main targets. Also, what are examples of real-world sites / web apps using these frameworks? Many thanks.

    Read the article

  • What do you consider to be a high-level language and for what reason?

    - by Anto
    Traditionally, C was called a high-level language, but these days it is often referred to as a low-level one (it is high-level compared to Assembly, but very low-level compared to, for instance, Python). Generally, everyone calls languages like Python and Java high-level languages nowadays. How would you judge whether a programming language really is a high-level language (or a low-level one)? Must you give direct instructions to the CPU while programming in a language to call it low-level (e.g. Assembly), or is C, which provides some abstraction from the hardware, a low-level language too? Has the meaning of "high-level" and "low-level" changed over the years?

    Read the article

  • ASP.NET WebAPI Security 2: Identity Architecture

    - by Your DisplayName here!
    Pedro has beaten me to the punch with a detailed post (and diagram) about the WebAPI hosting architecture. So go read his post first, then come back so we can have a closer look at what that means for security.

    The first important takeaway is that WebAPI is hosting independent – currently it ships with two host integration implementations: one for ASP.NET (aka web host) and one for WCF (aka self host). Pedro nicely shows the integration into the web host. Self hosting is not done yet, so we will mainly focus on the web hosting case, and I will point out security-related differences where they exist.

    The interesting part for security (amongst other things, of course) is the HttpControllerHandler (see Pedro’s diagram) – this is where the host-specific representation of an HTTP request gets converted to the WebAPI abstraction (called HttpRequestMessage). The ConvertRequest method does the following: it creates a new HttpRequestMessage, copies URI, method and headers from the HttpContext, and copies HttpContext.User to the Properties<string, object> dictionary on the HttpRequestMessage. The key used for that can be found on HttpPropertyKeys.UserPrincipalKey (which resolves to “MS_UserPrincipal”). So the consequence is that WebAPI receives whatever IPrincipal has been set by the ASP.NET pipeline (in the web hosting case).

    Common questions are:

    Are there situations where this property does not get set? Not in ASP.NET – the DefaultAuthenticationModule in the HTTP pipeline makes sure HttpContext.User (and Thread.CurrentPrincipal – more on that later) are always set, either to some authenticated user or to an anonymous principal. This may be different in other hosting environments (again, more on that later).

    Why so generic? Keep in mind that WebAPI is hosting independent and may run on a host that materializes identity completely differently compared to ASP.NET (or .NET in general). This gives them a way to evolve the system in the future.

    How does WebAPI code retrieve the current client identity? HttpRequestMessage has an extension method called GetUserPrincipal() which returns the property as an IPrincipal.

    A quick look at self hosting shows that the moral equivalent of HttpControllerHandler.ConvertRequest() is HttpSelfHostServer.ProcessRequestContext(). Here the principal property only gets set when the host is configured for Windows authentication (an inconsistency).

    Do I like that? Well – yes and no. Here are my thoughts:

    I like that it is very straightforward to let WebAPI inherit the client identity context of the host. This might not always be what you want – think of an ASP.NET app that consists of UI and APIs: the UI might use Forms authentication, the APIs token-based authentication. So it would be good if the two parts could live in separate security worlds.

    It makes total sense to have this generic hand-off point for identity between the host and WebAPI. It also makes total sense for WebAPI plumbing code (especially handlers) to use the WebAPI-specific identity abstraction. But – c’mon, we are running on .NET, and the way .NET represents identity is via IPrincipal/IIdentity. That’s what every .NET developer on this planet is used to. So I would like to see a User property of type IPrincipal on ApiController.

    I don’t like the fact that Thread.CurrentPrincipal is not populated. T.CP is a well-established pattern as a one-stop shop to retrieve the client identity on .NET. That makes a lot of sense – even if the name is misleading at best. There might be existing library code you want to call from WebAPI that makes use of T.CP (e.g. PrincipalPermission, or a simple .Name or .IsInRole()). Having the client identity as an ambient property is also useful for code that does not have access to the current HTTP request (for calling GetUserPrincipal()).

    I don’t like the fact that the client identity conversion from host to WebAPI is inconsistent. This makes writing security plumbing code harder. I think the logic should always be: if the host has a client identity representation, copy it; if not, set an anonymous principal on the request message.

    Btw – please don’t annoy me with the “but T.CP is static, and static is bad for testing” chant. T.CP is a getter/setter and, in fact, I find it beneficial to be able to set different security contexts in unit tests before calling into some logic. And, in case you have wondered – T.CP is indeed thread static (and the name comes from a time where a logical operation was bound to a thread – which is not true anymore). But all thread creation APIs in .NET actually copy T.CP to the new thread they create. This has been the case since .NET 2.0 and is certainly an improvement compared to how Win32 does things.

    So to sum it up: The host plumbing copies the host client identity to WebAPI (this is not perfect yet, but will surely be improved). Or in other words: the current WebAPI bits don’t ship with any authentication plumbing, but solely use whatever authentication (and thus client identity) is set up by the host. WebAPI developers can retrieve the client identity from the HttpRequestMessage. Hopefully my proposed changes around T.CP and the User property on ApiController will be added.

    In the next post, I will detail how to add WebAPI-specific authentication support, e.g. for Basic Authentication and tokens. This includes integrating the notion of claims-based identity. After that we will look at the built-in authorization bits and how to improve them as well. Stay tuned.
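
    To make the hand-off described above concrete, here is a minimal, hypothetical sketch (in Java, since the idea itself is language-neutral; these are not the actual WebAPI types or keys): the host copies its client identity into a property bag on the request abstraction, framework code reads it back through a typed accessor, and an ambient holder plays the role Thread.CurrentPrincipal plays in .NET.

        import java.security.Principal;
        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical framework-side request abstraction, loosely mirroring the
        // HttpRequestMessage/properties idea described above (not the real API).
        class RequestMessage {
            static final String USER_PRINCIPAL_KEY = "MS_UserPrincipal"; // illustrative key
            private final Map<String, Object> properties = new HashMap<>();

            void setProperty(String key, Object value) { properties.put(key, value); }

            // Typed accessor, the moral equivalent of GetUserPrincipal()
            Principal getUserPrincipal() {
                return (Principal) properties.get(USER_PRINCIPAL_KEY);
            }
        }

        // Host-side plumbing: always hand off an identity, anonymous if none exists.
        class HostAdapter {
            // Ambient principal for library code that has no access to the request
            // (the role Thread.CurrentPrincipal plays in .NET).
            static final ThreadLocal<Principal> CURRENT = new ThreadLocal<>();

            RequestMessage convertRequest(Principal hostUser) {
                Principal principal = hostUser;
                if (principal == null) {
                    principal = () -> "anonymous"; // fall back to an anonymous principal
                }
                RequestMessage msg = new RequestMessage();
                msg.setProperty(RequestMessage.USER_PRINCIPAL_KEY, principal);
                CURRENT.set(principal); // keep the ambient context consistent
                return msg;
            }
        }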

    Read the article

  • Relationship between TDD and Software Architecture/Design

    - by Christopher Francisco
    I'm new to TDD and have been reading the theory, since applying it is more complicated than it sounds when you're learning by yourself. As far as I know, the objective is to write a test case for each requirement and run the test so it fails first (to prevent a false positive). Afterward, you should write the minimum amount of code that can pass the test and move on to the next one. That being said, you may get fast development, but what about the code itself? This approach makes me think you are not considering things like abstraction, delegation of responsibilities, design patterns, architecture and so on, since you're just writing "the minimum amount of code that can pass the test". I know I'm probably wrong, because if this were true we'd have a lot of crappy developers producing poor software architecture and documentation, so I'm asking for guidance here: what's the relationship between TDD and Software Architecture/Design?
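
    To make the cycle concrete, here is a minimal red-green-refactor round in Java with JUnit 4 (the class and test names are made up): the first failing test drives the simplest implementation, and design concerns such as extracting abstractions are addressed in the refactor step while the tests stay green.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class PriceCalculatorTest {

            // Step 1 (red): fails until PriceCalculator exists and returns 100.
            @Test
            public void netPriceOfSingleItemIsItsUnitPrice() {
                assertEquals(100, new PriceCalculator().netPrice(1, 100));
            }

            // Step 3 (next red): forces any hard-coded solution to be generalized.
            @Test
            public void netPriceScalesWithQuantity() {
                assertEquals(300, new PriceCalculator().netPrice(3, 100));
            }
        }

        // Step 2 (green): the minimum code to pass; refactoring (naming, extracting
        // a Money abstraction, etc.) happens afterwards, while the tests stay green.
        class PriceCalculator {
            int netPrice(int quantity, int unitPrice) {
                return quantity * unitPrice;
            }
        }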

    Read the article

  • Design patterns frequently seen in embedded systems programming

    - by softwarelover
    I don't have a question related to any specific code. My concerns are about embedded systems programming, independent of any particular programming language. Because I am new to the realm of embedded programming, I would appreciate responses from those who consider themselves experienced embedded systems programmers. I basically have two questions. First, of the design patterns listed here, are any seen frequently in embedded systems programming: Abstraction-Occurrence, General Hierarchy, Player-Role, Singleton, Observer, Delegation, Adapter, Facade, Immutable, Read-Only Interface, and Proxy? Second, as an experienced embedded developer, what design patterns have you personally come across? There is no need to describe the details; the pattern names would suffice. Please share your own experience. I believe the answers to these questions would be a good starting point for any novice programmer in the embedded world.
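
    To make one of the listed patterns concrete – Observer, which in embedded work typically shows up as interrupt/event callbacks – here is a minimal, hypothetical sketch in Java (the sensor and listener names are invented for the example):

        import java.util.ArrayList;
        import java.util.List;

        // Observer: decouples the producer of an event (e.g. a sensor driver fed by
        // an interrupt) from the code that reacts to it.
        interface TemperatureListener {
            void onReading(double celsius);
        }

        class TemperatureSensor {
            private final List<TemperatureListener> listeners = new ArrayList<>();

            void addListener(TemperatureListener l) { listeners.add(l); }

            // Called by the driver/ISR glue when a new sample arrives.
            void publish(double celsius) {
                for (TemperatureListener l : listeners) {
                    l.onReading(celsius);
                }
            }
        }

        class Demo {
            public static void main(String[] args) {
                TemperatureSensor sensor = new TemperatureSensor();
                sensor.addListener(c -> System.out.println("log: " + c));
                sensor.addListener(c -> { if (c > 80.0) System.out.println("fan on"); });
                sensor.publish(85.5); // both observers react; neither knows the other
            }
        }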

    Read the article

  • Model Driven Architecture Approach in programming / modelling

    - by yak
    I know the basics of model driven architecture: it is all about modelling the system I want to create and generating the core code afterwards. I used CORBA a while ago; the first thing I needed to do was create an abstract interface (some kind of model of the system I wanted to build) and generate core code later. But I have a different question: is model driven architecture a broad approach or not? Let's say I have a (modelling) language in which I want to model an EXISTING system (as opposed to a system I want to CREATE), and then analyze the model and different facts about that modelled abstraction. In this case, can the process I described be considered a model driven architecture approach? I mean, I have the model, but it is a model of the existing system, not of a system to be created.

    Read the article

  • What technologies are used for Game development nowadays?

    - by Monika Michael
    Whenever I ask a question about game development in an online forum, I always get suggestions like learning line-drawing algorithms, bit-level image manipulation, video decompression, etc. However, looking at games like God of War 3, I find it hard to believe that they could be developed using such low-level techniques. The sheer awesomeness of such games defies any programming methodology comprehensible to me. Besides, the gaming hardware is really a monster nowadays, so it stands to reason that the developers would work at a higher level of abstraction. What is the latest development methodology in the gaming industry? How is a team of 30-35 developers (of which most is management and marketing fluff) able to make such mind-boggling games? If the question seems too general, could you explain the architecture of God of War 3, or how you would go about producing a clone? That, I think, should be objectively answerable.

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012.

    “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment.

    McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”

    John Duimovich on Improvements in Java

    McGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read-only artefacts, in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing and manages the tenants, file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.

    Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software

    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in networking and storage – developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and newer technologies such as storage-class memory and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee.

    McGee discussed transactional messaging applications, where developers send messages and transactionally persist them to storage – traditionally done by backing messages with spinning disks, an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Do database management systems exist?

    - by Bakaburg
    I want to build a database for my non-profit association, and I was planning to do it by myself. But then I realized that I really don't have the time to build a solid, secure system. So I was thinking: just as CMSs exist, maybe there are also database management systems – I mean a layer of abstraction over the database that allows you to manage data, manage access to data, and create widgets with and expand the data. Maybe with a frontend to use this data and a backend to manage it. That is, a CMS based not on pages and posts but on data! Moreover, I would like a standard solution, because my IT management position ends this year, so I need something that will be easy to use and extend even by someone who is not a developer. Does something exist that fits my need? PS: I would really like a modern and visually pleasant solution, JavaScript- and Ajax-heavy, that relies as little as possible on the server and on page reloads.

    Read the article

  • Using a subset of GetHashCode() to increase AzureTable performance through partitioning

    - by makerofthings7
    Generally speaking, Azure Table IO performance improves as more partitions are used (with some tradeoffs in continuation tokens and batch updates I won't go into). Since the partition key is always a string, I am considering a "natural" load balancing technique based on a subset of the GetHashCode() of the partition key, appending this subset to the partition key itself. This would allow all direct PK/RK queries to be computed with little overhead and with ease; batch updates may just need an intermediate step to group similar PKs together prior to submission. Questions: Should I use GetHashCode() to compute the partition key, or is a better function available? If I use GetHashCode(), does it matter which characters of the hash I use in my PK? Is there an abstraction for Azure Table and Blob storage that does this for me already?
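
    To make the idea concrete, here is a small sketch (in Java, purely illustrative – not an Azure SDK API) of deriving a partition prefix from a subset of a hash of the key. The bucket count and key format are made up, and a stable hash is used because a runtime-provided hash code is generally not guaranteed to stay the same across processes or framework versions.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        // Sketch: spread entities across N partitions by prefixing the natural key
        // with a bucket derived from a stable hash of that key.
        class PartitionKeys {
            private static final int BUCKETS = 16; // number of "virtual" partitions, illustrative

            static String partitionKeyFor(String naturalKey) {
                int bucket = Math.floorMod(stableHash(naturalKey), BUCKETS);
                // e.g. "07_customer-12345": point queries can recompute this cheaply
                return String.format("%02d_%s", bucket, naturalKey);
            }

            private static int stableHash(String s) {
                try {
                    byte[] d = MessageDigest.getInstance("MD5")
                            .digest(s.getBytes(StandardCharsets.UTF_8));
                    // use the first four bytes of the digest as an int
                    return ((d[0] & 0xff) << 24) | ((d[1] & 0xff) << 16)
                         | ((d[2] & 0xff) << 8) | (d[3] & 0xff);
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e);
                }
            }

            public static void main(String[] args) {
                System.out.println(partitionKeyFor("customer-12345"));
            }
        }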

    Read the article

  • How do I overcome paralysis by analysis when coding?

    - by LuxuryMode
    When I start a new project, I oftentimes immediately start thinking about the details of implementation: "Where am I going to put the DataBaseHandler? How should I use it? Should classes that want to use it extend from some abstract superclass? Should I use an interface? What level of abstraction am I going to use in my class that contains methods for sending requests and parsing data?" I end up stalling for a long time because I want to code for extensibility and reusability, but I find it almost impossible to get past thinking about how to implement things perfectly. And then, if I try to just say "screw it, just get it done!", I hit a brick wall pretty quickly because my code isn't organized, I've mixed levels of abstraction, etc. What are some techniques/methods you have for launching into a new project while also setting up a logical/modular structure that will scale well?

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
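
    To see the solution approach from section 2 in compact, runnable form, here is a hedged sketch in Java (the article’s own code is C#; class and method names such as ToRoman, Translate and Find_factors are only loosely mirrored here): find the factors greedily from the largest roman “digit” down, translate each factor via the value-to-digit map, and compile the result.

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        // A compact rendering of the kata solution described above: treat IV, IX,
        // XL, XC, CD and CM as "digits" in their own right, so conversion is a
        // greedy sum of factors.
        class ArabicRomanConversions {
            // Ordered from the highest factor to the lowest, as the algorithm requires.
            private static final Map<Integer, String> DIGITS = new LinkedHashMap<>();
            static {
                DIGITS.put(1000, "M"); DIGITS.put(900, "CM");
                DIGITS.put(500, "D");  DIGITS.put(400, "CD");
                DIGITS.put(100, "C");  DIGITS.put(90, "XC");
                DIGITS.put(50, "L");   DIGITS.put(40, "XL");
                DIGITS.put(10, "X");   DIGITS.put(9, "IX");
                DIGITS.put(5, "V");    DIGITS.put(4, "IV");
                DIGITS.put(1, "I");
            }

            static String toRoman(int arabic) {
                StringBuilder roman = new StringBuilder();
                for (int factor : findFactors(arabic)) {   // 1954 -> [1000, 900, 50, 4]
                    roman.append(DIGITS.get(factor));      // translate each factor
                }
                return roman.toString();                   // compile: M + CM + L + IV
            }

            // Find the highest remaining factor fitting in the value, subtract, repeat.
            private static List<Integer> findFactors(int value) {
                List<Integer> factors = new ArrayList<>();
                for (int factor : DIGITS.keySet()) {
                    while (value >= factor) {
                        factors.add(factor);
                        value -= factor;
                    }
                }
                return factors;
            }

            public static void main(String[] args) {
                System.out.println(toRoman(1954)); // MCMLIV
            }
        }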

    Read the article

  • Multiple Object Instantiation

    - by Ricky Baby
    I am trying to get my head around object oriented programming as it pertains to web development (more specifically PHP). I understand inheritance and abstraction etc., and know all the "buzz-words" like encapsulation and single purpose, and why I should be doing all this. But my knowledge falls short when it comes to actually creating objects that relate to the data I have in my database. Creating a single object that is representative of a single entity makes sense, but what are the best practises when creating 100, 1,000 or 10,000 objects of the same type? For instance, when trying to display a list of the items, I would ideally like to be consistent with the objects I use, but where exactly should I run the query and get the data to populate the object(s), as running 10,000 queries seems wasteful? As an example, say I have a database of cats and I want a list of all black cats: do I need to set up a FactoryObject which grabs the data needed for each cat from my database, then passes that data into each individual CatObject and returns the results in an array/object – or should I pass each CatObject its identifier and let it populate itself in a separate query?
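
    To make the FactoryObject idea concrete, here is a minimal sketch (in Java for neutrality; in PHP the same shape would typically sit on top of PDO) where a single query hydrates the whole list and the entity objects never issue their own queries. The class and table names are invented for the example.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        // The Cat object stays a plain entity; it never talks to the database itself.
        class Cat {
            final long id;
            final String name;
            final String colour;
            Cat(long id, String name, String colour) {
                this.id = id; this.name = name; this.colour = colour;
            }
        }

        // One query hydrates the whole list, instead of one query per CatObject.
        class CatRepository {
            private final Connection connection;
            CatRepository(Connection connection) { this.connection = connection; }

            List<Cat> findByColour(String colour) throws SQLException {
                String sql = "SELECT id, name, colour FROM cats WHERE colour = ?";
                try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                    stmt.setString(1, colour);
                    try (ResultSet rs = stmt.executeQuery()) {
                        List<Cat> cats = new ArrayList<>();
                        while (rs.next()) {
                            cats.add(new Cat(rs.getLong("id"),
                                             rs.getString("name"),
                                             rs.getString("colour")));
                        }
                        return cats;
                    }
                }
            }
        }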

    Read the article

  • How to structure a project that supports multiple versions of a service?

    - by Nick Canzoneri
    I'm hoping for some tips on creating a project (ASP.NET MVC, but I guess it doesn't really matter) against multiple versions of a service (in this case, actually multiple sets of WCF services). Right now, the web app uses only some of the services, but the eventual goal would be to use the features of all of them. The code used to implement a service feature would likely be very similar between versions in most cases (but, of course, everything varies). So, how would you structure a project like this? Separate source control branches for each version? I'm shying away from this because I don't feel branch merging should be something we do really often. Different project/solution files in the same branch? That could link the same shared projects easily. Or build some type of abstraction layer on top of the services, so that no matter which service version is being used, it looks the same to the web application?
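
    A minimal sketch of the third option (the abstraction layer), shown in Java; the interface and version names are hypothetical, and real adapters would wrap the generated WCF clients. The application codes against one version-agnostic interface, and configuration rather than a branch decides which adapter it gets.

        // The application codes against one version-agnostic interface...
        interface OrderService {
            String placeOrder(String customerId, String sku);
        }

        // ...and each service version gets a thin adapter that hides its quirks.
        class OrderServiceV1 implements OrderService {
            @Override
            public String placeOrder(String customerId, String sku) {
                // call the v1 service client here
                return "v1-order-for-" + customerId;
            }
        }

        class OrderServiceV2 implements OrderService {
            @Override
            public String placeOrder(String customerId, String sku) {
                // v2 might need extra mapping, retries, different fault handling, etc.
                return "v2-order-for-" + customerId;
            }
        }

        // Configuration (not a branch) decides which adapter the web app receives.
        class OrderServiceFactory {
            static OrderService forVersion(String version) {
                return "v2".equals(version) ? new OrderServiceV2() : new OrderServiceV1();
            }
        }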

    Read the article

  • Taking Object Oriented development to the next level

    - by Songo
    Can you mention some advanced OO topics or concepts that one should be aware of? I have been a developer for 2 years now and am currently aiming for a certain company that requires a web developer with a minimum of 3 years' experience. I imagine the interview will cover the basic object oriented topics (abstraction, polymorphism, inheritance, design patterns, UML, databases and ORMs, SOLID principles, the DRY principle, etc.). I have these topics covered, but what I'm looking forward to is bringing up topics such as efferent coupling, afferent coupling, instability, the Law of Demeter, etc. Until a few days ago I never knew such concepts existed (maybe because I'm basically a communication engineer, not a CS graduate). Can you please recommend some more advanced topics concerning object oriented programming?
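
    For a taste of one of the less familiar concepts mentioned, the Law of Demeter ("only talk to your immediate friends"), here is a small, hypothetical Java illustration: the first call chain reaches through an intermediate object, the second asks the direct collaborator for what the caller actually needs.

        class Address { String city; Address(String city) { this.city = city; } }

        class Customer {
            private final Address address;
            Customer(Address address) { this.address = address; }

            // Demeter-unfriendly: exposes internals for others to dig through.
            Address getAddress() { return address; }

            // Demeter-friendly: answers the question the caller actually has.
            String shippingCity() { return address.city; }
        }

        class InvoicePrinter {
            void print(Customer customer) {
                // Violates the Law of Demeter: couples this class to Customer AND Address.
                String city1 = customer.getAddress().city;

                // Respects it: only talks to its immediate collaborator.
                String city2 = customer.shippingCity();

                System.out.println(city1 + " / " + city2);
            }
        }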

    Read the article

  • When should I use AtomPub?

    - by Gary Rowe
    I have been conducting some research into RESTful web service design and I've reached what I think is a key decision point so I thought I'd offer it up to the community to get some advice. In keeping with the principles of a RESTful architecture I want to present a discoverable API, so I will be supporting the various HTTP verbs as fully as possible. My difficulty comes with the choice of representation of those resources. You see, it would be easy for me to come up with my own API that covers how search results are to be presented and how links to other resources are provided, but this would be unique to my application. I've read about the Atom Publishing Protocol (RFC 5023), and how OData promotes its use, but it seems to add an extra level of abstraction over what is (currently) a rather simple API. So my question is, when should a developer select AtomPub as their choice of representation - if at all? And if not, what is the current recommended approach?

    Read the article

  • Getting My Head Around Immutability

    - by Michael Mangold
    I'm new to object-oriented programming, and one concept that has been taking me a while to grasp is immutability. I think the light bulb went off last night, but I want to verify: when I come across statements that an immutable object cannot be changed, I'm puzzled because I can, for instance, do the following: NSString *myName = @"Bob"; myName = @"Mike"; There, I just changed myName, of immutable type NSString. My problem is that the word "object" can refer either to the physical object in memory or to the abstraction, "myName." The former definition applies to the concept of immutability. As for the variable, a clearer (to me) definition of immutability is that the value of an immutable object can only be changed by also changing its location in memory, i.e. its reference (also known as its pointer). Is this correct, or am I still lost in the woods?
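
    The same distinction exists in Java, where String is immutable and StringBuilder is not. A short sketch of the difference between rebinding a variable and mutating the object it refers to:

        class ImmutabilityDemo {
            public static void main(String[] args) {
                // Rebinding: the variable now points at a *different* String object.
                // Neither String object was ever modified.
                String myName = "Bob";
                myName = "Mike";
                System.out.println(myName); // Mike

                // Mutation: the variable still points at the *same* object,
                // but that object's internal state has changed.
                StringBuilder greeting = new StringBuilder("Hello");
                greeting.append(", world");
                System.out.println(greeting); // Hello, world
            }
        }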

    Read the article

  • Erlang web frameworks survey

    - by Zachary K
    (Inspired by the similar question on Haskell) There are several web frameworks for Erlang, like Nitrogen, Chicago Boss, and Zotonic, and a few more. In what aspects do they differ from each other? For example: features (e.g. server only, or also client scripting; easy support for different kinds of database); maturity (e.g. stability, documentation quality); scalability (e.g. performance, handy abstraction); and main targets. Also, what are examples of real-world sites / web apps using these frameworks? EDIT: Starting a bounty in hopes that it will get some conversation going.

    Read the article

  • What are approaches for analyzing the cost-benefits of a development methodology?

    - by Garrett Hall
    There are many development practices (TDD, continuous integration, cowboy-coding), principles (SOLID, layers of abstraction, KISS), and processes (RUP, Scrum, XP, Waterfall). I have learned you can't follow any of these blindly, but have to consider context and ROI (return on investment). My question is: How do you know whether you are getting a good ROI by following a particular methodology? Metrics, guesstimation, experience? Do analytical methods exist? Or is this just the million-dollar question in software engineering that has no answer?

    Read the article

  • Avoiding coupling

    - by Seralize
    "It is also true that a system may become so coupled, where each class is dependent on other classes that depend on other classes, that it is no longer possible to make a change in one place without having a ripple effect and having to make subsequent changes in many places.[1] This is why using an interface or an abstract class can be valuable in any object-oriented software project." – Quote from Wikipedia

    Starting from scratch

    I'm starting from scratch with a project that I recently finished, because I found the code to be too tightly coupled and hard to refactor, even when using MVC. I will be using MVC on my new project as well, but want to try and avoid the pitfalls this time, hopefully with your help.

    Project summary

    My issue is that I really wish to keep the Controller as clean as possible, but it seems like I can't do this. The basic idea of the program is that the user picks wordlists, which are sent to the game engine. It will pick random words from the lists until there are none left.

    Problem at hand

    My main problem is that the game will have 'modes' and needs to check the input in different ways through a method called checkWord(), but exactly where to put this and how to abstract it properly is a challenge to me. I'm new to design patterns, so I'm not sure whether any exist that might fit my problem.

    My own attempt at abstraction

    Here is what I've gotten so far after hours of 'refactoring' the design plans, and I know it's long, but it's the best I could do to try and give you an overview (note: as this is a sketch, anything is subject to change; all help and advice is very welcome. Also note the marked coupling points):

    Wordlist

        class Wordlist {
            // Basic CRUD etc. here!
            // Other sample methods:
            public function wordlistCount($user_id) {} // Returns count of how many wordlists a user has
            public function getAll($user_id) {} // Returns all wordlists of a user
        }

    Word

        class Word {
            // Basic CRUD etc. here!
            // Other sample methods:
            public function wordCount($wordlist_id) {} // Returns count of words in a wordlist
            public function getAll($wordlist_id) {} // Returns all words from a wordlist
            public function getWordInfo($word_id) {} // Returns information about a word
        }

    Wordpicker

        class Wordpicker {
            // The class needs to know which words and wordlists to exclude
            protected $_used_words = array();
            protected $_used_wordlists = array();

            // Wordlists to pick words from
            protected $_wordlists = array();

            /* Public Methods */
            public function setWordlists($wordlists = array()) {}
            public function setUsedWords($used_words = array()) {}
            public function setUsedWordlists($used_wordlists = array()) {}
            public function getRandomWord() {} // COUPLING POINT! Will most likely need to communicate with both the Wordlist and Word classes

            /* Protected Methods */
            protected function _checkAvailableWordlists() {} // COUPLING POINT! Might need to check if wordlists are deleted etc.
            protected function _checkAvailableWords() {} // COUPLING POINT! Method needs to get all words in a wordlist from the Word class
        }

    Game

        class Game {
            protected $_session_id; // The ID of a game session which gets stored in the database along with game details
            protected $_game_info = array();

            // Game instantiation
            public function __construct($user_id) {
                if (! $this->_session_id = $this->_gameExists($user_id)) {
                    // New game
                } else {
                    // Resume game
                }
            }

            // This is the method I tried to make flexible by using abstract classes etc.
            // Does it even belong in this class at all?
            public function checkWord($answer, $native_word, $translation) {} // This method checks the answer against the native word / translation word, depending on game mode

            public function getGameInfo() {} // Returns information about a game session, or creates it if it does not exist
            public function deleteSession($session_id) {} // Deletes a game session from the database

            // Methods dealing with game session information
            protected function _gameExists($user_id) {}
            protected function _getProgress($session_id) {}
            protected function _updateProgress($game_info = array()) {}
        }

    The Game

        /* CONTROLLER */
        /* "Guess the word" page */

        // User input
        $game_type = $_POST['game_type']; // Chosen with radio buttons etc.
        $wordlists = $_POST['wordlists']; // Chosen with checkboxes etc.

        // Starts a new game or resumes one from the database
        $game = new Game($_SESSION['user_id']);
        $game_info = $game->getGameInfo();

        // Instantiates a new Wordpicker
        $wordpicker = new Wordpicker();
        $wordpicker->setWordlists((isset($game_info['wordlists'])) ? $game_info['wordlists'] : $wordlists);
        $wordpicker->setUsedWordlists((isset($game_info['used_wordlists'])) ? $game_info['used_wordlists'] : NULL);
        $wordpicker->setUsedWords((isset($game_info['used_words'])) ? $game_info['used_words'] : NULL);

        // Fetches an available word
        if (! $word_id = $wordpicker->getRandomWord()) {
            // No more words left - game over!
            $game->deleteSession($game_info['id']);
            redirect();
        } else {
            // Presents word details to the user
            $word = new Word();
            $word_info = $word->getWordInfo($word_id);
        }

    The Bit to Finish

        /* CONTROLLER */
        /* "Check the answer" page */

        // ?????????????????? ( http://pastebin.com/cc6MtLTR )

    Make sure you toggle the 'Layout Width' to the right for a better view. Thanks in advance.

    Questions

    To what extent should objects be loosely coupled? If object A needs info from object B, how is it supposed to get this without losing too much cohesion? As suggested in the comments, models should hold all business logic. However, as objects should be independent, where do I glue them together? Should the model contain some sort of "index" or "client" area which connects the dots?

    Edit: So basically what I should do for a start is to make a new model which I can more easily call with one-liners such as $model->doAction(); // Lots of code in here which uses classes! How about the method for checking words? Should it be its own object? I'm not sure where I should put it, as it's pretty much part of the 'game'. But on the other hand, I could just leave out the 'abstraction and OOPness' and make it a method of the 'client model', which will be encapsulated from the controller anyway. Very unsure about this.
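
    To make the 'modes' idea concrete, here is a small sketch of checkWord() delegated to a mode object (a Strategy), shown in Java for brevity; the same shape works with PHP interfaces, and all names are illustrative. The Game keeps a reference to a mode object and delegates the check, so adding a mode means adding a class rather than editing the Game.

        // Each game mode encapsulates its own way of checking an answer.
        interface GameMode {
            boolean checkWord(String answer, String nativeWord, String translation);
        }

        class GuessTranslationMode implements GameMode {
            @Override
            public boolean checkWord(String answer, String nativeWord, String translation) {
                return answer.equalsIgnoreCase(translation);
            }
        }

        class GuessNativeWordMode implements GameMode {
            @Override
            public boolean checkWord(String answer, String nativeWord, String translation) {
                return answer.equalsIgnoreCase(nativeWord);
            }
        }

        // The Game no longer needs to know which mode it is running.
        class Game {
            private final GameMode mode;
            Game(GameMode mode) { this.mode = mode; }

            boolean checkWord(String answer, String nativeWord, String translation) {
                return mode.checkWord(answer, nativeWord, translation);
            }
        }

        class Demo {
            public static void main(String[] args) {
                Game game = new Game(new GuessTranslationMode());
                System.out.println(game.checkWord("gato", "cat", "gato")); // true
            }
        }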

    Read the article

  • LuaJit FFI and hiding C implementation details

    - by wirrbel
    I would like to extend an application using the LuaJIT FFI. Having seen http://luajit.org/ext_ffi_tutorial.html, this is surprisingly easy compared to the Lua C API. So far so good. However, I do not simply want to wrap C functions but to provide a higher-level API to users writing scripts for the application. In particular, I do not want users to be able to access "primitives", i.e. the ffi.* namespace. Is this possible, or will the ffi namespace be available to users' Lua scripts? On the issue of sandboxing Lua I found http://lua-users.org/wiki/SandBoxes, which does not talk about the FFI though. Furthermore, the plan I have described assumes that the abstraction layers are introduced on the Lua side of the code. Is this an advisable approach, or would you rather abstract the functionality in the statically compiled code (on the C side)?

    Read the article

  • Why is Backbone.js a bad option in the ThoughtWorks Technology Radar 2012?

    - by Cfontes
    In the latest Technology Radar 2012 they state that Backbone.js has pushed too far on its MVC abstraction and say that Knockout.js or Angular.js should be used instead. I cannot see why they think the Backbone.js model is bad; to me it's just a way to create a standard so people have some kind of roadmap for developing frontend JS without spaghetti code. Also, to me Angular and Knockout solve a different problem. I like both of them, but having to code all the MVC classes myself feels like rework. Backbone is simple, easily extendable and fast to learn, comes with a lot of goodies, and is easy to combine with other frameworks (see Knockback.js). Can anybody tell me what made it so bad in their eyes?

    Read the article

  • Why is Android VM-based? [closed]

    - by adib
    By about 2004, it was clear that ARM was the winner for mobile CPUs, beating out MIPS, SH3, and DragonBall. PocketPC (Windows Mobile) applications were natively compiled (at least most of them – except for .NET Compact and its competitors). Likewise, Apple's iOS (named iPhone OS at the time) prefers natively compiled applications. Then why did Android choose a virtual-machine-based system stack (the Dalvik VM)? Wouldn't it have been simpler to just compile applications down to ARM code using GCJ or something? Was the decision influenced by the J2ME way of doing things, or was it just because it's "cool"? Perhaps, like most things Java, coming from a culture that prefers multiple levels of indirection and abstraction, they just added another layer of abstraction "just in case"?

    Read the article

  • Strategies for porting application from Win32 API to GTK+

    - by Vitor Braga
    I have a legacy application written in C, using the raw Win32 API. The general level of abstraction is low and raw dependency on <windows.h> is common. I would like to port this application to GTK+. Are there any guidelines or best practices on how to do this? I've previously ported an MFC application to Qt, but that application was heavily abstracted – it drew its own set of widgets, for example – and the initial porting was very straightforward. I've been thinking of first using Wine to build a native Linux executable and then trying to slowly refactor it into a GTK+ app. Does someone have best practices or previous experience to share about this?

    Read the article
