Search Results

Search found 7602 results on 305 pages for 'actual costing'.


  • XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision

    - by JiminyCricket
    I've been fooling around with moving on sloped tiles in XNA and it is semi-working, but not completely satisfactory. I've also been thinking that having sets of predetermined slopes might not give me terrain that looks "organic" enough. There is also the problem of having to construct several different tile types for each slope when they're chained together (only 45-degree tiles chain perfectly, as I understand it). I had thought of somehow scanning for connected chains of sloped tiles and treating each chain as one large triangle, as I was having trouble with glitching at the edges where sloped tiles connect. But this leads back to the problem of limiting the curvature of the terrain.

    So what I'd like to do now is create a simple image or texture of the terrain of a level (or a section of the level) and generate a simple heightmap (one Y for each X). The player's Y position would then just be updated based on their X position. Is there a simple way of doing this (or a better way of solving this problem)? The main problem I can see with this method is the case where there are areas above the ground that can be walked on. Maybe there is a way to just map all walkable ground areas? I've been looking at this helpful bit of code: http://thirdpartyninjas.com/blog/2010/07/28/sloped-platform-collision/ but need a way to generate the actual points/vectors.
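
    One straightforward version of the heightmap idea is to scan a collision texture column by column and record the first solid pixel from the top. A minimal sketch, assuming an XNA Texture2D in which non-transparent pixels mean "ground" (the names are mine, not from the linked article):

        // Build a heightmap (one ground Y per X) from a level collision texture.
        // Assumption: pixels with Alpha == 0 are empty air.
        static int[] BuildHeightmap(Texture2D terrain)
        {
            Color[] pixels = new Color[terrain.Width * terrain.Height];
            terrain.GetData(pixels);

            int[] heights = new int[terrain.Width];
            for (int x = 0; x < terrain.Width; x++)
            {
                heights[x] = terrain.Height;                  // default: no ground in this column
                for (int y = 0; y < terrain.Height; y++)
                {
                    if (pixels[y * terrain.Width + x].A != 0) // first solid pixel from the top
                    {
                        heights[x] = y;
                        break;
                    }
                }
            }
            return heights;
        }

        // Walking then reduces to: player.Y = heights[(int)player.X];
        // Overhangs and floating platforms would need one array per walkable layer.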

    Read the article

  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:

    1. Should an iterator into a custom container survive the container itself being destroyed?
    2. Should it be possible to detect when an iterator becomes invalidated?
    3. Are the above conditional on "debug builds" in practice?

    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked-list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have a horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function.

    What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL lets you easily blow your legs off by accidentally leaking an iterator. Is using a shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this in "idiomatic C++11" avoids charges of subjectivity...)
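
    As a middle ground between "keep the data alive" and "let it dangle", one debugging-oriented pattern is to have iterators hold a weak_ptr to the container's state and fail loudly instead of dereferencing freed memory. A minimal sketch (all names are invented, not uriparser's API):

        #include <cstddef>
        #include <memory>
        #include <stdexcept>
        #include <string>
        #include <vector>

        // Container state lives behind a shared_ptr so its lifetime is observable.
        struct PathListState {
            std::vector<std::string> segments;
        };

        class PathIterator {
            std::weak_ptr<PathListState> state_;  // does NOT keep the data alive
            std::size_t index_;
        public:
            PathIterator(std::shared_ptr<PathListState> s, std::size_t i)
                : state_(s), index_(i) {}

            const std::string& operator*() const {
                auto s = state_.lock();           // detects a destroyed container
                if (!s) throw std::logic_error("iterator outlived its container");
                return s->segments.at(index_);    // reference only safe while the container lives
            }
            PathIterator& operator++() { ++index_; return *this; }
        };

    This keeps the cheap raw-pointer-like behaviour in release semantics while making the "iterator outlived its container" mistake detectable, which is roughly what checked/debug iterators in some STL implementations do.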

    Read the article

  • The best way to learn how to extend Orchard

    - by Bertrand Le Roy
    We do have tutorials on the Orchard site, but we can't cover all topics, and recently I've found myself more and more responding to forum questions by pointing people to an existing module that solved a problem similar to the one the question was about. I really like this way of learning by example and from the expertise of others. This is one of the reasons why we decided that modules would by default come in source-code form that we compile dynamically: it makes them easy to understand and easier to modify for your own purposes. Hackability FTW!

    But how do you crack open a module and look at what's inside? You can do it in two different ways. First, you can just install the module from the gallery, directly from your Orchard instance's admin panel. Once you've done that, you can look into your Modules directory under the web site. There is now a subfolder with the name of the new module that contains a csproj you can open in Visual Studio or add to your Orchard solution. Second, you can simply download the package (it's NuGet) and rename it to a .zip extension. NuGet being based on Zip, this will open just fine in Windows Explorer. What you want to dig into is the Content/Modules/[NameOfTheModule] folder, which is where the actual code is.

    Thanks to Jason Gaylord for the idea for this post.

    Read the article

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

    I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

    Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point. To search in only the answers from this question, use the inquestion:this option.

    Read the article

  • How to diagram custom programming languages, non textual?

    - by Adam
    I've used and created domain-specific languages before, plenty of times (e.g. using yacc/lex). Normally we'd start with a grammar written in BNF and a bunch of keywords. This is easy to do and easy to share. Recently, I've started working with diagrammatic programming languages - the closest parallel is circuit diagrams in electronics, where it's very difficult to express ideas in text but very easy to express them in wiring diagrams. This is a novel problem for me: how do you efficiently express these mini-languages and share concepts in them with colleagues? (i.e. how do you whiteboard-program within them? Actual programming is easy - you have physical components to hand.)

    Are there tools for this? Or good/best practices (e.g. the equivalent of "always use BNF as the starting point for your new DSL, and use tools like yacc to generate the parser, compiler, etc.")? My googlefu is proving weak - all I get is false positives for wiring diagrams and UML editors (since these are custom languages, UML doesn't seem to help).

    Read the article

  • Should the syntax for disabling code differ from that of normal comments?

    - by deltreme
    For several reasons during development I sometimes comment out code. As I am chaotic and sometimes in a hurry, some of these make it to source control. I also use comments to clarify blocks of code. For instance:

        MyClass MyFunction()
        {
            (...)
            // return null; // TODO: dummy for now
            return obj;
        }

    Even though it "works" and a lot of people do it this way, it annoys me that you cannot automatically distinguish commented-out code from "real" comments that clarify code: it adds noise when trying to read code, and you cannot search for commented-out code in, for instance, an on-commit hook in source control.

    Some languages support multiple single-line comment styles - for instance, in PHP you can use either // or # for a single-line comment - and developers can agree on using one of these for commented-out code:

        # return null; // TODO: dummy for now
        return obj;

    Other languages - like C#, which I am using today - have one style for single-line comments (right? I wish I were wrong). I have also seen examples of "commenting out" code using compiler directives, which is great for large blocks of code, but a bit overkill for single lines as two new lines are required for the directive:

        #if compile_commented_out
            return null; // TODO: dummy for now
        #endif
        return obj;

    So, as commenting out code happens in every(?) language, shouldn't "disabled code" get its own syntax in language specifications? Are the pros (separation of comments and disabled code, editors and source control acting on them) good enough to outweigh the cons ("shouldn't be commenting out code anyway", not a functional part of a language, potential IDE lag - thanks Thomas)?

    Edit: I realise the example I used is silly; the dummy code could easily be removed as it is replaced by the actual code.

    Read the article

  • Moving objects colliding when using unaligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards and the other slightly downwards. In my unaligned collision avoidance steering algorithm, I find the points on the two objects' forward lines where those lines are closest to each other. If these closest points are within the collision avoidance distance, and if the distance between them is smaller than the sum of the radii of the two objects' bounding spheres, then the objects steer away in the appropriate direction.

    The problem is that, in this case, the closest points on the lines are calculated to be really far away from the actual collision point, because the two forward lines diverge from each other as the objects pass. Because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?
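
    One common fix is to work with the objects' positions over time rather than their forward lines: compute the time of closest approach of the two moving centres, then compare the separation at that time against the combined radii. A minimal sketch, assuming XNA-style Vector2 (the names are mine):

        // Time at which two moving points are closest, minimising |dp + t*dv|^2.
        static float TimeOfClosestApproach(Vector2 p1, Vector2 v1, Vector2 p2, Vector2 v2)
        {
            Vector2 dp = p2 - p1;             // relative position
            Vector2 dv = v2 - v1;             // relative velocity
            float dv2 = Vector2.Dot(dv, dv);
            if (dv2 < 1e-6f) return 0f;       // same velocity: distance never changes
            return MathHelper.Max(0f, -Vector2.Dot(dp, dv) / dv2);
        }

        // Predicted collision test, taking object size into account:
        //   float t = TimeOfClosestApproach(p1, v1, p2, v2);
        //   Vector2 sep = (p2 + v2 * t) - (p1 + v1 * t);
        //   bool willCollide = sep.Length() < r1 + r2;

    Steering away from the predicted positions at time t, rather than from the line-line closest points, avoids the diverging-lines problem described above.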

    Read the article

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works: anyone can post code, and through community moderation it is cleaned up, and eventually valid code ends up in the final library. For complex libraries an elaborate system would probably need to be created, but for a simple library I believe this is already possible even within the Stack Exchange platform.

    Take a library of extension methods for .NET, for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all. You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I believe that with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it.

    Has anything like this been attempted before? Are there platforms better suited for this?

    Read the article

  • What should happen at the start of a software project?

    - by Willem
    A quick introduction: my college semesters include an 8-week project working for an actual company with a software need, in order to get some much-needed practical experience. I have just started such a project with 5 other students. We're required to spend roughly 40 hours a week per student on this project. We're working with SCRUM as the software development method; this was assigned by our teachers.

    The question: day one of the project just ended, which has raised some questions for me as to how to start a project in the 'real world'. Our first day included working on a project planning document (not sure what the English term is), creating an appointment with the company for an introduction and the opportunity to start specifying the requirements, and setting up some standards for behaviour within the group. However, these items didn't take that long to finish. We've made some concrete plans for tomorrow, and the day after we'll meet the company. This still leaves several hours of 'work time' unspent.

    Is it usual not being able to fill every hour of the day with work at the start of a project? Are we simply too inexperienced to see what work needs to be done at this stage, or are we, perhaps, going through the above list too fast? How does this work in the 'real world'? Do you spend your time wondering 'what should I do now', or do you have a clear view of what you're supposed to do at any given moment?

    Read the article

  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MsTest for unit testing. When using ReSharper for running unit tests, it simply works better with NUnit, and on large-scale projects NUnit tends to run faster. We would have just used NUnit for everything, but MsTest gave us a few bonuses out of the box that were hard to pass up: namely code coverage (without having to shell out thousands of extra dollars for the privilege) and tests integrated into the build process. I'm one of those guys who wants the build to fail if the unit tests don't pass. If they don't pass, there's no point in sending that build on to QA.

    Making the build work with MsTest is easiest if you just create a unit test project in your solution. This adds the right references and project-type GUIDs in the project file so that everything automagically just works. Then (using NuGet, of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MsTest and replace them with the following:

        #if NUNIT
        using NUnit.Framework;
        #else
        using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
        using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
        using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        #endif

    Basically I'm taking the NUnit naming conventions and redirecting them to MsTest. You can go the other way, of course; I only chose this direction because I had already written the tests as NUnit tests. NUnit and MsTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into any on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you're back in MsTest land.

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service, whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);
                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);
                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }

    This is pseudocode for the service I want to design, where:

    - fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.)
    - data: the actual records returned from the database. This data can't be used directly, because I might need to make further calculations on it before inserting it into the Excel file.
    - ReportFormat: value object to hold the report format; has methods like getHeaders(), getColumnWidth(), etc.
    - ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
    - FormattedData: value object that represents the formatted data to be inserted.
    - ExcelFileService: a wrapper on top of the 3rd-party library that generates the Excel file.

    Now, how do you determine whether a service is an infrastructure or a domain service? I have three services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

    Read the article

  • Impact on SEO of adding categories/tags in front of the HTML title [closed]

    - by Mad Scientist
    Possible Duplicate: Does the order of keywords matter in a page title?

    All Stack Exchange sites add the most-used tag of a question in front of the HTML title for SEO purposes. On Stack Overflow, for example, this is usually the programming language, so you end up with a title like "python - How do I do X?". This obviously has an enormous benefit for SEO, as the programming language is an extremely important keyword that is very often omitted from the title.

    Now, my question is about the cases where the tag isn't an important keyword missing from the title, but just a category. On Biology.SE, for example, one would have questions like "biochemistry - How does protein X interact with Y?", or on Skeptics "medical science - Do vaccines cause autism?". Those tags are usually not part of the search terms; they serve to categorize the content, but users don't use them in their searches.

    How harmful, in terms of SEO, is adding tags that are not used in searches? Is there any hard data on the impact this practice might have? The negative aspects I can imagine, but have no data to show are actually a problem, are:

    - I've heard that search engines dislike keyword stuffing, and this might trigger some defense mechanisms against it.
    - It's a practice associated with less reputable sites; a keyword in front that doesn't fit the actual title well might look suspicious to some users.
    - It wastes precious space in the title shown in search results.

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant, as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems we sometimes get. As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one, it will leave the other product broken.

    Figure: Hopefully this will be more uneventful

    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.

    Figure: There are a lot of components to update

    With this update also comes an update to .NET, as well as many other components.

    Figure: I downloaded the full 1.5GB, but you could do a web install

    How long the download takes depends on how good your internet connection is, but as I am now in the US I decided not to trust the connection speed. It took around 30-40 minutes to download the full thing, which is a little slow.

    Figure: I did not need to download, but that would increase the install time

    So on my main computer, again, this was fast, but on my netbook it took a little while.

    Figure: The actual install took around 30-40 minutes (2 hours on the netbook)

    I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010, I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience.

    Figure: As I suspected, no problems with the install

    Figure: Checking in Visual Studio shows that all the servicing points were successful

    This was an easy experience, even if the SP was over 1.5GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not finding holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made a game and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops who are unable to replace them - laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience to consider giving the chance to play casual games, since they cannot play any blockbusters.

    A very noticeable example of this, as I've seen it happen, is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here), and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which locks out a lot of people.

    So, the actual question is: what is a good tradeoff for this issue? Would it be better to just sacrifice texture resolution for everyone but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing a little performance? What else? Any ideas about this topic are welcome for discussion.
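
    One option besides a single global tradeoff is to pick the asset quality at runtime from the driver's reported limits. A minimal sketch, assuming OpenTK's OpenGL bindings and a GL context already created (the tier names and thresholds are invented for the example):

        using OpenTK.Graphics.OpenGL;

        // Ask the driver for the largest texture dimension it supports,
        // then choose an asset tier to match.
        int maxTextureSize = GL.GetInteger(GetPName.MaxTextureSize);

        string assetTier;
        if (maxTextureSize >= 4096)      assetTier = "high";    // full-resolution art
        else if (maxTextureSize >= 2048) assetTier = "medium";  // art downscaled once
        else                             assetTier = "low";     // pre-sliced 1024x1024 atlases

    This keeps one codebase while letting old integrated chipsets load smaller, pre-sliced textures instead of failing outright.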

    Read the article

  • Does F# kill C++?

    - by MarkPearl
    Okay, so the title may be a little misleading... but I am currently travelling, and so have had very little time and access to resources to do much fsharping - this has meant that I am right now missing my favourite new language. I was interested to see this post on Stack Overflow this evening concerning the performance of the F# language. The person posing the question asked 8 key points about the F# language, namely:

    - How well does it do floating-point?
    - Does it allow vector instructions?
    - How friendly is it towards optimizing compilers?
    - How big a memory footprint does it have?
    - Does it allow fine-grained control over memory locality?
    - Does it have capacity for distributed-memory processors, for example Cray?
    - What features does it have that may be of interest to computational science, where heavy number processing is involved?
    - Are there actual scientific computing implementations that use it?

    Now, I don't have much time to look into a decent response, and to be honest I don't know half of the answers to what he is asking, but it was interesting to see what was put up as an answer so far, and it would be interesting to get other people's feedback on these questions if they know of anything other than what has been covered in the answer section already.

    Read the article

  • Logistics of code reuse (OOP)

    - by Ominus
    One of the driving points behind OOP is code reuse. I am curious about the actual logistics of this, and how others handle it, whether in a team or solo. For example, let's say you have 5 projects you have worked on, and between them you have a ton of classes that you think would be useful in other projects. How do you store them? Are they just in the normal project repository, or do you break out the relevant classes and keep them (as copies) in a separate repository that only houses code intended for reuse?

    How do you go about finding, or even knowing, that there is a good piece of code out there that you should reuse? It's easier if you're solo, because you remember that you have coded something similar - but even then it becomes kind of a stretch. If you are storing these pieces of code somewhere, do you also have them indexed and searchable by tag or something? I fear that it just boils down to tribal knowledge: you simply know that for situation A you need solution B, and that there's a good piece of code that can already help there.

    A bit verbose, but I hope you get what I am aiming at. If you think of a better way to make the question clearer, please have at it :) TIA!

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows users to search and read content in a particular domain. We want to add a capability that suggests content to the user based on usage patterns (analyzing the data by frequency and relevance). Currently, every time a user searches for or reads something, we store that information in a backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, and give an example? And what is this concept called - data analysis, data mining, business intelligence, or something else?

    Update: Sorry for being too broad; here is an example SQL database (just to give an idea - the actual db is a little different, with normalization and such):

        Table: UserArticles
        Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory

        Table: CategoryArticles
        Fields: Category | ArticleTitle | Author | etc.

    One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table).

    Task: use the information available in the UserArticles table and rank categories in the order in which they would be presented to the user automatically in another part of the application. The factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools. Either way, the task is as mentioned above. I am not too sure which route to take, hence the question. Thoughts??
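
    For the frequency-plus-recency ranking described above, a plain SQL query can get surprisingly far before any data-mining tools are needed. A minimal sketch against the UserArticles table (SQL Server flavoured; the decay formula and the 1-day offset are arbitrary choices for the example):

        -- Rank categories for one user: each visit counts, but older
        -- visits count progressively less.
        SELECT ArticleCategory,
               SUM(1.0 / (1 + DATEDIFF(day, DateVisited, GETDATE()))) AS Score
        FROM UserArticles
        WHERE UserName = @UserName
        GROUP BY ArticleCategory
        ORDER BY Score DESC;

    If the suggestions later need to go beyond frequency and recency (similar users, related categories), that is where recommender-system techniques such as collaborative filtering - usually filed under data mining - come in.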

    Read the article

  • Is there an open source clone of a game in the Total War Series?

    - by sinekonata
    I loved Shogun: Total War's gameplay, and then later on spent weeks re-enacting historical wars and battles with Europa Barbarorum. It's a mod for Rome: Total War that focuses on historical accuracy in the peoples, units, sounds, visuals - everything from macro mechanics to the actual battles (e.g. a lot more missiles). Since that time I have kind of turned my back on Windows, cause it sucks, and use Linux, cause Mac sucks even worse. So, as I miss that game (Eur. Barb.) and consider it the most realistic RTS to date, I'd like to know if there are any free and open-source alternatives to it, because ever since I've been under Linux I've become addicted to FOSS, so I also turned my back on paying (even kickstarters) for closed-source, pay-to-play games.

    I have found a clone or alternative for every one of the best games - Minecraft, CSS, Natural Selection, TA/SupCom, etc. It's kind of the last one I need. The Spring engine is amazing, for example; is there another open-source project of the sort currently in development? Or would Spring itself be enough (it certainly looks capable) to make it? Thanks in advance guys...

    Read the article

  • Learning programming on the job?

    - by Hossein
    Introduction: I have read and heard advice about learning programming by accepting programming projects. I need real assistance to understand this, because of the following problem.

    Problem: although it would seem that one gains much more technical knowledge by doing real-world projects, if one doesn't know much about a technology, it adds much more risk to the actual delivery of the final product! Even the smallest of real-world projects could be too much for a newbie. There is a contradiction here: you need to know the job to do it, and it's recommended to do the job in order to learn it!

    Question: any personal experiences in this case would be very welcome, describing:

    - How new was the subject to you? Did you have no clue at all, or did you have experience with similar technologies?
    - Was it a solo project or were you in a team? If a team, did others help you learn it?
    - Did it work as expected? Did you deliver on time?
    - Do you recommend this approach to others as well?

    Read the article

  • Dynamic libraries are not allowed on iOS but what about this?

    - by tapirath
    I'm currently using LuaJIT and its FFI interface to call C functions from Lua scripts. What FFI does is look at dynamic libraries' exported symbols and let the developer use them directly from Lua - kind of like Python's ctypes. Obviously, using dynamic libraries is not permitted on iOS for security reasons. So in order to come up with a solution I found the following snippet:

        /*
         (c) 2012 +++ Filip Stoklas, aka FipS, http://www.4FipS.com +++
         THIS CODE IS FREE - LICENSED UNDER THE MIT LICENSE
         ARTICLE URL: http://forums.4fips.com/viewtopic.php?f=3&t=589
        */

        extern "C" {
        #include <lua.h>
        #include <lualib.h>
        #include <lauxlib.h>
        } // extern "C"

        #include <cassert>
        #include <cstdio>

        // Please note that despite the fact that we build this code as a
        // regular executable (exe), we still use __declspec(dllexport) to
        // export symbols. Without doing that FFI wouldn't be able to locate them!
        extern "C" __declspec(dllexport) void __cdecl hello_from_lua(const char *msg)
        {
            printf("A message from LUA: %s\n", msg);
        }

        const char *lua_code =
            "local ffi = require('ffi')            \n"
            "ffi.cdef[[                            \n"
            "void hello_from_lua(const char *);    \n" // matches the C prototype
            "]]                                    \n"
            "ffi.C.hello_from_lua('Hello from LUA!')   \n" // do actual C call
            ;

        int main()
        {
            lua_State *lua = luaL_newstate();
            assert(lua);
            luaL_openlibs(lua);
            const int status = luaL_dostring(lua, lua_code);
            if (status)
                printf("Couldn't execute LUA code: %s\n", lua_tostring(lua, -1));
            lua_close(lua);
            return 0;
        }

        // output:
        // A message from LUA: Hello from LUA!

    Basically, instead of using a dynamic library, the symbols are exported directly inside the executable file. The question is: is this permitted by Apple? Thanks.
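
    For what it's worth, __declspec(dllexport) is MSVC-specific, so an iOS build would need the clang spelling of the same idea. A hedged sketch (I have not verified how Apple's review process treats this):

        // clang / iOS flavour of the same export: keep the symbol visible
        // to symbol lookup and stop the linker from dead-stripping it.
        extern "C" __attribute__((visibility("default"), used))
        void hello_from_lua(const char *msg)
        {
            printf("A message from LUA: %s\n", msg);
        }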

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. This thinking seems to be invoking possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if the implementation uses a double.

    The actual problem in the text states: during the inspection of the code for the emergency braking system of a new high-speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    - The code contains three recursive functions (well, that one is obvious).
    - The computation of acceleration uses floating-point arithmetic. All other computations use integer arithmetic.
    - The code contains one linked list that uses dynamic memory allocation (the second obvious problem).
    - All inputs are checked to determine that they are within expected bounds before they are used.
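
    The textbook objection is representability rather than magnitude: many decimal values have no exact binary floating-point form, so repeated arithmetic drifts even at modest scales. A minimal illustration (mine, not from the cited text):

        #include <cstdio>

        int main()
        {
            // 0.1 has no exact binary representation, so ten additions
            // do not reproduce 1.0 exactly.
            double sum = 0.0;
            for (int i = 0; i < 10; ++i) sum += 0.1;
            std::printf("%d\n", sum == 1.0);   // 0 (false)
            std::printf("%.17g\n", sum);       // 0.99999999999999989

            // The usual remedy in safety-critical code: integers in fixed
            // units (e.g. mm/s^2 for acceleration) are exact.
            long accel = 0;
            for (int i = 0; i < 10; ++i) accel += 100;  // steps of 100 mm/s^2
            std::printf("%d\n", accel == 1000);         // 1 (true)
        }

    Whether that drift can ever matter at the scales of a braking computation is exactly the question being asked; the conservative review position is that integer arithmetic in fixed units removes the doubt entirely.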

    Read the article

  • Why isn't there a typeclass for functions?

    - by Steve314
    I already tried this on Reddit, but there's no sign of a response - maybe it's the wrong place, maybe I'm too impatient. Anyway... In a learning problem I've been messing around with, I realised I needed a typeclass for functions, with operations for applying, composing, etc. Reasons:

    - It can be convenient to treat a representation of a function as if it were the function itself, so that applying the function implicitly uses an interpreter, and composing functions derives a new description.
    - Once you have a typeclass for functions, you can have derived typeclasses for special kinds of functions - in my case, I want invertible functions.

    For example, functions that apply integer offsets could be represented by an ADT containing an integer. Applying those functions just means adding the integer. Composition is implemented by adding the wrapped integers. The inverse function has the integer negated. The identity function wraps zero. The constant function cannot be provided because there's no suitable representation for it. Of course it doesn't need to spell things as if the values were genuine Haskell functions, but once I had the idea, I thought a library like that must already exist, maybe even using the standard spellings. But I can't find such a typeclass in the Haskell library. I found the Data.Function module, but there's no typeclass - just some common functions that are also available from the Prelude.

    So - why isn't there a typeclass for functions? Is it "just because there isn't" or "because it's not as useful as you think"? Or maybe there's a fundamental problem with the idea? The biggest possible problem I've thought of so far is that function application on actual functions would probably have to be special-cased by the compiler to avoid a looping problem - in order to apply this function I need to apply the function application function, and to do that I need to call the function application function, and to do that...
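
    For what it's worth, Control.Category in the base library comes close to this: it abstracts identity and composition over any binary type constructor. A minimal sketch of the integer-offset example on top of it (the Invertible class and all names here are my own, not standard):

        import Prelude hiding (id, (.))
        import Control.Category

        -- A "function" represented as data: adds a fixed integer offset.
        -- The two type parameters exist only to satisfy Category's kind.
        newtype Offset a b = Offset Int deriving Show

        instance Category Offset where
          id                  = Offset 0         -- identity: add nothing
          Offset g . Offset f = Offset (f + g)   -- compose: add both offsets

        -- Interpretation: actually applying the represented function.
        apply :: Offset a b -> Int -> Int
        apply (Offset n) x = x + n

        -- The derived typeclass the question asks about: invertible functions.
        class Invertible f where
          invert :: f a b -> f b a

        instance Invertible Offset where
          invert (Offset n) = Offset (negate n)

        main :: IO ()
        main = print (apply (invert (Offset 3) . Offset 5) 10)  -- 10 + 5 - 3 = 12

    Application itself is not part of Category (Arrow and friends build on it instead), which is arguably the gap being described here.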

    Read the article

  • Augmenting functionality of subclasses without code duplication in C++

    - by Rob W
    I have to add common functionality to some classes that share the same superclass, preferably without bloating the superclass. The simplified inheritance chains look like this:

        Element -> HTMLElement -> HTMLAnchorElement
        Element -> SVGElement  -> SVGAElement

    The default doSomething() method on Element is a no-op by default, but some subclasses need an actual implementation, which requires some extra overridden methods and instance members. I cannot put a full implementation of doSomething() in Element because 1) it is only relevant for some of the subclasses, 2) its implementation has a performance impact, and 3) it depends on a method that could be overridden by a class in the inheritance chain between the superclass and a subclass, e.g. SVGElement in my example. Especially because of the third point, I wanted to solve the problem using a template class, as follows (it is a kind of decorator for classes):

        struct Element {
            virtual void doSomething() {}
        };

        // T should be a subclass of Element
        template<class T>
        struct AugmentedElement : public T {
            // doSomething is expensive and uses T
            virtual void doSomething() override {}
            // Used by doSomething
            virtual bool shouldDoSomething() = 0;
        };

        class SVGElement : public Element { /* ... */ };

        class SVGAElement : public AugmentedElement<SVGElement> {
            // some non-trivial check
            bool shouldDoSomething() { /* ... */ return true; }
        };

        // Similarly for HTMLAElement and others

    I looked around (in the existing (huge) codebase and on the internet), but didn't find any similar code snippets, let alone an evaluation of the effectiveness and pitfalls of this approach. Is my design the right way to go, or is there a better way to add common functionality to some subclasses of a given superclass?

    Read the article

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images of largely varying sizes. The layout of the page is such that the images are all displayed as square tiles, so just resizing causes distorted images. What I've been doing previously is resizing and cropping images when users upload them, displaying the new image as the thumbnail, and loading the full image when the user clicks on it. However, I just realized this is an issue for SEO, as Google will crawl the thumbnails and stick the thumbnails in Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I can do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance of that would be pretty bad. Any suggestions? Thanks!

    PS. I saw this (Make google index the actual image not the thumbnail), but in my case I have users continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy...
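
    On the performance worry: the overflow:hidden crop is done by layout, not by image processing, so rendering cost is negligible; the real cost is shipping the full-size file for every tile. A minimal sketch of a square tile that keeps the full image as the crawled URL (class names and sizes invented for the example):

        <style>
          .tile     { width: 200px; height: 200px; overflow: hidden; position: relative; }
          .tile img { position: absolute; top: 50%; left: 50%;
                      transform: translate(-50%, -50%); min-width: 100%; min-height: 100%; }
        </style>

        <div class="tile">
          <!-- The src points at the full image, so that is what gets indexed. -->
          <img src="/uploads/full-image.jpg" alt="description of the image">
        </div>

    The tradeoff is bandwidth rather than rendering speed, so on image-heavy pages some combination of lazy loading and serving moderately sized "full" images may still be needed.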

    Read the article

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm facing a dilemma which, it seems, should soon become an important issue for a lot of developers. If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application - and classic applications can't be sold on the Store. If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would involve porting it to Reach XNA, which would likely involve more effort than porting to OS X or Android - both of which support C++. Of all the WinRT API that is supported on C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, although that would make Metro DirectX the first new version of DirectX to stop supporting the immediately preceding OS. If I build a game in pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most manual plumbing. In fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D.

    I'm getting really tired of all these restrictions - artificial and otherwise - and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game to market goes, excluding the actual market share of any platforms I might consider, what other factors would help me make a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to decide was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie - all you really would have lost is the Xbox and the fairly small Windows Phone market.

    Read the article
