Search Results

Search found 43110 results on 1725 pages for 'noob question'.


  • Why don't C++ Game Developers use the boost library?

    - by James
    So if you spend any time viewing / answering questions over on Stack Overflow under the C++ tag, you will quickly notice that just about everybody uses the boost library; some would even say that if you aren't using it, you're not writing "real" C++ (I disagree, but that's not the point). But then there is the game industry, which is well known for using C++ and not using boost. I can't help but wonder why that is. I don't care to use boost because I write games (now) as a hobby, and part of that hobby is implementing what I need when I am able to and using off-the-shelf libraries when I can't. But that is just me. Why don't game developers, in general, use the boost library? Is it performance or memory concerns? Style? Something else? I was about to ask this on Stack Overflow, but I figured the question is better asked here. EDIT: I realize I can't speak for all game programmers and I haven't seen all game projects, so I can't say game developers never use boost; this is simply my experience. Allow me to edit my question to also ask: if you do use boost, why did you choose to use it?

    Read the article

  • Earmarks of a Professional PHP Programmer

    - by Scotty C.
    I'm a 19-year-old student who really REALLY enjoys programming, and I'm hoping to glean from your years of experience here. At present, I'm studying PHP every chance I get, and have been for about 3 years, although I've never taken any formal classes. I'd love to someday be a full-time programmer and make a good career of it. My question to you is this: What do you consider to be the earmarks or traits of a professional programmer? Mainly in the field of PHP, but other, more generalized qualifications are also more than welcome, as I think PHP is more of a hobbyist language and may not be the language of choice in the eyes of potential employers. Please correct me if I'm wrong. Above all, I don't want to waste time on something that isn't worthwhile. I'm currently feeling pretty confident in my knowledge of PHP as a language, and I know that I could build just about anything I need and have it "work", but I feel sorely lacking in design concepts and code structure. I can even write object-oriented code, but in my personal opinion, that isn't worth a hill of beans if it isn't organized well. For this reason, I bought Matt Zandstra's book "PHP Objects, Patterns, and Practice" and have been reading that a little every day. Anyway, I'm starting to digress a little here, so back to the original question. What advice would you give to an aspiring programmer who wants to make an impact in this field? Also, on a side note, I've been working on a project with a friend of mine that would give a fairly good idea of where I'm at coding-wise. I'm gonna give a link; I don't want anyone to feel as though I'm pushing or spamming here, so don't click it if you don't want to. But if you are interested in giving some feedback there as well, you can see the code on GitHub. I'm known as The Craw there. https://github.com/PureChat/PureChat--Beta-/tree/

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses Core Data as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to Core Data as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of: 1. Always treat the client-side data as higher priority. 2. Same for the server side. 3. Try to resolve conflicts by marking each field with an edit timestamp and taking the latest edit. Though I'm fairly certain the third option will open the door to some devastating data corruption. I'm aware that the CAP theorem bears on this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to that question says it probably can't be done. A sketch of option 3 follows below.
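
    For what it's worth, here is a minimal sketch of option 3 (per-field last-write-wins) in C++. The Record and Field types and the timestamps are hypothetical, not from any real schema, and note that this silently inherits exactly the clock-skew and lost-update problems the question worries about:

      // Hypothetical sketch of option 3: per-field last-write-wins merging.
      // Field/Record and the timestamps are illustrative, not a real schema.
      #include <cstdint>
      #include <string>

      struct Field {
          std::string  value;
          std::int64_t editedAt; // Unix time of the last edit to this field
      };

      struct Record {
          Field title;
          Field notes;
      };

      // Keep whichever side was edited most recently, field by field.
      Field mergeField(const Field& client, const Field& server) {
          return (client.editedAt >= server.editedAt) ? client : server;
      }

      Record merge(const Record& client, const Record& server) {
          return Record{ mergeField(client.title, server.title),
                         mergeField(client.notes, server.notes) };
      }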

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure, e.g., an octree, a BSP tree, a quadtree, a kd-tree, etc., what is the best way to pass this to the render system? The obvious problem is that if simply given the root node of the structure, the render system would require intrinsic knowledge of the structure in order to traverse it. My solution to this is to clip all objects outside the frustum in the scene manager and then create a list of the objects which are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as a means of knowing which objects should be rendered.) The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them. I have been thinking a lot about this and started searching various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements that I could make? Are there any caveats I should know about? Side question: Am I right in assuming that octrees, BSP trees, etc. are all forms of BVH? A sketch of the proposed approach follows below.
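
    As a rough illustration of the approach described, not a definitive design: the scene manager (whatever its spatial structure) culls and emits a flat list sorted by a packed state key, so the renderer never needs to traverse octrees or BSP trees itself. All names here are hypothetical:

      // Illustrative sketch only: cull against the frustum, then hand the
      // renderer a flat list sorted by a packed render-state key.
      #include <algorithm>
      #include <cstdint>
      #include <vector>

      struct Frustum { /* clip planes, etc. */ };

      struct Renderable {
          std::uint64_t stateKey; // packs shader/texture/material ids
          // ... transform, mesh handle, etc.
      };

      bool isVisible(const Renderable&, const Frustum&) { return true; } // stub

      std::vector<const Renderable*> buildRenderList(
              const std::vector<Renderable>& scene, const Frustum& frustum) {
          std::vector<const Renderable*> list;
          for (const Renderable& r : scene)
              if (isVisible(r, frustum))
                  list.push_back(&r);
          // Group objects sharing GL state so the renderer changes state rarely.
          std::sort(list.begin(), list.end(),
                    [](const Renderable* a, const Renderable* b) {
                        return a->stateKey < b->stateKey;
                    });
          return list;
      }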

    Read the article

  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of Tree Surgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4. Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question: Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree given that we have things like scaffolding systems, NuGet, and T4 templates? Or should we just give the project its rightful and respectful send-off, as it's had a good life and has outlived its usefulness? For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience, I can tell you, there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information that is embedded in the Tree Surgeon site floating around their conversation. Little things like: keep all of the software needed to run the build, together with the build, in the version control system; have your developers and the build system using the same build; have a one-touch build; separate your code from your interface; put unit tests in first, not last. I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up, by starting you off with a project that you can drop into your Continuous Integration system, right out of the box. Well, it used to be right out of the box. Today, you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from. If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.

    Read the article

  • You are or will be a laid-off programmer - what do you do a year ago, right now, tomorrow, and next week?

    - by Adam Davis
    Many programmers, software engineers, and other technology professionals are out of work, facing layoffs, or are unprepared for layoffs though they feel secure right now. What should every programmer do right now (even if secure in their current job) to prepare for layoffs down the road? If your boss came to your cubicle while you read this and laid you off: What would you do immediately after? What would you do tomorrow? What would you do next week? It's obvious that one should always have an up-to-date resume, always get recommendations from people when they see you at your best (not when you're looking for a new job), etc. What are the things, step by step, that every programmer should do (or should consider doing) long before they are laid off, when they're laid off, and shortly after being laid off? This is a question with many possible facets. While I want to encourage discussion to center around programming-career-based answers, please reconsider before downvoting someone because they're thinking in terms of how they're going to avoid going into debt. Bonus catch-22-type question: You can study a new language or technology while out of work, but most places want you to have more than 1-2 months of experience in a working environment, not just from a learning exercise. Is it worthwhile to place a priority on new (ideally in-demand) skills, or should you instead hone existing skills?

    Read the article

  • Relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine, and faced these problems: I have a list of models that I need to draw, and an object (renderer) that implements the IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (OpenGL/DirectX). 1st question: the model should know nothing about the renderer implementation, but in that case, where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition: class Model { public: Model(); Vertex* GetVertices() const; private: Vertex* m_vertices; }; what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object? 2nd question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, with bump mapping, or maybe just in one color)? I thought it could be done with flags, like this: model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING); and in the Renderer::DrawModel method check for each flag. But it looks like this will become unwieldy as the number of options grows...
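
    On the 1st question, one common arrangement (shown here only as a hypothetical sketch, with made-up names) is for the renderer to own a side cache of API-specific resources keyed by model, so the Model class stays API-agnostic:

      // Hypothetical sketch: the renderer owns a cache of API-specific
      // resources keyed by model, so Model stays API-agnostic.
      #include <unordered_map>

      class Model;            // as above; knows nothing about DX11/OpenGL
      struct VertexBufferD3D; // opaque GPU-side resource, illustrative only

      class D3D11Renderer /* : public IRenderer */ {
      public:
          void DrawObject(const Model* model) {
              VertexBufferD3D*& buf = m_cache[model];
              if (!buf)
                  buf = Upload(model); // create the GPU buffer on first use
              Draw(buf);
          }

      private:
          VertexBufferD3D* Upload(const Model*); // copies vertices to the GPU
          void Draw(VertexBufferD3D*);
          std::unordered_map<const Model*, VertexBufferD3D*> m_cache;
      };

    The obvious caveat is invalidation: cache entries must be rebuilt when a model's vertices change and dropped when the model is destroyed.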

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And, I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through: dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class: class PaycheckCalculator { // ... protected decimal GetOvertimeFactor() { return 2.0M; } } Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question, rather than subclassing and overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
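
    For contrast, the extension route being skipped would look roughly like this; a C++ analog of the C# snippet above, with illustrative names only:

      // A C++ analog of taking the extension route instead of editing the
      // method; names mirror the C# snippet above, purely for illustration.
      class PaycheckCalculator {
      public:
          virtual ~PaycheckCalculator() = default;
          // ...
      protected:
          virtual double GetOvertimeFactor() const { return 2.0; }
      };

      // The OCP-respecting change: a new class, the existing one stays closed.
      class ReducedOvertimePaycheckCalculator : public PaycheckCalculator {
      protected:
          double GetOvertimeFactor() const override { return 1.5; }
      };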

    Read the article

  • Does an inexperienced programmer need an IDE?

    - by Torben Gundtofte-Bruun
    Reading this other question makes me wonder if I (as an absolute beginner PHP programmer) should stick with WAMP and Notepad++ or switch to some IDE like Eclipse. It's understandable that skilled developers will benefit from a big shiny IDE. But why should an absolute beginner use an IDE? Do the benefits outweigh the extra challenge of learning the IDE on top of learning to develop? Update for clarification: My goal is to get some basic programming experience. By choosing PHP and WAMP (and FogBugz and Kiln) I hope to avoid having to navigate the tricky/messy OS specifics, compiling, etc., and just focus on basic functionality like an online user registration form. I've got lots of theoretical understanding from university a decade ago but no practical experience. I want to remedy that with a hobby project that would be similar to a real-world sellable web app. There are so many questions to ask. So many pitfalls I probably have to blunder into. This question is just one piece (my first!) of that puzzle.

    Read the article

  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM: Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager? Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator – they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode, which live-migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of the Oracle VM Documentation, so I won’t spend time detailing the steps. However, configuring the Server Update Manager is exceedingly simple: navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager, and ensure the following values are added to the text boxes: YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64 YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris

    Read the article

  • Failed Project: When to call it?

    - by Dan Ray
    A few months ago my company found itself with its hands around a white-hot emergency of a project, and my entire team of six pulled basically a five-week "crunch week". In the 48 hours before go-live, I worked 41 of them, including two back-to-back all-nighters. Deep in the middle of that, I posted what has been my most successful question to date. During all that time there was never any talk of "failure". It was always "get it done, regardless of the pain." Now that the thing is over and we as an organization have had some time to sit back and take stock of what we learned, one question has occurred to me. I can't say I've ever taken part in a project that I'd say had "failed". Plenty that were late or over budget, some disastrously so, but I've always ended up delivering SOMETHING. Yet I hear about "failed IT projects" all the time. I'm wondering about people's experience with that. What were the parameters that defined "failure"? What was the context? In our case, we are a software shop with external clients. Does a project that's internal to a large corporation have more space to "fail"? When do you make that call? What happens when you do? I'm not at all convinced that doing what we did was a smart business move. It wasn't my call (I'm just a code monkey) but I'm wondering if it might have been better to cut our losses, say we're not delivering, and move on. I don't just say that due to the sting of the long hours--the company royally lost its shirt on the project, plus the intangible costs to the company in terms of employee morale and loyalty were large. Factor that against the PR hit of failing to deliver a high-profile project like this one... and I don't know what the right answer is.

    Read the article

  • How to be hired as a remote programmer abroad and not to be an entrepreneur?

    - by user592704
    The question is quite interesting to me because I've watched job ads, and mostly they all: A) Require being located in the same country as the vacancy. B) Come from employers who don't want to hire foreign programmers unless they have an H-1B visa or something similar. C) As a rule, offer 6-month contract positions. I could keep extending the list describing job-ad specifications, but as a rule, most positions require non-employee cooperation status. I don't have a company for this kind of "making projects to a client's order" work, so it is quite complicated. So I was trying, just for the statistics, to find out whether there is a way to be hired abroad as a remote programmer, just as if I were being hired in my native city. The point is not about being hired where I can be hired "because I am located in this or that place"; it is about the possibility of working without relocating, which today's technologies should actually make feasible, especially for IT specialists in many different fields. So the question: is it possible to work for any country in remote mode, as if I were working in my own place? What do I need for that? Can you recommend some useful websites in this direction? If you can share your own experience, I'd love to hear it. Any useful advice is much appreciated.

    Read the article

  • As a developer, how do I learn sales? [closed]

    - by Dan Abramov
    I quit the company I was working for to pursue a startup opportunity, and I believe in our product. I'm sure it's going to be great if we can attract some customers first to keep going. (I don't want funding.) Our product is targeted at private schools and courses, and helps organize the mess other LMSs introduce. The problem is, our team is basically just me, and I have very little idea about sales and marketing. I can do reasonably good copywriting, but I'm sure I can do better, and being nervous or too techy in a real-world conversation with a client doesn't help. I want to get better, in fact, a lot better, at negotiating with clients and pitching my product. I did look for some "sales articles" on the web, and a lot of what I found is plain bullshit on SEO-engineered websites promoting books or $5000 courses. What I need instead is a developer's perspective on how to sell a product you think is great. What are typical programmer's mistakes and misconceptions about sales, and how do you avoid them? How do you evolve into a reasonably good salesman? I can't believe it's all innate and unlearnable. Your own experience, combined with great articles available on the web, is most welcome. To Future Readers: The question got closed because it is not a good fit for this site. I found some helpful tips in a similar question asked on a sister StackExchange site about startups: I'm a terrible salesperson. What can I do about it?

    Read the article

  • Installing Ubuntu along with Windows 7 on a shrunk partition

    - by Thabo
    I am new to the Ubuntu OS and I'm asking the Ubuntu community. First, this is not a duplicate question; actually, it is a question that summarizes all the solutions and questions posted in this community related to installing Ubuntu along with Windows 7. I have bought a new HP laptop with its original Windows 7. I want to install Ubuntu along with 64-bit Windows 7. I ran the Ubuntu 12.04 Desktop installation CD, but the Ubuntu installer doesn't show the "along with Windows 7" option; it is only showing two options. I read some questions and answers posted on this community, especially the following link: Ubuntu 12.04 does not see windows already install on my computer (dual installation). I tried the following things: I ran the terminal in the live CD and tried the sudo dmraid -rE command and the dmraid remove command, but the terminal says there are no dmraid partitions. So I tried another scenario: I checked my partitions with GParted. There are some partitions labeled C, HP tools, Recovery, and System. C is the drive containing the Windows 7 files. So I shrank the volume of the C drive. Now I have 50000 MB of unallocated disk. I tried with GParted to create a partition on that unallocated space. It said something like you can't create more than four primary partitions; of course, all four of the partitions created by Windows are primary partitions. So I went back to Windows 7 and tried to create a new volume on the unallocated space. But unfortunately it says that if I create a new volume it will be a dynamic partition, and that we can't boot another OS from that partition. So I cancelled that step. Now I have 50000 MB of unallocated space, but how can I install Ubuntu on that space without harming the existing Windows 7? Because I still have only two options: Erase and install Ubuntu. Try something else. (I can see my unallocated space by going to the "something else" option.)

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) on the Cache-Control headers of static assets such as logo images. Examples: YouTube Yahoo Twitter BBC But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: Please make sure you understand these points before answering: Nobody uses short max-age lifetimes to allow modifying a static asset in the future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :) Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest-priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which has nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)

    Read the article

  • What are the requirements to test a website using jquery.get() ? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie PS: Here's the JS/jQuery code for downloading the files to an array. g_lists = new Array(); $(":checkbox").each(function(i){ if ($(this).attr("name") != "0") { var path = "../" + $(this).attr("name") + ".txt"; $("#bot").append("<br />" + path); // debug $.get(path, function(data){ g_lists[i] = data; $("#bot").html(data); }); } else { g_lists[i] = ""; } }); Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here are some examples it produces and the directory tree of the site. Maybe it will help; it can't hurt. . +-- include |   +-- jquery.js |   +-- load.js +-- index.xhtml +-- style.css +-- txt    +-- Scripting_Tools    +-- Editors.txt    +-- Other.txt Examples of path: ../txt/Scripting_Tools/Editors.txt ../txt/Scripting_Tools/Other.txt Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: After asking for help on an IRC chat channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.

    Read the article

  • TDD: Write a separate test for object initialization or relying on other tests exercising it

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't be easily altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which would put an object into a valid state. Only after it is initialized/loaded are the members in the proper state so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad" and all we put in there is exercising the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded. So they all start by running exactly the same code as "TestLoad". So my question: Is TestLoad even necessary? Do you take it out and let other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading, this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest" and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
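
    One common compromise, sketched below under the assumption of the usual CppUnit fixture macros (LegacyThing and its methods are hypothetical): setUp() performs the load for every test, exactly one test asserts on the load itself, and the remaining tests simply assume a loaded object.

      // Sketch: setUp() loads for every test; one explicit load test remains.
      #include <cppunit/extensions/HelperMacros.h>
      #include "LegacyThing.h" // hypothetical class under test

      class LegacyThingTest : public CppUnit::TestFixture {
          CPPUNIT_TEST_SUITE(LegacyThingTest);
          CPPUNIT_TEST(testLoadSucceeds);
          CPPUNIT_TEST(testOtherBehavior);
          CPPUNIT_TEST_SUITE_END();

      public:
          void setUp() override { m_loaded = m_thing.Load("fixture.dat"); }

          // The explicit initialization test: if loading breaks, look here first.
          void testLoadSucceeds() { CPPUNIT_ASSERT(m_loaded); }

          // Every other test relies on setUp() having done the load already.
          void testOtherBehavior() { CPPUNIT_ASSERT(m_thing.IsReady()); }

      private:
          LegacyThing m_thing;
          bool m_loaded = false;
      };

      CPPUNIT_TEST_SUITE_REGISTRATION(LegacyThingTest);

    This keeps the suite-wide failure mode (a broken load still fails everything), but the named load test at least points straight at the cause.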

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different: it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int } Here Wrap is the constructor and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this or other terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: ... Most often, one supplies smart constructors and destructors for these to ease working with them. ... at the Haskell wiki, or ... The general theme here is to fuse constructor - deconstructor pairs like ... at the Haskell wikibook (here it's probably meant in a bit more general sense), or newtype DList a = DL { unDL :: [a] -> [a] } The unDL function is our deconstructor, which removes the DL constructor. ... in Real World Haskell.

    Read the article

  • Virtual Pageview Goal Funnel Not Tracking Correctly

    - by cphill
    I have an AJAX form that has three stages: 1. The landing page, where a user fills out a form, selects between three question sets, and clicks Begin Assessment. 2. The assessment page, where users fill out questions relating to the question set that they selected on the landing page. 3. The results page, which shows whether they are at High Risk or Low Risk. Since this is an AJAX form that does not open a new page for each step of the process, I implemented a virtual pageview that fires on the page load of each step. The following is my virtual pageview setup for each stage: 1. /form/begin-assessment 2. /form/assessment/* (* = three different virtual pageviews depending on the user's selection of the three sets of questions: /one, /two, /three) 3. /form/finished-assessment I have set up three separate goals to track user progress through each step of the form assessment. Here is my goal setup: Goal Description: -Goal Type: Destination Goal Details: -Destination: /form/finished-assessment -Funnel: On Step 1: /form/begin-assessment (Required: Yes) Step 2: /form/assessment/one (Step 2: replace /one with /two or /three and you have my two other goal setups) Now my goals are recording the correct data in the first step and show the completions in the destination, but the second step does not show any drop-offs; it shows the same data as the destination. Any ideas on where I set up the goals incorrectly?

    Read the article

  • Logic / Render phases with a single thread

    - by DevilWithin
    The question I have may generate different opinions from different developers, but I'd still like to have an answer on this. It's all about the updating and rendering steps of the game loop, and their use under multi- and single-threaded environments. Currently, there is one thread running, which takes care of sequentially executing events, logic, and rendering. Sometimes, the logic part may wish to change the game state to something else, and in between do some loading of files. The result is that the game hangs completely while loading, and then proceeds to normal rendering of the new state. To get around this, I could make another thread, do the loading there while the main thread renders a smooth loading animation, and then proceed normally. The real question is about what happens if I don't create another thread. I could refresh the screen from the logic thread, and provide some basic loading screen, which would be updated not so smoothly while the files load. In fact, this approach is not loved by a lot of developers, as it mixes render code into the logic step, which may cause problems of different sorts... Hope it's clear!
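
    For reference, a minimal sketch of the two-thread variant, assuming C++11 threads; loadAssets and drawLoadingFrame are hypothetical stand-ins:

      // Worker thread loads assets while the main thread keeps drawing.
      #include <atomic>
      #include <thread>

      std::atomic<bool> g_loadDone{false};

      void loadAssets() {                 // worker thread: CPU-side loading only
          // ... read and parse files here (no rendering calls on this thread)
          g_loadDone.store(true, std::memory_order_release);
      }

      void drawLoadingFrame() { /* draw one frame of the loading animation */ }

      void enterLoadingState() {
          std::thread worker(loadAssets);
          while (!g_loadDone.load(std::memory_order_acquire))
              drawLoadingFrame();         // main thread stays smooth and responsive
          worker.join();                  // safe: the worker has already finished
          // ... switch to the new game state
      }

    One caveat worth stating: with most GL/D3D setups, actual GPU uploads still have to happen on the thread that owns the rendering context, so the worker should stop at CPU-side loading and parsing.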

    Read the article

  • What do you wish language designers paid attention to?

    - by Berin Loritsch
    The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish was in your main language of choice. The purpose of this question is to bring to light corners of languge design most language designers might not think about. So, instead of thinking about language feature X, think a little more philisophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering--the whys and what fors--are many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that was a goal means that some of the choices in language design were influenced by that philosophy. Please do not post: Syntax flame wars (I could care less whether you use whitespace [Python], keywords [Ruby], or curly braces [Java, C/C++, et. al.] to denote program blocks). That's just an implementation detail. "Any language that doesn't have feature X doesn't deserve to exist" type comments. There is at least one reason for all programming languages to exist--good or bad. Please do post: Philisophical ideas that language designers seem to miss. Technical concepts that seem to be poorly implemented more often than not. Please do provide an example of the pain it causes and if you have any ideas of how you would prefer it to function. Things you wish were in the platform's common library but seldom are. One the same token, things that usually are in a common library that you wish were not. Conceptual features such as built in test/assertion/contract/error handling support that you wish all programming languages would implement properly--and define properly. My hope is that this will be a fun and stimulating topic.

    Read the article

  • Does unit testing lead to premature generalization (specifically in the context of C++)?

    - by Martin
    Preliminary notes: I'll not go into the distinction between the different kinds of test there are; there are already a few questions on these sites regarding that. I'll take what's there, which says: unit testing in the sense of "testing the smallest isolatable unit of an application", from which this question actually derives. The isolation problem: What is the smallest isolatable unit of a program? Well, as I see it, it (highly?) depends on what language you are coding in. Michael Feathers talks about the concept of a seam: [WEwLC, p31] A seam is a place where you can alter behavior in your program without editing in that place. And without going into the details, I understand a seam -- in the context of unit testing -- to be a place in a program where your "test" can interface with your "unit". Examples: Unit tests -- especially in C++ -- require the code under test to add more seams than would be strictly called for by a given problem. Examples: Adding a virtual interface where a non-virtual implementation would have been sufficient. Splitting -- generalizing(?) -- a (smallish) class further "just" to facilitate adding a test. Splitting a single-executable project into seemingly "independent" libs, "just" to facilitate compiling them independently for the tests. (The first of these is illustrated in the sketch below.) The question: I'll try a few versions that hopefully ask about the same point: Is the way that unit tests require one to structure an application's code "only" beneficial for the unit tests, or is it actually beneficial to the application's structure? Is the generalization code needs to exhibit to be unit-testable useful for anything but the unit tests? Does adding unit tests force one to generalize unnecessarily? Is the shape unit tests force on code "always" also a good shape for the code in general as seen from the problem domain? I remember a rule of thumb that said don't generalize until you need to / until there's a second place that uses the code. With unit tests, there's always a second place that uses the code -- namely the unit test. So is this reason enough to generalize?
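
    To make the first example concrete, here is the kind of seam in question: a hypothetical Clock whose virtuality exists purely so tests can substitute a fake.

      // Illustrative example of a seam added for testability: without the
      // tests, a single non-virtual class would have sufficed.
      #include <ctime>

      class Clock {                        // the seam added for the tests
      public:
          virtual ~Clock() = default;
          virtual std::time_t now() const = 0;
      };

      class SystemClock : public Clock {   // the only production implementation
      public:
          std::time_t now() const override { return std::time(nullptr); }
      };

      class FixedClock : public Clock {    // test double; exists only for tests
      public:
          explicit FixedClock(std::time_t t) : m_t(t) {}
          std::time_t now() const override { return m_t; }
      private:
          std::time_t m_t;
      };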

    Read the article

  • Setting up Clojure Project And Sub Projects

    - by octopusgrabbus
    This is primarily a lein question about setting up a major project and its sub-projects, and is not intended to be a discussion question. Instead, I am interested in either a pointer to documentation or to a Clojure/lein best-practices link. I have a municipal property assessments application that splits two master files into different subset files, depending on whether a billing transfer is taking place or we want to batch-update new accounts, rather than making our assessment department enter new accounts once in their system and then again in the tax collection system. My application is going to be large enough that I can see a common-library lein project with support functions, like splitting apart the files, and then individual lein projects that use the common library. Should the lein projects be set up at the same level, with support included through the project.clj/core.clj files? Is there an advantage to creating lein new projects underneath a major project? Is there a problem with combining all functions in one project? I can probably make my one core.clj contain all flavors of the program, but coming from a C/C++ and Python background, I would prefer to have a lot of little projects.

    Read the article

  • Does the term "Learning Curve" include the knowing of the gotchas?

    - by voroninp
    When you learn a new technology you spend time understanding its concepts and tools. But when the technology meets real life, strange and unpleasant things happen. Requirements are often far from ideal and differ from the 'classic' scenario, and soon I find myself bending the technology to my real needs. At this point I begin to learn the bugs of the system, or that it is not as flexible as it seemed at the very beginning. And this 'fighting' with the technology consumes a great part of the development time. What is more depressing is that such gotchas and workarounds are not collected in one place (book, site, etc.). And before you really confront one, you cannot ask the correct question, because you do not even suspect the reason for the problem to occur (unknown unknowns). So my question consists of three parts: 1) Do you really manage (and how) to predict possible future problems? 2) How much time do you spend finding the workaround/fix/solution before you leave it and switch to other problems? 3) What are your criteria for considering yourself experienced in a technology? Do you take these gotchas into account?

    Read the article

  • Developing wheel reinventing tendencies into a skill as opposed to reluctantly learning wheel-finding skills? [duplicate]

    - by Korey Hinton
    This question already has an answer here: Is reinventing the wheel really all that bad? 20 answers I am more of a high-level wheel reinventor. I definitely prefer to make use of existing API features built into a language and popular third-party frameworks that I know can solve the problem, however when I have a particular problem that I feel capable of solving within a reasonable time I am very reluctant to find someone else's solution. Here are a few reasons why I reinvent: It takes time to learn a new API API restrictions might exist that I don't know about Avoiding re-work of unfamiliar code I am conflicted between doing what I know and shifting to a new technique I don't feel comfortable with. On one hand I feel like following my instincts and getting really good at solving problems, especially ones that I would never challenge myself with if all I did was try to find answers. And on the other hand I feel like I might be missing out on important skills like saving time by finding the right framework and expanding my knowledge by learning how to use a new framework. I guess my question comes down to this: My current attitude is to stick to the built-in API and APIs I know well* and to not spend my time searching github for a solution to a problem I know I can solve myself within a reasonable amount of time. Is that a reasonable balance for a successful programmer? *Obviously I will still look around for new frameworks that save time and solve/simplify difficult problems.

    Read the article
