Search Results

Search found 5819 results on 233 pages for 'compiler theory'.


  • Why are people using C instead of C++? [closed]

    - by Darth
    Possible Duplicate: When to use C over C++, and C++ over C? Many times I've stumbled upon people saying that C++ is not always better than C. A great example would be the Linux kernel, where they simply decided to use C instead of C++ because C had better compilers at the time. But that was many years ago and a lot has changed. So the question is: why are people still using C over C++? I guess there are probably some cases (like embedded devices) where there simply isn't a good C++ compiler, or am I wrong here? What are the other cases where it is better to go with C instead of C++?

    Read the article

  • Character movement on a 2D tile map

    - by Chris Morris
    I'm working on making an HTML5 game. Top-down; the closest thing I can equate it to is the Game Boy Zeldas, but open-world and with no rooms. What I have so far is a procedurally generated map in a multidimensional array, and a starting position on the map. Along with this I have an array of movable and non-movable tile IDs. I also have a class for my player, and have him being rendered out in the center of the starting tile. My problem, however, is getting the movement sorted out for the player. I want the character to move freely around the map (pixel by pixel, essentially) on top of this 2D generated world. Ideally this would allow the user to move around the walkable area of the canvas. This is simple enough for me to do, but I am having problems now with moving the world. If the user is 20% from the edge of the screen, I want the world to start panning in the direction the player is heading, but I'm rather lacking in ideas of how to do this. I've looked around for some tutorials, but am coming up blank on ideas of how to generate the playable area (zoomed in) and then move this generated area under the player when they reach near the edge of the screen. My current idea is to generate a certain number of tiles at full size to fill the screen and place the player in the middle, then, when the user approaches the edge of the screen, start generating the tiles offset by the distance moved and the direction. I can kind of see this working, but I really have no idea if this is the best or easiest-to-code method for generating the world. Sorry for the lack of code, but I'm still just in the theory stages of working this all out.
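
    For illustration, here is a minimal sketch of the "camera window" approach this describes: the camera scrolls only when the player enters the outer 20% band of the view, and tiles are drawn offset by the camera's position. It is written in C++ as language-neutral logic; the tile size, view size, and drawTile hook are assumptions, not part of the original question.

    // Minimal camera-window sketch (all names hypothetical). The camera only
    // scrolls when the player enters the outer 20% band of the view; tiles
    // are then drawn offset by the camera's top-left corner.
    #include <cmath>
    #include <vector>

    constexpr int   kTile  = 32;       // tile size in pixels (assumed)
    constexpr float kViewW = 640.0f;   // canvas size (assumed)
    constexpr float kViewH = 480.0f;
    constexpr float kMarginX = kViewW * 0.2f;  // the 20% edge zone
    constexpr float kMarginY = kViewH * 0.2f;

    struct Camera { float x = 0, y = 0; };  // top-left corner in world pixels

    void updateCamera(Camera& cam, float playerX, float playerY) {
        // Push the camera only when the player leaves the middle 60% of the view.
        if (playerX - cam.x < kMarginX)          cam.x = playerX - kMarginX;
        if (playerX - cam.x > kViewW - kMarginX) cam.x = playerX - (kViewW - kMarginX);
        if (playerY - cam.y < kMarginY)          cam.y = playerY - kMarginY;
        if (playerY - cam.y > kViewH - kMarginY) cam.y = playerY - (kViewH - kMarginY);
    }

    // Assumed drawing hook; in the HTML5 version this would be a canvas blit.
    void drawTile(int tileId, float screenX, float screenY);

    void drawWorld(const Camera& cam, const std::vector<std::vector<int>>& map) {
        // Draw only the tiles overlapping the camera rectangle, offset by -cam.
        int x0 = static_cast<int>(std::floor(cam.x / kTile));
        int y0 = static_cast<int>(std::floor(cam.y / kTile));
        int x1 = static_cast<int>(std::ceil((cam.x + kViewW) / kTile));
        int y1 = static_cast<int>(std::ceil((cam.y + kViewH) / kTile));
        for (int ty = y0; ty < y1; ++ty)
            for (int tx = x0; tx < x1; ++tx)
                if (ty >= 0 && ty < (int)map.size() && tx >= 0 && tx < (int)map[ty].size())
                    drawTile(map[ty][tx], tx * kTile - cam.x, ty * kTile - cam.y);
    }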

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release, but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single-environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!), the vast majority of O-box users will want smaller configurations. Prior to 2.9 the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. We therefore took the gamble that we would stick with Oracle's one-domain WebLogic configuration initially, and just hope that it would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now and I'm delighted to see 2.9 as the fruit of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram in the original post is the neatest way to summarise what the new 2.9 release allows us to deliver; previously only one 3D box was possible. Read the complete article here.

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop card doesn't do displacement mapping or glVertexAttribDivisor, so I can't test it out; I'm left sharing it here: With geomipmapping, the grid at any factor is transposable - if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom. So I have some questions: Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at? Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains? Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?) Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang? Is mipmapping the displacement map going to work? On future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
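
    As a rough illustration of the reuse idea (a sketch under stated assumptions, not a tested answer to the questions above), here is what the two submission paths could look like in C++ with OpenGL. The vertex shader is assumed to add the per-tile offset to the grid position and fetch the height from the heightmap texture; the uniform name, attribute index, and all setup code are made up.

    // Hypothetical sketch of the two submission paths for a shared grid mesh
    // (program/VAO/buffer setup omitted; names are made up for illustration).
    #include <GL/glew.h>
    #include <vector>

    // Path 1: one draw call per tile, offset passed as a uniform each time.
    void drawTilesPerUniform(GLuint program, GLuint gridVao, GLsizei indexCount,
                             const std::vector<float>& offsets /* x,y pairs */) {
        GLint loc = glGetUniformLocation(program, "u_tileOffset");
        glUseProgram(program);
        glBindVertexArray(gridVao);
        for (size_t i = 0; i + 1 < offsets.size(); i += 2) {
            glUniform2f(loc, offsets[i], offsets[i + 1]);
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }

    // Path 2: the glVertexAttribDivisor route the question asks about. The
    // per-tile offset lives in its own vertex buffer; a divisor of 1 makes it
    // advance once per instance, so all tiles go out in a single draw call.
    void drawTilesInstanced(GLuint program, GLuint gridVao, GLsizei indexCount,
                            GLuint offsetAttrib, GLsizei tileCount) {
        glUseProgram(program);
        glBindVertexArray(gridVao);
        glVertexAttribDivisor(offsetAttrib, 1);  // one offset per tile instance
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, tileCount);
    }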

    Read the article

  • Finish feature reverted commits from develop

    - by marco-fiset
    I am using git as a version control system, and git-flow as the branching model. I started a feature branch some weeks ago in order to keep the system in a clean state while developing that feature. The main development continued on the develop branch, and changes from develop were merged periodically into the feature branch, to keep it as up to date as possible. However, the time came when the feature was finished, and I used git-flow's finish feature to merge the feature back into develop. The merge was successfully done, but then I found out that some of the commits I had made in develop were reverted by the merge commit! Nowhere in develop or in the feature branch were these changes reverted; I can't see any commit that overwrote them. I just can't find anything. The only theory I have for the moment is that git is failing on me, but that would be extremely unlikely. Maybe I made some kind of wrong manipulation that brought this situation about? I can trace back in the history to when the commit was made. I can see that the changes from that commit were reverted by the merge commit. Nowhere in the branch do I see a commit that reverts those changes. Yet they were reverted. How is this even possible?

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture: class Widget extends Model { public function __construct( $data = null ) { $this->name = new FormField('length=20&label=Name:'); $this->manufactured = new FormDate; parent::__construct( $data ); // set above fields using incoming array } } Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
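
    For what it's worth, here is the FieldFactory variant mentioned at the end, transcribed into C++ purely for illustration (the original is PHP; the interfaces and method names here are hypothetical). The point of the pattern is that the factory becomes the injected dependency, so a test can hand Widget a fake factory and observe what the constructor asks for:

    // Hypothetical sketch of the FieldFactory idea. The factory is injected
    // through the constructor, so tests can substitute a fake implementation.
    #include <memory>
    #include <string>

    struct FormField {
        virtual void setValue(const std::string& v) = 0;
        virtual std::string renderHtml() const = 0;
        virtual ~FormField() = default;
    };

    struct FieldFactory {
        virtual std::unique_ptr<FormField> text(const std::string& options) const = 0;
        virtual std::unique_ptr<FormField> date() const = 0;
        virtual ~FieldFactory() = default;
    };

    class Widget /* : Model */ {
    public:
        // The constructor declares which fields it needs; it no longer
        // hard-codes concrete FormField classes with new.
        explicit Widget(const FieldFactory& fields)
            : name_(fields.text("length=20&label=Name:")),
              manufactured_(fields.date()) {}
    private:
        std::unique_ptr<FormField> name_;
        std::unique_ptr<FormField> manufactured_;
    };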

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying. Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guarantees of any sort. This info is provided “as is”.

    By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1. To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages.

    There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:

        <CustomParameter Name="$ShaderType$" Value="Vertex"/>

    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:

        <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>

    such that we now have:

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Vertex"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>

    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel” not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result). Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums; not all 4.0 cards support Compute shaders, though: they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Domain transfer and New Hosting Management

    - by Anubhav Saini
    I wanted to migrate from my older registrar to GoDaddy, mainly because my current registrar/hosting provider doesn't support .NET. My old registrar gave me control over the domain and hosting account, so basically I have everything I would need (I know the theory only). I applied for a transfer of the domain, bought a hosting package from GoDaddy, and uploaded the new web site. So I am waiting for the domain transfer, and it tells me that I have to wait 5-7 days for approval. Okay. But today my old registrar told/taunted me that I really didn't need to apply for a transfer. What could I possibly have done differently? My domain expires on the 15th. Now, I don't know much about how all of this really works, but I am guessing he meant: "you should have waited 15 days and let it expire, after which you should buy the domain as it is expired". Is it really so (I doubt it), or are there some other ways I could have got the same result without transferring the domain (like changing DNS entries)? I have read just about all of the documentation available on namecheap/GoDaddy/Whois about domain transfers, but maybe because I am new to this it is all confusing to me. I would also like to know what to do with the DNS settings after the transfer succeeds. I want to kill the old website. So which nameserver settings do I need to change: the new one, the old one, or both? I have the old host + old domain registrar + old working site on one hand, and on the other hand, the new site + pending domain transfer + new DNS settings.

    Read the article

  • Best practice in setting return value (use else or?)

    - by Deckard
    Whenever you want to return a value from a method, but whatever you return depends on some other value, you typically use branching: int calculateSomething() { if (a == b) { return x; } else { return y; } } Another way to write this is: int calculateSomething() { if (a == b) { return x; } return y; } Is there any reason to avoid one or the other? Both allow adding "else if"-clauses without problems. Both typically generate compiler errors if you add anything at the bottom. Note: I couldn't find any duplicates, although multiple questions exist about whether the accompanying curly braces should be on their own line. So let's not get into that.

    Read the article

  • How-to get the binding for a tab in the Dynamic Tab Shell Template

    - by Frank Nimphius
    The Dynamic Tab Shell template does expose a method on the Tab.java class that allows you to get access to the ADF binding container for a tab. At least in theory this works; in practice this call always returns a null value (a bug has been filed for this). To work around the problem, you can use code similar to the following to get the ADF binding for a specific tab:

        DCBindingContainer currentBinding = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCBindingContainer templateBinding = (DCBindingContainer) currentBinding.get("ptb1");
        DCBindingContainer tabBinding = (DCBindingContainer) templateBinding.get("r" + 0);

    In the code lines above, the tabBinding variable will hold the binding reference to the first tab in the dynamic tab shell template. Note that the tab doesn't need to be visible for this (which has to do with how the template works). "ptb1" is the template reference name in the PageDef file (Executable section) of the template consumer view. Check this string in your page before using this code; if it differs, change it in the code above as well. "r0" is the binding reference of the first tab in the template. The last tab is referenced by "r14".

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:
    - self-defeating
    - just a clone of an existing, better-established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended
    and finally, if there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):
    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license
    Here is the proposed license, which is just the Simplified BSD with a couple of additional clauses (all of which are bolded):

    Copyright (c) (year), (author) (email) All rights reserved.

    Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
    - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
    - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
    - The copyright holder(s) must be notified of any redistributions of source code.
    - The copyright holder(s) must be notified of any redistributions in binary form.
    - The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement Glenn Fiedler's popular fixed timestep system (documented here: http://gafferongames.com/game-physics/fix-your-timestep/) in Flash. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is that it looks awful. The movement is very stuttery and just doesn't look good. I find that the amount of time between ENTER_FRAME events, which I'm adding onto my accumulator, averages out to 28.5ms (1000/35) just as it should, but individual frame times vary wildly; sometimes an ENTER_FRAME event will come 16ms after the last, sometimes 42ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences; is it even reliable?
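
    For reference, here is a minimal sketch of the accumulator loop with state interpolation, after Fiedler's article, written in C++ rather than AS3 (the 35 Hz step and 6 px move come from the question; Clock and draw() are assumed stand-ins for the host's timer and renderer). If the render step is not blending the previous and current physics states as in the last lines here, varying frame times will show up as exactly this kind of stutter:

    // Minimal sketch of the fixed-timestep accumulator with interpolation.
    #include <chrono>

    using Clock = std::chrono::steady_clock;

    void draw(double x);  // assumed rendering hook

    void runLoop() {
        const double dt = 1000.0 / 35.0;   // fixed physics step in ms
        double accumulator = 0.0;
        double prevX = 0.0, currX = 0.0;   // previous and current physics states
        auto last = Clock::now();

        for (;;) {  // one iteration per rendered frame
            auto now = Clock::now();
            accumulator += std::chrono::duration<double, std::milli>(now - last).count();
            last = now;

            while (accumulator >= dt) {    // advance physics in fixed steps
                prevX = currX;
                currX += 6.0;              // 6 px per 1/35 s, as in the question
                accumulator -= dt;
            }

            // The crucial part: blend the two physics states by the leftover
            // fraction, so the drawn position moves smoothly even though frame
            // times vary (16 ms one frame, 42 ms the next).
            double alpha = accumulator / dt;
            draw(prevX + (currX - prevX) * alpha);
        }
    }

    If the interpolation really is in place and it still stutters, a coarse timer is another plausible culprit: AS3's getTimer() only has millisecond resolution, which at ~28 ms steps can be enough to wobble the blend fraction.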

    Read the article

  • Cheerp -- C++ for web: advance or regression?

    - by Henrique Barcelos
    Recently I've run into Cheerp, a C++-to-JavaScript compiler, which uses a modified version of clang to generate JavaScript code from C++ sources. That makes me wonder: why in the seven kingdoms would someone in their right mind do this? I mean: why would you take a language that is not designed for the web at all, that is far more convoluted and bureaucratic, write your code, and then compile it into JavaScript itself? Can anybody see any advantages in doing so? We can surely discard performance as a reason, because in the end it generates pure JavaScript code. Is there anyone here who has real experience with this? P.S.: I'm not sure if this is an on-topic question, but this is the most general forum about programming that I could find in the StackExchange network. Edit: Although this seems like a subjective question, it is not. I am asking for reasons why this tool could be useful. I got interested at first, but started wondering why someone would use it.

    Read the article

  • A programming language that does not allow IO. Haskell is not a pure language

    - by TheIronKnuckle
    (I asked this on Stack Overflow and it got closed as off-topic. I was a bit confused until I read the FAQ, which discouraged subjective, theoretical, debate-style questions. The FAQ here doesn't seem to have a problem with it, and it sounds like this is a more appropriate place to post. If this gets closed again, forgive me; I'm not trying to troll.) Are there any 100% pure languages (as I describe in the Stack Overflow post) out there already, and if so, could they feasibly be used to actually do stuff? I.e., do they have an implementation? I'm not looking for raw maths on paper / pure lambda calculus. However, pure lambda calculus with a compiler or a runtime system attached is something I'd be interested in hearing about.

    Read the article

  • How to organize music files?

    - by newbie
    I'm trying to re-organize my music files. I copied them from my iPod before it died, so they have funky names, like DGEDH.mp3. I tried using a Windows utility to rename them from their ID3 tags, but it didn't work very well--it created a bunch of folders (one for each album in theory, but more like 7 or 8 in reality) and renamed the files with non-English characters. Most of the files are MP3s, but there's at least one or two other file types as well. I'd like to copy them to a new folder and try a Ubuntu utility to rename and re-organize them. In Windows, this would be straightforward--I'd use the Search function (with no file name specified) to list all of the files in the main folder and its subfolders, then drag and drop them to the new folder. What's the easiest way to accomplish the same thing in Ubuntu? The GUI search doesn't seem to accept wildcard characters, and I don't remember all of the file types I have, so it's not as simple as searching for "mp3". Many thanks!

    Read the article

  • Worst practices in C++, common mistakes ...

    - by Felix Dombek
    After reading this famous rant by Linus Torvalds, I wondered what actually are all the bad things programmers might do in C++. I'm explicitly not referring to typography errors or bad program flow as treated in this question and its answers, but to more high-level errors which are not detected by the compiler and do not result in obvious bugs at first run: complete design errors, things which are improbable in C but are likely to be done by newcomers who don't understand the full implications of their code. I also welcome answers pointing out a huge performance decrease where it would not usually be expected. An example of what one of my professors once told me: "You have used somewhat too many instances of unneeded inheritance and virtuality. Inheritance makes a design much more complicated (and inefficient because of the RTTI (run-time type information) subsystem), and it should therefore only be used where it makes sense, e.g. for the actions in the parse table." [I wrote an LR(1) parser generator.] "Because you make intensive use of templates, you practically don't need inheritance."

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse-proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated on Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.

    Read the article

  • How to make the switch to C++11?

    - by Overv
    I've been programming in C++ for a while now, but mostly things centered around the low-level features of C++. By that I mean mostly working with pointers and raw arrays. I think this behavior is known as using C++ as C with classes, despite my only having tried C recently for the first time. I was pleasantly surprised by how languages like C# and Java hide these details away in convenient standard library classes like Dictionaries and Lists. I'm aware that the C++ standard library has many convenient containers like vectors, maps and strings as well, and C++11 only adds to this with std::array and ranged loops. How do I best learn to make use of these modern language features, and which are suitable for which moments? Is it correct that software engineering in C++ nowadays is mostly free of manual memory management? Lastly, which compiler should I use to make the most of the new standard? Visual Studio has excellent debugging tools, but even VS2012 seems to have terrible C++11 support.
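
    As a taste of the style the question is after, here is a small C++11 fragment using the features named above (std::array, ranged loops) plus std::unique_ptr, which is what makes "mostly free of manual memory management" true in day-to-day code; a sketch, not a style prescription:

    // Small C++11 sketch: std::array, range-based for, auto, and unique_ptr
    // instead of raw new/delete. Nothing here needs manual memory management.
    #include <array>
    #include <iostream>
    #include <memory>
    #include <vector>

    int main() {
        std::array<int, 4> fixed = {1, 2, 3, 4};     // stack array that knows its size
        std::vector<int> grown;
        for (int v : fixed)                          // ranged loop over the array
            grown.push_back(v * v);

        auto owned = std::unique_ptr<std::vector<int>>(new std::vector<int>(grown));
        // (C++14 adds std::make_unique; in plain C++11 the explicit new is typical.)

        for (auto v : *owned)
            std::cout << v << ' ';
        std::cout << '\n';
        return 0;                                    // unique_ptr frees itself here
    }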

    Read the article

  • One-week release cycle: how do I make this feasible?

    - by Arkaaito
    At my company (a 3-year-old web industry startup), we have frequent problems with the product team saying "aaaah this is a crisis, patch it now!" (doesn't everybody?) This has an impact on the productivity (and morale) of engineering staff, myself included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are: (a) Does anyone have experience with this length of release cycle? (b) Has anyone heard of this length of release cycle even being attempted? (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.) (d) How can we minimize the damage if this effort fails?

    Read the article

  • If I were to claim I knew C++, what libraries would you expect me to know?

    - by Peter Smith
    I'm unsure as to the definition of knowing a programming language, so I'm picking C++ as an example. How much does it take for someone to be qualified as knowing C++? Should they just know the basic syntax? Templates and generic programming? Compiler flags and their purposes (-Wall, the difference between -O1, -O2 and -O3)? The STL? Garbage-collection strategies? Boost? Common libraries like zlib, curl, and libxml2?

    Read the article

  • An operational embedded Linux system with Buildroot, a tutorial by Benoit Mauduit

    Hi, in the embedded world we often find ourselves in a situation where we have to rebuild a complete system from source, for a target architecture that is often different from our host architecture. Whether you are a beginner or an experienced developer, (cross-)compiling and organizing an embedded system are long and tedious steps, especially when the system components to be compiled require adaptations. Fortunately, there are free tools that simplify and speed up this task, generally offering interesting complementary features as well. This article is devoted to one of these free tools for embedded Linux systems: Buildroot. Feel free to post your comments on this article here...

    Read the article

  • Is Java much harder to "tweak" for performance compared with C/C++?

    - by user997112
    Does the "magic" of the JVM hinder the influence a programmer has over micro-optimisations in Java? I recently read in C++ sometimes the ordering of the data members can provide optimizations (granted, in the microsecond environment) and I presumed a programmer's hands are tied when it comes to squeezing performance from Java? I appreciate a decent algorithm provides greater speed-gains, but once you have the correct algorithm is Java harder to tweak due to the JVM control? If not, could people give examples of what tricks you can use in Java (besides simple compiler flags).

    Read the article

  • Designing rules to fight smallpox in Civ-style TBS games

    - by Williham Totland
    TL;DR: How do you design a ruleset for a Civ-style TBS game that prevents city smallpox from being a profitable or viable strategy? Long version: Civ-style games are pretty great. Bringing a civilization from cradle to grave is a great endeavor, and practicing diplomacy with hard-line human players is fun and challenging. In theory. In practice, however, many of these games have, especially in multiplayer, exactly one viable strategy: city smallpox, a.k.a. infinite city spread, a.k.a. covering all available space with 1-citizen cities, packed as tightly as they will go. I suppose this could count as emergent gameplay, but still; it could hardly be considered to be in the spirit of the class of game. The Civilization series, of course, is stuck with its more or less fixed rule sets, established with Civilization. Yes, there have been major changes in some respects, but the rules pertaining to city building and maintenance have stayed pretty similar. So the question, then: if you build a ruleset for a TBS from the ground up, what rules should be in place to prevent infinite city sprawl from being a viable strategy? Or should ICS be a viable strategy?

    Read the article

  • Designing spawning system

    - by Vlad
    I played this game recently: http://www.kongregate.com/games/JuicyBeast/knightmare-tower, and I am amazed by the way different monsters are being spawned. I personally developed my own shooter game, and I added a time-based but also count-based spawning system. By count-based I mean: when there are 5 enemies on stage, stop spawning. But this is one example. My question is: how are these spawning mechanisms built? Is there some pattern or some theory for how they are built? Are there some online materials/pages where I can improve my knowledge? To summarize, let's just say we have 6 types of monsters. I start the game and kill monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 types of monsters stays, but they become more and more resilient and dangerous. So I must also improve, to be able to destroy the same monsters now that they are stronger. My question is simple: are there some theories built or written down for developing this type of intelligent system? Note: This is a general question, not tied to some particular game or how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
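
    One common way to build such a system is a data-driven spawn table. Here is a hypothetical C++ sketch combining the gates described above: a count gate (max simultaneous enemies), a time gate (spawn interval), and progress tiers that widen the monster pool and scale difficulty. All thresholds and numbers are made up for illustration.

    // Hypothetical data-driven spawn director. Tier index = how many
    // "ceilings" the player has passed; progress is assumed non-negative.
    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    struct SpawnTier {
        int maxAlive;                // count-based gate
        double intervalMs;           // time-based gate
        std::vector<int> types;     // monster types eligible at this tier
    };

    const std::vector<SpawnTier> kTiers = {
        {5, 2000.0, {1, 2, 3}},
        {6, 1500.0, {1, 2, 3, 4}},           // after the first ceiling
        {8, 1000.0, {1, 2, 3, 4, 5, 6}},
    };

    struct Spawned { int type; double hpScale; bool ok; };

    Spawned trySpawn(int progress, int aliveCount, double msSinceLastSpawn) {
        const SpawnTier& tier =
            kTiers[std::min<std::size_t>(progress, kTiers.size() - 1)];
        if (aliveCount >= tier.maxAlive)        return {0, 0.0, false};  // count gate
        if (msSinceLastSpawn < tier.intervalMs) return {0, 0.0, false};  // time gate
        int type = tier.types[std::rand() % tier.types.size()];
        double hpScale = 1.0 + 0.25 * progress;  // same monsters, just stronger
        return {type, hpScale, true};
    }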

    Read the article

  • Why use string.Empty over "" when assigning to a string object

    - by dreza
    I've been running StyleCop over my code, and one of its recommendations, SA1122, is to use string.Empty rather than "" when assigning an empty string to a value. My question is: why is this considered best practice? Or, is this considered best practice? I assume there is no compiler difference between the two statements, so I can only think that it's a readability thing? UPDATE: Thanks for the answers, but it's been kindly pointed out that this question has been asked many times already on SO, which in hindsight I should have considered and searched for before asking here. Some of these, especially the forward links, make for interesting reading. SO question and answer Jon Skeet's answer to the question

    Read the article
