Search Results

Search found 569 results on 23 pages for 'simulation'.

Page 16/23 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Erasing and modifying elements in Boost MultiIndex Container

    - by Sarah
    I'm trying to use a Boost MultiIndex container in my simulation. My knowledge of C++ syntax is very weak, and I'm concerned I'm not properly removing an element from the container or deleting it from memory. I also need to modify elements, and I was hoping to confirm the syntax and basic philosophy here too.

        // main.cpp
        ...
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/tokenizer.hpp>
        #include <boost/shared_ptr.hpp>
        ...
        #include "Host.h" // class Host, all members private, using get fxns to access

        using boost::multi_index_container;
        using namespace boost::multi_index;

        typedef multi_index_container<
          boost::shared_ptr< Host >,
          indexed_by<
            hashed_unique< const_mem_fun<Host,int,&Host::getID> >
            // ordered_non_unique< BOOST_MULTI_INDEX_MEM_FUN(Host,int,&Host::getAge) >
          > // end indexed_by
        > HostContainer;

        typedef HostContainer::nth_index<0>::type HostsByID;

        int main() {
          ...
          HostContainer allHosts;
          Host * newHostPtr;
          newHostPtr = new Host( t, DOB, idCtr, 0, currentEvents );
          allHosts.insert( boost::shared_ptr<Host>(newHostPtr) );
          // allHosts gets filled up

          int randomHostID = 4;
          int newAge = 50;
          modifyHost( randomHostID, allHosts, newAge );
          killHost( randomHostID, allHosts );
        }

        void killHost( int id, HostContainer & hmap ){
          HostsByID::iterator it = hmap.find( id );
          cout << "Found host id " << (*it)->getID() << "Attempting to kill. hmap.size() before is " << hmap.size() << " and ";
          hmap.erase( it ); // Is this really erasing (freeing from mem) the underlying Host object?
          cout << hmap.size() << " after." << endl;
        }

        void modifyHost( int id, HostContainer & hmap, int newAge ){
          HostsByID::iterator it = hmap.find( id );
          (*it) -> setAge( newAge ); // Not actually the "modify" function for MultiIndex...
        }

    My questions are:

    1. In the MultiIndex container allHosts of shared_ptrs to Host objects, is calling allHosts.erase( it ) on an iterator to the object's shared_ptr enough to delete the object permanently and free it from memory? It appears to be removing the shared_ptr from the container.
    2. The allHosts container currently has one functioning index that relies on the host's ID. If I introduce an ordered second index that calls on a member function (Host::getAge()), where the age changes over the course of the simulation, is the index always going to be updated when I refer to it?
    3. What is the difference between using MultiIndex's modify to modify the age of the underlying object versus the approach I show above? I'm vaguely confused about what is assumed/required to be constant in MultiIndex.

    Thanks in advance.

    Update: Here's my attempt to get the modify syntax working, based on what I see in a related Boost example.

        struct update_age {
          update_age():(){} // have no idea what this really does... elicits error
          void operator() (boost::shared_ptr<Host> ptr) {
            ptr->incrementAge(); // incrementAge() is a member function of class Host
          }
        };

    and then in modifyHost, I'd have hmap.modify(it,update_age). Even if by some miracle this turns out to be right, I'd love some kind of explanation of what's going on.
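
    One possible direction for the modify question, offered as a hedged sketch rather than a verified answer: multi_index_container::modify expects a callable instance, so the call would be hmap.modify(it, update_age()) rather than passing the type name, and the functor needs no constructor at all, which is why the update_age():(){} line can simply be dropped. The self-contained example below uses a simplified stand-in Host class; the member names mirror the question but are assumptions, not the original code.

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/shared_ptr.hpp>
        #include <iostream>

        class Host {
        public:
            Host(int id, int age) : id_(id), age_(age) {}
            int getID() const { return id_; }
            int getAge() const { return age_; }
            void incrementAge() { ++age_; }
        private:
            int id_;
            int age_;
        };

        using namespace boost::multi_index;

        typedef multi_index_container<
            boost::shared_ptr<Host>,
            indexed_by<
                hashed_unique< const_mem_fun<Host, int, &Host::getID> >
            >
        > HostContainer;

        // Functor handed to modify(); no constructor is needed.
        struct update_age {
            void operator()(boost::shared_ptr<Host> ptr) const {
                ptr->incrementAge();
            }
        };

        int main() {
            HostContainer hosts;
            hosts.insert(boost::shared_ptr<Host>(new Host(4, 30)));

            HostContainer::iterator it = hosts.find(4);

            // Pass an instance of the functor, not the type name.
            hosts.modify(it, update_age());
            std::cout << (*it)->getAge() << std::endl; // prints 31

            // erase() removes the shared_ptr from the container; the Host itself is
            // destroyed only when the last shared_ptr referring to it goes away.
            hosts.erase(it);
            return 0;
        }

    On the erase question, the comment above is the short version: erasing the element drops the container's shared_ptr, and the Host is freed at that moment only if nothing else still holds a shared_ptr to it. And an ordered index keyed on getAge() would not re-sort itself when the age is changed behind the container's back, which is exactly the situation modify() (and replace()) exist for.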

    Read the article

  • Allocate constant memory

    - by Vlad
    I'm trying to set my simulation params in constant memory, but without luck (CUDA.NET). The cudaMemcpyToSymbol function returns cudaErrorInvalidSymbol. The first parameter of cudaMemcpyToSymbol is a string... Is it the symbol name? Actually, I don't understand how it gets resolved. Any help appreciated.

        // init, load .cubin
        float[] arr = new float[1];
        arr[0] = 0.0f;
        int size = Marshal.SizeOf(arr[0]) * arr.Length;
        IntPtr ptr = Marshal.AllocHGlobal(size);
        Marshal.Copy(arr, 0, ptr, arr.Length);
        var error = CUDARuntime.cudaMemcpyToSymbol("param", ptr, 4, 0, cudaMemcpyKind.cudaMemcpyHostToDevice);

    My .cu file contains:

        __constant__ float param;
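
    A hedged note on one common cause of cudaErrorInvalidSymbol in this setup: the runtime API's string lookup only finds symbols that were compiled and registered with the runtime-API program, so a __constant__ variable living in a .cubin loaded as a module is normally reached through the module handle instead. The sketch below shows that path with the plain CUDA driver API in C++ (not CUDA.NET; the corresponding CUDA.NET wrapper names aren't shown here), and the .cubin file name is a placeholder.

        #include <cuda.h>
        #include <cstdio>

        int main() {
            cuInit(0);
            CUdevice dev;
            cuDeviceGet(&dev, 0);
            CUcontext ctx;
            cuCtxCreate(&ctx, 0, dev);

            CUmodule mod;
            cuModuleLoad(&mod, "simulation.cubin");   // placeholder file name

            // Look the constant up by name inside this module, then copy into it.
            CUdeviceptr dptr;
            size_t bytes = 0;
            CUresult rc = cuModuleGetGlobal(&dptr, &bytes, mod, "param");
            if (rc != CUDA_SUCCESS) {
                std::fprintf(stderr, "symbol 'param' not found in module\n");
                return 1;
            }

            float value = 0.0f;
            cuMemcpyHtoD(dptr, &value, sizeof(value)); // host -> __constant__ float param
            return 0;
        }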

    Read the article

  • Reset Java Applet on page reload

    - by tommy-susanto
    So I have a .jar applet on my website, a simulation game that spawns an army whenever the user clicks on the screen. However, whenever I refresh the page, the previous army is still there on the screen. I want the applet to be refreshed (as if we were just starting to run the application for the first time). Right now I need to quit Firefox and restart it in order for the applet to be refreshed, which is annoying since I'm still programming it and the class files change. Am I missing some code, which makes the browser unable to refresh the applet and still take the one from the cache? I already tried pressing CTRL+F5, but that trick doesn't seem to work. I'd basically like to do it automatically using scripts or something, so that it won't require the user to manually press some button on the keyboard. Any suggestions? I'd really appreciate it. Thank you.

    Read the article

  • Unkillable console windows

    - by rep_movsd
    I'm developing an OpenGL-based 2D simulation with GLUT under Visual C++ 2008. Sometimes, when I hit an assert() or an unhandled exception and break into the debugger, the GLUT display window closes, but the console window remains open. These windows just can't be killed: they do not show up in Task Manager, Process Explorer or any other tool, and I cannot find the window handle using the Spy++ tool either. Worst of all, they prevent my system (Windows XP) from shutting down, so I have to power off manually (and of course I then have to run chkdsk on my drives, which invariably finds and fixes minor errors after the bad shutdown). Has anyone come across such an issue?

    Read the article

  • Using boost::asio::async_read with stdin?

    - by yeus
    Hi people, a short question: I have a real-time simulation which runs as a background process and is connected with pipes to the calling program. I want to send commands to that process via stdin and get certain information from it via stdout. Now, because it is a real-time process, the input has to be non-blocking. Is boost::asio::async_read in conjunction with std::cin a good idea for this task? How would I use that function if it is feasible? Any other suggestions?
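
    A hedged sketch of one way this is often handled on POSIX systems (asio's posix::stream_descriptor is not available on Windows): rather than wrapping std::cin itself, wrap the stdin file descriptor and queue asynchronous line reads, then drive the io_service from the simulation loop with poll() so the real-time step never blocks. The class and handler names below are invented for illustration.

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <iostream>
        #include <string>
        #include <unistd.h>

        namespace asio = boost::asio;

        class StdinReader {
        public:
            explicit StdinReader(asio::io_service& io)
                : input_(io, ::dup(STDIN_FILENO)) { start(); }

        private:
            void start() {
                asio::async_read_until(input_, buffer_, '\n',
                    boost::bind(&StdinReader::onLine, this,
                                asio::placeholders::error,
                                asio::placeholders::bytes_transferred));
            }

            void onLine(const boost::system::error_code& ec, std::size_t /*bytes*/) {
                if (ec) return;                       // EOF or error: stop reading
                std::istream is(&buffer_);
                std::string line;
                std::getline(is, line);
                std::cout << "command: " << line << std::endl; // dispatch to the simulation here
                start();                              // queue the next read
            }

            asio::posix::stream_descriptor input_;
            asio::streambuf buffer_;
        };

        int main() {
            asio::io_service io;
            StdinReader reader(io);
            // In the real-time process, io.poll() would be called once per simulation
            // step instead of io.run(), so the main loop never waits for input.
            io.run();
            return 0;
        }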

    Read the article

  • How do I figure out which SOC or SDK board to use?

    - by Ram Bhat
    Hey guys, basically I'm working on a model of an automated vacuum cleaner. I have currently made a software simulation of it. How do I figure out which SoC or SDK board to use for the hardware implementation? My code is mostly written in C. Will this be compatible with the SDK provided by board manufacturers? How do I know what clock speed, memory, etc. the hardware will need? I'm a software guy and have only basic knowledge of practical hardware implementations. I have some experience in programming the 8086 to carry out basic tasks.

    Read the article

  • Getting COM object to run in Vista

    - by rainslg
    We expose an interface to our simulation software using a COM/ActiveX object. This worked just fine in XP, but in Vista, we get "Error 429: ActiveX can't create object" when a VB client executes CreateObject(). The COM object has been registered by hand so that the Vista Registry is identical to XP's Registry. I run the VB interface from a DOS window that I started using "Run As Administrator". The client is correctly accessing and reading the Registry as I walk through using the debugger in VB, so it's apparently not a security setting, as near as I can tell. I have also loaded the files into VS2005 (the object was originally created in VS6) and rebuilt them to get a later ATL version, but that hasn't helped - we still get the 429 error. Is this a symptom of UAC problems, or should I be looking for something deeper?

    Read the article

  • I'd like to learn computer science and programming, but I have no idea where to start

    - by Josiah
    Hi all, I'd like to learn computer science and programming, but I don't know where or how to start. For the past two months I have been researching types of programming; I found my interest is in desktop programming, and also found these subjects (discrete maths, logic, algorithms, data structures, artificial intelligence, simulation) said to be the best-known ones. As a beginner, I have no idea what kind of language or books can cover all these subjects, or even how to write any code. Secondly, I am as young as you can guess, and I need a language to start with that will be easy for me to understand, and books to read. Thank you.

    Read the article

  • Python Beginner: Selective Printing in loops

    - by Jonathan Straus
    Hi there. I'm a very new Python user (I had only a little prior experience with HTML/JavaScript as far as programming goes), and was trying to find some ways to output only intermittent numbers in my loop for a basic bicycle racing simulation (10,000 lines of biker positions would be pretty excessive :P). I tried in this loop several 'reasonable' ways to communicate a condition where a floating point number equals its integer floor (int, floor division) to print out every 100 iterations or so:

        for i in range(0, 10000):
            i = i + 1
            t = t + t_step  # t is initialized at 0 while t_step is set at .01
            acceleration_rider1 = (power_rider1 / (70 * velocity_rider1)) - (force_drag1 / 70)
            velocity_rider1 = velocity_rider1 + (acceleration_rider1 * t_step)
            position_rider1 = position_rider1 + (velocity_rider1 * t_step)
            force_drag1 = area_rider1 * (velocity_rider1 ** 2)
            acceleration_rider2 = (power_rider2 / (70 * velocity_rider1)) - (force_drag2 / 70)
            velocity_rider2 = velocity_rider2 + (acceleration_rider2 * t_step)
            position_rider2 = position_rider2 + (velocity_rider2 * t_step)
            force_drag2 = area_rider1 * (velocity_rider2 ** 2)
            if t == int(t):  # TRIED t == t // 1 AND OTHER VARIANTS THAT DON'T WORK HERE :(
                print t, "biker 1", position_rider1, "m", "\t", "biker 2", position_rider2, "m"
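
    The underlying problem is language-agnostic: t accumulates floating-point rounding error, so t == int(t) almost never holds after thousands of additions of 0.01. A more robust pattern is to key the printing off the integer loop counter instead of the float accumulator. Here is a minimal sketch of that idea (shown in C++ rather than Python; the names are placeholders, not the original program):

        #include <cstdio>

        int main() {
            const double t_step = 0.01;
            double t = 0.0;
            double position = 0.0;              // stand-in for the rider positions

            for (int i = 0; i < 10000; ++i) {
                t += t_step;
                position += 1.0 * t_step;       // stand-in for the physics update

                // Print every 100 iterations, i.e. every 100 * t_step = 1.0 simulated
                // seconds, by testing the integer counter rather than the float.
                if ((i + 1) % 100 == 0) {
                    std::printf("t = %.2f  position = %.3f m\n", t, position);
                }
            }
            return 0;
        }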

    Read the article

  • What application domains are CPU bound and will tend to benefit from multi-core technologies?

    - by Glomek
    I hear a lot of people talking about the revolution that is coming in programming due to multi-core processors and parallelism, but I can't shake the feeling that for most of us, CPU cycles aren't the bottleneck. Pretty much all of my programs have been I/O bound in one way or another (database, filesystem, network, user interaction, etc.) for a very long time. Now I can think of a few areas where CPU cycles are a limiting factor, like code breaking, graphics, sound, some forms of simulation (weather, physics, etc.), and some forms of mathematical research, but they all seem like fairly specialized application domains. My general impression is that most programs are still I/O bound and that for most of our industry CPUs have been plenty fast for quite a while now. Am I off my rocker? What other application domains are CPU bound today? Do any of them include a large portion of the programming population? In essence, I'm wondering whether the multi-core CPUs will impact very many of us, and if so, how?

    Read the article

  • Finding an HTTP proxy that will intercept static resource requests

    - by pkh
    Background I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource. Problem I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device. Optimally, I want a software package that will do this for me (for Windows or buildable on cygwin). I can deal with forcing apache to do it with PHP, but I'm unsure how to configure it to make it happen. I've looked at squid and privoxy, but neither of them seem to do what I want. Any ideas? I'd rather not have to roll my own.

    Read the article

  • How can I test my iPhone app on a pre-3.0 simulator to make sure it works?

    - by Cocorico
    Hi Stackers! I wrote an iPhone application. Very simple! It uses Cocos2D only, and all of its other features are very basic: no accelerometer, no camera, nothing. Just buttons and sounds. I think every iPhone can run this app (there are no limits in Cocos2D, right?), but my Xcode only lets me use 3.0 and upwards. I want to confirm one thing and ask one thing: If I put "iPhone OS 2.0" as my iPhone OS Deployment Target in Xcode, but my "Active SDK" in Xcode still says 3.0, and I compile using this and submit to the App Store, when it goes up, can people who use 2.0 still download and use the game? Is there a way I can test in a 2.0 simulator to make sure it works? My Xcode only has 3.0 and higher simulators.

    Read the article

  • How can I access variables outside of current scope in javascript?

    - by sekmet64
    I'm writing an application in JavaScript and cannot figure out how to access the variables declared in my function from inside this jQuery parse. Inside it I can access global variables, but I don't really want to create global vars for these values. Basically I want to extract file names from an XML document in the simulationFiles variable. I check whether the node attribute equals simName and extract the two strings inside the XML elements; that part I think is working. How can I extract those XML elements and append them to local variables?

        function CsvReader(simName) {
            this.initFileName = "somepath";
            this.eventsFileName = "somepath";
            $(simulationFiles).find('simulation').each(function() {
                if ($(this).attr("name") == simName) {
                    initFileName += $(this).find("init").text();
                    eventsFileName += $(this).find("events").text();
                }
            });
        }

    Read the article

  • CAD/CAM without C++

    - by zaladane
    Hello, is it possible to do CAD/CAM software without having to use C++? My company developed their software with C/C++, but that was more than 10 years ago. Today there is a lot of legacy code that switching would force us to get rid of, but I was wondering what the actual risks are. We have a lot of mathematical algorithms for toolpath calculations, feature recognition, simulation and 3D rendering, and I was wondering if C# can handle all of that without a great performance loss. Is it a utopia to rewrite such algorithms in C#, or should that language only deal with the UI? We are not talking about game development here (Halo 3 or Call of Duty), so how much processing does CAD/CAM really need? Can anybody enlighten me on this matter? Most of my colleagues are hardcore C++ programmers, and although I program in C++, I love .NET, but I am having a hard time selling .NET to them for anything other than basic UI. Does it make sense to consider switching to .NET in such a field, or is it just not a wise idea? Thank you.

    Read the article

  • Floating point Endianness?

    - by cake
    Hi, I'm writing a client and a server for a real-time offshore simulator, and, as I have to send a lot of data through a socket, I'm using binary data to maximize the amount of data I can send. I already know about integer endianness, and how to use htonl and ntohl to circumvent endianness issues, but my application, as almost all simulation software, deals with a lot of floats. My question is: is there some issue of endianness when dealing with binary formats of floating point numbers? I know that all the machines where my code will run use the IEEE implementation of floating point, but is there some endianness issue when dealing with floats? Since I only have access to machines with the same endianness, I cannot test this by myself. So I'll be glad if someone can help me with this. Thanks in advance.
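
    For the common case, a hedged sketch of how this concern is usually handled, assuming both machines use 32-bit IEEE-754 floats and that float byte order matches integer byte order (true on mainstream CPUs): reinterpret the float's bit pattern as a 32-bit integer, push that through htonl/ntohl, and rebuild the float on the other side. memcpy is used to avoid strict-aliasing issues.

        #include <arpa/inet.h>   // htonl / ntohl (POSIX; winsock2.h on Windows)
        #include <cstdint>
        #include <cstring>

        // Serialize a float in network byte order via its 32-bit bit pattern.
        uint32_t float_to_net(float value) {
            uint32_t bits;
            std::memcpy(&bits, &value, sizeof(bits));
            return htonl(bits);
        }

        float net_to_float(uint32_t wire) {
            uint32_t bits = ntohl(wire);
            float value;
            std::memcpy(&value, &bits, sizeof(value));
            return value;
        }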

    Read the article

  • What is the easiest, most straightforward way of telling which version performs better?

    - by Peter Perhác
    I have an application which I have refactored so that I believe it is now faster. One can't possibly feel the difference, but in theory the application should run faster. Normally I would not care, but as this is part of my project for my master's degree, I would like to support my claim that the refactoring did not only lead to improved design and 'higher quality', but also to an increase in the performance of the application (a small toy thing, a train set simulation). I have toyed with the latest VisualVM today for about four hours, but I couldn't get anything helpful out of it. There isn't (or I haven't found) a way to simply compare the profiling results taken from the two versions (pre- and post-refactoring). What would be the easiest, most straightforward way of simply telling the slower from the faster version of the application? The difference between the two must have had some impact on the performance. Thank you.

    Read the article

  • Parse file to hash in Ruby

    - by Taschetto
    I'm a Ruby newcomer who's trying to read a text file (a Valgrind simulation output) like this:

        --------------------------------------------------------------------------------
        Profile data file 'temp/gt_1024_2_16.out'
        --------------------------------------------------------------------------------
        I1 cache:         1024 B, 16 B, 2-way associative
        D1 cache:         32768 B, 64 B, 8-way associative
        LL cache:         3145728 B, 64 B, 12-way associative
        Profiled target:  bash run.sh
        Events recorded:  Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
        Events shown:     Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
        Event sort order: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
        Thresholds:       99 0 0 0 0 0 0 0 0
        Include dirs:
        User annotated:
        Auto-annotation:  off
        --------------------------------------------------------------------------------
        Ir        I1mr    ILmr  Dr      D1mr   DLmr  Dw      D1mw  DLmw
        --------------------------------------------------------------------------------
        1,894,017 246,981 2,448 519,124 4,691  2,792 337,817 1,846 1,672  PROGRAM TOTALS

        // other data

    I want to extract the PROGRAM TOTALS table and put it into a hash. Something like...

        myHash = { :Ir => 1894017, :I1mr => 246981, ILmr => 2448, ..., DLmw => 1672 }

    What are the best options for doing this? Could the CSV classes help me out? Thanks a bunch.

    Read the article

  • Running [R] on a Netbook

    - by Thomas
    I am interested in purchasing a netbook to do field research in another country. My hardware specifications for the netbook are fairly basic:

      • Be rugged enough to survive a bit of wear and tear
      • Fairly fast processing (the ability to upgrade from 1GB of RAM to 2GB)
      • A battery life of longer than 6 hours
      • At least a 10 inch screen
      • A decent camera for Skyping

    However, I am mainly concerned about being able to do basic statistical analysis in conjunction with R:

      • Be able to run a spreadsheet program to do basic data input (like Excel or Open Office)
      • Use R to do basic data analysis (regression, some simulation (nothing crazy), data cleaning, and some of the functionality)
      • Word processing (Word or Open Office)

    Do you have any suggestions on which models or brands might fit my needs? Some of the models I am considering:

      • Samsung NB-30
      • Toshiba NB 305
      • Asus Eee PC 1005HA
      • Lenovo S10-2

    Does anyone use R on a netbook, and if so, do you have any recommendations on how best to optimize it? This article from Lifehacker mentions some OSes. Does anybody use these in conjunction with R? Any help would be much appreciated.

    Read the article

  • Junit test that creates other tests

    - by Benju
    Normally I would have one JUnit test that shows up in my integration server of choice as one test that passes or fails (in this case I use TeamCity). What I need for this specific test is the ability to loop through a directory structure, testing that our data files can all be parsed without throwing an exception. Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 files out of 30,000 fail, I can see which 12 failed, not just that one test failed, threw a RuntimeException and stopped the run. I realize that this is not a true "unit" test way of doing things, but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files. Any suggestions?

    Read the article

  • How can I make a view get bigger again as soon as the status bar goes back to normal height after a call?

    - by Thanks
    In the simulator I went to the Hardware menu and activated the simulation of the taller status bar shown during a phone call. Now, I tried to make a view in my nib that takes up the whole screen. As soon as the status bar gets smaller, I want my view to get bigger, so it uses that space up there. But regardless of any autoresizing settings, my view stays pressed down after the status bar gets smaller. There is an empty slot left where the status bar was after hanging up the call. What is that actually supposed to be? Is my app recognizing the status bar as a view, or is the status bar indeed making my screen smaller? I mean, does it mess around with my views as if it were a view itself, or do my views not know about a status bar, only about a smaller screen size when the status bar gets bigger? How do you get your views big again when the status bar returns to normal height?

    Read the article

  • What happens when I MPI_Send to a process that has finished?

    - by nieldw
    What happens when I MPI_Send to a process that has finished? I am learning MPI and writing a small sugar-distribution simulation in C. When the factories stop producing, those processes end. When warehouses run empty, they end. Can I somehow tell that a shop's order to a warehouse did not succeed (because the warehouse process has ended) by looking at the return value of MPI_Send? The documentation doesn't mention a specific error code for this situation, only that no error is returned on success. Can I do:

        if (MPI_Send(...)) {
            ... /* destination has ended */ ...
        }

    and disregard the error code? Thanks
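
    One hedged detail that bears on the return-value idea: by default MPI treats errors as fatal (MPI_ERRORS_ARE_FATAL), so MPI_Send never gets the chance to return an error code unless the communicator's error handler is first switched to MPI_ERRORS_RETURN. Even then, sending to a rank that has already finished is erroneous in MPI and is not guaranteed to be reported this way, so a common alternative is to have each warehouse send an explicit "I'm closed" message before it ends. The snippet below only shows the error-handler mechanics; the destination rank and tag are made up.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);

            // Ask for return codes instead of aborting the whole job on error.
            MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

            int order = 42;
            int rc = MPI_Send(&order, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
            if (rc != MPI_SUCCESS) {
                char msg[MPI_MAX_ERROR_STRING];
                int len = 0;
                MPI_Error_string(rc, msg, &len);
                fprintf(stderr, "send failed: %s\n", msg);
            }

            MPI_Finalize();
            return 0;
        }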

    Read the article

  • Problems with the enterFrame event

    - by user434565
    Hey guys, I have been developing a game using Flex and used the Timer class to keep the main loop going. However, when I tried using the enterFrame event to drive the main loop, there were a few problems. First of all, the physics simulation seemed way too fast. Is the enterFrame event called more than once per frame? I set the application's global frame rate to 24, so shouldn't the application fire the event every 1/24 of a second? The second problem is that when the game runs like this, some MXML components that are added are not shown. I have absolutely no idea why this happens. Help me please?!? Thanks.

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract:

    "… As we build the Technical Computing initiative, we will invest in three core areas:

    1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.

    2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.

    3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …"

    Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by Brian Dayton
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do.

    Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Weekly. The article, given FCW's readership and the topic, is obviously focused on the public sector and US Federal policies. However, it touches on some broader issues that impact the private sector as well, and which are applicable to any government and country/region, such as:

      • How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?

      • What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That Got Me Thinking

      • How would it impact Oracle's customers? I know they have business continuity plans. Is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...

      • How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise.

    But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The game play consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better.

    I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:

      • The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess)
      • The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces)

    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplying by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

      1. Player starts to drag a piece
      2. UI asks game logic for the legal movement area of that piece and lets the player drag it within that area
      3. Player lets go of a piece
      4. UI snaps the piece to the grid (so that it is at a valid logical position)
      5. UI tells game logic the new logical position (via mutator methods, which I'd rather avoid)

    I'm not quite happy with that:

      • I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid.
      • I don't like the fact that the UI tells the game logic about the new state. I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

      1. Is such a physics layer common, or is it more typical to have the game logic layer do this?
      2. Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
      3. Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
      4. I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach, or asking for the legal movement area first?

    [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misleading, because each layer can consist of several classes.
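
    Not an answer to the layering question itself, but a small sketch of the command-style interface the post says it couldn't get working: the UI converts pixels to logical cells and only ever asks the logic layer to clamp or apply a move, so snapping and boundary rules live in one testable place. All names are invented for illustration, movement is restricted to one axis, and piece-versus-piece collisions are ignored for brevity.

        #include <algorithm>
        #include <map>

        struct Piece { int x; int width; };            // positions in logical grid cells

        class GameLogic {
        public:
            explicit GameLogic(int gridWidth) : gridWidth_(gridWidth), nextId_(0) {}

            int addPiece(int x, int width) {
                int id = nextId_++;
                Piece p = { x, width };
                pieces_[id] = p;
                return id;
            }

            // The UI queries this while dragging, to clamp the ghost position.
            int clampToLegal(int id, int desiredX) const {
                const Piece& p = pieces_.at(id);
                int maxX = gridWidth_ - p.width;
                return std::max(0, std::min(desiredX, maxX));
            }

            // The UI issues a move command when the drag ends; the logic owns the state.
            bool movePiece(int id, int targetX) {
                if (targetX != clampToLegal(id, targetX)) return false;
                pieces_[id].x = targetX;
                return true;
            }

            int pieceX(int id) const { return pieces_.at(id).x; }

        private:
            int gridWidth_;
            int nextId_;
            std::map<int, Piece> pieces_;
        };

        // UI side: convert pixels to cells, then delegate every rule to the logic.
        //   int cellX = pixelX / cellSizePx;
        //   logic.movePiece(id, logic.clampToLegal(id, cellX));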

    Read the article
