Search Results

Search found 12087 results on 484 pages for 'game mechanics'.

  • How to swap memory back in from the page file to physical memory in Windows all at once (like Linux swapoff)

    - by Arnout
    Is there a way to swap memory back in on a Windows PC, i.e. to put all the data that was pushed out to the page file (or swap, whichever term you prefer) back into physical memory? On Linux, one can easily do this with swapoff /dev/sdaX, where X is the swap partition. On Windows, it seems to ask me to reboot each time. The reason I'd like to do this is that, even though swapping data out to the page file lets me play a resource-hungry game entirely in physical RAM, once I stop the game all my other programs run slowly. This is of course normal: the programs were pushed into the page file because my RAM was too small, and every memory access to them after gaming hits a hard page fault, with major delays and some frustration as a consequence. That frustration could easily be avoided by simply letting the PC copy all the data back into physical memory for a minute or so, and then resuming work on a fast machine, rather than having to endure the slowness while working. Thanks in advance for any advice on this! Kind regards
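
    For reference, the Linux behaviour mentioned above can be scripted; here is a minimal sketch, assuming root privileges and enough free RAM to hold everything currently swapped out. Windows exposes no comparable one-shot switch, which is the crux of the question.

      import subprocess

      def flush_swap_to_ram():
          # Disable every active swap area; the kernel must page everything back into RAM.
          subprocess.run(["swapoff", "-a"], check=True)
          # Re-enable swap so the system can page out again later.
          subprocess.run(["swapon", "-a"], check=True)

      if __name__ == "__main__":
          flush_swap_to_ram()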

    Read the article

  • Rapidly changing public IP addresses on certain networks?

    - by zenblender
    I run/develop an online game where many of our users are in southeast asia. I recently went to southeast asia and made an alarming discovery. Anywhere I got internet access, whether it was via 3G, a LAN in a hotel, or wifi in a cafe, both in Singapore and the Philippines, I noticed that my IP address was changing CONSTANTLY. I mean the public IP address, not the private one. I could load a page like whatismyip.com and just hit reload and see a new IP address show up every 5-10 seconds! This has lots of consequences for my online game, as many things "break" if the IP address changes for a given user. Basically, I would like to know more about this. Is there a name for the kind of network or router or paradigm that causes this, so I can read up on it? I don't understand WHY a network would function this way. Does it do this on purpose? Is it for security reasons? Is it to anonymize and protect the identity of the users? Or is it just an "old" method that is mostly obsolete in the rest of the world? Thanks for any info that will help me to understand.
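
    Whatever the underlying cause (a large shared provider NAT pool that rotates the visible address is a common one), the usual game-side mitigation is to stop keying sessions on the source IP. A minimal sketch of that idea, with hypothetical names, issuing each client an opaque token at login and looking sessions up by that token only:

      import secrets

      sessions = {}  # token -> player state; the source IP is never used as a key

      def login(username):
          token = secrets.token_hex(16)     # opaque session identifier
          sessions[token] = {"user": username}
          return token                      # client echoes this on every request

      def handle_request(token, payload):
          session = sessions.get(token)
          if session is None:
              raise PermissionError("unknown or expired session")
          # ... process payload for session["user"]; the client's IP may change freely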

    Read the article

  • How do I know what hardware to buy to meet my needs?

    - by Darth Android
    While Stack Exchange does not permit shopping recommendations, that doesn't rule out general advice on what to consider when buying hardware. So, instead of just telling those who ask what to buy that it's not allowed, let's tell them how to figure out what they need. When looking to build a computer, how do I know what to buy? How do I find out if a given CPU will be enough for a certain game or application that I want to run? How do I find out if a given graphics card will be enough for a certain game or application? What is important when looking at motherboards? How much memory do I need? How do I know how much wattage I need for a power supply? What size case do I need? What relevant standards do I need to read up on and be aware of? PCI, PCIe, SATA, USB 2.0, USB 3.0, etc. What "gotchas" do I need to be on the lookout for? Please keep responses generation-agnostic to ensure they will be helpful to our future users. :)
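
    For the power-supply sub-question in particular, a rough budget can be computed by summing the rated draw of each component and adding headroom. A small sketch with entirely hypothetical placeholder figures, not recommendations:

      # Hypothetical component power draws in watts (check the actual datasheets).
      draws = {
          "cpu_tdp": 95,
          "gpu_board_power": 170,
          "motherboard_and_ram": 50,
          "drives_and_fans": 30,
      }

      peak = sum(draws.values())
      headroom = 1.3            # ~30% margin keeps the PSU in its efficient range
      recommended = peak * headroom
      print(f"Estimated peak draw: {peak} W, suggested PSU rating: {recommended:.0f} W")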

    Read the article

  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before: Is turning off hard disks harmful? What's the effect of standby (spindown) mode on modern hard drives? Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics. Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically, if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down? ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. This study is no help here since all the disks surveyed were powered on 24/7.
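
    No per-cycle figure is offered here, but the break-even question at the end can at least be framed as arithmetic: if a spin cycle is assumed to cost a fixed slice of the drive's rated life, spinning down only pays off when the expected idle period exceeds that cost. A sketch with entirely hypothetical numbers and a deliberately simplistic linear-wear assumption:

      # Hypothetical inputs -- substitute figures you trust for your drive.
      rated_load_unload_cycles = 300_000      # e.g. a datasheet load/unload rating
      rated_power_on_hours = 5 * 365 * 24     # assume a 5-year powered-on design life

      # Assume both budgets are consumed linearly: one extra cycle "spends" this
      # many hours of the power-on budget.
      hours_per_cycle = rated_power_on_hours / rated_load_unload_cycles

      # Spinning down is only worthwhile if the disk would otherwise sit idle
      # longer than the life the extra cycle costs.
      break_even_seconds = hours_per_cycle * 3600
      print(f"Spin down only for idle periods longer than ~{break_even_seconds:.0f} s")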

    Read the article

  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently utilizing GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with the vendor software we are using, we currently have the volume unmounted on two of the boxes and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server. I then tried checking the normal mount (the directory that is exported on the NFS server) and I received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but then the shutdown appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was hung trying to unmount GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
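
    On RHEL 5 the first places to look are the kernel ring buffer and /var/log/messages for SCSI, device-mapper/multipath, and GFS2 errors around the time of the hang. A small, purely illustrative sketch (written for a current Python; the keywords and paths are the usual defaults and may need adjusting) that pulls the relevant lines together:

      import re
      import subprocess

      KEYWORDS = re.compile(r"scsi|sd[a-z]|device-mapper|multipath|dm-|gfs2|i/o error",
                            re.IGNORECASE)

      def grep_log(path):
          try:
              with open(path, errors="replace") as fh:
                  return [line.rstrip() for line in fh if KEYWORDS.search(line)]
          except OSError as exc:
              return [f"could not read {path}: {exc}"]

      def main():
          # The kernel ring buffer often shows the original SCSI/FC errors.
          dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
          hits = [line for line in dmesg.splitlines() if KEYWORDS.search(line)]
          hits += grep_log("/var/log/messages")
          print("\n".join(hits) if hits else "no matching log lines found")

      if __name__ == "__main__":
          main()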

    Read the article

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do: 1.) Push a changeset to a site in IIS. 2.) Don't interrupt the users. 3.) Be able to roll back effortlessly. So, there are a few things that I know have to happen: 1.) Out-of-proc session - handled. 2.) Out-of-proc cache - handled. So the questions that remain: 1.) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online. 2.) How do I roll back effortlessly? I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?

    Read the article

  • What do I need to consider when buying hardware to meet my needs?

    - by Darth Android
    I'm looking to build a new computer from the ground up. I'm not sure what to look out for and need guidance on how to pick the hardware needed to construct my new rig. How do I know what to buy? How do I find out if a given CPU will be enough for a certain game or application that I want to run? How do I find out if a given graphics card will be enough for a certain game or application? What is important when looking at motherboards? How much memory do I need? How do I know how much wattage I need for a power supply? What size case do I need? What relevant standards do I need to read up on and be aware of? PCI, PCIe, SATA, USB 2.0, USB 3.0, etc. What "gotchas" do I need to be on the lookout for? Please keep responses generation-agnostic to ensure they will be helpful to our future users. While Stack Exchange does not permit shopping recommendations, that doesn't rule out general advice on what to consider when buying hardware. So, instead of just telling those who ask what to buy that it's not allowed, let's tell them how to figure out what they need. This question was Super User Question of the Week #20. Read the June 20, 2011 blog entry for more details, or submit your own Question of the Week.

    Read the article

  • Implications of disabling the AMD Phenom's TLB patch?

    - by DMA57361
    I'm currently running an AMD Phenom X4 9600 processor (yeah, it's aging a bit, but other recent problems mean it's not getting upgraded in the immediate future), which happens to be one of the chips that suffer from the TLB errata. I recall that the first time I played with disabling the TLB patch (probably over a year ago, while playing a game that had a severe performance problem, such that it was almost unplayable unless the patch was disabled) I had at least one BSOD, but I can't remember them being particularly frequent. However, because of the decreased stability, I stopped disabling the patch once I was done with the game. Now, after some recent hardware changes, I was experiencing much worse performance than expected from the new hardware under some circumstances, and the TLB patch jumped to mind - after testing, I found that disabling the patch would improve the performance to expected levels. I'm now wondering if it's worthwhile always having the patch disabled to avoid any potential slowdowns cropping up in the future, or if it is too dangerous. Everything I read states that the bug, when not patched, can cause a system lock-up in "rare circumstances". So, with the TLB patch disabled: How frequently should system lock-ups be expected? Do we know what the circumstances that trigger the lock-ups are? (Don't worry too much about being highly technical, but essentially I wonder if the chip is more vulnerable under heavy load, or heavy memory usage, etc.) Are there any secondary problems I should be aware of? (Don't include things that are characteristic of all lock-ups, please)

    Read the article

  • iptables - Accept Arbitrary Packets

    - by Asad Moeen
    I've achieved a lot in blocking attacks on game servers, but I'm stuck on something. I've blocked the major requests the game server accepts in the form "\xff\xff\xff\xff", which can be followed by the actual queries like getstatus or getinfo to make something like "\xff\xff\xff\xff getstatus ", but I see that other queries, if sent to the game server, will cause it to reply with a "disconnect" packet at the same rate as the input, so if the input rate is high, the high output of "disconnect" replies might lag the server. Hence I want to block all queries except the ones actual clients use, which I suppose are in the form "\xff\xff\xff\xff" or ...., so I tried using this rule: -A INPUT -p udp -m udp -m u32 ! --u32 0x1c=0xffffffff -j ACCEPT -A INPUT -p udp -m udp -m recent --set --name Total --rsource -A INPUT -p udp -m udp -m recent --update --seconds 1 --hitcount 20 --name Total --rsource -j DROP Now, the rule does accept the clients, but it only blocks requests in the form "\xff\xff\xff\xff getstatus " (to which the game server replies with status) and not just "getstatus " (to which the game server replies with a disconnect packet). So I suppose the accept rule is accepting the simple "string" queries as well. I actually want it to also block the non-(\xff) queries. So how do I modify the rule?

    Read the article

  • Building vs buying a server for an academic lab [closed]

    - by Roy
    I'm looking for advice on the classic build vs. buy question. We need a new Linux server to run Matlab computations on in our lab (academic). The Matlab Parallel Computing Toolbox licence allows up to 12 local workers, so we are aiming at a 12-core server with 4 GB of memory per core (48 GB total). The system will have an SSD for the OS and a RAID-5 array (4x2TB) for data. I looked around and found a (relatively) cheap vendor, Silicon Mechanics, that offers a system to our liking (specs below) for $6732. However, buying the components from Newegg costs only $4464! The difference is $2268, which is 50% of the base cost. If buying from a company can be thought of as a sort of insurance, my premium is basically 50% of the base cost, which to me sounds like a lot. Of course any downtime is bad, but the work is not "mission critical", i.e. if it takes a few days to fix when it breaks, it's not the end of the world. If it takes weeks to months, then it's a problem. If it breaks 2-3 times in 3 years, not too bad. If it breaks every month, not good. In terms of build experience, I set up a Linux cluster in grad school (from existing computers) and I build my home PCs, but I have never built a server before. The server components I'm thinking about: 1 x SUPERMICRO SYS-7046T-6F 4U Tower Server Barebone Dual LGA 1366 Intel 5520 DDR3 1333/1066/800 ($1,050) 12 x Kingston 4GB 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) ECC Unbuffered Server Memory ($420) 2 x Intel Xeon E5645 Westmere-EP 2.4GHz LGA 1366 80W Six-Core ($1,116) 4 x Seagate Constellation ES 2TB 7200 RPM SATA 6.0Gb/s 3.5" ($1,040) 1 x SAMSUNG Internal DVD Writer Black SATA ($20) 1 x Intel 520 Series 2.5" 180GB SATA III MLC SSD ($300) 1 x LSI LSI00281 PCI-Express 2.0 x8 MD2 Low profile SATA / SAS MegaRAID SAS 9260CV-4i Controller Card ($695)

    Read the article

  • What to do with old laptop screens?

    - by Lord Torgamus
    This question is inspired by another SU question I came across earlier today: What to do with old hard drives? It made me think about two long-dead laptops I have with perfectly good screens still inside. One is a Dell Inspiron 5100 and the other is an Averatec E1200, but responses need not be geared towards those particular models' screens. Rules, based heavily on the original question's: Objectives and suggestions to keep in mind when you post an answer : Should showcase your geekiness, be plain ol' fun, serve a social purpose or benefit the community. Your answer need not be limited to only one screen. For a really good answer, I'll go out and buy additional leftover screens. Your answer need not be limited to one project per screen. If additional accessories need be purchased, make sure they are common. Don't tell me to get a moon rock or something. The projects you suggested should serve a useful purpose; art is nice, but functional art is way better. Thanks in advance, folks. EDIT: Found another related question. Fun projects to do with an old 17" LCD monitor EDIT 2: I, for one, am enjoying the new outpouring of creativity here. Best fifty bucks... I mean, rep points... I ever spent. EDIT 3: That does it. At the end of the week, there was a tie for most votes between the accepted answer and the game platform answer. The game platform answer was cooler, but less reasonable as a project to actually do; in other words, it was more moon rocky. Unfortunately, I think fencepost had the best comment on the topic, which is that displays on their own have no good interface. Thanks for playing, everyone!

    Read the article

  • WD Caviar Green Extremely Slow

    - by Steven
    I am encountering a really weird problem with my WD Caviar Green HDD. First of all, I have 2 HDDs in my desktop: one 160GB Seagate holding my Win7 Ultimate x64, and the problematic one, a 1.5TB WD Caviar Green used for storage. My problem is kind of weird: when I transfer files from my Seagate (C:) to my WD (D:), the speed is good (50-60MB/s). The problem arises when I transfer too "many" large files - the transfer speed goes straight down to kilobytes/s. After I cancel the transfer and access D:, even entering a folder requires loading for like 10 seconds. The problem doesn't only arise when I am transferring files to D:; it seems like my WD can't handle much activity at all. For instance, last time I installed a game on D: I would get a lot of lag after playing for some time. When the same game is installed on C:, no problem arises. Does anyone know what the problem is? P.S. There was one temporary workaround that I tried: after the "situation" occurs, I access as many folders on D: as I can and let them load; repeating this and giving it some time brings D: back to speedy transfers. However, large transfers cause the situation to happen again. Does it have something to do with the cache?

    Read the article

  • Switched to new router and now experiencing lag?

    - by Mr_CryptoPrime
    I switched from a Dynex 802.11b/g router to a Netgear 802.11b/g/n just yesterday. My router is downstairs because the phone line upstairs doesn't work, but my PS3 is still upstairs (SOCOM: Confrontation is the game I am experiencing issues with). I have done everything I can to make sure the connection is solid, and I have checked the signal status: it has been as high as 80% and usually lingers at about 60%. I thought about upgrading my bandwidth from 1.5Mbps to 7Mbps, but I am guessing something is wrong if it worked fine before. Now the game seems more laggy and my voice chat is choppy. Others seem to receive my voice data fine, because I can hear my own feedback clearly from other players (if you are in close proximity to another player and speak, and their volume is loud enough, sometimes you can hear yourself). I wonder if port forwarding or setting up a DMZ would fix it, but I am not sure and don't quite know how to do it. Has anyone else ever experienced this when switching routers? What did you do to fix it? Thanks!

    Read the article

  • Rendering a frame is producing noise from speakers in Windows and Linux

    - by Robber
    When any hardware accelerated application is rendering a frame (or many of them) a very short noise is coming from my speakers. This can be a game, a WebGL application or XBMC. When the application/game is rendering many frames per second (like most of them do) the noise is a continuous buzzing that gets higher pitched with higher framerates. This applies to Linux and Windows, so I'd assume it's a hardware problem. The current hardware in the PC is: CPU: Core2Quad Q9550 GPU: Radeon HD 5770 RAM: 2x2GB DDR2 Motherboard: Asus P5QLD PRO PSU: be quiet! Pure Power 530W Screen and speakers: Old 720p LCD TV connected via VGA and aux cable Muting the TV stops the noise, muting Windows doesn't. I tried replacing the PSU first (used a Tagan 700W PSU before) because I thought it was a power problem. It wasn't. I tried replacing the motherboard (used a ASUS P5B SE before) next because I thought it was a sound card problem. It wasn't. I tried the GPU in a different PC because I thought it was a broken graphics card. It worked perfectly fine in the other PC. I thought it might be interference, but moving the audio cable around changes absolutely nothing. I tried using an HDMI cable instead and that did work, but is not an option since my TV has only one HDMI input and I need that for my PS3.

    Read the article

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me. Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code, and then the documentation to update the client-facing docs with what the new version numbers should be. I would like to streamline this process. My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that using the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or use sed to replace the placeholder value with the intended value? The online documentation for Perforce that I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
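
    For the "where are the files at that point" part: if I recall the trigger documentation correctly, a change-content trigger can read the in-flight revisions through the server itself, using p4 print with the @=changelist revision specifier, rather than hunting for a temporary directory. A rough sketch of the read-and-locate half; the placeholder token and the idea of passing %changelist% as the first argument are assumptions here, and writing the edited content back at this stage remains the open question:

      import subprocess
      import sys

      PLACEHOLDER = "#####PERFORCE##CHANGELIST##NUMBER#####"   # hypothetical marker

      def pending_files(change):
          # List the files that are part of the in-flight submit.
          out = subprocess.run(["p4", "files", f"@={change}"],
                               capture_output=True, text=True, check=True).stdout
          return [line.split("#")[0] for line in out.splitlines() if line.strip()]

      def content(depot_path, change):
          # Read the submitted content of one file via the @=change revision specifier.
          return subprocess.run(["p4", "print", "-q", f"{depot_path}@={change}"],
                                capture_output=True, text=True, check=True).stdout

      def main():
          change = sys.argv[1]          # assumed to be passed by the trigger as %changelist%
          for path in pending_files(change):
              if PLACEHOLDER in content(path, change):
                  print(f"{path} still contains the placeholder; would substitute {change}")

      if __name__ == "__main__":
          main()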

    Read the article

  • Sinatra and XML POST request

    - by user292815
    I don't know is it my mistake or no. So i have that code: <code> post '/singin/get_token' do content_type :xml puts request.body.read puts xmlRequest xmlRequest = REXML::Document.new(request.body.read) ... </code> And when i post something like that: <code> <?xml version="1.0" encoding="utf-16"?><request xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><username>adsfasdf</username></request> </code> I receive that in my console: <code> 127.0.0.1 - - [12/Mar/2010 21:18:20] "POST /singin/get_token HTTP/1.1" 500 105872 0.1339 Iconv::InvalidCharacter - ">": /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `conv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `decode_iconv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:58:in `encoding=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:46:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:164:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `create_from' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:146:in `stream=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:123:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `build' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' zaiaku-game-server.rb:70:in `new' zaiaku-game-server.rb:70:in `block in <main>' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run'Iconv::InvalidCharacter: ">" /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `conv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `decode_iconv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:58:in `encoding=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:46:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:164:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `create_from' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:146:in `stream=' 
/Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:123:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `build' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' zaiaku-game-server.rb:70:in `new' zaiaku-game-server.rb:70:in `block in <main>' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:811:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:811:in `block in route' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:488:in `instance_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:488:in `route_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:477:in `block (2 levels) in route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:474:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:474:in `block in route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:453:in `each' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:453:in `route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:569:in `dispatch!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:388:in `block in call!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `instance_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `block in invoke' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `invoke' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:388:in `call!' 
/Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:377:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:928:in `block in call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:973:in `synchronize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:928:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:in `block in pre_process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `pre_process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in `process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:in `receive_data' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run_machine' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:in `start' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:896:in `run!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/main.rb:35:in `block in <top (required)>' !! Unexpected error while processing request: incompatible character encodings: ASCII-8BIT and UTF-8 <code>

    Read the article

  • Create a class that inherits DrawableGameComponent in XNA as a class with custom functions

    - by user3675013
    using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Media; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; namespace TileEngine { class Renderer : DrawableGameComponent { public Renderer(Game game) : base(game) { } SpriteBatch spriteBatch ; protected override void LoadContent() { base.LoadContent(); } public override void Draw(GameTime gameTime) { base.Draw(gameTime); } public override void Update(GameTime gameTime) { base.Update(gameTime); } public override void Initialize() { base.Initialize(); } public RenderTarget2D new_texture(int width, int height) { Texture2D TEX = new Texture2D(GraphicsDevice, width, height); //create the texture to render to RenderTarget2D Mine = new RenderTarget2D(GraphicsDevice, width, height); GraphicsDevice.SetRenderTarget(Mine); //set the render device to the reference provided //maybe base.draw can be used with spritebatch. Idk. We'll see if the order of operation //works out. Wish I could call base.draw here. return Mine; //I'm hoping that this returns the same instance and not a copy. } public void draw_texture(int width, int height, RenderTarget2D Mine) { GraphicsDevice.SetRenderTarget(null); //Set the renderer to render to the backbuffer again Rectangle drawrect = new Rectangle(0, 0, width, height); //Set the rendering size to what we want spriteBatch.Begin(); //This uses spritebatch to draw the texture directly to the screen spriteBatch.Draw(Mine, drawrect, Color.White); //This uses the color white spriteBatch.End(); //ends the spritebatch //Call base.draw after this since it doesn't seem to recognize inside the function //maybe base.draw can be used with spritebatch. Idk. We'll see if the order of operation //works out. Wish I could call base.draw here. } } } I solved a previous issue where I wasn't allowed to access GraphicsDevice outside the main Default 'main' class Ie "Game" or "Game1" etc. Now I have a new issue. FYi no one told me that it would be possible to use GraphicsDevice References to cause it to not be null by using the drawable class. (hopefully after this last bug is solved it doesn't still return null) Anyways at present the problem is that I can't seem to get it to initialize as an instance in my main program. Ie Renderer tileClipping; and I'm unable to use it such as it is to be noted i haven't even gotten to testing these two steps below but before it compiled but when those functions of this class were called it complained that it can't render to a null device. Which meant that the device wasn't being initialized. I had no idea why. It took me hours to google this. I finally figured out the words I needed.. which were "do my rendering in XNA in a seperate class" now I haven't used the addcomponent function because I don't want it to only run these functions automatically and I want to be able to call the custom ones. In a nutshell what I want is: *access to rendering targets and graphics device OUTSIDE default class *passing of Rendertarget2D (which contain textures and textures should automatically be passed with a rendering target? ) *the device should be passed to this function as well OR the device should be passed to this function as a byproduct of passing the rendertarget (which is automatically associated with the render device it was given originally) *I'm assuming I'm dealing with abstracted pointers here so when I pass a class object or instance, I should be recieving the SAME object , I referenced, and not a copy that has only the lifespan of the function running. 
*The purpose of all these options: I want to initialize new 2D textures on the fly so I can customize tile clipping, as well as the X and Y offsets of where a whole texture will be rendered and the X and Y offsets of where tiles will be rendered on that surface. That is why. I'll also be doing region-based lighting effects per tile, or even per 8x8-pixel block - we'll see. I'll also be doing sprite rotations on the whole texture, then copying it again to a circular masked texture, and then making a second copy of only the solid tiles for masked, rotated collisions on sprites. I'll be checking the masked pixels for my collision, possibly using raycasting to check for collisions in those areas. The sprite stays in the center when this rotation happens. Here is a detailed diagram: http://i.stack.imgur.com/INf9K.gif I'll be using Texture2D for steps 4-6, and I suppose for step 1 as well. On top of that, the clipping size (i.e. the square rendered) will be able to shrink or grow on a per-frame basis, so I can't use the same static size for my main Texture2D, and I can't use just the backbuffer or we get the annoying flicker. I will also have multiple instances of the renderer class so that I can freely pass textures around as if they were playing cards (in a sense), layering them on top of each other, cropping them how I want, and then using SpriteBatch to simply draw them at the locations I want. Hopefully this makes sense; yes, I am planning on using alpha blending, but only after all tiles have been drawn. The masked collision is important, and yes, I am avoiding doing math on the tile rendering and instead resorting to image manipulation in video memory, which is why I need this to work the way I intend and not in the default way that XNA seems to handle graphics. Thanks to anyone willing to help. I dislike the game-component form that was offered, because then I have to rely on the static presence of an update function. What if I want to kill that update function or that object, but keep it in memory, just temporarily inactive? I'm making the assumption here that the update function of one of these game components is called automatically? Anyway, this is as detailed as I can make this post; hopefully someone can help me solve the issue. Rather than being told "don't do it this way" (which is what a few people told me, but they don't understand the actual goal I have in mind): I'm trying to create, basically, a library where I can copy images freely no matter the size - I just have to specify the size in the function, and then as long as a reference to that object exists it should be kept alive, right? Anything else? I don't know. I understand object-oriented coding, but I don't understand XNA, and it's beginning to feel impossible to do anything custom in it without putting all my rendering code into the draw function of the main class. tileClipping.new_texture(GraphicsDevice, width, height) tileClipping.Draw_texture(...)

    Read the article

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background: tl;dr at the end League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where the League is situated and then the magic happens: @start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1" This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand, (split for convenience) /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \ /Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1 Running this, however, just starts the LoLLauncher, which closes all the active League processes. The exactly same thing happens if I just call /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends Next I tried seeing what actually happens when Spectator mode is initiated so I ran $ ps -axf | grep -i lol which showed UID PID PPID C STIME TTY TIME CMD 503 3085 1 0 Wed02pm ?? 0:00.00 (LolClient) 503 24607 1 0 9:19am ?? 0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app 503 24610 24607 0 9:19am ?? 1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher 503 24611 24610 0 9:19am ?? 1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393 503 24927 24610 0 9:44am ?? 0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1 Of Interest: there is (LolClient) which I cannot kill by it's PID. UserKernel updateandrun lol_launcher LoLLauncher.app is launched first. LoLLauncher is launched by the UserKernel (as we can see from the PPID) The very long command (PID: 24927) is how Spectator mode is launched, and is also launched by UserKernel. Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that Spectator mode connects to Riot instead of OP.GG's spectate server. I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than mine own). Next I ran dtruss -a -b 100m -f -p $PID on everything I could think of: the LolClient, the LolLauncher and the UserKernel and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options. 
Finally, I ran lsof | grep -i lol to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LolLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode. It obviously can be done, since Spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to equals. Unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is if there are any other things I can try to figure out how to launch Spectator mode myself. tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it? PS: Please let me know how I can improve this question or its formatting, I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(

    Read the article

  • TemplateBinding with Converter - what is wrong?

    - by MartyIX
    I'm creating a game desk. I wanted to specify field size (one field is a square) as a attached property and with this data set value of ViewPort which would draw 2x2 matrix (and tile mode would do the rest of game desk). I'm quite at loss what is wrong because the binding doesn't work. Testing line in XAML for the behaviour I would like to have: <DrawingBrush Viewport="0,0,100,100" ViewportUnits="Absolute" TileMode="None"> The game desk is based on this sample of DrawingPaint: http://msdn.microsoft.com/en-us/library/aa970904.aspx (an image is here) XAML: <Window x:Class="Sokoban.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:Sokoban" Title="Window1" Height="559" Width="419"> <Window.Resources> <local:FieldSizeToRectConverter x:Key="fieldSizeConverter" /> <Style x:Key="GameDesk" TargetType="{x:Type Rectangle}"> <Setter Property="local:GameDeskProperties.FieldSize" Value="50" /> <Setter Property="Fill"> <Setter.Value> <!--<DrawingBrush Viewport="0,0,100,100" ViewportUnits="Absolute" TileMode="None">--> <DrawingBrush Viewport="{TemplateBinding local:GameDeskProperties.FieldSize, Converter={StaticResource fieldSizeConverter}}" ViewportUnits="Absolute" TileMode="None"> <DrawingBrush.Drawing> <DrawingGroup> <GeometryDrawing Brush="CornflowerBlue"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="0,0,100,100" /> </GeometryDrawing.Geometry> </GeometryDrawing> <GeometryDrawing Brush="Azure"> <GeometryDrawing.Geometry> <GeometryGroup> <RectangleGeometry Rect="0,0,50,50" /> <RectangleGeometry Rect="50,50,50,50" /> </GeometryGroup> </GeometryDrawing.Geometry> </GeometryDrawing> </DrawingGroup> </DrawingBrush.Drawing> </DrawingBrush> </Setter.Value> </Setter> </Style> </Window.Resources> <StackPanel> <Rectangle Style="{StaticResource GameDesk}" Width="300" Height="150" /> </StackPanel> </Window> Converter and property definition: using System; using System.Collections.Generic; using System.Text; using System.Windows.Controls; using System.Windows; using System.Diagnostics; using System.Windows.Data; namespace Sokoban { public class GameDeskProperties : Panel { public static readonly DependencyProperty FieldSizeProperty; static GameDeskProperties() { PropertyChangedCallback fieldSizeChanged = new PropertyChangedCallback(OnFieldSizeChanged); PropertyMetadata fieldSizeMetadata = new PropertyMetadata(50, fieldSizeChanged); FieldSizeProperty = DependencyProperty.RegisterAttached("FieldSize", typeof(int), typeof(GameDeskProperties), fieldSizeMetadata); } public static int GetFieldSize(DependencyObject target) { return (int)target.GetValue(FieldSizeProperty); } public static void SetFieldSize(DependencyObject target, int value) { target.SetValue(FieldSizeProperty, value); } static void OnFieldSizeChanged(DependencyObject target, DependencyPropertyChangedEventArgs e) { Debug.WriteLine("FieldSize just changed: " + e.NewValue); } } [ValueConversion(/* sourceType */ typeof(int), /* targetType */ typeof(Rect))] public class FieldSizeToRectConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { Debug.Assert(targetType == typeof(int)); int fieldSize = int.Parse(value.ToString()); return new Rect(0, 0, 2 * fieldSize, 2 * fieldSize); } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { // should not be called in our example throw new 
NotImplementedException(); } } }

    Read the article

  • optimizing iPhone OpenGL ES fill rate

    - by NateS
    I have an Open GL ES game on the iPhone. My framerate is pretty sucky, ~20fps. Using the Xcode OpenGL ES performance tool on an iPhone 3G, it shows: Renderer Utilization: 95% to 99% Tiler Utilization: ~27% I am drawing a lot of pretty large images with a lot of blending. If I reduce the number of images drawn, framerates go from ~20 to ~40, though the performance tool results stay about the same (renderer still maxed). I think I'm being limited by the fill rate of the iPhone 3G, but I'm not sure. My questions are: How can I determine with more granularity where the bottleneck is? That is my biggest problem, I just don't know what is taking all the time. If it is fillrate, is there anything I do to improve it besides just drawing less? I am using texture atlases. I have tried to minimize image binds, though it isn't always possible (drawing order, not everything fits on one 1024x1024 texture, etc). Every frame I do 10 image binds. This seem pretty reasonable, but I could be mistaken. I'm using vertex arrays and glDrawArrays. I don't really have a lot of geometry. I can try to be more precise if needed. Each image is 2 triangles and I try to batch things were possible, though often (maybe half the time) images are drawn with individual glDrawArrays calls. Besides the images, I have ~60 triangles worth of geometry being rendered in ~6 glDrawArrays calls. I often glTranslate before calling glDrawArrays. Would it improve the framerate to switch to VBOs? I don't think it is a huge amount of geometry, but maybe it is faster for other reasons? Are there certain things to watch out for that could reduce performance? Eg, should I avoid glTranslate, glColor4g, etc? I'm using glScissor in a 3 places per frame. Each use consists of 2 glScissor calls, one to set it up, and one to reset it to what it was. I don't know if there is much of a performance impact here. If I used PVRTC would it be able to render faster? Currently all my images are GL_RGBA. I don't have memory issues. Here is a rough idea of what I'm drawing, in this order: 1) Switch to perspective matrix. 2) Draw a full screen background image 3) Draw a full screen image with translucency (this one has a scrolling texture). 4) Draw a few sprites. 5) Switch to ortho matrix. 6) Draw a few sprites. 7) Switch to perspective matrix. 8) Draw sprites and some other textured geometry. 9) Switch to ortho matrix. 10) Draw a few sprites (eg, game HUD). Steps 1-6 draw a bunch of background stuff. 8 draws most of the game content. 10 draws the HUD. As you can see, there are many layers, some of them full screen and some of the sprites are pretty large (1/4 of the screen). The layers use translucency, so I have to draw them in back-to-front order. This is further complicated by needing to draw various layers in ortho and others in perspective. I will gladly provide additional information if reqested. Thanks in advance for any performance tips or general advice on my problem!

    Read the article

  • Inherited variables are not reading correctly when using bitwise comparisons

    - by Shawn B
    Hey, I have a few classes set up for a game, with XMapObject as the base, and XEntity, XEnviron, and XItem inheriting it. MapObjects have a number of flags, one of them being MAPOBJECT_SOLID. My problem is that XEntity is the only class that correctly detects MAPOBJECT_SOLID. Both Items are Environs are always considered solid by the game, regardless of the flag's state. What is important is that Environs and Item should almost never be solid. Here are the relevent code samples: XMapObject: class XMapObject : public XObject { public: Uint8 MapObjectType,Location[2],MapObjectFlags; XMapObject *NextMapObject,*PrevMapObject; XMapObject(); void CreateMapObject(Uint8 MapObjectType); void SpawnMapObject(Uint8 MapObjectLocation[2]); void RemoveMapObject(); void DeleteMapObject(); void MapObjectSetLocation(Uint8 Y,Uint8 X); void MapObjectMapLink(); void MapObjectMapUnlink(); }; XEntity: class XEntity : public XMapObject { public: Uint8 Health,EntityFlags; float Speed,Time; XEntity *NextEntity,*PrevEntity; XItem *IventoryList; XEntity(); void CreateEntity(Uint8 EntityType,Uint8 EntityLocation[2]); void DeleteEntity(); void EntityLink(); void EntityUnlink(); Uint8 MoveEntity(Uint8 YOffset,Uint8 XOffset); }; XEnviron: class XEnviron : public XMapObject { public: Uint8 Effect,TimeOut; void CreateEnviron(Uint8 Type,Uint8 Y,Uint8 X,Uint8 TimeOut); }; XItem: class XItem : public XMapObject { public: void CreateItem(Uint8 Type,Uint8 Y,Uint8 X); }; And lastly, the entity move code. Only entities are capable of moving themselves. Uint8 XEntity::MoveEntity(Uint8 YOffset,Uint8 XOffset) { Uint8 NewY = Location[0] + YOffset, NewX = Location[1] + XOffset; if((NewY >= 0 && NewY < MAPY) && (NewX >= 0 && NewX < MAPX)) { XTile *Tile = GetTile(NewY,NewX); if(Tile->MapList != NULL) { XMapObject *MapObject = Tile->MapList; while(MapObject != NULL) { if(MapObject->MapObjectFlags & MAPOBJECT_SOLID) { printf("solid\n"); return 0; } MapObject = MapObject->NextMapObject; } } if(Tile->Flags & TILE_SOLID && EntityFlags & ENTITY_CLIPPING) { return 0; } this->MapObjectSetLocation(NewY,NewX); return 1; } return 0; } What is wierd, is that the bitwise operator always returns true when the MapObject is an Environ or an Item, but it works correctly for Entities. For debug I am using the printf "Solid", and also a printf containing the value of the flag for both Environs and Items. Any help is greatly appreciated, as this is a major bug for the small game I am working on.

    Read the article

  • Custom event loop and UIKit controls. What extra magic does Apple's event loop do?

    - by tequilatango
    Does anyone know or have good links that explain what iPhone's event loop does under the hood? We are using a custom event loop in our OpenGL-based iPhone game framework. It calls our game rendering system, calls presentRenderbuffer and pumps events using CFRunLoopRunInMode. See the code below for details. It works well when we are not using UIKit controls (as a proof, try Facetap, our first released game). However, when using UIKit controls, everything almost works, but not quite. Specifically, scrolling of UIKit controls doesn't work properly. For example, let's consider following scenario. We show UIImagePickerController on top of our own view. UIImagePickerController covers our custom view We also pause our own rendering, but keep on using the custom event loop. As said, everything works, except scrolling. Picking photos works. Drilling down to photo albums works and transition animations are smooth. When trying to scroll photo album view, the view follows your finger. Problem: when scrolling, scrolling stops immediately after you lift your finger. Normally, it continues smoothly based on the speed of your movement, but not when we are using the custom event loop. It seems that iPhone's event loop is doing some magic related to UIKit scrolling that we haven't implemented ourselves. Now, we can get UIKit controls to work just fine and dandy together with our own system by using Apple's event loop and calling our own rendering via NSTimer callbacks. However, I'd still like to understand, what is possibly happening inside iPhone's event loop that is not implemented in our custom event loop. - (void)customEventLoop { OBJC_METHOD; float excess = 0.0f; while(isRunning) { animationInterval = 1.0f / openGLapp->ticks_per_second(); // Calculate the target time to be used in this run of loop float wait = max(0.0, animationInterval - excess); Systemtime target = Systemtime::now().after_seconds(wait); Scope("event loop"); NSAutoreleasePool* pool = [[ NSAutoreleasePool alloc] init]; // Call our own render system and present render buffer [self drawView]; // Pump system events [self handleSystemEvents:target]; [pool release]; excess = target.seconds_to_now(); } } - (void)drawView { OBJC_METHOD; // call our own custom rendering bool bind = openGLapp->app_render(); // bind the buffer to be THE renderbuffer and present its contents if (bind) { opengl::bind_renderbuffer(renderbuffer); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } } - (void) handleSystemEvents:(Systemtime)target { OBJC_METHOD; SInt32 reason = 0; double time_left = target.seconds_since_now(); if (time_left <= 0.0) { while((reason = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, TRUE)) == kCFRunLoopRunHandledSource) {} } else { float dt = time_left; while((reason = CFRunLoopRunInMode(kCFRunLoopDefaultMode, dt, FALSE)) == kCFRunLoopRunHandledSource) { double time_left = target.seconds_since_now(); if (time_left <= 0.0) break; dt = (float) time_left; } } }

    Read the article
