Search Results

Search found 413 results on 17 pages for 'spawn'.

Page 15/17 | < Previous Page | 11 12 13 14 15 16 17  | Next Page >

  • Game currency conversion: efficient math

    - by Comradsky
    I have four text views named diamondText, goldText, silverText, and bronzeText, a money variable declared as unsigned int money, and an NSTimer that fires every 0.1 s and runs this function:

        -(void)updateMoney {
            money++;
            bronzeText.text  = [NSString stringWithFormat:@"%d", money];
            silverText.text  = [NSString stringWithFormat:@"%d", money % 10];
            goldText.text    = [NSString stringWithFormat:@"%d", money % 100];
            diamondText.text = [NSString stringWithFormat:@"%d", money % 1000];
        }

    Given that my currency is diamond = 10 gold = 10 silver = 10 bronze = 1: what would be the most efficient way to calculate and display the money labels? How would you store this variable: with Game Center and an NSDictionary, or Game Center and something else? More details are below if this is unclear. To clarify: bronze holds the last two digits, silver the next two, and so on. I understand I could use four ints or an array, but I would rather use this method unless there's a much more efficient way. Example: when money = 1000, bronzeText shows nothing, silverText shows 10, goldText shows nothing, and diamondText shows nothing. What other ways would you do this that you think would be more efficient? I will be calling a function (void)collisionDetector that detects whether my player.frame intersects a flyingObject.frame; if that object is a coin, it adds value to money and then calls (void)updateMoney. I'm just using the timer to test this and to spawn the flying objects.
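
    A worked sketch of the div/mod decomposition being discussed, in Python for brevity; it assumes each denomination occupies two decimal digits, as the post's clarification describes, and the function name is illustrative, not from the original post:

        # Hypothetical: decompose a single integer into denominations,
        # two decimal digits per denomination.
        def split_money(money):
            bronze = money % 100
            silver = (money // 100) % 100
            gold = (money // 10000) % 100
            diamond = money // 1000000
            return diamond, gold, silver, bronze

        print(split_money(1000))   # -> (0, 0, 10, 0): 1000 bronze is 10 silver

    This reproduces the post's own example (money = 1000 shows only silver = 10), and it only needs one stored integer, which is the representation the poster wants to keep.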

    Read the article

  • Collision between 2 objects of the same class

    - by user1826033
    Okay, so I have an enemy class (with rotation, position, texture and so on). I spawn a few enemies on the screen and they move around, but they overlap each other. So I tried to do a collision check between two enemies of the same class, but no matter what method I try, it isn't quite working. The best thing I tried was:

        foreach (Enemy enemy1 in enemies)
        {
            enemy1Pos = new Vector2(enemy1.position.X, enemy1.position.Y);
            foreach (Enemy enemy2 in enemies)
            {
                enemy2Pos = new Vector2(enemy2.position.X, enemy2.position.Y);
                if (Vector2.Distance(enemy2Pos, enemy1Pos) < 200)
                {
                    enemy1Pos += new Vector2(
                        (float)(enemy1.Speed * Math.Cos(enemy1.Rotation)),
                        (float)(enemy1.Speed * Math.Sin(enemy1.Rotation)));
                }
            }
        }

    This is not the exact code, so it might have some mistakes in it. Anyway, when I implemented this solution the enemies were not overlapping, so everything was fine on that part; but they were always moving to the right side of the screen. I've also looked up flocking etc., but I would like to know: how can I detect collision between two objects of the same class?
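
    A minimal sketch of the same-class pairwise check, in Python with hypothetical x/y attributes; the two points that matter are skipping the self-comparison and pushing along the vector between the pair instead of along the mover's heading (which is what drags everything in one direction):

        import math

        class Enemy:
            def __init__(self, x, y):
                self.x, self.y = x, y

        def separate(enemies, min_dist=200):
            for a in enemies:
                for b in enemies:
                    if a is b:
                        continue  # never test an object against itself
                    dx, dy = a.x - b.x, a.y - b.y
                    dist = math.hypot(dx, dy)
                    if 0 < dist < min_dist:
                        # push a directly away from b, along the line joining them
                        push = (min_dist - dist) / dist
                        a.x += dx * push
                        a.y += dy * push

        pair = [Enemy(0, 0), Enemy(50, 0)]
        separate(pair)
        print(pair[0].x, pair[1].x)   # the two have been pushed apart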

    Read the article

  • C++ Instantiate class and add name to array/vector then cycle through update functions from the array

    - by SD42
    I have a bunch of classes in a basic (and badly coded!) game engine that are currently instantiated in the code, and each has to have its individual update function called. What I want is to be able to create an instance of a class, add it to an array, and then cycle through the array calling the update function of each object. I'm unsure whether this is an impossible or spectacularly stupid way of trying to manage objects, so please tell me if it is. Currently I might manually instantiate the Enemy class a couple of times:

        Enemy enem1;
        Enemy enem2;

    I then have to update them manually in the game loop:

        enem1.update();
        enem2.update();

    What method should I use to be able to spawn and destroy instances of the enemy class during gametime? Is it possible to populate an array with instances and then do something like (and I'm aware this doesn't work):

        array[x].update();

    ...iterating through the contents of the array? Anything that even points me in the right direction would be greatly appreciated!
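
    The pattern being asked about is just a container of objects iterated each frame (in C++ the analogue would be a std::vector of objects or pointers); a minimal sketch in Python, with hypothetical class and method names:

        class Enemy:
            def __init__(self, name):
                self.name = name

            def update(self):
                print(self.name, "updated")   # per-frame behaviour goes here

        enemies = []                      # one container instead of named variables
        enemies.append(Enemy("enem1"))    # "spawn" during gametime
        enemies.append(Enemy("enem2"))

        for e in enemies:                 # the game loop's update pass
            e.update()

        del enemies[0]                    # "destroy" an instance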

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22 kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging.

    We have tried multiple things to solve this, gleaned from a few websites, e.g.:

    - Making the first part of file names unique to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - Windows File/Printer Sharing: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System > Advanced > Performance > Advanced)
    - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
    - Disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
    - Enable a large file system cache
    - Disable paging of the kernel code
    - IO Page Lock Limit
    - Turn off (or on) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)?

    We can reproduce the problem using IOMeter and a simple setup:

    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: must have at least 1 GB free space; sector size is 512 bytes).
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22 kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (Xeon 8-core rack server with 4 GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated.

    Richard

    Right-click and select 'Open Link in New Tab': ProcMon Result
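
    Since the pathological writes are intermittent, a tiny IOMeter-independent probe can also be used to catch outliers; a rough Python sketch that times 22 kB synchronous writes and flags the slow ones (file name, iteration count, and the 1-second threshold are arbitrary choices, not from the post):

        import os, time

        PATH = "latency_test.bin"        # hypothetical scratch file
        BLOCK = b"\0" * (22 * 1024)      # 22 kB, matching the post's write size

        worst = 0.0
        with open(PATH, "wb", buffering=0) as f:
            for i in range(10000):
                t0 = time.perf_counter()
                f.write(BLOCK)
                os.fsync(f.fileno())     # force the write through the cache
                dt = time.perf_counter() - t0
                worst = max(worst, dt)
                if dt > 1.0:
                    print("outlier: write %d took %.2f s" % (i, dt))
        print("max latency: %.3f s" % worst)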

    Read the article

  • LaunchDaemon causing Lion to hang on boot

    - by Brett
    I've got a Mac Mini 2011 which I intend to use for a few tasks, such as Plex and running a few VMs. I've installed VirtualBox along with XAMPP and phpvirtualbox, which all worked fine. However, getting this to run on startup is proving a real PITA! I'm at the moment trying to get vboxwebsrv running on boot. I've created a launchd plist within /Library/LaunchDaemons to run it, and it works fine... well, sort of. Lion, when booting, will show the spinning wheel and stop, never showing a GUI; however, if I remote in via Screen Sharing or SSH, I can log in fine and see that vboxwebsrv has launched successfully. Setting this plist to disabled makes Lion boot up fine again. Initially I thought it was due to it staying open, so I tried to add -b, which causes it to run in the background; this just caused launchd to constantly spawn new processes and didn't even fix my problem of Lion being stuck at the spinning wheel. Does anyone have any ideas? I'm losing my mind here!

    PLIST:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Disabled</key>
            <false/>
            <key>KeepAlive</key>
            <false/>
            <key>UserName</key>
            <string>vbox</string>
            <key>RunAtLoad</key>
            <true/>
            <key>OnDemand</key>
            <false/>
            <key>Label</key>
            <string>org.virtualbox.vboxwebsvc</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv</string>
            </array>
        </dict>
        </plist>
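
    Since a malformed plist can wedge launchd at boot, it may be worth machine-checking the file before installing it; a small sketch using Python's standard plistlib (the path is the one implied by the post's Label):

        import plistlib

        PATH = "/Library/LaunchDaemons/org.virtualbox.vboxwebsvc.plist"

        with open(PATH, "rb") as f:
            job = plistlib.load(f)        # raises an exception on malformed XML

        print(job["Label"], job["ProgramArguments"])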

    Read the article

  • Mongodb: why is my mongo server using two PID's?

    - by Lucas
    I started my mongo with the following command:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
        2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
        2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
        2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
        2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
        2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
        2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
        2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
        2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
        2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

        [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
        root   6540  0.0  0.2  33424  1664 pts/3  S+  08:52  0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
        root   6541  0.6  8.6 522140 52512 pts/3  Sl+ 08:52  0:00 mongod --dbpath /home/lucas/node/nodetest2/data
        lucas  6554  0.0  0.1   7836   876 pts/4  S+  08:52  0:00 grep mongo

    As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional info: I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
        about to fork child process, waiting until server is ready for connections.
        forked process: 6578
        ERROR: child process failed, exited with error number 1

        [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
        total 163852
        drwxr-xr-x 2 mongodb nogroup     4096 Jun  7 08:54 journal
        -rw------- 1 mongodb nogroup 67108864 Jun  7 08:52 local.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 08:52 local.ns
        -rwxr-xr-x 1 mongodb nogroup        0 Jun  7 08:54 mongod.lock
        -rw------- 1 mongodb nogroup 67108864 Jun  7 02:08 nodetest1.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 02:08 nodetest1.ns

    Also, my dbpath folder is not in its original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder; this was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.

    Read the article

  • Kickstart based installation of RHEL 6.4 from USB

    - by Peter
    I want to set up some brand new servers with RHEL 6.4. The servers do not have a DVD drive, so I have to use USB for the installation. I already have a custom ISO with a kickstart file that I use flawlessly on servers with a DVD. I used iso2usb to move the ISO to my USB. When I boot from the USB, the ks file is found and anaconda starts, but then it stops with the following error:

        "The installation source given by device ['sda1'] could not be found. Please check your parameters and try again"

    Notes: the USB IS sda. My custom ISO file is renamed to linux.iso by iso2usb and is present in the root directory of the USB. The kickstart file has the following entry:

        harddrive --partition=sda1 --dir=/

    Please help me to automate the installation with kickstart.

    Edit 1: This is the anaconda.log file:

        09:01:57,029 INFO : no /etc/zfcp.conf; not configuring zfcp
        09:01:57,259 INFO : created new libuser.conf at /tmp/libuser.4rAbps with instPath="/mnt/sysimage"
        09:01:57,259 INFO : anaconda called with cmdline = ['/usr/bin/anaconda', '--stage2', 'hd:sda1:///images/install.img', '--dlabel', '--kickstart', '/tmp/ks.cfg', '--graphical', '--selinux', '--lang', 'en_US.UTF-8', '--keymap', 'us', '--repo', 'hd:sda1:/']
        09:01:57,260 INFO : Display mode = g
        09:01:57,260 INFO : Default encoding = utf-8
        09:01:59,444 DEBUG : X server has signalled a successful start.
        09:01:59,446 INFO : Starting window manager, pid 1345.
        09:01:59,537 INFO : Starting graphical installation.
        09:01:59,741 INFO : Detected 7968M of memory
        09:01:59,741 INFO : Swap attempt of 7968M
        09:02:00,840 INFO : ISCSID is /usr/sbin/iscsid
        09:02:00,840 INFO : no initiator set

    Edit 2: This is the part of the anaconda log that indicates it found the USB etc.:

        09:01:47,918 INFO : starting STEP_STAGE2
        09:01:47,918 INFO : partition is sda1, dir is //images/install.img
        09:01:47,918 INFO : mounting device sda1 for hard drive install
        09:01:48,005 INFO : Path to stage2 image is /mnt/isodir///images/install.img
        09:01:54,214 INFO : mounted loopback device /mnt/runtime on /dev/loop0 as /tmp/install.img
        09:01:54,214 INFO : Looking for updates for HD in /mnt/isodir///images/updates.img
        09:01:54,214 INFO : Looking for product for HD in /mnt/isodir///images/product.img
        09:01:54,227 INFO : got stage2 at url hd:sda1:///images/install.img
        09:01:54,254 INFO : Loading SELinux policy
        09:01:54,700 INFO : getting ready to spawn shell now
        09:01:54,975 INFO : Running anaconda script /usr/bin/anaconda
        09:01:56,882 INFO : _Fedora is the highest priority installclass, using it
        09:01:56,921 INFO : Running kickstart %%pre script(s)
        09:01:56,922 WARNING : '/bin/sh' specified as full path
        09:01:56,926 INFO : All kickstart %%pre script(s) have been run

    Read the article

  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I have a very basic understanding of Linux systems, but I have a server which was set up a while ago to host some web apps. Recently I decided to test out and implement Sphinx, but unfortunately I can't get the install to work. I'm running a Debian Lenny distro, and when I try to install Sphinx it says:

        checking MySQL include files... configure: error: missing include files.
        ******************************************************************************
        ERROR: cannot find MySQL include files.

        Check that you do have MySQL include files installed. The package
        name is typically 'mysql-devel'.

        If include files are installed on your system, but you are still
        getting this message, you should do one of the following:
        1) either specify includes location explicitly, using --with-mysql-includes;
        2) or specify MySQL installation root location explicitly, using --with-mysql;
        3) or make sure that the path to 'mysql_config' program is listed in
           your PATH environment variable.

        To disable MySQL support, use --without-mysql option.
        ******************************************************************************

    I do have MySQL 5.1 installed, but I can't find the include files. And one more thing: I read around the net that I probably need libmysqlclient15-dev, but when I try to install that using apt-get I receive the following error:

        The following packages were automatically installed and are no longer required:
          libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3
          libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2
          libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d
          conkeror-spawn-process-helper libnss3-1d
        Use 'apt-get autoremove' to remove them.
        The following NEW packages will be installed:
          libmysqlclient15-dev
        0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded.
        Need to get 7590 kB of archives.
        After this operation, 26.3 MB of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          libmysqlclient15-dev
        Install these packages without verification [y/N]? Y
        Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 35.9.37.225 80]
        Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 149.20.20.6 80]
        Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb  404 Not Found [IP: 149.20.20.6 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Can you help me out by suggesting how to install the required packages and run Sphinx?

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis.

    Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 most CPU-hungry "main" processes are bound to one core each via CPU affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (going above 2k/s at peak times). Normal CPU usage for those servers is ~60% on 7 cores, with some minimal usage on the last (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mostly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID / disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some generally weird hardware behaviour. It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes).

    The host is never running out of memory or swapping, and no OOM reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment.

    What can I do to narrow down the problem? Where else should I look for an explanation?

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtpd daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page.

    My question: does anyone have access to a resource that describes these hooks, the components/delivery stages that each hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support?

    For example, given the requirement "if the recipient's primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice", my software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? MySQL?), and 3) implement the required action (connection termination). Additionally, there will probably be several ways to accomplish this, and another requirement would be to find the one that fits best (e.g. a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible).

    Real-world example: the apolicy policy server (which performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that when Postfix first receives an item of mail to be delivered, it creates a TCP connection to the policy server address:port to determine whether the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action is taken based on the server's response.

    Notes: 1) The Postfix architecture page describes some of this information in ASCII art; what I am hoping for is distilled, condensed reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;)

    Thanks!
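
    For context on the policy-delegation hook mentioned above: smtpd sends the policy service a block of name=value attribute lines terminated by an empty line, and expects a reply of the form "action=<result>" followed by an empty line. A minimal sketch of such a server in Python (the rule itself and the sender address are hypothetical, and the sketch handles one request per connection):

        import socketserver

        class PolicyHandler(socketserver.StreamRequestHandler):
            def handle(self):
                attrs = {}
                for raw in self.rfile:              # read attributes until the blank line
                    line = raw.decode().strip()
                    if not line:
                        break
                    key, _, value = line.partition("=")
                    attrs[key] = value
                # hypothetical rule: reject one sender, otherwise defer to later checks
                if attrs.get("sender") == "spam@example.com":
                    self.wfile.write(b"action=reject\n\n")
                else:
                    self.wfile.write(b"action=dunno\n\n")

        # the port matches the check_policy_service example in the post
        socketserver.TCPServer(("127.0.0.1", 10001), PolicyHandler).serve_forever()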

    Read the article

  • Setting font size of Closed Captions on iPhone using ffmpeg or mencoder

    - by forthrin
    Does anyone know how to either:

    1. make ffmpeg set the subtitle font size in the output video file, or
    2. make mencoder produce an iPhone-compatible video file (with subtitles)?

    I finally found out how to get Closed Captions video on the iPhone, with mkv and srt files as source material. The secret was using the mov_text subtitle codec in ffmpeg (and turning on Closed Captions in the iPhone settings, of course):

        ffmpeg -y -i in.mkv -i in.srt -map 0:0 -map 0:1 -map 1:0 -vcodec copy -acodec aac -ab 256k -scodec mov_text -strict -2 -metadata title="Title" -metadata:s:s:0 language=eng out.mp4

    However, the font size appears very small on the iPhone, and I can't find out how to set it with ffmpeg (the iPhone has no option for this). I found out that mencoder has a -subfont-text-scale option, but I don't have a lot of experience with this program. The following, my best attempt so far, produces an output file which is not playable on the iPhone:

        sudo port install mplayer +mencoder_extras +osd
        mencoder in.mkv -sub in.srt -o out.mp4 -ovc copy -oac faac -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 -subfont-text-scale 10 -of lavf -lavfopts format=mp4

    PS! As requested, here is the output from mencoder:

        192 audio & 400 video codecs
        success: format: 0  data: 0x0 - 0xb64b9d2f
        libavformat version 54.6.101 (internal)
        libavformat file format detected.
        [matroska,webm @ 0x1015c9a50]Unknown entry 0x80
        [lavf] stream 0: video (h264), -vid 0
        [lavf] stream 1: audio (ac3), -aid 0, -alang eng
        VIDEO:  [H264]  1280x544  0bpp  49.894 fps    0.0 kbps ( 0.0 kbyte/s)
        [V] filefmt:44  fourcc:0x34363248  size:1280x544  fps:49.894  ftime:=0.0200
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        libavcodec version 54.23.100 (internal)
        AUDIO: 48000 Hz, 2 ch, s16le, 448.0 kbit/29.17% (ratio: 56000->192000)
        Selected audio codec: [ffac3] afm: ffmpeg (FFmpeg AC-3)
        ==========================================================================
        ** MUXER_LAVF *****************************************************************
        REMEMBER: MEncoder's libavformat muxing is presently broken and can generate
        INCORRECT files in the presence of B-frames. Moreover, due to bugs MPlayer
        will play these INCORRECT files as if nothing were wrong!
        *******************************************************************************
        OK, exit.
        videocodec: framecopy (1280x544 0bpp fourcc=34363248)
        VIDEO CODEC ID: 28
        AUDIO CODEC ID: 15002, TAG: 0
        Writing header...
        [mp4 @ 0x1015c9a50]Codec for stream 0 does not use global headers but container format requires global headers
        [mp4 @ 0x1015c9a50]Codec for stream 1 does not use global headers but container format requires global headers

    Then the following repeats itself for every frame:

        Pos:   0.0s      1f ( 2%)  0.00fps Trem:   0min   0mb  A-V:0.000 [0:0]
        [mp4 @ 0x1015c9a50]malformated aac bitstream, use -absf aac_adtstoasc
        Error while writing frame.

    I recognize -absf aac_adtstoasc as an ffmpeg option (does mencoder spawn ffmpeg?), but I don't know how to pass this option on (my hunch is that this is not even the origin of the problem).

    Read the article

  • Nginx performance degrades over time

    - by Rylea Stark
    So here's the situation: I run a small cluster, a dedicated box for MySQL and a dedicated PHP-FPM/Nginx box. Nginx talks to php-fpm via socket. As far as I can tell the problem does not lie in php-fpm; it lies somewhere in my configuration. What happens is that the site loads instantly for a few moments after starting and slowly starts to degrade to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, spawn 20 at start, and have a max of 60. I'm not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, and I really don't want to do that. My nginx.conf is below:

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Read the article

  • Startup error for Zend Server CE

    - by Jamison
    Hello! I've got a strange startup error for Zend Server CE; it's probably easy to fix, but I don't have much experience with Zend Server! I'm running the latest OS X 10.6.6 and the latest Zend Server CE for Mac. When I run the "start" command from the command line, here is what I get:

        /usr/local/zend/bin/apachectl start                                  [OK]
        spawn-fcgi: child spawned successfully: PID: 4206
        /usr/local/zend/bin/shell_functions.rc: line 133:  4210 Bus error    $WATCHDOG -i $BINARY 1>&3 2>&4
        /usr/local/zend/bin/shell_functions.rc: line 133:  4211 Bus error    $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4
        Starting Zend Server GUI [Lighttpd]                                  [FAILED]
        /usr/local/zend/bin/lighttpdctl.sh: line 46:  4212 Bus error         $WATCHDOG -i $BINARY
        Starting MySQL                                                       SUCCESS!
        /usr/local/zend/bin/shell_functions.rc: line 133:  4304 Bus error    $WATCHDOG -i $BINARY 1>&3 2>&4
        /usr/local/zend/bin/shell_functions.rc: line 133:  4425 Bus error    $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4
        Starting Java bridge                                                 [FAILED]
        /usr/local/zend/bin/java_bridge.sh: line 39:  4426 Bus error         $WATCHDOG -i $BINARY
        Zend Server started...

    The challenge is that Zend Server won't open the GUI with this error; I can click on Zend Server in the Applications folder, and it opens for a second and immediately closes. I've made sure that Web Sharing is turned off to avoid conflicts, and I've run Disk Utility from my recovery disk to make sure there are no file system errors.

    Here is what the lines referenced in the errors contain. shell_functions.rc (starting on line 132; the error message says line 133):

        launch() {
            if [ -z "$DEBUG" ]; then
                exec 3>/dev/null 4>&3
            else
                exec 3>&1 4>&2
            fi
            $WATCHDOG -i $BINARY 1>&3 2>&4
            RET=$?
            if [ $RET -eq 0 ]; then
                $ECHO_CMD "$BINARY watchdog is up and running.. ${OK_COLOR}[OK]${T_RESET}"
                return $RET
            else
                #$WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY >> "$PREFIX/logs/watchdog_$BINARY.log" 2>&1
                $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4
                report $? "Starting"
            fi
        }

        _kill() {
            $WATCHDOG -i $BINARY > /dev/null 2>&1
            if [ $? -eq 1 ]; then
                $ECHO_CMD "$BINARY is not running"
            else
                $WATCHDOG -t $BINARY > /dev/null 2>&1
                report $? "Stopping"
            fi
        }

    lighttpdctl.sh (starting on line 45; the error message says line 46):

        status() {
            $WATCHDOG -i $BINARY
        }

        case "$1" in
            start)
                start
                status
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                sleep 1
                start
                ;;
            status)
                status
                ;;
            *)
                usage
                exit 1
        esac
        exit $?

    java_bridge.sh (starting on line 38; the error message says line 39):

        status() {
            $WATCHDOG -i $BINARY
        }

    Question: "watchdog" is a library in the Zend bin folder that seems to handle error reporting. All the errors in my start command seem to involve this watchdog thing, but I don't know what to do about it... Thanks!

    Read the article

  • Nginx reverse proxy with separate aliases

    - by gabeDel
    Interesting question. I have this Python code:

        import sys, bottle, gevent
        from bottle import *
        from gevent import *
        from gevent.wsgi import WSGIServer

        @route("/")
        def index():
            yield "/"

        application = bottle.default_app()
        WSGIServer(('', port), application, spawn=None).serve_forever()

    It runs standalone with nginx in front of it as a reverse proxy. Each of these pieces of code runs separately, but I run multiple of them per domain, per project (directory), and the code thinks for some reason that it is top level when it is not. So when you go to mydomain.com/something it works, but if you go to mydomain.com/something/ you will get an error. Now, I have tested and figured out that nginx is stripping the "something" from the request/query, so that when you go to mydomain.com/something/ the code thinks you are going to mydomain.com//. How do I get nginx to stop removing this information?

    Nginx site code:

        upstream mydomain {
            server 127.0.0.1:10100 max_fails=5 fail_timeout=10s;
        }

        upstream subdirectory {
            server 127.0.0.1:10199 max_fails=5 fail_timeout=10s;
        }

        server {
            listen 80;
            server_name mydomain.com;
            access_log /var/log/nginx/access.log;

            location /sub {
                proxy_pass http://subdirectory/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }

            location /subdir {
                proxy_pass http://subdirectory/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }
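
    One way to see exactly what path nginx hands the backend is to wrap the bottle app in a tiny WSGI middleware that logs the routing variables before routing happens; a sketch reusing the names from the post (the wrapper itself is an illustrative addition):

        def log_paths(app):
            def wrapper(environ, start_response):
                # SCRIPT_NAME + PATH_INFO together are the URL the app actually sees
                print("SCRIPT_NAME=%r PATH_INFO=%r" % (
                    environ.get("SCRIPT_NAME"), environ.get("PATH_INFO")))
                return app(environ, start_response)
            return wrapper

        # wrap the app before handing it to the server, as in the post:
        # application = log_paths(bottle.default_app())
        # WSGIServer(('', port), application, spawn=None).serve_forever()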

    Read the article

  • Commercial Drupal Modules & Themes

    - by Ravish
    A discussion at the Drupal.org forums prompted me to give my input about the commercial ecosystem around open source content management systems. WordPress and Joomla have been growing rapidly over the past few years, but the growth rate of Drupal seems to be almost flat. Despite being the most powerful CMS around, Drupal is still not being adopted by the masses. Many people will argue that Drupal is not targeted towards the masses but towards developers. I agree: Drupal is more of a development platform than a consumer CMS. Drupal is 'many things to many people', and I can build almost any type of website with it. Drupal is being used for building blogs, corporate websites, Intranet portals, social networking, and even a project management system. Looking at the wide array of Drupal implementations, it deserves to be the most widely adopted CMS. I believe there are a few challenges that the Drupal community needs to overcome. To understand these challenges, I surveyed some webmasters who use Joomla or WordPress but not Drupal. I asked them why they don't want to use Drupal; following are the responses I got:

    - Drupal is too complicated; it takes time to learn.
    - Drupal is great, but its admin panel is overwhelming.
    - I couldn't find any nice themes for Drupal.
    - There is no WYSIWYG editor in Drupal.
    - Most Drupal modules do not work out of the box.
    - There aren't enough modules like Ubercart, which provides out-of-the-box functionality.
    - I tried modules like CCK, Views and Panels. After wasting several hours struggling with them, I decided to give up on Drupal.
    - I don't use Drupal because of the Pushbutton and Garland themes. I had a hard time trying to customize Garland, and it messed up the whole layout.
    - There are no premium modules and themes for Drupal. Joomla has tons of awesome themes and modules.
    - I don't want a million hacks like CCK, Views, Tokens, Pathauto, ImageCache and CTools just to run a simple website.

    Most of the complaints are related to the learning and development curve involved with Drupal and the lack of an ecosystem. While most of these problems will be gone in Drupal 7, the ecosystem is something that needs to be built by the Drupal community. Drupal distributions are a great step forward; there are a few awesome ones available, like Open Publish, Open Atrium and Drupal Commons. I predict there will be a wave of many powerful Drupal distributions after the Drupal 7 release, and many of them will be user-friendly and commercially supported.

    Following is my post at the Drupal.org forums (quote from http://drupal.org/node/863776#comment-3313836):

    Brian Gardner (StudioPress) and Woo Themes launched premium WordPress themes in 2007; the developer community did not accept it at first. Moreover, the themes were not even GPL-licensed, and there was an outcry in the WordPress community against them. Following that, most premium theme providers switched to GPL licensing. Despite the controversies, users voted for premium themes and plugins by buying them. Inspired by their success, hundreds of other developers started to sell premium themes and plugins. It is now the accepted, and in fact most popular, business model in the WordPress community. Matt Mullenweg once told me they would not support premium themes; if he did, developers would no longer give out free GPL themes and plugins. He pointed me towards Joomla, where there were hardly any nice free themes and modules available. Now, two years on, premium products are not just accepted but embraced by the WordPress community (http://wordpress.org/extend/themes/commercial/). The quality and number of themes and modules has increased, even among the free ones. This also helped boost the adoption and ecosystem of WordPress.

    Today, the state of Drupal is like WordPress was in 2007. There are hardly any out-of-the-box solutions available for Drupal; Ubercart, Open Publish and Open Atrium are the only ones I can think of. Many of the popular Drupal modules are patches and hole-fillers. Thankfully, these hole-filler modules are going to be in the Drupal 7 core. Drupal 7 and distributions will spawn a new array of solutions built upon Drupal; soon we will have more like Ubercart and Open Atrium. If commercial solutions can help fuel this ecosystem and growth, the Drupal community will accept them eventually. This debate will not stop your customers from buying your product. If your product is awesome, they will vote for you by buying it.

    Read the article

  • How to attach turrets to tiles in a tile-based game

    - by Joseph St. Pierre
    I am a Flash developer, and I am building a tower defense game. The world is built from tiles, and I have gotten that accomplished easily. I have also gotten level changes and enemy spawning down as well. However, I want the player to be able to spawn turrets and have those turrets sit on specific tiles, based on where the player placed them. Here is my code:

        stop();
        colOffset = 50;
        rowOffset = 50;
        guns = [];
        placed = true;
        dead = 0;
        spawned = 0;
        level = 1;
        interval = 350 / level;
        amount = level * 20;
        counter = 0;
        numCol = 14;
        numRow = 10;
        tiles = [];
        k = 0;
        create = false;
        tileName = new Array("road", "grass", "end", "start");
        board = new Array(
            new Array(1,1,1,1,3,1,1,1,1,1,2,1,1,1),
            new Array(1,1,1,0,0,1,1,1,1,1,0,1,1,1),
            new Array(1,1,1,0,1,1,1,1,1,1,0,0,1,1),
            new Array(1,1,1,0,0,0,1,1,1,1,1,0,1,1),
            new Array(1,1,1,0,1,0,0,0,1,1,1,0,0,1),
            new Array(1,1,1,0,1,1,1,0,0,1,1,1,0,1),
            new Array(1,1,0,0,1,1,1,1,0,1,1,0,0,1),
            new Array(1,1,0,1,1,1,1,1,0,1,0,0,1,1),
            new Array(1,1,0,0,0,0,0,0,0,1,0,1,1,1),
            new Array(1,1,1,1,1,1,1,1,0,0,0,1,1,1)
        );

        buildBoard();

        function buildBoard() {
            for (col = 0; col < numCol; col++) {
                for (row = 0; row < numRow; row++) {
                    _root.attachMovie("tile", "tile_" + col + "_" + row, _root.getNextHighestDepth());
                    theTile = eval("tile_" + col + "_" + row);
                    theTile._x = (col * 50);
                    theTile._y = (row * 50);
                    theTile.row = row;
                    theTile.col = col;
                    tileType = board[row][col];
                    theTile.gotoAndStop(tileName[tileType]);
                    tiles.push(theTile);
                }
            }
        }

        init();

        function init() {
            onEnterFrame = function() {
                counter += 1;
                if (spawned < amount && counter > 50) {
                    min = _root.attachMovie("minion", "minion", _root.getNextHighestDepth());
                    min._x = tile_4_0._x + 25;
                    min._y = tile_4_0._y + 25;
                    min.health = 100;
                    choose = Math.round(Math.random());
                    if (choose == 0) {
                        min.waypointX = [tile_4_1._x + 25, tile_3_1._x + 25, tile_3_2._x + 25,
                            tile_3_6._x + 25, tile_2_6._x + 25, tile_2_8._x + 25, tile_8_8._x + 25,
                            tile_8_9._x + 25, tile_10_9._x + 25, tile_10_7._x + 25, tile_11_7._x + 25,
                            tile_11_6._x + 25, tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25,
                            tile_11_2._x + 25, tile_10_2._x + 25, tile_10_0._x + 25];
                        min.waypointY = [tile_4_1._y + 25, tile_3_1._y + 25, tile_3_2._y + 25,
                            tile_3_6._y + 25, tile_2_6._y + 25, tile_2_8._y + 25, tile_8_8._y + 25,
                            tile_8_9._y + 25, tile_10_9._y + 25, tile_10_7._y + 25, tile_11_7._y + 25,
                            tile_11_6._y + 25, tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25,
                            tile_11_2._y + 25, tile_10_2._y + 25, tile_10_0._y + 25];
                    } else if (choose == 1) {
                        min.waypointX = [tile_4_1._x + 25, tile_3_1._x + 25, tile_3_2._x + 25,
                            tile_3_3._x + 25, tile_5_3._x + 25, tile_5_4._x + 25, tile_7_4._x + 25,
                            tile_7_5._x + 25, tile_8_5._x + 25, tile_8_8._x + 25, tile_8_9._x + 25,
                            tile_10_9._x + 25, tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25,
                            tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25, tile_11_2._x + 25,
                            tile_10_2._x + 25, tile_10_0._x + 25];
                        min.waypointY = [tile_4_1._y + 25, tile_3_1._y + 25, tile_3_2._y + 25,
                            tile_3_3._y + 25, tile_5_3._y + 25, tile_5_4._y + 25, tile_7_4._y + 25,
                            tile_7_5._y + 25, tile_8_5._y + 25, tile_8_8._y + 25, tile_8_9._y + 25,
                            tile_10_9._y + 25, tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25,
                            tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25, tile_11_2._y + 25,
                            tile_10_2._y + 25, tile_10_0._y + 25];
                    }
                    min.i = 0;
                    counter = 0;
                    spawned += 1;
                    min.onEnterFrame = function() {
                        dx = this.waypointX[this.i] - this._x;
                        dy = this.waypointY[this.i] - this._y;
                        radians = Math.atan2(dy, dx);
                        degrees = radians * 180 / Math.PI;
                        xspeed = Math.cos(radians);
                        yspeed = Math.sin(radians);
                        this._x += xspeed;
                        this._y += yspeed;
                        if (this._x == this.waypointX[this.i] && this._y == this.waypointY[this.i]) {
                            this.i++;
                        }
                        if (this._x == tile_10_0._x + 25 && this._y == tile_10_0._y + 25) {
                            this.removeMovieClip();
                            dead += 1;
                        }
                    }
                }
                if (dead >= amount) {
                    dead = 0;
                    level += 1;
                    amount = level * 20;
                    spawned = 0;
                }
            }

            btnM.onRelease = function() {
                create = true;
            }
        }

        game.onEnterFrame = function() {
        }

    I can complete this task, but only once: I am able to make the turret, drag it over to a tile, and have it attach itself to the tile. No problem. The issue is that I cannot do this multiple times. Please help.
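
    The core of attaching a turret to a tile is snapping a drop position to the grid and remembering which tile is occupied, so that placement can be repeated. A minimal sketch of that logic in Python, using the post's 50-pixel tiles (the turret object and its x/y attributes are hypothetical):

        TILE = 50   # tile size from the post's 50-pixel grid

        def snap_to_tile(x, y):
            col, row = int(x // TILE), int(y // TILE)
            # centre of the tile the point falls in
            return col * TILE + TILE // 2, row * TILE + TILE // 2

        turrets = {}   # (col, row) -> turret, one entry per occupied tile

        def place_turret(x, y, turret):
            key = (int(x // TILE), int(y // TILE))
            if key in turrets:
                return False               # one turret per tile
            turrets[key] = turret
            turret.x, turret.y = snap_to_tile(x, y)
            return True

    Because every placement creates a fresh entry keyed by tile, nothing is shared between placements, which is what lets the operation repeat cleanly.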

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service (SSA) users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the service template selected by the Self Service user when requesting a database. In EM12c, the DBaaS Self Service Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Provisioned databases often require production-scale data, whether for UAT, testing or development, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, and 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even plug in your own method as part of the post-deployment script option.

    Using Data Pump to populate databases

    These are the steps to implement extending DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure 1: Full export of the database using Data Pump

    2. Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure 2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database; include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the customization flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. (Figure 3: Using the Custom Script option for calling the import SQL.)

    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
    Using transportable tablespaces to populate databases

    A copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion (such as from big-endian to little-endian or vice versa) based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required and describes the steps needed to convert the datafiles and back up the tablespace.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post-deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database; all the init.ora parameters should be included. NOTE: do NOT choose to bring tablespace data into this template, as the tablespaces will be created by the script.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.

    More information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • Extra Life 2012 - The Final Plea ... Until the Next One

    - by Chris Gardner
    I thought I'd share the email stream that my friends and family get about the event.

    So, here we are again. We scream closer to the event, and the goal is not met. I was approached by the ghost of feral platypii past last night. Well, approached is putting it lightly. I was mugged by the ghost of platypii past last night. He reminded me, in no uncertain terms, that I have only reached the midway point of my fundraising goal. He then reminded me, in even less uncertain terms, that we are one week away from the event. There were other reminders past that, but this is a family broadcast. *shudder*

    Now, let us be serious for a moment. The event organizers claim a personal story helps to tug heart strings, whatever those are...

    I've been to Children's Hospital of Birmingham. I had to take Spawn, the Latter, there to verify she was not going to die. Instead, she's just a ticking time bomb for the next generation, but I digress. While I was there, I saw things. I saw child after child after child waiting for their appointment. I saw the most sublime displays of children's art juxtaposed with hospital sterilization that I could ever possibly imagine. I saw and heard things that only occur in the nightmares of parents, and I was only in the waiting rooms.

    But I will never forget the 10-ish-year-old girl that came in for her regularly scheduled dialysis appointment ... as if it were just another Friday afternoon. She had her school books, a little snack, a book to read for pleasure, and a DVD, in case she finished her homework a little early. You know, everything you'd need for an afternoon hooked up to a huge medical machine that is going to clean out all the toxins in your blood. As she entered the secured area, she warmly greeted all the doctors and nurses with the same familiarity that I would greet the staff of my favorite coffee shop as I stopped in for my morning cup of coffee.

    I don't know the status of that little girl. I don't know if she's healthy or, quite frankly, alive. I don't even know her name, as I only heard it in passing for the 37 seconds our paths crossed. However, I do remember being incredibly moved and touched by her upbeat attitude about the situation, and I hope that my efforts over the last two Octobers brought her, in some way, a little comfort. And, if she is still with us, I hope we can get her a little more.

    === PREVIOUS MESSAGE FOLLOWS ===

    Greetings (Again),

    If you are receiving this updated message, then you didn't feel generous the first time. Now, I tried to be nice the first time. I tried to send a simple, unobtrusive email message to get you into the spirit. Well, much like the bell ringers that I ignore in front of the Wal-Mart, you ignored me. I probably should have seen that coming... However, unlike those poor souls, I know how to contact you. And I can find out where you live. So you had better feel lucky that I'm too lazy to terrorize you people, because I could do it. Remember, it's not for me, it's for those poor kids... and the feral platypii. Because we can make more children, but platypii are hard to come by.

    === ORIGINAL MESSAGE FOLLOWS ===

    It's that time of year again: the time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations... Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children's Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samuel that you're good for that money. I'd REALLY like to break $1000 this year, as I have come REALLY close for the past two years.

    This year, the event will take place on October 20th, beginning at 8 A.M. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part). Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up.

    You can place your bid at the link below. Feel free to spread the word to anyone and everyone. I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by re-fried beans when lovingly hugged in a warm tortilla.

    Enjoy your burrito.

    http://www.extra-life.org/participant/cgardner

    Read the article

  • Copies of GameScene created when called additional times

    - by Orin MacGregor
    I have a game with a level select managed by a SceneManager, which basically just uses replaceScene. The first time I load a level, everything works fine. On subsequent calls (for example, completing the level and continuing to the next) things blow up: the level loads fine, but when I try to pan the map or move the player, the game crashes. Debugging through, I found that there are multiple occurrences of self and related children like player and mapLayer. As a test, I put this code in my ccTouchesBegan:

        NSLog(@"test %i", [self retainCount]);

    The first time a level is loaded, it gives:

        test 2

    The second time I load a level it loops through twice and spits out both values:

        test 2
        test 1

    It continues with this pattern for each subsequent load, so the third time gives 2, 1, 1. The particular code that crashes the game touches _tileMap.tileSize, because there is a second GameScene with a tileMap that was supposedly destroyed, so it has a tileSize and mapSize of 0. I noticed dealloc doesn't really ever get called, so I tried to manage some things with onExit:

        -(void) onExit
        {
            [self unscheduleAllSelectors];
            [_player stopAllActions]; // stop any animations just in case; normally handled in ccTouchesEnded
            [self removeAllChildrenWithCleanup:YES];
        }

    I never replace the GameScene while I'm in a GameScene; if the level is completed it goes to a GameOver scene, or I use a back button that goes to the LevelSelect scene. This is [the relevant parts of] my init, in case something like the adding of children matters:

        -(id) init
        {
            if ((self = [super init])) {
                _mapLayer = [CCLayer node];

                // load data for the level
                GameData *gameData = [GameDataParser loadData];
                int selectedChapter = gameData.selectedChapter;
                int selectedLevel = gameData.selectedLevel;
                Levels *chapterLevels = [LevelParser loadLevelsForChapter:selectedChapter];

                // loop until we get the selected level, then load its map
                for (Level *level in chapterLevels.levels) {
                    if (level.number == selectedLevel) {
                        _tileMap = [CCTMXTiledMap tiledMapWithTMXFile:level.file];
                    }
                }

                _background = [_tileMap layerNamed:@"Background"];
                _foreground = [_tileMap layerNamed:@"Foreground"];
                _meta = [_tileMap layerNamed:@"Meta"];
                _meta.visible = NO;

                // initialize the spawn point object and place the player there
                CCTMXObjectGroup *objects = [_tileMap objectGroupNamed:@"Objects"];
                NSAssert(objects != nil, @"'Objects' object group not found");
                NSMutableDictionary *spawnPoint = [objects objectNamed:@"SpawnPoint"];
                NSAssert(spawnPoint != nil, @"SpawnPoint object not found");
                int x = [[spawnPoint valueForKey:@"x"] intValue] / retinaScaling;
                int y = [[spawnPoint valueForKey:@"y"] intValue] / retinaScaling;

                // set up animations
                [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"MouseRightAnim_24x21.plist"];
                CCSpriteBatchNode *spriteSheet = [CCSpriteBatchNode batchNodeWithFile:@"MouseRightAnim_24x21.png"];
                [_mapLayer addChild:spriteSheet z:1];
                NSMutableArray *rightAnimFrames = [NSMutableArray array];
                for (int i = 1; i <= 3; ++i) {
                    [rightAnimFrames addObject:
                        [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:
                            [NSString stringWithFormat:@"MouseRight%d_24x21.png", i]]];
                }
                CCAnimation *rightAnim = [CCAnimation animationWithSpriteFrames:rightAnimFrames delay:0.1f];
                self.player = [CCSprite spriteWithSpriteFrameName:@"MouseRight2_24x21.png"];
                _player.position = ccp(x, y);
                self.rightAction = [CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:rightAnim]];
                rightAnim.restoreOriginalFrame = NO;
                [spriteSheet addChild:_player];

                // get map size in pixels
                mapHeight = _tileMap.contentSize.height;
                mapWidth = _tileMap.contentSize.width;

                // set up defaults
                // this value works well for the calculation later, trial and error really
                distance = 150;
                lastGoodDistance = 150;
                mapScale = 1;
                [self setViewpointCenter:_player.position];
                [_mapLayer addChild:_tileMap];
                [self addChild:_mapLayer z:-1];
                self.isTouchEnabled = YES;
            }
            return self;
        }

    And here's the SceneManager code for replacing scenes:

        +(void) goGameScene
        {
            CCLayer *gameLayer = [GameScene node];
            [SceneManager go:gameLayer:[GameHUD node]];
        }

        // this is what every call looks like besides the GameScene one above
        +(void) goLevelSelect
        {
            [SceneManager go:[LevelSelect node]:nil];
        }

        +(void) go:(CCLayer *)layer: (CCLayer *)hudLayer
        {
            CCDirector *director = [CCDirector sharedDirector];
            CCScene *newScene = [SceneManager wrap:layer:hudLayer];
            if ([director runningScene]) {
                [director replaceScene:newScene];
            } else {
                [director runWithScene:newScene];
            }
        }

        +(CCScene *) wrap:(CCLayer *)layer: (CCLayer *)hudLayer
        {
            CCScene *newScene = [CCScene node];
            [newScene addChild: layer];
            if (hudLayer != nil) {
                [newScene addChild: hudLayer z:1];
            }
            return newScene;
        }

    Any ideas why I'm getting these fatal artifacts? I'm hoping this isn't considered too localized, since it basically combines three tutorials that anyone could end up following (Ray Wenderlich animations, Tim Roadley's scene manager, and pan and zoom with tiled maps).

    Read the article

  • Eventlet or gevent or Stackless + Twisted, Pylons, Django and SQL Alchemy

    - by Khorkrak
    We're using Twisted extensively for apps requiring a great deal of asynchronous I/O. There are some cases where stuff is CPU-bound instead, and for that we spawn a pool of processes to do the work, with a system for managing these across multiple servers as well, all done in Twisted. Works great. The problem is that it's hard to bring new team members up to speed: writing asynchronous code in Twisted requires a near-vertical learning curve. It's as if humans just don't think that way naturally.

    We're considering a mixed approach, perhaps: keep the XML-RPC server part and process management in Twisted, but write the other stuff in code that at least looks synchronous while not being so. Then again, I like explicit over implicit, so hmmm.

    Anyway, on to greenlets: how well does that stuff work? There's Stackless, and as you can see from my Gallentean avatar I'm well aware, first hand, of its tremendous success in CCP's flagship EVE Online game. What about Eventlet or gevent? For now, only Eventlet works with Twisted. However, gevent claims to be faster, since it isn't a pure Python implementation; it uses libevent instead. It also supposedly has fewer idiosyncrasies and defects. The documentation there is minimal in comparison to Eventlet's, and it's maintained by one person as far as I can tell. That makes me leery, but all great projects start this way, so...

    Then there's PyPy. I haven't even finished reading about that one yet; I just saw it in this thread: Drawbacks of Stackless.

    So confusing. I'm wondering what the heck to do. It sounds like Eventlet is probably the best bet, but is it really stable enough? Anyone out there have any experience with it? Should we go with Stackless instead, since it's been around and is proven technology, just as Twisted is, and the two work together nicely? I hate having to maintain a separate build of Python for it, though. What to do...

    This somewhat obnoxious blog entry hit the nail on the head for me, though: Asynchronous IO for Grownups. We're stuck using MySQL as well (I never knew how great PostgreSQL was until I had to work on a production OLTP system in MySQL instead, but that's another story). But if that monkey-patch thing really works, then wow. Just wow.
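    For the monkey-patching question at the end, here is a minimal sketch of how gevent makes blocking-looking code cooperative; the URLs are placeholders, and it assumes a current gevent install:

        from gevent import monkey
        monkey.patch_all()   # patch socket, time, etc. before anything else imports them

        import gevent
        from urllib.request import urlopen

        def fetch(url):
            # Looks synchronous, but yields to the event loop on I/O because
            # the underlying socket module has been patched.
            return urlopen(url).read()

        urls = ["http://example.com"] * 3   # placeholder URLs
        jobs = [gevent.spawn(fetch, url) for url in urls]
        gevent.joinall(jobs, timeout=10)
        print([len(job.value) for job in jobs if job.value is not None])

    The trade-off is the one the post already hints at: the patching is implicit, so a reader can no longer tell from the call site which operations yield.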

    Read the article

  • .NET 1.0 ThreadPool Question

    - by dotnet-practitioner
    I am trying to spawn a thread to take care of a DoWork task that should take less than 3 seconds. Inside DoWork, it's taking 15 seconds. I want to abort DoWork and transfer control back to the main thread. I have copied the code below, and it's not working: instead of aborting DoWork, it still finishes DoWork and then transfers control back to the main thread. What am I doing wrong?

        class Class1
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            private static System.Threading.ManualResetEvent[] resetEvents;

            [STAThread]
            static void Main(string[] args)
            {
                resetEvents = new ManualResetEvent[1];
                int i = 0;
                resetEvents[i] = new ManualResetEvent(false);
                ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), (object)i);

                Thread.CurrentThread.Name = "main thread";
                Console.WriteLine("[{0}] waiting in the main method", Thread.CurrentThread.Name);

                DateTime start = DateTime.Now;
                DateTime end;
                TimeSpan span = DateTime.Now.Subtract(start);

                // abort the DoWork method if it takes more than 3 seconds
                // and transfer control to the main thread
                do
                {
                    if (span.Seconds < 3)
                        WaitHandle.WaitAll(resetEvents);
                    else
                        resetEvents[0].Set();

                    end = DateTime.Now;
                    span = end.Subtract(start);
                } while (span.Seconds < 2);

                Console.WriteLine(span.Seconds);
                Console.WriteLine("[{0}] all done in the main method", Thread.CurrentThread.Name);
                Console.ReadLine();
            }

            static void DoWork(object o)
            {
                int index = (int)o;
                Thread.CurrentThread.Name = "do work thread";

                // simulate heavy-duty work
                Thread.Sleep(15000);

                // work is done
                resetEvents[index].Set();
                Console.WriteLine("[{0}] do work finished", Thread.CurrentThread.Name);
            }
        }
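    A likely culprit, though this is a reading of the code rather than a tested fix: WaitHandle.WaitAll(resetEvents) has no timeout, so the main thread blocks there until DoWork sets the event 15 seconds later, and the 3-second branch is never evaluated; the WaitAll overload that accepts a millisecond timeout avoids that. Note also that a pooled thread cannot safely be aborted from outside, only asked to stop via a flag or abandoned. A Python sketch of the wait-with-timeout pattern:

        import threading
        import time

        done = threading.Event()

        def do_work():
            time.sleep(15)      # simulate the 15-second task
            done.set()

        worker = threading.Thread(target=do_work)
        worker.daemon = True    # an abandoned worker should not block interpreter exit
        worker.start()

        done.wait(3)            # block for at most 3 seconds instead of forever
        if done.is_set():
            print("worker finished in time")
        else:
            # The worker is abandoned, not aborted: it keeps running in the
            # background, but the main thread has its control flow back.
            print("gave up after 3 seconds")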

    Read the article

  • State machines in C#

    - by Sir Psycho
    Hi, I'm trying to work out what's going on with this code. I have two threads iterating over the range, and I'm trying to understand what happens when the second thread calls GetEnumerator(). This line in particular (T current = start;) seems to spawn a new 'instance' of this method for the second thread. Seeing that there is only one instance of the DateRange class, I'm trying to understand why this works. Thanks in advance.

        class Program
        {
            static void Main(string[] args)
            {
                var daterange = new DateRange(DateTime.Now, DateTime.Now.AddDays(10), new TimeSpan(24, 0, 0));

                var ts1 = new ThreadStart(delegate
                {
                    foreach (var date in daterange)
                    {
                        Console.WriteLine("Thread " + Thread.CurrentThread.ManagedThreadId + " " + date);
                    }
                });

                var ts2 = new ThreadStart(delegate
                {
                    foreach (var date in daterange)
                    {
                        Console.WriteLine("Thread " + Thread.CurrentThread.ManagedThreadId + " " + date);
                    }
                });

                Thread t1 = new Thread(ts1);
                Thread t2 = new Thread(ts2);

                t1.Start();
                Thread.Sleep(4000);
                t2.Start();

                Console.Read();
            }
        }

        public class DateRange : Range<DateTime>
        {
            public DateTime Start { get; private set; }
            public DateTime End { get; private set; }
            public TimeSpan SkipValue { get; private set; }

            public DateRange(DateTime start, DateTime end, TimeSpan skip)
                : base(start, end)
            {
                SkipValue = skip;
            }

            public override DateTime GetNextElement(DateTime current)
            {
                return current.Add(SkipValue);
            }
        }

        public abstract class Range<T> : IEnumerable<T> where T : IComparable<T>
        {
            readonly T start;
            readonly T end;

            public Range(T start, T end)
            {
                if (start.CompareTo(end) > 0)
                    throw new ArgumentException("Start value greater than end value");
                this.start = start;
                this.end = end;
            }

            public abstract T GetNextElement(T currentElement);

            public IEnumerator<T> GetEnumerator()
            {
                T current = start;
                do
                {
                    Thread.Sleep(1000);
                    yield return current;
                    current = GetNextElement(current);
                } while (current.CompareTo(end) < 1);
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }
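    The behavior has a direct analogue in Python generators, which may make it easier to see: an iterator method like GetEnumerator() is compiled into a factory for independent enumerator objects, so T current = start; runs once per enumerator, not once per DateRange. Each foreach, and hence each thread, gets its own state machine. A minimal sketch of the same semantics:

        def int_range(start, end, step):
            current = start        # this local lives inside one generator instance
            while current <= end:
                yield current
                current += step

        a = int_range(0, 10, 2)
        b = int_range(0, 10, 2)    # a second, independent state machine

        print(next(a), next(a))    # 0 2
        print(next(b))             # 0; b has its own 'current', untouched by a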

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which is working for the most part: Link to PasteBin. The script's job is to start a number of threads, each of which in turn starts a subprocess with Popen. The output from each subprocess is as follows:

        1
        2
        3
        ...
        n
        Done

    Basically the subprocess is transferring 10M records from tables in one database to different tables in another DB, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any time in its execution (bad records, duplicate primary keys, etc.), or if it completes successfully, it will output "Done\n". If there are no more records to select for transfer, it will output "NO DATA\n".

    My intent was to create my script, tableTransfer.py, which would spawn a number of these processes, read their output, and in turn output information such as the number of updates completed, time remaining, time elapsed, and number of transfers per second.

    I started running the process last night and checked in this morning to see that it had deadlocked. There were no subprocesses running, there are still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information, because no subprocesses were running to update the total number complete, which is what controls updates to the output. This is running on OS X.

    I am looking for three things:

    1. I would like to get rid of the possibility of this deadlock occurring, so I don't need to check in on it as frequently. Is there some issue with locking?
    2. Am I doing this in a bad way (a gThreading variable to control looping of spawning additional threads, etc.)? I would appreciate some suggestions for improving my overall methodology.
    3. How should I handle Ctrl-C exit? Right now I need to kill the process, but I assume I should be able to use the signal module or similar to catch the signal and kill the threads. Is that right?

    I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.
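    Without the full script this is only a guess, but the classic deadlock in this setup is reading a Popen child's stdout while its stderr pipe fills the OS buffer (or vice versa); Popen.communicate() drains both concurrently and avoids it. For Ctrl-C, installing a SIGINT handler that raises SystemExit lets cleanup run instead of requiring a hard kill. A sketch with placeholder names and a placeholder command:

        import signal
        import subprocess
        import sys

        def run_transfer(cmd):
            # communicate() drains stdout and stderr together; reading only one
            # pipe while the other's OS buffer fills is the classic Popen deadlock.
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE, universal_newlines=True)
            out, _ = proc.communicate()
            return out

        def handle_sigint(signum, frame):
            # Turn Ctrl-C into an ordinary exit so finally blocks can run;
            # daemon worker threads will not keep the interpreter alive.
            sys.exit(1)

        signal.signal(signal.SIGINT, handle_sigint)

        if __name__ == "__main__":
            print(run_transfer(["echo", "Done"]))   # placeholder command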

    Read the article

  • linked list elements gone?

    - by Hristo
    I create a linked list dynamically and initialize the first node in main(), and I add to the list every time I spawn a worker process. Before the worker process exits, I print the list. Also, I print the list inside my SIGCHLD signal handler.

    In main():

        head = NULL;
        tail = NULL;

        // linked list to keep track of worker processes
        dll_node_t *node;
        node = (dll_node_t *) malloc(sizeof(dll_node_t)); // initialize list, allocate memory
        append_node(node);
        node->pid = mainPID; // the first node is the MAIN process
        node->type = MAIN;

    In a fork()'d process:

        // add to list
        dll_node_t *node;
        node = (dll_node_t *) malloc(sizeof(dll_node_t));
        append_node(node);
        node->pid = mmapFileWorkerStats->childPID;
        node->workerFileName = mmapFileWorkerStats->workerFileName;
        node->type = WORK;

    Functions:

        void append_node(dll_node_t *nodeToAppend) {
            /*
             * append param node to end of list
             */
            // if the list is empty
            if (head == NULL) {
                // create the first/head node
                head = nodeToAppend;
                nodeToAppend->prev = NULL;
            } else {
                tail->next = nodeToAppend;
                nodeToAppend->prev = tail;
            }
            // fix the tail to point to the new node
            tail = nodeToAppend;
            nodeToAppend->next = NULL;
        }

    Finally, the signal handler:

        void chld_signalHandler() {
            dll_node_t *temp1 = head;
            while (temp1 != NULL) {
                printf("2. node's pid: %d\n", temp1->pid);
                temp1 = temp1->next;
            }

            int termChildPID = waitpid(-1, NULL, WNOHANG);

            dll_node_t *temp = head;
            while (temp != NULL) {
                if (temp->pid == termChildPID) {
                    printf("found process: %d\n", temp->pid);
                }
                temp = temp->next;
            }
            return;
        }

    Is it true that the SIGCHLD signal handler is triggered when the worker process exits? If so, that would mean that after I print the list before exiting, the next thing I do is in the signal handler, which is print the list again; I would print the list twice. But the lists aren't the same. The node I add in the worker process doesn't exist when I print in the signal handler, or at the very end of main(). Any idea why?

    Thanks, Hristo
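    A hedged explanation that fits the symptom: if the "add to list" block really runs inside the fork()'d child, the node is appended to the child's copy of the address space. fork() duplicates memory rather than sharing it, so the parent's list, the one that main() and the SIGCHLD handler walk, never sees the append. A Python sketch of the same semantics (POSIX only):

        import os

        items = ["main"]               # stands in for the linked list

        pid = os.fork()
        if pid == 0:
            items.append("worker")     # mutates the child's private copy
            print("child sees:", items)
            os._exit(0)

        os.waitpid(pid, 0)
        print("parent sees:", items)   # ['main']: the child's append never happened here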

    Read the article

  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers' stock investment portfolios. I am using two datastore kinds:

    Stocks - contains a unique stock name and its daily percent change.
    UserTransactions - contains information regarding a specific purchase of a stock made by a user: the value of the purchase, along with a reference to the Stock purchased.

    The db.Model Python modules:

        class Stocks(db.Model):
            stockname = db.StringProperty(multiline=True)
            dailyPercentChange = db.FloatProperty(default=1.0)

        class UserTransactions(db.Model):
            buyer = db.UserProperty()
            value = db.FloatProperty()
            stockref = db.ReferenceProperty(Stocks)

    Once an hour I need to update the database: update the daily percent change in Stocks, and then update the value of all entities in UserTransactions that refer to that stock. The following Python module iterates over all the stocks, updates the dailyPercentChange property, and invokes a task to go over all UserTransactions entities that refer to the stock and update their value:

    Stocks.py:

        # Iterate over all stocks in the datastore
        for stock in Stocks.all():
            # update daily percent change in the datastore
            db.run_in_transaction(updateStockTxn, stock.key())
            # create a task to update all UserTransactions entities referring to this stock
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock')})

        def updateStockTxn(stock_key):
            # fetch the stock again; necessary to avoid concurrent updates
            stock = db.get(stock_key)
            stock.dailyPercentChange = data.get('some_val_for_stock')  # I get this value from outside
            # ... some more calculations here ...
            stock.put()

    Task.py (/task):

        # Number of transactions per task
        amountPerCall = 10
        stock = db.get(self.request.get("stock_key"))

        # Get all user transactions which point to the current stock
        user_transaction_query = stock.usertransactions_set
        cursor = self.request.get("cursor")
        if cursor:
            user_transaction_query.with_cursor(cursor)

        # Spawn another task if more than 10 transactions are in the datastore
        transactions = user_transaction_query.fetch(amountPerCall)
        if len(transactions) == amountPerCall:
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock'),
                                               'cursor': user_transaction_query.cursor()})

        # Iterate over all transactions pointing to the stock and update their value
        for transaction in transactions:
            db.run_in_transaction(updateUserTransactionTxn, transaction.key())

        def updateUserTransactionTxn(transaction_key):
            # fetch the transaction again; necessary to avoid concurrent updates
            transaction = db.get(transaction_key)
            transaction.value = transaction.value * self.request.get('some_val_for_stock')
            db.put(transaction)

    The problem: currently the system works great, but it is not scaling well. I have around 100 Stocks with 300 UserTransactions, and I run the update every hour. In the dashboard, I see that Task.py takes around 65% of the CPU (Stocks.py takes around 20-30%), and I am using almost all of the 6.5 free CPU hours given to me by App Engine. I have no problem enabling billing and paying for additional CPU, but the real problem is the scaling of the system: using 6.5 CPU hours for 100 stocks is very poor.

    I was wondering, given the requirements of the system as mentioned above, whether there is a better and more efficient implementation (or just a small change that can help the current implementation) than the one presented here.

    Thanks!! Joel
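    One small change worth trying, hedged because it trades away per-entity transactional isolation: update each fetched batch in memory and write it back with a single batched db.put(), instead of one run_in_transaction round trip per entity. That is usually safe when this hourly job is the only writer of value. A sketch against the old db API used above:

        # Batched variant of the Task.py update loop (assumes the same handler
        # context, i.e. self.request, as in the original).
        factor = float(self.request.get('some_val_for_stock'))

        transactions = user_transaction_query.fetch(amountPerCall)
        for transaction in transactions:
            transaction.value *= factor   # update in memory; no datastore round trip yet

        db.put(transactions)              # one batched RPC for the whole page of entities

    Raising amountPerCall well above 10 would also cut the number of task invocations, which is likely where much of the overhead lives.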

    Read the article
