Search Results

Search found 10789 results on 432 pages for 'cpu upgrade'.


  • Curl Certificate Error when Using RVM to install Ruby 1.9.2

    - by Will Dennis
    RVM is running into a certificate error when trying to download ruby 1.9.2. It looks like curl is having a certificate issue but I am not sure how to bypass it. NAy help would be great. Thanks so much, I have included the exact error info below. $ rvm install 1.9.2 Installing Ruby from source to: /Users/willdennis/.rvm/rubies/ruby-1.9.2-p180, this may take a while depending on your cpu(s)... ruby-1.9.2-p180 - #fetching ERROR: Error running 'bunzip2 '/Users/willdennis/.rvm/archives/ruby-1.9.2-p180.tar.bz2'', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/extract.log ruby-1.9.2-p180 - #extracting ruby-1.9.2-p180 to /Users/willdennis/.rvm/src/ruby-1.9.2-p180 ruby-1.9.2-p180 - #extracted to /Users/willdennis/.rvm/src/ruby-1.9.2-p180 Fetching yaml-0.1.3.tar.gz to /Users/willdennis/.rvm/archives curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. ERROR: There was an error, please check /Users/willdennis/.rvm/log/ruby-1.9.2-p180/*.log. Next we'll try to fetch via http. Trying http:// URL instead. curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. ERROR: There was an error, please check /Users/willdennis/.rvm/log/ruby-1.9.2-p180/*.log Extracting yaml-0.1.3.tar.gz to /Users/willdennis/.rvm/src ERROR: Error running 'tar zxf /Users/willdennis/.rvm/archives/yaml-0.1.3.tar.gz -C /Users/willdennis/.rvm/src --no-same-owner', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/extract.log /Users/willdennis/.rvm/scripts/functions/packages: line 55: cd: /Users/willdennis/.rvm/src/yaml-0.1.3: No such file or directory Configuring yaml in /Users/willdennis/.rvm/src/yaml-0.1.3. ERROR: Error running ' ./configure --prefix="/Users/willdennis/.rvm/usr" ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/configure.log Compiling yaml in /Users/willdennis/.rvm/src/yaml-0.1.3. 
ERROR: Error running '/usr/bin/make ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/make.log Installing yaml to /Users/willdennis/.rvm/usr ERROR: Error running '/usr/bin/make install', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/make.install.log ruby-1.9.2-p180 - #configuring ERROR: Error running ' ./configure --prefix=/Users/willdennis/.rvm/rubies/ruby-1.9.2-p180 --enable-shared --disable-install-doc --with-libyaml-dir=/Users/willdennis/.rvm/usr ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/configure.log ERROR: There has been an error while running configure. Halting the installation. Will
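
    A workaround sketch (not from the post; the bundle URL and paths are assumptions): give curl a fresh Mozilla CA bundle via ~/.curlrc so RVM's downloads verify, rather than disabling verification with -k.

        # fetch a CA bundle over plain http (an https fetch would hit the same verification error)
        curl -o "$HOME/.rvm/cacert.pem" http://curl.haxx.se/ca/cacert.pem
        # make curl use it by default, then retry the install
        echo "cacert=$HOME/.rvm/cacert.pem" >> "$HOME/.curlrc"
        rvm install 1.9.2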

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM byte-code format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses for a specific C++ class. So the C++ testing engine would only see C++ objects while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions, calls from the scripting language into the testing engine objects to retrieve pricing information and test state information and calls from the testing engine of methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary: 1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process? 2) Can I link in code using that the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?
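
    For question 1, a command-line sketch of the bitcode route (file names are made up for illustration, and recent LLVM releases spell some of these flags differently): compile the testing engine to LLVM bitcode once, then link and optimize it together with the bitcode the scripting front-end emits before handing the result to the JIT.

        # build the C++ engine as LLVM bitcode (done once, not per test run)
        clang++ -emit-llvm -c testing_engine.cpp -o testing_engine.bc
        # merge it with the module generated from the scripting front-end
        llvm-link testing_engine.bc script_module.bc -o combined.bc
        # optimize across the script/C++ boundary before JIT-loading combined.bc
        opt -O3 combined.bc -o combined.opt.bc

    Whether this also satisfies question 2 depends on matching the C++ ABI and name mangling of the engine's methods, as the post already suspects.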

    Read the article

  • Understanding Windows 8 Recovery options

    - by stuffe
    Background: I am preparing a PC that I am sending to a relative abroad, who has little or no internet access, and next to no sensible options for getting IT support should anything go wrong. As such I am trying to provide a full set of recovery options such that they are able to reinstall the OS with minimum fuss or assistance if required. The PC is a brand new Acer laptop that came with Windows 7 pre-installed (and an associated recovery partition) and a free upgrade to Windows 8. I have installed Windows 8 from scratch performing a format and clean install from media I burned from the official download. The existing Windows 7 recovery partition is still there, and I can still boot from it. I have created recovery DVDs of that in case it is ever lost. Here are my recovery options so far. I can perform a factory reset of Win 7 via the recovery partition I can perform a factory reset of Win 7 via burned recovery DVDs I can re-install Windows 8 cleanly from a DVD All of these are useful, but not what I want, because the first 2 methods use Win 7, and still fill the machine with crapware, and the latter doesn't provide for any post-install customisation and software installation. So, I am looking to see what other options are available to perform a Windows 8 recovery that will be more than a simple install. I am aware that Win8 comes with some useful refresh tools: Refresh your PC - Re-install Win 8 over the top of your existing installation, recovering from any Windows corruption etc. I can run this from my current install, although it says some files are missing that will be provided by me install or recovery media, which seems to be code for stick your install DVD in, and it starts after I do that - unfortunately for this particular laptop you need to specify a particular WIFI driver or the install bombs out part way through with IRQL errors, and this refresh method skips the part where you can load a driver, so it's no use to me. I think I can fix this by creating a custom recovery image using the recimg.exe command but it takes hours to complete so I haven't tried it yet. Reset your PC - Perform a full install and lose all your files. Again it needs my Install media inserting before it will do anything, but then it provides an error (will include later when I recreate it...) Now, these recovery options look useful (in principal, although both are fail for me) but they rely on having a working system to access the tools, which leads me to the last option, of making a Recovery USB drive. I have made a recovery drive, and it should perform loads of useful things, including copying my WIN7 recovery partition to the drive, providing the above refresh and reset options, providing other troubleshooting options and also the ability to restore from a custom image, only none of them seem to work for me. Creating the Recovery Drive - the option to include my recovery partition is greyed out. The partition exists and works fine, why will it not copy it? Refresh - I imagine this would have the same issues as I described before, but this is moot because when I try it says that the "drive where Windows is installed is locked, please unlock the drive and try again" with no info on what that means and how to do it. Restore - Again, probably pointless as I can just use the DVD, but it also errors: "unable to reset your PC. 
A required drive partition is missing" System Restore - should let me roll back a bad driver etc as per normal in Windows, only it simply says "To use system restore you must specify which windows installation to restore. Restart this computer, select an operating system, and select system restore" ?!?! System Image Recovery - this seems to be offering to restore from a Windows system image, but this is deprecated in Windows 8, although you can still make one if you use the Windows 7 Backup tools, however the resultant file is too large to put on the USB stick as it's FAT formatted, and would be a massive stack of DVDs anyway. So useless. It would be nice it it would work with the custom recovery image you can use with the refresh command, but there seems no option to do this. Automatic Repair - some diagnostics, which seem useless as it happily tells me it can't fix my problem, even though I have none. Command Prompt - yay, this works! What on earth do I want to use it for... Had any of the above worked, it might be useful, but as any form of install still requires you to have the DVD, and any form of custom recovery image also requires you to have either a massive stack of DVDs or an NTFS formatted backup device in addition to the recovery drive, it sort of ruins the point. It doesn't seem rocket science. I want to create a bootable USB drive that I can refresh Windows over an existing install with, perform a clean reinstall to a bare system, or recovery a customised image with existing apps installed. If anyone can point me in a direction that allows me to make a single recovery drive do these all these things, I would appreciate it. I have a 32Gb USB3 thumb drive that I bought for this very purpose, but it's seems to be fighting to let me do anything useful. At this rate I will be making a DriveImageXML recovery stick and dumping the OS with that, which I know works, but isn't so elegant as using the proper tools..
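
    A sketch of the recimg route mentioned above (run from an elevated command prompt; the target folder is an assumption), which captures the current install, including drivers and installed applications, as the image that "Refresh your PC" will use:

        mkdir C:\RefreshImage
        recimg /createimage C:\RefreshImage
        recimg /showcurrent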

    Read the article

  • Active directory authentication for Ubuntu Linux login and cifs mounting home directories...

    - by Jamie
    I've configured my Ubuntu 10.04 Server LTS Beta 2 residing on a windows network to authenticate logins using active directory, then mount a windows share to serve as there home directory. Here is what I did starting from the initial installation of Ubuntu. Download and install Ubuntu Server 10.04 LTS Beta 2 Get updates # sudo apt-get update && sudo apt-get upgrade Install an SSH server (sshd) # sudo apt-get install openssh-server Some would argue that you should "lock sshd down" by disabling root logins. I figure if your smart enough to hack an ssh session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If your paranoid or not simply not convinced then edit the file or give the following a spin: # (grep PermitRootLogin /etc/ssh/sshd_conifg && sudo sed -ri 's/PermitRootLogin ).+/\1no/' /etc/ssh/sshd_conifg) || echo "PermitRootLogin not found. Add it manually." Install required packages # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your windows domain name, DNS server name, and IP address for the active directory server (for samba). For conveniance I set environment variables for the windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11): # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN If you want to figure out what your domain and DNS server is (I was contractor and didn't know the network) check out this helpful reference. The authentication and file sharing processes for the Windows and Linux boxes need to have their clocks agree. Do this with an NTP service, and on the server version of Ubuntu the NTP service comes installed and preconfigured. The network I was joining had the DNS server serving up the NTP service too. # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf Restart the NTP daemon # sudo /etc/init.d/ntp restart We need to christen the Linux box on the new network, this is done by editing the host file (replace the DNS of with the FQDN of the windows DNS): # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts Kerberos configuration. The instructions that follow here aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files. Edit the (previously installed above) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key value pair (and it is in UPPERCASE WHERE IT NEEDS TO BE): [libdefaults] default_realm = MYDOMAIN.LOCAL Add the following to the [realms] section of the file: MYDOMAIN.LOCAL = { kdc = srv1.mydomain.local admin_server = srv1.mydomain.local default_domain = MYDOMAIN.LOCAL } Add the following to the [domain_realm] section of the file: .mydomain.local = MYDOMAIN.LOCAL mydomain.local = MYDOMAIN.LOCAL Conmfigure samba. When it's all said done, I don't know where SAMBA fits in ... I used cifs to mount the windows shares ... regardless, my system works and this is how I did it. 
Replace /etc/samba/smb.conf (remember I was working from a clean distro of Ubuntu, so I wasn't worried about breaking anything): [global] security = ads realm = MYDOMAIN.LOCAL password server = 192.168.20.11 workgroup = MYDOMAIN idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = yes winbind enum groups = yes template homedir = /home/%D/%U template shell = /bin/bash client use spnego = yes client ntlmv2 auth = yes encrypt passwords = yes winbind use default domain = yes restrict anonymous = 2 Start and stop various services. # sudo /etc/init.d/winbind stop # sudo service smbd restart # sudo /etc/init.d/winbind start Setup the authentication. Edit the /etc/nsswitch.conf. Here are the contents of mine: passwd: compat winbind group: compat winbind shadow: compat winbind hosts: files dns networks: files protocols: db files services: db files ethers: db files rpc: db files Start and stop various services. # sudo /etc/init.d/winbind stop # sudo service smbd restart # sudo /etc/init.d/winbind start At this point I could login, home directories didn't exist, but I could login. Later I'll come back and add how I got the cifs automounting to work. Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to mine own questions on the topic): Samba Kerberos Active Directory WinBind Mounting Linux user home directories on CIFS server Authenticating OpenBSD against Active Directory How to use Active Directory to authenticate linux users Mounting windows shares with Active Directory permissions Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit How practical is to authenticate a Linux server against AD? Auto-mounting a windows share on Linux AD login
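
    For the cifs home-directory mounting the post defers, a hedged manual-mount sketch to test with (share path, user name, and mount options are assumptions; on 10.04 the mount helper ships in the smbfs package installed above, on later releases in cifs-utils). The usual next step is to hang the same options off pam_mount so the share is mounted at login.

        sudo mkdir -p /home/MYDOMAIN/jamie
        sudo mount -t cifs //srv1.mydomain.local/homes/jamie /home/MYDOMAIN/jamie \
             -o username=jamie,domain=MYDOMAIN,uid=jamie,iocharset=utf8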

    Read the article

  • FFmpeg audio doesn't work in converted videos

    - by Juddy Swaft
    NOTICE: when i convert videos via terminal and download them from ftp into pc the audio works fine. I use: if($ext == "avi" && $convert_avi == true) { $convert_source = _VIDEOS_DIR_PATH.$new_name; $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4"; $converted_file = _VIDEOS_DIR_PATH.$conv_name; $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file; echo exec($ffmpeg_command); $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1"; $result = @mysql_query($sql); unlink($convert_source); } This code to convert avi to mp4 ffmpeg concole output: root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4 ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers built on Feb 22 2013 07:18:58 with gcc 4.4.5 configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger - s libavutil 50. 43. 0 / 50. 43. 0 libavcodec 52.123. 0 / 52.123. 0 libavformat 52.111. 0 / 52.111. 0 libavdevice 52. 5. 0 / 52. 5. 0 libavfilter 1. 80. 0 / 1. 80. 0 libswscale 0. 14. 1 / 0. 14. 1 libpostproc 51. 2. 0 / 51. 2. 0 [mp3 @ 0x191d4100] Header missing [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'sample.avi': Metadata: encoder : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr, Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param: [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0 [libx264 @ 0x191ce5a0] Default settings detected, using medium profile [libx264 @ 0x191ce5a0] using SAR=45/44 [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S [libx264 @ 0x191ce5a0] profile High, level 3.1 [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2 6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off 1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l Output #0, mp4, to 'goodsample.mp4': Metadata: encoder : Lavf52.111.0 Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31 Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] 
for help [mp3 @ 0x191d4100] Header missing Error while decoding stream #0.1 [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected [mp3 @ 0x191d4100] incomplete frame 9467kB time=00:01:00.32 bitrate=1285.5kbits/ Error while decoding stream #0.1 frame= 1852 fps= 20 q=29.0 Lsize= 9652kB time=00:01:01.72 bitrate=1280.9kbits video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688% frame I:11 Avg QP:16.78 size: 51456 [libx264 @ 0x191ce5a0] frame P:784 Avg QP:20.81 size: 8954 [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06 size: 1659 [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4% [libx264 @ 0x191ce5a0] mb I I16..4: 31.1% 59.8% 9.1% [libx264 @ 0x191ce5a0] mb P I16..4: 1.8% 2.6% 0.2% P16..4: 24.3% 7.0% 4.0 [libx264 @ 0x191ce5a0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 22.7% 0.8% 0.2 [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6% [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5. [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14% 8% 10% [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27% 5% 7% 7% 6 [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14% 6% 10% 9% 7 [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17% 3% [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4% [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5% 0.2% [libx264 @ 0x191ce5a0] ref B L0: 88.1% 5.5% 6.4% [libx264 @ 0x191ce5a0] ref B L1: 95.7% 4.3% [libx264 @ 0x191ce5a0] kb/s:1209.03 I know there is couple errors tough, but i dont know hot to fix it. Also i would be very thankfull if someone can help reduce video size but is not main problem video weights as original avi but sill.
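
    A hedged alternative command (the flag choices are suggestions, not a verified fix): -async 1 resynchronizes audio instead of -async 44100, which is almost certainly not what was intended, and CRF rate control usually gives a smaller file than -qscale with libx264.

        ffmpeg -i sample.avi -vcodec libx264 -crf 23 -s 1280x720 -r 29.97 \
               -acodec libmp3lame -ar 44100 -ac 2 -ab 128k -async 1 goodsample.mp4

    If the "Header missing" decode errors persist, the mp3 stream in the source AVI may itself be damaged; re-encoding or remuxing the audio separately would help isolate that.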

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID where PID is the mdworker process ID to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: After indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving. 
I then proceeded to execute the same command on mds to see what it was doing and here's what I get, repeatedly: 501 57 mds 21 / 501 57 mds 21 /Volumes/Sno Leppard 501 57 mds 21 /Volumes/Tiger 501 57 mds 21 /Volumes/Leppard 501 57 mds 21 /Volumes/Disk Warrior 501 57 mds 21 /Volumes/ONM Data These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from SPotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID" where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing and eventually the spotlight icon would say 6 hours remaining and all mdworkers were gone, mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for wdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for ay attempts to help, M
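
    A condensed sketch of the diagnostic loop described above, with placeholders for the volume name and process id:

        sudo mdutil -E "/Volumes/ONM Data"        # wipe and rebuild the volume's index
        ps ax | grep "md[ws]"                      # find the mds / mdworker process ids
        sudo opensnoop -p <mdworker_pid>           # note the last file touched before mdworker exits
        sudo mdutil -i off "/Volumes/ONM Data"     # optional: stop indexing the volume while testing

    The folder exclusion itself is presumably done through the Spotlight Privacy pane, as with the whole-volume exclusions earlier in the post.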

    Read the article

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far: Decided to go with Xapian as search backend because it has all search-engine features I was looking for, knows about Unicode, stemming, has few dependencies and requires no bloated app-server installation on top of it. Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite some blogs as "working". Did not work. Neither django-haystack nor the xapian-haystack project provide a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit-tests actually test very simple cases, and no edge-cases at all. Tried Djapian. The source-code is riddled with spelling errors (mind you, in variable names, not comments), documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place. Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches, the machine is RAM- and CPU-constrained), or Lucene (slightly less headaches, but still). Before I proceed spending more time with a solution that might or might not work as advertised, I'd like to know: Did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source-code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials. So here are the requirements: must be able to search for 10-100 terms in one query must handle + (term must be present) and - (term must not be present), AND/OR must handle arbitrary grouping (i.e. parentheses around AND/OR) must allow for Django-ORM filtering before or after fulltext-search (i.e. pre-/post-processing of results with the full set of filters that Django knows about) alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet should be light on the machine, so preferably no humongous JVM and Java-based app-server installation Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully-functional setup working in the real world, under real conditions, with real queries. EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts, mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised. 
Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain-knowledge that's documented nowhere. So, please, if you claim you have a working installation that actually satisfies minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem: exact Linux distribution, release version, exact release version of Haystack (or equivalent) and release version of search backend, exact release version of the search engine publicly (!) available documentation how to set up all components exactly in the way that your installation was set up such that the minimal requirements above are met. Thank you.

    Read the article

  • NetApp erroring with: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT

    - by Sobrique
    Since a sitewide upgrade to Windows 7 on desktop, I've started having a problem with virus checking. Specifically - when doing a rename operation on a (filer hosted) CIFS share. The virus checker seems to be triggering a set of messages on the filer: [filerB: auth.trace.authenticateUser.loginTraceIP:info]: AUTH: Login attempt by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20 (server-wk8-r2). [filerB: auth.dc.trace.DCConnection.statusMsg:info]: AUTH: TraceDC- attempting authentication with domain controller \\MYDC. [filerB: auth.trace.authenticateUser.loginRejected:info]: AUTH: Login attempt by user rejected by the domain controller with error 0xc0000199: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT. [filerB: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: Delaying the response by 5 seconds due to continuous failed login attempts by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20. This seems to specifically trigger on a rename so what we think is going on is the virus checker is seeing a 'new' file, and trying to do an on-access scan. The virus checker - previously running as LocalSystem and thus sending null as it's authentication request is now looking rather like a DOS attack, and causing the filer to temporarily black list. This 5s lock out each 'access attempt' is a minor nuisance most of the time, and really quite significant for some operations - e.g. large file transfers, where every file takes 5s Having done some digging, this seems to be related to NLTM authentication: Symptoms Error message: System error 1808 has occurred. The account used is a computer account. Use your global user account or local user account to access this server. A packet trace of the failure will show the error as: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT (0xC0000199) Cause Microsoft has changed the functionality of how a Local System account identifies itself during NTLM authentication. This only impacts NTLM authentication. It does not impact Kerberos Authentication. Solution On the host, please set the following group policy entry and reboot the host. Network Security: Allow Local System to use computer identity for NTLM: Disabled Defining this group policy makes Windows Server 2008 R2 and Windows 7 function like Windows Server 2008 SP1. So we've now got a couple of workaround which aren't particularly nice - one is to change this security option. One is to disable virus checking, or otherwise exempt part of the infrastructure. And here's where I come to my request for assistance from ServerFault - what is the best way forwards? I lack Windows experience to be sure of what I'm seeing. I'm not entirely sure why NTLM is part of this picture in the first place - I thought we were using Kerberos authentication. I'm not sure how to start diagnosing or troubleshooting this. (We are going cross domain - workstation machine accounts are in a separate AD and DNS domain to my filer. Normal user authentication works fine however.) And failing that, can anyone suggest other lines of enquiry? I'd like to avoid a site wide security option change, or if I do go that way I'll need to be able to supply detailed reasoning. Likewise - disabling virus checking works as a short term workaround, and applying exclusions may help... but I'd rather not, and don't think that solves the underlying problem. EDIT: Filers in AD ldap have SPNs for: nfs/host.fully.qualified.domain nfs/host HOST/host.fully.qualified.domain HOST/host (Sorry, have to obfuscate those). 
Could it be that without a 'cifs/host.fully.qualified.domain' it's not going to work? (or some other SPN? ) Edit: As part of the searching I've been doing I've found: http://itwanderer.wordpress.com/2011/04/14/tread-lightly-kerberos-encryption-types/ Which suggests that several encryption types were disabled by default in Win7/2008R2. This might be pertinent, as we've definitely had a similar problem with Keberized NFSv4. There is a hidden option which may help some future Keberos users: options nfs.rpcsec.trace on (This hasn't given me anything yet though, so may just be NFS specific). Edit: Further digging has me tracking it back to cross domain authentication. It looks like my Windows 7 workstation (in one domain) is not getting Kerberos tickets for the other domain, in which my NetApp filer is CIFS joined. I've done this separately against a standalone server (Win2003 and Win2008) and didn't get Kerberos tickets for those either. Which means I think Kerberos might be broken, but I've no idea how to troubleshoot further. Edit: A further update: It looks like this may be down Kerberos tickets not being issued cross domain. This then triggers NTLM fallback, which then runs into this problem (since Windows 7). First port of call will be to investigate the Kerberos side of things, but in neither case do we have anything pointing at the Filer being the root cause. As such - as the storage engineer - it's out of my hands. However, if anyone can point me in the direction of troubleshooting Kerberos spanning two Windows AD domains (Kerberos Realms) then that would be appreciated. Options we're going to be considering for resolution: Amend policy option on all workstations via GPO (as above). Talking to AV vendor about the rename triggering scanning. Talking to AV vendor regarding running AV as service account. investigating Kerberos authentication (why it's not working, whether it should be).
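
    Two hedged checks from a Windows admin prompt for the points the edits speculate about (the filer's machine account name and FQDN are placeholders): whether a cifs/ SPN is registered for the filer at all, and whether the Windows 7 client can obtain a cross-realm service ticket for it.

        setspn -L FILER01
        klist purge
        klist get cifs/filer.other-domain.example.com
        klist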

    Read the article

  • Why do UInt16 arrays seem to add faster than int arrays?

    - by scraimer
    It seems that C# is faster at adding two arrays of UInt16[] than it is at adding two arrays of int[]. This makes no sense to me, since I would have assumed the arrays would be word-aligned, and thus int[] would require less work from the CPU, no? I ran the test-code below, and got the following results: Int for 1000 took 9896625613 tick (4227 msec) UInt16 for 1000 took 6297688551 tick (2689 msec) The test code does the following: Creates two arrays named a and b, once. Fills them with random data, once. Starts a stopwatch. Adds a and b, item-by-item. This is done 1000 times. Stops the stopwatch. Reports how long it took. This is done for int[] a, b and for UInt16 a,b. And every time I run the code, the tests for the UInt16 arrays take 30%-50% less time than the int arrays. Can you explain this to me? Here's the code, if you want to try if for yourself: public static UInt16[] GenerateRandomDataUInt16(int length) { UInt16[] noise = new UInt16[length]; Random random = new Random((int)DateTime.Now.Ticks); for (int i = 0; i < length; ++i) { noise[i] = (UInt16)random.Next(); } return noise; } public static int[] GenerateRandomDataInt(int length) { int[] noise = new int[length]; Random random = new Random((int)DateTime.Now.Ticks); for (int i = 0; i < length; ++i) { noise[i] = (int)random.Next(); } return noise; } public static int[] AddInt(int[] a, int[] b) { int len = a.Length; int[] result = new int[len]; for (int i = 0; i < len; ++i) { result[i] = (int)(a[i] + b[i]); } return result; } public static UInt16[] AddUInt16(UInt16[] a, UInt16[] b) { int len = a.Length; UInt16[] result = new UInt16[len]; for (int i = 0; i < len; ++i) { result[i] = (ushort)(a[i] + b[i]); } return result; } public static void Main() { int count = 1000; int len = 128 * 6000; int[] aInt = GenerateRandomDataInt(len); int[] bInt = GenerateRandomDataInt(len); Stopwatch s = new Stopwatch(); s.Start(); for (int i=0; i<count; ++i) { int[] resultInt = AddInt(aInt, bInt); } s.Stop(); Console.WriteLine("Int for " + count + " took " + s.ElapsedTicks + " tick (" + s.ElapsedMilliseconds + " msec)"); UInt16[] aUInt16 = GenerateRandomDataUInt16(len); UInt16[] bUInt16 = GenerateRandomDataUInt16(len); s = new Stopwatch(); s.Start(); for (int i=0; i<count; ++i) { UInt16[] resultUInt16 = AddUInt16(aUInt16, bUInt16); } s.Stop(); Console.WriteLine("UInt16 for " + count + " took " + s.ElapsedTicks + " tick (" + s.ElapsedMilliseconds + " msec)"); }

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) public int[] GetValidItemIds(int userId) Initially I'm thinking storage on the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId AddItemSecurity is based on how I get data from a third party API, GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item id's are on the form: 2007123456, 2010001234 (10 digits where first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item id's had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and set a true/false bit if the item was present or not. That would limit the array length to little over 1mb per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .Net 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and store an array per year could be a solution. The ItemId - UserId[] list can be serialized directly to disk and read/write with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to updated as well, but this can be done nightly. Question Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL server will not perform fast enough, and it would give an overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thought or insights on the matter is appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL server 2008 under the following conditions. Table with two columns (userid,itemid) both are Int Clustered index on the two columns Added ~800.000 items for 180 users - Total of 144 million rows Allocated 4gb ram for SQL server Dual Core 2.66ghz laptop SSD disk Use a SqlDataReader to read all itemid's into a List Loop over all users If I run one thread it averages on 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still ok. From there on the results are decreasing. Adding a third thread brings alot of the queries up to 2 seonds. A forth thread, up to 4 seconds, a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some due to the speedy loop, and sql the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say storing an array of int's per user instead of one record per item. But this makes it harder to remove items.

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary precision math library in C or C++. Could you please give me some advices / suggestions? The primary requirements: It MUST handle arbitrarily big integers (my primary interest is on integers). In case that you don't know what the word arbitrarily big means, imagine something like 100000! (the factorial of 100000). The precision MUST NOT NEED to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system. It SHOULD utilize the full power of the platform, and should handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this in the same way as it does with 2^66 + 2^65 on the same platform. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. Ability to handle functions like sqrt() (square root), log() (logarithm) that do not produce integer results is a plus. Ability to handle symbolic computations is even better. Here are what I found so far: Java's BigInteger and BigDecimal class: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt. The built-in integer type or in core libraries of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have ever used some of these, but I have no idea on which library they are using, or which kind of implementation they are using. What I have already known: Using a char as a decimal digit, and a char* as a decimal string and do calculations on the digits using a for-loop. Using an int (or a long int, or a long long) as a basic "unit" and an array of it as an arbitrary long integer, and do calculations on the elements using a for-loop. Booth's multiplication algorithm What I don't know: Printing the binary array mentioned above in decimal without using naive methods. Example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ... (2) use a char* string mentioned above to store the intermediate decimal results). What I appreciate: Good comparisons on GMP, MPFR, decNumber (or other libraries that are good in your opinion). Good suggestions on books / articles that I should read. For example, an illustration with figures on how a un-naive arbitrarily long binary to decimal conversion algorithm works is good. Any help. Please DO NOT answer this question if: you think using a double (or a long double, or a long long double) can solve this problem easily. If you do think so, it means that you don't understand the issue under discussion. you have no experience on arbitrary precision mathematics. Thank you in advance! Asuka Kenji

    Read the article

  • Changing html <-> ajax <-> php/mysql to threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. There are various counters and other information that have to come from the database, and the system needs to stay up to date for the user. My approach now is just a normal ajax request every second to get the new values from the database. There is a JavaScript loop that fetches the values through ajax every second. This works fine, but I think it's very inefficient. The problem: an ajax script loops every second requesting data from PHP # on the server it has to load the PHP interpreter; the PHP file has to get the data and format it correctly # PHP has to make a connection with the MySQL database, work with the database (reads, never writes), format the data so it can be sent, and send the data back to the browser # then close the database connection and close the PHP interpreter; finally the browser has to read these values and update the various HTML parts. With this approach it has to load the interpreter and make a DB connection every second. I was thinking of a way to make this more efficient, maybe using a threaded approach. Threaded approach: do a post to PHP when you enter the page and keep the connection alive; in PHP load the interpreter only once and make one connection to the DB; every second send an ajax response to the JavaScript listener; the JavaScript listener then just changes values as each response from PHP arrives. I think this approach would greatly reduce server load and improve overall performance, but I can spot some weak points in the system and I need some help with these. Problems with the approach: PHP execution time limit - I don't think PHP is designed for such a setup; I know there is a time limit on PHP script execution, and I don't know whether an everlasting loop in PHP will cause any serious CPU/memory problems. Sending an ajax request without breaking - I don't know if it is possible to have just one ajax post action that stays open and keeps accepting data. User exits the page - what will happen when the user exits the page and the PHP script is still running? Will it go on forever? Security issues - so far I can't think of any, but almost every setup has some security issues, and maybe there are some with this solution I don't know of. Open to other solutions: I really want to change the setup as it is now and move to a threaded approach or better. If someone has a better approach to tackle this, I definitely want to hear it. Maybe some other script language is better suited for an ongoing runtime. I only know PHP and Java, so any suggestions are welcome and I am willing to dig through. I know there are things like Perl, Python etcetera that are used for this type of threaded work, but I don't know which one is best suited. When using another script: if the best way is to go with another type of script like Perl or Python, I do have some criteria. The script has to be accessible via ajax post; if it accepts some kind of JSON encode/decode that would be nice; the script has to be able to access the session file (this is essential because I need to know if the user is logged in); and the script has to be able to talk easily to MySQL. All comments are welcome, and I hope this question is helpful to others also. Cheers!

    Read the article

  • PHP crashes with no core file and this message: apc_mmap failed

    - by greg0ire
    Description of the problem Regularly, cron php processes crash on our production server, which result in mails with the following body : PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 Segmentation fault (core dumped) I think the Segmentation fault (core dumped) should result in core files being handled by apport and then written in /var/crashes, but the files I can see there are there since yesterday, although the last crash occured today : -rw-r----- 1 root whoopsie 1138528 mai 22 04:09 _usr_bin_php5.0.crash -rw-r----- 1 frontoffice whoopsie 1166373 mai 20 18:00 _usr_bin_php5.1005.crash -rw-r----- 1 frontoffice whoopsie 81622658 mai 22 00:05 _usr_sbin_php5-fpm.1005.crash I tried to download the last one anyway, and ran gdb /usr/sbin/php5-fpm /tmp/_usr_sbin_php5-fpm.1005.crash, only to be told that the file is not a core file (its format was not recognized). Here is the server's apc configuration : cat /etc/php5/cli/conf.d/20-apc.ini extension=apc.so apc.shm_size=512M apc.ttl=3600 apc.user_ttl=3600 apc.enable_cli=1 I'm mostly worried about the apc.shm_size… isn't it too high or too low ? I understand it has to do with the size of memory segments. Question(s) What could be the problem ? How can I troubleshoot it (how can I get a valid core file ?) ? System information free total used free shared buffers cached Mem: 5081296 4354684 726612 0 374744 959968 -/+ buffers/cache: 3019972 2061324 Swap: 522236 516888 5348 cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" php -v PHP 5.4.17-1~precise+1 (cli) (built: Jul 17 2013 18:14:06) Copyright (c) 1997-2013 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies php -i excerpt : Configuration apc APC Support => enabled Version => 3.1.13 APC Debugging => Disabled MMAP Support => Enabled MMAP File Mask => Locking type => pthread mutex Locks Serialization Support => php Revision => $Revision: 327136 $ Build Date => Nov 20 2012 18:41:36 Directive => Local Value => Master Value apc.cache_by_default => On => On apc.canonicalize => On => On apc.coredump_unmap => Off => Off apc.enable_cli => On => On apc.enabled => On => On apc.file_md5 => Off => Off apc.file_update_protection => 2 => 2 apc.filters => no value => no value apc.gc_ttl => 3600 => 3600 apc.include_once_override => Off => Off apc.lazy_classes => Off => Off apc.lazy_functions => Off => Off apc.max_file_size => 1M => 1M apc.mmap_file_mask => no value => no value apc.num_files_hint => 1000 => 1000 apc.preload_path => no value => no value apc.report_autofilter => Off => Off apc.rfc1867 => Off => Off apc.rfc1867_freq => 0 => 0 apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS apc.rfc1867_prefix => upload_ => upload_ apc.rfc1867_ttl => 3600 => 3600 apc.serializer => default => default apc.shm_segments => 1 => 1 apc.shm_size => 512M => 512M apc.shm_strings_buffer => 4M => 4M apc.slam_defense => On => On apc.stat => On => On apc.stat_ctime => Off => Off apc.ttl => 3600 => 3600 apc.use_request_time => On => On apc.user_entries_hint => 4096 => 4096 apc.user_ttl => 3600 => 3600 apc.write_lock => On => On php -m [PHP Modules] apc bcmath bz2 calendar Core ctype curl date dba dom ereg exif fileinfo filter ftp gd gettext hash iconv imagick intl json ldap libxml mbstring memcache memcached mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix Reflection session shmop SimpleXML soap sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm 
tidy tokenizer wddx xml xmlreader xmlwriter zip zlib [Zend Modules] ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 39531 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 39531 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited

    Read the article

  • Missing a constant on load... how can I get around this? (Rails::Plugin::OpenID)

    - by Chris Kimpton
    I have a Rails 2 project that I am trying to upgrade to Rails 3, but getting some issues with bundler. When I run "rake", it runs the tests just fine. But when I run "bundle exec rake" it fails to find a constant. The error is this: /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/dependencies.rb:131:in `const_missing': uninitialized constant Rails::Plugin::OpenID (NameError) from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/vendor/plugins/open_id_authentication/init.rb:16:in `evaluate_init_rb' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `call' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `evaluate_method' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:166:in `call' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `each' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `send' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:276:in `run_callbacks' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `send' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `run_prepare_callbacks' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:631:in `prepare_dispatcher' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:185:in `process' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `send' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `run' from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/config/environment.rb:9 from ./test/test_helper.rb:2:in `require' from ./test/test_helper.rb:2 I have these gems installed: $ gem list *** LOCAL GEMS *** actionmailer (2.3.9) actionpack (2.3.9) activerecord (2.3.9) activeresource (2.3.9) activesupport (2.3.9) authlogic (2.1.3) bundler (1.0.7) gravtastic (2.2.0) linecache (0.43) mocha (0.9.10) newrelic_rpm (2.13.4) parseexcel (0.5.2) rack (1.1.0) rack-openid (1.1.1) rails (2.3.9) rake (0.8.7) ruby-debug-base (0.10.5.jb2, 0.10.4) ruby-debug-ide (0.4.15) ruby-openid (2.1.8, 2.1.7, 2.0.4) sqlite3-ruby (1.3.2) The bundler Gemfile is as follows: source 'http://rubygems.org' #gem 'rails', '3.0.3' gem "rails", "2.3.9" gem "activesupport", "2.3.9" gem "ruby-openid", "2.1.7", :require => "openid" #gem "authlogic-oid", "1.0.4" # Bundle edge Rails instead: # gem 'rails', :git => 'git://github.com/rails/rails.git' gem 'sqlite3-ruby', :require => 'sqlite3' gem 
"authlogic", "= 2.1.3" gem "newrelic_rpm" # gem "facebooker" gem "parseexcel" gem 'gravtastic', '= 2.2.0' gem "rack-openid", '=1.1.1', :require => 'rack/openid' # not sure what this does... gem "mocha" I have these plugins installed: 2dc_jqgrid authlogic_openid open_id_authentication squirrel I see these similar questions: Missing a constant on load.. how can i get around this? and Requiring gem in Rails 3 Controller failing with "Constant Missing" But their solutions dont seem to work for my situation. I am guessing the issue is around the plugins, but my ruby-foo is too weak. Thanks in advance, Chris

    Read the article

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block) but my code doesn't crash and it runs fast enough. At work my multithreaded code needs to be a tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this: EnterCriticalSection(&m_Crit4); m_bSomeVariable = true; LeaveCriticalSection(&m_Crit4); Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary then there's no need for synchronising this operation with a critical section. I did some more research online to see whether this operation did not need synchronisation, and I came up with two scenarios it did: The CPU implements out of order execution or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and The int is not 4-byte aligned. I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to this variable using memory barriers, ensuring that the variable is always completely written/read to the main system memory before using it. Number 2 I cannot verify, I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not do you need to use a combination of instructions? That would introduce the problem. So... QUESTION 1: Does using the "volatile" keyword (implicity using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte on x86/x64 variable between read/write operations? QUESTION 2: Is there the explicit requirement that the variable be 4-byte/8-byte aligned? I did some more digging into our code and the variables defined in the class: class CExample { private: CRITICAL_SECTION m_Crit1; // Protects variable a CRITICAL_SECTION m_Crit2; // Protects variable b CRITICAL_SECTION m_Crit3; // Protects variable c CRITICAL_SECTION m_Crit4; // Protects variable d // ... }; Now, to me this seems excessive. I thought critical sections synchronised threads between a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect, if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here, named mutexes are shared across processes, or only processes of the same name? QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes? 
I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here? QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a large performance penalty for choosing one over another? And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only in a multi-core multithreaded environment. So, QUESTION 5: Is that wrong, and if not, why is it right? Thanks in advance for any responses :)
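
    A minimal sketch of the alternative hinted at in the question, written in modern C++ rather than the original Win32 code (the class and member names are hypothetical): a lone flag that is only ever read or written whole can be an atomic, while one mutex guards the group of variables that must change together. A critical section only excludes threads entering the same critical section object, so four separate locks buy nothing over one lock per related group.

        #include <atomic>
        #include <mutex>

        class CExample {
        public:
            void SetSomeVariable(bool value) {
                // atomic store: visible to other threads/cores without a lock
                m_bSomeVariable.store(value, std::memory_order_release);
            }

            bool GetSomeVariable() const {
                return m_bSomeVariable.load(std::memory_order_acquire);
            }

            void UpdateRelatedState(int a, int b) {
                // a, b and c must stay consistent with each other, so they share one lock
                std::lock_guard<std::mutex> lock(m_stateMutex);
                m_a = a;
                m_b = b;
                m_c = a + b;
            }

        private:
            std::atomic<bool> m_bSomeVariable{false};
            std::mutex m_stateMutex;   // protects m_a, m_b and m_c as a unit
            int m_a = 0, m_b = 0, m_c = 0;
        };

    On pre-C++11 Win32 the same effect comes from InterlockedExchange / InterlockedCompareExchange on a LONG flag plus one CRITICAL_SECTION per group of related variables; portable C++ gives volatile no atomicity or ordering guarantees (the VS2005 behaviour the question mentions is a compiler-specific extension).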

    Read the article

  • SDL side-scroller scrolls inconsistently

    - by SDLFunTimes
    So I'm working on an upgrade from my previous project (that I posted here for code review) this time implementing a repeating background (like what is used on cartoons) so that SDL doesn't have to load really big images for a level. There's a strange inconsistency in the program, however: the first time the user scrolls all the way to the right 2 less panels are shown than is specified. Going backwards (left) the correct number of panels is shown (that is the panels repeat the number of times specified in the code). After that it appears that going right again (once all the way at the left) the correct number of panels is shown and same going backwards. Here's some selected code and here's a .zip of all my code constructor: Game::Game(SDL_Event* event, SDL_Surface* scr, int level_w, int w, int h, int bpp) { this->event = event; this->bpp = bpp; level_width = level_w; screen = scr; w_width = w; w_height = h; //load images and set rects background = format_surface("background.jpg"); person = format_surface("person.png"); background_rect_left = background->clip_rect; background_rect_right = background->clip_rect; current_background_piece = 1; //we are displaying the first clip rect_in_view = &background_rect_right; other_rect = &background_rect_left; person_rect = person->clip_rect; background_rect_left.x = 0; background_rect_left.y = 0; background_rect_right.x = background->w; background_rect_right.y = 0; person_rect.y = background_rect_left.h - person_rect.h; person_rect.x = 0; } and here's the move method which is probably causing all the trouble: void Game::move(SDLKey direction) { if(direction == SDLK_RIGHT) { if(move_screen(direction)) { if(!background_reached_right()) { //move background right background_rect_left.x += movement_increment; background_rect_right.x += movement_increment; if(rect_in_view->x >= 0) { //move the other rect in to fill the empty space SDL_Rect* temp; other_rect->x = -w_width + rect_in_view->x; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece++; std::cout << current_background_piece << std::endl; } if(background_overshoots_right()) { //sees if this next blit is past the surface //this is used only for re-aligning the rects when //the end of the screen is reached background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x += movement_increment; if(get_person_right_side() > w_width) { //person went too far right person_rect.x = w_width - person_rect.w; } } } else if(direction == SDLK_LEFT) { if(move_screen(direction)) { if(!background_reached_left()) { //moves background left background_rect_left.x -= movement_increment; background_rect_right.x -= movement_increment; if(rect_in_view->x <= -w_width) { //swap the rect in view SDL_Rect* temp; rect_in_view->x = w_width; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece--; std::cout << current_background_piece << std::endl; } if(background_overshoots_left()) { background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x -= movement_increment; if(person_rect.x < 0) { //person went too far left person_rect.x = 0; } } } } without the rest of the code this doesn't make too much sense. Since there is too much of it I'll upload it here for testing. Anyway does anyone know how I could fix this inconsistency?
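
    For what it's worth, a minimal sketch of one way to sidestep the rect-swapping bookkeeping that seems to drift out of step: keep a single camera offset and derive the (at most two) background blit positions from it with a modulo, so the wrap-around can never disagree with the panel count. The names are assumed, not taken from the project, and camera_x is presumed to be clamped elsewhere to [0, level_width - w_width].

        #include <SDL/SDL.h>

        // Assumes 'background' is one panel-wide surface and background->w >= w_width.
        void blit_repeating_background(SDL_Surface* screen, SDL_Surface* background,
                                       int camera_x, int w_width)
        {
            int bg_w = background->w;
            int offset = -(camera_x % bg_w);   // always in (-bg_w, 0]

            SDL_Rect dst1 = { static_cast<Sint16>(offset), 0, 0, 0 };
            SDL_BlitSurface(background, NULL, screen, &dst1);

            // A second panel is only needed when the first one ends before the screen does.
            if (offset + bg_w < w_width) {
                SDL_Rect dst2 = { static_cast<Sint16>(offset + bg_w), 0, 0, 0 };
                SDL_BlitSurface(background, NULL, screen, &dst2);
            }
        }

    With this approach the person sprite only moves while camera_x is pinned at either end of the level, which is the same behaviour the move() method is trying to express with move_screen().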

    Read the article

  • Java thread dump where main thread has no call stack? (jsvc)

    - by dwhsix
    We have a java process running as a daemon (under jsvc). Every several days it just stops doing any work; output to the logfile stops (it is pretty verbose, on 5-minute intervals) and it consumes no CPU or IO. There are no exceptions logged in the logfile nor in syserr or sysout. The last log statement is just prior to a db commit being done, but there is no open connection on the db server (MySQL) and reviewing the code, there should always be additional log output after that, even if it had encountered an exception that was going to bubble up. The most curious thing I find is that in the thread dump (included below), there's no thread in our code at all, and the main thread seems to have no context whatsoever: "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE As noted earlier, this is a daemon process running using jsvc, but I don't know if that has anything to do with it (I can restructure the code to also allow running it directly, to test). Any suggestions on what might be happening here? Thanks... dwh Full thread dump: Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode): "MySQL Statement Cancellation Timer" daemon prio=10 tid=0x00002aaaf81b8800 nid=0x447b in Object.wait() [0x00002aaaf6a22000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab5556d50> (a java.util.TaskQueue) at java.lang.Object.wait(Object.java:485) at java.util.TimerThread.mainLoop(Timer.java:483) - locked <0x00002aaab5556d50> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:462) "Low Memory Detector" daemon prio=10 tid=0x00000000006a4000 nid=0x4479 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread1" daemon prio=10 tid=0x00000000006a1000 nid=0x4477 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread0" daemon prio=10 tid=0x000000000069d000 nid=0x4476 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" daemon prio=10 tid=0x000000000069b000 nid=0x4465 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Finalizer" daemon prio=10 tid=0x0000000000678800 nid=0x4464 in Object.wait() [0x00002aaaf61d6000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118) - locked <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159) "Reference Handler" daemon prio=10 tid=0x0000000000676800 nid=0x4463 in Object.wait() [0x00002aaaf60d5000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:485) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116) - locked <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "VM Thread" prio=10 tid=0x0000000000670000 nid=0x4462 runnable "GC task thread#0 (ParallelGC)" prio=10 tid=0x000000000061e000 nid=0x445e runnable "GC task thread#1 (ParallelGC)" prio=10 tid=0x0000000000620000 nid=0x445f runnable "GC task thread#2 (ParallelGC)" prio=10 
tid=0x0000000000622000 nid=0x4460 runnable "GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000000623800 nid=0x4461 runnable "VM Periodic Task Thread" prio=10 tid=0x00000000006a6800 nid=0x447a waiting on condition JNI global references: 797 Heap PSYoungGen total 162944K, used 48388K [0x00002aaadff40000, 0x00002aaaf2ab0000, 0x00002aaaf5490000) eden space 102784K, 47% used [0x00002aaadff40000,0x00002aaae2e81170,0x00002aaae63a0000) from space 60160K, 0% used [0x00002aaaeb850000,0x00002aaaeb850000,0x00002aaaef310000) to space 86720K, 0% used [0x00002aaae63a0000,0x00002aaae63a0000,0x00002aaaeb850000) PSOldGen total 699072K, used 699072K [0x00002aaab5490000, 0x00002aaadff40000, 0x00002aaadff40000) object space 699072K, 100% used [0x00002aaab5490000,0x00002aaadff40000,0x00002aaadff40000) PSPermGen total 21248K, used 9252K [0x00002aaab0090000, 0x00002aaab1550000, 0x00002aaab5490000) object space 21248K, 43% used [0x00002aaab0090000,0x00002aaab09993e8,0x00002aaab1550000)

    Read the article

  • PROJECT HELP NEEDED. SOME BASIC CONCEPTS GREAT CONFUSION BECAUSE OF LACK OF PROPER MATERIAL PLEASE H

    - by user287745
    Task ATTENDENCE RECORDER AND MANAGEMENT SYSTEM IN DISTRIBUTED ENVIRONMENT ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Example implementation needed. a main server in each lab where the operator punches in the attendence of the student. =========================================================== scenerio:- a college, 10 departments, all departments have a computer lab with 60-100 computers, the computers within each lab are interconnected and all computers in any department have to dail to a particular number (THE NUMBER GIVEN BY THE COLLEGE INTERNET DEPARTMENT) to get connected to the internet. therefore safe to assume that there is a central location to which all the computers in the college are connected to. there is a 'students attendence portal' which can be accessed using internet explorer, students enter there id and get the particular attendence record regarding to the labs only. a description of the working is like:- 1) the user will select which department, which year has arrived to the lab 2) the selection will give the user a return of all the students name and there roll numbers belonging to that department; 'with a check box to "TICK MARK IF THE STUDENT IS PRESENT" ' 3) A SUBMIT BUTTON when pressed reads the 'id' of the checkbox to determine the "particular count number of the student" from that an id of the student is constructed and that id is inserted with a present. (there is also date and time and much more to normalize the db and to avoid conflicts and keep historic records etc but that you will have to assume) steps taken till this date:- ( please note we are not computer students, we are to select something of some other line as a project!, as you will read in my many post 'i" have designed small websites just out of liking. have never ever done any thing official to implement like this.) * have made the database fully normalized. * have made the website which does the functions required on the database. Testing :- deployed the db and site on a free aspspider server and it worked. tested from several computers. Now the problem please help thank youuuuuuuu a practical demonstration has to be done within the college network. no internet! we have been assigned a lab - 60 computers- to demonstrate. (please dont give replies as 60 computers only! is not a big deal one CPU can manage it. i know that; IT IS A HYPOTHETICAL SITUATION WHERE WE ASSUME THAT 60 IS NOT 60 BUT ITS LIKE 60,000 COMPUTERS) 1a) make a web server, yes iis and put files in www folder and configure server to run aspx files- although a link to a step by step guide will be appreciated)\ ? which version of windows should i ask for xp or win server 2000 something? 2a) make a database server. ( well yes install sql server 2005, okay but then what? just put the database file on a pc share it and append the connection string to the share? ) 3a) make the site accessible from the remaining computers ? http://localhost/sitename ? all users "being operators of the particular lab" have the right to edit, write or delete(in dispute), thereby any "users" who hate our program can make the database inconsistent by accessing te same record and doing different edits and then complaining? so how to prevent this? you know something like when the db table is being written to others can only read but not write.. one big confusion:- IN DISTRIBUTED ENVIRONMENT "how to implement this" where does "distributed environment" come in! 
meaning: alright, the labs are in different departments, but the "database server will be one" and the "web server will be one", so what's distributed!?

    Read the article

  • Unusually high memory usage on a CentOS VPS with 512 MB guaranteed RAM

    - by Andrei Bârsan
    I'm working on a medium-sized web application written in PHP that's running on a VPS with 512mb ram. The webapp hasn't been officially launched yet, so there isn't too much traffic going on, just me and a few other people working on it. There is another slightly smaller webapp also hosted on this machine, among 4-5 other small static sites. We are running Centos 5 32-bit & cPanel/WHM. This is the result of running ps aux and, as you can see, it's not using 100% of the RAM. However, on the hypanel overview, it's always shown as using aroun 500MB ram, just for running apache, mysql, and the lowest-memory-footprint versions of the mail server, ftp server etc. -bash-3.2# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2156 664 ? Ss 12:08 0:00 init [3] root 1123 0.0 0.0 2260 548 ? S<s 12:08 0:00 /sbin/udevd -d root 1462 0.0 0.0 1812 568 ? Ss 12:08 0:00 syslogd -m 0 named 1496 0.0 0.0 3808 820 ? Ss 12:08 0:00 nsd named 1497 0.0 0.0 10672 756 ? S 12:08 0:00 nsd named 1499 0.0 0.0 3880 584 ? S 12:08 0:00 nsd root 1514 0.0 0.1 7240 1064 ? Ss 12:08 0:00 /usr/sbin/sshd root 1522 0.0 0.0 2832 832 ? Ss 12:08 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 1534 0.0 0.1 3712 1328 ? S 12:08 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql - mysql 1667 0.0 2.9 225680 30884 ? Sl 12:08 0:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql - mailnull 1766 0.0 0.1 9352 1100 ? Ss 12:08 0:00 /usr/sbin/exim -bd -q60m root 1797 0.0 0.0 2156 708 ? Ss 12:08 0:00 /usr/sbin/dovecot root 1798 0.0 0.0 2632 1012 ? S 12:08 0:00 dovecot-auth root 1816 0.0 3.0 38580 32456 ? Ss 12:08 0:01 /usr/local/bin/spamd -d --allowed-ips=127.0.0.1 --pidfi root 1839 0.0 1.6 63200 17496 ? Ss 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 1846 0.0 0.1 5416 1468 ? Ss 12:08 0:00 pure-ftpd (SERVER) root 1848 0.0 0.1 6212 1244 ? S 12:08 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin root 1856 0.0 0.1 4492 1112 ? Ss 12:08 0:00 crond root 1864 0.0 0.0 2356 428 ? Ss 12:08 0:00 /usr/sbin/atd dovecot 1927 0.0 0.1 5196 1952 ? S 12:08 0:00 pop3-login dovecot 1928 0.0 0.1 5196 1948 ? S 12:08 0:00 pop3-login dovecot 1929 0.0 0.1 5316 2012 ? S 12:08 0:00 imap-login dovecot 1930 0.0 0.2 5416 2228 ? S 12:08 0:00 imap-login root 1939 0.0 0.1 3936 1964 ? S 12:08 0:00 cPhulkd - processor root 1963 0.0 0.8 15876 8564 ? S 12:08 0:00 cpsrvd (SSL) - waiting for connections root 1966 0.0 0.7 15172 7748 ? S 12:08 0:00 cpdavd - accepting connections on 2077 and 2078 root 1990 0.0 0.2 5008 3136 ? S 12:08 0:00 queueprocd - wait to process a task root 2017 0.0 2.9 38580 31020 ? S 12:08 0:00 spamd child root 2018 0.0 0.5 8904 5636 ? S 12:08 0:00 /usr/bin/perl /usr/local/cpanel/bin/leechprotect nobody 2021 0.0 3.2 66512 33724 ? S 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 2022 0.0 3.1 67812 33024 ? S 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 2024 0.0 1.9 64364 20680 ? S 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 2027 0.0 0.4 9000 4540 ? S 12:08 0:00 tailwatchd root 2032 0.0 0.1 4176 1836 ? SN 12:08 0:00 cpanellogd - sleeping for logs nobody 3096 0.0 1.9 64572 20264 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3097 0.0 2.8 66008 30136 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3098 0.0 2.8 65704 29752 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3099 0.0 3.1 67260 32816 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL andrei 3448 0.0 0.1 3204 1632 ? 
S 12:50 0:00 imap nobody 3537 0.0 1.9 64308 20108 ? S 13:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3614 0.0 1.9 64576 20628 ? S 13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3615 0.0 1.3 63200 14672 ? S 13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 3626 0.0 0.2 10232 2964 ? Rs 13:14 0:00 sshd: root@pts/0 root 3648 0.0 0.1 3844 1600 pts/0 Ss 13:14 0:00 -bash root 3826 0.0 0.0 2532 908 pts/0 R+ 13:21 0:00 ps aux Lately, without any significant changes to the configuration, the memory usage started peaking and going over 512, causing the virtual server to kill apache, basically murdering our site in the process. Do you have any idea if this is normal and more resources should be acquired? I don't think... since there isn't too much data or traffic online yet.

    Read the article

  • WPF: exit thread automatically when application closes

    - by toni
    Hi, I have a main wpf window and one of its controls is a user control that I have created. this user control is an analog clock and contains a thread that update hour, minute and second hands. Initially it wasn't a thread, it was a timer event that updated the hour, minutes and seconds but I have changed it to a thread because the application do some hard work when the user press a start button and then the clock don't update so I changed it to a thread. COde snippet of wpf window: <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:GParts" xmlns:Microsoft_Windows_Themes="clr-namespace:Microsoft.Windows.Themes assembly=PresentationFramework.Aero" xmlns:UC="clr-namespace:GParts.UserControls" x:Class="GParts.WinMain" Title="GParts" WindowState="Maximized" Closing="Window_Closing" Icon="/Resources/Calendar-clock.png" x:Name="WMain" > <...> <!-- this is my user control --> <UC:AnalogClock Grid.Row="1" x:Name="AnalogClock" Background="Transparent" Margin="0" Height="Auto" Width="Auto"/> <...> </Window> My problem is when the user exits the application then the thread seems to continue executing. I would like the thread finishes automatically when main windows closes. code snippet of user control constructor: namespace GParts.UserControls { /// <summary> /// Lógica de interacción para AnalogClock.xaml /// </summary> public partial class AnalogClock : UserControl { System.Timers.Timer timer = new System.Timers.Timer(1000); public AnalogClock() { InitializeComponent(); MDCalendar mdCalendar = new MDCalendar(); DateTime date = DateTime.Now; TimeZone time = TimeZone.CurrentTimeZone; TimeSpan difference = time.GetUtcOffset(date); uint currentTime = mdCalendar.Time() + (uint)difference.TotalSeconds; christianityCalendar.Content = mdCalendar.Date("d/e/Z", currentTime, false); // this was before implementing thread //timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed); //timer.Enabled = true; // The Work to perform ThreadStart start = delegate() { // With this condition the thread exits when main window closes but // despite of this it seems like the thread continues executing after // exiting application because in task manager cpu is very busy // while ((this.IsInitialized) && (this.Dispatcher.HasShutdownFinished== false)) { this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() => { DateTime hora = DateTime.Now; secondHand.Angle = hora.Second * 6; minuteHand.Angle = hora.Minute * 6; hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5); DigitalClock.CurrentTime = hora; })); } Console.Write("Quit ok"); }; // Create the thread and kick it started! new Thread(start).Start(); } // this was before implementing thread void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e) { this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() => { DateTime hora = DateTime.Now; secondHand.Angle = hora.Second * 6; minuteHand.Angle = hora.Minute * 6; hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5); DigitalClock.CurrentTime = hora; })); } } // end class } // end namespace How can I exit correctly from thread automatically when main window closes and then application exits? Thanks very much!
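
    As a hedged sketch of the underlying pattern (shown here in C++ with made-up names rather than the C# of the question), the usual fix is twofold: the worker loops on a stop flag and sleeps between ticks (the posted loop never sleeps, which is why it keeps a core busy), and the window's Closing handler sets the flag and joins the thread so it cannot outlive the application.

        #include <atomic>
        #include <chrono>
        #include <thread>

        class ClockWorker {
        public:
            void start() {
                m_worker = std::thread([this] {
                    while (!m_stop.load(std::memory_order_acquire)) {
                        updateHands();   // in the WPF version, this is the Dispatcher.Invoke call
                        std::this_thread::sleep_for(std::chrono::seconds(1));
                    }
                });
            }

            // Call this from the window's Closing handler.
            void stop() {
                m_stop.store(true, std::memory_order_release);
                if (m_worker.joinable())
                    m_worker.join();
            }

        private:
            void updateHands() { /* recompute hour/minute/second hand angles */ }

            std::atomic<bool> m_stop{false};
            std::thread m_worker;
        };

    In .NET specifically, setting IsBackground = true on the thread (or using a DispatcherTimer instead of a hand-rolled loop) also keeps the worker from holding the process open after the main window closes.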

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns. Has that amount of transparency happened? Is there a technet article somewhere? If my score was limited by my Memory subscore of 5.9. A nieve person would suggest: Buy a faster RAM Which is wrong of course. From the Windows help: If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9. You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What i am looking for is attributes required to achieve a certain score, or move beyond a current limitation. The information i've been able to compile so far, chiefly from 3 Windows blog entries, and an article: Memory subscore Score Conditions ======= ================================ 1.0 < 256 MB 2.0 < 500 MB 2.9 <= 512 MB 3.5 < 704 MB 3.9 < 944 MB 4.5 <= 1.5 GB 5.9 < 4.0GB-64MB on a 64-bit OS Windows Vista highest score 7.9 Windows 7 highest score Graphics Subscore Score Conditions ======= ====================== 1.0 doesn't support DX9 1.9 doesn't support WDDM 4.9 does not support Pixel Shader 3.0 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 7.9 Windows 7 highest score Gaming graphics subscore Score Result ======= ============================= 1.0 doesn't support D3D 2.0 supports D3D9, DX9 and WDDM 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 6.0-6.9 good framerates (e.g. 40-50fps) at normal resoltuions (e.g. 1280x1024) 7.0-7.9 even higher framerates at even higher resolutions 7.9 Windows 7 highest score Processor subscore Score Conditions ======= ========================================================================== 5.9 Windows Vista highest score 6.0-6.9 many quad core processors will be able to score in the high 6 low 7 ranges 7.0+ many quad core processors will be able to score in the high 6 low 7 ranges 7.9 8-core systems will be able to approach 8.9 Windows 7 highest score Primary hard disk subscore (note) Score Conditions ======= ======================================== 1.9 Limit for pathological drives that stop responding when pending writes 2.0 Limit for pathological drives that stop responding when pending writes 2.9 Limit for pathological drives that stop responding when pending writes 3.0 Limit for pathological drives that stop responding when pending writes 5.9 highest you're likely to see without SSD Windows Vista highest score 7.9 Windows 7 highest score Bonus Chatter You can find your WEI detailed test results in: C:\Windows\Performance\WinSAT\DataStore e.g. 
2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml <WinSAT> <WinSPR> <DiskScore>5.9</DiskScore> </WinSPR> <Metrics> <DiskMetrics> <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput> <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput> <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness> </DiskMetrics> </Metrics> </WinSAT> Pre-emptive snarky comment: "WEI is useless, it has no relation to reality" Fine, how do i increase my hard-drive's random I/O throughput? Update - Amount of memory limits rating Some people don't believe Microsoft's statement that having less than 4GB of RAM on a 64-bit edition of Windows doesn't limit the rating to 5.9: And from xxx.Formal.Assessment (Recent).WinSAT.xml: <WinSPR> <LimitsApplied> <MemoryScore> <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied> </MemoryScore> </LimitsApplied> </WinSPR> References Windows Vista Team Blog: Windows Experience Index: An In-Depth Look Understand and improve your computer's performance in Windows Vista Engineering Windows 7 Blog: Engineering the Windows 7 “Windows Experience Index”

    Read the article

  • HTTP downloads stop after some time; resuming is not possible

    - by cdauth
    When I try to download a file via HTTP, the downloads sometimes stop after around 30 MB. The download rates goes down to 0 B/s and no data keeps coming. When I stop the download and resume again, the download still hangs. But when I redownload it from byte 0 again, everything works fine up to 30 MB when it stops again. Sometimes, after some hours, it just works again without problems. The position in the file when the download stops is variable, but most of the time it is around 30–35 MB. As a download manager I use wget. The same behaviour happens though using curl and other download managers. The error occurs independently of the server I download from. I have also observed this error on other Linux computers in my network. All computers on my network run Gentoo Linux on x86. All internet connections on my network go through a server on my network which runs a transparent Squid proxy on port 80. That server is connected to a router, which is a Speedport W 700V by Deutsche Telekom AG. That router is connected to the internet using ADSL, with 448 kbit/s down speed and 96 kbit/s up speed. I am almost sure that my transparent proxy is not the problem. I turned that off without resolving the issue. I also connected to the router directly via WLAN without resolving the issue. I also tried to download over another port via HTTP. Furthermore, I tried to download the file using IPv6 with a gateway6 tunnel from my computer, which resulted in exactly the same problem. Now the strange thing is that everything works fine using FTP and HTTPS (also with wget on the same computer). Even more strange: when I resume the download that hanged over HTTP using FTP or HTTPS, download a few bytes that way, stop wget and then resume again using HTTP, it loads data again! But after a few MB, it may stop again. Unfortunately, files downloaded that way are always broken (the MD5 sum is not correct), so at some point, there must have been bogus data. I tried searching for HTML error messages in the downloaded file, but grep -i html does not find anything. (I cannot think of a way to search for GZIP-compressed HTML error messages in the file, so I did not try that.) I tried using strace on wget when it failed to resume a download, you can find the entire output on pastebin. The important lines are repeated every second: clock_gettime(CLOCK_MONOTONIC, {326102, 62176435}) = 0 ) = 1 write(2, "78% [++++++++++++++++++++++++++++"..., 19578% [+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ] 110,683,685 --.-K/s ) = 195 select(4, [3], NULL, NULL, {0, 949999}) = 0 (Timeout) I have absolutely no idea what could be the reason of this problem. It seems like whatever causes the issue speaks HTTP. It seems to speak HTTP that intelligently that it even regognises it in an IPv6-over-IPv4 tunnel. But what could that be and why does it only happen sometimes? The other possibility would be that there is a problem on my computer that is the same on other Gentoo Linux computers as well. Has anyone ever had such a problem? What could be the reason and where do I have to continue investigating to find out more about the issue? Update: I have just run into the problem again and tried to resume the download over the router’s WLAN, and this time it worked. Maybe I did something wrong during my last tests with the WLAN. Now maybe my transparent proxy server is in fact the problem. It is a very basic Squid proxy server that does not cache anything. 
It may be worth noting that a second Squid proxy runs on the same computer on another port. Update: A download hung again, and this time I turned off all firewall settings and stopped all proxy servers. I failed to resume the download from my network server, which is directly connected to the router, so my proxy server is definitely not the cause of the problem. I will try to upgrade the firmware of my router now, although I do not have admin access to it. I will see what I can do.

    Read the article

  • Nightmare: Upgrading Tomcat 5.5 to 6.0

    - by pavanlimo
    I'm trying to upgrade a perfectly running embedded Tomcat 5.5 to Tomcat 6.0. I understand that all I need to do is replace Tomcat 5.5 jars with 6.0. That's what I did. So I replaced the following jars: catalina-5.0.28.jar catalina-5.5.9.jar catalina-optional-5.5.9.jar commons-el.jar commons-modeler-1.1.0.jar jasper-compiler-jdt.jar jasper-compiler.jar jasper-runtime.jar jmx-5.0.28.jar jsp-api-2.0.jar naming-factory.jar naming-resources.jar servlet-api-2.4.jar servlets-default.jar tomcat-coyote.jar tomcat-http.jar tomcat-util.jar with: annotations-api.jar catalina.jar jasper.jar tomcat-dbcp.jar catalina-ant.jar el-api.jar jsp-api.jar tomcat-i18n-es.jar catalina-ha.jar jasper-el.jar servlet-api.jar tomcat-i18n-fr.jar catalina-tribes.jar jasper-jdt.jar tomcat-coyote.jar tomcat-i18n-ja.jar tomcat-juli.jar As soon as I start the server, I get the following message in the logs at INFO level: INFO: Starting Servlet Engine: Apache Tomcat/6.0.29 Dec 31, 2010 6:04:18 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Based on the this explanation, I need to remove a jar file which has a conflicting Servlet.class. I swear to God, there is no other conflicting jar file, I grepped system wide for Servlet.class, it matched only servlet-api.jar. I also downloaded javaee.jar and replaced it by servlet-api.jar, to same avail. Having tried lot of these stuff, I did not have much to look upto, so set the tomcat logging level to ALL. In the log I could see that it is trying to check for Servlet.class in each and every jar it is loading until it finds servlet-api.jar and throws "jar not loaded" message as soon as it finds servlet-api.jar. See below: FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories FINE: Deploy JAR /WEB-INF/lib/servlet-api.jar to /usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader addJar FINE: addJar(/WEB-INF/lib/servlet-api.jar) Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories Please note however, that Tomcat starts successfully! And as soon as I hit the URL on the browser, I get blank page(this may be in my case only, I guess 'cuz of my web.xml, sorta different from most. Other people on the internet have got Error 404 instead.) 
with following log statements(at finest level) Jan 2, 2011 9:40:01 AM org.apache.catalina.connector.CoyoteAdapter parseSessionCookiesId FINE: Requested cookie session id is 0FBA716E3F9B0147C3AF7ABAE3B1C27B Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Security checking request GET /login.jsp Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: No applicable constraint located Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Not subject to any constraint Jan 2, 2011 9:40:01 AM org.apache.catalina.core.StandardWrapper allocate FINEST: Returning non-STM instance I'm not sure if the above log message is important, but I'm for all-out disclosure here. One interesting thing though, I manually created a dummy jsp file containing only "helloooo" just outside WEB-INF folder(no security constraints for this file). This file was accessible and could be displayed. But, all my jsp's and classes are inside WEB-INF(ofcourse). Sick and tired of this issue, please help me solve it. I've already spent 20-24 hours on this unsuccessfully. Any pointers directions hints leads?

    Read the article

  • How to directly rotate CVImageBuffer image in iOS 4 without converting to UIImage?

    - by Ian Charnas
    I am using OpenCV 2.2 on the iPhone to detect faces. I'm using the IOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows. My challenge is that the video frames come in as CVBufferRef (pointers to CVImageBuffer) objects, and they come in oriented as a landscape, 480px wide by 300px high. This is fine if you are holding the phone sideways, but when the phone is held in the upright position I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly. I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate, as this person is doing: Rotate CGImage taken from video frame However that wastes a lot of CPU. I'm looking for a faster way to rotate the images coming in, ideally using the GPU to do this processing if possible. Any ideas? Ian Code Sample: -(void) startCameraCapture { // Start up the face detector faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"]; // Create the AVCapture Session session = [[AVCaptureSession alloc] init]; // create a preview layer to show the output from the camera AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session]; previewLayer.frame = previewView.frame; previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; [previewView.layer addSublayer:previewLayer]; // Get the default camera device AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; // Create a AVCaptureInput with the camera device NSError *error=nil; AVCaptureInput* cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error]; if (cameraInput == nil) { NSLog(@"Error to create camera capture:%@",error); } // Set the output AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init]; videoOutput.alwaysDiscardsLateVideoFrames = YES; // create a queue besides the main thread queue to run the capture on dispatch_queue_t captureQueue = dispatch_queue_create("catpureQueue", NULL); // setup our delegate [videoOutput setSampleBufferDelegate:self queue:captureQueue]; // release the queue. I still don't entirely understand why we're releasing it here, // but the code examples I've found indicate this is the right thing. Hmm... dispatch_release(captureQueue); // configure the pixel format videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil]; // and the size of the frames we want // try AVCaptureSessionPresetLow if this is too slow... [session setSessionPreset:AVCaptureSessionPresetMedium]; // If you wish to cap the frame rate to a known value, such as 10 fps, set // minFrameDuration. 
videoOutput.minFrameDuration = CMTimeMake(1, 10); // Add the input and output [session addInput:cameraInput]; [session addOutput:videoOutput]; // Start the session [session startRunning]; } - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { // only run if we're not already processing an image if (!faceDetector.imageNeedsProcessing) { // Get CVImage from sample buffer CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer); // Send the CVImage to the FaceDetector for later processing [faceDetector setImageFromCVPixelBufferRef:cvImage]; // Trigger the image processing on the main thread [self performSelectorOnMainThread:@selector(processImage) withObject:nil waitUntilDone:NO]; } }
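
    Since OpenCV is already in the project, one sketch that skips the CGImage/UIImage round trip is to wrap the pixel buffer's BGRA bytes in a cv::Mat (no copy) and rotate with transpose + flip. This still runs on the CPU rather than the GPU, it assumes the kCVPixelFormatType_32BGRA format configured above, and the function name is made up:

        #include <opencv2/core/core.hpp>
        #include <CoreVideo/CoreVideo.h>

        // Rotate a 32BGRA pixel buffer 90 degrees clockwise into an OpenCV Mat.
        cv::Mat rotatedMatFromPixelBuffer(CVPixelBufferRef pixelBuffer)
        {
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);

            void*  base        = CVPixelBufferGetBaseAddress(pixelBuffer);
            size_t width       = CVPixelBufferGetWidth(pixelBuffer);
            size_t height      = CVPixelBufferGetHeight(pixelBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

            // Wraps the camera bytes in place; nothing is copied yet.
            cv::Mat landscape((int)height, (int)width, CV_8UC4, base, bytesPerRow);

            cv::Mat portrait;
            cv::transpose(landscape, portrait);   // copies into portrait
            cv::flip(portrait, portrait, 1);      // flip about the vertical axis -> 90 degrees clockwise

            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return portrait;   // owns its own data, safe to use after unlocking
        }

    If even that copy is too costly, the remaining option the question mentions is doing the rotation on the GPU, for example by drawing the frame into an OpenGL ES texture with a 90-degree transform; but the wrap-and-rotate above is the smallest change to the existing OpenCV path.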

    Read the article

< Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >