Search Results

Search found 5026 results on 202 pages for 'blocked threads'.


  • How can I resolve this one application coming up with a "You don't have permission to use the application" error?

    - by morgant
    I've got a Mac OS X 10.6 Snow Leopard Server Open Directory Master with a user who's getting Mobility & Application managed preferences from a group (the only group they're a member of). The workstation is also running Mac OS X 10.6 Snow Leopard. When the user logs in and tries to run our primary application, which they're explicitly allowed to run (via the group's preferences), it says "You don't have permission to use the application 'Blah'".

    Now, the application is added to the group's list of always allowed applications, unsigned (so a minor difference in application version or file contents shouldn't disallow it). It even lives in a subdirectory of /Applications, which is in the list of folders to allow applications. I've run into this when logging this user into new workstations, and the following usually works:

    1. Log them out.
    2. Remove the following files from their mobile home folder on the workstation: /Library/Managed\ Preferences/, ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
    3. Remove the following files from their network home folder on the server: ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
    4. Log them back in on the workstation.

    However, this no longer resolves the issue. Their Home Sync preferences are set (on the group) to sync ~, but not the following files (manually, at login, and at logout... no background sync here): ~/.SymAVQSFile, ~/NAVMac800QSFile, ~/Library, ~/.FileSync, ~/.account.

    Their Preferences Sync preferences are set (also on the group) to sync ~/Library & ~/Documents/Microsoft User Data, but not the following files (also manually, at login, and at logout... no background sync): ~/.SymAVQSFile, ~/.Trash, ~/.Trashes, ~/Documents/Microsoft User Data/Entourage Temp, ~/Library/Application Support/SyncServices, ~/Library/Application Support/MobileSync, ~/Library/Caches, ~/Library/Calendars/Calendar Cache, ~/Library/Logs, ~/Library/Mail/AvailableFeeds, ~/Library/Mail/Envelope Index, ~/Library/Preferences/Macromedia/, ~/Library/Printers, ~/Library/PubSub/Database, ~/Library/PubSub/Downloads, ~/Library/PubSub/Feeds, ~/Library/Safari/Icons.db, ~/Library/Safari/HistoryIndex.sk, ~/Library/iTunes/iPhone Software Updates, IMAP-*, Exchange-*, EWS-*, Mac-*, ~/Library/Preferences/ByHost, ~/Library/Preferences/com.apple.dock.plist, ~/Library/Preferences/com.apple.sitebarlists.plist, ~/Library/Application Support/4D, ~/Library/Preferences/com.apple.MCX.plist, ~/.FileSync, ~/.account.

    Even with ~/Library/Preferences/com.apple.MCX.plist prevented from syncing during a Preferences Sync, it still seems to show up in the network home on the server frequently. Are there any other files besides ~/Library/Preferences/com.apple.MCX.plist that contain application Managed Preferences and might be causing this one app to show up as not allowed? Any ideas on how ~/Library/Preferences/com.apple.MCX.plist keeps getting sync'd back up to the network home folder on the server?

    Update: I thought I had found a workaround this morning, but it proved extremely temporary. Basically, looking at /Library/Managed\ Preferences/[shortname]/com.apple.applicationaccess.new.plist, I discovered that it didn't have an entry for the application in question, but /Library/Managed\ Preferences/[shortname]/complete.plist did. Naturally, I deleted com.apple.applicationaccess.new.plist, logged in again, and it worked... on one workstation. It failed on others, and after logging out & back in a couple more times it started failing on all of them again, even after further deletions of com.apple.applicationaccess.new.plist. Oddly, com.apple.applicationaccess.new.plist & complete.plist both contain an entry for the application in question now, but it still says it's not allowed.

    Further Update: Okay, so I now have a reproducible workaround, which seems to be required after every reboot of the workstation:

    1. Log in as the user (you'll discover you cannot launch the application in question).
    2. Fast User Switch to the local admin account on the workstation (we always have one on every machine).
    3. From that local admin account, run sudo mcxrefresh -n 'shortname' (logging out and back in as the user in question will not work).
    4. Fast User Switch back to the user (you'll still not be allowed to run the application).
    5. Log the user out and back in (you'll now be able to run the application in question).
    6. Fast User Switch back to the local admin account, log it out, and log back in as the user in question.

    If you do all that exactly as described, it'll keep working through log out & log back in, but NOT through a reboot. If, after a reboot, you try something like logging in as the local admin account, running sudo mcxrefresh -n 'shortname', logging out, then logging in as the user in question, it will NOT work.

    Yet Another Update: We don't have any computer groups in our Open Directory, so it shouldn't be getting any conflicting settings from there. I ran sudo mcxquery -format xml -user shortname -group groupname before & after performing the aforementioned process to allow the application in question to be run, and the results were identical (saved the results to files & diff'd... I'm not just guessing here).

    One Step Forward, Half a Step Back: When the Mac OS X 10.6.5 Server update was released, we upgraded our Open Directory Master to it, as the changes included the following managed preferences fixes which I hoped might address this issue:

    - Addresses an issue that could prevent managed preferences from being applied when a user logs in on a workstation that has been idle.
    - Fixes an issue that could prevent administrators from bypassing client management settings on a workstation.

    This seemed to improve the situation slightly. The application in question now usually launches without error. If and when it does launch with the "You don't have permission to use the application" error, logging the user out and back in seems to correct it. That said, we've since had to add a couple of applications to the user's ~/Applications/ directory, and those are still prevented from launching. The workstations are running Mac OS X 10.6.4, the OD Master (which the workstations are bound to) is running Mac OS X 10.6.5 Server (although there are two OD Replicas still running 10.6.4 Server), and we're using Workgroup Manager 10.6.3 (which is included with the Server Admin Tools 10.6.5 upgrade) to add the applications (unsigned, as always). This time, I've caught the following in /var/log/system.log when attempting to launch one of the allowed applications from ~/Applications:

        Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker checkApp:csFlags:] [954:username] -- *** Incoming app appears to be masquerading as white listed app and failed signature validation: /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro. Note: This may be a valid app of a different version than what was whitelisted (on a different volume?)
        Dec 22 17:36:24 hostname [0x0-0xa42a42].com.filemaker.filemakerpro[43304]: launch of /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro was blocked
        Dec 22 17:36:24 hostname com.apple.launchd.peruser.1340[6375] ([0x0-0xa42a42].com.filemaker.filemakerpro[43304]): Exited with exit code: 255
        Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker(Private) _removeAppFromWhiteList:] [1362:username] -- *** Couldn't find local user record

    Running sudo mcxquery -format xml -user username -group groupname includes the following entry for FileMaker Pro 5.5 (and appears to include a full integration of the user's application whitelist & the group's application whitelist):

        <dict>
            <key>bundleID</key>
            <string>com.filemaker.filemakerpro</string>
            <key>displayName</key>
            <string>FileMaker Pro</string>
        </dict>

    Note the lack of <key>appID</key><data> ... </data>, which seems to specify a signed application. While whitelisted directories also appear to be correctly listed in the results, they too do not actually allow the applications to be run. What is going on here?! Where else should I be looking?
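
    Incidentally, the before/after comparison described above is easy to make repeatable by capturing the effective managed-preferences record to files and diffing them; a minimal sketch using the same mcxquery invocation as above, with shortname and groupname as placeholders:

        # Capture the effective MCX record before the mcxrefresh/login dance...
        sudo mcxquery -format xml -user shortname -group groupname > /tmp/mcx-before.xml
        # ...perform the workaround steps, then capture it again and compare:
        sudo mcxquery -format xml -user shortname -group groupname > /tmp/mcx-after.xml
        diff -u /tmp/mcx-before.xml /tmp/mcx-after.xml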

    Read the article

  • DNS server BIND is not working

    - by milad
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";
        zone "mydomain.com" {
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };

    And the content of /var/named/data/named.mydomain.com:

        $TTL 38400
        mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. (
                2012101201 ; serial number YYMMDDNN
                28800      ; Refresh
                7200       ; Retry
                864000     ; Expire
                38400      ; Min TTL
                )
        mydomain.com.     IN A  1.2.3.4
        www               IN A  1.2.3.4
        ns1.mydomain.com. IN A  1.2.3.4
        ns2.mydomain.com. IN A  1.2.3.4
        mydomain.com.     IN NS ns1.mydomain.com.
        mydomain.com.     IN NS ns2.mydomain.com.

    And I'm sure the named service is running:

        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...

    Thanks for your answers. I know that ping is not the job of BIND; I use it just to check whether the domain is pointed to the host or not (ping is open on my server, as I get a reply when pinging the IP). I use network-tools.com to ping the domain. Here is the output of the dig utility:

        dig mydomain.com
        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 <<>> mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 6806
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;mydomain.com. IN A
        ;; Query time: 321 msec
        ;; SERVER: 5.6.7.8#53(5.6.7.8)  ## note that 5.6.7.8 is my IDC DNS IP
        ;; WHEN: Sun Oct 14 23:53:47 2012
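
    Since dig shows SERVFAIL coming back from the resolver, it may be worth validating the configuration and zone syntax directly on the server first; a minimal sketch using BIND's own checking tools, assuming the file paths from the named.conf above:

        # Verify named.conf parses, then verify the zone file loads cleanly
        named-checkconf /etc/named.conf
        named-checkzone mydomain.com /var/named/data/named.mydomain.com
        # Query the authoritative server directly, bypassing the IDC resolver
        dig @127.0.0.1 mydomain.com A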

    Read the article

  • Trouble getting SSL to work with django + nginx + wsgi

    - by Kevin
    I've followed a couple of examples for Django + nginx + wsgi + SSL, but I can't get them to work. I simply get an error in my browser that I can't connect. I'm running two websites off the host. The config files are identical except for the IP addresses, server names, and directories. When neither uses SSL, they work fine. When I try to listen on 443 with one of them, I can't connect to either. My config files are below, and any suggestions would be appreciated.

        server {
            listen xxx.xxx.xxx.xxx:80;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        server {
            listen xxx.xxx.xxx.xxx:443;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            ssl on;
            ssl_certificate sub.domain.com.crt;
            ssl_certificate_key sub.domain.com.key;
            ssl_prefer_server_ciphers on;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Protocol https;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        <VirtualHost *:8080>
            ServerName xxx.xxx.xxx.xxx
            ServerAlias xxx.xxx.xxx.xxx
            LogLevel warn
            ErrorLog /home/django/logs/apache_customerdb_error.log
            CustomLog /home/django/logs/apache_customerdb_access.log combined
            WSGIScriptAlias / /home/django/customerdb/apache/django.wsgi
            WSGIDaemonProcess customerdb_wsgi processes=4 threads=5
            WSGIProcessGroup customerdb_wsgi
            SetEnvIf X-Forwarded-Protocol "^https$" HTTPS=on
        </VirtualHost>

    UPDATE: the existence of two sites (on separate IPs) on the host is the issue. If I delete the other site, the settings above mostly work. Doing so also brings up another issue: Chrome doesn't accept the site as secure, saying that some content is not encrypted.
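
    For what it's worth, nginx can serve SSL for several sites on one host as long as each server block binds its own IP and certificate (or shares one via SNI in 0.7.14+). A sketch of what the second site's 443 block might look like; the second IP, hostname, certificate, and backend port here are all hypothetical:

        # Hypothetical second SSL site on its own IP; each ip:443 pair gets its own cert
        server {
            listen yyy.yyy.yyy.yyy:443 ssl;   # 'ssl' on the listen line is the newer form of 'ssl on;'
            server_name other.domain.com;
            ssl_certificate     other.domain.com.crt;
            ssl_certificate_key other.domain.com.key;
            location / {
                proxy_pass http://127.0.0.1:8081;   # hypothetical second backend
            }
        }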

    Read the article

  • Server overloaded: DBCP issue on MySQL

    - by taras
    Hi, I have two setups: tomcat5.5.20 on one Red Hat server and mysql 4.1.22 on another Red Hat server. Recently I started getting a repeating error (every second) in catalina.out:

        DBCP object created 2010-12-22 13:33:12 by the following code was never closed:
        java.lang.Exception
            at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.init(AbandonedTrace.java:96)
            at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.(AbandonedTrace.java:79)
            at org.apache.tomcat.dbcp.dbcp.DelegatingResultSet.(DelegatingResultSet.java:71)
            at org.apache.tomcat.dbcp.dbcp.DelegatingResultSet.wrapResultSet(DelegatingResultSet.java:80)
            at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:92)
            at org.apache.jsp.external_005fpage.signup_jsp._jspService(signup_jsp.java:1185)
            at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:199)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:282)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
            at java.lang.Thread.run(Thread.java:595)

    I have to restart Tomcat once a day, when the server load reaches 80-90%. The catalina.out file is also growing so fast that I need to clear the logs every few hours. My datasource config:

        <bean id="myDataSource" class="org.apache.tomcat.dbcp.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName"><value>com.mysql.jdbc.Driver</value></property>
            jdbc:mysql://XXX/XXX?autoReconnect=true 20 20
            <property name="maxIdle"><value>50</value></property>
            <property name="maxActive"><value>50</value></property>
            <property name="removeAbandoned"><value>false</value></property>
            <property name="removeAbandonedTimeout"><value>2400</value></property>
            <property name="username"><value>XXX</value></property>
            <property name="password"><value>XXX</value></property>
        </bean>

    Any idea what the issue can be? Thanks for any direction.
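
    The repeating trace is DBCP's abandoned-object tracking reporting a result set opened in signup_jsp that was never closed, so connections leak until the pool is exhausted. Until the JSP is fixed to close its ResultSet/Statement/Connection in a finally block, one stopgap worth trying is letting the pool reclaim and log abandoned connections; a sketch against the same BasicDataSource bean (removeAbandoned and logAbandoned are standard DBCP properties, and the timeout value here is only illustrative):

        <!-- sketch: reclaim connections held unused for >5 minutes and log who opened them -->
        <property name="removeAbandoned"><value>true</value></property>
        <property name="removeAbandonedTimeout"><value>300</value></property>
        <property name="logAbandoned"><value>true</value></property>

    Reclamation only papers over the leak; closing the resources in the JSP is the durable fix.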

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an EC2 m2.4xlarge running Magento (CentOS 5.6, httpd 2.2, PHP 5.2.17 with eAccelerator 0.9.5.3, MySQL 5.1.52). Right now we're getting a large traffic spike, and our top looks like this:

        top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23
        Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie
        Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
        Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers
        Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached

          PID USER   PR NI VIRT  RES  SHR  S  %CPU %MEM  TIME+    COMMAND
         2433 mysql  15  0 23.6g 4.5g 7112 S 564.7  6.6 33607:34  mysqld
        24046 apache 16  0 411m  65m  28m  S  26.4  0.1  0:09.05  httpd
        24360 apache 15  0 410m  60m  25m  S  26.4  0.1  0:03.65  httpd
        24993 apache 16  0 410m  57m  21m  S  26.1  0.1  0:01.41  httpd
        24838 apache 16  0 428m  74m  20m  S  24.8  0.1  0:02.37  httpd
        24359 apache 16  0 411m  62m  26m  R  22.3  0.1  0:08.12  httpd
        23850 apache 15  0 411m  64m  27m  S  16.8  0.1  0:14.54  httpd
        25229 apache 16  0 404m  46m  17m  R  10.2  0.1  0:00.71  httpd
        14594 apache 15  0 404m  63m  34m  S   8.4  0.1  1:10.26  httpd
        24955 apache 16  0 404m  50m  21m  R   8.4  0.1  0:01.66  httpd
        24313 apache 16  0 399m  46m  22m  R   8.1  0.1  0:02.30  httpd
        25119 apache 16  0 411m  59m  23m  S   6.8  0.1  0:01.45  httpd

    Questions:

    1. Would giving mysqld more memory help it cache queries and react faster? If so, how?
    2. Other than splitting MySQL and PHP onto separate servers (which we're about to do), is there anything else we could/should be doing?

    Thanks!

    UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again!

        # cat /etc/my.cnf
        [client]
        port = ****
        socket = /var/lib/mysql/mysql.sock
        [mysqld]
        datadir=/mnt/persistent/mysql
        port=****
        socket=/var/lib/mysql/mysql.sock
        key_buffer = 512M
        max_allowed_packet = 64M
        table_cache = 1024
        sort_buffer_size = 8M
        read_buffer_size = 4M
        read_rnd_buffer_size = 2M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 128M
        tmp_table_size = 128M
        join_buffer_size = 1M
        query_cache_limit = 2M
        query_cache_size= 64M
        query_cache_type = 1
        max_connections = 1000
        thread_stack = 128K
        thread_concurrency = 48
        log-bin=mysql-bin
        server-id = 1
        wait_timeout = 300
        innodb_data_home_dir = /mnt/persistent/mysql/
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 20G
        innodb_additional_mem_pool_size = 20M
        innodb_log_file_size = 64M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_lock_wait_timeout = 50
        innodb_thread_concurrency = 48
        ft_min_word_len=3
        [myisamchk]
        ft_min_word_len=3
        key_buffer = 128M
        sort_buffer_size = 128M
        read_buffer = 2M
        write_buffer = 2M

        # ./mysqltuner.pl
        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.52-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 2G (Tables: 26)
        [--] Data in InnoDB tables: 749M (Tables: 250)
        [!!] Total fragmented tables: 262
        -------- Security Recommendations -------------------------------------------
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads)
        [OK] Maximum possible memory usage: 35.4G (51% of installed RAM)
        [OK] Slow queries: 0% (35K/680M)
        [OK] Highest usage of available connections: 53% (537/1000)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M
        [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads)
        [OK] Query cache efficiency: 42.3% (260M cached / 615M selects)
        [!!] Query cache prunes per day: 4384652
        [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts)
        [!!] Joins performed without indexes: 100404
        [OK] Temporary tables created on disk: 17% (7M on disk / 45M total)
        [OK] Thread cache hit rate: 99% (537 created / 2M connections)
        [!!] Table cache hit rate: 0% (1K open / 946K opened)
        [OK] Open file limit used: 9% (453/5K)
        [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks)
        [OK] InnoDB data size / buffer pool: 749.3M/20.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 64M)
            join_buffer_size (> 1.0M, or always use indexes with joins)
            table_cache (> 1024)
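
    Given the tuner output, a conservative first pass is to adjust exactly the variables it flags; a sketch of my.cnf starting points, not values tested against this workload:

        table_cache      = 4096   # table cache hit rate was 0% (1K open / 946K opened)
        query_cache_size = 128M   # prunes are high, but invalidation overhead tends to grow much beyond 128M
        # join_buffer_size left at 1M on purpose: the 100404 unindexed joins are better fixed with indexes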

    Read the article

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site:

        Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute.

    Well... it is most certainly not performing like an O(1) operation for me. Under low to normal load on our site, memcached response times for get and set ops are about 0.001 seconds. Not bad. But if we triple the load, we get outliers that take 100x (or in rare cases 1000x!) that long. I even had one instance where it took 2.2442 seconds for memcached to store a value. Obviously this is killing our site. Here's the output of Memcached->getStats during one of the slow periods:

        [pid] => 18079
        [uptime] => 8903
        [threads] => 4
        [time] => 1332795759
        [pointer_size] => 32
        [rusage_user_seconds] => 26
        [rusage_user_microseconds] => 503872
        [rusage_system_seconds] => 125
        [rusage_system_microseconds] => 477008
        [curr_items] => 42099
        [total_items] => 422500
        [limit_maxbytes] => 943718400
        [curr_connections] => 84
        [total_connections] => 4946
        [connection_structures] => 178
        [bytes] => 7259957
        [cmd_get] => 1679091
        [cmd_set] => 351809
        [get_hits] => 1662048
        [get_misses] => 17043
        [evictions] => 0
        [bytes_read] => 109388476
        [bytes_written] => 3187646458
        [version] => 1.4.13

    So the things I have ruled out so far are:

    - Hitting the max connections limit (curr_connections of 84 is well below the default max of 1024).
    - Swapping: the machine has 900M out of 1024M of memory dedicated to memcached on a dedicated machine, and it only appears to be using about 7MB of data as per the bytes stat.

    How would I diagnose the other hardware problems? prstat doesn't really show a whole lot going on in terms of CPU or memory usage. I'm not sure how to figure out the network problems, but as this is a dedicated server on the same private network as the web box, I don't think it's a connectivity issue (ping is less than a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts.

    Edit: Also forgot to mention that I've tried both persistent and non-persistent connections, with minimal-to-no impact.
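
    One cheap way to separate network trouble from server-side lag is to time raw round-trips over memcached's text protocol from the web box, outside any client library; a sketch, with memcached-host as a placeholder for your server:

        # Time a bare stats round-trip (bypasses the PHP client entirely)
        time printf 'stats\r\nquit\r\n' | nc memcached-host 11211 > /dev/null
        # Run it in a loop while the site is under load; if this stays ~1ms while the
        # application sees 100x outliers, suspect the client or connection handling
        # rather than the memcached server itself.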

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using ImageMagick and FFmpeg. I use ImageMagick to expand a single photo into 30fps video (ImageMagick also handles things like putting some text captions on the frames along the way). When I go to let FFmpeg digest it into a video, it clips along nicely on the color parts of the video, but when it gets to a black and white section it reports:

        frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703

    and drops every frame of video until it hits something with color. As you can imagine, this results in entire photos being removed from the slideshow. Here is my latest dump:

        ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4
        ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers
          built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
          configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3
          libavutil      52. 12.100 / 52. 12.100
          libavcodec     54. 79.101 / 54. 79.101
          libavformat    54. 49.100 / 54. 49.100
          libavdevice    54.  3.102 / 54.  3.102
          libavfilter     3. 26.101 /  3. 26.101
          libswscale      2.  1.103 /  2.  1.103
          libswresample   0. 17.102 /  0. 17.102
          libpostproc    52.  2.100 / 52.  2.100
        Input #0, image2, from 'teststream/%06d.jpg':
          Duration: 00:12:02.80, start: 0.000000, bitrate: N/A
            Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
        [libx264 @ 0x3450140] using SAR=1/1
        [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x3450140] profile High, level 3.0
        [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
        Output #0, mp4, to 'newffmpeg.mp4':
          Metadata:
            encoder         : Lavf54.49.100
            Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (mjpeg -> libx264)
        Press [q] to stop, [?] for help
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444p
        frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703
        video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425%
        [libx264 @ 0x3450140] frame I:9 Avg QP:20.10 size: 33933
        [libx264 @ 0x3450140] frame P:636 Avg QP:24.12 size: 6737
        [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514
        [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2%
        [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5%
        [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9%
        [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7%
        [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1%
        [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1%
        [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46%
        [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6%
        [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4%
        [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12%
        [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7%
        [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2%
        [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0%
        [libx264 @ 0x3450140] ref B L1: 95.0% 5.0%
        [libx264 @ 0x3450140] kb/s:626.88
        Received signal 2: terminating.

    One last note: if I remove the -r 30 from the input and output, it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
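
    One variant that may be worth testing: with an image sequence, the input rate is normally declared with the image2 demuxer's -framerate option, whereas an input-side -r asks FFmpeg to rate-convert the 25 fps it assumed for the JPEGs, which is exactly where dup/drop decisions get made. A sketch, assuming a build new enough to support -framerate:

        # Declare the JPEG sequence itself as 30 fps instead of rate-converting from the assumed 25 fps
        ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4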

    Read the article

  • libvirt upgrade caused VMs to not see drives (boot media not found)

    - by bias
    I upgraded to Ubuntu 12.04.1 and now libvirt (via OpenNebula) successfully runs VMs, but they aren't finding the 2 drives (specifically, the boot drive). One is "hd", the other is "cdrom". The machine boots but fails and displays something like "boot media not found hd" (this was in a VNC terminal and I didn't copy the output anywhere, so that's not the verbatim message). I tried constructing a new disk using the new version of qemu (via vmbuilder), and this new machine has the same problem as the old machine. In case it matters (I can't see why it would), I'm using OpenNebula to manage the machines. There's nothing relevant in any of the logs: syslog, libvirtd, oned. Which is to say, nothing interesting/anomalous is reported when the machine is brought up.

    Versions:

        libvirt 0.9.8-2ubuntu17.4
        qemu-kvm 1.0+noroms-0ubuntu14.3

    The relevant libvirt XML config portions:

        <os>
          <type arch='x86_64' machine='pc-1.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        ...
        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/var/lib/one//203/images/disk.0'/>
            <target dev='sda' bus='scsi'/>
            <alias name='scsi0-0-0'/>
            <address type='drive' controller='0' bus='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/var/lib/one//203/images/disk.1'/>
            <target dev='sdc' bus='scsi'/>
            <readonly/>
            <alias name='scsi0-0-2'/>
            <address type='drive' controller='0' bus='0' unit='2'/>
          </disk>
          <controller type='scsi' index='0'>
            <alias name='scsi0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </controller>
          <memballoon model='virtio'>
            <alias name='balloon0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </memballoon>
          ...
        </devices>

    The libvirt/qemu log contains:

        2012-11-25 22:19:24.328+0000: starting up
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-204 -uuid 4be6c276-19e8-bdc2-e9c9-9ca5352f2be3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-204.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device lsi,id=scsi0,bus=pci.0,addr=0x5 -drive file=/var/lib/one//204/images/disk.0,if=none,id=drive-scsi0-0-0,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0,bootindex=1 -drive file=/var/lib/one//204/images/disk.1,if=none,media=cdrom,id=drive-scsi0-0-2,readonly=on,format=raw -device scsi-disk,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2 -netdev tap,fd=18,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3 -netdev tap,fd=19,id=hostnet1 -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4 -usb -vnc 0.0.0.0:204 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
        kvm: -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
        kvm: -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
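
    One thing that may be worth ruling out is the emulated LSI SCSI controller itself: booting from the lsi adapter is a commonly reported casualty of qemu upgrades, and guests are often switched to virtio disks instead. A sketch of the disk element with the bus swapped; this is a hypothetical change to test, not a confirmed fix for this setup:

        <!-- sketch: boot disk moved from the lsi scsi controller to virtio -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/one//203/images/disk.0'/>
          <target dev='vda' bus='virtio'/>   <!-- was dev='sda' bus='scsi' -->
        </disk>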

    Read the article

  • MySQL reserves too much RAM

    - by Buddy
    I have a cheap VPS with 128Mb RAM and 256Mb burst. MySQL starts and reserves about 110Mb, but uses no more than 20Mb of it. My VPS control panel shows that I use 127Mb (I'm also running nginx and sphinx). I know that it shows reserved RAM, but when I go over 128Mb, my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks in my.cnf, but it didn't help much.

    top output:

        PID   USER   PR  NI  VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
        1     root   15   0  2156  668  572  S  0.0  0.3 0:00.03  init
        11311 root   15   0 11212  356  228  S  0.0  0.1 0:00.00  vzctl
        11312 root   18   0  3712 1484 1248  S  0.0  0.6 0:00.01  bash
        11347 root   18   0  2284  916  732  R  0.0  0.3 0:00.00  top
        13978 root   17  -4  2248  552  344  S  0.0  0.2 0:00.00  udevd
        14262 root   15   0  1812  564  472  S  0.0  0.2 0:00.03  syslogd
        14293 sphinx 15   0 11816 1172  672  S  0.0  0.4 0:00.07  searchd
        14305 root   25   0  7192 1036  636  S  0.0  0.4 0:00.00  sshd
        14321 root   25   0  2832  836  668  S  0.0  0.3 0:00.00  xinetd
        15389 root   18   0  3708 1300 1132  S  0.0  0.5 0:00.00  mysqld_safe
        15441 mysql  15   0  113m  16m 4440  S  0.0  6.4 0:00.15  mysqld
        15489 root   21   0 13056 1456  340  S  0.0  0.6 0:00.00  nginx
        15490 nginx  18   0 13328 2388  992  S  0.0  0.9 0:00.06  nginx
        15507 nginx  25   0 19520 5888 4244  S  0.0  2.2 0:00.00  php-cgi
        15508 nginx  18   0 19636 4876 2748  S  0.0  1.9 0:00.12  php-cgi
        15509 nginx  15   0 19668 4872 2716  S  0.0  1.9 0:00.11  php-cgi
        15518 root   18   0  4492 1116  568  S  0.0  0.4 0:00.01  crond

    MySQL tuner:

        >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        Please enter your MySQL administrative login: root
        Please enter your MySQL administrative password:
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.0.77
        [OK] Operating on 32-bit architecture with less than 2GB RAM
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 1M (Tables: 1)
        [OK] Total fragmented tables: 0
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
        [--] Reads / Writes: 100% / 0%
        [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
        [OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
        [OK] Slow queries: 0% (0/37)
        [OK] Highest usage of available connections: 1% (1/100)
        [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
        [OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
        [OK] Query cache prunes per day: 0
        [!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
        [!!] Thread cache is disabled
        [OK] Table cache hit rate: 57% (8 open / 14 opened)
        [OK] Open file limit used: 1% (12/1K)
        [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
        [!!] Connections aborted: 10%
        [OK] InnoDB data size / buffer pool: 1.5M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Set thread_cache_size to 4 as a starting value
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            tmp_table_size (> 32M)
            max_heap_table_size (> 16M)
            thread_cache_size (start at 4)

    I think if I do what MySQLTuner says, MySQL will use more RAM.
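
    Since the "maximum possible memory usage" figure is dominated by max_connections times the per-thread buffers, the usual levers on a tiny VPS are those two numbers plus the InnoDB pool; a sketch of my.cnf starting points sized from the tuner output above, not tested values:

        [mysqld]
        max_connections         = 20     # tuner shows a peak of 1 of 100 connections used
        innodb_buffer_pool_size = 8M     # only ~1.5M of InnoDB data exists
        key_buffer_size         = 128K   # matches the 64K of MyISAM indexes
        sort_buffer_size        = 256K   # per-thread; keep small with few connections
        read_buffer_size        = 128K
        thread_cache_size       = 4      # the tuner's own recommendation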

    Read the article

  • MySQL InnoDB disappeared, all InnoDB data can't be accessed

    - by dogmatic69
    MySQL (including InnoDB) was working fine; after a restart the other day, when MySQL starts it says in the logs:

        140604 23:36:07 [Note] Plugin 'FEDERATED' is disabled.
        140604 23:36:07 [Note] Plugin 'InnoDB' is disabled.

    In the app it says:

        SQLSTATE[42000]: Syntax error or access violation: 1286 Unknown storage engine 'InnoDB'

    Now, according to Google this is a very simple fix: just remove the ib_logfile[0|1] files. I have done that and it does not do anything. I started by making a full copy of the data dir for testing various 'fixes'. I have also uninstalled MySQL and reinstalled it with no change; I just can't get it to run with InnoDB working anymore :/

        # mysql --version
        mysql Ver 14.14 Distrib 5.5.37, for debian-linux-gnu (x86_64) using readline 6.3

    This is currently running on Ubuntu 12.04. I have also tried the innodb_force_recovery setting, 0 - 6. Any time I run a command on an InnoDB table it says innodb_force_recovery

    LOGS (from around the time it died). Was working here:

        Version: '5.5.37-0ubuntu0.14.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
        140530 1:24:22 [Note] /usr/sbin/mysqld: Normal shutdown
        140530 1:24:22 [Note] Event Scheduler: Purging the queue. 0 events
        140530 1:24:22 InnoDB: Starting shutdown...
        140530 1:24:24 InnoDB: Shutdown completed; log sequence number 3345857316
        140530 1:24:24 [Note] /usr/sbin/mysqld: Shutdown complete
        140530 22:03:12 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140530 22:03:12 [Note] Plugin 'FEDERATED' is disabled.
        140530 22:03:12 InnoDB: The InnoDB memory heap is disabled
        140530 22:03:12 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140530 22:03:12 InnoDB: Compressed tables use zlib 1.2.8
        140530 22:03:12 InnoDB: Using Linux native AIO
        140530 22:03:12 InnoDB: Initializing buffer pool, size = 128.0M
        140530 22:03:12 InnoDB: Completed initialization of buffer pool
        140530 22:03:12 InnoDB: highest supported file format is Barracuda.
        140530 22:03:15 InnoDB: Waiting for the background threads to start
        140530 22:03:16 InnoDB: 5.5.37 started; log sequence number 3345857316
        140530 22:03:16 [Note] Server hostname (bind-address): '192.168.1.20'; port: 3306
        140530 22:03:16 [Note]   - '192.168.1.20' resolves to '192.168.1.20';
        140530 22:03:16 [Note] Server socket created on IP: '192.168.1.20'.
        140530 22:03:16 [Note] Event Scheduler: Loaded 0 events
        140530 22:03:16 [Note] /usr/sbin/mysqld: ready for connections.
        140602 0:58:39 [Note] Event Scheduler: Purging the queue. 0 events
        140602 0:58:39 InnoDB: Starting shutdown...
        140602 0:58:41 InnoDB: Shutdown completed; log sequence number 3345954467
        140602 0:58:41 [Note] /usr/sbin/mysqld: Shutdown complete

    Does not work anymore:

        140602 21:45:19 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140602 21:45:19 [Note] Plugin 'FEDERATED' is disabled.
        140602 21:45:19 [Note] Plugin 'InnoDB' is disabled.
        140602 21:45:19 [Note] Server hostname (bind-address): '192.168.1.20'; port: 3306
        140602 21:45:19 [Note]   - '192.168.1.20' resolves to '192.168.1.20';
        140602 21:45:19 [Note] Server socket created on IP: '192.168.1.20'.
        140602 21:45:19 [Note] Event Scheduler: Loaded 0 events
        140602 21:45:19 [Note] /usr/sbin/mysqld: ready for connections.
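
    When mysqld logs "Plugin 'InnoDB' is disabled" with no InnoDB startup lines at all, something is usually disabling the plugin explicitly rather than InnoDB failing to initialize. A quick check worth running, assuming the stock Ubuntu config layout:

        # Look for anything that turns the plugin off in any config fragment
        grep -Rni "skip-innodb\|ignore-builtin-innodb\|innodb" /etc/mysql/
        # And check the error log Ubuntu's packaging actually writes to,
        # not just /var/log/mysql.err:
        tail -n 100 /var/log/mysql/error.log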

    Read the article

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
    I'm running into a few issues with our new database server. It is an HP G8 with 2 Intel Xeon E5-2650 processors and 32GB of RAM. This server is dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having issues with this server staying alive: I notice high CPU usage during certain times of day (8% ~ 1500%+) and see very low memory usage (7 ~ 15%) based on the 'top' command. When the CPU usage passes 1000%, that is when the app usually dies. I'm trying to see what I'm doing wrong with the config file; hopefully one of the experts can chime in and let me know what they think. See below for my.cnf:

        [mysqld]
        default-storage-engine=InnoDB
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        #user=mysql
        large-pages
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_connections=275
        tmp_table_size=1G
        key_buffer_size=384M
        key_buffer=384M
        thread_cache_size=1024
        long_query_time=5
        low_priority_updates=1
        max_heap_table_size=1G
        myisam_sort_buffer_size=8M
        concurrent_insert=2
        table_cache=1024
        sort_buffer_size=8M
        read_buffer_size=5M
        read_rnd_buffer_size=6M
        join_buffer_size=16M
        table_definition_cache=6k
        open_files_limit=8k
        slow_query_log
        #skip-name-resolve
        # Innodb Settings
        innodb_buffer_pool_size=18G
        innodb_thread_concurrency=0
        innodb_log_file_size=1G
        innodb_log_buffer_size=16M
        innodb_flush_log_at_trx_commit=2
        innodb_lock_wait_timeout=50
        innodb_file_per_table
        #innodb_buffer_pool_instances=4
        # eliminating double buffering
        innodb_flush_method = O_DIRECT
        flush_time=86400
        innodb_additional_mem_pool_size=40M
        #innodb_io_capacity = 5000
        #innodb_read_io_threads = 64
        #innodb_write_io_threads = 64
        # increase until threads_created doesnt grow anymore
        thread_cache=1024
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=256M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 0
        wait_timeout = 1800
        connect_timeout = 10
        interactive_timeout = 60

        [mysqldump]
        max_allowed_packet=32M

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        log-slow-queries=/var/log/mysql/slow-queries.log
        long_query_time = 1
        log-queries-not-using-indexes

    We connect to one database with 75 tables; the largest table has 1,150,000 entries and the second largest has 128,036 entries. I have also verified that our PHP queries are optimized as best as possible. Reference - MySQLTuner:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.69-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 420M (Tables: 75)
        [!!] Total fragmented tables: 75
        -------- Security Recommendations -------------------------------------------
        [!!] User '[email protected]' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M)
        [--] Reads / Writes: 68% / 32%
        [--] Total buffers: 19.7G global + 35.2M per thread (275 max threads)
        [!!] Maximum possible memory usage: 29.1G (93% of installed RAM)
        [OK] Slow queries: 0% (472/8M)
        [OK] Highest usage of available connections: 66% (183/275)
        [OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K
        [OK] Key buffer hit rate: 100.0% (173 cached / 0 reads)
        [OK] Query cache efficiency: 96.2% (7M cached / 7M selects)
        [!!] Query cache prunes per day: 553614
        [OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts)
        [!!] Temporary tables created on disk: 49% (3K on disk / 7K total)
        [OK] Thread cache hit rate: 74% (183 created / 705 connections)
        [OK] Table cache hit rate: 97% (231 open / 238 opened)
        [OK] Open file limit used: 0% (17/8K)
        [OK] Table locks acquired immediately: 100% (432K immediate / 432K locks)
        [OK] InnoDB data size / buffer pool: 420.9M/18.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 256M) [see warning above]

    Thanks in advance for your help!
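
    One way to see which of the flagged items actually bites during a spike is to sample the relevant counters while the CPU climbs; these are standard status variables, so sampling them is safe on the live server:

        -- query-cache churn (Qcache_lowmem_prunes growing fast means the 256M cache is thrashing)
        SHOW GLOBAL STATUS LIKE 'Qcache%';
        -- on-disk temporary table spill, the other [!!] item
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';
        -- thread churn relative to the thread cache
        SHOW GLOBAL STATUS LIKE 'Threads_%';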

    Read the article

  • MySQL crash. Unknown cause. Signal 11

    - by fortmac
    This is a database that I installed ~6 months ago and that had been running fine. It is currently running on Ubuntu 12.04. Attempting to connect to MySQL causes this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Then there's:

        $ sudo mysqld

    which returns:

        130702 15:38:54 [Note] Plugin 'FEDERATED' is disabled.
        130702 15:38:54 InnoDB: The InnoDB memory heap is disabled
        130702 15:38:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        130702 15:38:54 InnoDB: Compressed tables use zlib 1.2.3.4
        130702 15:38:54 InnoDB: Initializing buffer pool, size = 128.0M
        130702 15:38:54 InnoDB: Completed initialization of buffer pool
        130702 15:38:54 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        130702 15:38:54 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        130702 15:38:55 InnoDB: Waiting for the background threads to start
        130702 15:38:56 InnoDB: 1.1.8 started; log sequence number 5201901917
        130702 15:38:56 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
        130702 15:38:56 [Note] - '127.0.0.1' resolves to '127.0.0.1';
        130702 15:38:56 [Note] Server socket created on IP: '127.0.0.1'.
        130702 15:38:56 [Note] Event Scheduler: Loaded 0 events
        130702 15:38:56 [Note] mysqld: ready for connections.
        Version: '5.5.28-0ubuntu0.12.04.3'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        19:39:02 UTC - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x7f9509e51530
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f94f1d3de60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f95083427b9]
        mysqld(handle_fatal_signal+0x483)[0x7f9508209b43]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9506f5bcb0]
        mysqld(+0x320e1c)[0x7f9508113e1c]
        mysqld(_ZN4JOIN15alloc_func_listEv+0x9c)[0x7f950812391c]
        mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x918)[0x7f9508124658]
        mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x130)[0x7f950812d060]
        mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f9508132fbc]
        mysqld(+0x2f6714)[0x7f95080e9714]
        mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7f95080f1178]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f95080f5e0f]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f95080f7260]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f950819b80d]
        mysqld(handle_one_connection+0x50)[0x7f950819b870]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9506f53e9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9506684cbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f94e0004b80): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

    I'm at a loss. What other reports would be useful in diagnosing this? /var/log/mysql.err and /var/log/mysql.log are empty.
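    Since InnoDB recovers cleanly and the crash happens in the SELECT path (JOIN::prepare), one low-risk first step is to get a full dump out before debugging further by starting the server in forced-recovery mode; innodb_force_recovery is a real server option, though the level needed varies, so treat this as a sketch:

        # /etc/mysql/my.cnf -- temporary, remove again after dumping
        [mysqld]
        # start at 1 and raise only as far as needed; levels above 3 risk data loss
        innodb_force_recovery = 1

        # once mysqld stays up, take a full logical backup:
        $ mysqldump --all-databases > all-databases.sql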

    Read the article

  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, spec list:

        OS: Windows 7 Ultimate 64-bit SP1
        CPU: i7-4820k @ 3.7GHz (stock)
        GPU: Two 3GB Radeon HD 7970s @ 1.05GHz
        Mobo: AsRock X79 Extreme6
        HDD: 2TB Seagate Barracuda 7200rpm
        RAM: 16GB quad-channel Kingston 1600MHz
        PSU: Antec HCG 900W
        Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080. Plugged into different GPUs.

    My problem is that, on a daily-ish basis, my monitors will turn off and not turn back on. My computer will still be running, GPU/CPU/case fans all still going, but the monitors will not turn back on. Additionally, it seems to cease all network activity. It doesn't seem to log any errors at all. I've verified that this is not a monitor issue: when I press the num/caps/scroll lock buttons on my keyboard, the lights don't change, so the computer is clearly not accepting input.

    I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI-Express Link State Power Management, but the issue still occurs for me after this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs at 70°C and 78°C average. All components are brand new. I have tried forcing MSI Afterburner to start when Windows starts and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this working. Many people have said to adjust display sleep mode settings, but this will clearly not work, as the keyboard lights would still work if the monitors were the issue.

    The closest I can get to a log file for this issue is the following Folding@Home log:

        14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%)
        14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%)
        14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%)
        14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%)
        14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%)

    As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer has a network failure at the time the first GPU stops working; the latest IRC message I received from this time was at 14:47:58. That being said, there could have just not been any messages between then and 14:50:00, so I'm going to be connecting a laptop to the same bouncer to double-check if it happens again.

    The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident that they aren't the issue, which means that this is being caused by either software or the motherboard, or possibly RAM. I really hope it's software. I heard from a forum board that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help. Haven't seen it mentioned by anyone else on about a dozen threads about this issue either. The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that that will solve the issue.
Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened when I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of me last using the computer actively. I have enabled file logging from MSI Afterburner, so hopefully this will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?

    Read the article

  • pre-commit hook in svn: could not be translated from the native locale to UTF-8

    - by Alexandre Moraes
    Hi everybody, I have a problem with my pre-commit hook. This hook tests whether a file is locked when the user commits. When a bad condition happens, it should output that another user is locking the file or, if nobody is locking it, it should show a "you are not locking this file" message with the file's name. The error happens when the file's name has some Latin character like "ç", and TortoiseSVN shows me this in the output:

        Commit failed (details follow):
        Commit blocked by pre-commit hook (exit code 1) with output:
        [Error output could not be translated from the native locale to UTF-8.]

    Do you know how I can solve this? Thanks, Alexandre

    My shell script is here:

        #!/bin/sh
        REPOS="$1"
        TXN="$2"
        export LANG="en_US.UTF-8"
        /app/svn/hooks/ensure-has-need-lock.pl "$REPOS" "$TXN"
        if [ $? -ne 0 ]; then exit 1; fi
        exit 0

    And my Perl is here:

        #!/usr/bin/env perl

        # Turn on warnings the best way depending on the Perl version.
        BEGIN {
          if ( $] >= 5.006_000) {
            require warnings;
            import warnings;
          } else {
            $^W = 1;
          }
        }

        use strict;
        use Carp;

        &usage unless @ARGV == 2;
        my $repos = shift;
        my $txn = shift;
        my $svnlook = "/usr/local/bin/svnlook";
        my $user;

        my $ok = 1;
        foreach my $program ($svnlook) {
          if (-e $program) {
            unless (-x $program) {
              warn "$0: required program `$program' is not executable, ",
                   "edit $0.\n";
              $ok = 0;
            }
          } else {
            warn "$0: required program `$program' does not exist, edit $0.\n";
            $ok = 0;
          }
        }
        exit 1 unless $ok;

        unless (-e $repos) {
          &usage("$0: repository directory `$repos' does not exist.");
        }
        unless (-d $repos) {
          &usage("$0: repository directory `$repos' is not a directory.");
        }

        foreach my $user_tmp (&read_from_process($svnlook, 'author', $repos, '-t', $txn)) {
          $user = $user_tmp;
        }

        my @errors;
        foreach my $transaction (&read_from_process($svnlook, 'changed', $repos, '-t', $txn)) {
          if ($transaction =~ /^U. (.*[^\/])$/) {
            my $file = $1;
            my $err = 0;
            foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)) {
              $err = 1;
              if ($locks =~ /Owner: (.*)/) {
                if ($1 != $user) {
                  push @errors, "$file : You are not locking this file!";
                }
              }
            }
            if ($err == 0) {
              push @errors, "$file : You are not locking this file!";
            }
          } elsif ($transaction =~ /^D. (.*[^\/])$/) {
            my $file = $1;
            my $tchan = &read_from_process($svnlook, 'lock', $repos, $file);
            foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)) {
              push @errors, "$1 : cannot delete locked Files";
            }
          } elsif ($transaction =~ /^A. (.*[^\/])$/) {
            my $needs_lock;
            my $path = $1;
            foreach my $prop (&read_from_process($svnlook, 'proplist', $repos, '-t', $txn, '--verbose', $path)) {
              if ($prop =~ /^\s*svn:needs-lock : (\S+)/) {
                $needs_lock = $1;
              }
            }
            if (not $needs_lock) {
              push @errors, "$path : svn:needs-lock is not set. Pleas ask TCC for support.";
            }
          }
        }

        if (@errors) {
          warn "$0:\n\n", join("\n", @errors), "\n\n";
          exit 1;
        } else {
          exit 0;
        }

        sub usage {
          warn "@_\n" if @_;
          die "usage: $0 REPOS TXN-NAME\n";
        }

        sub safe_read_from_pipe {
          unless (@_) {
            croak "$0: safe_read_from_pipe passed no arguments.\n";
          }
          print "Running @_\n";
          my $pid = open(SAFE_READ, '-|');
          unless (defined $pid) {
            die "$0: cannot fork: $!\n";
          }
          unless ($pid) {
            open(STDERR, ">&STDOUT") or die "$0: cannot dup STDOUT: $!\n";
            exec(@_) or die "$0: cannot exec `@_': $!\n";
          }
          my @output;
          while (<SAFE_READ>) {
            chomp;
            push(@output, $_);
          }
          close(SAFE_READ);
          my $result = $?;
          my $exit = $result >> 8;
          my $signal = $result & 127;
          my $cd = $result & 128 ? "with core dump" : "";
          if ($signal or $cd) {
            warn "$0: pipe from `@_' failed $cd: exit=$exit signal=$signal\n";
          }
          if (wantarray) {
            return ($result, @output);
          } else {
            return $result;
          }
        }

        sub read_from_process {
          unless (@_) {
            croak "$0: read_from_process passed no arguments.\n";
          }
          my ($status, @output) = &safe_read_from_pipe(@_);
          if ($status) {
            if (@output) {
              die "$0: `@_' failed with this output:\n", join("\n", @output), "\n";
            } else {
              die "$0: `@_' failed with no output.\n";
            }
          } else {
            return @output;
          }
        }

    Read the article

  • Java Process "The pipe has been ended" problem

    - by Amit Kumar
    I am using the Java Process API to write a class that receives binary input from the network (say via TCP port A), processes it, and writes binary output to the network (say via TCP port B). I am using Windows XP. The code looks like this. There are two functions, run() and receive(): run is called once at the start, while receive is called whenever new input arrives via the network. run and receive are called from different threads. run starts an exe and obtains the input and output streams of the exe. run also starts a new thread to write output from the exe onto port B.

        public void run() {
            try {
                Process prc = // some exe is `start`ed using ProcessBuilder
                OutputStream procStdIn = new BufferedOutputStream(prc.getOutputStream());
                InputStream procStdOut = new BufferedInputStream(prc.getInputStream());
                Thread t = new Thread(new ProcStdOutputToPort(procStdOut));
                t.start();
                prc.waitFor();
                t.join();
                procStdIn.close();
                procStdOut.close();
            } catch (Exception e) {
                e.printStackTrace();
                printError("Error : " + e.getMessage());
            }
        }

    receive forwards the input received on port A to the exe:

        public void receive(byte[] b) throws Exception {
            procStdIn.write(b);
        }

        class ProcStdOutputToPort implements Runnable {

            private BufferedInputStream bis;

            public ProcStdOutputToPort(BufferedInputStream bis) {
                this.bis = bis;
            }

            public void run() {
                try {
                    int bytesRead;
                    int bufLen = 1024;
                    byte[] buffer = new byte[bufLen];
                    while ((bytesRead = bis.read(buffer)) != -1) {
                        // write output to the network
                    }
                } catch (IOException ex) {
                    Logger.getLogger().log(Level.SEVERE, null, ex);
                }
            }
        }

    The problem is that I am getting the following stack trace inside receive(), and prc.waitFor() returns immediately afterwards. The line number shows that the exception is thrown while writing to the exe:

        The pipe has been ended
        java.io.IOException: The pipe has been ended
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:260)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
            at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
            at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
            at xxx.receive(xxx.java:86)

    Any advice about this will be appreciated.
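    "The pipe has been ended" on write means the child process has already exited; a common reason is an undrained stderr filling its pipe buffer and blocking the exe until it dies. A sketch of merging stderr into stdout and checking the exit code so the failure becomes visible (ProcessBuilder.redirectErrorStream is a real API; the exe name is a placeholder):

        // merge stderr into stdout so the child can never block on a full
        // stderr pipe, and its error output reaches ProcStdOutputToPort
        ProcessBuilder pb = new ProcessBuilder("some.exe", "arg");
        pb.redirectErrorStream(true);
        Process prc = pb.start();

        // after waitFor(), a non-zero code usually explains the broken pipe
        int exit = prc.waitFor();
        if (exit != 0) {
            System.err.println("child exited with code " + exit);
        }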

    Read the article

  • Android GPS timeout

    - by Scott Saunders
    I'm writing an Android application that, among other things, needs to send the current GPS coordinates to a server when the user tells it to. From a context menu, I run the service below. The service is a LocationListener and requests updates from the LocationManager. When it gets a location (onLocationChanged()), it removes itself as a listener and sends the coordinates off to the server. All of this is working. However, if GPS coordinates are not quickly available, my service just keeps running until it gets some. It holds up the UI with a progress dialog, which is annoying. Worse, if the user has moved since starting the service, the first GPS coordinates might be wrong and the app will send bad data to the server. I need a timeout on the service. Is there a good way to do that? I'm not very experienced with threads. I think I can run a Runnable in the onStartCommand() method that will somehow count down 30 seconds and then, if there is no GPS result yet, call my service's stop() method. Does that sound like the best way to do this? Alternatively, is it possible to tell if the GPS cannot get a fix? How would I go about doing that? Edit: To further clarify, I'm looking for the best way to "give up" on getting a Location after some amount of time.

        public class AddCurrentLocation extends Service implements LocationListener {

            Application app;
            LocationManager mLocManager;
            ProgressDialog mDialog;

            @Override
            public int onStartCommand(Intent intent, int arg0, int arg1) {
                app = getApplication();
                // show progress dialog
                if (app.getScreen() != null) {
                    mDialog = ProgressDialog.show(app.getScreen(), "",
                            "Adding Location. Please wait...", true);
                }
                // find GPS service and start listening
                Criteria criteria = new Criteria();
                criteria.setAccuracy(Criteria.ACCURACY_FINE);
                mLocManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
                String bestProvider = mLocManager.getBestProvider(criteria, true);
                mLocManager.requestLocationUpdates(bestProvider, 2000, 0, this);
                return START_NOT_STICKY;
            }

            private void stop() {
                mLocManager.removeUpdates(this);
                if (mDialog != null) {
                    mDialog.dismiss();
                }
                stopSelf();
            }

            @Override
            public void onLocationChanged(Location location) {
                // done with GPS, stop listening
                mLocManager.removeUpdates(this);
                sendLocation(location); // method to send info to server
                stop();
            }

            // other required methods and sendLocation() ...
        }
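    A sketch of the timeout idea with a Handler rather than a raw thread: postDelayed runs the callback on the main thread, so it may safely touch the dialog, and removeCallbacks cancels it once a fix arrives (the 30-second figure is arbitrary; stop() is the method above):

        private final Handler mTimeoutHandler = new Handler();
        private final Runnable mTimeout = new Runnable() {
            public void run() {
                // no fix within 30s: give up, dismiss the dialog, stop the service
                stop();
            }
        };

        // in onStartCommand(), right after requestLocationUpdates(...):
        mTimeoutHandler.postDelayed(mTimeout, 30 * 1000);

        // in onLocationChanged(), before using the fix:
        mTimeoutHandler.removeCallbacks(mTimeout);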

    Read the article

  • iPhone / Objective-C: NSMutableArray writeToFile won't write to file. Always returns NO

    - by Joel
    I'm trying to serialize two NSMutableArrays of NSObjects that implement the NSCoding protocol. However, it works for one (stacks) and not the other (cards). I have the following block of code:

        -(void) saveCards {
            NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString* documentsDirectory = [paths objectAtIndex:0];
            NSString* cardsFile = [documentsDirectory stringByAppendingPathComponent:@"cards.state"];
            NSString* stacksFile = [documentsDirectory stringByAppendingPathComponent:@"stacks.state"];
            BOOL c = [rootStack.cards writeToFile:cardsFile atomically:YES];
            BOOL s = [rootStack.stacks writeToFile:stacksFile atomically:YES];
        }

    I step through this method using the debugger, and after the last two lines of code run, I check the values of the two BOOLs. BOOL c is NO and BOOL s is YES. The stacks array is actually empty (which is probably why it works). The cards array has contents. Why is the array with contents failing? I can't figure this out. I've looked through numerous threads on SOF; each of them says the problem is that the protection level of the files being written prevented writing. This is not my problem, as I'm writing to the Documents folder. I've double- and triple-checked that neither rootStack.cards nor rootStack.stacks is nil. And I've checked that cards does indeed have content. Here are the coder methods for my Notecard class (I added all the if statements as part of trying to solve this problem, to make sure trying to encode nil values doesn't break something):

        -(void) encodeWithCoder:(NSCoder *)encoder {
            if(text) [encoder encodeObject:text forKey:@"text"];
            if(backText) [encoder encodeObject:backText forKey:@"backText"];
            if(x) [encoder encodeObject:x forKey:@"x"];
            if(y) [encoder encodeObject:y forKey:@"y"];
            if(width) [encoder encodeObject:width forKey:@"width"];
            if(height) [encoder encodeObject:height forKey:@"height"];
            if(timeCreated) [encoder encodeObject:timeCreated forKey:@"timeCreated"];
            if(audioManagerTicket) [encoder encodeObject:audioManagerTicket forKey:@"audioManagerTicket"];
            if(backgroundColor) [encoder encodeObject:backgroundColor forKey:@"backgroundColor"];
        }

        -(id) initWithCoder:(NSCoder *)decoder {
            self = [super init];
            if(!self) return nil;
            self.text = [decoder decodeObjectForKey:@"text"];
            self.backText = [decoder decodeObjectForKey:@"backText"];
            self.x = [decoder decodeObjectForKey:@"x"];
            self.y = [decoder decodeObjectForKey:@"y"];
            self.width = [decoder decodeObjectForKey:@"width"];
            self.height = [decoder decodeObjectForKey:@"height"];
            self.timeCreated = [decoder decodeObjectForKey:@"timeCreated"];
            self.audioManagerTicket = [decoder decodeObjectForKey:@"audioManagerTicket"];
            self.backgroundColor = [decoder decodeObjectForKey:@"backgroundColor"];
            return self;
        }

    Each field is either an NSString, NSNumber, or UIColor. Thanks for any help.
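    writeToFile:atomically: only succeeds for arrays of property-list objects (NSString, NSNumber, NSDate, NSData, NSArray, NSDictionary), which is exactly why the empty stacks array "works" while the array of custom cards fails; NSCoding objects have to go through an archiver instead. A sketch using NSKeyedArchiver, which drives the encodeWithCoder:/initWithCoder: methods above:

        // archiving serializes custom NSCoding objects; writeToFile: cannot
        BOOL c = [NSKeyedArchiver archiveRootObject:rootStack.cards
                                             toFile:cardsFile];
        BOOL s = [NSKeyedArchiver archiveRootObject:rootStack.stacks
                                             toFile:stacksFile];

        // and reading them back in (take a mutableCopy if mutability is needed):
        NSArray *cards = [NSKeyedUnarchiver unarchiveObjectWithFile:cardsFile];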

    Read the article

  • Android: OutOfMemoryError: bitmap size exceeds VM budget with no reason I can see.

    - by Meymann
    Hi. I am having an OutOfMemory exception with a gallery of 600x800-pixel JPEGs.

    The environment: I have been using Gallery with JPG images around 600x800 pixels. Since my content may be a bit more complex than just images, I have set each view to be a RelativeLayout that wraps an ImageView with the JPG. In order to "speed up" the user experience I have a simple cache of 4 slots that prefetches (in a looper) about 1 image to the left and 1 image to the right of the displayed image and keeps them in a 4-slot HashMap.

    The platform: I am using an AVD with 256 RAM and 128 heap size, with a 600x800 screen. It also happens on an Entourage Edge target, except that with the device it's harder to debug.

    The problem: I have been getting an exception:

        OutOfMemoryError: bitmap size exceeds VM budget

    It happens when fetching the fifth image. I have tried to change the size of my image cache, and it is still the same.

    The strange thing: there should not be a memory problem. In order to make sure the heap limit is very far away from what I need, I defined a dummy 8MB array in the beginning and left it unreferenced so it's immediately dispatched. It is a member of the activity thread and is defined as follows:

        static {
            @SuppressWarnings("unused")
            byte dummy[] = new byte[ 8*1024*1024 ];
        }

    The result is that the heap size is nearly 11MB and it's all free. Note: I added that trick after it began to crash. It makes OutOfMemory less frequent. Now, I am using DDMS. Just before the crash (it does not change much after the crash), DDMS shows:

        ID   Heap Size   Allocated   Free       %Used    #Objects
        1    11.195 MB   2.428 MB    8.767 MB   21.69%   47,156

    And in the detail table it shows:

        Type   Count   Total Size   Smallest   Largest   Median   Average
        free   1,536   8.739MB      16B        7.750MB   24B      5.825KB

    The largest block is 7.7MB. And yet the LogCat says:

        ERROR/dalvikvm-heap(1923): 925200-byte external allocation too large for this process.

    If you mind the relation of the median and the average, it is plausible to assume that most of the available blocks are very small. However, there is a block large enough for the bitmap: it's 7.7M. How come it is still not enough? Note: I recorded a heap trace. When looking at the amount of data allocated, it does not feel like more than 2M is allocated, which matches the free memory reported by DDMS. Could it be that I am experiencing some problem like heap fragmentation? How do I solve/work around the problem? Is the heap shared between all threads? Could it be that I am interpreting the DDMS readout in a wrong way, and there is really no 900K block to allocate? If so, can anybody please tell me where I can see that? Thanks a lot, Meymann
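    The "external allocation" wording in the LogCat line is the key: on 1.5-era Android the bitmap pixels live in a native heap that DDMS's Dalvik numbers do not show, so the 7.7M free block is irrelevant to the 925,200-byte decode. A common mitigation is to decode subsampled and to recycle evicted cache entries; BitmapFactory.Options and Bitmap.recycle() are real APIs, while the factor of 2 and the variable names are illustrative:

        // decode at half resolution: 600x800 becomes 300x400, roughly a
        // quarter of the external pixel allocation per image
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 2; // illustrative factor
        Bitmap b = BitmapFactory.decodeFile(imagePath, opts);
        imageView.setImageBitmap(b);

        // when the 4-slot cache evicts an entry, free the native pixels
        // immediately instead of waiting for the finalizer:
        evicted.recycle();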

    Read the article

  • Quartz scheduler is failing to start the cron job

    - by Amit
    Hi, I am using the Quartz scheduler to trigger a cron which needs to perform a host of activities. My code is as follows. In the init() method of my InitServlet class, I define my TimerServer:

        public class InitServlet extends HttpServlet {
            public void init(ServletConfig config) throws ServletException {
                try {
                    System.out.println("Starting the CRON");
                    // Set the DSO Handler CRON
                    TimerServer task = TimerServer.getInstance();
                    task.setTask();
                } catch (Exception ex) {
                    System.out.println("Failed to start the cron");
                    ex.printStackTrace();
                }
            }
        }

    In my TimerServer class I have the following methods:

        public void setTask() {
            try {
                this.setSubscriptionDailyJob();
            } catch (SchedulerException ex) {
                log.error("SchedulerException: " + ex.getMessage(), ex);
            }
        }

        private void setSubscriptionDailyJob() throws SchedulerException {
            log.info("Step 1 ");
            Scheduler scheduler = schedulerFactory.getScheduler();
            log.info("Step 2 ");
            JobDetail subscriptionJob = new JobDetail("subscription",
                    "subscriptiongroup", SubscriptionDaily.class);
            log.info("Step 3 ");
            // Initiate CronTrigger with its name and group name
            CronTrigger subscriptionCronTrigger =
                    new CronTrigger("subscriptionCronTrigger", "subscriptionTriggerGroup");
            try {
                log.info("Subscription cron: " + Constants.SUBSCRIPTION_CRON);
                // setup CronExpression
                CronExpression cexp = new CronExpression(Constants.SUBSCRIPTION_CRON);
                // Assign the CronExpression to CronTrigger
                subscriptionCronTrigger.setCronExpression(cexp);
            } catch (Exception ex) {
                log.warn("Exception: " + ex.getMessage(), ex);
            }
            scheduler.scheduleJob(subscriptionJob, subscriptionCronTrigger);
            scheduler.start();
        }

    In my SubscriptionDaily class:

        public class SubscriptionDaily implements Job {
            public void execute(JobExecutionContext arg0) throws JobExecutionException {
                // Actions to be performed
            }
        }

    Now, checking my logs, I am getting Step 1 and Step 2 but nothing further. My code is getting stuck in the TimerServer class itself. The scheduler-related logs are:

        17:24:43 INFO [TimerServer]: Step 1
        17:24:43 INFO [SimpleThreadPool]: Job execution threads will use class loader of thread: http-8080-1
        17:24:43 INFO [SchedulerSignalerImpl]: Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
        17:24:43 INFO [QuartzScheduler]: Quartz Scheduler v.1.6.5 created.
        17:24:43 INFO [RAMJobStore]: RAMJobStore initialized.
        17:24:43 INFO [StdSchedulerFactory]: Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
        17:24:43 INFO [StdSchedulerFactory]: Quartz scheduler version: 1.6.5
        17:24:43 INFO [TimerServer]: Step 2

    I think a log entry is missing:

        [QuartzScheduler]: Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.

    Please help.
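    A minimal, self-contained Quartz 1.6-style smoke test may help show whether the hang is in Quartz or in the surrounding servlet plumbing; run it from a plain main() declared to throw Exception (the cron expression here is a placeholder, and the job/trigger names mirror the code above):

        // Quartz 1.x API sketch; "0 0 2 * * ?" means 02:00 every day
        Scheduler scheduler = new org.quartz.impl.StdSchedulerFactory().getScheduler();
        scheduler.start(); // starting first makes a missing "started" log line obvious

        JobDetail job = new JobDetail("subscription", "subscriptiongroup",
                SubscriptionDaily.class);
        CronTrigger trigger = new CronTrigger("subscriptionCronTrigger",
                "subscriptionTriggerGroup", "0 0 2 * * ?");
        scheduler.scheduleJob(job, trigger);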

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am; this is, however, what I think I know.

    A request typically times out after 120 seconds (this is configurable), and eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens, all threads are aborted and the app restarts. I'm however not sure how aggressive it is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a thread, because it doesn't know anything about the unit of work of a thread.

    If you had to create a programming model that could easily and reliably run a long-lived task, one that might have to run for days, how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue.

    I've been thinking along the lines of hosting a WCF service in a Win32 service and talking to that service through WCF. This is however not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also the issue of input: how would I feed this service with data if I had a large data set and needed to chew through it?

    What I typically do right now is this:

        SELECT TOP 10 *
        FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE WorkCompleted IS NULL

    It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might all be fine, but as I understand it there's no guarantee that that won't happen...

    I know there have been similar questions on SO before, but none really ends with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill-equipped to handle long-running work. Please share your thoughts.
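    For the processed-twice worry, the claim itself can be made atomic so that a crashed worker leaves a visible claim rather than an ambiguous state; a T-SQL sketch, assuming the table gains ClaimedBy/ClaimedAt columns (hypothetical names):

        -- claim up to 10 rows and return them in one atomic statement
        UPDATE TOP (10) WorkItem WITH (ROWLOCK, READPAST)
        SET    ClaimedBy = @WorkerId,        -- hypothetical columns
               ClaimedAt = GETUTCDATE()
        OUTPUT inserted.*
        WHERE  WorkCompleted IS NULL
          AND  ClaimedBy IS NULL;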

    Read the article

  • [android] MediaRecorder prepare() causes segfault

    - by dwilde1
    Folks, I have a situation where my MediaRecorder instance causes a segfault. I'm working with an HTC Hero, Android 1.5+ APIs. I've tried all variations, including 3gpp and H.263, and reducing the video resolution to 320x240. What am I missing? The state machine causes 4 MediaPlayer beeps and then turns on the video camera. Here's the pertinent source.

    UPDATE: ADDING SURFACE CREATE INFO. I have rebooted the device, based on a previous answer to a similar question.

    UPDATE 2: I seem to be following the MediaRecorder state machine perfectly, and if I trap out the MR code, the blank surface displays perfectly and everything else functions perfectly. I can record videos manually and play them back via MediaPlayer in my code, so there should be nothing wrong with the underlying code. I've copied sample code for the surface and surfaceHolder code. I've looked at the MR instance in the Debug perspective in Eclipse and see that all (known) variables seem to be instantiated correctly. The setter calls are all now implemented in the exact order specced in the state diagram.

        // in activity class definition
        protected MediaPlayer mPlayer;
        protected MediaRecorder mRecorder;
        protected boolean inCapture = false;
        protected int phaseCapture = 0;
        protected int durCapturePhase = INF;
        protected SurfaceView surface;
        protected SurfaceHolder surfaceHolder;

        // in onCreate()
        // panelPreview is an empty LinearLayout
        surface = new SurfaceView(getApplicationContext());
        surfaceHolder = surface.getHolder();
        surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        panelPreview.addView(surface);

        // in timer handler runnable
        if (mRecorder == null) mRecorder = new MediaRecorder();
        mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        mRecorder.setOutputFile(path + "/" + vlip);
        mRecorder.setVideoSize(320, 240);
        mRecorder.setVideoFrameRate(15);
        mRecorder.setPreviewDisplay(surfaceHolder.getSurface());
        panelPreview.setVisibility(LinearLayout.VISIBLE);
        mRecorder.prepare();
        mRecorder.start();

    Here is a complete log trace for the process run and crash:

        I/ActivityManager( 80): Start proc com.ejf.convince.jenplus for activity com.ejf.convince.jenplus/.JenPLUS: pid=17738 uid=10075 gids={1006, 3003}
        I/jdwp (17738): received file descriptor 10 from ADB
        W/System.err(17738): Can't dispatch DDM chunk 46454154: no handler defined
        W/System.err(17738): Can't dispatch DDM chunk 4d505251: no handler defined
        I/WindowManager( 80): Screen status=true, current orientation=-1, SensorEnabled=false
        I/WindowManager( 80): needSensorRunningLp, mCurrentAppOrientation =-1
        I/WindowManager( 80): Enabling listeners
        W/ActivityThread(17738): Application com.ejf.convince.jenplus is waiting for the debugger on port 8100...
        I/System.out(17738): Sending WAIT chunk
        I/dalvikvm(17738): Debugger is active
        I/AlertDialog( 80): [onCreate] auto launch SIP.
        I/WindowManager( 80): onOrientationChanged, rotation changed to 0
        I/System.out(17738): Debugger has connected
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): waiting for debugger to settle...
        I/System.out(17738): debugger has settled (1370)
        I/ActivityManager( 80): Displayed activity com.ejf.convince.jenplus/.JenPLUS: 5186 ms
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/AudioHardwareMSM72XX( 2696): AUDIO_START: start kernel pcm_out driver.
        W/AudioFlinger( 2696): write blocked for 96 msecs
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        I/OpenCore( 2696): [Hank debug] LN 289 FN CreateNode
        I/PlayerDriver( 2696): CIQ 1625 sendEvent state=5
        W/AuthorDriver( 2696): Intended width(640) exceeds the max allowed width(352). Max width is used instead.
        W/AuthorDriver( 2696): Intended height(480) exceeds the max allowed height(288). Max height is used instead.
        I/AudioHardwareMSM72XX( 2696): AudioHardware pcm playback is going to standby.
        I/DEBUG (16094): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        I/DEBUG (16094): Build fingerprint: 'sprint/htc_heroc/heroc/heroc: 1.5/CUPCAKE/85027:user/release-keys'
        I/DEBUG (16094): pid: 17738, tid: 17738 com.ejf.convince.jenplus

    Thanks in advance! -- Don Wilde http://www.ConvinceProject.com
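    Two things stand out against the documented MediaRecorder state machine: no video encoder is ever set even though a video source is, and prepare() runs from a timer with no guarantee the preview surface exists yet. A reordered sketch with both changes; H.263 is chosen only because 1.5 supports it, and this is an assumption to test, not a confirmed fix:

        mRecorder = new MediaRecorder();
        mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        mRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263); // was missing
        mRecorder.setVideoSize(320, 240);
        mRecorder.setVideoFrameRate(15);
        mRecorder.setOutputFile(path + "/" + vlip);
        mRecorder.setPreviewDisplay(surfaceHolder.getSurface());

        // call prepare()/start() from SurfaceHolder.Callback.surfaceCreated()
        // (or later), so the preview surface is guaranteed to be live
        mRecorder.prepare();
        mRecorder.start();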

    Read the article

  • UIViewAnimation done by a UIViewController belonging to a UINavigationController?

    - by RickiG
    Hi, I have a UINavigationController which the user navigates with. When pushing a specific UIViewController onto the navigation stack, a "settings" button appears in the navigationBar. When the user clicks this button I would like to flip the current view/controller, i.e. everything on screen, including the navigationBar, over to a settings view. So I have a SettingsViewController which I would like to flip to from my CurrentViewController that lives on a navigationController stack. I get all kinds of strange behavior trying to do this: the UIViews belonging to the SettingsViewController start to animate, sliding into place, the navigation buttons move around, and nothing acts as I would expect.

        -(void)settingsHandler {
            SettingViewController *settingsView = [[SettingViewController alloc] init];
            [UIView beginAnimations:nil context:nil];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight
                                   forView:self.navigationController.view cache:YES];
            [self.navigationController.view addSubview:settingsView.view];
            [UIView commitAnimations];
        }

    The above results in the views flipping correctly, but the subviews of the SettingsViewController are all positioned at (0, 0) and 'snap' into place after the transition. Is it because I instantiate and add my subviews in viewDidLoad, like this?

        - (void)viewDidLoad {
            UIImageView *imageBg = [[UIImageView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 320.0f, 460.0f)];
            [imageBg setImage:[UIImage imageNamed:@"background.png"]];
            [self.view addSubview:imageBg];
            [imageBg release];

            SettingsSubview *switchView = [[SettingsSubview alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 320.0f, 460.0f)];
            [self.view addSubview:switchView];
            [switchView release];

            [super viewDidLoad];
        }

    1: How should I correctly do the flip transition from the UIViewController in the UINavigationController to a new UIViewController, and subsequently from the new UIViewController back to the original UIViewController residing on the UINavigationController's stack?

    2: Should I use a different approach, other than the viewDidLoad method, when instantiating and adding subviews to a UIViewController? Question 2 is more of a "best practice" thing. I have seen different ways of doing it, and I am having trouble either finding or understanding the life-cycle documentation and the different threads and posts on the subject. I am missing the "best practice" examples. Thank you very much for any help given :)
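    For question 1, the era-appropriate way to flip everything, navigationBar included, without hand-animating the navigation controller's own view is a modal presentation with a flip transition; modalTransitionStyle and presentModalViewController:animated: are real UIKit APIs of this period, and the rest is a sketch:

        -(void)settingsHandler {
            SettingViewController *settingsView = [[SettingViewController alloc] init];
            settingsView.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
            // the navigation controller presents, so the bar flips too;
            // dismiss later with dismissModalViewControllerAnimated:
            [self.navigationController presentModalViewController:settingsView
                                                         animated:YES];
            [settingsView release];
        }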

    Read the article

  • Redhat | Openssl installation error

    - by MMRUser
        make -f objs/Makefile
        make[1]: Entering directory `/root/fuse-ssh/nginx-0.7.65'
        cd /usr/bin/openssl \
        && make clean \
        && ./config --prefix=/usr/bin/openssl/.openssl no-shared no-threads \
        && make \
        && make install
        /bin/sh: line 0: cd: /usr/bin/openssl: Not a directory
        make[1]: *** [/usr/bin/openssl/.openssl/include/openssl/ssl.h] Error 1
        make[1]: Leaving directory `/root/fuse-ssh/nginx-0.7.65'
        make: *** [build] Error 2

    Where is the actual location of OpenSSL? There are several different places on my system. How do I solve this issue?

        rpm -ql openssl
        /usr/bin/openssl
        /usr/lib64/openssl
        /usr/lib64/openssl/engines
        /usr/lib64/openssl/engines/lib4758cca.so
        /usr/lib64/openssl/engines/libaep.so
        /usr/lib64/openssl/engines/libatalla.so
        /usr/lib64/openssl/engines/libchil.so
        /usr/lib64/openssl/engines/libcswift.so
        /usr/lib64/openssl/engines/libgmp.so
        /usr/lib64/openssl/engines/libnuron.so
        /usr/lib64/openssl/engines/libsureware.so
        /usr/lib64/openssl/engines/libubsec.so
        /usr/share/doc/openssl-0.9.8e
        /usr/share/doc/openssl-0.9.8e/CHANGES
        /usr/share/doc/openssl-0.9.8e/FAQ
        /usr/share/doc/openssl-0.9.8e/INSTALL
        /usr/share/doc/openssl-0.9.8e/LICENSE
        /usr/share/doc/openssl-0.9.8e/NEWS
        /usr/share/doc/openssl-0.9.8e/README
        /usr/share/doc/openssl-0.9.8e/README.FIPS
        /usr/share/doc/openssl-0.9.8e/c-indentation.el
        /usr/share/doc/openssl-0.9.8e/openssl.txt
        /usr/share/doc/openssl-0.9.8e/openssl_button.gif
        /usr/share/doc/openssl-0.9.8e/openssl_button.html
        /usr/share/doc/openssl-0.9.8e/ssleay.txt
        /usr/bin/openssl
        /usr/lib/openssl
        /usr/lib/openssl/engines
        /usr/lib/openssl/engines/lib4758cca.so
        /usr/lib/openssl/engines/libaep.so
        /usr/lib/openssl/engines/libatalla.so
        /usr/lib/openssl/engines/libchil.so
        /usr/lib/openssl/engines/libcswift.so
        /usr/lib/openssl/engines/libgmp.so
        /usr/lib/openssl/engines/libnuron.so
        /usr/lib/openssl/engines/libsureware.so
        /usr/lib/openssl/engines/libubsec.so
        /usr/share/doc/openssl-0.9.8e
        /usr/share/doc/openssl-0.9.8e/CHANGES
        /usr/share/doc/openssl-0.9.8e/FAQ
        /usr/share/doc/openssl-0.9.8e/INSTALL
        /usr/share/doc/openssl-0.9.8e/LICENSE
        /usr/share/doc/openssl-0.9.8e/NEWS
        /usr/share/doc/openssl-0.9.8e/README
        /usr/share/doc/openssl-0.9.8e/README.FIPS
        /usr/share/doc/openssl-0.9.8e/c-indentation.el
        /usr/share/doc/openssl-0.9.8e/openssl.txt
        /usr/share/doc/openssl-0.9.8e/openssl_button.gif
        /usr/share/doc/openssl-0.9.8e/openssl_button.html
        /usr/share/doc/openssl-0.9.8e/ssleay.txt

    These are the places. Thanks.
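    nginx's --with-openssl option expects an OpenSSL source tree to build against, not the installed /usr/bin/openssl binary, which is why make tries to cd into it and fails; a sketch of the two usual ways out (paths and versions are examples):

        # option 1: build against the system OpenSSL headers instead
        yum install openssl-devel
        ./configure --with-http_ssl_module

        # option 2: point --with-openssl at an unpacked *source* tree
        tar xzf openssl-0.9.8e.tar.gz -C /root
        ./configure --with-http_ssl_module --with-openssl=/root/openssl-0.9.8e
        make && make install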

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >