Search Results

Search found 6703 results on 269 pages for 'gnome terminal'.


  • ERROR 2003 (HY000): Can't connect to MySQL server on (111)

    - by JohnMerlino
    I am unable to connect to on my ubuntu installation a remote tcp/ip which contains a mysql installation: viggy@ubuntu:~$ mysql -u user.name -p -h xxx.xxx.xxx.xxx -P 3306 Enter password: ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111) I commented out the line below using vim in /etc/mysql/my.cnf: # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 Then I restarted the server: sudo service mysql restart But still I get the same error. This is the content of my.cnf: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf. # # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ (Note that I can log into my local mysql install just fine by running mysql (and it will log me in as root) and also note that I can get into mysql in the remote server by logging into via ssh and then invoking mysql), but I am unable to connect to the remote server via my terminal using the host, and I need to do it that way so that I can then use mysql workbench.
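    A possible way to narrow down a 2003/(111) "connection refused" error like this one is to confirm, on the server itself, which address mysqld is actually bound to and whether anything filters port 3306. The sketch below assumes the Debian/Ubuntu layout from the question, where files under /etc/mysql/conf.d/ can still override my.cnf; xxx.xxx.xxx.xxx stands for the remote host's address.

        # On the MySQL server: is mysqld listening on all interfaces or only 127.0.0.1?
        sudo netstat -tlnp | grep 3306      # want 0.0.0.0:3306, not 127.0.0.1:3306
        # Another included file may still set bind-address and win over my.cnf:
        grep -R "bind-address" /etc/mysql/
        # Is a firewall dropping or rejecting the port?
        sudo iptables -L -n | grep 3306
        # From the client machine: test raw TCP reachability before blaming MySQL itself:
        nc -vz xxx.xxx.xxx.xxx 3306

    If the port is reachable but the login still fails, the remaining suspect is the account's host grant (which produces error 1045/1130 rather than 2003), a separate problem from the refused connection shown above.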

    Read the article

  • Netgear VPN endpoint drops connectivity to single IP address

    - by Justin Bowers
    I'm having a strange issue with one of the networks I manage recently. We have about 14 different networks connected together through a Netgear hardware VPN. Everything has been running fine (other than standard connectivity problems) for a few years now, but I've hit a wall with a problem that's just cropped up at one of the VPN endpoint locations. Our primary VPN network is on the 192.168.1.0/24 subnet and our other 13 networks are on the 192.168.2.0/24 - 192.168.14.0/24 subnets. We run a terminal server on the 192.168.1.0/24 network with IP address 192.168.1.100. Starting Thursday of last week, we had a problem with connectivity of the 192.168.2.0/24 network to 192.168.1.100. When troubleshooting the problem, I found that Network 2 (192.168.2.0/24) still had connectivity to the Internet as well as VPN connectivity to Network 1 (192.168.1.0/24). We could ping and connect to any other device other than the server with IP address 192.168.1.100. Also, none of our networks had an issue accessing 192.168.1.100. I ran a scan on Network 2 after assigning static IP addresses to one of the workstations but received no response from 192.168.1.100 (looking for possibly a new device that someone had plugged into Network 2 that had a duplicate IP address with the server). Asking the staff, noone had reported connecting a new device to Network 2 as well. I then assigned a secondary IP address of 192.168.1.88 to the server and could ping and connect to the secondary IP address from Network 2, but still couldn't access it via 192.168.1.100. I then just rebooted the Netgear VPN Firewall (FVS318v3) and after it came back up, connectivity to 192.168.1.100 was restored. Beforehand, when checking for devices with a possible duplicate IP address, I did run a check for available wireless access points and stations and found none (our wireless is secured via MAC address access control through a WG102 device). I thought that it may have been a fluke for some reason since everything came back up after a power cycle of the VPN Firewall. Things ran fine for a few days until this afternoon, when the problem happened again. One of our users claimed that they had connectivity problems to the server and after connecting to the computer, I found that I couldn't ping the server address anymore. I could still ping the alternate IP address of the server though, so I went ahead and rebooted the VPN firewall again and connectivity was restored. Unfortunately, I can't find anything in the security or VPN logs of the firewall that helps point me in the right direction, so I thought I would go ahead and ask to see if anyone else has any other insight into why we've started having this problem. I am aware that it could still be a device with a duplicate IP address of the server on Network 2, but every employee claim states that there's been no such new device brought in to the network. I know this is a long read, but any help is appreciated! Thanks, Justin
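    One low-impact check the next time 192.168.1.100 stops answering (a sketch, not a confirmed fix) is to compare the MAC address a Network 2 workstation resolves for that IP against the server's real NIC address; a mismatch would point at a duplicate IP or a stale ARP/route entry held by the VPN box rather than at the server. On Windows hosts the equivalents are ping, arp -a and ipconfig /all.

        # From a workstation on Network 2 while the problem is occurring:
        ping -c 3 192.168.1.100
        arp -a | grep 192.168.1.100     # note which MAC answers for the address
        # On the server itself, list its actual hardware address for comparison:
        ip addr show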

    Read the article

  • No device file for partition on logical volume (Linux LVM)

    - by Brian
    I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group is comprised by 3 physical volumes, which are three primary partitions on a single block device (/dev/sdb). When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1. Since last reboot the aforementioned block device file has disappeared. It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running sudo chmod -R [his name] /usr/bin, which obliterated all suid in its path, preventing the both of us from sudo-ing. That issue has been (temporarily) rectified via this operation. Now I'll cut the chatter and get started with the terminal dumps: $ sudo pvs; sudo vgs; sudo lvs Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Scanning for physical volume names PV VG Fmt Attr PSize PFree /dev/sdb1 case4t lvm2 a- 819.32G 0 /dev/sdb2 case4t lvm2 a- 866.40G 0 /dev/sdb3 case4t lvm2 a- 47.09G 0 Wiping internal VG cache Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Finding all volume groups Finding volume group "case4t" VG #PV #LV #SN Attr VSize VFree case4t 3 1 0 wz--n- 1.69T 0 Wiping internal VG cache Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Finding all logical volumes LV VG Attr LSize Origin Snap% Move Log Copy% Convert scandata case4t -wi-a- 1.69T Wiping internal VG cache $ sudo vgchange -a y Logging initialised at Sat Jan 8 11:43:14 2011 Set umask to 0077 Finding all volume groups Finding volume group "case4t" 1 logical volume(s) in volume group "case4t" already active 1 existing logical volume(s) in volume group "case4t" monitored Found volume group "case4t" Activated logical volumes in volume group "case4t" 1 logical volume(s) in volume group "case4t" now active Wiping internal VG cache $ ls /dev | grep case4t case4t $ ls /dev/mapper case4t-scandata control $ sudo fdisk -l /dev/case4t/scandata Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes 255 heads, 63 sectors/track, 226203 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00049bf5 Device Boot Start End Blocks Id System /dev/case4t/scandata1 1 226203 1816975566 83 Linux $ sudo parted /dev/case4t/scandata print Model: Linux device-mapper (linear) (dm) Disk /dev/mapper/case4t-scandata: 1861GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 1861GB 1861GB primary ext3 $ sudo fdisk -l /dev/sdb Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes 255 heads, 63 sectors/track, 226204 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00000081 Device Boot Start End Blocks Id System /dev/sdb1 1 106955 859116006 83 Linux /dev/sdb2 113103 226204 908491815 83 Linux /dev/sdb3 106956 113102 49375777+ 83 Linux Partition table entries are not in disk order $ sudo parted /dev/sdb print Model: DELL PERC 6/i (scsi) Disk /dev/sdb: 1861GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 880GB 880GB primary reiserfs 3 880GB 930GB 50.6GB primary 2 930GB 1861GB 930GB primary I find it a bit strange that partition one above is said to be reiserfs, or if it matters -- it was previously reiserfs, but LVM recognizes it as a PV. To reiterate, neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. 
And /dev/case4t/scandata (no partition number) cannot be mounted: $sudo mount -t ext3 /dev/case4t/scandata /mnt/new mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so All I get on syslog is: [170059.538137] VFS: Can't find ext3 filesystem on dev dm-0. Thanks in advance for any help you can offer, Brian P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty) (out of date, I know -- that's on the laundry list).
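    The missing partition node is typically recreated by re-reading the partition table inside the logical volume; kpartx (from the device-mapper/multipath tools) does exactly that. A minimal sketch, assuming the volume names from the output above:

        # Map the partitions found inside the LV back into /dev/mapper:
        sudo kpartx -av /dev/mapper/case4t-scandata
        # kpartx should report adding case4t-scandata1 (or ...p1, depending on version):
        ls /dev/mapper/
        sudo mount -t ext3 /dev/mapper/case4t-scandata1 /mnt/new

    The VFS error in syslog is consistent with this: the filesystem lives inside partition 1 of the LV, not at the start of the LV itself, so mounting /dev/case4t/scandata directly finds no ext3 superblock.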

    Read the article

  • Riak "error":"insufficient_vnodes_available"

    - by Wolfiem
    We have 4 nodes Riak installation. They are running on Ubuntu 12.04 LTS Precise installed servers. We have installed 1.1.4 at August 1st 2012 and upgraded 1.2.0 when its available. Server names are: f1 - 10.10.0.12 - This is the first installed server. We have joined other ones to this server. This also serves Riak control. s2 - 10.10.0.22 - s3 - 10.10.0.23 - s4 - 10.10.0.24 - This server also serves Riak control. This morning we've seen "insufficient nodes available" error at our applications log and restarted all nodes. 3 of them became available except "f1" UPDATE : while I prepare this message live 3 nodes became unavailable and need restart Riak. wolfiem@f01:~$ sudo /etc/init.d/riak start Riak failed to start within 15 seconds, see the output of 'riak console' for more information. If you want to wait longer, set the environment variable WAIT_FOR_ERLANG to the number of seconds to wait. I've tried to set WAIT_FOR_ERLANG value to 60 seconds but I can't. adding this line in vm.args didn't work: -env WAIT_FOR_ERLANG 60 I also tried to set this from terminal but it didn't work either. wolfiem@f01:~$ export WAIT_FOR_ERLANG=60 It still says "Riak failed to start within 15 seconds" This is the console.log output: 2012-09-11 10:58:02.532 [info] <0.7.0> Application lager started on node '[email protected]' 2012-09-11 10:58:02.560 [warning] <0.148.0>@riak_core_ring_manager:reload_ring:231 No ring file available. 2012-09-11 10:58:02.585 [error] <0.164.0> CRASH REPORT Process <0.164.0> with 0 neighbours exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 This is the error.log output 2012-09-11 10:58:02.585 [error] <0.164.0> CRASH REPORT Process <0.164.0> with 0 neighbours exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 This is the crash.log output: 2012-09-11 10:58:02 =CRASH REPORT==== crasher: initial call: mochiweb_socket_server:init/1 pid: <0.164.0> registered_name: [] exception exit: {eaddrnotavail,[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]} ancestors: [riak_core_sup,<0.135.0>] messages: [] links: [<0.136.0>] dictionary: [] trap_exit: true status: running heap_size: 377 stack_size: 24 reductions: 403 neighbours: You can find the riak console output below: wolfiem@f01:~$ riak console Attempting to restart script through sudo -H -u riak Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.2.0/riak -embedded -config /etc/riak/app.config -pa /usr/lib/riak/basho-patches -args_file /etc/riak/vm.args -- console Root: /usr/lib/riak Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:8:8] [async-threads:64] [kernel-poll:true] =INFO REPORT==== 11-Sep-2012::10:44:18 === alarm_handler: {set,{system_memory_high_watermark,[]}} ** /usr/lib/riak/lib/observer-1.1/ebin/etop_txt.beam hides /usr/lib/riak/lib/basho-patches/etop_txt.beam ** Found 1 name clashes in code paths 10:44:19.099 [info] Application lager started on node '[email protected]' 10:44:19.130 [warning] No ring file available. 10:44:19.158 [error] CRASH REPORT Process <0.164.0> with 0 neighbours exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 /usr/lib/riak/lib/os_mon-2.2.9/priv/bin/memsup: Erlang has closed. 
=INFO REPORT==== 11-Sep-2012::10:44:19 === alarm_handler: {clear,system_memory_high_watermark} Erlang has closed {"Kernel pid terminated",application_controller,"{application_start_failure,riak_core,{shutdown,{riak_core_app,start,[normal,[]]}}}"} Crash dump was written to: /var/log/riak/erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,riak_core,{shutdown,{riak_core_app,start,[normal,[]]}}})
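    eaddrnotavail from mochiweb's listener usually means Riak is being asked to bind an address the host does not currently own (for example after an interface or IP change). A first check, assuming the stock file locations shown in the console output, is to compare the addresses in the config with what the machine really has:

        # Which addresses is this node configured to bind/advertise?
        grep -n "10\.10\.0\." /etc/riak/vm.args /etc/riak/app.config
        # Which addresses does the host actually have right now?
        ip addr show | grep "inet "

    If f1's 10.10.0.12 is missing from the interface list, or the config still carries an old address, riak_core cannot start its HTTP (and possibly protocol buffers) listener, which would explain both the failed start and the insufficient_vnodes errors seen by the application.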

    Read the article

  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is to create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present, I'm using the chpasswd command to change the password. This transmits the password as clear text over SSH to the remote system. Questions Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input. Is there a better method for setting the user's password non-interactively? Another option, would be to use Pexpect from within the Fabric script to set the password. Current Code # Fabric imports and host configuration excluded for brevity root_password = getpass.getpass("Root's password given by SliceManager: ") admin_username = prompt("Enter a username for the admin user to create: ") admin_password = getpass.getpass("Enter a password for the admin user: ") env.user = 'root' env.password = root_password # Create the admin group and add it to the sudoers file admin_group = 'admin' run('addgroup {group}'.format(group=admin_group)) run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format( group=admin_group) ) # Create the new admin user (default group=username); add to admin group run('adduser {username} --disabled-password --gecos ""'.format( username=admin_username) ) run('adduser {username} {group}'.format( username=admin_username, group=admin_group) ) # Set the password for the new admin user run('echo "{username}:{password}" | chpasswd'.format( username=admin_username, password=admin_password) ) Local System Terminal I/O $ fab config_rebuilt_slice Root's password given by SliceManager: Enter a username for the admin user to create: johnsmith Enter a password for the admin user: [xxx.xx.xx.xxx] run: addgroup admin [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ... [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos "" [xxx.xx.xx.xxx] out: Adding user `johnsmith' ... [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ... [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ... [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ... [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ... [xxx.xx.xx.xxx] run: adduser johnsmith admin [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ... [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd [xxx.xx.xx.xxx] run: passwd --lock root [xxx.xx.xx.xxx] out: passwd: password expiry information changed. Done. Disconnecting from [email protected]... done.
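    One commonly suggested variant (a sketch, not a drop-in replacement for the script above) is to hash the password locally and ship only the hash, so neither the SSH session nor Fabric's local echo ever carries the clear text; chpasswd -e accepts pre-encrypted passwords:

        # Locally: generate a crypt(3)-style hash of the chosen password
        HASH=$(openssl passwd -1 'supersecretpassw0rd')
        # Remotely (what the Fabric run() call would execute): feed the hash, not the password
        echo "johnsmith:${HASH}" | chpasswd -e

    Inside the Fabric script the same idea applies: compute the hash with Python's crypt module before calling run(), and interpolate the hash instead of admin_password.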

    Read the article

  • How can I tell Firefox to ignore unprintable characters?

    - by BrianH
    Edit: Summary Apparently the intended character to display in this case is an "en-dash". This page has a table half way down that shows that for the &ndash;, some software will convert the correct hex code of 2013 to 0096. (look at the first row in the table). This answer on Stackoverflow explains that somehow this is a mixup between Windows-1252 and UTF-8 This blog article enforces this: Character 150 (0x96) is the unicode character "START OF GUARDED AREA" in the non-displayed C1 control character range, but in the Windows-1252 encoding it's mapped to to the displayable character 0x2013 "en-dash" (a short dash). Others have struggled with this when producing content, as this answer on Stackoverflow shows how to replace 0x0096 with 0x2013. Google must realize this, because as stated in my original question below, Google's cached version of the Amazon page has &ndash; so it seems they are automatically correcting these mistakes on pages they cache. I have tried setting my encoding to Windows-1252 but that does not help. So now I guess my question is, how can I tell Firefox to ignore unprintable characters like these? Original content below: (Firefox 3.6.13 on Windows XP) Every once in a while I notice an odd character on certain web pages when browsing the web. It is a outline of a box with a 4-digit number inside. And example of a page that has these characters is: http://aws.amazon.com/ec2/#highlights After each section heading (Elastic, Completely Controlled, ...) I see a box with the number "0096" inside. I looked at the cached version on Google, and google has &ndash; in it's place, so I'm guessing I should be seeing a dash there instead of the box with the numbers in it. I have tried changing the character encoding in Firefox but haven't been able to find one that shows these characters correctly. Is there a way to allow Firefox to view these characters? Thanks in advance! Edit - adding a screen shot of the "special" characters: Edit #2 - tried in Ubuntu - new screenshots I logged into my Ubuntu desktop and browsed to the amazon page in Chrome and Firefox. Chrome completely ignores character, even if I inspect or view page source. Firefox in Unbutu displays the character exactly like Firefox on my Windows XP box. I copied the character and played around with it at the command line - here is a screenshot of the results: It looks like I can paste the character into this post as well: `` It is definitely not isolated to Windows XP. I tried setting the character encoding for my terminal to Windows 1252 (from Dennis' comment below), but then it just displays this character as a question mark. I pulled the webpage down with wget and with curl, and both outputs show this characters as: <96> It makes me wonder if this character renders correctly for anyone? It appears webkit just ignores it, my IE6 ignores it, Firefox displays the box with the numbers in it. I would have to imagine the design team at Amazon can see it correctly? It's not a huge deal to get these characters displaying correctly, but it would be nice to know if there is a solution to this.
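    A quick way to confirm the diagnosis from the command line (a sketch; the exact byte count will vary with the page) is to check that the served HTML really contains the single byte 0x96 and then re-interpret that byte as Windows-1252, which yields the intended en dash:

        # Does the served HTML contain the raw byte 0x96 (rather than UTF-8's 0xE2 0x80 0x93)?
        curl -s http://aws.amazon.com/ec2/ | grep -c $'\x96'
        # Re-decode that byte as Windows-1252; the result is U+2013 (en dash) in UTF-8:
        printf '\x96' | iconv -f WINDOWS-1252 -t UTF-8 | hexdump -C

    Firefox does not appear to offer a setting to silently drop such bytes, so the realistic options are a fix on the publisher's side or filtering the page client-side with an extension or userscript.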

    Read the article

  • Server Admin is not allowing me to configure DNS

    - by Clinton Blackmore
    We have a Mac OS X 10.5.8 Server running DNS (and a few other services). When I connect to it (using Server Admin 10.5.3 [which comes from the Server Admin 10.5.7 tools]), and click to look at the DNS settings, all appears normal -- it shows many reverse entries and two top-level domains. However, when I select one of our domains and open the disclosure triangle, the list is empty! [There should be over a dozen entries, and the reverse entries do show up.] If I then tell it I want to add, say, an A Record to the domain, almost everything disappears -- and I am left with a list showing our two domains, one with a disclosure triangle underneath it showing a single entry, and one reverse entry to correspond to the new A record. named appears to be working fine. DNS names resolve. It appears to simply be that Server Admin is having problems with the data on the computer. No one here would have manually created a DNS entry. Now, while I think I've backed up the DNS (I backed up /var/named/, /etc/named.conf, and /etc/dns/, as mentioned here), I'm really not sure if just replacing the files would restore the DNS settings we have if things go south. I am contemplating going to settings and changing the log level from "Information" to "Debug", but 1) I am just a little concerned that it might write a bad configuration to the disk, and 2) I think it would only affect named and not Server Admin, and, so far as I can tell, named is not having a problem. (Nothing looks strange in /Library/Logs/named.log when I open it via Console/Terminal. Oddly, though, when I click on the 'log' button for DNS in Server Admin, I see no text at all, just a fully white window. When I look at one of our secondary DNS servers, I am able to see the log file through Server Admin.) This entry appears in the system log when I run Server Admin on the server: Jun 17 09:02:08 od1 Server Admin[3892]: Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription It seems to occur after I've looked at DNS, look at another service, and then click back on DNS. Think that the most likely cause is a corrupt configuration file, I glanced through all the files that I backed up, and none of them is obviously gobbledygook. Here are some oddities I find when running Server Admin from a remote computer to manage the DNS. When I click to see the log file for DNS, the server starts writing messages like these to its system.log: Jun 17 09:59:04 od1 kernel[0]: Limiting open port RST response from 252 to 250 packets per second Jun 17 09:59:06 od1 kernel[0]: Limiting open port RST response from 258 to 250 packets per second This stops when I click on a different service. The inderterminate progress indicator (the spinning wheel that appears beside the "Revert" and "Save" buttons in the bottom-right corner of Server Admin) looks really strange. As far as I can tell, instead of just spinning and waiting, it is being told to start spinning repeatedly, resulting in a jerky animation. Here are some of the messages being logged on the computer running Server Admin: At startup: *** ERROR: -[GRAxes computeLayout]:1124 - plotRect height = 0.000000 <= 0.0 *** *** ERROR: -[GRChartView computeLayout]:1194 - Layout for overlay axes (0x18758f50) failed. *** (These messages don't concern me too much as they go away for a while if you delete ~/Library/Preferences/com.apple.ServerAdmin.plist). 
At shutdown: 2010-06-17 10:02:17.202 Server Admin[7770:10b] *** -[GroupTextField windowDidResignKey:]: unrecognized selector sent to instance 0x16e12490 More concerning are these messages: 2010-06-17 09:59:47.269 Server Admin[7770:10b] Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription Server Admin(7770,0xb0453000) malloc: *** error for object 0x1c115390: double free *** set a breakpoint in malloc_error_break to debug 2010-06-17 10:01:00.795 Server Admin[7770:10b] *** -[ServiceEntry sessionHost]: unrecognized selector sent to instance 0x2af500 Any thoughts on what the problem is, how I can troubleshoot it, or how to fix it? If I do need to wipe out DNS and restart, is there a good way to do this?
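    Since named itself resolves fine, it may help to rule out the zone data before blaming Server Admin; BIND ships check tools that read the same files Server Admin renders. A sketch, with example.org standing in for one of the real domains (the zone file path under /var/named/ is an assumption; adjust to what is actually on disk):

        # Validate the overall BIND configuration that Server Admin writes:
        sudo named-checkconf /etc/named.conf
        # Validate one forward zone (substitute the real domain and file name):
        sudo named-checkzone example.org /var/named/zones/db.example.org.zone

    If both checks pass, the problem is more likely in Server Admin's own plugin state than in the DNS data, which would match the doMarkConfigurationAsDirty messages in the logs.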

    Read the article

  • ls hangs for a certain directory

    - by Jakobud
    There is a particular directory (/var/www), that when I run ls (with or without some options), the command hangs and never completes. There is only about 10-15 files and directories in /var/www. Mostly just text files. Here is some investigative info: [me@server www]$ df . Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_dev-lv_root 50G 19G 29G 40% / [me@server www]$ df -i . Filesystem Inodes IUsed IFree IUse% Mounted on /dev/mapper/vg_dev-lv_root 3.2M 435K 2.8M 14% / find works fine. Also I can type in cd /var/www/ and press TAB before pressing enter and it will successfully tab-completion list of all files/directories in there: [me@server www]$ cd /var/www/ cgi-bin/ create_vhost.sh html/ manual/ phpMyAdmin/ scripts/ usage/ conf/ error/ icons/ mediawiki/ rackspace sqlbuddy/ vhosts/ [me@server www]$ cd /var/www/ I have had to kill my terminal sessions several times because of the ls hanging: [me@server ~]$ ps | grep ls gdm 6215 0.0 0.0 488152 2488 ? S<sl Jan18 0:00 /usr/bin/pulseaudio --start --log-target=syslog root 23269 0.0 0.0 117724 1088 ? D 18:24 0:00 ls -Fh --color=always -l root 23477 0.0 0.0 117724 1088 ? D 18:34 0:00 ls -Fh --color=always -l root 23579 0.0 0.0 115592 820 ? D 18:36 0:00 ls -Fh --color=always root 23634 0.0 0.0 115592 816 ? D 18:38 0:00 ls -Fh --color=always root 23740 0.0 0.0 117724 1088 ? D 18:40 0:00 ls -Fh --color=always -l me 23770 0.0 0.0 103156 816 pts/6 S+ 18:41 0:00 grep ls kill doesn't seem to have any affect on the processes, even as sudo. What else should I do to investigate this problem? It just randomly started happening today. UPDATE dmesg is a big list of things, mostly related to an external USB HDD that I've mounted too many times and the max mount count has been reached, but that is an un-related problem I think. Near the bottom of dmesg I'm seeing this: INFO: task ls:23579 blocked for more than 120 seconds. "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. ls D ffff88041fc230c0 0 23579 23505 0x00000080 ffff8801688a1bb8 0000000000000086 0000000000000000 ffffffff8119d279 ffff880406d0ea20 ffff88007e2c2268 ffff880071fe80c8 00000003ae82967a ffff880407169ad8 ffff8801688a1fd8 0000000000010518 ffff880407169ad8 Call Trace: [<ffffffff8119d279>] ? __find_get_block+0xa9/0x200 [<ffffffff814c97ae>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814c964b>] mutex_lock+0x2b/0x50 [<ffffffff8117a4d3>] do_lookup+0xd3/0x220 [<ffffffff8117b145>] __link_path_walk+0x6f5/0x1040 [<ffffffff8117a47d>] ? do_lookup+0x7d/0x220 [<ffffffff8117bd1a>] path_walk+0x6a/0xe0 [<ffffffff8117beeb>] do_path_lookup+0x5b/0xa0 [<ffffffff8117cb57>] user_path_at+0x57/0xa0 [<ffffffff81178986>] ? generic_readlink+0x76/0xc0 [<ffffffff8117cb62>] ? user_path_at+0x62/0xa0 [<ffffffff81171d3c>] vfs_fstatat+0x3c/0x80 [<ffffffff81258ae5>] ? _atomic_dec_and_lock+0x55/0x80 [<ffffffff81171eab>] vfs_stat+0x1b/0x20 [<ffffffff81171ed4>] sys_newstat+0x24/0x50 [<ffffffff810d40a2>] ? audit_syscall_entry+0x272/0x2a0 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b And also, strace ls /var/www/ spits out a whole BUNCH of information. I don't know what is useful here... 
The last handful of lines: ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, TIOCGWINSZ, {ws_row=68, ws_col=145, ws_xpixel=0, ws_ypixel=0}) = 0 stat("/var/www/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0 open("/var/www/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3 fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC) getdents(3, /* 16 entries */, 32768) = 488 getdents(3, /* 0 entries */, 32768) = 0 close(3) = 0 fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 9), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3093b18000 write(1, "cgi-bin conf create_vhost.sh\te"..., 125cgi-bin conf create_vhost.sh error html icons manual mediawiki phpMyAdmin rackspace scripts sqlbuddy usage vhosts ) = 125 close(1) = 0 munmap(0x7f3093b18000, 4096) = 0 close(2) = 0 exit_group(0) = ?
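    The D (uninterruptible sleep) state plus the do_lookup/mutex trace suggests a single entry blocks when it is stat()ed, while the plain getdents listing (the strace above, without -l or --color) completes. Stat-ing the entries one at a time can identify the culprit; a sketch:

        # Find which entry blocks: the loop will stall at the problem entry,
        # so the last name printed identifies the culprit.
        for f in /var/www/*; do
            echo "checking $f"
            stat "$f" > /dev/null
        done
        # Also check for a hung network or FUSE mount lurking below /var/www:
        grep /var/www /proc/mounts

    Processes stuck in D state cannot be killed, even with kill -9, until whatever the kernel is waiting on (often a failing disk or an unreachable network filesystem) answers, which is why the earlier kill attempts had no effect.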

    Read the article

  • segfault when cd-ing into certain directories in bash

    - by user84207
    I have noticed this very strange behavior recently. After cd into certain directories, I get a segfault on the terminal. --- SIGSEGV (Segmentation fault) @ 0 (0) --- --- SIGSEGV (Segmentation fault) @ 0 (0) --- +++ killed by SIGSEGV (core dumped) +++ segmentation fault (core dumped) I proceeded to strace a bash session in which I cd into the target directory, and was able to reproduce the problem. I attached the log to this pastebin: I paste below the few lines from the read of "cd stumpwm", which is the directory in question, until the segfault. I included a few of the repetitions of calls to "rt_sigprocmask" and "brk" to give a glimpse of the pattern, which occurs for most of the strace, read(0, cd stumpwm "c", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "d", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, " ", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "s", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "t", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "u", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "m", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "p", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "w", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "m", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "\n", 1) = 1 rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) = 0 ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig icanon -echo ...}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon -echo ...}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x457d50, [], SA_RESTORER, 0x7ffff76254a0}, {0x49edc0, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTERM, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGQUIT, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGALRM, {0x457f50, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x7ffff76254a0}, {0x49edc0, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTSTP, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTTOU, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTTIN, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGWINCH, {0x457920, [], SA_RESTORER, 0x7ffff76254a0}, {0x49e6e0, [], SA_RESTORER|SA_RESTART, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGINT, {0x457d50, [], SA_RESTORER, 0x7ffff76254a0}, {0x457d50, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 brk(0xa9a000) = 0xa9a000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 
8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9b000) = 0xa9b000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9c000) = 0xa9c000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9d000) = 0xa9d000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9e000) = 0xa9e000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9f000) = 0xa9f000 brk(0xaa0000) = 0xaa0000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xaa1000) = 0xaa1000 brk(0xaa2000) = 0xaa2000 (pattern of rt_sigprocmask, brk continues ...) rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0x1d5b000) = 0x1d5b000 brk(0x1d5c000) = 0x1d5c000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0x1d5d000) = 0x1d5d000 brk(0x1d5e000) = 0x1d5e000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 --- SIGSEGV (Segmentation fault) @ 0 (0) --- --- SIGSEGV (Segmentation fault) @ 0 (0) --- +++ killed by SIGSEGV (core dumped) +++ segmentation fault (core dumped) How can I debug this? Is this likely to be a bash problem? The error does not occur with another shell, such as eshell. I have also run an fschk, although I haven't been able to see the output because of this bug.
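    Two things are worth separating (a sketch, with the directory path written out as a placeholder): whether a bash with no startup files or prompt hooks still crashes, and if so, where inside bash the fault lands. The repeating brk() calls in the trace show the heap growing steadily before the crash, which often points at a prompt or completion function, or CDPATH handling, rather than at the directory itself.

        # 1) Reproduce with a pristine shell: no rc files, no PROMPT_COMMAND, no completions
        env -i bash --noprofile --norc -c 'cd /path/to/stumpwm && pwd'
        # 2) If it still crashes, capture a core file and look at the backtrace
        #    (the core file name depends on /proc/sys/kernel/core_pattern)
        ulimit -c unlimited
        bash --norc -c 'cd /path/to/stumpwm'
        gdb "$(which bash)" core -batch -ex bt

    If the clean shell survives, bisect ~/.bashrc; PS1, PROMPT_COMMAND, and any cd wrapper or completion package are the usual suspects.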

    Read the article

  • Call to daemon in a /etc/init.d script is blocking, not running in background

    - by tony
    I have a Perl script that I want to daemonize. Basically this perl script will read a directory every 30 seconds, read the files that it finds and then process the data. To keep it simple here consider the following Perl script (called synpipe_server, there is a symbolic link of this script in /usr/sbin/) : #!/usr/bin/perl use strict; use warnings; my $continue = 1; $SIG{'TERM'} = sub { $continue = 0; print "Caught TERM signal\n"; }; $SIG{'INT'} = sub { $continue = 0; print "Caught INT signal\n"; }; my $i = 0; while ($continue) { #do stuff print "Hello, I am running " . ++$i . "\n"; sleep 3; } So this script basically prints something every 3 seconds. Then, as I want to daemonize this script, I've also put this bash script (also called synpipe_server) in /etc/init.d/ : #!/bin/bash # synpipe_server : This starts and stops synpipe_server # # chkconfig: 12345 12 88 # description: Monitors all production pipelines # processname: synpipe_server # pidfile: /var/run/synpipe_server.pid # Source function library. . /etc/rc.d/init.d/functions pname="synpipe_server" exe="/usr/sbin/synpipe_server" pidfile="/var/run/${pname}.pid" lockfile="/var/lock/subsys/${pname}" [ -x $exe ] || exit 0 RETVAL=0 start() { echo -n "Starting $pname : " daemon ${exe} RETVAL=$? PID=$! echo [ $RETVAL -eq 0 ] && touch ${lockfile} echo $PID > ${pidfile} } stop() { echo -n "Shutting down $pname : " killproc ${exe} RETVAL=$? echo if [ $RETVAL -eq 0 ]; then rm -f ${lockfile} rm -f ${pidfile} fi } restart() { echo -n "Restarting $pname : " stop sleep 2 start } case "$1" in start) start ;; stop) stop ;; status) status ${pname} ;; restart) restart ;; *) echo "Usage: $0 {start|stop|status|restart}" ;; esac exit 0 So, (if I have well understood the doc for daemon) the Perl script should run in the background and the output should be redirected to /dev/null if I execute : service synpipe_server start But here is what I get instead : [root@master init.d]# service synpipe_server start Starting synpipe_server : Hello, I am running 1 Hello, I am running 2 Hello, I am running 3 Hello, I am running 4 Caught INT signal [ OK ] [root@master init.d]# So it starts the Perl script but runs it without detaching it from the current terminal session, and I can see the output printed in my console ... which is not really what I was expecting. Moreover, the PID file is empty (or with a line feed only, no pid returned by daemon). Does anyone have any idea of what I am doing wrong ? EDIT : maybe I should say that I am on a Red Hat machine. Scientific Linux SL release 5.4 (Boron) Would it do the job if instead of using the daemon function, I use something like : nohup ${exe} >/dev/null 2>&1 & in the init script ?
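    The nohup variant the question ends with is indeed the usual workaround on RHEL-family systems, because the stock daemon() helper in /etc/rc.d/init.d/functions does not itself detach a program that stays in the foreground. A sketch of start() along those lines (untested, keeping the rest of the init script as written):

        start() {
            echo -n "Starting $pname : "
            # Background the process explicitly and silence its output;
            # $! then holds the PID of the detached perl script.
            nohup ${exe} >/dev/null 2>&1 &
            RETVAL=$?
            PID=$!
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
            echo $PID > ${pidfile}
        }

    The cleaner long-term fix is to have the Perl script daemonize itself (fork, setsid, chdir /, redirect the standard streams, for example via Proc::Daemon), so the init script only has to start and track it.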

    Read the article

  • htaccess on remote server issues - password prompt not accepting input

    - by pying saucepan
    EDIT: I will contact the university about my problem after labor day weekend, but I thought if someone knew a quick fix that I haven't tried, or if the problem has an obvious fix then I could hope to try my luck here, thanks! TLDR: Sorry its a long post, I thought I should be... thorough. I am having a common issue (found a dead thread through google with no solution to the same problem) with the prompt to enter in a username and password via htaccess rights, but this prompt will keep popping up asking for a username and password when trying to access my home directory on my university's server which has the .htaccess and .htpasswd files. It does not matter if I enter in correct or incorrect credentials, the prompt will keep asking me for input without displaying my home directory. Ever since I have included these ht files I have never once been able to get past the username/password no matter what I have tried, save for removing them from the directory I am trying to access (my top level directory that I own). This kind of served my original goal of making the top level directory inaccessible to casual users, but if I wanted to use this method on other places, I would want it to work as intended. And I also like it when computers do what I wish they would, so any help is appreciated. Some things I have tried: Changing the file/directory access rights: they told me to try these commands if people can't access my files cd ~/public_html find ./ -type d -exec chmod 755 {} \; find ./ -type f -exec chmod 644 {} \; enter in the single character name/pw at least twenty times in a row, no cheddar. so I changed directory with cd ~ in hopes that this would be my home directory, since my home directory contains the "public_html" directory, so logic tells me that the ~ tilde symbol is the top level directory that I have ownership of. Then I did those two commands to change the rights on the files inside, I am still having no luck. How I got to this point: I have been following the instructions given to me through my university's website for setting up my little directory. A link on how they describe how to password protect the home directory is given below: "Protect Web Directories" instructions I have everything in order except for one small detail that I feel probably does not matter. I am on windows and so I am using winSCP to remote control my allocated server space. The small detail is that as the instructions indicate (on step 3) that I should use the command htpasswd -c .htpasswd {username} where {username} is my folder that holds my allocated server space. But this command requires further input through the terminal, and unfortunately winSCP does not offer this kind of functionality. So I looked up some basic instructions on using htaccess and it is formatted correctly such that the .htaccess file appears as follows: AuthType Basic AuthName "Verify" AuthUserFile /correctpath/.htpasswd require valid-user and this file is in the root directory for my server space as well as the .htpasswd file which has only this data inside: username:password I know for sure that these two files must be formatted correctly, at least according to their tutorial, because before my path was incorrectly formatted via including some curly { braces } without knowing the correct way to do this at first. And the password prompt that shows up when accessing my directory responded by loading an error page indicating to contact OSU admin or something not important. But now that I have everything like it 'should' be. 
I know this because when I enter my credentials (username and password), the prompt pops up again and again whether or not the information is correct. The only exception is that if I click cancel, it directs me to a page saying that I need to enter a username and password. Note that I am very inexperienced at server-related business; two days ago I couldn't have told you what a website actually consists of. So if you use technical jargon, I may need to look it up and get back to you before I understand what you mean, but I am a quick learner and it probably won't matter.
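    One detail that matches these symptoms: on a Unix host, Apache expects the second field of .htpasswd to be a crypt/MD5/SHA hash, not the literal password, so a hand-written username:password line is rejected on every attempt even when the credentials are "right". If the interactive htpasswd step is the obstacle (as with WinSCP here), the entry can be produced non-interactively; a sketch, with /correctpath/ standing in for wherever AuthUserFile points:

        # htpasswd can take the password on the command line (-b) and print the entry (-n):
        htpasswd -nb username 'yourpassword' >> /correctpath/.htpasswd
        # or, without htpasswd available, build the entry with openssl's Apache MD5 variant:
        printf 'username:%s\n' "$(openssl passwd -apr1 'yourpassword')" >> /correctpath/.htpasswd

    Whichever way the entry is generated, it must land in the exact file AuthUserFile names, and the password typed into the browser prompt is the clear-text one, not the hash.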

    Read the article

  • How do i change the BIOS boot splash screen?

    - by YumYumYum
    I have a Dell PC which has very ugly and bad luck looking Alien face on every boot. I want to change it or disable it forever, but in Bios they do not have any options. How can i change this from my linux Fedora or ArchLinux which is running now? Tried following does not work. ( http://www.pixelbeat.org/docs/bios/ ) ./flashrom -r firmware.old #save current flash ROM just in case ./flashrom -wv firmware.new #write and verify new flash ROM image Also tried: $ cat c.c #include <stdio.h> #include <inttypes.h> #include <netinet/in.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <unistd.h> #define lengthof(x) (sizeof(x)/sizeof(x[0])) uint16_t checksum(const uint8_t* data, int len) { uint16_t sum = 0; int i; for (i=0; i<len; i++) sum+=*(data+i); return htons(sum); } void usage(void) { fprintf(stderr,"Usage: therm_limit [0,50,53,56,60,63,66,70]\n"); fprintf(stderr,"Report therm limit of terminal in BIOS\n"); fprintf(stderr,"If temp specifed, it is changed if required.\n"); exit(EXIT_FAILURE); } #define CHKSUM_START 51 #define CHKSUM_END 109 #define THERM_OFFSET 67 #define THERM_SHIFT 0 #define THERM_MASK (0x7 << THERM_SHIFT) #define THERM_OFF 0 uint8_t thermal_limits[]={0,50,53,56,60,63,66,70}; #define THERM_MAX (lengthof(thermal_limits)-1) #define DEV_NVRAM "/dev/nvram" #define NVRAM_MAX 114 uint8_t nvram[NVRAM_MAX]; int main(int argc, char* argv[]) { int therm_request = -1; if (argc>2) usage(); if (argc==2) { if (*argv[1]=='-') usage(); therm_request=atoi(argv[1]); int i; for (i=0; i<lengthof(thermal_limits); i++) if (thermal_limits[i]==therm_request) break; if (i==lengthof(thermal_limits)) usage(); else therm_request=i; } int fd_nvram=open(DEV_NVRAM, O_RDWR); if (fd_nvram < 0) { fprintf(stderr,"Error opening %s [%m]\n", DEV_NVRAM); exit(EXIT_FAILURE); } if (read(fd_nvram, nvram, sizeof(nvram))==-1) { fprintf(stderr,"Error reading %s [%m]\n", DEV_NVRAM); close(fd_nvram); exit(EXIT_FAILURE); } uint16_t chksum = *(uint16_t*)(nvram+CHKSUM_END); printf("%04X\n",chksum); exit(0); if (chksum == checksum(nvram+CHKSUM_START, CHKSUM_END-CHKSUM_START)) { uint8_t therm_byte = *(uint16_t*)(nvram+THERM_OFFSET); uint8_t therm_status=(therm_byte & THERM_MASK) >> THERM_SHIFT; printf("Current thermal limit: %d°C\n", thermal_limits[therm_status]); if ( (therm_status == therm_request) ) therm_request=-1; if (therm_request != -1) { if (therm_status != therm_request) printf("Setting thermal limit to %d°C\n", thermal_limits[therm_request]); uint8_t new_therm_byte = (therm_byte & ~THERM_MASK) | (therm_request << THERM_SHIFT); *(uint8_t*)(nvram+THERM_OFFSET) = new_therm_byte; *(uint16_t*)(nvram+CHKSUM_END) = checksum(nvram+CHKSUM_START, CHKSUM_END-CHKSUM_START); (void) lseek(fd_nvram,0,SEEK_SET); if (write(fd_nvram, nvram, sizeof(nvram))!=sizeof(nvram)) { fprintf(stderr,"Error writing %s [%m]\n", DEV_NVRAM); close(fd_nvram); exit(EXIT_FAILURE); } } } else { fprintf(stderr,"checksum failed. Aborting\n"); close(fd_nvram); exit(EXIT_FAILURE); } return EXIT_SUCCESS; } $ gcc c.c -o bios # ./bios 16DB
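    Before reflashing anything it is worth pinning down the exact system and BIOS revision, since Dell's own BIOS update packages are usually a safer route to a logo change than raw flashrom writes (whether a given model allows replacing the splash image at all is not something these commands can answer):

        # Identify the machine and the firmware it is running:
        sudo dmidecode -s system-product-name
        sudo dmidecode -t bios

    Note that the thermal-limit program quoted above reads and writes CMOS settings via /dev/nvram; it does not touch the splash image, which lives in the flash ROM itself.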

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard debian package with sudo apt-get install motion Assume my DNS IP cam is webcam, user : admin, password : password /etc/motion/motion.conf Bellow are my configuration file settings : netcam_url http://webcam/videostream.cgi netcam_userpass admin:password target_dir /media/videos/log/motion # The mini-http server listens to this port for requests (default: 0 = disabled) webcam_port 1234 ffmpeg_cap_new on ffmpeg_video_codec mpeg4 output_motion off snapshot_interval 0 # Quality of the jpeg (in percent) images produced (default: 50) webcam_quality 50 # Output frames at 1 fps when no motion is detected and increase to the # rate given by webcam_maxrate when motion is detected (default: off) webcam_motion on # Maximum framerate for webcam streams (default: 1) webcam_maxrate 15 # Restrict webcam connections to localhost only (default: on) webcam_localhost on # Limits the number of images per connection (default: 0 = unlimited) # Number can be defined by multiplying actual webcam rate by desired number of seconds # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate webcam_limit 0 control_port 8080 control_authentication admin:password Issue #1 when I try display http:/localhost:1234 the browser looks frozen, no HTTP 404 received but it stills waiting for data it seems .. Issue #2 in the output directory motion writes a lot of jpeg snapshots althought I just want to have several video sequenced files. Log I run motion in interactive mode in a terminal, here is the ouput root@mercure:/etc/motion# motion -c motion-1.0.conf [0] Processing thread 0 - config file motion-1.0.conf [0] Motion 3.2.12 Started [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785 [0] Thread 1 is from motion-1.0.conf [0] motion-httpd/3.2.12 running, accepting connections [0] motion-httpd: waiting for data on port TCP 8080 [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Operation now in progress [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Resource temporarily unavailable [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Connection reset by peer [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [0] httpd - Finishing [0] httpd Closing [0] httpd thread exit [1] 
netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 It doesn't find the codec ... avcodec_open - could not open codec: Operation now in progress I've changed fmpeg_video_codec from mpeg4 to swf the result is the same. When using flv format motion writes a lot of .jpg image but I can't see anything at http://localhost:1234 [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg Any idea just to get the video stream recoded on my local disk ?
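    Two separate things seem to be going on here, so two separate checks may help (a sketch for motion 3.2.x, using option names from its standard config): the stream of jpegs on disk is governed by output_normal, which defaults to on and is not set in the config above, and the frozen browser can be bypassed by probing the MJPEG stream with a non-browser client first.

        # In /etc/motion/motion.conf - stop per-frame jpeg output, keep only the ffmpeg movies:
        #   output_normal off
        # Probe the live stream without a browser; the stream is endless, so cut it after 10s
        # and just check that bytes arrived:
        curl -s --max-time 10 -o /tmp/stream_probe http://localhost:1234
        ls -l /tmp/stream_probe

    The avcodec "could not open codec" failures are a different issue, usually a mismatch between motion 3.2.12 and the libav/ffmpeg build on Debian 7; trying another ffmpeg_video_codec value (msmpeg4, swf, flv) or a motion package rebuilt against the distribution's libav is the usual advice.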

    Read the article

  • MacBook Pro 10.6 losing dns service, network connection still functional if you know the ip address.

    - by Vincent
    MacBook pro connected to a wireless network (not sure about wired) I lose DNS. I still have a functioning connection and as long as I know the ip address of the website, server... for example skype works, ssh name@ipaddress, .... Things can be working properly and then just quit, Once I was im via skype and lost dns skype continued to work. This has happened in multiple locations on private and public networks. What does not work/fix it: Resetting router changing dns server on computer or router connecting to another network removing the airport interface and adding it back flushing dns The only solution seems to be a restart. A solution to this would be great, but any ideas of this to try would be great. Even a sure way to reproduce this would be useful. Maybe related question: But this is most definitely not true for me. "if I refresh enough -- 3 to 4 times --, it will usually pull up the site. " Here are some tests from terminal. Basically this confirms dns in not functioning vmd17:~ vmd$ ping google.com ping: cannot resolve google.com: Unknown host Trace route to google dns, This works vmd17:~ vmd$ /usr/sbin/traceroute -n -w 2 -q 2 -m 30 8.8.8.8 traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 52 byte packets 1 192.168.1.1 5.195 ms 2.519 ms 2 67.172.136.1 31.881 ms 9.177 ms 3 68.85.107.121 12.168 ms 10.003 ms 4 68.86.103.41 12.021 ms 9.594 ms 5 68.86.91.1 16.712 ms 12.837 ms 6 68.86.86.210 29.951 ms 25.826 ms 7 68.86.87.218 29.554 ms 42.894 ms 8 75.149.231.70 68.271 ms 68.362 ms 9 72.14.233.77 141.178 ms 72.14.233.85 82.553 ms 10 72.14.238.243 83.381 ms 82.811 ms 11 72.14.232.213 194.387 ms 72.14.232.215 84.837 ms 12 209.85.253.145 100.294 ms * 13 8.8.8.8 101.689 ms 89.694 ms 208.67.222.22 is the ip address of opendns dns server vmd17:~ vmd$ dig @208.67.222.222 8.8.8.8 ; <<>> DiG 9.6.0-APPLE-P2 <<>> @208.67.222.222 8.8.8.8 ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached vmd17:~ vmd$ dig @208.67.222.222 gogle.com vmd17:~ vmd$ dig @208.67.222.222 google.com ; <<>> DiG 9.6.0-APPLE-P2 <<>> @208.67.222.222 google.com ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached vmd17:~ vmd$ dig @8.8.8.8 google.com ; <<>> DiG 9.6.0-APPLE-P2 <<>> @8.8.8.8 google.com ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached
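    When it next happens, capturing what the resolver layer thinks it is doing narrows the fault down to either the configured DNS servers or mDNSResponder itself; a sketch for 10.6 (all three commands ship with the OS):

        # What resolvers does the system currently believe it has?
        scutil --dns | grep nameserver
        # Can a known-good resolver be queried directly, bypassing the configured ones?
        dig @8.8.8.8 google.com +time=2 +tries=1
        # Snow Leopard's DNS cache flush (worth retrying at the moment of failure):
        dscacheutil -flushcache

    If the direct dig to 8.8.8.8 works while names still fail, mDNSResponder is the suspect (sudo killall mDNSResponder forces launchd to restart it without a reboot); if even the direct query fails, the problem is below DNS, at the routing or firewall level.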

    Read the article

  • DNS with name.com and Amazon S3

    - by aledalgrande
    I have a website on a bucket in Amazon S3, and recently started to get emails from Google "Googlebot can't access your site". When I go to Webmaster Tools and I try to fetch in fact it doesn't work. Also people in locations different from mine sometimes reported they could not access the website. Now for curiosity I tried from my terminal: $ host xxx xxx is an alias for xxx.s3-website-us-west-1.amazonaws.com. xxx.s3-website-us-west-1.amazonaws.com is an alias for s3-website-us-west-1.amazonaws.com. s3-website-us-west-1.amazonaws.com has address yyy.yyy.yyy.yyy And when I try with dig: $ dig xxx ; <<>> DiG 9.8.3-P1 <<>> xxx ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17860 ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;xxx. IN A ;; ANSWER SECTION: xxx. 300 IN CNAME xxx.s3-website-us-west-1.amazonaws.com. xxx.s3-website-us-west-1.amazonaws.com. 60 IN CNAME s3-website-us-west-1.amazonaws.com. s3-website-us-west-1.amazonaws.com. 60 IN A yyy ;; Query time: 1514 msec ;; SERVER: 75.75.75.75#53(75.75.75.75) ;; WHEN: Fri Aug 22 12:32:13 2014 ;; MSG SIZE rcvd: 127 It seems OK to me. Why would Google tell me there is a DNS error? UPDATE: Google also cannot fetch robots.txt, but I can fetch it from my browser. UPDATE 2: I have a forwarding on the root to the www.* hostname: $ dig thenifty.me ; <<>> DiG 9.8.3-P1 <<>> thenifty.me ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49286 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;thenifty.me. IN A ;; AUTHORITY SECTION: thenifty.me. 300 IN SOA ns1hwy.name.com. support.name.com. 1 10800 3600 604800 300 ;; Query time: 148 msec ;; SERVER: 75.75.75.75#53(75.75.75.75) ;; WHEN: Fri Aug 22 13:32:56 2014 ;; MSG SIZE rcvd: 88
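    dig against the local resolver can hide what Googlebot sees; querying the authoritative name.com servers directly, for both the bare domain and the www host, shows whether the apex actually has an address record or only the registrar's HTTP forward (ns1hwy.name.com is taken from the SOA in the output above):

        # Who is authoritative, and what do they return for each name?
        dig +short NS thenifty.me
        dig @ns1hwy.name.com thenifty.me A
        dig @ns1hwy.name.com www.thenifty.me CNAME
        # Approximate Googlebot's view with a public resolver:
        dig @8.8.8.8 thenifty.me A +short

    An apex that answers with only an SOA (as in the last dig above) has no A record at all, so crawlers and some networks will intermittently fail on the bare name even though www resolves through the CNAME chain to S3.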

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile ph

    - by Dave Heath
    I am getting well out of my “superuser” depth here and would love some support. At work we have an Excel workbook (*.xls format circa Office 2003) which maintains our “engineers” timesheet. This handles what events we are doing across the year and how many “work units” it is. As far as a workbook goes, it is fairly simple with just a few =SUM(range) cells and some linked across sheets (12 sheets, one for each month) It is stored on a server, in a folder that provides “management” with full access and “engineers” with read-only access. The workbook itself is read-only for “engineers” and full access for “management”. I think these permissions are controlled through Active Directory. The workbook is protected with a password, assumingly to allow “management” to edit it even if they are working at a terminal logged in as an “engineer”. This protection prevents “engineers” from going to certain cells to see formulae and therefore editing them. The workbook has a macro which saves and closes it ten minutes after opening. This is to stop other “management” from being locked out by any one person who has logged in with editing privileges. I hope this is making sense to someone... :S Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite so we can get to Exchange webmail and our internal site (which uses Sharepoint 3.0). “Management” would like to be able to publish this timesheet automatically after changes (they don’t want to have to do anything different to what they are currently doing) so that using an iPhone “engineers” can check on it while out of the office. I am currently having a look at “Excel Services” for Office 2007 on TechNet but I am not sure if I am running down the right garden path at the moment. < EDIT This seems to suggest that I have to have Sharepoint Server 2007, with no mention of Sharepoint 3.0... ... "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS) this is not good news... "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." Wikipedia entry for Windows Sharepoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET? This link here implies that we need MOSS to do what I want and the bosses say we aint' getting it. http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128 Back to the drawing board. < /EDIT Please could someone suggest some “further reading” for me to help point me in the right direction or to put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article

  • ubuntu mount iso but some files are unreadable

    - by Chao
    I'm new to Linux and just installed ubuntu 12.04 amd64 this month. I had failed installing Texlive with texlive2012 iso image. I used the recommended command to mount: "mount -t iso9660 -o ro,loop,noauto /your/texlive2012.iso /mnt " But the installer failed to read some file. The iso is fine, I checked the md5. I extracted everything from iso with archive manager, and it installed successfully. So, why mount is not working? Thanks. UPDATE with furius iso mount tool, mount with Fuse, and it installed, with warning: Summary of warning messages during installation: Downloaded ./archive/calligra-type1.tar.xz, size equal, but md5sum differs; downloading again. While mount with Loop, it failed to install. Updated Error message from terminal, mounted with furius iso mount, loop. texlive2012-20120701_iso$ ./install-tl -gui Loading ./tlpkg/texlive.tlpdb Installing TeX Live 2012 from: . Platform: x86_64-linux = 'x86_64 with GNU/Linux' Distribution: inst (compressed) Directory for temporary files: /tmp Installing [0001/2481, time/total: ??:??/??:??]: 12many [3k] Installing [0002/2481, time/total: 00:00/00:00]: 2up [4k] Installing [0003/2481, time/total: 00:00/00:00]: Asana-Math [457k] Installing [0004/2481, time/total: 00:00/00:00]: ESIEEcv [2k] ... Installing [0265/2481, time/total: 00:10/01:09]: calctab [5k] Installing [0266/2481, time/total: 00:10/01:09]: calligra [42k] Installing [0267/2481, time/total: 00:10/01:09]: calligra-type1 [59k] Downloaded ./archive/calligra-type1.tar.xz, size equal, but md5sum differs; downloading again. ./tlpkg/installer/xz/xzdec.x86_64-linux: (stdin): File is corrupt tar: Unexpected EOF in archive tar: rmtlseek not stopped at a record boundary tar: Error is not recoverable: exiting now untar: untarring /home/lichao/ttt/temp/calligra-type1.tar failed (in /home/lichao/ttt/texmf-dist) untarring /home/lichao/ttt/temp/calligra-type1.tar failed, stopping install. Installation failed. Rerunning the installer will try to restart the installation. Or you can restart by running the installer with: install-tl --profile installation.profile [EXTRA-ARGS] ./install-tl: Could not write to install-tl.log, so flushing messages to stderr. Loading ./tlpkg/texlive.tlpdb Installing TeX Live 2012 from: . Platform: x86_64-linux = 'x86_64 with GNU/Linux' Distribution: inst (compressed) Directory for temporary files: /tmp Installer revision: 26794 Database revision: 26935 Installing [0001/2481, time/total: ??:??/??:??]: 12many [3k] Installing [0002/2481, time/total: 00:00/00:00]: 2up [4k] Installing [0003/2481, time/total: 00:00/00:00]: Asana-Math [457k] Installing [0004/2481, time/total: 00:00/00:00]: ESIEEcv [2k] Installing [0005/2481, time/total: 00:00/00:00]: FAQ-en [1k] ... Installing [0262/2481, time/total: 00:10/01:09]: c90 [2k] Installing [0263/2481, time/total: 00:10/01:09]: cachepic [5k] Installing [0264/2481, time/total: 00:10/01:09]: cachepic.x86_64-linux [1k] Installing [0265/2481, time/total: 00:10/01:09]: calctab [5k] Installing [0266/2481, time/total: 00:10/01:09]: calligra [42k] Installing [0267/2481, time/total: 00:10/01:09]: calligra-type1 [59k] Downloaded ./archive/calligra-type1.tar.xz, size equal, but md5sum differs; downloading again. untar: untarring /home/lichao/ttt/temp/calligra-type1.tar failed (in /home/lichao/ttt/texmf-dist) untarring /home/lichao/ttt/temp/calligra-type1.tar failed, stopping install. Installation failed. Rerunning the installer will try to restart the installation. 
Or you can restart by running the installer with: install-tl --profile installation.profile [EXTRA-ARGS]
Segmentation fault (core dumped)
I am sure that the ISO is fine: I can open it with the archive manager and all the files are good. But after mounting it, even the archive manager fails to open some of the files (the same files open fine when the ISO itself is opened in the archive manager).
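    A quick way to narrow this down (a sketch; the paths are placeholders, not from the post) is to loop-mount the image read-only and compare the checksum of one of the failing files against the copy the archive manager extracted; if the two differ, the corruption is happening in the loop/iso9660 read path rather than in the image itself:
        sudo mount -o loop,ro /path/to/texlive2012.iso /mnt
        md5sum /mnt/archive/calligra-type1.tar.xz
        md5sum ~/texlive2012-extracted/archive/calligra-type1.tar.xz
        sudo umount /mnt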

    Read the article

  • Why are USB 2.0 devices crashing my system?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a generic baby-ATX motherboard with no identifying markings. CPU-Z identifies the motherboard model as VT8601 but doesn't provide me with any manufacturer name. There's one stamp on the motherboard that says "Rev.B". On board it has 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for KB and mouse, parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing that is seating a 1.3GHz Intel Celeron CPU, 3 x PCI, 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot due to the physical layout of the board. It's got 768Mb PC133 SDRAM - 1 x 512Mb & 1 x 256Mb installed as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 x external + 1 x internal USB connectors. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post this later if this proves to be helpful. The motherboard is running a version of Award BIOS for which I don't have the version number to hand but can again post this later if it would be helpful. It has an 80Gb Western Digital hard drive freshly formatted and built with Windows XP Professional with Service Pack 3 and all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 crashes the machine as soon as it starts up). It's got a Daytek MV150 15" touch screen running through the VGA and COM1 with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website. The problem If I hook any USB 2.0 devices to this machine, it hangs the whole machine and I have to hard boot it to get it back up. Workarounds found I can plug the same devices into the on board USB 1.0 connectors and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected. Attempted solutions I've tried disabling all the on board devices that I'm not using - such as the on board LAN, the second COM port, the AGP connector etc. through the BIOS in an (perhaps misguided or futile) attempt to reduce the power consumption... I don't think it had any effect but it didn't do any harm. I was wondering if the PSU wattage just isn't enough to drive the USB 2.0 devices; I've seen this suggested but haven't found any confirmation that this could really be an issue - nor have I found a way to work around this issue - if indeed it is one. Any ideas? The only thing I haven't done which I only just thought of while writing this essay is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference. I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself. 
Other thoughts: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS; would re-flashing the BIOS with a newer version help? Do I need to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this board, or will any version of Award BIOS function in its place? Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • libvirt upgrade caused vms to not see drives (boot media not found)

    - by bias
    I upgraded to Ubuntu 12.04.1 and now libvirt (via open nebula) successfully runs vms but they aren't finding the 2 drives (specifically, the boot drive). One is "hd" the other is "cdrom". The machine boots but fails and displays something like "boot media not found hd" (this was in a vnc terminal and I didn't copy the output anywhere so that's not the verbatim message). I tried constructing a new disk using the new version of qemu (via vmbuilder) and this new machine has the same problem as the old machine. In case it matters (I can't see why it would) I'm using open nebula to manage the machines. There's nothing relevant in any of the logs: syslog, libvirtd, oned. Which is to say nothing interesting/anomalous is reported when the machine is brought up. Versions libvirt 0.9.8-2ubuntu17.4 qemu-kvm 1.0+noroms-0ubuntu14.3 The libvirt xml config portions (relavent) <os> <type arch='x86_64' machine='pc-1.0'>hvm</type> <boot dev='hd'/> </os> ... <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/one//203/images/disk.0'/> <target dev='sda' bus='scsi'/> <alias name='scsi0-0-0'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/var/lib/one//203/images/disk.1'/> <target dev='sdc' bus='scsi'/> <readonly/> <alias name='scsi0-0-2'/> <address type='drive' controller='0' bus='0' unit='2'/> </disk> <controller type='scsi' index='0'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> ... </devices> The libvirt/qemu log contains 2012-11-25 22:19:24.328+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-204 -uuid 4be6c276-19e8-bdc2-e9c9-9ca5352f2be3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-204.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device lsi,id=scsi0,bus=pci.0,addr=0x5 -drive file=/var/lib/one//204/images/disk.0,if=none,id=drive-scsi0-0-0,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0,bootindex=1 -drive file=/var/lib/one//204/images/disk.1,if=none,media=cdrom,id=drive-scsi0-0-2,readonly=on,format=raw -device scsi-disk,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2 -netdev tap,fd=18,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3 -netdev tap,fd=19,id=hostnet1 -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4 -usb -vnc 0.0.0.0:204 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 kvm: -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom" kvm: -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
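    A workaround that is often suggested for this qemu/libvirt combination (a sketch, not a confirmed fix for this setup) is to move the boot disk off the emulated LSI SCSI bus and onto virtio; the domain name below is taken from the qemu log above:
        virsh dumpxml one-204 > /tmp/one-204.xml
        # In the <disk device='disk'> stanza change
        #     <target dev='sda' bus='scsi'/>
        # to
        #     <target dev='vda' bus='virtio'/>
        # and remove the matching <address type='drive' .../> line so libvirt regenerates it.
        virsh define /tmp/one-204.xml
        virsh start one-204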

    Read the article

  • "ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver" wheel-click is wrong

    - by sputnick
    I use this mouse under archlinux x86_64 with 3.2.8-1-ARCH kernel. I have some problems to select and then paste with the wheel-click in some applications like konversation, not in a terminal nor an editor. I don't know if it's a hardware problem or a software one. $ lsusb -v Bus 002 Device 110: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. idProduct 0xc50e Cordless Mouse Receiver bcdDevice 25.10 iManufacturer 1 Logitech iProduct 2 USB RECEIVER iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 34 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 70mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.11 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 95 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) When I see what's happens in xev, the output is different compared to another mouse My buggy Logitech mouse : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350700, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350700, (48,52), root:(1491,75), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 16 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350988, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350988, (48,52), root:(1491,75), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 a working mouse (dell) : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245131, (46,32), root:(1489,55), state 0x10, button 2, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245131, (46,32), root:(1489,55), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 528 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245411, (46,32), root:(1489,55), state 0x210, button 2, same_screen YES 
LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245411, (46,32), root:(1489,55), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16
A demo of the problem when I use konversation (IRC): http://www.youtube.com/watch?v=lhmr92M7NCc
I tried to modify the button map with xmodmap like this, with no success (one at a time):
xmodmap -e "pointer = 1 0 3"
xmodmap -e "pointer = 1 1 3"
xmodmap -e "pointer = 1 2 3"
xmodmap -e "pointer = 1 3 3"
xmodmap -e "pointer = 1 4 3"
xmodmap -e "pointer = 1 5 3"
xmodmap -e "pointer = 1 6 3"
xmodmap -e "pointer = 1 7 3"
xmodmap -e "pointer = 1 8 3"
xmodmap -e "pointer = 1 9 3"
xmodmap -e "pointer = 1 10 3"
xmodmap -e "pointer = 1 11 3"
xmodmap -e "pointer = 1 12 3"
xmodmap -e "pointer = 1 13 3"
xmodmap -e "pointer = 1 14 3"
xmodmap -e "pointer = 1 15 3"
xmodmap -e "pointer = 1 16 3"
xmodmap -e "pointer = 1 17 3"
xmodmap -e "pointer = 1 18 3"
xmodmap -e "pointer = 1 19 3"
xmodmap -e "pointer = 1 20 3"
xmodmap -e "pointer = 1 21 3"
xmodmap -e "pointer = 1 22 3"
xmodmap -e "pointer = 1 23 3"
xmodmap -e "pointer = 1 24 3"
xmodmap -e "pointer = 1 25 3"
Any clue? I would like to avoid buying a new mouse just for a paste problem.
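    One detail that may explain the failed attempts above (a sketch, with an assumed button count): xmodmap generally expects the pointer map to list every button the device reports, and the wheel press on this receiver arrives as button 11 according to xev. Check the button count with xmodmap -pp, then supply a full map with a 2 in the 11th position, for example assuming 24 buttons:
        xmodmap -pp    # shows how many buttons the map must cover
        xmodmap -e "pointer = 1 2 3 4 5 6 7 8 9 10 2 12 13 14 15 16 17 18 19 20 21 22 23 24"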

    Read the article

  • Installing EclipseFP on Mac OS X

    - by Dom Kennedy
    I am trying to install EclipseFP. I'm running OS X Mavericks. I've tried following both the official installation instructions and the advice in this answer on SU, but I'm still having the same problem. I can get the plugin itself installed painlessly using Help -> Install New Software..., but when I restart and switch to the Haskell perspective, things start to go wrong. The installation instructions tell me that I should receive a prompt to install BuildWrapper and Scion Browser. I do not receive this prompt. Furthermore, if I create a new Haskell project, my code has no syntax highlighting, and the Hoogle search feature does not appear to do anything. It's clear that the plugin is not set up correctly yet. I've tried running cabal update in Terminal, but this does not change anything.
    After several attempts going round in circles with this on Eclipse Juno, I uninstalled Eclipse and the Haskell Platform and performed a clean install of Eclipse Luna and the latest Haskell Platform. However, the problems are persisting. I've tried going into Preferences to see if I could sort any of this out manually. I should point out that my GHC installation seems to be correctly referenced under Preferences -> Haskell -> Haskell Implementations.
    Under Haskell -> Helper executables, there are areas for configuring the options of both BuildWrapper and Scion Browser. At present, both are blank. I tried clicking the Install from Hackage... button beside each of them with no success; I receive an error message saying Expected executable <workspace>/.metadata/.plugins/net.sf.eclipsefp.haskell.ui/sandbox/.cabal-sandbox/bin/buildwrapper not found! (replace buildwrapper with scion-browser and the message is the same).
    The Eclipse console displays the following exception after doing the above with BuildWrapper:
    src/Language/Haskell/BuildWrapper/GHCStorage.hs:313:32: Not in scope: data constructor ‘MatchGroup’
    cabal.real: Error: some packages failed to install: buildwrapper-0.7.4 failed during the building phase. The exception was: ExitFailure 1
    and after doing it for Scion-Browser:
    zip-archive-0.2.3.4 (reinstall) changes: text-1.1.0.0 -> 0.11.3.1
    pandoc-1.12.3.3 (latest: 1.13) -http-conduit (new version)
    Graphalyze-0.14.1.0 (reinstall) changes: pandoc-1.12.4.2 -> 1.12.3.3, text-1.1.0.0 -> 0.11.3.1
    cabal.real: The following packages are likely to be broken by the reinstalls: pandoc-1.12.4.2 unordered-containers-0.2.4.0 aeson-0.7.0.4 scientific-0.2.0.2 case-insensitive-1.1.0.3 HTTP-4000.2.10
    Use --force-reinstalls if you want to install anyway.
    After receiving similar results to the above on previous attempts, I've tried using --force-reinstalls and ended up at more dead ends. I am at a loss as to what is wrong and how to solve this. Apologies if any of this information is irrelevant, I'm just not really sure what is important and what isn't at this point. Any help anyone could provide me with would be greatly appreciated.
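    One avenue worth trying (a sketch; versions and paths are assumptions): the MatchGroup error suggests buildwrapper-0.7.4 does not compile against the GHC version that ships with the installed Haskell Platform. Building the two helpers from the command line shows cabal's full output, and if a release that supports that GHC gets picked up, the resulting binaries can be entered manually under Preferences -> Haskell -> Helper executables:
        cabal update
        cabal install buildwrapper scion-browser
        ls ~/.cabal/bin/buildwrapper ~/.cabal/bin/scion-browser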

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
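    Since gparted swallows e2fsck's output, one option (a sketch, not from the post) is to stop the gparted-launched pass and rerun the check directly with a progress indicator and per-pass timing, which answers the "will it finish" question far more directly than strace. e2fsck is designed to be rerun after an interruption, though stopping a repair mid-way is still at your own risk:
        sudo e2fsck -f -y -C 0 -tt /dev/sdb1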

    Read the article

  • Parse JSON in C#

    - by Ender
    I'm trying to parse some JSON data from the Google AJAX Search API. I have this URL and I'd like to break it down so that the results are displayed. I've currently written this code, but I'm pretty lost in regards of what to do next, although there are a number of examples out there with simplified JSON strings. Being new to C# and .NET in general I've struggled to get a genuine text output for my ASP.NET page so I've been recommended to give JSON.NET a try. Could anyone point me in the right direction to just simply writing some code that'll take in JSON from the Google AJAX Search API and print it out to the screen? EDIT: I think I've made some progress in regards to getting some code working using DataContractJsonSerializer. Here is the code I have so far. Any advice on whether this is close to working and/or how I would output my results in a clean format? EDIT 2: I've followed the advice from Dreas Grech and the StackOverflowException has gone. However, now I am getting no output. Any ideas on where to go next? EDIT 3: ALL FIXED! All results are working fine. Thank you again Dreas Grech! using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.ServiceModel.Web; using System.Runtime.Serialization; using System.Runtime.Serialization.Json; using System.IO; using System.Text; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { GoogleSearchResults g1 = new GoogleSearchResults(); const string json = @"{""responseData"": {""results"":[{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.cheese.com/"",""url"":""http://www.cheese.com/"",""visibleUrl"":""www.cheese.com"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:bkg1gwNt8u4J:www.cheese.com"",""title"":""\u003cb\u003eCHEESE\u003c/b\u003e.COM - All about \u003cb\u003echeese\u003c/b\u003e!."",""titleNoFormatting"":""CHEESE.COM - All about cheese!."",""content"":""\u003cb\u003eCheese\u003c/b\u003e - everything you want to know about it. Search \u003cb\u003echeese\u003c/b\u003e by name, by types of milk, by textures and by countries.""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://en.wikipedia.org/wiki/Cheese"",""url"":""http://en.wikipedia.org/wiki/Cheese"",""visibleUrl"":""en.wikipedia.org"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:n9icdgMlCXIJ:en.wikipedia.org"",""title"":""\u003cb\u003eCheese\u003c/b\u003e - Wikipedia, the free encyclopedia"",""titleNoFormatting"":""Cheese - Wikipedia, the free encyclopedia"",""content"":""\u003cb\u003eCheese\u003c/b\u003e is a food consisting of proteins and fat from milk, usually the milk of cows, buffalo, goats, or sheep. 
It is produced by coagulation of the milk \u003cb\u003e...\u003c/b\u003e""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.ilovecheese.com/"",""url"":""http://www.ilovecheese.com/"",""visibleUrl"":""www.ilovecheese.com"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:GBhRR8ytMhQJ:www.ilovecheese.com"",""title"":""I Love \u003cb\u003eCheese\u003c/b\u003e!, Homepage"",""titleNoFormatting"":""I Love Cheese!, Homepage"",""content"":""The American Dairy Association\u0026#39;s official site includes recipes and information on nutrition and storage of \u003cb\u003echeese\u003c/b\u003e.""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.gnome.org/projects/cheese/"",""url"":""http://www.g

    Read the article

  • WebDriver with firefox-x11 on Mac

    - by binil
    I am trying to run a headless test for a web application using WebDriver on Mac OS X 10.6.3. My plan is to run firefox-x11 with Xvfb, but WebDriver is unable to launch firefox-x11. My code is: System.setProperty("webdriver.firefox.bin", "/opt/local/bin/firefox-x11-devel-standalone"); WebDriver browser = new FirefoxDriver(); try { browser.get("http://google.com"); } finally { browser.close(); } but this fails with: org.openqa.selenium.WebDriverException: Unable to start firefox cleanly. Exit value: 1 Ran from: [/opt/local/bin/firefox-x11-devel-standalone, --verbose, -silent] System info: os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.6.3', java.version: '1.6.0_17' Driver info: driver.version: firefox at org.openqa.selenium.firefox.FirefoxBinary.copeWithTheStrangenessOfTheMac(FirefoxBinary.java:200) at org.openqa.selenium.firefox.FirefoxBinary.startProfile(FirefoxBinary.java:83) at org.openqa.selenium.firefox.FirefoxBinary.clean(FirefoxBinary.java:264) at org.openqa.selenium.firefox.FirefoxLauncher.startProfile(FirefoxLauncher.java:66) at org.openqa.selenium.firefox.internal.NewProfileExtensionConnection.<init>(NewProfileExtensionConnection.java:45) at org.openqa.selenium.firefox.internal.ExtensionConnectionFactory.connectTo(ExtensionConnectionFactory.java:44) When I manually launch /opt/local/bin/firefox-x11-devel-standalone from the terminal, it seems to work fine despite some harmless errors shown. So, I tried to patch the org.openqa.selenium.firefox.FirefoxBinary.copeWithTheStrangenessOfTheMac(ProcessBuilder) method to ignore the errors and exit value of 1. Now it proceeds further, but fails with: Caused by: org.openqa.selenium.WebDriverException: Failed to connect to binary FirefoxBinary(/opt/local/bin/firefox-x11-devel-standalone) on port 7055; process output follows: Xlib: extension "RANDR" missing on display "/tmp/launch-tElCRZ/org.x:0". Dynamic session lookup supported but failed: launchd did not provide a socket path, verify that org.freedesktop.dbus-session.plist is loaded! (firefox-bin:88837): Gtk-CRITICAL **: gtk_widget_has_screen: assertion `GTK_IS_WIDGET (widget)' failed ?Xlib: extension "RANDR" missing on display "/tmp/launch-tElCRZ/org.x:0". Dynamic session lookup supported but failed: launchd did not provide a socket path, verify that org.freedesktop.dbus-session.plist is loaded! (firefox-bin:88877): Gtk-CRITICAL **: gtk_widget_has_screen: assertion `GTK_IS_WIDGET (widget)' failed ? System info: os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.6.3', java.version: '1.6.0_17' Driver info: driver.version: firefox at org.openqa.selenium.firefox.internal.NewProfileExtensionConnection.connectToBrowser(NewProfileExtensionConnection.java:60) at org.openqa.selenium.firefox.internal.NewProfileExtensionConnection.<init>(NewProfileExtensionConnection.java:49) at org.openqa.selenium.firefox.internal.ExtensionConnectionFactory.connectTo(ExtensionConnectionFactory.java:44) ... 4 more Caused by: org.openqa.selenium.firefox.NotConnectedException: Failed to start up socket within 45000 at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.connectToBrowser(AbstractExtensionConnection.java:143) at org.openqa.selenium.firefox.internal.NewProfileExtensionConnection.connectToBrowser(NewProfileExtensionConnection.java:58) ... 6 more Has anyone got WebDriver to work with firefox-x11? This post seems to suggest that one person had got it working, but it does not contain much details.
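    The RANDR/launchd messages in the second failure suggest the browser is connecting to the Mac's launchd-managed X socket rather than to an Xvfb display; below is a sketch (the display number is arbitrary) of forcing the test process, and therefore the Firefox it spawns, onto a virtual display:
        Xvfb :99 -screen 0 1280x1024x24 &
        export DISPLAY=:99
        # then launch the WebDriver tests from this same shell so the spawned
        # firefox-x11 inherits DISPLAY=:99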

    Read the article

  • Detect user logout / shutdown in Python / GTK under Linux - SIGTERM/HUP not received

    - by Ivo Wetzel
    OK this is presumably a hard one, I've got an pyGTK application that has random crashes due to X Window errors that I can't catch/control. So I created a wrapper that restarts the app as soon as it detects a crash, now comes the problem, when the user logs out or shuts down the system, the app exits with status 1. But on some X errors it does so too. So I tried literally anything to catch the shutdown/logout, with no success, here's what I've tried: import pygtk import gtk import sys class Test(gtk.Window): def delete_event(self, widget, event, data=None): open("delete_event", "wb") def destroy_event(self, widget, data=None): open("destroy_event", "wb") def destroy_event2(self, widget, event, data=None): open("destroy_event2", "wb") def __init__(self): gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL) self.show() self.connect("delete_event", self.delete_event) self.connect("destroy", self.destroy_event) self.connect("destroy-event", self.destroy_event2) def foo(): open("add_event", "wb") def ex(): open("sys_event", "wb") from signal import * def clean(sig): f = open("sig_event", "wb") f.write(str(sig)) f.close() exit(0) for sig in (SIGABRT, SIGILL, SIGINT, SIGSEGV, SIGTERM): signal(sig, lambda *args: clean(sig)) def at(): open("at_event", "wb") import atexit atexit.register(at) f = Test() sys.exitfunc = ex gtk.quit_add(gtk.main_level(), foo) gtk.main() open("exit_event", "wb") Not one of these succeeds, is there any low level way to detect the system shutdown? Google didn't find anything related to that. I guess there must be a way, am I right? :/ EDIT: OK, more stuff. I've created this shell script: #!/bin/bash trap test_term TERM trap test_hup HUP test_term(){ echo "teeeeeeeeeerm" >~/Desktop/term.info exit 0 } test_hup(){ echo "huuuuuuuuuuup" >~/Desktop/hup.info exit 1 } while [ true ] do echo "idle..." sleep 2 done And also created a .desktop file to run it: [Desktop Entry] Name=Kittens GenericName=Kittens Comment=Kitten Script Exec=kittens StartupNotify=true Terminal=false Encoding=UTF-8 Type=Application Categories=Network;GTK; Name[de_DE]=Kittens Normally this should create the term file on logout and the hup file when it has been started with &. But not on my System. GDM doesn't care about the script at all, when I relog, it's still running. I've also tried using shopt -s huponexit, with no success. EDIT2: Also here's some more information aboute the real code, the whole thing looks like this: Wrapper Script, that catches errors and restarts the programm -> Main Programm with GTK Mainloop -> Background Updater Thread The flow is like this: Start Wrapper -> enter restart loop while restarts < max: -> start program -> check return code -> write error to file or exit the wrapper on 0 Now on shutdown, start program return 1. That means either it did hanup or the parent process terminated, the main problem is to figure out which of these two did just happen. X Errors result in a 1 too. Trapping in the shellscript doesn't work. If you want to take a look at the actual code check it out over at GitHub: http://github.com/BonsaiDen/Atarashii
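    Since none of the in-process hooks fire, another angle (an untested sketch) is to let the wrapper decide: after the child exits with status 1, check whether the X display is still reachable; if xdpyinfo can no longer connect, the session itself is going down, so the wrapper should not restart the app. The program name below is a placeholder:
        #!/bin/bash
        while true; do
            ./atarashii                                  # placeholder for the real pyGTK program
            status=$?
            [ "$status" -eq 0 ] && break                 # clean user-requested exit
            if ! xdpyinfo >/dev/null 2>&1; then
                break                                    # X is gone: logout/shutdown, don't restart
            fi
            echo "crash (exit $status), restarting" >> ~/wrapper.log
        done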

    Read the article

< Previous Page | 258 259 260 261 262 263 264 265 266 267 268 269  | Next Page >