Search Results

Search found 15099 results on 604 pages for 'stop loading'.


  • innobackupex - after restoring - quit without updating PID file

    - by clarkk
    After restoring a backup the server can't start.. restoring # tar -izxf /var/www/bak/db/2013-11-10-1437_mysql.tar.gz -C /var/www/bak/db_import # innobackupex --use-memory=1G --apply-log /var/www/bak/db_import # service mysql stop # mv /var/lib/mysql /var/lib/mysql-old # mkdir /var/lib/mysql # innobackupex --copy-back /var/www/bak/db_import # chown -R mysql:mysql /var/lib/mysql # service mysql start error log 131110 21:24:20 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 2013-11-10 21:24:21 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2013-11-10 21:24:21 6194 [Warning] Using pre 5.5 semantics to load error messages from /opt/mysql/server-5.6/share/english/. 2013-11-10 21:24:21 6194 [Warning] If this is not intended, refer to the documentation for valid usage of --lc-messages-dir and --language parameters. 2013-11-10 21:24:21 6194 [Note] Plugin 'FEDERATED' is disabled. /usr/local/mysql/bin/mysqld: Table 'mysql.plugin' doesn't exist 2013-11-10 21:24:21 6194 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 2013-11-10 21:24:21 6194 [Note] InnoDB: The InnoDB memory heap is disabled 2013-11-10 21:24:21 6194 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2013-11-10 21:24:21 6194 [Note] InnoDB: Compressed tables use zlib 1.2.3 2013-11-10 21:24:21 6194 [Note] InnoDB: Using Linux native AIO 2013-11-10 21:24:21 6194 [Note] InnoDB: Not using CPU crc32 instructions 2013-11-10 21:24:21 6194 [Note] InnoDB: Initializing buffer pool, size = 128.0M 2013-11-10 21:24:21 6194 [Note] InnoDB: Completed initialization of buffer pool 2013-11-10 21:24:21 6194 [Note] InnoDB: Highest supported file format is Barracuda. 2013-11-10 21:24:22 6194 [Note] InnoDB: 128 rollback segment(s) are active. 2013-11-10 21:24:22 6194 [Note] InnoDB: Waiting for purge to start 2013-11-10 21:24:22 6194 [Note] InnoDB: 5.6.12 started; log sequence number 636992658 2013-11-10 21:24:22 6194 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 2013-11-10 21:24:22 6194 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 2013-11-10 21:24:22 6194 [Note] Server socket created on IP: '127.0.0.1'. 2013-11-10 21:24:22 6194 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist 131110 21:24:22 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended mysql_upgrade /opt/mysql/server-5.6/bin/mysql_upgrade -u root -pxxxxx -P 3308 Warning: Using a password on the command line interface can be insecure. Looking for 'mysql' as: /opt/mysql/server-5.6/bin/mysql Looking for 'mysqlcheck' as: /opt/mysql/server-5.6/bin/mysqlcheck FATAL ERROR: Upgrade failed
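
    A quick sanity check before "service mysql start" can narrow this down: the "Table 'mysql.plugin' doesn't exist" / "mysql.user doesn't exist" errors usually mean mysqld is starting against a datadir that does not actually contain the restored system schema (wrong datadir in my.cnf, or the copy-back landed somewhere else). The sketch below is only an illustration using the paths from the question, and it assumes mysqld can read your my.cnf:

        DATADIR=$(mysqld --verbose --help 2>/dev/null | awk '$1 == "datadir" {print $2}')
        echo "mysqld expects datadir: $DATADIR"          # should be /var/lib/mysql here
        ls -ld "$DATADIR/mysql"                          # the mysql system schema must have been copied back
        ls -l "$DATADIR"/mysql/user.* 2>/dev/null        # files backing the mysql.user table
        ls -l "$DATADIR"/ibdata1 "$DATADIR"/ib_logfile*  # InnoDB system files restored?
        chown -R mysql:mysql "$DATADIR"                  # redo ownership if anything was recreated afterwards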

    Read the article

  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem with connecting to a https site from one of my servers. When I type: telnet puppet 8140 I am presented with a standard telnet console and can talk to the Server as always: Connected to athena.hidden.tld. Escape character is '^]'. GET / HTTP/1.1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>400 Bad Request</title> </head><body> <h1>Bad Request</h1> <p>Your browser sent a request that this server could not understand.<br /> Reason: You're speaking plain HTTP to an SSL-enabled server port.<br /> Instead use the HTTPS scheme to access this URL, please.<br /> <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p> <hr> <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address> </body></html> Connection closed by foreign host. But when I try to connect to the same host and port with ssl: openssl s_client -connect puppet:8140 It is not working connect: No route to host connect:errno=113 I am confused. At first it sounded like a firewall problem but this could not be, could it? Because this would also prevent the telnet connection. As Firewall I am using ferm on both servers. The systems are debian squeeze vm-boxes. [edit 1] Even when I try to connect directly with the IP address: openssl s_client -connect 198.51.100.1:8140 #address exchanged connect: No route to host connect:errno=113 Bringing down the firewalls on both hosts with service ferm stop is also not helping. But when I do openssl s_client -connect localhost:8140 on the server machine it is connecting fine. [edit 2] if I connect to the IP with telnet it also is not working. telnet 198.51.100.1 8140 Trying 198.51.100.1... telnet: Unable to connect to remote host: No route to host The confusion might come from IPv6. I have IPv6 on all my hosts. It seems that telnet uses IPv6 by default and this works. For example: telnet -6 puppet 8140 works but telnet -4 puppet 8140 does not work. So there seems to be a problem with the IPv4 route. openssl seems to only (or by default) use IPv4 and therefore fails but telnet uses IPv6 and succeeds.
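
    Since errno 113 (EHOSTUNREACH) can come either from a genuinely missing IPv4 route or from an iptables/ferm REJECT rule answering with icmp-host-prohibited, checks along these lines may separate the two; the hostname, address and port are the ones from the question:

        getent ahostsv4 puppet                    # which A record does the name resolve to?
        getent ahostsv6 puppet                    # ...and which AAAA record?
        ip -4 route get 198.51.100.1              # does the kernel have an IPv4 route at all?
        telnet -4 puppet 8140                     # force IPv4: expected to fail the same way openssl does
        iptables -L -n -v | grep -iE 'reject|prohibited'   # REJECT rules also produce "No route to host"
        ip6tables -L -n -v | head                 # compare with the IPv6 rules that apparently let traffic through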

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (and perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.
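
    One pattern that covers all three requirements (everything in the log, STDERR also on the screen, a usable exit status) is to point STDOUT straight at the file and pipe only STDERR through tee. This is just a sketch built around the placeholder commands from the question, and the mail address is a placeholder too:

        set -o pipefail                        # otherwise $? would be tee's status, not the commands'
        { command1 && command2 && command3 ; } 2>&1 >> logfile.log | tee -a logfile.log >&2
        if [ $? -ne 0 ]; then
            mailx -s "There was an error" someone@example.com < logfile.log
        fi

    On a bash that supports process substitution, { ... ; } >> logfile.log 2> >(tee -a logfile.log >&2) gives the same effect without a pipeline, which keeps $? as the command group's own status.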

    Read the article

  • Windows 7 sometimes boots in VGA mode

    - by TuxRug
    I have an Asus G50VT-x5 laptop with nVidia GeForce9800M-GS graphics. Normally, Windows boots normally, but about 20% of the time (rough estimate), it will boot with the fallback VGA driver, maxing out at 800x600 with no Aero. I've checked the system logs and there is nothing indicating an error loading the nVidia driver. It even specifies in the logs that the Nvidia Display Driver service started successfully, even though it has booted in safe graphics mode. This has been happening for a while, but it's happening a little more often now than it was before. Since the first time my system exhibited this behavior, I have updated my graphics driver a handful of times. I used System Information for Windows to check for problems there, but the only thing that stood out was the following: Core Temperature 4486449 °C (8075639 °F) Shaders Temperature 1171513530 °C (2108724330 °F) I know this reading is incorrect, because my laptop is nowhere near the surface of the sun and my desk has not burst into flames. When it's opererating normally, I get a sane reading like [Core Temperature 58 °C (136 °F)] with no Shaders Temperature listed. All I have to do to resolve the issue is reboot. I have seen no stability issues with the graphics or anything else. A long time ago, I had an issue with this computer where my framerate would suddenly drop during a 3D game from 40fps to <1fps, but after looking at the temperature readout immediately after quitting a game, I removed the bottom panel and blew the dust out of the vent and heatsink. Since then I have no drops in framerate under any situation. I have uploaded a zip containing the SIW reports for when the problem is occurring and when the computer is operating normally. I don't have a paid account so it can only be downloaded 10 times, so please only download the reports if you think you can use them. If you try to download the reports and they are no longer available, please comment and I will re-upload them. If you want to look at the files, they are on Rapidshare. EDIT It happened again, and I looked a little deeper into the System logs. When this happens, there are a lot of errors about other device drivers unable to start. All of these errors are for PnP drivers. Also, my USB keyboard and mouse take a few moments before they actually start working, although this happens sometimes the first normal boot as well. I am quite sure this is related, so I am adding the pnp tag. Also, CHKDSK will not run on boot. Even if a check is scheduled or a volume is manually set as dirty, CHKDSK will be skipped entirely, not even leaving an entry in the System logs. I tried running CHKNTFS /D, which did not work. I then manually changed my HKLM\System\CurrentControlSet\Control\Session Manager BootExecute value to the default listed on Microsoft's website. That did not work either. I ended up booting to repair mode and running CHKDSK there, which found a number of minor inconsistencies on my system drive, but none on my data drive. I have no idea if this is related. Some more information for those who don't download my SIW report file: Antivirus and Firewall are ESET Smart Security I have three different virutalization programs installed: VMware Player, Windows Virtual PC, and VirtualBox. The network adapters for these show up in the log of failed device starts. EDIT 2 I tried running sfc /scannow, which reported that it found corrupted files that could not be fixed. The CBS log is extremely cryptic. 
I tried booting to my install disk, launching repair mode, and doing an offline sfc from there, which produced the same result.

    Read the article

  • Alternative Methods of Sharing Folders in Windows?

    - by Blaenk
    Hey guys. I'm running Windows 7 and as of now I simply share folders as one usually does in Windows. I then have a MacBook with Leopard (Now Snow Leopard) which I use to connect to my computer to mount the shares by going to Finder, then CMD + K and typing smb://BlaenkPC (The name of my PC) into the address box. This consequently connects to my computer and mounts all of the shares. The problem is that sometimes, if for example I close my MacBook (Which makes it go to sleep) or sometimes even without doing that, the connection somehow drops. Sometimes I close the MacBook and upon re-opening it, everything still works; it's random. It still shows the computer as being connected, but it just shows 'loading' indefinitely. If I hit 'eject' with the intention of re-connecting to the computer, it disappears from the sidebar (The Computer Icon) in Finder, but I cannot re-connect. Activity Monitor (or ps aux, whichever) both show hung instances of umount; one for each share that was mounted. I cannot kill these processes with kill or killall (Yes, even with sudo, and sending signal -9). This has happened to me before, and here is another person who has experienced this. My question boils down to this: Is there an alternative method of sharing folders in Windows, that my Mac can read/understand, that is possibly more reliable and preferably just as fast? I usually use the mounted shares to watch television episodes off my computer, or movies, etc. (In other words, I open them in VLC and they automatically stream from my computer). As far as I can tell, this is a problem with the Samba protocol. I have heard of NFS, but I am not sure if I would have to re-format my drives, or what. I don't mind running a service or daemon to allow the sharing of the folders, I just want it to be done and hopefully in a better way than typical Windows shares through Samba. Usually when I encounter this problem, which is often (read: every day), I have no other option but to restart the MacBook. As I stated in the first question I linked to, shutting down and restarting don't work; I have to manually force the shutdown by holding the power button. I have not modified my installation of Mac OS X in any hackish way, so I doubt it's something with the Operating System, but worst come to worst, I might end up reformatting and doing a clean install to see if that fixes anything, as I am at a complete loss as to what may be causing the problem, and no one else seems to have any idea or care, despite there being quite a few people suffering from this problem, as my research has shown. Any pieces of information that can help are extremely appreciated. You don't have to answer every question on here, but maybe even some insight as to why it might not be possible to kill those hung umount instances for example, or why I may not be able to reconnect using samba (Is it something regarding the way the protocol works?). One thing to note is that I have another computer in the home network that doesn't seem to have this problem. However, it is also running Windows 7 (Note though that I am not using the homegroup feature, but the typical windows sharing feature). My only deduction is that the problem is being caused by the way the Mac (Or Samba implementation, whichever) is handling things. Perhaps it is a limitation.
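
    As a possible stopgap on the Mac side, the share can be mounted from Terminal at a fixed path instead of through Finder, which at least gives a mount point you can try to force-detach when it wedges after sleep. The share name and user below are made up for illustration, and a forced unmount can still hang if the SMB client is stuck in the kernel:

        mkdir -p ~/mnt/videos
        mount_smbfs //blaenk@BlaenkPC/videos ~/mnt/videos    # same credentials Finder would use
        # ...after the connection drops:
        umount ~/mnt/videos \
          || sudo umount -f ~/mnt/videos \
          || diskutil unmount force ~/mnt/videos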

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB. Intermittently I get connections taking longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10. It's using an EBS Raid array for the data storage. We already use skip name resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing: $tmp = mysqli_init(); $start_time = microtime(true); $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); $tmp->real_connect($DB_SERVERS[$server]['server'], $DB_SERVERS[$server]['username'], $DB_SERVERS[$server]['password'], $DB_SERVERS[$server]['schema'], $DB_SERVERS[$server]['port']); if(mysqli_connect_errno()){ $timer = microtime(true) - $start_time; mail($errors_to,'DB connection error',$timer); } There's more than 300Mb available on the DB server for new connections and the server is nowhere near the max allowed (60 of 1,200). Loading on both servers is < 2 on 4 core m1.xlarge instances. Some highlights from the mysql config max_connections = 1200 thread_stack = 512K thread_cache_size = 1024 thread_concurrency = 16 innodb-file-per-table innodb_additional_mem_pool_size = 16M innodb_buffer_pool_size = 13G Any help on tracing the source of the slowdown is appreciated. [EDIT] I have been updating the sysctl values for the network but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers. net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_fin_timeout = 20 net.ipv4.tcp_keepalive_time = 180 net.ipv4.tcp_max_syn_backlog = 1280 net.ipv4.tcp_synack_retries = 1 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 87380 16777216 [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3 minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I think this isn't going to produce anything meaningful in terms of reporting. Script: date >> /root/database_server.txt ; (time mysql -h database_Server -D schema_name -u appuser -papppassword -e '') > /dev/null 2>> /root/database_server.txt Results: === Application Server 1 === Mon Feb 22 13:05:01 EST 2010 real 0m0.008s user 0m0.001s sys 0m0.000s Mon Feb 22 13:06:01 EST 2010 real 0m0.007s user 0m0.002s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Application Server 2 === Mon Feb 22 13:05:01 EST 2010 real 0m0.009s user 0m0.000s sys 0m0.002s Mon Feb 22 13:06:01 EST 2010 real 0m0.009s user 0m0.001s sys 0m0.003s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Database Server === Mon Feb 22 13:05:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s Mon Feb 22 13:06:01 EST 2010 real 0m0.006s user 0m0.010s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128.
We did see some elevation in processor utilization as a result but still received connection timeouts.
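
    Since the PHP side only says "longer than 2 seconds", one way to tell whether the delay is in the network path (dropped SYNs retransmitted after roughly 3 seconds) or inside MySQL is to leave a narrow packet capture running on the app server through a bad period. The interface name and the default port 3306 are assumptions here:

        sudo tcpdump -i eth0 -nn -ttt -w /tmp/mysql-syn.pcap \
            'tcp port 3306 and (tcp[tcpflags] & (tcp-syn|tcp-rst) != 0)'
        # afterwards, look for SYNs followed by ~3 second gaps before a repeat SYN;
        # retransmissions point at packet loss rather than the MySQL server itself:
        tcpdump -r /tmp/mysql-syn.pcap -nn -ttt | less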

    Read the article

  • How do I configure Apache 2.2 to Run Background PHP Processes on Win 2003?

    - by Captain Obvious
    I have a script, testforeground.php, that kicks off a background script, testbackground.php, then returns while the background script continues to run until it's finished. Both the foreground and background scripts write to the output file correctly when I run the foreground script from the command line using php-cgi: C:\>php-cgi testforeground.php The above command starts a php-cgi.exe process, then a php-win.exe process, then closes the php-cgi.exe almost immediately, while the php-win.exe continues until it's finished. The same script runs correctly but does not have permission to write to the output file when I run it from the command line using plain php: C:\>php testforeground.php AND when I run the same script from the browser, instead of php-cgi.exe, a single cmd.exe process opens and closes almost instantly, only the foreground script writes to the output file, and it doesn't appear that the 2nd process starts: http://XXX/testforeground.php Here is the server info: OS: Win 2003 32-bit HTTP: Apache 2.2.11 PHP: 5.2.13 Loaded Modules: core mod_win32 mpm_winnt http_core mod_so mod_actions mod_alias mod_asis mod_auth_basic mod_authn_default mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_dir mod_env mod_include mod_isapi mod_log_config mod_mime mod_negotiation mod_setenvif mod_userdir mod_php5 Here's the foreground script: <?php ini_set("display_errors",1); error_reporting(E_ALL); echo "<pre>loading page</pre>"; function run_background_process() { file_put_contents("0testprocesses.txt","foreground start time = " . time() . "\n"); echo "<pre> foreground start time = " . time() . "</pre>"; $command = "start /B \"{$_SERVER['CMS_PHP_HOMEPATH']}\php-cgi.exe\" {$_SERVER['CMS_HOMEPATH']}/testbackground.php"; $rp = popen($command, 'r'); if(isset($rp)) { pclose($rp); } echo "<pre> foreground end time = " . time() . "</pre>"; file_put_contents("0testprocesses.txt","foreground end time = " . time() . "\n", FILE_APPEND); return true; } echo "<pre>calling run_background_process</pre>"; $output = run_background_process(); echo "<pre>output = $output</pre>"; echo "<pre>end of page</pre>"; ?> And the background script: <?php $start = "background start time = " . time() . "\n"; file_put_contents("0testprocesses.txt",$start, FILE_APPEND); sleep(10); $end = "background end time = " . time() . "\n"; file_put_contents("0testprocesses.txt", $end, FILE_APPEND); ?> I've confirmed that the above scripts work correctly using Apache 2.2.3 on Linux. I'm sure I just need to change some Apache and/or PHP config settings, but I'm not sure which ones. I've been muddling over this for too long already, so any help would be appreciated.

    Read the article

  • Problems when trying to connect to a router wirelessly

    - by Ruud Lenders
    The situation - At my girlfriend's parents' place there are six Windows 7 devices that are wired or wireless connected to a router: 3 dekstops and 3 laptops. There are also several smartphones using the router. The router is secured with WPA2 (AES). The problem - We never had any problems with the router for over a year. But recently - about 3 weeks ago - my girlfriend's laptop (HP) and my laptop (ASUS) started to develop problems while trying to connect to the router. The router has stopped showing up from the network list. Sometimes it comes back and shows up, but then it keeps saying something along the lines of "Could not connect", and not long after that it dissapears again. The range of the router is not the problem here, because we experience the same when we sit next to the router. Sometimes, if we are lucky, and waited a long time (10-15 minutes) without using the laptop for anything, the laptop will eventually succesful connect to the router. The attempts - Of course, the Window 7 troubleshooter. We tried troubleshooting the connection problems and the wireless network adapter, but no luck. We also reset the router enough times to know that's not helping either. Here's the full list of things we tried, but did not help: Running the Windows 7 troubleshooter Resetting the router (more than once) Setting the router settings to factory defaults Disconnecting all other devices except one laptop Applying a system restore Trying static/dynamic IP/DNS - Dynamic is better, right? Enabling/disabling IPv6 - Should I keep IPv6 disabled? Running the command: netsh wlan stop hostednetwork Running the command: netsh wlan set hostednetwork mode=disallow Updating/reïnstalling wireless adapter drivers The tests - To help finding the core of the problem, we tested the following: Plugging an ethernet cable in the router and in our laptops - worked fine Connecting someone else's laptop to the router (wireless) - worked fine Connecting our laptops to someone else's router - worked fine The router - This information might be relevant: Router model: Sitecom 300N Wireless Router Router hardware: version 01 The DCHP Server's IPs range from 192.168.0.100 to 192.168.0.200. Router settings: Wireless channel: 12 Channel bandwidth: 20/40 MHz Extension channel: 8 Preamble type: Long 802.11g protection: Disabled UPnP: Enabled The laptops - If you are wondering about our laptops: My laptop model: ASUS Pro64JQ Girlfriend's laptop: HP Pavillion G6 OS: Both Windows 7 Professional x64 - with Service Pack 1 My wireless adapter: Atheros AR9285 AdHoc 11n: Enabled The question - Does anyone have experienced the same problems as I do? Or does someone know how to solve this? Are there more tricks I can try, or settings I should change? Note - Our laptops are not slow or old. My laptop is 1.5 years old, and the other laptop is just 5 months old. I know how to keep laptops clean and I'm pretty sure both laptops are not bloated with useless software.

    Read the article

  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a setup of HVM enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project was that: The operating system must be functionally identical after the migration. minimal modification to the operating system (with exception of kernel / drive mapping) I also was allowed to change the bootloader(ie, grub) in what ever way I see fit. However, I have attempted this, I will firstly like to show you my steps I took. This at the moment is CentOS5.5 specific: Steps: yum install kernel-xen This installed: 2.6.18-194.32.1.el5xen edited: /boot/grub/menu.lst changed my specs to match: title CentOS (2.6.18-194.32.1.el5xen) root (hd0,0) kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0 initrd /initrd-2.6.18-194.32.1.el5xen.img Then I changed my xenserver parameters to match: xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img" xe vm-param-set uuid=[vm uuid] HVM-boot-policy="" xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub xe vbd-param-set uuid==[Virtual Block Device/VBD uuid] bootable=true Some things to note, I am running a VolGroup LVM ;) Anyways, after all these steps (which aren't much!) I boot the VM and it boots initial kernel just fine, however I am presented with this error: Boot Screen: device-mapper: dm-raid45: initialized v0.2594l Waiting for driver initialization. Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Activating logical volumes Volume group "VolGroup00" not found Creating root device. Mounting root filesystem. mount: could not find filesystem '/dev/root' Setting up other filesystems. Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Now my hints are that it cannot detect / because of the fact that when you change from HVM mode to PV it does something (not that obvious) When you make a SR (storage) on a HVM, you get it mounted to the guest os as /dev/hda. However in PV mode, this presents itself as /dev/xvda... Could this be the answer? and if so, how the heck to I implement it?? Update: So I have gotten a bit further in my quest, as it now detects the LVM's... To do this, I required to recompile the xen-kernel initrd image. Command: mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen Now when I boot I get this: Boot Screen: Loading dm-raid45.ko module device-mapper: dm-raid45: initialized v0.2594l Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Found volume group "VolGroup00" using metadata type lvm2 Activating logical volumes 3 logical volume(s) in volume group "VolGroup00" now active Creating root device. Mounting root filesystem. mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy Setting up other filesystems. 
Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Kernel panic - not syncing: Attempted to kill init!
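
    For what it's worth, a few checks that might help pin down the remaining missing-root / "Device or resource busy" problem; the UUIDs are placeholders, and the guest-side commands assume the VM can still be booted in HVM mode or from a rescue environment:

        # dom0: confirm what pygrub is being told to boot and which VBD is marked bootable
        xe vm-param-list uuid=<vm uuid> | grep -E 'PV-bootloader|PV-args|HVM-boot'
        xe vbd-list vm-uuid=<vm uuid> params=uuid,userdevice,device,bootable

        # guest: anything still hard-wired to the old emulated /dev/hda naming?
        grep -nE 'hda|xvda' /etc/fstab /boot/grub/menu.lst
        lvdisplay /dev/VolGroup00/LogVol00     # the root LV the xen kernel is supposed to mount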

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to e.g. record the desktop with sound from a soundcard. One thing I wanted to do was recording two video inputs at the same time, for instance the desktop and from the webcam. I thought about doing something like this: avconv \ -f alsa \ -i default \ -acodec flac \ -f video4linux2 \ -r 6 \ -i /dev/video0 \ -f x11grab \ -i :0.0 \ out.mkv My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate Input #0, alsa, from 'default': Duration: N/A, start: 1354364317.020350, bitrate: N/A Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate Input #1, video4linux2, from '/dev/video0': Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480 [x11grab @ 0x107b2a0] shared memory extension found [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate Input #2, x11grab, from ':0.0+83,87': Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p' [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out' [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4 Output #0, matroska, to '.../out.mkv': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16 Stream mapping: Stream #2:0 -> #0:0 (rawvideo -> mpeg4) Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis) Press ctrl-c to stop encoding [mpeg4 @ 0x10bd800] rc buffer underflow ^Cframe= 160 fps= 15 q=2.0 Lsize= 3414kB time=10.66 bitrate=2623.0kbits/s video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600% Received signal 2: terminating. I'm not sure if it's the question of mapping (some -map options to add?) or that avconv just can't encode more than 1 video stream at one time. So is it an actual avconv limitation, or a limitation of the available containers, or me simply not finding the right combination of command line options?
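
    avconv normally auto-selects only one video stream per output, which matches the "Stream mapping" above (just #2:0 and #0:0 were picked). Explicit -map options should pull in both video inputs; this is only a sketch, with stream indices taken from the probe output in the question (0 = ALSA, 1 = webcam, 2 = x11grab), and it is worth verifying that a 0.8-era avconv really muxes two encoded video streams into one mkv cleanly:

        avconv \
          -f alsa -i default \
          -f video4linux2 -r 6 -i /dev/video0 \
          -f x11grab -i :0.0 \
          -map 1:0 -map 2:0 -map 0:0 \
          -acodec flac \
          out.mkv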

    Read the article

  • ffmpeg video4linux2 at specified resolution

    - by wim
    When I'm trying to record a clip from my webcam, using: ffmpeg -f video4linux2 -s 640x480 -i /dev/video0 /tmp/spam.avi I get annoying problem with very low resolution video, and there is a message from ffmpeg saying: [video4linux2,v4l2 @ 0x2bff3e0] The V4L2 driver changed the video from 800x600 to 176x144 I have tried not specifying -s, or trying other sizes like 800x600, and always it forces me back to 176x144. Why is this and how can I prevent it? My webcam is one of those Logitech 9000 Pro, I know it supports better resolutions than this and I can see with v4l2-ctl --list-formats-ext that it goes up to at least 800x600. edit: complete console output follows wim@wim-desktop:~$ ffmpeg -f video4linux2 -s 640x480 -i /dev/video0 /tmp/spam.avi ffmpeg version git-2012-11-20-70c0f13 Copyright (c) 2000-2012 the FFmpeg developers built on Nov 21 2012 00:09:36 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5) configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3 libavutil 52. 8.100 / 52. 8.100 libavcodec 54. 73.100 / 54. 73.100 libavformat 54. 37.100 / 54. 37.100 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 23.101 / 3. 23.101 libswscale 2. 1.102 / 2. 1.102 libswresample 0. 17.100 / 0. 17.100 libpostproc 52. 2.100 / 52. 2.100 [video4linux2,v4l2 @ 0x37a33e0] The V4L2 driver changed the video from 640x480 to 176x144 [video4linux2,v4l2 @ 0x37a33e0] Estimating duration from bitrate, this may be inaccurate Input #0, video4linux2,v4l2, from '/dev/video0': Duration: N/A, start: 37066.740548, bitrate: 6082 kb/s Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 176x144, 6082 kb/s, 15 tbr, 1000k tbn, 15 tbc File '/tmp/spam.avi' already exists. Overwrite ? [y/N] y Output #0, avi, to '/tmp/spam.avi': Metadata: ISFT : Lavf54.37.100 Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p, 176x144, q=2-31, 200 kb/s, 15 tbn, 15 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> mpeg4) Press [q] to stop, [?] for help frame= 95 fps= 22 q=2.0 Lsize= 88kB time=00:00:13.86 bitrate= 51.8kbits/s video:77kB audio:0kB subtitle:0 global headers:0kB muxing overhead 13.553706%
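
    The forced 176x144 usually means the driver cannot deliver the requested size in the pixel format that was negotiated (raw YUYV at 640x480 and up can exceed USB 2.0 bandwidth on these Logitech cameras). Checking which sizes pair with which format, and requesting MJPEG input if it is listed, is worth a try; -input_format is an option of ffmpeg's video4linux2 demuxer and may not exist in every build:

        v4l2-ctl --device=/dev/video0 --list-formats-ext     # which sizes go with YUYV vs MJPG?
        ffmpeg -f video4linux2 -input_format mjpeg -video_size 640x480 -framerate 15 \
               -i /dev/video0 /tmp/spam.avi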

    Read the article

  • How do I permanently delete e-mail messages in the sendmail queue and keep them from coming back?

    - by Steven Oxley
    I have a pretty annoying problem here. I have been testing an application and have created some test e-mails to bogus e-mail addresses (not to mention that my server isn't really set up to send e-mail anyway). Of course, sendmail is not able to send these messages and they have been getting stuck in the sendmail queue. I want to manually delete the messages that have been building up in the queue instead of waiting the 5 days that sendmail usually takes to stop retrying. I am using Ubuntu 10.04 and /var/spool/mqueue/ is the directory in which every how-to I have read says the e-mails that are queued up are kept. When I delete the files in this directory, sendmail stops trying to process the e-mails until what appears to be a cron script runs and re-populates this directory with the messages I don't want sent. Here are some lines from my syslog: Jun 2 17:35:19 sajo-laptop sm-mta[9367]: o530SlbK009365: to=, ctladdr= (33/33), delay=00:06:27, xdelay=00:06:22, mailer=esmtp, pri=120418, relay=e.mx.mail.yahoo.com. [67.195.168.230], dsn=4.0.0, stat=Deferred: Connection timed out with e.mx.mail.yahoo.com. Jun 2 17:35:48 sajo-laptop sm-mta[9149]: o4VHn3cw003597: to=, ctladdr= (33/33), delay=2+06:46:45, xdelay=00:34:12, mailer=esmtp, pri=3540649, relay=mx2.hotmail.com. [65.54.188.94], dsn=4.0.0, stat=Deferred: Connection timed out with mx2.hotmail.com. Jun 2 17:39:02 sajo-laptop CRON[9510]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm) Jun 2 17:39:43 sajo-laptop sm-mta[9372]: o52LHK4s007585: to=, ctladdr= (33/33), delay=03:22:18, xdelay=00:06:28, mailer=esmtp, pri=1470404, relay=c.mx.mail.yahoo.com. [206.190.54.127], dsn=4.0.0, stat=Deferred: Connection timed out with c.mx.mail.yahoo.com. Jun 2 17:39:50 sajo-laptop sm-mta[9149]: o51I8ieV004377: to=, ctladdr= (33/33), delay=1+06:31:06, xdelay=00:03:57, mailer=esmtp, pri=6601668, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.114], dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com. Jun 2 17:40:01 sajo-laptop CRON[9523]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Does anyone know how I can get rid of these messages permanently? As a side note, I'd also like to know if there is a way to set up sendmail to "fake" sending e-mail. Is there?
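
    The messages most likely "come back" because Debian/Ubuntu sendmail keeps two queues: the MTA queue in /var/spool/mqueue and the client (MSP) queue in /var/spool/mqueue-client, which the cron-msp job visible in the syslog re-submits from. Clearing both with the daemon stopped should keep them from reappearing; the paths are the Ubuntu defaults, so double-check them before deleting anything:

        service sendmail stop
        mailq            # list the MTA queue
        mailq -Ac        # list the client/MSP queue (uses submit.cf)
        rm -f /var/spool/mqueue/*
        rm -f /var/spool/mqueue-client/*
        service sendmail start

    For the side question about "faking" outbound mail, a common approach is to point sendmail's smart host at a local catch-all SMTP sink that accepts every message and delivers nothing, so test runs never reach the real queue.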

    Read the article

  • Hanging page loads every n loads - SOLVED

    - by Christian
    Hi Guys, I recently moved my site to a new server (Apache 2, PHP5, MySQL5). The site is an Invision based forum. Every few posts / topics it just hangs. The data has been written because if you stop and reload, the post / thread is there. I thought it was a write issue initially, but nope. So, the data is written but the page load never completes. It doesn't leave the page where the data has been input. What's the best way to troubleshoot this issue? The only thing I have done recently is reduce my MySQL timeouts, but I can't see that being an issue as the values are still big enough and there are no mentions of timeouts in the MySQL log. (For the record there is nothing in PHP's error log either) Thanks in advance! EDIT: I checked my server-status. It all looked ok, but I have a suspicion I was hitting my ServerLimit, so I doubled that. Also enabled my Keepalives. Will keep an eye on it. EDIT 2: It's now been a few days and this is still occurring. I have more info though; Apache is throwing seg faults, but enabling core dumps does not produce them. I have tried disabling the modules in apache but it just stops things from working. I fear it may actually be DNS related. If I watch Live Headers in Firefox, absolutely nothing happens during this 'hanging' period. After that, the responses come back fairly promptly. UPDATE (05/04): I built the latest versions of Apache and PHP from source, no luck. I then removed those and used the remi repo to update all my packages to the latest stable. Segfaults seem to have stopped, but the hanging is continuing. ini's are at; www.skylinesaustralia.com/php.ini www.skylinesaustralia.com/my.cnf www.skylinesaustralia.com/httpd.conf UPDATE - SOLVED! - The issue was having a gigantic query cache size in MySQL. It was 2GB, changing it to 64M sorted it. Thanks for all the help everybody, much appreciated!!
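
    For anyone landing here with the same symptoms, the fix described above corresponds to a change roughly like this (64M being the value the poster settled on; an oversized query cache serializes queries on its mutex and makes invalidation very expensive, which fits the "hangs right after a write" behaviour):

        mysql -e "SHOW VARIABLES LIKE 'query_cache%'; SHOW GLOBAL STATUS LIKE 'Qcache%';"
        mysql -e "SET GLOBAL query_cache_size = 64*1024*1024;"   # takes effect immediately
        # and persist it in my.cnf under [mysqld]:
        #   query_cache_size = 64M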

    Read the article

  • Installing rtorrent on my ubuntu server

    - by Shishant
    Hello, I am try to install rtorrent on my ubuntu server. I ran these commands and they worked fine. ./autogen.sh ./configure --with-xmlrpc-c make and then when i tried to use make install i guess it didnt get install because no .rtorrent.rc' was created in home directory and running rtorrent returned this error rtorrent: error while loading shared libraries: libtorrent.so.11: cannot open shared object file: No such file or directory below is the log of my make install. root@ubuntu:~/rtorrent-0.8.6# make install Making install in doc make[1]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Nothing to be done for `install-exec-am'. test -z "/usr/local/share/man/man1" || /bin/mkdir -p "/usr/local/share/man/man1" /usr/bin/install -c -m 644 './rtorrent.1' '/usr/local/share/man/man1/rtorrent.1 ' make[2]: Leaving directory `/root/rtorrent-0.8.6/doc' make[1]: Leaving directory `/root/rtorrent-0.8.6/doc' Making install in src make[1]: Entering directory `/root/rtorrent-0.8.6/src' Making install in core make[2]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/core' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/core' Making install in display make[2]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/display' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/display' Making install in input make[2]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/input' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/input' Making install in rpc make[2]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' Making install in ui make[2]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/ui' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/ui' Making install in utils make[2]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. 
make[3]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Entering directory `/root/rtorrent-0.8.6/src' make[3]: Entering directory `/root/rtorrent-0.8.6/src' test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin" /bin/bash ../libtool --mode=install /usr/bin/install -c 'rtorrent' '/usr/loc al/bin/rtorrent' libtool: install: /usr/bin/install -c rtorrent /usr/local/bin/rtorrent make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src' make[2]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Entering directory `/root/rtorrent-0.8.6' make[2]: Entering directory `/root/rtorrent-0.8.6' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/root/rtorrent-0.8.6' make[1]: Leaving directory `/root/rtorrent-0.8.6' Thank You.
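
    The "libtorrent.so.11: cannot open shared object file" error after a from-source install usually just means the dynamic linker cache does not cover /usr/local/lib yet, and make install never creates .rtorrent.rc for you in any case. A rough fix along these lines; the paths are the stock ones and the location of the example config inside the tarball may differ:

        ldconfig -p | grep libtorrent                          # is the library registered at all?
        echo /usr/local/lib > /etc/ld.so.conf.d/local.conf     # only if /usr/local/lib isn't covered already
        ldconfig
        cp ~/rtorrent-0.8.6/doc/rtorrent.rc ~/.rtorrent.rc     # rtorrent expects you to supply this yourself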

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple monitors’ world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up the HDMI port of an ATI Radeon HD4300 / 4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adaptor. Both monitors displayed the boot sequence, after I fired good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz) up. The Asus lacked one half of a second behind the Princeton until the Windows 7 Ultimate SP 1 boot up was complete. Then the Asus displayed “HDMI NO SIGNAL” and went into hibernation. The Princeton stayed lit up as before. Both monitors are displayed on the “Screen Resolution Setup Display” and I plaid around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still hibernating Asus. The “Multiple displays:” is set to “Extend these displays”, the Orientation is “Landscape” and the Resolutions are set on both to the “recommended” one. Both monitors show that they work properly in the advanced Properties display. What am I doing wrong, what am I missing? Never mind the opinions about the different resolutions of the two monitors. I always can unhook the Princeton and give it to a Goodwill Store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated, Thank you. Thank you Anees Bakrain Only the ATI Radeon HD 4300/4500 Series adapter is displayed in the Device Manager, for that reason I have to assume that the onboard display adaptor is not active. All 40 drivers of Sarastro2 are up to date and the HDMI cable can not be the problem because both monitors displayed the boot sequence up to the moment when Windows 7 was loaded completely. This was the moment, when the Asus monitor lost its signal. Both connectors, HDMI and DVI are connected and removing the DVI connector would not solve my problem of running both monitors simultaneously. However, your suggestions shifted my seventy one year old brain into the next gear. The only question remaining is; “Why the signals to the Asus monitor stop after the sequence is complete”. The ATI Radeon HD 4300/4500 Series adapter seems to be capable of sending simultaneous HDMI and DVI signals, what is done during the boot sequence. Why do the signals change after the boot sequence is complete is the key question or der springende Punkt? Is this a correct assumption slhck?

    Read the article

  • Windows Server 2003 W3SVC Failing, Brute Force attack possibly the cause

    - by Roaders
    This week my website has disappeared twice for no apparent reason. I logged onto my server (Windows Server 2003 Service Pack 2) and restarted the World Web Publishing service, website still down. I tried restarting a few other services like DNS and Cold Fusion and the website was still down. In the end I restarted the server and the website reappeared. Last night the website went down again. This time I logged on and looked at the event log. SCARY STUFF! There were hundreds of these: Event Type: Information Event Source: TermService Event Category: None Event ID: 1012 Date: 30/01/2012 Time: 15:25:12 User: N/A Computer: SERVER51338 Description: Remote session from client name a exceeded the maximum allowed failed logon attempts. The session was forcibly terminated. At a frequency of around 3 -5 a minute. At about the time my website died there was one of these: Event Type: Information Event Source: W3SVC Event Category: None Event ID: 1074 Date: 30/01/2012 Time: 19:36:14 User: N/A Computer: SERVER51338 Description: A worker process with process id of '6308' serving application pool 'DefaultAppPool' has requested a recycle because the worker process reached its allowed processing time limit. Which is obviously what killed the web service. There were then a few of these: Event Type: Error Event Source: TermDD Event Category: None Event ID: 50 Date: 30/01/2012 Time: 20:32:51 User: N/A Computer: SERVER51338 Description: The RDP protocol component "DATA ENCRYPTION" detected an error in the protocol stream and has disconnected the client. Data: 0000: 00 00 04 00 02 00 52 00 ......R. 0008: 00 00 00 00 32 00 0a c0 ....2..À 0010: 00 00 00 00 32 00 0a c0 ....2..À 0018: 00 00 00 00 00 00 00 00 ........ 0020: 00 00 00 00 00 00 00 00 ........ 0028: 92 01 00 00 ... With no more of the first error type. I am concerned that someone is trying to brute force their way into my server. I have disabled all the accounts apart from the IIS ones and Administrator (which I have renamed). I have also changed the password to an even more secure one. I don't know why this brute force attack caused the webservice to stop and I don't know why restarting the service didn't fix the problem. What should I do to make sure my server is secure and what should I do to make sure the webserver doesn't go down any more? Thanks.

    Read the article

  • Apache server completely freezes until it gets restarted

    - by nbv4
    My server does this every few days. What sucks is that it always seems to do this right after I go to bed, so when I wake up, I'm greeted with the fact that my server has been down for the past 6 or 7 hours. When I first noticed this, I added a cronjob that tries to restart the server every 15 minutes, but I guess that didn't fix it. Once I noticed the server was down, I ran this command: /etc/init.d/apache2 restart * Restarting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName ... waiting ...........................................................apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName httpd (pid 17597) already running ...which is odd, because a restart should restart the server, even if it's already running, correct? I eventually had to "stop" then "start" to get it working again. I then looked through the logs, and found something very weird. It seems that around the time the server crashed, the logs have entries that are wildly out of order. It looks a little like this: xx.xxx.xxx.x - - [21/Apr/2010:06:32:05 -0400] "GET / blah" xx.xxx.xxx.x - - [21/Apr/2010:06:51:25 -0400] "GET / blah" x.xx.xxx.xxx - - [21/Apr/2010:06:38:23 -0400] "GET / blah" xxx.xx.xx.xx - - [21/Apr/2010:06:31:56 -0400] "GET / blah" xxx.xx.xx.xx - - [21/Apr/2010:06:51:49 -0400] "GET / blah" xx.xx.xxx.xx - - [21/Apr/2010:06:33:20 -0400] "GET / blah" I don't think the problem is memory, because the memory usage graph I captured (not reproduced in this excerpt) shows that right before the crash, memory usage is fine. I'm running apache with the worker mpm, here are the settings for that: <IfModule mpm_worker_module> StartServers 1 MaxClients 100 MinSpareThreads 5 MaxSpareThreads 10 ThreadsPerChild 10 MaxRequestsPerChild 3000 </IfModule> This apache server is running a bunch of stuff, but most of the traffic comes from a django project I'm hosting, that uses mod_wsgi. There is also a Simple Machines forum that is running off of mod_fcgid. Those settings are below: <IfModule mod_fcgid.c> MaxRequestsPerProcess 500 MaxProcessCount 3 AddHandler fcgid-script .php .fcgi AddHandler cgi-script .cgi .pl FCGIWrapper "/usr/bin/php-cgi" .php </IfModule> Anyone know of anything else I can check? I've just about tweaked every single setting I can think of, yet these freezes still happen.
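
    Since a plain restart from cron doesn't recover it, a watchdog that also collects evidence at the moment of the hang (scoreboard plus thread backtraces) might show whether the worker threads are stuck in mod_wsgi, mod_fcgid or somewhere else. This is only a sketch: it assumes mod_status is enabled, gdb is installed, and localhost serves the affected site.

        #!/bin/bash
        # hypothetical /usr/local/sbin/apache-watchdog.sh, run from cron every few minutes
        if ! curl -fsS -m 15 http://localhost/ > /dev/null 2>&1; then
            ts=$(date +%F-%H%M%S)
            curl -s -m 15 http://localhost/server-status > "/var/log/apache-hang-$ts.status" 2>&1
            for pid in $(pgrep apache2); do
                gdb -p "$pid" -batch -ex 'thread apply all bt' >> "/var/log/apache-hang-$ts.backtraces" 2>&1
            done
            /etc/init.d/apache2 stop
            sleep 5
            /etc/init.d/apache2 start
        fi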

    Read the article

  • Problems with Windows XP Plug and Play devices, maybe relating to MSVCR71.dll

    - by Richard
    I believe this question is unanswered as of now so I appologize if I've overlooked it. I have been having trouble some external devices with windows recently and I'm trying my darnedest to get to the bottom of it. At first, my Zero Tension USB mouse would stop working...as in the laser in the bottom of it would be on and would register movement, but the mouse on the screen wouldn't budge even an inch. At first this would happen randomly and then it would correct itself. As time went on, it became more and more frequent. At some point, the computer would make the "doo doo" sound of plugging or unplugging a USB device when the mouse stopped/started working. I dealt with it for a while and usually if I rebooted my machine, the mouse would work again for a day or two. As more time has gone on, the computer fully does not recognize the mouse AT ALL...I have another mouse that I use with the computer that works just fine and cannot seem to figure out why my Zero Tension mouse has failed. I tried plugging the Zero Tension mouse into my Mac and low and behold, it works without hesitation and never stops on me... Needless to say, I am stumped about this. I figured because I had another mouse I could deal with the loss of my fancy one for now...until my speakers stopped being recognized. I have a set of Logitec speakers that I have plugged into my sound card. Again, every now and again the audio devices would cease to be recognized by my computer, but a reboot would fix the problem. Now my speakers do not work at all with my computer and I feel like it's time to ask for help. My computer seems to be having a neural shutdown...where I can plug in devices and the computer doesn't seem to notice anything wrong, but none of the devices work. I hope this doesn't get any worse! Please help! Also, on a potentially (un)related note, when I start up my machine I get the message "This application has failed to start because Msvcp71.dll was not found. Re-installing the application may fix this problem." in reference to qbupdate.exe I don't know if that DLL being messed up has anything to do with my mouse or speakers, but I figure it might...anyway, thanks in advance for an answer and let me know if I need to clarify anything. Let me sum up: Zero Tension Mouse gradually stopped working Logitec Speakers gradually stopped working MSVCR71.dll seems to be messed up I don't know if any of those are related but any help would be much appreciated

    Read the article

  • Lag spikes at full CPU usage, maybe video card

    - by Roberts
    I am posting this thread in a hurry so a few things may be missed (I will update tomorrow). My PC specs: Motherboard Name - Gigabyte GA-945PL-S3 CPU Type - DualCore Intel Core 2 Duo E4300, 1800 MHz (9 x 200) OS - Microsoft Windows 7 Ultimate OS Kernel Type - 32-bit OS Version - 6.1.7601 I bought a new video card one month ago. GeForce 210. I didn't have any problems. I wanted to overclock it, in other words: "Play with it". So I installed Gigabyte EasyBoost from CD and overclocked the GPU 590 + 110 mhz, memory to max to 960mhz from 800mhz. Benchmarks showed a little bit bigger score. Then I overclocked the shader clock from 1405 to [..] (don't remember really). So I was playing Modern Warfare 2 when all of a sudden the computer froze when I wanted to select a team, I was afk before that. I had to reset CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost - Skype starts to glitch again. Now I use EVGA Precision X. Now after a month, I cleaned the computer and closed the case, it was open all the time. I started to overclock the GPU clock only (just a bit) because there were no problems that would stop me. So sometimes on heavy CPU load the graphics start to lag. Dragging a window is painful to watch too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard disk activity is maximal). You may say that it is the CPU's fault, isn't it? But sometimes lag spikes start randomly when CPU load is at maximum. All 3 benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that the performance of my video card has dropped about 25%. BUT! The CPU score is lower too. I ignored these problems but when I refreshed the Windows Experience Index I was shocked. Month before (in Latvian but not so hard to understand): [screenshot omitted] Now (upgraded RAM): [screenshot omitted] This happened when I tried to capture Minecraft with Fraps with the GPU underclocked to 580mhz (default: 590mhz): [screenshot omitted] All drivers are up to date. Average CPU temperature is from 55°C to 75°C (at 70°C these lag spikes sometimes start). The video card's temperatures are from 45°C to 60°C (very hard to reach 60°C). So my hope is that the video card is fine, because this card is very new and I want to upgrade the CPU anyway. Apologies for my mistakes in vocabulary (I am trying to type this as fast as I can).

    Read the article

  • SSH to an ubuntu machine using avahi

    - by tensaiji
I have an Ubuntu box that I connect to using avahi. Connecting to that box works fine for all services (I regularly use AFP, SSH and SMB on it), but I've noticed that whenever I connect to it from a Mac using SSH (and using the ".local" DNS name provided by avahi, e.g. "ssh .local"), SSH tries to connect using IPv6, which for some reason times out (after two minutes); it then tries IPv4, which connects immediately. I'd like to avoid this timeout, as it's really annoying for me and other users - if SSH tried IPv4 first, or if SSH over IPv6 worked, that would solve the problem. So far I've been unable to get either to work (the best I've managed is to specify the "-4" option to SSH to stop it from trying IPv6 at all). I'm using Ubuntu 10.04. Any solution has to be on the server (not the client), as there are multiple clients connecting. A possible complication might be that my LAN is set up to allow link-local IPv6 addresses only, but I have other servers (running Mac OS) that I can SSH into over IPv6.

I suspect that the problem could be solved either by preventing avahi from broadcasting the IPv6 address or by enabling SSH over IPv6, but as far as I can tell avahi is already configured not to broadcast the IPv6 address and sshd is configured to allow IPv6 connections!

Here's my /etc/avahi/avahi-daemon.conf (I don't think I've changed anything from the Ubuntu defaults):

[server]
#host-name=foo
#domain-name=local
#browse-domains=0pointer.de, zeroconf.org
use-ipv4=yes
use-ipv6=no
#allow-interfaces=eth0
#deny-interfaces=eth1
#check-response-ttl=no
#use-iff-running=no
#enable-dbus=yes
#disallow-other-stacks=no
#allow-point-to-point=no

[wide-area]
enable-wide-area=yes

[publish]
#disable-publishing=no
#disable-user-service-publishing=no
#add-service-cookie=no
#publish-addresses=yes
#publish-hinfo=yes
#publish-workstation=yes
#publish-domain=yes
#publish-dns-servers=192.168.50.1, 192.168.50.2
#publish-resolv-conf-dns-servers=yes
#publish-aaaa-on-ipv4=yes
#publish-a-on-ipv6=no

[reflector]
#enable-reflector=no
#reflect-ipv=no

[rlimits]
#rlimit-as=
rlimit-core=0
rlimit-data=4194304
rlimit-fsize=0
rlimit-nofile=300
rlimit-stack=4194304
rlimit-nproc=3

and here's my sshd_config (mainly updated to only allow public/private key authentication):

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 180
PermitRootLogin no
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
AllowGroups sshusers
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes

Does anyone have any ideas that I can try, or has experienced anything similar?
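A server-side tweak worth trying, sketched below on the assumption that the two-minute stall comes from the AAAA (IPv6) record that avahi still publishes over IPv4 mDNS even when use-ipv6=no is set (publish-aaaa-on-ipv4 defaults to yes). Both directives are standard avahi/OpenSSH options, but whether they cure this particular network is untested here:

# /etc/avahi/avahi-daemon.conf - stop advertising the host's IPv6 address via IPv4 mDNS
[publish]
publish-aaaa-on-ipv4=no

# /etc/ssh/sshd_config - optionally restrict sshd to IPv4 so stray IPv6 attempts fail fast
AddressFamily inet

# apply the changes
sudo service avahi-daemon restart
sudo service ssh restart

With the AAAA record no longer advertised, Macs resolving the .local name over mDNS should see only the A record and connect straight over IPv4.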

    Read the article

  • Anyone else experiencing high rates of Linux server crashes during a leap second day?

    - by Bron Gondwana
POSTMORTEM

Anticlimax: only thing that died was my VPN (openvpn) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine. Starting back ntp everywhere.

If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second - he has a solution for phasing the time change over 24 hours using ntpd -x to avoid the 1 second skip. Give that a go if it matters to you. For the systems I run, the jump isn't a problem.

THE ORIGINAL PROBLEM

Just today, Sat June 30th - starting soon after the start of the day GMT. We've had a handful of blades in different datacentres as managed by different teams all go dark - not responding to pings, screen blank. They're all running Debian Squeeze - with everything from stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510 and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering.

The one crash which I did get a screen dump from said:

[3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
[3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. Just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21.

THE WORKAROUND

Ok people, here's how I worked around it.

1. disabled ntp: /etc/init.d/ntp stop
2. created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco, see blog posts in comments)
3. ran fixtime.pl without an argument to see that there was a leap second set
4. ran fixtime.pl with an argument to remove the leap second

NOTE: depends on adjtimex. I've put a copy of the squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex - it will run without dependencies on a squeeze 64 bit system. If you put it in the same directory as fixtime.pl, it will be used if the system one isn't present. Obviously if you don't have squeeze 64 bit... find your own.

I'm going to start ntp again tomorrow. As an anonymous user suggested - an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leapsecond counter.
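For anyone without fixtime.pl to hand, the same idea can be sketched with stock tools. This assumes the Debian adjtimex package is installed, and the 0x0010 (STA_INS) status bit comes from the adjtimex/ntp documentation rather than from the post itself:

/etc/init.d/ntp stop               # stop ntpd so it cannot re-arm the leap-second flag
adjtimex --print | grep status     # a status value with the 0x0010 (STA_INS) bit set means a leap second is queued
date -s "$(date)"                  # re-setting the clock to itself clears the queued leap second
/etc/init.d/ntp start              # restart ntp once the leap second has safely passed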

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
We gave access to a contractor to install a firewall and somehow while he was doing it he fracked something up. Everything went off-line about 24 hours ago and we are effectively out of business until I solve this and the person who messed up the thing is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files and normally everything runs fine. All 'services' are running according to 1and1 server monitoring and mail is being delivered just fine. The whole thing was off-line until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning and I got everything back except the http access. All the domains (~200 of them) are returning a 403 error and nothing is recorded in the access log.

On every restart I see this error in the messages log file:

init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory

and a little later these:

kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
kernel: Hardware name: X9SCL/X9SCM
kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1
kernel: Call Trace:
kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0
kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20
kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d
kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50
kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90
kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0
kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0
kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1]
kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0
kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20
kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150
kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150
kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0
kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20
kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0
kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20

And something is wrong with the Named/BIND resulting in the same error for all domains:

zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found
zone DOMAINEXAMPLE.com/IN: not loaded due to errors.
_default/DOMAINEXAMPLE.com/IN: file not found

I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
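The named errors mean BIND cannot find the zone files where its configuration says they should be, which is a plausible side effect of a hardening or firewall job that moved directories or switched to a chrooted named. A sketch of how to confirm it, assuming a stock CentOS 6 layout - /etc/named.conf and /var/named are placeholders for whatever paths the config really uses:

named-checkconf -z /etc/named.conf       # parse the config and try to load every zone
grep -i directory /etc/named.conf        # where does named expect its zone files?
ls -l /var/named/                        # are the zone files actually there and readable by the named user?
named-checkzone DOMAINEXAMPLE.com /var/named/DOMAINEXAMPLE.com   # validate one zone file directly

Note that missing zone files would explain DNS breakage, while Apache returning 403 with nothing in the access log points at a separate web-server configuration or permissions change, so both will probably need fixing.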

    Read the article

  • http://localhost/ always gives 502 unknown host

    - by Nitesh Panchal
The World Wide Web Publishing Service starts successfully, but whenever I browse to http://localhost/ I always get 502 Unknown host. wampapache is installed side by side; when I stop the IIS service and start wampapache from services.msc I get an error, and the System event log shows this:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" />
    <EventID Qualifiers="49152">7024</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8080000000000000</Keywords>
    <TimeCreated SystemTime="2011-06-12T17:43:28.223498400Z" />
    <EventRecordID>346799</EventRecordID>
    <Correlation />
    <Execution ProcessID="456" ThreadID="3936" />
    <Channel>System</Channel>
    <Computer>MACHINENAME</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name="param1">wampapache</Data>
    <Data Name="param2">%%1</Data>
  </EventData>
</Event>

I am fed up with this error and it is driving me nuts. I feel like banging my head against the laptop; I am really serious. Instead of concentrating on my real application I have been trying to solve this issue for the last 3 hours. I have googled various threads and a few of them said it could be an issue with Reporting Services or Skype, but I have uninstalled Skype and Reporting Services are disabled. What more should I do? The hosts file is present in the etc directory and it does map localhost to 127.0.0.1. What more can I try?
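One quick way to narrow this down is to see which process is actually answering on port 80 when the 502 appears - IIS goes through http.sys (owned by PID 4, the System process), while Apache or a local proxy (Fiddler, Skype, an antivirus web filter) would show up under its own PID. The commands below are standard Windows tools; the PID 1234 is just a placeholder for whatever netstat reports:

REM which PID is listening on port 80?
netstat -ano | findstr ":80 "
REM identify that process (replace 1234 with the PID from the netstat output)
tasklist /fi "PID eq 1234"
REM if PID 4 (System / http.sys) owns the port, list what has registered URLs with it
netsh http show servicestate

If something other than IIS or Apache holds the port, the 502 Unknown host is most likely coming from that intermediary rather than from either web server.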

    Read the article

  • Are random packets normal?

    - by TheLQ
About a month ago, one of my servers started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. The IDS is a ClearOS Gateway, which comes with Snort and SnortSam. I enabled it and checked it over. There is a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily detected in a port scan. However, checking some logs on this server, I found that the attack is resuming. I found this:

Accepting connection from '75.166.155.122'
[Auth] got unknown packet from '75.166.155.122'
Accepting connection from '98.164.154.93'
[Auth] got unknown packet from '98.164.154.93'
Ping MySQL to keep connection alive
Accepting connection from '70.241.195.129'
[Auth] got unknown packet from '70.241.195.129'
Accepting connection from '67.182.229.169'
[Auth] got unknown packet from '67.182.229.169'
Accepting connection from '69.137.140.38'
[Auth] got unknown packet from '69.137.140.38'
Accepting connection from '76.31.72.55'
[Auth] got unknown packet from '76.31.72.55'
Accepting connection from '97.88.139.39'
[Auth] got unknown packet from '97.88.139.39'
Accepting connection from '173.35.62.112'
[Auth] got unknown packet from '173.35.62.112'
Accepting connection from '187.15.10.73'
[Auth] got unknown packet from '187.15.10.73'
Accepting connection from '66.66.94.124'
[Auth] got unknown packet from '66.66.94.124'
Accepting connection from '75.159.219.124'
[Auth] got unknown packet from '75.159.219.124'
Accepting connection from '99.102.100.82'
[Auth] got unknown packet from '99.102.100.82'
Accepting connection from '24.128.240.45'
[Auth] got unknown packet from '24.128.240.45'
Accepting connection from '99.231.7.39'
[Auth] got unknown packet from '99.231.7.39'
Accepting connection from '206.255.79.56'
[Auth] got unknown packet from '206.255.79.56'
Accepting connection from '68.97.106.235'
[Auth] got unknown packet from '68.97.106.235'
Accepting connection from '69.134.67.251'
[Auth] got unknown packet from '69.134.67.251'
Accepting connection from '63.228.138.186'
[Auth] got unknown packet from '63.228.138.186'
Accepting connection from '184.39.146.193'
[Auth] got unknown packet from '184.39.146.193'
Accepting connection from '69.171.161.102'
[Auth] got unknown packet from '69.171.161.102'
Accepting connection from '76.0.47.228'
[Auth] got unknown packet from '76.0.47.228'
Ping MySQL to keep connection alive
Accepting connection from '126.112.201.14'
[Auth] got unknown packet from '126.112.201.14'
Ping MySQL to keep connection alive

Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what would these packets normally contain? Is this something I should be worried about? How can I stop this?
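For context, 3724 is best known as the World of Warcraft realm/authentication port, and log lines like "[Auth] got unknown packet" look like a game authentication daemon rejecting clients it doesn't understand, so this may simply be stray or misdirected game traffic rather than a targeted attack (that reading is an inference from the log format, not a certainty). Snort generally stays quiet here because an ordinary TCP connection carrying an unrecognised payload matches no attack signature. If quieting the noise is the goal, rate-limiting new connections to that port is one option; a plain-iptables sketch follows (the list name wow_auth and the thresholds are arbitrary, and ClearOS manages its firewall through its own rule scripts, so hand-added rules may be overwritten and should ideally go through its firewall configuration):

# record each new connection to port 3724, then drop sources that open more than 5 in 60 seconds
iptables -A INPUT -p tcp --dport 3724 -m state --state NEW -m recent --name wow_auth --set
iptables -A INPUT -p tcp --dport 3724 -m state --state NEW -m recent --name wow_auth --update --seconds 60 --hitcount 6 -j DROP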

    Read the article

  • Windows Service Limit Crashes Services on Startup

    - by Paul Williams
We have developed a custom Windows service in C# as part of a large Enterprise application. Our QA department tests multiple versions of this service. The QA lab has several (over 20) copies of this service installed on one Windows 2003 test box. Each copy is in its own folder and has a unique service name, though each executable file is named the same (OurWindowsService.exe, for example). Each service uses the same Windows credentials (a domain user). The purpose of this service is to handle MSMQ messages. The queued messages do all sorts of important stuff.

For some reason, they can run only 5 of these services at a time. When we start a 6th, the service crashes on startup. For example, I can start #1, #2, #3, #4, and #5. When I start #6, it crashes. However, if I stop #1 and start #6, #6 runs fine, and now #1 fails to start.

When the services crash, the following error appears in the Windows event log:

Faulting application OurWindowsService.exe, version 5.40.1.1, faulting module kernel32.dll, version 5.2.3790.4480, fault address 0x0000bef7.

I was able to use WinDbg to generate a postmortem dump file. The dump file revealed that the crash occurs trying to delay load SHLWAPI.dll:

0:000> kb100
ChildEBP RetAddr  Args to Child
0012ece4 79037966 c06d007e 00000000 00000001 KERNEL32!RaiseException+0x53
0012ed4c 790099ba 00000008 0012ed08 7c82860c mscoree!__delayLoadHelper2+0x139
0012ed98 790075b1 001550c8 0012edac 0012fb34 mscoree!_tailMerge_SHLWAPI_dll+0xd
0012edb0 79007623 001550c8 0012edf8 0012edf4 mscoree!XMLGetVersionWithSupported+0x22
0012ee00 790069a4 aa06f1b0 00000000 000001fe mscoree!RuntimeRequest::GetRuntimeVersion+0x56
0012f478 790077aa 00000001 7903fb4c 0012fb34 mscoree!RuntimeRequest::ComputeVersionString+0x5bd
0012f89c 79007802 00000001 0012f8b4 7903fb4c mscoree!RuntimeRequest::FindVersionedRuntime+0x11c
0012f8b8 79007b19 00000001 00000000 aa06fa6c mscoree!RuntimeRequest::RequestRuntimeDll+0x2c
0012ffa4 79007c02 00000001 0012ffbc 00000000 mscoree!GetInstallation+0x72
0012ffc0 77e6f23b 00000000 00000000 7ffdf000 mscoree!_CorExeMain+0x12
0012fff0 00000000 79007bf0 00000000 78746341 KERNEL32!BaseProcessStart+0x23

I believe the error code handed to Kernel32.RaiseException, c06d007e, means Module Not Found, but I'm not certain.

Does this sound familiar to anyone? Are we hitting some limit on the number of service instances on some file name? Does MSMQ dislike more than 5 listening services?
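One classic cause of "only N services will start" when every copy runs under the same domain account is desktop heap exhaustion: non-interactive services that share a window station split a fixed desktop heap (512 KB by default on Windows Server 2003), and once it is used up the next service to start fails while loading DLLs, much like the dump above. Whether that is what is happening here is an assumption that would need confirming (Microsoft's Desktop Heap Monitor tool can do that), but the current setting is cheap to inspect:

REM show the current desktop heap configuration; look for SharedSection=xxxx,yyyy,zzzz
REM in the Windows value - the third number is the heap (in KB) for non-interactive desktops
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows

If that third value is still 512, raising it (for example to 1024) and rebooting is the usual mitigation; spreading the QA copies across a few different service accounts has a similar effect, since each account gets its own desktop heap.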

    Read the article

< Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >