Search Results

Search found 20140 results on 806 pages for 'output formatting'.

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent two days trying to fix this issue, with no success. The server is a MySQL database server.

    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33 GHz, 16 GB RAM, 2x 146 GB SAS (software RAID1)
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41

    Issue: while MySQL is idle and has no database loaded, everything seems fine. As soon as I install a database, something drives all 8 cores to 100% with low memory consumption. As you can imagine, the load average goes very high (I saw a load average of 212 for the first time). The server doesn't become unresponsive, but it is noticeably slow while browsing the installed project.

    Additional info: the database is no more than 24 MB and it was moved from a server with fewer resources and much larger databases, so it's not the database/project. my.cnf is not the reason either, as I tried both the default one and the one I use on another server running the same distribution. What is interesting is that MySQL doesn't close any connection and runs up to the max_connections limit. The logs are quiet; nothing there. I switched to this Ubuntu version after I suspected problems on the new Ubuntu 11.10 server; that one worked fine for an hour after I upgraded the kernel to 3.0.1 (it was eating memory as well). I tested disk speed and it seems fine. Some more output from the running server: dstat -cndymlp -N total -D total 3, and htop. Any ideas? Has anyone met the same problem? Any fix you can think of?
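
    One way to narrow this down is to look at what the busy MySQL threads are actually doing while the cores are pegged. A minimal diagnostic sketch, assuming shell and MySQL root access on the box; the credentials are placeholders:

        # Show what each connection is executing
        mysql -u root -p -e "SHOW FULL PROCESSLIST\G"

        # Per-thread CPU usage of the mysqld process
        top -H -p "$(pidof mysqld)"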

  • Mac OS X Terminal.app Ubuntu 9.10 SSHD and incorrect keyboard mapping

    - by Jesse
    Does anyone have any idea how to handle this? I can't stand connecting to certain Ubuntu boxes from Mac OS X because of keyboard layout issues. I have set TERM=vt100 and TERM=xterm-color in the Ubuntu .bashrc and also in the Terminal.app advanced preferences, and nothing seems to fix the issue. Trying to use the arrow keys on the slim silver keyboard results in ^[[A and so on. I'm on OS X 10.6.4, using a regular bash login shell. When I try to run /lib/terminfo/x/xterm-color I get "permission denied"; maybe this is the issue?! If I sudo, it often works, which leads me to believe the permissions problem above is the cause. Output from stty -a:

        $ stty -a
        speed 9600 baud; rows 47; columns 181; line = 0;
        intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?;
        swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W;
        lnext = ^V; flush = ^O; min = 1; time = 0;
        -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
        -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
        -iuclc ixany imaxbel -iutf8
        opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
        isig icanon iexten echo echoe -echok -echonl -noflsh -xcase -tostop -echoprt
        echoctl echoke
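
    A minimal sketch for checking the terminfo angle on the Ubuntu side, assuming the standard ncurses tools are present; xterm is used here only as an example of a normally world-readable entry:

        # Is the compiled terminfo entry readable by your user at all?
        ls -l /lib/terminfo/x/xterm-color

        # Ask ncurses to decompile it; a permissions problem shows up here too
        infocmp xterm-color

        # Temporary fallback to an entry that is usually world-readable
        export TERM=xterm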

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id = 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id = 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id = 265041
        IRC  orders             26     0         1         order_id = 694472
        IRC  orders_product     6      0         1         op_id = 935375

  • Alfa AWUSO36H 1W dysfunctional driver

    - by BrainStorm
    I recently purchased an Alfa AWUSO36H 1W wireless USB adapter for my notebook in order to improve signal strength and quality. I'm currently using Linux Mint 11, which uses the RTL8187 driver for this adapter; I'm also using a 4 dBi antenna, though I have others. The problem is that this adapter does exactly the opposite of what it should: my internal Broadcom BCM4313 adapter actually works far better than the Alfa. Browsing is slow, some network applications don't even work, and pings to google.com run smoothly on the internal adapter while on the Alfa I get around 25% packet loss or more! I'm less than 50 feet from my AP; the internal adapter gets 44/70 link quality, and the Alfa gets around 60/70 (iwconfig output). Also, the system always sets the Alfa's power to 20 dBm (100 mW), so I have to run sudo iw set reg B0 to make it 30 dBm (1000 mW), but apparently with no significant change. I've installed the wireless-compat drivers, with no change either. And worst of all, under Windows 7 it browses much more smoothly, though I couldn't test it properly there. I hope it's a driver problem, because even if it's a pain for a beginner to find and compile Linux drivers, I prefer that to a hardware problem where I would need to buy another adapter, since I have no money left (except for the cantenna pieces).

  • vmware player won't run on CentOS due to missing /dev/vmmon, what could be the problem?

    - by Graphics Noob
    So I've tried installing VMware Player 3.1.4 and 3.1.3, and both times had the same problem: when I try to load a VM I get the error "Could not open /dev/vmmon". When I ls /dev/ I can see there is no "vmmon" device present. When I try running:

        sudo /etc/init.d/vmware start

    I get the output:

        Starting VMware services:
           VMware USB Arbitrator                               [  OK  ]
           Virtual machine monitor                             [FAILED]
           Virtual machine communication interface             [  OK  ]
           VM communication interface socket family            [  OK  ]
           Blocking file system                                [  OK  ]
           Virtual ethernet                                    [FAILED]

    which shows that the virtual machine monitor fails to load. I tried following the advice on this site and ran:

        vmware-modconfig --console --install-all

    I notice there are no errors during the compilation, but at the end I get the message:

        Starting VMware services:
           VMware USB Arbitrator                               [  OK  ]
           Virtual machine monitor                             [FAILED]
           Virtual machine communication interface             [  OK  ]
           VM communication interface socket family            [  OK  ]
           Blocking file system                                [  OK  ]
           Virtual ethernet                                    [  OK  ]
        Unable to start services

    Out of curiosity I tried:

        sudo /sbin/insmod /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmod.ko

    but got the error message:

        insmod: error inserting 'vmmon.ko': -1 Invalid module format

    I have a feeling this may be the root of the problem, but I don't know what could be causing it or how to fix it.
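
    "Invalid module format" usually means the module was built against a different kernel than the one currently running. A minimal sketch for checking that, assuming the module sits where the post shows it and that this is a stock CentOS kernel:

        # Kernel that is actually running
        uname -r

        # Kernel the module was built for; the vermagic string must match uname -r
        modinfo /lib/modules/$(uname -r)/misc/vmmon.ko | grep vermagic

        # Headers/toolchain needed for vmware-modconfig to rebuild the modules
        # (kernel-xen-devel rather than kernel-devel for a Xen kernel)
        rpm -q kernel-devel kernel-xen-devel gcc make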

  • WOL doesn't work if set to anything other than `a` but this setting makes it boot all the time

    - by Elton Carvalho
    I manage a small "cluster" of 4 Xeon machines with Intel boards in my lab. They are all plugged into a 5-port 3Com switch with static IP addresses like 10.0.0.x. They all run openSUSE 11.4, and their /home is served by one of the machines (node00) via NFS. They are plugged into a UPS that can keep them on for about 15 minutes, but there are lots of power cuts due to "unscheduled maintenance" that last longer than this, so they end up being powered down without notice. If I set the BIOS to turn them on after a power cut, the problem is that they all boot at the same time, and if node00 decides to run fsck on the /home partition, it does not finish booting before the others try to NFS-mount their /home. I am trying to make wake-on-LAN work, so I can choose to boot the NFS clients only after the server has successfully booted. The problem is that when I run ethtool I get output like this:

        Supports Wake-on: pumbag
        Wake-on: g

    Theoretically it is set to wake on MagicPacket(tm), according to the manual. But sending the WOL packet using

        wol -i 10.0.0.255 $MACADDR

    does not wake up the box after I shut it down with halt. The ethernet link LED blinks after I send the packet, so it appears to be reaching the machine. However, if I set it up with ethtool -s eth1 wol bag, the machine always wakes up right after halting, even if I don't send the magic packet. This means the device can wake up on LAN activity, but it seems to be ignoring the magic packet. Setting wol ag does not wake the box with the MagicPacket either. Does setting wol a mean that it should boot on any broadcast message? How can I diagnose why the machine does not wake up with the MagicPacket even though I am sending it and it is set up to wake on it? Thanks in advance!
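
    A minimal sketch for narrowing this down, assuming eth1 is the NIC in question, that tcpdump is available, and that the wol tool is run from another box on the same switch:

        # On the target, just before shutting down: re-arm magic-packet wake and confirm it
        ethtool -s eth1 wol g
        ethtool eth1 | grep Wake-on
        halt -p

        # On a machine that stays up: send the packet and, on a third box (or on the
        # target before halting), verify that magic packets actually appear on the wire
        wol -i 10.0.0.255 $MACADDR
        tcpdump -n -e -i eth0 'ether proto 0x0842 or udp port 9'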

  • Why isn't my phone charging with some micro usb cables?

    - by Jacxel
    I ordered 3 micro-USB cables off eBay. My phone was at about 50% battery and I wanted to use it as a hotspot for some browsing on my laptop, so I plugged it in, a charging icon appeared on the phone, my laptop showed it as a connected USB device, and I went about my business. About 30 minutes later I checked the phone and to my dismay saw 45% battery. But ah, I thought, I've been putting the poor little thing under too much pressure: acting as a WiFi hotspot must drain the battery faster than it can charge via USB, and perhaps my laptop's USB port doesn't output enough power anyway. Undeterred, I carried on, and when I went to bed I plugged the USB cable into a mains adapter, switched off everything battery-consuming and, content, went to sleep. The next morning I was awoken by my phone's alarm, which got cut off unexpectedly. I tried to unlock my phone, which showed no more signs of life. Why isn't my phone charging with these new USB cables? For clarity:
    - They transfer data with no problems.
    - The phone appears to be charging, showing all the signs and lights it normally would.
    - The cable that came with the phone works as you would expect, so it's not a fault with the phone.
    - I think they slow the discharging of the phone, but I could be wrong.
    Are these just bad quality cables? Is there a way to fix this issue?

  • Cut (smart edit) .mts (AVCHD Progressive) files on Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. Cutting at keyframe boundaries is fine for me. If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them (at least that's what it took last time with similar videos, in a file format Avidemux supported). So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to grow to 240 or even 480 hours just because the tool is too slow, unstable or has a terrible UI. I've tried Avidemux 2.5.5 and 2.5.6, but they crash trying to open such a file, even if I first convert it to .avi using mencoder -oac copy -ovc copy. mplayer can play the files. I've tried Avidemux 2.6.0, which can open the file, but it cannot jump to the previous or next keyframe properly (if I jump to the next keyframe and then back to the previous one, it doesn't end up at the original keyframe, and sometimes it displays an error). Also, I'm not sure whether Avidemux 2.6.x would let me save the result without re-encoding. I've tried Kdenlive 0.7.7.1, but playback is very choppy and it cannot play audio at all (it complains that SDL cannot find the device, although many other programs on the system can play audio), so it would be a pain to work with. I've tried converting the .mts file to .mkv using

        ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv

    but that caused too many visible distortions in the video in both mplayer and Avidemux. I've tried converting the .mts file with TsRemux.exe, but Avidemux 2.5.x still can't open that file. Is there another program to cut and concatenate the files? Is there a preprocessor which would create a file (without re-encoding the video) on which Avidemux wouldn't crash?
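
    For reference, command-line stream copying is one possible fallback for the cutting step. A minimal sketch, assuming cut points are found by previewing in mplayer and that byte-wise concatenation is acceptable for MPEG transport streams; exact option behaviour varies between ffmpeg builds:

        # Keep roughly two minutes starting at 1:00 without re-encoding
        # (the cut snaps to the nearest keyframes)
        ffmpeg -i input.mts -ss 00:01:00 -t 00:02:00 -vcodec copy -acodec copy part1.mts

        # Transport streams can often simply be concatenated
        cat part1.mts part2.mts > joined.mts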

  • ssh-add insists on passphrase

    - by Sam Walton
    I have a new SSH key problem. I have used keys successfully for years with Heroku, Git and other servers, so I can log in without having to enter a passphrase. A few weeks ago I was unable to push a git repository on my machine to Heroku; it responded with "Permission denied (publickey)". Hmm. Everything else except this Heroku function still works. So I ran ssh-keygen -t rsa -C "newHeroku" with no passphrase (I hit return so it would be empty). Then I entered:

        sudo chmod 600 ~/.ssh/newHeroku*

    Then:

        ssh-add ~/.ssh/newHeroku.pub

    Hitting return at the passphrase prompt, it exits without error. The next step is:

        ssh-add /Users/sam/.ssh/newHeroku.pub

    To verify that it's "live" I enter:

        ssh-add -l

    to which the output is still "The agent has no identities." Okay, to eliminate variables, I repeat the key generation process, this time entering a passphrase for the new key. I ssh-add the new key and get the "Enter passphrase" prompt as expected. Now, this is why I'm posting here and not on a Heroku forum: ssh-add fails because the passphrase I used keeps getting rejected. It appears that, even though I have no problem with my keys elsewhere, something is wrong with the passphrase handling, because although I get no errors on the key without one, I get errors on the one that expects a passphrase. One question: should I expect a passphrase prompt from ssh-add when I did not set a passphrase? It's been suggested that this is a clue, so I offer it. Or maybe I have a poor understanding of what ssh-add is doing; it wouldn't be the first time I asked a stupid question. Also, I'm on Lion and have applied no system updates during these few weeks, only application updates.
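
    For comparison, ssh-add normally takes the private key file rather than the .pub. A minimal sketch of the usual sequence, assuming the pair lives at ~/.ssh/newHeroku and ~/.ssh/newHeroku.pub:

        chmod 600 ~/.ssh/newHeroku     # the private key must not be group/world readable
        ssh-add ~/.ssh/newHeroku       # add the private key, not the .pub
        ssh-add -l                     # the key's fingerprint should now be listed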

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite old events as needed, to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB). Recently the service started reporting errors to its clients, and the error message it was sending them was "The event log file is full". How can this be, when the event log is configured to overwrite as necessary? In our hurry to get the service back up we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps. There is also MS KB article 312571, which says a hotfix for a similar issue is available, but the configuration that the fix applies to is not exactly the same as ours. Specifically, the fix only applies if event logs are configured to never overwrite old events. I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map files to?

  • Apache can't be reached from outside my LAN

    - by Javier Martinez
    (Solved: I fixed it in the PORT TRIGGERS menu of my router. Thank you anyway.) I have a weird problem related to (I think) my cable router and my configured vhosts in Apache2. The point is that I can't reach any of my configured vhosts from outside my LAN if I set Apache's HTTP port to 80 and add a NAT rule for it. However, if I set the Apache port to 81 (or anything else) with its respective NAT rule on the router, it works. My router is an ARRIS TG952S and I am using Apache/2.2.22 (Debian).

    ports.conf:

        NameVirtualHost *:80
        Listen 80

    vhost1.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost1.mydomain.net
            ServerAlias vhost1.mydomain.net www.vhost1.mydomain.net

    vhost2.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost2.mydomain.net
            ServerAlias vhost2.mydomain.net www.vhost2.mydomain.net

    DNS records (using FreeDNS):

        mydomain.net        --> pointing to another server
        vhost1.mydomain.net --> pointing to my server
        vhost2.mydomain.net --> pointing to my server

    iptables -L -n:

        Chain INPUT (policy ACCEPT)
        target                    prot opt source     destination
        fail2ban-apache-noscript  tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,443
        fail2ban-apache           tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,443
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

        Chain fail2ban-apache (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-apache-noscript (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

    Thank you.
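
    A minimal sketch of the checks that separate a router/NAT problem from an Apache problem, assuming shell access on the server and on some host outside the LAN; PUBLIC_IP is a placeholder:

        # On the server: is Apache listening on port 80 on all interfaces,
        # and are the name-based vhosts registered?
        netstat -tlnp | grep ':80 '
        apache2ctl -S

        # From outside the LAN: hit the public IP directly while supplying the vhost name
        curl -v -H 'Host: vhost1.mydomain.net' http://PUBLIC_IP/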

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true:
    - I have ssh-key based access to a remote user (cloud).
    - That user has password-free root access via sudo.
    Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me and that are going to be generally available?
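
    One generally available alternative worth noting is ssh's own pseudo-terminal allocation. A minimal sketch, assuming the sudoers policy only insists on a tty (requiretty) and not on a password:

        # -t asks sshd for a pty; doubling it (-tt) forces allocation even when
        # the local side has no terminal, e.g. when run from another script
        ssh -tt remotehost sudo yum -y install puppet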

  • GNU screen, how to get current sessionname programmatically

    - by Jimm Chen
    [This can be considered step 2 of my previous question, "Is it possible to change GNU screen session name after created?"] Actually, I'd like to write a script that can display the current screen session name and change it. For example:

        sren armcross

    would change the session name to armcross (ARM gcc cross compiler) and output something like:

        screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So the key question now is how to get the current session name, and not only to display the old name: according to the question above, I also have to know it (to pass to -d -r) before I can change it to something else. Can we use $STY as the current session name? No. $STY does not change after you have changed the session name to a user-defined one. However, in the command

        screen -d -r <oldsessname> -X sessionname armcross

    <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY, otherwise screen spouts the error "No screen session found." Maybe there is a verbose way: use screen -list to list all sessions (user-defined names are listed), then match the pid part of $STY against those listed sessions to find the current session's user-defined name. It should not be so verbose for such a straightforward question, don't you think? The -d -D and -r -R options seem to expose too much implementation detail to screen's users. It seems that to rename a session you have to detach it, do the rename, then reattach it. Right? My environment: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06. Thank you.
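
    The "verbose way" can at least be scripted. A minimal sketch, assuming the pid prefix of $STY is enough to identify the session uniquely in the screen -ls output:

        # The pid part of $STY survives a rename; the name part may not
        pid=${STY%%.*}

        # Find the matching "pid.name" field in the session list
        current=$(screen -ls | awk -v pid="$pid" '$1 ~ ("^" pid "\\.") {print $1}')
        echo "current session name: ${current#*.}"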

  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test piping mail into a script of mine, where I can process and filter the messages. I wrote a test script that just logs some information to a text file, but I don't see any change when I send a mail. My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport file:

        [email protected] email_route

    my main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    my master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    and my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
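
    A minimal checklist sketch for this kind of pipe transport, assuming the paths from the post and a Debian/Ubuntu-style mail log location:

        # transport_maps = hash:... reads the compiled .db file, not the text file
        postmap /etc/postfix/transport
        postfix reload

        # the script must be executable by the pipe user (nobody here)
        chmod 755 /etc/postfix/test.php

        # watch what Postfix actually does with the message as it is delivered
        tail -f /var/log/mail.log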

  • RD Gateway reporting features/capabilities

    - by Don
    We have just implemented RD Gateway for our own department in preparation for a push to the whole agency for telecommuting. It is all set up and working great, but I was trying to figure out how best to go about monitoring and reporting on users. I see third-party software out there that will do it, but is there anything built in, or available via PowerShell/scripting, that would give me a report of users' daily activity? Something that says "User A connected at this time, was on for this long, sent/received this much data", basically some of the same things you can see in Event Viewer. Ideally I'd like to set this up so that once a day it emails me the daily usage, so that when a supervisor asks whether their person is actually working (or at least online sending and receiving x amount of data), I'll have some metrics to give them. I realize that actual work output is relevant and more of a managerial issue, but I would like to be able to offer as much as I can from my end when asked. Thoughts? Thanks!

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:

    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following <application> element to the applicationHost.config file located at system32/inetsrv/config:

        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
                     arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com"
                     maxInstances="4" idleTimeout="300" activityTimeout="30"
                     requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe"
                     queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
          <environmentVariables>
            <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
          </environmentVariables>
        </application>

    4) Created a php.ini file in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing:

        register_globals = on

    5) Ran test.php, which simply outputs everything the call to phpinfo() returns.

    At this point I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:

        Configuration File (php.ini) Path          C:\Windows
        Loaded Configuration File                  C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    What am I messing up, or is there a different way to go about this?

  • Only tunnel certain applications via OpenVPN

    - by jinjin
    Hi, I've purchased a VPN solution. It works correctly when I have "redirect-gateway def1" in the configuration file (routing all traffic through the VPN). However, when I remove that line from the configuration file, I am still able to ping out of the machine (ping -I tap0), but I cannot ping the IP assigned to the machine (a public IP); I get the error "Destination Host Unreachable". I only want certain applications (e.g. ZNC, irssi) to send traffic through the VPN tunnel, and for all of them I can select which IP they use. However, they can't receive any data, making the tunnel essentially useless to me when redirect-gateway is disabled. Any ideas on how to let specific applications use the tunnel, without forcing everything to go through it? My configuration file is as follows:

        dev tap
        remote #.#.#.#
        float #.#.#.#
        port 5129
        comp-lzo
        ifconfig #.#.#.# 255.255.255.128
        route-gateway #.#.#.#
        #redirect-gateway def1
        secret key.txt
        cipher AES-128-CBC

    The output of ifconfig -a when the tunnel is connected:

        tap0      Link encap:Ethernet  HWaddr 00:ff:47:d3:6d:f3
                  inet addr:#.#.#.#  Bcast:#.#.#.#  Mask:255.255.255.255
                  inet6 addr: <snip> Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:612 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:25704 (25.1 KiB)  TX bytes:6427 (6.2 KiB)

    EDIT: the Bcast:#.#.#.# (ifconfig) is different from route-gateway #.#.#.# (openvpn), if that makes any difference.
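
    One common approach for tunnelling only selected traffic is policy (source-based) routing instead of redirect-gateway. A minimal sketch, assuming the applications are bound to the VPN-assigned address; VPN_IP and VPN_GATEWAY are placeholders for the ifconfig and route-gateway values:

        # Separate routing table whose default route goes through the tunnel
        ip route add default via VPN_GATEWAY dev tap0 table 100

        # Any packet whose source address is the VPN address uses that table,
        # so applications bound to VPN_IP (ZNC, irssi) are routed via the VPN
        ip rule add from VPN_IP table 100
        ip route flush cache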

  • Multiple logins with pam_mount means multiple (redundant) mounts ...

    - by Jamie
    I've configured pam_mount.so to automagically mount a CIFS share when users log in. The problem is that if a user logs in multiple times simultaneously, the mount command is repeated multiple times. So far this isn't a problem, but it's messy when you look at the output of the mount command:

        # mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)

    I'm assuming I need to fiddle with either the pam.d/common-auth file or pam_mount.conf.xml to accomplish this. How can I instruct pam_mount.so to avoid duplicate mounts?

  • How do I move a linked file on Unix?

    - by r3mbol
    I have a bunch of files in one directory and links to each one of those files in another directory. So ls -l looks something like this:

        lrwxrwxrwx 1 rembol rembol   89 Jan 25 10:00 copyright.txt -> /home/rembol/solr/target/deploy/data/core/copyright.txt
        lrwxrwxrwx 1 rembol rembol   92 Jan 25 10:00 jar-versions.xml -> /home/rembol/solr/target/deploy/data/core/jar-versions.xml
        lrwxrwxrwx 1 rembol rembol   85 Jan 25 10:00 lgpl.html -> /home/rembol/solr/target/deploy/data/core/lgpl.html
        lrwxrwxrwx 1 rembol rembol   79 Jan 25 10:00 lib -> /home/rembol/solr/target/deploy/data/core/lib
        lrwxrwxrwx 1 rembol rembol   87 Jan 25 10:00 readme.html -> /home/rembol/solr/target/deploy/data/core/readme.html
        drwxr-xr-x 3 rembol rembol 4096 Jan 25 10:00 server
        drwxr-xr-x 2 rembol rembol 4096 Jan 25 10:00 startup

    Now I want to move those linked files from /home/rembol/solr/target/deploy to /home/rembol/output/. If I do that by simply calling mv, the links will break. I don't want to re-link each file separately, because there are hundreds of them (they are generated automatically). Is there some clever way to move linked files, rather than writing a script that unlinks, moves and relinks recursively for each file in each subdirectory?
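
    A minimal sketch of the scripted relinking approach, assuming every link target only needs its /home/rembol/solr/target/deploy prefix replaced by /home/rembol/output, and with /path/to/linkdir standing in for the directory that holds the symlinks:

        old=/home/rembol/solr/target/deploy
        new=/home/rembol/output

        # Move the real files first
        mv "$old"/* "$new"/

        # Rewrite every symlink that still points at the old prefix
        find /path/to/linkdir -type l | while read -r link; do
            target=$(readlink "$link")
            case "$target" in
                "$old"/*) ln -sfn "$new${target#$old}" "$link" ;;
            esac
        done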

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    I'm trying to experiment with Hadoop and streaming, using the Cloudera CDH3 distribution on Ubuntu. I have valid data in hdfs:// ready for processing, and I wrote a little streaming mapper in Python. When I launch a mapper-only job using:

        hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar \
          -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py \
          -input /incoming/STBFlow/* -output testOP

    Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS and a job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state, and nothing else happens. CPU usage stays at zero (the job is created, though). If I give it a single file (instead of the entire directory) as input, I get the same result, except that Hadoop decides it needs 2 mappers instead of 66. I also tried launching jobs with the "dumbo" Python utility: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues, maybe? Ports are enabled explicitly, case by case, in the cluster security group.
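
    A job that is created but never leaves "pending" often means no TaskTrackers are registered with the JobTracker. A minimal sketch of the first checks, assuming CDH3's MRv1 command-line tools; the log path is an assumption and may differ on your install:

        # Is the job still queued, and do any TaskTrackers show up at all?
        hadoop job -list
        hadoop job -list-active-trackers

        # The JobTracker's own view of the problem
        tail -n 100 /var/log/hadoop/*jobtracker*.log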

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array, and currently /dev/md127 does not have any drives:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdd1[0] sda1[1]
              304052032 blocks super 1.2 [2/2] [UU]

        md1 : active raid0 sda5[1] sdd5[0]
              16770048 blocks super 1.2 512k chunks

        md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]

        unused devices: <none>

    and

        mdadm --detail /dev/md127
        /dev/md127:
                Version : 1.2
          Creation Time : Thu Sep 6 10:39:57 2012
             Raid Level : raid5
             Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
          Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
           Raid Devices : 4
          Total Devices : 0
            Persistence : Superblock is persistent
            Update Time : Fri Sep 7 17:19:47 2012
                  State : clean, FAILED
         Active Devices : 0
        Working Devices : 0
         Failed Devices : 0
          Spare Devices : 0
                 Layout : left-symmetric
             Chunk Size : 512K

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       0        0        2      removed
               3       0        0        3      removed

    I've tried to do mdadm --stop /dev/md127, but:

        mdadm --stop /dev/md127
        mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

    I made sure that it's unmounted with umount -l /dev/md127 and confirmed that it indeed is unmounted:

        umount /dev/md127
        umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

        mdadm --zero-superblock /dev/sde1
        mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof|grep md127:

        lsof|grep md127
        md127_rai  276  root  cwd  DIR  9,0  4096  2  /
        md127_rai  276  root  rtd  DIR  9,0  4096  2  /
        md127_rai  276  root  txt  unknown        /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.
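
    A minimal sketch of the usual ways to find what still holds the array open, assuming sysfs and the device-mapper tools are present (even with LVM absent, a dm-crypt or other dm mapping would show up here):

        # Anything stacked on top of md127?
        ls /sys/block/md127/holders/
        dmsetup ls 2>/dev/null

        # Processes that have the block device open
        fuser -vm /dev/md127

        # Stale references in swap or the mount table
        grep md127 /proc/swaps /proc/mounts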

  • Which grabber is good enough to get 1000 fps?

    - by user261002
    I have two frame grabbers and a fast camera (1800+ fps). Can anybody who understands the hardware explain which of the following grabbers would help me more to grab 1000 fps? Here are the features of the two grabbers:

    Inspecta-5, Full Camera Link® version:
    · Support for line scan and area cameras.
    · Video data rate of up to 660 MBytes/sec.
    · PCI-X bus interface for 64-bit data width and 66 MHz clock frequency.
    · PCI bus interface for 32-bit data width and 33 MHz clock frequency.
    · 2 GByte onboard memory for fast video streams.
    · Four opto-coupled input/output ports for external trigger and encoder signals.
    · 528 MBytes/sec. maximum data rate on the PCI-X bus.
    · SDK for Windows 2000/XP

    SILICONSOFTWARE V-Series Camera Link: "microEnable IV VD4-CL"
    · Camera pixel clock support: 85 MHz
    · Area scan cameras: 32k x 64k max. image size
    · Line scan cameras: 64k max. image width
    · Acquisition buffer: 512 MB DDR-RAM
    · Sustainable transfer rate (max.): 850 MBytes/sec.
    · microEnable SDK for Windows XP/Vista/7/Linux

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of top and graphs of CPU load (drawn by MRTG, with data collected via SNMP), I can see that CPU load is never more than 50%; for most of the day, when those processes are busy, CPU load sits at a ceiling of 50%. I mean, CPU load grows to 50% in the morning and stays there until late evening. My first thought was that only one core was being used at 100%, giving 50% across both CPUs. But there are three processes running, and from top I can see that both cores are being loaded, so this is not the case. schedtool shows that the CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process onto one core (schedtool -a 0x01) and the two others onto the second (schedtool -a 0x02), cumulative usage grows beyond 50%. Why do three processes seem to consume only 50% of the two cores? Why does forcing them onto different CPUs allow usage to grow higher? Any hints? P.S. The processes in question are Counter-Strike servers.
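
    A minimal sketch for confirming how the load is actually spread across the cores, assuming the sysstat package is installed:

        # Per-core utilisation, refreshed every second
        mpstat -P ALL 1

        # Per-process (and per-thread) CPU usage, to see whether each game server
        # is effectively single-threaded and capped at one core's worth of CPU
        pidstat -t -u 1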

  • Converting flv and mp4 video format to '.ogg' using FFmpeg

    - by user163906
    I have a HostGator VPS with FFmpeg installed. It lets me convert .wmv to .flv as well as to .mp4 successfully, using the following commands:

        ffmpeg -i WantsABath.wmv -b 600k -r 24 -ar 22050 -ab 96k WantsABath.flv
        ffmpeg -i WantsABath.wmv WantsABath.mp4

    but it won't let me convert any file format to .ogg. I tried the command suggested by mondain:

        ffmpeg -i input.mp4 -acodec libvorbis -vcodec libtheora -f ogv output.ogv

    but had no luck with it. I suspect that my VPS doesn't have libtheora installed. I tried configuring it over SSH, but I don't know how to check whether it is installed or not. I tried checking with php_info but can't find anything regarding libtheora. Here's my FFmpeg version:

        FFmpeg version SVN-r19795, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-libmp3lame --enable-libvorbis --disable-mmx --enable-shared --prefix=/usr/ --enable-gpl
          libavutil     50. 3. 0 / 50. 3. 0
          libavcodec    52.35. 0 / 52.35. 0
          libavformat   52.38. 0 / 52.38. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0. 7. 1 /  0. 7. 1

    These details don't show libtheora. Can anyone please suggest something?
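
    A minimal sketch for checking whether this build knows about Theora at all; the configuration line above has no --enable-libtheora, which is the usual prerequisite, and the listing flag varies between FFmpeg versions, so both forms are shown:

        # Newer builds list codecs with -codecs; older SVN builds fold them into -formats
        ffmpeg -codecs 2>/dev/null | grep -i theora
        ffmpeg -formats 2>/dev/null | grep -i theora

        # On an RPM-based VPS, look for the encoder library and headers (package names assumed)
        rpm -qa | grep -i theora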

  • MySQL command appends '@localhost' to username

    - by Mikee
    I just can't seem to figure this one out. I want to use the command line to connect to a MySQL database residing on another server. I went ahead and created the username and password for the user, and I have also granted that user all privileges on the database. When using the command:

        mysql -h <hostname> -u <username> -p

    I get the following error:

        ERROR 1045 (28000): Access denied for user '<username>'@'<local_machine_hostname>' (using password: YES)

    The problem is that it keeps appending the current machine's hostname to the username. Obviously, that user@<local_machine_hostname> is not correct. It doesn't matter what I type. For instance, if I type:

        mysql -h <hostname> -u '<username>'@'<hostname>' -p

    it does the same, only in the error output it says:

        Access denied for user '<username>@<hostname>'@'<local_machine_hostname>'

    Is there a setting in a configuration file which is allowing this to happen? It's really quite annoying. I need to set up a TikiWiki server, and it cannot connect because during the MySQL setup step it keeps appending the local machine's hostname to the MySQL login name.
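
    For context, the host shown after the '@' in that error is how MySQL reports where the connection is coming from, not a mangled username; privileges are granted per (user, client host) pair. A minimal sketch of a grant that covers connections from other machines, with the database, user and password as placeholders and the IDENTIFIED BY form assumed to be accepted (it is on MySQL 5.x):

        # Run on the MySQL server; '%' matches any client host
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"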
