Search Results

Search found 29876 results on 1196 pages for 'ubuntu 8 04 lts'.

Page 389/1196 | < Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >

  • Failed loading ioncube

    - by time
    I recently upgraded a small server to Ubuntu 12.10 (from 12.04), thus upgrading PHP from 5.3 to 5.4. However, I'm getting this in root's mailbox several times a day:

        Subject: Cron <root@xxxxxxx> [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -ignore_readdir_race -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: xxxxxxxxxxxxxxxxxxxxxxxx
        Date: Sun, 9 Dec 2012 05:09:02 -0500 (EST)

        Failed loading /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: undefined symbol: php_body_write

    I assume that's coming up because the loader is built for PHP 5.3. How can I just get rid of ioncube? I have no need for it; I don't even remember installing it. That .so file doesn't exist, and I've grepped several locations for "ioncube", but I can't figure out how to stop that message from flooding the mailbox.
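    A minimal sketch for hunting down the stale loader directive, assuming it is still referenced from one of PHP 5's standard config locations (exact paths under /etc/php5 vary between setups):

        grep -ri ioncube /etc/php5/
        # remove or comment out the matching "zend_extension = ...ioncube_loader..." line,
        # typically found in /etc/php5/cli/conf.d/ or /etc/php5/apache2/conf.d/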

    Read the article

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box; the OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something has changed. I'm not sure what or when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolving should work normally.

        /etc/hosts
        127.0.0.1 localhost test
        127.0.1.1 desktop

        /etc/host.conf
        order hosts,bind
        multi on

        /etc/resolv.conf
        # Generated by NetworkManager
        search search servers obtained via DHCP
        nameserver 192.168.0.3

        /etc/nsswitch.conf
        passwd: compat
        group: compat
        shadow: compat
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    But in fact it is not. Pinging is OK:

        user@test ~ $ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    But the host lookup goes straight to the external domain:

        user@test ~ $ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might cause this misbehavior, but I don't know where to start checking. Any thoughts, suggestions?
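    One hedged note on the test itself: the host utility talks to the DNS servers directly and never consults /etc/hosts, so it is not a valid check of the nsswitch ordering; getent goes through the normal resolver path:

        getent hosts test    # resolves via nsswitch.conf (files first); should print 127.0.0.1
        host test            # DNS only; returning the external address here is normal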

    Read the article

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) running on a laptop, where I defined the filesystems as:

        mount point /     on ext4 (46 GB)
        mount point /home on jfs  (63 GB)
        swap              3 GB

    I left the machine overnight to do some task, without AC power. The next morning I found it on standby, the task completed, but the filesystem was not reachable; it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs to ext4. Can I do this without losing data and without the need to place the data in a temporary location until the transformation is done? Sorry to mention it, but I recall that back in the Windows days we would change FAT16 to FAT32, or FAT32 to NTFS, without losing data. I hope something similar is available on Linux.

    Update: The /home filesystem was xfs, not jfs, and it seems there is a bug with this filesystem. For some reason I had to re-install the OS twice until I ended up with ext4 for the entire /. However, as a conclusion, it seems that there is no way to make such a conversion.
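    There is no in-place converter between xfs/jfs and ext4, so the usual route is copy out, reformat, copy back. A rough sketch, assuming /home lives on /dev/sda6 and some spare space exists to hold a copy (device names and paths are illustrative):

        rsync -aHAX /home/ /mnt/spare/home-backup/   # with /home quiet (no one logged in)
        umount /home
        mkfs.ext4 /dev/sda6
        mount /dev/sda6 /home
        rsync -aHAX /mnt/spare/home-backup/ /home/
        # finally change the /home entry in /etc/fstab from jfs/xfs to ext4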

    Read the article

  • I Made Wordpress Multisite. And Now Everything Doesn't Display Properly

    - by piratepartypumpkin
    I installed Ubuntu 10.04, then the Bitnami LAMP stack, then the Bitnami WordPress module, and then I tried to make the site into a multisite by following these instructions: http://wiki.bitnami.org/Applications/BitNami_WordPress_Multisite#How_to_add_several_WordPress_Multisite_blogs_with_different_domains.3f and http://wiki.bitnami.org/Applications/BitNami_WordPress_Multisite

    I can elaborate if needed: I enabled multisite in my wp-config.php file, then created a network using the WordPress dashboard. I was given two blocks of text to copy and paste, one into my wp-config.php file and one into my .htaccess file. I did that, and now: when I go to mywebsite.com I get this [screenshot] (this page was there before I switched to multisite). If I go to mywebsite.com/wordpress I get this [screenshot] (this used to be a functioning WordPress theme). If I click on "My Blog" it redirects to 1mywebsite.com (where the "1" came from I have no idea). If I try to log in by going to mywebsite.com/wordpress/wp-login I get this [screenshot], and if I enter my user and password it redirects me to mywebsite.com/wp-login.php and gives me "Not Found: The requested URL /wp-login.php was not found on this server."

    My virtual host config:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /opt/lampstack-5.3.16-0/apps/wordpress
            ServerName mywebsite.com
            ServerAlias www.mywebsite.com
        </VirtualHost>
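    A hedged place to look, since /wp-login.php is being resolved against the DocumentRoot instead of the /wordpress path (this is a guess, not the Bitnami-documented fix): the multisite rules pasted into .htaccess only take effect if mod_rewrite is loaded and the directory allows overrides, e.g.:

        <Directory "/opt/lampstack-5.3.16-0/apps/wordpress">
            AllowOverride All
            Options FollowSymLinks
        </Directory>
        # mod_rewrite must also be enabled in httpd.conf:
        # LoadModule rewrite_module modules/mod_rewrite.so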

    Read the article

  • `sh` access denied over ssh connection

    - by inspectorG4dget
    I have an ubuntu server and a windows XP client running Cygwin. The server ssh's into the client and tries to execute a shell script with some params, with the following command: ssh user@IP_ADDR 'sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR' where IP_ADDR is the IP address of client. However, while doing so, I get the following error: Access is denied. Thinking this might be a user permissions error, I tried running sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR on the client, on Cygwin, while logged in as user. This works as expected. Then I thought that this might be an error with the login that I use when I ssh into the client. So I executed this instead: ssh user@IP_ADDR 'whoami' and got back user. This happened even after I did chmod -R 777 /home/user/project on the client, in Cygwin. For kicks, I got on Cygwin on the client and did ssh localhost and manually executed sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR. This worked as expected. However, when I did ssh IP_ADDR from Cygwin and did ssh localhost and manually executed sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR, I get the same Access is denied. error. Why is this happening? How can I fix this? By the way, both the server and the client have each other's rsa public key for passwordless ssh
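    A hedged debugging step (not from the original question): the "Access is denied." text is a Windows error rather than an sh one, so tracing the script usually shows which command inside it trips over the missing rights in the non-interactive session:

        ssh user@IP_ADDR 'sh -x /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR' 2>&1 | tail -n 40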

    Read the article

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug JSP:

        Weblogic Startup Since:    Friday, October 19, 2012, 08:36:12 AM
        Database Current Time:     Wednesday, December 12, 2012, 11:43:44 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual. Line 3 is the output of new Date() in Java. I have checked with the shell date command that line 2 agrees with the OS time. The output of line 3 is the mystery; I don't know how the JVM arrives at it. On another machine with the same setup, the same JSP outputs:

        Weblogic Startup Since:    Tuesday, December 11, 2012, 02:29:06 PM
        Database Current Time:     Wednesday, December 12, 2012, 11:51:48 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    And another machine:

        Weblogic Startup Since:    Monday, December 10, 2012, 05:00:34 PM
        Database Current Time:     Wednesday, December 12, 2012, 11:52:03 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP sync is done daily. Here are the server versions:

        HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
        java version "1.6.0.04"
        Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
        Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
        WebLogic Server Version: 10.3.2.0

    Java properties:

        java.runtime.name=Java(TM) SE Runtime Environment
        java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
        java.vendor=Hewlett-Packard Co.
        java.vendor.url=http\://www.hp.com/go/Java
        java.version=1.6.0.04
        java.vm.name=Java HotSpot(TM) 64-Bit Server VM
        java.vm.info=mixed mode
        java.vm.specification.vendor=Sun Microsystems Inc.
        java.vm.vendor="Hewlett-Packard Company"
        sun.arch.data.model=64
        sun.cpu.endian=big
        sun.cpu.isalist=ia64r0
        sun.io.unicode.encoding=UnicodeBig
        sun.java.launcher=SUN_STANDARD
        sun.jnu.encoding=8859_1
        sun.management.compiler=HotSpot 64-Bit Server Compiler
        sun.os.patch.level=unknown
        os.name=HP-UX
        os.version=B.11.31

    Read the article

  • FFMPEG: how to add watermark to video?

    - by DocWiki
    My platform: Ubuntu 10.10 + FFmpeg 0.5.3 (I installed ffmpeg from source). I'm trying to add a watermark to a .MOV video with FFmpeg 0.5.3 and imlib2.so. (Please note that FFmpeg 0.6+ doesn't support imlib2.so, which is why I use 0.5.3.) Here is my command:

        ffmpeg -sameq -i example.mov -vhook '/usr/local/lib/vhook/imlib2.so -x 0 -y 0 -i /var/www/files/watermark.png' newexample.mov

    Here is the output:

        FFmpeg version 0.5.3, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-avfilter --enable-filter=movie --enable-avfilter-lavf
          libavutil   49.15. 0 / 49.15. 0
          libavcodec  52.20. 1 / 52.20. 1
          libavformat 52.31. 0 / 52.31. 0
          libavdevice 52. 1. 0 / 52. 1. 0
          libavfilter  0. 4. 0 /  0. 4. 0
          built on Jul 3 2011 12:05:08, gcc: 4.4.5
        Seems stream 1 codec frame rate differs from container frame rate: 59.94 (5994/100) -> 29.97 (30000/1001)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'example.mov':
          Duration: 00:03:14.06, start: 0.000000, bitrate: 3350 kb/s
            Stream #0.0(eng): Audio: aac, 48000 Hz, stereo, s16
            Stream #0.1(eng): Video: h264, yuv420p, 1150x647, 29.97 tbr, 29.97 tbn, 59.94 tbc
        Output #0, mov, to 'newexample.mov':
            Stream #0.0(eng): Video: mpeg4, yuv420p, 1150x647, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
            Stream #0.1(eng): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.1 -> #0.0
          Stream #0.0 -> #0.1
        Unsupported codec for output stream #0.1

    What could be the problem? Is it AAC or H.264 that is not supported? I installed libavcodec-extra-52, libfaac, libfaad etc., but the error is the same. Do I have to install everything following this guide: HOWTO: Install and use the latest FFmpeg and x264? Or is there a simpler solution?
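    A hedged reading of "Unsupported codec for output stream #0.1": that stream is the audio one, and the 0x0000 codec in the output section suggests no audio encoder was chosen, so forcing one (or passing the source AAC through) is worth trying; libfaac only works if it was compiled in:

        # pass the source AAC audio through untouched
        ffmpeg -sameq -i example.mov -acodec copy -vhook '/usr/local/lib/vhook/imlib2.so -x 0 -y 0 -i /var/www/files/watermark.png' newexample.mov
        # or re-encode the audio, if an AAC encoder such as libfaac is built in
        ffmpeg -sameq -i example.mov -acodec libfaac -ab 128k -vhook '/usr/local/lib/vhook/imlib2.so -x 0 -y 0 -i /var/www/files/watermark.png' newexample.mov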

    Read the article

  • Gwibber doesn't launch post upgrade

    - by Arcath
    I updated my PC from Ubuntu 9.10 to 10.04, and one of the things I was looking forward to was the "me menu" and Gwibber. For some reason Gwibber doesn't launch at all. When I try to launch it from a terminal I get:

        [21:02:20][arcath@Highgate ~]$ gwibber
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
        Traceback (most recent call last):
          File "/usr/bin/gwibber", line 50, in <module>
            from gwibber import client
          File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 3, in <module>
            import gtk, gobject, gwui, util, resources, actions, json, gconf
          File "/usr/lib/python2.6/dist-packages/gwibber/gwui.py", line 2, in <module>
            import os, json, urlparse, resources, util
          File "/usr/lib/python2.6/dist-packages/gwibber/util.py", line 2, in <module>
            from microblog.util.couch import RecordMonitor
          File "/usr/lib/python2.6/dist-packages/gwibber/microblog/util/couch.py", line 10, in <module>
            OAUTH_DATA = desktopcouch.local_files.get_oauth_tokens()
          File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 323, in get_oauth_tokens
            oauth_token_secrets = cf.items_in_section("oauth_token_secrets")[0]
          File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 189, in items_in_section
            raise ValueError("Section %r not present." % (section_name,))
        ValueError: Section 'oauth_token_secrets' not present.

    I can't work out what's wrong with it. Can anyone point me in the right direction?
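    A hedged workaround rather than an official fix: the traceback dies inside desktopcouch because its config has no oauth_token_secrets section, and letting desktopcouch regenerate its configuration sometimes clears this. The path below is the standard one; move it aside rather than deleting it:

        pkill -f desktopcouch-service                          # stop any running desktopcouch instance
        mv ~/.config/desktop-couch ~/.config/desktop-couch.bak # let it be recreated on next start
        gwibber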

    Read the article

  • How to tell which process is hogging my CPU when they don't add up to 100%?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the resources tab shows it at 100% continuously, too. If I go to processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. The individual processes do not add up to 100%. I try killing lots of processes, but the overall usage continues to be 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, which is never anywhere near 100% CPU unless I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. There is no reason the processor should be maxed out. I'm not sure when it started or if I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading. Update: I tried killing every process, one at a time, until the problem went away, and killing vino-server finally fixed it, even though that process never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). But the question remains: How did a single process manage to use 100% CPU while top only showed that process at 5%? How do I identify culprits like this in the future? Looks like I'm not the only one who's had this problem: Still a problem in both jaunty & karmic. Interestingly, both System Monitor & htop do not show the sum of individual processes being anywhere near 100% cpu.
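    A few hedged checks for load that per-process listings miss (the commands are standard; which one applies here is a guess):

        # show threads individually; a busy thread inside an otherwise quiet process shows up here
        top -H -b -n 1 | head -n 25
        # system-wide split: high %sy, %wa or interrupt time points away from ordinary user processes
        vmstat 1 5
        # per-process totals sorted, for comparison against the overall figure
        ps -eo pcpu,pid,user,comm --sort=-pcpu | head -n 15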

    Read the article

  • Why am I unable to reach local network computers, but able to browse the web?

    - by Igor Zinov'yev
    I have a weird problem. Today, after turning my Ubuntu 9.10 PC on, I can't connect to my local network, but I can use the Internet. We have a single Windows 2003 server machine that acts as the local main DNS server, DHCP server and domain controller. Although it seems to give me a local IP address, I cannot ping it, nor any other machine on the net. I have tried all of the below and it didn't help:

        Rebooting;
        Reconnecting to the network;
        Forcing the dhclient to renew the IP address;
        Deleting and creating new connection profiles;
        Plugging my machine into another network outlet.

    Maybe it has something to do with routing, because I tampered with the routing tables the day before, but the tables seem OK to me:

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 vboxnet0
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0

    Our LAN uses a D-Link DI-604 router, and it looks to me as if I am connected to the network outside the router; I cannot even access its administration page. Please at least suggest what I can do to solve this.

    P.S. What seems strangest to me is that I can access the PC in question from outside the network by opening a port on the router. I have managed to ssh to it from outside, but I still can't ping anything on the inside.

    P.P.S. Today I tried reinstalling network-manager with the --purge option, but it did no good. After that I created a new DHCP reservation for my PC in order to change my local IP, but that didn't change anything either. My PC is able to get a DHCP offer, but then it's unable to connect to any local computers. I am desperate.
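    Some hedged first checks (nothing here is from the original post): if Internet traffic really leaves through 192.168.0.1 while local hosts stay unreachable, ARP and the interface actually in use are the quickest things to rule out:

        arp -n                      # does the kernel ever learn MACs for local addresses?
        ip addr show eth0           # address actually configured on the LAN interface
        ip route get 192.168.0.1    # which interface/route would be used to reach the gateway
        ping -c 3 192.168.0.1       # is the gateway itself answering?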

    Read the article

  • Minimum specs needed to run 1080p HD movies?

    - by Wesley
    Hi all, I was wondering how low you could go with hardware that would still be able to smoothly run HD movies. My current plan is around $215 CAD w/o a hard drive: Intel Pentium 4 @ 3.2 GHz - 1GB DDR/SDRAM - 512MB X1600 Pro AGP - 350W PSU - ECS P4VXASD2+ - Bluray/DVD-RW drive. As for the hard drive, could I just get a 7200RPM IDE HDD? Also, I'm planning on installing XP Pro SP3, unless Ubuntu somehow has an advantage. I'm not wanting this to be a media center only computer... just want it to be a normal computer with just enough oomph to play HD movies. Thanks in advance. EDIT1: Forgot to mention that the Bluray/DVD-RW drive is SATA, but I was planning on getting an SATA-IDE adapter or a PCI card with SATA. EDIT2: Oh yes... if I get a PCI card with SATA, would you then recommend I use an SATA hard drive as well? That would leave all the IDE ports on the motherboard unused... is that a good idea?

    Read the article

  • Why apache throws 403 on index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from sources using these commands:

        ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
            --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache, and

        ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
            --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
            --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
            --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it gives a 403 response to http://localhost:8060/index.html (presuming that 8060 is used). These are the relevant directory settings in httpd.conf:

        <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
            ...
            Order allow,deny
            Allow from all
            ...
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>

    It should be noted that Apache lives on a mounted partition (the default auto-mount configured while installing Ubuntu).

    Log files. Access log:

        ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
        ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

        [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
        [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
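    A hedged guess at the cause, given "(13)Permission denied" and the DocumentRoot living on a separately mounted partition: the user Apache runs as cannot traverse one of the parent directories. namei shows exactly where the chain breaks:

        # permissions of every path component down to the index file
        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
        # every directory on the way needs execute permission for the apache user, e.g.
        chmod o+x /mnt /mnt/workspace /mnt/workspace/servers /mnt/workspace/servers/web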

    Read the article

  • 1and1 ssh - connection refused

    - by kitensei
    I'm having trouble connecting to my 1&1 account over SSH. When I try to connect with ssh userXXX@host -p22 -vv I get the following output:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to mySite.com [ip_here] port 22.
        debug1: connect to address ip_here port 22: Connection refused

    Moreover, once I try to connect through SSH and it fails, even HTTP access is dead; I cannot reach the website through a browser anymore. Please help. I'm running Ubuntu 11.10.

    EDIT: I don't know if it helps, but here's the .htaccess on the 1&1 server:

        Options +Indexes
        Satisfy any
        Order Deny,Allow
        Allow from 212.227.X.X
        Deny from all
        RemoveType .html .gif
        AuthType Basic
        AuthName "Access to /logs"
        AuthUserFile /kunden/homepages/43/d376072470/htpasswd
        Require user "user_here"

    and sftp.log:

        Mar 26 09:21:24 193.251.X USER_HERE Connection from 193.251.X port 51809
        Mar 26 09:21:30 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:39 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:41 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:45 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 09:23:57 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
        Mar 26 10:53:36 212.227.X tmp64459736-3228 Connection from 212.227.X port 23275
        Mar 26 10:53:36 212.227.X tmp64459736-3228 Accepted password for tmp64459736-3228 from 212.227.X port 23275 ssh2
        Mar 26 11:53:37 212.227.X tmp64459736-3228 Connection closed by 212.227.X
        Mar 26 18:58:17 212.227.X tmp64459736-5363 Connection from 212.227.X port 23353
        Mar 26 18:58:17 212.227.X tmp64459736-5363 Accepted password for tmp64459736-5363 from 212.227.X port 23353 ssh2
        Mar 26 19:53:36 212.227.X tmp64459736-8525 Connection from 212.227.X port 5166
        Mar 26 19:53:36 212.227.X tmp64459736-8525 Accepted password for tmp64459736-8525 from 212.227.X port 5166 ssh2
        Mar 26 19:58:17 212.227.X tmp64459736-5363 Connection closed by 212.227.X
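    A hedged observation rather than a fix: the run of "Failed password" lines followed by both SSH and HTTP going dark for the same client looks like provider-side brute-force protection blocking the source IP. Checking the port from a second network separates "server down" from "my IP is blocked":

        nc -vz mySite.com 22   # from the affected machine
        nc -vz mySite.com 22   # from a different network (e.g. a phone hotspot or another host)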

    Read the article

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:

        NameVirtualHost 10.10.0.205

        <VirtualHost 10.10.0.205>
            ServerName *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost, instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD?

    apachectl -S outputs (trimmed):

        10.10.0.205:*  is a NameVirtualHost
            default server *.test
            port * namevhost *.test
            port * namevhost *.dev
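    A hedged sketch of the usual fix: Apache treats ServerName as a literal host name and only honours wildcard patterns in ServerAlias, so each vhost needs a plain ServerName plus a wildcard alias (the catchall.* names below are placeholders):

        <VirtualHost 10.10.0.205>
            ServerName catchall.test
            ServerAlias *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName catchall.dev
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        </VirtualHost>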

    Read the article

  • my X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440) and there is no sense in maximizing my applications, so I installed CompizConfig Settings Manager and enabled Grid. Nothing happened; the shortcuts don't move application windows. I went to System - Preferences - Appearance, the Visual Effects tab. It was set to "disabled". When I try to set it to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get an error message "Desktop effects could not be enabled". I opened Xorg.0.log and found errors there:

        (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
        (II) UnloadModule: "glx"
        (EE) Failed to load module "glx" (loader failed, 7)
        (II) LoadModule: "extmod"

    Going to System - Administration - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling the error message suggested that I install and run something called EnvyNG. This let me install the nvidia drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error message in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid. Now, I don't have enough Linux experience to understand whether this is a single cause-effect chain of problems or three independent ones, but I'd appreciate help with any of them. I am running Ubuntu 9.10; the video card is a GeForce 7600GS.
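    Some hedged checks (the commands are standard, the diagnosis is a guess): the failing libglx.so is usually either a leftover Mesa file or a broken NVIDIA diversion, so it is worth confirming which package owns it and that xorg.conf really selects the nvidia driver:

        ls -l /usr/lib/xorg/modules/extensions/libglx.so*
        dpkg -S /usr/lib/xorg/modules/extensions/libglx.so
        grep -i driver /etc/X11/xorg.conf
        # reinstalling the packaged driver often restores the diverted libglx.so
        sudo apt-get install --reinstall nvidia-glx-185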

    Read the article

  • x86_64 and memory issues

    - by Valery
    Recently I switched from the 32-bit to the 64-bit version of Ubuntu, and now I'm experiencing some problems: all applications take twice as much memory, and some take even more. For example, sshd on the new server:

        root      6608  0.0  0.0  67972  2912 ?     Ss  14:43  0:00 sshd: deploy [priv]
        deploy    6616  0.0  0.0  67972  1724 ?     S   14:43  0:00 sshd: deploy@pts/4
        root     20892  0.0  0.0  50916  1160 ?     Ss  15:53  0:00 /usr/sbin/sshd
        root     21170  0.0  0.0  67972  2912 ?     Ss  15:56  0:00 sshd: deploy [priv]
        deploy   21173  0.0  0.0  67972  1728 ?     S   15:56  0:00 sshd: deploy@pts/0
        root     23802  0.0  0.0  67972  2912 ?     Ss  16:08  0:00 sshd: deploy [priv]
        deploy   23804  0.0  0.0  67972  1724 ?     S   16:08  0:00 sshd: deploy@pts/1
        root     24570  0.0  0.0  67972  2908 ?     Ss  12:45  0:00 sshd: deploy [priv]
        deploy   24573  0.0  0.0  68112  1804 ?     S   12:45  0:00 sshd: deploy@pts/3
        deploy   25014  0.0  0.0   5168   852 pts/0 S+  16:13  0:00 grep ssh

    The same on the old server:

        root      4867  0.0  0.0  5312  1028 ?     Ss  Mar23  0:00 /usr/sbin/sshd
        root     23753  0.0  0.0  8052  2556 ?     Ss  16:15  0:00 sshd: deploy [priv]
        deploy   23755  0.0  0.0  8052  1524 ?     S   16:15  0:00 sshd: deploy@pts/0
        deploy   23770  0.0  0.0  3004   748 pts/0 D+  16:15  0:00 grep ssh

    The same problem occurs with postfix, nginx and some other applications.
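    A hedged note on interpretation: the VSZ column roughly doubles on x86_64 because pointers and mapped libraries are wider, but most of that is shared or never touched; RSS, or better the private dirty pages, is the fairer comparison:

        # resident vs. virtual size, largest first
        ps -eo rss,vsz,comm --sort=-rss | head -n 15
        # rough count of truly private, written-to memory for one sshd (oldest instance)
        awk '/Private_Dirty/ {sum += $2} END {print sum " kB"}' /proc/$(pgrep -o sshd)/smaps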

    Read the article

  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, and not local users. We discovered that if an LDAP user has a crontab with an entry marked to be run @reboot, the command will not actually run upon the reboot of a machine. I'm pretty sure that this is because the cron daemon starts before networking is fully up, so the crontabs of any LDAP users aren't loaded and run or checked for @reboot. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until that user runs crontab -e again and saves, or until the cron daemon is rebooted. We were able to fix one part of this problem by adding the following line to /etc/crontab: @reboot root /bin/sleep 45 && /etc/init.d/cron restart Thus, when cron starts back up upon a reboot, it waits for networking to get up, then restarts the cron daemon. That fixes the problem of crontabs not being read at all for LDAP users. However, since it's the cron daemon being restarted and not the computer, @reboot entries are ignored. Is there a way for a user to make a command run upon restarting the daemon, rather than a reboot? Or is there a better solution to this overall problem? Thanks.
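    One hedged workaround (the user name and script path below are purely illustrative): since the delayed cron restart is not a "reboot" as far as the daemon is concerned, the root crontab entry that already waits for the network can simply run the LDAP users' startup jobs itself:

        # root's crontab: wait for networking/LDAP, restart cron, then run each user's boot script
        @reboot /bin/sleep 45 && /etc/init.d/cron restart && su - ldapuser -c '/home/ldapuser/bin/on-boot.sh'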

    Read the article

  • Enable re-attached mouse/keyboard via ssh?!

    - by aidan
    I had Ubuntu 9.10 x64 Desktop installed on a nettop (which I normally run headless), and yesterday I decided to take the plunge and upgrade to 10.04. So I plugged in a screen and a USB mouse/keyboard, booted up and set to work. It was 1am and the upgrade was telling me it had 3 hrs left to install all the new packages, so I unplugged the screen and USB mouse/keyboard, left the box running, and went to bed. This evening I plugged it all back in again to check progress. It's asking if I want to remove obsolete packages. I do, but neither the mouse nor the keyboard works! I can access the box via SSH like I normally do; is there any way I can re-enable the keyboard from there? I'm reluctant to restart the box (via SSH) mid-way through such a complicated upgrade. Thanks for any help!

    lsusb (with the wireless mouse/keyboard receiver unplugged):

        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    lsusb (with the wireless mouse/keyboard receiver attached):

        Bus 004 Device 005: ID 045e:005f Microsoft Corp. Wireless MultiMedia Keyboard
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
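    A hedged thing to try over SSH, since lsusb shows the receiver enumerating: reload the HID modules and watch dmesg. It is harmless to attempt, but skip it if modprobe reports the module is busy mid-upgrade:

        lsmod | grep -E 'usbhid|hid'                     # are the input modules loaded at all?
        sudo modprobe -r usbhid && sudo modprobe usbhid
        dmesg | tail -n 20                               # kernel messages after replugging the receiver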

    Read the article

  • Facing error: "Could not open a connection to your authentication agent."; trying to add ssh-key.

    - by Kaustubh P
    I use Ubuntu Server 10.04. ssh-add /foo/cert.pem gave the following output:

        Could not open a connection to your authentication agent.

    These are my running processes:

        ps -aux | grep ssh
        Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
        root      1523  0.0  0.0  49260  632 ?     Ss  Dec25  0:00 /usr/sbin/sshd
        root     10023  0.0  0.3 141304 6012 ?     Ss  12:58  0:00 sshd: padmin [priv]
        padmin   10117  0.0  0.1 141304 2400 ?     S   12:58  0:00 sshd: padmin@pts/1
        padmin   11867  0.0  0.0   7628  964 pts/1 S+  13:06  0:00 grep --color=auto ssh
        root     31041  0.0  0.3 141264 5884 ?     Ss  11:24  0:00 sshd: padmin [priv]
        padmin   31138  0.0  0.1 141264 2312 ?     S   11:25  0:00 sshd: padmin@pts/0
        root     31382  0.0  0.3 139240 5844 ?     Ss  11:26  0:00 sshd: padmin [priv]
        padmin   31475  0.0  0.1 139372 2488 ?     S   11:27  0:00 sshd: padmin@notty
        padmin   31476  0.0  0.0  12468  964 ?     Ss  11:27  0:00 /usr/lib/openssh/sftp-server

    These are my environment variables:

        $ env | grep SSH
        SSH_CLIENT=192.168.1.13 42626 22
        SSH_TTY=/dev/pts/1
        SSH_CONNECTION=192.168.1.13 42626 192.168.1.2 22

    What is wrong? Why can't I add any identities? Thanks.
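    Judging by the output quoted above, there is no ssh-agent in this session (no SSH_AUTH_SOCK variable, no agent process), which is exactly what produces that message. A minimal sketch of starting one for the current shell:

        eval "$(ssh-agent -s)"   # starts an agent and exports SSH_AUTH_SOCK / SSH_AGENT_PID
        ssh-add /foo/cert.pem
        ssh-add -l               # confirm the identity was added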

    Read the article

  • emule algorithm and how to get fast download speed.

    - by Benjamin
    Actually, I use aMule on Ubuntu; because eMule has many more users, I wrote "emule" in the title, but whether it's eMule or aMule doesn't matter much, since the two are very similar. I want to get download speeds as fast as I can, but I don't understand eMule's (or aMule's) detailed functions and algorithms, and these have always been very curious to me. If I provide higher upload speed or more valuable files to other people, do I get a benefit (in my download speed)? Is the server list important? Does it affect my download speed? I captured an image of my aMule window. Please explain these columns and share your tips for getting the fastest speed:

        What does 8/9+23 mean in the Source column?
        What does 294/300(1) mean in the Source column?
        What does QR:608(0) mean in the Priority column?
        What do I do to get download speed as fast as I possibly can?

    You can also explain other columns.

    Read the article

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
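    One pattern that tends to coexist peacefully with fail2ban and libvirt (a sketch of common practice, not a formal standard): keep hand-written rules in dedicated chains, jump to them from INPUT/FORWARD, and only ever flush your own chains, so the chains those tools maintain are never touched:

        # one-time setup
        iptables -N MY-INPUT
        iptables -A INPUT -j MY-INPUT          # appended, so fail2ban's inserted chains still run first
        # whenever your rules change, rebuild only the private chain
        iptables -F MY-INPUT
        iptables -A MY-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A MY-INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A MY-INPUT -p tcp --dport 80 -j ACCEPT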

    Read the article

  • Linux commands shows different results

    - by ClydeFrog
    I'm really having a hard time processing these results on my Ubuntu server. I have a major problem with my JBoss server where I get FileNotFoundExceptions along with "No space left on device" errors. I thought "maybe I'm out of disk space", and used the df command to figure out how much I have left:

        root@ubuntu1:/# df -h
        Filsystem                 Storlek Anvnt Tillg Anv% Monterat på
        /dev/mapper/ubuntu1-root      36G   13G   21G  38% /
        none                         2,0G  192K  2,0G   1% /dev
        none                         2,0G     0  2,0G   0% /dev/shm
        none                         2,0G   64K  2,0G   1% /var/run
        none                         2,0G     0  2,0G   0% /var/lock
        /dev/sda1                    228M   23M  193M  11% /boot
        /dev/mapper/vgdata-lvdata     79G  9,2G   66G  13% /data

    As you can see, I have plenty of space left. I also checked whether I'm out of inodes:

        root@ubuntu1:/# df -i
        Filsystem                   Inoder   IAnv    IFria IAnv% Monterat på
        /dev/mapper/ubuntu1-root   2346512  61992  2284520    3% /
        none                        505380    773   504607    1% /dev
        none                        507383      1   507382    1% /dev/shm
        none                        507383     30   507353    1% /var/run
        none                        507383      2   507381    1% /var/lock
        /dev/sda1                   124496    230   124266    1% /boot
        /dev/mapper/vgdata-lvdata 10486784 233945 10252839    3% /data

    But then I used du:

        root@ubuntu1:/# du -s -h /*
        7,5M  /bin
        23M   /boot
        19G   /data
        192K  /dev
        11G   /eniro
        5,3M  /etc
        112K  /home
        0     /initrd.img
        183M  /lib
        0     /lib64
        16K   /lost+found
        12K   /media
        4,0K  /mnt
        4,0K  /opt
        du: kan inte komma åt "/proc/20452/task/20452/fd/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/task/20452/fdinfo/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/fd/3": Filen eller katalogen finns inte
        du: kan inte komma åt "/proc/20452/fdinfo/3": Filen eller katalogen finns inte
        0     /proc
        18M   /root
        8,2M  /sbin
        4,0K  /selinux
        8,0K  /srv
        0     /sys
        40K   /tmp
        691M  /usr
        1,2G  /var
        0     /vmlinuz

    (The Swedish du errors just mean "cannot access /proc/.../fd/3: No such file or directory".) Notice that /data and /eniro are 30 GB combined! How is that possible? Do I have a leak somewhere, or is it something else?

    ----- EDIT 1 -----

    OK, I figured out that /data has its own mount, so it's not meaningful to add /data and /eniro together, because they aren't on the same mount. But how come du says 19G for /data when df says only 9,2G of that volume is used?
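    Two hedged checks that usually explain df/du disagreements and "No space left on device" despite apparently free space (the interpretation is a guess; the commands are standard):

        # files deleted while still held open by JBoss keep consuming blocks that df sees but du cannot
        lsof +L1 | grep -i deleted
        # a second filesystem mounted somewhere below /data would be counted by du but not by df for that volume
        mount | grep /data
        # du restricted to one filesystem versus the unrestricted walk
        du -sxh /data
        du -sh /data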

    Read the article

  • How to tell statd to use portmap on a non-localhost ipadress?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to two different networks (one public, one private), and I want it to provide an NFS share only for the private network. The host is Ubuntu 8.04; the private IP address is 192.168.1.202. I changed /etc/default/portmap to add:

        OPTIONS="-i 192.168.1.202"

    The command lsof -n | grep portmap returns:

        portmap 10252 daemon cwd  DIR  202,0     4096      2 /
        portmap 10252 daemon rtd  DIR  202,0     4096      2 /
        portmap 10252 daemon txt  REG  202,0    15248  13461 /sbin/portmap
        portmap 10252 daemon mem  REG  202,0    83708  32823 /lib/tls/i686/cmov/libnsl-2.7.so
        portmap 10252 daemon mem  REG  202,0  1364388  32817 /lib/tls/i686/cmov/libc-2.7.so
        portmap 10252 daemon mem  REG  202,0    31304  16588 /lib/libwrap.so.0.7.6
        portmap 10252 daemon mem  REG  202,0   109152  16955 /lib/ld-2.7.so
        portmap 10252 daemon 0u   CHR  1,3               960 /dev/null
        portmap 10252 daemon 1u   CHR  1,3               960 /dev/null
        portmap 10252 daemon 2u   CHR  1,3               960 /dev/null
        portmap 10252 daemon 3u   unix 0xecc8c3c0    4332992 socket
        portmap 10252 daemon 4u   IPv4 4332993           UDP 192.168.1.202:sunrpc
        portmap 10252 daemon 5u   IPv4 4332994           TCP 192.168.1.202:sunrpc (LISTEN)
        portmap 10252 daemon 6u   REG  0,12 289      3821511 /var/run/portmap_mapping

    I defined the following in /etc/hosts:

        192.168.1.202 server.local

    In /etc/default/nfs-common I changed STATDOPTS to:

        STATDOPTS="--name server.local"

    Yet when I run /etc/init.d/nfs-common start, it fails to start. The log shows:

        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting
        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags:
        Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp).

    An strace -f rpc.statd -n server.local results in a lot of lines, including this one:

        sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
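    A hedged reading of the strace line above: statd is trying to register with the portmapper at 127.0.0.1:111, which is exactly the address portmap stopped listening on once OPTIONS="-i 192.168.1.202" was set, so the registration fails. One alternative is to leave portmap listening everywhere and keep the public side out with the firewall instead (the interface name is an assumption, and statd/mountd also use additional dynamic ports, so this is a sketch rather than a complete NFS firewall):

        # /etc/default/portmap: drop the -i option so 127.0.0.1 works again
        OPTIONS=""
        # then block the RPC/NFS ports on the public interface, e.g. eth0
        iptables -A INPUT -i eth0 -p tcp --dport 111 -j DROP
        iptables -A INPUT -i eth0 -p udp --dport 111 -j DROP
        iptables -A INPUT -i eth0 -p tcp --dport 2049 -j DROP
        iptables -A INPUT -i eth0 -p udp --dport 2049 -j DROP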

    Read the article

  • Finding text in Ubuntu (gnome) terminal output

    - by Rickson
    Imagine this scenario: you run a command in a GNOME terminal, and the command produces a lot of output. After some time, you realize you need the value of a variable (let's say variable_needed) that was printed by the command somewhere in the terminal. How do you find it? The KDE terminal used to have a shortcut, Ctrl+Shift+F, which searched the terminal output. It seems that gnome-terminal doesn't have it (at least in Ubuntu 10.04.2 LTS). Is there any way of adding it? Is there any other good terminal I could use that has it? Note that the output has already been written, so I don't want to (and cannot) run the command again combined with grep, a pipe, vim, emacs, etc.
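    This doesn't help with output that is already on screen, but a hedged habit for future sessions is to capture the terminal to a file so it can be searched with ordinary tools:

        script -f ~/session.log                    # record everything printed in this session
        grep -n variable_needed ~/session.log
        # or, for one long-running command, keep a copy while still watching it live
        ./long_command | tee ~/long_command.log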

    Read the article

  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server with 4.5 TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I recently added a 60 GB SSD to the server, and I wish to use it to house the root partition (which is currently inside the LVM group). I don't want to simply add it to the LVM volume group, because (afaik) there's no way to ensure that the SSD will be used for the root filesystem; if I just throw it at the VG, it may end up holding my media, which would defeat the purpose of having the SSD in the first place. I feel that my only option is to somehow remove my root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group. My disk setup is as follows:

        60GB SSD: empty
        1TB HDD:  /boot, LVM space
        1TB HDD:  LVM space
        3TB HDD:  LVM space

    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a backup one for my network backups, etc. Does anyone have any advice on how to go about this? My end goal is to have the 60 GB SSD used for my boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
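    A hedged sketch of the usual migration path, done from a live CD/USB; the device and volume names below (/dev/sdd1, vgname-root) are placeholders:

        mkfs.ext4 /dev/sdd1                              # new root partition on the SSD
        mkdir -p /mnt/oldroot /mnt/newroot
        mount -o ro /dev/mapper/vgname-root /mnt/oldroot
        mount /dev/sdd1 /mnt/newroot
        rsync -aHAX /mnt/oldroot/ /mnt/newroot/          # copy the existing root across
        blkid /dev/sdd1                                  # note the UUID for /etc/fstab
        # edit /mnt/newroot/etc/fstab to point / at the SSD, then fix the bootloader:
        mount --bind /dev  /mnt/newroot/dev
        mount --bind /proc /mnt/newroot/proc
        mount --bind /sys  /mnt/newroot/sys
        chroot /mnt/newroot update-grub
        grub-install /dev/sdd
        # once the machine boots from the SSD, reclaim the old logical volume
        lvremove vgname/root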

    Read the article

< Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >