Search Results

  • Using Plesk to set up MySQL

    - by chris
    Having trouble getting MySQL up and running on a new virtual server. The host gave me Plesk, and I think MySQL is installed, but I can't seem to access it. I keep getting this:

        mysql -u admin -p
        Enter password:
        ERROR 1045 (28000): Access denied for user 'admin'@'localhost' (using password: YES)

    How do I make sure it's running properly? How do I reset the root password? (I have root access to the server.)
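
    A hedged sketch of the usual checks (assumes a classic Plesk box, where the MySQL admin password is kept in /etc/psa/.psa.shadow; verify that file exists before relying on it):

        # is mysqld running at all?
        service mysqld status            # "service mysql status" on Debian/Ubuntu

        # Plesk typically stores the admin password here
        mysql -u admin -p`cat /etc/psa/.psa.shadow`

        # last resort: reset root by starting without grant tables
        service mysqld stop
        mysqld_safe --skip-grant-tables &
        mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"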

  • How to retry connections with wget?

    - by Andrei
    I have a very unstable internet connection and sometimes have to download files as large as 200 MB. The problem is that the speed frequently drops to --.-K/s and sits there, while the process remains alive. I thought of just sending some KILL signals to the process, but as I read in the wget manual about signals, that doesn't help. How can I force wget to reinitialize itself and pick the download up where it left off after the connection drops and comes back up again? I would like to leave wget running, and when I come back, I want to see it downloading, not stuck waiting at --.-K/s.
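
    A hedged sketch of the usual workaround: a read timeout makes wget abort a stalled transfer, -c resumes from where it stopped, and infinite tries (or an outer loop) keep restarting it. The URL is a placeholder:

        wget -c --tries=0 --read-timeout=20 --retry-connrefused \
            --waitretry=10 http://example.com/big.iso

        # or, belt and braces, restart it from the shell until it finishes
        while ! wget -c -T 20 http://example.com/big.iso; do sleep 5; done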

  • Lighttpd-based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They sit behind an HSPA modem, each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any webpage I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine. The others do not respond to the DMZ treatment. Is it possible the web content on the hosts somehow requires ports other than their respective HTTP port? Or is it possible that even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set them up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.
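
    A quick way to test the port-80 suspicion, sketched (assumes shell access to the encoders and the usual lighttpd config layout):

        # what is lighttpd actually bound to?
        netstat -tlnp | grep lighttpd

        # what does the config say?
        grep -R "server.port" /etc/lighttpd/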

  • Migration of physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my low English level. I have been a trainee at a company for a month, and my mission is to migrate 3 physical servers to a virtualization technology. The company makes e-learning software, so there is a lot of data such as videos, Flash, and compressed (zip) files. Here is some inventory of the servers: OS: one Debian and two Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with the Virtualmin template to create the web sites dynamically, because there is no sysadmin. The future provider will be responsible for securing, updating, and creating the virtual machines (outsourcing), on a Red Hat OS. I would like help choosing a virtualization technology (I prefer KVM or Red Hat RHEV; VMware is expensive), evaluating the hardware needs (allowing for 4 or 5 years of growth), and drawing up a good plan so as not to forget anything. Thank you for your responses.

  • List installed packages with the repo they came from?

    - by Sandra
    With rpm it is possible to list installed packages with additional info:

        rpm -qa --queryformat "%-35{NAME} %-35{DISTRIBUTION} %{VERSION}-%{RELEASE}\n" | sort -k 1,2 -t " " -i

    which will produce something like:

        xorg-x11-drv-ur98   (none)      1.1.0-1.1
        xorg-x11-drv-vesa   CentOS-5    1.3.0-8.3.el5
        xorg-x11-drv-vga    (none)      4.1.0-2.1
        xorg-x11-drv-via    (none)      0.2.1-9

    On an Ubuntu server I would like to list all installed packages and show which repository each came from. Can that be done?
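
    A hedged sketch with apt's own tools (apt-cache policy prints the origin line for the installed version; the loop is slow but simple, and the package name is just an example):

        # repository details for a single package
        apt-cache policy openssh-server

        # approximate listing for everything installed
        dpkg-query -W -f='${Package}\n' | while read -r pkg; do
            origin=$(apt-cache policy "$pkg" | grep -m1 http)
            echo "$pkg $origin"
        done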

  • Why should I use a Puppet parameterized class?

    - by robbyt
    Generally when working with complex Puppet modules, I will set variables at the node level or inside a class, e.g.:

        node 'foo.com' {
          $file_owner = "larry"
          include bar
        }

        class bar {
          $file_name = "larry.txt"
          include do_stuff
        }

        class do_stuff {
          file { $file_name:
            ensure => file,
            owner  => $file_owner,
          }
        }

    How/when/why do parameterized classes help in this situation? How are you using parameterized classes to structure your Puppet modules?
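
    For comparison, a minimal sketch of the same module written with parameterized classes; the class and variable names come from the question, the defaults are assumptions:

        class do_stuff($file_name, $file_owner) {
          file { $file_name:
            ensure => file,
            owner  => $file_owner,
          }
        }

        class bar($file_owner = 'larry', $file_name = 'larry.txt') {
          class { 'do_stuff':
            file_name  => $file_name,
            file_owner => $file_owner,
          }
        }

        node 'foo.com' {
          class { 'bar': file_owner => 'larry' }
        }

    The point of the parameterized form is that the data a class needs is declared in its signature instead of leaking in through dynamic scope.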

  • Apache2 WebServer not allowing me to view website/files in /var/www

    - by CitadelCSAlum
    I used to be able to access websites/files that were stored in the directory /var/www. I have not used this for a while, but now I need to store media in this directory or in /var/www/images. I noticed that my Apache web server wasn't running correctly, so I did a complete package removal and then reinstalled, but I am still unable to access a test page index.html at /var/www/index.html by going to http://myipaddresshere/index.html. Is there some initial configuration I need to do to allow me to store HTML and media files in this directory and access them from the browser? I don't remember having to do anything before.
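
    A hedged checklist for this situation (assumes the Debian/Ubuntu Apache layout):

        # is apache running and listening on port 80?
        sudo service apache2 status
        sudo netstat -tlnp | grep :80

        # where does the enabled site actually point?
        grep -R "DocumentRoot" /etc/apache2/sites-enabled/

        # files must be readable by the apache user (www-data)
        ls -l /var/www/index.html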

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. In my CPU usage graph, the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute marks; at 15:00 I removed the 39-minute entry from cron, so a cleanup job twice the size runs half as often (the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations show the same pattern. At the peak, when about 14,000 sessions were active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one CPU core and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second, so why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual "fuser" process (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
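
    A sketch of the edit the asker plans, with the fuser check dropped (assumes no long-running process needs to hold session files open; fuser forks once per session file, which is the suspected cost):

        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] \
            && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
               -cmin +$(/usr/lib/php5/maxlifetime) -delete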

  • Run a service after networking is ready on Ubuntu?

    - by TK Kocheran
    I'm trying to start a service that depends on networking being started whenever the computer is rebooted. I have a few questions: Is this easily possible from an /etc/init.d script? I have tried creating a script there (conforming to the standards), but I'm really doubtful that it's even running on boot, let alone working; when I test it manually, it works. I've also seen the new Upstart service, but as far as how that actually works, I'm completely in the dark. How can I make a script that runs on boot after networking has been started? If I could run it after connecting to a wireless network, even better. :)
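
    A minimal Upstart sketch for the "after networking" case (assumes an Upstart-era Ubuntu; the job name and command are placeholders):

        # /etc/init/myservice.conf
        description "run after the network comes up"
        start on (local-filesystems and net-device-up IFACE!=lo)
        stop on runlevel [016]
        exec /usr/local/bin/myservice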

  • Disaster - partitions lost, data seemingly alive, how to recover?

    - by a2h
    I've used TestDisk, and it has written an old partition structure: a ~20GiB partition for Vista, a ~25GiB partition for 7 (which now shows up as unallocated), and a ~400GiB partition for documents. What it's meant to be is a 30GiB partition for 7, some unallocated space, and a ~400GiB partition for documents. So currently I have access to all my documents, but not to any of the programs I've installed on C:, or to AppData, because my boot partition is now supposedly a 20GiB Vista partition. I've tried using my Windows 7 install disc's repair function, but that did nothing beyond wasting about 10 minutes of my time. I'm currently posting from an Ubuntu live CD. Any help?

  • Getting Thunderbird to rescan an IMAP folder

    - by asdmin
    Since I got an external program (imapfilter) modifying my IMAP folders, Thunderbird keeps losing track of new messages. Messages are moved into sub-folders upon arrival, leaving Thunderbird unable to keep track of them, so I have no clue which folders to check for new messages, and newly created folders (even if I subscribe to them after creation) do not show up until I restart the mail client. Is there any extension or setting for Thunderbird which I could use to trigger a re-scan of my folder tree? Please don't waste time on advice like restarting Thunderbird (takes a great amount of time), "use Evolution (or any other mail client)", using internal mail filters (they are not sophisticated enough), or procmail/fetchmail (I'm arranging a remote IMAP server for good). EDIT 1: folders can even be created in the background without Thunderbird knowing they have been created.

  • Rescue data from damaged hard disk

    - by Lexsys
    Hello. I have a 500 GB hard drive with one NTFS partition on it. I can mount it with Ubuntu and view the contents, but when I try to copy something, I get an I/O error. OK, I tried to make an image of it with dd: I/O error as soon as it starts. I have installed ddrescue, but its manual page says not to use it with drives failing on I/O. Can I manage to get some information off this drive, and how do I do it?
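
    A hedged sketch with GNU ddrescue, which is built precisely for drives that throw I/O errors; if a man page really warns against that, double-check whether it belongs to GNU ddrescue or the older, distinct dd_rescue. The device name and output paths are placeholders, and the image must live on a healthy disk:

        # first pass: copy everything readable, skip bad areas quickly
        ddrescue -n /dev/sdb drive.img drive.map

        # second pass: go back and retry the bad areas a few times
        ddrescue -d -r3 /dev/sdb drive.img drive.map

        # then work from the image, read-only
        mount -o loop,ro drive.img /mnt/rescue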

  • Securing communication between 2 Linux servers on a local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Communication between the two servers can thus be established by: (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers). Which method is more secure? In the first case, the needed ports are open to the outside world but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers (so if the server presenting the DMZ is compromised, the attacker could reach every port over the cross-connect cable). Is there any conventional wisdom on how to secure communication between two servers for ports only these servers need access to?
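
    A third option, sketched below: leave iptables enabled on the cross-connect NICs but scope the rules to the peer's address and just the ports the application needs. Interface names, addresses, and the port are placeholders:

        # on the database server; eth1 is the cross-connect NIC
        iptables -A INPUT -i eth1 -s 10.0.0.1 -p tcp --dport 5432 -j ACCEPT
        iptables -A INPUT -i eth1 -j DROP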

  • Apache: list of all directives for a context?

    - by ajsie
    In the Apache online documentation, each directive belongs to a context, e.g. server config, virtual host, directory, .htaccess, and so on. I wonder if there is a list of all directives belonging to each context, e.g. a list with all directives for the virtual host context, so I know exactly which ones I can use? Also, where can I find the directives for Apache modules: on the main documentation page, or does each module have its own page of documentation (e.g. mod_rewrite)?

  • Ubuntu wired network (ethernet does not work)

    - by user36777
    It was working just fine until the other day when I yanked the cable out. The wireless works just fine on the same router, and if I log in to a Windows 7 instance on this dual-boot laptop, the ethernet works just fine. So it's not a hardware, cable, or router issue. The card even gets an IP, but I can't connect to the internet. Details from route, iptables, ifconfig, ping, etc. are here: http://pastie.org/954816. Any ideas? I have been struggling with this for days; no one seems to have an answer.
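
    A hedged checklist of the usual diagnostics when an interface has an address but no connectivity (the interface name and gateway address are placeholders):

        ip addr show eth0         # does it really hold an address?
        ip route                  # is there a default route, and via which device?
        ping -c3 192.168.1.1      # can the gateway be reached over the wire?
        ping -c3 8.8.8.8          # routing works, maybe DNS doesn't
        cat /etc/resolv.conf      # nameservers present?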

  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have an image in my head where a link is too slow to support real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <-> master setup where, when I write a file to server A, the metadata transfers to server B immediately, and the file transfers when the link is idle, or immediately if server B's client tries to read the file before server A has sent it. It seems that there are many filesystems which perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.

  • How to set ulimits for a service starting at boot?

    - by jayofdoom
    For MySQL to use large pages, I need to set a ulimit, which I've done in limits.conf. However, limits.conf (pam_limits.so) is only read for "real" shells, not for init. I solved this before by adding a "ulimit -l" call to the initscript's start function, but I need some repeatable way to do this now that the boxes are managed with Chef, and we don't want to take over a file that's actually owned by the RPM.
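
    One hedged sketch, relying on the Red Hat convention that many initscripts source a matching /etc/sysconfig file; check that the RPM's /etc/init.d/mysqld actually does so before using it. If it does, Chef can own the drop-in without touching the RPM-owned script:

        # /etc/sysconfig/mysqld  -- deployed by chef
        ulimit -l unlimited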

  • Proper way to configure ~/.Xsession with a standalone window manager to gracefully end a session

    - by cYrus
    I'm using xdm, and my ~/.Xsession looks like this:

        # <initialization stuff here>
        exec openbox

    It works, but I've noticed that when I log out, Openbox doesn't gracefully close all the applications; in particular, Google Chrome complains about it. How can I make sure to wait for all processes to exit (just like other setups: GNOME, KDE, Windows, ...)? The only (ugly) solution that I've found involves sleep and kill in ~/.Xsession.
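
    A best-effort sketch along the lines the asker already found, but asking clients to close themselves instead of killing them (assumes wmctrl is installed; this is not a real session manager):

        # ~/.Xsession
        # <initialization stuff here>
        openbox &
        wait $!                       # returns when openbox exits (logout)
        # politely ask every remaining client window to close
        wmctrl -l | awk '{print $1}' | xargs -r -n1 wmctrl -i -c
        sleep 2                       # give clients a moment to shut down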

  • Redirect non-www SSL traffic to www SSL (Apache)

    - by The NinjaSysadmin
    Hello, I'm attempting to get a redirect which is failing, and for some reason I can't think today. I have a vhost file within httpd that listens on standard port 80 and port 443. I'm attempting to redirect https://domain.com/(.*) to https://www.domain.com/$1 so that the URL remains intact. My config is as follows:

        ServerName www.domain.com
        ServerAlias tempdomain.testdomain.co.uk
        ServerAlias domain.com

    The rewrite rule I'm using is:

        RewriteCond %{HTTP_HOST} ^domain.com$
        RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L]

    I've also tried removing the . and $, but nothing. When I visit https://domain.com/secure.page?action=comp it doesn't redirect to https://www.domain.com/secure.page?action=comp. I do also have other SSL pages; the above was just an example. Can anyone point out my stupidity?
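
    A hedged sketch of the usual shape of this fix. The key questions are whether the rule actually lives in the vhost that answers on 443 (a rule only in the port-80 section never sees https requests) and whether mod_rewrite is enabled there:

        <VirtualHost *:443>
            ServerName www.domain.com
            ServerAlias domain.com
            # ... SSL directives ...
            RewriteEngine On
            RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
            RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L]
        </VirtualHost>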

  • How can I use '{}' to redirect the output of a command run through find's -exec option?

    - by pkaeding
    I am trying to automate an svnadmin dump command for a backup script, and I want to do something like this:

        find /var/svn/* \( ! -name dir -prune \) -type d -exec svnadmin dump {} > {}.svn \;

    This seems to work, in that it looks through each svn repository in /var/svn and runs svnadmin dump on it. However, the second {} in the exec command doesn't get substituted with the name of the directory being processed; it basically just results in a single file named {}.svn. I suspect that this is because the shell interprets > as ending the find command and tries redirecting stdout from that command to the file named {}.svn. Any ideas?
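
    The suspicion is right, and the classic workaround is to let a child shell do the redirection, so > is evaluated once per repository rather than once by the outer shell; a sketch:

        find /var/svn/* \( ! -name dir -prune \) -type d \
            -exec sh -c 'svnadmin dump "$0" > "$0.svn"' {} \;

    Here $0 inside the child shell receives the directory name that find substitutes for {}.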

  • How do I tell which kernel module is servicing a /dev device?

    - by regulatre
    How do I find out which kernel module (as seen by typing lsmod) is servicing a particular device in /dev ? In other words, say I have a device, /dev/mouse0 and I want to find out which kernel module is installed to service that device. How do I do that? Another way to look at this is, some loaded kernel modules associate themselves with a device in /dev. How does one find out which device(s) a module is "attached" to?
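
    A hedged sketch of two common approaches (the device path is just an example; not every device node has a module behind it):

        # walk the sysfs ancestry udev sees; the DRIVERS== lines name drivers
        udevadm info --attribute-walk --name=/dev/input/mouse0 | grep DRIVERS

        # or follow the sysfs symlinks by hand
        readlink /sys/class/input/mouse0/device/driver/module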

  • How can I redirect HTTP(S) traffic to another gateway?

    - by PsyStyle
    I have a network like 192.168.0.0/15 with the default gateway set to 192.168.0.1. All the workstations on the network use this gateway for all kinds of access to the Internet. Now I am testing a new Internet connection with another provider, and for this I am using a second gateway on the same subnet with the IP address 192.168.0.2. I want to redirect only HTTP and HTTPS traffic to this second gateway without touching the default gateway address set on every workstation. How can I accomplish this task? What do I have to change in the first gateway's firewall configuration or routes? I tried a DNAT like:

        DNAT   loc:192.168.0.1   loc:192.168.0.2   tcp   80

    but nothing worked. I use Shorewall for simplicity of configuration, but I can understand even theoretical answers, which I will try to adapt to my case.
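
    DNAT rewrites the destination address of a packet, not its next hop, which would explain why the rule had no effect. The usual mechanism is policy routing on the first gateway, sketched here with raw iptables/iproute2 (Shorewall has its own syntax for marks and providers; the table number and mark value are arbitrary):

        # mark web traffic arriving from the LAN
        iptables -t mangle -A PREROUTING -s 192.168.0.0/15 -p tcp \
            -m multiport --dports 80,443 -j MARK --set-mark 1

        # route marked traffic via the second gateway
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.0.2 table 100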

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, php, mysql, etc.), but I am being denied access via SFTP when using FileZilla to upload a file. This is my second time installing the server, as I missed a section the first time. I was able to connect to my server through SFTP in FileZilla the first time, and the things I missed were adding a new user and editing the iptables rules in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allow SSH connections
        # The --dport number should be the same port number you set in sshd_config
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is a line I can put in there which allows SFTP over port 22. Thank you for reading this.
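
    Worth noting: SFTP is a subsystem of SSH, so it rides the same port 22 this file already accepts; no extra rule should be needed, and the failure is more likely in the user or key setup. A quick hedged sanity check:

        # is 22 really open, and is the sftp subsystem enabled?
        sudo iptables -L INPUT -n | grep dpt:22
        grep -i sftp /etc/ssh/sshd_config   # expect something like: Subsystem sftp /usr/lib/openssh/sftp-server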

  • Formatting LVM partition with HFS+

    - by Pyetras
    I've already tried fsck.hfsplus from hfsprogs, which doesn't do anything at all, and GParted (which doesn't work with LVM). Are there any other ways to do this? If all else fails I have an OS X install DVD, but I'm not sure its installer would see an LVM partition (and running it just to check would be quite troublesome, as I don't have a DVD drive at the moment).
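
    A hedged sketch with mkfs.hfsplus, which hfsprogs also ships (fsck.hfsplus only checks an existing filesystem, it doesn't create one; the volume path and label are placeholders):

        # create an HFS+ filesystem directly on the logical volume
        mkfs.hfsplus -v "Media" /dev/mapper/vg0-lv_hfs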
