Search Results

Search found 30329 results on 1214 pages for 'ubuntu forums'.

  • How to debug "MySQL server has gone away"?

    - by fefe
    I have a virtual machine (Ubuntu 12.04, MySQL 5.5) running under VMware, dedicated to hosting a MySQL server. I connect to this server on its internal IP. I'm trying to find out why I get a "MySQL server has gone away" error; Apache on one of my Windows machines stops because of this issue. I have been trying to fine-tune my my.cnf with the following parameters, but they did not bring the desired result:

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 0.0.0.0
        #
        # * Fine Tuning
        #
        wait_timeout = 180
        key_buffer = 384M
        max_allowed_packet = 64M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 500
        table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 32M

    How do I debug this issue? What is missing from the configuration to avoid this error?
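
    One way to start debugging (a sketch, not a fix): compare the server's effective timeout and packet settings against what the clients assume, since "server has gone away" usually means the connection was closed by a timeout or an oversized packet was rejected:

        # show the settings the server is actually running with
        mysql -u root -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
          ('wait_timeout','interactive_timeout','max_allowed_packet');"
        # a climbing Aborted_clients counter points at timeouts rather than crashes
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Aborted%';"

    With wait_timeout = 180, any client that sits idle for more than three minutes (e.g. a pooled Apache connection) will find its connection gone on the next query, which matches the Apache symptom described above.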

  • Multiple SSH private keys for the same host

    - by Sencha
    How can I store 2 different private SSH keys for the same host? I have tried 2 entries in /etc/ssh/ssh_config for the same host with different keys, and I've also tried to put both keys in the same file and reference it from one Host setting, but neither works. More detail: I'm running Ubuntu Server (12.04) and I want to connect to GitHub via SSH to download the latest source for my projects. There are multiple projects running on the same server, and each project has a GitHub repo with its own unique deployment key pair. So the host is always the same (github.com), but the keys need to be different depending on which repo I'm using. Different /etc/ssh/ssh_config versions I have tried:

        Host github.com
            IdentityFile /etc/ssh/my_project_1_github_deploy_key
            StrictHostKeyChecking no

        Host github.com
            IdentityFile /etc/ssh/my_project_2_github_deploy_key
            StrictHostKeyChecking no

    and this with both keys in the same file:

        Host github.com
            IdentityFile /etc/ssh/my_project_github_deploy_keys
            StrictHostKeyChecking no

    I've had no luck with either. Any help would be greatly appreciated!
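
    A common workaround (a sketch; the alias names are invented) is to give each key its own host alias, because ssh selects the matching Host block by the name you connect to, not by the resolved hostname:

        Host github-project1
            HostName github.com
            User git
            IdentityFile /etc/ssh/my_project_1_github_deploy_key
            StrictHostKeyChecking no

        Host github-project2
            HostName github.com
            User git
            IdentityFile /etc/ssh/my_project_2_github_deploy_key
            StrictHostKeyChecking no

    Each repo's remote then uses its alias, e.g. git clone git@github-project1:account/repo1.git, so ssh offers the right deploy key per project.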

  • Two my.cnf files in different locations, why?

    - by Mellon
    On my Ubuntu machine, I have installed MySQL. I notice that there is an /etc/my.cnf file containing only two lines:

        innodb_buffer_pool_size = 1G
        max_allowed_packet = 512M

    while there is also an /etc/mysql/my.cnf with long content like:

        # The MySQL database server configuration file.
        ...

    To me, both look like configuration for the MySQL server. Why are there two my.cnf files in different locations? Can't the content be merged into one my.cnf? What is the purpose of having separate my.cnf files for the MySQL server?
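
    MySQL reads a fixed list of option files in order, with later files overriding earlier ones, so both files take effect. You can ask the server which files it reads and what the merged result is (assuming the standard Ubuntu packages):

        # list the option files mysqld reads, in the order it reads them
        mysqld --help --verbose 2>/dev/null | grep -A1 'Default options'

        # print the merged options that would actually apply
        my_print_defaults mysqld

    A likely explanation for the split: /etc/mysql/my.cnf is the file shipped and managed by the Ubuntu package, while the small /etc/my.cnf was added locally to hold overrides that package upgrades won't touch.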

  • Minimize VirtualBox hard disk size

    - by Aviv
    I have Ubuntu Server 10.04 LTS installed on a virtual machine in VirtualBox. The hard drive is a dynamically growing one with a 32 GB maximum. At the beginning I had 4 GB of data on the hard drive and the size of the .vdi was 4 GB. Lately the data on the disk is 15 GB, but the size of the .vdi is almost 32 GB. Why is that? How can I pack / optimize / defrag the HD so it will be the same size as the data on the disk? Thanks.
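
    A dynamically allocated .vdi grows whenever blocks are written, but never shrinks on its own: deleted files leave stale, non-zero blocks behind. The usual recipe (a sketch; the .vdi path is a placeholder) is to zero the free space inside the guest, then compact the image on the host:

        # inside the guest: fill free space with zeros, then remove the filler
        sudo dd if=/dev/zero of=/zerofill bs=1M; sudo rm -f /zerofill

        # on the host, with the VM shut down: compact the image
        VBoxManage modifyhd /path/to/UbuntuServer.vdi --compact

    --compact only reclaims blocks that are entirely zero, which is why the dd pass comes first.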

  • Slow write speeds on new Gigabit home file server

    - by Ryan Holder
    So I finally got all my parts delivered this week to set up a home file/backup server. It's currently running Ubuntu Server and I'm using Samba to share files on my network. The server has a 2TB WD Green drive in it, connected to an Asus M5A78L-M motherboard, which is in turn connected via CAT6a to my new Gigabit switch (TP-Link TL-SG1005D). My home desktop is also connected to this switch, again through CAT6a cable. Currently, when transferring files, I get a steady 100 MB/s reading from the server to my Windows machine. When copying from my Windows machine to the server, I get around 30-38 MB/s. I know this drive is capable of faster speeds, so would anybody have an idea of where the bottleneck is? Any help would be greatly appreciated :) EDIT: I have found that FTP's write speed is much closer to my Samba read speed, so I'm going to guess it's a software problem rather than hardware.
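
    Since plain FTP writes much faster, one way to pin the blame on Samba is to measure the network and the disk separately (a sketch; the host name and path are placeholders):

        # raw TCP throughput between desktop and server, no disks involved
        iperf -s                      # run on the server
        iperf -c fileserver           # run on the desktop

        # raw sequential write speed on the server, no network involved
        dd if=/dev/zero of=/mnt/share/testfile bs=1M count=2048 conv=fdatasync

    If both of those come in near full speed, the next place to look is smb.conf tuning (for example the socket options setting), since the hardware path is then ruled out.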

  • API server not functioning ["The connection has been reset"]

    - by Miguel Beltrán
    I'm having some trouble with one of my servers. I've built an application with two servers: a frontend, and an API server (Ubuntu Server) that the frontend grabs its data from. Yesterday we had a lot of visits and the API server stopped functioning, but:

    - I can do stuff in MySQL over SSH.
    - Memory usage is OK.
    - The logs are OK.
    - Bandwidth usage is OK.
    - If I restart the server or Apache2, it works for some time (3-4 minutes).

    Most importantly, I think: if I try to access the API (it's REST-style over HTTP), Firefox gives me the error "The connection has been reset". I've tried restarting the server, restarting Apache2, restarting MySQL, and viewing the Apache2/MySQL logs. I don't know much about systems, so I don't know what more to do.
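
    Working for a few minutes after a restart and then resetting connections often means Apache is exhausting its worker slots while requests pile up; a few low-risk checks (a sketch, assuming the standard Apache2 layout on Ubuntu):

        # watch for MaxClients or resource errors while the API dies
        tail -f /var/log/apache2/error.log

        # count connections on port 80 while it is failing
        netstat -ant | grep ':80 ' | wc -l

        # see which MPM is in use and what its limits are
        apache2ctl -V
        grep -B1 -A8 'mpm_prefork_module' /etc/apache2/apache2.conf

    If the error log shows "server reached MaxClients setting" at the moment of failure, raising that limit, or finding what is holding workers open (such as slow MySQL queries), is the direction to dig.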

  • Ethernet port sleeping on PS3 running Linux

    - by Doug
    My lab has a PS3 running Ubuntu Linux 9.04 Server Edition. After a period of a few hours with no use, the Ethernet connection (eth0) seems to go to sleep, causing the connection to be lost. Pinging or trying to SSH into the machine results in no response. The fix I've been using is to access the machine locally and restart it (trying to bring eth0 down then up doesn't seem to correct it). I've tried setting up an hourly cron job that runs on the PS3 and pings another machine just to create network activity, but this doesn't seem to solve the problem either. Update: The solution was to run the above cron job much more frequently: every 10 minutes works.
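
    For reference, the working cron entry looks like this (a sketch; the target IP is a placeholder for any always-on machine in the lab):

        # crontab -e on the PS3: generate network activity every 10 minutes
        */10 * * * * ping -c 3 192.168.1.1 > /dev/null 2>&1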

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers are showing we have an incomplete chain. We have tried the following to resolve this:

    - Upgraded HAProxy to the latest 1.5.3.
    - Created a concatenated .pem file containing all the certificates (site, intermediate, with and without root).
    - Added an explicit ca-file attribute to the bind line in our haproxy.cfg file.

    The .pem file verifies OK using openssl. The various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone advise on anything else we can check, or any other changes we can make to try to fix this? Many thanks in advance... UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

        bind *:443 ssl crt /etc/ssl/domainaname.com.pem
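
    Two checks worth running (a sketch; substitute the real domain): confirm which chain HAProxy actually serves, and confirm the bundle is ordered the way TLS expects, i.e. server certificate first, then intermediates, working toward the root:

        # show the chain exactly as HAProxy presents it
        openssl s_client -connect example.com:443 -servername example.com -showcerts < /dev/null

        # list the subjects of every certificate in the bundle, in file order
        openssl crl2pkcs7 -nocrl -certfile /etc/ssl/domainaname.com.pem |
          openssl pkcs7 -print_certs -noout

    A bundle that passes openssl verify can still be mis-ordered for serving; checkers report an incomplete chain when the leaf is not first, or when an intermediate is missing from what s_client shows.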

  • What's a fast way to copy a lot of files from an internal hard-drive to external (USB) storage?

    - by jonathanconway
    I have a large amount of data - about 500 GB - on the internal hard drive of a desktop PC. This includes music, videos, PDFs... you name it. I want to copy everything to an external USB hard drive (1.5 TB capacity). The desktop PC runs Ubuntu. To begin with, I simply plugged in and mounted the hard drive and dragged the top-level folder onto the drive. It started copying, but it seems to be proceeding very slowly: about 10 minutes later, it has only done about 500 MB. I'm sure this is slower than what I could achieve with less total data. So I'm wondering if there's a quicker way of doing this. Would it be better to copy it in portions of 500 MB or so, rather than all at once?
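
    One common alternative to drag-and-drop is rsync, which avoids the file manager's overhead and, more usefully, can resume after an interruption (a sketch; the paths are placeholders):

        # -a preserves metadata, -h prints human-readable sizes
        rsync -ah --progress /home/user/ /media/external/backup/

    Re-running the same command after an unplug or crash copies only what is missing, so a 500 GB job doesn't have to start over.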

  • Secure browsing, how? [closed]

    - by Jhonny Bigodes
    Possible Duplicate: How to browse safely? What's the best way to browse "suspicious" sites safely? I know Firefox used to be "the thing", but now I don't think it is (IMHO). What I'm using now is a virtual machine (with VirtualBox) that I periodically format. I heard some time ago of a project that glued the two together (kind of... every time you start the program it uses a fresh machine with a fresh browser), but I lost track of it. So my question is: how can I browse the web securely? P.S.: I'm on Ubuntu.

  • How to take a MySQL replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu Server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I have the following questions, which are keeping me from proceeding:

    - Is scheduling a mysqldump backup on the masters safe for replication?
    - Is connecting to the masters with GUI applications (Workbench) for database manipulation (reads and writes by developers) safe?

    Any input is welcome.
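
    A common pattern is to point the dump at a slave rather than a master, so locking never stalls production writes (a sketch; the flags are standard mysqldump options, the path is a placeholder):

        # consistent InnoDB snapshot without blocking the masters;
        # --master-data=2 records the binlog position as a comment
        mysqldump --all-databases --single-transaction --master-data=2 \
          --quick | gzip > /backup/all-$(date +%F).sql.gz

    Note that --single-transaction gives a consistent snapshot only for InnoDB tables; for MyISAM you would need --lock-all-tables, which is one more reason to run the job against a slave.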

  • Gitosis installation of public key not working...

    - by user29600
    I've been following this tutorial to install and set up git on Ubuntu Server 10.04, using Windows 7 as a client. However, after finally figuring out how it works (I had executed gitosis-init a bunch of times on the wrong key), I copied the id_rsa.pub file over to the server's /tmp folder and ran it again. Unfortunately it still doesn't work: when I execute git clone [email protected]:gitosis-admin.git, it asks for gitosis's password rather than my RSA passphrase. I'm assuming it's the same problem this guy is having here... However, after following his instructions - purging git-core and gitosis, manually removing the /srv/gitosis folder, and following the tutorial again (with the proper id_rsa.pub file this time) - I'm still having the same issue. Does anyone know what I'm doing wrong? Is there any way to probe for more information that might help in solving this?
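
    For comparison, the canonical sequence from that kind of tutorial feeds the key to gitosis-init as the gitosis user; if cloning still prompts for a password, the server is rejecting the key the client offers, which ssh's verbose mode will show (a sketch; the hostname is a placeholder):

        # on the server: initialise gitosis with the client's public key
        sudo -H -u gitosis gitosis-init < /tmp/id_rsa.pub

        # confirm the key actually landed in gitosis's authorized_keys
        sudo cat /srv/gitosis/.ssh/authorized_keys

        # from the client: see which identity is offered and why it fails
        ssh -v gitosis@yourserver

    A frequent trap on Windows clients is that the id_rsa.pub copied to the server doesn't match the private key the client's ssh (or PuTTY/plink) actually uses.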

  • Burning iso images with wodim loses 2048 bytes at the end

    - by Grumbel
    If I burn an ISO image with:

        wodim -data dev=/dev/scd0 in.iso

    and then read it back out with:

        dd if=/dev/scd0 of=out.iso

    the resulting files are not identical: out.iso is 2048 bytes shorter than in.iso. What is going on here and how can I fix it? Using Ubuntu 10.04 and wodim 1.1.10. PS: dd always ends with an input/output error, not just with this CD, but with all of them. I think it's just a limitation of dd, but an explanation of why it happens and how to avoid it would be welcome as well.
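
    Many drives cannot read right up to the end of a track: the kernel's read-ahead runs into unreadable run-out sectors, which is why dd both errors out and comes up short. A common workaround (a sketch) is to read exactly as many 2048-byte sectors as the original image contains:

        # number of 2048-byte sectors in the source image
        blocks=$(( $(stat -c %s in.iso) / 2048 ))

        # stop exactly at the end of the data, never touching the lead-out
        dd if=/dev/scd0 of=out.iso bs=2048 count=$blocks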

  • Which FTP Daemon should I use if I want to use MySQL for authentication?

    - by wag2639
    We want to set up an FTP daemon on our Ubuntu 10.04 server with a simple (probably custom-built) web interface, using MySQL for authentication. It'll be public-facing, but only intended for use by a few customers or clients. I know of vsftpd, ProFTPd, and Pure-FTPd, but I'm not sure which is best for this application. The main features we would like:

    a. Very good MySQL authentication integration
    b. The ability to specify, through MySQL, a list of folders/files (folder level is sufficient) each user has access to

    Anything else would just be sprinkles on top.
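
    As one data point, Pure-FTPd ships a MySQL backend configured from a single file, which covers both requirements since each user's home directory comes from a query (directive names follow Pure-FTPd's mysql.conf; the database, table and column names here are invented):

        # /etc/pure-ftpd/db/mysql.conf
        MYSQLServer    127.0.0.1
        MYSQLUser      ftpauth
        MYSQLPassword  secret
        MYSQLDatabase  ftpusers
        MYSQLCrypt     md5
        # \L is replaced with the login name at authentication time
        MYSQLGetPW     SELECT password FROM users WHERE login='\L'
        MYSQLGetDir    SELECT homedir  FROM users WHERE login='\L'
        MYSQLGetUID    SELECT uid      FROM users WHERE login='\L'
        MYSQLGetGID    SELECT gid      FROM users WHERE login='\L'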

  • Bandwidth-Hogging Linux Server Causing Trouble

    - by BlairHippo
    We have a Linux server (2.6.28-11-generic #42-Ubuntu) that's misbehaving on a client site, gobbling up an entirely unacceptable percentage of the client's bandwidth, and we're trying to figure out what the heck it's doing. And the guy who had the sysadmin skillset has yet to be replaced. We're at a loss for what could be causing all that network traffic, and need to figure it out SOON. What log files should I be looking at to find this information? What analysis tools would you recommend for this task? Please note that I'm not looking for a tool that will allow me to analyze FUTURE traffic. The client is on the verge of shutting the machine off entirely; I need to figure out what it's been doing with the data I already have, if that's at all possible. My thanks in advance for helping a development monkey play sysadmin.
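
    For traffic that has already happened, the web server's access logs and the kernel's per-interface counters are the main evidence you already have (a sketch; the awk field assumes Apache's common/combined log format, where the response size is the 10th field):

        # total bytes served according to the Apache access log
        awk '{sum += $10} END {printf "%.2f GB\n", sum/1024/1024/1024}' \
          /var/log/apache2/access.log

        # bytes received/transmitted per interface since boot
        cat /proc/net/dev

    If the access-log total is nowhere near the bandwidth being billed, the traffic is coming from something other than the web server, and netstat -tupan is the next stop to see which processes hold connections.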

  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but running incredibly slowly as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc to see if I can find any issues, the only strange thing is a bunch of these errors, occurring at about the same time as the downtime: [apc-warning] Unable to allocate memory for pool. in [file] on line 49. And a bit later on: [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746. Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?
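
    Those warnings mean APC's shared-memory segment filled up; once it is full, PHP processes thrash on cache fragmentation and garbage collection, which on its own can produce exactly this kind of slowdown. The usual mitigation is a larger segment (a sketch; the value is only an example, and older APC builds expect a bare number of megabytes):

        ; /etc/php5/conf.d/apc.ini
        apc.shm_size = 128M

    Whether APC was the root cause or just a symptom of general memory pressure is worth checking in parallel (e.g. free -m and the Apache process count during the next incident).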

  • Long lag and errors clicking Gmail links in Google Chrome

    - by Doug T.
    I recently downloaded Google Chrome on Ubuntu 9.04 and love it. Ironically, one site I consistently have issues with is Gmail. When I enter Gmail I will click a link, say an email or the inbox link. After a very long wait (on the order of 30 seconds to a couple of minutes) my page will load with an error such as: "Some Gmail features have failed to load due to an Internet connectivity problem. If this problem persists, try reloading the page, using the older version, or using basic HTML mode. Learn More." Googling the symptoms has not helped. Has anyone else had any similar issues? Has anything helped?

  • VSFTPD: Cannot figure this thing out...

    - by A Wizard Did It
    Alright, I've been giving this my best, reading through various tutorials on Google, but I cannot seem to get vsftpd running the way I want. For a short while I had it working with one account, but then that stopped and I haven't been able to get it to work since. I've since reformatted and reinstalled Ubuntu 10.04 LTS. I used apt-get install vsftpd and that's where I am now... I'd really appreciate it if anyone could help me understand exactly how this is supposed to work... How do I add FTP accounts and set their home directory to something like /var/www/public_html?
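
    A minimal local-users setup looks like this (a sketch; the username is invented, and the directives are standard vsftpd.conf options):

        # create a user whose home is the web root, with no login shell
        sudo useradd -d /var/www/public_html -s /usr/sbin/nologin webftp
        sudo passwd webftp

        # /etc/vsftpd.conf - allow local users, allow writes, jail them to $HOME
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES

        sudo service vsftpd restart

    One common trap: if logins fail despite a valid password, vsftpd's PAM configuration may require the account's shell to be listed in /etc/shells.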

  • df says disk is full, but it is not

    - by Chris
    On a virtualized server running Ubuntu 10.04, df reports the following:

        # df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.4G  7.0G     0 100% /
        none                  498M  160K  498M   1% /dev
        none                  500M     0  500M   0% /dev/shm
        none                  500M   92K  500M   1% /var/run
        none                  500M     0  500M   0% /var/lock
        none                  500M     0  500M   0% /lib/init/rw
        /dev/sda3             917G  305G  566G  36% /home

    This is puzzling me for two reasons: 1) df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity of which only 7.0 gigabytes are in use, yet it reports / being 100 percent full; and 2) I can create files on /, so it clearly does have space left. Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home). Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later.
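
    The 7.4G/7.0G gap is most likely the ext filesystems' reserved root blocks (5% by default, which is also why root can still create files), and a disk can likewise report full with bytes to spare when it runs out of inodes or when deleted files are held open; some quick checks:

        # inode usage - 100% here explains "full" with free bytes
        df -i /

        # space still pinned by deleted-but-open files
        sudo lsof +L1

        # how many blocks are reserved for root on the root filesystem
        sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'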

  • HAProxy being killed with more than 54,000 connections

    - by Olly
    I am trying to run HAProxy (1.4.8) on an EC2 machine running Ubuntu 10.04. I need HAProxy to be able to handle many thousands of long-running persistent connections (websockets). With the current setup, HAProxy gets killed at roughly 54,300 connections. If I am running HAProxy in the foreground, the only output is "Killed". Am I right in thinking this is the kernel killing the process? Is this because it is out of resources? Can I increase the resources? The CPU and memory consumption are low with 50,000 connections, so I don't suspect either of these. How can I prevent this from happening?
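
    A bare "Killed" with no message from HAProxy itself is usually the kernel's OOM killer, and every proxied connection costs two sockets plus buffer memory even when average load looks light; some hedged places to look (maxconn and ulimit-n are standard haproxy.cfg directives, the values are only examples):

        # confirm whether the OOM killer fired, and at what memory size
        dmesg | grep -i 'killed process'

        # file-descriptor headroom - each proxied connection needs two fds
        ulimit -n
        cat /proc/sys/fs/file-max

        # haproxy.cfg - set the limits explicitly instead of inheriting defaults
        global
            maxconn  100000
            ulimit-n 200050

    If dmesg confirms an out-of-memory kill, the fix on EC2 is a bigger instance or tuning per-connection buffers down (tune.bufsize), not just raising descriptor limits.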

  • Open file in local text editor from within an SSH connection

    - by Sam
    I'm not a vim guy. I'd like to be able to open log files in Sublime Text when in an SSH connection from within Terminal. Is there a way I could do this? I'm thinking there must be a command or something that could copy the file over to a temporary directory in OS X and then open it in Sublime Text, and when I save it, it'll copy back to the original location through SSH; similar to how FileZilla does it. I'm on Mac OS X MT. The server I SSH into is running Ubuntu. I'm using Terminal.
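
    One way to get this behaviour without copying files by hand is sshfs, which mounts the remote directory locally so Sublime Text edits the files "in place" and every save travels back over SSH (a sketch; the paths and host are placeholders, and OS X needs a FUSE implementation plus sshfs installed first):

        mkdir -p ~/remote-logs
        sshfs user@server:/var/log ~/remote-logs

        # open a log in Sublime Text (assumes the "subl" helper is set up)
        subl ~/remote-logs/syslog

        # detach when finished
        umount ~/remote-logs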

  • Can ping/nmap server, nothing else

    - by lowgain
    I was SSHed into our Ubuntu LAMP server and was just doing an svn update, which hung. I disconnected, and since then I have not been able to SSH in or view any of our websites (from either my network or a remote machine). I would have just assumed the server went down, but I can ping the machine and get really quick responses, and using nmap on the box shows all the normal ports open, so I am confused. This server is hosted remotely in a datacenter; do I have any remaining options except contacting them for support? Thanks!

  • Why can't SVN checkout into a virtualbox shared folder?

    - by Alex Waters
    I am trying to check out into the VirtualBox shared folder with svn 1.7 on Ubuntu 12.04, running as a guest on a Windows 7 host. I had read that this error was a 1.6 problem and updated, but am still receiving the error:

        svn: E000071: Can't move '/mnt/hostShare/code/www/.svn/tmp/svn-hsOG5X' to '/mnt/hostShare/code/www/trunk/statement.aspx?d=201108': Protocol error

    I found a blog post about the same error in a Mac environment, but am finding that changing the folder/file permissions does nothing. vim .svn/entries just shows the number 12 - does this need to be changed? Thank you for any assistance! (Just another reason why I prefer git...)

  • Is it bad to have a very full hard drive on a high traffic database server?

    - by MikeN
    Running an Ubuntu server with MySQL for a high traffic production database server. Nothing else is running on the machine except the MySQL instance. We store daily database backups on the DB server, is there any performance hit or reason why we should keep the hard disk relatively empty? If the disk is filled up to 86%+ with the database and all of the backups, does it hurt performance at all? So would the DB server running with 86-90%+ full capacity perform less well in any way than the server running with only a 10% full disk? The total disk size on the server is over 1 TB so even 10% of the disk should be enough for basic O/S swapping and such.

  • Install Linux with two hard drives

    - by rdecourt
    I have a machine with two hard drives. The first one is 80 GB and the second is 120 GB. I'm about to format this machine and install Linux, and I want to put all the main partitions (/, /boot, /usr, etc.) on the first hard disk (sda) and mount the /home and /var partitions from the second disk (sdb). Is this possible, and do I have to do something after the installation, or is the second hard disk mounted automatically? I won't do it, but is there any problem with mounting /boot on the second hard disk? I'm using Ubuntu 12.04.
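
    This layout is standard: the installer's manual partitioning step lets you assign mount points on both disks, and it writes the result to /etc/fstab so everything mounts automatically at boot. The end result looks something like this (a sketch; the UUIDs are made up):

        # /etc/fstab as the installer would generate it
        UUID=aaaa-1111  /      ext4  errors=remount-ro  0  1   # on sda
        UUID=bbbb-2222  /boot  ext4  defaults           0  2   # on sda
        UUID=cccc-3333  /home  ext4  defaults           0  2   # on sdb
        UUID=dddd-4444  /var   ext4  defaults           0  2   # on sdb

    Putting /boot on the second disk also works, as long as the BIOS can read that disk and GRUB is installed to the drive the machine actually boots from.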
