Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

  • MySQL blocking new connections, and mysqladmin flush-hosts

    - by aidan
    I'm running MySQL on a remote server, and it suddenly started rejecting all connections:

        $ mysql -h 192.168.1.10 -u root -p
        ERROR 1129 (00000): Host 'web' is blocked because of many connection errors;
        unblock with 'mysqladmin flush-hosts'

    So, I try this flush-hosts command:

        $ mysqladmin flush-hosts -h 192.168.1.10 -u root -p
        mysqladmin: connect to server at '192.168.1.10' failed
        error: 'Host 'web' is blocked because of many connection errors;
        unblock with 'mysqladmin flush-hosts''

    In other words, it's blocking the very unblocking tool it recommends. Am I doing it wrong, or will I have to resort to SSH/cPanel/physical access?
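
    A minimal sketch of the usual way out, assuming you still have shell access to the server: the block is applied per connecting host, so a connection made on the server itself is unaffected and can clear the host cache.

        # On the server (e.g. via SSH), the local connection is not blocked:
        mysql -u root -p -e "FLUSH HOSTS;"

        # Optionally raise the threshold so transient errors don't re-trigger
        # the block (the value here is only an example):
        mysql -u root -p -e "SET GLOBAL max_connect_errors = 10000;"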

  • IP route ppp0 + eth0 access to outside network

    - by Vitor
    I need some help defining a route. I have two connections, one on eth0 and the other on ppp0 (a 3G card). With the ppp0 connection down, my route table is:

        Destination  Gateway  Genmask        Flags  Metric  Ref  Use  Iface
        default      DD-WRT   0.0.0.0        UG     100     0    0    eth0
        192.168.1.0  *        255.255.255.0  U      0       0    0    eth0

    and I can reach my web server from an outside network through the Ethernet interface. With the ppp0 3G connection also active, the route table becomes:

        Destination  Gateway      Genmask          Flags  Metric  Ref  Use  Iface
        default      10.64.64.64  0.0.0.0          UG     0       0    0    ppp0
        10.64.64.64  *            255.255.255.255  UH     0       0    0    ppp0
        192.168.1.0  *            255.255.255.0    U      0       0    0    eth0

    and now I can only reach the web server from outside through the 3G connection's IP. Note that my server binds to 0.0.0.0 (all interfaces). I need the web server to be reachable over both interfaces, Ethernet and 3G; at the moment both only work from the local network. Any help configuring this network so that both interfaces are reachable from outside is welcome. Can anyone give me an example configuration with two gateways, one for the eth0 IP 192.168.1.149 and the other for the ppp0 IP 89.214.60.196? Thanks.
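
    This is the classic case for policy routing: give each source address its own routing table, so replies leave by the interface the request arrived on. A rough sketch with iproute2, reusing the IPs from the question and assuming the LAN gateway (DD-WRT) is 192.168.1.1; the table names are arbitrary:

        # Declare two extra routing tables (IDs and names are arbitrary):
        echo "100 tbl_eth0" >> /etc/iproute2/rt_tables
        echo "101 tbl_ppp0" >> /etc/iproute2/rt_tables

        # Each table gets its own default route:
        ip route add default via 192.168.1.1 dev eth0 table tbl_eth0
        ip route add default dev ppp0 table tbl_ppp0

        # Route replies by their source address:
        ip rule add from 192.168.1.149 table tbl_eth0
        ip rule add from 89.214.60.196 table tbl_ppp0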

  • List full timestamps of files in a tarball

    - by Mechanical snail
    I have a large tar archive and want to see the exact (nanosecond) timestamps that are stored for each file in the archive. In case it's relevant, the tarball is in POSIX-2001 format (tar --format=posix). tar --list --verbose displays the timestamps rounded off to the minute. For comparison, ls --full-time does what I want, but I'd rather not have to extract everything first because it's huge. For my purposes, command-line and GUI tools are both fine.
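
    If you have GNU tar, one low-effort sketch is its --to-command hook, which runs a command per archive member without writing files to disk and exposes the member's metadata in environment variables such as TAR_FILENAME and TAR_MTIME; for a POSIX-2001 archive the stored sub-second part should survive as a decimal fraction:

        # Print each member's mtime and name without extracting to disk:
        tar --extract --to-command='echo "$TAR_MTIME  $TAR_FILENAME"' -f archive.tar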

  • What can lead to zone memory exhaustion, and how does Nginx react to it?

    - by Miles Hughes
    What is a possible scenario for exhausting the memory designated to a connection zone with the limit_conn_zone directive, and what are the implications when that happens? Suppose I have this in my configuration:

        http {
            limit_conn_zone $binary_remote_addr zone=connzone:1m;
            ...
            server {
                limit_conn connzone 5;

    which, according to the documentation, allocates 16000 states for connzone on a 64-bit server. The documentation also says:

        If the storage for a zone is exhausted, the server will return error
        503 (Service Temporarily Unavailable) to all further requests.

    Well, OK. But what does that mean in practice? When does this happen? Who receives those 503s? Does it mean that if the number of IPs somehow associated with connzone hits 16000, everyone gets a 503 and it's all over? How does Nginx decide? The documentation is weirdly vague on this. So, considering the example config, who would actually get a 503, under which circumstances, and how would things go from there? And the same question for request zones?
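
    A hedged sketch of the usual mitigations: size the zone so exhaustion is effectively unreachable, and (on nginx 1.3.15 or later) pick a distinct rejection code so limit rejections stand out from real 503s in the logs. The numbers below are illustrative only:

        http {
            # ~10x the original capacity; per-address states are small:
            limit_conn_zone $binary_remote_addr zone=connzone:10m;
            limit_conn_status 429;   # default is 503
            server {
                limit_conn connzone 5;
            }
        }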

  • Looking for the best EC2 setup for 3 sites totaling 1.5 million hits monthly

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large Ubuntu EC2 servers and 2 large RDS servers for our 3 websites, which get a total of about 1.5 million hits a month and growing; the majority of the traffic (1 million) goes to one forum site in the group, and the rest to an e-commerce site and a small WordPress site. So here is my question: would it be better to combine the two large EC2 servers into one, and likewise the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS instance? Or should we set up 2-3 smaller EC2 servers, load balanced, with a single RDS? Or something completely different? One concern is that if one site crashes, it takes the others down with it. That happened in the past, but I am pretty sure it was the forum software and not the server setup. -john
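
    If the load-balanced route wins out, a minimal sketch with the AWS CLI (classic ELB; the name, zone, and instance IDs below are hypothetical):

        # Create a classic load balancer listening on HTTP port 80:
        aws elb create-load-balancer \
            --load-balancer-name sites-lb \
            --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
            --availability-zones us-east-1a

        # Attach the web servers behind it:
        aws elb register-instances-with-load-balancer \
            --load-balancer-name sites-lb \
            --instances i-0123456789abcdef0 i-0fedcba9876543210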

  • Oracle HTTP Server access_log - GET /error/404.html HTTP/1.0 200 7001 entries

    - by Pavan
    access_log shows the following entries repeatedly; it looks like something is polling the server. So many of these keep getting added to the log that it is difficult to spot actual error messages.

        aaa.bbb.ccc.ddd - - [07/Nov/2012:00:02:48 -0800] "HEAD /index.html HTTP/1.1" 200 -
        abc.bcd.cda.dab - - [07/Nov/2012:00:02:50 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:02:51 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:02:56 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:02:56 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:03:01 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:03:01 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:03:06 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:03:06 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        aaa.bbb.ccc.ddd - - [07/Nov/2012:00:03:08 -0800] "HEAD /index.html HTTP/1.1" 200 -

    How do I avoid these repeating entries?
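
    The regular cadence from a fixed pair of addresses suggests load-balancer health checks. Since Oracle HTTP Server is Apache-based, a hedged sketch of the standard Apache approach is to tag those requests and exclude them from the access log; the paths are from the question, and the log path/format must match your existing CustomLog line:

        SetEnvIf Request_URI "^/error/404\.html$" dontlog
        SetEnvIf Request_URI "^/index\.html$"     dontlog
        CustomLog "logs/access_log" common env=!dontlog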

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to serve our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and single-user shares, to keep the Windows users in the house happy and well supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to cover having Greyhole sop up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

  • How to change MySQL data directory?

    - by Jonathan Frank
    I want to place my databases in another directory, so I can store them on an EBS volume (Elastic Block Store, just a fancy name for a virtualized hard disk) together with my web apps and other persistent data. I have tried to walk through a tutorial at http://crashmag.net/change-the-default-mysql-data-directory-with-selinux-enabled. Everything seems fine until I type this command:

        # semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"

    The command fails and tells me that mysqld_db_t is an invalid SELinux context, even though the default MySQL data directory is labelled with this context. I am running Fedora 15 on VirtualBox (which behaves like an ordinary x86-compatible box) and on Amazon EC2 (based on Xen), so the tutorial should be applicable. It is also worth mentioning that turning off SELinux, globally or just for the MySQL process, is not an option, because that would weaken the system's security if an attacker gained access via the MySQL server. I had never seen this problem before I switched to the Red Hat/Fedora family, so it could be a distribution-specific issue. Any help is highly appreciated.
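
    A hedged workaround if the policy refuses the explicit type: declare the new path as an equivalent of the stock data directory, so it inherits whatever labels the policy already defines there (this needs the semanage tool from policycoreutils-python):

        # Label /srv/mysql exactly like /var/lib/mysql, then apply the labels:
        semanage fcontext -a -e /var/lib/mysql /srv/mysql
        restorecon -Rv /srv/mysql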

  • Default route not on LAN

    - by jarmund
    I have a network that in principle looks like this:

        H1---\               /----Inet1
        H2---->---GW1-------<
        H3---/               \----GW2-----Inet2

    where:

        H1, H2 = hosts that need internet access via GW1
        Inet1  = internet link over a 3G connection
        Inet2  = 5 GHz link to the internet (not always up)
        GW1    = works as a router, automatically picking the "best"
                 connection between Inet1 and Inet2 (the latter via GW2)
        GW2    = 5 GHz Wi-Fi router

    And here's the problem: H3 only needs internet access when Inet2 is up. What I was thinking of doing was a routing table on H3 with a route to GW2 via GW1, and the default route via GW2. I first set the route to GW2 via GW1 without a problem. But when I try route add default gw 1.2.3.4 (1.2.3.4 being the IP of GW2), it complains "SIOCADDRT: No such device". Is the problem that the default gateway I'm trying to set is not directly reachable? Is there a different approach that would let me achieve this? An alternative (and hypothetical) approach: since H3 will be using a static IP, is it possible to do some magic with iptables on GW1 to forward any packets from H3 to GW2, thereby "tricking" H3 into using GW2 as its default router?
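
    "SIOCADDRT: No such device" is the kernel refusing a gateway that is not on a directly connected subnet. With iproute2 you can override that check; a hedged sketch, reusing 1.2.3.4 from the question and assuming H3's interface is eth0. Note this only helps if GW2 actually answers ARP on H3's segment, or if GW1 proxy-ARPs for it:

        # Tell the kernel to treat the gateway as on-link despite the subnets:
        ip route add default via 1.2.3.4 dev eth0 onlink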

  • What hypervisors support non-homogeneous clusters?

    - by edude05
    I've been using Citrix XenServer for a while on a few machines that don't support hardware virtualization, as a test bed for various small servers. I recently experimented with moving the PV VMs between machines, but XenServer gives me errors that roughly say I need homogeneous hardware for this to work. Because of this I haven't been able to set up XenMotion or any of the nice features that come with server pooling in XenServer. I'm considering moving away from XenServer; however, I can't seem to find a hypervisor that explicitly supports non-homogeneous clusters. On a side note, we do have a few identically configured Dell 1950s that haven't had any VM solution set up on them yet, so a solution that could also move PVs to those would be great. Non-free solutions are OK as well. What hypervisor will allow this? Thanks!

  • Password protect web directory with htpasswd on Cherokee

    - by wdkrnls
    I have a directory on my Cherokee web server that I am trying to password-protect, so that when I enter it from a web browser I get a pop-up demanding username and password. Needless to say, I am getting stuck. I have created the .htaccess file with:

        AuthUserFile /srv/http/protected
        AuthGroupFile /dev/null
        AuthName "Protected Stuff"
        AuthType Basic
        Require valid-user

    and I used the apache-tools htpasswd command:

        htpasswd -c .htpasswd wdkrnls

    I configured Cherokee with a behavior rule on the /protected directory that requires htpasswd authentication, and restarted. I get "Error 405 Method Not Allowed" whenever I navigate there. What more do I need to do? Thanks for your help.
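
    A hedged note: the AuthUserFile above points at the protected directory rather than the password file. Whether credentials are read by Apache-style directives or by Cherokee's htpasswd validator (Cherokee itself ignores .htaccess files, so the protection has to live in the cherokee-admin rule), the path that matters is the file created by htpasswd, e.g.:

        AuthUserFile /srv/http/.htpasswd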

  • Apple keyboard key remapping under Ubuntu

    - by jfmessier
    I have an Apple keyboard that I simply love; I now hate my regular keyboard at work. I just have one small problem with the Apple keyboard: there is no Insert key. The key that is usually Insert on a regular keyboard is replaced by the "fn" key. I would like to keep the fn functionality, as it is useful with the Fx keys at the top of the keyboard. If there is another key I want to remap, how can I get its code, and then assign that code to the Insert function? I mainly use this key for clipboard operations (Ctrl-Ins, Shift-Ins), and sometimes I have no option but to use the mouse, which is something I want to avoid. For example, the Eject button could be reassigned, or the F13..F19 keys, which are not on regular keyboards anyway. Thanks :-)
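
    A hedged sketch with the classic X11 tools (present on Ubuntu of that era): xev reveals the keycode of whatever you press, and xmodmap binds that keycode to the Insert keysym for the current session; put the xmodmap line in ~/.Xmodmap to make it persistent:

        # Press the candidate key (e.g. Eject or F13) and note its keycode:
        xev | grep keycode

        # Bind that keycode (169 here is just an example) to Insert:
        xmodmap -e "keycode 169 = Insert"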

  • brctl not working when bridging eth0 and at0

    - by Passi0n
    I made an access point with airbase-ng, which creates at0. I tried to bridge my eth0 and at0 with:

        brctl addbr demo
        brctl addif demo eth0
        brctl addif demo at0
        ifconfig demo up
        dhclient3 demo &

    I have already removed eth0's IP address. Now ping 192.168.1.1 -I eth0 gets no reply, but ping 192.168.1.1 -I demo works, and internet access in the browser works fine. So when I connect my Android device to the access point (at0), it should work the same way, but it does not work at all. :(
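
    For comparison, a hedged sketch of the bridging sequence that usually works with airbase-ng; the key detail is that both member interfaces are up with no addresses of their own, and only the bridge gets one:

        brctl addbr demo
        brctl addif demo eth0
        brctl addif demo at0
        ifconfig eth0 0.0.0.0 up
        ifconfig at0 0.0.0.0 up
        ifconfig demo up
        dhclient demo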

  • How to resolve IPs in DNS based on the subnet of the requesting client?

    - by Nohsib
    Is it possible to configure Bind9 or another DNS server to resolve the domain name of a machine into different IPs based on the subnet of the requesting client? For example, say the same service runs on two application servers at different geographical locations; based on the requesting client's IP, the name server returns the IP of the application server geographically closer to that client. In short, something like a CDN, but just the IP-resolution part based on the client's subnet. Is this configurable in any DNS server?
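
    Yes; in BIND 9 this is what views with match-clients do. A minimal hedged named.conf sketch, where the ACL subnet, zone, and file names are hypothetical:

        acl "apac-clients" { 203.0.113.0/24; };

        view "apac" {
            match-clients { "apac-clients"; };
            zone "example.com" { type master; file "db.example.com.apac"; };
        };

        view "default" {
            match-clients { any; };
            zone "example.com" { type master; file "db.example.com"; };
        };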

  • 'Memory read error': server hardware error?

    - by wss8848
    Hello, I got an error on my server, which is running CentOS 5.5:

        MCE 20
        HARDWARE ERROR. This is *NOT* a software problem!
        Please contact your hardware vendor
        CPU 1 BANK 8 TSC 6ab9ff9745f62 [at 2394 Mhz 9 days 1:50:52 uptime (unreliable)]
        MISC cf36ad0100081186 ADDR 203376500
        MCG status:
        MCi status:
        MCi_MISC register valid
        MCi_ADDR register valid
        MCA: MEMORY CONTROLLER RD_CHANNELunspecified_ERR
        Transaction: Memory read error
        STATUS 8c0000400001009f MCGSTATUS 0

    What is the matter? Is it a memory module error or a memory controller error?
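
    Decoding the record usually narrows this down; a hedged sketch assuming the mcelog package is available for CentOS 5 (mce.txt is a hypothetical file holding the raw MCE text, e.g. captured from the console or dmesg):

        yum install mcelog
        # Decode a raw machine-check record from a text capture:
        mcelog --ascii < mce.txt

    That said, "MCA: MEMORY CONTROLLER ... Memory read error" already points at the memory subsystem; a memtest pass will usually tell a failing DIMM apart from a controller fault.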

  • chroot on OSX as a different OS

    - by ekaqu
    I was wondering if anyone has been able to use chroot on OS X to run another OS's userland (Ubuntu, CentOS). I know they are very different, but almost everything I want to use this for wouldn't care about anything at the kernel level, so I was hoping there would be a way to do this without using a VM. Based on my Google searches, I see this question asked, but with no real answer other than "try a VM". I would really like to do this without a VM, though.

  • Search all files containing text

    - by enthdegree
    With Busybox, how do you search for an expression within a bunch of files recursively through a bunch of directories, but only look through text files? We don't know what the file's suffix is going to be; it could be .sh, it could be nothing, it could be something else. I was considering somehow basing the search on encoding although I am not quite sure what the encoding would be either. I've tried busybox grep -r but it searches through binary files too, which wastes a lot of time.
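
    A hedged sketch using only applets commonly built into BusyBox (find, head -c, tr, wc): treat a file as text when its first kilobyte contains no NUL bytes. 'expression' is a placeholder for your search pattern, and the trailing /dev/null forces grep into multi-file mode so matches are prefixed with the file name:

        find . -type f | while read -r f; do
            total=$(head -c 1024 "$f" | wc -c)
            nonnul=$(head -c 1024 "$f" | tr -d '\000' | wc -c)
            # No NUL bytes in the first 1 KiB => treat as a text file:
            [ "$total" -eq "$nonnul" ] && grep 'expression' "$f" /dev/null
        done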

  • postfix specify limited relay domain while allowing sasl-auth relay

    - by tylerl
    I'm trying to set up Postfix to allow relaying under a limited set of conditions: the destination domain is on a pre-defined list, or the client successfully logs in. Here are the relevant bits of config:

        smtpd_sasl_auth_enable = yes
        relay_domains = example.com
        smtpd_recipient_restrictions = permit_auth_destination, reject_unauth_destination
        smtpd_client_restrictions = permit_sasl_authenticated, reject

    The problem is that this requires BOTH conditions to be satisfied, rather than either one. That is, it only allows relaying if the client is authenticated AND the recipient domain is @example.com. Instead, I need it to allow relaying if either one of the requirements is satisfied. How do I do this without resorting to running SMTP on two separate ports with different rules? Note: the context is an outbound-use-only MTA (bound to 127.0.0.1) on a shared web server, where all site owners are allowed to relay mail to one of their "owned" domains (not server-local, though), and where a limited set of "trusted" site owners are allowed to relay mail without restriction, provided they have a valid SMTP login.
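
    A hedged sketch of the usual fix: Postfix ANDs the separate restriction lists together, but entries within a single list are tried in order until one matches. Putting both permits into smtpd_recipient_restrictions and dropping the client-level reject makes them alternatives:

        smtpd_sasl_auth_enable = yes
        relay_domains = example.com
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            permit_auth_destination,
            reject_unauth_destination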

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual deployment effort minimal. Now, the default Ubuntu template assigns IP addresses via DHCP, so I need a local DHCP server to assign IP addresses to the nodes so that I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.

    Question: what DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option, by performing a DNS lookup on that very host name?

    What I've tried: I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the
        hardware address can be used to match a host declaration, or the
        host-identifier option parameter for DHCPv6 servers. For example, it
        is not possible to match a host declaration to a host-name option.
        This is because the host-name option cannot be guaranteed to be
        unique for any given client, whereas both the hardware address and
        dhcp-client-identifier option are at least theoretically guaranteed
        to be unique to a given client.

    I also tried to create a class that matches the hostname, like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security: since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
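
    For comparison, a hedged dnsmasq sketch: dnsmasq can key a fixed lease directly on the client-sent host name (which ISC dhcpd refuses to do), although the address still has to be listed in the config rather than looked up in Route 53, so it only removes half of the double administration:

        # /etc/dnsmasq.conf (addresses reuse the question's subnet):
        dhcp-range=10.103.1.10,10.103.255.254,255.255.0.0,12h
        dhcp-host=my-client-name,10.103.1.2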

  • After installing monit, 'monit status myproc' reports "error connecting to the monit daemon"

    - by Jason
    After installing monit, when I run monit status myproc I get "error connecting to the monit daemon". I read somewhere that the status command won't work when monit is running in daemon mode without its HTTP support: 'monit status' tries to fetch the status from the daemon via HTTP/TCP, so to use it you need to add a 'set httpd ...' statement to the configuration. Is that still correct? That post was from 2005.
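
    That advice still matches how the client works: it talks to the daemon's HTTP interface. A minimal hedged monitrc sketch, bound to localhost only:

        set httpd port 2812 and
            use address localhost   # only listen on the loopback
            allow localhost         # and only accept the loopback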

  • Network Access via Terminal

    - by HamdiKavak
    I have a weird problem. Here is my configuration: I installed VirtualBox on a Windows 7 PC and installed Ubuntu 10.04 in VirtualBox. I installed many programs via the terminal, and I still can. My browser can connect to the internet, but I cannot ping any website, e.g. google.com, and I cannot download anything via git. I can only ping 192.168.1.1; that is all. What could the reason be, guys? UPDATE: I can ping when using another internet connection, the one I use at the office.
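
    A few hedged diagnostics to separate name resolution from ICMP and TCP problems (run inside the guest; wget should be present on Ubuntu by default):

        nslookup google.com                    # does DNS resolve at all?
        ping -c 3 8.8.8.8                      # does ICMP get out, with no DNS involved?
        wget -O /dev/null http://google.com    # does plain TCP/HTTP work from the shell?

    If 8.8.8.8 also fails while the browser works, the local router or ISP is likely dropping ICMP, which would fit the office connection behaving differently.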

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory, as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change and export $HOME, e.g.

        cd ~/testing; git clone ~ home
        export HOME=~/testing/home
        cd ~
        screen -S testing-home
        # start vim, write/revise plugins, edit scripts, etc.
        # test revisions

    However, since I've never tried this before, I'm concerned that some programs, environment variables, etc. may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
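
    One hedged caveat worth a sketch: anything that resolves the home directory through the password database (getpwuid) rather than $HOME will ignore the override. Scoping the override to a subshell also keeps the rest of your session intact:

        # Run a throwaway login shell whose $HOME is the clone; exiting restores everything:
        env HOME="$HOME/testing/home" bash -l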
