Search Results

Search found 4578 results on 184 pages for 'connections'.

  • MySQL password changed every other day in Windows?

    - by PHP
    Every morning when I check server status, I find that MySQL's password has changed: mysql -uuser -ppassword reports ERROR 1045 (28000): Access denied for user 'user'@'localhost' (using password: YES). I then restart the server, and when it's back up, MySQL is back to normal. It has now become a routine job. What can be the cause of this? How can I know what's actually happening to MySQL? Here is the error log:
      100122 10:11:16 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
      100122 10:11:16 InnoDB: Starting shutdown...
      100122 10:11:18 InnoDB: Shutdown completed; log sequence number 0 22939338
      100122 10:11:18 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
      100122 10:12:40 InnoDB: Started; log sequence number 0 22939338
      100122 10:12:42 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
      100123 16:20:44 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
      100123 16:20:44 InnoDB: Starting shutdown...
      100123 16:20:46 InnoDB: Shutdown completed; log sequence number 0 22939832
      100123 16:20:46 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
      100123 16:22:09 InnoDB: Started; log sequence number 0 22939832
      100123 16:22:11 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
      100125 9:18:59 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
      100125 9:18:59 InnoDB: Starting shutdown...
      100125 9:19:00 InnoDB: Shutdown completed; log sequence number 0 22941001
      100125 9:19:00 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
      100125 9:20:22 InnoDB: Started; log sequence number 0 22941001
      100125 9:20:25 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
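
    The log above shows clean "Normal shutdown" / "ready for connections" pairs, so something is deliberately stopping and restarting mysqld-nt rather than crashing it; it is also worth verifying whether the password hashes actually change at all. A minimal check, assuming root access (the table and columns are standard in MySQL 5.0):

      # Snapshot the grant table now, and again after the next failure;
      # if the hashes differ, something (a script, or an --init-file
      # passed at service start) is rewriting them.
      mysql -uroot -p -e "SELECT User, Host, Password FROM mysql.user ORDER BY User, Host;" > users_before.txt
      mysql -uroot -p -e "SELECT User, Host, Password FROM mysql.user ORDER BY User, Host;" > users_after.txt
      fc users_before.txt users_after.txt    # Windows file compare; diff(1) elsewhere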

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB of data and indices combined, on a dedicated Linux server. DB version is '5.0.89-community'. Configuration is controlled via cPanel. PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change. Access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until at some point the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server will always fix the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion, but not constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think it's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
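
    MySQL does have per-user limits, and the symptom (one user refused from one IP until a restart) also matches the max_connect_errors mechanism, which blocks a host after repeated aborted connections until the host cache is flushed. A sketch of both knobs, assuming shell access to the DB box (standard MySQL 5.0 syntax; account names are placeholders):

      # Unblock a host that tripped max_connect_errors, without a restart:
      mysql -uroot -p -e "FLUSH HOSTS;"
      # Raise the threshold (the 5.0 default is only 10):
      mysql -uroot -p -e "SET GLOBAL max_connect_errors = 10000;"
      # Per-user caps, if a heavy account needs limiting:
      mysql -uroot -p -e "GRANT USAGE ON *.* TO 'webuser'@'1.2.3.4' WITH MAX_QUERIES_PER_HOUR 50000 MAX_USER_CONNECTIONS 20;"

    If the host really is being blocked, the server normally says "Host ... is blocked because of many connection errors", so capturing the exact client-side error message would confirm or rule this out.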

    Read the article

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I thought that the "password" auth method is what I want. There are many people that have access to the server PostgreSQL is running on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is "Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35". It is kind of strange, because I can connect via phppgadmin without any problems, and I can also connect from my home computer with psql, again without any problems. This is my pg_hba.conf:
      # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
      # "local" is for Unix domain socket connections only
      local   all       all                 password
      # IPv4 local connections:
      host    all       all   127.0.0.1/32  password
      # IPv6 local connections:
      host    all       all   ::1/128       password
    The connection string I'm using with pg_connect is:
      $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
      $dbConnection = pg_connect($connection_string);
    Does anybody know why this is happening? Did I misconfigure something?
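
    Two details stand out. First, the snippet assigns $connect_string but passes $connection_string to pg_connect, so the call may be receiving an undefined variable. Second, the "password" method compares cleartext passwords, so it is worth testing the exact path PHP takes (a TCP connection to localhost) outside of PHP. A sketch, with "md5" shown as the commonly preferred alternative (an assumption about the desired setup, not the poster's final config):

      # Exercise the same host-based rule PHP hits (TCP, not the Unix socket):
      psql -h 127.0.0.1 -p 5432 -U auser -d mydbname
      # In pg_hba.conf, "md5" avoids sending cleartext over the wire:
      #   host  all  all  127.0.0.1/32  md5
      # Reload PostgreSQL after editing:
      pg_ctl reload          # or: /etc/init.d/postgresql reload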

    Read the article

  • Administrator view all mapped drives

    - by kskid19
    In my understanding of security, an administrator should be able to view all connections to and from a computer, just as they can view all processes and their owners, or all network connections and their owning processes. However, Windows 8 seems to have disabled this. In Vista and later, running net use as administrator from an elevated prompt returns all mapped drives, listed as unavailable. In Windows 8, the same command run from an elevated prompt returns "There are no entries in the list." The behavior is identical for the PowerShell command Get-WmiObject Win32_LogonSessionMappedDisk. A workaround for persistent mappings is to run Get-ChildItem Registry::HKU\*\Network\*. This does not include temporary mappings (in my particular case the mapping was created through Explorer on an administrator account and I did not select "Reconnect at sign-in"). Is there a direct/simple way for an administrator to view the connections of any user (short of a script that runs under each user's context)? I have read "Some Programs Cannot Access Network Locations When UAC Is Enabled" but I do not think it particularly applies. ServerFault has an answer, but it still does not address non-persistent drives: How can I tell what network drives users have mapped?

    Read the article

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-well-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
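
    The many-streams observation can be made rigorous before escalating to HP: if several parallel streams between the same two hosts sum to well over 1 Gb/s through the switch, the ports are fine and individual flows are being policed. A sketch with netperf, which is already in use here (the hostname is a placeholder):

      # One stream through the switch (reportedly ~950 Mb/s):
      netperf -H host2 -t TCP_STREAM -l 30
      # Four parallel streams; an aggregate well above 1 Gb/s proves the
      # limit is per-connection rather than per-port:
      for i in 1 2 3 4; do netperf -H host2 -t TCP_STREAM -l 30 & done; wait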

    Read the article

  • How to set up an easy-to-use proxy for the whole system with WinXP client and server?

    - by Pekka
    I am working together intensively with a colleague on the Canary Islands. We talk through Live Messenger and work together using RDP software. She has frequent problems with connections to certain big-name and small-name sites (amongst others live.com, google.com, gmx.de), very likely caused by the Spanish provider (the connections simply time out; this has been going on for weeks already). I have been thinking about setting up my computer as a proxy to make these connections work. I have a DSL connection and am behind a NAT-capable router that I control. Does anybody know a simple, "one-click" way to transport ALL network traffic through a remote proxy, without having to set proxy settings for each application that uses the internet? VPN is not an option: I am behind a firewall that supports protocol 47 and such, but I have never succeeded in getting an incoming VPN connection to work. I can however redirect normal traffic using NAT. A VPN solution that does not need strange protocols would also be an option.
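
    One low-friction shape for this, sketched under the assumption that an SSH server (e.g. Cygwin's sshd or freeSSHd) can run on the German PC with its port forwarded through the router: SSH dynamic port forwarding provides a SOCKS proxy on the remote end, and a system-wide "socksifier" removes the per-application proxy configuration.

      # On the colleague's PC (Cygwin OpenSSH shown; PuTTY's
      # "plink -N -D 1080 user@host" is the equivalent):
      ssh -N -D 1080 user@german-pc.example.org
      # Then point applications at SOCKS5 127.0.0.1:1080, or use a
      # system-wide socksifier (FreeCap, ProxyCap, WideCap) so every
      # TCP connection is pushed through the tunnel.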

    Read the article

  • Unable to logon using terminal server connection

    - by satch
    I have several W2K3 SP2 servers with admin TS enabled. I discovered this morning that I was unable to log on to some of them. I have a couple of Citrix servers in different farms, a SAP (IA64) app server and a CVS server. All of them show the same symptoms: remote connections are refused. I've been able to log on locally, the Terminal Server service is up, and there are no users (so connections are not depleted). There are no errors in the logs on most of the servers. One of the Citrix ones reported the following errors:
      Event ID 50, Source TermDD, Type Error: The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client.
      Event ID 1006, Source TermService, Type Error: The terminal server received large number of incomplete connections. The system may be under attack.
    Anyway, I suppose these errors appear because the server isn't working and Citrix users try to log on massively. (I nmap'ed the server and the port seems up.) I've solved this problem by rebooting before, but with so many servers affected that seems like a crappy workaround. Any idea how to troubleshoot it properly? Thanks in advance.

    Read the article

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages, and potentially notify N of the clients of state changes. Sure, it sounds like a botnet, but it isn't; consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1 KB in size. How would you coordinate which services on which host should receive and relay messages to connected clients for state-change messages? There is an infinite number of ways to solve this problem efficiently: AMQP (RabbitMQ, ZeroMQ, etc.), Spread Toolkit, N^2 connections between all services (bad), heck, even run IRC! I'm looking for a solution that perhaps exploits the fact that there's only a small closed cluster, is easy to admin, scales well, and is "dumb" (no weird edge cases). What are your experiences? What do you recommend? Thanks!
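
    With two hosts and sixteen services, a single lightweight broker is often the "easy to admin, no weird edge cases" answer: every service talks only to the broker, and fan-out to whichever host holds the affected clients is the broker's job. A sketch using Redis pub/sub as that broker (an illustrative choice on my part; any AMQP broker slots into the same place):

      # Each service subscribes to the channels for the clients it hosts:
      redis-cli SUBSCRIBE state.service3
      # Any service publishes a state change exactly once; the broker
      # relays it to every subscriber, wherever it runs:
      redis-cli PUBLISH state.service3 '{"client":42,"event":"joined"}'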

    Read the article

  • Help, my CentOS servers keep going down (No route to host) after a random uptime

    - by user249071
    Hello, I have a couple of CentOS Linux servers with a very simple task: they run nginx + FastCGI for PHP, plus some read-only NFS mounts between them. They accept some RPC commands from a main server to start download processes with wget, nothing fancy. But their behavior is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections, and they don't load up much: 250 network connections max, 15% processor usage, and memory doesn't even fill up (2.5 GB used out of 8 GB). I have no idea why a Linux server would go down like that; they aren't even public servers, no domain names installed, no public serving of sites. The only thing I've discovered was that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, but without reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed Snort or other tools to check whether we're seeing DoS attacks; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance.
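
    "Slow until the network service is restarted" is consistent with the connection-tracking table filling up: every wget download and NFS operation holds conntrack entries until they time out. The counters are worth checking the next time the slowdown starts (a diagnostic sketch; the sysctl names moved between kernel versions, as noted):

      # Current vs. maximum tracked connections:
      sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
      # On older CentOS kernels the same counters live at:
      #   net.ipv4.netfilter.ip_conntrack_count / ip_conntrack_max
      # A full table logs dropped packets:
      dmesg | grep -i 'conntrack: table full'
      # Stopgap if the table really is the bottleneck:
      sysctl -w net.netfilter.nf_conntrack_max=262144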

    Read the article

  • Cannot Connect To VMWare Guest OS Using Either RDP or VNC

    - by Humanier
    I have a PC (Windows XP SP3) with VMware Workstation 7 installed. The VM hosts Windows Server 2003 Enterprise Edition R2. RealVNC (4.1.3) is installed on both OSes. Both of them use Hamachi2. The host OS (WinXP) also runs ZoneAlarm Firewall, with the Hamachi network set as trusted. My goal is to allow RDP and VNC connections to be made to the guest OS (Windows Server 2003). Both options work absolutely fine if I connect from the host OS. However, I have problems when other computers from our Hamachi network try to connect to the guest OS (Win2K3). 1) RDP connections: the RDP window opens, shows black content, and after 15-20 seconds displays the following error: http://lh6.ggpht.com/_yQhsRRimgKU/TArRrtiteQI/AAAAAAAABZA/e96za-y9wzo/rdp_error.JPG (screenshot). 2) RealVNC connections: users are able to connect, but all they see is a black screen inside the VNC window. At the same time, their input (keystrokes or mouse moves/clicks) is visible when looking at the console window of the Win2K3. I really appreciate any ideas on how to resolve the mentioned problems.

    Read the article

  • Setting up vsftpd, hangs on list command

    - by Victor
    I installed vsftpd and configured it. When I try to connect to the FTP server using Transmit, it manages to connect but hangs on Listing "/". Then I get a message stating: Could not retrieve file listing for "/". Control connection timed out. Does it have anything to do with my iptables? My rules are as listed:
      *filter
      # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
      -A INPUT -i lo -j ACCEPT
      -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
      # Accepts all established inbound connections
      -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # Allows all outbound traffic
      # You can modify this to only allow certain traffic
      -A OUTPUT -j ACCEPT
      # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
      # Allows SSH connections
      #
      # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
      #
      -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
      # Allow ping
      -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
      # log iptables denied calls
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      # Reject all other inbound - default deny unless explicitly allowed policy
      -A INPUT -j REJECT
      -A FORWARD -j REJECT
      COMMIT
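
    The ruleset accepts the control connection on port 21, but nothing classifies the passive-mode data connection that LIST needs, which matches the hang exactly. The usual fix is the FTP connection-tracking helper, which makes data connections count as RELATED so the existing ESTABLISHED,RELATED rule accepts them (a sketch; on older kernels the module is named ip_conntrack_ftp):

      # Load the FTP conntrack helper:
      modprobe nf_conntrack_ftp
      # Persist across reboots:
      echo nf_conntrack_ftp >> /etc/modules
      # Alternative: pin vsftpd to a fixed passive range in vsftpd.conf
      #   pasv_min_port=40000
      #   pasv_max_port=40100
      # and open that range explicitly:
      iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT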

    Read the article

  • Trouble connecting to a local SQL server instance from the web

    - by dfarney
    We have a small network behind a firewall (WatchGuard XTM 2 series) and a network switch. On our network we have multiple instances of SQL Server, but one in particular that I would like to be able to access remotely from our website. We have a static IP address from our ISP, and all the machines on the network have locally assigned dynamic IP addresses. When trying to connect to the database from outside our network, how do I get the request directed to the proper machine / SQL instance? Is it a parameter in my connection string or something in my firewall? A few things to rule out: 1) The firewall is allowing access from the website to our network. I added the site's IP and opened up port 1433; when trying to connect and monitoring the firewall, no exceptions come up as they did before I added the proper IP address. 2) Remote connections on the SQL Server have been set up and enabled. I've done a lot of reading up on remote connections and I am sure it has been set up properly. I am currently getting this error message on my site: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
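
    Opening port 1433 only admits the traffic; the firewall also needs a static NAT (port-forward) rule mapping the public static IP's port 1433 to the private address of the specific machine running the instance. Named instances normally also require UDP 1434 (SQL Browser) unless the client names the port explicitly. A sketch of the resulting connection string (addresses and names are placeholders):

      # WatchGuard SNAT rule, conceptually: public 203.0.113.10:1433 -> 192.168.1.20:1433
      # Client side - an explicit "host,port" pair bypasses SQL Browser:
      Server=203.0.113.10,1433;Database=MyDb;User Id=webuser;Password=...;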

    Read the article

  • How to set a static route for an external IP address

    - by HorusKol
    Further to my earlier question about bridging different subnets, I now need to route requests for one particular IP address differently to all other traffic. I have the following routing in my iptables on our router:
      # Allow established connections, and those !not! coming from the public interface
      # eth0 = public interface
      # eth1 = private interface #1 (10.1.1.0/24)
      # eth2 = private interface #2 (129.2.2.0/25)
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
      iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT
      # Allow outgoing connections from the private interfaces
      iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
      # Allow the two private connections to talk to each other
      iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
      iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
      # Masquerade (NAT)
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      # Don't forward any other traffic from the public to the private
      iptables -A FORWARD -i eth0 -o eth1 -j REJECT
      iptables -A FORWARD -i eth0 -o eth2 -j REJECT
    This configuration means that users are forwarded through a modem/router with a public address. This is all well and good for most purposes, and in the main it doesn't matter that all computers are hidden behind the one public IP. However, some users need to be able to access a proxy at 192.111.222.111:8080, and the proxy needs to identify this traffic as coming through a gateway at 129.2.2.126; it won't respond otherwise. I tried adding a static route on our local gateway with:
      route add -host 192.111.222.111 gw 129.2.2.126 dev eth2
    I can successfully ping 192.111.222.111 from the router. When I trace the route, it lists the 129.2.2.126 gateway, but I just get * on each of the following hops (I think this makes sense since this is just a web proxy and requires authentication). When I try to ping this address from a host on the 129.2.2.0/25 network it fails. Should I do this in the iptables chain instead? How would I configure this routing?
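
    The host route changes only the next hop; packets from the 10.1.1.0/24 clients still carry whatever source address the NAT assigns, and nothing above rewrites sources toward eth2. For the proxy to see the expected gateway, the one destination needs both the eth2 route and a matching source NAT, sketched here under the assumption that the router owns an address on the 129.2.2.0/25 segment:

      # Route the single destination via the second gateway:
      ip route add 192.111.222.111/32 via 129.2.2.126 dev eth2
      # Rewrite the source for that destination only (ROUTER_ETH2_IP is
      # the router's own 129.2.2.x address - a placeholder):
      iptables -t nat -I POSTROUTING -d 192.111.222.111 -o eth2 -j SNAT --to-source ROUTER_ETH2_IP
      # Make sure forwarding from the client subnets is allowed:
      iptables -I FORWARD -d 192.111.222.111 -j ACCEPT

    If the proxy insists on seeing 129.2.2.126 itself as the source, the SNAT target would be that address instead, which is only possible if the router owns it.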

    Read the article

  • How to connect via SSH to a linux mint system that is connected via OpenVPN

    - by Hilyin
    Is there a way to keep SSH out of the VPN tunnel, so that while my computer is connected to the VPN it can still be reached via SSH at its non-VPN IP? I am using Linux Mint 13. Thank you for your help! These are the instructions I followed to set up the VPN:
      1. Open Terminal.
      2. Type: sudo apt-get install network-manager-openvpn
      3. Press Y to continue.
      4. Type: sudo restart network-manager
      5. Download the BTGuard certificate (CA) by typing: sudo wget -O /etc/openvpn/btguard.ca.crt http://btguard.com/btguard.ca.crt
      6. Click on the Network Manager icon, expand VPN Connections, and choose Configure VPN.
      7. A Network Connections window will appear with the VPN tab open. Click Add.
      8. A Choose A VPN Connection Type window will open. Select OpenVPN in the drop-down menu and click Create.
      9. In the Editing VPN connection window, enter the following:
           Connection name: BTGuard VPN
           Gateway: vpn.btguard.com (optionally select your server location manually with ca.vpn.btguard.com for Canada or eu.vpn.btguard.com for Germany)
           Type: select Password
           User name: username
           Password: password
           CA Certificate: browse and select the file /etc/openvpn/btguard.ca.crt
      10. Click Advanced... near the bottom of the window. Under the General tab, check the box next to "Use a TCP connection".
      11. Click OK, then click Apply. Setup complete!
    How to connect: click on the Network Manager icon in the panel bar, click VPN Connections, and select BTGuard VPN. The Network Manager icon will begin spinning. You may be prompted to enter a password; if so, this is your system account keychain password, NOT your BTGuard password. Once connected, the Network Manager icon will have a lock next to it, indicating you are browsing securely with BTGuard.
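
    The standard fix is a policy-routing rule: packets whose source is the machine's LAN address keep using the LAN gateway, so inbound SSH to the non-VPN IP is answered outside the tunnel regardless of what the VPN does to the default route. A sketch, assuming the LAN address is 192.168.1.50 on eth0 and the LAN gateway is 192.168.1.1 (substitute the real ones):

      # One-time: name an extra routing table
      echo "100 novpn" | sudo tee -a /etc/iproute2/rt_tables
      # Replies sourced from the LAN address use the LAN gateway:
      sudo ip rule add from 192.168.1.50 table novpn
      sudo ip route add default via 192.168.1.1 dev eth0 table novpn
      # SSH to 192.168.1.50 now works even while the default route
      # points into tun0.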

    Read the article

  • Internet setup for my office

    - by prakash
    We have two internet connections to our office, and our current setup is like this: the internet connections require PPPoE login, so I take each cable and plug it into a WiFi router, configure the router to log in to the PPPoE, and then plug a cable from the router into a switch and distribute the internet throughout the office. Each WAN router is connected to a different switch and is distributed to users accordingly; we have around 40 users in the office. The problem with this setup is that it is really hard to monitor: I'm not able to see who is hogging internet usage or what he or she is actually using it for. Apart from this we also have a NAS routed through another switch. Could someone please shed a little light on how I can restructure this setup for easy monitoring and better transparency? We want to set up a single Linux box to which we connect the two WAN connections and from there distribute to all our users. I'm looking for a solution where we do not have to invest more than buying a single PC and a couple of NICs.
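
    A single Linux box can terminate both PPPoE sessions itself and balance outbound flows with a multipath default route; once all traffic crosses one machine, per-user accounting becomes a matter of iptables counters or a logging proxy on the same box. A minimal routing sketch, assuming the sessions come up as ppp0 and ppp1 and the LAN is 192.168.0.0/24 (both assumptions):

      # After both pppd/pppoe sessions are up, balance new flows:
      ip route replace default scope global \
          nexthop dev ppp0 weight 1 \
          nexthop dev ppp1 weight 1
      # NAT the office LAN out whichever uplink a flow lands on:
      iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o ppp0 -j MASQUERADE
      iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o ppp1 -j MASQUERADE

    Tools like ntop, or a Squid proxy with logging, on the same box then provide the per-user visibility being asked for.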

    Read the article

  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once in several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to work with TCP connections to the internet. The following things continue to work fine: UDP (DNS) and ICMP (ping), with instant responses; TCP connections to other machines in the local network (e.g. I can ssh to a neighbouring laptop); and everything is OK for other machines in my LAN. But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:
      % curl -v google.com
      * About to connect() to google.com port 80 (#0)
      *   Trying 173.194.39.105...
      * Connection timed out
      *   Trying 173.194.39.110...
      * Connection timed out
      *   Trying 173.194.39.97...
      * Connection timed out
      *   Trying 173.194.39.102...
      * Timeout
      *   Trying 173.194.39.98...
      * Timeout
      *   Trying 173.194.39.96...
      * Timeout
      *   Trying 173.194.39.103...
      * Timeout
      *   Trying 173.194.39.99...
      * Timeout
      *   Trying 173.194.39.101...
      * Timeout
      *   Trying 173.194.39.104...
      * Timeout
      *   Trying 173.194.39.100...
      * Timeout
      *   Trying 2a00:1450:400d:803::1009...
      * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
      * Success
      * couldn't connect to host
      * Closing connection #0
      curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
    Restarting the connection and/or reloading the network card kernel module doesn't help. The only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router that is connected to the ISP via PPPoE. Any advice?
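
    Since UDP, ICMP and LAN TCP keep working, the useful moment is the next failure: check whether the SYNs leave the machine at all, and whether PPPoE MSS/MTU handling is involved (a classic way for TCP to break while ping and DNS stay healthy). A diagnostic sketch; the interface name is a placeholder:

      # Are outgoing SYNs reaching the wireless interface?
      tcpdump -ni wlan0 'tcp[tcpflags] & tcp-syn != 0'
      # Socket summary; a huge orphan/timewait count would stand out:
      ss -s
      # PPPoE links want a clamped MSS; this can be applied locally:
      iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu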

    Read the article

  • How do I enable Ubuntu Gnome system tools

    - by RussellW
    I am running Ubuntu 10 with GNOME 2.30.2. This is a VMware Workstation image provided by another company, so I have no support in this regard. I am trying to access the graphical tools for configuring the network, users, and services, but the System > Administration menu does not have these options listed. The main issue I am trying to solve is to correct the problems with the GNOME menu options and network settings. I have the gnome-system-tools package installed, but I am unable to run command-line versions of the tools, such as nm-applet (I get no GUI if I run that command; the process is running in the background). I realize that I can perform many tasks from the command line, but I would like to use the GUI for administrative functions, as I am not overly proficient with all the commands for restarting services and setting a static IP with a specific gateway. Further, I can run gnome-nettool, but I cannot change the IP; I can only see my network card. nm-connection-editor does not show any network cards that I can configure to change the IP. Currently I am getting an address through the VMware NAT via DHCP, but I want to set a specific IP address. Screenshots:
      Preferences Menu (note some missing options): http://i.imgur.com/kl8pP.png
      Admin Menu (note some missing options): http://i.imgur.com/K3Cjz.png
      Network Tools (I can view but not change IP address): Iq7Xb.png
      Network Settings (unable to change IP address): 7wheV.png
      Network Connections (no connections listed, not even my existing ethernet NAT connection through VMware): J2ad8.png
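
    While the menus are being repaired, the static IP itself can be set the classic Debian/Ubuntu way; NetworkManager generally leaves alone any interface declared in /etc/network/interfaces, which here is the desired effect. A sketch, assuming the VMware NAT network is 192.168.190.0/24 with the NAT gateway on .2 (check Edit > Virtual Network Editor on the host for the real values):

      # /etc/network/interfaces
      auto eth0
      iface eth0 inet static
          address 192.168.190.50
          netmask 255.255.255.0
          gateway 192.168.190.2
      # then apply:
      #   sudo /etc/init.d/networking restart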

    Read the article

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3 GB installed. One of my database tables has grown to 5.3 GB, with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to constantly loading and unloading the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500 MB of memory (there are other apps running on the system, but MySQL uses the most memory). The table is fairly active, with new records getting added all through the day, but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1 GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory? So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:
      1. Upgrade to Server 2003 Enterprise to install 64 GB of memory. Question: would 32-bit MySQL be able to access more than 2 GB? Would that be 2 GB per thread? That would still be smaller than the index file size, so it might not solve the problem completely, but it would be better than now.
      2. Upgrade to Server 200x 64-bit and MySQL 64-bit.
      3. Switch to a *nix 64-bit server.
    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks.
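
    One point that changes the calculus here: MySQL's big caches are shared across connections, not allocated per-thread, so the index can be cached once regardless of the 600-connection limit. Assuming the table is MyISAM (a separate 2.5 GB index file suggests it), the decisive knob is usually key_buffer_size, and a 32-bit mysqld on Windows is capped at roughly 2 GB of address space for the whole process whatever the edition. A hedged my.cnf sketch within that 32-bit budget (values illustrative):

      [mysqld]
      # Shared index cache - allocated once, not per connection:
      key_buffer_size = 1200M
      # Per-connection buffers; kept small because these DO multiply:
      sort_buffer_size = 2M
      read_buffer_size = 1M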

    Read the article

  • mysqld-nt.exe exists in the task list, but actually it's not running?

    - by PHP
    mysqld-nt.exe is showing in the Task Manager, but I cannot connect to it. I tried: telnet localhost 3306 and it fails to connect. So I restarted the server, and it's OK. This happens every day. Any ideas? EDIT: Here is the error log (I didn't find anything abnormal though):
      100122 10:11:16 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
      100122 10:11:16 InnoDB: Starting shutdown...
      100122 10:11:18 InnoDB: Shutdown completed; log sequence number 0 22939338
      100122 10:11:18 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
      100122 10:12:40 InnoDB: Started; log sequence number 0 22939338
      100122 10:12:42 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
      100123 16:20:44 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
      100123 16:20:44 InnoDB: Starting shutdown...
      100123 16:20:46 InnoDB: Shutdown completed; log sequence number 0 22939832
      100123 16:20:46 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
      100123 16:22:09 InnoDB: Started; log sequence number 0 22939832
      100123 16:22:11 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
      100125 9:18:59 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
      100125 9:18:59 InnoDB: Starting shutdown...
      100125 9:19:00 InnoDB: Shutdown completed; log sequence number 0 22941001
      100125 9:19:00 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
      100125 9:20:22 InnoDB: Started; log sequence number 0 22941001
      100125 9:20:25 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)

    Read the article

  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k connections/sec consistently when benchmarking (sometimes I do get 30k/sec though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:
      # Max open file descriptors
      fs.file-max = 331287
      # TCP tuning
      net.ipv4.tcp_tw_reuse = 1
      net.ipv4.ip_local_port_range = 1024 65023
      net.ipv4.tcp_max_syn_backlog = 10240
      net.ipv4.tcp_max_tw_buckets = 400000
      net.ipv4.tcp_max_orphans = 60000
      net.ipv4.tcp_synack_retries = 3
      net.core.somaxconn = 40000
      net.ipv4.tcp_rmem = 4096 8192 16384
      net.ipv4.tcp_wmem = 4096 8192 16384
      net.ipv4.tcp_mem = 65536 98304 131072
      net.core.netdev_max_backlog = 40000
    If I use ab to hit a webserver directly, I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy, I also get 30k/s connections, but obviously that's useless. I've also disabled iptables for now, since I read that nf_conntrack can slow everything down; no change. I've also disabled the irqbalance service. The fact that I can hit each individual device at 30k/s makes me believe the tuning of the servers is OK and that it must be some HAProxy configuration issue. Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU The server is a dual Xeon E5-2620 (6 cores each) with 32 GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.
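
    A frequent culprit at exactly this ceiling is HAProxy's own maxconn, which defaults to 2000 and exists independently in the global section, in defaults/frontend, and per server line; any one of them left at the default caps a benchmark while the kernel tuning looks fine. The relevant haproxy.cfg lines, sketched with illustrative values (this is not the poster's pastebin config):

      global
          maxconn 100000        # process-wide ceiling
      defaults
          maxconn 99000         # applies per frontend; must be raised too
      backend apache
          server app1 10.0.0.11:80 maxconn 20000   # per-server limit/queue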

    Read the article

  • How to configure iptables so apt-get works on a server?

    - by segaco
    I'm starting to use iptables (as a newbie) to protect a Linux server (specifically Debian 5.0). Before I configure the iptables settings, I can use apt-get without a problem. But after I configure iptables, apt-get stops working. For example, I use this script:
      #!/bin/sh
      IPT=/sbin/iptables
      ## FLUSH
      $IPT -F
      $IPT -X
      $IPT -t nat -F
      $IPT -t nat -X
      $IPT -t mangle -F
      $IPT -t mangle -X
      $IPT -P INPUT DROP
      $IPT -P OUTPUT DROP
      $IPT -P FORWARD DROP
      $IPT -A INPUT -i lo -j ACCEPT
      $IPT -A OUTPUT -o lo -j ACCEPT
      $IPT -A INPUT -p tcp --dport 22 -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 22 -j ACCEPT
      $IPT -A INPUT -p tcp --dport 80 -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 80 -j ACCEPT
      $IPT -A INPUT -p tcp --dport 443 -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 443 -j ACCEPT
      # Allow FTP connections @ port 21
      $IPT -A INPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
      $IPT -A OUTPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
      # Allow Active FTP Connections
      $IPT -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
      $IPT -A OUTPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT
      # Allow Passive FTP Connections
      $IPT -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT
      # DNS
      $IPT -A OUTPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT
      $IPT -A INPUT -p tcp --dport 1:1024
      $IPT -A INPUT -p udp --dport 1:1024
      $IPT -A INPUT -p tcp --dport 3306 -j DROP
      $IPT -A INPUT -p tcp --dport 10000 -j DROP
      $IPT -A INPUT -p udp --dport 10000 -j DROP
    Then when I run apt-get I obtain:
      core:~# apt-get update
      0% [Connecting to ftp.us.debian.org] [Connecting to security.debian.org] [Conne
    and it stalls. What rules do I need to configure to make it work? Thanks.
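
    Two concrete problems in the script above: the two --dport 1:1024 rules have no -j target, so they match packets and then do nothing, and with both policies set to DROP the accepted web ports only cover this box acting as a server (INPUT dport 80 / OUTPUT sport 80), while apt-get is a client (OUTPUT dport 80, random source port) whose DNS replies also never get back in. A sketch of the missing client-side rules, in the script's own style:

      # Replies to established/related traffic (covers DNS answers,
      # HTTP responses, FTP data):
      $IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # apt-get as an HTTP client:
      $IPT -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
      # (the FTP client rule on --dport 21 above already exists; it only
      # works once the ESTABLISHED reply rule is in place)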

    Read the article

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent internet connections to my home. This has the advantage of redundancy, etc., but the drawback that both connections max out at about 6 Mb/s. So one individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine over complex web pages and utilises both external connections fine. However, single-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
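
    This is essentially what segmented download managers already do, and it can be prototyped with curl before committing to a proxy: --range picks the byte window and --interface binds each half to a different uplink (a sketch; the URL, sizes and interface names are placeholders, and the server must honour Range requests):

      URL=http://example.com/big.iso
      # First half out one uplink, second half out the other, in parallel:
      curl --interface eth0 -r 0-499999999  -o part1 "$URL" &
      curl --interface eth1 -r 500000000-   -o part2 "$URL" &
      wait
      cat part1 part2 > big.iso
      # aria2c automates the splitting (aria2c -s2 "$URL"), though steering
      # each segment onto a different uplink still needs routing rules.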

    Read the article

  • JBoss database connection pool configuration

    - by Qben
    I am facing a connection pool issue in my clustered JBoss installation. From time to time one of my connection pools will hit the roof and I get a lot of these in my logfile: java.sql.SQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ); The odd thing is that I can see in the JMX console that ConnectionCount hit the roof, but at the same time InUseConnectionCount is often quite small. The problem resolves itself after a couple of minutes, but during the recovery phase my application will not work (for obvious reasons). The question is whether this indicates an error in the configured timeouts of the connections (I pretty much use defaults), or whether my pool is simply too small to handle the peaks. Under normal operation I would say I use ~40% of the configured max number of connections. The reason I don't just increase the max number of connections is that if I actually used up all connections, I suspect that InUseConnectionCount would hit the roof. Hence I suspect I might have more issues than just a pool that is too small. Maybe InUseConnectionCount had decreased by the time I checked the JMX console and it actually does hit the roof? I tend to collect data every other minute. Any hints are more than welcome.
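
    "No ManagedConnections available" while InUseConnectionCount stays low often points at connections that are checked out and never closed (leaks) rather than a pool that is simply too small; the pool bounds and the blocking timeout live in the *-ds.xml datasource descriptor. A sketch of the relevant elements (values are illustrative; element names follow the standard JBoss 4/5 datasource schema):

      <local-tx-datasource>
        <jndi-name>MyDS</jndi-name>
        <min-pool-size>10</min-pool-size>
        <max-pool-size>100</max-pool-size>
        <!-- how long a request waits before the SQLException above -->
        <blocking-timeout-millis>30000</blocking-timeout-millis>
        <!-- return idle connections so ConnectionCount stays honest -->
        <idle-timeout-minutes>5</idle-timeout-minutes>
      </local-tx-datasource>

    Enabling the CachedConnectionManager debug mode (in jbossjca-service.xml) is the usual way to get stack traces for unclosed connections; verify the exact switch against the JBoss version in use.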

    Read the article
