Search Results

Search found 7116 results on 285 pages for 'nested queries'.

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64-bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M
        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.
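
    As a first diagnostic step, a few standard MySQL statements (nothing specific to this server) show whether the key buffer is keeping up and what the busy threads are actually doing; a sketch:

        -- A ratio of Key_reads to Key_read_requests far above ~1:100 suggests
        -- the MyISAM key buffer is too small for the working set.
        SHOW GLOBAL STATUS LIKE 'Key_read%';
        -- Temporary tables spilling to disk are another classic CPU/IO sink.
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';
        -- And see what the threads are doing while the CPU is pegged:
        SHOW FULL PROCESSLIST;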

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks an event is happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean: instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor the average response time, and as soon as the response time from Tomcat rises above some threshold, display an error page. That way, users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I am looking for suggestions on how to achieve this. Thanks
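
    Stock mod_proxy_balancer has no built-in average-response-time trigger, but it can cap how long anyone waits. A rough sketch (worker names are placeholders; timeout bounds the wait per Tomcat, and ErrorDocument serves a cheap static page when the balancer answers 503):

        <Proxy balancer://app>
            BalancerMember ajp://tomcat1:8009 timeout=10 retry=30
            BalancerMember ajp://tomcat2:8009 timeout=10 retry=30
        </Proxy>
        ProxyPass /app balancer://app/
        ErrorDocument 503 /overloaded.html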

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database with roughly 1 GB of data and indices combined, on a dedicated Linux server. The DB version is 5.0.89-community. Configuration is controlled via cPanel; PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until, at some point, the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Resetting the server always fixes the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion, though not constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think that's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
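
    For reference, MySQL does have per-account limits, and it also has a host-blocking mechanism whose symptoms look a lot like this: a host gets refused after max_connect_errors aborted connects, until the server restarts or the host cache is flushed. A sketch, with illustrative numbers:

        -- Per-account resource limits (available in 5.0):
        GRANT USAGE ON *.* TO 'webuser'@'1.2.3.4'
            WITH MAX_QUERIES_PER_HOUR 50000
                 MAX_CONNECTIONS_PER_HOUR 5000
                 MAX_USER_CONNECTIONS 40;

        -- If the host has been blocked, this clears it without a restart:
        FLUSH HOSTS;
        SHOW VARIABLES LIKE 'max_connect_errors';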

  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform. I'm pretty sure there's something wrong with my configuration, as it seems to be very slow. SSH sometimes takes over 30 seconds to connect, and a WordPress-generated web page can take anywhere from 5 to 20 seconds to load, which is pretty awkward. MySQL queries all execute in less than a second, so I don't think that's the cause. I'm not really sure where the issue lies: a simple page written in PHP loads instantly, but a fresh WordPress installation starts lagging. The same site works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I missed something. If you could please direct me to the right tools and articles, that would help me out. Thanks so much! Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as a module for Apache 2.2.9, and all websites are based on virtual hosts. I have some ongoing PHP scripts running from time to time in the background via cron. Thanks.
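
    One common cause of slow SSH connects on EC2 is failing reverse-DNS lookups on the server side, and it is worth ruling the same thing out for the web stack. A sketch of the settings to check, assuming stock OpenSSH and Apache:

        # /etc/ssh/sshd_config: don't reverse-resolve connecting clients
        UseDNS no

        # Apache config: don't resolve client IPs while serving/logging
        HostnameLookups Off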

  • Enabling JMX for proxool with tomcat

    - by dialt0ne
    I am trying to get proxool's MBeans available so that I can see/manipulate them with jconsole. I have jconsole working, but I don't see anything related to proxool. The system is using Sun Java 1.5.0_17 (I know, I know... I'm working with the developers to upgrade). JMX is enabled by modifying $JAVA_OPTS in my Tomcat 5.5 startup script:

        SJO="$SJO -Dcom.sun.management.jmxremote"
        SJO="$SJO -Dcom.sun.management.jmxremote.port=4998"
        SJO="$SJO -Dcom.sun.management.jmxremote.authenticate=false"
        SJO="$SJO -Dcom.sun.management.jmxremote.ssl=false"
        JAVA_OPTS="$JAVA_OPTS $SJO"

    I have proxool configured with JNDI in server.xml:

        <GlobalNamingResources>
          <Resource name="jdbc/database"
                    auth="Container"
                    type="javax.sql.DataSource"
                    factory="org.logicalcobwebs.proxool.ProxoolDataSource"
                    user="username"
                    password="password"
                    proxool.driver-url="jdbc:oracle:thin:@fqdn.example.com:1521:MYSID"
                    proxool.driver-class="oracle.jdbc.driver.OracleDriver"
                    proxool.alias="mysid"
                    proxool.maximum-connection-count="20"
                    proxool.statistics="20s,5m,15m"
                    proxool.statistics-log-level="INFO"
                    proxool.jmx="true"
                    proxool.verbose="true" />
        </GlobalNamingResources>

    My test .jsp can run queries, and I can see it using the connections with the proxool admin servlet, but I'm unsure whether there's more I need to configure in Tomcat or proxool to get JMX functioning. Advice?

    Edit (jmxproxy info): The jmxproxy servlet is working. When I go to the URL http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=*:type%3DRequestProcessor,* the results are:

        OK - Number of results: 2
        Name: Catalina:type=RequestProcessor,worker=http-8080,name=HttpRequest0
        modelerType: org.apache.coyote.RequestInfo
        bytesSent: 0
        requestBytesSent: 0
        contentLength: -1
        bytesReceived: 0
        requestProcessingTime: 1297983483666
        globalProcessor: org.apache.coyote.RequestGroupInfo@32dc51c8
        requestBytesReceived: 0
        serverPort: -1
        stage: 0
        requestCount: 0
        maxTime: 0
        processingTime: 0
        errorCount: 0
        Name: Catalina:type=RequestProcessor,worker=jk-127.0.0.1-8009,name=JkRequest794
        modelerType: org.apache.coyote.RequestInfo
        virtualHost: tomcatserver.example.com
        bytesSent: 0
        method: GET
        remoteAddr: 172.30.3.51
        requestBytesSent: 0
        contentLength: -1
        workerThreadName: TP-Processor15
        bytesReceived: 0
        requestProcessingTime: 9
        globalProcessor: org.apache.coyote.RequestGroupInfo@1e7d3b8e
        protocol: HTTP/1.1
        currentQueryString: qry=*%3Atype%3DRequestProcessor%2C*
        requestBytesReceived: 0
        serverPort: 4999
        stage: 3
        requestCount: 0
        maxTime: 0
        processingTime: 0
        currentUri: /manager/jmxproxy/
        errorCount: 0

    And more to the point, http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=Catalina:type%3DEnvironment,resourcetype%3DGlobal,name%3DProxool yields:

        OK - Number of results: 0

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server physical box running IIS 6.x to a Windows 2008 R2 Standard VM with IIS 7.5. The application is a .NET Framework 2.0 application running under a 2.0 app pool. The site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element; it queries the site and can take up to 45 seconds to answer. When it does respond, the pages render instantly, but that initial request is killing it. I see no errors or issues in the application, Windows Event Viewer, or even the IIS logs, so I'm not sure where to start looking next. One change is that the app previously sat behind a PIX firewall and is now behind a larger network environment in a DMZ zone (I believe NetScaler is also being used to manage the network). I do not have the rights to look at the network itself, but I can ask the data center folks to dig deeper; first I wanted to make sure it's not my application or IIS causing the slowdown. In summary: the .NET 2.0 application worked great on IIS 6.x; moved to IIS 7.5, it is slow on the initial request, but once it responds it delivers pages instantly. Edit (solution): It turned out the SOAP calls were slowing the site down. In the new data center my application cannot make its SOAP calls, so they time out after 40-45 seconds. I'm now trying to find out whether I can install a proxy server to redirect them...
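
    For the outbound-call problem in the edit: if the data center provides an egress proxy, .NET can route the application's outgoing HTTP (including SOAP clients built on HttpWebRequest) through it with a config change alone. A sketch for web.config; the proxy address is a placeholder:

        <system.net>
          <defaultProxy enabled="true">
            <proxy proxyaddress="http://proxy.example.local:8080" bypassonlocal="true" />
          </defaultProxy>
        </system.net>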

  • DNS: how to get local server to superimpose results over authoritative server?

    - by growse
    I've got a domain whose DNS I control, hosted on the internet. I also have a NAT'd internal network (192.168.0.0/24) with internet access, which I also control. On this internal network, I also have a DNS resolver. The DNS software on both is PowerDNS. What I want is for the resolver on the internal network to be able to add to or change records in the queries and results that come down from the authoritative server. For example, the authoritative server might have a single record for animal.example.com:

        animal.example.com. IN AAAA 2001:140:283::1

    However, I'd like internal clients doing a DNS lookup for animal.example.com to get back the following:

        animal.example.com. IN AAAA 2001:140:283::1
        animal.example.com. IN A 192.168.0.2

    Obviously, I could set up the internal DNS server to pretend to be authoritative for example.com, but that would take a fair bit of effort to keep the main DNS server and the internal DNS server in sync for the records which are the same on both. If the internal DNS server could somehow be made a slave of the main DNS server, but with the provision to add its own results, that would be ideal. Is this possible?
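
    For what it's worth, some resolvers support exactly this kind of overlay natively. A sketch using Unbound (a different resolver, so only relevant if switching software on the internal box is an option): a "typetransparent" local zone answers only the record types present in local data and resolves every other type upstream as usual, so the AAAA would still come from the authoritative server while the A is added locally.

        server:
            local-zone: "animal.example.com." typetransparent
            local-data: "animal.example.com. IN A 192.168.0.2"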

  • SQL Server replication and load balance

    - by Ahmed Galal
    I'm running a web service that serves a mobile app, on IIS 8 and SQL Server 2014. The service is under massive load and I'm trying to improve performance; most of the load lands on SQL. I don't think I have a bottleneck: the processor and RAM are already at the maximum, and I think my code is not that bad. I'm already using memcached and other techniques to avoid hitting SQL too much. I know I can always upgrade the server hardware, but I have a spare server that I would like to use, so I was thinking of splitting the SQL load across the two servers. The idea is to set up replication on the other server and do some load balancing, but I'm not sure how to do the balancing. I know I can adjust my code to hit the other server for some queries, but I was hoping to find a solution that avoids changing my code. So my question is: what are the ways of load balancing between two SQL servers? I would appreciate suggestions, best practices, or some direction. Thanks.
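
    One largely code-free option on SQL Server 2014 is an AlwaysOn availability group with a readable secondary: read-intent sessions get routed to the second server, and the client-side change is a connection-string keyword rather than query logic. A sketch (availability group, replica, and listener names are placeholders):

        -- Allow read-intent connections on the secondary replica:
        ALTER AVAILABILITY GROUP [AppAG]
        MODIFY REPLICA ON N'SQLNODE2' WITH
        (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

        -- (A full setup also needs read-only routing URLs/lists on the
        --  replicas; omitted here for brevity.)
        -- Read-only clients then connect with:
        --   Server=tcp:ag-listener,1433;Database=AppDb;ApplicationIntent=ReadOnly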

  • Problems forwarding zone to another DNS server.

    - by sebastian nielsen
    I have an authoritative DNS server at 83.248.21.18 for the domain finahemgoteborg.se. My registrar requires me to have two DNS servers for the domain, so I now want the machine at 85.228.103.141 to simply forward all incoming queries for finahemgoteborg.se to the 83.248.21.18 server. In the BIND server on 85.228.103.141, I have the following config:

        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };

    The problem is that 85.228.103.141 still responds with REFUSED when queried for, e.g., the www.finahemgoteborg.se A record. How can I fix this? I do NOT want to set up a master/slave situation, just one nameserver that forwards to another.

    Edit: the rest of named.conf:

        options {
            directory "/var/cache/bind";
            version "none";
            allow-recursion { "none"; };
            minimal-responses no;
        };
        zone "sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns1sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns2sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };
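
    Worth knowing when debugging this: BIND answers a "type forward" zone through its recursion path, so a client that is not allowed to recurse gets REFUSED even for the forwarded zone, and allow-recursion { "none"; } rules out everyone. A sketch of the mechanical change, with the caveat that opening recursion broadly turns the box into an open resolver for the listed clients, which is why a slave zone is the usual answer:

        options {
            allow-recursion { 192.0.2.0/24; };   // placeholder ACL, not "any"
        };
        zone "finahemgoteborg.se" in {
            type forward;
            forward only;
            forwarders { 83.248.21.18; };
        };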

  • Firefox proxy authentication with Kerberos: one service ticket per connection (Linux)

    - by Dari
    I am trying to enable proxy authentication via Kerberos for Firefox. The setup is: an Active Directory domain (for LDAP and Kerberos; this works, and I can log in to the computer and get Kerberos tickets without problems); a Microsoft Windows witness machine (on which Firefox runs fine with no ticket problem); a CentOS 6.3 system with Firefox (tested with both the 10.0.1 ESR found in the CentOS package repositories and the 15.0.1 downloaded from Mozilla's website); and a BlueCoat proxy with Kerberos authentication enabled. At the moment, Firefox requests an element of a website, gets an HTTP "407 Proxy Authentication Required" error code from the proxy, gets a ticket-granting service (TGS) ticket from the domain for the proxy, and performs the request again while passing the ticket. That transaction runs fine. However, when more elements are requested (in parallel), Firefox requests one more ticket per proxy connection. This takes many DNS queries and Kerberos interactions with the domain controllers, and it costs a lot of time (for example, the Adobe home page takes several minutes to load, and at the end I have about 30 valid Kerberos tickets). I have been stuck on this for a while, and help would be greatly appreciated. Minor information: the CentOS operating system is virtualized with VMware Player 3.1.3, but I do not think this is a game changer.
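
    A couple of Firefox preferences govern Negotiate (Kerberos) authentication to proxies and may be worth double-checking while debugging; a sketch with real pref names and illustrative values:

        // about:config / prefs.js
        user_pref("network.negotiate-auth.allow-proxies", true);   // default true
        user_pref("network.negotiate-auth.trusted-uris", "example.com");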

  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server, and I haven't found anything that matches my needs: the candidates are either too complicated, too bare-bones, or simply buggy as hell. This is (all) I need: it must run on Linux (and preferably also OS X), and it must listen on localhost:8888. When my app (say wget, or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other URL, both http and https), I want it to fetch the page not from Google's servers but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd. Nothing more! I don't need caching or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language; I don't care, as long as it supports my (very limited) needs or I can easily change the source to make it so. Thanks a lot
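
    Since any language is acceptable, here is a minimal Python sketch of the core idea: speak just enough SOCKS5 to accept a CONNECT, then ignore the requested destination and pipe the whole connection to a fixed local backend. It does no URL rewriting (turning the request into /cached?uri=... would need an HTTP-aware layer on top), has no authentication, and has only rudimentary error handling; the backend and listen addresses are assumptions.

        #!/usr/bin/env python3
        import socket, struct, threading

        BACKEND = ("127.0.0.1", 4444)   # assumed local cache server
        LISTEN = ("127.0.0.1", 8888)

        def pipe(src, dst):
            # copy bytes one way until EOF, then close both ends
            try:
                while True:
                    data = src.recv(8192)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                src.close()
                dst.close()

        def handle(client):
            try:
                # greeting: VER NMETHODS METHODS -> reply "no auth required"
                ver, nmethods = client.recv(2)
                client.recv(nmethods)
                client.sendall(b"\x05\x00")
                # request: VER CMD RSV ATYP DST.ADDR DST.PORT (destination ignored)
                ver, cmd, _, atyp = client.recv(4)
                if atyp == 1:                # IPv4 address
                    client.recv(4)
                elif atyp == 3:              # domain name
                    (alen,) = client.recv(1)
                    client.recv(alen)
                elif atyp == 4:              # IPv6 address
                    client.recv(16)
                client.recv(2)               # port, also ignored
                remote = socket.create_connection(BACKEND)
                # success reply with a dummy bound address
                client.sendall(b"\x05\x00\x00\x01"
                               + socket.inet_aton("0.0.0.0") + struct.pack(">H", 0))
                threading.Thread(target=pipe, args=(remote, client), daemon=True).start()
                pipe(client, remote)
            except Exception:
                client.close()

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN)
        srv.listen(64)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    With the backend listening on port 4444, a test would then look like: curl --socks5 localhost:8888 http://www.google.com/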

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I could proxy these requests by means of nginx? Update: note that this box carries a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately, I can write one quite easily in this case) and then deal with the bogus traffic only. And I do not want to channel the bogus traffic to another box just to be able to capture it there with tcpdump. Update 2: to give a bit more detail, the bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow, no problem: I'll use those.
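
    On a modern nginx this filtering can be done entirely in the config. Note the caveats: the if= parameter of access_log needs nginx 1.7.0+, and $request_body is only populated when the body is actually read (e.g. when the request is proxied), so on 0.7.65 this exact sketch will not work and ngrep/tcpdump filtering on the query string remains the fallback:

        http {
            log_format dump '$remote_addr "$request" '
                            'ua="$http_user_agent" referer="$http_referer" '
                            'cookie="$http_cookie" body="$request_body"';

            # flag requests that carry the bogus "foo" parameter
            map $arg_foo $log_bogus {
                ""      0;
                default 1;
            }

            server {
                location / {
                    access_log /var/log/nginx/bogus.log dump if=$log_bogus;
                    proxy_pass http://backend;   # body gets read, so $request_body is set
                }
            }
        }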

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for this belonging on ServerFault rather than StackOverflow: I already have the program which gets the value; I am asking about the value returned and what it means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly where some speeds were reported incorrectly (for example, CurrentClockSpeed said 1.0 GHz, whereas the CPU name said "Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz" [confirmed: it is in fact 1.83 GHz]). I did a bit of digging on the internet and found a blog post which might explain what is going on. My initial thought was to change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what it returns. What I mean is: will it return the actual maximum speed (say, if the CPU were overclocked), which it would not normally be running at, or will it return what I expect, namely the maximum speed under normal (not overclocked) conditions?
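
    For poking at the raw values, a small script using the third-party "wmi" package (pip install wmi; Windows only) prints both fields side by side; a sketch:

        import wmi

        for cpu in wmi.WMI().Win32_Processor():
            print(cpu.Name)
            # CurrentClockSpeed can reflect power management (SpeedStep etc.)
            print("CurrentClockSpeed:", cpu.CurrentClockSpeed, "MHz")
            print("MaxClockSpeed:", cpu.MaxClockSpeed, "MHz")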

  • How to enable IPv6 glue records (AAAA) in PowerDNS

    - by aef
    I'm running PowerDNS 3.1 on a Debian Wheezy Beta 4 system. The zone data is accessed through a PostgreSQL database, and the server answers both IPv4 and IPv6 queries. If the DNS server knows the A record for one of the name servers referenced by NS records in a zone, it automatically returns that A record as additional information in the response to an NS query for the zone. But even when it knows the AAAA record for one of those name servers, it currently never returns the AAAA record as additional information. How can I enable this? Or is there anything I could be doing wrong?

    Output of dig @ns.mydomain.tld NS mydomain.tld:

        ;; QUESTION SECTION:
        ;mydomain.tld. IN NS

        ;; ANSWER SECTION:
        mydomain.tld. 86400 IN NS ns3.nsprovider.de.
        mydomain.tld. 86400 IN NS ns2.nsprovider.de.
        mydomain.tld. 86400 IN NS ns.mydomain.tld.
        mydomain.tld. 86400 IN NS ns.nsprovider.de.

        ;; ADDITIONAL SECTION:
        ns2.nsprovider.de. 86400 IN A 1.2.3.1
        ns.nsprovider.de. 86400 IN A 1.2.3.2
        ns.mydomain.tld. 600 IN A 192.0.2.194
        ns3.nsprovider.de. 86400 IN A 1.2.3.3

    Output of dig @ns.mydomain.tld A ns.mydomain.tld:

        ;; QUESTION SECTION:
        ;ns.mydomain.tld. IN A

        ;; ANSWER SECTION:
        ns.mydomain.tld. 600 IN A 192.0.2.194

    Output of dig @ns.mydomain.tld AAAA ns.mydomain.tld:

        ;; QUESTION SECTION:
        ;ns.mydomain.tld. IN AAAA

        ;; ANSWER SECTION:
        ns.mydomain.tld. 86400 IN AAAA 2001:db8:100:3022:1::3

  • Issues with Nginx + Passenger production setup - loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have nginx as a proxy server in front of a Ruby on Rails app running Passenger, plus a PostgreSQL database server running on its own VM, separate from the nginx/application server. My issue is that when I access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. But the second I flood the web server with requests, I choke it out and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and their usage is not that high: each server has more than enough memory, and even CPU usage on the Rails server is no more than 85% (high, admittedly, but not maxed out). Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question, please ask and I can clarify what I mean. EDIT: to see exactly what I mean about the database query, see http://207.245.4.215/products
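
    With Passenger behind nginx, queuing like this often means the application process pool is exhausted while requests keep arriving, so pool sizing is the first thing to check. A sketch of the relevant nginx-Passenger directives (numbers illustrative; each extra app process also costs its own database connections):

        http {
            passenger_max_pool_size 6;    # total application processes
            passenger_min_instances 2;    # keep warm processes around
        }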

  • Orphaned SQL Recordsets/Connections with IIS

    - by Damian
    I have an IIS 6 site running on Windows 2003 Server x86 with MS SQL 2005 Enterprise Edition, running classic ASP (no choice). The site runs very fast, with about 8,000 page views per hour. All of my SQL tables are indexed, and I have used the profiler to check my queries; the slowest is only about 10-15 ms. I have autoshrink disabled, autogrow is set to 250 MB, and the database is 2 GB with 800 MB of free space. My problem is that every now and then the site will slow to a crawl for no reason. Pages that just connect to the database and increment a hit counter work okay, but more SQL-intensive pages that normally execute in about 60 ms take 25,000 ms to run. This happens for about 30 seconds and then goes away. I was having an issue with orphaned recordsets and connections due to the way I was releasing them. I have fixed that, and the issue is much better, but I am still getting them. Is there a way, with perfmon etc., to track when SQL Server or Windows closes these orphaned connections? At least if I can monitor the issue, I will know whether I am making progress or even looking at the right things. Is there anything else I might be missing? Thank you!
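
    For the monitoring question, SQL Server 2005's DMVs can be polled (e.g. from a scheduled job) to watch connections go idle before they are reaped; a sketch using the standard views:

        SELECT c.session_id,
               s.login_name,
               s.host_name,
               s.status,
               s.last_request_end_time,
               DATEDIFF(minute, s.last_request_end_time, GETDATE()) AS idle_minutes
        FROM sys.dm_exec_connections AS c
        JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
        WHERE s.is_user_process = 1
        ORDER BY idle_minutes DESC;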

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: secure LDAP queries via the command line and PHP to an AD domain controller with a self-signed certificate. Background: I am working on a project where I need to enable LDAP lookups from a PHP web application to an MS AD domain controller that uses a self-signed certificate. The self-signed certificate also uses a domain name that is not an FQDN; think of something like people.campus as the domain name. The web application takes the user's credentials and passes them to the AD domain controller to verify whether they match. This seems simple, but I am having trouble getting PHP and the self-signed certificate to work together. Some people have suggested changing the TLS_REQCERT variable from "request" to "never" in the OpenLDAP configuration. I am concerned that this might have larger implications, such as allowing a man-in-the-middle attack, and I am not comfortable changing the setting to never. I have also read in some places that one can take a certificate and add it as a trusted source in the OpenLDAP configuration file. Is that something I could do in my situation? Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust it needs, so that I do not need to change the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need? Is there an alternate method of handling this that would be better in the long run? I am working with some CentOS servers and some Ubuntu servers. Thanks in advance for your help and ideas.
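
    The certificate can indeed be fetched from the command line. A sketch (the LDAPS host name is a placeholder), saving the presented certificate and then trusting it explicitly rather than setting TLS_REQCERT to never; since a self-signed certificate is its own issuer, it works directly as a CA file:

        # grab the cert the domain controller presents on port 636
        openssl s_client -connect dc.people.campus:636 -showcerts </dev/null \
          | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > dc-cert.pem

        # then point OpenLDAP at it (path/filename vary by distro), e.g. in ldap.conf:
        #   TLS_CACERT /etc/openldap/certs/dc-cert.pem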

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1 GB CentOS server configured per the internet (online research). No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down: from the client side, the page says "Loading..." and spins more or less indefinitely; on the server side, the shell locks completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see memory usage hovering between 35-55% generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. error_log tells me that MaxClients was reached right before each reboot. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month that seems like overkill, since it ran fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating! Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04, 1 user, load average: 0.56, 0.44, 0.38
        Tasks: 85 total, 2 running, 83 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.7%us, 3.5%sy, 0.0%ni, 89.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem: 2051088k total, 736708k used, 1314380k free, 199576k buffers
        Swap: 4194300k total, 0k used, 4194300k free, 287688k cached
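
    The usual containment for this failure mode is capping Apache so it cannot outgrow RAM and swap the box to death. A prefork sketch with illustrative numbers; MaxClients should be sized to roughly free RAM divided by the resident size of one Apache child:

        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       2
            MaxSpareServers       6
            MaxClients           20
            ServerLimit          20
            MaxRequestsPerChild 500
        </IfModule>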

  • How to Set up MySQL Server to utilize more memory

    - by Cyril Gupta
    Hi there, I have MySQL set up on Windows along with Plesk. The version is 5.0.45 Community. The databases on the server are MyISAM as well as InnoDB, but predominantly InnoDB. I have 8 GB of memory on the server, but MySQL isn't using more than 1.3 GB, and tweaking the settings isn't helping. I tried to increase the memory allocation for innodb_buffer_pool_size: it works if I set it to 1G, but if I set 2G or above, the server doesn't come back online! I want MySQL to use at least 5-6 GB of the memory I have, for performance, but I can't get this to work. Can anyone please help? My MySQL config file is below (there are two [mysqld] sections; when I used MySQL Workbench it created another one!):

        [MySQLD]
        port=3306
        basedir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL
        datadir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL\\Data
        default-character-set=latin1
        default-storage-engine=INNODB
        query_cache_size=128M
        table_cache=1024
        tmp_table_size=32M
        thread_cache=32
        myisam_max_sort_file_size=100G
        myisam_max_extra_sort_file_size=100G
        myisam_sort_buffer_size=2M
        key_buffer_size=32M
        read_buffer_size=16M
        read_rnd_buffer_size=2M
        sort_buffer_size=8M
        innodb_additional_mem_pool_size=24M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=10M
        innodb_buffer_pool_size=1G
        innodb_log_file_size=10M
        innodb_thread_concurrency=8
        max_connections=700
        key_buffer=48M
        max_allowed_packet=5M
        sort_buffer=2M
        net_buffer_length=4K
        old_passwords=1
        wait_timeout=20
        connect_timeout=60

        [client]
        port=3306

        [mysqld]
        query_cache_min_res_unit = 4096
        innodb_additional_mem_pool_size = 1048576
        innodb_buffer_pool_size = 1G
        query_cache_limit = 1048576
        key_buffer_size = 8388608
        sort_buffer_size = 2097144
        query_cache_type = 1
        query_cache_size = 312M
        log-slow-queries
        connect_timeout = 5
        wait_timeout = 20
        thread_cache_size = 15
        read_buffer_size = 131072
        table_cache = 64
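
    One thing to rule out first: a 32-bit mysqld (and the "Program Files (x86)" path above suggests the Plesk-bundled build may be 32-bit) cannot address much more than 2 GB per process on Windows no matter how much RAM is installed, which would explain the server dying at innodb_buffer_pool_size=2G. These statements confirm which binary is actually running:

        SHOW VARIABLES LIKE 'version_compile_machine';
        SHOW VARIABLES LIKE 'version_compile_os';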

  • IPv6 reverse DNS delegation

    - by user1709492
    I currently have 2001:1973:2303::/48 assigned to me, and I'll be assigning /64s to customers. I'd like to have one zone file for the /48 in which I can essentially point/redirect queries to different nameservers. Example (desired effect):

        2001:1973:2303:1234::/64 -> ns1.example.com, ns2.example.com
        2001:1973:2303:2345::/64 -> ns99.example2.com, ns100.example2.com
        2001:1973:2303:4321::/64 -> ns1.cust1.com, ns2.cust1.com

    Current /48 zone file:

        $TTL 3h
        $ORIGIN 3.0.3.2.3.7.9.1.1.0.0.2.ip6.arpa.
        @ IN SOA ns3.example.ca. ns4.example.ca. (
                2011071030 ; serial
                3h ; refresh after 3 hours
                1h ; retry after 1 hour
                1w ; expire after 1 week
                1h ) ; negative caching TTL of 1 hour
          IN NS ns3.example.ca.
          IN NS ns4.example.ca.
        1234 IN NS ns1.example.com.
             NS ns2.example.com.
        2345 IN NS ns99.example2.com.
             NS ns100.example2.com.
        4321 IN NS ns1.cust1.com.
             NS ns2.cust1.com.

    Where am I going wrong? My request seems simple, at least to me. To put it in firewall terms, I want to redirect traffic: a client queries 2001:1973:2303:4321::1; ns3.example.ca sees the request and redirects the query to ns1.cust1.com; ns1.cust1.com answers the query with omg.itworks.ca (provided ns1.cust1.com is properly configured).
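
    One likely culprit, offered as a sketch: labels under ip6.arpa are single hexadecimal nibbles in reversed order, so a /64 under a /48 origin is delegated with four one-nibble labels, not with a literal "1234". The delegations would then look like:

        $ORIGIN 3.0.3.2.3.7.9.1.1.0.0.2.ip6.arpa.
        ; 2001:1973:2303:1234::/64
        4.3.2.1    IN NS ns1.example.com.
                   IN NS ns2.example.com.
        ; 2001:1973:2303:2345::/64
        5.4.3.2    IN NS ns99.example2.com.
                   IN NS ns100.example2.com.
        ; 2001:1973:2303:4321::/64
        1.2.3.4    IN NS ns1.cust1.com.
                   IN NS ns2.cust1.com.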

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data directory on a 128 GB SSD. I deal with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate database for the purpose of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than six datasets on the SSD at a time. Right now I manually dump the oldest to a much larger 2 TB spinning disk every week and drop the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence), I have to drop a current one (after dumping it), reload the archived one, do what I need, then reverse the process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely" but rather "moved onto the slow physical device". The time taken to run my queries against a spinning disk would be less than the time to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it merges at the directory level (from what I understand), so I'm still stuck with multiple directories. Any help appreciated; thanks in advance.
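
    Not transparent merging, but the closest low-tech equivalent on Linux: MySQL follows symlinks for per-database directories, so an archived schema can live on the spinning disk while staying attached. A sketch (paths are illustrative; stop mysqld around the move, and note that InnoDB tables stored inside a shared ibdata file do not move this way, so innodb_file_per_table matters):

        service mysql stop
        mv /var/lib/mysql/archive_w21 /mnt/slowdisk/mysql/archive_w21
        ln -s /mnt/slowdisk/mysql/archive_w21 /var/lib/mysql/archive_w21
        service mysql start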

  • OpenVPN on port 53 bypasses access restrictions (finding similar open ports)

    - by user181216
    Wifi scenario: I'm using wifi in a hostel which has a Cyberoam firewall in front of all the computers that use the access point. The access point has the following configuration: default gateway 192.168.100.1, primary DNS server 192.168.100.1. When I try to open a website, the Cyberoam firewall redirects the page to a login page (with correct login information we can browse the internet, otherwise not), and it also imposes website-access and bandwidth limitations. I once heard about pd-proxy, which finds an open port and tunnels through it (usually UDP 53). Using pd-proxy with UDP port 53, I can browse the internet without logging in, and even the bandwidth limit is bypassed! Likewise, with the software OpenVPN connecting to an OpenVPN server through UDP port 53, I can browse the internet without even logging in to the Cyberoam. Both pieces of software use port 53. Now, I have a VPS server on which I can install an OpenVPN server and connect through it to browse the internet. I know why this works: pinging some website (e.g. google.com) returns its IP address, which means the firewall allows DNS queries without login. But the problem is that a DNS service is already running on the VPS server on port 53, so I cannot run the OpenVPN service on port 53 there, and as far as I can tell port 53 is the only port that bypasses the limitations. So how do I scan the wifi network for vulnerable ports like 53, so that I can figure out another magic port and start an OpenVPN service on the VPS on that port? (I want to scan for ports like 53 on the Cyberoam through which traffic can be tunneled, not scan services running on ports.) Improvements to the question via retags and edits are always welcome. NOTE: this is all for educational purposes only; I'm curious about network-related knowledge.
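
    A sketch of the discovery step, assuming nmap is available on the client: probe which UDP ports the gateway answers or passes without login, then verify a candidate end-to-end with a listener on the VPS (netcat flags vary between variants):

        # from an unauthenticated wifi client; ports are illustrative
        nmap -sU -p 53,67,123,500,4500,5353 192.168.100.1

        # end-to-end check of one candidate port:
        #   on the VPS:     nc -u -l 5353
        #   on the client:  echo test | nc -u vps.example.com 5353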

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly today I am unable to resolve common domains like serverfault.com and facebook.com, while other domains like google.com and cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS 2011 Standard domain; the only DNS server is the SBS 2011 server. The same domains fail on all client PCs I have tried, and the same ones work. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the ones that do work. When I add Google's Public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for that PC, but queries directly from the SBS server or from other client PCs are still broken (so I don't believe it's a firewall issue). My main question is: how can I see which servers the SBS 2011 server queries when it doesn't know about a domain? There is nothing in our firewall logs saying any DNS-based packets were blocked, but I also wanted to query, by IP/FQDN, the servers the SBS server would likely contact to find out about facebook.com, for example. Update 23/05/2012: DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were failing, as well as those that were working. I haven't changed anything overnight, so it appears to have been some kind of temporary glitch, but I can't understand what would have caused it on the network.
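
    A quick way to compare what the SBS box itself resolves against an outside resolver, from a command prompt on the server (nslookup's -debug switch prints the full response packets, which shows whether the failure is an NXDOMAIN from upstream or a timeout):

        nslookup facebook.com 127.0.0.1
        nslookup facebook.com 8.8.8.8
        nslookup -debug facebook.com 127.0.0.1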

  • Processes slow down after some time of active running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine, each doing some pretty heavy-load work: the jobs parse files, and the bigger the file, the longer it takes to parse. The strange thing is that if I make the files too big (around 30 MB), the script kind of hangs. It starts processing really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it enters a sort of "zombie" state: where the process was using 70-80% of the CPU in htop before the drop, it slows down to something like 5-10%, and the load average drops as well. The status of the processes sometimes changes to D in htop (which, as far as I can tell, stands for uninterruptible sleep, i.e. waiting on I/O). Today I noticed the same behavior from MySQL processes when executing heavy queries (one query took something like 4 hours to execute). The cron jobs are mostly PHP, and during processing most of the CPU is eaten by the PHP process, not MySQL, so I think the issue is not with a specific language or program but with the way the processes are managed. The only other place I've seen similar behavior was on my Amazon EC2 micro instance, when after some aggressive use of CPU the CPU quota took effect and everything slowed down dramatically. This is a dedicated machine running Ubuntu. What may be the cause?
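
    State D means the process is blocked in the kernel, almost always waiting on I/O, so watching disk utilisation alongside process state usually shows whether the "hang" is the disk saturating. A sketch using standard tools (iostat comes from the sysstat package):

        iostat -x 1      # %util and await per device
        vmstat 1         # the "b" column counts processes blocked on I/O
        ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'   # who is in D state and where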
