Search Results

Search found 16467 results on 659 pages for 'request filtering'.


  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe MySQL needs further tuning is that New Relic shows our SELECT queries taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results (disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >>  MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations ---------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics --------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations ------------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows:

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M
        # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
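
    Given the tuner output above, the most pressing warning is the memory one: 35.5G of global buffers plus 184.5M per thread across 1024 allowed connections can theoretically reach 220G, well past installed RAM (roughly 126 GB, inferring from the 174% figure). A minimal sketch of the kind of adjustment involved - the exact values are assumptions and would need to be sized against the real workload:

        [mysqld]
        # Peak usage was only 61 connections; 1024 multiplies every per-thread
        # buffer into the worst-case memory figure, so trim it to real headroom.
        max_connections  = 256
        # 128M per join buffer is extreme; indexing the 1328 unindexed joins is
        # the real fix, after which a small buffer suffices.
        join_buffer_size = 8M
        # Table cache hit rate was 19%; raise gradually, watching open-files limits.
        table_open_cache = 16384

    With ~220G of data and a 32G InnoDB buffer pool, slow SELECTs on cold data are expected; short of adding RAM, index coverage for the hot Magento tables will likely matter more than any single my.cnf knob.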

  • Caching a domain user's logon on a local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who connect to our domain over VPN (Check Point). One of the users in the USA has a laptop he has never logged on to before (so it has not cached his logon details). Does anyone know how to cache user logon information on this laptop? I have tried netdom trust to add this user to the laptop, but I was not able to do it. At the moment the user is logging in with a local administrator account and then using the VPN to log on to our domain, but when it comes to accessing files on the domain he gets "access denied". When he tries to log on to the domain directly he gets:

        There are currently no logon servers available to service the logon request.

    Does anyone know how to add this user?
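
    For what it's worth, Windows caches domain credentials only after at least one successful interactive domain logon, so the usual approach is to get the tunnel up before the domain logon happens: either Check Point's Secure Domain Logon (SDL) feature, or logging on as the local admin, connecting the VPN, and then using "Switch User" to log on with the domain account while the tunnel is active. The number of logons Windows will cache is controlled by a registry value; a quick sketch to verify caching hasn't been disabled (the default is 10, and 0 disables it):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount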

  • Default DocumentRoot in Apache does not work

    - by James Wise
    I have Apache version 2.2 and PHP 5.3.15 on a single server. I configured virtual hosting and a default vhost:

        0_default_.conf     - goes to /var/www/default
        sub.domain.com.conf - goes to /var/www/sub.domain.com

    My question is: how can I set the default document root to sub.domain.com permanently? That is, all requests should be redirected to sub.domain.com. I tried removing 0_default_.conf, but when viewing the page it displays the PHP source code of sub.domain.com. Here are my configurations: http://pastebin.com/4e3awUJ4. I could create an index.php in /var/www/default that permanently redirects to the sub.domain.com site, but that's not a viable solution for me, because if I ever stopped pointing the IP address of sub.domain.com at this server, users could not view that subdomain. I would appreciate it if anyone could share their knowledge and wisdom. Thanks. JamesW
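
    One approach - a sketch, assuming the vhost layout above - is to keep the default vhost but reduce it to a bare redirect, so any request that matches no configured ServerName lands on the sub.domain.com site:

        # 0_default_.conf - loaded first, so it is the catch-all default
        <VirtualHost *:80>
            ServerName default.invalid
            RedirectMatch permanent ^/(.*)$ http://sub.domain.com/$1
        </VirtualHost>

    Seeing raw PHP source when the default vhost is removed usually means requests are falling through to a context where the PHP handler isn't configured, which is worth fixing in the main server config regardless.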

  • Crontab + .sh + php

    - by Kristaps Karlsons
    Hi. I'm trying to call a shell script every 5 minutes which executes a PHP file as root.

        # crontab -l
        */5 * * * * /home/regularuser/call.sh

    Permissions:

        -rw-rw-rw- 1 root root 162 Jun 6 23:40 call.php
        -rwxr-xr-x 1 root root  66 Jun 7 01:20 call.sh

    call.sh contents:

        #!/bin/bash
        php -q /home/regularuser/call.php
        echo "request processed"

    My problem is that my PHP file doesn't get executed via crontab. However, if I run call.sh by hand, everything works perfectly. I'm new to crontab and shell scripting, so any advice/resources are welcome.
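
    The classic cause here is cron's minimal environment: cron jobs typically run with PATH=/usr/bin:/bin, so if php resolves via a different directory in your interactive shell, the script works by hand but fails silently under cron. A sketch of the usual fix (the binary path is an assumption; check it with `which php`):

        #!/bin/bash
        # call.sh - absolute path, so cron's stripped-down PATH doesn't matter
        /usr/bin/php -q /home/regularuser/call.php
        echo "request processed"

    Capturing the job's output also helps diagnose it, e.g.:

        */5 * * * * /home/regularuser/call.sh >> /tmp/call.log 2>&1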

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do - I expect about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions are:

    I assume that for each request, the rewrites block will be parsed and the regexes will be evaluated. Am I right?
    Will there be a performance penalty if I use these rewrites?
    Can nginx handle this?
    Are there any "best practices" to follow when doing a lot of rewrites?
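
    For hundreds of one-to-one mappings, nginx's map directive is generally a better fit than a chain of rewrite rules: a map whose keys are literal strings is a hash lookup, evaluated only when the variable is used, rather than 500 regexes tested in order on every request. A sketch using the example URLs above:

        # http {} context
        map $uri $redirect_target {
            default               "";
            /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
            # ... the remaining ~500 entries, inline or via include
        }

        # server {} context
        if ($redirect_target) {
            return 301 $redirect_target;
        }

    Plain rewrite rules are indeed re-evaluated per request, so if some must stay regex-based, keeping them few and anchored (^...$) is the standard advice.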

  • Possible to redirect from HTTPS to HTTP behind load-balancer?

    - by Derek Hunziker
    I have a basic ASP.NET application that sits behind an F5 load-balancer. Incoming SSL requests (over HTTPS) terminate at the load-balancer, and all internal communication between the load-balancer and my application servers is unsecured (over HTTP). When an unsecured request comes in, my app is able to use Response.Redirect("https://...") to redirect to a secure URL with no problems. However, the other direction appears to be impossible: I cannot redirect from HTTPS to HTTP using Response.Redirect() from my application. The URL remains HTTPS for the client and does not change. Could the F5 be preventing the redirect from ever reaching the client? Is there any special configuration necessary to make this happen?
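
    One thing worth checking: since TLS terminates at the F5, the application always sees plain HTTP, and many F5 configurations rewrite outbound Location headers back to https, which would swallow the redirect exactly as described. A common pattern is to have the F5 insert a header such as X-Forwarded-Proto and branch on it in the application - a sketch, with the header name being an assumption that depends on the iRule/profile configured on the F5:

        // ASP.NET side: detect the client's real scheme via the LB-injected header
        string proto = Request.Headers["X-Forwarded-Proto"];
        if (string.Equals(proto, "https", StringComparison.OrdinalIgnoreCase))
        {
            // Client is really on HTTPS; push them to the plain-HTTP URL.
            UriBuilder target = new UriBuilder(Request.Url) { Scheme = "http", Port = 80 };
            Response.Redirect(target.Uri.AbsoluteUri);
        }

    If the browser still ends up on HTTPS after this, the load-balancer is rewriting the Location header, and the fix has to happen on the F5 side.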

  • Laptop power button not working

    - by DyP
    I have a (seemingly rather exotic) Dell Latitude XT2 notebook running Lubuntu 12.04, fresh install. I tried to get the power button to work as expected (opening lubuntu-logout, just as I think it was meant to out of the box), but with no success: the power button does nothing except force a power-off on a long press. Here is some more information; please ask if anything else could help:

    The power button is recognized as a device: xinput lists "Power Button" as a slave keyboard at /dev/input/event1, and I can run a tool that listens on that device and invokes some action (successfully). But using .config/openbox/lubuntu-rc.xml to assign an action to XF86ShutDown or XF86PowerOff does nothing, although this works fine for any key on the keyboard.
    xinput --test reports a keycode of 124 (key press, key release) when pressing and releasing the power button.
    xmodmap -pk lists 124 mapped to XF86PowerOff, which seems correct to me (by the way: is this the same as XF86ShutDown?).
    When assigning another keysym name to keycode 124, e.g. 'w', still nothing happens when pressing the power button (no window seems to receive a 'w'). Again, this works fine for any key on the keyboard.
    The output of xev for a press/release of the power button puzzles me:

        MappingNotify event, serial 41, synthetic NO, window 0x0,
            request MappingKeyboard, first_keycode 8, count 248
        FocusOut event, serial 41, synthetic NO, window 0x2a00001,
            mode NotifyGrab, detail NotifyAncestor
        FocusIn event, serial 41, synthetic NO, window 0x2a00001,
            mode NotifyUngrab, detail NotifyAncestor
        KeymapNotify event, serial 41, synthetic NO, window 0x0,
            keys: 93 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    (instead of 93, I also get different values). I might have missed something very basic, since I'm not very familiar with openbox/Lubuntu or the X input system. By the way, the same goes for three other special buttons on my laptop.
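
    One possibility worth ruling out: on many laptops the power button arrives as an ACPI event consumed by acpid or a power manager before X clients ever see a usable key event, which would explain why remapping keycode 124 changes nothing. Running `acpi_listen` while pressing the button confirms whether it shows up as button/power. If it does, a sketch of handling it at that layer (file paths per a typical 12.04 acpid setup; the logout command is an assumption and varies by release):

        # /etc/acpi/events/powerbtn
        event=button/power
        action=/etc/acpi/powerbtn-custom.sh

        # /etc/acpi/powerbtn-custom.sh (chmod +x)
        #!/bin/sh
        # Pop up the Lubuntu logout dialog on the current X display
        su - "$(who | awk '/:0/ {print $1; exit}')" -c "DISPLAY=:0 lxsession-logout"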

  • WebsitePanel 2 totally NOT working on Windows Server 2012 on Azure

    - by Carmine Giangregorio
    I'm having a lot of trouble installing WebSitePanel on an Azure virtual machine running Windows Server 2012. I followed the steps in http://www.websitepanel.net/documentation/deployment-guide/server-configuration/preparing-windows-server-2008-r2-for-websitepanel-installation/ and installed everything I needed. Then I installed the WebSitePanel Standalone Server package with the installer. I opened the endpoint for port 9002 on Windows Azure and pointed my browser to myhostname.cloudapp.net (note: on Azure you don't get a static IP address; instead you have a hostname like [hostname].cloudapp.net). Loading myhostname.cloudapp.net:9002 fails, and every browser shows something like "Unable to load page". Notice: if I try to load the WebSitePanel portal directly on the server, I get an HTTP 400 Bad Request error. How come? IIS works perfectly on the server; in fact the default website runs without problems on port 80.
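
    Since the portal answers (albeit with a 400) locally but not via the public name, a likely culprit is the IIS site binding: a site bound to a specific host header returns 400/404 for names that don't match, such as myhostname.cloudapp.net. A sketch of checking and loosening the binding with appcmd (the site name "WebsitePanel Portal" and the port are assumptions; `list sites` shows the real values):

        %windir%\system32\inetsrv\appcmd list sites
        %windir%\system32\inetsrv\appcmd set site "WebsitePanel Portal" /+bindings.[protocol='http',bindingInformation='*:9002:']

    An empty host header (the trailing ':') makes the site answer on any hostname, which quickly rules bindings in or out.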

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11.) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>

    (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all.) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens:

    1. I browse to http://www.cnn.com
    2. I see via a sniffer that this is sent to Apache on the Mac
    3. Apache responds with its default home page ("It works!" is all this page says)

    So Apache is not doing as expected - it is not forwarding my browser's request out onto the Internet to CNN. Nothing in the log file indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
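
    There is a subtle trap in the config above: IfModule tests the module's file name or identifier, so `<IfModule mod_proxy>` (without the .c) never matches on Apache 2.2, and the whole proxy block is silently skipped - leaving Apache to answer as a plain web server with its default page, which is exactly the observed behavior. A sketch of the corrected block (module paths are the usual Mac OS X ones and are an assumption; 2.2-style access control shown):

        LoadModule proxy_module libexec/apache2/mod_proxy.so
        LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so

        <IfModule mod_proxy.c>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </IfModule>

    Note that mod_proxy_http is required in addition to mod_proxy for proxying http:// URLs; without it the same fall-through behavior occurs.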

  • .htaccess redirect root directory and subpages with parameters

    - by wali
    I am having difficulty trying to redirect a root directory while at the same time redirecting pages in a subdirectory to a different URL. For example:

        http://test.example.com/olddir/sub/page.php?v=one

    to

        http://test.example.com/new/one

    while also redirecting any request to the root of the olddir folder. I have tried:

        RewriteCond %{QUERY_STRING} v=one
        RewriteRule ^/olddir/sub/page.php /new/? [R=301]

    and

        RedirectMatch /oldir "test.example.com"
        RedirectMatch /olddir/sub/page.php?v=one "test.example.com/new/one"

    Any help at this point will be extremely appreciated... Thanks!
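
    Two mod_rewrite details commonly bite in this situation: in a per-directory .htaccess the matched path has no leading slash, and the query string can only be tested with RewriteCond. A sketch under those assumptions:

        RewriteEngine On

        # /olddir/sub/page.php?v=one  ->  /new/one
        RewriteCond %{QUERY_STRING} (^|&)v=one(&|$)
        RewriteRule ^olddir/sub/page\.php$ /new/one? [R=301,L]

        # requests to the root of olddir -> site root (adjust the target as needed)
        RewriteRule ^olddir/?$ / [R=301,L]

    The trailing `?` in the substitution drops the original query string, so v=one doesn't reappear on the redirected URL. (Note also the typo in the RedirectMatch attempt above: /oldir vs. /olddir.)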

  • IOException opening RFCOMM on openSUSE

    - by Chief A-G
    I have a permissions problem on openSUSE with Bluetooth /dev/rfcomm0. I've written a small test application which opens /dev/rfcomm0, sends a request message, and retrieves a response message. At first my problem was a "permission denied" error on /dev/rfcomm0, until I added the user account to the dialup users group. Now I get a System.IO.IOException "Interrupted system call" error whenever I run the app. If I run my application with sudo, it works fine. I'm not sure how and which permissions to set to get this to work under my normal user account.
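
    Two separate issues may be stacked here: device permissions (better solved with a udev rule than with sudo) and the EINTR itself, which in Mono/.NET usually means a blocking read was interrupted by a signal and can simply be retried. A sketch of the udev side (the group name is an assumption; openSUSE conventionally puts serial-style devices in dialout):

        # /etc/udev/rules.d/99-rfcomm.rules
        KERNEL=="rfcomm[0-9]*", GROUP="dialout", MODE="0660"

    After `udevadm trigger` (or re-creating the device), `ls -l /dev/rfcomm0` should show the new group; on the application side, catching the IOException and retrying the read is the usual handling for interrupted calls.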

  • Proxy - Some general questions

    - by user68802
    Is it possible to accomplish the following scenario with a proxy server? We have one internet-facing server that we want to put behind a proxy for various reasons, and we want everything to work as before: when clients make a request, all connections are forwarded to the internal server, which sends its response back through the proxy. We want to be able to switch the proxy to show a maintenance page whenever we are doing maintenance, and switch it back to forwarding traffic when we are done. We would also like to be able to keep forwarding traffic for users who are already using the sites, while showing the maintenance page to all new users for a while before showing it to everyone, in order to give existing users some time to finish their work before being kicked out.
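
    Yes - this is a standard reverse-proxy setup. A minimal sketch in nginx (addresses and paths are placeholders), where toggling maintenance mode is just creating or deleting a flag file, with no reload required:

        upstream internal_app {
            server 10.0.0.10:80;
        }

        server {
            listen 80;
            error_page 503 @maintenance;

            location @maintenance {
                root /var/www;
                rewrite ^ /maintenance.html break;
            }

            location / {
                # touch /etc/nginx/maintenance.on to enable; rm it to disable
                if (-f /etc/nginx/maintenance.on) {
                    return 503;
                }
                proxy_pass http://internal_app;
                proxy_set_header Host $host;
            }
        }

    The "existing users keep working for a while" requirement can be approximated by additionally exempting requests that carry an existing session cookie from the 503 check, and removing that exemption when the grace period ends.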

  • When using ssh with priv/pub keys, how to connect to the destination using a user different from the origin machine?

    - by lpacheco
    I need to connect to hostB as user2 from hostA, where I'm logged in as user1. I ran ssh-keygen -t rsa on hostA and copied the public key generated in ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys of user2 on hostB. Then I tried to connect from hostA to hostB using the command:

        user1@hostA> ssh user2@hostB

    I still get a request for a password:

        user2@hostB's password:

    If I connect using the same user on both hosts, it works correctly:

        user1@hostA> ssh user1@hostB
        Enter passphrase for key '/home/user1/.ssh/id_rsa':

    What am I missing?
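
    When key authentication silently falls back to a password prompt, the most common cause is permissions on the remote account: sshd ignores keys if the home directory, ~/.ssh, or authorized_keys is group- or world-writable. A sketch of the usual checks for the setup described:

        # on hostB, as user2
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # from hostA: verbose mode shows exactly why the key was skipped
        ssh -v user2@hostB

    `ssh -v` is the quickest diagnostic - it reports whether the key is offered and how the server responds to it.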

  • How should I group these variables?

    - by stariz77
    I have a shape that will be defined by:

        char s_type;
        char color;
        double height;
        double width;

    These variables are scanned in from a request string sent to my server and passed into my printing function, which then prints out the shape. Currently they are just local variables sitting in my main(); however, I was wondering whether there would be any advantage in creating a struct containing these variables and then passing the struct to my printing function. Or how else might I improve my program's structure/style? Would passing a struct by reference have any kind of performance benefit if there were many requests, and therefore many printing-function calls?

        void printer(char st, char cr, double ht, double wd);

        int main()
        {
            // Other main functionality.
            char s_type;
            char color;
            double height;
            double width;
            sscanf(serv_req, "GET /%c/%c/%lf/%lf", &s_type, &color, &height, &width);
            printer(s_type, color, height, width);
            // Other main functionality.
            return 0;
        }

    It seemed "neater" to have a struct or something that didn't leave me with declarations in the middle of everything else going on in main. I'm interested in structure/style as well as performance. EDIT: I didn't mean to put the printer declaration inside main.
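
    A struct is the idiomatic grouping here. A sketch of the refactor (names follow the code above; serv_req is assumed to be in scope, as in the original):

        #include <stdio.h>

        typedef struct {
            char   s_type;   /* shape type code */
            char   color;    /* color code      */
            double height;
            double width;
        } Shape;

        /* const pointer: printer reads the shape but never modifies it */
        void printer(const Shape *sh);

        int main(void)
        {
            Shape sh;
            sscanf(serv_req, "GET /%c/%c/%lf/%lf",
                   &sh.s_type, &sh.color, &sh.height, &sh.width);
            printer(&sh);
            return 0;
        }

    Passing a pointer avoids copying, but for a four-field struct like this the difference is negligible even under many requests - readability, not performance, is the real argument for the struct.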

  • eCryptfs on Ubuntu Server: how to keep the home directory mounted without an active ssh session?

    - by Bebeoix
    I have a daemon program that needs to read a file saved somewhere in my home folder. But every time I close my SSH connection, the daemon can't read the file, because it appears that eCryptfs unmounts the home directory. Is there an option to force eCryptfs to keep the home mounted when no SSH session is active? I haven't found one. Thanks.

    PS: I know this thread, http://askubuntu.com/questions/165608/why-is-ecryptfs-only-mounting-private-home-directory-over-ssh, but it doesn't offer a proper way to deal with the problem.

  • What can be the reasons for localhost responding super-slow the first time a page is requested?

    - by frequent
    Still learning my server ways with Apache 2.2 / MySQL 5.2 / ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that, everything seems to run at normal loading times. I'm using require.js to pull in jQuery, jQuery Mobile, and two other plugins on first page load, but loading the same page from a real server works normally, so I'm ruling that out as a probable cause. Since it happens regardless of which page I load first, it should not be page-related, so I'm looking for other clues as to why this could happen. Thanks for any thoughts!

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library.

    Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish.
    Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic.
    Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study with a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage.

    Hope you find these useful! And do check out the Online Learning Library - we have a wide range of additional content posted, and more being added every month!

  • Any recommendations on a NAS for a home-super-user?

    - by marc_s
    Can anyone recommend a good NAS for use in a home-server environment? I would want at least 2, preferably 4, disks, and I am most interested in good-to-excellent throughput for file-server and backup purposes - I don't need any of the fancy media-streaming or -sharing features; those are of no interest to me. For a solution with 4 or more disks, support for the various RAID levels (0, 1, 1+0, 5) would be a plus - especially if supported in hardware (rather than just a software emulation). I just need a place to put my collection of data, ISO images, and so forth - and since several external disks (self-built and off-the-shelf) have failed on me so far, I'm looking for a more reliable solution. Marc

  • Some $_SERVER parameters missing when accessing PHP script via Cron

    - by Jakobud
    I have a script that I need to run with PHP via cron. The original author of the script made a lot of use of certain $_SERVER parameters (like REQUEST_URI), but it appears that certain variables don't exist when running PHP from the command line or via cron. For example, there is no request URI, so it makes sense that the REQUEST_URI parameter wouldn't be available. Is there any way around this, other than completely rewriting the script to avoid using $_SERVER parameters that aren't universally available?
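
    If a rewrite is off the table, one stopgap - a sketch, not a full substitute - is a small shim at the top of the script that fills in the web-only parameters with defaults when running under the CLI SAPI:

        <?php
        // cron/CLI shim: supply the $_SERVER keys the script expects from a web SAPI
        if (PHP_SAPI === 'cli') {
            $defaults = array(
                'REQUEST_URI' => '/',
                'HTTP_HOST'   => 'localhost',
                'REMOTE_ADDR' => '127.0.0.1',
            );
            foreach ($defaults as $key => $value) {
                if (!isset($_SERVER[$key])) {
                    $_SERVER[$key] = $value;
                }
            }
        }

    The other common workaround is to leave the script untouched and have cron request it over HTTP against the local web server (e.g. with wget or curl), so it runs in the environment it was written for.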

  • Sun Directory Server 5.2 performance

    - by tmow
    Hi all, I'm using logconv.pl (provided by Sun) to measure performance on my server. These two metrics worry me a bit:

        Binds:   192164
        Unbinds: 111569

    The difference between the two is quite big. How can I determine which connections never sent an unbind? As stated by Lodovic: "Many applications just close the connections without sending an Unbind request." That alone could explain the difference. But logconv.pl doesn't show details about the missing unbind requests. Do you know any other tools, or can you suggest some queries or anything else that could help me find the root cause? And do you think performance might improve if the issue is fixed?

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What’s your favorite ticketing system?

    I was wondering if somebody could recommend a very user-friendly, simple, general-purpose ticketing/tech-support system. I need something web-based, preferably open-source/free software, implemented in PHP, Ruby, Ruby on Rails, or Java (as the back end), with MySQL or PostgreSQL as the database engine. I need something that is not development- or project-management oriented like Eventum or similar (random example) - something a user can connect to, open a tech-support request with, and follow until it is solved or dropped. It needs to be open source so that I can modify or extend it if the need arises. I have tried a number of the available systems and found that osTicket and eTicket come closest to what I need, but the code is somewhat flaky, and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

  • Updating, etc., automatically

    - by Steve D
    Is there a way to set up Ubuntu 12.04 (or earlier versions) so that all recommended updates are installed automatically, say once a week? When I say automatically, I mean no password entry or user intervention required. This sounds like a stupid request, so let me tell you why I'm asking. My grandfather knows nothing about computers; he uses his solely to read Yahoo! mail. I want to get rid of his clunky, spyware-ridden Windows XP and install Ubuntu. I want to set it up so that when he turns the computer on, after a couple of minutes - voila! - Yahoo! mail, already signed in, ready to go. The problem is I don't want to have to go over there every week or so to make sure everything is up to date, that he hasn't accidentally installed any spyware, and so on. So can this be done? Is this the best way to set things up for my grandfather? Are there other things I should be worried about when it comes to keeping things hassle-free for him? Please don't post anything like "why not teach him how to... blah blah blah" - my grandfather is 80 years old and has made it clear that email is the only thing he will ever use a computer for! Thanks!
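
    Ubuntu ships exactly this as the unattended-upgrades package. A sketch of the setup (the file contents below are the stock ones this generates on 12.04; by default it installs security updates daily):

        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades

        # /etc/apt/apt.conf.d/20auto-upgrades (written by the above)
        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    Once configured it needs no passwords or prompts, which avoids the main hazard for this use case - an interactive question from a manual apt-get upgrade.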

  • Random timeout now and then

    - by KenavR
    Maybe this is too generic a question, but since we have had this issue for quite a while now, I'll give it a shot. We have some applications which use HTTP for the connection between the client (website or fat client) and the server. The computers that run these applications are on a network behind a firewall and a proxy; the server isn't inside the same network. The problem is that every now and then an HTTPS request times out, and depending on the client, the application "hangs" or does some other funky stuff. The problem is definitely inside our network, because if I try the applications outside our network, they work fine. Can you give me a hint as to where I can most likely find the problem?

  • Remove Content-Length header in nginx proxy_pass

    - by Luc
    I use nginx with a proxy_pass directive. When the application the request is proxied to returns a response, nginx seems to add a header containing the Content-Length. Is it possible to remove this additional header?

    UPDATE: I have re-installed nginx with the headers-more module, but I still get the same result. My config is:

        upstream my_sock {
            server unix:/tmp/test.sock fail_timeout=0;
        }

        server {
            listen 11111;
            client_max_body_size 4G;
            server_name localhost;
            keepalive_timeout 5;

            location / {
                more_clear_headers 'Content-Length';
                proxy_pass http://my_sock;
                proxy_redirect off;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
            }
        }

  • Strange traffic on fresh Ubuntu Server install

    - by Fishy
    I've just installed Ubuntu Server on my home box after becoming partially familiar with it at work and wanting to train as a pen tester. I installed the latest version on a logical partition (the main one contained Windows 7) and selected none of the extra modules (I think). I installed ngrep and fired it up (along with tcpdump) and immediately saw some strange traffic which I am unable to identify. My PC is sending out UDP packets every couple of seconds to a seemingly random series of IP addresses, all on the same port (47669 - though I did also see it use another port for a while). I watched it do this for about 20 minutes while trying to work out why. The only other traffic was the odd ARP request for the router and SSDP UPnP broadcasts from the router. Does anyone know what this is, or have any advice on how best to find out? Thanks.

    EDIT: Actually, it's not my box generating the traffic. It's receiving the traffic on that port from a series of IP addresses and returning "port unreachable" messages.
